Columns: url (string, 15–1.13k chars) · text (string, 100–1.04M chars) · metadata (string, 1.06k–1.1k chars)
https://jyx.jyu.fi/handle/123456789/64185?show=full
dc.contributor.author Salvioni, Gianluca dc.date.accessioned 2019-05-24T11:07:21Z dc.date.available 2019-05-24T11:07:21Z dc.date.issued 2019 dc.identifier.isbn 978-951-39-7775-7 dc.identifier.uri https://jyx.jyu.fi/handle/123456789/64185 dc.description.abstract This monograph focused on a method to link nuclear energy density functionals to the ab initio solution of the nuclear many-body problem. This method, proposed in Ref. [1], was here discussed in many aspects as well as applied to a state-of-the-art ab initio approach. We introduced the basics of density functional theory, paying attention to the concept of generators of the functional. In parallel, we explored the Self-Consistent Green's Function approach as the ab initio framework to calculate ground-state energies. We derived the model functional based on the Levy-Lieb constrained variation, which exploited the response of the nucleus to an external perturbation. Using the Green's function technique and the NNLOsat chiral interaction in the ab initio Hamiltonian, seven semi-magic nuclei were probed with perturbations induced by generators of two- and three-body contact interactions (Skyrme-like). We employed the same generators to build model functionals, whose coupling constants were fitted to reproduce the perturbed ground-state energies. Several parametrizations of the functionals were obtained for given choices of generators, selection of data points, and assumed uncertainties. We analysed the derived parametrizations according to their statistical performance, the magnitude of the propagated errors, and the corresponding nuclear-matter description. Two parametrizations emerged as the most promising, but the model functionals built from them did not produce meaningful results. As it turned out, zero-range generators provided a poor description of the chiral interaction. Moreover, the performed error analysis suggested that the actual precision of the ab initio approach may not be sufficient to improve the quality of the novel energy density functionals. en dc.relation.ispartofseries JYU dissertations dc.title Model nuclear energy density functionals derived from ab initio calculations dc.identifier.urn URN:ISBN:978-951-39-7775-7
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9452100992202759, "perplexity": 1773.24727680628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058415.93/warc/CC-MAIN-20210927090448-20210927120448-00606.warc.gz"}
https://brilliant.org/problems/reversed-branched-logs/
# Reversed Branched logs Algebra Level 4 $\begin{cases} \log_3(\log_2x)+\log_{\frac{1}{3}}(\log_{\frac{1}{2}}y)=1 \\ xy^2=4 \end{cases}$ If the above equations hold for some values of $$x$$ and $$y$$, then find the value of $$xy$$.
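The system can be untangled by hand; below is a hedged sketch of one way to check the resulting candidate numerically. The derivation in the comments and the final values are my own working, not part of the problem page.

```python
import math

# Candidate found by hand:
#   log_{1/3}(a) = -log_3(a), so the first equation says log_2 x - (-log_2 y)*3... more carefully:
#   log_3(log_2 x) - log_3(log_{1/2} y) = 1  =>  log_2 x = 3 * log_{1/2} y = -3 * log_2 y
#   hence x = y^(-3); then x*y^2 = 1/y = 4, so y = 1/4 and x = 64.
x, y = 64.0, 0.25

lhs = math.log(math.log(x, 2), 3) + math.log(math.log(y, 0.5), 1 / 3)
print(abs(lhs - 1) < 1e-9, abs(x * y ** 2 - 4) < 1e-9, x * y)   # True True 16.0
```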
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9185349345207214, "perplexity": 521.0040051688721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719754.86/warc/CC-MAIN-20161020183839-00141-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/object-moving-at-speed-of-light-as-reference-frame.612620/
# Object moving at speed of light as Reference frame.

1. Jun 9, 2012 ### aleemudasir Is there any object other than the photon which moves at the speed of light? Why can't an object moving at the speed of light be taken as a reference frame? Can we use the equation m=m(0)/sqrt(1-v^2/c^2) for an object moving at the speed of light?

2. Jun 9, 2012 ### harrylin Part of the answer directly follows from your questions: using your equation, you will find that only an object with zero rest mass can propagate at the speed of light; and such objects are called photons (it is assumed that photons have exactly zero rest mass). Note that as 0/0 is useless, for the "mass equivalent" of light you can use m=p/c. And how would you use a photon as a reference frame? A reference frame is a system for comparing (measuring) such things as time and distance. If a clock and ruler were accelerated to light speed (although that is impossible), they would stop ticking and have zero length.

3. Jun 9, 2012 ### Staff: Mentor Here is a FAQ which explains why.

4. Jun 9, 2012 ### aleemudasir So does that mean it is impossible for an object with non-zero rest mass to move at the speed of light? And why?

5. Jun 9, 2012 ### harrylin Again, use your own equation! How much relativistic mass will it have at the speed of light? How much energy is needed to bring it to that speed?

6. Jun 9, 2012 ### bobc2 Yes, other massless bosons. Your question has been answered quite well here. Also, you might consider the problem in the context of space-time diagrams (google it or find discussions of space-time diagrams in other posts). The sketches below show a sequence in which an observer (blue frames of reference) moves at ever greater relativistic velocities with respect to a rest frame (black perpendicular coordinates). One aspect of the photon (any massless boson) that makes it so special is that its worldline always bisects the angle between the time axis and the spatial axis for any observer, no matter what the observer's speed (thus, the speed of light is the same for all observers). Notice in the sequence that the moving observer's X4 and X1 axes rotate toward each other, getting closer and closer to each other as the speed of light is approached. In the limit the X4 axis and the X1 axis overlay each other. So, if the observer were actually moving at the speed of light, both his time axis and his spatial axis would be collinear with the photon worldline. How would you define that as a coordinate system? Last edited: Jun 9, 2012

7. Jun 9, 2012 ### aleemudasir I am not talking about an observer moving at the speed of light; rather, I am talking about an observer observing an object x w.r.t. an object y (moving at the speed of light). I didn't quite get this graphical explanation; would you please elaborate?

8. Jun 9, 2012 ### HallsofIvy Staff Emeritus Then you will have to explain what you mean by that. What do you mean by "observing x with respect to y"? Any observer sees objects with respect to himself, not with respect to any other frame of reference.

9. Jun 9, 2012 ### aleemudasir I don't think that is necessary; let's take as an example a 3-dimensional coordinate system in which an observer observes the motion of an object w.r.t. the origin (0,0,0).

10. Jun 9, 2012 ### Staff: Mentor Ah, but is the observer moving with respect to that origin? If so, we have three frames (observer, object, and frame-with-origin-at-(0,0,0)) to transform between, not two.
None of these frames can have a velocity greater than or equal to the speed of light relative to any other of these frames.

11. Jun 9, 2012 ### aleemudasir The observer is at rest w.r.t. the origin.

12. Jun 9, 2012 ### Staff: Mentor Yes, any object with non-zero rest mass must move slower than c in any inertial frame. As far as why, that is inherently a tricky question. What are you allowing to be assumed when answering? And what kind of answer are you looking for? If I were asking the question I would be looking for a geometric answer and I would allow the Minkowski metric to be assumed. Then the answer is that a massive object has a timelike four-momentum by definition, and any timelike four-momentum corresponds to a three-velocity < c. If that doesn't answer the question then you will need to clarify what you want better.

13. Jun 10, 2012 ### harrylin OK, then my answer in post #5 applies. How much energy do you think is required? If you did not manage to see that the division by zero makes the energy infinite, the answer is given in section 10 of http://www.fourmilab.ch/etexts/einstein/specrel/www/ :
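A quick numerical illustration of the point made above (a hedged sketch, not from the thread): plugging speeds ever closer to c into the poster's own formula shows the Lorentz factor, and with it the kinetic energy of any nonzero rest mass, growing without bound. The test mass value is an arbitrary stand-in.

```python
import math

c = 299_792_458.0          # speed of light, m/s
m0 = 1.0                   # rest mass in kg (arbitrary test mass)

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v^2/c^2); diverges as v -> c."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for frac in [0.5, 0.9, 0.99, 0.999, 0.999999]:
    v = frac * c
    g = gamma(v)
    kinetic = (g - 1.0) * m0 * c ** 2      # relativistic kinetic energy in joules
    print(f"v = {frac:9.6f} c   gamma = {g:12.3f}   KE = {kinetic:.3e} J")

# gamma(c) would require dividing by zero: infinite energy would be needed,
# which is why a massive clock or ruler can never be boosted to c and used
# as a reference frame.
```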
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114892244338989, "perplexity": 613.9397941991273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607846.35/warc/CC-MAIN-20170524131951-20170524151951-00213.warc.gz"}
http://psychology.wikia.com/wiki/Prior_probability?oldid=29370
# Prior probability

A prior probability is a marginal probability, interpreted as a description of what is known about a variable in the absence of some evidence. The posterior probability is then the conditional probability of the variable taking the evidence into account. The posterior probability is computed from the prior and the likelihood function via Bayes' theorem. As prior and posterior are not terms used in frequentist analyses, this article uses the vocabulary of Bayesian probability and Bayesian inference. Throughout this article, for the sake of brevity the term variable encompasses observable variables, latent (unobserved) variables, parameters, and hypotheses.

## Prior probability distribution

In Bayesian statistical inference, a prior probability distribution, often called simply the prior, of an uncertain quantity p (for example, suppose p is the proportion of voters who will vote for the politician named Smith in a future election) is the probability distribution that would express one's uncertainty about p before the "data" (for example, an opinion poll) are taken into account. It is meant to attribute uncertainty rather than randomness to the uncertain quantity. One applies Bayes' theorem, multiplying the prior by the likelihood function and then normalizing, to get the posterior probability distribution, which is the conditional distribution of the uncertain quantity given the data. A prior is often the purely subjective assessment of an experienced expert. Some will choose a conjugate prior when they can, to make calculation of the posterior distribution easier.

## Informative priors

An informative prior expresses specific, definite information about a variable. An example is a prior distribution for the temperature at noon tomorrow. A reasonable approach is to make the prior a normal distribution with expected value equal to today's noontime temperature, with variance equal to the day-to-day variance of atmospheric temperature. This example has a property in common with many priors, namely, that the posterior from one problem (today's temperature) becomes the prior for another problem (tomorrow's temperature); pre-existing evidence which has already been taken into account is part of the prior and as more evidence accumulates the prior is determined largely by the evidence rather than any original assumption, provided that the original assumption admitted the possibility of what the evidence is suggesting. The terms "prior" and "posterior" are generally relative to a specific datum or observation.

## Uninformative priors

An uninformative prior expresses vague or general information about a variable. The term "uninformative prior" is a misnomer; such a prior might be called a not very informative prior. Uninformative priors can express information such as "the variable is positive" or "the variable is less than some limit". Some authorities prefer the term objective prior. In parameter estimation problems, the use of an uninformative prior typically yields results which are not too different from conventional statistical analysis, as the likelihood function often yields more information than the uninformative prior. Some attempts have been made at finding probability distributions in some sense logically required by the nature of one's state of uncertainty; these are a subject of philosophical controversy. For example, Edwin T.
Jaynes has published an argument (Jaynes 1968) based on Lie groups that suggests that the prior for the proportion $p$ of voters voting for a candidate, given no other information, should be $p^{-1}(1-p)^{-1}$. If one is so uncertain about the value of the aforementioned proportion $p$ that one knows only that at least one voter will vote for Smith and at least one will not, then the conditional probability distribution of $p$ given this information alone is the uniform distribution on the interval [0, 1], which is obtained by applying Bayes' Theorem to the data set consisting of one vote for Smith and one vote against, using the above prior. Priors can be constructed which are proportional to the Haar measure if the parameter space $X$ carries a natural group structure. For example, in physics we might expect that an experiment will give the same results regardless of our choice of the origin of a coordinate system. This induces the group structure of the translation group on $X$, and the resulting prior is a constant improper prior. Similarly, some measurements are naturally invariant to the choice of an arbitrary scale (i.e., it doesn't matter if we use centimeters or inches, we should get results that are physically the same). In such a case, the scale group is the natural group structure, and the corresponding prior on $X$ is proportional to $1/x$. It sometimes matters whether we use the left-invariant or right-invariant Haar measure. For example, the left and right invariant Haar measures on the affine group are not equal. Berger (1985, p. 413) argues that the right-invariant Haar measure is the correct choice. Another idea, championed by Edwin T. Jaynes, is to use the principle of maximum entropy. The motivation is that the Shannon entropy of a probability distribution measures the amount of information contained in the distribution. The larger the entropy, the less information is provided by the distribution. Thus, by maximizing the entropy over a suitable set of probability distributions on $X$, one finds the distribution that is least informative in the sense that it contains the least amount of information consistent with the constraints that define the set. For example, the maximum entropy prior on a discrete space, given only that the probability is normalized to 1, is the prior that assigns equal probability to each state. And in the continuous case, the maximum entropy prior given that the density is normalized, with mean zero and variance unity, is the standard normal distribution. A related idea, reference priors, was introduced by Jose M. Bernardo. Here, the idea is to maximize the expected Kullback-Leibler divergence of the posterior distribution relative to the prior. This maximizes the expected posterior information about $x$ when the prior density is $p(x)$. The reference prior is defined in the asymptotic limit, i.e., one considers the limit of the priors so obtained as the number of data points goes to infinity. Reference priors are often the objective prior of choice in multivariate problems, since other rules (e.g., Jeffreys' rule) may result in priors with problematic behavior. Philosophical problems associated with uninformative priors are associated with the choice of an appropriate metric, or measurement scale. Suppose we want a prior for the running speed of a runner who is unknown to us.
We could specify, say, a normal distribution as the prior for his speed, but alternatively we could specify a normal prior for the time he takes to complete 100 metres, which is proportional to the reciprocal of the first prior. These are very different priors, but it is not clear which is to be preferred. Similarly, if asked to estimate an unknown proportion between 0 and 1, we might say that all proportions are equally likely and use a uniform prior. Alternatively, we might say that all orders of magnitude for the proportion are equally likely, which gives a prior that is uniform on the logarithm of the proportion (i.e., with density proportional to 1/p). The Jeffreys prior attempts to solve this problem by computing a prior which expresses the same belief no matter which metric is used. The Jeffreys prior for an unknown proportion $p$ is $p^{-1/2}(1-p)^{-1/2}$, which differs from Jaynes' recommendation. Practical problems associated with uninformative priors include the requirement that the posterior distribution be proper. The usual uninformative priors on continuous, unbounded variables are improper. This need not be a problem if the posterior distribution is proper. Another issue of importance is that if an uninformative prior is to be used routinely, i.e., with many different data sets, it should have good frequentist properties. Normally a Bayesian would not be concerned with such issues, but it can be important in this situation. For example, one would want any decision rule based on the posterior distribution to be admissible under the adopted loss function. Unfortunately, admissibility is often difficult to check, although some results are known (e.g., Berger and Strawderman 1996). The issue is particularly acute with hierarchical Bayes models; the usual priors (e.g., Jeffreys' prior) may give badly inadmissible decision rules if employed at the higher levels of the hierarchy.

## Improper priors

If Bayes' theorem is written as $P(A_i|B) = \frac{P(B | A_i) P(A_i)}{\sum_j P(B|A_j)P(A_j)}\, ,$ then it is clear that it would remain true if all the prior probabilities P(Ai) and P(Aj) were multiplied by a given constant; the same would be true for a continuous random variable. The posterior probabilities will still sum (or integrate) to 1 even if the prior values do not, and so the priors only need be specified in the correct proportion. Taking this idea further, in many cases the sum or integral of the prior values may not even need to be finite to get sensible answers for the posterior probabilities. When this is the case, the prior is called an improper prior. Some statisticians use improper priors as uninformative priors. For example, if they need a prior distribution for the mean and variance of a random variable, they may assume p(m, v) ~ 1/v (for v > 0), which would suggest that any value for the mean is equally likely and that a value for the positive variance becomes less likely in inverse proportion to its value. Since $\int_{-\infty}^{\infty} dm\, = \int_{0}^{\infty} \frac{1}{v} \,dv = \infty,$ this would be an improper prior both for the mean and for the variance.
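As a concrete, hedged illustration of how these choices play out (my own example, not part of the article): with binomial data the Beta family is conjugate, so the uniform prior, the Jeffreys prior Beta(1/2, 1/2), and a proper stand-in for the improper Jaynes/Haldane-type prior 1/(p(1-p)) all give closed-form posteriors that can be compared directly. The vote counts below are made up.

```python
from scipy.stats import beta

# Observed data: k votes for Smith out of n sampled (illustrative numbers)
k, n = 7, 10

# Beta(a, b) priors; Beta is conjugate to the binomial, so the posterior
# is Beta(a + k, b + n - k).
priors = {
    "uniform  Beta(1, 1)":     (1.0, 1.0),
    "Jeffreys Beta(1/2, 1/2)": (0.5, 0.5),
    "~Haldane Beta(eps, eps)": (1e-3, 1e-3),  # proper stand-in for the improper 1/(p(1-p)) prior
}

for name, (a, b) in priors.items():
    post = beta(a + k, b + n - k)
    print(f"{name}: posterior mean = {post.mean():.3f}, "
          f"95% interval = ({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
```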
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9609204530715942, "perplexity": 513.9400188766543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298576.76/warc/CC-MAIN-20150323172138-00103-ip-10-168-14-71.ec2.internal.warc.gz"}
https://mathstodon.xyz/@11011110/100908632866635208
An upper bound for Lebesgue’s universal covering problem vixra.org/abs/1801.0292 Philip Gibbs makes progress on the smallest area needed to cover a congruent copy of every diameter-one curve in the plane, with additional contributions from John Baez, Karine Bagdasaryan, and Greg Egan. See Baez's blog post johncarlosbaez.wordpress.com/2 for more. But why vixra??
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8961891531944275, "perplexity": 2195.232500232908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257605.76/warc/CC-MAIN-20190524104501-20190524130501-00091.warc.gz"}
https://quantumcomputing.stackexchange.com/tags/fidelity/new
# Tag Info The fidelity case was already worked in the other answer. Here is an idea for the trace distance one. The trace distance between $\rho$ and some $|\psi\rangle\!\langle\psi|$ is $$\|\rho - |\psi\rangle\!\langle\psi|\|_1 = \operatorname{Tr}\lvert \,\rho - |\psi\rangle\!\langle\psi|\,\rvert,$$ which is equal to the sum of the singular values of $\rho - |\psi\rangle\!\langle\psi|$ ... Recall that for any Hermitian operator $A$ and any unit vector $|\psi\rangle$ the real number $\langle \psi|A|\psi\rangle$, known as the Rayleigh quotient, is bounded by the largest eigenvalue $\lambda_{max}$ of $A$: $$\langle \psi|A|\psi\rangle \le \lambda_{max}.$$ Moreover, the maximum is achieved when $|\psi\rangle$ is the unit-norm eigenvector of $A$ ...
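A small, hedged NumPy check of the two facts quoted above (the dimension, random seed, and states are arbitrary): the trace norm of the Hermitian difference equals the sum of its singular values, and the Rayleigh quotient never exceeds the largest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Random density matrix rho and random pure state |psi>
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = M @ M.conj().T
rho /= np.trace(rho).real
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
proj = np.outer(psi, psi.conj())

# Trace norm of the (Hermitian) difference = sum of its singular values
diff = rho - proj
trace_dist = np.linalg.svd(diff, compute_uv=False).sum()
print("|| rho - |psi><psi| ||_1 =", trace_dist)

# Rayleigh quotient bound <psi|A|psi> <= lambda_max for Hermitian A
A = (M + M.conj().T) / 2
rayleigh = (psi.conj() @ A @ psi).real
lam_max = np.linalg.eigvalsh(A).max()
print("Rayleigh quotient:", rayleigh, "<= lambda_max:", lam_max)
```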
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9928565621376038, "perplexity": 268.86556570432015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.25/warc/CC-MAIN-20210507060253-20210507090253-00471.warc.gz"}
http://mathhelpforum.com/advanced-math-topics/157079-continuity-degree-spline-curves.html
# Math Help - Continuity degree of spline curves? 1. ## Continuity degree of spline curves? I'm trying to solve a question that simply asks what the degree of smoothness (I assume this is the continuity degree) is for certain spline curves. For a cardinal spline and a Kochanek-Bartels spline: Degree of polynomial: 2n - 1 Gives continuity degree of: C^(n-1) For a B-spline: Degree of polynomial: d - 1 Gives continuity degree: C^(d-2) Are these correct? I've tried to find what the continuity degree is for the Bézier curve, but I can't find it anywhere. 2. A Bezier curve is a cubic spline. d - 1 = 3.
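A hedged numerical sanity check of the C^(d-2) claim for B-splines (my own example, not from the thread): a uniform cubic B-spline has polynomial degree 3, i.e. d = 4 in the notation above, so adjacent segments should agree in position, first, and second derivative (C^2) at their join. The basis functions are the standard uniform cubic B-spline blending functions; the control points are arbitrary.

```python
import numpy as np

# Uniform cubic B-spline blending functions on t in [0, 1]
def basis(t):
    return np.array([(1 - t) ** 3,
                     3 * t ** 3 - 6 * t ** 2 + 4,
                     -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                     t ** 3]) / 6.0

def dbasis(t):   # first derivatives w.r.t. t
    return np.array([-3 * (1 - t) ** 2,
                     9 * t ** 2 - 12 * t,
                     -9 * t ** 2 + 6 * t + 3,
                     3 * t ** 2]) / 6.0

def ddbasis(t):  # second derivatives w.r.t. t
    return np.array([6 * (1 - t),
                     18 * t - 12,
                     -18 * t + 6,
                     6 * t]) / 6.0

P = np.array([[0, 0], [1, 2], [3, 3], [4, 1], [6, 0]], float)  # 5 control points -> 2 segments

def segment(i, B, t):
    # segment i uses control points P[i] .. P[i+3]
    return B(t) @ P[i:i + 4]

for name, B in [("position ", basis), ("1st deriv", dbasis), ("2nd deriv", ddbasis)]:
    end_of_seg0 = segment(0, B, 1.0)
    start_of_seg1 = segment(1, B, 0.0)
    print(name, np.allclose(end_of_seg0, start_of_seg1))   # all True -> C^2 at the join
```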
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797763824462891, "perplexity": 3128.8828481882856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398471441.74/warc/CC-MAIN-20151124205431-00150-ip-10-71-132-137.ec2.internal.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=93g:35100
MathSciNet bibliographic data MR1152231 (93g:35100) 35P05 (35J05 58G25) Melas, Antonios D. On the nodal line of the second eigenfunction of the Laplacian in ${\bf R}^2$. J. Differential Geom. 35 (1992), no. 1, 255–263.
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9926114678382874, "perplexity": 3644.177371540143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163047545/warc/CC-MAIN-20131204131727-00071-ip-10-33-133-15.ec2.internal.warc.gz"}
https://rd.springer.com/article/10.3103/S106345411801003X
Vestnik St. Petersburg University, Mathematics, Volume 51, Issue 1, pp 31–35

# On the Stability of the Zero Solution of a Second-Order Differential Equation under a Periodic Perturbation of the Center

• A. A. Dorodenkov

## Abstract Small periodic perturbations of the oscillator $$\ddot x + x^{2n}\operatorname{sgn} x = Y(t, x, \dot x)$$ are considered, where n is a positive integer and the right-hand side is a small perturbation periodic in t, which is an analytic function of $$\dot x$$ and x in a neighborhood of the origin. New Lyapunov-type periodic functions are introduced and used to investigate the stability of the equilibrium position of the given equation. Sufficient conditions for asymptotic stability and instability are given.

## Keywords asymptotic stability small periodic perturbation oscillator
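A quick, hedged numerical illustration of the unperturbed oscillator from the abstract (taking n = 1 and an arbitrary small initial amplitude, both my own choices): the quantity E = ẋ²/2 + |x|^(2n+1)/(2n+1) is conserved, so the origin is a center, which is the equilibrium whose fate under small periodic perturbations the paper studies.

```python
import numpy as np

n = 1                                  # illustrative; the paper treats a general positive integer n

def accel(x):
    # restoring force of the unperturbed oscillator  x'' + x^{2n} sgn(x) = 0
    return -np.abs(x) ** (2 * n) * np.sign(x)

def energy(x, v):
    # conserved energy of the unperturbed motion
    return 0.5 * v ** 2 + np.abs(x) ** (2 * n + 1) / (2 * n + 1)

def deriv(x, v):
    return v, accel(x)

x, v, dt = 0.3, 0.0, 1e-3              # small initial amplitude near the center at the origin
E0 = energy(x, v)
for _ in range(200_000):               # classic RK4 step for the system (x, v)
    k1 = deriv(x, v)
    k2 = deriv(x + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1])
    k3 = deriv(x + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1])
    k4 = deriv(x + dt * k3[0], v + dt * k3[1])
    x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6

print("relative energy drift over t = 200:", abs(energy(x, v) - E0) / E0)
```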
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8363864421844482, "perplexity": 1305.8907826690415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159561.37/warc/CC-MAIN-20180923153915-20180923174315-00060.warc.gz"}
https://biomch-l.isbweb.org/search?searchJSON=%7B%22tag%22%3A%5B%22dynamometry%22%5D%7D
Hi All, I am using a Biodex system 3 dynamometer and Bipac but unfortunately they do not communicate when it comes to gravity correction...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8468731641769409, "perplexity": 2447.4958474764235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057733.53/warc/CC-MAIN-20210925172649-20210925202649-00130.warc.gz"}
https://discuss.codechef.com/questions/29659/lowsum-editorial
# LOWSUM - Editorial

Author: Vineet Paliwal Tester: Roman Rubanenko Editorialist: Jingbo Shang EASY

# PREREQUISITES: Sort, Priority Queue, Binary Search

# PROBLEM: Given two arrays A[1..K], B[1..K], deal with Q queries of finding the n-th smallest pair sum among all K^2 pair sums (A[i] + B[j]).

# EXPLANATION: A brute force enumeration gives an O(K^2 log K) - O(1) algorithm: simply store all possible sums and sort them with some sorting algorithm such as quick sort. Then, for each query, return the n-th number of the stored sorted array. This brute force algorithm's time complexity is O(K^2 log K + Q), and it needs O(K^2) space. Both the time and memory limits are exceeded. There are 2 ways to improve this brute force algorithm. The common key point for both is the following: suppose A[] and B[] are sorted in ascending order; then A[i] + B[j] is smaller than or equal to any A[i] + B[k] with k > j. For instance, we use quick sort to sort A[] and B[] in O(K log K) time. Then two possible solutions are available.

The first solution is to find the smallest sum among at most K candidates (one for each A[i]) and remove it. After n removals, the n-th smallest sum has been found. More specifically, we can maintain K pointers, one for each A[i]; let's say ptr[1..K] (each equal to 1 initially). First, we use a binary heap (or another priority queue, a balanced binary search tree, etc.) to find the smallest sum among A[i] + B[ptr[i]]. Second, suppose the smallest is A[p] + B[ptr[p]]. We remove it from the heap, then increase the pointer ptr[p] by 1 and insert the new element A[p] + B[ptr[p]] if it exists. Repeating this process n times yields the n-th smallest sum. This algorithm's time complexity is O(n log K) for each query, and thus O(K log K + Q n log K) in total.

The second solution is trickier and more useful. Consider the dual problem: given a number X, find how many pair sums are smaller than or equal to X (the answer to the original problem is then the smallest X such that there are at least n pair sums smaller than or equal to X). To solve the dual problem, based on the previous observation, there exists limit[i] such that A[i] + B[1..limit[i]] are all smaller than or equal to X while A[i] + B[limit[i] + 1 .. K] are all greater than X. Furthermore, limit[i] >= limit[i + 1] since A[i] <= A[i + 1]. Using these two properties, we can get the rank of X in O(K) time. Through binary search, we can get the answer to the original problem in O(K log Answer) time, and thus O(K log K + Q K log Answer) in total.

# AUTHOR'S AND TESTER'S SOLUTIONS: Author's solution can be found here. Tester's solution can be found here. This question is marked "community wiki".

I solved this after the contest and used a very simple approach. First sort both arrays. Since the limit on q is 10000, this can be done with the following code. for (j = 1; j <= n; ++j) { k = 10001 / j; /* at most 10001/j partners of a[j] can appear among the first 10000 sums */ ind = min(k, n); num = a[j]; for (k = 1; k <= ind; ++k) v.push_back(num + b[k]); } This will ensure that at least the first 10000 sums are stored in v. Then we just have to sort the vector v and print the answer to every query. answered 21 Nov '13, 02:08

Can you please explain your approach? (12 Dec '13, 10:32) @sikander_nsit please explain your solution more. It would be great for us to get a solution which is very simple and sweet. Thanks. (14 Mar '14, 00:13) Your trick is great... can you give a proof of correctness of your algorithm, please?
(21 Aug '15, 00:58)

@arcturus I used binary search to search for X and then binary search to count the pairs, but also a trick (when the count exceeds 10,000, break, since qi is at most 10,000). Without the trick it gave TLE. answered 18 Nov '13, 14:17 by lazzrov

Ah, I see. But I guess if the test case were really evil, the second solution would also get TLE even with that trick. Probably it was not the intended way, as both tester and setter used the first approach. (18 Nov '13, 22:25) arcturus

Can anybody please explain the first approach? I'm not getting it :( answered 12 Dec '13, 10:24

Though an editorial has been provided for this problem, I am still not able to understand it. It would be great if someone explained the correct approach to solving this problem. answered 28 Dec '13, 14:40 by zealf

Well, I am asking again, especially the editorialist of this problem, to explain his solution given above... (29 Dec '13, 18:12) zealf

Hmm, I tried both approaches in the contest. However, only the first one got AC (http://www.codechef.com/viewsolution/2998337). The second solution gave me TLE (http://www.codechef.com/viewsolution/2997409). Is the time limit too strict for the second solution, or is it just me not implementing the algorithm efficiently? answered 18 Nov '13, 08:14 by arcturus

I'm getting a wrong answer for this solution. Can someone help me out? http://www.codechef.com/viewsolution/3718821 answered 08 Apr '14, 04:43 by anndr31
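For readers above asking how the two approaches work, here is a hedged Python sketch of both ideas from the editorial (the function names and the tiny test are mine; no attempt is made at CodeChef-level I/O or performance): a min-heap holding one active candidate per A[i], and a binary search on the answer using the O(K) monotone-pointer count.

```python
import heapq

def nth_smallest_pair_sum_heap(A, B, n):
    """Editorial's first approach: pop the smallest candidate until the n-th is on top."""
    A, B = sorted(A), sorted(B)
    # heap holds (A[i] + B[j], i, j) with one active j per i
    heap = [(a + B[0], i, 0) for i, a in enumerate(A)]
    heapq.heapify(heap)
    for _ in range(n - 1):
        s, i, j = heapq.heappop(heap)
        if j + 1 < len(B):
            heapq.heappush(heap, (A[i] + B[j + 1], i, j + 1))
    return heap[0][0]

def nth_smallest_pair_sum_bsearch(A, B, n):
    """Editorial's second approach: binary search on X, counting pairs <= X in O(K)."""
    A, B = sorted(A), sorted(B)

    def count_le(X):
        cnt, j = 0, len(B)
        for a in A:                      # limit[i] is non-increasing as A[i] grows
            while j > 0 and a + B[j - 1] > X:
                j -= 1
            cnt += j
        return cnt

    lo, hi = A[0] + B[0], A[-1] + B[-1]  # integer sums assumed, as in the problem
    while lo < hi:
        mid = (lo + hi) // 2
        if count_le(mid) >= n:
            hi = mid
        else:
            lo = mid + 1
    return lo

A, B = [3, 1, 4], [2, 7, 5]
assert nth_smallest_pair_sum_heap(A, B, 4) == nth_smallest_pair_sum_bsearch(A, B, 4) == 6
```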
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8689684271812439, "perplexity": 2461.403195065529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189031.88/warc/CC-MAIN-20170322212949-00018-ip-10-233-31-227.ec2.internal.warc.gz"}
http://math.stackexchange.com/users/18124/daniel?tab=activity&sort=comments
daniel 11h comment Why are very large prime numbers important in cryptography? If you have a question the site functions best if you post it as a question rather than an answer. Feb 7 comment Using the Brun Sieve to show very weak approximation to twin prime conjecture Halberstam and Richert in Sieve Methods (Dover, 2011) prove using Brun's sieve that there are infinitely many p such that p+2 has at most 8 prime factors. Including some introductory material the exposition takes 67 pages. The key is their definition of the characteristic function on p. 58. The basic idea is simple enough but doesn't look like it lends itself to anything one could describe as a "straightforward exercise." Jan 18 comment Proof of inequality involving multiplicative function? Terms in the binomial expressions on the left with m factors are all covered by terms in mth powers of the expression on the right. Once we see the LHS can be written as a product of binomials we can compare the two sides. Your hint prompted me to look at the LHS again. The key (which I forgot or didn't know) is that $n=p_k\#,~\binom{k}{m}$ is the number of squarefree divisors of n having $\nu(d)=m.$ Jan 14 comment Proof of inequality involving multiplicative function? @user1952009: edited to reflect that. Jan 10 comment English wording for “first level of asymptotic expansion” You can say $f$ is asymptotically equivalent to $g.$ Jan 9 comment “The PNT obtained by statistical methods” @ErickWong: In that case I will vote to reopen. Jan 8 comment “The PNT obtained by statistical methods” Maybe asking about Erdos-Kac theorem? Jan 1 comment Prove $| \sum_{i \leq n} \frac{\mu(i)}{i} | \leq 1$ This result was not relegated to exercises in Apostol and OP says s/he is new to number theory. So I wonder if this is enough. Dec 31 comment Prove $| \sum_{i \leq n} \frac{\mu(i)}{i} | \leq 1$ This is proved at pp. 66-67 of Apostol (Intro. to Analytic Number Thy.) Different approach. Dec 24 comment Equidistribution theorem of Weyl Perhaps OP is asking if equidistribution of a sequence $a\cdot n$ can be used to show that $a$ is irrational? I don't think Weyl works in that direction but at least it's a question. Vote to reopen. Dec 12 comment What is the effective lower bound on gaps between zeta zeros? jstor.org/stable/pdf/2372402.pdf?seq=1#page_scan_tab_contents. See paragraph below formula (2). There are a lot of questions here on lim inf with good answers BTW. Nov 25 comment Zeros of the prime zeta function @mixedmath: Minimally, is it possible to show $P(s) =\zeta(s)$ implies $s$ is a zero of $C(s)$ without direct reference to $C(s).$ The argument begins, "Suppose the two integrals are equal for some value of s," and concludes, for example, "s is thus a zero of this series on the right, which with some work is seen to be $C(s).$" Nov 15 comment What is the proportion of primes that can be written as $a^2 + b^2$? See Ingham, The Distribution of Prime Numbers, pp. 106-107. Nov 8 comment What is your idea about this conjecture? @Dylan: Using Eric N's reformulation my claim is $j(2\cdot 3\cdot...\cdot13)\leq 30.$ In the table in the paper $n$ is the index of the largest prime. If $n=6$ then $h(n)$ is 22. According to this paper my claim is true, but the paper gives a stronger result. Nov 7 comment What is your idea about this conjecture? @SimonS: Good question either way, but the numerical work in this case might be misleading. Suppose it is true for some but not all n?
Oct 28 comment Proof of Prime Number Theorem The Prime Number Theorem by Jameson is also good. Oct 19 comment Which progressions and sequences are guaranteed to contain infinitely many primes? You can construct infinitely many such sequences. OEIS contains some of the interesting ones. Sep 23 comment What are all of the possible fractional forms an offspring's genetic makeup? Maybe you should post this at the Bio SE site. Sep 10 comment Approximate zeros of a (hypothetical) analog of $\zeta(s)$ @draks: I looked at that question and the very nice answers there (and long ago upvoted). The r.h.s. of (1) in Ray M's answer is quite different from (1) above. Sep 5 comment A (possibly) easier version of Bertrand's Postulate Bertrand implies a prime on p, 2p. Choose p(n) max less than a non-prime n. Then there is a prime on n, 2p(n) which implies a prime on n,2n (Bertrand). So I think the two are equivalent. You don't need case 1, for the reason you give.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9339137077331543, "perplexity": 522.3934481105608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162648.4/warc/CC-MAIN-20160205193922-00235-ip-10-236-182-209.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/109548/x-compact-metric-space-fx-rightarrow-mathbbr-continuous-attains-max-min
# $X$ compact metric space, $f:X\rightarrow\mathbb{R}$ continuous attains max/min Let $X$ be a compact metric space, show that a continuous function $f:X\rightarrow\mathbb{R}$ attains a maximum and a minimum value on $X$. Attempt: So the important thing is that I have previously shown that such a function is bounded and that for compact $X$, $f(X)$ is compact given $f$ continuous. In $\mathbb{R}$, compact $\implies$ closed and bounded. So $f(X)$ is closed and contains its accumulation points, and it is bounded so $\exists \sup(A),\inf(A)$ and since closed $\implies \sup(A)\in A, \inf(A)\in A$. Did I miss anything/make an unwarranted leap of logic? - The ideas are all there, and they’re connected properly, but it’s not really a well-written proof as it stands. –  Brian M. Scott Feb 15 '12 at 7:24 By the way, $\sup$ and $\inf$ are predefined: use \sup, \inf. –  Brian M. Scott Feb 15 '12 at 7:25 It is better to avoid $\exists$ and $\implies$ symbols when writing math. Just write "there exist" and "then". After all, you write math in english not in other strange symbolic language. –  leo Jun 9 '12 at 3:15 Here’s an example of how the same argument could be written up nicely. Since $X$ is compact and $f$ is continuous, $f[X]$ is a compact subset of $\mathbb{R}$ and therefore closed and bounded. Since $f[X]$ is bounded, it has both a supremum and an infimum, and since it is closed, $\sup f[X]\in f[X]$ and $\inf f[X]\in f[X]$. Thus, there are $x_0,x_1\in X$ such that $f(x_0)=\inf f[X]$ and $f(x_1)=\sup f[X]$; that is, $f$ attains its minimum and maximum values at $x_0$ and $x_1$, respectively. - +1 Textbook proof. –  user38268 Feb 15 '12 at 9:44 Your argument is fundamentally sound, but you have to assume that your metric space is nonempty. Here is a direct proof that requires no other results (the proof generalizes, like yours, to arbitrary topological spaces): Let $X$ be a nonempty metric space and $f:X\to\mathbb{R}$ a continuous function without a maximum. Then $X$ has an open cover without a finite subcover. Proof: 1. Suppose $f(X)$ is unbounded. Then $\big\{f^{-1}\big((-\infty, n)\big):n\in\mathbb{N}\big\}$ is an open cover without finite subcover. 2. Suppose $f(X)$ is bounded, with supremum $s$. Since $f$ has no maximum, $s\notin f(X)$ and $\big\{f^{-1}\big((-\infty, s-1/n)\big):n\in\mathbb{N}\big\}$ is an open cover without finite subcover. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9656176567077637, "perplexity": 211.04115382324284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997895170.20/warc/CC-MAIN-20140722025815-00233-ip-10-33-131-23.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1788075/proof-of-an-equality-norm
# Proof of a norm equality

Let the mapping $T:\ell^{2}\rightarrow \ell^{2}$ be defined as follows: $$T(x_1,x_2,\ldots,x_n,\ldots)=(x_1,\dfrac{1}{2}x_2,\ldots,\dfrac{1}{n}x_n,\ldots)$$ In this case, I easily obtained: $$\sigma(T)=\{0\}\cup\{\dfrac{1}{n}:n\in\mathbb{N}\}$$ Now, if $\lambda=x+iy\in\mathbb{C}$ and $\lambda\notin\sigma(T)$, I should prove $$\Vert(\lambda I-T)^{-1}\Vert=\dfrac{1}{\displaystyle\inf_{n\in\mathbb{N}}\,\left\vert\lambda-\dfrac{1}{n}\right\vert}$$ Can you help me get started on the proof? Thanks.

You have $$(\lambda I-T)(x_1,x_2,\ldots)=((\lambda-1)x_1, (\lambda-\frac12)x_2,\ldots).$$ Define an operator $S$ by $$S (x_1,x_2,\ldots)=(\frac{x_1} {\lambda -1},\frac {x_2}{\lambda -\frac12},\ldots).$$ Because $\lambda\not\in\sigma(T)$, the numbers $|\lambda-\frac1n|$ are bounded away from zero, so the linear operator $S$ is well defined and bounded; by construction, $S (\lambda I-T)=(\lambda I-T)S=I$. So $S=(\lambda I-T)^{-1}$. And $$\tag{1}\|S\|=\sup\left\{\frac1 {\left|\lambda-\frac1n\right|}:\ n\in\mathbb N\right\}=\frac1 {\inf\left\{\left|\lambda-\frac1n\right |:\ n\in\mathbb N\right\}}.$$ The only piece left hanging, to justify the first equality in $(1)$, is to show that if $$R (x_1,x_2,\ldots)=(r_1x_1,r_2x_2,\ldots)$$ then $\|R\|=\sup\{|r_n|:\ n\}$. This follows, if we write $c=\sup\{|r_n|:\ n\}$: $$\|Rx\|^2=\sum_j|r_jx_j|^2\leq c^2\sum_j|x_j|^2=c^2\,\|x\|^2,$$ so $\|R\|\leq c$. Given $\varepsilon>0$, let $j$ be such that $|r_j|>c-\varepsilon$. If $e_j\in\ell^2$ is the sequence with a $1$ in the $j^{\rm th}$ position and zeroes elsewhere, then $\|e_j\|=1$ and $$\|Re_j\|=|r_j|>c-\varepsilon.$$ As $\varepsilon>0$ was arbitrary, $\|R\|=c=\sup\{|r_n|:\ n\}$.
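A hedged finite-dimensional sanity check (my own, using a truncation of T to its first N diagonal entries, so it is an illustration rather than a proof): the operator 2-norm of the truncated resolvent matches 1 / min_n |λ − 1/n| to machine precision.

```python
import numpy as np

N = 500                                     # truncate ell^2 to C^N
T = np.diag(1.0 / np.arange(1, N + 1))      # T e_n = (1/n) e_n
lam = 0.3 + 0.4j                            # a point outside sigma(T)

R = np.linalg.inv(lam * np.eye(N) - T)      # resolvent of the truncated operator
op_norm = np.linalg.norm(R, 2)              # largest singular value = operator norm
predicted = 1.0 / np.min(np.abs(lam - np.diag(T)))
print(op_norm, predicted)                   # agree to machine precision
```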
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9917354583740234, "perplexity": 115.13234288126687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371624083.66/warc/CC-MAIN-20200406102322-20200406132822-00140.warc.gz"}
https://www.physicsforums.com/threads/does-time-translational-symmetry-imply-h-0-or-e-0.839007/
# Does time translational symmetry imply H'=0 or E'=0?

1. Oct 21, 2015 ### davidbenari The Hamiltonian is not always equal to the total energy. In fact the Hamiltonian for a system of particles could be defined as $H=\sum \dot{q_i}\frac{\partial L}{\partial \dot{q_i}}-L$ This is the total energy only if the potential energy is a function of $q_i$ and if the kinetic energy is a homogeneous quadratic function of $\dot{q_i}$. I know how to show that the condition $\frac{\partial L}{\partial t}=0$ implies $\frac{d}{dt}H=0$. But I was left wondering: people always say time-translational symmetry implies conservation of energy, but I don't think this is the case. Time translational symmetry implies the conservation of the Hamiltonian, which may or may not be the total energy. So which one is true? Does time translational symmetry imply conservation of the Hamiltonian or of the energy? In my opinion it could imply the energy too, given a good set of coordinates that aren't flying around in space w.r.t. an inertial frame in a way that would bring time into your Lagrangian... Thanks. Last edited: Oct 21, 2015

2. Oct 22, 2015 ### Staff: Mentor You can always write the total energy as a Hamiltonian. It might be possible to write down a proper Hamiltonian for things that are not the total energy (not sure), but that doesn't change the result of energy conservation.

3. Oct 22, 2015 ### davidbenari Hmm. My book derives $\frac{d}{dt}(L-\sum \dot{q}_i \frac{\partial L}{\partial \dot{q}_i})=0$ from time translational symmetry, where the quantity in parentheses is $-H$. In order to show $H=K+U$ you would need $U=U(q_i)$ and $\sum \dot{q}_i\frac{\partial K}{\partial \dot{q}_i}=2K$ (which is Euler's theorem for homogeneous functions). Also you need that the transformation equations between generalized coordinates and rectangular coordinates don't contain time. This makes sense once you verify those statements (I could post some of this work in case it's not too clear). I don't see why the total energy would always be the Hamiltonian given the restrictions above. Is there a theorem you could point me to? Something to ponder? Thanks.

4. Oct 22, 2015 ### davidbenari Is there a way to circumvent the Hamiltonian expression to derive $\frac{d}{dt} E =0$ from $\frac{\partial L}{\partial t}=0$?

5. Oct 22, 2015 ### davidbenari Also I've noticed many proofs of the typical statements of Noether's theorem aren't as general as people make them sound. For example, "space translational symmetry implies conservation of linear momentum". Well, that requires that the potential be velocity independent. So it's not as general as the sentence in quotation marks implies. I guess most potentials are velocity independent though...
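A hedged symbolic check of the point being debated (my own sketch, with planar polar coordinates as the illustrative system): when the potential depends only on position and the relation between the generalized coordinates and Cartesian ones does not involve time, the quantity H = Σ q̇ᵢ ∂L/∂q̇ᵢ − L reduces to T + U.

```python
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
r = sp.Function('r')(t)
th = sp.Function('theta')(t)
U = sp.Function('U')(r)                       # potential depends on position only

T = sp.Rational(1, 2) * m * (r.diff(t)**2 + r**2 * th.diff(t)**2)
L = T - U

qdots = [r.diff(t), th.diff(t)]
H = sum(qd * sp.diff(L, qd) for qd in qdots) - L   # H = sum q' dL/dq' - L

print(sp.simplify(H - (T + U)))               # 0: here the Hamiltonian is the total energy
```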
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164883494377136, "perplexity": 351.8831870555308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590443.0/warc/CC-MAIN-20180719012155-20180719032155-00608.warc.gz"}
http://web.eecs.utk.edu/~dongarra/etemplates/node145.html
Next: Software Availability Up: Jacobi-Davidson Methods   G. Sleijpen Previous: An Algorithm Template.   Contents   Index Computing Interior Eigenvalues If one is searching for the eigenpair with the smallest or largest eigenvalue only, then the obvious restart approach works quite well, but often it does not do very well if one is interested in an interior eigenvalue. The problem is that the Ritz values converge monotonically towards exterior eigenvalues, and a Ritz value that is close to a target value in the interior of the spectrum may be well on its way to some other exterior eigenvalue. It may even be the case that the corresponding Ritz vector has only a small component in the direction of the desired eigenvector. It will be clear that such a Ritz vector represents a poor candidate for restart and the question is, What is a better vector for restart? One answer is given by the so-called harmonic Ritz vectors, discussed in §3.2; see also [331,349,411]. As we have seen, the Jacobi-Davidson methods generate basis vectors for a subspace . For the projection of onto this subspace we compute the vectors . The harmonic Ritz values are inverses of the Ritz values of , with respect to the subspace spanned by the . They can be computed without inverting , since a harmonic Ritz pair satisfies (60) for and . This implies that the harmonic Ritz values are the eigenvalues of the pencil , or, since : For stability reasons we orthonormalize the columns of and transform the columns of accordingly. This also further simplifies the equation: we see that the harmonic Ritz values are the inverses of the eigenvalues of the symmetric matrix . In [349] it is shown that for Hermitian the harmonic Ritz values converge monotonically towards the smallest nonzero eigenvalues in absolute value. Note that the harmonic Ritz values are unable to identify a zero eigenvalue of , since that would correspond to an infinite eigenvalue of . Likewise, the harmonic Ritz values for the shifted matrix converge monotonically towards eigenvalues closest to the target value . Fortunately, the search subspace for the shifted matrix and the unshifted matrix coincide, which facilitates the computation of harmonic Ritz pairs for any shift. The harmonic Ritz vector for the shifted matrix, corresponding to the harmonic Ritz value closest to , can be interpreted as maximizing a Rayleigh quotient for . It represents asymptotically the best information that is available for the wanted eigenvalue, and hence it represents asymptotically the best candidate as a starting vector after restart, provided that . For harmonic Ritz values, the correction equation has to take into account the orthogonality with respect to , and this leads to skew projections. We can use orthogonal projections in the following way. If is the selected approximation of an eigenvector, the Rayleigh quotient leads to the residual with smallest norm; that is, with , we have that for any scalar , including the harmonic Ritz value . Moreover, the residual for the Rayleigh quotient is orthogonal to . This makes compatible'' with the operator in the correction equation. Here . An algorithm for the Jacobi-Davidson method based on harmonic Ritz values and vectors, combined with restart and deflation, is given in Algorithm 4.19. The algorithm can be used for the computation of a number of successive eigenvalues immediately to the right of the target value . 
To apply this algorithm we need to specify a starting vector , a tolerance , a target value , and a number that specifies how many eigenpairs near should be computed. The value of denotes the maximum dimension of the search subspace. If it is exceeded, a restart takes place with a subspace of specified dimension . On completion, the eigenvalues at the right side nearest to are delivered. The computed eigenpairs , , satisfy , where denotes the th column of . For exterior eigenvalues a simpler algorithm has been described in §4.7.3. We will now comment on some parts of the algorithm in view of our discussions in previous subsections. (1) Initialization phase. (3)-(7) The vector is made orthogonal with respect to the current test subspace by means of modified Gram-Schmidt. This can be replaced, for improved numerical stability, by an adoption (for the vector ) of the template given in Algorithm 4.14. (8)-(10) The values represent elements of the square by matrix , where denotes the by matrix with columns , and likewise . Because is Hermitian, only the upper triangular part of this matrix is computed. (11)-(13) At this point the eigenpairs for the problem should be computed. This can be done with a suitable routine for Hermitian dense matrices from LAPACK. Note that the harmonic Ritz values are the inverses of the eigenvalues of . We have to compute the Rayleigh quotient for , and next normalize , in order to compute a proper residual . We have used that . The vectors are the columns of by matrix and . (14) The stopping criterion is to accept an eigenvector approximation as soon as the norm of the residual (for the normalized eigenvector approximation) is below . This means that we accept inaccuracies in the order of in the computed eigenvalues, and inaccuracies (in angle) in the eigenvectors of , provided that the associated eigenvalue is simple and well separated from the others; see (4.4). Detection of all wanted eigenvalues cannot be guaranteed; see note (14) for Algorithm 4.13 and note (13) for Algorithm 4.17. (17) This is a restart after acceptance of an approximate eigenpair. The restart is slightly more complicated since two subspaces are involved. Recomputation of the spanning vectors from these subspaces is done in (18)-(21). (24) At this point we have a restart when the dimension of the subspace exceeds . After a restart, the Jacobi-Davidson iterations are resumed with a subspace of dimension . The selection of this subspace is based on the harmonic Ritz values nearest to the target . (31)-(32) The deflation with computed eigenvectors is represented by the factors with . The matrix has the computed eigenvectors as its columns. If a left preconditioner is available for the operator , then with a Krylov solver similar reductions are realizable, as in the situation for exterior eigenvalues. A template for the efficient handling of the left-preconditioned operator is given in Algorithm 4.18. Next: Software Availability Up: Jacobi-Davidson Methods   G. Sleijpen Previous: An Algorithm Template.   Contents   Index Susan Blackford 2000-11-20
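As a rough companion to the discussion of harmonic Ritz values above, here is a small NumPy/SciPy sketch of my own (a random symmetric test matrix and target 0, not the book's Algorithm 4.19): with W = AV and an orthonormal basis V of the search space, harmonic Ritz pairs satisfy the small generalized eigenproblem (W*W) s = theta (W*V) s.

```python
import numpy as np
from scipy.linalg import eig, qr

rng = np.random.default_rng(0)

# Hypothetical test problem: a random symmetric matrix and a 10-dimensional search space.
n, m = 100, 10
A = rng.standard_normal((n, n))
A = (A + A.T) / 2

V, _ = qr(rng.standard_normal((n, m)), mode='economic')  # orthonormal basis of the search space
W = A @ V

# Harmonic Ritz values for target 0: solve W^T W s = theta * W^T V s
theta, S = eig(W.T @ W, W.T @ V)
harmonic = np.sort(np.real(theta))

# Ordinary Ritz values, for comparison: eigenvalues of V^T A V
ritz = np.linalg.eigvalsh(V.T @ A @ V)

print(harmonic)  # approximations aimed at the eigenvalues of A of smallest nonzero magnitude
print(ritz)      # ordinary Ritz values, which favor the exterior eigenvalues
```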
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9120842814445496, "perplexity": 502.99966576387254}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826322.0/warc/CC-MAIN-20140820021346-00017-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/speed-of-electron.217436/
# Homework Help: Speed of electron 1. Feb 23, 2008 ### tony873004 ** Edit: Nevermind. I figured it out using the Work-Energy theorem. An electron is released from rest 1.0 cm above a uniformly charged infinite plane with a charge density of 10^-9 C/m^2. What is the speed of the electron when it hits the plane? My attempt: Potential energy when it is released = kinetic energy when it hits. kqQ/r = 0.5 mv^2. Isolate v: $$v = \sqrt {\frac{{2kqQ}}{{m \cdot r}}}$$ This would work if I was given 2 point charges, but how do I do this with a charge density and a point charge? Last edited: Feb 23, 2008
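For reference, a short numerical sketch of the work-energy route mentioned in the edit (my own numbers simply restating the problem; the field of a uniformly charged infinite plane is E = sigma/(2*eps0)):

```python
import math

# SI constants and the problem data as I read it (not values posted in the thread).
eps0 = 8.854e-12   # F/m
e = 1.602e-19      # C, elementary charge
m_e = 9.109e-31    # kg, electron mass

sigma = 1.0e-9     # C/m^2, surface charge density
d = 0.01           # m, release height (1.0 cm)

E_field = sigma / (2 * eps0)      # uniform field of an infinite charged plane
work = e * E_field * d            # work done on the electron over distance d
v = math.sqrt(2 * work / m_e)     # work-energy theorem: W = (1/2) m v^2

print(f"E = {E_field:.1f} V/m, W = {work:.2e} J, v = {v:.2e} m/s")  # v is roughly 4.5e5 m/s
```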
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9648508429527283, "perplexity": 913.7835017461068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219469.90/warc/CC-MAIN-20180822030004-20180822050004-00500.warc.gz"}
http://math.tutornext.com/calculus/limits-at-infinity.html
# Limits at Infinity Sub Topics Limits at infinity is an important topic in calculus. It deals with the maximization of any application that has to be dealt with in physics, chemistry and engineering applications. Example : A particle inside a potential well varies from x $\rightarrow$ 0 to $\infty$. ## Finding Limits at Infinity Finding limits at infinity is an important concept as it helps us understand the behavior of a function. A function can have a domain from +$\infty$ to -$\infty$ or can have a range from +$\infty$ to -$\infty$. Limits at infinity has a broad meaning. It may refer to the limit of the function when the variable approaches $\pm \infty$ , the function approaches $\pm \infty$ for some values of the variable or when the variable approaches $\pm \infty$ , the function also approaches $\pm \infty$ . In the first case it would be referred to as the limits at infinity, in the second case it is called as the infinite limits and in the third case it is referred to as infinite limits at infinity. ## Limits at Infinity Rules To make the evaluation of the limits at infinity, a closer study has revealed certain interesting facts and helped mathematicians to frame the limits at infinity rules. Before we proceed, we would like to remind you that limits at infinity are relevant only in case of rational functions. For polynomial functions, the quick answer is the limit would either be $\infty$ or -$\infty$. Let f(x) is a rational function expressed in the form f(x) = $\frac{g(x)}{h(x)}$ $h(x)\neq 0$. The order of the functions play an important role. The limit at infinity rules state that, 1) If the order of g(x) is higher than that of h(x) then the limit at infinity is $\infty$ , if the sign of the ratio of the leading coefficients is positive and -$\infty$ if the sign of the ratio of the leading coefficients is negative. 2) If the order of g(x) is same as that of h(x) then the limit at infinity is  the ratio of the leading coefficients. 3) If the order of g(x) is lower than that of h(x) then the limit at infinity is  0. ## Evaluating Limits at Infinity Evaluating limits at infinity for rational functions is easily done by the rules we have already stated. These rules are framed by just simple algebraic operations.  As an example, let us illustrate one case. Let f(x) = $\left [ \frac{6x^{2}}{(3x^{2}+2x)} \right ]$, dividing both the numerator and the denominator by x2, f(x) = $\frac{6}{[3+(\frac{2}{x})]}$. As x ->$\infty$, the term $\frac{2}{x}$ becomes 0 and hence the limit is $\frac{6}{3}$ = 2, which is  the ratio of the leading coefficients. In case of polynomial functions the limit at infinity is $\infty$ or -$\infty$, depends on the sign of the leading coefficient. In case of logarithmic functions, the limit at infinity is $\infty$ . In case of exponential functions, the limit at infinity is $\infty$or 0, depending on the sign of the exponent variable. For example, the limit at infinity of ex is $\infty$ and of e-x is 0.The limit at infinity of trigonometric functions are interesting. Since the range of sine and cosine functions are restricted to [-1,1], the limit is indefinite but it is bounded only between -1 and 1. For the remaining four trigonometric functions the limit at infinity is undefined because all have a range of (- $\infty$,$\infty$). ## Limits at Infinity Examples Let us see few examples of limits at infinity, Let f(x) = x4 - 5x2 + 6. This is a polynomial function and the leading coefficient is positive. Hence the limit at infinity of this function is $\infty$ . 
Let f(x) = -2x^3 - 4x^2 + 8. This is a polynomial function and the leading coefficient is negative. Hence the limit at infinity of this function is -$\infty$.

Let f(x) = $\left [ \frac{3x}{(4x^{2}+5x)} \right ]$. This is a rational function and the order of the numerator function is less than that of the denominator function. As per the limits at infinity rules, the limit at infinity for this function is 0.

We have already discussed the limits at infinity for logarithmic, exponential and trigonometric functions. We will now see another important example: the definition of e, the exponential constant. Consider the function f(x) = $[1+(\frac{1}{x})]^{x}$. Let us evaluate the limit at infinity of this function. Taking the natural logarithm on both sides,

$\ln f(x) = \ln [1+(\frac{1}{x})]^{x} = x\ln[1+(\frac{1}{x})] = x[(\frac{1}{x})-(\frac{1}{2x^{2}})+(\frac{1}{3x^{3}})-.....] = 1-(\frac{1}{2x})+(\frac{1}{3x^{2}})-.....$

or, f(x) = $e^{1-(\frac{1}{2x})+(\frac{1}{3x^{2}})-.....}$

Now the limit at infinity of f(x) = limit at infinity of $e^{1-(\frac{1}{2x})+(\frac{1}{3x^{2}})-.....}$ = e. Therefore, the limit at infinity of $[1+(\frac{1}{x})]^{x}$ = e.

## Infinite Limits

Infinite limits mean that the limit of the function is $\infty$ or -$\infty$ for one or more values of the variable. Obvious examples are logarithmic functions and trigonometric functions, other than sine and cosine functions. For rational functions, infinite limits occur at points where the denominator function becomes 0. A logarithmic function has an infinite limit of -$\infty$ as its argument approaches 0. Trigonometric functions, other than sine and cosine functions, have infinite limits of $\infty$ or -$\infty$ for some values of the variable. For example, for f(x) = tan (x) the left hand side limit at odd multiples of $\frac{\pi }{2}$ is $\infty$ and the right hand side limit at odd multiples of $\frac{\pi }{2}$ is -$\infty$.

## Finding Infinite Limits

There are different methods of finding infinite limits of functions, depending on the types of functions. The infinite limits of rational functions occur at the zeroes of the denominator function. Hence the method to find infinite limits is to find the zeroes of the denominator. It may be noted that if there are common factors between the numerator and the denominator functions, the limit is not infinite at a zero of such a common factor; the rational function has only a hole there. For example, if $f(x)$ = $\frac{(x^{2}-1)}{[(x-1)(x+2)]}$, the limit at x = 1 is not infinite. On the other hand, it has an infinite limit at x = -2. The infinite limits of other functions can be found by solving for the value of the variable which would make the function approach $\infty$ or -$\infty$. We will discuss more with examples in the next section.

## Evaluating Infinite Limits

Evaluating infinite limits, or solving infinite limits, involves mostly algebraic techniques based on the concept of infinite limits. Let us illustrate with a few examples.

## Infinite Limits at Infinity

Functions which tend to $\infty$ or -$\infty$ when the variable approaches $\infty$ or -$\infty$ are said to have infinite limits at infinity. All polynomial functions, the logarithmic function and exponential functions with positive exponents have infinite limits at infinity. A rational function in which the numerator function is of higher order than that of the denominator function has infinite limits at infinity. This is because the rational function can be simplified by long division and the quotient will contain the variable or its higher powers. For example, the limit at infinity of the function f(x) = $\frac{(x^{2}+2x-4)}{(x)}$ is $\infty$.
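These limits can be checked quickly with SymPy (a sketch of my own; the functions are the examples used above):

```python
import sympy as sp

x = sp.symbols('x')

print(sp.limit(6*x**2 / (3*x**2 + 2*x), x, sp.oo))    # 2: equal orders -> ratio of leading coefficients
print(sp.limit(3*x / (4*x**2 + 5*x), x, sp.oo))        # 0: numerator of lower order
print(sp.limit(x**4 - 5*x**2 + 6, x, sp.oo))           # oo: polynomial, positive leading coefficient
print(sp.limit((1 + 1/x)**x, x, sp.oo))                # E: the definition of e derived above
print(sp.limit((x**2 - 1) / ((x - 1)*(x + 2)), x, 1))  # 2/3: a hole, not an infinite limit
```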
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9967238903045654, "perplexity": 232.73916037751692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00069-ip-10-171-96-226.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1950206/understanding-low-rank-approximation-from-the-svd
# Understanding low-rank approximation, from the SVD I've been on a couple Wikipedia pages today reading up on the SVD and the use of low rank approximation, and I have a couple of basic questions: if $$A = U\Sigma V^*$$ $$= [U_1 U_2] \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \\ \end{bmatrix}[V_1 V_2]^*$$ then $A'=U_1\Sigma_1V_1^*$, called a "reduced SVD", is a rank $r$ matrix such that the Frobenius norm $||A-A'||_F$ is minimized. So, does this mean that for some large data matrix -- let's say all the columns of $A$ represent lung cancer patients, and the rows represent variables such as the patients' age, height, weight, marital status, smoker / non-smoker, has or doesn't have family history of cancer, etc. -- with the lower rank $r$ matrix, we essentially "delete" all of the rows that are insignificant, in the sense that those rows of variables showed no variance and so aren't helpful? E.g. maybe the vast majority of patients are married, and so we delete the row corresponding to marital status. And so we keep all the rows of the matrix that have the most variance. Intuitively, this seems wrong: based on the above, I could wrongly throw out the row variable of smoker status, if the vast majority of the patients were smokers and so there is little variance. But that would be throwing out pretty essential data that shows that most lung cancer patients were smokers. So, where have I gone wrong in my thinking of low-rank approximation / the SVD? Also, concerning the data matrix $A$: does it ever act on vectors via matrix multiplication? That would seem silly...what would its "action" even be? It's just an enormous array of the patients' data. It's not some...rotation...or dilation...or reflection....or projection... whereas, in contrast, a stochastic matrix acting on probability vectors has the effect of 'updating' the probability vector of some Markov chain. Thanks,
• – dantopa Jun 5 '17 at 22:38
As I mentioned in my answer, your formula does not give a norm... Also remember that when you do low rank approximation you basically remove the contribution of the singular vectors that correspond to the smallest singular values. These do not necessarily correspond to the rows of the matrix $A$ but could correspond to some linear combination of them. I hope this helps.
• Glad I could help. As far as I know Matlab doesn't do any qualitative stuff on its own. Matlab will perform SVD on the Matrix as you say and will give the researcher 3 matrices. The $U$, $V$ matrices contain singular vectors and the $\Sigma$ matrix will have the singular values. The researcher can then decide how many principal components he wants to use. He can keep the singular vectors (these correspond to the directions of maximum variance as we have pointed out). He may then project onto the space of the principal components to get a new feature description – MrHat Oct 3 '16 at 13:57
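Here is a small NumPy sketch of my own (with made-up data) of the truncation being discussed; note, as the answer says, that what gets discarded are trailing singular directions, which are linear combinations of rows and columns rather than individual rows:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 5))   # stand-in for a small patients-by-variables matrix

U, s, Vh = np.linalg.svd(A, full_matrices=False)

r = 2
A_r = U[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]   # rank-r truncated SVD

print(np.linalg.norm(A - A_r, 'fro'))   # Frobenius error of the truncation
print(np.sqrt(np.sum(s[r:]**2)))        # equals sqrt of the sum of squared discarded singular values
# Eckart-Young: no rank-2 matrix has a smaller Frobenius-norm error than A_r.
```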
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8072715401649475, "perplexity": 471.359562481911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488870.33/warc/CC-MAIN-20191206145958-20191206173958-00388.warc.gz"}
http://mathoverflow.net/questions/115207/finite-order-arithmetic-and-etcs?answertab=oldest
# Finite order arithmetic and ETCS I'm looking for a reference to the statement that Lawvere's Elementary Theory of the Category of Sets (ETCS) is equal in proof-theoretic strength to finite order arithmetic. The person who informed me of this said it was well-known in certain circles, but he couldn't think of a reference. Actually, all I need is a reference to one half of the equivalence: that anything provable in finite order arithmetic is provable in ETCS. The story: I've been looking at Colin McLarty's paper A finite order arithemetic foundation for cohomology, which shows that nothing stronger than finite order arithmetic is needed anywhere in EGA or SGA. I want to state that nothing stronger than ETCS is needed anywhere in EGA or SGA. To back that up with references, I therefore need something that relates ETCS to finite order arithmetic. Edit This question has generated lots of discussion about McLarty's paper. I'm genuinely interested in that discussion, but I'd also like to emphasize that it's peripheral to my question, which is simply a reference request: where can I find it stated/proved that ETCS is equal in strength to finite order arithmetic? Further edit Maybe I can make this question more transparent to experts in non-categorical set theory. ETCS is well-known to have the same strength as the membership-based theory known as "bounded Zermelo with choice" or "restricted Zermelo with choice". (One reference: Mac Lane and Moerdijk, Sheaves in Geometry and Logic, Section VI.10.) The axioms are extensionality, empty set, pairing, union, power set, foundation, restricted comprehension, infinity, and choice. Here "restricted comprehension" means that we only consider formulas that are restricted in the sense that all quantifiers are of the form "$\forall x \in y$" or "$\exists x \in y$". - David: I don't know. I simply want a reference! –  Tom Leinster Dec 2 '12 at 22:14 @xuhan: I guess I'm not sure 26 letters suffice to write every word in English since I haven't read the entire OED... –  François G. Dorais Dec 2 '12 at 23:15 @Xuhan. The fact is that little of EGA or SGA raises any set theoretic issues. Much of it is commutative algebra transparently formalizable in second order arithmetic. While crystalline cohomology does go beyond that, it may not go beyond third order arithmetic, and clearly is far short of arithmetic of all finite orders. The stronger claims in "arxiv.org/abs/1102.1773" are supported by arguments in the paper. –  Colin McLarty Dec 3 '12 at 0:23 As Tom knows, ETCS in the original published form is proof theoretically equivalent to Zermelo set theory with the separation axiom restricted to formulas with all quantifiers bounded. That proof is published in several places. Both those theories are equivalent to the arithmetic of all finite orders. The result does follow from the result I address in arxiv.org/abs/1207.6357 but that draft is defective and I have a repair in progress. I first said the fact about finite order arithmetic is simpler than that paper, but actually I do not know it is, and anyway I do not know a reference for it. –  Colin McLarty Dec 3 '12 at 0:27 @Joel David Hamkins. The place to go for intuition on set theoretic issues in ETCS is A.R.D. Mathias, 1992: "What is Mac Lane missing", in W.J.H. Judah and H.Woodin (eds), `Set Theory of the Continuum', although that paper puts ETCS into a membership-based form. The title is a joke as Adrien uses "MacLane" as the name of a set theory. 
In short, you cannot use induction on the natural numbers in unbounded set theoretic constructions so you can prove each transfinite cardinal has a successor but not that there are unboundedly many of them. –  Colin McLarty Dec 3 '12 at 0:41 Ah, Thomas Forster's 1998 paper "Weak systems of set theory related to HOL" is available on-line at various places including https://www.dpmms.cam.ac.uk/~tf/maltapaper.ps He says it is proved in Jensen RB "On the consistency of a slight (?) modification of Quine's NF" Synthese 19 1969 pp 25--63. Lake J "Comparing Type theory and Set theory" Zeitschrift fur Matematische Logik 21 1975 pp 355-56. For a fanatically detailed proof and discussion see Mathias at https://www.dpmms.cam.ac.uk/~ardm/maclane.pdf - Thanks a lot, Colin, but I'm pretty confused. None of Forster, Jensen or Mathias seem to mention finite order arithmetic (or $n$th order arithmetic) by name. I haven't been able to get hold of Lake yet. Forster says that Jensen and Lake prove the equivalence of ETCS (or rather, what he calls Mac Lane set theory) with something he calls TST. But then, neither of the strings "TST" or "theory of simple types" appear in Jensen. –  Tom Leinster Dec 3 '12 at 13:57 So here's my understanding. We know that ETCS is equivalent to bounded Zermelo with choice, also called Mac Lane set theory: there are plenty of references for that. Forster says that Jensen and Lake prove that Mac Lane set theory is equivalent to something that Forster calls TST (Theory of Simple Types). Jensen doesn't mention anything by this name, nor finite order arithmetic. –  Tom Leinster Dec 3 '12 at 14:01 Yes. More than knowing ETCS has the strength of bounded Zermelo (which equals the strength of bounded Zermelo with choice) we have many published proofs. I did not look at Jensen or Lake. I could not get Jensen here at home. And I always like to read Mathias. ETCS is actually bi-interpretable with certain variants of bounded Z with choice, meaning they not only have the same strength but prove exactly the same theorems. The main issue there seems to be existence of transitive closures. –  Colin McLarty Dec 3 '12 at 15:47 Ok, so I notice you (Colin) say "It has the proof theoretic strength of finite order arithmetic, in the sense of the simple theory of types with infinity (see Takeuti (1987, Part II))." (For others, Takeuti's book is reviewed in Zentralblatt here zentralblatt-math.org/zmath/… (first edition) and here zentralblatt-math.org/zbmath/search/?q=an%3A0609.03019 (second edition)). Where in Takeuti is it shown that TST is equivalent to finite-order arithmetic? –  David Roberts Dec 4 '12 at 0:24 For Takeuti, "simple type theory" is synonymous with "higher (finite) order predicate logic," and does not include an axiom of infinity (though I think for most people simple type theory does include infinity). this is my favorite reference on simple type theory. So I call simple type theory with infinity "finite order arithmetic." –  Colin McLarty Dec 4 '12 at 2:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8019448518753052, "perplexity": 1177.6712017929417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936465456.40/warc/CC-MAIN-20150226074105-00139-ip-10-28-5-156.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/198385-gaussian-elimination-3-simultaneous-equations.html
# Math Help - Gaussian Elimination - 3 Simultaneous equations 1. ## Gaussian Elimination - 3 Simultaneous equations Hi all, Really, really struggling with this question. I've tried using a few examples but I cant seem to get narrow down 3 terms to just 1. I always end up with 2 letters/terms remaining. The 3 equations I have are: 7i + 8ii + 7iii = 2 (Equation 1) 6i + 7ii + 5iii = 8 (Equation 2) 6i + 9ii + 6iii = 2 (Equation 3) So I need to work out the values for i, ii and iii. I tried using the 3 step procedure as described in a text book I have. But it doesn't seem to work. I hope someone can help. 2. ## Re: Gaussian Elimination - 3 Simultaneous equations instead of i, ii and iii i will use a,b, and c. so our 3 equations become: 7a + 8b + 7c = 2 6a + 7b + 5c = 8 6a + 9b + 6c = 2. subtract equation 2 from equation 3 to obtain: 2b + c = -6 (*) now, we need another equation with just b and c in it, to see if we can eliminate another variable. since we want the "a" terms to cancel, multiply equation 1 by 6, and equation 2 by 7: 42a + 48b + 42c = 12 (equation 1a) 42a + 49b + 35c = 56 (equation 2a) then subtract equation 1a from equation 2a to get: b - 7c = 44 (**), and now we multiply (**) by 2 to get: 2b - 14c = 88. subtract (*) from this, and we have: -15c = 94, so c = -94/15. using (*) we have: 2b - 94/15 = -6 b - 47/15 = -3 b = -3 + 47/15 = -45/15 + 47/15 = 2/15 and finally, using equation 1, we have: 7a + 8(2/15) + 7(-94/15) = 2, so: a + 16/105 - 94/15 = 2/7 thus: a = 2/7 - 16/105 + 94/15 = 30/105 - 16/105 + 658/105 = 672/105 = 96/15 = 32/5. now let's see if our solution is correct (the first two times i did this, i made errors, so don't feel bad if you did, too): 7(32/5) + 8(2/15) + 7(-94/15) = 224/5 + 16/15 - 658/15 = 4704/105 + 112/105 - 4606/105 = 210/105 = 2 (so equation 1 checks out). 6(32/5) + 7(2/15) + 5(-94/15) = 192/5 + 14/15 - 94/3 = 576/15 + 14/15 - 470/15 = 120/15 = 8 (so equation 2 checks out). 6(32/5) + 9(2/15) + 6(-94/15) = 192/5 + 6/5 - 188/5 = 10/5 = 2 (all three equations check out). so a = 32/5, b = 2/15, c = -94/15 is indeed the correct solution (with some rather ugly arithmetic attached). 3. ## Re: Gaussian Elimination - 3 Simultaneous equations Thanks Deveno! That is perfect!! Going back over my original attempts I realise that NOT using fractions was a mistake! The decimal numbers got VERY messy. Thanks again, for taking the time to write it all out. I actually understand the process better now. Chris 4. ## Re: Gaussian Elimination - 3 Simultaneous equations Hello, chrisa112! Didn't you say Gaussian elimination? $\begin{array}{ccc|c}7a + 8b + 7c &=& 2 \\ 6a + 7b + 5c &=& 8 \\ 6a + 9b + 6c &=& 2 \end{array}$ We have: . $\left|\begin{array}{ccc|c}7&8&7&2 \\ 6&7&5&8 \\ 6&9&6&2 \end{array}\right|$ $\begin{array}{c}R_1-R_2 \\ \\ R_3-R_2 \end{array}\left|\begin{array}{ccc|c}1&1&2&\text{-}6 \\ 6&7&5&8 \\ 0&2&1&\text{-}6 \end{array}\right|$ $\begin{array}{c} \\ R_2-6R_1 \\ \\ \end{array}\left|\begin{array}{ccc|c}1&1&2&\text{-}6 \\ 0&1&\text{-}7&44 \\ 0&2&1&\text{-}6 \end{array}\right|$ $\begin{array}{c}R_1-R_2\ \\ R_3-2R_2 \end{array}\left|\begin{array}{ccc|c}1&0&9&\text{-}50 \\ 0&1&\text{-}7&44 \\ 0&0&15&\text{-}94 \end{array}\right|$ . . 
$\begin{array}{c}\\ \\ \frac{1}{15}R_3 \end{array}\left|\begin{array}{ccc|c}1&0&9&\text{-}50 \\ 0&1&\text{-}7&44 \\ 0&0&1&\text{-}\frac{94}{15}\end{array}\right|$ $\begin{array}{c}R_1 - 9R_3 \\ R_2 + 7R_3 \\ \\ \end{array}\left|\begin{array}{ccc|c}1&0&0&\frac{3 2}{5} \\ 0&1&0&\frac{2}{15} \\ 0&0&1&\text{-}\frac{94}{15} \end{array}\right|$ 5. ## Re: Gaussian Elimination - 3 Simultaneous equations What Deveno did is also "Gaussian Elimination" as well as your row-reduction. 6. ## Re: Gaussian Elimination - 3 Simultaneous equations Yes Deveno had solved it very smartly and this is a “Gaussian Elimination” even I got agreed with Hallsoflvy. Good job done by Deveno.
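For anyone wanting to double-check the arithmetic without the messy decimals, here is a short exact-arithmetic sketch of my own using Python's fractions module (a, b, c stand for the thread's i, ii, iii):

```python
from fractions import Fraction as F

# Solution found above: a = 32/5, b = 2/15, c = -94/15
a, b, c = F(32, 5), F(2, 15), F(-94, 15)

eqs = [
    (7, 8, 7, 2),   # 7a + 8b + 7c = 2
    (6, 7, 5, 8),   # 6a + 7b + 5c = 8
    (6, 9, 6, 2),   # 6a + 9b + 6c = 2
]
for p, q, r, rhs in eqs:
    print(p * a + q * b + r * c == rhs)   # True, True, True
```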
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8042813539505005, "perplexity": 1879.6887529545984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274191.57/warc/CC-MAIN-20160524002114-00139-ip-10-185-217-139.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/205446/norm-of-complex-matrix-with-euclidean-norm
# Norm of complex Matrix with euclidean norm Would you help me to prove the norm of 2 by 2 matrix $$A=\begin{bmatrix} a & b \\ c & d% \end{bmatrix}$$ is $$\def\trc{\operatorname{tr}}\frac{\trc(A^*A)+\sqrt{\trc(A^*A)^2-4\det(A^*A)}}{2}.$$ I just try by maximizing $h(x,y)=||A\begin{bmatrix} x \\ y% \end{bmatrix}||$ where $x^2+y^2=1$ but not get this result. - As I mentioned in my answer, your formula does not give a norm. You need to take the square root of that. For example, if you let $a=2$, $b=c=0$, $d=1$, then for $A+A$ your formula gives $16$, while for $A$ it gives $4$: so it doesn't satisfy the triangle inequality. – Martin Argerami Oct 3 '12 at 1:36 I think you are talking about the operator norm induced by the euclidean norm, and not the euclidean norm per se. In that case, the norm will be the square root of the number you mention. The operator norm of $A$ can be characterized as the square root of the biggest eigenvalue of $A^*A$. The eigenvalues of $A^*A$ are the roots of its characteristic polynomial. As $A^*A$ is $2\times 2$, its characteristic polynomial is $p(t)=t^2-\mbox{tr}(A^*A)\,t+\det(A^*A)$. So its biggest eigenvalue is $$\frac{\mbox{tr}(A^*A)+\sqrt{\mbox{tr}(A^*A)^2-4\det(A^*A)}}2$$ (the polynomial $p$ has always non-negative roots, so the formula above certainly gives the biggest eigenvalue). So the norm of $A$ is $$\left(\frac{\mbox{tr}(A^*A)+\sqrt{\mbox{tr}(A^*A)^2-4\det(A^*A)}}2\right)^{1/2}.$$ - Couldnt be solve by calculus only? – beginner Oct 1 '12 at 15:04 Probably. It will be certainly more convoluted, because if you try to do Lagrange multipliers to your $h$, what you are really doing is first finding a vector where the maximum occurs, and then evaluating $h$ at that vector. – Martin Argerami Oct 1 '12 at 15:19 It's the spectral radius of $A^*A$, that is, the largest modulus of the eigenvalues of $A^*A$. We can express them with a second degree equation satisfied by the eigenvalues, and knowing they are non-negative, we can see that the given expression is the norm of the matrix. - The problem cant be solved only by multivariable calculus (using Lagrange Multiplier)? – beginner Oct 1 '12 at 14:50 It's possible, but I don't know whether it is simpler. You could include your attempt with this method in the OP, then we will try to see where you think there is a problem. – Davide Giraudo Oct 1 '12 at 14:53
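A quick numerical sanity check of the characterization given in the answers (a sketch of my own): for a random complex 2x2 matrix, the square root of the largest root of $t^2-\mbox{tr}(A^*A)\,t+\det(A^*A)$ agrees with NumPy's operator 2-norm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

M = A.conj().T @ A
tr, det = np.trace(M).real, np.linalg.det(M).real

lam_max = (tr + np.sqrt(tr**2 - 4 * det)) / 2   # largest eigenvalue of A*A
print(np.sqrt(lam_max))                         # square root of that eigenvalue ...
print(np.linalg.norm(A, 2))                     # ... matches the spectral (operator 2-) norm
```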
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9735608696937561, "perplexity": 145.54791552576475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464253.80/warc/CC-MAIN-20151124205424-00075-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/82879-second-order-differential-equations.html
# Thread: Second order differential equations! 1. ## Second order differential equations! Find y in terms of x given that d^2y/dx^2 - 4(dy/dx) + 4y = e^2x and that dy/dx = 1 and y = 0 at x = 0. I've worked out that the complementary function is (A+Bx)e^2x. However, I don't understand why I should be using y = k(x^2)e^2x as the particular integral and not y = ke^2x. Can anyone explain? 2. Originally Posted by Erghhh Find y in terms of x given that d^2y/dx^2 - 4(dy/dx) + 4y = e^2x and that dy/dx = 1 and y = 0 at x = 0. I've worked out that the complementary function is (A+Bx)e^2x. However, I don't understand why I should be using y = k(x^2)e^2x as the particular integral and not y = ke^2x. Can anyone explain? Since your complementary solution contains an $xe^{2x}$ term, then by reduction of order, your particular solution must contain an $x^2e^{2x}$ term. Thus, I believe your particular solution must take on the form $y_p=\left(A_1+A_2x+A_3x^2\right)e^{2x}$.
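For what it's worth, SymPy confirms this (a sketch of my own, not part of the thread): the solved initial value problem contains the x^2 e^{2x} piece precisely because e^{2x} and x e^{2x} already solve the homogeneous equation.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' - 4y' + 4y = e^(2x), with y(0) = 0 and y'(0) = 1
ode = sp.Eq(y(x).diff(x, 2) - 4*y(x).diff(x) + 4*y(x), sp.exp(2*x))
sol = sp.dsolve(ode, y(x), ics={y(0): 0, y(x).diff(x).subs(x, 0): 1})
print(sol)   # y(x) = (x + x**2/2)*exp(2*x): the particular integral contributes the x**2 term
```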
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8717824220657349, "perplexity": 1318.592883699985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662698.85/warc/CC-MAIN-20160924173742-00085-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/e-mc-2-do-i-have-this-right.65805/
# E=MC^2 - Do I have this right? 1. Mar 3, 2005 OK first let me explain, I have NO formal physics background, not even high school, so please don't flame me if these are dumb questions. 1. What unit of measure is E? If it were distance we could say miles or kilometers. 2. I understand that this formula is for mass at rest. If the mass were moving at 50% light speed, would the answer to E=MC^2 be half of what it would be for the same mass at rest? 3. In nuclear fusion I understand the energy is based on applying this formula to the mass left over when 2 atoms fuse. So looking at a Hydrogen atom, if .00794 atomic mass were converted into energy during the fusion, does that mean the energy gained from this fusion is Energy = .00794 x c^2? I don't know if that is the right number but I needed to apply one for the purpose of this question. Thanks 2. Mar 3, 2005 ### dextercioby Energy, and it is measured in Joules. Not exactly in that form. $$E=m_{0}c^{2}$$ is for mass at rest. Notice the fact that the subscript "0" indicates that thing... No. It would be more. It's really easy to compute the energy, once you know the connection between rest mass $m_{0}$, movement mass $m$ (or "M", as you denoted it) and the velocity $v=|\vec{v}|$. Just arithmetic. EXACTLY... Einstein's formula tells us how much energy is involved in nuclear reactions... Pay attention to the units, though. That 0.00794 a.m.u. needs to be converted to kg and then the final result (the energy) would be in Joules, energy's unit. Daniel. 3. Mar 3, 2005 ### James R 1. Any unit of energy will do. The standard SI unit is the Joule. If m is in kilograms and c is in metres per second, then E works out in Joules. 2. The complete formula is: $$E = \frac{mc^2}{\sqrt{1-(v/c)^2}}$$ where $v$ is the speed of the object. If you plug in $v=c/2$, you get: $$E = \frac{mc^2}{\sqrt{1-(1/2)^2}} = \frac{2}{\sqrt{3}}mc^2 \approx (1.15)mc^2$$ When an object is moving, the energy is greater than $mc^2$. In fact, $mc^2$ is the rest energy only. The "extra" energy is kinetic energy. 3. Essentially, you are right, but you need the mass in kilograms rather than atomic mass units. The conversion factor is: $1$ amu = $1.66 \times 10^{-27}$ kilograms. Hope this helps. 4. Mar 3, 2005 ### TsunamiJoe also, not to sound stupid, but what exactly is E=mc^2 solving, the energy at rest? and if that is the case would that be more commonly known as potential energy? 5. Mar 4, 2005 ### dextercioby No. In the form where that "m" is the rest mass, it is just the rest energy of the particle. If that "m" is not the rest mass, it is the TOTAL ENERGY OF THE PARTICLE, rest + kinetic... Daniel. 6. Mar 4, 2005 ### SpaceTiger Staff Emeritus Physicist - Joules Astrophysicist - Ergs Particle Physicist - eV Nutritionist - Calories Electrician - Kilowatt-hours King of England - Foot-pounds Oil Tycoon - Barrels Enron Executive - Dollars Mars Spacecraft Operator - Joules, no....foot-pounds, wait, no... 7. Mar 4, 2005 ### pmb_phy The unit is Joules. This is a derived unit and as such can be expressed in terms of basic units, i.e. [joules] = [N = newtons][L = distance], where [joules] = [kg][m^2]/[s^2]. If you intend m to mean rest mass then your equation leaves something to be desired. The actual expression is $E = \gamma mc^2$. You should have written E_0 = mc^2. The expression for energy for a particle in motion is $$E = \gamma m c^2$$ recall that $$\gamma = \frac{1}{\sqrt{1-v^2/c^2}}$$ It's rather simple to get the energy you're speaking of. Merely replace v by v/2. Pete 8.
Mar 4, 2005 ### Moose352 He meant to write c/2, not v/2. 9. Mar 5, 2005 Thanks! I understand it better now.
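A small numeric illustration of the formulas quoted above (a sketch of my own): the total energy E = gamma*m*c^2 of 1 kg at rest and at half the speed of light, reproducing the 2/sqrt(3) factor computed in the thread.

```python
import math

c = 2.998e8   # m/s
m = 1.0       # kg (rest mass)

def total_energy(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * m * c ** 2          # joules; reduces to m*c**2 when v = 0

E_rest = total_energy(m, 0.0)
E_half = total_energy(m, 0.5 * c)

print(E_rest)            # ~9.0e16 J
print(E_half / E_rest)   # ~1.1547 = 2/sqrt(3), i.e. more energy when moving, not half
```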
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9721924066543579, "perplexity": 974.5456832917555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423809.62/warc/CC-MAIN-20170721202430-20170721222430-00233.warc.gz"}
http://rcd.ics.org.ru/authors/detail/1602-alexandr_kirillov
0 2013 Impact Factor # Alexandr Kirillov ## Publications: Kirillov A. A. Modification of gravity and Dark Matter 2006, vol. 11, no. 2, pp.  269-280 Abstract Upon a phenomenological consideration of possible modifications of gravity we introduce a bias operator $ρ_{DM} =\hat{K}ρ_{vis}$. We show that the empirical definition of a single bias function $K_{emp}(r,t)$ (i.e., of the kernel for the bias operator) allows to account for all the variety of Dark Matter halos in astrophysical systems. For every discrete source such a bias produces a specific correction to the Newton's potential $\phi=-GM(1/r+F(r,t))$ and therefore all DM effects can be explained as a modification of the gravity law. We also demonstrate that a specific choice of the bias $K \sim 1/r^2$ (which produces a logarithmic correction to the Newton's law $F \sim -\ln r$) shows quite a good qualitative agreement with the observed picture of the modern Universe Keywords: galaxy formation, clusters, dark matter Citation: Kirillov A. A.,  Modification of gravity and Dark Matter , Regular and Chaotic Dynamics, 2006, vol. 11, no. 2, pp. 269-280 DOI: 10.1070/RD2006v011n02ABEH000350 Kirillov A. A. Billiards in Cosmological Models 1996, vol. 1, no. 2, pp.  13-22 Abstract Recently the billiards, forming an important part in theory of dynamical systems with singularities [1,2], found their unexpected application in cosmological problems. It turnes out that a wide class of cosmological models near a singular point, corresponding to the origin of development of our Universe, admits its representation as billiards on the space of constant negative curvature [4,5]. A problem of similar model randomness is reduced to a problem of properties of corresponding billiards. The aim of the present paper is to show the way in which such representation is reached and to present the results obtained within the limits of those models. Citation: Kirillov A. A.,  Billiards in Cosmological Models, Regular and Chaotic Dynamics, 1996, vol. 1, no. 2, pp. 13-22 DOI:10.1070/RD1996v001n02ABEH000011
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8541853427886963, "perplexity": 586.5855480210655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156471.4/warc/CC-MAIN-20180920120835-20180920141235-00402.warc.gz"}
https://scribesoftimbuktu.com/convert-to-a-fraction-7-826/
# Convert to a Fraction 7.826
7.826
Convert the decimal number to a fraction by placing the decimal number over a power of ten. Since there are 3 numbers to the right of the decimal point, place the decimal number over 10^3 (1000). Next, add the whole number to the left of the decimal.
$7\frac{826}{1000}$
Reduce the fractional part of the mixed number.
$7\frac{413}{500}$
Convert $7\frac{413}{500}$ to an improper fraction. A mixed number is an addition of its whole and fractional parts.
$7+\frac{413}{500}$
To write 7 as a fraction with a common denominator, multiply by $\frac{500}{500}$.
$7\cdot\frac{500}{500}+\frac{413}{500}$
Combine 7 and $\frac{500}{500}$.
$\frac{7\cdot 500}{500}+\frac{413}{500}$
Combine the numerators over the common denominator.
$\frac{7\cdot 500+413}{500}$
Simplify the numerator. Multiply 7 by 500.
$\frac{3500+413}{500}$
Add 3500 and 413.
$\frac{3913}{500}$
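The same conversion can be checked with Python's fractions module (a sketch of mine, not part of the walkthrough):

```python
from fractions import Fraction

x = Fraction(7826, 1000)        # 7.826 written over 10^3
print(x)                        # 3913/500, the reduced improper fraction
print(x - 7)                    # 413/500, the fractional part of the mixed number 7 413/500
print(Fraction('7.826') == x)   # True
```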
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8266333341598511, "perplexity": 720.4248157979507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00379.warc.gz"}
https://everything.explained.today/Generalized_continued_fraction/
# Generalized continued fraction explained

In complex analysis, a branch of mathematics, a generalized continued fraction is a generalization of regular continued fractions in canonical form, in which the partial numerators and partial denominators can assume arbitrary complex values.

A generalized continued fraction is an expression of the form

$$x = b_0 + \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 + \cfrac{a_4}{b_4 + \ddots}}}}$$

where the $a_i$ are the partial numerators, the $b_i$ are the partial denominators, and the leading term $b_0$ is called the integer part of the continued fraction.

The successive convergents of the continued fraction are formed by applying the fundamental recurrence formulas:

$$x_0 = \frac{A_0}{B_0} = b_0, \qquad x_1 = \frac{A_1}{B_1} = \frac{b_1 b_0 + a_1}{b_1}, \qquad x_2 = \frac{A_2}{B_2} = \frac{b_2(b_1 b_0 + a_1) + a_2 b_0}{b_2 b_1 + a_2}, \quad \dots$$

where $A_n$ is the numerator and $B_n$ is the denominator, called continuants, of the $n$th convergent. They are given by the recursion

$$A_n = b_n A_{n-1} + a_n A_{n-2}, \qquad B_n = b_n B_{n-1} + a_n B_{n-2} \qquad \text{for } n \ge 1$$

with initial values

$$A_{-1} = 1, \quad A_0 = b_0, \quad B_{-1} = 0, \quad B_0 = 1.$$

If the sequence of convergents $\{x_n\}$ approaches a limit the continued fraction is convergent and has a definite value. If the sequence of convergents never approaches a limit the continued fraction is divergent. It may diverge by oscillation (for example, the odd and even convergents may approach two different limits), or it may produce an infinite number of zero denominators $B_n$.

## History

The story of continued fractions begins with the Euclidean algorithm,[1] a procedure for finding the greatest common divisor of two natural numbers $m$ and $n$. That algorithm introduced the idea of dividing to extract a new remainder - and then dividing by the new remainder repeatedly.

Nearly two thousand years passed before Rafael Bombelli devised a technique for approximating the roots of quadratic equations with continued fractions in the mid-sixteenth century. Now the pace of development quickened. Just 24 years later, in 1613, Pietro Cataldi introduced the first formal notation for the generalized continued fraction. Cataldi represented a continued fraction as

$$a_0 \cdot\, \&\, \frac{n_1}{d_1\cdot}\, \&\, \frac{n_2}{d_2\cdot}\, \&\, \frac{n_3}{d_3\cdot}$$

with the dots indicating where the next fraction goes, and each $\&$ representing a modern plus sign.

Late in the seventeenth century John Wallis introduced the term "continued fraction" into mathematical literature. New techniques for mathematical analysis (Newton's and Leibniz's calculus) had recently come onto the scene, and a generation of Wallis' contemporaries put the new phrase to use.

In 1748 Euler published a theorem showing that a particular kind of continued fraction is equivalent to a certain very general infinite series. Euler's continued fraction formula is still the basis of many modern proofs of convergence of continued fractions.

In 1761, Johann Heinrich Lambert gave the first proof that $\pi$ is irrational, by using the following continued fraction for $\tan x$:

$$\tan(x) = \cfrac{x}{1 + \cfrac{-x^2}{3 + \cfrac{-x^2}{5 + \cfrac{-x^2}{7 + \ddots}}}}$$

Continued fractions can also be applied to problems in number theory, and are especially useful in the study of Diophantine equations.
In the late eighteenth century Lagrange used continued fractions to construct the general solution of Pell's equation, thus answering a question that had fascinated mathematicians for more than a thousand years.[2] Amazingly, Lagrange's discovery implies that the canonical continued fraction expansion of the square root of every non-square integer is periodic and that, if the period is of length $p > 1$, it contains a palindromic string of length $p - 1$.

In 1813 Gauss derived from complex-valued hypergeometric functions what is now called Gauss's continued fractions. They can be used to express many elementary functions and some more advanced functions (such as the Bessel functions), as continued fractions that are rapidly convergent almost everywhere in the complex plane.

## Notation

The long continued fraction expression displayed in the introduction is probably the most intuitive form for the reader. Unfortunately, it takes up a lot of space in a book (and is not easy for the typesetter, either). So mathematicians have devised several alternative notations. One convenient way to express a generalized continued fraction looks like this:

$$x = b_0 + \frac{a_1}{b_1 +}\, \frac{a_2}{b_2 +}\, \frac{a_3}{b_3 +} \cdots$$

Pringsheim wrote a generalized continued fraction this way:

$$x = b_0 + \frac{a_1 \mid}{\mid b_1} + \frac{a_2 \mid}{\mid b_2} + \frac{a_3 \mid}{\mid b_3} + \cdots.$$

Carl Friedrich Gauss evoked the more familiar infinite product when he devised this notation:

$$x = b_0 + \underset{i=1}{\overset{\infty}{\operatorname{K}}} \frac{a_i}{b_i}.$$

Here the "K" stands for Kettenbruch, the German word for "continued fraction". This is probably the most compact and convenient way to express continued fractions; however, it is not widely used by English typesetters.

## Some elementary considerations

Here are some elementary results that are of fundamental importance in the further development of the analytic theory of continued fractions.

### Partial numerators and denominators

If one of the partial numerators $a_{n+1}$ is zero, the infinite continued fraction

$$b_0 + \underset{i=1}{\overset{\infty}{\operatorname{K}}} \frac{a_i}{b_i}$$

is really just a finite continued fraction with $n$ fractional terms, and therefore a rational function of $a_1$ to $a_n$ and $b_0$ to $b_n$. Such an object is of little interest from the point of view adopted in mathematical analysis, so it is usually assumed that all $a_i \neq 0$. There is no need to place this restriction on the partial denominators $b_i$.

### The determinant formula

When the $n$th convergent of a continued fraction

$$x_n = b_0 + \underset{i=1}{\overset{n}{\operatorname{K}}} \frac{a_i}{b_i}$$

is expressed as a simple fraction $x_n = \frac{A_n}{B_n}$ we can use the determinant formula to relate the numerators and denominators of successive convergents $x_n$ and $x_{n-1}$ to one another. The proof for this can be easily seen by induction.

Base case: The case $n = 1$ results from a very simple computation.

Inductive step: Assume that the formula holds for $n - 1$. Then we need to see the same relation holding true for $n$. Substituting the values of $A_n$ and $B_n$ into $A_{n-1}B_n - A_nB_{n-1}$ we obtain:

$$\begin{align} A_{n-1}B_n - A_nB_{n-1} &= b_n A_{n-1}B_{n-1} + a_n A_{n-1}B_{n-2} - b_n A_{n-1}B_{n-1} - a_n A_{n-2}B_{n-1} \\ &= a_n(A_{n-1}B_{n-2} - A_{n-2}B_{n-1}) \end{align}$$

which is true because of our induction hypothesis.

$$A_{n-1}B_n - A_nB_{n-1} = (-1)^n a_1 a_2 \cdots a_n = \prod_{i=1}^n (-a_i)$$

Specifically, if neither $B_n$ nor $B_{n-1}$ is zero we can express the difference between the $(n-1)$th and $n$th convergents like this:

$$x_{n-1} - x_n = \frac{A_{n-1}}{B_{n-1}} - \frac{A_n}{B_n} = (-1)^n \frac{a_1 a_2 \cdots a_n}{B_n B_{n-1}} = \frac{\prod_{i=1}^n (-a_i)}{B_n B_{n-1}}.$$
### The equivalence transformation

If $\{c_i\} = \{c_1, c_2, c_3, \dots\}$ is any infinite sequence of non-zero complex numbers we can prove, by induction, that

$$b_0 + \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 + \cfrac{a_4}{b_4 + \ddots}}}} = b_0 + \cfrac{c_1 a_1}{c_1 b_1 + \cfrac{c_1 c_2 a_2}{c_2 b_2 + \cfrac{c_2 c_3 a_3}{c_3 b_3 + \cfrac{c_3 c_4 a_4}{c_4 b_4 + \ddots}}}}$$

where equality is understood as equivalence, which is to say that the successive convergents of the continued fraction on the left are exactly the same as the convergents of the fraction on the right.

The equivalence transformation is perfectly general, but two particular cases deserve special mention. First, if none of the $a_i$ are zero a sequence $\{c_i\}$ can be chosen to make each partial numerator a 1:

$$b_0 + \underset{i=1}{\overset{\infty}{\operatorname{K}}} \frac{a_i}{b_i} = b_0 + \underset{i=1}{\overset{\infty}{\operatorname{K}}} \frac{1}{c_i b_i}$$

where $c_1 = \frac{1}{a_1}$, $c_2 = \frac{a_1}{a_2}$, $c_3 = \frac{a_2}{a_1 a_3}$, and in general $c_{n+1} = \frac{1}{a_{n+1} c_n}$.

Second, if none of the partial denominators $b_i$ are zero we can use a similar procedure to choose another sequence $\{d_i\}$ to make each partial denominator a 1:

$$b_0 + \underset{i=1}{\overset{\infty}{\operatorname{K}}} \frac{a_i}{b_i} = b_0 + \underset{i=1}{\overset{\infty}{\operatorname{K}}} \frac{d_i a_i}{1}$$

where $d_1 = \frac{1}{b_1}$ and otherwise $d_{n+1} = \frac{1}{b_n b_{n+1}}$.

These two special cases of the equivalence transformation are enormously useful when the general convergence problem is analyzed.

### Notions of convergence

As mentioned in the introduction, the continued fraction

$$x = b_0 + \underset{i=1}{\overset{\infty}{\operatorname{K}}} \frac{a_i}{b_i}$$

converges if the sequence of convergents $\{x_n\}$ tends to a finite limit. This notion of convergence is very natural, but it is sometimes too restrictive. It is therefore useful to introduce the notion of general convergence of a continued fraction. Roughly speaking, this consists in replacing the $\underset{i=n}{\overset{\infty}{\operatorname{K}}} \tfrac{a_i}{b_i}$ part of the fraction by $w_n$, instead of by 0, to compute the convergents. The convergents thus obtained are called modified convergents. We say that the continued fraction converges generally if there exists a sequence $\{w_n^*\}$ such that the sequence of modified convergents converges for all $\{w_n\}$ sufficiently distinct from $\{w_n^*\}$. The sequence $\{w_n^*\}$ is then called an exceptional sequence for the continued fraction. See Chapter 2 of for a rigorous definition.

There also exists a notion of absolute convergence for continued fractions, which is based on the notion of absolute convergence of a series: a continued fraction is said to be absolutely convergent when the series

$$f = \sum_n \left( f_n - f_{n-1} \right),$$

where $f_n = \underset{i=1}{\overset{n}{\operatorname{K}}} \tfrac{a_i}{b_i}$ are the convergents of the continued fraction, converges absolutely. The Śleszyński–Pringsheim theorem provides a sufficient condition for absolute convergence.

Finally, a continued fraction of one or more complex variables is uniformly convergent in an open neighborhood $\Omega$ when its convergents converge uniformly on $\Omega$; that is, when for every $\varepsilon > 0$ there exists $M$ such that for all $n > M$ and for all $z \in \Omega$,

$$|f(z) - f_n(z)| < \varepsilon.$$

### Even and odd convergents

It is sometimes necessary to separate a continued fraction into its even and odd parts. For example, if the continued fraction diverges by oscillation between two distinct limit points $p$ and $q$, then the sequence $\{x_0, x_2, x_4, \dots\}$ must converge to one of these, and $\{x_1, x_3, x_5, \dots\}$ must converge to the other. In such a situation it may be convenient to express the original continued fraction as two different continued fractions, one of them converging to $p$, and the other converging to $q$.
The formulas for the even and odd parts of a continued fraction can be written most compactly if the fraction has already been transformed so that all its partial denominators are unity. Specifically, if

$$x = \underset{i=1}{\overset{\infty}{\operatorname{K}}} \frac{a_i}{1}$$

is a continued fraction, then the even part $x_{\text{even}}$ and the odd part $x_{\text{odd}}$ are given by

$$x_{\text{even}} = \cfrac{a_1}{1+a_2-\cfrac{a_2a_3}{1+a_3+a_4-\cfrac{a_4a_5}{1+a_5+a_6-\cfrac{a_6a_7}{1+a_7+a_8-\ddots}}}}$$

and

$$x_{\text{odd}} = a_1-\cfrac{a_1a_2}{1+a_2+a_3-\cfrac{a_3a_4}{1+a_4+a_5-\cfrac{a_5a_6}{1+a_6+a_7-\cfrac{a_7a_8}{1+a_8+a_9-\ddots}}}}$$

respectively. More precisely, if the successive convergents of the continued fraction $x$ are $\{x_1, x_2, x_3, \dots\}$, then the successive convergents of $x_{\text{even}}$ as written above are $\{x_2, x_4, x_6, \dots\}$, and the successive convergents of $x_{\text{odd}}$ are $\{x_1, x_3, x_5, \dots\}$.[3]

### Conditions for irrationality

If $a_i$ and $b_i$ are positive integers with $a_i \le b_i$ for all sufficiently large $i$, then

$$x = b_0 + \underset{i=1}{\overset{\infty}{\operatorname{K}}} \frac{a_i}{b_i}$$

converges to an irrational limit.

### Fundamental recurrence formulas

The partial numerators and denominators of the fraction's successive convergents are related by the fundamental recurrence formulas:

$$\begin{aligned} A_{-1}&=1 & B_{-1}&=0\\ A_0&=b_0 & B_0&=1\\ A_{n+1}&=b_{n+1}A_n+a_{n+1}A_{n-1} & B_{n+1}&=b_{n+1}B_n+a_{n+1}B_{n-1} \end{aligned}$$

The continued fraction's successive convergents are then given by

$$x_n = \frac{A_n}{B_n}.$$

These recurrence relations are due to John Wallis (1616–1703) and Leonhard Euler (1707–1783).

As an example, consider the regular continued fraction in canonical form that represents the golden ratio:

$$x = 1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\ddots}}}}$$

Applying the fundamental recurrence formulas we find that the successive numerators $A_n$ are $\{1, 2, 3, 5, 8, 13, \dots\}$ and the successive denominators $B_n$ are $\{1, 1, 2, 3, 5, 8, \dots\}$, the Fibonacci numbers. Since all the partial numerators in this example are equal to one, the determinant formula assures us that the absolute value of the difference between successive convergents approaches zero quite rapidly.

## Linear fractional transformations

A linear fractional transformation (LFT) is a complex function of the form

$$w = f(z) = \frac{a+bz}{c+dz},$$

where $z$ is a complex variable, and $a, b, c, d$ are arbitrary complex constants such that $c + dz \ne 0$. An additional restriction, that $ad \ne bc$, is customarily imposed, to rule out the cases in which $w = f(z)$ is a constant. The linear fractional transformation, also known as a Möbius transformation, has many fascinating properties. Four of these are of primary importance in developing the analytic theory of continued fractions.

• If $d \ne 0$ the LFT has one or two fixed points. This can be seen by considering the equation

$$f(z) = z \;\Rightarrow\; dz^2 + cz = a + bz,$$

which is clearly a quadratic equation in $z$. The roots of this equation are the fixed points of $f(z)$. If the discriminant is zero the LFT fixes a single point; otherwise it has two fixed points.

• If $ad \ne bc$ the LFT is an invertible conformal mapping of the extended complex plane onto itself; that is, it has an inverse function

$$z = g(w) = \frac{-a+cw}{b-dw}$$

such that $f(g(z)) = g(f(z)) = z$ for every point $z$ in the extended complex plane, and both $f$ and $g$ preserve angles and shapes at vanishingly small scales. From the form of $z = g(w)$ we see that $g$ is also an LFT.

• The composition of two different LFTs for which $ad \ne bc$ is itself an LFT for which $ad \ne bc$. In other words, the set of all LFTs for which $ad \ne bc$ is closed under composition of functions. The collection of all such LFTs, together with the "group operation" composition of functions, is known as the automorphism group of the extended complex plane.

• If $b = 0$ the LFT reduces to

$$w = f(z) = \frac{a}{c+dz},$$

which is a very simple meromorphic function of $z$ with one simple pole (at $-c/d$) and a residue equal to $a/d$. (See also Laurent series.)
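To illustrate the fundamental recurrence formulas and the golden-ratio example above (this short sketch is an editorial addition, not part of the original article, and the truncation depth is arbitrary), the following Python snippet generates the convergents of the fraction with all partial numerators and denominators equal to one and shows that the numerators and denominators are successive Fibonacci numbers.

```python
# Convergents of 1 + 1/(1 + 1/(1 + ...)) via the Wallis-Euler recurrences.
a_i, b_i = 1, 1                 # every partial numerator and denominator is 1
A_prev, A_curr = 1, 1           # A_{-1} = 1, A_0 = b_0 = 1
B_prev, B_curr = 0, 1           # B_{-1} = 0, B_0 = 1

for n in range(1, 11):
    A_prev, A_curr = A_curr, b_i * A_curr + a_i * A_prev
    B_prev, B_curr = B_curr, b_i * B_curr + a_i * B_prev
    # Numerators and denominators are consecutive Fibonacci numbers.
    print(n, A_curr, B_curr, A_curr / B_curr)

print("golden ratio:", (1 + 5 ** 0.5) / 2)
```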
### The continued fraction as a composition of LFTs

Consider a sequence of simple linear fractional transformations

$$\tau_0(z) = b_0 + z,\qquad \tau_1(z) = \frac{a_1}{b_1+z},\qquad \tau_2(z) = \frac{a_2}{b_2+z},\qquad \tau_3(z) = \frac{a_3}{b_3+z},\qquad \dots$$

Here we use $\tau$ to represent each simple LFT, and we adopt the conventional circle notation for composition of functions. We also introduce a new symbol $\boldsymbol{\Tau}_n$ to represent the composition of the first $n+1$ transformations $\tau_i$; that is,

$$\boldsymbol{\Tau}_1(z) = \tau_0\circ\tau_1(z) = \tau_0(\tau_1(z)),\qquad \boldsymbol{\Tau}_2(z) = \tau_0\circ\tau_1\circ\tau_2(z) = \tau_0(\tau_1(\tau_2(z))),$$

and so forth. By direct substitution from the first set of expressions into the second we see that

$$\boldsymbol{\Tau}_1(z) = b_0+\cfrac{a_1}{b_1+z},\qquad \boldsymbol{\Tau}_2(z) = b_0+\cfrac{a_1}{b_1+\cfrac{a_2}{b_2+z}}$$

and, in general,

$$\boldsymbol{\Tau}_n(z) = \tau_0\circ\tau_1\circ\tau_2\circ\cdots\circ\tau_n(z) = b_0 + \underset{i=1}{\overset{n}{\operatorname{K}}} \frac{a_i}{b_i}$$

where the last partial denominator in the finite continued fraction is understood to be $b_n + z$. And, since $b_n + 0 = b_n$, the image of the point $z = 0$ under the iterated LFT $\boldsymbol{\Tau}_n$ is indeed the value of the finite continued fraction with $n$ partial numerators:

$$\boldsymbol{\Tau}_n(0) = \boldsymbol{\Tau}_{n+1}(\infty) = b_0 + \underset{i=1}{\overset{n}{\operatorname{K}}} \frac{a_i}{b_i}.$$

### A geometric interpretation

Defining a finite continued fraction as the image of a point under the iterated linear fractional transformation $\boldsymbol{\Tau}_n(z)$ leads to an intuitively appealing geometric interpretation of infinite continued fractions.

The relationship

$$x_n = b_0 + \underset{i=1}{\overset{n}{\operatorname{K}}} \frac{a_i}{b_i} = \frac{A_n}{B_n} = \boldsymbol{\Tau}_n(0) = \boldsymbol{\Tau}_{n+1}(\infty)$$

can be understood by rewriting $\boldsymbol{\Tau}_n(z)$ and $\boldsymbol{\Tau}_{n+1}(z)$ in terms of the fundamental recurrence formulas:

$$\boldsymbol{\Tau}_n(z) = \frac{A_{n-1}z+A_n}{B_{n-1}z+B_n},\quad \boldsymbol{\Tau}_n(\infty) = \frac{A_{n-1}}{B_{n-1}};\qquad \boldsymbol{\Tau}_{n+1}(z) = \frac{A_nz+A_{n+1}}{B_nz+B_{n+1}},\quad \boldsymbol{\Tau}_{n+1}(\infty) = \frac{A_n}{B_n}.$$

In the first of these equations the ratio tends toward $A_n/B_n$ as $z$ tends toward zero. In the second, the ratio tends toward $A_n/B_n$ as $z$ tends to infinity. This leads us to our first geometric interpretation. If the continued fraction converges, the successive convergents $A_n/B_n$ are eventually arbitrarily close together. Since the linear fractional transformation $\boldsymbol{\Tau}_n(z)$ is a continuous mapping, there must be a neighborhood of $z = 0$ that is mapped into an arbitrarily small neighborhood of $\boldsymbol{\Tau}_n(0) = A_n/B_n$. Similarly, there must be a neighborhood of the point at infinity which is mapped into an arbitrarily small neighborhood of $\boldsymbol{\Tau}_n(\infty) = A_{n-1}/B_{n-1}$. So if the continued fraction converges the transformation $\boldsymbol{\Tau}_n(z)$ maps both very small $z$ and very large $z$ into an arbitrarily small neighborhood of $x$, the value of the continued fraction, as $n$ gets larger and larger.

For intermediate values of $z$, since the successive convergents are getting closer together we must have

$$\frac{A_{n-1}}{B_{n-1}} \approx \frac{A_n}{B_n} \quad\Rightarrow\quad \frac{A_{n-1}}{A_n} \approx \frac{B_{n-1}}{B_n} = k$$

where $k$ is a constant, introduced for convenience. But then, by substituting in the expression for $\boldsymbol{\Tau}_n(z)$ we obtain

$$\boldsymbol{\Tau}_n(z) = \frac{A_{n-1}z+A_n}{B_{n-1}z+B_n} = \frac{A_n}{B_n}\left(\frac{\frac{A_{n-1}}{A_n}z+1}{\frac{B_{n-1}}{B_n}z+1}\right) \approx \frac{A_n}{B_n}\left(\frac{kz+1}{kz+1}\right) = \frac{A_n}{B_n},$$

so that even the intermediate values of $z$ (except when $z \approx -k^{-1}$) are mapped into an arbitrarily small neighborhood of $x$, the value of the continued fraction, as $n$ gets larger and larger.
Intuitively, it is almost as if the convergent continued fraction maps the entire extended complex plane into a single point.[4] Notice that the sequence $\{\boldsymbol{\Tau}_n\}$ lies within the automorphism group of the extended complex plane, since each $\boldsymbol{\Tau}_n$ is a linear fractional transformation for which $ad \ne bc$. And every member of that automorphism group maps the extended complex plane into itself: not one of the $\boldsymbol{\Tau}_n$ can possibly map the plane into a single point. Yet in the limit the sequence $\{\boldsymbol{\Tau}_n\}$ defines an infinite continued fraction which (if it converges) represents a single point in the complex plane.

When an infinite continued fraction converges, the corresponding sequence $\{\boldsymbol{\Tau}_n\}$ of LFTs "focuses" the plane in the direction of $x$, the value of the continued fraction. At each stage of the process a larger and larger region of the plane is mapped into a neighborhood of $x$, and the smaller and smaller region of the plane that's left over is stretched out ever more thinly to cover everything outside that neighborhood.[5]

For divergent continued fractions, we can distinguish three cases:

1. The two sequences $\{\boldsymbol{\Tau}_{2n-1}\}$ and $\{\boldsymbol{\Tau}_{2n}\}$ might themselves define two convergent continued fractions that have two different values, $x_{\text{odd}}$ and $x_{\text{even}}$. In this case the continued fraction defined by the sequence $\{\boldsymbol{\Tau}_n\}$ diverges by oscillation between two distinct limit points. And in fact this idea can be generalized: sequences $\{\boldsymbol{\Tau}_n\}$ can be constructed that oscillate among three, or four, or indeed any number of limit points. Interesting instances of this case arise when the sequence $\{\boldsymbol{\Tau}_n\}$ constitutes a subgroup of finite order within the group of automorphisms over the extended complex plane.
2. The sequence $\{B_n\}$ may produce an infinite number of zero denominators while also producing a subsequence of finite convergents. These finite convergents may not repeat themselves or fall into a recognizable oscillating pattern. Or they may converge to a finite limit, or even oscillate among multiple finite limits. No matter how the finite convergents behave, the continued fraction defined by the sequence $\{\boldsymbol{\Tau}_n\}$ diverges by oscillation with the point at infinity in this case.[6]
3. The sequence $\{B_n\}$ may produce no more than a finite number of zero denominators, while the sequence of finite convergents dances wildly around the plane in a pattern that never repeats itself and never approaches any finite limit, either.

Interesting examples of cases 1 and 3 can be constructed by studying the simple continued fraction

$$x = 1+\cfrac{z}{1+\cfrac{z}{1+\cfrac{z}{1+\cfrac{z}{1+\ddots}}}}$$

where $z$ is any real number such that $z < -\tfrac{1}{4}$.[7]

## Euler's continued fraction formula

See main article: Euler's continued fraction formula.

Euler proved the following identity:

$$a_0 + a_0a_1 + a_0a_1a_2 + \cdots + a_0a_1a_2\cdots a_n = \cfrac{a_0}{1-\cfrac{a_1}{1+a_1-\cfrac{a_2}{1+a_2-\cfrac{\ddots}{\ddots\;\cfrac{a_n}{1+a_n}}}}}.$$

From this many other results can be derived, such as

$$\frac{1}{u_1}+\frac{1}{u_2}+\frac{1}{u_3}+\cdots+\frac{1}{u_n} = \cfrac{1}{u_1-\cfrac{u_1^2}{u_1+u_2-\cfrac{u_2^2}{u_2+u_3-\cfrac{\ddots}{\ddots\;\cfrac{u_{n-1}^2}{u_{n-1}+u_n}}}}}$$

and

$$\frac{1}{a_0}+\frac{x}{a_0a_1}+\frac{x^2}{a_0a_1a_2}+\cdots+\frac{x^n}{a_0a_1a_2\ldots a_n} = \cfrac{1}{a_0-\cfrac{a_0x}{a_1+x-\cfrac{a_1x}{a_2+x-\cfrac{\ddots}{\ddots\;\cfrac{a_{n-1}x}{a_n+x}}}}}.$$

Euler's formula connecting continued fractions and series is the motivation for the fundamental inequalities, and also the basis of elementary approaches to the convergence problem.

## Examples

### Transcendental functions and numbers

Here are two continued fractions that can be built via Euler's identity.
$$e^x = \frac{x^0}{0!}+\frac{x^1}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots = 1+\cfrac{x}{1-\cfrac{1x}{2+x-\cfrac{2x}{3+x-\cfrac{3x}{4+x-\ddots}}}}$$

$$\log(1+x) = \frac{x^1}{1}-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots = \cfrac{x}{1-0x+\cfrac{1^2x}{2-1x+\cfrac{2^2x}{3-2x+\cfrac{3^2x}{4-3x+\ddots}}}}$$

Here are additional generalized continued fractions:

$$\arctan\frac{x}{y} = \cfrac{xy}{1y^2+\cfrac{(1xy)^2}{3y^2-1x^2+\cfrac{(3xy)^2}{5y^2-3x^2+\cfrac{(5xy)^2}{7y^2-5x^2+\ddots}}}} = \cfrac{x}{1y+\cfrac{(1x)^2}{3y+\cfrac{(2x)^2}{5y+\cfrac{(3x)^2}{7y+\ddots}}}}$$

$$e^{x/y} = 1+\cfrac{2x}{2y-x+\cfrac{x^2}{6y+\cfrac{x^2}{10y+\cfrac{x^2}{14y+\cfrac{x^2}{18y+\ddots}}}}} \quad\Rightarrow\quad e^2 = 7+\cfrac{2}{5+\cfrac{1}{7+\cfrac{1}{9+\cfrac{1}{11+\ddots}}}}$$

$$\log\left(1+\frac{x}{y}\right) = \cfrac{x}{y+\cfrac{1x}{2+\cfrac{1x}{3y+\cfrac{2x}{2+\cfrac{2x}{5y+\cfrac{3x}{2+\ddots}}}}}} = \cfrac{2x}{2y+x-\cfrac{(1x)^2}{3(2y+x)-\cfrac{(2x)^2}{5(2y+x)-\cfrac{(3x)^2}{7(2y+x)-\ddots}}}}$$

This last is based on an algorithm derived by Aleksei Nikolaevich Khovansky in the 1970s.[8]

Example: the natural logarithm of 2 ($= \log(1+1) \approx 0.693147\ldots$):

$$\log 2 = \log(1+1) = \cfrac{1}{1+\cfrac{1}{2+\cfrac{1}{3+\cfrac{2}{2+\cfrac{2}{5+\cfrac{3}{2+\ddots}}}}}} = \cfrac{2}{3-\cfrac{1^2}{9-\cfrac{2^2}{15-\cfrac{3^2}{21-\ddots}}}}$$

Here are three of $\pi$'s best-known generalized continued fractions, the first and third of which are derived from their respective arctangent formulas above by setting $x = y = 1$ and multiplying by 4. The Leibniz formula for $\pi$:

$$\pi = \cfrac{4}{1+\cfrac{1^2}{2+\cfrac{3^2}{2+\cfrac{5^2}{2+\ddots}}}} = \sum_{n=0}^{\infty}\frac{4(-1)^n}{2n+1} = \frac{4}{1}-\frac{4}{3}+\frac{4}{5}-\frac{4}{7}+-\cdots$$

converges too slowly, requiring on the order of $10^n$ terms to achieve $n$ correct decimal places. The series derived by Nilakantha Somayaji:

$$\pi = 3+\cfrac{1^2}{6+\cfrac{3^2}{6+\cfrac{5^2}{6+\ddots}}} = 3-\sum_{n=1}^{\infty}\frac{(-1)^n}{n(n+1)(2n+1)} = 3+\frac{1}{1\cdot2\cdot3}-\frac{1}{2\cdot3\cdot5}+\frac{1}{3\cdot4\cdot7}-+\cdots$$

is a much more obvious expression but still converges quite slowly, requiring nearly 50 terms for five decimals and nearly 120 for six. Both converge sublinearly to $\pi$. On the other hand:

$$\pi = \cfrac{4}{1+\cfrac{1^2}{3+\cfrac{2^2}{5+\cfrac{3^2}{7+\ddots}}}} = 4-1+\frac{1}{6}-\frac{1}{34}+\frac{16}{3145}-\frac{4}{4551}+\frac{1}{6601}-\frac{1}{38341}+-\cdots$$

converges linearly to $\pi$, adding at least three digits of precision per four terms, a pace slightly faster than the arcsine formula for $\pi$:

$$\pi = 6\sin^{-1}\left(\frac{1}{2}\right) = \sum_{n=0}^{\infty}\frac{3\binom{2n}{n}}{16^n(2n+1)} = 3+\frac{1}{8}+\frac{9}{640}+\frac{15}{7168}+\cdots$$

which adds at least three decimal digits per five terms.

• Note: this continued fraction's rate of convergence $\mu$ tends to $3-\sqrt{8} \approx 0.1716$, hence $1/\mu$ tends to $3+\sqrt{8} \approx 5.828$, whose common logarithm is $0.7656\ldots$ The same $1/\mu = 3+\sqrt{8}$ (the silver ratio squared) also is observed in the unfolded general continued fractions of both the natural logarithm of 2 and the $n$th root of 2 (which works for any integer $n$) if calculated using $2 = 1 + 1$. For the folded general continued fractions of both expressions, the rate of convergence is $\mu = (3-\sqrt{8})^2 = 17-\sqrt{288} \approx 0.0294$, hence $1/\mu = (3+\sqrt{8})^2 = 17+\sqrt{288} \approx 33.97$, whose common logarithm is $1.531\ldots$, thus adding at least three digits per two terms. This is because the folded GCF folds each pair of fractions from the unfolded GCF into one fraction, thus doubling the convergence pace. The Manny Sardina reference further explains "folded" continued fractions.

• Note: Using the continued fraction for $\arctan\frac{x}{y}$ cited above with the best-known Machin-like formula provides an even more rapidly, although still linearly, converging expression:

$$\pi = 16\tan^{-1}\frac{1}{5}-4\tan^{-1}\frac{1}{239} = \cfrac{16}{u+\cfrac{1^2}{3u+\cfrac{2^2}{5u+\cfrac{3^2}{7u+\ddots}}}} - \cfrac{4}{v+\cfrac{1^2}{3v+\cfrac{2^2}{5v+\cfrac{3^2}{7v+\ddots}}}}$$

with $u = 5$ and $v = 239$.
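As a sanity check on the $\pi$ expansions quoted above (an editorial addition, with an arbitrary truncation depth), the Nilakantha-type fraction $3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + \cdots)))$ can be evaluated bottom-up in a few lines of Python:

```python
import math

def nilakantha_cf(depth):
    # Evaluate 3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + ...))) truncated after `depth` partial quotients.
    value = 0.0
    for k in range(depth, 0, -1):
        value = (2 * k - 1) ** 2 / (6 + value)
    return 3 + value

for depth in (5, 10, 20, 40):
    approx = nilakantha_cf(depth)
    print(depth, approx, abs(approx - math.pi))   # error shrinks slowly, as the text notes
```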
### Roots of positive numbers

The $n$th root of any positive number $z^m$ can be expressed by restating $z = x^n + y$, resulting in

$$\sqrt[n]{z^m} = \sqrt[n]{(x^n+y)^m} = x^m+\cfrac{my}{nx^{n-m}+\cfrac{(n-m)y}{2x^m+\cfrac{(n+m)y}{3nx^{n-m}+\cfrac{(2n-m)y}{2x^m+\cfrac{(2n+m)y}{5nx^{n-m}+\cfrac{(3n-m)y}{2x^m+\ddots}}}}}}$$

which can be simplified, by folding each pair of fractions into one fraction, to

$$\sqrt[n]{z^m} = x^m+\cfrac{2x^m\cdot my}{n(2x^n+y)-my-\cfrac{(1^2n^2-m^2)y^2}{3n(2x^n+y)-\cfrac{(2^2n^2-m^2)y^2}{5n(2x^n+y)-\cfrac{(3^2n^2-m^2)y^2}{7n(2x^n+y)-\cfrac{(4^2n^2-m^2)y^2}{9n(2x^n+y)-\ddots}}}}}.$$

The square root of $z$ is a special case with $m = 1$ and $n = 2$:

$$\sqrt{z} = \sqrt{x^2+y} = x+\cfrac{y}{2x+\cfrac{y}{2x+\cfrac{3y}{6x+\cfrac{3y}{2x+\ddots}}}} = x+\cfrac{2xy}{2(2x^2+y)-y-\cfrac{1\cdot3y^2}{6(2x^2+y)-\cfrac{3\cdot5y^2}{10(2x^2+y)-\ddots}}}$$

which can be simplified, by an equivalence transformation, to

$$\sqrt{z} = \sqrt{x^2+y} = x+\cfrac{y}{2x+\cfrac{y}{2x+\cfrac{y}{2x+\cfrac{y}{2x+\ddots}}}} = x+\cfrac{2xy}{2(2x^2+y)-y-\cfrac{y^2}{2(2x^2+y)-\cfrac{y^2}{2(2x^2+y)-\ddots}}}.$$

The square root can also be expressed by a periodic continued fraction, but the above form converges more quickly with the proper $x$ and $y$.

#### Example 1

The cube root of two ($2^{1/3} \approx 1.259921\ldots$) can be calculated in two ways:

Firstly, "standard notation" with $x = 1$, $y = 1$ and $2x^3+y = 3$:

$$\sqrt[3]{2} = 1+\cfrac{1}{3+\cfrac{2}{2+\cfrac{4}{9+\cfrac{5}{2+\cfrac{7}{15+\cfrac{8}{2+\cfrac{10}{21+\cfrac{11}{2+\ddots}}}}}}}} = 1+\cfrac{2\cdot1}{9-1-\cfrac{2\cdot4}{27-\cfrac{5\cdot7}{45-\cfrac{8\cdot10}{63-\cfrac{11\cdot13}{81-\ddots}}}}}.$$

Secondly, a rapid convergence with $x = 5$, $y = 3$ and $2x^3+y = 253$:

$$\sqrt[3]{2} = \cfrac{5}{4}+\cfrac{0.5}{50+\cfrac{2}{5+\cfrac{4}{150+\cfrac{5}{5+\cfrac{7}{250+\cfrac{8}{5+\cfrac{10}{350+\cfrac{11}{5+\ddots}}}}}}}} = \cfrac{5}{4}+\cfrac{2.5\cdot1}{253-1-\cfrac{2\cdot4}{759-\cfrac{5\cdot7}{1265-\cfrac{8\cdot10}{1771-\ddots}}}}.$$

#### Example 2

Pogson's ratio ($100^{1/5} \approx 2.511886\ldots$), with $x = 5$, $y = 75$ and $2x^5+y = 6325$:

$$\sqrt[5]{100} = \cfrac{5}{2}+\cfrac{3}{250+\cfrac{12}{5+\cfrac{18}{750+\cfrac{27}{5+\cfrac{33}{1250+\cfrac{42}{5+\ddots}}}}}} = \cfrac{5}{2}+\cfrac{5\cdot3}{1265-3-\cfrac{12\cdot18}{3795-\cfrac{27\cdot33}{6325-\cfrac{42\cdot48}{8855-\ddots}}}}.$$

#### Example 3

The twelfth root of two ($2^{1/12} \approx 1.059463\ldots$), using "standard notation":

$$\sqrt[12]{2} = 1+\cfrac{1}{12+\cfrac{11}{2+\cfrac{13}{36+\cfrac{23}{2+\cfrac{25}{60+\cfrac{35}{2+\cfrac{37}{84+\cfrac{47}{2+\ddots}}}}}}}} = 1+\cfrac{2\cdot1}{36-1-\cfrac{11\cdot13}{108-\cfrac{23\cdot25}{180-\cfrac{35\cdot37}{252-\cfrac{47\cdot49}{324-\ddots}}}}}.$$

#### Example 4

Equal temperament's perfect fifth ($2^{7/12} \approx 1.498307\ldots$), with $m = 7$ and $n = 12$:

With "standard notation":

$$\sqrt[12]{2^7} = 1+\cfrac{7}{12+\cfrac{5}{2+\cfrac{19}{36+\cfrac{17}{2+\cfrac{31}{60+\cfrac{29}{2+\cfrac{43}{84+\cfrac{41}{2+\ddots}}}}}}}} = 1+\cfrac{2\cdot7}{36-7-\cfrac{5\cdot19}{108-\cfrac{17\cdot31}{180-\cfrac{29\cdot43}{252-\cfrac{41\cdot55}{324-\ddots}}}}}.$$

A rapid convergence with $x = 3$, $y = -7153$ and $2x^{12}+y = 2^{19}+3^{12}$:

$$\sqrt[12]{2^7} = \frac{1}{2}\sqrt[12]{3^{12}-7153} = \frac{3}{2}-\cfrac{0.5\cdot7153}{4\cdot3^{12}-\cfrac{11\cdot7153}{6-\cfrac{13\cdot7153}{12\cdot3^{12}-\cfrac{23\cdot7153}{6-\cfrac{25\cdot7153}{20\cdot3^{12}-\cfrac{35\cdot7153}{6-\cfrac{37\cdot7153}{28\cdot3^{12}-\cfrac{47\cdot7153}{6-\ddots}}}}}}}}$$

$$\sqrt[12]{2^7} = \frac{3}{2}-\cfrac{3\cdot7153}{12(2^{19}+3^{12})+7153-\cfrac{11\cdot13\cdot7153^2}{36(2^{19}+3^{12})-\cfrac{23\cdot25\cdot7153^2}{60(2^{19}+3^{12})-\cfrac{35\cdot37\cdot7153^2}{84(2^{19}+3^{12})-\ddots}}}}.$$

More details on this technique can be found in General Method for Extracting Roots using (Folded) Continued Fractions.

## Higher dimensions

Another meaning for generalized continued fraction is a generalization to higher dimensions.
For example, there is a close relationship between the simple continued fraction in canonical form for the irrational real number, and the way lattice points in two dimensions lie to either side of the line . Generalizing this idea, one might ask about something related to lattice points in three or more dimensions. One reason to study this area is to quantify the mathematical coincidence idea; for example, for monomials in several real numbers, take the logarithmic form and consider how small it can be. Another reason is to find a possible solution to Hermite's problem. There have been numerous attempts to construct a generalized theory. Notable efforts in this direction were made by Felix Klein (the Klein polyhedron), Georges Poitou and George Szekeres. ## References • Angell. David. 2010. A family of continued fractions. Journal of Number Theory. Elsevier. 130. 4. 904–911. 10.1016/j.jnt.2009.12.003. • Book: Angell , David . 2021. Irrationality and Transcendence in Number Theory. Chapman and Hall/CRC. 9780367628376. • Book: Beckmann , Petr . 1971. A History of Pi. St. Martin's Press, Inc.. 131–133, 140–143. 0-88029-418-3. registration. • Book: Bombelli , Rafael . Rafael Bombelli. 1579. L'algebra. • Borwein. Jonathan Michael. Jonathan Borwein. Crandall. Richard E.. Richard Crandall. Fee. Greg. Greg Fee. 2004. On the Ramanujan AGM Fraction, I: The Real-Parameter Case. Experimental Mathematics. 13. 3. 275–285. 10.1080/10586458.2004.10504540. 17758274. • Book: Cataldi , Pietro Antonio . Pietro Cataldi. 1613. Trattato del modo brevissimo di trovar la radice quadra delli numeri. A treatise on a quick way to find square roots of numbers. • Book: Chrystal , George . George Chrystal. 1999. Algebra, an Elementary Text-book for the Higher Classes of Secondary Schools and for Colleges: Pt. 1. American Mathematical Society. 0-8218-1649-7. 500. • Book: Cusick. Thomas W.. Flahive. Mary E.. 1989. The Markoff and Lagrange Spectra. limited. American Mathematical Society. 0-8218-1531-8. 89. • Web site: Euclid. Euclid. 2008. 300 BC. Clay Mathematics Institute. Elements. • Web site: Euler. Leonhard. Leonhard Euler. 1748. E101 – Introductio in analysin infinitorum, volume 1. The Euler Archive. 2 May 2022. • Book: Gauss , Carl Friedrich . Carl Friedrich Gauss. 1813. Disquisitiones generales circa seriem infinitam. • Book: Havil , Julian . 2012. The Irrationals: A Story of the Numbers You Can't Count On. Princeton University Press. j.ctt7smdw. 280. 978-0691143422 . • Book: Jones. William B.. Thron. W.J.. 1980. Continued fractions. Analytic theory and applications. Encyclopedia of Mathematics and its Applications. 11. Reading, MA. Addison-Wesley. 0-201-13510-8. 0445.30003. registration. (Covers both analytic theory and history.) • Book: Lorentzen. Lisa. Lisa Lorentzen. Waadeland. Haakon. 1992. Continued Fractions with Applications. Reading, MA. North Holland. 978-0-444-89265-2. (Covers primarily analytic theory and some arithmetic theory.) • Book: Perron , Oskar . Oskar Perron. 1977a. 1954. Die Lehre von den Kettenbrüchen. Band I: Elementare Kettenbrüche. 3. Vieweg + Teubner Verlag. 9783519020219. • Book: Perron , Oskar . Oskar Perron. 1977b. 1954. Die Lehre von den Kettenbrüchen. Band II: Analytisch-funktionentheoretische Kettenbrüche. 3. Vieweg + Teubner Verlag. 9783519020226. • Web site: Porubský. Štefan. Basic definitions for continued fractions. 2008. Interactive Information Portal for Algorithmic Mathematics. Institute of Computer Science of the Czech Academy of Sciences. Prague, Czech Republic. 
2 May 2022. • Book: Press. WH. Teukolsky. SA. Vetterling. WT. Flannery. BP. 2007. Numerical Recipes: The Art of Scientific Computing. 3rd. Cambridge University Press. New York. 978-0-521-88068-8. Section 5.2. Evaluation of Continued Fractions. http://apps.nrbook.com/empanel/index.html?pg=206. • Web site: Sardina. Manny. 2007. General Method for Extracting Roots using (Folded) Continued Fractions. Surrey (UK). • Szekeres. George. George Szekeres. 1970. Multidimensional continued fractions. Ann. Univ. Sci. Budapest. Eötvös Sect. Math.. 13. 113–140. • Von Koch. Helge. Helge von Koch. 1895. Sur un théorème de Stieltjes et sur les fonctions définies par des fractions continues. Bulletin de la Société Mathématique de France. 23. 33–40. 10.24033/bsmf.508. 26.0233.01. • Book: Wall , Hubert Stanley . Hubert Stanley Wall. 1967. Analytic Theory of Continued Fractions. Reprint. Chelsea Pub Co. 0-8284-0207-8. (This reprint of the D. Van Nostrand edition of 1948 covers both history and analytic theory.) • Book: Wallis , John . John Wallis. 1699. Opera mathematica. Mathematical Works.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9694328904151917, "perplexity": 1542.5484848221258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00307.warc.gz"}
https://makarandtapaswi.wordpress.com/2009/11/13/from-real-to-complex-to-vector-gaussian-distributions/
# From Real to Complex to Vector Gaussians

Gaussian distributions are probably the most widely used distributions in mathematics and engineering. One slightly strange aspect of them is that the real and complex cases have slightly different equations. This is an interesting problem, and it has a really nice explanation which generally is not taught in classes. In an attempt to answer the why, here goes…

All of us have definitely at some point of time seen this familiar real Gaussian distribution

$p_x(x) = \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{x^2}{2\sigma^2}}$

which changes to a real vector form as

$p_x(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^N |\mathbf{C_x}|}} e^{-\frac{1}{2}\mathbf{x}^T \mathbf{C_x}^{-1}\mathbf{x}}$

where $\mathbf{x}$ is a vector of zero-mean Gaussian random variables $x_1, x_2, \ldots, x_N$ with covariance matrix $\mathbf{C_x}$. This form can be written down directly under the assumption that the individual $x_i$ are uncorrelated: since uncorrelated Gaussians are independent, the joint density is simply the product of the individual Gaussian densities.

Things change when we move to complex distributions. The reason is the difference in the definition of the covariance matrix itself. For a complex vector $\mathbf{u} = \mathbf{x} + j\mathbf{y}$ we can either form the covariance matrix of the stacked real and imaginary parts, $\mathbf{C_z}$ with $\mathbf{z} = [\mathbf{x}\ \mathbf{y}]^T$, or form the covariance matrix $\mathbf{C_s}$ of the augmented complex vector $\mathbf{s} = [\mathbf{u}\ \mathbf{u}^*]^T$; both $\mathbf{z}$ and $\mathbf{s}$ are column vectors of size $2N$. The two are related by

$\mathbf{C_z} = \frac{1}{2} \mathbf{T}^H \mathbf{C_s} \mathbf{T}$

where $\mathbf{T}$ is a $2N \times 2N$ unitary matrix. This half factor is responsible for the disappearance of the 2 in the exponent, and the identity $|\mathbf{C_s}| = |\mathbf{C_u}|^2$ (which holds in the circular case) is responsible for the removal of the square root. Thus the complex multivariate Gaussian distribution ends up (different from the real one) as

$p_u(\mathbf{u}) = \frac{1}{(\pi)^N |\mathbf{C_u}|} e^{-\mathbf{u}^H \mathbf{C_u}^{-1}\mathbf{u}}$

when $\mathbf{u}$ is a circularly-symmetric complex random vector, i.e. the real and imaginary parts have the same covariance and are uncorrelated (generally satisfied in applications).

One last point: it is important to note that independence implies uncorrelatedness, but uncorrelatedness need not imply independence, except in the nice and widely used jointly Gaussian case 🙂
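A small numerical sanity check of the circular case discussed above; this snippet and its parameter choices (dimension, variance, sample count) are illustrative additions, not part of the original post. It draws complex vectors with independent real and imaginary parts of equal variance and confirms that the empirical covariance E[u u^H] is close to 2σ²I while the pseudo-covariance E[u u^T] is close to zero, which is the circularity condition used above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma2, num_samples = 3, 1.0, 200_000

# Circularly-symmetric complex Gaussian vectors: independent real/imaginary parts,
# each with variance sigma2, so E[u u^H] = 2*sigma2*I and E[u u^T] = 0.
x = rng.normal(scale=np.sqrt(sigma2), size=(num_samples, N))
y = rng.normal(scale=np.sqrt(sigma2), size=(num_samples, N))
u = x + 1j * y

cov = u.T @ u.conj() / num_samples      # empirical E[u u^H]
pseudo = u.T @ u / num_samples          # empirical pseudo-covariance E[u u^T]

print("E[u u^H] ≈\n", np.round(cov.real, 2))          # close to 2*I
print("max |E[u u^T]| ≈", np.abs(pseudo).max())        # close to 0 for a circular distribution
```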
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9575178027153015, "perplexity": 203.899802924897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590295.61/warc/CC-MAIN-20180718154631-20180718174631-00084.warc.gz"}
http://mathhelpforum.com/discrete-math/26349-proving-disproving-print.html
# Proving and Disproving

• January 18th 2008, 06:30 AM
TheRekz
Proving and Disproving

1. $\forall m,n \in \mathbb{Z}$: if $m + n$ is even then either $m$ and $n$ are both even or they are both odd.
2. Prove or disprove using contradiction. Every integer > 11 is the sum of two composite integers.

For the first one, is it the same if I prove that if m and n are both even then m + n is even, and if m and n are both odd then m + n is even?

• January 18th 2008, 07:06 AM
ThePerfectHacker
Quote:

Originally Posted by TheRekz
1. $\forall m,n \in \mathbb{Z}$: if $m + n$ is even then either m and n are both even or they are both odd

By the division algorithm we can write $m=2a+r_1$ where $r_1=0,1$ and $n=2b+r_2$, then $n+m = 2(a + b) + (r_1+r_2)$; for this to be even we require that $(r_1,r_2)=(0,0)$ or $(r_1,r_2)=(1,1)$, i.e. both even or both odd.

Quote:

2. Prove or disprove using contradiction. Every integer > 11 is the sum of two composite integers.

If $n>11$ is even then $n - 4 > 2$ is even and hence composite, thus $n=4+(n-4)$. If $n>11$ is odd then $n-9>2$ is even and so composite, which means we can write $n = 9 + (n-9)$.

• January 18th 2008, 07:39 AM
TheRekz
Is there any other way to prove this? Your explanation seems complicated, although it makes sense. I don't quite understand where the $m = 2a + r$ came from. So you mean r can be either 0 or 1 here, depending on whether m is odd or even? If m is even then r is 0? Could r also be 2?

• January 18th 2008, 08:17 AM
CaptainBlack
Quote:

Originally Posted by TheRekz
1. $\forall m,n \in \mathbb{Z}$: if m + n is even then either m and n are both even or they are both odd

(All numbers referred to are in $\mathbb{Z}$.)

Suppose there are $n$ and $m$ such that $n$ is odd, $m$ is even and $n+m$ is even.

By supposition there exist $k_1$ and $k_2$ such that $n=2k_1+1,\ m=2k_2$. Then $n+m=2(k_1+k_2)+1$, which is odd, which contradicts our supposition.

Hence if $n+m$ is even, either $m$ and $n$ are both even or they are both odd.

RonL

• January 18th 2008, 08:20 AM
CaptainBlack
Quote:

Originally Posted by TheRekz
For the first number, is it the same if I prove that if m and n are both even then m + n is even and if m and n are both odd then m + n is even?

No, you have to show that if n+m is even then both n and m are even or both are odd, and that you will not have done.

RonL

• January 18th 2008, 08:40 AM
TheRekz
Quote:

Originally Posted by CaptainBlack
No, you have to show that if n+m is even then both n and m are even or both are odd, and that you will not have done.

RonL

Can you help me with number 2? I don't really get the answer in post no. 2.

• January 18th 2008, 10:16 AM
CaptainBlack
Quote:

Originally Posted by TheRekz
Can you help me with number 2? I don't really get the answer in post no. 2.

ImPerfectHacker's proof is quite simple (and neat):

Suppose that there is an integer $N> 11$ that is not the sum of two composite integers. Then $N$ is even or odd.

Case 1: $N$ even; put $n_1=4,\ n_2=N-4$. Then both $n_1$ and $n_2$ are even and greater than $2$, and hence composite, which contradicts our assumption about $N$.

Case 2: $N$ odd; put $n_1=9,\ n_2=N-9$. Then $n_1$ is composite and $n_2$ is even and greater than $2$, and hence composite, which again contradicts our assumption about $N$.

Case 1 and Case 2 together contradict the original assumption, and so the theorem (every integer > 11 is the sum of two composite integers) is proven by contradiction.
RonL

• January 23rd 2008, 03:19 PM
TheRekz
Quote:

Originally Posted by CaptainBlack
ImPerfectHacker's proof is quite simple (and neat):

Suppose that there is an integer $N> 11$ that is not the sum of two composite integers. Then $N$ is even or odd.

Case 1: $N$ even; put $n_1=4,\ n_2=N-4$. Then both $n_1$ and $n_2$ are even and greater than $2$, and hence composite, which contradicts our assumption about $N$.

Case 2: $N$ odd; put $n_1=9,\ n_2=N-9$. Then $n_1$ is composite and $n_2$ is even and greater than $2$, and hence composite, which again contradicts our assumption about $N$.

Case 1 and Case 2 together contradict the original assumption, and so the theorem (every integer > 11 is the sum of two composite integers) is proven by contradiction.

RonL

Just one more question: how do we know that $n_2$ is even here? Thanks.

• January 23rd 2008, 08:13 PM
CaptainBlack
Quote:

Originally Posted by TheRekz
Just one more question: how do we know that $n_2$ is even here? Thanks.

In Case 1: by hypothesis N is even, and > 11, so N-4 is even (and > 7).

In Case 2: by hypothesis N is odd, and > 11, so N-9 is even (and > 2).

(even - even is even, and odd - odd is also even)

RonL
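The 4/9 trick discussed in this thread is easy to sanity-check by brute force. The following short Python sketch is an addition to the thread (the search bound is arbitrary); it verifies that every integer from 12 up to the chosen limit is indeed a sum of two composite numbers, using exactly the witnesses from the proof.

```python
def is_composite(k):
    # True for integers with a nontrivial divisor (4, 6, 8, 9, ...).
    return k > 3 and any(k % d == 0 for d in range(2, int(k ** 0.5) + 1))

limit = 10_000
for n in range(12, limit + 1):
    # The proof uses n = 4 + (n - 4) for even n and n = 9 + (n - 9) for odd n.
    witness = 4 if n % 2 == 0 else 9
    assert is_composite(witness) and is_composite(n - witness), n
print("verified: every integer in [12, %d] is a sum of two composites" % limit)
```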
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 54, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9753541350364685, "perplexity": 384.42897487389234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988458.74/warc/CC-MAIN-20150728002308-00342-ip-10-236-191-2.ec2.internal.warc.gz"}
https://chet-aero.com/2018/10/02/consistency-convergence-and-stability-of-lax-wendroff-scheme-applied-to-convection-equation/
The purpose of this project is to examine the Lax-Wendroff scheme to solve the convection (or one-way wave) equation and to determine its consistency, convergence and stability.

## Overview of Taylor Series Expansions

The case examined utilized a Taylor Series expansion, so some explanation of this is in order. The general expression for a Taylor series is found in A Course in Mathematical Analysis Volume 1: Derivatives and Differentials; Definite Integrals; Expansion in Series; Applications to Geometry (Dover Books on Mathematics) and is given as

$f(x+h)=\sum_{j=0}^{\infty}\frac{h^{j}}{j!}\,f^{(j)}(x)$

As a general rule, $h$ will represent a time or distance step, i.e. $\Delta_{x},\,\Delta_{t}$, although the second case will require a more versatile application of $h$. In any event, the forward spatial Taylor series expansion from a single point is given as

$u(x_{{k+1}},t_{{n}})=u(x_{{k}},t_{{n}})+D_{{1}}(u)(x_{{k}},t_{{n}})\Delta_{{x}}+1/2\,\left(D_{{1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{2}+1/6\,\left(D_{{1,1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{3}\\ +1/24\,\left(D_{{1,1,1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{4}+{\frac{1}{120}}\,\left(D_{{1,1,1,1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{5}\\+{\frac{1}{720}}\,\left(D_{{1,1,1,1,1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{6}+O(1){\Delta_{{x}}}^{7}$

For our analysis $u\left(x,t\right)$ is the function of the finite difference approximation, contrasted with the exact function $v\left(x,t\right)$. The subscripts $k,n$ are the indices for space and time respectively. The backward spatial expansion is given as

$u(x_{{k-1}},t_{{n}})=u(x_{{k}},t_{{n}})-D_{{1}}(u)(x_{{k}},t_{{n}})\Delta_{{x}}+1/2\,\left(D_{{1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{2}\\-1/6\,\left(D_{{1,1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{3}+1/24\,\left(D_{{1,1,1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{4}\\-{\frac{1}{120}}\,\left(D_{{1,1,1,1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{5}+{\frac{1}{720}}\,\left(D_{{1,1,1,1,1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{6}-O(1){\Delta_{{x}}}^{7}$

In like fashion the expansion for time is as follows:

$u(x_{{k}},t_{{n+1}})=u(x_{{k}},t_{{n}})+D_{{2}}(u)(x_{{k}},t_{{n}})\Delta_{{t}}+1/2\,\left(D_{{2,2}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{t}}}^{2}\\+1/6\,\left(D_{{2,2,2}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{t}}}^{3}+1/24\,\left(D_{{2,2,2,2}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{t}}}^{4}\\+{\frac{1}{120}}\,\left(D_{{2,2,2,2,2}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{t}}}^{5}+{\frac{1}{720}}\,\left(D_{{2,2,2,2,2,2}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{t}}}^{6}+O(1){\Delta_{{t}}}^{7}$

## Convection Equation

Now let us turn to the convection equation.  Although CFD aficionados refer to this equation in this way, in solid mechanics this is the "one-way" wave equation, i.e., without reflections. The derivation and solution of this equation is detailed here. In either case the governing equation is

${\frac{\partial}{\partial t}}v(x,t)+a{\frac{\partial}{\partial x}}v(x,t)=0$

When solved using the Lax-Wendroff scheme, it is expressed as

$u(x_{{k}},t_{{n+1}})=u(x_{{k}},t_{{n}})-1/2\, R\left(u(x_{{k+1}},t_{{n}})-u(x_{{k-1}},t_{{n}})\right)\\+1/2\,{R}^{2}\left(u(x_{{k+1}},t_{{n}})-2\, u(x_{{k}},t_{{n}})+u(x_{{k-1}},t_{{n}})\right)$

where

$R=a\frac{\Delta_{t}}{\Delta_{x}}$

The solution for this problem is given in Numerical Methods for Engineers and Scientists, Second Edition.
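A minimal NumPy sketch of the Lax-Wendroff update above, applied to the convection equation with periodic boundaries. The grid size, wave speed, Courant number and initial profile below are illustrative choices, not taken from the original project.

```python
import numpy as np

# Illustrative parameters (not from the original study).
a, L, nx = 1.0, 1.0, 200
dx = L / nx
R = 0.8                         # Courant number a*dt/dx, kept <= 1 for stability
dt = R * dx / a

x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.3) ** 2)    # initial profile

def lax_wendroff_step(u, R):
    up = np.roll(u, -1)   # u_{k+1}, periodic
    um = np.roll(u, +1)   # u_{k-1}, periodic
    return u - 0.5 * R * (up - um) + 0.5 * R**2 * (up - 2 * u + um)

for _ in range(int(0.5 / dt)):       # advect up to t = 0.5
    u = lax_wendroff_step(u, R)

# Exact solution is the initial profile shifted by a*t (with periodic wrap-around).
exact = np.exp(-200 * (((x - a * 0.5) % L) - 0.3) ** 2)
print("max |error| after t = 0.5:", np.abs(u - exact).max())
```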
### Application of Taylor Series Expansions for Consistency If we apply the results of the Taylor series expansions to the Lax-Wendroff scheme and perform a good deal of algebra (including substituting for $R$,) the result is $u(x_{{k}},t_{{n}})+D_{{2}}(u)(x_{{k}},t_{{n}})\Delta_{{t}}+1/2\,\left(D_{{2,2}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{t}}}^{2}+1/6\,\left(D_{{2,2,2}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{t}}}^{3}\\+1/24\,\left(D_{{2,2,2,2}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{t}}}^{4}+{\frac{1}{120}}\,\left(D_{{2,2,2,2,2}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{t}}}^{5}+{\frac{1}{720}}\,\left(D_{{2,2,2,2,2,2}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{t}}}^{6}+O(1){\Delta_{{t}}}^{7}\\=\\u(x_{{k}},t_{{n}})+r\left(D_{{1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{2}-1/12\, r\left(D_{{1,1,1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{4}-1/40\, r\left(D_{{1,1,1,1,1,1}}\right)(u)(x_{{k}},t_{{n}}){\Delta_{{x}}}^{6}$ Rearranging, making a change in notation and dropping the $\mathcal{O}$ terms as well, On the left hand side is the exact equation, which is strictly speaking equal to zero. On the right hand side is the residual for consistency of the finite difference scheme. If the scheme is consistent with the original equation, it too should approach zero as $\Delta_{x},\,\Delta_{t}\rightarrow0$. Based on this we note the following: • All of the right hand side terms contain $\Delta_{x},\,\Delta_{t}$ or both. Thus, as these approach zero, the entire right hand side will approach zero. Thus the scheme is consistent with the original differential equation. • The lowest order terms for the time and spatial steps on the right hand side are $\Delta_{t}$ and $\Delta_{x}^{2}$ respectively. Thus we can conclude that the truncation error is $\mathcal{O}\left(t\right)+\mathcal{O}\left(x^{2}\right)$. Or is it? Let us assume that the solution is twice differentiable. By this we mean that the function has second derivatives in both space and time. (Another way of interpreting this is to say that “twice differentiable” means that the solution has no derivatives beyond the second, in which case many of the terms in the Taylor Series expansion would go to zero.) Then we differentiate the original equation once temporally, thus ${\frac{\partial^{2}}{\partial{t}^{2}}}v(x,t)+a{\frac{\partial^{2}}{\partial t\partial x}}v(x,t)=0$ Now let us do the same thing but spatially, and (with a little additional algebra) we obtain $-a{\frac{\partial^{2}}{\partial t\partial x}}v(x,t)-{a}^{2}{\frac{\partial^{2}}{\partial{x}^{2}}}v(x,t)=0$ ${\frac{\partial^{2}}{\partial{t}^{2}}}v(x,t)-{a}^{2}{\frac{\partial^{2}}{\partial{x}^{2}}}v(x,t)=0$ which is, mirabile visu, the wave equation. Applying this solution for the original equation to the finite difference residual results in Now we see that the lowest order terms are $\Delta_{t}^{2}$ and $\Delta_{x}^{2}$,  which means that the truncation error is $\mathcal{O}\left(t^{2}\right)+\mathcal{O}\left(x^{2}\right)$. We duly note that the fourth order spatial derivative is multiplied by $\Delta_{{t}}{\Delta_{{x}}}^{2}$. However, the squared term will be the predominant one as $\Delta_{x},\,\Delta_{t}\rightarrow0$, so this does not change our conclusion. Also, if “twice differentiable” means that the function has no further derivatives beyond the second, then all of the terms go to zero, and the numerical solution, within machine accuracy, is exact. This also applies to the next section as well; the vector described there would be the zero vector under these conditions. 
### Consistency in a Norm

The Taylor Series expansion is only valid at the point at which it is taken. For most differential equations, we are interested in solutions over a broader region. This is in part the purpose for considering consistency in a norm.

Let us consider the result we just obtained. The right hand side represents the residual for consistency of the finite difference scheme. If we were to consider a Taylor Series expansion for all of the points in space and time under consideration, what we would end up with is an infinite set of residuals, i.e., the right hand sides of the corresponding equations, which could then be arranged in a vector. If we designate this vector as $R$, then each of its entries is one such residual, say $r_{n}$.

Now let us consider the nature of the differential equation. The following is adapted from Numerical Solution of Differential Equations: Finite Difference and Finite Element Solution of the Initial, Boundary and Eigenvalue Problem in the… (Computer Science and Applied Mathematics). We can consider the differential equation as a linear transformation. Since we have defined the results as a vector, we can express this as follows for the exact solution:

$Av\left(x,t\right)=F$

and for the finite difference solution

$Au\left(x,t\right)=F+R$

The difference between the two results is the residual we defined earlier. The finite difference representation is the same as the original if and only if $A$ is the same in both cases. Combining both equations,

$A\left(v\left(x,t\right)-u\left(x,t\right)\right)=R$

and rearranging

$v\left(x,t\right)-u\left(x,t\right)=A^{-1}R$

Now let us consider the norm. Given the infinite number of entries in this vector, the most convenient norm to take would be the infinity norm, where the norm is the largest absolute value in the set. Taking norms of both sides and using the submultiplicative property,

$||v\left(x,t\right)-u\left(x,t\right)||_{\infty}=||A^{-1}R||_{\infty}\leq||A^{-1}||_{\infty}\,||R||_{\infty}$

We have shown that each and every $r_{n}\rightarrow0$ as $\Delta_{x},\,\Delta_{t}\rightarrow0$. (Additionally the function would have to be bounded, continuous and at least twice differentiable at all points.) From this, $R\rightarrow0$ and $||R||_{\infty}\rightarrow0$. If $A$ and $A^{-1}$ are bounded (as they are for a bounded linear transformation), then $||v\left(x,t\right)-u\left(x,t\right)||_{\infty}\rightarrow0$ and thus the exact solution and its finite difference counterpart become the same. This is consistency by definition.

(The most serious obstacle to actually constructing such a vector, a necessary prerequisite to a norm, is evaluating the derivatives. One "solution" would be to use the exact solution of the original differential equation, but that assumes we can arrive at an exact solution. In many cases, the whole point of a numerical solution is that the exact, "closed form" solution is unavailable. Thus we would end up with numerical evaluations for the derivatives.)

As for other norms such as the Euclidean norm, if the entries in the vector approach zero as $\Delta_{x},\,\Delta_{t}\rightarrow0$, we would expect the norm to do so as well, as discussed above. It should be noted that the infinity norm is better suited than the Euclidean norm to picking up an entry that converges to zero slowly.

### von Neumann Stability Analysis

Turning to the issue of stability, we will perform a von Neumann analysis.
In this type of analysis we will analyze an amplification factor $G$ and require that

$|G|\leq1,\qquad G=\frac{u(x_{{k}},t_{{n+1}})}{u(x_{{k}},t_{{n}})}$

The idea behind this is to determine "whether or not the calculation can be rendered useless by unfavourable error propagation" (from The Numerical Treatment of Differential Equations.) In other methods, such as perturbation methods, an error is introduced into the scheme and its propagation is explicitly analysed. The von Neumann analysis does the same thing but in a more compact form.

The heart of the von Neumann method is to substitute a Fourier series expression into the difference scheme. Thus, for our difference scheme

$u(x_{{k}},t_{{n+1}})=u(x_{{k}},t_{{n}})-1/2\, R\left(u(x_{{k+1}},t_{{n}})-u(x_{{k-1}},t_{{n}})\right)+1/2\,{R}^{2}\left(u(x_{{k+1}},t_{{n}})-2\, u(x_{{k}},t_{{n}})+u(x_{{k-1}},t_{{n}})\right)$

we substitute

$u(x_{{k}},t_{{n+1}})={e^{p_{{m}}\left(t+\Delta_{{t}}\right)+\sqrt{-1}k_{{m}}x}}\\u(x_{{k}},t_{{n}})={e^{p_{{m}}t+\sqrt{-1}k_{{m}}x}}\\u(x_{{k+1}},t_{{n}})={e^{p_{{m}}t+\sqrt{-1}k_{{m}}\left(x+\Delta_{{x}}\right)}}\\u(x_{{k-1}},t_{{n}})= {e^{p_{{m}}t+\sqrt{-1}k_{{m}}\left(x-\Delta_{{x}}\right)}}$

to yield

${e^{p_{{m}}\left(t+\Delta_{{t}}\right)+\sqrt{-1}k_{{m}}x}}={e^{p_{{m}}t+\sqrt{-1}k_{{m}}x}}\\-1/2\, R\left({e^{p_{{m}}t+\sqrt{-1}k_{{m}}\left(x+\Delta_{{x}}\right)}}-{e^{p_{{m}}t+\sqrt{-1}k_{{m}}\left(x-\Delta_{{x}}\right)}}\right)\\+1/2\,{R}^{2}\left({e^{p_{{m}}t+\sqrt{-1}k_{{m}}\left(x+\Delta_{{x}}\right)}}-2\,{e^{p_{{m}}t+\sqrt{-1}k_{{m}}x}}+{e^{p_{{m}}t+\sqrt{-1}k_{{m}}\left(x-\Delta_{{x}}\right)}}\right)$

As an aside, when most people think of "Fourier series" they think of a real series of sines, cosines and coefficients. This was certainly in evidence in the presentation of the method in The Numerical Treatment of Differential Equations. However, it has been the author's experience that the best way to treat these is to do so in a "real-complex continuum," i.e., to express these exponentially and to convert them to circular (or in some cases hyperbolic) functions as the complex analysis would admit. An example of that is here.

Solving for the amplification factor defined above,

$G={\frac{{e^{p_{{m}}t+\sqrt{-1}k_{{m}}x}}-1/2\, R\left({e^{p_{{m}}t+\sqrt{-1}k_{{m}}\left(x+\Delta_{{x}}\right)}}-{e^{p_{{m}}t+\sqrt{-1}k_{{m}}\left(x-\Delta_{{x}}\right)}}\right)}{{e^{p_{{m}}t+\sqrt{-1}k_{{m}}x}}}}\\+{\frac{1/2\,{R}^{2}\left({e^{p_{{m}}t+\sqrt{-1}k_{{m}}\left(x+\Delta_{{x}}\right)}}-2\,{e^{p_{{m}}t+\sqrt{-1}k_{{m}}x}}+{e^{p_{{m}}t+\sqrt{-1}k_{{m}}\left(x-\Delta_{{x}}\right)}}\right)}{{e^{p_{{m}}t+\sqrt{-1}k_{{m}}x}}}}$

Simplifying,

$G=1-1/2\, R{e^{\sqrt{-1}k_{{m}}\Delta_{{x}}}}+1/2\, R{e^{-\sqrt{-1}k_{{m}}\Delta_{{x}}}}+1/2\,{R}^{2}{e^{\sqrt{-1}k_{{m}}\Delta_{{x}}}}-{R}^{2}+1/2\,{R}^{2}{e^{-\sqrt{-1}k_{{m}}\Delta_{{x}}}}$

or

$G=1+{R}^{2}\cos(k_{{m}}\Delta_{{x}})-{R}^{2}-\sqrt{-1}R\sin(k_{{m}}\Delta_{{x}})$

and then

$1\geq|G|=\sqrt{\left(1+{R}^{2}\cos(k_{{m}}\Delta_{{x}})-{R}^{2}\right)^{2}+{R}^{2}\left(\sin(k_{{m}}\Delta_{{x}})\right)^{2}}$

Solving for $R$ at the points of equality yields three results: $R=-1,0,1$. Since negative values for $R$ have no physical meaning, we conclude that $0\leq R\leq1$ for this method to be stable. Thus we can say that the method is conditionally stable.
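The stability bound derived above can also be probed numerically. This short sketch (an editorial addition, with an arbitrary sampling of phase angles) evaluates |G| for the expression G = 1 − R²(1 − cos θ) − iR sin θ over all phase angles θ = k_m Δx for a few Courant numbers and reports the maximum.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 2001)    # k_m * dx sweeps all phase angles

def max_amplification(R):
    G = 1 - R**2 * (1 - np.cos(theta)) - 1j * R * np.sin(theta)
    return np.abs(G).max()

for R in (0.25, 0.5, 0.9, 1.0, 1.1, 1.5):
    print(f"R = {R:4.2f}  max |G| = {max_amplification(R):.4f}")
# max |G| stays <= 1 for 0 <= R <= 1 and exceeds 1 otherwise, matching the von Neumann result.
```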
### Convergence

The Lax Equivalence Theorem posits that, if the problem is properly posed and the finite difference scheme used is consistent and stable, the necessary and sufficient conditions for convergence have been met (see Numerical Methods for Engineers and Scientists, Second Edition.) We have shown that, with the assumptions stated above, the scheme is consistent with the original differential equation, with or without the provision of twice differentiability. The method is thus convergent within the conditions stated above for stability; outside of those conditions the method is neither stable nor convergent.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 56, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9658638834953308, "perplexity": 228.9743793592161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00354.warc.gz"}
https://socratic.org/questions/a-nitrogen-gas-occupies-a-volume-of-500-ml-at-a-pressure-of-0-971-atm-what-volum#113715
# A nitrogen gas occupies a volume of 500 ml at a pressure of 0.971 atm. What volume will the gas occupy at a pressure of 1.50 atm, assuming the temperature remains constant?

Dec 7, 2014

The answer is $324\ mL$ (to three significant figures).

This is a simple application of Boyle's law, ${P}_{1} {V}_{1} = {P}_{2} {V}_{2}$, which states that a gas's pressure and volume are inversely proportional to each other at constant temperature. It can be derived from the ideal gas law, $P V = n R T$, by keeping $n$ and $T$ constant.

So, we have

${V}_{2} = {P}_{1} / {P}_{2} \cdot {V}_{1} = \frac{0.971}{1.50} \cdot 500 m L \approx 324 m L$

-> pressure increases, volume decreases and vice versa.
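A one-line check of the arithmetic (an illustrative addition, not part of the original answer):

```python
P1, V1, P2 = 0.971, 500.0, 1.50      # atm, mL, atm
V2 = P1 * V1 / P2                    # Boyle's law at constant temperature
print(round(V2, 1), "mL")            # ~323.7 mL, i.e. about 324 mL
```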
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8828670382499695, "perplexity": 1293.2734305870433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499744.74/warc/CC-MAIN-20230129144110-20230129174110-00394.warc.gz"}
https://en.wikibooks.org/wiki/A-level_Mathematics/OCR/C1/Appendix_A:_Formulae
# A-level Mathematics/OCR/C1/Appendix A: Formulae < A-level Mathematics‎ | OCR‎ | C1 By the end of this module you will be expected to have learned the following formulae: ## The Laws of Indices 1. $x^ax^b = x^{a+b}\,$ 2. $\frac{x^a}{x^b} = x^{a-b}$ 3. $x^{-n}=\frac{1}{x^n}$ 4. $\left(x^a\right)^b = x^{ab}$ 5. $\left(xy \right)^n = x^n y^n$ 6. $\left(\frac{x}{y}\right)^n = \frac{x^n}{y^n}$ 7. $x^\frac{a}{b} = \sqrt[b]{x^a}$ 8. $x^0 = 1\,$ 9. $x^1 = x\,$ ## The Laws of Surds 1. $\sqrt{xy} = \sqrt{x} \times \sqrt{y}$ 2. $\sqrt{\frac{x}{y}} = \frac{\sqrt{x}}{\sqrt{y}}$ 3. $\frac{a}{b+\sqrt{c}} = \frac{a}{b+\sqrt{c}} \times \frac{b-\sqrt{c}}{b-\sqrt{c}} = \frac{a(b-\sqrt{c})}{b^2-c}$ ## Polynomials ### Parabolas If f(x) is in the form $a(x + b)^2 + c$ 1. -b is the axis of symmetry 2. c is the maximum or minimum y value Axis of Symmetry = $\frac{-b}{2a}$ ### Completing the Square $ax^2+bx+c=0\,$ becomes $a\left(x + \frac{b}{2a}\right)^2 -\frac{b^2}{4a} + c$ • The solutions of the quadratic $ax^2+bx+c=0$ are: $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$ • The discriminant of the quadratic $ax^2+bx+c=0$ is $b^2 - 4ac$ ## Errors 1. $Absolute\ error = value\ obtained - true\ value$ 2. $Relative\ error = \frac{absolute\ error}{true\ value}$ 3. $Percentage\ error = relative\ error \times 100$ ## Coordinate Geometry $m=\frac {y_2-y_1}{x_2-x_1}$ The equation of a line passing through the point $\left (x_1 , y_1 \right )$ and having a slope m is $y - y_1 = m \left ( x - x_1 \right)$. ### Perpendicular lines Lines are perpendicular if $m_1 \times m_2=-1$ ### Distance between two points $d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$ ### Mid-point of a line $\left(\frac {{x_1} + {x_2}}{2} ; \frac {{y_1} + {y_2}}{2}\right)$ ### General Circle Formulae $Area = \pi r^2\,$ $Circumference = 2 \pi r\,$ ### Equation of a Circle $\left (x - h \right )^2 + \left (y - k \right )^2 = r^2$, where (h,k) is the center and r is the radius. ## Differentiation ### Differentiation Rules 1. Derivative of a constant function: $\frac{dy}{dx} \left (c \right) = 0$ 2. The Power Rule: $\frac{dy}{dx} \left (x^n \right) = nx^{n - 1}$ 3. The Constant Multiple Rule: $\frac{dy}{dx} c f \left ( x \right ) = c \frac{dy}{dx} f \left ( x \right )$ 4. The Sum Rule: $\frac{dy}{dx} \begin{bmatrix} f \left ( x \right ) + g \left ( x \right ) \end{bmatrix} = \frac{dy}{dx} f \left ( x \right ) + \frac{dy}{dx} g \left ( x \right )$ 5. The Difference Rule: $\frac{dy}{dx} \begin{bmatrix} f \left ( x \right ) - g \left ( x \right ) \end{bmatrix} = \frac{dy}{dx} f \left ( x \right ) - \frac{dy}{dx} g \left ( x \right )$ ### Rules of Stationary Points • If $f' \left ( c \right ) = 0$ and $f'' \left ( c \right ) <0$, then c is a local maximum point of f(x). The graph of f(x) will be concave down on the interval. • If $f' \left ( c \right ) = 0$ and $f'' \left ( c \right ) >0$, then c is a local minimum point of f(x). The graph of f(x) will be concave up on the interval. • If $f' \left ( c \right ) = 0$ and $f'' \left ( c \right ) = 0$ and $f''' \left ( c \right ) \ne 0$, then c is a local inflection point of f(x). This is part of the C1 (Core Mathematics 1) module of the A-level Mathematics text.
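A small Python sketch exercising a few of the formulae listed above; this is an editorial addition, and the coefficients and points used are arbitrary examples.

```python
import math

# Quadratic formula and discriminant for a x^2 + b x + c = 0 (example coefficients).
a, b, c = 2.0, -3.0, -5.0
disc = b**2 - 4*a*c
roots = ((-b + math.sqrt(disc)) / (2*a), (-b - math.sqrt(disc)) / (2*a))
print("discriminant:", disc, "roots:", roots)          # 49.0, (2.5, -1.0)

# Distance and mid-point between two example points.
x1, y1, x2, y2 = 1.0, 2.0, 4.0, 6.0
print("distance:", math.hypot(x2 - x1, y2 - y1))        # 5.0
print("mid-point:", ((x1 + x2) / 2, (y1 + y2) / 2))     # (2.5, 4.0)
```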
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 44, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9362976551055908, "perplexity": 1480.611662947972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430453921765.84/warc/CC-MAIN-20150501041841-00076-ip-10-235-10-82.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/74308/isolated-coloring-of-math-symbols-and-boxes-in-equations
# Isolated coloring of math symbols and boxes in equations I know that by using a package like xcolor I can use $\color{<color>} <math symbols>$ to typeset math symbols in my preferred color. But how can I isolate the color to specific symbols only? Say for instance the illustrations of commented equations in Howard Anton's Calculus book have colors for underbraces and the bounding text boxes but have none for the included text. Consider the following MWE \documentclass[]{article} \usepackage{amsmath} \usepackage{xcolor} \begin{document} $$\dfrac{d}{dx}[\sin(3x^2+2)]=\underbrace{\cos(3x^2+2)}_{\text{ \fbox{\parbox[b][]{2cm}{ Derivative of the outise evaluated at the inside} }}} \cdot \underbrace{6x}_{\text{ \color{blue}{\fbox{\parbox[b][]{1.25cm}{ Derivative of the inside} }}}}$$ \end{document} which outputs How can I isolate the coloring to the underbraces and the bounding box to blue without affecting the other symbols/text? - \documentclass{article} \usepackage{amsmath} \usepackage{xcolor} \makeatletter \renewcommand\underbrace[2][olive]{% \mathop{\vtop{\m@th\ialign{##\crcr $\hfil\displaystyle{#2}\hfil$\crcr \noalign{\kern3\p@\nointerlineskip}% \textcolor{#1}{\upbracefill}\crcr\noalign{\kern3\p@}}}}\limits} \makeatother \newcommand\ColorBox[3][olive]{\text{\fcolorbox{#1}{white}{\parbox[b][]{#2}{\raggedright#3}}}} \begin{document} $$\frac{d}{dx}[\sin(3x^2+2)]=\underbrace{\cos(3x^2+2)}_{\ColorBox{2cm}{Derivative of the outise evaluated at the inside}} \cdot \underbrace[red!60!black]{6x}_{\ColorBox[red!60!black]{1.25cm}{Derivative of the inside}}$$ \end{document} The syntax: \underbrace[<color>]{<text>} \ColorBox[<color>]{<width>}{<text>} The default color: olive. - That was fast. Where can I find the definition for \underbraces and other such commands? – hpesoj626 Sep 27 '12 at 8:27 @hpesoj626 For LaTeX kernel commands, such as \underbrace, you can open a terminal a run texdox source2e or in CTAN: source2e. – Gonzalo Medina Sep 27 '12 at 12:57 Here are some minor alternatives to Gonzalo's answer, provided by the abraces package. More specifically, it allows for inserting arbitrary code within the brace construction using @{<stuff>}: \documentclass{article} $$\dfrac{\mathrm{d}}{\mathrm{d}x}\big[\sin(3x^2+2)\big]= \underbrace[@{\color{olive}}l1D1r]{\cos(3x^2+2)}_{\color{olive} \text{\fbox{\parbox[b]{2cm}{\raggedright% \color{black}Derivative of the outside evaluated at the inside} }}} \cdot \underbrace[@{\color{red!60!black}}l1D1r]{\vphantom{()}6x}_{\color{red!60!black} \text{\fbox{\parbox[b]{1.25cm}{\raggedright% \color{black}Derivative of the inside} }}}$$ • Using \mathrm{d} for d/dx; • Enlarging the brackets around the LHS using \big[ and \big]; and • Inserting \vphantom to lower the \underbrace for both components of the chain rule to the same depth.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.8235992193222046, "perplexity": 3012.0904234140053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464053209501.44/warc/CC-MAIN-20160524012649-00041-ip-10-185-217-139.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/125779-mathematical-induction-proof.html
# Math Help - mathematical induction proof

1. ## mathematical induction proof

f(n) = 2, for n=0. f(n) = 2f(n-1)-n+1, for n>=1. Prove f(n)=(2^n)+n+1.

For n=0 -> f(n)=2. For n=k -> f(n)=(2^k)+k+1. For n=k+1 -> f(n)=(2^(k+1))+k+2.

That's where I'm stuck. What am I supposed to do next?

2. Hi

Your notation does not seem to be really appropriate. I suggest

$u_0=2$

$u_{n+1}=2 \:u_n - n$

You need to prove that $u_n = 2^n+n+1$

$2^0+0+1 = 2 = u_0$ therefore it is true for n=0

Suppose that it is true for n=k. This means that $u_k = 2^k+k+1$. You need to prove that $u_{k+1} = 2^{k+1}+k+2$

You know that $u_{k+1} = 2 \:u_k - k$ and that $u_k = 2^k+k+1$

Substitute $u_k = 2^k+k+1$ in the expression of $u_{k+1}$ to prove that $u_{k+1} = 2^{k+1}+k+2$

3. Originally Posted by blank
f(n) = 2, for n=0. f(n) = 2f(n-1)-n+1, for n>=1. Prove f(n)=(2^n)+n+1. For n=0 -> f(n)=2. For n=k -> f(n)=(2^k)+k+1. For n=k+1 -> f(n)=(2^(k+1))+k+2. That's where I'm stuck. What am I supposed to do next?

$f(0)=2$

$f(n)=2f(n-1)-n+1,\ for\ n\ge1$

If you write a few terms, it appears $f(n)=2^n+n+1$

Prove using induction that this is so.

$f(n)=2^n+n+1$

If this is true, then the following will also be true...

$f(n+1)=2^{n+1}+(n+1)+1=(2)2^n+(n+1)+1=2^n+2^n+(n+1)+1$

$=[2^n+n+1]+2^n+1=f(n)+2^n+1$

Therefore, if we can prove that $f(n+1)$ really equals $f(n)+2^n+1$ then we only need to prove it works for the first term, since a mathematical chain reaction has been set up.

We attempt to prove it from f(n)=2f(n-1)-n+1.

f(n+1)=2f(n)-(n+1)+1=2f(n)-n.

The question is.... Is $2f(n)-n=f(n)+2^n+1$ ?

Is $2f(n)-f(n)=2^n+n+1$ ?

$f(n)=2^n+n+1$ true

f(0)=1+0+1=2

f(1)=2+1+1=4=2(2)-1+1 true
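Not part of the induction argument, but if you want to convince yourself that the closed form is the right target before writing the proof, here is a quick Python check of the formula against the recurrence (it verifies small cases; it does not prove anything):

```python
# Compare the recurrence f(0)=2, f(n)=2 f(n-1) - n + 1 with the closed form 2^n + n + 1.
def f_rec(n):
    return 2 if n == 0 else 2 * f_rec(n - 1) - n + 1

def f_closed(n):
    return 2**n + n + 1

assert all(f_rec(n) == f_closed(n) for n in range(20))
print("closed form matches the recurrence for n = 0..19")
```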
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.934083104133606, "perplexity": 1249.0545011913835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119647884.33/warc/CC-MAIN-20141024030047-00110-ip-10-16-133-185.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-solve-z-2-8-6-2
# How do you solve z/2.8 = -6.2?

Jun 20, 2015

Multiply both sides of the equation by $2.8$ to get: $z = - 6.2 \cdot 2.8 = - 17.36$

#### Explanation:

The truth of an equation is preserved by any of the following operations: (1) Add or subtract the same value on both sides. (2) Multiply or divide both sides by the same non-zero value. Note that you can also do things like square both sides of an equation, but this can introduce spurious solutions: that is, the new equation may have solutions that the old one does not.
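A one-line check of the arithmetic (any calculator does the same; shown in Python only for concreteness):

```python
z = -6.2 * 2.8
print(z)          # -17.36
print(z / 2.8)    # -6.2, so z = -17.36 satisfies the original equation
```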
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9301924109458923, "perplexity": 567.3822333148275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738595.30/warc/CC-MAIN-20200809222112-20200810012112-00098.warc.gz"}
http://math.stackexchange.com/questions/367726/help-doubt-about-uniqueness-in-mathematics
# Help! Doubt About Uniqueness in Mathematics

Many times in mathematics, as for example when we find the solution of an ODE, we cannot claim uniqueness just by construction; instead we have to use a theorem. The reasoning behind this is that even if we found a solution and the solution appears to be unique from the point of view of the method used, how can we be sure there is no other method which provides another solution?

Now my question: sometimes the following type of argument is accepted as valid. For example, for a simple differential equation like $y'(x)=x^2$ with $y(0)=0$, we say that $y(x)=\frac{1}{3}x^3$ is the only solution (without the use of a uniqueness theorem; I think it is because of the Fundamental Theorem of Calculus). How can we be sure now that there is not another way to solve the equation that provides another solution (without using the uniqueness theorem, of course)? When is this type of argument valid and when not?

Down in some of the answers some people say that the reason for uniqueness in the previous example is that the anti-derivative is unique, which sounds reasonable. But the Laplace transform of a function that satisfies certain conditions is also unique. And we can't say that the solution of an ODE is unique only because it was calculated through the Laplace transform method.

-

When is what argument valid? I see no arguments above to validate. – Math Gems Apr 20 '13 at 22:54

@MathGems uniqueness without using a uniqueness theorem, as in the example. – Ambesh Apr 20 '13 at 22:54

Without knowing the uniqueness theorem, what logical basis do you have for believing that the solution should be unique? Does the same reasoning tell you that prime factorizations of integers are unique? – Math Gems Apr 20 '13 at 22:57

@MathGems Well, I'm not using my intuition. I have seen teachers do that. I have a very reputed teacher that claims uniqueness on a construction basis sometimes. And that is driving me nuts. – Ambesh Apr 20 '13 at 22:59

For the specific case you are asking, and similar questions, we do use a uniqueness theorem all the time: If two functions $f$ and $g$ have the same derivative, their difference is constant. We may not bother to mention it all the time, just as we do not bother to mention that arithmetic is commutative every time we have integers $a,b$, and rewrite $ab$ as $ba$. Anyway, the result that if $f'=g'$ then $f,g$ differ by a constant tells you that if you find any function, by any method whatsoever, such that $f'(x)=x^2$, then any other $y$ with $y'=x^2$ will be $f(x)+c$. – Andrés E. Caicedo Apr 20 '13 at 23:05

It's unique because it is an ODE with separable variables, and its solution is found via separation of the variables and integration. The function obtained via integration is unique except for the arbitrary constant, which is uniquely determined by the initial condition.

EDIT: I must correct this: let's suppose you have an ODE which is variable-separable, defined by $$h(y)y'=g(x)$$ You can find a solution via integration as usual: $$\int h(y)dy=\int g(x)dx$$ The function is given implicitly by the above equation. Let's suppose that $y(a)=b$ is the initial condition. If $y'(a)\neq 0$ then $y$ is locally invertible (as per the Inverse Function theorem) and you can find a local unique solution $y(x)$ which can be extended to a maximal connected domain; in this case it is correct to say that the solution obtained via the separation method is unique without appealing to the Existence & Uniqueness of solutions.
An important remark: the Existence and Uniqueness Theorem for ODEs works fine for equations defined as $y'(x)=f(y,x)$, where $|f|$ is bounded on a neighbourhood of the initial condition $(a,y(a))$. In the example cited in the comments, $f$ is something like $$f(x,y)=\dfrac{\hat{f}(x)}{y}$$ for some $\hat{f}$, so if $y(0)=0$, $|f|$ is not bounded around $(0,0)$ and uniqueness cannot be guaranteed.

-

@Gustavo_Marra But how do we know that it is really unique? Why is integration different from another method? For example using the Laplace transform to find the solution of an ODE. – Ambesh Apr 20 '13 at 23:00

Comes from the uniqueness of the primitive (antiderivative) of a function modulo an additive constant. Also see the Fundamental Theorem of Calculus. – Marra Apr 20 '13 at 23:03

@MykeArya uniqueness can be proven by the mean value theorem, which shows that a function defined on an interval with derivative zero is a constant. Then you take the difference of any solution to an ODE with the solution you solve for and use linearity of the derivative to show that that difference has derivative zero. The initial condition identifies the constant. – Chris Janjigian Apr 20 '13 at 23:42

I must be missing something: the equation $2y\frac{dy}{dx}=4x^3$, $y(0)=0$ is separable and has four solutions on $\mathbb{R}$ (two possibilities each side of the origin). – Shane O Rourke Apr 21 '13 at 9:13

As well as $y_1=x^2$ and $y_2=-x^2$, there is $y_3=\left\{\begin{array}{rl}x^2 & x<0\\ -x^2 & x\geq 0\end{array}\right.$ and $y_4=-y_3$. – Shane O Rourke Apr 21 '13 at 9:36

Although they may not be explicitly mentioned, there are in fact uniqueness theorems that are at the foundation of such "unique by construction" arguments. Let's consider your specific example of solutions of a nonhomogeneous differential equation. It is simply a special case of the ubiquitous linear principle that the general solution of a nonhomogeneous linear equation is given by adding any particular solution to the general solution of the associated homogeneous equation. More explicitly, if $\rm\:D\:$ is a linear map then one easily proves

Lemma $\ \$ If $\rm\ D\:f_1\ =\ g\$ then $\rm\ D\:f_2\ =\ g\ \iff\ 0\ =\ D\:f_1 - D\:f_2\ =\ D\:(f_1-f_2)$

Therefore $\rm\ D^{-1}(g)\ =\ f_1 +\ ker\ D\ =\:\:$ particular + homogeneous solution, as in linear algebra. In particular $\rm\ \ \int g\ =\ f_1 +\ c,\$ for $\rm\ c\in ker\ \dfrac{d}{d\:x}\: =\:\:$ constants w.r.t. the derivation $\rm\ D\: =\: \dfrac{d}{d\:x}\:.$

Compare this to $\rm \ x\, =\, 3\, +\, 5\, \mathbb Z,\:$ the solution of $\rm\ 2\: x\ \equiv\ 6\pmod{10}\:,\:$ with particular solution $\rm x \equiv 3\:,\:$ and homogeneous solution: $\rm\ 2\: x\:\equiv 0\pmod{10}\iff 10\:|\:2\:x\iff 5\:|\:x\iff x\in 5\ \mathbb Z\:.$

-

Based on your reactions to the other answers, I think your problem might be with the form of such arguments. The general idea is this: we want to prove the uniqueness of a particular gadget satisfying certain properties. From those properties, we deduce that it has other properties, and eventually we show that any gadget which satisfies all of these properties must be our particular gadget. Maybe there are other ways to approach the problem or other properties of the gadget which we haven't used, but in any case the gadget is unique. Analogously (?), a detective might determine that a master thief acted alone before determining the thief's identity.

-
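For what it's worth, the specific example in the question can also be handled numerically — this says nothing about uniqueness; it only illustrates that a generic solver lands on $x^3/3$. A minimal sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'(x) = x^2 with y(0) = 0
def rhs(x, y):
    return [x**2]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0], dense_output=True, rtol=1e-10, atol=1e-12)

xs = np.linspace(0.0, 2.0, 5)
print(sol.sol(xs)[0])   # numerical solution
print(xs**3 / 3)        # the closed form x^3/3
```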
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9152416586875916, "perplexity": 189.48606447617627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824319.59/warc/CC-MAIN-20160723071024-00275-ip-10-185-27-174.ec2.internal.warc.gz"}
https://jpmccarthymaths.com/2010/12/02/ms-2001-week-11/?replytocom=43
On Monday we completed our introduction to horizontal asymptotes and vertical asymptotes. On Wednesday we did some examples of curve sketching and applied max/min problems. Problems You need to do exercises – all of the following you should be able to attempt. Do as many as you can/want in the following order of most beneficial: Wills’ Exercise Sheets Stationary Points are points $a\in\mathbb{R}$ where the derivative of a differentiable function $f:\mathbb{R}\rightarrow\mathbb{R}$ vanishes, $f'(a)=0$. When asked to find the critical points of a function defined on the entire real line (rather than just on a closed interval $[a,b]$), the ‘endpoints’, $\pm\infty$, are not considered critical points. Convex is concave up and concave is concave down. Other Exercise Sheets – Questions on Asymptotes From section 4 Q. 5 from Problems, find the vertical asymptotes of the functions 5(b) [(7-10), (13), (15-16), (23)] and Q. 5(c) [except (30-31)] Past Exam Papers Q. 2(a), 6(b) from http://booleweb.ucc.ie/ExamPapers/Exams2005/Maths_Stds/MS2001Aut05.pdf All of this except Q. 1(d) [this is the Autumn 2010 paper which wasn’t on the library website earlier in the year] http://booleweb.ucc.ie/ExamPapers/exams2010/MathsStds/Autumn/MS2001Aut2010.pdf
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8223809003829956, "perplexity": 1474.2795499395666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711151.22/warc/CC-MAIN-20221207085208-20221207115208-00641.warc.gz"}
https://en.wikibooks.org/wiki/Proteomics/Protein_Identification_-_Mass_Spectrometry/Instrumentation
# Proteomics/Protein Identification - Mass Spectrometry/Instrumentation

## How does a Mass Spectrometer work?

A mass spectrometer is made up of three components: an ion source, a mass analyzer, and a detector. The unknown sample, which may originate as a solid, liquid, solution or vapor, is presented to the ionization source. After ionizing the sample, the ions of the sample are passed to the mass analyzer region where separation based on the mass-to-charge ratio occurs. Once separated by the analyzer, the ions then enter the detector portion of the mass spectrometer. At this point, the machine calculates the mass-to-charge ratio and the relative abundance of each of the different ions. From this information, a spectrum graph can be created such as the one to the right. Most mass spectrometers are maintained under a vacuum to improve the chances of ions traveling from ionization source to detector without interference by collision with air molecules.

## Ion Source

The ion source is the mass spectrometer component which ionizes the sample to be analyzed. Ionization mainly serves to present the sample as vaporized ions which can be acted upon by the mass analyzer and measured by the ion detector. There are many different methods available to ionize samples, such as positive or negative ion modes. The ionization method chosen should depend on the type of sample and the type of mass spectrometer.

Ionization Methods: (figures: electrospray ionization; MALDI)

There are three types of ionization methods: electron ionization, chemical ionization and photo-induced ionization. Electron ionization involves application of an electrical current to the sample to induce ionization. Chemical ionization involves interaction of the sample with reagent molecules to induce ionization. Ions produced are often denoted with symbols that indicate the nature of the ionization, e.g. [M+H]+ is used to represent a molecule which is protonated.

• Electron ionization
• Chemical Ionization

The methods commonly used in proteomics are ‘Matrix Assisted Laser Desorption Ionization’ or MALDI and ‘Electrospray Ionization’, also known as ESI. Atmospheric pressure chemical ionization and atmospheric pressure photo-ionization are two other forms as well.

MALDI uses a solid support target plate where a UV active matrix (solid or liquid) is spotted on the plate, followed by the sample over the matrix. The laser hits the spot on the crystallized matrix and transfers energy from the matrix molecule to the sample. This energy transfer vaporizes the sample, sending a plume of ions into the MALDI source. This plume of ions is then collected and held in the source until a pulse sends them all out simultaneously. If the MALDI is attached to a Time of Flight (TOF) mass analyzer, these ions are then sent down the TOF tube (typically ~2 m) and are separated according to their velocity (light ions hitting first). MALDI is the preferred instrumentation for proteomics due to ease of reading the spectrum; most ions are found in the +1 charge state [M+H].

Electrospray, on the other hand, is done by injecting the sample dissolved in a slightly acidic solution through a heated capillary that has a voltage around 10 V. This allows for highly charged particles to be formed at the tip of the capillary.
As the particles evaporate, their charge/volume increases to a point where charge repulsion forces take over and the particle will explode. A small drop will form which continues the process until individual molecules are in the gas phase and charged. These ions will then travel into the analyzer, typically a quadrupole, to be scanned one mass at a time. The molecules in electrospray tend to be multiply charged and even though the upper mass limit of a quadrupole is 2000 m/z, the multiple charges allow for high mass ions or proteins to be identified. Proteins will have a charge envelope in which each peak has a different amount of charges on them. Special software is needed to deconvolve the multiple charge species peaks into a single mass peak, such as MassLynx from Waters. This form of ionization is good for most compounds, although it is not best with neutral or low polarity molecules.

Atmospheric pressure chemical ionization involves interaction of the sample with reagent molecules to induce ionization. As in electrospray ionization, liquid is pumped through a capillary. At the tip it is nebulized and a corona discharge takes place, ionizing the molecules. The molecules interact with the analyte and transfer their charge. This form of ionization is good for small thermally stable molecules.

Atmospheric pressure photo-ionization uses photons to excite and ionize the molecules after they have been nebulized. This form of ionization is good for neutral compounds.

## Mass Analyzers

The mass analyzer is the component that separates the ions created from the ion source by their mass-to-charge ratios. Mass analyzers are based on the principles of charged particles in an electric or magnetic field. By using the Lorentz force law and Newton's second law of motion you can generate the following equation: ${\displaystyle ({\frac {m}{q}})\mathbf {a} =\mathbf {E} +\mathbf {v} \times \mathbf {B} }$ where m is the mass, the ionic charge is q, a is the acceleration, E is the electric field, and the vector cross product of the ion velocity and the magnetic field is v x B. This equation says that two particles with the same mass to charge ratio (m/q) will behave exactly the same. So what this equation is basically saying is that the mass to charge ratio acts as a determinant of the acceleration of the ion, which can also be represented as the sum of the electric field plus the cross product of the ion velocity and magnetic field.

### Scanning Mass Analyzers

Scanning mass analyzers need to separate the ions based on their relative abundances and mass to charge ratios. Electromagnetic fields are used to separate the ions based on their mass to charge ratios; by using a slit they are able to regulate which mass to charge ratio ions get to the detector. Once selected for a particular mass to charge ratio, the ion current is then recorded as a function of time, which is analogous to mass.

### Sector Mass Spectrometer

A mass spectrometer that uses a magnetic, electric or static sector as its mass analyzer is called a sector instrument. This also works with combinations of sectors like BEB (magnetic-electric-magnetic). These days, sector instruments are mostly double focusing, i.e., the ion beams are focused in both direction and velocity. Here is how a magnetic sector mass spectrometer works: imagine a tube-like thing between two electromagnets. When ions are passed through the tube from one end to the other, the magnetic field bends the ion stream by exerting a turning force on it.
Then the m/z ratio is determined.

### Orbitrap Mass spectrometer

An Orbitrap mass spectrometer is one in which ions are injected tangentially into an electric field produced by electrodes. These ions are trapped between outer electrodes, whereas the ions are attracted with an electrostatic attraction to the inner electrode, which is balanced by centrifugal force. Therefore these ions move in a circular manner around the inner electrode, and also back and forth along it, at a rate determined by their m/z ratio. By using this ion oscillation based on the m/z ratio, the trap can act as a mass analyzer.

### Quadrupole Mass Analyzer

You can think of this method as a filter or funnel which only allows certain ion masses to pass through. The "funnel" is actually a combination of positively and negatively charged metal rods which together form a channel through which the ions travel. The theory is that only selected masses will be able to pass through the channel, as all other ions won't have a stable trajectory through it and will hit the quadrupole rods, stopping the ion from reaching the detector.

### Quadrupole Ion Trap

A quadrupole ion trap mass spectrometer consists of hyperbolic electrodes with a ring and two endcaps, which is the core of this instrument. In this method, ions are trapped and then sequentially ejected into a conventional electron multiplier detector from the ion trap. That way all ions can be stored during the process of mass analysis. Recent findings showed that using 1 mtorr of helium gas in the trapping volume substantially improved the resolution of the instrument, as the kinetic energy of the ions is reduced and the ion trajectories contract to the center of the trap. A packet can be formed of ions with a given m/z. This spectrometer is used widely for commercial purposes because of its high resolution and because it is inexpensive.

### Linear Quadrupole Ion Trap

In the linear ion trap mass spectrometer the ions are trapped within a set of quadrupole rods to hold the ions radially and end electrodes to maintain the ions axially with a static electrical potential. It is simply said that unlike the Quadrupole Ion Trap, which uses a 3-D field, the Linear Quadrupole Ion Trap uses a 2-D field. It has a selective mass filter that detects the ions of a particular m/z ratio. This method has advantages like higher ion storage capacity and a faster scanning technique.

### Time of Flight Mass Analysers

The TOF (Time of Flight) is a mass analyzer that allows ions to flow down a field-free region, which allows the ions with a greater velocity, lighter ions, to hit the detector first. This is especially compatible with MALDI due to the fact that the TOF needs a pulsed instrument for its source. In this way ions are generated in the MALDI source and held there for a brief time and all are pulsed into the TOF at the same exact time. In this way, if all ions have the same kinetic energy, the ions with the lower mass will have a higher velocity and reach the detector first, whereas the ions with the higher mass will have a lower velocity and hit the detector last. The kinetic energy of an ion leaving a source is given by: ${\displaystyle T=eV={\frac {mv^{2}}{2}}}$ where the velocity v is defined by the length of the path L divided by the time t. ${\displaystyle v={\frac {L}{t}}}$ By substituting this equation into the first and solving for time you arrive at: ${\displaystyle t=L*{\sqrt {({\frac {m}{e}})*({\frac {1}{2V}})}}}$ From this equation you can easily see how mass directly affects travel time: for a given charge and accelerating voltage, the flight time grows with the square root of the mass.
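To get a feel for the numbers in this equation, here is a rough back-of-the-envelope sketch (the 2 m tube length echoes the figure quoted above for MALDI-TOF; the 20 kV accelerating voltage and the m/z values are illustrative assumptions, not the specification of any particular instrument):

```python
import math

# Assumed illustrative values: 2 m flight tube, 20 kV accelerating potential,
# singly charged ions (z = 1). Real instruments differ.
L = 2.0                      # flight path, m
V = 20_000.0                 # accelerating voltage, V
e = 1.602e-19                # elementary charge, C
amu = 1.661e-27              # atomic mass unit, kg

for m_over_z in (500, 2000, 8000):          # m/z in daltons per charge
    m = m_over_z * amu
    t = L * math.sqrt(m / (2 * e * V))      # t = L * sqrt(m / (2 e V))
    print(f"m/z {m_over_z:5d}  ->  flight time {t*1e6:6.2f} microseconds")
```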
Using the m/e portion of the equation you can clearly see how a larger mass means a longer time, and likewise how a lower mass would mean a smaller m/e and thus shorter travel time.

### Ion Cyclotron Resonance Spectrometer

ICR is a form of trapped-ion mass analyzer that specifically is defined as a static trap. It is basically a box with three parallel metal sides. Trapped ion analyzers work by keeping the ions in the trap and controlling the ions by using positively and negatively charged electrical fields in a carefully timed series of events. ICR specifically works on the principle that in a magnetic field, ions move in a circular path whose frequency is mass dependent. So using the cyclotron frequency you can surmise the mass. Equating the Lorentz force in a magnetic field to the equation for centripetal force yields: ${\displaystyle evB={\frac {mv^{2}}{r}}}$ You can then easily solve the above equation for the frequency F, with ${\displaystyle F={\frac {v}{r}}}$ Groups of ions with the same mass to charge ratios will have the same cyclotron frequency but may be moving out of synch with one another; this is why an excitation pulse is needed to bring the resonant ions into phase with one another and with the excitation pulse. Next, the ions passing close to the ICR cell receiver plates cause "image currents" which can be collected, amplified and analyzed. The signal registered in the receiver plates depends both on the number of ions and their distances from the receiver plates.

### Fourier transformation

This mass spectrometer includes a mass analyzer that uses the cyclotron frequency of the ions to determine their m/z ratio. Here a magnetic field with electric trapping plates, known as a Penning trap, is used to hold the ions. An oscillating electric field, at right angles to the magnetic field, is used to excite the trapped ions to a larger cyclotron radius. This results in packets of oscillating ions. The plates detect the signal as an image current, and the result is an interferogram of sine waves called a free induction decay (FID). By performing a Fourier transform on this data, a mass spectrum with a useful signal can be generated.

## Detector

Those ions which pass through the analyzer are now separated by the desired methods. This mass spectrometry component records the charge induced by an ion passing by a surface or the current produced when an ion hits a surface. From these charges or currents, a mass spectrum can be produced, as well as a measure of the total number of ions present at each m/z. Due to the fact that the number of ions entering the detector at any given moment is minuscule, signal amplification is often necessary.

Next section: Types of Mass Spectrometry

## Articles Summarized

### Principles and Applications of Liquid Chromatography-Mass Spectrometry in Clinical Biochemistry

#### Main Focus

One benefit of Liquid Chromatography - Mass Spectrometry is that there are different configurations to allow for results more geared toward the experiment.

#### Summary

LC-MS was slow to take off due to inefficient technologies. However, in the mid 1990s, after new technologies were developed, it became more popular. This was in part due to its high specificity and ability to handle more complex mixtures than the other options available at the time, such as GC-MS. Mass spectrometry starts by ionizing samples to generate charged molecular fragments, then analyzing their mass-to-charge ratio.
There are different technologies used to perform the mass spectrometry part of LC-MS to obtain different results, some being more versatile than others, such as changing the ion source, changing the mass analyzer, changing the ion suppression as well as changing the direct injection method. Different ion sources that can be used clinically would be an electrospray ionization source, an atmospheric pressure chemical ionization source or an atmospheric pressure photo-ionization source. Electrospray ionization, also known as ESI, works well on polar molecules; metabolites, xenobiotics and peptides are some examples. The atmospheric pressure chemical ionization source, also known as APCI, is a great analyzer for small thermally stable molecules and neutral non-polar molecules such as free steroids. Atmospheric pressure photo-ionization, also known as APPI, works well with neutral compounds such as steroids.

There are four different types of mass analyzers that can be used; these are the quadrupole analyzers, time-of-flight analyzers, ion trap analyzers, and hybrid analyzers. Quadrupole analyzers are widely used because of their ease of scanning and good quality quantitative data. Time-of-flight analyzers are used for small molecules because of their high sensitivity. Ion trap analyzers have the ability to fragment and ionize ions several times, giving so-called MSn capabilities. Hybrid analyzers can be switched between ion trap mode and conventional quadrupole mode.

There are also different steps that can be changed in the Liquid Chromatography part of LC-MS to make the procedure more versatile, such as changing the flow rate, the mobile phase, the resolution and through-put, and the quantitation (calibration). The parameters and conditions involved in LC-MS can be changed to optimize the assay. However, these conditions are specific to the particular analyte and the LC separation. Therefore there are no general conditions. Some of these conditions include using a dilute solution of the analyte and using single MS or tandem MS. It is also important to make sure the LC-MS system is working properly with protocols to detect deviations from normal performance. LC-MS is used in biochemical screening for genetic disorders as well as therapeutic drug monitoring and steroid hormones.

#### New Terms

GC-MS Gas chromatography-mass spectrometry (GC-MS) is a method that combines the features of gas-liquid chromatography and mass spectrometry to identify different substances within a test sample. ( http://en.wikipedia.org/wiki/GC-MS)
Ion source A device in which gas ions are produced, focused, accelerated, and emitted as a narrow beam. Also known as ion gun; ionization source. ( http://www.answers.com/topic/ion-source )
Calibration a set of graduations to indicate values or positions —usually used in plural ( http://www.merriam-webster.com/dictionary/calibration )
Deviation noticeable or marked departure from accepted norms of behavior ( http://www.merriam-webster.com/dictionary/deviation )
Xenobiotics a chemical compound (as a drug, pesticide, or carcinogen) that is foreign to a living organism ( http://www.merriam-webster.com/dictionary/xenobiotics )

#### Course Relevance

How is this applicable to the proteomics class? It is important to be able to analyze what makes up a protein so that we can understand the effects of that protein on an organism. Being able to see the differences between a given protein and its normal counterpart could help in the understanding of genetic diseases.
LC-MS gives scientists the ability to do protein separation.

## Websites Summarized

### What is Mass Spectrometry?

Chiu CM, Muddiman DC. http://www.asms.org/whatisms/index.html (3/28/09)

#### Main Focus

This website discusses what mass spectrometry is, the history behind it, and how it works.

#### Summary

Mass spectrometry is a powerful analytical technique that is used to identify unknown compounds, to quantify known compounds, and to elucidate the structure and chemical properties of molecules. Compounds can be identified at very low concentrations. Mass spectrometry is used by a wide range of professionals such as physicians, astronomers, and biologists. For example, anesthesiologists can use it to monitor the breath of patients during surgery, and astronomers can use it to determine the composition of molecular species found in space. It can be used to identify structures of biomolecules as well as sequence them. It is also used to identify and quantitate compounds of complex organic mixtures.

Mass spectrometry was invented by J.J. Thomson in a vacuum tube. His invention was used to discover a number of isotopes, to determine the relative abundance of the isotopes, and to measure their "exact masses", which are important in the foundation for later developments in diverse fields ranging from geochronology to biochemical research.

A mass spectrometer is an instrument that measures the masses of individual molecules that have been converted into ions by becoming electrically charged. A mass spectrometer does not actually measure the molecular mass directly, but rather the mass-to-charge ratio of the ions formed from the molecules. The charge on an ion is denoted by the integer number of the fundamental unit of charge, and the mass-to-charge ratio represents daltons per fundamental unit of charge. The results are presented in a mass spectrum. A mass spectrum is a graph of ion intensity as a function of mass-to-charge ratio.

#### New Terms

dalton a unit of mass for expressing masses of atoms, molecules, or nuclear particles equal to 1⁄12 of the atomic mass of the most abundant carbon isotope ( http://www.merriam-webster.com/medical/dalton )
corona a faint glow adjacent to the surface of an electrical conductor at high voltage ( http://www.merriam-webster.com/dictionary/corona )
mass spectrum the spectrum of a stream of gaseous ions separated according to their differing mass and charge ( http://www.merriam-webster.com/dictionary/mass%20spectrum )
elucidate to make lucid especially by explanation or analysis ( http://www.merriam-webster.com/dictionary/elucidate )
isotopes any of two or more species of atoms of a chemical element with the same atomic number and nearly identical chemical behavior but with differing atomic mass or mass number and different physical properties ( http://www.merriam-webster.com/dictionary/isotopes )

#### Course Relevance

How is this applicable to the proteomics class? Proteomics is the study of the protein and is defined as the qualitative and quantitative comparison of proteomes under different conditions to further unravel biological processes. Mass spectrometry allows for the study of the components of an organism and how they create their individual functions.
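Tying back to the charge-envelope discussion in the Ion Source section, here is a minimal sketch of how a pair of adjacent ESI charge-state peaks can be deconvolved to a neutral mass — this is not MassLynx or any vendor algorithm; it only assumes protonated species and consecutive charge states, and the peak values are made-up illustrative numbers:

```python
# Two adjacent ESI peaks of the same analyte: mz1 at charge z, mz2 at charge z+1.
# For protonated species, m/z = (M + z*mp) / z with mp ~ 1.00728 Da (proton mass).
mp = 1.00728

def deconvolve(mz1, mz2):
    """Return (z, M) from two adjacent charge-state peaks, with mz1 > mz2."""
    z = round((mz2 - mp) / (mz1 - mz2))   # charge of the mz1 peak
    M = z * (mz1 - mp)                    # neutral (uncharged) mass in Da
    return z, M

# Hypothetical peaks generated from a ~16,950 Da protein, for illustration only.
print(deconvolve(1696.01, 1541.92))       # -> (10, ~16950 Da)
```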
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8731256723403931, "perplexity": 1265.1706837225202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250608062.57/warc/CC-MAIN-20200123011418-20200123040418-00327.warc.gz"}
http://nag.com/numeric/fl/nagdoc_fl24/html/C06/c06pwf.html
# NAG Library Routine Document: C06PWF

Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.

## 1  Purpose

C06PWF computes the two-dimensional inverse discrete Fourier transform of a bivariate Hermitian sequence of complex data values.

## 2  Specification

SUBROUTINE C06PWF ( M, N, Y, X, IFAIL)
INTEGER M, N, IFAIL
REAL (KIND=nag_wp) X(M*N)
COMPLEX (KIND=nag_wp) Y((M/2+1)*N)

## 3  Description

C06PWF computes the two-dimensional inverse discrete Fourier transform of a bivariate Hermitian sequence of complex data values ${z}_{{j}_{1}{j}_{2}}$, for ${j}_{1}=0,1,\dots ,m-1$ and ${j}_{2}=0,1,\dots ,n-1$. The discrete Fourier transform is here defined by $\hat{x}_{k_1 k_2} = \frac{1}{\sqrt{mn}} \sum_{j_1=0}^{m-1} \sum_{j_2=0}^{n-1} z_{j_1 j_2} \times \exp\left( 2\pi i \left( \frac{j_1 k_1}{m} + \frac{j_2 k_2}{n} \right) \right),$ where ${k}_{1}=0,1,\dots ,m-1$ and ${k}_{2}=0,1,\dots ,n-1$. (Note the scale factor of $\frac{1}{\sqrt{mn}}$ in this definition.) Because the input data satisfies conjugate symmetry (i.e., ${z}_{{k}_{1}{k}_{2}}$ is the complex conjugate of ${z}_{\left(m-{k}_{1}\right){k}_{2}}$), the transformed values ${\stackrel{^}{x}}_{{k}_{1}{k}_{2}}$ are real. A call of C06PVF followed by a call of C06PWF will restore the original data. This routine calls C06PQF and C06PRF to perform multiple one-dimensional discrete Fourier transforms by the fast Fourier transform (FFT) algorithm in Brigham (1974) and Temperton (1983).

## 4  References

Brigham E O (1974) The Fast Fourier Transform Prentice–Hall
Temperton C (1983) Fast mixed-radix real Fourier transforms J. Comput. Phys. 52 340–350

## 5  Parameters

1:     M – INTEGER Input
On entry: $m$, the first dimension of the transform. Constraint: ${\mathbf{M}}\ge 1$.

2:     N – INTEGER Input
On entry: $n$, the second dimension of the transform. Constraint: ${\mathbf{N}}\ge 1$.

3:     Y($\left({\mathbf{M}}/2+1\right)×{\mathbf{N}}$) – COMPLEX (KIND=nag_wp) array Input
On entry: the Hermitian sequence of complex input dataset $z$, where ${z}_{{j}_{1}{j}_{2}}$ is stored in ${\mathbf{Y}}\left({j}_{2}×\left(m/2+1\right)+{j}_{1}+1\right)$, for ${j}_{1}=0,1,\dots ,m/2$ and ${j}_{2}=0,1,\dots ,n-1$. That is, if Y is regarded as a two-dimensional array of dimension $\left(0:{\mathbf{M}}/2,0:{\mathbf{N}}-1\right)$, then ${\mathbf{Y}}\left({j}_{1},{j}_{2}\right)$ must contain ${z}_{{j}_{1}{j}_{2}}$.

4:     X(${\mathbf{M}}×{\mathbf{N}}$) – REAL (KIND=nag_wp) array Output
On exit: the real output dataset $\stackrel{^}{x}$, where ${\stackrel{^}{x}}_{{k}_{1}{k}_{2}}$ is stored in ${\mathbf{X}}\left({k}_{2}×m+{k}_{1}+1\right)$, for ${k}_{1}=0,1,\dots ,m-1$ and ${k}_{2}=0,1,\dots ,n-1$. That is, if X is regarded as a two-dimensional array of dimension $\left(0:{\mathbf{M}}-1,0:{\mathbf{N}}-1\right)$, then ${\mathbf{X}}\left({k}_{1},{k}_{2}\right)$ contains ${\stackrel{^}{x}}_{{k}_{1}{k}_{2}}$.

5:     IFAIL – INTEGER Input/Output
On entry: IFAIL must be set to $0$, $-1\text{ or }1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$.
When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).

## 6  Error Indicators and Warnings

If on entry ${\mathbf{IFAIL}}={\mathbf{0}}$ or $-{\mathbf{1}}$, explanatory error messages are output on the current error message unit (as defined by X04AAF). Errors or warnings detected by the routine:

${\mathbf{IFAIL}}=1$
On entry, ${\mathbf{M}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{M}}\ge 1$.

${\mathbf{IFAIL}}=2$
On entry, ${\mathbf{N}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{N}}\ge 1$.

${\mathbf{IFAIL}}=3$
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

${\mathbf{IFAIL}}=-999$
Dynamic memory allocation failed.

## 7  Accuracy

Some indication of accuracy can be obtained by performing a forward transform using C06PVF and a backward transform using C06PWF, and comparing the results with the original sequence (in exact arithmetic they would be identical).

The time taken by C06PWF is approximately proportional to $mn\mathrm{log}\left(mn\right)$, but also depends on the factors of $m$ and $n$. C06PWF is fastest if the only prime factors of $m$ and $n$ are $2$, $3$ and $5$, and is particularly slow if $m$ or $n$ is a large prime, or has large prime factors. Workspace is internally allocated by C06PWF. The total size of these arrays is approximately proportional to $mn$.
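For readers prototyping outside the NAG Library, the round-trip check described in Section 7 can be mimicked with NumPy's real FFTs. This is only a sketch: NumPy stores the Hermitian half-space along the last dimension rather than the first, so it is not a drop-in replacement for the C06PVF/C06PWF data layout, but norm="ortho" reproduces the $\frac{1}{\sqrt{mn}}$ scaling used in the definition above.

```python
import numpy as np

m, n = 6, 8
x = np.random.default_rng(0).standard_normal((m, n))

# Forward real-to-Hermitian transform (role of C06PVF), with 1/sqrt(mn) scaling
# applied in both directions via norm="ortho".
y = np.fft.rfft2(x, norm="ortho")                  # shape (m, n//2 + 1)

# Inverse Hermitian-to-real transform (role of C06PWF).
x_back = np.fft.irfft2(y, s=(m, n), norm="ortho")

print(np.max(np.abs(x - x_back)))                  # ~1e-16: forward + inverse restores the data
```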
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 60, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9960063695907593, "perplexity": 1763.7367268358526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661023.80/warc/CC-MAIN-20160924173741-00031-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/easier-way-to-get-exact-sum-avr.225405/
# Easier way to get exact sum/avr?

1. Mar 30, 2008 ### buddingscientist

Easier way to get exact sum/avr? [SOLVED, thanks awvvu] Greetings, First I will explain what I am trying to do. I am trying to find the 'average' distance from the bottom and right edges of this box: Basically (using 11 approximations) What I am after is equal to 1 + sqrt (1 + 0.1^2) + sqrt (1 + 0.2^2) + sqrt (1 + 0.3^2) + sqrt (1 + 0.4^2) + sqrt (1 + 0.5^2) + sqrt (1 + 0.6^2) + sqrt (1 + 0.7^2) + sqrt (1 + 0.8^2) + sqrt (1 + 0.9^2) + sqrt(2) And then that divided by 11. I've used a nice messy excel spreadsheet to get this to 1000 approximations (1000 little 'slices') to get an average of 1.148001 and also using 5000 slices I get 1.147835. Basically as a summation what I _think_ I am looking for is (1000 slices): $$\frac{1}{n} \sum_{n=1}^{1000} \sqrt{1^2 + (0.001n)^2}$$ (that right?) And extended to an infinite amount of slices: $$\lim_{k\rightarrow\infty} \frac{1}{k} \sum_{n=1}^{k} \sqrt{1^2 + (\frac{n}{k})^2}$$ (is this right/possible?) What I am looking for is, using integration, or if it is do-able to evaluate that sum, to know if it is possible to get an 'exact' answer for the 'average' distance? I imagine it would be very similar to the 1.1478 answer above, but I'm looking for more accuracy (basically to whatever precision the infinite sum gives) or if it just happens to equal a nice fraction for me (8/7 which is 1.14285...) or you know.. something nice and round. Thanks for reading, please let me know if you need any more info, or if I have gone wrong somewhere, or any hints to get me on the right track, etc.

Last edited by a moderator: Apr 23, 2017 at 11:55 AM

2. Mar 30, 2008 ### awvvu

You can set it up as an integral. Let's place the bottom-left corner of the square at the origin. The distance from the top-left corner to any x is $\sqrt{1+x^2}$. And we want to integrate from x = 0 to x = 1. I stuck it into integrator and the antiderivative is $\frac{1}{2}(x \sqrt{1 + x^2} + arcsinh(x))$. Plugging in our limits gives $\frac{1}{2}(\sqrt{2} + arcsinh(1)) \approx 1.14779$. It's a pretty unexpected exact expression. I think with some prodding, your sum can be turned into a Riemann sum and you'll get the same results as setting it up as an integral directly. edit: The integral and final expression should be divided by its length (1) to find the average.

Last edited: Mar 30, 2008

3. Mar 31, 2008 ### buddingscientist

Thanks very much! I knew there would be a simpler way through integrating than my messy summations. Could I ask what you mean by "It's a pretty unexpected exact expression"? (Just out of interest) Thanks again
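A quick numerical cross-check of the numbers in this thread (illustration only):

```python
import math

def riemann_avg(k):
    # average of sqrt(1 + (n/k)^2) over n = 1..k, as in the thread's sum
    return sum(math.sqrt(1 + (n / k) ** 2) for n in range(1, k + 1)) / k

exact = 0.5 * (math.sqrt(2) + math.asinh(1.0))   # (1/2)(sqrt(2) + arcsinh(1))
print(riemann_avg(1000))   # ~1.14800
print(riemann_avg(5000))   # ~1.14783
print(exact)               # 1.147793...
```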
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9376568794250488, "perplexity": 827.1343153890219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122886.86/warc/CC-MAIN-20170423031202-00099-ip-10-145-167-34.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/240654/lebesgue-integrable-function?answertab=oldest
# Lebesgue Integrable Function

My professor posed a question to us last week about the limit of a function as $k \to \infty$. He asked us to prove that $\int \lim_{k \to \infty} f(x)e^{-\frac{x^2}{k}} dx = \int f(x)dx$. This seems fairly basic, since it can be directly shown that the exponential part of the integrand, $e^{-\frac{x^2}{k}} \to 1$ as $k \to \infty$. Is there something I am missing? Are there extra steps needed to show that the integral of the limit is equal to the limit of the integral? Is this even necessary? I feel like this question is too simple relative to the rest of the material in class, but I'm not sure what I am missing.

- well, $f=g$ implies $\int f=\int g$... –  leo Nov 20 '12 at 21:29
- Are you sure he didn't ask $\lim_{k\rightarrow \infty}\int f(x)e^{-x^2/k}\,dx$? In that case, you need to justify switching the integral and the limit. –  asmeurer Dec 3 '12 at 5:12

Over an interval or compact set, $f(x) e^{-x^2/k} \to f(x)$ uniformly. Then you can get $\int \lim = \lim \int$ from Riemann integration. Indeed, $f(x) e^{-x^2/k} \to f(x)$ pointwise and $|f(x) e^{-x^2/k}| \leq |f(x)|$. Therefore $$\lim_{k \to \infty} \int f(x) e^{-x^2/k} \, dx = \int f(x) \, dx$$
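A numerical illustration of the convergence (not a proof — a sketch assuming SciPy, with $f$ chosen as an arbitrary integrable example):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-abs(x)) * np.cos(x)          # an integrable example function

target, _ = quad(f, -np.inf, np.inf)
for k in (1, 10, 100, 1000):
    val, _ = quad(lambda x, k=k: f(x) * np.exp(-x**2 / k), -np.inf, np.inf)
    print(k, val, target)                          # val approaches target as k grows
```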
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9569453597068787, "perplexity": 179.30620390131654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999655160/warc/CC-MAIN-20140305060735-00051-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/138359-elements-symmetric-groups-centralizers.html
# Math Help - elements of symmetric groups and centralizers

1. ## elements of symmetric groups and centralizers

Let P = (1, 2, 3, .... n) in $S_n$.

a. Show there are (n-1)! distinct n-cycles in $S_n$.
b. How many conjugates does P have in $S_n$?
c. Let C(P)=N(P) be the centralizer of P in $S_n$. Using (b), find |C(P)|.
d. Find the subgroup C(P).

my thoughts: a -- this seems really obvious, but I never actually learned how to show it! I have no ideas. b. by part (a), wouldn't this be simply (n-1)!, since that's the property of symmetric groups? c & d, I really don't know what to do. Do I use the class equation somehow, or another theorem? Thanks so much for any help!

2. Originally Posted by kimberu
Let P = (1, 2, 3, .... n) in $S_n$. a. Show there are (n-1)! distinct n-cycles in $S_n$. b. How many conjugates does P have in $S_n$? c. Let C(P)=N(P) be the centralizer of P in $S_n$. Using (b), find |C(P)|. d. Find the subgroup C(P). my thoughts: a -- this seems really obvious, but I never actually learned how to show it! I have no ideas. b. by part (a), wouldn't this be simply (n-1)!, since that's the property of symmetric groups? c & d, I really don't know what to do. Do I use the class equation somehow, or another theorem? Thanks so much for any help!

For (a), observe that (1 2 3 ... n )= (2 3 .. n 1) = (3, 4, .. n 1 2 ). To find the number of distinct n-cycles, you need to fix the first element in the n-cycle, let's say 1, and consider the permutations of the remaining elements. So there are (n-1)! distinct n-cycles in S_n.

For (b), a conjugacy class in S_n has the same cycle type. So there are (n-1)! members in the conjugacy class of the n-cycle in S_n.

(c). The centralizer of (1 2 3 .. n) in S_n is simply the group generated by (1 2 3 .. n), which is <(1 2 3 .. n)>. If this is not immediate for you, consider the permutation in two-line notation. The members of the group <(1 2 3...n)> have a pattern in the two-line notation. Try some members in the above group with two-line notation and observe how they commute with (1 2 3 .. n). However, if you need to find the centralizer of (1 2 3 .. m) in S_n with m<n, the above needs a little bit of modification. For instance, the centralizer of (1 2 3 4 5 ) in S_8 is the group <(1 2 3 4 5), (6 8), (7 8)>.

3. Originally Posted by aliceinwonderland
(c). The centralizer of (1 2 3 .. n) in S_n is simply the group generated by (1 2 3 .. n), which is <(1 2 3 .. n)>. If this is not immediate for you, consider the permutation in two-line notation. The members of the group <(1 2 3...n)> have a pattern in the two-line notation. Try some members in the above group with two-line notation and observe how they commute with (1 2 3 .. n).

Thanks a lot - I understand parts a&b, but I'm a little shaky on the centralizer. I don't think I know which unique pattern you mean in the two-line notation. (Also, is the order of <(1 2 3 .. n)> also (n-1)!, that is, is this group the same as the conjugacy class? I'm not sure how to find that either.)

4. Originally Posted by kimberu
Thanks a lot - I understand parts a&b, but I'm a little shaky on the centralizer. I don't think I know which unique pattern you mean in the two-line notation. (Also, is the order of <(1 2 3 .. n)> also (n-1)!, that is, is this group the same as the conjugacy class? I'm not sure how to find that either.)

The order of <(1 2 3 ... n)> is n. It is the cyclic group generated by (1 2 3 .. n) in S_n. So it is an abelian group.
For instance, if $n=4$ and $\tau = (1 2 3 4 )$, then the cyclic group generated by $\tau$ is $\{1, \tau, \tau^2, \tau^3\}$. I encourage you to verify this with the two-line notation. All elements in this group commute with $\tau$, which implies $\rho\tau\rho^{-1}=\tau$ where $\rho \in \{1, \tau, \tau^2, \tau^3\}$.
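If you want to experiment with larger cases, the counts in (a)–(c) can be checked for small $n$ with SymPy's combinatorics module (a verification sketch, not a proof — it assumes SymPy's SymmetricGroup/centralizer API is available):

```python
from math import factorial
from sympy.combinatorics import Permutation, SymmetricGroup

n = 5
S = SymmetricGroup(n)
p = Permutation(list(range(1, n)) + [0])        # the n-cycle (0 1 2 ... n-1)

# (a) number of distinct n-cycles equals (n-1)!
n_cycles = sum(1 for g in S.generate() if g.cycle_structure == {n: 1})
print(n_cycles, factorial(n - 1))               # 24 24

# (c) the centralizer C(P) is the cyclic group <P>, of order n
C = S.centralizer(p)
print(C.order())                                # 5

# (b) number of conjugates = |S_n| / |C(P)| = (n-1)!  (orbit-stabilizer)
print(S.order() // C.order())                   # 24
```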
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9372479915618896, "perplexity": 716.3638499290673}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658061.59/warc/CC-MAIN-20150417045738-00059-ip-10-235-10-82.ec2.internal.warc.gz"}
https://nips.cc/Conferences/2015/ScheduleMultitrack?event=5722
Poster Reflection, Refraction, and Hamiltonian Monte Carlo Hadi Mohasel Afshar · Justin Domke Tue Dec 08 04:00 PM -- 08:59 PM (PST) @ 210 C #43 Hamiltonian Monte Carlo (HMC) is a successful approach for sampling from continuous densities. However, it has difficulty simulating Hamiltonian dynamics with non-smooth functions, leading to poor performance. This paper is motivated by the behavior of Hamiltonian dynamics in physical systems like optics. We introduce a modification of the Leapfrog discretization of Hamiltonian dynamics on piecewise continuous energies, where intersections of the trajectory with discontinuities are detected, and the momentum is reflected or refracted to compensate for the change in energy. We prove that this method preserves the correct stationary distribution when boundaries are affine. Experiments show that by reducing the number of rejected samples, this method improves on traditional HMC.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9169958829879761, "perplexity": 984.9323026916866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889651.52/warc/CC-MAIN-20201025183844-20201025213844-00117.warc.gz"}
http://physics.stackexchange.com/questions/93904/speed-of-light-energy
# Speed of light energy

Considering the amount of energy necessary to accelerate a particle to the speed of light (i.e., half the energy in the entire universe), how could we have so many things already going the speed of light? Maybe there should be only two or three things (objects or particles etc.) in the universe already going that fast. Just a few photons for example, in the whole universe.

- "half the energy in the entire universe" — No, it's MUCH bigger, infinite in fact. –  jinawee Jan 16 at 16:55
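For a massive particle the kinetic energy $E=(\gamma-1)mc^2$ grows without bound as $v \to c$; photons get around this only because they are massless. A small illustrative computation (electron mass used purely as an example):

```python
import math

m = 9.109e-31        # electron mass, kg
c = 2.998e8          # speed of light, m/s

for beta in (0.9, 0.99, 0.9999, 0.999999999):
    gamma = 1 / math.sqrt(1 - beta**2)
    E = (gamma - 1) * m * c**2          # kinetic energy, joules
    print(f"v = {beta}c  ->  kinetic energy {E:.3e} J")
# No finite energy reaches beta = 1 for a massive particle.
```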
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8982224464416504, "perplexity": 514.7362861555354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802773058.130/warc/CC-MAIN-20141217075253-00104-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/help-friction-on-ice-involving-weight.188350/
# HELP Friction on Ice involving weight

1. Oct 1, 2007

### imadagron89

You are driving a 2580.0 kg car at a constant speed of 14.0 m/s along an icy, but straight, level road. As you approach an intersection, the traffic light turns red. You slam on the brakes. Your wheels lock, the tires begin skidding, and the car slides to a halt in a distance of 26.4 m. What is the coefficient of kinetic friction between your tires and the icy road?

3. The attempt at a solution

I found the acceleration to be -0.5303 by taking -14/26.4 (v = v0 + at), but now I'm confused as to how to find the coefficient of kinetic friction from the stopping distance.

2. Oct 1, 2007

### Staff: Mentor

Find the acceleration or deceleration, then $\mu$ = a/g, since F_friction = ma = $\mu$W = $\mu$mg.

3. Oct 1, 2007

### imadagron89

I'm sorry man, still not getting it. Is there some sort of website that has all the equations on it?
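Following the mentor's hint, a quick numerical check (assuming constant deceleration while skidding; variable names are mine):

```python
# Coefficient of kinetic friction from the stopping distance.
# Constant deceleration: v^2 = v0^2 + 2*a*d with final speed v = 0,
# so a = -v0^2 / (2*d), and mu*m*g = m*|a| gives mu = |a|/g (mass cancels).
v0 = 14.0   # initial speed, m/s
d = 26.4    # stopping distance, m
g = 9.8     # m/s^2

a = -v0**2 / (2 * d)
mu = abs(a) / g
print(a, mu)   # roughly -3.71 m/s^2 and mu of about 0.38
```

Note that the deceleration comes from v^2 = v0^2 + 2*a*d rather than from dividing the speed by the distance, which is where the attempt above went off track.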
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8334468603134155, "perplexity": 2675.488519100613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686117.24/warc/CC-MAIN-20170920014637-20170920034637-00136.warc.gz"}
https://arxiv.org/abs/1509.03890
cond-mat.dis-nn

# Title: Many-body Localization Transition in Rokhsar-Kivelson-type wave functions

Abstract: We construct a family of many-body wave functions to study the many-body localization phase transition. The wave functions have a Rokhsar-Kivelson form, in which the weights for the configurations are chosen from the Gibbs weights of a classical spin glass model, known as the Random Energy Model, multiplied by a random sign structure to represent a highly excited state. These wave functions show a phase transition into an MBL phase. In addition, we see three regimes of entanglement scaling with subsystem size: scaling with entanglement corresponding to an infinite temperature thermal phase, constant scaling, and a sub-extensive scaling between these limits. Near the phase transition point, the fluctuations of the Rényi entropies are non-Gaussian. We find that Rényi entropies with different Rényi index transition into the MBL phase at different points and have different scaling behavior, suggesting a multifractal behavior.

Comments: Published version. Improved version with 1 new reference; 17 pages, 68 references and 16 figures (some have been reordered)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech); Strongly Correlated Electrons (cond-mat.str-el)
Journal reference: Physical Review B 92, 214204 (2015)
DOI: 10.1103/PhysRevB.92.214204
Cite as: arXiv:1509.03890 [cond-mat.dis-nn] (or arXiv:1509.03890v5 [cond-mat.dis-nn] for this version)
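A small numerical sketch of the construction described in the abstract above: amplitudes proportional to square roots of Random-Energy-Model Gibbs weights with random signs, followed by a bipartite entanglement entropy. The system size, inverse temperature and the variance of the REM energies are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, NA = 10, 5                                     # N spins, NA of them in subsystem A
beta = 1.0                                        # illustrative inverse temperature
E = rng.normal(0.0, np.sqrt(N / 2), size=2**N)    # iid REM-style energies (assumed variance)
signs = rng.choice([-1.0, 1.0], size=2**N)        # random sign structure

psi = signs * np.exp(-beta * E / 2.0)             # |amplitude|^2 proportional to Gibbs weight
psi /= np.linalg.norm(psi)

M = psi.reshape(2**NA, 2**(N - NA))               # bipartition A | B
s = np.linalg.svd(M, compute_uv=False)
p = s**2                                          # Schmidt spectrum, sums to 1
S_ent = -np.sum(p * np.log(p + 1e-300))
print("entanglement entropy of subsystem A:", S_ent)
```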
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652524948120117, "perplexity": 1736.1446792271406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155814.1/warc/CC-MAIN-20180919004724-20180919024724-00141.warc.gz"}
http://biztalkdeployment.codeplex.com/discussions/444095
# Include file in outdir, but not in Msi Topics: Server Deployment Wiki Link: [discussion:444095] adrach87 May 17, 2013 at 9:17 PM I have a need to copy a batch file from the Deployment folder to the outdir while building the MSI installer. Normally I'd just set Copy to Output folder to true in VS, but since the deployment framework isn't a project template, I don't have that option. I tried creating a postbuild action that just copies the file, but that doesn't seem to be executed. Here's how I set that up: `````` xcopy ..\..\MsiDeploy.bat . `````` But even after creating the MSI all I get are the ProjectName-Version.msi and Install-ProjectName-Version.bat files. What am I missing? tfabraham Coordinator May 19, 2013 at 5:18 AM Use this: `````` `````` Thanks, Tom adrach87 May 20, 2013 at 4:14 PM Thanks, but that doesn't seem to be working for me. This goes in the btdfproj file, in the Project element, right? The CreateItem node is underlined with the message "Target element has invalid child element CreateItem. List of possible elements expected: Task, PropertyGroup, ItemGroup, OnError". When I build an MSI, it works as normal, but the MsiDeploy.bat doesn't show up in the \bin\release folder. tfabraham Coordinator May 21, 2013 at 6:35 AM Yes. Then I would assume that the path to MsiDeploy.bat is incorrect. The path is relative to the Deployment project folder. Try a full explicit path if you can't get a relative path to work, just to see if you can get the file copied. Thanks, Tom adrach87 May 22, 2013 at 4:12 AM I updated to use an absolute path but that didn't work either. I even tried using msbuild and pointed it to the CustomPostInstaller target to determine if it maybe it just wasn't running that target, but again nothing. I also don't see any errors or warnings about not finding the MsiDeploy.bat file, so I don't think it's the path. One other possibility to get the file I need would be to customize the Install-ProjectName-Version.bat file. Is that possible? Thanks for your help. tfabraham Coordinator May 22, 2013 at 6:33 AM On BizTalk 2010 and newer you can use: `````` `````` See if the Message tasks write something out and whether the values are valid. It is not easily possible to modify the .bat file contents on initial creation. Thanks, Tom adrach87 May 23, 2013 at 3:21 AM This worked, once I got the correct path for the file I wanted to copy. Thanks.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8564311265945435, "perplexity": 1549.9736070163663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323870.46/warc/CC-MAIN-20170629051817-20170629071817-00353.warc.gz"}
https://physics.aps.org/articles/v9/s90
Synopsis: World of Weyl Craft
Physics 9, s90

Researchers provide new evidence for the existence of type-II Weyl semimetals, which would be both conducting and insulating in different spatial directions.

Over the last year, much excitement has surrounded Weyl semimetals. These materials have an asymmetric crystal structure that results in never-before-seen collective excitations called Weyl fermions. Hints have also emerged that transition metal dichalcogenides, such as molybdenum ditelluride (${\text{MoTe}}_{2}$), represent a distinct category of Weyl semimetals (type-II), characterized by bizarre symmetry properties. Anna Tamai from the University of Geneva, Switzerland, and colleagues performed a careful assessment of ${\text{MoTe}}_{2}$ and argue that it is indeed a strong candidate for a type-II Weyl semimetal.

Weyl semimetals have a complex electronic band structure, in which two bands meet at points. In a type-I Weyl semimetal (see 8 September 2015 Viewpoint), these so-called Weyl points are connected by arc-shaped features, known as Fermi arcs, which can be observed in data obtained with angle-resolved photoemission spectroscopy (ARPES). A type-II Weyl semimetal would also exhibit Fermi arcs, but the endpoints would not correspond to the Weyl points—making them harder to identify. This difference arises because the type-II band structure is predicted to have a large tilt, resulting in Weyl fermions that violate Lorentz symmetry. This violation would produce exotic properties such as the material acting as a conductor for electrons moving in certain directions, while being an insulator in others, depending on the orientation of an applied magnetic field.

Several groups claimed to have observed a type-II Weyl semimetal. However, Tamai et al. argue that candidate materials may exhibit arcs that are actually “false positives.” Bearing this in mind, the authors identified several arc-like features in their ARPES data for ${\text{MoTe}}_{2}$ and then compared them to detailed electronic-structure calculations. They showed that some of these arcs can be explained without Weyl points, but others are only reproduced in scenarios with at least eight Weyl points, consistent with ${\text{MoTe}}_{2}$ being a type-II Weyl semimetal.

This research is published in Physical Review X.

–Michael Schirber

Michael Schirber is a Corresponding Editor for Physics based in Lyon, France.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.902971625328064, "perplexity": 1934.589075106676}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151672.96/warc/CC-MAIN-20210725111913-20210725141913-00436.warc.gz"}
http://avidemux.org/admWiki/doku.php?id=build:problems_with_compiling&rev=1352620301&do=diff
# Avidemux: problems with compiling (build:problems_with_compiling)

One of two things happened.

- You need to provide the **--with-newfaad** switch for ./configure: `./configure --with-jsapi-include=xxxx --with-newfaad`. This is needed for the Linux distributions Gentoo and Ubuntu.
- You do not need to give the --with-newfaad switch for ./configure. The solution is to try the ./configure command without **--with-newfaad**.

If you have trouble with jsapi, one of two things has happened.

- If it had trouble finding jsapi.h, the solution is simple: `./configure --with-jsapi-include=xxxx` (add --with-newfaad for Gentoo or Ubuntu). Using the `locate jsapi.h` command will usually find the directory you need with the jsapi.h file. Replace the xxxx with the directory where jsapi.h is.
- If there was a problem locating the library, the trouble could be several things.
  * One possibility is that libjs.so, libsmjs.so, and other necessary libraries are not in the standard path. You can either add them to your path value or you can put symlinks in /usr/lib to the libraries themselves.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9182610511779785, "perplexity": 4790.541267942059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143373.18/warc/CC-MAIN-20200217205657-20200217235657-00030.warc.gz"}
http://www.lmfdb.org/Variety/Abelian/Fq/1/27/ak
# Properties

Label: 1.27.ak
Base field: $\F_{3^3}$
Dimension: $1$
$p$-rank: $1$
Principally polarizable
Contains a Jacobian

## Invariants

Base field: $\F_{3^3}$
Dimension: $1$
Weil polynomial: $1 - 10 x + 27 x^{2}$
Frobenius angles: $\pm0.0877398280459$
Angle rank: $1$ (numerical)
Number field: $\Q(\sqrt{-2})$
Galois group: $C_2$

This isogeny class is simple.

## Newton polygon

This isogeny class is ordinary.

$p$-rank: $1$
Slopes: $[0, 1]$

## Point counts

This isogeny class contains a Jacobian, and hence is principally polarizable.

| $r$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| $A(\F_{q^r})$ | 18 | 684 | 19494 | 530784 | 14347458 | 387423756 | 10460425014 | 282430166400 | 7625601845298 | 205891158689964 |
| $C(\F_{q^r})$ | 18 | 684 | 19494 | 530784 | 14347458 | 387423756 | 10460425014 | 282430166400 | 7625601845298 | 205891158689964 |

## Decomposition

This is a simple isogeny class.

## Base change

This isogeny class is not primitive. It is a base change from the following isogeny classes over subfields of $\F_{3^3}$.

| Subfield | Primitive Model |
|---|---|
| $\F_{3}$ | 1.3.c |
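The point counts above can be checked directly from the Weil polynomial $1 - 10x + 27x^2$: writing $N_r = q^r + 1 - t_r$ with $q = 27$ and $t_r = \alpha^r + \bar\alpha^r$, the traces satisfy the integer recurrence $t_r = 10\,t_{r-1} - 27\,t_{r-2}$. A quick sketch (the script is mine, not from LMFDB):

```python
# Verify the point counts from the Weil polynomial 1 - 10x + 27x^2.
q, a = 27, 10                  # q = 3^3, trace of Frobenius a = 10
expected = [18, 684, 19494, 530784, 14347458, 387423756,
            10460425014, 282430166400, 7625601845298, 205891158689964]

t_prev, t = 2, a               # t_0 = 2, t_1 = a
counts = []
for r in range(1, 11):
    counts.append(q**r + 1 - t)
    t_prev, t = t, a * t - q * t_prev   # t_{r+1} = a*t_r - q*t_{r-1}

print(counts == expected)      # True
```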
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9050201773643494, "perplexity": 1621.2068542814995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256184.17/warc/CC-MAIN-20190521002106-20190521024106-00184.warc.gz"}
https://www.physicsforums.com/threads/maximum-distance.192822/
# Maximum distance

1. Oct 21, 2007

### princessfrost

A 0.150-kg frame, when suspended from a coil spring, stretches the spring 0.050 m. A 0.200-kg lump of putty is dropped from rest onto the frame from a height of 30.0 cm. Find the maximum distance the frame moves downward from its initial position in meters.

First I found the spring constant, k, which is given by F = -kx:
k = -F/x = -(0.150 kg * 9.8 m/s^2)/(0.050 m) = 29.4 N/m

Next, I found the force that the lump of putty makes on the frame. The force is given by F = ma, with the mass of the putty, 0.200 kg, and the acceleration due to gravity, -9.8 m/s^2:
F = ma = 0.200 kg * -9.8 m/s^2 = 1.96 N

Now I have the spring constant, k (29.4 N/m), and the force of the putty, 1.96 N, so I finally solve for the distance the frame moves:
F = -kx
x = -k/F = -(29.4 N/m)/(1.96 N) = -15 m

It is negative because it moves in the -x direction. So I get that the frame moves 15 m downwards, but my program says it is wrong. Can someone please help me?

2. Oct 21, 2007

### hage567

This part looks OK.

For the next part, the 1.96 N is not the force that the putty exerts on the frame, since the putty was released 0.3 m above the frame. You can consider conservation of energy to get the distance the spring will go to bring the putty to a stop.

3. Oct 21, 2007

### princessfrost

(1/2)mv^2 = (1/2)kx^2
F = -kx, where F = mg = (0.150 kg)(9.80 m/s^2) = 1.47 N
1.47 N = -k(-0.050 m), so k = 1.47 N / 0.050 m = 29.4 N/m

Then: v2^2 = v1^2 - 2gh, where v1 = 0 and h = -30 cm = -0.30 m
v2^2 = -2(9.80 m/s^2)(-0.30 m) = 5.88 m^2/s^2
v2 = 2.425 m/s

Now, plugging these values in I get
mv2^2 = kx^2
x^2 = mv2^2/k = (0.200 kg)(5.88 m^2/s^2)/(29.4 N/m) = 0.04 m^2
x = 0.2 m

So the maximum distance is:
x = 0.05 m + 0.20 m = 0.25 m

Is this right?

4. Oct 21, 2007

### hage567

Looks OK to me.
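Reproducing the arithmetic from post 3 (this only checks the numbers as computed in the thread; variable names are mine):

```python
# Numbers from the thread: frame on a spring, putty dropped from 0.30 m.
m_frame, m_putty = 0.150, 0.200      # kg
g, h, x0 = 9.80, 0.30, 0.050         # m/s^2, drop height in m, initial stretch in m

k = m_frame * g / x0                 # spring constant, 29.4 N/m
v2 = (2 * g * h) ** 0.5              # putty speed at impact, ~2.42 m/s
x = (m_putty * v2**2 / k) ** 0.5     # from (1/2)*m*v^2 = (1/2)*k*x^2, gives 0.20 m
print(k, v2, x, x0 + x)              # the thread's answer: 0.05 m + 0.20 m = 0.25 m
```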
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8633573055267334, "perplexity": 3281.393633657713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426372.41/warc/CC-MAIN-20170726182141-20170726202141-00405.warc.gz"}
https://www.physicsforums.com/threads/a-chem-problem-help-please.8951/
# A chem problem help please

1. Nov 14, 2003

### crayzlilgurl

I don't know how to answer this question. I have to find the number of moles of water in the hydrate BaCl2·H2O (sorry, I don't know how to make the 2 smaller). The mass of the hydrate is 5 g, and the mass of water lost is 0.7 g.

2. Nov 14, 2003

### Chemicalsuperfreak

It's pretty easy. You know that there was 0.7 grams of water originally. You should also know that water is 1/18 moles per gram. I can't make it any easier for you than that without telling you the answer, and that wouldn't do you any good.

3. Nov 15, 2003

### crayzlilgurl

Thanks... I know what to do now, it's just that all the numbers I was given confused the heck out of me...

4. Jan 30, 2007

### chris2009

I don't know either.
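Putting the hint into numbers (water's molar mass is about 18 g/mol):

```python
# Moles of water in the hydrate, from the 0.7 g of water driven off.
mass_water = 0.7            # g
molar_mass_water = 18.02    # g/mol
moles_water = mass_water / molar_mass_water
print(moles_water)          # roughly 0.039 mol
```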
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8858516812324524, "perplexity": 1728.340321770595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213691.59/warc/CC-MAIN-20180818154147-20180818174147-00584.warc.gz"}
https://www.mis.mpg.de/calendar/lectures/2015/abstract-17767.html
# Abstract for the talk at 10.08.2015 (11:00 h)

Arbeitsgemeinschaft ANGEWANDTE ANALYSIS

Jinniao Qiu (HU Berlin): Hörmander Type Theorem and Maximum Principle for Stochastic PDEs

We shall first discuss the Hörmander theorem for general Itô processes and related (Kolmogorov) forward/backward stochastic PDEs, which may be beyond the scope of the Markovian framework. Furthermore, we will also present the maximum principle for forward stochastic PDEs under a Hörmander type condition, which states the $L^p$ ($p \geq 2$) estimates for the time-space uniform norm of weak solutions.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8721134662628174, "perplexity": 3294.971320164756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00084.warc.gz"}
https://hal.inria.fr/hal-00764182
# Simplifying inclusion-exclusion formulas

1 VEGAS - Effective Geometric Algorithms for Surfaces and Visibility, Inria Nancy - Grand Est, LORIA - ALGO - Department of Algorithms, Computation, Image and Geometry

Abstract: Let $F = \{F_1, F_2, \ldots, F_n\}$ be a family of $n$ sets on a ground set $X$, such as a family of balls in $\mathbb{R}^d$. For every finite measure $\mu$ on $X$, such that the sets of $F$ are measurable, the classical inclusion-exclusion formula asserts that
$\mu(F_1 \cup F_2 \cup \cdots \cup F_n) = \sum_{\emptyset \neq I \subseteq [n]} (-1)^{|I|+1} \mu\bigl(\bigcap_{i \in I} F_i\bigr)$;
that is, the measure of the union is expressed using measures of various intersections. The number of terms in this formula is exponential in $n$, and a significant amount of research, originating in applied areas, has been devoted to constructing simpler formulas for particular families $F$. We provide the apparently first upper bound valid for an arbitrary $F$: we show that every system $F$ of $n$ sets with $m$ nonempty fields in the Venn diagram admits an inclusion-exclusion formula with $m^{O((\log n)^2)}$ terms and with $\pm 1$ coefficients, and that such a formula can be computed in $m^{O((\log n)^2)}$ expected time. We also construct systems of $n$ sets on $n$ points for which every valid inclusion-exclusion formula has the sum of absolute values of the coefficients at least $\Omega(n^{3/2})$.

Document type: Conference communication. European Conference on Combinatorics, Graph Theory and Applications, Sep 2013, Pisa, Italy. 2013
https://hal.inria.fr/hal-00764182
Contributor: Xavier Goaoc
Submitted on: Wednesday, December 12, 2012 - 15:12:50
Last modified: Thursday, September 22, 2016 - 14:31:10

### Identifiers
• HAL Id: hal-00764182, version 1
• ARXIV: 1207.2591

### Citation
Xavier Goaoc, Jiří Matoušek, Pavel Paták, Zuzana Safernová, Martin Tancer. Simplifying inclusion-exclusion formulas. European Conference on Combinatorics, Graph Theory and Applications, Sep 2013, Pisa, Italy. 2013. 〈hal-00764182〉
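To make the formula in the abstract concrete, here is a toy check with the counting measure on three small sets (the example sets are chosen arbitrarily); note that the straightforward sum already has $2^n - 1$ terms:

```python
from itertools import combinations

# Inclusion-exclusion with the counting measure: the size of the union
# is a signed sum over the sizes of all nonempty intersections.
F = [{1, 2, 3}, {2, 3, 4}, {3, 5}]
n = len(F)

total = 0
for k in range(1, n + 1):
    for idx in combinations(range(n), k):
        inter = set.intersection(*(F[i] for i in idx))
        total += (-1) ** (k + 1) * len(inter)

print(total, len(set.union(*F)))   # both 5; the sum used 2**n - 1 = 7 terms
```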
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9401430487632751, "perplexity": 3910.574336382296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823168.74/warc/CC-MAIN-20171018233539-20171019013539-00807.warc.gz"}
http://mathhelpforum.com/discrete-math/99409-cartesian-product-print.html
# Cartesian product

• Aug 27th 2009, 02:50 AM
The Lama
Cartesian product
My teacher wasn't able to explain this to me in a clear way. See if you guys can help.
Let A and B be two random sets. Prove that $A \times B (B \cap C) = (A \times B) \cap (A \cap C)$ by putting in the element (x,y).

• Aug 27th 2009, 04:31 AM
Defunkt
Sort of difficult to see what you want to prove like this... Did you mean: $A \times (B \times C) = (A \times B) \times (A \times C)$ ?

• Aug 27th 2009, 05:09 AM
The Lama
Sorry about that. Didn't know how to handle the LaTeX codes right. But I think I know what's going on now. But yes, that was what I meant. The book wants me to prove the equivalence by putting the element (x,y) into that.

• Aug 27th 2009, 06:32 AM
Plato
LaTeX help: typing $$A \times \left( {B \times C} \right) \ne \left( {A \times B} \right) \times \left( {A \times C} \right)$$ gives $A \times \left( {B \times C} \right) \ne \left( {A \times B} \right) \times \left( {A \times C} \right)$.
Why? $A \times \left( {B \times C} \right)$ is a set of triples. Whereas, $\left( {A \times B} \right) \times \left( {A \times C} \right)$ is a set of pairs made up of pairs.

• Aug 27th 2009, 07:17 AM
The Lama
Ok, this is how the question looks and what the key says.
$A \times (B \cap C) = (A \times B) \cap (A \times C)$
Let (x,y) be a random element in $A \times (B \cap C)$. Then we have that $x \in A$ and $y \in B \cap C$, which gives $y \in B$ and $y \in C$. So $(x,y) \in A \times B$ and $(x,y) \in A \times C$, hence $(x,y) \in (A \times B) \cap (A \times C)$, and now we can prove that $(A \times B) \cap (A \times C) \subseteq A \times (B \cap C)$.
But I don't get a grip on this. For example, why do you have to put the x in A and the y in $B \cap C$ and not the other way around? I would like to see this in a graphic 3D model so I could see what I was doing.

• Aug 27th 2009, 07:28 AM
Defunkt
Quote: Originally Posted by The Lama
"For example, why do you have to put the x in A and the y in $B \cap C$ and not the other way around?"
You need to show $x \in A$ and $y \in B \cap C$ because that's simply the order of the Cartesian product -- for example:
$X \times Y = \left\{ (x,y) : x \in X,\ y \in Y \right\}$
while
$Y \times X = \left\{ (y,x) : y \in Y,\ x \in X \right\}$
And those are ordered pairs, so: $(x,y) \neq (y,x) \Rightarrow X \times Y \neq Y \times X$.
Also, your wording is a bit off... You should say "Let $(x,y) \in {A \times B}$", or "Let $(x,y)$ be some element in $A \times B$"; random is not quite the right word here!

• Aug 27th 2009, 07:29 AM
Plato
Quote: Originally Posted by The Lama
"For example, why do you have to put the x in A and the y in $B \cap C$ and not the other way around?"
More LaTeX help: typing $$(x,y)\in A\times (B\cap C)$$ gives $(x,y)\in A\times (B\cap C)$.
This is simply the way a cross product is defined: $(x,y)\in W\times Z$ means $x\in W \text{ and } y\in Z$.

• Aug 27th 2009, 08:51 AM
The Lama
Quote: Originally Posted by Defunkt
"Also, your wording is a bit off... random is not quite the right word here!"
Alright. Thanks. I just tried to translate it as best I could from a Swedish compendium. Yeah, x and $\times$. Got a bit lazy there I guess.
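For reference, here is the element-chase from the thread written out compactly in both directions:

$$\begin{aligned} (x,y) \in A \times (B \cap C) &\iff x \in A \ \text{and}\ y \in B \cap C \\ &\iff x \in A,\ y \in B \ \text{and}\ x \in A,\ y \in C \\ &\iff (x,y) \in A \times B \ \text{and}\ (x,y) \in A \times C \\ &\iff (x,y) \in (A \times B) \cap (A \times C). \end{aligned}$$

Since every step is an equivalence, both inclusions, and hence the equality of the two sets, follow at once.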
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 62, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 2674.1768288324815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320077.32/warc/CC-MAIN-20170623170148-20170623190148-00631.warc.gz"}
http://nu7026.com/tag/lopinavir/
## Physical fitness can be explained as a couple of components that

Physical fitness can be defined as a set of components that determine the ability to perform physical activity and influence performance in sports. Measurements of muscle strength typically concentrate on the force generated by the elbow flexors or the leg extensors, at different angles of elbow flexion or knee extension. Strength can be measured with the muscle remaining at a fixed length (isometric) or while contracting (dynamic). The handgrip test, an easy and reliable measure, is by far the most popular measure for assessing isometric strength in epidemiological studies (Bohannon et al. 2011). For dynamic explosive strength, the vertical jump has been the most widely used test. Balance is a performance-related fitness component that relates to the maintenance of a stable body position (Caspersen et al. 1985), which is maintained by both sensory and motor systems (Tresch 2007). It can be measured using the Balance Error Scoring System (BESS), which is commonly used by researchers and clinicians and has a moderate to good reliability (Bell et al. 2011). Relevant articles were identified using these phenotypes as key words; in addition, the reference lists of these articles were inspected. Articles (all years) published in English and reporting twin correlations and/or heritability estimates of the vertical jump test, handgrip strength, balance and flexibility (sit-and-reach test) in a sample of children, adolescents and/or young adults up to the age of 30 were included, provided that these phenotypes were roughly similar (i.e., in protocol) to the phenotypes measured in the current study. These papers are shown in Table 1. For all studies, the unadjusted and univariate correlations and/or estimates were extracted, apart from the studies by Silventoinen et al. (2008) and Tiainen et al. (2004), who reported age-adjusted estimates only. Not all studies reported twin correlations, but all of them included an estimate of the heritability, so the meta-analyses were based on the heritability estimates. By weighting the heritability estimates from all studies by the number of participants, the weighted average heritability could be computed using Microsoft Excel (2010) (Li et al. 2003; Neyeloff et al. 2012). When the standard errors (SEs) or confidence intervals (CIs) of the heritability estimates were not reported, these were computed using the SEs or CIs from studies that did report these statistics (Li et al. 2003). All studies reported one (equated) heritability estimate for men and women, apart from Maes et al. (1996); these heritability estimates for men and women were treated as if they were independent samples. Results from the current study were also included in the meta-analyses. For consistency, univariate models were fitted to our four phenotypes and the resulting heritability estimates were used in the meta-analyses. The I² statistic was used to assess heterogeneity and was calculated as (Q − df)/Q. This gene seems to influence the performance of fast skeletal muscle fibers, and XX homozygotes may have modestly lower skeletal muscle strength in comparison with R-allele carriers (Yang et al. 2003).
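The sample-size weighting described above amounts to nothing more than a weighted mean; a toy sketch (the numbers are made up for illustration, not taken from the included studies):

```python
# Sample-size-weighted average of heritability estimates across studies.
h2 = [0.55, 0.62, 0.48]       # heritability estimates from individual studies (illustrative)
n = [350, 1200, 600]          # number of participants in each study (illustrative)

weighted_h2 = sum(h * w for h, w in zip(h2, n)) / sum(n)
print(round(weighted_h2, 3))  # studies with more participants pull the average harder
```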
No large-scale genome-wide association (GWA) studies have been carried out on these phenotypes, even though GWA has proven to be a successful approach to understanding the heritability of many health-related risk factors and diseases (Flint 2013; Visscher et al. 2012). This is unfortunate, because the components of physical fitness used in this study are relatively easy to measure (compared to, for example, maximal oxygen uptake) in large samples and display substantial heritability, suggesting that a GWA meta-analysis effort could be successful. Moreover, the moderate but significant genetic association between handgrip and vertical jump suggests that meta-analysis over genetic association studies that use similar traits is valid, and that the traits do not need to be exactly related to capture the latent genetic factors. Some limitations must be considered when interpreting our results. An important assumption underlying twin research is that twins are fully representative of the general population. Silventoinen et al. reported that singletons showed extra variation in measured strength and weight compared to twins, which could result in inflated heritability estimates (Silventoinen et al. 2008). Furthermore, the siblings in our study had a very wide age range (12-25), which may be an issue as younger siblings may be pubertal, compared to the remaining subjects. Inter-individual variation in maturation is an established factor that affects power and strength. However, whenever we.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8881620168685913, "perplexity": 3484.128277417088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585321.65/warc/CC-MAIN-20211020121220-20211020151220-00187.warc.gz"}
https://proofwiki.org/wiki/Orthogonal_Trajectories/Circles_Tangent_to_Y_Axis
# Orthogonal Trajectories/Circles Tangent to Y Axis

## Theorem

Consider the one-parameter family of curves:

$(1): \quad x^2 + y^2 = 2 c x$

which describes the loci of circles tangent to the $y$-axis at the origin.

Its family of orthogonal trajectories is given by the equation:

$x^2 + y^2 = 2 c y$

which describes the loci of circles tangent to the $x$-axis at the origin.

## Proof 1

Differentiating $(1)$ with respect to $x$ gives:

$2 x + 2 y \dfrac {\mathrm d y} {\mathrm d x} = 2 c$

from which:

$\dfrac {\mathrm d y} {\mathrm d x} = \dfrac {y^2 - x^2} {2 x y}$

Thus from Orthogonal Trajectories of One-Parameter Family of Curves, the family of orthogonal trajectories is given by:

$\dfrac {\mathrm d y} {\mathrm d x} = \dfrac {2 x y} {x^2 - y^2}$

Let:

$M \left({x, y}\right) = 2 x y, \qquad N \left({x, y}\right) = x^2 - y^2$

Put $t x, t y$ for $x, y$:

$M \left({t x, t y}\right) = 2 \left({t x}\right) \left({t y}\right) = t^2 \left({2 x y}\right) = t^2 M \left({x, y}\right)$

$N \left({t x, t y}\right) = \left({t x}\right)^2 - \left({t y}\right)^2 = t^2 \left({x^2 - y^2}\right) = t^2 N \left({x, y}\right)$

Thus both $M$ and $N$ are homogeneous functions of degree $2$.

Thus, by definition, $(1)$ is a homogeneous differential equation.

By Solution to Homogeneous Differential Equation, its solution is:

$\displaystyle \ln x = \int \frac {\mathrm d z} {f \left({1, z}\right) - z} + C$

where:

$f \left({x, y}\right) = \dfrac {2 x y} {x^2 - y^2}$

Thus:

$\displaystyle \ln x = \int \frac {\mathrm d z} {\dfrac {2 z} {1 - z^2} - z} + C_1 = \int \frac {\left({1 - z^2}\right) \mathrm d z} {z \left({1 + z^2}\right)} + C_1 = \int \frac {\mathrm d z} {z \left({1 + z^2}\right)} - \int \frac {z \, \mathrm d z} {1 + z^2} + C_1 = \frac 1 2 \ln \left({\frac {z^2} {z^2 + 1} }\right) - \frac 1 2 \ln \left({z^2 + 1}\right) + C_1 = \frac 1 2 \ln \left({\frac {z^2} {\left({z^2 + 1}\right)^2} }\right) + C_1$

Exponentiating and substituting $z = y/x$:

$C_2 x^2 = \dfrac {z^2} {\left({z^2 + 1}\right)^2}$

$C_3 x = \dfrac {y/x} {\left({y/x}\right)^2 + 1}$

$x^2 + y^2 = 2 C y$

$\blacksquare$

## Proof 2

Expressing $(1)$ in polar coordinates, we have:

$(2): \quad r = 2 c \cos \theta$

Differentiating $(2)$ with respect to $\theta$ gives:

$(3): \quad \dfrac {\mathrm d r} {\mathrm d \theta} = -2 c \sin \theta$

Eliminating $c$ from $(2)$ and $(3)$:

$r \dfrac {\mathrm d \theta} {\mathrm d r} = -\dfrac {\cos \theta} {\sin \theta}$

Thus from Orthogonal Trajectories of One-Parameter Family of Curves, the family of orthogonal trajectories is given by:

$r \dfrac {\mathrm d \theta} {\mathrm d r} = \dfrac {\sin \theta} {\cos \theta}$

Using the technique of Separation of Variables:

$\displaystyle \int \frac {\mathrm d r} r = \int \dfrac {\cos \theta} {\sin \theta} \, \mathrm d \theta$

which by Primitive of Reciprocal and various others gives:

$\ln r = \ln \sin \theta + \ln 2 c$

or:

$r = 2 c \sin \theta$

This can be expressed in Cartesian coordinates as:

$x^2 + y^2 = 2 c y$

Hence the result.

$\blacksquare$
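A quick symbolic check that the two families really are orthogonal (the slopes come from the derivations above; the script is just a sanity check, not part of the proof):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Slope of the family x^2 + y^2 = 2cx after eliminating c (derived above).
dy_family1 = (y**2 - x**2) / (2*x*y)
# Slope of the claimed orthogonal family x^2 + y^2 = 2cy.
dy_family2 = 2*x*y / (x**2 - y**2)

# Orthogonal curves have slopes whose product is -1 at every common point.
print(sp.simplify(dy_family1 * dy_family2))   # prints -1
```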
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8774504661560059, "perplexity": 172.95480340722833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575402.81/warc/CC-MAIN-20190922073800-20190922095800-00551.warc.gz"}
https://direct.mit.edu/neco/article-abstract/1/2/270/5490/A-Learning-Algorithm-for-Continually-Running-Fully?redirectedFrom=fulltext
The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have (1) the advantage that they do not require a precisely defined training interval, operating while the network runs; and (2) the disadvantage that they require nonlocal communication in the network being trained and are computationally expensive. These algorithms allow networks having recurrent connections to learn complex tasks that require the retention of information over time periods having either fixed or indefinite length.
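A minimal numerical sketch of the kind of algorithm the abstract describes (real-time recurrent learning for a small fully recurrent network): the sensitivities of the unit outputs with respect to every weight are carried forward in time, so the weights can be updated while the network runs, at the cost of the nonlocal, expensive update mentioned above. The network size, toy task and learning rate are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

# Fully recurrent net: y(t+1) = tanh(W z(t)) with z(t) = [x(t); y(t)].
rng = np.random.default_rng(0)
n_in, n_units = 2, 3
n_z = n_in + n_units
W = rng.normal(scale=0.1, size=(n_units, n_z))

y = np.zeros(n_units)
# Sensitivities p[k, i, j] = d y_k / d W_ij, updated forward in time.
p = np.zeros((n_units, n_units, n_z))
lr = 0.05

def fprime(s):
    return 1.0 - np.tanh(s) ** 2

for _ in range(1000):
    x = rng.normal(size=n_in)
    target = np.array([np.sin(x[0]), 0.0, x[1]])   # toy target for the unit outputs

    z = np.concatenate([x, y])
    s = W @ z
    y_new = np.tanh(s)

    # Sensitivity update:
    # p'[k,i,j] = f'(s_k) * ( sum_l W[k, n_in+l] * p[l,i,j] + delta_{k,i} * z_j )
    Wrec = W[:, n_in:]                                   # recurrent part of the weights
    p_new = np.einsum('kl,lij->kij', Wrec, p)
    p_new[np.arange(n_units), np.arange(n_units), :] += z  # delta_{k,i} * z_j term
    p_new *= fprime(s)[:, None, None]

    # Gradient of the instantaneous error E = 0.5 * ||target - y_new||^2
    e = target - y_new
    grad = -np.einsum('k,kij->ij', e, p_new)
    W -= lr * grad

    y, p = y_new, p_new
```

The sensitivity array `p` carries one entry per (unit, weight) pair, which is exactly where the heavy memory and compute cost of this class of algorithms comes from.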
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932245671749115, "perplexity": 734.3763342538857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00321.warc.gz"}
http://www.fotobooks.ca/sofas-made-wttll/viewtopic.php?id=wirtinger-derivatives-chain-rule-800ee6
Two days ago in Julia Lab, Jarrett, Spencer, Alan and I discussed the best ways of expressing derivatives for automatic differentiation in complex-valued programs. Complex Derivatives, Wirtinger View and the Chain Rule. To introduce the product rule, quotient rule, and chain rule for calculating derivatives To see examples of each rule To see a proof of the product rule's correctness In this packet the learner is introduced to a few methods by which derivatives of more … In order to master the techniques explained here it is vital that you undertake plenty of practice exercises so that they become second nature. I think we need a function chain in ChainRulesCore taking two differentials, which usually just falls back to multiplication, but if any of the arguments is a Wirtinger, treats the first argument as the partial derivative of the outer function and the second as the derivative of the inner function. The inner function is the one inside the parentheses: x 2-3.The outer function is √(x). 4:53 . Let’s solve some common problems step-by-step so you can learn to solve them routinely for yourself. Finally, for f(z) = h(g(z)) 5 h(w), g : C ++ C, the following chain rules hold [FL88, Rem891: A.2.2 Discussion The Wirtinger derivative can be considered to lie inbetween the real derivative of a real function and the complex derivative of a complex function. Cauchy … Implicit Differentiation – In this section we will discuss implicit differentiation. … Derivative Rules Derivative Rules (Sum and Difference Rule) (Chain Rule… 1. Thread starter squeeze101; Start date Oct 3, 2010; Tags chain derivatives rule wirtinger; … Need to review Calculating Derivatives that don’t require the Chain Rule? Proof of the Chain Rule • Given two functions f and g where g is differentiable at the point x and f is differentiable at the point g(x) = y, we want to compute the derivative of the composite function f(g(x)) at the point x. Most problems are average. The chain rule states formally that . 66–67). With the chain rule in hand we will be able to differentiate a much wider variety of functions. Wirtinger derivatives were used in complex analysis at least as early as in the paper (Poincaré 1899), as briefly noted by Cherry & Ye (2001, p. 31) and by Remmert (1991, pp. Let’s first notice that this problem is first and foremost a product rule problem. This is a product of two functions, the inverse tangent and the root and so the first thing we’ll need to do in taking the derivative is use the product rule. share | cite | improve this question | follow | asked Sep 23 at 13:52. 2 Chain rule for two sets of independent variables If u = u(x,y) and the two independent variables x,y are each a function of two new independent variables s,tthen we want relations between their partial derivatives. In the following discussion and solutions the derivative of a function h(x) will be denoted by or h'(x) . Ekin Akyürek January 25, 2019 Leave a reply. Derivatives - Quotient and Chain Rule and Simplifying Show Step-by-step Solutions. This calculus video tutorial explains how to find derivatives using the chain rule. The chain rule is by far the trickiest derivative rule, but it’s not really that bad if you carefully focus on a few important points. There are rules we can follow to find many derivatives.. For example: The slope of a constant value (like 3) is always 0; The slope of a line like 2x is 2, or 3x is 3 etc; and so on. 
I can't remember how to do the following derivative:
$$\frac{d}{d\epsilon}\left(\sqrt{1 + (y' + \epsilon g')^2}\right)$$
where $y, g$ are functions of … I'm coming back to maths (calculus of variations) after a long hiatus, and am a little rusty.

The chain rule is a rule for differentiating compositions of functions: in calculus it is the formula for computing the derivative of the composition of two or more differentiable functions (mc-TY-chain-2009-1: a special rule, the chain rule, exists for differentiating a function of another function). In English, the chain rule reads: the derivative of a composite function at a point is equal to the derivative of the inner function at that point, times the derivative of the outer function at its image. Whenever the argument of a function is anything other than a plain old $x$, you've got a composite function; for example, $y = \sqrt{x^2 - 3}$ has inner function $x^2 - 3$ and outer function $\sqrt{u}$. In Leibniz notation the chain rule says
$$\frac{du}{dx} = \frac{du}{dy}\,\frac{dy}{dx}.$$
The chain rule provides a technique for finding the derivative of composite functions, with the number of functions that make up the composition determining how many differentiation steps are necessary, and it extends to higher dimensions: for a multivariable composition, the total-derivative chain rule simply adds the corresponding partial-derivative terms. As you will see throughout the rest of your calculus courses, a great many of the derivatives you take will involve the chain rule.

Not every function can be explicitly written in terms of the independent variable, which is where implicit differentiation comes in. Let's look more closely at how $\frac{d}{dx}(y^2)$ becomes $2y\,\frac{dy}{dx}$. For the circle $x^2 + y^2 = r^2$, the quantity $r^2$ is a constant, so its derivative is $0$: $\frac{d}{dx}(r^2) = 0$. Using the chain rule, $\frac{d}{dx}(y^2) = 2y\,\frac{dy}{dx}$, which gives us $2x + 2y\,\frac{dy}{dx} = 0$. Collect all the $\frac{dy}{dx}$ terms on one side, $y\,\frac{dy}{dx} = -x$, and solve for $\frac{dy}{dx}$: $\frac{dy}{dx} = -\frac{x}{y}$.

Automatic differentiation (AD) has two fundamental operating modes for executing its chain-rule-based gradient calculation, known as the forward and reverse modes. To find the gradient of the output in forward mode, the derivatives of inner functions are substituted first, which consists of starting at the input; by tracing the computational graph from roots to leaves, the gradients can be computed automatically using the chain rule.

In complex analysis of one and several complex variables, Wirtinger derivatives (sometimes also called Wirtinger operators), named after Wilhelm Wirtinger, who introduced them in 1927 in the course of his studies on the theory of functions of several complex variables, are partial differential operators of the first order which behave in a very similar manner to the ordinary derivatives. Wirtinger's calculus [15] has become very popular in the signal processing community, mainly in the context of complex adaptive filtering [13, 7, 1, 2, 12, 8, 4, 10], as a means of computing, in an elegant way, gradients of real-valued cost functions defined on complex domains ($\mathbb{C}^\nu$); such functions are obviously not holomorphic, and therefore the complex derivative cannot be used. Using the chain rule I get
$$\frac{\partial F}{\partial\bar{z}} = \frac{\partial F}{\partial x}\cdot\frac{\partial x}{\partial\bar{z}} + \frac{\partial F}{\partial y}\cdot\frac{\partial y}{\partial\bar{z}},$$
and this is the point where I know something is going wrong. The first way is to just use the definition of the Wirtinger derivatives directly and calculate $\frac{\partial s}{\partial z}$ and $\frac{\partial s}{\partial z^*}$ by using $\frac{\partial s}{\partial x}$ and $\frac{\partial s}{\partial y}$ (which you can compute in the normal way). Similarly, we can look at complex variables and consider the equation in Wirtinger derivatives
$$(\partial_{\bar z} f)(z) + g(z) f(z)=0.$$
Can one still write down an explicit solution? And what is the correct generalization of the Wirtinger derivatives to arbitrary Clifford algebras? (real-analysis, ap.analysis-of-pdes, cv.complex-variables)

Load-flow calculations are indispensable in power systems operation, … A Newton-based method is proposed in which the Jacobian is replaced by Wirtinger derivatives, obtaining a compact representation; despite being a mature theory, Wirtinger's calculus has not been applied before to this type of problem. Simulation results complement the analysis. Having been inspired by this discussion, I want to share my understanding of the subject and eventually present a chain rule …
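As a concrete illustration of the Wirtinger chain-rule computation above, here is a minimal SymPy sketch. It assumes the standard definitions $\partial_z = \tfrac{1}{2}(\partial_x - i\partial_y)$ and $\partial_{\bar z} = \tfrac{1}{2}(\partial_x + i\partial_y)$, and uses $F = |z|^2$ purely as an example cost function; none of this comes from the sources quoted above.

```python
# Minimal sketch: Wirtinger derivatives of a real-valued cost via the chain rule.
# Assumed definitions: dF/dz = (dF/dx - i*dF/dy)/2, dF/dzbar = (dF/dx + i*dF/dy)/2.
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

F = x**2 + y**2            # example cost |z|^2, written in real coordinates

dF_dz    = sp.simplify((sp.diff(F, x) - sp.I * sp.diff(F, y)) / 2)
dF_dzbar = sp.simplify((sp.diff(F, x) + sp.I * sp.diff(F, y)) / 2)

print(dF_dz)      # x - I*y, i.e. conjugate(z)
print(dF_dzbar)   # x + I*y, i.e. z
```

The output matches the hand computation: for $F=|z|^2 = z\bar z$ one expects $\partial F/\partial z = \bar z$ and $\partial F/\partial \bar z = z$.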
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9283843636512756, "perplexity": 514.4589475858875}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499524.28/warc/CC-MAIN-20230128054815-20230128084815-00471.warc.gz"}
https://chem.libretexts.org/Textbook_Maps/Physical_and_Theoretical_Chemistry_Textbook_Maps/Map%3A_Physical_Chemistry_(McQuarrie_and_Simon)/06%3A_The_Hydrogen_Atom/6.E%3A_The_Hydrogen_Atom_(Exercises)
# 6.E: The Hydrogen Atom (Exercises)

These are homework exercises to accompany Chapter 6 of McQuarrie and Simon's "Physical Chemistry: A Molecular Approach" Textmap.

### Q6.5

$$\int_{-1}^{1} T_n(x)T_m(x) \frac{1}{\sqrt{1-x^2}}dx= \begin{cases} 0, & n \neq m \\ \pi, & n=m=0 \\ \pi/2, & n=m \neq 0 \end{cases}$$

First 6 Chebyshev polynomials:

$$T_0(x)=1$$ $$T_1(x)=x$$ $$T_2(x)=2x^2-1$$ $$T_3(x)=4x^3-3x$$ $$T_4(x)=8x^4-8x^2+1$$ $$T_5(x)=16x^5-20x^3+5x$$

Use the orthogonality of the Chebyshev polynomials to determine what the following integrals are equal to.

1. $$\int_{-1}^{1} x^2 \frac{dx}{\sqrt{1-x^2}}$$
2. $$\int_{-1}^{1} (4x^3-2x) \frac{dx}{\sqrt{1-x^2}}$$
3. $$\int_{-1}^{1} 1 \frac{dx}{\sqrt{1-x^2}}$$
4. $$\int_{-1}^{1} (4x^4-4x^2+1) \frac{dx}{\sqrt{1-x^2}}$$

### S6.5

1. $$x^2 = T_1 T_1$$, so the integral is $$\pi/2$$.
2. $$4x^3-2x = T_3 + T_1 = (T_3 + T_1)T_0$$ is not a product of two Chebyshev polynomials, but orthogonality of $$T_3, T_0$$ and of $$T_1, T_0$$ (or simply the odd symmetry of the integrand) gives $$0$$.
3. $$1 = T_0 T_0$$, so the integral is $$\pi$$.
4. $$4x^4-4x^2+1 = T_2 T_2$$, so the integral is $$\pi/2$$.

### Q6.6

Use Eq. 6.47 to generate the radial functions $$R_{nl}\left(r\right)$$ for $$n=1,2$$.

### S6.6

$R_{10}\left(r\right)={\left\{\dfrac{\left(1-0-1\right)!}{2\left(1\right){\left[\left(1+0\right)!\right]}^3}\right\}}^{\dfrac{1}{2}}{\left(\dfrac{2}{1a_0}\right)}^{\dfrac{0+3}{2}}r^0e^{-\dfrac{r}{1a_0}}L^1_1\left(\dfrac{2r}{1a_0}\right)$

$R_{10}\left(r\right)=-{\left\{\dfrac{1}{2}\right\}}^{\dfrac{1}{2}}{\left(\dfrac{2}{a_0}\right)}^{\dfrac{3}{2}}e^{-\dfrac{r}{a_0}}$

$R_{20}\left(r\right)={\left\{\dfrac{\left(2-0-1\right)!}{2\left(2\right){\left[\left(2+0\right)!\right]}^3}\right\}}^{\dfrac{1}{2}}{\left(\dfrac{2}{2a_0}\right)}^{\dfrac{0+3}{2}}r^0e^{-\dfrac{r}{2a_0}}L^1_2\left(\dfrac{2r}{2a_0}\right)$

$R_{20}\left(r\right)={\left\{\dfrac{1}{32}\right\}}^{\dfrac{1}{2}}{\left(\dfrac{1}{a_0}\right)}^{\dfrac{3}{2}}e^{-\dfrac{r}{2a_0}}\left(-2!\left(2-\dfrac{r}{a_0}\right)\right)$

$R_{20}\left(r\right)=-2{\left\{\dfrac{1}{32}\right\}}^{\dfrac{1}{2}}{\left(\dfrac{1}{a_0}\right)}^{\dfrac{3}{2}}e^{-\dfrac{r}{2a_0}}\left(2-\dfrac{r}{a_0}\right)$

$R_{21}\left(r\right)={\left\{\dfrac{\left(2-1-1\right)!}{2\left(2\right){\left[\left(2+1\right)!\right]}^3}\right\}}^{\dfrac{1}{2}}{\left(\dfrac{2}{2a_0}\right)}^{\dfrac{1+3}{2}}r^1e^{-\dfrac{r}{2a_0}}L^3_3\left(\dfrac{2r}{2a_0}\right)$

$R_{21}\left(r\right)=-6{\left\{\dfrac{1}{864}\right\}}^{\dfrac{1}{2}}{\left(\dfrac{1}{a_0}\right)}^2r^1e^{-\dfrac{r}{2a_0}}$

### Q6.29

Compare $$\psi_{310}$$ and $$\psi_{311}$$. Hint: What do the subscripts tell you about the wave function? What do they denote?

### S6.29

The first subscript tells you the principal quantum number $$n$$. The second denotes the angular momentum quantum number $$l$$. The last denotes the magnetic quantum number $$m_l$$. These two functions have the same $$n$$ value, and thus they are degenerate.
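Returning to Q6.5/S6.5 above, the four answers can be spot-checked with SymPy; the helper name `cheb_integral` is only for this sketch.

```python
import sympy as sp

x = sp.symbols('x')

def cheb_integral(p):
    """Integrate p(x)/sqrt(1 - x^2) over [-1, 1], i.e. against the Chebyshev weight."""
    return sp.simplify(sp.integrate(p / sp.sqrt(1 - x**2), (x, -1, 1)))

print(cheb_integral(x**2))                 # pi/2  (x^2 = T1*T1)
print(cheb_integral(4*x**3 - 2*x))         # 0     (T3*T0 and T1*T0 terms both vanish)
print(cheb_integral(sp.Integer(1)))        # pi    (1 = T0*T0)
print(cheb_integral(4*x**4 - 4*x**2 + 1))  # pi/2  (T2*T2)
```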
### Q6.30

What is the probability density of the 3p orbital? Evaluate

$\left (\sum_{m=-1}^{1}\psi_{31m}^{2}\right )$

$\sum_{m=-1}^{1}\psi_{31m}^{2}=\left (\dfrac{2}{6561\pi}\right )\left (\dfrac{z^{3}}{a_o^{3}}\right )\sigma^{2}\left (6-\sigma\right )^{2}e^{-2\sigma/3} \left (\cos^{2} \theta+\sin^{2} \theta \cos^{2} \phi + \sin^{2} \theta \sin^{2} \phi \right )$

$\sum_{m=-1}^{1}\psi_{31m}^{2}=\left (\dfrac{2z^{3}\sigma^{2} \left (6-\sigma\right )^{2}e^{-2\sigma/3}}{6561\pi a_o^{3}}\right ) \left (\cos^{2}\theta+\sin^{2}\theta \left(\cos^{2}\phi+\sin^{2}\phi \right ) \right)$

$\sum_{m=-1}^{1}\psi_{31m}^{2}=\dfrac{2z^{3}\sigma^{2}\left (6-\sigma\right )^{2}e^{-2\sigma/3}}{6561\pi a_o^{3}}$

### Q6.34

Find the energy and wavefunctions for a single electron located in the 2p orbital of the hydrogen atom. Include all possible wavefunctions.

### S6.34

Identify the quantum numbers for the electron of interest (in our case, $$n=2$$; $$l =1$$). The energy of the electron can be defined as

$E_n = \dfrac{-m_ee^4}{8n^2\epsilon_o^2h^2}$

which leads us to

$E_2 = \dfrac{-m_ee^4}{32\epsilon_o^2h^2}$

The possible wavefunctions are

$\Psi_{210}= \dfrac{1}{4\sqrt{2\pi}} \left(\dfrac{z}{a_o}\right)^{3/2}\sigma e^{-\sigma/2} \cos{\theta}$

and

$\Psi_{21\pm1}= \dfrac{1}{8\sqrt{\pi}} \left(\dfrac{z}{a_o}\right)^{3/2}\sigma e^{-\sigma/2} \sin{\theta}\,e^{\pm i\phi }$

### Q6.37

The Hamiltonian $$\hat{H} = \dfrac{-\hbar^2}{2m}\nabla^2 + V$$ is a Hermitian operator. Using this fact, show that

$\int{\psi^*[\hat{H},\hat{A}]\psi} d\tau = 0$

where $$\hat{A}$$ is any operator.

### S6.37

Expanding the commutator,

$\int{\psi^*[\hat{H},\hat{A}]\psi}\, d\tau = \int{\psi^*\hat{H}\hat{A}\psi}\, d\tau - \int{\psi^*\hat{A}\hat{H}\psi}\, d\tau$

Because $$\hat{H}$$ is a Hermitian operator (and $$\psi$$ is an eigenfunction of $$\hat{H}$$ with eigenvalue $$E$$), the above goes to

$\int{(\hat{H}\psi)^*\hat{A}\psi}\, d\tau - \int{\psi^*\hat{A}(\hat{H}\psi)}\, d\tau = E\int{\psi^*\hat{A}\psi}\, d\tau - E\int{\psi^*\hat{A}\psi}\, d\tau= 0$

### Q6.38

Prove that $$\langle{\hat{K}}\rangle \ = \ \langle{V}\rangle = E/2$$ for a harmonic oscillator using the virial theorem.

### S6.38

The virial theorem gives us

$\Bigg\langle{x\dfrac{\partial V}{\partial x} + y\dfrac{\partial V}{\partial y} + z\dfrac{\partial V}{\partial z}}\Bigg\rangle = 2\langle{\hat{K}}\rangle$

For a three-dimensional harmonic oscillator,

$V(x,y,z) = \dfrac{k_xx^2}{2} + \dfrac{k_yy^2}{2} + \dfrac{k_zz^2}{2}$

Therefore,

$x\dfrac{\partial V}{\partial x} + y\dfrac{\partial V}{\partial y} + z\dfrac{\partial V}{\partial z} = k_xx^2 + k_yy^2 + k_zz^2 = 2V$

and substituting into the equation given by the virial theorem gives us $$2\langle{V}\rangle = 2\langle{\hat{K}}\rangle$$. Because $$\langle{\hat{K}}\rangle + \langle{V}\rangle = E$$, we can also write

$\langle{\hat{K}}\rangle = \langle{V}\rangle = \dfrac{1}{2}E$

### Q6.41

Find the expected values of $$1/r$$ and $$1/r^2$$ for a hydrogenlike atom in the $$2p_z$$ orbital.
### S6.41

The $$2p_z$$ orbital is

$\psi_{210} = \dfrac{1}{4\sqrt{2\pi}} \left(\dfrac{Z}{a_0}\right)^{3/2}\rho\, e^{-\rho/2} \cos\theta, \qquad \rho=\dfrac{Zr}{a_0}$

so that

$\langle 1/r \rangle_{\psi_{210}} = \int_{0}^{2\pi} d\phi \int_{0}^{\pi}\cos^2\theta\,\sin\theta\, d\theta \int_{0}^{\infty} \dfrac{1}{32\pi}\left(\dfrac{Z}{a_0}\right)^{3}\rho^2 e^{-\rho}\,\dfrac{1}{r}\,r^2\,dr$

The angular integrals give

$\int_{0}^{2\pi} d\phi = 2\pi \qquad \text{and} \qquad \int_{0}^{\pi}\cos^2\theta\,\sin\theta\, d\theta = \dfrac{2}{3}$

and the radial integral gives

$\dfrac{1}{32\pi}\left(\dfrac{Z}{a_0}\right)^{3}\left(\dfrac{Z}{a_0}\right)^{2}\int_{0}^{\infty} r^3 e^{-Zr/a_0}\,dr = \dfrac{1}{32\pi}\left(\dfrac{Z}{a_0}\right)^{5}\dfrac{3!}{(Z/a_0)^{4}} = \dfrac{3Z}{16\pi a_0}$

Putting it together,

$\langle 1/r \rangle_{\psi_{210}} = (2\pi)\left(\dfrac{2}{3}\right)\dfrac{3Z}{16\pi a_0} = \dfrac{Z}{4a_0}$

For the hydrogen atom $$Z=1$$, therefore

$\langle 1/r \rangle_{\psi_{210}} = \dfrac{1}{4a_0}$

For $$\langle 1/r^2 \rangle$$ the angular integrals are the same, and the radial integral becomes

$\dfrac{1}{32\pi}\left(\dfrac{Z}{a_0}\right)^{3}\left(\dfrac{Z}{a_0}\right)^{2}\int_{0}^{\infty} r^2 e^{-Zr/a_0}\,dr = \dfrac{1}{32\pi}\left(\dfrac{Z}{a_0}\right)^{5}\dfrac{2!}{(Z/a_0)^{3}} = \dfrac{Z^2}{16\pi a_0^2}$

so

$\langle 1/r^2 \rangle_{\psi_{210}} = (2\pi)\left(\dfrac{2}{3}\right)\dfrac{Z^2}{16\pi a_0^2} = \dfrac{Z^2}{12 a_0^2}$

and with $$Z=1$$,

$\langle 1/r^2 \rangle_{\psi_{210}} = \dfrac{1}{12 a_0^2}$

### Q6.43

Derive the classical magnetic moment of an electron orbiting a nucleus in terms of charge, mass and angular momentum.

### S6.43

We can begin by recalling the classical expression for a magnetic moment,

$\mu = I A$

where $$I$$ is the current the electron makes by revolving around the nucleus and $$A$$ is the area of the loop the electron traces. The definition of current is

$I = \dfrac{Q}{t}$

In this case $$Q$$ is simply the charge $$q_e$$ of the electron and $$t$$ is the time it takes the electron to orbit the nucleus once. We also know from classical mechanics that $$x=vt$$; solving for $$t$$ and evaluating $$x$$ to be $$2\pi r$$ for a circle, the time of revolution is

$t = \dfrac{x}{v}= \dfrac{2\pi r}{v}$

Our current equation becomes

$I = \dfrac{q_ev}{2\pi r}$

To introduce the angular momentum $$L=m_evr$$ we can multiply the right side of the current equation by $$\dfrac{m_er}{m_er}$$ to arrive at

$I = \dfrac{q_em_evr}{2\pi m_er^2} = \dfrac{q_eL}{2\pi m_er^2}$

Substituting in the area of a circle $$(\pi r^2)$$, we can show that

$\boxed{\mu = I A = \dfrac{q_eL}{2m_e}}$

### Q6.46

Find the magnitude of the splitting shown in the figure below. The magnetic field in the figure is 20 T.

### S6.46

We know from a previous problem that

$\Delta E = E_{2} - E_{1} = \beta _{e}B_{z}(m_2 - m_1)$

In the $$1s$$ state $$m = 0$$, and in the $$2p$$ state $$m = 0, \pm 1$$. This makes $$(m_{2} - m_{1})$$ equal to $$0$$ or $$\pm 1$$, which sets the magnitude of the splitting:

$\Delta E = (9.274 \times 10^{-24}\ \text{J T}^{-1})(20\ \text{T})(1) = 1.8548 \times 10^{-22}\ \text{J}$

(or zero for the $$\Delta m = 0$$ transitions).

### Q6.47

Consider the transition between the $$l=1$$ and the $$l=2$$ states for atomic hydrogen. Determine the total number of possible allowed transitions between these two states in an external magnetic field given the following selection rules:

1. Light whose electric field vector is parallel to the external magnetic field's direction has a selection rule of $$\Delta m=0$$ for allowed transitions.
2. Light whose electric field vector is perpendicular to the external magnetic field's direction has a selection rule of $$\Delta m=\pm 1$$ for allowed transitions.
### S6.47

An external magnetic field splits a state with given values $$n$$ and $$l$$ into $$2l+1$$ levels. So the $$l=1$$ state will be split into three states ($$m=0, \pm 1$$) and the $$l=2$$ state will be split into five states ($$m=0, \pm 1, \pm 2$$). This means that the $$l=1 \rightarrow l=2$$ transition has 15 conceivable level combinations (ignoring any selection rules that reduce this number).

1. Using the selection rule $$\Delta m=0$$, three transitions are possible: $$m=0 \rightarrow 0$$, $$m=1 \rightarrow 1$$, $$m=-1 \rightarrow -1$$, all with light polarized parallel to the field.
2. Using the selection rule $$\Delta m= \pm 1$$, six transitions are possible:

| $$l=1 \rightarrow l=2$$ transition | Relative orientation of light polarization to magnetic field |
|---|---|
| m=0 → m=1 | perpendicular |
| m=0 → m=−1 | perpendicular |
| m=1 → m=2 | perpendicular |
| m=1 → m=0 | perpendicular |
| m=−1 → m=−2 | perpendicular |
| m=−1 → m=0 | perpendicular |

This gives 3 + 6 = 9 allowed transitions in total.

### Q6.49

Prove that $$\hat{L}_+\hat{L}_- - \hat{L}_-\hat{L}_+ = 2\hbar\hat{L}_z$$ given that $$\hat{L}_+ = \hat{L}_x + i\hat{L}_y$$ and $$\hat{L}_- = \hat{L}_x - i\hat{L}_y$$.

### S6.49

$$\hat{L}_+\hat{L}_- = (\hat{L}_x + i\hat{L}_y)(\hat{L}_x - i\hat{L}_y) = \hat{L}_x^2 + \hat{L}_y^2 - i\hat{L}_x\hat{L}_y + i\hat{L}_y \hat{L}_x = \hat{L}_x^2 + \hat{L}_y^2 + i[\hat{L}_y,\hat{L}_x]$$

$$\hat{L}_+\hat{L}_- = \hat{L}^2 - \hat{L}_z^2 +\hbar \hat{L}_z$$

and

$$\hat{L}_-\hat{L}_+ = (\hat{L}_x - i\hat{L}_y)(\hat{L}_x + i\hat{L}_y) = \hat{L}_x^2 + \hat{L}_y^2 +i\hat{L}_x\hat{L}_y - i\hat{L}_y \hat{L}_x = \hat{L}_x^2 + \hat{L}_y^2 +i[\hat{L}_x,\hat{L}_y]$$

$$\hat{L}_-\hat{L}_+ = \hat{L}^2 - \hat{L}_z^2 - \hbar \hat{L}_z$$

thus

$$\hat{L}_+\hat{L}_- - \hat{L}_-\hat{L}_+ = \hat{L}^2 - \hat{L}_z^2 +\hbar \hat{L}_z - \hat{L}^2 + \hat{L}_z^2 + \hbar \hat{L}_z = 2\hbar \hat{L}_z$$

### Q6.49

Does the commutative property apply to $$\hat{L}_{-}\hat{L}_{+}$$, i.e., is $$\hat{L}_{-}\hat{L}_{+} = \hat{L}_{+}\hat{L}_{-}$$?

### S6.49

No. With $$\hat{L}_{\pm} = \hat{L}_{x} \pm i \hat{L}_{y}$$,

$\hat{L}_{-}\hat{L}_{+}=[\hat{L}_x -i\hat{L}_y][\hat{L}_x + i \hat{L}_y] = \hat{L}_{x}^2 + i \hat{L}_{x} \hat{L}_{y} - i \hat{L}_{y}\hat{L}_{x} + \hat{L}_{y}^2 = \hat{L}_{x}^2 + \hat{L}_{y}^2 + i[\hat{L}_x,\hat{L}_y]$

and

$\hat{L}_{+}\hat{L}_{-}= [\hat{L}_{x} + i\hat{L}_{y}][\hat{L}_{x}-i\hat{L}_{y}] = \hat{L}_{x}^2 - i \hat{L}_{x}\hat{L}_{y} + i\hat{L}_{y}\hat{L}_{x} + \hat{L}_{y}^2 = \hat{L}_{x}^2 + \hat{L}_{y}^2 - i[\hat{L}_x,\hat{L}_y]$

Because $$\hat{L}_x$$ and $$\hat{L}_y$$ do not commute ($$[\hat{L}_x,\hat{L}_y]=i\hbar\hat{L}_z$$), the cross terms do not cancel, and

$\hat{L}_{+}\hat{L}_{-} - \hat{L}_{-}\hat{L}_{+} = -2i[\hat{L}_x,\hat{L}_y] = 2\hbar\hat{L}_z$

consistent with the previous problem. The commutative property does not apply.

### Q7.29

Calculate the ground-state energy for the particle-in-a-box model using the variational method.

### S7.29

The variational-method expression is

$E_\phi=\dfrac{\langle\phi | \hat{H}| \phi\rangle}{\langle\phi|\phi\rangle}$

where the trial wavefunction need not be normalized. Take the particle-in-a-box trial function

$\phi(x)=A \sin \left(\dfrac{n\pi x}{L}\right)$

Then

$\langle\phi | \phi\rangle = \dfrac{A^2 L}{2} \qquad \text{and} \qquad \langle \phi | \hat{H}| \phi\rangle = \dfrac{n^2 h^2}{8mL^2}\cdot \dfrac{A^2 L}{2}$

so

$E_\phi = \dfrac{n^2h^2}{8mL^2}$

and for the ground state ($$n=1$$)

$E_\phi = \dfrac{h^2}{8mL^2}$

### Q6.50

If two operators commute, they have mutual eigenfunctions, such as $$\hat{L}^2$$ and $$\hat{L}_z$$. These mutual eigenfunctions are also known as spherical harmonics, $$Y_l^m(\theta, \phi)$$; however, this information is not pertinent in this case.
Let $$\psi_{\alpha\beta}$$ be a mutual eigenfunction of $$\hat{L}^2$$ and $$\hat{L}_z$$, so that

$\hat{L}^2 \psi_{\alpha\beta} = \beta^2\psi_{\alpha\beta} \qquad \text{and} \qquad \hat{L}_z \psi_{\alpha\beta} = \alpha\psi_{\alpha\beta}$

Now let

$\psi^{+1}_{\alpha\beta} = \hat{L}_+\psi_{\alpha\beta}$

Show that

$\hat{L}_z\psi^{+1}_{\alpha\beta} = (\alpha + \hbar)\psi^{+1}_{\alpha\beta} \qquad \text{and} \qquad \hat{L}^2\psi^{+1}_{\alpha\beta} = \beta^2\psi^{+1}_{\alpha\beta}$

This proves that if $$\alpha$$ is an eigenvalue of $$\hat{L}_z$$, then $$\alpha + \hbar$$ also is an eigenvalue.

### S6.50

Starting from the definition $$\psi^{+1}_{\alpha\beta} = \hat{L}_+\psi_{\alpha\beta}$$,

$\hat{L}_z \psi^{+1}_{\alpha\beta} = \hat{L}_z \hat{L}_+\psi_{\alpha\beta} = (\hat{L}_z \hat{L}_x + i\hat{L}_z \hat{L}_y)\psi_{\alpha\beta}$

$= \left([\hat{L}_z, \hat{L}_x] + \hat{L}_x \hat{L}_z + i[\hat{L}_z, \hat{L}_y] + i\hat{L}_y \hat{L}_z\right)\psi_{\alpha\beta}$

$= \left(i\hbar\hat{L}_y + \hat{L}_x\hat{L}_z + \hbar\hat{L}_x + i\hat{L}_y\hat{L}_z\right)\psi_{\alpha\beta}$

$= \left(\hat{L}_+\hat{L}_z + \hbar\hat{L}_+\right)\psi_{\alpha\beta} = \hat{L}_+(\alpha + \hbar)\psi_{\alpha\beta} = (\alpha + \hbar)\psi^{+1}_{\alpha\beta}$

Therefore proven: $$\alpha + \hbar$$ is an eigenvalue of $$\hat{L}_z$$.

Finally, since $$\hat{L}^2$$ commutes with $$\hat{L}_x$$ and $$\hat{L}_y$$ (and hence with $$\hat{L}_+$$), you can write

$\hat{L}^2\psi^{+1}_{\alpha\beta} = \hat{L}^2\hat{L}_+\psi_{\alpha\beta} = (\hat{L}^2\hat{L}_x + i\hat{L}^2\hat{L}_y)\psi_{\alpha\beta}$

$= \left([\hat{L}^2,\hat{L}_x] + \hat{L}_x\hat{L}^2 + i[\hat{L}^2,\hat{L}_y] + i\hat{L}_y\hat{L}^2\right)\psi_{\alpha\beta}$

$= (\hat{L}_x + i\hat{L}_y)\hat{L}^2\psi_{\alpha\beta} = \hat{L}_+\beta^2\psi_{\alpha\beta} = \beta^2\psi^{+1}_{\alpha\beta}$

Therefore proven: $$\psi^{+1}_{\alpha\beta}$$ is an eigenfunction of $$\hat{L}^2$$ with the same eigenvalue $$\beta^2$$.
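The operator identity proved in S6.49, and the fact that $$\hat{L}_+$$ and $$\hat{L}_-$$ therefore do not commute, can be spot-checked with explicit matrices. The sketch below uses the standard $$l=1$$ matrix representation with $$\hbar=1$$; the variable names are mine, not from the text.

```python
# Matrix check (l = 1, hbar = 1) of L+L- - L-L+ = 2*Lz.
import numpy as np

s2 = np.sqrt(2.0)
Lz = np.diag([1.0, 0.0, -1.0])                  # basis ordered m = +1, 0, -1
Lp = s2 * np.array([[0, 1, 0],                  # L+ raises m by one unit
                    [0, 0, 1],
                    [0, 0, 0]], dtype=float)
Lm = Lp.T                                       # L- is the adjoint of L+ (real here)

print(np.allclose(Lp @ Lm - Lm @ Lp, 2 * Lz))   # True: the identity holds
print(np.allclose(Lp @ Lm, Lm @ Lp))            # False: L+ and L- do not commute
```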
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9464876055717468, "perplexity": 1211.421183993368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945668.34/warc/CC-MAIN-20180422232447-20180423012447-00594.warc.gz"}
https://www.physicsforums.com/threads/need-help-finding-frictional-force-in-torque-problem.651394/
# Need help finding frictional force in torque problem

1. Nov 11, 2012

### EmptyMerc

1. The problem statement, all variables and given/known data

A ladder having uniform density, a length of 3.4 meters and a weight of 121 N rests against a frictionless vertical wall, making an angle of 75 degrees with the horizontal. The lower end rests on a flat surface, where the coefficient of static friction is Mu = 0.400. A painter having a mass of 80 kg attempts to climb the ladder. How far up the ladder will the painter be when the ladder begins to slip?

2. Relevant equations

Torque equations and frictional force equations

3. The attempt at a solution

So using the torque equations I got x = [Fwall * 3.4 * sin(75) - 121 * (3.4/2) * sin(165)] / (80 * 9.81 * sin(165))

where x is the distance from the pivot point and Fwall = frictional force. But I think I'm having a problem finding the frictional force. I know frictional force = Mu(N), where N is the normal force of the ground. So to find N I use the formula 121 + 80(9.81), which equals 905.8 N and gives a frictional force of 0.4(905.8) = 362.32 N. But when I plug it in I get x = 5.6 m, which is taller than the ladder and leads me to believe that is not the right way to calculate the frictional force. So is that the right way to calculate the frictional force, or is there another way? Help is much appreciated.

2. Nov 11, 2012

### Spinnor

3. Nov 11, 2012

### Simon Bridge

Do the algebra before putting the numbers in ... helps with troubleshooting. Otherwise working out what you are doing involves some pain sifting through the numbers.

A ladder of mass $M$ and length $L$ sits on a horizontal floor with friction coefficient $\mu$, leaning at angle $\theta$ (to the horizontal) against a frictionless vertical wall. A painter-being, mass m, climbs the ladder. We need to know how far, x, along the ladder the being can go without slipping.

So the torque about the floor pivot (say) due to the weights would be $\tau=(mgx+MgL/2)\cos(\theta)$, for example... and the force down the length of the ladder towards the floor-pivot would be $F=(m+M)g\sin(\theta)$ ... this force has a component at the pivot that is directly down and another that is horizontal away from the wall.

Try reworking your math this way - it should be clearer.

4. Nov 11, 2012

### EmptyMerc

Yea it is a lot clearer. I still come up with the same answer, and judging on how the book did it I think it is correct now. Really appreciate the help though!

5. Nov 11, 2012

### Simon Bridge

Yep - sometimes reworking a problem can clear up that feeling of uncertainty. When you are presenting your working to someone else, it helps them understand you if you use the symbolic/algebraic form rather than the absolute/numerical form. It would have been a lot of work for me to figure out if you'd done it right or not, so I just tried to get you to do the work instead :)
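For reference, here is a short numerical version of the torque balance (variable names are mine, not from the thread). It reproduces the x ≈ 5.6 m quoted in post #1; since that exceeds the 3.4 m ladder length, the ladder would not slip for any position the painter can actually reach, which matches the conclusion reached in post #4.

```python
# At the point of slipping: wall force = mu*N, with N = total weight, and the
# torque balance about the base reads
#   mu*(M + m)*g * L*sin(theta) = W_ladder*(L/2)*cos(theta) + W_painter*x*cos(theta)
import math

mu, L, theta = 0.400, 3.4, math.radians(75)
W_ladder, m_painter, g = 121.0, 80.0, 9.81
M_ladder = W_ladder / g

N = (M_ladder + m_painter) * g            # normal force from the floor
f_max = mu * N                            # maximum static friction = wall force
x = (f_max * L * math.tan(theta) - W_ladder * (L / 2)) / (m_painter * g)
print(round(x, 2))                        # ~5.6 m, longer than the 3.4 m ladder
```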
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8921877145767212, "perplexity": 624.6662423927642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891277.94/warc/CC-MAIN-20180122093724-20180122113724-00712.warc.gz"}
http://mathhelpforum.com/algebra/102988-how-achieve-desired-result-reducing-numerator-denominator-simultaneously-print.html
# How to achieve a desired result by reducing numerator and denominator simultaneously • September 18th 2009, 12:42 PM cz22 How to achieve a desired result by reducing numerator and denominator simultaneously Hi, hopefully someone can help, am not all that mathematically minded!! If I have the following Numerator 300 Denominator 250 Divide them I get 1.2 I need to attain a result of 1.25. How can I calculate the minimum amount to subtract off both to attain this result. If I take 1 off the numerator I must take 1 off the denominator, if 2 off the numerator then 2 off the denominator and so on. Any help greatly appreciated, are there excel functions to these effect? add ins? • September 18th 2009, 12:55 PM e^(i*pi) Quote: Originally Posted by cz22 Hi, hopefully someone can help, am not all that mathematically minded!! If I have the following Numerator 300 Denominator 250 Divide them I get 1.2 I need to attain a result of 1.25. How can I calculate the minimum amount to subtract off both to attain this result. If I take 1 off the numerator I must take 1 off the denominator, if 2 off the numerator then 2 off the denominator and so on. Any help greatly appreciated, are there excel functions to these effect? add ins? There is a probable chance that I've misread this question >.< $\frac{300-x}{250-x} = \frac{5}{4}$ Solve for x Spoiler: $4(300-x) = 5(250-x)$ $1200 - 4x = 1250 - 5x$ $x = 50$ • September 18th 2009, 01:10 PM cz22 You'll have to spell it out to me... apologies it's been too long since I last sat in a maths class! • September 18th 2009, 01:13 PM e^(i*pi) Quote: Originally Posted by cz22 You'll have to spell it out to me... apologies it's been too long since I last sat in a maths class! Did you check the spoiler? $1.25 = \frac{5}{4}$. I used fractions for the fun of it. You say you have to subtract the same amount from both the top and the bottom until the quotient equals the value above. To determine this number we give it a letter to make it easier to find. In this case it's x. x is defined as the value that will make the above part correct. As it has to be the same both values must be x so we have an equation in x and only x. Therefore we can solve like any fraction to find x • September 18th 2009, 02:16 PM cz22 Many thanks, never used the site before, didn't notice the spoiler.
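If you would rather let software do the algebra, here is a one-line check in Python with SymPy (rather than an Excel add-in):

```python
# Solve (300 - x)/(250 - x) = 1.25 for x; 1.25 is written as 5/4 to keep it exact.
import sympy as sp

x = sp.symbols('x')
print(sp.solve(sp.Eq((300 - x) / (250 - x), sp.Rational(5, 4)), x))  # [50]
```

Subtracting 50 from both gives 250/200 = 1.25, as required.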
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8002883791923523, "perplexity": 735.3491458314977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824146.3/warc/CC-MAIN-20160723071024-00232-ip-10-185-27-174.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/464605/coordinate-functions-on-the-structure-sheaf-definition-of-a-smooth-manifold/473743
# “Coordinate functions” on the structure-sheaf definition of a smooth manifold I've been reading Bredon's Topology and Geometry recently; what an excellent book! He defines smooth manifolds in two distinct ways and then shows they are in fact equivalent. The "non-standard" definition is in terms of some sheaf-like "functional structure $F_X$ " on the underlying space $X$, satisfying the following properties: for every open set $U \subset X,$ we have 1. $F_X(U)$ is a subalgebra of the algebra of continuous real-valued functions on $U$; 2. $F_X(U)$ contains all constant functions; 3. $V \subset U, f \in F_X(U) \implies f|_V \in F_X(V)$; 4. $U = \bigcup U_{\alpha}$ and $f|_{U_{\alpha}} \in F_X(U_{\alpha})$ for all $\alpha \implies f \in F_X(U).$ A morphism of functionally structured spaces $(X,F_X) \rightarrow (Y,F_Y)$ is a map $\phi:X \rightarrow Y$ such that $f \mapsto f \circ \phi$ carries $F_Y(U)$ into $F_X(\phi^{-1}(U))$. Then a smooth $n$-manifold is a second countable, functionally structured, Hausdorff space $(M^n,F)$ which is locally isomorphic to $(\mathbb{R}^n,C^{\infty}).$ My question: to familiarize myself with the definition I have attempted the following exercise: Show that a second countable Hausdorff space $X$ with a functional structure $F$ is an $n$-manifold $\iff$ every point in $X$ has a neighborhood $U$ such that there are functions $f_1,\ldots,f_n \in F(U)$ such that a real-valued function $g$ on $U$ is in $F(U) \iff$ there exists a smooth function $h(x_1,\ldots,x_n)$ of $n$ real variables such that $g(p) = h(f_1(p),\ldots,f_n(p))$ for every $p \in U.$ The only part that I haven't been able to complete is the "$\Longleftarrow$" direction. That is, given the $n$ "coordinate functions" $f_i$, and given a point $x \in X$ and a neighborhood $U \ni x$ I, have constructed a morphism $\phi:(U,F_U) \rightarrow (\phi(U),C^{\infty})$ via $\phi(x) = (f_1(x),\ldots,f_n(x)).$ But for the life of me, I don't see how I could show that this is actually an isomorphism. Any hint toward the answer would be greatly appreciated! - possible duplicate of Functionally structured spaces and manifolds –  Zhen Lin Aug 10 '13 at 23:16 @ZhenLin Oh, my bad. If I am reading correctly, does this mean that the problem as stated above is false? –  A.P. Aug 11 '13 at 16:03 @Alex P. Yes, it is false. I provide an asnwer below. –  John Aug 22 '13 at 18:32 The implication $\Leftarrow$ you are considering is false. The functionally structured space $(\mathbb{R},F)$ I defined here provides a counterexample. It is not an smooth manifold yet it verifies the property you mention. For every $x\in\mathbb{R}$ we can take any open interval $I$ containing $x$, and let $f_{1}$ be any function in $F(I)$ (recall that $F(I)$ consists only of constant functions). If $g\in F(I)$ and we let $h=g$ we have that $h$ is smooth of 1 real variable such that $g(y)=h(f_{1}(y))$ for all $y\in I$. On the other hand, suppose that there is a smooth function of 1 real variable $h$ such that for a continuous $g:I\rightarrow\mathbb{R}$ we have $g(y)=h(f_{1}(y))$ for all $y\in I$. Then since $f_{1}$ is constant, say $f_{1}\equiv c\in\mathbb{R}$, we get $g(y)=h(c)$ for all $y\in I$, i.e., $g\in F(I)$. If you add the additional hypothesis that $\phi$ is locally invertible then the implication $\Leftarrow$ is true. Other hypotheses may also work. Othe links to problems on that section of Bredon's book are this, this and this. - Thanks, John. 
Would you mind explaining why your counterexample works (more precisely, why the hypotheses are satisfied)? I assume you let $f = id$, as I see no other plausible candidate. But then it's not true that $g(p) = h(p)$ for some smooth $h$ implies that $g$ is locally constant. –  A.P. Aug 23 '13 at 15:49 @ Alex P. I edited my answer. I hope it helps. $f_{1}=id$ will not work. Just let $f_{1}$ be any constant function. –  John Aug 24 '13 at 1:21 Ah, excellent! I don't know why I didn't think of constant functions. Thank you very much, John. –  A.P. Aug 27 '13 at 14:20 No problem. I edited the question and added some links to questions related to the problems of the section you are reading. I am myself stuck on problem 5 p.71 (see the last link I added at the end of my answer) –  John Aug 27 '13 at 14:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9751322269439697, "perplexity": 178.25083456917153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775404.88/warc/CC-MAIN-20141217075255-00113-ip-10-231-17-201.ec2.internal.warc.gz"}
https://operativeneurosurgery.com/doku.php?id=incidence
# Operative Neurosurgery incidence ## Incidence Incidence is a measure of the probability of occurrence of a given medical condition in a population within a specified period of time. Although sometimes loosely expressed simply as the number of new cases during some time period, it is better expressed as a proportion or a rate with a denominator. Prevalence is contrasted with incidence, which is a measure of new cases arising in a population over a given period (month, year, etc.). The difference between prevalence and incidence can be summarized thus: prevalence answers “How many people have this disease right now?” and incidence answers “How many people per year newly acquire this disease?”. Incidence proportion (also known as cumulative incidence) is the number of new cases within a specified time period divided by the size of the population initially at risk. For example, if a population initially contains 1,000 non-diseased persons and 28 develop a condition over two years of observation, the incidence proportion is 28 cases per 1,000 persons, i.e. 2.8%.
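As a minimal sketch (the function name is mine), the incidence-proportion calculation from the example above looks like this in Python:

```python
def incidence_proportion(new_cases: int, initially_at_risk: int) -> float:
    """Cumulative incidence: new cases divided by the population initially at risk."""
    return new_cases / initially_at_risk

# 28 new cases among 1,000 initially non-diseased persons over two years of observation:
print(incidence_proportion(28, 1000))  # 0.028, i.e. 2.8%
```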
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9425532221794128, "perplexity": 787.5532789178137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202476.48/warc/CC-MAIN-20190321010720-20190321032720-00007.warc.gz"}
https://infoscience.epfl.ch/record/80207
## The accordion experiment, a simple approach to three-dimensional NMR spectroscopy

As a simple approach to 3-dimensional NMR spectroscopy a novel type of expt. is proposed in which the dimension is reduced from 3 to 2 by synchronous incrementation of the evolution period t1 and the mixing time tm: tm = Kt1. Because of the concerted stretching of the pulse sequence, this expt. is referred to as accordion spectroscopy. The salient feature of the novel expt. is the accommodation of 2-dimensional information along a single time or frequency axis. In complete analogy to std. 2-dimensional exchange spectroscopy, the peak positions in an accordion spectrum characterize the origin (w1) and destination (w2) of the exchanging magnetization. The 3rd dimension (wm) is reflected in the lineshape along the w1 axis. These lineshapes correspond to Fourier transforms with respect to tm of the mixing functions aii(tm) and aij(tm), and contain all information relevant to the dynamic processes. These mixing functions can be retrieved from an accordion spectrum by a 3rd (reverse) Fourier transformation for any pair of sites i and j. [on SciFinder (R)]

Published in: Journal of Magnetic Resonance, 45, 2, 367-73
Year: 1981
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8257398009300232, "perplexity": 2267.2950155419085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946578.68/warc/CC-MAIN-20180424080851-20180424100851-00248.warc.gz"}
https://www2.karlin.mff.cuni.cz/~pyrih/e/e2001v2/c/ect/node49.html
## Cantor organ and accordion

The Cantor organ is the union of the product of the Cantor ternary set and the unit interval together with certain horizontal segments lying over the closures of the components of the complement of the Cantor set, attached at the top or at the bottom of the square according to the length of the component [Kuratowski 1968, p. 191]. See Figure A.

1. It is an arc-like continuum which is irreducible between two of its points and has exactly four end points.
2. It has uncountably many arc components.

A variation of the Cantor organ is the Cantor accordion, which is defined as the monotone image of the Cantor organ under a map that shrinks the horizontal bars to points [Kuratowski 1968, p. 191]. See Figure B.

Janusz J. Charatonik, Pawel Krupski and Pavel Pyrih, 2001-11-30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9556806087493896, "perplexity": 2984.4682145577017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662572800.59/warc/CC-MAIN-20220524110236-20220524140236-00189.warc.gz"}
https://groups.google.com/g/visone-users/c/40f6V6IawMA
# density calculation for directed graphs 11 views ### Uwe Serdült Dec 11, 2020, 12:50:35 AM12/11/20 to visone-users Dear Visone team, I am using visone 2.18 for an undergrad class in which students have to calculate density by hand but could also use visone to do the job. However, when importing the directed graph attached and calculating density you get a value of .5833 in visone but also showing 30 present edges (as there should be). The size of the graph is n=9. You can get 0.5833 for density in a directed graph of size 9 with 42 edges present but there are clearly only 30. Can you reproduce and check, please? Best wishes, Uwe Ex1a_1.csv ### Müller Julian Jan 4, 2021, 10:11:58 AMJan 4 Dear Uwe, Thank you for the bug report. I committed a fix into the internal visone repository just now. It will be included in the next release. Some background: For purposes of density calculation, visone treated all networks as undirected. So the calculated value was the density of the underlying undirected graph, but not of the directed graph itself. After the fix, visone will now calculate the percentage of occupied dyads without loops. That means: * The denominator is n(n-1). * Directed edges are counted once. * Undirected edges are counted twice (i.e., like two directed edges). * Parallel edges are treated like a single edge. * Loops are not considered, neither in the numerator nor in the denominator. Best wishes, Julian
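For anyone who wants to reproduce the corrected behaviour outside visone, here is a small Python sketch of the rule Julian describes; the edge-list format and the function name are my own assumptions, not visone internals.

```python
# Density following the rule described above:
#  * the denominator is n(n-1) and loops are ignored,
#  * a directed edge counts once, an undirected edge counts twice,
#  * parallel edges collapse to a single edge.
def density(nodes, edges):
    occupied = set()
    for u, v, directed in edges:          # edges as (source, target, is_directed)
        if u == v:
            continue                      # loops: neither in numerator nor denominator
        occupied.add((u, v))              # a directed edge occupies one ordered dyad
        if not directed:
            occupied.add((v, u))          # an undirected edge occupies both directions
    n = len(nodes)
    return len(occupied) / (n * (n - 1))  # parallel edges collapse via the set

# For a directed graph with n = 9 and 30 distinct directed edges this gives
# 30 / 72 = 0.4167, whereas 0.5833 is the density of the underlying undirected graph.
```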
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8793566226959229, "perplexity": 2677.906504639733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154420.77/warc/CC-MAIN-20210803030201-20210803060201-00510.warc.gz"}
http://math.stackexchange.com/questions/915456/simplifying-the-sum-of-powers-of-the-golden-ratio
# Simplifying the sum of powers of the golden ratio

I seem to have forgotten some fundamental algebra. I know that:

$(\frac{1+\sqrt{5}}{2})^{k-2} + (\frac{1+\sqrt{5}}{2})^{k-1} = (\frac{1+\sqrt{5}}{2})^{k}$

But I don't remember how to show it algebraically. Factoring out the biggest term on the LHS gives $(\frac{1+\sqrt{5}}{2})^{k-2}(1+(\frac{1+\sqrt{5}}{2}))$, which doesn't really help.

- $x^{k-2}+x^{k-1}=x^k$ is true if you have $x+1=x^2$. Can you solve for $x$ in the quadratic equation $x^2-x-1=0$? Is $(1+\sqrt{5})/2$ one of the solutions? – Kim Jong Un Sep 1 '14 at 1:09

$$\left (\frac{1+\sqrt{5}}{2} \right )^{k-2} + \left (\frac{1+\sqrt{5}}{2} \right )^{k-1} = \left ( \frac{1+ \sqrt{5}}{2}\right )^{k-2} \left ( 1+ \frac{1+ \sqrt{5}}{2}\right)$$

It is known that the Greek letter phi (φ) represents the golden ratio, whose value is

$$\phi=\frac{1+ \sqrt{5}}{2}$$

One of its identities is:

$$\phi^2=\phi+1$$

Therefore:

$$\left ( 1+ \frac{1+ \sqrt{5}}{2}\right)= \left ( \frac{1+\sqrt{5}}{2}\right)^2$$

So:

$$\left ( \frac{1+ \sqrt{5}}{2}\right )^{k-2} \left ( 1+ \frac{1+ \sqrt{5}}{2}\right)= \left ( \frac{1+\sqrt{5}}{2}\right)^k$$

-

What's $\left(\frac{1+\sqrt{5}}{2}\right)^2$?

-

You are almost done. You have already found that

$$( \frac{1 + \sqrt{5}}{2} )^{k-2} + ( \frac{1 + \sqrt{5}}{2} )^{k-1} = ( \frac{1 + \sqrt{5}}{2} )^{k-2} (1 + \frac{1 + \sqrt{5}}{2} )$$

You want to show that this quantity can be expressed as $( \frac {1 + \sqrt{5} }{2} )^k$. Comparing what you have to what you need, you should be able to see that it would be sufficient to prove that $1 + \frac{1 + \sqrt{5}}{2} = ( \frac {1 + \sqrt{5} }{2} )^2$. This can be verified directly by simplifying both sides.

-
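A quick SymPy check of the identity (there is nothing here beyond φ² = φ + 1):

```python
import sympy as sp

k = sp.symbols('k')
phi = (1 + sp.sqrt(5)) / 2

print(sp.expand(phi**2 - phi - 1))    # 0, the defining identity phi^2 = phi + 1

# Spot-check phi^(k-2) + phi^(k-1) = phi^k for a few integer values of k:
for kk in range(2, 7):
    print(sp.simplify(phi**(kk - 2) + phi**(kk - 1) - phi**kk))   # 0 each time
```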
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9306115508079529, "perplexity": 254.99558808165412}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446218.95/warc/CC-MAIN-20151124205406-00038-ip-10-71-132-137.ec2.internal.warc.gz"}
http://category-theory.mitpress.mit.edu/chapter002.html
# The Category of Sets

The theory of sets was invented as a foundation for all of mathematics. The notion of sets and functions serves as a basis on which to build intuition about categories in general. This chapter gives examples of sets and functions and then discusses commutative diagrams. Ologs are then introduced, allowing us to use the language of category theory to speak about real world concepts. All this material is basic set theory, but it can also be taken as an investigation of the category of sets, which is denoted Set.

# 2.1   Sets and functions

People have always found it useful to put things into bins. The study of sets is the study of things in bins.

## 2.1.1   Sets

You probably have an innate understanding of what a set is. We can think of a set X as a collection of elements x ∈ X, each of which is recognizable as being in X and such that for each pair of named elements x, x′ ∈ X we can tell if x = x′ or not.1 The set of pendulums is the collection of things we agree to call pendulums, each of which is recognizable as being a pendulum, and for any two people pointing at pendulums we can tell if they're pointing at the same pendulum or not.

Notation 2.1.1.1. The symbol ∅ denotes the set with no elements (see Figure 2.1), which can also be written as { }. The symbol ℕ denotes the set of natural numbers:

$ℕ ≔ \{0, 1, 2, 3, 4, \ldots\} \qquad (2.1)$

The symbol ℤ denotes the set of integers, which contains both the natural numbers and their negatives,

$ℤ ≔ \{\ldots, -2, -1, 0, 1, 2, \ldots\} \qquad (2.2)$

If A and B are sets, we say that A is a subset of B, and write A ⊆ B, if every element of A is an element of B. So we have ℕ ⊆ ℤ. Checking the definition, one sees that for any set A, we have (perhaps uninteresting) subsets ∅ ⊆ A and A ⊆ A. We can use set-builder notation to denote subsets. For example, the set of even integers can be written {n ∈ ℤ | n is even}. The set of integers greater than 2 can be written in many ways, such as

$\begin{array}{ccccc}\left\{n\in ℤ|n>2\right\}& \text{or}& \left\{n\in ℕ|n>2\right\}& \text{or}& \left\{n\in ℕ|n⩾3\right\}\end{array}.$

The symbol ∃ means "there exists." So we could write the set of even integers as

$\{n ∈ ℤ \,|\, ∃m ∈ ℤ \text{ such that } n = 2m\}.$

The symbol ∃! means "there exists a unique." So the statement "∃!x ∈ ℝ such that x² = 0" means that there is one and only one number whose square is 0. Finally, the symbol ∀ means "for all." So the statement "∀m ∈ ℕ ∃n ∈ ℕ such that m < n" means that for every number there is a bigger one.

As you may have noticed in defining ℕ and ℤ in (2.1) and (2.2), we use the colon-equals notation "A ≔ XYZ" to mean something like "define A to be XYZ." That is, a colon-equals declaration does not denote a fact of nature (like 2 + 2 = 4) but a choice of the writer. We also often discuss a certain set with one element, denoted {☺}, as well as the familiar set of real numbers, ℝ, and some variants such as ℝ⩾0 ≔ {x ∈ ℝ | x ⩾ 0}.

Exercise 2.1.1.2. Let A ≔ {1, 2, 3}. What are all the subsets of A? Hint: There are eight.

A set can have other sets as elements. For example, the set

$X≔\left\{\left\{1,2\right\},\left\{4\right\},\left\{1,3,6\right\}\right\}$

has three elements, each of which is a set.

## 2.1.2   Functions

If X and Y are sets, then a function f from X to Y, denoted f : X → Y, is a mapping that sends each element x ∈ X to an element of Y, denoted f(x) ∈ Y. We call X the domain of the function f, and we call Y the codomain of f.

Note that for every element x ∈ X, there is exactly one arrow emanating from x, but for an element y ∈ Y, there can be several arrows pointing to y, or there can be no arrows pointing to y (see Figure 2.2).

Slogan 2.1.2.1.
Given a function f : X → Y, we think of X as a set of things, and Y as a set of bins. The function tells us in which bin to put each thing.

Application 2.1.2.2. In studying the mechanics of materials, one wishes to know how a material responds to tension. For example, a rubber band responds to tension differently than a spring does. To each material we can associate a force-extension curve, recording how much force the material carries when extended to various lengths. Once we fix a methodology for performing experiments, finding a material's force-extension curve would ideally constitute a function from the set of materials to the set of curves.

Exercise 2.1.2.3. Here is a simplified account of how the brain receives light. The eye contains about 100 million photoreceptor (PR) cells. Each connects to a retinal ganglion (RG) cell. No PR cell connects to two different RG cells, but usually many PR cells can attach to a single RG cell. Let PR denote the set of photoreceptor cells, and let RG denote the set of retinal ganglion cells.

a. According to the above account, does the connection pattern constitute a function RG → PR, a function PR → RG, or neither one?

b. Would you guess that the connection patterns that exist between other areas of the brain are function-like? Justify your answer.

Example 2.1.2.4. Suppose that X is a set and X′ ⊆ X is a subset. Then we can consider the function X′ → X given by sending every element of X′ to "itself" as an element of X. For example, if X = {a, b, c, d, e, f} and X′ = {b, d, e}, then X′ ⊆ X. We turn that into the function X′ → X given by b ↦ b, d ↦ d, e ↦ e.2

As a matter of notation, we may sometimes say the following: Let X be a set, and let i : X′ ⊆ X be a subset. Here we are making clear that X′ is a subset of X, but that i is the name of the associated function.

Exercise 2.1.2.5. Let f : ℕ → ℕ be the function that sends every natural number to its square, e.g., f(6) = 36. First fill in the blanks, then answer a question.

a. 2 ↦ ________
b. 0 ↦ ________
c. −2 ↦ ________
d. 5 ↦ ________
e. Consider the symbol → and the symbol ↦. What is the difference between how these two symbols are used so far in this book?

Given a function f : X → Y, the elements of Y that have at least one arrow pointing to them are said to be in the image of f; that is, we have

$\text{im}(f) ≔ \{y ∈ Y \,|\, ∃x ∈ X \text{ such that } f(x)=y\}. \qquad (2.3)$

The image of a function f is always a subset of its codomain, im(f) ⊆ Y.

Exercise 2.1.2.6. If f : X → Y is depicted by Figure 2.2, write its image, im(f), as a set.

Given a function f : X → Y and a function g : Y → Z, where the codomain of f is the same set as the domain of g (namely, Y), we say that f and g are composable

$X\stackrel{f}{\to }Y\stackrel{g}{\to }Z.$

The composition of f and g is denoted by g ∘ f : X → Z. See Figure 2.3.

Slogan 2.1.2.7.

Given composable functions $X\stackrel{f}{\to }Y\stackrel{g}{\to }Z$, we have a way of putting every thing in X into a bin in Y, and we have a way of putting each bin from Y into a larger bin in Z. The composite, g ∘ f : X → Z, is the resulting way that every thing in X is put into a bin in Z.

Exercise 2.1.2.8. If A ⊆ X is a subset, Example 2.1.2.4 showed how to think of it as a function i : A → X. Given a function f : X → Y, we can compose $A\stackrel{i}{\to }X\stackrel{f}{\to }Y$ and get a function f ∘ i : A → Y. The image of this function is denoted

$f\left(A\right)≔\text{im}\left(f○i\right),$

see (2.3) for the definition of image. Let X = Y ≔ ℤ, let A ≔ {−1, 0, 1, 2, 3} ⊆ X, and let f : X → Y be given by f(x) = x². What is the image set f(A)?

Solution 2.1.2.8.
By definition of image (see (2.3), we have Since A = {−1, 0, 1, 2, 3} and since i(a) = a for all aA, we have f(A) = {0, 1, 4, 9}. Note that an element of a set can only be in the set once; even though f(−1) = f(1) = 1, we need only mention 1 once in f(A). In other words, if a student has an answer such as {1, 0, 1, 4, 9}, this suggests a minor confusion. Notation 2.1.2.9. Let X be a set and xX an element. There is a function {☺} → X that sends ☺ ↦ x. We say that this function represents xX. We may denote it x: {☺} → X. Exercise 2.1.2.10. Let X be a set, let xX be an element, and let x: {☺} → X be the function representing it. Given a function f : XY, what is fx? Remark 2.1.2.11. Suppose given sets A, B, C and functions $A\stackrel{f}{\to }B\stackrel{g}{\to }C$. The classical order for writing their composition has been used so far, namely, gf : AC. For any element aA, we write gf(a) to mean g(f(a)). This means “do g to whatever results from doing f to a.” However, there is another way to write this composition, called diagrammatic order. Instead of gf, we would write f; g : AC, meaning “do f, then do g.” Given an element aA, represented by a: {☺} → A, we have an element a; f; g. Let X and Y be sets. We write HomSet(X, Y) to denote the set of functions XY.3 Note that two functions f, g : XY are equal if and only if for every element xX, we have f(x) = g(x). Exercise 2.1.2.12. Let A = {1, 2, 3, 4, 5} and B = {x, y}. a. How many elements does HomSet(A, B) have? b. How many elements does HomSet(B, A) have? Exercise 2.1.2.13. a. Find a set A such that for all sets X there is exactly one element in HomSet(X, A). Hint: Draw a picture of proposed A’s and X’s. How many dots should be in A? b. Find a set B such that for all sets X there is exactly one element in HomSet(B, X). Solution 2.1.2.13. a. Here is one: A ≔ {☺}. (Here is another, A ≔ {48}, and another, A ≔ {a1}). Why? We are trying to count the number of functions XA. Regardless of X and A, in order to give a function XA one must answer the question, Where do I send x? several times, once for each element xX. Each element of X is sent to an element in A. For example, if X = {1, 2, 3}, then one asks three questions: Where do I send 1? Where do I send 2? Where do I send 3? When A has only one element, there is only one place to send each x. A function X → {☺} would be written 1 ↦ ☺, 2 ↦ ☺, 3 ↦ ☺. There is only one such function, so HomSet(X, {☺}) has one element. b. B = ∅ is the only possibility. To give a function BX one must answer the question, Where do I send b? for each bB. Because B has no elements, no questions must be answered in order to provide such a function. There is one way to answer all the necessary questions, because doing so is immediate (“vacuously satisfied”). It is like commanding John to “assign a letter grade to every person who is over 14 feet tall.” John is finished with his job the moment the command is given, and there is only one way for him to finish the job. So HomSet(∅, X) has one element. For any set X, we define the identity function on X, denoted ${\text{id}}_{X}:X\to X,$ to be the function such that for all xX, we have idX(x) = x. Definition 2.1.2.14 (Isomorphism). Let X and Y be sets. A function f : XY is called an isomorphism, denoted f : $X\stackrel{\cong }{\to }Y$, if there exists a function g : YX such that gf = idX and fg = idY. In this case we also say that f is invertible and that g is the inverse of f. 
If there exists an isomorphism $X\stackrel{\cong }{\to }Y$, we say that X and Y are isomorphic sets and may write XY. Example 2.1.2.15. If X and Y are sets and f : XY is an isomorphism, then the analogue of Figure 2.2 will look like a perfect matching, more often called a one-to-one correspondence. That means that no two arrows will hit the same element of Y, and every element of Y will be in the image. For example, Figure 2.4 depicts an isomorphism $X\stackrel{\cong }{\to }Y$ between four element sets. Application 2.1.2.16. There is an isomorphism between the set NucDNA of nucleotides found in DNA and the set NucRNA of nucleotides found in RNA. Indeed, both sets have four elements, so there are 24 different isomorphisms. But only one is useful in biology. Before we say which one it is, let us say there is also an isomorphism NucDNA ≅ {A, C, G, T} and an isomorphism NucRNA ≅ {A, C, G, U}, and we will use the letters as abbreviations for the nucleotides. The convenient isomorphism ${\text{Nuc}}_{\text{DNA}}\stackrel{\cong }{\to }{\text{Nuc}}_{\text{RNA}}$ is that given by RNA transcription; it sends (See also Application 5.1.2.21.) There is also an isomorphism ${\text{Nuc}}_{\text{DNA}}\stackrel{\cong }{\to }{\text{Nuc}}_{\text{DNA}}$ (the matching in the double helix), given by Protein production can be modeled as a function from the set of 3-nucleotide sequences to the set of eukaryotic amino acids. However, it cannot be an isomorphism because there are 43 = 64 triplets of RNA nucleotides but only 21 eukaryotic amino acids. Exercise 2.1.2.17. Let n ∈ ℕ be a natural number, and let X be a set with exactly n elements. a. How many isomorphisms are there from X to itself? b. Does your formula from part (a) hold when n = 0? Proposition 2.1.2.18. The following facts hold about isomorphism. 1. Any set A is isomorphic to itself; i.e., there exists an isomorphism $A\stackrel{\cong }{\to }A$. 2. For any sets A and B, if A is isomorphic to B, then B is isomorphic to A. 3. For any sets A, B, and C, if A is isomorphic to B, and B is isomorphic to C, then A is isomorphic to C. Proof.     1. The identity function idA: AA is invertible; its inverse is idA because idA ○ idA = idA. 2. If f : AB is invertible with inverse g : BA, then g is an isomorphism with inverse f. 3. If f : AB and f′ : BC are each invertible with inverses g : BA and g′: CB, then the following calculations show that f′ ○ f is invertible with inverse gg′: $\begin{array}{c}\left(f\prime ○f\right)○\left(g○g\prime \right)=f\prime ○\left(f○g\right)○g\prime =f\prime ○{\text{id}}_{B}○g\prime =f\prime ○g\prime ={\text{id}}_{C}\\ \left(g○g\prime \right)○\left(f\prime ○f\right)=g○\left(g\prime ○f\prime \right)○f=g○{\text{id}}_{B}○f=g○f={\text{id}}_{A}\end{array}$ Exercise 2.1.2.19. Let A and B be these sets: Note that the sets A and B are isomorphic. Suppose that f : B → {1, 2, 3, 4, 5} sends “Bob” to 1, sends ♣ to 3, and sends r8 to 4. Is there a canonical function A → {1, 2, 3, 4, 5} corresponding to f?4 Solution 2.1.2.19. No. There are a lot of choices, and none is any more reasonable than any other, i.e., none are canonical. (In fact, there are six choices; do you see why?) The point of this exercise is to illustrate that even if one knows that two sets are isomorphic, one cannot necessarily treat them as the same. To treat them as the same, one should have in hand a specified isomorphism g : $A\stackrel{\cong }{\to }B$, such as ar8, 7 ↦ “Bob”, Q ↦ ♣. 
Now, given f : B → {1, 2, 3, 4, 5}, there is a canonical function A → {1, 2, 3, 4, 5} corresponding to f, namely, fg. Exercise 2.1.2.20. Find a set A such that for any set X, there is an isomorphism of sets $X\cong {\text{Hom}}_{\mathbf{\text{Set}}}\left(A,X\right).$ Hint: A function AX points each element of A to an element of X. When would there be the same number of ways to do that as there are elements of of X? Solution 2.1.2.20. Let A = {☺}. Then to point each element of A to an element of X, one must simply point ☺ to an element of X. The set of ways to do that can be put in one-to-one correspondence with the set of elements of X. For example, if X = {1, 2, 3}, then ☺ ↦ 3 is a function AX representing the element 3 ∈ X. See Notation 2.1.2.9. Notation 2.1.2.21. For any natural number n ∈ ℕ, define a set We call n the numeral set of size n. So, in particular, 2 = {1, 2}, 1 = {1}, and 0 = ∅. Let A be any set. A function f : nA can be written as a length n sequence We call this the sequence notation for f. Exercise 2.1.2.22. a. Let A = {a, b, c, d}. If f : 10A is given in sequence notation by (a, b, c, c, b, a, d, d, a, b), what is f(4)? b. Let s: 7 → ℕ be given by s(i) = i2. Write s in sequence notation. Solution 2.1.2.22. a. c b. (1, 4, 9, 16, 25, 36, 49) Definition 2.1.2.23 (Cardinality of finite sets). Let A be a set and n ∈ ℕ a natural number. We say that A has cardinality n, denoted $|A|=n,$ if there exists an isomorphism of sets An. If there exists some n ∈ ℕ such that A has cardinality n, then we say that A is finite. Otherwise, we say that A is infinite and write |A| ⩾ ∞. Exercise 2.1.2.24. a. Let A = {5, 6, 7}. What is |A|? b. What is |{1, 1, 2, 3, 5}|? c. What is |ℕ|? d. What is |{n ∈ ℕ | n ⩽ 5}|? We will see in Corollary 3.4.5.6 that for any m, n ∈ ℕ, there is an isomorphism mn if and only if m = n. So if we find that A has cardinality m and that A has cardinality n, then m = n. Proposition 2.1.2.25. Let A and B be finite sets. If there is an isomorphism of sets f : AB, then the two sets have the same cardinality, |A| = |B|. Proof. If f : AB is an isomorphism and Bn, then An because the composition of two isomorphisms is an isomorphism. # 2.2   Commutative diagrams At this point it is difficult to precisely define diagrams or commutative diagrams in general, but we can get a heuristic idea.5 Consider the following picture: We say this is a diagram of sets if each of A, B, C is a set and each of f, g, h is a function. We say this diagram commutes if gf = h. In this case we refer to it as a commutative triangle of sets, or, more generally, as a commutative diagram of sets. Application 2.2.1.1. In its most basic form, the central dogma of molecular biology is that DNA codes for RNA codes for protein. That is, there is a function from DNA triplets to RNA triplets and a function from RNA triplets to amino acids. But sometimes we just want to discuss the translation from DNA to amino acids, and this is the composite of the other two. The following commutative diagram is a picture of this fact Consider the following picture: We say this is a diagram of sets if each of A, B, C, D is a set and each of f, g, h, i is a function. We say this diagram commutes if gf = ih. In this case we refer to it as a commutative square of sets. More generally, it is a commutative diagram of sets. Application 2.2.1.2. Given a physical system S, there may be two mathematical approaches f : SA and g : SB that can be applied to it. 
Either of those results in a prediction of the same sort, f′ : AP and g′ : BP. For example, in mechanics we can use either the Lagrangian approach or the Hamiltonian approach to predict future states. To say that the diagram commutes would say that these approaches give the same result. Note that diagram (2.6) is considered to be the same diagram as each of the following: In all these we have h = gf, or in diagrammatic order, h = f; g. # 2.3   Ologs In this book I ground the mathematical ideas in applications whenever possible. To that end I introduce ologs, which serve as a bridge between mathematics and various conceptual landscapes. The following material is taken from Spivak and Kent [43], an introduction to ologs. ## 2.3.1   Types A type is an abstract concept, a distinction the author has made. Each type is represented as a box containing a singular indefinite noun phrase. Each of the following four boxes is a type: Each of the four boxes in (2.8) represents a type of thing, a whole class of things, and the label on that box is what one should call each example of that class. Thus ⌜a man⌝ does not represent a single man but the set of men, each example of which is called “a man.” Similarly, the bottom right box represents an abstract type of thing, which probably has more than a million examples, but the label on the box indicates the common name for each such example. Typographical problems emerge when writing a text box in a line of text, e.g., the text box a man seems out of place, and the more in-line text boxes there are, the worse it gets. To remedy this, I denote types that occur in a line of text with corner symbols; e.g., I write ⌜a man⌝ instead of a man. ### 2.3.1.1   Types with compound structures Many types have compound structures, i.e., they are composed of smaller units. Examples include It is good practice to declare the variables in a compound type, as in the last two cases of (2.9). In other words, it is preferable to replace the first box in (2.9) with something like so that the variables (m, w) are clear. Rules of good practice 2.3.1.2. A type is presented as a text box. The text in that box should (i) begin with the word a or an; (ii) refer to a distinction made and recognizable by the olog’s author; (iii) refer to a distinction for which instances can be documented; (iv) be the common name that each instance of that distinction can be called; and (v) declare all variables in a compound structure. The first, second, third, and fourth rules ensure that the class of things represented by each box appears to the author to be a well defined set, and that the class is appropriately named. The fifth rule encourages good readability of arrows (see Section 2.3.2). I do not always follow the rules of good practice throughout this book. I think of these rules being as followed “in the background,” but I have nicknamed various boxes. So ⌜Steve⌝ may stand as a nickname for ⌜a thing classified as Steve⌝ and ⌜arginine⌝ as a nickname for ⌜a molecule of arginine⌝. However, one should always be able to rename each type according to the rules of good practice. ## 2.3.2   Aspects An aspect of a thing x is a way of viewing it, a particular way in which x can be regarded or measured. For example, a woman can be regarded as a person; hence “being a person” is an aspect of a woman. A molecule has a molecular mass (say in daltons), so “having a molecular mass” is an aspect of a molecule. In other words, when it comes to ologs, the word aspect simply means function. 
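Because an aspect is nothing more than a function, the "exactly one value per source element" requirement can be checked mechanically. The short Python sketch below is illustrative only and uses made-up data, not anything from the text: it tests whether a relation, given as a set of (source, target) pairs, is functional on a declared domain. The same check is what rules out the invalid aspects discussed in Section 2.3.2.1 below.

```python
def is_functional(pairs, domain):
    """A relation is a valid aspect (a function) iff every domain element
    is assigned exactly one target."""
    targets = {}
    for src, tgt in pairs:
        targets.setdefault(src, set()).add(tgt)
    return all(len(targets.get(x, set())) == 1 for x in domain)

# "has as mother" is functional here: each child points to exactly one woman.
has_mother = {("alice", "carol"), ("bob", "carol"), ("dan", "erin")}
print(is_functional(has_mother, {"alice", "bob", "dan"}))   # True

# "has as child" is not: carol points to two children, and erin points to none.
has_child = {("carol", "alice"), ("carol", "bob")}
print(is_functional(has_child, {"carol", "erin"}))          # False
```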
The domain A of the function f : AB is the thing we are measuring, and the codomain is the set of possible answers or results of the measurement. So for the arrow in (2.10), the domain is the set of women (a set with perhaps 3 billion elements); the codomain is the set of persons (a set with perhaps 6 billion elements). We can imagine drawing an arrow from each dot in the “woman” set to a unique dot in the “person” set, just as in Figure 2.2. No woman points to two different people nor to zero people—each woman is exactly one person—so the rules for a function are satisfied. Let us now concentrate briefly on the arrow in (2.11). The domain is the set of molecules, the codomain is the set ℝ>0 of positive real numbers. We can imagine drawing an arrow from each dot in the “molecule” set to a single dot in the “positive real number” set. No molecule points to two different masses, nor can a molecule have no mass: each molecule has exactly one mass. Note, however, that two different molecules can point to the same mass. ### 2.3.2.1   Invalid aspects To be valid an aspect must be a functional relationship. Arrows may on their face appear to be aspects, but on closer inspection they are not functional (and hence not valid as aspects). Consider the following two arrows: A person may have no children or may have more than one child, so the first arrow is invalid: it is not a function. Similarly, if one drew an arrow from each mechanical pencil to each piece of lead it uses, one would not have a function. Warning 2.3.2.2. The author of an olog has a worldview, some fragment of which is captured in the olog. When person A examines the olog of person B, person A may or may not agree with it. For example, person B may have the following olog which associates to each marriage a man and a woman. Person A may take the position that some marriages involve two men or two women and thus see B’s olog as wrong. Such disputes are not “problems” with either A’s olog or B’s olog; they are discrepancies between worldviews. Hence, a reader R may see an olog in this book and notice a discrepancy between R’s worldview and my own, but this is not a problem with the olog. Rules are enforced to ensure that an olog is structurally sound, not to ensure that it “correctly reflects reality,” since worldviews can differ. Consider the aspect . At some point in history, this would have been considered a valid function. Now we know that the same object would have a different weight on the moon than it has on earth. Thus, as worldviews change, we often need to add more information to an olog. Even the validity of is questionable, e.g., if I am considered to be the same object on earth before and after I eat Thanksgiving dinner. However, to build a model we need to choose a level of granularity and try to stay within it, or the whole model would evaporate into the nothingness of truth. Any level of granularity is called a stereotype; e.g., we stereotype objects on earth by saying they each have a weight. A stereotype is a lie, more politely a conceptual simplification, that is convenient for the way we want to do business. Remark 2.3.2.3. In keeping with Warning 2.3.2.2, the arrows in (2.12*) and (2.13*) may not be wrong but simply reflect that the author has an idiosyncratic worldview or vocabulary. Maybe the author believes that every mechanical pencil uses exactly one piece of lead. If this is so, then is indeed a valid aspect. 
Similarly, suppose the author meant to say that each person was once a child, or that a person has an inner child. Since every person has one and only one inner child (according to the author), the map is a valid aspect. We cannot fault the olog for its author’s view, but note that we have changed the name of the label to make the intention more explicit. ### 2.3.2.4   Reading aspects and paths as English phrases Each arrow (aspect) $X\stackrel{f}{\to }Y$ can be read by first reading the label on its source box X, then the label on the arrow f, and finally the label on its target box Y. For example, the arrow is read “a book has as first author a person.” Remark 2.3.2.5. Note that the map in (2.14) is a valid aspect, but a similarly benign-looking map would not be valid, because it is not functional. When creating an olog, one must be vigilant about this type of mistake because it is easy to miss, and it can corrupt the olog. Sometimes the label on an arrow can be shortened or dropped altogether if it is obvious from context (see Section 2.3.3). Here is a common example from the way I write ologs. Neither arrow is readable by the preceding protocol (e.g., “a pair (x, y), where x and y are integers x an integer” is not an English sentence), and yet it is clear what each map means. For example, given (8, 11) in A, arrow x would yield 8 and arrow y would yield 11. The label x can be thought of as a nickname for the full name “yields as the value of x,” and similarly for y. I do not generally use the full name, so as not to clutter the olog. One can also read paths through an olog by inserting the word which (or who) after each intermediate box. For example, olog (2.16) has two paths of length 3 (counting arrows in a chain): The top path is read “a child is a person, who has as parents a pair (w, m), where w is a woman and m is a man, which yields, as the value of w, a woman.” The reader should read and understand the content of the bottom path, which associates to every child a year. ### 2.3.2.6   Converting nonfunctional relationships to aspects There are many relationships that are not functional, and these cannot be considered aspects. Often the word has indicates a relationship—sometimes it is functional, as in , and sometimes it is not, as in . Clearly, a father may have more than one child. This one is easily fixed by realizing that the arrow should go the other way: there is a function . What about . Again, a person may own no cars or more than one car, but this time a car can be owned by more than one person too. A quick fix would be to replace it by . This is okay, but the relationship between ⌜a car⌝ and ⌜a set of cars⌝ then becomes an issue to deal with later. There is another way to indicate such nonfunctional relationships. In this case it would look like this: This setup will ensure that everything is properly organized. In general, relationships can involve more than two types, and in olog form looks like this: For example, Exercise 2.3.2.7. On page 25, the arrow in (2.12*) was indicated as an invalid aspect: Create a valid olog that captures the parent-child relationship; your olog should still have boxes ⌜a person⌝ and ⌜a child⌝ but may have an additional box. Rules of good practice 2.3.2.8. An aspect is presented as a labeled arrow pointing from a source box to a target box. 
The arrow label text should (i) begin with a verb; (ii) yield an English sentence, when the source box text followed by the arrow text followed by the target box text is read; (iii) refer to a functional relationship: each instance of the source type should give rise to a specific instance of the target type; (iv) constitute a useful description of that functional relationship. ## 2.3.3   Facts In this section I discuss facts, by which I mean path equivalences in an olog. It is the notion of path equivalences that makes category theory so powerful. A path in an olog is a head-to-tail sequence of arrows. That is, any path starts at some box B0, then follows an arrow emanating from B0 (moving in the appropriate direction), at which point it lands at another box B1, then follows any arrow emanating from B1, and so on, eventually landing at a box Bn and stopping there. The number of arrows is the length of the path. So a path of length 1 is just an arrow, and a path of length 0 is just a box. We call B0 the source and Bn the target of the path. Given an olog, its author may want to declare that two paths are equivalent. For example, consider the two paths from A to C in the olog We know as English speakers that a woman parent is called a mother, so these two paths AC should be equivalent. A mathematical way to say this is that the triangle in olog (2.17) commutes. That is, path equivalences are simply commutative diagrams, as in Section 2.2. In the preceding example we concisely say “a woman parent is equivalent to a mother.” We declare this by defining the diagonal map in (2.17) to be the composition of the horizontal map and the vertical map. I generally prefer to indicate a commutative diagram by drawing a check mark, ✓, in the region bounded by the two paths, as in olog (2.17). Sometimes, however, one cannot do this unambiguously on the two-dimensional page. In such a case I indicate the commutative diagram (fact) by writing an equation. For example, to say that the diagram commutes, we could either draw a check mark inside the square or write the equation ${}_{A}\left[f,g\right]\simeq {}_{A}\left[h,i\right]$ above it.6 Either way, it means that starting from A, “doing f, then g” is equivalent to “doing h, then i.” Here is another example: Note how this diagram gives us the established terminology for the various ways in which DNA, RNA, and protein are related in this context. Exercise 2.3.3.1. Create an olog for human nuclear biological families that includes the concepts of person, man, woman, parent, father, mother, and child. Make sure to label all the arrows and that each arrow indicates a valid aspect in the sense of Section 2.3.2.1. Indicate with check marks (✓) the diagrams that are intended to commute. If the 2-dimensionality of the page prevents a check mark from being unambiguous, indicate the intended commutativity with an equation. Solution 2.3.3.1. Note that neither of the two triangles from child to person commute. To say that they did commute would be to say that “a child and its mother are the same person” and that “a child and its father are the same person.” Example 2.3.3.2 (Noncommuting diagram). In my conception of the world, the following diagram does not commute: The noncommutativity of diagram (2.18) does not imply that no person lives in the same city as his or her father. Rather it implies that it is not the case that every person lives in the same city as his or her father. Exercise 2.3.3.3. 
Create an olog about a scientific subject, preferably one you think about often. The olog should have at least five boxes, five arrows, and one commutative diagram. ### 2.3.3.4   A formula for writing facts as English Every fact consists of two paths, say, P and Q, that are to be declared equivalent. The paths P and Q will necessarily have the same source, say, s, and target, say, t, but their lengths may be different, say, m and n respectively.7 We draw these paths as Every part of an olog (i.e., every box and every arrow) has an associated English phrase, which we write as 〈〈〉〉. Using a dummy variable x, we can convert a fact into English too. The following general formula may be a bit difficult to understand (see Example 2.3.3.5). The fact PQ from (2.19) can be Englished as follows: Example 2.3.3.5. Consider the olog To put the fact that diagram (2.21) commutes into English, we first English the two paths: F = “a person has an address which is in a city” and G = “a person lives in a city.” The source of both is s = “a person” and the target of both is t = “a city.” Write: Given x, a person, consider the following. We know that x is a person, who has an address, which is in a city, that we call P(x). We also know that x is a person, who lives in a city that we call Q(x). Fact: Whenever x is a person, we will have P(x) = Q(x). More concisely, one reads olog 2.21 as A person x has an address, which is in a city, and this is the city x lives in. Exercise 2.3.3.6. This olog was taken from Spivak [38]. It says that a landline phone is physically located in the region to which its phone number is assigned. Translate this fact into English using the formula from (2.20). Exercise 2.3.3.7. In olog (2.22), suppose that the box ⌜an operational landline phone⌝ is replaced with the box ⌜an operational cell phone⌝. Would the diagram still commute? ### 2.3.3.8   Images This section discusses a specific kind of fact, generated by any aspect. Recall that every function has an image (2.3), meaning the subset of elements in the codomain that are “hit” by the function. For example, the function f : ℤ → ℤ given by f(x) = 2 * x: ℤ → ℤ has as image the set of all even numbers. Similarly, the set of mothers arises as the image of the “has as mother” function: Exercise 2.3.3.9. For each of the following types, write a function for which it is the image, or write “not clearly useful as an image type.” a. ⌜a book⌝ b. ⌜a material that has been fabricated by a working process of type T c. ⌜a bicycle owner⌝ d. ⌜a child⌝ e. ⌜a used book⌝ f. ⌜a primary residence⌝ __________________ 1Note that the symbol x′, read “x-prime,” has nothing to do with calculus or derivatives. It is simply notation used to name a symbol that is somehow like x. This suggestion of kinship between x and x′ is meant only as an aid for human cognition, not as part of the mathematics. 2This kind of arrow, ↦, is read “maps to.” A function f : XY means a rule for assigning to each element xX an element f(x) ∈ Y. We say that “x maps to f(x)” and write xf(x). 3The notation HomSet(−, −) will make more sense later, when it is seen in a larger context. See especially Section 5.1. 4Canonical, as used here, means something like “best choice,” a choice that stands out as the only reasonable one. 5Commutative diagrams are precisely defined in Section 6.1.2. 6We defined function composition in Section 2.1.2, but here we are using a different notation. There we used classical order, and our path equivalence would be written gf = ih. 
As discussed in Remark 2.1.2.11, category theorists and others often prefer the diagrammatic order for writing compositions, which is f; g = h; i. For ologs, we roughly follow the latter because it makes for better English sentences, and for the same reason, we add the source object to the equation, writing A[f, g] ≃ A[h, i]. 7If the source equals the target, s = t, then it is possible to have m = 0 or n = 0, and the ideas that follow still make sense.
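As a computational footnote to Section 2.3.3 (an aside, not part of the text above), a declared path equivalence in a finite olog can be verified by composing the functions along each path and comparing them pointwise. A minimal Python sketch with made-up finite sets:

```python
def compose(f, g):
    """Diagrammatic-order composite f;g: apply f first, then g (dicts as functions)."""
    return {x: g[f[x]] for x in f}

# Toy triangle: "a child is a person, who has as mother a woman"
# versus the direct arrow "a child has as mother a woman".
is_person   = {"ann": "ann_p", "ben": "ben_p"}      # a child is a person
mother_of_p = {"ann_p": "carla", "ben_p": "dora"}   # a person has as mother a woman
mother_of_c = {"ann": "carla", "ben": "dora"}       # a child has as mother a woman

# The fact (path equivalence) holds iff the two composites agree on every child.
print(compose(is_person, mother_of_p) == mother_of_c)   # True, so the triangle commutes
```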
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 21, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8635101914405823, "perplexity": 672.7055910120393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121644.94/warc/CC-MAIN-20170423031201-00246-ip-10-145-167-34.ec2.internal.warc.gz"}
http://www.ask.com/question/how-long-would-it-take-to-travel-20-miles-per-hour-for-one-foot
# How Long Would It Take to Travel 20 Miles Per Hour for One Foot?

Figuring out how long it takes to travel 1 foot at a speed of 20 miles per hour requires converting the speed into feet per second. There are 5280 feet in one mile and 3600 seconds in one hour, so (20 mi/hr)(5280 ft / 1 mi)(1 hr / 3600 s) = 29.33 ft/s. Traveling 1 foot at this speed therefore takes 1/29.33 ≈ 0.034 s.
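To make the unit conversion explicit, here is a small Python sketch (not part of the original answer) that reproduces the arithmetic above; the 0.034 s figure follows directly.

```python
# Convert 20 miles per hour to feet per second, then find the time to cover 1 foot.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

speed_mph = 20
speed_fps = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR  # about 29.33 ft/s

distance_ft = 1
time_s = distance_ft / speed_fps                          # about 0.0341 s

print(f"{speed_fps:.2f} ft/s, {time_s:.4f} s to travel one foot")
```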
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9082409739494324, "perplexity": 741.3462406345004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999635916/warc/CC-MAIN-20140305060715-00079-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/differential-geometry/195437-ordering-property-z-print.html
# ordering property of Z

Suppose the natural numbers N are given with associativity and commutativity of both addition and multiplication, the distributive law, trichotomy (a < b, a > b, or a = b), transitivity (a > b, b > c $\Rightarrow$ a > c), a < a + c, and a < b $\Rightarrow$ ac < bc for all a, b, c in N. The integers Z are defined as equivalence classes of ordered pairs (x,y) with $x,y \in N$, ordered by $(a,b) < (c,d) \Leftrightarrow a+d < c+b$. Here 0 is defined as the equivalence class of (m,m), $m \in N$, and the classes of pairs of the form $(a+b,b)$ are identified with the natural numbers. How does one show that if $z_1, z_2, z_3 \in Z$, then $z_1 < z_2 \Rightarrow z_1 + z_3 < z_2 + z_3$? My difficulty is in showing that $c+b < a+d \Rightarrow (a+b,c+d) \in N$.
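For reference, here is a sketch (not from the original post) of the standard argument under these definitions, writing $z_1=[(a,b)]$, $z_2=[(c,d)]$, $z_3=[(e,f)]$; it leans on the monotonicity of addition on N ($p < q \Rightarrow p+r < q+r$), which is usually established before this step.

```latex
\begin{align*}
z_1 < z_2 &\iff a+d < c+b,\\
z_1 + z_3 &= [(a+e,\, b+f)], \qquad z_2 + z_3 = [(c+e,\, d+f)],\\
(a+e)+(d+f) &= (a+d)+(e+f) < (c+b)+(e+f) = (c+e)+(b+f),
\end{align*}
```

so $z_1 + z_3 < z_2 + z_3$ by the definition of $<$ on pairs.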
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8790072202682495, "perplexity": 289.3216069485138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398451648.66/warc/CC-MAIN-20151124205411-00092-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.lessonplanet.com/teachers/cryptospy
# CryptoSpy In this cryptogram worksheet, students answer a question by decrypting the symbols. Different hearts are the symbols for the letters of the alphabet.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9261388182640076, "perplexity": 2475.741906949755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693363.77/warc/CC-MAIN-20170925201601-20170925221601-00082.warc.gz"}
https://hockeyschtick.blogspot.com/2014/11/new-paper-finds-large-surface-solar.html
## Thursday, November 20, 2014

### New paper finds large surface solar radiation increase of 4% per decade & UV increase 7% per decade

A paper published today in Atmospheric Chemistry and Physics finds global solar radiation at the surface in Belgium has significantly and substantially increased by 4% per decade from 1991-2013, and solar UV radiation at the surface has increased even more by 7% per decade. According to the authors, the findings corroborate others for Europe as well as the well-known global brightening phenomenon, which followed the global dimming period from ~1970-1985 that was responsible for the ice age scare of the 1970's. The authors find a (statistically insignificant) decrease of aerosol optical depth of -8%/decade from 1991-2013, which could be due to a decrease of cloud cover and/or other aerosols. As noted by Dr. Roy Spencer, a mere 1-2% change in cloud cover can alone account for global warming or cooling. The authors also find total column ozone, which is primarily generated by solar UV and can act as a solar amplification mechanism, has increased by 3%/decade. The effects of solar dimming and brightening on climate are far greater than those attributed to greenhouse gases, but have not been simulated by climate models. These observed trends of solar surface radiation dimming and brightening correspond well to the observed global temperature changes over the past 50 years, and to a far greater extent than do CO2 levels.

Findings from the paper:

global solar radiation (Sg): +4%/decade
erythemal ultraviolet (UV) dose (Sery): +7%/decade
total ozone column (QO3): +3%/decade
aerosol optical depth (τaer): -8%/decade (statistically insignificant)

Excerpt: Concerning the global solar radiation, many publications agree on the existence of a solar dimming period between 1970 and 1985 and a subsequent solar brightening period (Norris and Wild, 2007; Solomon et al., 2007; Makowski et al., 2009; Stjern et al., 2009; Wild et al., 2009; Sanchez-Lorenzo and Wild, 2012). Different studies have calculated the trend in Sg after 1985. The trend in Sg [global solar radiation] from GEBA (Global Energy Balance Archive) between 1987 and 2002 is equal to +1.4 (±3.4) Wm-2 per decade according to Norris and Wild (2007). Stjern et al. (2009) found a total change in the mean surface solar radiation trend over 11 stations in northern Europe of +4.4% between 1983 and 2003. In the Fourth Assessment Report of the IPCC (Solomon et al., 2007), 421 sites were analyzed; between 1992 and 2002, the change of all-sky surface solar radiation was equal to 0.66 Wm-2 per year. Wild et al. (2009) investigated the global solar radiation from 133 stations from GEBA/World Radiation Data Centre belonging to different regions in Europe. All series showed an increase over the entire period, with a pronounced upward tendency since 2000. For the Benelux region, the linear change between 1985 and 2005 is equal to +0.42 Wm-2 per year, compared to the pan-European average trend of +0.33 Wm-2 per year (or +0.24 Wm-2 if the anomaly of the 2003 heat wave is excluded) (Wild et al., 2009). Our trend at Uccle of +0.5 (±0.2) Wm-2 per year (or +4% per decade) agrees within the error bars with the results from Wild et al. (2009).

Atmospheric Chemistry and Physics, 14, 12251-12270, 2014
Author(s): V. De Bock, H. De Backer, R. Van Malderen, A. Mangold, and A. Delcloo
At Uccle, Belgium, a long time series (1991–2013) of simultaneous measurements of erythemal ultraviolet (UV) dose (Sery), global solar radiation (Sg), total ozone column (QO3) and aerosol optical depth (τaer) (at 320.1 nm) is available, which allows for an extensive study of the changes in the variables over time. Linear trends were determined for the different monthly anomalies time series. Sery, Sg and QO3 all increase by respectively 7, 4 and 3% per decade. τaer shows an insignificant negative trend of −8% per decade. These trends agree with results found in the literature for sites with comparable latitudes. A change-point analysis, which determines whether there is a significant change in the mean of the time series, is applied to the monthly anomalies time series of the variables. Only for Sery and QO3, was a significant change point present in the time series around February 1998 and March 1998, respectively. The change point in QO3 corresponds with results found in the literature, where the change in ozone levels around 1997 is attributed to the recovery of ozone. A multiple linear regression (MLR) analysis is applied to the data in order to study the influence of Sg, QO3 and τaer on Sery. Together these parameters are able to explain 94% of the variation in Sery. Most of the variation (56%) in Sery is explained by Sg. The regression model performs well, with a slight tendency to underestimate the measured Sery values and with a mean absolute bias error (MABE) of 18%. However, in winter, negative Sery are modeled. Applying the MLR to the individual seasons solves this issue. The seasonal models have an adjusted R2 value higher than 0.8 and the correlation between modeled and measured Sery values is higher than 0.9 for each season. The summer model gives the best performance, with an absolute mean error of only 6%. However, the seasonal regression models do not always represent reality, where an increase in Sery is accompanied with an increase in QO3 and a decrease in τaer. In all seasonal models, Sg is the factor that contributes the most to the variation in Sery, so there is no doubt about the necessity to include this factor in the regression models. The individual contribution of τaer to Sery is very low, and for this reason it seems unnecessary to include τaer in the MLR analysis. Including QO3, however, is justified to increase the adjusted R2 and to decrease the MABE of the model.

#### 1 comment:

1. This is just confirmation that global cloudiness is linked to changes in global atmospheric air circulation. Zonal / poleward jets give less clouds and meridional / equatorward jets give more clouds. The consequence is changes in the proportion of solar energy that gets into the oceans to affect global surface temperatures and drive the climate system. It also supports my view that solar induced changes in ozone amounts in the stratosphere alter the gradient of tropopause height between equator and poles so as to allow latitudinal shifting of the jets and climate zones. I think that is a better solution than the Svensmark cosmic ray proposition.
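As an aside on method (not part of the post or the paper), the "% per decade" numbers quoted above come from fitting linear trends to monthly anomaly series. A rough Python sketch of that arithmetic on synthetic data follows; the paper's actual anomaly construction and uncertainty treatment are more involved.

```python
import numpy as np

def trend_percent_per_decade(series, months_per_year=12):
    """Least-squares linear trend of a monthly series, expressed as a
    percentage of the series mean per decade (one common convention)."""
    t_years = np.arange(series.size) / months_per_year
    slope, _ = np.polyfit(t_years, series, 1)      # series units per year
    return 100.0 * (10.0 * slope) / series.mean()  # percent per decade

# Synthetic stand-in for a 1991-2013 monthly record drifting upward by roughly 4% per decade.
rng = np.random.default_rng(0)
t = np.arange(23 * 12) / 12.0                      # time in years
series = 100.0 + 0.4 * t + rng.normal(0.0, 2.0, t.size)
print(round(trend_percent_per_decade(series), 1))  # close to 4 (% per decade)
```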
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8422284722328186, "perplexity": 3026.91701753386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711126.30/warc/CC-MAIN-20221207021130-20221207051130-00680.warc.gz"}
https://www.mathworks.com/help/signal/ref/pmtm.html
# pmtm Multitaper power spectral density estimate ## Syntax ``pxx = pmtm(x)`` ``pxx = pmtm(x,nw)`` ``pxx = pmtm(x,nw,nfft)`` ``[pxx,w] = pmtm(___)`` ``[pxx,f] = pmtm(___,fs)`` ``[pxx,w] = pmtm(x,nw,w)`` ``[pxx,f] = pmtm(x,nw,f,fs)`` ``[___] = pmtm(___,method)`` ``[___] = pmtm(x,e,v)`` ``[___] = pmtm(x,dpss_params)`` ``[___] = pmtm(___,'DropLastTaper',dropflag)`` ``[___] = pmtm(___,freqrange)`` ``[___,pxxc] = pmtm(___,'ConfidenceLevel',probability)`` ``pmtm(___)`` ## Description example ````pxx = pmtm(x)` returns Thomson’s multitaper power spectral density (PSD) estimate, `pxx`, of the input signal, `x`. When `x` is a vector, it is treated as a single channel. When `x` is a matrix, the PSD is computed independently for each column and stored in the corresponding column of `pxx`. The tapers are the discrete prolate spheroidal (DPSS), or Slepian, sequences. The time-halfbandwidth, `nw`, product is 4. By default, `pmtm` uses the first 2 × `nw` – 1 DPSS sequences. If `x` is real-valued, `pxx` is a one-sided PSD estimate. If `x` is complex-valued, `pxx` is a two-sided PSD estimate. The number of points, `nfft`, in the discrete Fourier transform (DFT) is the maximum of 256 or the next power of two greater than the signal length.``` example ````pxx = pmtm(x,nw)` use the time-halfbandwidth product, `nw`, to obtain the multitaper PSD estimate. The time-halfbandwidth product controls the frequency resolution of the multitaper estimate. `pmtm` uses 2 × `nw` – 1 Slepian tapers in the PSD estimate.``` example ````pxx = pmtm(x,nw,nfft)` uses `nfft` points in the DFT. If `nfft` is greater than the signal length, `x` is zero-padded to length `nfft`. If `nfft` is less than the signal length, the signal is wrapped modulo `nfft`.``` ````[pxx,w] = pmtm(___)` returns the normalized frequency vector, `w`. If `pxx` is a one-sided PSD estimate, `w` spans the interval [0,π] if `nfft` is even and [0,π) if `nfft` is odd. If `pxx` is a two-sided PSD estimate, `w` spans the interval [0,2π).``` example ````[pxx,f] = pmtm(___,fs)` returns a frequency vector, `f`, in cycles per unit time. The sample rate, `fs`, is the number of samples per unit time. If the unit of time is seconds, then `f` is in cycles/sec (Hz). For real–valued signals, `f` spans the interval [0,`fs`/2] when `nfft` is even and [0,`fs`/2) when `nfft` is odd. For complex-valued signals, `f` spans the interval [0,`fs`). `fs` must be the fourth input to `pmtm`. To input a sample rate and still use the default values of the preceding optional arguments, specify these arguments as empty, `[]`.``` ````[pxx,w] = pmtm(x,nw,w)` returns the two-sided multitaper PSD estimates at the normalized frequencies specified in `w`. The vector `w` must contain at least two elements, because otherwise the function interprets it as `nfft`.``` ````[pxx,f] = pmtm(x,nw,f,fs)` returns the two-sided multitaper PSD estimates at the frequencies specified in the vector, `f`. The vector `f` must contain at least two elements, because otherwise the function interprets it as `nfft`. The frequencies in `f` are in cycles per unit time. The sample rate, `fs`, is the number of samples per unit time. If the unit of time is seconds, then `f` is in cycles/second (Hz).``` example ````[___] = pmtm(___,method)` combines the individual tapered PSD estimates using the method, `method`. `method` can be one of: `'adapt'` (default), `'eigen'`, or `'unity'`.``` example ````[___] = pmtm(x,e,v)` uses the tapers in the N-by-K matrix `e` with concentrations `v` in the frequency band [-w,w]. 
N is the length of the input signal, `x`. Use `dpss` to obtain the Slepian tapers and corresponding concentrations.``` ````[___] = pmtm(x,dpss_params)` uses the cell array, `dpss_params`, to pass input arguments to `dpss` except the number of elements in the sequences. The number of elements in the sequences is the first input argument to `dpss` and is not included in `dpss_params`. An example of this usage is `pxx = pmtm(randn(1000,1),{2.5,3})`.``` example ````[___] = pmtm(___,'DropLastTaper',dropflag)` specifies whether `pmtm` drops the last taper in the computation of the multitaper PSD estimate. `dropflag` is a logical. The default value of `dropflag` is `true` and the last taper is not used in the PSD estimate.``` example ````[___] = pmtm(___,freqrange)` returns the multitaper PSD estimate over the frequency range specified by `freqrange`. Valid options for `freqrange` are `'onesided'`, `'twosided'`, and `'centered'`.``` example ````[___,pxxc] = pmtm(___,'ConfidenceLevel',probability)` returns the `probability` × 100% confidence intervals for the PSD estimate in `pxxc`. ``` example ````pmtm(___)` with no output arguments plots the multitaper PSD estimate in the current figure window. ``` ## Examples collapse all Obtain the multitaper PSD estimate of an input signal consisting of a discrete-time sinusoid with an angular frequency of $\pi /4$ rad/sample with additive N(0,1) white noise. Create a sine wave with an angular frequency of $\pi /4$ rad/sample with additive N(0,1) white noise. The signal is 320 samples in length. Obtain the multitaper PSD estimate using the default time-halfbandwidth product of 4 and DFT length. The default number of DFT points is 512. Because the signal is real-valued, the PSD estimate is one-sided and there are 512/2+1 points in the PSD estimate. ```n = 0:319; x = cos(pi/4*n)+randn(size(n)); pxx = pmtm(x);``` Plot the multitaper PSD estimate. `pmtm(x)` Obtain the multitaper PSD estimate with a specified time-halfbandwidth product. Create a sine wave with an angular frequency of $\pi /4$ rad/sample with additive N(0,1) white noise. The signal is 320 samples in length. Obtain the multitaper PSD estimate with a time-halfbandwidth product of 2.5. The resolution bandwidth is $\left[-2.5\pi /320,2.5\pi /320\right]$ rad/sample. The default number of DFT points is 512. Because the signal is real-valued, the PSD estimate is one-sided and there are 512/2+1 points in the PSD estimate. ```n = 0:319; x = cos(pi/4*n)+randn(size(n)); pmtm(x,2.5)``` Obtain the multitaper PSD estimate of an input signal consisting of a discrete-time sinusoid with an angular frequency of $\pi /4$ rad/sample with additive N(0,1) white noise. Use a DFT length equal to the signal length. Create a sine wave with an angular frequency of $\pi /4$ rad/sample with additive N(0,1) white noise. The signal is 320 samples in length. Obtain the multitaper PSD estimate with a time-halfbandwidth product of 3 and a DFT length equal to the signal length. Because the signal is real-valued, the one-sided PSD estimate is returned by default with a length equal to 320/2+1. ```n = 0:319; x = cos(pi/4*n)+randn(size(n)); pmtm(x,3,length(x))``` Obtain the multitaper PSD estimate of a signal sampled at 1 kHz. The signal is a 100 Hz sine wave in additive N(0,1) white noise. The signal duration is 2 s. Use a time-halfbandwidth product of 3 and DFT length equal to the signal length. ```fs = 1000; t = 0:1/fs:2-1/fs; x = cos(2*pi*100*t)+randn(size(t)); [pxx,f] = pmtm(x,3,length(x),fs);``` Plot the multitaper PSD estimate. 
`pmtm(x,3,length(x),fs)` Obtain a multitaper PSD estimate where the individual tapered direct spectral estimates are given equal weight in the average. Obtain the multitaper PSD estimate of a signal sampled at 1 kHz. The signal is a 100 Hz sine wave in additive N(0,1) white noise. The signal duration is 2 s. Use a time-halfbandwidth product of 3 and a DFT length equal to the signal length. Use the `'unity'` option to give equal weight in the average to each of the individual tapered direct spectral estimates. ```fs = 1000; t = 0:1/fs:2-1/fs; x = cos(2*pi*100*t)+randn(size(t)); [pxx,f] = pmtm(x,3,length(x),fs,'unity');``` Plot the multitaper PSD estimate. `pmtm(x,3,length(x),fs,'unity')` This example examines the frequency-domain concentrations of the DPSS sequences. The example produces a multitaper PSD estimate of an input signal by precomputing the Slepian sequences and selecting only those with more than 99% of their energy concentrated in the resolution bandwidth. The signal is a 100 Hz sine wave in additive N(0,1) white noise. The signal duration is 2 s. ```fs = 1000; t = 0:1/fs:2-1/fs; x = cos(2*pi*100*t)+randn(size(t));``` Set the time-halfbandwidth product to 3.5. For the signal length of 2000 samples and a sampling interval of 0.001 seconds, this results in a resolution bandwidth of [-1.75,1.75] Hz. Calculate the first 10 Slepian sequences and examine their frequency concentrations in the specified resolution bandwidth. ```[e,v] = dpss(length(x),3.5,10); stem(1:length(v),v,'filled') ylim([0 1.2]) title('Proportion of Energy in [-w,w] of k-th Slepian Sequence')``` Determine the number of Slepian sequences with energy concentrations greater than 99%. Using the selected DPSS sequences, obtain the multitaper PSD estimate. Set `'DropLastTaper'` to `false` to use all the selected tapers. ```hold on plot(1:length(v),0.99*ones(length(v),1))``` `idx = find(v>0.99,1,'last')` ```idx = 5 ``` `[pxx,f] = pmtm(x,e(:,1:idx),v(1:idx),length(x),fs,'DropLastTaper',false);` Plot the multitaper PSD estimate. ```figure pmtm(x,e(:,1:idx),v(1:idx),length(x),fs,'DropLastTaper',false)``` Obtain the multitaper PSD estimate of a 100 Hz sine wave in additive N(0,1) noise. The data are sampled at 1 kHz. Use the `'centered'` option to obtain the DC-centered PSD. ```fs = 1000; t = 0:1/fs:2-1/fs; x = cos(2*pi*100*t)+randn(size(t)); [pxx,f] = pmtm(x,3.5,length(x),fs,'centered');``` Plot the DC-centered PSD estimate. `pmtm(x,3.5,length(x),fs,'centered')` The following example illustrates the use of confidence bounds with the multitaper PSD estimate. While not a necessary condition for statistical significance, frequencies in the multitaper PSD estimate where the lower confidence bound exceeds the upper confidence bound for surrounding PSD estimates clearly indicate significant oscillations in the time series. Create a signal consisting of the superposition of 100-Hz and 150-Hz sine waves in additive white N(0,1) noise. The amplitude of the two sine waves is 1. The sampling frequency is 1 kHz. The signal is 2 s in duration. ```fs = 1000; t = 0:1/fs:2-1/fs; x = cos(2*pi*100*t)+cos(2*pi*150*t)+randn(size(t));``` Obtain the multitaper PSD estimate with 95%-confidence bounds. Plot the PSD estimate along with the confidence interval and zoom in on the frequency region of interest near 100 and 150 Hz. 
```[pxx,f,pxxc] = pmtm(x,3.5,length(x),fs,'ConfidenceLevel',0.95); plot(f,10*log10(pxx)) hold on plot(f,10*log10(pxxc),'r-.') xlim([85 175]) xlabel('Hz') ylabel('dB') title('Multitaper PSD Estimate with 95%-Confidence Bounds')``` The lower confidence bound in the immediate vicinity of 100 and 150 Hz is significantly above the upper confidence bound outside the vicinity of 100 and 150 Hz. Generate 1024 samples of a multichannel signal consisting of three sinusoids in additive $N\left(0,1\right)$ white Gaussian noise. The sinusoids' frequencies are $\pi /2$, $\pi /3$, and $\pi /4$ rad/sample. Estimate the PSD of the signal using Thomson's multitaper method and plot it. ```N = 1024; n = 0:N-1; w = pi./[2;3;4]; x = cos(w*n)' + randn(length(n),3); pmtm(x)``` ## Input Arguments collapse all Input signal, specified as a row or column vector, or as a matrix. If `x` is a matrix, then its columns are treated as independent channels. Example: `cos(pi/4*(0:159))+randn(1,160)` is a single-channel row-vector signal. Example: `cos(pi./[4;2]*(0:159))'+randn(160,2)` is a two-channel signal. Data Types: `single` | `double` Complex Number Support: Yes Time-halfbandwidth product, specified as a positive scalar. In multitaper spectral estimation, the user specifies the resolution bandwidth of the multitaper estimate [–W,W] where W = k/NΔt for some small k > 1. Equivalently, W is some small multiple of the frequency resolution of the DFT. The time-halfbandwidth product is the product of the resolution halfbandwidth and the number of samples in the input signal, N. The number of Slepian tapers whose Fourier transforms are well-concentrated in [–W,W] (eigenvalues close to unity) is 2NW – 1. Number of DFT points, specified as a positive integer. For a real-valued input signal, `x`, the PSD estimate, `pxx` has length (`nfft`/2 + 1) if `nfft` is even, and (`nfft` + 1)/2 if `nfft` is odd. For a complex-valued input signal,`x`, the PSD estimate always has length `nfft`. If `nfft` is specified as empty, the default `nfft` is used. Data Types: `single` | `double` Sample rate, specified as a positive scalar. The sample rate is the number of samples per unit time. If the unit of time is seconds, then the sample rate has units of Hz. Normalized frequencies, specified as a row or column vector with at least two elements. Normalized frequencies are in rad/sample. Example: `w = [pi/4 pi/2]` Data Types: `double` Frequencies, specified as a row or column vector with at least two elements. The frequencies are in cycles per unit time. The unit time is specified by the sample rate, `fs`. If `fs` has units of samples/second, then `f` has units of Hz. Example: `fs = 1000; f = [100 200]` Data Types: `double` Weights on individual tapered PSD estimates, specified as one of `'adapt'`, `'eigen'`, or `'unity'`. The default is Thomson’s adaptive frequency-dependent weights, `'adapt'`. The calculation of these weights is detailed on pp. 368–370 in [1]. The `'eigen'` method weights each tapered PSD estimate by the eigenvalue (frequency concentration) of the corresponding Slepian taper. The `'unity'` method weights each tapered PSD estimate equally. DPSS (Slepian) sequences, specified as a N-by-K matrix where N is the length of the input signal, `x`. The matrix `e` is the output of `dpss`. Eigenvalues for DPSS (Slepian) sequences, specified as a column vector. The eigenvalues for the DPSS sequences indicate the proportion of the sequence energy concentrated in the resolution bandwidth, [-W, W]. 
The eigenvalues range lie in the interval (0,1) and generally the first 2NW-1 eigenvalues are close to 1 and then decrease toward 0. Input arguments for `dpss`, specified as a cell array. The first input argument to `dpss` is the length of the DPSS sequences and is omitted from `dpss_params`. The length of the DPSS sequences is obtained from the length of the input signal, `x`. Example: `{3.5,5}` Flag indicating whether to drop or keep the last DPSS sequence, specified as a logical. The default is `true` and `pmtm` drops the last taper. In a multitaper estimate, the first 2NW – 1 DPSS sequences have eigenvalues close to unity. If you use less than 2NW – 1 sequences, it is likely that all the tapers have eigenvalues close to 1 and you can specify `dropflag` as `false` to keep the last taper. Frequency range for the PSD estimate, specified as a one of `'onesided'`, `'twosided'`, or `'centered'`. The default is `'onesided'` for real-valued signals and `'twosided'` for complex-valued signals. The frequency ranges corresponding to each option are • `'onesided'` — returns the one-sided PSD estimate of a real-valued input signal, `x`. If `nfft` is even, `pxx` has length `nfft`/2 + 1 and is computed over the interval [0,π] rad/sample. If `nfft` is odd, the length of `pxx` is (`nfft` + 1)/2 and the interval is [0,π) rad/sample. When `fs` is optionally specified, the corresponding intervals are [0,`fs`/2] cycles/unit time and [0,`fs`/2) cycles/unit time for even and odd length `nfft` respectively. • `'twosided'` — returns the two-sided PSD estimate for either the real-valued or complex-valued input, `x`. In this case, `pxx` has length `nfft` and is computed over the interval [0,2π) rad/sample. When `fs` is optionally specified, the interval is [0,`fs`) cycles/unit time. • `'centered'` — returns the centered two-sided PSD estimate for either the real-valued or complex-valued input, `x`. In this case, `pxx` has length `nfft` and is computed over the interval (–π,π] rad/sample for even length `nfft` and (–π,π) rad/sample for odd length `nfft`. When `fs` is optionally specified, the corresponding intervals are (–`fs`/2, `fs`/2] cycles/unit time and (–`fs`/2, `fs`/2) cycles/unit time for even and odd length `nfft` respectively. Coverage probability for the true PSD, specified as a scalar in the range (0,1). The output, `pxxc`, contains the lower and upper bounds of the `probability` × 100% interval estimate for the true PSD. ## Output Arguments collapse all PSD estimate, returned as a real-valued, nonnegative column vector or matrix. Each column of `pxx` is the PSD estimate of the corresponding column of `x`. The units of the PSD estimate are in squared magnitude units of the time series data per unit frequency. For example, if the input data is in volts, the PSD estimate is in units of squared volts per unit frequency. For a time series in volts, if you assume a resistance of 1 Ω and specify the sample rate in hertz, the PSD estimate is in watts per hertz. Data Types: `single` | `double` Normalized frequencies, returned as a real-valued column vector. If `pxx` is a one-sided PSD estimate, `w` spans the interval [0,π] if `nfft` is even and [0,π) if `nfft` is odd. If `pxx` is a two-sided PSD estimate, `w` spans the interval [0,2π). For a DC-centered PSD estimate, `w` spans the interval (–π,π] for even `nfft` and (–π,π) for odd `nfft`. Data Types: `double` Cyclical frequencies, returned as a real-valued column vector. 
For a one-sided PSD estimate, `f` spans the interval [0,`fs`/2] when `nfft` is even and [0,`fs`/2) when `nfft` is odd. For a two-sided PSD estimate, `f` spans the interval [0,`fs`). For a DC-centered PSD estimate, `f` spans the interval (–`fs`/2, `fs`/2] cycles/unit time for even length `nfft` and (–`fs`/2, `fs`/2) cycles/unit time for odd length `nfft`.

Data Types: `double` | `single`

Confidence bounds, returned as a matrix with real-valued elements. The row size of the matrix is equal to the length of the PSD estimate, `pxx`. `pxxc` has twice as many columns as `pxx`. Odd-numbered columns contain the lower bounds of the confidence intervals, and even-numbered columns contain the upper bounds. Thus, `pxxc(m,2*n-1)` is the lower confidence bound and `pxxc(m,2*n)` is the upper confidence bound corresponding to the estimate `pxx(m,n)`. The coverage probability of the confidence intervals is determined by the value of the `probability` input.

Data Types: `single` | `double`

### Discrete Prolate Spheroidal (Slepian) Sequences

The derivation of the Slepian sequences proceeds from the discrete-time, continuous-frequency concentration problem. For all ℓ2 sequences index-limited to 0,1,...,N – 1, the problem seeks the sequence having the maximal concentration of its energy in a frequency band [–W,W] with |W| < 1/2Δt. This amounts to finding the eigenvalues and corresponding eigenvectors of an N-by-N self-adjoint positive semi-definite operator. Therefore, the eigenvalues are real and nonnegative and eigenvectors corresponding to distinct eigenvalues are mutually orthogonal. In this particular problem, the eigenvalues are bounded by 1, and each eigenvalue is the measure of the corresponding sequence's energy concentration in the frequency interval [–W,W]. The eigenvalue problem is given by

`$\sum_{n=0}^{N-1} \frac{\sin\left(2\pi W(n-m)\right)}{\pi (n-m)}\, g_n = \lambda_k(N,W)\, g_m, \qquad m = 0,1,2,\ldots,N-1$`

The 0th-order DPSS sequence, g0, is the eigenvector corresponding to the largest eigenvalue. The 1st-order DPSS sequence, g1, is the eigenvector corresponding to the next largest eigenvalue and is orthogonal to the 0th-order sequence. The 2nd-order DPSS sequence, g2, is the eigenvector corresponding to the third largest eigenvalue and is orthogonal to the 0th-order and 1st-order DPSS sequences. Because the operator is N-by-N, there are N eigenvectors. However, it can be shown that for a given sequence length N and a specified bandwidth [-W,W], there are approximately 2NW – 1 DPSS sequences with eigenvalues very close to unity.

### Multitaper Spectral Estimation

The periodogram is not a consistent estimator of the true power spectral density of a wide-sense stationary process. To produce a consistent estimate of the PSD, the multitaper method averages modified periodograms obtained using a family of mutually orthogonal tapers (windows). In addition to mutual orthogonality, the tapers also have optimal time-frequency concentration properties. Both the orthogonality and the time-frequency concentration of the tapers are critical to the success of the multitaper technique. See Discrete Prolate Spheroidal (Slepian) Sequences for a brief description of the Slepian sequences used in Thomson's multitaper method.

The multitaper method uses K modified periodograms with each one obtained using a different Slepian sequence as the window.
Let

`$S_k(f) = \Delta t \left| \sum_{n=0}^{N-1} g_{k,n}\, x_n\, e^{-i 2\pi f n \Delta t} \right|^2$`

denote the modified periodogram obtained with the k-th Slepian sequence, gk,n. In the simplest form, the multitaper method simply averages the K modified periodograms to produce the multitaper PSD estimate.

`$S^{(\mathrm{MT})}(f) = \frac{1}{K} \sum_{k=0}^{K-1} S_k(f)$`

Note the difference between the multitaper PSD estimate and Welch's method. Both methods reduce the variability in the periodogram by averaging over approximately uncorrelated estimates of the PSD. However, the two approaches differ in how they produce these uncorrelated PSD estimates. The multitaper method uses the entire signal in each modified periodogram. The orthogonality of the Slepian tapers decorrelates the different modified periodograms. Welch's overlapped segment averaging approach uses segments of the signal in each modified periodogram and the segmenting decorrelates the different modified periodograms.

The preceding equation corresponds to the `'unity'` option in `pmtm`. However, as explained in Discrete Prolate Spheroidal (Slepian) Sequences, the Slepian sequences do not possess equal energy concentration in the frequency band of interest. The higher the order of the Slepian sequence, the less concentrated the sequence energy is in the band [-W,W], with the concentration given by the eigenvalue. Consequently, it can be beneficial to use the eigenvalues to weight the K modified periodograms prior to averaging. This corresponds to the `'eigen'` option in `pmtm`.

Using the sequence eigenvalues to produce a weighted average of modified periodograms accounts for the frequency concentration properties of the Slepian sequences. However, it does not account for the interaction between the power spectral density of the random process and the frequency concentration of the Slepian sequences. Specifically, frequency regions where the random process has little power are less reliably estimated in the modified periodograms using higher order Slepian sequences. This argues for a frequency-dependent adaptive process, which accounts not only for the frequency concentration of the Slepian sequence, but also for the power distribution in the time series. This adaptive weighting corresponds to the `'adapt'` option in `pmtm` and is the default for computing the multitaper estimate.

## References

[1] Percival, D. B., and A. T. Walden, Spectral Analysis for Physical Applications: Multitaper and Conventional Univariate Techniques. Cambridge, UK: Cambridge University Press, 1993.

[2] Thomson, D. J., “Spectrum estimation and harmonic analysis.” Proceedings of the IEEE®. Vol. 70, 1982, pp. 1055–1096.
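The formulas above map directly onto a few lines of NumPy/SciPy. The following is a hedged sketch of the simplest, `'unity'`-style average, not the MathWorks implementation: it assumes SciPy's `scipy.signal.windows.dpss` for the Slepian tapers and omits the `'eigen'` and `'adapt'` weighting schemes as well as the one-sided scaling details.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd_unity(x, nw=4.0, fs=1.0, nfft=None):
    """Unity-weighted multitaper PSD sketch: average of K tapered periodograms."""
    x = np.asarray(x, dtype=float)
    n = x.size
    nfft = nfft or max(256, 2 ** int(np.ceil(np.log2(n))))
    k = int(2 * nw - 1)                      # number of Slepian tapers used
    tapers = dpss(n, nw, Kmax=k)             # shape (k, n); unit-energy tapers
    # One modified periodogram per taper, then an unweighted average.
    # (One-sided doubling and exact edge handling are omitted for brevity.)
    spectra = np.abs(np.fft.rfft(tapers * x, n=nfft, axis=1)) ** 2 / fs
    pxx = spectra.mean(axis=0)
    f = np.fft.rfftfreq(nfft, d=1 / fs)
    return f, pxx

# Example: 100 Hz tone in white noise, sampled at 1 kHz (mirrors the examples above).
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * 100 * t) + np.random.randn(t.size)
f, pxx = multitaper_psd_unity(x, nw=3, fs=fs, nfft=x.size)
```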
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 13, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9472580552101135, "perplexity": 1487.354322542929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655929376.49/warc/CC-MAIN-20200711095334-20200711125334-00189.warc.gz"}
https://inspirations.newszii.com/tag/goodness/
“Courage is the most important of all the virtues because without courage, you can’t practice any other virtue consistently.” -Maya Angelou

“Let the first act of every morning be to make the following resolve for the day: – I shall not fear anyone on Earth. – I shall fear only God. – I shall not bear ill will toward anyone. – I shall not submit to injustice from anyone. – I shall conquer untruth by truth. And in resisting untruth, I shall put up with all suffering.” -Mahatma Gandhi

“When I despair, I remember that all through history the way of truth and love have always won. There have been tyrants and murderers, and for a time, they can seem invincible, but in the end, they always fall. Think of it–always.” -Mahatma Gandhi

“Do your little bit of good where you are; it’s those little bits of good put together that overwhelm the world.” -Desmond Tutu
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9919029474258423, "perplexity": 3211.5383933365374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584547882.77/warc/CC-MAIN-20190124121622-20190124143622-00380.warc.gz"}
https://www.mmacageworld.com/2010/10/ufcwec-merger.html
MMA CAGEWORLD: UFC/WEC merger.

## 28 October 2010

### World Extreme Cagefighting is merging with its sister organization the Ultimate Fighting Championship.

UFC president Dana White has stated that the merger will happen in January 2011, which means that the UFC will now feature for the first time a 135 and 145-pound weight class. The WEC lightweight division fighters will also move over to the UFC.

"The timing was right," White said. "The reality is, we purchased the WEC, we started getting these lighter weight guys exposure on television, sending them around the country and arenas. Now, as the UFC continues to grow globally and we're doing more and more fights, now it makes sense to bring in those lighter weight classes."

"We're going to add more fights every year and add more countries, more television networks in different countries. So now it makes sense."

Before the merger takes place, WEC will put on 2 more events. By the time the merger takes place, the WEC will have put on 53 events, including one pay-per-view card in April 2010.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8532126545906067, "perplexity": 243.19700939787222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531429.49/warc/CC-MAIN-20210122210653-20210123000653-00627.warc.gz"}
https://www.physicsforums.com/threads/year-12-cambridge-physics-problem-pressure-inside-a-vessel.615083/
# Homework Help: Year 12: Cambridge Physics Problem (Pressure inside a vessel)

1. Jun 19, 2012

### johnconnor

A vessel is divided into two parts of equal volume by a partition in which there is a very small hole. Initially, each part contains gas at 300K and a low pressure, p. One part of the vessel is now heated to 600K while the other is maintained at 300K. If a steady state is established when the rate at which molecules pass through the hole from each side is the same, find the resulting pressure difference between the two parts.

Attempt: I'm assuming that the number and mass of molecules inside remain the same and that the temperature of the two parts during the steady state is the same. So we have $$N_1+N_2=2N$$, where N is the number of molecules inside each part before heating and N1 and N2 denote the number of molecules inside each part after heating. Also pressure is proportional to <c>^2, implying T is proportional to <c>^2, and that p is proportional to T. We also have $$N_1<c>_1=N_2<c>_2$$, where N_i<c>_i denotes the rate at which molecules pass through the hole from one side to another.

So now $<c>^2 \propto T \text{ and } N_1<c>_1=N_2<c>_2 \Rightarrow \dfrac{N_1}{N_2}= \dfrac{<c>_2}{<c>_1} \Rightarrow \dfrac{N_1^2}{N_2^2}= \dfrac{<c>_2^2}{<c>_1^2} \Rightarrow \dfrac{N_1^2}{N_2^2}= \dfrac{T_2}{T_1} \Rightarrow \dfrac{N_1}{N_2}= \left(\dfrac{T_2}{T_1}\right)^{1/2}$

And I'm stuck. I'm supposed to find the difference of pressure in terms of p but how do I do that when the terms which I have introduced are nowhere close to p? The closest ones I could get are p1 and p2. Help?

2. Jun 19, 2012

### Infinitum

I believe this is for ideal gases. Let the volume of each compartment be V. Write down the ideal gas equations for the initial and final conditions (separately). The initial condition will give you an equation in p, which you can use to find out the difference in pressures.

$$(2V)p = 2N R T_i$$

3. Jun 19, 2012

### Aero51

This question is kind of stupid because if there is any kind of mass transfer between the two sections of the vessel, there will also be heat transfer making the problem quite difficult. If there is no heat transfer then as a first approximation you could apply the ideal gas law to each section of the vessel: PV = nRT, where
P = pressure
V = volume
n = number of molecules
R = universal gas constant
T = temperature of the section

4. Jun 19, 2012

### Infinitum

At the initial situation of the problem, the ideal gas law, as I suggested, can obviously be applied for the whole vessel. For the final situation, at equilibrium, meaning net transfer of heat being zero, the ideal gas law is applicable.

5. Jun 19, 2012

### Aero51

You can have equilibrium with a temperature gradient inside both the chambers, which again would make the problem much more difficult. If you want to solve the problem at "steady state" you need to solve the heat equation and determine the temperature distribution inside both vessels. It may not vary with time but it certainly won't be an abrupt change at the interface of the wall.
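Since the thread stalls at the ratio N1/N2, here is a small numeric sketch that carries the thread's own relations (equal effusion rates N⟨c⟩ with ⟨c⟩ proportional to √T, particle conservation, and the ideal gas law) through to a pressure difference. Treat it as an illustration under those stated assumptions, not an authoritative solution to the homework problem.

```python
# Sketch: combine N1*<c>1 = N2*<c>2 (with <c> ~ sqrt(T)), N1 + N2 = 2N, and
# p_i = N_i k T_i / V to get the steady-state pressure difference in units of p.
# Assumptions: ideal gas, uniform temperature in each half (T1 = 300 K, T2 = 600 K).
import numpy as np

T1, T2 = 300.0, 600.0
p = 1.0                          # initial pressure in each half (arbitrary units)

ratio = np.sqrt(T2 / T1)         # N1 / N2 from equal effusion rates
N = 1.0                          # initial molecule number per half (arbitrary units)
N2 = 2.0 * N / (1.0 + ratio)     # from N1 + N2 = 2N with N1 = ratio * N2
N1 = ratio * N2

# Initially p = N k T1 / V, so p_i / p = (N_i / N) * (T_i / T1).
p1 = (N1 / N) * (T1 / T1) * p    # cold side
p2 = (N2 / N) * (T2 / T1) * p    # hot side

print(p2 - p1)                   # numerically about 0.49 * p
```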
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9333918690681458, "perplexity": 381.92664684304356}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863206.9/warc/CC-MAIN-20180619212507-20180619232507-00592.warc.gz"}
https://codedump.io/share/9UN1lIi9AVrx/1/running-both-python-27-and-35-on-pc
HUSMEN - Python Question

Running both Python 2.7 and 3.5 on PC

I have both versions of Python installed on my PC running Windows 10 and I can switch between them manually as needed, but I was wondering if there is a way to edit their path environment variables so that I can launch both of them from the CMD easily. For example, instead of typing "python" to launch whatever is the default one right now, I want to just type python2 for one, and python3 for the other, is that possible?

Update: it turned out that you don't need any trick for this, you just use either py -2 or py -3 accordingly. Alternatively, you can configure your own aliases in cmd as mentioned below.

DOSKEY python3=C:\path\to\python3.exe $*
DOSKEY python2=C:\path\to\python2.exe $*

to define the aliases. You can then put those in a .cmd file e.g. env.cmd and use cmd.exe /K env.cmd
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.855257511138916, "perplexity": 1832.6851867624296}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171620.77/warc/CC-MAIN-20170219104611-00139-ip-10-171-10-108.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/556280/prove-int-01-fracx2-2-x2-ln1xx3-sqrt1-x2dx-frac-pi28-fra
# Prove $\int_0^1\frac{x^2-2\,x+2\ln(1+x)}{x^3\,\sqrt{1-x^2}}dx=\frac{\pi^2}8-\frac12$ How can I prove the following identity? $$\int_0^1\frac{x^2-2\,x+2\ln(1+x)}{x^3\,\sqrt{1-x^2}}dx=\frac{\pi^2}8-\frac12$$ - Just curious (for my own learning's sake), what sort of class involves integrals like this? It's certainly above anything in my Calc II course I took. Is this like Real Analysis (or perhaps Complex Analysis)? –  anorton Nov 7 '13 at 23:55 How did you get the answer? –  Mhenni Benghorbal Nov 8 '13 at 0:13 Where did this came from? What's the history behind this identity? –  Lucas Zanella Nov 8 '13 at 2:53 @MhenniBenghorbal It was part of the problem. Anyways, it would be easy to guess using a numeric approximation and wolframalpha.com –  Laila Podlesny Nov 8 '13 at 23:09 add comment ## 1 Answer First observe that $$x^2-2 x+2 \log{(1+x)} = 2 \sum_{k=3}^{\infty} (-1)^{k+1} \frac{x^k}{k}$$ The integral is then equal to $$2 \sum_{k=0}^{\infty} \frac{(-1)^k}{k+3} \int_0^1 dx \frac{x^k}{\sqrt{1-x^2}}$$ Now, we will need separate treatments for the even and odd terms (1): $$\int_0^1 dx \frac{x^k}{\sqrt{1-x^2}} = \begin{cases} \frac{\displaystyle 1}{\displaystyle 2^{2 k}} \displaystyle \binom{2 k}{k} \frac{\pi}{2} & k \: \text{even}\\ \frac{\displaystyle 2^{2 k-1}}{\displaystyle k \binom{2 k}{k}} & k \: \text{odd} \end{cases}$$ That is, the integral is now equal to the difference between two sums: $$\pi \sum_{k=0}^{\infty} \frac{1}{2 k+3} \frac{1}{2^{2 k}} \binom{2 k}{k} - \frac12 \sum_{k=1}^{\infty} \frac{1}{ k+1} \frac{\displaystyle 2^{2 k}}{\displaystyle k \binom{2 k}{k}}$$ We now evaluate each sum in turn. For the first, let $$f(x) = \sum_{k=0}^{\infty} \frac{1}{2 k+3} \frac{1}{2^{2 k}} \binom{2 k}{k} x^{2 k+3}$$ Then $$f'(x) = x^2 \sum_{k=0}^{\infty} \frac{1}{2^{2 k}} \binom{2 k}{k} x^{2 k} = \frac{x^2}{\sqrt{1-x^2}}$$ which means that, enforcing the condition that $f(0)=0$ (2), $$f(x) = \int dx \frac{x^2}{\sqrt{1-x^2}} = \frac{1}{2} \arcsin(x)-\frac{1}{2} x \sqrt{1-x^2}$$ The sum in question is equal to $f(1) = \pi/4$. For the second sum, define $$g(x) = \sum_{k=1}^{\infty} \frac{1}{k( k+1)} \frac{\displaystyle 2^{2 k}}{\displaystyle \binom{2 k}{k}} x^{k+1}$$ Then (see this answer for a reference) $$g''(x) = \frac{1}{x} \sum_{k=1}^{\infty} \frac{(4 x)^k}{\displaystyle \binom{2 k}{k}} = \frac{\displaystyle 1+\frac{ \arcsin\left(\sqrt{x}\right)}{\sqrt{x(1-x)}}}{1-x}$$ Integrating twice and enforcing the condition that $g(0)=0$ and $g'(0)=0$, we find that (3) $$g(x) = x+\arcsin\left(\sqrt{x}\right)^2-2 \sqrt{x(1-x)} \arcsin\left(\sqrt{x}\right)$$ The second sum is then $$g(1) = 1+\frac{\pi^2}{4}$$ The value of the integral we seek is then equal to $$\pi f(1) - \frac12 g(1) = \pi \frac{\pi}{4} - \frac12 \left ( 1+ \frac{\pi^2}{4} \right ) = \frac{\pi^2}{8} - \frac12$$ as was to be shown. ADDENDUM I think I should fill in some gaps of the above proof. I will go through each intermediate result in turn so that the solution is more self-contained. The integrals I evaluate here are not as difficult as they appear, although there is one subtlety that should be pointed out. 
Equation (1) $$\int_0^1 dx \frac{x^k}{\sqrt{1-x^2}}$$ a) $k$ even, i.e., $k=2 m$, $m \in \{0,1,2,\ldots\}$ Sub $x=\sin{t}$ to see that this integral is equal to $$I_m = \int_0^{\pi/2} dt \, \sin^{2 m}{t}$$ Integrate by parts to see that \begin{align}I_m &= -\underbrace{\left [ \cos{t} \sin^{2 m-1}{t} \right ]_0^{\pi/2}}_{\text{this}=0} + (2 m-1) \underbrace{\int_0^{\pi/2} dt \, \cos^2{t} \sin^{2 m-2}{t}}_{\cos^2{t}=1-\sin^2{t}}\\ &= (2 m-1) I_{m-1} - (2 m-1) I_m\end{align} Thus, $$I_m = \frac{2 m-1}{2 m} I_{m-1} = \frac{(2 m-1)(2 m-3)\cdots (3)(1)}{(2 m)(2 m-2)\cdots (2)} I_0$$ where $I_0 = \int_0^{\pi/2} dt = \pi/2$. We may rearrange the above result by multiplying the numerator by the denominator, and we have for even values of $k$: $$I_m = \frac{1}{2^{2 m}} \binom{2 m}{m} \frac{\pi}{2}$$ b) $k$ odd, i.e., $k=2 m+1$, $m \in \{0,1,2,\ldots\}$ We perform identical manipulations as above, but now we get that $$I_m = \frac{(2 m)(2 m-2)\cdots (2)}{(2 m+1)(2 m-1)\cdots (3)} I_1$$ where $I_1 = \int_0^{\pi/2} dt \, \sin{t} = 1$. Using similar manipulations as above (except we multiply the denominator by the numerator), we have $$I_m = \frac{1}{2 m+1} \frac{2^{2 m}}{\displaystyle \binom{2 m}{m}}$$ You may note, however, that this is not the result I displayed in the proof. Good reason: this form would complicate the series approach to evaluating the sum. To this effect, let's map $m \mapsto m-1$ and consider $m \in \{1,2,3,\ldots\}$. Then $$I_m = \frac{2^{2 m-2}}{2 m-1} \frac{[(m-1)!]^2}{(2 m-2)!} = \frac{2^{2 m-1}}{\displaystyle m \binom{2 m}{m}}$$ as asserted. Equation (2) $$\underbrace{\int dx \frac{x^2}{\sqrt{1-x^2}}}_{x=\sin{t}} = \int dt \, \sin^2{t} = \frac{t}{2} - \frac12 \sin{t} \cos{t}$$ form which the posted result follows. Equation (3) Here we have 2 integrations. First, $$g'(x) = \underbrace{\int dx \frac{1+\frac{\arcsin{\sqrt{x}}}{\sqrt{x (1-x)}}}{1-x}}_{x=u^2} = \underbrace{2 \int du \, \frac{u + \frac{\arcsin{u}}{\sqrt{1-u^2}}}{1-u^2}}_{u=\sin{t}} = 2 \int dt \, \tan{t} + 2 \int dt \, t \sec^2{t}$$ Do the second integral by parts: $$2 \int dt \, t \sec^2{t} = 2 t \tan{t} - 2 \int dt \, \tan{t}$$ Thus we have a fortuitous cancellation, and using $t=\arcsin{\sqrt{x}}$, and enforcing $g'(0)=0$, we have $$g'(x) = 2 \sqrt{\frac{x}{1-x}}\arcsin{\sqrt{x}}$$ So, second, we must integrate this result to get $g(x)$. We use similar substitutions as above (i.e., $x=u^2$, $u=\sin{t}$): $$g(x) = 4 \int du \, \frac{u^2}{\sqrt{1-u^2}} \arcsin{u} = 4 \int dt \, t \, \sin^2{t}$$ Now, integrate by parts: $$4 \int dt \, t \, \sin^2{t} = 2 t (t - \sin{t} \cos{t}) - 2 \int dt \, (t - \sin{t} \cos{t}) = t^2 - 2 t \sin{t} \cos{t} + \sin^2{t} +C$$ Now, use $t = \arcsin{\sqrt{x}}$ and the fact that $g(0)=0$ and get $$g(x) = \arcsin{\left ( \sqrt{x}\right )}^2 - 2 \sqrt{x (1-x)} \arcsin{\left ( \sqrt{x}\right )} + x$$ as posted above. - Nice solution. Ron Gordon –  juantheron Nov 8 '13 at 2:55 Bravo again Ron! +1 –  Bennett Gardiner Nov 8 '13 at 12:55 @juantheron: thanks. I've been meaning to ask: is juantheron a play on Juan Perón, or is it just your name and I'm being too imaginative? –  Ron Gordon Nov 8 '13 at 14:29 @BennettGardiner: thanks, as always, you show me so much kindness. –  Ron Gordon Nov 8 '13 at 14:29 Excellent work! :) –  Ahaan Rungta Nov 9 '13 at 1:22 add comment
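Anyone wanting a quick numerical cross-check of the stated identity can do it in a few lines of Python; this is only a sketch (the substitution $x=\sin t$, used to tame the endpoint singularity at $x=1$, and the small lower cutoff, used to avoid floating-point cancellation near $x=0$, are my own choices).

```python
# Check  int_0^1 (x^2 - 2x + 2 ln(1+x)) / (x^3 sqrt(1 - x^2)) dx  =  pi^2/8 - 1/2.
# With x = sin(t), the factor 1/sqrt(1 - x^2) is absorbed into dt.
import numpy as np
from scipy.integrate import quad

def integrand(t):
    x = np.sin(t)
    return (x**2 - 2.0*x + 2.0*np.log1p(x)) / x**3

val, err = quad(integrand, 1e-6, np.pi / 2)
print(val, np.pi**2 / 8 - 0.5)   # both approximately 0.7337
```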
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9910312294960022, "perplexity": 1345.3203194627508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/144397/how-to-know-if-the-pseudoscalar-yukawa-lagrangian-is-invariant-under-chiral-tran/144414
# How to know if the pseudoscalar Yukawa Lagrangian is invariant under chiral transformation? The pseudo-scalar Yukawa theory Lagrangian is $$\mathcal{L}=\bar{\psi}(i\gamma ^\mu \partial_\mu - m)\psi -g\bar{\psi}i\gamma^5\phi\psi,$$ where $g$ is a coupling constant. How can I show it is invariant under a chiral transformation, $\psi\to e^{i\lambda \gamma_5}\psi$? • Um...plug the transformation in and see what happens? – ACuriousMind Nov 2 '14 at 16:08 (This is a largely a response prompted by your comment.) You can get the answer by just remembering the commutation/anti-commutation properties of the $\gamma$ matrices, and the fact that ${\bar \psi} = \psi^{\dagger} \gamma^0$. To see the following, you would have to expand the exponential factor, up to linear order $e^{M} = I + M + \ldots$. (I'm not going to do your homework, this is just a guide!) 1) The kinematic term $i \bar{\psi}\gamma ^\mu \partial_\mu\psi$ goes into itself, using $\{\gamma^{\mu}, \gamma^5\} = 0$. 2) The Yukawa coupling term follows suite. 3) There is no such cancellation in the mass term $m \bar \psi \psi$, but the two factors reinforce each other. This term picks up an overall factor of $e^{2i\lambda \gamma_5}\psi$, i.e. two times either factor. Thus, the mass term is not invariant under this transformation, and breaks chiral symmetry. B.T.W. This transformation is called the axial-vector transformation, since the corresponding conserved (in the m=0 limit) Noether current transforms like an axialvector $\bar \psi \gamma^{\mu} \gamma^5 \psi$. Resolving into Weyl spinors $\psi_{L,R} = (1\mp \gamma^5)\psi/2$ is an alternative way of seeing this. With this, you will again have to use the $\gamma$ matrices' properties, and you will arrive at the result that only the mass term mixes up the two chiralities, i.e. becomes $m (\bar \psi_L \psi_R + \bar \psi_R \psi_L)$. The kinematic term would transform into $i \bar{\psi_L}\gamma ^\mu \partial_\mu\psi_L + i \bar{\psi_R}\gamma ^\mu \partial_\mu\psi_R$ and hence, it is like the the kinematic terms of two independent Lagrangians added up. No mixing. The two formulations are absolutely equivalent. • Thanks for your very clear guidance. I have a question though, while I am performing this transformation on Yukawa coupling term: I get the following:$$L'=-g\bar{\psi}i\gamma^5\phi\psi = -gi\bar{\psi}e^{i\lambda \gamma^5}\gamma^5 \phi e^{i\lambda\gamma^5}\psi$$ Why aren't the exponentials cancelling so it would be invariant.. – Fluctuations Nov 2 '14 at 19:09 • In one of your exponentials there should be a minus sign, because $\bar{\psi}$ is a conjugate of $\psi$. Also, $\gamma^5$ commutes with its exponential (because it commutes with all terms in its Tailor expansion). So the two exponentials cancel each other. – Prof. Legolasov Nov 2 '14 at 20:31 • @Fluctuations no, it is not true. But for every matrix $x$ (in your case, $x=i\gamma^5$), the following holds: $$\left[ x, \exp x \right] = 0$$ – Prof. Legolasov Nov 2 '14 at 21:38 • Hindsight did most of the follow-up job for me, so thanks. Regarding the last point, @Fluctuations, as Hindsight already mentioned, they don't anticommute, they commute! To see this explicitly, expand the exponential in $$[ x, \exp x ]$$ and exploit the linearity $$[ x, (y+z) ] = [ x, y ] + [ x, z ]$$. Clearly, $x$ commutes with identity, with $x$, with $x^2$, and so on. :) – 299792458 Nov 3 '14 at 5:10 • And @Hindsight, thanks for the follow-up job. Yes, chiral symmetry holds only in the $m=0$ limit. Finite mass explicitly breaks chiral symmetry. 
But if these mass terms are small, like $u$ and $d$ quarks (the $SU(2)$ case), chiral symmetry can be considered an approximate symmetry of the strong interactions. :) – 299792458 Nov 3 '14 at 5:16
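For what it's worth, points 1) and 3) of the answer above can be checked numerically with explicit matrices. The sketch below uses the Dirac representation and an arbitrary chiral angle; both choices, and the tolerance, are mine and not part of the thread.

```python
# Verify, in the Dirac representation, that
#   exp(i a g5) g^mu exp(i a g5) = g^mu        (kinetic-type bilinear unchanged)
#   exp(i a g5) 1    exp(i a g5) = exp(2 i a g5) != 1   (mass term picks up a factor)
import numpy as np
from scipy.linalg import expm

I2, Z = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z], [Z, -I2]])
gammas = [g0] + [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]
g5 = np.block([[Z, I2], [I2, Z]])            # g5 = i g0 g1 g2 g3 in this representation

a = 0.37                                      # arbitrary chiral angle
U = expm(1j * a * g5)

for g in gammas:
    assert np.allclose(U @ g @ U, g)          # uses {g^mu, g5} = 0
assert np.allclose(U @ U, expm(2j * a * g5))  # mass bilinear acquires exp(2 i a g5)
assert not np.allclose(U @ U, np.eye(4))
print("checks passed")
```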
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9427048563957214, "perplexity": 535.6221181170971}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670512.94/warc/CC-MAIN-20191120060344-20191120084344-00338.warc.gz"}
http://researchonline.ljmu.ac.uk/id/eprint/4161/
# Galaxy And Mass Assembly (GAMA): The absence of stellar mass segregation in galaxy groups and consistent predictions from GALFORM and EAGLE simulations Kafle, PR, Robotham, ASG, Lagos, CDP, Davies, LJ, Moffett, AJ, Driver, SP, Andrews, SK, Baldry, IK, Bland-Hawthorn, J, Brough, S, Cortese, L, Drinkwater, MJ, Finnegan, R, Hopkins, AM and Loveday, J (2016) Galaxy And Mass Assembly (GAMA): The absence of stellar mass segregation in galaxy groups and consistent predictions from GALFORM and EAGLE simulations. Monthly Notices of the Royal Astronomical Society, 463 (4). pp. 4194-4209. ISSN 0035-8711 We investigate the contentious issue of the presence, or lack thereof, of satellites mass segregation in galaxy groups using the Galaxy And Mass Assembly (GAMA) survey, the GALFORM semi-analytic and the EAGLE cosmological hydrodynamical simulation catalogues of galaxy groups. We select groups with halo mass $12 \leqslant \log(M_{\text{halo}}/h^{-1}M_\odot) <14.5$ and redshift $z \leqslant 0.32$ and probe the radial distribution of stellar mass out to twice the group virial radius. All the samples are carefully constructed to be complete in stellar mass at each redshift range and efforts are made to regularise the analysis for all the data. Our study shows negligible mass segregation in galaxy group environments with absolute gradients of $\lesssim0.08$ dex and also shows a lack of any redshift evolution. Moreover, we find that our results at least for the GAMA data are robust to different halo mass and group centre estimates. Furthermore, the EAGLE data allows us to probe much fainter luminosities ($r$-band magnitude of 22) as well as investigate the three-dimensional spatial distribution with intrinsic halo properties, beyond what the current observational data can offer. In both cases we find that the fainter EAGLE data show a very mild spatial mass segregation at $z \leqslant 0.22$, which is again not apparent at higher redshift. Interestingly, our results are in contrast to some earlier findings using the Sloan Digital Sky Survey. We investigate the source of the disagreement and suggest that subtle differences between the group finding algorithms could be the root cause.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9295302629470825, "perplexity": 3687.0192236936173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991759.1/warc/CC-MAIN-20210510174005-20210510204005-00318.warc.gz"}
http://groupprops.subwiki.org/wiki/Lucas'_theorem_prime_power_case
# Lucas' theorem prime power case

## Statement

### Symbolic statement

Let $n = p^r m$, where $p$ is a prime and $m$ is relatively prime to $p$. Then:

$$\binom{p^r m}{p^r} \equiv m \pmod{p}$$

## Proof

### Proof using group theory

Recall that a proof of Sylow's theorem invokes Lucas' theorem at the following critical juncture: we consider the size of the set of subsets of size $p^r$, on which the group of order $p^r m$ is acting, and then infer that there exists an orbit of size relatively prime to $p$, whose isotropy subgroup is hence a Sylow subgroup. In the proof of Lucas' theorem, we employ the same tactic in reverse, but instead of taking any arbitrary group, we start off with the cyclic group of order $p^r m$.

Formally, here's the proof. Consider the cyclic group $G$ of order $p^r m$. We need to show that the number of subsets of size $p^r$ in $G$ is $m$ modulo $p$. To prove this, we claim that under the action of left multiplication by $G$, there is exactly one orbit whose size is relatively prime to $p$, and the size of this orbit is $m$.

Consider an orbit whose size is relatively prime to $p$. Then, the size of this orbit must be a divisor of $m$. Further, since the union of members of any orbit is the whole of $G$, the number of members in the orbit must be at least $m$, equality occurring iff they are pairwise disjoint. Combining the two facts, the orbit has size exactly $m$, and hence all the members of the orbit are pairwise disjoint.

We thus have a situation where there is a subset of size $p^r$ in $G$ such that all its left translates are pairwise disjoint. Basic group theory tells us that this subset must be a left coset of a subgroup of size $p^r$, and moreover, the subgroups of order $p^r$ are in bijective correspondence with such orbits. We now use the fact that the cyclic group of order $p^r m$ has a unique subgroup of order $p^r$, and we are done.
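A brute-force check of the statement for small values of $p$, $r$ and $m$ (standard library only; the ranges tested are arbitrary):

```python
# Check binom(p^r * m, p^r) == m (mod p) for small primes p, exponents r,
# and m relatively prime to p.
from math import comb, gcd

for p in (2, 3, 5, 7):
    for r in (1, 2, 3):
        for m in range(1, 13):
            if gcd(m, p) != 1:
                continue
            assert comb(p**r * m, p**r) % p == m % p, (p, r, m)
print("all cases agree")
```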
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 27, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797720313072205, "perplexity": 187.14035328929072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448399455473.15/warc/CC-MAIN-20151124211055-00293-ip-10-71-132-137.ec2.internal.warc.gz"}
http://www.reference.com/browse/wiki/Cone_(linear_algebra)
# Cone (linear algebra)

In linear algebra, a (linear) cone is a subset of a vector space that is closed under multiplication by positive scalars.

## Definition

A subset C of a real vector space V is a (linear) cone if and only if $\lambda x$ belongs to C for any x in C and any positive scalar $\lambda$ of V. The condition can be written more succinctly as "λC = C for any positive scalar λ of V".

The definition makes sense for any vector space V which allows the notion of "positive scalar", such as spaces over the rational, algebraic, or (more commonly) real numbers. The concept can also be extended for any vector space V whose scalar field is a superset of those fields (such as the complex numbers, quaternions, etc.), to the extent that such a space can be viewed as a real vector space of higher dimension.

## Boolean, additive and linear closure

Linear cones are closed under Boolean operations (set intersection, union, and complement). They are also closed under addition (if C and D are cones, so is C + D) and arbitrary linear maps. In particular, if C is a cone, so is its opposite cone -C.

## Pointed and blunt cones

A cone C is said to be pointed if it includes the null vector (origin) 0 of the vector space; otherwise C is said to be blunt. Note that a pointed cone is closed under multiplication by arbitrary non-negative (not just positive) scalars.

## The cone of a set

The (linear) cone of an arbitrary subset X of V is the set $X^*$ of all vectors $\lambda x$ where x belongs to X and λ is a positive real number. With this definition, the cone of X is pointed or blunt depending on whether X contains the origin 0 or not. If "positive" is replaced by "non-negative" in the definitions, the cone $X^*$ will always be pointed.

## Salient cone

A cone C is salient if it does not contain any pair of opposite nonzero vectors; that is, if and only if $C \cap (-C) \subseteq \{0\}$.

## Spherical section and projection

Let |·| be any norm for V, with the property that the norm of any vector is a scalar of V. By definition, a nonzero vector x belongs to a cone C of V if and only if the unit-norm vector x/|x| belongs to C. Therefore, a blunt (or pointed) cone C is completely specified by its central projection onto the sphere S; that is, by the set

$$C' = \left\{\, \frac{x}{|x|} \;:\; x \in C,\ x \neq \mathbf{0} \,\right\}$$

It follows that there is a one-to-one correspondence between blunt (or pointed) cones and subsets of the unit-norm sphere of V, the set

$$S = \left\{\, x \in V \;:\; |x| = 1 \,\right\}$$

Indeed, the central projection C' is simply the spherical section of C, the set $C \cap S$ of its unit-norm elements.

A cone C is closed with respect to the norm |·| if it is a closed set in the topology induced by that norm. That is the case if and only if C is pointed and its spherical section is a closed subset of S. Note that the cone C is salient if and only if its spherical section does not contain two opposite vectors; that is, $C' \cap (-C') = \{\}$.

## Convex cone

A convex cone is a cone that is closed under convex combinations, i.e. if and only if αx + βy belongs to C for any non-negative scalars α, β with α + β = 1.

## Affine cone

If C - v is a cone for some v in V, then C is said to be an (affine) cone with vertex v.

## Proper cone

The term proper cone is variously defined, depending on the context. It often means a salient and convex cone, or a cone that is contained in an open halfspace of V.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9715696573257446, "perplexity": 491.439807278303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422118888119.65/warc/CC-MAIN-20150124170128-00022-ip-10-180-212-252.ec2.internal.warc.gz"}
http://www.newton.ac.uk/programmes/TOD/seminars/2012083011301.html
# TOD

## Seminar

### Stochastic travelling waves in bistable biochemical system: Numerical and mathematical analysis

Lipniacki, T (Polish Academy of Sciences)

Thursday 30 August 2012, 11:30-12:30

Seminar Room 1, Newton Institute

#### Abstract

I will discuss stochastic transitions in a bistable biochemical system of trans-activating molecules on a hexagonal lattice. Kinetic Monte Carlo simulations demonstrated that the steady state of the system is controlled by the diffusion and the size of the reactor. In the considered example, in a small reactor the system remains inactive. In a larger domain, however, the system activates spontaneously at some place in the reactor and then the activity wave propagates until the whole domain becomes active. The expected time to activation grows exponentially with the diffusion coefficient. I will interpret these results by analytical considerations of a simpler bistable system, whose evolution is equivalent to a one-dimensional birth and death process.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652366995811462, "perplexity": 2596.9155411678657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
http://mathhelpforum.com/number-theory/183170-gcd-f5-print.html
gcd in F5 • Jun 16th 2011, 07:52 PM wik_chick88 gcd in F5 calculate $gcd(x^3 + 2x^2 + 3x - 1, 2x^2 - x - 1)$ in $F_{5}$ does $6x^2 \equiv 0$, because we are working in $F_{5}$? • Jun 16th 2011, 08:28 PM TheEmptySet Re: gcd in F5 Quote: Originally Posted by wik_chick88 calculate $gcd(x^3 + 2x^2 + 3x - 1, 2x^2 - x - 1)$ in $F_{5}$ does $6x^2 \equiv 0$, because we are working in $F_{5}$? No, if that were the case it would be the additive identity. Just reduce the coefficient mod 5 $6x^2=1x^2=x^2$ • Jun 16th 2011, 08:47 PM Also sprach Zarathustra Re: gcd in F5 Quote: Originally Posted by wik_chick88 calculate $gcd(x^3 + 2x^2 + 3x - 1, 2x^2 - x - 1)$ in $F_{5}$ does $6x^2 \equiv 0$, because we are working in $F_{5}$?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812333583831787, "perplexity": 3921.5012648387724}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00179-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/photon-spin-experimental-evidence.243920/
# Photon spin - experimental evidence 1. Jul 7, 2008 ### Usaf Moji I read somewhere that if a beam of photons all of like polarization are directed towards the surface of a disk (the disk being capable of rotation), the disk will rotate. This rotation is supposed to be experimental evidence confirming that photons have angular momentum. Is this true? 2. Jul 8, 2008 ### clem It is true if the photons are absorbed or reflected. 3. Dec 4, 2008 ### turin Where can you get such a beam of photons? Doesn't the EM wave classically carry angular momentum? How can you distinguish the classical angular momentum from the spin of individual photons? 4. Dec 5, 2008 ### clem The "classical wave" is just a huge number of photons. 5. Dec 5, 2008 ### turin No. You are talking about the correspondence between classical and quantum. What I'm saying is that, even if you don't assume a quantum for the electromagnetic wave (a photon), there is still angular momentum carried by the wave; this does not require quantum mechanics. Since classical E&M preceeds QM, there is no reason to believe that a transfrer of angular momentum from the EM wave to an object is evidence for QM; it is already there in classical E&M. 6. Dec 5, 2008 ### clem You are right in that "there is no reason to believe that a transfer of angular momentum from the EM wave to an object is evidence for QM". But if we believe that photons exist (Don't we?), then "This rotation is ... experimental evidence confirming that photons have angular momentum.", which is what was asked. 7. Dec 5, 2008 ### turin I suppose I am splitting hairs, here. My point is that a transfer of momentum is insufficient to demonstrate photon angular momentum; the transfer of momentum must be a specific discrete amount in order to demonstrate photon angular momentum. So, if single photons hit the object periodically, then you could see this effect, but if you have "a huge number of photons", then you have no way of separating this tiny effect from other tiny effects, say, the beam hitting at a slight angle and slightly off axis. Similar Discussions: Photon spin - experimental evidence
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9444677829742432, "perplexity": 849.3130781374538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948587577.92/warc/CC-MAIN-20171216104016-20171216130016-00269.warc.gz"}
http://math.stackexchange.com/questions/267059/compound-poisson-process-calculate-e-left-sum-k-1n-tx-k-et-t-k-righ
# Compound Poisson process: calculate $E\left( \sum_{k=1}^{N_t}X_k e^{t-T_k} \right)$, $X_k$ i.i.d., $T_k$ arrival time Let $N_t$ be a Poisson process with rate $\lambda$. $T_k$ the inter arrival times of $N_t$. $\{X_k\}$ a collection of i.i.d. random variables with mean $\mu$. $X_k$ is independent of $N_t$. Calculate the expectation of $$S_t= \sum_{k=1}^{N_t} X_k e^{t-T_k}.$$ Given $N_t$, the inter arrival times are uniformly distributed on $[0,t]$. Hence, $T_k \sim \text{Beta}(k,n-k+1)$ and $$E\left( \left. e^{-T_k}\right| N_t=n \right)=\frac{1}{B(k,n-k+1)}\int_0^1 e^{-x}x^{k-1} (1-x)^{n-k} dx.$$ I don't see how to compute this integral. - Use $\frac{1}{\operatorname{B}(k,n-k+1)} = n \binom{n-1}{k-1}$: $$\sum_{k=1}^n \frac{x^{k-1} (1-x)^{n-k}}{\operatorname{B}(k,n-k+1)} = n \sum_{k=1}^{n} \binom{n-1}{k-1} x^{k-1} (1-x)^{(n-1)-(k-1)} = n$$ Thus: $$\mathbb{E}\left( \sum_{k=1}^{N_t} X_k \mathrm{e}^{t-T_k} \right) = \mathbb{E}\left( \mathbb{E}\left( \sum_{k=1}^{N_t} X_k \mathrm{e}^{t-T_k} \Big| N_t \right) \right) = \mathbb{E}(X) \mathbb{E}\left( N_t \int_0^1 \mathrm{e}^{t-t x} \mathrm{d} x \right) = \mathbb{E}(X) \mathbb{E}\left( N_t \right) \frac{\exp(t)-1}{t} = \lambda \left( \exp(t)-1 \right)\mathbb{E}(X)$$ -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9416717290878296, "perplexity": 211.5962680889487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678690318/warc/CC-MAIN-20140313024450-00077-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.arxiv-vanity.com/papers/0804.3972/
# Toward an AdS/cold atoms correspondence: a geometric realization of the Schrödinger symmetry D. T. Son Institute for Nuclear Theory, University of Washington, Seattle, Washington 98195-1550, USA April 2008 ###### Abstract We discuss a realization of the nonrelativistic conformal group (the Schrödinger group) as the symmetry of a spacetime. We write down a toy model in which this geometry is a solution to field equations. We discuss various issues related to nonrelativistic holography. In particular, we argue that free fermions and fermions at unitarity correspond to the same bulk theory with different choices for the near-boundary asymptotics corresponding to the source and the expectation value of one operator. We describe an extended version of nonrelativistic general coordinate invariance which is realized holographically. ###### pacs: 11.25.Tq, 03.75.Ss preprint: INT PUB 08-08 ## I Introduction The anti–de Sitter/conformal field theory (AdS/CFT) correspondence Maldacena:1997re ; Gubser:1998bc ; Witten:1998qj establishes the equivalence between a conformal field theory in flat space and a string theory in a higher-dimensional curved space. The best known example is the equivalence between supersymmetric Yang-Mills theory and type IIB string theory in AdSS space. The strong coupling limit of the field theory corresponds to the supergravity limit in which the string theory can be solved. In the recent literature, the supersymmetric Yang-Mills theory at infinite ’t Hooft coupling is frequently used as a prototype to illustrate features of strongly coupled gauge theories. There exist, in nonrelativistic physics, another prototype of strong coupling: fermions at unitarity Eagles ; Leggett ; Nozieres . This is the system of fermions interacting through a short-ranged potential which is fine-tuned to support a zero-energy bound state. The system is scale invariant in the limit of zero-range potential. Since its experimental realizations using trapped cold atoms at the Feshbach resonance OHara ; Jin ; Grimm ; Ketterle ; Thomas ; Salomon , this system has attracted enormous interest. One may wonder if there exists a gravity dual of fermions at unitarity. If such a gravity dual exists, it would extend the notion of holography to nonrelativistic physics, and could potentially bring new intuition to this important strongly coupled system. Similarities between the super–Yang-Mills theory and unitarity fermions indeed exist, the most important of which is scale invariance. The have been some speculations on the possible relevance of the universal AdS/CFT value of the viscosity/entropy density ratio Kovtun:2004de for unitarity fermions Gelman:2004fj ; Schafer:2007pr ; Thomas-visc . Despite these discussions, no serious attempt to construct a gravity dual of unitarity fermions has been made to date. In this paper, we do not claim to have found the gravity dual of the unitary Fermi gas. However, we take the possible first step toward such a duality. We will construct a geometry whose symmetry coincides with the Schrödinger symmetry Hagen:1972pd ; Niederer:1972 , which is the symmetry group of fermions at unitarity Mehen:1999nd . In doing so, we keep in mind that one of the main evidences for gauge/gravity duality is the coincidence between the conformal symmetry of the field theory and the symmetry of the AdS space. 
On the basis of this geometric realization of the Schrödinger symmetry, we will be able to discuss a nonrelativistic version of the AdS/CFT dictionary—the operator-state correspondence, the relation between dimensions of operators and masses of fields, etc. The structure of this paper is as follows. In Sec. II we give a short introduction to fermions at unitarity, emphasizing the field-theoretical aspects of the latter. We also review the Schrödinger algebra. In Sec. III we describe how Schrödinger symmetry can be embedded into a conformal symmetry in a higher dimension. We consider operator-field mapping in Sec. V. In Sec. VI we show how the conservation laws for mass, energy and momentum are realized holographically. We conclude with Sec. VII. In this paper always refers to the number of spatial dimensions in the nonrelativistic theory, so corresponds to the real world. ## Ii Review of fermions at unitarity and Schrödinger symmetry In this section we collect various known facts about fermions at unitarity and the Schrödinger symmetry. The goal is not to present an exhaustive treatment, but only to have a minimal amount of materials needed for later discussions. Further details can be found in Nishida:2007pj . We are mostly interested in vacuum correlation functions (zero temperature and zero chemical potential), but not in the thermodynamics of the system at nonzero chemical potential. The reasons are twofold: i) the chemical potential breaks the Schrödinger symmetry and ii) even at zero chemical potential there are nontrivial questions, such as the spectrum of primary operators (see below). We will comment on how chemical potential can be taken into account in Sec. VII. One way to arrive at the theory of unitarity fermions is to start from noninteracting fermions, L=iψ†∂tψ−|∇ψ|22m, (1) add a source coupled to the “dimer” field  Nishida:2006br , L=iψ†∂tψ−|∇ψ|22m+ϕ∗ψ↓ψ↑+ϕψ†↑ψ†↓, (2) and then promote the source to a dynamic field. There is no kinetic term for in the bare Lagrangian, but it will be generated by a fermion loop. Depending on the regularization scheme, one may need to add to (2) a counterterm to cancel the UV divergence in the one-loop selfenergy (such a term is needed in momentum cutoff regularization but not in dimensional regularization.) The theory defined by the Lagrangian (2) is UV complete in spatial dimension , including the physically most relevant case of . This system is called “fermions at unitarity,” which refers to the fact that the -wave scattering cross section between two fermions saturates the unitarity bound. Another description of fermions at unitarity is in terms of the Lagrangian L=iψ†∂tψ−|∇ψ|22m−c0ψ†↓ψ†↑ψ↑ψ↓. (3) where is an interaction constant. The interaction is irrelevant in spatial dimensions , and is marginal at . At there is a nontrivial fixed point at a finite and negative value of of order  Sachdev . The situation is similar to the nonlinear sigma model in dimensions. In the quantum-mechanical language, unitarity fermions are defined as a system with the free Hamiltonian H=∑ip2i2m, (4) but with a nontrivial Hilbert space, defined to contain those wavefunctions (where are coordinates of spin-up particles and are those of spin-down particles) which satisfy the following boundary conditions when a spin-up and a spin-down particle approach each other, ψ(x1,x2,…;y1,y2,…)→C|xi−yj|+O(|xi−yj|). (5) where depends only on coordinates other than and . 
This boundary condition can be achieved by letting the fermions interact through some pairwise potential (say, a square-well potential) that has one bound state at threshold. In the limit of zero range of the potential , keeping the zero-energy bound state, the two-body wave function satisfies the boundary condition (5) and the physics is universal. Both free fermions and fermions at unitarity have the Schrödinger symmetry—the symmetry group of the Schrödinger equation in free space, which is the nonrelativistic version of conformal symmetry Mehen:1999nd . The generators of the Schrödinger algebra include temporal translation , spatial translations , rotations , Galilean boosts , dilatation (where time and space dilate with different factors: , ), one special conformal transformation [which takes , ], and the mass operator . The nonzero commutators are [Mij,Mkl]=i(δikMjl+δjlMik−δilMjk−δjkMil),[Mij,Pk]=i(δikPj−δjkPi),[Mij,Kk]=i(δikKj−δjkKi),[D,Pi]=−iPi,[D,Ki]=iKi,[Pi,Kj]=−iδijM,[D,H]=−2iH,[D,C]=2iC,[H,C]=iD. (6) The theory of unitarity fermions is also symmetric under an SU(2) group of spin rotations. The theory of unitarity fermions is an example of nonrelativistic conformal field theories (NRCFTs). Many concepts of relativistic CFT, such as scaling dimensions and primary operators, have counterparts in nonrelativistic CFTs. A local operator is said to have scaling dimension if . Primary operators satisfy . To solve the theory of unitarity fermions at zero temperature and chemical potential is, in particular, to find the spectrum of all primary operators. In the theory of unitarity fermions, there is a quantum-mechanical interpretation of the dimensions of primary operators WernerCastin ; Tan ; Nishida:2007pj . A primary operator with dimension and charges and with respect to the spin-up and spin-down particle numbers (the total particle numbers is ) corresponds to a solution of the zero-energy Schrödinger equation: (∑i∂2∂x2i+∑j∂2∂y2j)ψ(x1,x2,…,xN↑;y1,y2,…,y% N↓)=0, (7) which satisfies the boundary condition (5) and with a scaling behavior ψ(x1,x2,…,y1,y2,…)=Rνψ(Ωk), (8) where is an overall scale of the relative distances between , , and are dimensionless variables that are defined through the ratios of the relative distances. Equations (7) and (8) define, for given and , a discrete set of possible values for . For example, in three spatial dimensions, for , there are two possible values for : 0 and . For , , the lowest value for is . Each value of corresponds to an operator with dimension , which is related to by Δ=ν+dN2. (9) It has also been established that each primary operator corresponds to a eigenstate of the Hamiltonian of unitarity fermion in an isotropic harmonic potential of frequency  WernerCastin ; Tan ; Nishida:2007pj . The scaling dimension of the operator simply coincides with the energy of the state: E=Δℏω. (10) The first nontrivial operator is the dimer . It has dimension in the free theory, and in the theory of fermions at unitarity. This corresponds to the fact that the lowest energy state of two fermions with opposite spins in a harmonic potential is in the case of free fermions and for unitarity fermions. ## Iii Embedding the Schrödinger group into a conformal group To realize geometrically the Schrödinger symmetry, we first embed the Schrödinger group in spatial dimensions Sch() ( for the most interesting case of the unitarity Fermi gas) into the relativistic conformal algebra in spacetime dimensions O(, 2). 
The next step will be to realize the Schrödinger group as a symmetry of a dimensional spacetime background. That the Schrödinger algebra can be embedded into the relativistic conformal algebra can be seen from the following. Consider the massless Klein-Gordon equation in -dimensional Minkowski spacetime, □ϕ≡−∂2tϕ+d+1∑i=1∂2iϕ=0. (11) This equation is conformally invariant. Defining the light-cone coordinates, x±=x0±xd+1√2, (12) the Klein-Gordon equation becomes (−2∂∂x−∂∂x++d∑i=1∂2i)ϕ=0. (13) If we make an identification , then the equation has the form of the Schrödinger equation in free space, with the light-cone coordinate playing the role of time, (2im∂∂x++∂i∂i)ϕ=0. (14) This equation has the Schrödinger symmetry Sch(). Since the original Klein-Gordon equation has conformal symmetry, this means that Sch() is a subgroup of O(, 2). Let us now discuss the embedding explicitly. The conformal algebra is [~Mμν,~Mαβ]=i(ημα~Mνβ+ηνβ~Mμα−ημβ~Mνα−ηνα~Mμβ),[~Mμν,~Pα]=i(ημα~Pν−ηνα~Pμ),[~D,~Pμ]=−i~Pμ,[~D,~Kμ]=i~Kμ,[~Pμ,~Kν]=−2i(ημν~D+~Mμν), (15) where Greek indices run , and all other commutators are equal to 0. The tilde signs denote relativistic operators; we reserve untilded symbols for the nonrelativistic generators. We identify the light-cone momentum with the mass operator in the nonrelativistic theory. We now select all operators in the conformal algebra that commute with . Clearly these operators form a closed algebra, and it is easy to check that it is the Schrödinger algebra in spatial dimensions. The identification is as follows: M=~P+,H=~P−,Pi=~Pi,Mij=~Mij,Ki=~Mi+,D=~D+~M+−,C=~K+2. (16) From Eqs. (15) and (16) one finds the commutators between the untilded operators to be exactly the Schrödinger algebra, Eqs. (6). ## Iv Geometric realization of the Schrödinger symmetry To realize the Schrödinger symmetry geometrically, we will take the AdS metric, which is is invariant under the whole conformal group, and then deform it to reduce the symmetry down to the Schrödinger group. The AdS space, in Poincaré coordinates, is ds2=ημνdxμdxν+dz2z2. (17) The generators of the conformal group correspond to the following infinitesimal coordinates transformations that leave the metric unchanged, Pμ: xμ→xμ+aμ,D: xμ→(1−a)xμ,z→(1−a)z,Kμ: xμ→xμ+aμ(z2+x⋅x)−2xμ(a⋅x) (18) (here ). We will now deform the metric so to reduce the symmetry to the Schrödinger group. In particular, we want the metric to be invariant under , which is a linear combination of a boost along the direction and the scale transformation , but not separately under or . The following metric satisfies this condition: ds2=−2(dx+)2z4+−2dx+dx−+dxidxi+dz2z2. (19) It is straightforward to verify that the metric (19) exhibits a full Schrödinger symmetry. From Eqs. (16) and (18) one finds that the generators of the Schrödinger algebra correspond to the following isometries of the metric: Pi: xi→xi+ai,H: x+→x++a,M: x−→x−+a,Ki: xi→xi−aix+,x−→x−−aixi,D: xi→(1−a)xi,z→(1−a)z,x+→(1−a)2x+,x−→x−,C: z→(1−ax+)z,xi→(1−ax+)xi,x+→(1−ax+)x+,x−→x−−a2(xixi+z2). (20) We thus hypothesize that the gravity dual of the unitarity Fermi gas is a theory living on the background metric (19). Currently we have very little idea of what this theory is. We shall now discuss several issues related to this proposal. i) The mass in the Schrödinger algebra is mapped onto the light-cone momentum . 
In nonrelativistic theories the mass spectrum is normally discrete: for example, in the case of fermions at unitarity the mass of any operator is a multiple of the mass of the elementary fermion. It is possible that the light-cone coordinate $x^-$ is compactified, which would naturally give rise to the discreteness of the mass spectrum. ii) In AdS/CFT correspondence the number of colors of the field theory controls the magnitude of quantum effects on the string-theory side: in the limit of a large number of colors the string-theory side becomes a classical theory. The usual unitarity Fermi gas does not have such a large parameter, hence the dual theory probably has unsuppressed quantum effects. However, there exists an extension of the unitarity Fermi gas with Sp($2N$) symmetry [Sachdev; Radzihovsky]. The gravity dual of this theory may be a classical theory in the limit of large $N$, although with an infinite number of fields, similar to the conjectured dual of the critical O($N$) vector model in 2+1 dimensions [Klebanov:2002ja]. iii) We can write down a toy model in which the metric (19) is a solution to field equations. Consider the theory of gravity coupled to a massive vector field $C_\mu$ with a negative cosmological constant,

$$S=\int d^{d+2}x\,dz\,\sqrt{-g}\left(\frac12 R-\Lambda-\frac14 H_{\mu\nu}H^{\mu\nu}-\frac{m^2}{2}C_{\mu}C^{\mu}\right),\tag{21}$$

where $H_{\mu\nu}=\partial_{\mu}C_{\nu}-\partial_{\nu}C_{\mu}$. One can check that Eq. (19), together with

$$C^-=1,\tag{22}$$

is a solution to the coupled Einstein and Proca equations for the following choice of $\Lambda$ and $m^2$:

$$\Lambda=-\frac{(d+1)(d+2)}{2},\qquad m^2=2(d+2).\tag{23}$$

iv) Although the metric component $g_{++}$ has a singularity at $z\to0$, the metric has a plane-wave form and all scalar curvatures are finite. For example, the most singular component of the Ricci tensor, $R_{++}$, has a singularity at $z\to0$, as do the analogous components of the Weyl tensor. However, since $g^{++}=0$, any scalar constructed from the curvature tensor is regular. v) In terms of a dual field theory, the field $C_\mu$ with the mass given in Eq. (23) corresponds to a vector operator $\mathcal O_\mu$ with dimension $\Delta$, which can be found from the general formula

$$(\Delta-1)\left[\Delta+1-(d+2)\right]=2(d+2),\tag{24}$$

from which $\Delta=d+3$. We thus can think about the quantum field theory as an irrelevant deformation of the original CFT, with the action

$$S=S_{\rm CFT}+J\int d^{d+2}x\,\mathcal O_+.\tag{25}$$

## V Operator-field correspondence

Let us now discuss the relationship between the dimension of operators and masses of fields in this putative nonrelativistic AdS/CFT correspondence. Consider an operator $\mathcal O$ dual to a massive scalar field $\phi$ with mass $m_0$. We shall assume that it couples minimally to gravity,

$$S=-\int d^{d+3}x\,\sqrt{-g}\left(g^{\mu\nu}\partial_{\mu}\phi^*\partial_{\nu}\phi+m_0^2\phi^*\phi\right).\tag{26}$$

Assuming the light-cone coordinate $x^-$ is periodic, let us concentrate only on the Kaluza-Klein mode with $x^-$ momentum $M$, i.e., $\phi\propto e^{iMx^-}$. The action now becomes

$$S=\int d^{d+2}x\,dz\,\frac{1}{z^{d+3}}\left(2iMz^2\phi^*\partial_t\phi-z^2\partial_i\phi^*\partial_i\phi-z^2\partial_z\phi^*\partial_z\phi-m^2\phi^*\phi\right),\tag{27}$$

where the "nonrelativistic bulk mass" $m$ is related to the original mass by $m^2=m_0^2+2M^2$. Contributions to $m^2$ can also arise from interaction terms between $\phi$ and $C_\mu$, for example couplings of the form $\phi^*\phi\,C_{\mu}C^{\mu}$, etc. We therefore will treat $m^2$ as an independent parameter. The field equation for $\phi$ (in frequency-momentum space) is

$$\partial_z^2\phi-\frac{d+1}{z}\partial_z\phi+\left(2M\omega-\vec k^2-\frac{m^2}{z^2}\right)\phi=0.\tag{28}$$

The two independent solutions are

$$\phi_{\pm}=z^{d/2+1}K_{\pm\nu}(pz),\qquad p=(\vec k^2-2M\omega)^{1/2},\qquad \nu=\sqrt{m^2+\frac{(d+2)^2}{4}}.\tag{29}$$

As in the usual AdS/CFT correspondence, one choice of asymptotics corresponds to turning on a source for $\mathcal O$ in the boundary theory, and the other choice corresponds to a condensate of $\mathcal O$. One can distinguish two cases: 1. When $\nu\ge1$, the asymptotics $z^{(d+2)/2-\nu}$ is non-normalizable and $z^{(d+2)/2+\nu}$ is normalizable. Therefore the former corresponds to the source and the latter to the condensate. The correlation function of $\mathcal O$ is

$$\langle\mathcal O\mathcal O\rangle\sim(\vec k^2-2M\omega)^{\nu},\tag{30}$$

which translates into the scaling dimension

$$\Delta=\frac{d+2}{2}+\nu.\tag{31}$$

2. When $0<\nu<1$, both asymptotics are normalizable, and there is an ambiguity in the choice of the source and condensate boundary conditions.
These two choices should correspond to two different nonrelativistic CFTs. In one choice the operator has dimension $\frac{d+2}{2}+\nu$, and in the other choice $\frac{d+2}{2}-\nu$. It is similar to the situation discussed in [Klebanov:1999tb]. The smallest dimension of an operator one can get in this way is reached when $\nu\to1$. Therefore, there is a lower bound on operator dimensions,

$$\Delta>\frac{d}{2}.\tag{32}$$

This bound is very natural if one remembers that operator dimensions correspond to eigenvalues of the Hamiltonian in an external harmonic potential. For a system of particles in a harmonic potential, one can separate the center-of-mass motion from the relative motion. Equation (32) means that the total energy should be larger than the zero-point energy of the center-of-mass motion. The fact that there are pairs of nonrelativistic conformal field theories with two different values of the dimension of $\mathcal O$ is a welcome feature of the construction. In fact, free fermions and fermions at unitarity can be considered as such a pair. In the theory with free fermions the operator $\psi_\uparrow\psi_\downarrow$ has dimension $d$, and for unitarity fermions, this operator has dimension 2. The two numbers are symmetric with respect to $\frac{d+2}{2}$:

$$d=\frac{d+2}{2}+\frac{d-2}{2},\qquad 2=\frac{d+2}{2}-\frac{d-2}{2}.\tag{33}$$

Therefore, free fermions and fermions at unitarity should correspond to the same theory, but with different interpretations for the asymptotics of the field dual to the operator $\psi_\uparrow\psi_\downarrow$. A similar situation exists in the case of the Fermi gas at unitarity with two different masses for spin-up and spin-down fermions [Nishida:2007mr]. In a certain interval of the mass ratios (between approximately 8.6 and 13.6), there exist two different scale-invariant theories which differ from each other, in our language, by the dimension of a three-body $p$-wave operator. At the upper end of the interval (mass ratio 13.6) the dimension of this operator tends to 5/2 in both theories; at the lower end it has dimension 3/2 in the theory with a three-body resonance and 7/2 in the theory without a three-body resonance.

## VI Turning on sources

Let us now try to turn on sources coupled to conserved currents in the boundary theory. That would correspond to turning on non-normalizable modes. For the fields that enter the model action (21), the general behavior of the non-normalizable part of the metric and the field near $z=0$ is

$$ds^2=-\frac{2e^{-2\Phi}}{z^4}(dx^+-B_idx^i)^2-\frac{2e^{-\Phi}}{z^2}(dx^+-B_idx^i)(dx^--A_0dx^+-A_idx^i)+\frac{g_{ij}dx^idx^j+dz^2}{z^2}+O(z^0),\qquad C^-=1.\tag{34}$$

We have chosen the gauge in which $g_{zz}=1/z^2$ and there are no $dz\,dx^{\mu}$ cross terms. The non-normalizable metric fluctuations are parametrized by the functions $A_0$, $A_i$, $\Phi$, $B_i$, and $g_{ij}$ of $t\equiv x^+$ and $x^i$. These functions are interpreted as background fields, on which the boundary theory exists. Following the general philosophy of AdS/CFT correspondence, we assume that the partition function of the higher-dimensional theory with the boundary condition (34) is equal to the partition function of an NRCFT in the background fields,

$$Z=Z[A_0,A_i,\Phi,B_i,g_{ij}].\tag{35}$$

This partition function should be invariant with respect to a group of gauge transformations acting on the background fields, which we will derive. The gauge condition does not completely fix the metric: there is a residual gauge symmetry parametrized by arbitrary functions of $t$ and $x$ (but not of $z$):

$$t\to t'=t+\xi^t(t,x),\qquad x^-\to x^{-\prime}=x^-+\xi^-(t,x),\qquad x^i\to x^{i\prime}=x^i+\xi^i(t,x),\tag{36}$$

and another set of infinitesimal transformations characterized by a function $\omega(t,x)$,

$$z\to z'=z-\omega(t,x)\,z,\qquad x^{\mu}\to x^{\mu\prime}=x^{\mu}+\frac12 g^{\mu\nu}\partial_{\nu}\omega.\tag{37}$$

Consider first (36).
Under these residual gauge transformations, the fields entering the metric (34) change in the following way:

$$\begin{aligned}
\delta A_0&=\dot\xi^--A_0\dot\xi^t-A_i\dot\xi^i-\xi^{\mu}\partial_{\mu}A_0,\\
\delta A_i&=\partial_i\xi^--A_0\partial_i\xi^t-e^{\Phi}g_{ij}\dot\xi^j-\xi^{\mu}\partial_{\mu}A_i-A_j\partial_i\xi^j,\\
\delta\Phi&=\dot\xi^t-B_i\dot\xi^i-\xi^{\mu}\partial_{\mu}\Phi,\\
\delta B_i&=\partial_i\xi^t+B_i(\dot\xi^t-B_j\dot\xi^j)-\xi^{\mu}\partial_{\mu}B_i-B_j\partial_i\xi^j,\\
\delta g_{ij}&=-(B_ig_{jk}+B_jg_{ik})\dot\xi^k-\xi^{\mu}\partial_{\mu}g_{ij}-g_{kj}\partial_i\xi^k-g_{ik}\partial_j\xi^k,
\end{aligned}\tag{38}$$

where the overdot denotes the time derivative and $\xi^{\mu}\partial_{\mu}\equiv\xi^t\partial_t+\xi^i\partial_i$. The residual gauge symmetry implies that the partition function of the boundary theory should be invariant under such transformations,

$$\delta Z=0.\tag{39}$$

Can one formulate NRCFTs on background fields with this symmetry? In fact, it can be done explicitly in the theory of free nonrelativistic particles. One introduces the interaction with the background fields in the following manner:

$$S=\int dt\,d{\bf x}\,\sqrt g\,e^{-\Phi}\left[\frac{i}{2}e^{\Phi}\left(\psi^{\dagger}D_t\psi-D_t\psi^{\dagger}\,\psi\right)-\frac{g^{ij}}{2m}D_i\psi^{\dagger}D_j\psi-\frac{B^i}{2m}\left(D_t\psi^{\dagger}D_i\psi+D_i\psi^{\dagger}D_t\psi\right)-\frac{B^2}{2m}D_t\psi^{\dagger}D_t\psi\right],\tag{40}$$

where $g^{ij}$ is the inverse matrix of $g_{ij}$, $g=\det g_{ij}$, $B^i=g^{ij}B_j$, $B^2=B_iB^i$, and $D_t$ and $D_i$ are covariant derivatives with respect to $A_0$ and $A_i$. One can verify directly that the action (40) is invariant under the transformations (38), if $\psi$ transforms as

$$\delta\psi=im\xi^-\psi-\xi^{\mu}\partial_{\mu}\psi.\tag{41}$$

In fact, this invariance is an extension of the general coordinate invariance previously discussed in [Son:2005rv]. The invariance found in [Son:2005rv] corresponds to restricting $\Phi=B_i=0$ in all formulas. To linear order in the external fields, the action is

$$S=S^{(0)}+\int dt\,d{\bf x}\left(A_0\rho+A_ij^i+\Phi\,\epsilon+B_ij_{\epsilon}^i+\frac12 h_{ij}\Pi^{ij}\right),\tag{42}$$

and from Eq. (40) one reads out the physical meaning of the operators coupled to the external sources:

• $h_{ij}$ is coupled to the stress tensor $\Pi^{ij}$,
• $A_0$ and $A_i$ are coupled to the mass density $\rho$ and the mass current $j^i$,
• $\Phi$ and $B_i$ are coupled to the energy density $\epsilon$ and the energy current $j_{\epsilon}^i$.

The invariance of the partition function with respect to the gauge transformations (38) leads to an infinite set of Takahashi-Ward identities for the correlation functions. The simplest ones are for the one-point functions. The fact that the group of invariance includes a gauge transformation of $A_{\mu}$, $\delta A_0=\dot\xi^-$ and $\delta A_i=\partial_i\xi^-$, guarantees the conservation of mass. The fact that the linear parts in the transformation laws for $\Phi$ and $B_i$ look like a gauge transformation, $\delta\Phi=\dot\xi^t$ and $\delta B_i=\partial_i\xi^t$, leads to energy conservation in the absence of external fields:

$$\partial_t\left\langle\frac{\partial\ln Z}{\partial\Phi}\right\rangle+\partial_i\left\langle\frac{\partial\ln Z}{\partial B_i}\right\rangle\bigg|_{A_{\mu}=\Phi=B_i=h_{ij}=0}=0.\tag{43}$$

Energy is not conserved in a general background (which is natural, since the background fields exert external forces on the system). Similarly, momentum conservation (and the fact that momentum density coincides with mass current) is related to the terms linear in $\xi^i$ in $\delta A_i$ and $\delta g_{ij}$: $\delta A_i\supset-e^{\Phi}g_{ij}\dot\xi^j$ and $\delta g_{ij}\supset-g_{kj}\partial_i\xi^k-g_{ik}\partial_j\xi^k$. Let us now turn to the transformations (37), under which

$$\delta\Phi=2\omega,\qquad \delta g_{ij}=-2\omega g_{ij}.\tag{44}$$

The invariance of the partition function with respect to this transformation implies

$$2\epsilon=\Pi_{ii},\tag{45}$$

which is the familiar relationship between energy and pressure,

$$E=\frac{d}{2}PV,\tag{46}$$

valid for the free gas as well as for the Fermi gas at unitarity. The action (40) is not invariant under (44), but it can be made so by replacing the "minimal coupling" by a "conformal coupling" to the external fields. Therefore, the proposed holography is consistent with conservation laws and the universal thermodynamic relation between energy and pressure.

## VII Conclusion

The main goal of the paper is to construct a geometry with the symmetry of the Schrödinger group. The existence of such a geometric realization makes it possible to discuss the possibility of a dual description of the Fermi gas at unitarity at a concrete level. It remains to be seen if holography is a notion as useful in nonrelativistic physics as it is for relativistic quantum field theories. At the very least, one should expect holography to provide toy models with Schrödinger symmetry. In this paper we have considered only the properties of the vacuum correlation functions.
In order to construct the gravity dual of the finite-density ground state, about which a lot is known both experimentally and theoretically, one should turn on a background $A_0$ in the metric (34). Superfluidity of the system should be encoded in the condensation of the scalar field dual to the dimer operator (whose dimension is 2 in the case of unitarity fermions, cf. [Hartnoll:2008vx; Gubser:2008zu]). It would be interesting to find black-hole metrics which realize nonrelativistic hydrodynamics and superfluid hydrodynamics. We defer this problem to future work.

###### Acknowledgements.

The author thanks A. Karch and Y. Nishida for discussions leading to this work, and S. Hartnoll, V. Hubeny, D. Mateos, H. Liu, K. Rajagopal, M. Rangamani, S. Shenker, and M. Stephanov for valuable comments. This work is supported, in part, by DOE Grant DE-FG02-00ER41132. Note added—After this work was completed, J. McGreevy informed the author that he and K. Balasubramanian have also obtained the metric (19) and determined that it has nonrelativistic conformal symmetry [McGreevy].
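As a quick consistency check of the numbers quoted above (a worked aside; nothing is assumed beyond Eqs. (9), (10), (23), (24) and (33), with $d=3$ in the second part):

$$\begin{aligned}
&\text{Eq. (24) with }\Delta=d+3:\quad(\Delta-1)\bigl[\Delta+1-(d+2)\bigr]=(d+2)\cdot 2=2(d+2)=m^2,\\
&\text{Eq. (33) with }d=3:\quad\Delta_{\text{free}}=\tfrac{5}{2}+\tfrac{1}{2}=3,\qquad\Delta_{\text{unitarity}}=\tfrac{5}{2}-\tfrac{1}{2}=2,\\
&\text{so, by }E=\Delta\hbar\omega,\ \text{two trapped fermions have }E=3\hbar\omega\ \text{(free) and }E=2\hbar\omega\ \text{(unitarity), as stated above.}
\end{aligned}$$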
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9603438377380371, "perplexity": 557.3946103120539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363125.46/warc/CC-MAIN-20211204215252-20211205005252-00163.warc.gz"}
http://calculator.tutorcircle.com/nernst-equation-calculator.html
# Nernst Equation Calculator

The Nernst equation determines the half-cell reduction potential under non-standard-state conditions. If the cell potential is zero, the reaction is at equilibrium. The Nernst equation in the non-standard state is given by

$E=E^{\circ}-\frac{RT}{nF}\ln Q$

The Nernst equation calculator determines the reduction potential of a reaction at 25$^{\circ}$C. Therefore, the Nernst equation at 25$^{\circ}$C is given by

$E=E^{\circ}-\frac{0.05916}{n}\log_{10}\frac{a_{Red}}{a_{Ox}}$

Where, E = reduction potential in V, $E^{\circ}$ = standard cell potential in V, R = gas constant = 8.314 J/mol-K, T = temperature in K, n = number of electron moles transferred in mol, F = Faraday's constant = 96500 coulombs/mol, Q = reaction quotient.

## Steps

Step 1: Read the problem and put down the given values.
Step 2: Substitute the values into the Nernst equation and get the cell potential of the reaction.

## Problems

Given below are some problems based on the Nernst equation.

### Solved Examples

Question 1: Determine the reduction potential of the cell Sn(s)|Sn$^{2+}$(0.15 M)||Ag$^{+}$(1.7 M)|Ag(s), if the standard cell potential is given as +0.94 V at 25$^{\circ}$C.

Solution:
Step 1: Given parameter values: [Sn$^{2+}$] = 0.15 M, [Ag$^{+}$] = 1.7 M, $E^{\circ}$ = +0.94 V, n = 2.
Step 2: The reduction potential of the reaction is
$E=E^{\circ}-\frac{0.05916}{n}\log_{10}\frac{a_{Red}}{a_{Ox}}$
$E=+0.94-\frac{0.05916}{2}\log_{10}\frac{[0.15]}{[1.7]^2}$
$E=+0.94-0.02958\times\log_{10}(0.0519)$
$E=+0.978\ \text{V}$

Question 2: Determine the reduction potential of the cell Fe(s)|Fe$^{2+}$(aq)(0.1 M)||Cu$^{2+}$(aq)(0.3 M)|Cu(s), if the standard cell potential is given as +0.78 V at 25$^{\circ}$C.

Solution:
Step 1: Given parameter values: [Cu$^{2+}$] = 0.3 M, [Fe$^{2+}$] = 0.1 M, $E^{\circ}$ = +0.78 V, n = 2.
Step 2: The reduction potential of the reaction is
$E=E^{\circ}-\frac{0.05916}{n}\log_{10}\frac{a_{Red}}{a_{Ox}}$
$E=+0.78-\frac{0.05916}{2}\log_{10}\frac{[0.1]}{[0.3]}$
$E=+0.78-0.02958\times\log_{10}(0.33)$
$E=+0.794\ \text{V}$
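As a quick numerical cross-check of the worked examples, the 25 °C form of the equation can be evaluated directly. This small script is illustrative only and is not part of the calculator page; the function name and structure are my own:

```python
from math import log10

def nernst_25c(e_standard, n, q):
    """Reduction potential at 25 C: E = E0 - (0.05916 / n) * log10(Q)."""
    return e_standard - (0.05916 / n) * log10(q)

# Example 1: Q = [Sn2+] / [Ag+]^2 = 0.15 / 1.7**2
print(round(nernst_25c(0.94, 2, 0.15 / 1.7**2), 3))  # ~0.978 V, matching the worked answer
```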
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9267755150794983, "perplexity": 2617.613626331851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219495.97/warc/CC-MAIN-20180822045838-20180822065838-00302.warc.gz"}
http://mathoverflow.net/questions/70283/differential-equations-satisfied-by-modular-forms
# Differential Equations Satisfied by Modular Forms In Verrill's paper preprint here, she has the following theorem which is from a paper of Stiller. It states that Let $\Gamma$ be a discrete subgroup of $SL_{2}(\mathbb{R})$ commensurable with $SL_{2}(\mathbb{Z})$. For $f \in M_{k}(\Gamma)$ (the space of weight $k$ modular forms) and $t \in M_{0}(\Gamma)$ (the space of meromorphic weight 0 modular forms), if $f = \sum_{n \geq 0}b_{n}t^{n}$ near $t = 0$, then there is a linear order $k + 1$ differential equation satisfied by $g(x) = \sum_{n \geq 0} b_{n}x^{n}$, of the form $$P_{k + 1}(x)\frac{d^{k + 1}g}{dx^{k + 1}} + P_{k}(x)\frac{d^{k}g}{dx^{k}} + \cdots + P_{0}(x)g = 0$$ where $P_{i}(x)$ are algebraic functions in $x$. If we take $t$ to be a Hauptmodul for $\Gamma$, then $P_{i}(x)$ are rational functions. Hence by multiplying by a suitable polynomial, we can in fact assume that the $P_{i}(x)$'s are polynomials. My question is that how does one get explicit bounds on the degrees of these $P_{i}(x)$'s (specifically in the case when $k = 1$)? -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9928482174873352, "perplexity": 98.69651162208689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701159985.29/warc/CC-MAIN-20160205193919-00231-ip-10-236-182-209.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/127184-harder-limits.html
1. ## Harder limits The function f is differentiable at a. Find $\lim_{h\to 0}\frac{f(a+ph)-f(a-ph)}{h}$ I'd assume its something to do with manipulating the definition $\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}$ but have no idea how. 2. Originally Posted by vuze88 The function f is differentiable at a. Find $\lim_{h\to 0}\frac{f(a+ph)-f(a-ph)}{h}$ I'd assume its something to do with manipulating the definition $\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}$ but have no idea how. $\lim_{h\to 0}\frac{f(a+ph)-f(a-ph)}{h}=p\lim_{h\to 0}\frac{f(a+ph)-f(a-ph)+f(a)-f(a)}{ph}\\$ could you finish it? 3. so is the answer $2pf'(a)$ 4. Originally Posted by felper $\lim_{h\to 0}\frac{f(a+ph)-f(a-ph)}{h}=p\lim_{h\to 0}\frac{f(a+ph)-f(a-ph)+f(a)-f(a)}{ph}\\$ could you finish it? $p\lim_{h\to 0}\frac{f(a+ph)-f(a-ph)+f(a)-f(a)}{ph}$ $=p\lim_{h\to 0}\frac{f(a+ph)-f(a)}{ph}+\frac{f(a)-f(a-ph)}{ph}$ $=p\lim_{h\to 0}\frac{f(a+ph)-f(a)}{ph}+\frac{f(a-ph)-f(a)}{-ph}=2pf'(a)$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9892638921737671, "perplexity": 1233.5390710772226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699632815/warc/CC-MAIN-20130516102032-00004-ip-10-60-113-184.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/252941/what-are-useful-properties-of-limit-inferior-superior-of-real-valued-function
What are useful properties of limit inferior/superior of real-valued function? To make it clear, this is the definition from wikipedia; Let $X,Y$ be topological spaces and $E\subset X$. Let $Y$ be an ordered set and $f:E\rightarrow Y$ be a function. Then, $\limsup_{x\to a} f(x) \triangleq \inf \{\sup \{f(x)\in Y|x\in U\cap E \setminus \{a\}\}\in Y|U \text{ is open}, U\cap E \setminus \{a\} ≠ \emptyset, a\in U\} \\ \liminf_{x\to a} f(x) \triangleq \sup \{\inf \{f(x)\in Y|x\in U\cap E \setminus \{a\}\}\in Y|U \text{ is open}, U\cap E \setminus \{a\} ≠ \emptyset, a\in U\}$ =============== Let $E\subset \mathbb{R}$ and $f:E\rightarrow \mathbb{R}$ be a function and $a$ be a limit point of $E$. Then, it can be shown; $\limsup_{x\to a} = \lim_{\epsilon\to 0} \sup\{f(x)\in \overline{\mathbb{R}}|x\in B(x,\epsilon)\cap E \setminus \{a\}\} \\ \liminf_{x\to a} = \lim_{\epsilon\to 0} \inf\{f(x)\in \overline{\mathbb{R}}|x\in B(x,\epsilon)\cap E \setminus \{a\}\}$. (where $\overline{\mathbb{R}} = \mathbb{R} \cup \{+\infty,-\infty\}$) Also, if $E$ is unbounded; $\limsup_{x\to\infty}= \lim_{\epsilon\to\infty} \sup\{f(x)\in \overline{\mathbb{R}}| \epsilon < x\in E\} \\ \liminf_{x\to\infty}= \lim_{\epsilon\to\infty} \inf\{f(x)\in \overline{\mathbb{R}}| \epsilon < x\in E\}$. ==================== With this definition, what are useful properties of limit inferior and superior of $f:E\rightarrow \mathbb{R}$ where $E\subset \mathbb{R}$? So i can try to prove those properties :) (i.e. superadditivity) (I'm asking this question, since i know there are many useful properties of limit inferior and superior of a sequence, so i think it has those properties too)(Since it seems it's a generalization of that of a sequence) Till now, i have only shown that $\limsup_{x\to a} f(x) = \liminf_{x\to a} f(x)=A$ iff $\lim_{x\to a} f(x)=A$. - You can reduce the function case to the sequence case by noting that $$\limsup_{x\to a} f(x) = \sup_{\substack{(x_n) \in (E\setminus\{a\})^{\mathbb N}\\ x_n \to a}} \limsup_{n \to \infty} f(x_n).$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9759851694107056, "perplexity": 74.84231340272562}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
http://www.goodmath.org/blog/2015/12/
# Significant Figures and the Age of the Universe

(Note: This post originally contained a remarkably stupid error in an example. For some idiotic reason, I calculated as if a liter was a cubic meter. Which, duh, it isn't, so I was off by a factor of 1000. Pathetic, I know. Thanks to the multiple readers who pointed it out!)

The other day, I got a question via email that involves significant figures. Sigfigs are really important in things that apply math to real-world measurements. But they're poorly understood at best by most people. I've written about them before, but not in a while, and this question does have a somewhat different spin on it. Here's the email that I got:

> Do you have strong credentials in math and/or science? I am looking for someone to give an expert opinion on what seems like a simple question that requires only a short answer. Could the matter of significant figures be relevant to an estimate changing from 20 to less than 15? What if it were 20 billion and 13.7 billion? If the context matters, in the 80s the age of the universe was given as probably 20 billion years, maybe more. After a number of changes it is now considered to be 13.7 billion years. I believe the change was due to distinct new discoveries, but I've been told it was simply a matter of increasing accuracy and I need to learn about significant figures. From what I know (or think I know?) of significant figures, they don't really come into play in this case.

The subject of significant digits is near and dear to my heart. My father was a physicist who worked as an electrical engineer producing power circuitry for military and satellite applications. I've talked about him before: most of the math and science that I learned before college, I learned from him. One of his pet peeves was people screwing around with numbers in ways that made no sense. One of the most common of those involves significant digits. He used to get really angry at people who did things with calculators and just read off all of the digits. He used to get really upset when people did things like, say, measure a plate with a 6 inch diameter, and say that it had an area of 28.27433375 square inches. That's ridiculous! If you measured a plate's diameter to within 1/16th of an inch, you can't use that measurement to compute its area down to less than one billionth of a square inch!

Before we really look at how to answer the question that set this off, let's start with a quick review of what significant figures are and why they matter. When we're doing science, a lot of what we're doing involves working with measurements. Whether it's cosmologists trying to measure the age of the universe, chemists trying to measure the energy produced by a reaction, or engineers trying to measure the strength of a metal rod, science involves measurements. Measurements are limited by the accuracy of the way we take the measurement. In the real world, there's no such thing as a perfect measurement: all measurements are approximations. Whatever method we choose for taking a measurement of something, the measurement is accurate only to within some margin. If I measure a plate with a ruler, I'm limited by factors like how well I can align the ruler with the edge of the plate, by what units are marked on the ruler, and by how precisely the units are marked on the ruler.
Once I've taken a measurement and I want to use it for a calculation, the accuracy of anything I calculate is limited by the accuracy of the measurements: the accuracy of our measurements necessarily limits the accuracy of anything we can compute from those measurements.

For a trivial example: if I want to know the total mass of the water in a tank, I can start by saying that the mass of a liter of water is one kilogram. To figure out the mass of the total volume of water in the tank, I need to know its volume. Assuming that the tank edges are all perfect right angles, and that it has uniform depth, I can measure the depth of the water, and the length and breadth of the tank, and use those to compute the volume. Let's say that the tank is 512 centimeters long, and 203 centimeters wide. I measure the depth – but that's difficult, because the water moves. I come up with it being roughly 1 meter deep – so 100 centimeters. The volume of the tank can be computed from those figures: 5.12 meters times 2.03 meters times 1.00 meter is 10.3936 cubic meters, or 10,393.6 liters.

Can I really conclude that the volume of the tank is 10,393.6 liters? No. Because my measurement of the depth wasn't accurate enough. It could easily have been anything from, say, 95 centimeters to 105 centimeters, so the actual volume could range between around 9900 liters and 11000 liters. From the accuracy of my measurements, claiming that I know the volume down to a tenth of a liter is ridiculous, when my measurement of the depth was only accurate within a range of +/- 5 centimeters!

Ideally, I might want to know a strong estimate on the bounds of the accuracy of a computation based on measurements. I can compute that if I know the error bounds on each measurement, and I can track them through the computation and come up with a good estimate of the bounds – that's basically what I did up above, to conclude that the volume of the tank was between 9,900 and 11,000 liters. The problem with that is that we often don't really know the precise error bounds – so even our estimate of error is an imprecise figure! And even if we did know precise error bounds, the computation becomes much more difficult when you want to track error bounds through it. (And that's not even considering the fact that our error bounds are only another measured estimate with its own error bounds!)

Significant figures are a simple statistical tool that we can use to determine a reasonable way of estimating how much accuracy we have in our measurements, and how much accuracy we can have at the end of a computation. It's not perfect, but most of the time, it's good enough, and it's really easy.

The basic concept of significant figures is simple. You count how many digits of accuracy each measurement has. The result of the computation over the measurements is accurate to the smallest number of digits of any of the measurements used in the computation. In the water tank example, we had three significant figures of accuracy on the length and width of the tank. But we only had one significant figure on the accuracy of the depth. So we can only have one significant figure in the accuracy of the volume. So we conclude that we can say it was around 10,000 liters, and we can't really say anything more precise than that. The exact value likely falls somewhere within a bell curve centered around 10,000 liters.

Returning to the original question: can significant figures change an estimate of the age of the universe from 20 to 13.7?
Intuitively, it might seem like it shouldn’t: sigfigs are really an extension of the idea of rounding, and 13.7 rounded to one sigfig should round down to 10, not up to 20. I can’t say anything about the specifics of the computations that produced the estimates of 20 and 13.7 billion years. I don’t know the specific measurements or computations that were involved in that estimate. What I can do is just work through a simple exercise in computations with significant figures to see whether it’s possible that changing the number of significant digits in a measurement could produce a change from 20 to 13.7. So, we’re looking at two different computations that are estimating the same quantity. The first, 20, has just one significant figure. The second, 13.7 has three significant digits. What that means is that for the original computation, one of the quantities was known only to one significant figure. We can’t say whether all of the elements of the computation were limited to one sigfig, but we know at least one of them was. So if the change from 20 to 13.7 was caused by significant digits, it means that by increasing the precision of just one element of the computation, we could produce a large change in the computed value. Let’s make it simpler, and see if we can see what’s going on by just adding one significant digit to one measurement. Again, to keep things simple, let’s imagine that we’re doing a really simple calculation. We’ll use just two measurements $x$ and $y$, and the value that we want to compute is just their product, $x \times y$. Initially, we’ll say that we measured the value of $x$ to be 8.2 – that’s a measurement with two significant figures. We measure $y$ to be 2 – just one significant figure. The product $x\times y = 8.2 \times 2 = 16.4$. Then we need to reduce that product to just one significant figure, which gives us 20. After a few years pass, and our ability to measure $y$ gets much better: now we can measure it to two significant figures, with a new value of 1.7. Our new measurement is completely compatible with the old one – 1.7 reduced to 1 significant figure is 2. Now we’ve got equal precision on both of the measurements – they’re now both 2 significant figures. So we can compute a new, better estimate by multiplying them together, and reducing the solution to 2 significant figures. We multiply 8.2 by 1.7, giving us around 13.94. Reduced to 2 significant figures, that’s 14. Adding one significant digit to just one of our measurements changed our estimate of the figure from 20 to 14. Returning to the intuition: It seems like 14 vs 20 is a very big difference: it’s a 30 percent change from 20 to 14! Our intuition is that it’s too big a difference to be explained just by a tiny one-digit change in the precision of our measurements! There’s two phenomena going on here that make it look so strange. The first is that significant figures are an absolute error measurement. If I’m measuring something in inches, the difference between 15 and 20 inches is the same size error as the difference between 90 and 95 inches. If a measurement error changed a value from 90 to 84, we wouldn’t give it a second thought; but because it reduced 20 to 14, that seems worse, even though the absolute magnitude of the difference considered in the units that we’re measuring is exactly the same. The second (and far more important one) is that a measurement of just one significant digit is a very imprecise measurement, and so any estimate that you produce from it is a very imprecise estimate. 
It seems like a big difference, and it is – but that’s to be expected when you try to compute a value from a very rough measurement. Off by one digit in the least significant position is usually not a big deal. But if there’s only one significant digit, then you’ve got very little precision: it’s saying that you can barely measure it. So of course adding precision is going to have a significant impact: you’re adding a lot of extra information in your increase in precision!
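For anyone who wants to poke at the numbers, here is a small script restating the two worked examples from the post. It is illustrative only; the helper function and variable names are mine, not the author's:

```python
def round_sigfigs(value, figures):
    """Round a value to the given number of significant figures."""
    return float(f"{value:.{figures}g}")

# The water-tank example: 5.12 m x 2.03 m, with the depth known only to about +/- 5 cm.
low = 5.12 * 2.03 * 0.95 * 1000   # ~9874 liters
high = 5.12 * 2.03 * 1.05 * 1000  # ~10913 liters
print(round(low), round(high), round_sigfigs(5.12 * 2.03 * 1.00 * 1000, 1))  # ... 10000.0

# The age-of-the-universe analogy: one extra significant figure in a single factor.
print(round_sigfigs(8.2 * 2.0, 1))  # 20.0  (one significant figure)
print(round_sigfigs(8.2 * 1.7, 2))  # 14.0  (two significant figures)
```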
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8483027219772339, "perplexity": 288.8334331925954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829997.74/warc/CC-MAIN-20181218225003-20181219011003-00610.warc.gz"}
http://farside.ph.utexas.edu/teaching/329/lectures/node32.html
Euler's method

Consider the general first-order o.d.e.,

$y' = f(x,y),$ (5)

where $'$ denotes $d/dx$, subject to the general initial-value boundary condition

$y(x_0) = y_0.$ (6)

Clearly, if we can find a method for numerically solving this problem, then we should have little difficulty generalizing it to deal with a system of simultaneous first-order o.d.e.s. It is important to appreciate that the numerical solution to a differential equation is only an approximation to the actual solution. The actual solution, $y(x)$, to Eq. (5) is (presumably) a continuous function of a continuous variable, $x$. However, when we solve this equation numerically, the best that we can do is to evaluate approximations to the function at a series of discrete grid-points, the $x_n$ (say), where $n = 0, 1, \ldots, N$ and $x_{n} > x_{n-1}$. For the moment, we shall restrict our discussion to equally spaced grid-points, where

$x_n = x_0 + n\,h.$ (7)

Here, the quantity $h$ is referred to as the step-length. Let $y_n$ be our approximation to $y(x)$ at the grid-point $x_n$. A numerical integration scheme is essentially a method which somehow employs the information contained in the original o.d.e., Eq. (5), to construct a series of rules interrelating the various $y_n$. The simplest possible integration scheme was invented by the celebrated 18th century Swiss mathematician Leonhard Euler, and is, therefore, called Euler's method. Incidentally, it is interesting to note that virtually all of the standard methods used in numerical analysis were invented before the advent of electronic computers. In olden days, people actually performed numerical calculations by hand--and a very long and tedious process it must have been! Suppose that we have evaluated an approximation, $y_n$, to the solution, $y(x)$, of Eq. (5) at the grid-point $x_n$. The approximate gradient of $y(x)$ at this point is, therefore, given by

$y'_n \simeq f(x_n, y_n).$ (8)

Let us approximate the curve $y(x)$ as a straight-line between the neighbouring grid-points $x_n$ and $x_{n+1}$. It follows that

$y_{n+1} \simeq y_n + y'_n\,h,$ (9)

or

$y_{n+1} \simeq y_n + f(x_n, y_n)\,h.$ (10)

The above formula is the essence of Euler's method. It enables us to calculate all of the $y_n$, given the initial value, $y_0$, at the first grid-point, $x_0$. Euler's method is illustrated in Fig. 4.

Richard Fitzpatrick 2006-03-29
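A minimal implementation of the scheme in Eq. (10), as a quick illustration. This script is not part of the original notes, and the test problem y' = y with y(0) = 1 is my own choice:

```python
def euler(f, x0, y0, h, n_steps):
    """Advance y' = f(x, y) from (x0, y0) using y_{n+1} = y_n + h * f(x_n, y_n)."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# Test problem: y' = y with y(0) = 1, whose exact solution is exp(x).
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(ys[-1])  # ~2.594 versus exp(1) ~ 2.718: the truncation error of the method
```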
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9843890070915222, "perplexity": 579.8462109968707}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535923940.4/warc/CC-MAIN-20140909040646-00435-ip-10-180-136-8.ec2.internal.warc.gz"}
http://astronomy.stackexchange.com/questions/1095/why-do-the-planets-in-the-solar-system-stay-in-the-same-orbital-plane?answertab=active
# Why do the planets in the Solar system stay in the same orbital plane?

An earlier question addressed why all planets formed in the same orbital plane, but how is this angle maintained? What prevents the planets from taking on a different orbital plane?

- Your latest edit asks a different question than has been asked previously. I have made some more edits to bring your question more in line with your recent focus and reopened your question. – called2voyage Dec 12 '13 at 14:52

The Angular Momentum Conservation Law states that, for any moving body, its angular momentum does not change unless an external force other than the central force acts on it. For an orbiting body like a planet, this means that the Sun's gravity, being the central force, does not modify its angular momentum, but any other external force will do so. Examples of external forces are collisions, or the forces exerted by Jupiter on another planet, or by Neptune on Pluto. After the Solar System was formed, these external forces are quite small, and thus do not greatly change the angular momentum of any major body. But you can see how passing near a body can alter a comet's orbit. Moreover, the external forces exerted by bodies that are in the same plane as an orbiting body do modify the magnitude of its angular momentum, but not its direction. This means the orbiting body can change its orbit but cannot change planes. So if you add small forces from objects in the same plane, you end up with no changes to planes.

## Angular momentum conservation

To put it in more mathematical terms, you can play with the energy and the angular momentum of a bunch of particles orbiting a central mass $M$, given by $$E = \sum_i m_i \left(\frac{1}{2}v_i^2 - \frac{GM}{r_i}\right),$$ for the energy and $${\bf I} = \sum_i m_i {\bf r}_i \times {\bf v_i},$$ for the angular momentum. Now, let's try to extremize the energy for a given angular momentum, keeping in mind that the system has to conserve angular momentum, and that collisions between the particles can reduce the energy. One good way to do it is to use a Lagrange multiplier, $$\delta E - \lambda\cdot\delta {\bf I} = \sum_i m_i\left[\delta {\bf v}_i \cdot \left({\bf v}_i - \lambda \times {\bf r}_i \right) + \delta {\bf r}_i \cdot \left( \frac{GM\,{\bf r}_i}{r_i^3} + \lambda \times {\bf v}_i\right)\right],$$ which requires $$\lambda\cdot{\bf r}_i = 0, \qquad {\bf v}_i = \lambda \times {\bf r}_i, \qquad \lambda^2 = \frac{GM}{r_i^3},$$ which means that all orbits are coplanar and circular.

## Is this true in general?

That's the principle. Note, however, that not all planetary systems stay in a single orbital plane. Such systems can be explained by Lidov-Kozai oscillations, typically triggered by "high-eccentricity migration" of hot Jupiters (Fabrycky, 2012). As far as we know now, we can say that:

• our Solar System is flat!
• planetary systems observed by Kepler are mostly flat (there is a kind of observational bias, due to the transit method);
• planetary systems observed by the radial-velocity method are more or less flat (with a mean angle between 10 and 20°);
• planetary systems with hot Jupiters are not flat in general.

More dirty details: There is an excellent talk by Scott Tremaine, given at ESO last year, which you can watch online.
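As a short verification that the three extremization conditions above do describe circular, coplanar orbits (this just spells out the algebra; nothing beyond the conditions themselves is assumed):

$$\lambda\cdot{\bf r}_i=0\ \text{and}\ {\bf v}_i=\lambda\times{\bf r}_i\ \Rightarrow\ v_i^2=\lambda^2 r_i^2,\qquad \lambda^2=\frac{GM}{r_i^3}\ \Rightarrow\ v_i^2=\frac{GM}{r_i},$$
$$\text{i.e., each body satisfies the circular-orbit condition}\ \frac{v_i^2}{r_i}=\frac{GM}{r_i^2},\ \text{and every orbit lies in the single plane perpendicular to the common}\ \lambda.$$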
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9198971390724182, "perplexity": 594.9344236503092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296456.82/warc/CC-MAIN-20150323172136-00091-ip-10-168-14-71.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1207945/modifying-kruskals-algorithm-for-maximum-spanning-tree
Modifying Kruskal's algorithm for Maximum Spanning Tree So in our class, we did a proof on Kruskal's algorithm for finding Minimum Spanning Tree. Now, based on that, I have to modify it to find me a Maximum Spanning Tree. I know the idea, taking maximum-cost edges. I also have an idea why this works, because taking maximum edge from the remaining set is just like taking minimum edge from the same graph but with negative weights. My trouble is, I have no idea how to formally write down the proof knowing that Kruskal's algorithm is correct for finding Minimum spanning tree. Can someone help me and say whether I am right, and how I should write this formally, by not having to repeat the whole proof for MST and change a few things? Thanks! • Kruskal's algorithm does work directly for negative weights. Why can't you just apply it, as it is, but on the graph obtained by negating the edges? – Clement C. Mar 26 '15 at 19:28 • That was my point. The assumption was that weights are positive. So what should I write down? – Luka Bulatovic Mar 26 '15 at 19:29 • Well, exactly this. Since Kruskal's algorithm (Minimum Spanning Tree) works for negative weights as well, use is to compute a Minimum Spanning Tree of the negated-weight graph. This will be a maximum spanning tree of the original graph. – Clement C. Mar 26 '15 at 19:30 • Okay, I get that. I am just confused because it says: describe the algorithm and prove it. Notice: modify some of the known algorithms for MSTs. So is this an actual proof? :S – Luka Bulatovic Mar 26 '15 at 19:32 • It is, as long as you explain why "MinST of negated tree $\Leftrightarrow$ MaxST of original", and state clearly that/why Kruskal's algorithm also works for negative weights. – Clement C. Mar 26 '15 at 19:43
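Following the idea discussed in the thread (negate the weights and run ordinary Kruskal, which is the same as always taking the heaviest remaining edge), here is a minimal sketch. It is not from the thread; the function names and the example graph are my own, and the union-find is deliberately bare-bones:

```python
def max_spanning_tree(n, edges):
    """edges: list of (weight, u, v) on vertices 0..n-1.
    Kruskal on negated weights == greedily picking heaviest edges first."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, key=lambda e: -e[0]):  # same order as sorting -w ascending
        ru, rv = find(u), find(v)
        if ru != rv:              # edge connects two different components: keep it
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

print(max_spanning_tree(4, [(1, 0, 1), (5, 0, 2), (3, 1, 2), (4, 2, 3)]))
# [(5, 0, 2), (4, 2, 3), (3, 1, 2)]
```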
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.877058207988739, "perplexity": 423.2868058463879}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314638.49/warc/CC-MAIN-20190819011034-20190819033034-00030.warc.gz"}
http://papers.nips.cc/paper/4673-ancestor-sampling-for-particle-gibbs
# NIPS Proceedingsβ ## Ancestor Sampling for Particle Gibbs [PDF] [BibTeX] [Supplemental] ### Abstract We present a novel method in the family of particle MCMC methods that we refer to as particle Gibbs with ancestor sampling (PG-AS). Similarly to the existing PG with backward simulation (PG-BS) procedure, we use backward sampling to (considerably) improve the mixing of the PG kernel. Instead of using separate forward and backward sweeps as in PG-BS, however, we achieve the same effect in a single forward sweep. We apply the PG-AS framework to the challenging class of non-Markovian state-space models. We develop a truncation strategy of these models that is applicable in principle to any backward-simulation-based method, but which is particularly well suited to the PG-AS framework. In particular, as we show in a simulation study, PG-AS can yield an order-of-magnitude improved accuracy relative to PG-BS due to its robustness to the truncation error. Several application examples are discussed, including Rao-Blackwellized particle smoothing and inference in degenerate state-space models.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8639013767242432, "perplexity": 1611.2950172384528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161661.80/warc/CC-MAIN-20180925143000-20180925163400-00545.warc.gz"}
http://www.cfd-online.com/W/index.php?title=Introduction_to_turbulence/Homogeneous_turbulence&diff=next&oldid=9128
Introduction to turbulence/Homogeneous turbulence

== A first look at decaying turbulence ==

Look, for example, at the decay of turbulence which has already been generated. If this turbulence is homogeneous and there is no mean velocity gradient to generate new turbulence, the kinetic energy equation reduces to simply: $\frac{d}{dt} k = - \epsilon$ (1) This is often written (especially for isotropic turbulence) as: $\frac{d}{dt} \left[ \frac{3}{2} u^{2} \right] = - \epsilon$ (2) where $k \equiv \frac{3}{2} u^{2}$ (3)

Now you can't get any simpler than this. Yet unbelievably we still don't have enough information to solve it. Let's try. Suppose we use the extended ideas of Kolmogorov we introduced in Chapter 3 to relate the dissipation to the turbulence energy, say: $\epsilon = f \left( Re \right) \frac{u^{3}}{l}$ (4) Already you can see we have two problems: what is $f \left( Re \right)$, and what is the time dependence of $l$? Now there is practically a different answer to these questions for every investigator in turbulence - most of whom will assure you their choice is the only reasonable one.

Figure 6.1 shows an attempt to correlate some of the grid turbulence data using the longitudinal integral scale for $l$, i.e., $l = L^{(1)}_{11}$, or simply $L$. The first thing you notice is the problem at low Reynolds number. The second is probably the possible asymptote at the higher Reynolds numbers. And the third is probably the scatter in the data, which is characteristic of most turbulence experiments, especially if you try to compare the results of one experiment to the other. Let's try to use the apparent asymptote at high Reynolds number to our advantage by arguing that $f \left( Re \right) \rightarrow A$, where $A$ is a constant. Note that this limit is consistent with the Kolmogorov argument we made back when we were talking about the dissipation earlier, so we might feel on pretty firm ground here, at least at high turbulent Reynolds numbers. But before we feel too comfortable about this, let's look at another curve shown in figure 6.2. This one is also due to Sreenivasan, but compiled a decade later and based on large scale computer simulations of turbulence. There is less scatter, but it appears that the asymptote depends on the details of how the experiment was forced at the large scales of motion. This is not good, since it means that the answer depends on the particular flow - exactly what we wanted to avoid by modelling in the first place. Nonetheless, let's proceed by assuming in spite of the evidence that $A \approx 1$ and $L$ is the integral scale. Now how does $L$ vary with time? Figure 6.3 shows the ratio of the integral scale to the Taylor microscale from the famous Comte-Bellot/Corrsin (1971) experiment.
One might assume, with some theoretical justification, that $L / \lambda \rightarrow const$. This would be nice since you will be able to show that if the turbulence decays as a power law in time, say $u^{2} \sim t^{n}$, then $\lambda \sim t^{1/2}$. But as shown in Figure 6.4 from Wang et al. (2000), this is not a very good assumption for the DNS data available at this time. Now I believe this is because of problems in the simulations, mostly having to do with the fact that turbulence in a box is not a very good approximation for truly homogeneous turbulence unless the size of the box is much larger than the energetic scales. Figure 6.5 shows what happens if you try to correct for the finite box size, and now the results look pretty good. So the bottom line is that we don't really know yet for sure how $L$ behaves with time, or even whether we should have confidence in the experimental and DNS attempts to determine it.

Regardless, most assume that $L$ varies as a power of time, say $L = Bt^{p}$. There are various justifications for this and everyone has his own choice for $p$, but the truth is that the main justification is that it allows us to solve the equation. In fact it is easy to show by substitution that this implies directly that the energy decays as a power law in time; in fact: $u^{2} \sim t^{2(p-1)}$ (5) You can see immediately that if I am right and $L \sim \lambda \sim t^{1/2}$ then $u^{2} \sim t^{-1}$. Now any careful study of the data will convince you that the energy indeed decays as a power law in time, but there is no question that $n \neq -1$, but $n < - 1$, at least for most of the experiments. Most people have tried to fix this problem by changing $p$. But I say the problem is in $f \left( Re \right)$ and the assumption that $\epsilon \sim u^{3} / L$ at finite Reynolds numbers. I would argue that $n \rightarrow -1$ only in the limit of infinite Reynolds number.

To see why I believe this, try doing the problem another way. We know for sure that if the turbulence decays as a power law, then the Taylor microscale, $\lambda_{g}$, must be proportional exactly to $t^{1/2}$. Thus we must have (assuming isotropy): $\frac{dk}{dt} = - 10 \nu \frac{k}{\lambda^{2}_{g}} \propto \frac{k}{t}$ (6) It is easy to show that $k \propto t^{n}$ where $n$ is given by: $\frac{d \lambda^{2}_{g}}{dt} = - \frac{10 \nu}{n}$ (7) and any value of $n \leq -1$ is acceptable. Obviously the difference lies in the use of the relation $\epsilon \propto u^{3} / L$ at finite Reynolds numbers. Believe it or not, this whole subject is one of the really hot debates of the last decade, and may well be for the next as well. Who knows, maybe some of you will be involved in resolving it, since it really is one of the most fundamental questions in turbulence.

== The dissipation equation and turbulence modelling ==

If you are really more inclined toward engineering than physics, you might be wondering whether the ambiguities above make any difference. They might. And in fact they might lie at the core of the reasons why we can't do things better with our existing single point turbulence models. To see this let's consider the dissipation equation.
The derivation of this equation begins by taking the gradient of the equation for the fluctuating velocity, then interchanging the order of the time and space derivatives to get it into an equation for the fluctuating strain-rate, then averaging and multiplying by twice the kinematic viscosity to obtain an equation for the dissipation of kinetic energy per unit mass due to the fluctuating velocity field. After some rearrangement the result is:

== A second look at simple shear flow turbulence ==

Let's consider another homogeneous flow that seems pretty simple at first sight, homogeneous shear flow turbulence with constant mean shear. We already considered this flow when we were talking about the role of the pressure-strain rate terms. Now we will only worry, for the moment, about the kinetic energy equation, which reduces to: $\frac{\partial k}{\partial t} = - \left\langle uv \right\rangle \frac{d U}{d y} - \epsilon$ (8)

Now turbulence modellers (and most experimentalists as well) would love for the left-hand side to be exactly zero so that the production and dissipation exactly balance. Unfortunately Mother Nature, to this point at least, has not allowed such a flow to be generated. In every experiment to date, the energy increases with time (or equivalently, down the tunnel). Let's make a few simple assumptions and see if we can figure out what is going on. Suppose we assume that the correlation coefficient $\left\langle uv \right\rangle / u^{2} = C$ is a constant. Now, we could again assume that $\epsilon \sim u^{3} / l$, at least in the very high Reynolds number limit. But for reasons that will be obvious below, let's assume something else we know for sure about the dissipation; namely that: $\epsilon = D \nu \frac{u^{2}}{\lambda^{2}}$ (9) where $\lambda$ is the Taylor microscale and $D \approx 15$ (exact for isotropic turbulence). Finally let's assume that the mean shear is constant so $dU / dy = K$ is constant also. Then our problem simplifies to: $\frac{d}{dt} \frac{3}{2} u^{2} = - KC u^{2} - D \nu \frac{u^{2}}{\lambda^{2}}$ (10)

Even with all these simplifications and assumptions the problem still comes down to "What is $\lambda = \lambda \left( t \right)$?". Now the one thing that all the experiments agree on is that $\lambda = \lambda_{0}$ is approximately constant. (I actually have a theory about this, together with M. Gibson, and it even predicts this result.) Now you have all you need to finish the problem, and I will leave it for you. But when you do you will find that the turbulence grows (or decays) exponentially. How fast it grows (or decays) depends on the ratio of the production to dissipation; i.e., $\frac{P}{\epsilon} \equiv \frac{ - \left\langle uv \right\rangle dU/dy }{ \epsilon }$ (11)

My personal belief is that $P / \epsilon$ depends on the upstream or initial conditions, and that the higher the Reynolds number, the closer $P / \epsilon$ is to unity. If I am right, then you will only get $P / \epsilon \rightarrow 1$ as an infinite Reynolds number limit. Which in turn implies you can never really achieve the ideal flow many people would like where the production and dissipation exactly balance.
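As a quick numerical illustration of the exponential behaviour described above, one can integrate Eq. (10) with $\lambda = \lambda_0$ held fixed. This is only a sketch: the parameter values below are arbitrary choices of mine, not values from the text.

```python
import math

# d(3/2 u^2)/dt = -K*C*u^2 - D*nu*u^2/lam0^2 with lambda = lam0 fixed, so
# u^2(t) = u^2(0) * exp(-(2/3)*(K*C + D*nu/lam0**2)*t): pure exponential growth or decay.
K, C, D, nu, lam0 = 1.0, -0.3, 15.0, 1e-4, 0.1   # arbitrary illustrative numbers
rate = -(2.0 / 3.0) * (K * C + D * nu / lam0**2)

u2_0 = 1.0
for t in (0.0, 1.0, 2.0):
    # grows when K*C + D*nu/lam0^2 < 0 (as in the experiments), decays otherwise
    print(t, u2_0 * math.exp(rate * t))
```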
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 52, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9628844261169434, "perplexity": 365.5104053027959}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111518.82/warc/CC-MAIN-20160428161511-00117-ip-10-239-7-51.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/103714/differentiability-continuity-of-derivatives
# differentiability-continuity of derivatives

I am trying to come up with a function $g:\mathbb{R}^{2} \to\mathbb{R}$ which is differentiable at each point $(x,y)$ in $\mathbb{R}^{2}$ but whose partial derivatives are not continuous at $(0,0)$. Can anyone give me examples of such functions?

From Counterexamples in Analysis, Gelbaum and Olmsted, page 119:
$$f(x,y)=\cases{ x^2\sin(1/x)+y^2\sin(1/y),&xy \ne 0\cr x^2\sin(1/x), &x \ne 0, y=0 \cr y^2\sin(1/y), &x=0, y\ne0 \cr 0,&x=y=0 }$$
I believe, but haven't proved, that if you take the graph of
$$g(x)=\cases{x^2\sin(1/x), &x\ne0 \cr 0,&x=0 }$$
in the $x$-$z$ plane and "spin the right half of it about the $z$-axis", you'll obtain an example of the function you want. At any rate, this captures the flavor of the Gelbaum and Olmsted example (but would be harder to work with). Note that
$$g'(x)=\cases{2x\sin(1/x)-\cos(1/x),&x\ne0\cr0,&x=0 };$$
so, $g'$ is discontinuous at $x=0$.

(Plot legend: $\color{darkgreen}{z= g(x)}$, $\color{maroon}{z=x^2}$, $\color{darkblue}{z=-x^2}$.)
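A quick numerical illustration (my addition, not part of the original answer) of why this example works: along the sequence $x = 1/(2\pi n)$ the partial derivative $f_x(x,0) = 2x\sin(1/x) - \cos(1/x)$ equals $-1$, along $x = 1/((2n+1)\pi)$ it equals $+1$, so it has no limit at the origin, while the difference quotient $f(h,0)/h = h\sin(1/h) \to 0$ shows that $f_x(0,0)$ exists and equals $0$.

```python
# Numerical check (my addition): f_x oscillates between -1 and +1 as x -> 0,
# while the difference quotient f(h, 0)/h -> 0, so f_x(0, 0) = 0 exists
# but f_x is not continuous at the origin.
import numpy as np

def f(x, y):
    fx = x**2 * np.sin(1.0 / x) if x != 0 else 0.0
    fy = y**2 * np.sin(1.0 / y) if y != 0 else 0.0
    return fx + fy

def f_x(x):                      # partial derivative in x, valid for x != 0
    return 2.0 * x * np.sin(1.0 / x) - np.cos(1.0 / x)

for k in range(3, 8):
    n = 10**k
    h = 10.0**(-k)
    print(f"h = {h:.0e}   f(h,0)/h = {f(h, 0) / h:+.2e}   "
          f"f_x(1/(2*pi*n)) = {f_x(1.0 / (2.0 * np.pi * n)):+.5f}   "
          f"f_x(1/((2n+1)*pi)) = {f_x(1.0 / ((2 * n + 1) * np.pi)):+.5f}")
```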
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9353586435317993, "perplexity": 225.23369655379045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824133.26/warc/CC-MAIN-20160723071024-00292-ip-10-185-27-174.ec2.internal.warc.gz"}
https://cerncourier.com/a/lhcb-interrogates-x3872-line-shape/
# LHCb interrogates X(3872) line-shape

29 May 2020

A report from the LHCb experiment

In 2003, the Belle collaboration reported the discovery of a mysterious new hadron, the X(3872), in the decay B⁺→X(3872)K⁺. Their analysis suggested an extremely small width, consistent with zero, and a mass remarkably close to the sum of the masses of the D⁰ and D̄*⁰ mesons. The particle's existence was later confirmed by the CDF, D0, and BaBar experiments.

LHCb first reported studies of the X(3872) in the data sample taken in 2010, and later unambiguously determined its quantum numbers to be 1⁺⁺, leading the Particle Data Group to change the name of the particle to χc1(3872). The nature of this state is still unclear. Until now, only an upper limit on the width of the χc1(3872) of 1.2 MeV has been available. No conventional hadron is expected to have such a narrow width in this part of the otherwise very well understood charmonium spectrum. Among the possible explanations are that it is a tetraquark, a molecular state, a hybrid state where the gluon field contributes to its quantum numbers, or a glueball without any valence quarks at all. A mixture of these explanations is also possible.

### Two new measurements

As reported at the LHCP conference this week, the LHCb collaboration has now published two new measurements of the width of the χc1(3872), based on minimally overlapping data sets. The first uses Run 1 data corresponding to an integrated luminosity of 3 fb⁻¹, in which (15.5±0.4)×10³ χc1(3872) particles were selected inclusively from the decays of hadrons containing b quarks. The second analysis selected (4.23±0.07)×10³ fully reconstructed B⁺→χc1(3872)K⁺ decays from the full Run 1–2 data set, which corresponds to an integrated luminosity of 9 fb⁻¹. In both cases, the χc1(3872) particles were reconstructed through decays to the final state J/ψπ⁺π⁻. For the first time the measured Breit-Wigner width was found to be non-zero, with a value close to the previous upper limit from Belle (see figure).

Combining the two analyses, the mass of the χc1(3872) was found to be 3871.64±0.06 MeV, just 70±120 keV below the D⁰D̄*⁰ threshold. The proximity of the χc1(3872) to this threshold puts a question mark on measuring the width using a simple fit to the well-known Breit-Wigner function, as this approach neglects potential distortions. Conversely, a precise measurement of the line-shape could help elucidate the nature of the χc1(3872). This has led LHCb to explore a more sophisticated Flatté parametrisation and report a measurement of the χc1(3872) line-shape with this model, including the pole positions of the complex amplitude. The results favour the interpretation of the state as a quasi-bound D⁰D̄*⁰ molecule, but other possibilities cannot yet be ruled out. Further studies are ongoing.

Physicists from other collaborations are also keenly interested in the nature of the χc1(3872), and the very recent observation by CMS of the decay process Bs⁰→χc1(3872)𝜙 suggests another laboratory for studying its properties.
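To see qualitatively what "potential distortions" near threshold means, here is a toy comparison (my own sketch, not the LHCb parametrisation): a plain Breit-Wigner intensity versus a schematic two-channel Flatté-like amplitude in which the second channel opens at a nearby threshold. All masses, widths and couplings are made-up illustrative numbers, not measured values.

```python
# Toy sketch (not the LHCb model): plain Breit-Wigner vs a schematic
# two-channel Flatte-like line-shape whose second channel opens at m_th.
# All numbers are illustrative only.
import numpy as np

m0, gamma = 3871.6, 1.0      # MeV: toy pole mass and Breit-Wigner width
m_th = 3871.7                # MeV: nearby two-body threshold (toy value)
g1, g2 = 1.0, 5.0            # toy couplings to the two decay channels

def breit_wigner(m):
    return 1.0 / ((m - m0)**2 + gamma**2 / 4.0)

def flatte_like(m):
    # toy "phase space": channel 1 always open; channel 2 momentum is
    # sqrt(m - m_th) above threshold and i*sqrt(m_th - m) below it
    rho1 = np.ones_like(m)
    rho2 = np.where(m >= m_th,
                    np.sqrt(np.maximum(m - m_th, 0.0)) + 0.0j,
                    1.0j * np.sqrt(np.maximum(m_th - m, 0.0)))
    denom = (m - m0) + 0.5j * (g1 * rho1 + g2 * rho2)
    return np.abs(1.0 / denom)**2

m = np.linspace(3869.0, 3875.0, 13)
for mi, bw, fl in zip(m, breit_wigner(m), flatte_like(m)):
    print(f"m = {mi:7.1f} MeV   BW = {bw:8.4f}   Flatte-like = {fl:8.4f}")
```

The only point of the toy is that, close to a threshold, the two shapes can differ appreciably even for the same nominal peak position, which is why the choice of parametrisation matters for the quoted width.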
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9168630838394165, "perplexity": 1524.8452065831455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655900614.47/warc/CC-MAIN-20200709162634-20200709192634-00299.warc.gz"}