https://www.allthatmatters.academy/courses/existence/lessons/position-coordinates/
# Position & coordinates

Time and quantity as well as length, area and volume were easy to tell: ‘She will be here in 30 seconds,’ ‘We’ll be 20 for dinner’ as well as ‘You must climb 5 metres,’ ‘It’s a 64 square-metre apartment’ and ‘It takes 300 cubic metres of water to fill my pool.’

t=30\,\mathrm s\qquad n=20\qquad L=5\,\mathrm m\qquad A=64\,\mathrm{m^2}\qquad V=300\,\mathrm{m^3}

They are all just a single number. Position, on the other hand, is not just a number. ‘The station is 300 metres that way’ or ‘300 metres north-east’ or ‘300 metres at an angle of 45 degrees.’ It seems that here we need to say both how far and which way before it makes sense. Both distance and direction. Two pieces of information, not just one. A position apparently needs both the 300\,\mathrm m and the 45^\circ. Sometimes it even needs three numbers: ‘The balloon is 300 metres away if you turn north-east and look up under the cloud.’ Here we must say how far (forwards), how much to turn the body (sideways) and how much to tilt the head (upwards). Three numbers to tell a position in the 3D sky; two numbers for the 2D ground. We might want to group those two or three numbers when written, since they only make sense together. Why don’t we wrap them in brackets like this:¹

\left(\begin{matrix} 300\,\mathrm m \\ 45^\circ \\ \end{matrix}\right)\qquad \left(\begin{matrix} 300\,\mathrm m \\ 45^\circ \\ 30^\circ \\ \end{matrix}\right)

Let us call the numbers within the brackets coordinates.² Once in a while you might talk about positions in another way: ‘Look 15 centimetres from the left edge and 20 from the bottom on the map’ or ‘The falcon is hovering 30 metres in, 40 metres out on the field and 50 metres up.’ But are these coordinates solely distances? Can distances alone take care of both distance and direction? Apparently yes.
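The two descriptions of a 2D position (a distance plus an angle, or two distances) carry the same information and can be converted into one another with basic trigonometry. A quick sketch, not part of the original lesson (angles measured from the first axis, the standard math convention):

```python
import math

# Convert (distance, angle) to two signed distances, and back.
def polar_to_cartesian(r, angle_deg):
    a = math.radians(angle_deg)
    return (r * math.cos(a), r * math.sin(a))

def cartesian_to_polar(x, y):
    return (math.hypot(x, y), math.degrees(math.atan2(y, x)))

x, y = polar_to_cartesian(300, 45)   # '300 metres at 45 degrees'
r, a = cartesian_to_polar(x, y)
print(round(x, 1), round(y, 1))      # two distances, both about 212.1 m
print(round(r, 1), round(a, 1))      # back to 300.0 m at 45.0 degrees
```

Both printouts describe the same point, which is the sense in which two pure-distance coordinates “contain” their direction.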
Apparently, a bit outwards and a bit across is just as valid a 2D position as a bit out and then a turn (and in 3D we must go a bit up as well). So we could just as well write the positions as:

\left(\begin{matrix} 15\,\mathrm{cm} \\ 20\,\mathrm{cm} \\ \end{matrix}\right)\qquad \left(\begin{matrix} 30\,\mathrm m \\ 40\,\mathrm m \\ 50\,\mathrm m \\ \end{matrix}\right)

These “contain” their directions, although not visibly. The conclusion seems to be that position needs two numbers in 2D and three in 3D, regardless of the type of numbers. When the units are the same for all coordinates, let us shorten it a bit by “taking” the unit “out”. Then we only have to write the unit once:

\left(\begin{matrix} 15\,\mathrm{cm} \\ 20\,\mathrm{cm} \\ \end{matrix}\right)=\left(\begin{matrix} 15 \\ 20 \\ \end{matrix}\right) \mathrm{cm}\qquad \left(\begin{matrix} 30\,\mathrm m \\ 40\,\mathrm m \\ 50\,\mathrm m \\ \end{matrix}\right) =\left(\begin{matrix} 30 \\ 40 \\ 50 \\ \end{matrix}\right) \mathrm m

Remember that

• we can think of time and also quantity, length, area and volume as one-dimensional. All are in 1D, which corresponds to them all containing just one number (only one “coordinate”).
• Position, on the other hand, is the defining property of space, and space is three-dimensional. So we can think of position, which contains three numbers (coordinates), as a 3D property (and as a 2D property when we only use two numbers as above).

A property’s dimension apparently equals the number of its coordinates.³ Then position doesn’t feel that different after all – it is a “number” like any of the other properties, just a “number” in multiple dimensions (a “number” containing several numbers).

References:
1. ‘coordinate’ (dictionary), Dictionary.com, www.dictionary.com/browse/coordinates (accessed May 7th, 2019)
https://calculus7.org/
## Zeros of Taylor polynomials of (1+z)^p

This post is related to Extremal Taylor polynomials, where it was important to observe that the Taylor polynomials of the function ${(1+z)^{-1/2}}$ do not have zeros in the unit disk. Let’s see how far this generalizes. The function ${f(z)=(1+z)^{-1}}$ has the rare property that all zeros of its Taylor polynomials have unit modulus. This is clear from ${\displaystyle T_n(z) = \sum_{k=0}^n (-z)^k = (1-(-z)^{n+1})/(1+z)}$. In this and subsequent illustrations, the zeros of the first 50 Taylor polynomials are shown as blue dots, with the unit circle in red for reference. When the exponent is less than -1, the zeros move inside the unit disk and begin forming nice patterns in there. When the exponent is strictly between -1 and 1, the zeros are all outside of the unit disk. Some of them get quite large, forcing a change of scale in the image. Why does this happen when the exponent approaches 1? The function ${1+z}$ is its own Taylor polynomial, and has its only zero at -1. So, when ${p\approx 1}$, the Taylor polynomials are small perturbations of ${1+z}$. These perturbations of the coefficients have to create additional zeros, but being small, they require a large value of ${z}$ to help them. For a specific example, the quadratic Taylor polynomial of ${(1+z)^p}$ is ${1 + pz + p(p-1)z^2/2}$, with roots ${(1\pm \sqrt{(2-p)/p})/(1-p) }$. When ${p\approx 1}$, one of these roots is near ${-1}$ (as it has to be) and the other is large. Finally, when ${p>1}$ and is not an integer, we get zeros on both sides of the unit circle. The majority of them are still outside. A prominent example of an interior zero is ${-1/p}$, produced by the first-degree polynomial ${1 + pz}$. Another related post: Real zeros of sine Taylor polynomials.

## Measuring the regularity of a graph by its Laplacian eigenvalues

Let ${G}$ be a graph with vertices ${1, 2, \dots, n}$. The degree of vertex ${i}$ is denoted ${d_i}$.
Let ${L}$ be the Laplacian matrix of ${G}$, so that ${L_{ii}=d_i}$, ${L_{ij}}$ is ${-1}$ when the vertices ${i, j}$ are adjacent, and is ${0}$ otherwise. The eigenvalues of ${L}$ are written as ${\lambda_1\le \dots \le \lambda_n}$. The graph is regular if all vertices have the same degree: ${d_1=\cdots = d_n}$. How can this property be seen from its Laplacian eigenvalues ${\lambda_1, \dots, \lambda_n}$? Since the sum of eigenvalues is equal to the trace, we have ${\sum \lambda_i = \sum d_i}$. Moreover, ${\sum \lambda_i^2}$ is the trace of ${L^2}$, which is equal to the sum of the squares of all entries of ${L}$. This sum is ${\sum d_i^2 + \sum d_i}$ because the ${i}$th row of ${L}$ contains one entry equal to ${d_i}$ and ${d_i}$ entries equal to ${-1}$. In conclusion, ${\sum d_i^2 = \sum \lambda_i^2 - \sum\lambda_i}$. The Cauchy-Schwarz inequality says that ${n\sum d_i^2 \ge \left(\sum d_i \right)^2}$ with equality if and only if all numbers ${d_i}$ are equal, i.e., the graph is regular. In terms of eigenvalues, this means that the difference ${\displaystyle D =n\sum d_i^2 - \left(\sum d_i \right)^2 = n\sum (\lambda_i^2 - \lambda_i) - \left( \sum\lambda_i \right)^2 }$ is always nonnegative, and is equal to zero precisely when the graph is regular. This is how one can see the regularity of a graph from its Laplacian spectrum. As an aside, ${D }$ is an even integer. Indeed, the sum ${\sum d_i}$ is even because it double-counts the edges. Hence the number of vertices of odd degree is even, which implies that ${\sum d_i^k }$ is even for every positive integer  ${k }$. Up to a constant factor, ${D}$ is simply the degree variance: the variance of the sequence ${d_1, \dots, d_n}$. What graph maximizes it for a given ${n}$? We want to have some very large degrees and some very small ones. Let ${G_{m, n}}$ be the union of the complete graph ${K_m}$ on ${m}$ vertices and ${(n-m)}$ isolated vertices. 
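The trace identities above are easy to confirm numerically. A quick sketch with numpy (the 5-vertex graph is an arbitrary illustration, not one from the post):

```python
import numpy as np

# A small test graph on n = 5 vertices, given by an illustrative edge list.
n = 5
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)            # degree sequence d_1, ..., d_n
L = np.diag(d) - A           # graph Laplacian

lam = np.linalg.eigvalsh(L)  # eigenvalues of the symmetric matrix L

# Trace identities: sum(lambda) = sum(d), sum(lambda^2) = sum(d^2) + sum(d).
assert np.isclose(lam.sum(), d.sum())
assert np.isclose((lam ** 2).sum(), (d ** 2).sum() + d.sum())

# D = n * sum(d^2) - (sum d)^2 vanishes exactly for regular graphs.
D = n * (d ** 2).sum() - d.sum() ** 2
print(D)  # positive here: this graph is not regular
```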
The sum of degrees is ${m(m-1)}$ and the sum of squares of degrees is ${m(m-1)^2}$. Hence, ${D = nm(m-1)^2 - (m(m-1))^2 = m(m-1)^2(n-m)}$. For ${n=3, 4, 5, 6}$ the maximum is attained by ${m=n-1}$, that is, there is one isolated vertex. For ${n=7, 8, 9, 10}$ the maximum is attained by ${m=n-2}$. In general it is attained by ${m^*=\lfloor (3n+2)/4 \rfloor}$. The graph ${G_{m, n}}$ is disconnected. But any graph has the same degree variance as its complement. And the complement ${G^c_{m, n}}$ is always connected: it consists of a “center”, a complete graph on ${n-m}$ vertices, and a “periphery”, a set of ${m}$ vertices that are connected to each central vertex. Put another way, ${G^c_{m, n}}$ is obtained from the complete bipartite graph ${K_{m, n-m}}$ by connecting all vertices of the ${n-m}$ group together. Tom A. B. Snijders (1981) proved that ${G_{m^*, n}}$ and ${G^c_{m^*, n}}$ are the only graphs maximizing the degree variance; in particular, ${G^c_{m^*, n}}$ is the unique maximizer among the connected graphs. It is pictured below for ${n=4, \dots, 9}$.

## The displacement set of nonlinear maps in vector spaces

Given a vector space ${V}$ and a map ${f\colon V\to V}$ (linear or not), consider the displacement set of ${f}$, denoted ${D(f) = \{f(x)-x\colon x\in V\}}$. For linear maps this is simply the range of the operator ${f-I}$ and therefore is a subspace. The essentially nonlinear operations of taking the inverse or composition of maps become almost linear when the displacement set is considered. Specifically, if ${f}$ has an inverse, then ${D(f^{-1}) = -D(f)}$, which is immediate from the definition. Also, ${D(f\circ g)\subset D(f)+D(g)}$. When ${V}$ is a topological vector space, the maps for which ${D(f)}$ has compact closure are of particular interest: these are compact perturbations of the identity, for which degree theory can be developed.
The consideration of ${D(f)}$ makes it very clear that if ${f}$ is an invertible compact perturbation of the identity, then ${f^{-1}}$ is in this class as well. It is also of interest to consider the maps for which ${D(f)}$ is either bounded, or is bounded away from ${0}$. Neither case can occur for linear operators, so this is essentially nonlinear analysis. In the nonlinear case, the boundedness assumption for linear operators is usually replaced by the Lipschitz condition. Let us say that ${f}$ is ${(L, \ell)}$-bi-Lipschitz if ${\ell\|x-y\|\le \|f(x)-f(y)\|\le L\|x-y\|}$ for all ${x, y}$ in the domain of ${f}$. Brouwer’s fixed point theorem fails in infinite-dimensional Hilbert spaces, but it is not yet clear how hard it can fail. The strongest possible counterexample would be a bi-Lipschitz automorphism of the unit ball with displacement bounded away from 0. The existence of such a map is unknown. If it does not exist, that would imply that the unit ball and the unit sphere in the Hilbert space are not bi-Lipschitz equivalent, because the unit sphere does have such an automorphism: ${x\mapsto -x}$. Concerning the maps with bounded displacement, here is a theorem from Patrick Biermann’s thesis (Theorem 3.3.2): if ${f}$ is an ${(L, \ell)}$-bi-Lipschitz map in a Hilbert space, ${L/\ell < \pi/\sqrt{8}}$, and ${f}$ has bounded displacement, then ${f}$ is onto. The importance of bounded displacement is illustrated by the forward shift map ${S(x_1, x_2, \dots) = (0, x_1, x_2, \dots)}$ for which ${L=\ell=1}$ but surjectivity nonetheless fails. It would be nice to get rid of the assumption ${L/\ell < \pi/\sqrt{8}}$ in the preceding paragraph. I guess any bi-Lipschitz map with bounded displacement should be surjective, at least in Hilbert spaces, but possibly in general Banach spaces as well.
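The composition rule ${D(f\circ g)\subset D(f)+D(g)}$ is transparent numerically: every displacement of ${f\circ g}$ splits into one displacement of ${f}$ plus one of ${g}$. A small sketch on the real line, with arbitrary illustrative maps whose displacements are bounded:

```python
import numpy as np

# f(x) = x + sin(x) is increasing (f' = 1 + cos x >= 0), hence invertible;
# its displacement f(x) - x = sin(x) is bounded by 1. Similarly for g.
f = lambda x: x + np.sin(x)
g = lambda x: x + 0.5 * np.cos(x)

xs = np.linspace(-20, 20, 10001)

# Every displacement of the composition splits as
# f(g(x)) - x = [f(g(x)) - g(x)] + [g(x) - x], one term from each set.
disp_fg = f(g(xs)) - xs
part_f = f(g(xs)) - g(xs)   # an element of D(f), taken at the point g(x)
part_g = g(xs) - xs         # an element of D(g)
assert np.allclose(disp_fg, part_f + part_g)

# Both maps are bounded perturbations of the identity:
print(np.abs(f(xs) - xs).max() <= 1.0, np.abs(g(xs) - xs).max() <= 0.5)
```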
## Orthogonality in normed spaces

For a vector ${x}$ in a normed space ${X}$, define the orthogonal complement ${x^\perp}$ to be the set of all vectors ${y}$ such that ${\|x+ty\|\ge \|x\|}$ for all scalars ${t}$. In an inner product space (real or complex), this agrees with the usual definition of orthogonality, because ${\|x+ty\|^2 - \|x\|^2 = 2\,\mathrm{Re}\,\langle x, ty\rangle + o(t)}$ as ${t\to 0}$, and the right hand side can be nonnegative for all ${t}$ only if ${\langle x, y\rangle=0}$. Let’s see what properties of the orthogonal complement survive in a general normed space. For one thing, ${x^\perp=X}$ if and only if ${x=0}$. Another trivial property is that ${0\in x^\perp}$ for all ${x}$. More importantly, ${x^\perp}$ is a closed set that contains some nonzero vectors.

• Closed because the complement is open: if ${\|x+ty\| < \|x\|}$ for some ${t}$, the same will be true for vectors close to ${y}$.
• Contains a nonzero vector because the Hahn-Banach theorem provides a norming functional for ${x}$, i.e., a unit-norm linear functional ${f\in X^*}$ such that ${f(x)=\|x\|}$. Any ${y\in \ker f}$ is orthogonal to ${x}$, because ${\|x+ty\|\ge f(x+ty) = f(x) = \|x\|}$.

In general, ${x^\perp}$ is not a linear subspace; it need not even have empty interior. For example, consider the orthogonal complement of the first basis vector in the plane with the ${\ell_1}$ (taxicab) metric: it is ${\{(x, y)\colon |y|\ge |x|\}}$. This example also shows that orthogonality is not symmetric in general normed spaces: ${(1,1)\in (1,0)^\perp}$ but ${(1,0)\notin (1,1)^\perp}$. This is why I avoid using the notation ${y \perp x}$ here. In fact, ${x^\perp}$ is the union of the kernels of all norming functionals of ${x}$, so it is only a linear subspace when the norming functional is unique. Containment in one direction was already proved. Conversely, suppose ${y\in x^\perp}$ and define a linear functional ${f}$ on the span of ${x,y}$ so that ${f(ax+by) = a\|x\|}$. By construction, ${f}$ has norm 1: indeed, ${\|ax+by\| = |a|\,\|x+(b/a)y\|\ge |a|\,\|x\|}$ whenever ${a\ne 0}$, because ${y\in x^\perp}$.
Its Hahn-Banach extension is a norming functional for ${x}$ that vanishes on ${y}$. Consider ${X=L^p[0,1]}$ as an example. A function ${f}$ satisfies ${1\in f^\perp}$ precisely when its ${p}$th moment is minimal among all translates ${f+c}$. This means, by definition, that its “${L^p}$-estimator” is zero. In the special cases ${p=1,2,\infty}$ the ${L^p}$ estimator is known as the median, mean, and midrange, respectively. Increasing ${p}$ gives more influence to outliers, so ${1\le p\le 2}$ is the more useful range for it.

## Unpopular positive opinion challenge

Challenge accepted. I got three, though none are below 40%.

## The Yellow Birds (2018), 45% on RT

The market isn’t so hot for Iraq war movies. And it’s nearly impossible to adapt such an introspective novel into film. I still respect the effort and its outcome, even if all references to the normal distribution got left out of it.

I spent a lot of time trying to identify the exact point at which I noticed a change in Murph, somehow thinking that if I could figure out where he had begun to slide down the curve of the bell that I could do something about it. But these are subtle shifts, and trying to distinguish them is like trying to measure the degrees of gray when evening comes. It’s impossible to identify the cause of anything, and I began to see the war as a big joke, for how cruel it was, for how desperately I wanted to measure the particulars of Murph’s new, strange behavior and trace it back to one moment, to one cause, to one thing I would not be guilty of. And I realized very suddenly one afternoon while throwing rocks into a bucket in a daze that the joke was in fact on me. Because how can you measure deviation if you don’t know the mean? There was no center in the world. The curves of all our bells were cracked.
(From The Yellow Birds by Kevin Powers)

## Hearts in Atlantis (2001), 49% on RT

Two actors with (essentially) the same first name and over 50 years of age difference (Anthony Hopkins 1937-, Anton Yelchin 1989-2016) make this Stephen King adaptation well worth watching.

He made another circuit of his room, working the tingles out of his legs, feeling like a prisoner pacing his cell. The door had no lock on it—no more than his mom’s did—but he felt like a jailbird just the same. He was afraid to go out. She hadn’t called him for supper, and although he was hungry—a little, anyway—he was afraid to go out. He was afraid of how he might find her… or of not finding her at all. Suppose she had decided she’d finally had enough of Bobby-O, stupid lying little Bobby-O, his father’s son? Even if she was here, and seemingly back to normal… was there even such a thing as normal? People had terrible things behind their faces sometimes. He knew that now.

(From Low Men in Yellow Coats by Stephen King)

## Maze Runner: The Death Cure (2018), 43% on RT

Sure, it’s not as good as the first film in the series (which does not qualify for the challenge, scoring 65% on RT), but a major improvement on the mindless zombie chases of the second part. I like to think of it as a parable illustrating ethical issues in public health… allowing for the customary movie-science vs actual-science differences.

## Measuring nonlinearity and reducing it

How to measure the nonlinearity of a function ${f\colon I\to \mathbb R}$ where ${I\subset \mathbb R}$ is an interval? A natural way is to consider the smallest possible deviation from a line ${y=kx+b}$, that is ${\inf_{k, b}\sup_{x\in I}|f(x)-kx-b|}$. It turns out to be convenient to divide this by ${|I|}$, the length of the interval ${I}$. So, let ${\displaystyle NL(f;I) = \frac{1}{|I|} \inf_{k, b}\sup_{x\in I}|f(x)-kx-b|}$. (This is similar to the β-numbers of Peter Jones, except that the deviation from a line is measured only in the vertical direction.)
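On a discretized interval, ${NL(f;I)}$ can be estimated directly from the definition: for a fixed slope ${k}$ the best intercept is the mid-range of ${f(x)-kx}$, leaving half its oscillation, so only the slope needs to be searched. A rough sketch (the grid sizes and slope range are arbitrary choices, and the result is only an approximation of the infimum):

```python
import numpy as np

def NL(f, a, b, num=2001, slopes=None):
    # Approximate NL(f; [a,b]) = inf_{k,c} sup |f(x) - (k x + c)| / (b - a).
    # For fixed k the optimal intercept is the mid-range of f(x) - k*x,
    # which leaves half the oscillation (np.ptp = max - min).
    x = np.linspace(a, b, num)
    if slopes is None:
        slopes = np.linspace(-10, 10, 4001)  # crude search range (assumption)
    best = min(np.ptp(f(x) - k * x) / 2 for k in slopes)
    return best / (b - a)

# Sharpness example from the post: NL(|x|; [-1,1]) = L/4 with L = 1.
print(NL(np.abs, -1.0, 1.0))   # approximately 0.25
```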
### Relation with derivatives

The definition of derivative immediately implies that if ${f'(a)}$ exists, then ${NL(f;I)\to 0}$ as ${I}$ shrinks to ${a}$ (that is, gets smaller while containing ${a}$). A typical construction of a nowhere differentiable continuous function is based on making ${NL(f;I)}$ bounded from below; it is enough to do this for dyadic intervals, and that can be done by adding wiggly terms like ${2^{-n}\mathrm{dist}\,(x, 2^{-n}\mathbb Z)}$: see the blancmange curve. The converse is false: if ${NL(f; I)\to 0}$ as ${I}$ shrinks to ${a}$, the function ${f}$ may still fail to be differentiable at ${a}$. The reason is that the affine approximation may have different slopes at different scales. An example is ${f(x)=x \sin \sqrt{-\log |x|}}$ in a neighborhood of ${0}$. Consider a small interval ${[-\delta, \delta]}$. The line ${y = kx}$ with ${k=\sin\sqrt{-\log \delta}}$ is a good approximation to ${f}$ because ${f(x)/x\approx k}$ on most of the interval except for a very small part near ${0}$, and on that part ${f}$ is very close to ${0}$ anyway. Why the root of the logarithm? Because ${\sin \log |x|}$ has a fixed amount of change on a fixed proportion of ${[-\delta, \delta]}$, independently of ${\delta}$. We need a function slower than the logarithm, so that as ${\delta}$ decreases, there is a smaller amount of change on a larger part of the interval ${[-\delta, \delta]}$.

### Nonlinearity of Lipschitz functions

Suppose ${f}$ is a Lipschitz function, that is, there exists a constant ${L}$ such that ${|f(x)-f(y)|\le L|x-y|}$ for all ${x, y\in I}$. It’s easy to see that ${NL(f;I)\le L/2}$, by taking the mid-range approximation ${y=\frac12 (\max_I f + \min_I f)}$. But the sharp bound is ${NL(f;I)\le L/4}$, whose proof is not as trivial. The sharpness is shown by ${f(x)=|x|}$ with ${I=[-1,1]}$.

Proof. Let ${k}$ be the slope of the linear function that agrees with ${f}$ at the endpoints of ${I}$.
Subtracting this linear function from ${f}$ gives us a Lipschitz function ${g}$ such that ${-L-k\le g'\le L-k}$ and ${\int_I g'= 0}$. Let ${A = \int_I (g')^+ = \int_I (g')^-}$. Chebyshev’s inequality gives lower bounds for the measures of the sets ${g'>0}$ and ${g'<0}$: namely, ${|g'>0|\ge A/(L-k)}$ and ${|g'<0|\ge A/(L+k)}$. By adding these, we find that ${|I| \ge 2LA/(L^2-k^2)\ge 2A/L}$. Since ${\max _I g - \min_I g \le A}$, the mid-range approximation to ${g}$ has error at most ${A/2 \le |I|L/4}$. Hence ${NL(f; I) = NL(g; I) \le L/4}$.

### Reducing nonlinearity

Turns out, the graph of every Lipschitz function has relatively large almost-flat pieces. That is, there are subintervals of nontrivial size where the measure of nonlinearity is much smaller than the Lipschitz constant. This result is a special (one-dimensional) case of Theorem 2.3 in Affine approximation of Lipschitz functions and nonlinear quotients by Bates, Johnson, Lindenstrauss, Preiss, and Schechtman.

Theorem AA (for “affine approximation”): For every ${\epsilon>0}$ there exists ${\delta>0}$ with the following property. If ${f\colon I\to \mathbb R}$ is an ${L}$-Lipschitz function, then there exists an interval ${J\subset I}$ with ${|J|\ge \delta |I|}$ and ${NL(f; J)\le \epsilon L}$.

Theorem AA should not be confused with Rademacher’s theorem, which says that a Lipschitz function is differentiable almost everywhere. The point here is a lower bound on the size of the interval ${J}$. Differentiability does not provide that. In fact, if we knew that ${f}$ is smooth, or even a polynomial, the proof of Theorem AA would not become any easier.

### Proof of Theorem AA

We may assume ${I=[-1, 1]}$ and ${L=1}$. For ${t\in (0, 2]}$ let ${L(t) = \sup \{|f(x)-f(y)|/|x-y| \colon x, y\in I, \ |x-y|\ge t\}}$. That is, ${L(t)}$ is the restricted Lipschitz constant, one that applies for distances at least ${t}$. It is a decreasing function of ${t}$, and ${L(0+)=1}$.
Note that ${|f(-1)-f(1)|\le 2L(1)}$ and that every value of ${f}$ is within ${2L(1)}$ of either ${f(-1)}$ or ${f(1)}$. Hence, the oscillation of ${f}$ on ${I}$ is at most ${6L(1)}$. If ${L(1) \le \epsilon/3}$, then the constant mid-range approximation on ${I}$ gives the desired conclusion, with ${J=I}$. From now on ${L(1) > \epsilon/3}$. The sequence ${L_k = L(4^{-k})}$ is increasing toward ${L(0+)=1}$, which implies ${L_{k+1}\le (1+\epsilon) L_k}$ for some ${k}$. Pick an interval ${[a, b]\subset I}$ that realizes ${L_k}$, that is, ${b-a\ge 4^{-k}}$ and ${|f(b)-f(a)| = (b-a)L_k}$. Without loss of generality ${f(b)>f(a)}$ (otherwise consider ${-f}$). Let ${J = [(3a+b)/4, (a+3b)/4]}$ be the middle half of ${[a, b]}$. Since each point of ${J}$ is at distance at least ${4^{-k-1}}$ from both ${a}$ and ${b}$, it follows that ${\displaystyle f(b) + L_{k+1}(x-b) \le f(x) \le f(a) + L_{k+1}(x-a) }$ for all ${x \in J}$. So far we have pinched ${f}$ between two affine functions of equal slope. Let us consider their difference: ${\displaystyle (f(a) + L_{k+1}(x-a)) - (f(b) + L_{k+1}(x-b)) = (L_{k+1}-L_k) (b-a)}$. Recall that ${L_{k+1}\le (1+\epsilon) L_k}$, which gives a bound of ${\epsilon L_k(b-a) \le 2\epsilon L |J|}$ for the difference. Approximating ${f}$ by the average of the two affine functions, we conclude that ${NL(f;J)\le \epsilon L}$ as required. It remains to consider the size of ${J}$, about which we only know ${|J|\ge 4^{-k}/2}$ so far. Naturally, we want to take the smallest ${k}$ such that ${L_{k+1}\le (1+\epsilon) L_k}$ holds. Let ${m}$ be this value; then ${L_m > (1+\epsilon)^{m} L_0}$. Here ${L_m\le 1}$ and ${L_0 = L(1)> \epsilon/3 }$. The conclusion is that ${(1+\epsilon)^m < 3/\epsilon}$, hence ${m< \log(3/\epsilon)/\log(1+\epsilon)}$. This finally yields ${\displaystyle \delta = 4^{-\log(3/\epsilon)/\log(1+\epsilon)}/2}$ as an acceptable choice, completing the proof of Theorem AA.
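For a sense of scale, here is the ${\delta}$ produced by the final formula of the proof for a few values of ${\epsilon}$ (a quick sketch; it shrinks extremely fast as ${\epsilon\to 0}$):

```python
import math

def delta(eps):
    # delta from the proof of Theorem AA: 4^(-log(3/eps)/log(1+eps)) / 2
    m_bound = math.log(3 / eps) / math.log(1 + eps)
    return 0.5 * 4.0 ** (-m_bound)

for eps in (0.5, 0.2, 0.1):
    print(eps, delta(eps))   # already astronomically small at eps = 0.1
```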
A large amount of work has been done on quantifying ${\delta}$ in various contexts; for example Heat flow and quantitative differentiation by Hytönen and Naor.

## 2019 Formula One season

This is now a separate post from Graph theory in Formula 1, so that the evolution of the graph of 1-2 finishes can be tracked. The graphs are shown as they were after the race mentioned in the subheading. At times, when the main F1 graph remained unchanged, I threw in similar graphs for some F1 feeder series.

## Australia

Obviously, there is only one edge after the first race of the season, a Mercedes 1-2. This turned out to be the beginning of a series of five 1-2 finishes for Mercedes, so the graph did not change again until Monaco.

## Monaco

At Monaco, Mercedes drivers took “only” the first and third place, as Vettel appeared in top 2.

## Austria

It began with the youngest ever front row of the F1 grid: Leclerc and Verstappen. And ended with the youngest ever 1-2 finish (represented by an edge here) in Formula One: Verstappen and Leclerc. For the moment, the graph is disconnected. Two predictions: (1) the components will get connected; (2) the graph will stay with 5 vertices, tying the record for the fewest vertices (there were 5 in 2000 and 2011). Which is a way of saying, I don’t expect either Gasly or anyone outside of top 3 teams to finish in top two for the rest of the season.

## Germany

The rain-induced chaos in Hockenheim could have added a third component to the graph, but instead it linked the two existing ones. The graph is now a path on 5 vertices, which is not a likely structure in this context.

## Hungary

Sure, the ${P_5}$ configuration did not last. The graph is no longer a tree, and no longer bipartite. A prediction added during the summer break: the season’s graph will contain a Hamiltonian cycle.

## Belgium

Getting closer to constructing a Hamiltonian cycle: only one degree-1 vertex remains.
The graph is similar to the 1992 season, except the appendage was one edge longer back then. In 1992, the central position was occupied by Mansell, who scored 93% more points than the runner-up to the title. This is where we find Hamilton at present, though with “only” 32% more points than the 2nd place. (The percentages are called for, because the scoring system changed in between.)

## Italy

A Hamiltonian cycle is now complete. The only way to lose it is by adding another vertex to the graph, which I do not expect to happen. The graph resembles the 2001 season, where Hamilton’s position was occupied by Schumacher. The only difference is that in 2001, there was an extra edge incident to Schumacher.

## Singapore

We have a 4-clique, and are two edges short of the complete graph on 5 vertices. However, I predict the complete graph will not happen. Achieving it would require two races in which neither Hamilton nor Leclerc finishes in top two. Such a thing happened just once in the first 15 races, in the chaos of rainy Hockenheim. Not likely to happen twice in the remaining 6.

## Russia

The Formula 1 graph did not change, which is not surprising, considering how unlikely the two missing edges are to appear (see above). But since the FIA Formula 3 championship ended in Sochi, here is its complete graph. The champion, Shwartzman, has the highest vertex degree with 5. Given the level of success of the Prema team, one could expect their drivers to form a 3-clique, but this is not the case: Armstrong and Daruvala are not connected (Daruvala’s successful races were mostly toward the beginning of the season, Armstrong’s toward the end). Two Hitech drivers, Vips and Pulcini, each share a couple of edges with Prema drivers. All in all, this was a closely fought championship that sometimes made Formula 1 races look like parade laps in comparison.

## Japan

Unlikely as it was, another edge was created, bringing the graph within one edge of the first non-planar season in F1.
Could we get an even more unlikely Verstappen-Bottas finish in the remaining four races? Red Bull did not look strong enough in recent races for that to happen.

## Interlude: Formula 4

The level of Formula 4 championships is highly variable: some struggle to survive with a handful of cars on the grid, some have developed into spectacular competitions. The following summary of F4 history is highly recommended. The two most noteworthy ones are the “twin” F4 championships held in Germany and Italy, which have disjoint calendars and share many of the drivers. Here is a summary of German (ADAC) F4 in 2019: At times, the US Racing team threatened to take positions 1-2-3-4 in the standings. They did get 1, 3, 4, 6, but it was a close fight, with Pourchaire taking the title by 7 points (258 : 251) over Hauger. Hauger and his neighbors in the graph (the US Racing quartet and Petecof of the Prema team) occupied the top 6 positions. The radius of the graph is 3, with its (unique) center being Pourchaire. The Italian F4 championship sometimes had over 35 cars on the grid, but its 1-2 graph is smaller, of radius 2. The unique center is Hauger, who won by a landslide (Hauger 369 : 233 Petecof). The only Italian driver on the graph of this Italian championship is Ferrari, who once took second place when Hauger and Petecof collided. Arguably, Hauger is the 2019 driver of the year at F4 level: he won 6 races in ADAC F4 and 12 in Italian F4. Pourchaire won 4 races in ADAC F4 and did not participate in Italian F4. Another fascinating contest was the season-long battle of two 15-year-old F4 rookies: Aron and Stanek. Stanek took the ADAC F4 rookie title, Aron did likewise in Italy. One can call it a tie, with a rematch likely next year unless they move to different categories. Mercedes-backed Aron gets more media attention so far.

## Mexico

No new edge, just another repeat of the Hamilton-Vettel pairing: it is the 55th time they took the top two spots in Formula 1, an all-time record.
They are adjacent on every graph since 2010 except for 2013, where Hamilton’s only race win came with Vettel finishing 3rd. They were also 1-3 in Japan 2009, so one has to go back to 2008, when Vettel drove for Toro Rosso, to find a season where they did not share the podium. Meanwhile, the Formula Renault Eurocup 2019 season ended, so here is its summary graph. As usual, the highest vertex degree (Piastri, 6) indicates the champion. The 4-clique in the center of the large component took the top 4 places. The small component De Wilde – Lorandi comes from the season opener, where the JD Motorsport team claimed the top two. Neither driver was in top two again, as the rest of the season was almost entirely a contest between R-ace GP and MP Motorsport. Not obvious from the graph: despite only appearing in top 2 once, as a second place in Spa, Collet took a handful of 3rd and 4th places on his way to the 5th place in overall standings and the top rookie title. The gap between 5th and 6th places was 207:102, more than a factor of 2, and the championship often felt like there were only 5 cars in the running, all from R-ace GP or MP Motorsport.

## United States

It was so close to a Bottas-Verstappen finish, which would have completed the graph to ${K_5}$, making it the first non-planar F1 graph in history. Could be that some Law of Planarity interfered, causing the yellow flags that denied Verstappen that final chance at overtaking Hamilton. No change to the graph, then. Another feeder series fills the spot instead: the Formula Regional European Championship (FREC). An unimpressive affair from start to finish, to be frank. Yes, it was the first year the championship took place, and it’s supposed to play an important role as a stepping stone from F4 to FIA F3. (Few drivers can realistically jump into international F3 competition directly from F4, with Hauger and Pourchaire likely to be the only two to pull off this move in 2020.)
Still, it is a travesty to award 25 Super License points – same as in Japanese Super Formula – for beating this small field of mostly under-tested cars and some under-prepared drivers. As Floersch put it, ‘Prema had three cars since November, so they’d been testing since November with three guys who actually can also drive. We had the cars one week before Paul Ricard and had one driver.’ At least it was pretty close to a wheel graph. At its center, Vesti won the championship by a wide margin. I included the Fraga-Guzman edge based on my recollection of Guzman finishing second in the second race at Monza – the official standings table gives Guzman no points for any Monza race, as if there was a post-race DQ that nobody mentioned to the press (but given the level of organization, I would not be surprised if it was a clerical error).

## Brazil

Funny how predictions work sometimes. After the Austrian Grand Prix, when Gasly was still with Red Bull, I wrote ‘I don’t expect either Gasly or anyone outside of top 3 teams to finish in top two for the rest of the season.’ But Gasly dropped out of a top-3 team and then finished second in Brazil. Well, my prediction did not cover the Toro Rosso version of Gasly, who now looks like a different driver inhabiting the same body, Jekyll/Hyde style. This race also broke the Hamiltonian cycle, and the only chance for it to be recovered is for Gasly to finish in top two again in Abu Dhabi.
https://math.stackexchange.com/questions/328722/linear-algebra-proof-dependent-or-independent
# Linear algebra proof, dependent or independent

Suppose that $p_1=4-3x+6x^2+2x^3$, $p_2=1+8x+3x^2+x^3$, and $p_3=3-2x-x^2$ are vectors in $P_3$. Determine if $p_1$, $p_2$, and $p_3$ are linearly independent or dependent. Justify your answer.

So far, what I did was say: Suppose $k_1$, $k_2$ and $k_3$ are constants. Then I set $$k_1 \begin{pmatrix} 4\\ -3\\ 6\\ 2 \end{pmatrix} + k_2 \begin{pmatrix} 1\\ 8\\ 3\\ 1 \end{pmatrix} + k_3 \begin{pmatrix} 3\\ -2\\ -1\\ 0 \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0\\ 0 \end{pmatrix}$$ Then I made the augmented matrix $$\begin{pmatrix} 4 & 1 & 3 & 0 \\ -3 & 8 & -2 & 0\\ 6 & 3 & -1 & 0\\ 2 & 1 & 0 & 0 \end{pmatrix}$$ and found that the RREF is $$\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}$$ Therefore, $k_1 = k_2 = k_3 = 0$. Therefore, $p_1$, $p_2$ and $p_3$ are linearly independent.

So far this is what I have. If this is right, then how do I go about justifying my answer? Also, is there a better way of writing vectors on this website?

• What is the RREF? Mar 12, 2013 at 19:23
• Row reduced echelon form. Mar 12, 2013 at 19:28
• Assuming your calculation is correct, it follows from the fact that $\{1,x,x^2,x^3\}$ spans $P_3$. Mar 12, 2013 at 19:31
• @nicholas codecogs.com/latex/eqneditor.php Mar 12, 2013 at 19:35
• awesome thanks euler Mar 12, 2013 at 20:24

That absolutely justifies that $p_1,p_2,p_3$ are linearly independent.

• @Pirategull It tells us that the image of the map isn't the full space. This is a good thing, because if it were, then the nullspace of the map would have dimension $-1,$ whatever that would mean. Mar 16, 2020 at 21:35
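As a numerical cross-check of the row-reduction above (an illustration, not part of the original answer): the three polynomials are independent exactly when the 4×3 coefficient matrix has full column rank.

```python
import numpy as np

# Coefficient columns of p1, p2, p3 in the basis {1, x, x^2, x^3}
A = np.array([
    [ 4, 1,  3],   # constant terms
    [-3, 8, -2],   # x coefficients
    [ 6, 3, -1],   # x^2 coefficients
    [ 2, 1,  0],   # x^3 coefficients
], dtype=float)

# Independent iff the only solution of A k = 0 is k = 0,
# i.e. rank equals the number of columns
rank = np.linalg.matrix_rank(A)
print(rank == A.shape[1])  # True: p1, p2, p3 are linearly independent
```

The rank comes out as 3, matching the three pivot rows in the RREF.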
https://hero.handmade.network/forums/code-discussion/t/2982-introducing_data_oriented_design_into_position_parts_of_entities/2#14590
Mikael Johansson (99 posts / 1 project)

My concern is not about the performance; it never was. The good thing with pointers is that they are very useful when the structs get really nested. Like you have a tank "struct" that contains a "vehicle" struct, that contains a "movable" struct, that contains an "entity" struct. That could become very complex code to get to that specific tank's entity. So you give the tank a pointer to its entity. Workflow is (for me) extremely important.

pragmatic_hero (14 posts)

I played around with it a bit, and my asm is a bit rusty, but the difference is roughly:

A)

    // rax = dude addr, rdi = gun_types
    mov edx, DWORD PTR [rax]          // load1, edx = gun_idx
    lea rdx, [rdx+rdx*4]              // address calc, rdx = rdx * 5
    add esi, DWORD PTR [rdi+8+rdx*4]  // load2, adding damage to esi, [gun_types + fieldoffset + rdx * 4]

B)

    mov rdx, QWORD PTR [rax]    // load1, gun_type
    add esi, DWORD PTR [rdx+8]  // load2, adding damage to esi

In case of A), at some point in the function the "gun_types" variable has to be loaded into rdi or some other register. So there is *potentially* an extra read. Which is why I said this is *barely* an indirection. But this is hard to benchmark in any reasonable sense, since this is gameplay code, not some super-tight data-heavy loops which can be singled out. But overall it's +1 instruction (lea) and a more complicated addressing mode = more instruction bytes. Plus potentially the gun_types array address has to be loaded from a variable. If it's an array in the data segment, then it doesn't even need a memory load. Optimizing instructions in gameplay code like this is quite insane, truly. Ratholing ^ 2.

That's probably fair to say that it's not important enough to optimize, but I figured that whatever optimization you apply here can be generalized to other systems (physics, rendering, etc.).
On the other side, if the optimizations that can be applied are not too complex, we may get large performance increases in gameplay code, which will let us have crazier mechanics that take advantage of that (for example, having way more entities, or entities that have much more complex AI). And lastly, figuring out how to store the positions in a data-oriented way isn't really optimizing gameplay code; it's a lot more general. Position is important for physics and rendering as well, so in many ways this is optimizing all of those subsystems. Again, this might not actually matter, but if it did, I'm interested to explore and figure out how to architect a fix, because I think the fix would generalize to other parts of programming.

Telash:
> My concern is not about the performance, it never was. The good thing with pointers is that they are very useful when the structs get really nested. Like you have a tank "struct", that contains a "vehicle" struct, that contains a "movable" struct, that contains an "entity" struct. That could become very complex code to get to that specific tank's entity. So you give the tank a pointer to its entity. Workflow is (for me) extremely important.

I try to approach it differently. I think that we should figure out what the code that gives us really good performance looks like, and then figure out how we can write that code as easily as possible. To some degree this is up to the language to provide, but we can also structure the API we expose to be cleanly organized. As for your code on the first page, you solve the problem of overwriting dead monstars' positions by looping and checking for impossible position values. I guess that's a fair way to do it, and creating monstars is definitely not as common, so it shouldn't be a performance issue, but I think I have a better system.
    struct soa_pos_data
    {
        // NOTE: Handles are indexes into the pos id table
        // NOTE: PosId's are indexes into our soa table where we store the x, y
        u32* PosIdArray;

        // NOTE: The soa data
        u32 NumPos;
        u32 CurrNumPos;
        f32* PosX;
        f32* PosY;
    };

    struct entity
    {
        u32 PosHash;
        // Rest of the entity data
    };

    inline u32 CreatePosHash(soa_pos_data* PosData, f32 NewX, f32 NewY)
    {
        u32 PosHash = GenHashValue(PosData);
        u32 PosId = PosData->CurrNumPos++;
        PosData->PosIdArray[PosHash] = PosId;
        PosData->PosX[PosId] = NewX;
        PosData->PosY[PosId] = NewY;

        return PosHash;
    }

    inline void DeletePosHash(soa_pos_data* PosData, u32 DelPosHash)
    {
        // NOTE: We don't want to leave any holes in our x,y arrays so we swap the
        // deleted x,y with the last x,y and update the pos id table
        // NOTE: Pre-decrement: CurrNumPos-1 is the index of the last element
        u32 LastPosId = --PosData->CurrNumPos;
        u32 LastPosHash = FindPosHashVal(PosData, LastPosId);

        u32 DelPosId = PosData->PosIdArray[DelPosHash];
        PosData->PosIdArray[LastPosHash] = DelPosId;
        PosData->PosX[DelPosId] = PosData->PosX[LastPosId];
        PosData->PosY[DelPosId] = PosData->PosY[LastPosId];

        RemovePosHashFromHash(PosData, DelPosHash);
    }

    inline void UpdateSingleEntity(soa_pos_data* PosData)
    {
        entity Entity;

        // NOTE: Get Pos
        u32 PosId = PosData->PosIdArray[Entity.PosHash];
        f32 EntityX = PosData->PosX[PosId];
        f32 EntityY = PosData->PosY[PosId];

        // ... do stuff with the pos

        // NOTE: If we modified the pos, write back
        if (PosWasUpdated)
        {
            PosData->PosX[PosId] = NewEntityX;
            PosData->PosY[PosId] = NewEntityY;
        }
    }

So I think I implemented this right; I'm gunna try it in my game, but this is the pseudocode. So I have an array of indexes, PosIdArray, that is my layer of indirection. Every entity stores an index into this hashtable, called PosHash, instead of a position.
When I want to retrieve a position, I use the PosHash as an array index into my PosIdArray and get out a u32. This u32 is the index into my actual posx, posy arrays, and I can use it to retrieve/modify them. Now when I create a new position, I want to append the position to my posx, posy arrays, assuming that those arrays are already filled (they have no holes in them). So I get a PosHash by calling GenHashValue, and I make the stored value at PosHash be the index one past the current last position in the posx, posy arrays.

When I delete a PosHash, I want to make sure I don't leave an empty position in the middle of my posx, posy arrays. So to prevent leaving the DelPosId index empty in my arrays, I overwrite it with the x,y values stored at the end of my posx, posy arrays. I then find the PosHash that corresponds to the last x,y values and make it point to DelPosId (I make it point to where I moved the x,y values to). This way, my array of positions is always filled.

For my hashtable functions, I use create hash, delete hash, and find hash. Hashtables are somewhere between O(1) and O(logn) depending on the hash function but generally, these operations shouldn't be too taxing. The thing that bothers me is that, although my posx, posy array can be made arbitrarily long by converting it to a linked list with elements being an array of 64 positions, the hashtable needs to know upfront how many elements there can be in my game. I guess it's really hard to have fast access times without putting a limit on how many positions we can store in our game. I'm not super knowledgeable on data structures, so is there a different way to do this? It seems like this method has a lot of benefits except for putting an upfront limit on how many monstars we can have.

pragmatic_hero (edited)

Telash:
> Workflow is (for me) extremely important.

Use of indices/handles for most things IS a matter of workflow.
It simplifies serialization and run-time data-structure editing - which is very powerful and liberating. An index is the most basic and simple thing - "Nth thing in an array".

Telash:
> Like you have a tank "struct", that contains a "vehicle" struct, that contains a "movable" struct, that contains a "entity" struct.

You mean tank->vehicle->movable->entity? Or tank.vehicle.movable.entity? Why would you do that? That's just some bad code, some bizarro OOPism inheritance hierarchy. Has nothing to do with indices.

pragmatic_hero

HawYeah:
> Yeah I guess I hadn't really considered it before because the extra level of indirection always felt really slow to me.

HawYeah:
> if the optimizations that can be applied are not too complex, we may get large performance increases in gameplay code which will let us have crazier mechanics that take advantage of that

HawYeah:
> Hashtables are somewhere between O(1) and O(logn) depending on the hash function but generally, these operations shouldn't be too taxing

You are "optimizing" code based on FEELINGS of what IS and ISN'T going to be fast/slow without making any measurements on any meaningful amount of code. A priori deciding how things should be written based on some un-tested ASSUMPTIONS. If you're gonna do technical ratholing, at least do it right.

What's the point of doing SoA if you update one entity at a time? There's all this weird shit in there - like generating hash values out of nothing; why are there hashmaps to begin with? Why can't the entity have a 16/32-bit index directly into the position array?

Having things in flat arrays and then iterating over them sequentially, or accessing array elements by an index, is pretty much as fast as you're going to get. It is HARD to write slow code in C if you do the "dumbest" most straightforward thing. Doing SoA/SIMDifying entities is like the last resort, but at that point you'd better have tens of thousands of entities.
And even then you'd most likely get way more mileage out of making sure you use the right OpenGL AZDO fastpath.

Mikael Johansson

"You mean tank->vehicle->movable->entity? Or tank.vehicle.movable.entity?"

Well, yes, and no. That is one way to go, but with pointers you can also go directly tank->entity and tank->moveable.

"Why would you do that? That's just some bad code, some bizarro OOPism inheritance hierarchy."

The reason is to make more reusable functions. Like the render function takes a pointer to all entities, the move function a pointer to all moveables, the tracks function takes a pointer to the vehicles, and the rotate-cannon function takes a pointer to all tanks, for example. Maybe this is broken down, maybe not. It all depends. Calling it "bad" without even knowing the context is just silly.
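The scheme debated in this thread, a dense array with swap-remove deletion plus an indirection table of stable handles, can be sketched without the hashtable (this is my own illustration, in Python for brevity; names like PositionStore, slots, and owners are not from the posted C code). Because the slot table is an ordinary growable array, it also sidesteps the fixed-size-hashtable concern raised above:

```python
class PositionStore:
    """Slot-map sketch: entities keep stable handles, positions stay densely packed."""

    def __init__(self):
        self.slots = []   # handle -> index into the dense arrays (None after delete)
        self.xs = []      # dense, hole-free position data
        self.ys = []
        self.owners = []  # dense index -> owning handle, needed for swap-remove fixup

    def create(self, x, y):
        handle = len(self.slots)
        self.slots.append(len(self.xs))   # new element goes at the end
        self.xs.append(x)
        self.ys.append(y)
        self.owners.append(handle)
        return handle

    def get(self, handle):
        i = self.slots[handle]
        return self.xs[i], self.ys[i]

    def delete(self, handle):
        i = self.slots[handle]
        last = len(self.xs) - 1
        # Move the last element into the hole, then repoint its handle's slot
        self.xs[i], self.ys[i] = self.xs[last], self.ys[last]
        moved = self.owners[last]
        self.owners[i] = moved
        self.slots[moved] = i
        self.xs.pop(); self.ys.pop(); self.owners.pop()
        self.slots[handle] = None         # handle is now dead
```

A production version would recycle dead slots (e.g. a free list with generation counters) instead of letting the slot table grow forever, but the swap-remove fixup is the same.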
https://zbmath.org/?q=an:0724.05071
# zbMATH — the first resource for mathematics

An application of Ramsey's theory to partitions in groups. I. (English) Zbl 0724.05071

In 1916 I. Schur proved the following theorem: in every finite colouring of the positive integers N there exists a monochrome solution to the equation $$x+y=z$$. The authors of this paper prove a version of Schur's theorem for arbitrary groups. For the equation (*) $$xy=z$$, where x, y, and z are distinct non-identity elements, they obtain

Theorem A: For any n-colouring of an infinite group there exists a monochrome solution to (*).

Theorem B: For any n-colouring of a finite group of order at least $$R(2,8(n^2-n)/2)+1$$ there exists a monochrome solution to (*). (Here the numbers R(a,b,c) are the Ramsey numbers.)

In the special cases $$n=2$$ and $$n=3$$, using Theorem A they obtain

Theorem C: a) If G is a 2-coloured group of order greater than 7 which is not elementary Abelian of order 9, then there is a monochrome solution of (*). b) If G is a 3-coloured group of order 17 or greater than 18, then there is a monochrome solution of (*).

The proof of Theorem C b) needs the help of a computer.

##### MSC:

05D10 Ramsey theory
05A17 Combinatorial aspects of partitions of integers

##### Keywords:

Ramsey theory; partitions; Schur's Theorem; group

Full Text:

##### References:

[1] G. Ehrlich, Algorithm 477: Generator of set-partitions to exactly R subsets [G7], Communications of the ACM, 17, no. 4 (1974), pp. 224-225.
[2] S. Even, Algorithmic Combinatorics, Macmillan (1973), pp. 60-61. MR 335266 | Zbl 0258.05101
[3] R. L. Graham, Rudiments of Ramsey theory, CBMS Regional Conference Series in Mathematics, no. 45, American Math. Soc. (1981). MR 608630 | Zbl 0458.05043
[4] R. L. Graham - B. L. Rothschild, Ramsey's theorem for n-parameter sets, Trans. Amer. Math. Soc., 159 (1971), pp. 257-292. MR 284352 | Zbl 0233.05003 | doi:10.2307/1996010
[5] R. L. Graham - B. L. Rothschild - J. H. Spencer, Ramsey Theory, Wiley-Interscience Series in Discrete Math. (1980). MR 591457 | Zbl 0455.05002
[6] E. Reingold - J. Nievergelt - N. Deo, Combinatorial Algorithms, Prentice-Hall (1977), pp. 106-112. Zbl 0367.68032
[7] J. Sanders, A Generalization of Schur's Theorem, dissertation, Yale University (1969).
[8] S. Shelah, Primitive recursive bounds for van der Waerden numbers, J. AMS, 1 (1988), pp. 683-697. MR 929498 | Zbl 0649.05010 | doi:10.2307/1990952
[9] I. Schur, Über die Kongruenz x^m + y^m ≡ z^m (mod p), Jber. Deutsche Math.-Verein., 25 (1916), pp. 114-116. JFM 46.0193.02
[10] I. Schur, Gesammelte Abhandlungen, Springer-Verlag, Berlin (1973). Zbl 0274.01054
[11] B. L. van der Waerden, Beweis einer Baudetschen Vermutung, Nieuw Arch. Wisk., 19 (1927), pp. 212-216. JFM 53.0073.12

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
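The classical additive form of Schur's theorem quoted at the top of the review can be verified by brute force for small cases. A sketch in Python (note this checks x + y = z with x = y allowed, the classical setting, not the distinct-elements condition of equation (*)):

```python
from itertools import product

def has_mono_solution(colours):
    """colours[i] is the colour of the integer i+1;
    look for x + y = z with all three the same colour."""
    n = len(colours)
    for x in range(1, n + 1):
        for y in range(x, n + 1):
            z = x + y
            if z <= n and colours[x - 1] == colours[y - 1] == colours[z - 1]:
                return True
    return False

# Every 2-colouring of {1,...,5} has a monochrome x + y = z ...
print(all(has_mono_solution(c) for c in product(range(2), repeat=5)))  # True

# ... while {1,...,4} does not: colour {1,4} red and {2,3} blue
print(has_mono_solution((0, 1, 1, 0)))  # False
```

This matches the known Schur number S(2) = 4: four integers can still be split into two sum-free classes, five cannot.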
https://www.esaral.com/q/the-volume-of-a-right-circular-cone-is-9856-mathrmcm3
# The volume of a right circular cone is $9856 \mathrm{~cm}^{3}$.

Question. The volume of a right circular cone is $9856 \mathrm{~cm}^{3}$. If the diameter of the base is $28 \mathrm{~cm}$, find

(i) height of the cone
(ii) slant height of the cone
(iii) curved surface area of the cone

$\left[\right.$ Assume $\left.\pi=\frac{22}{7}\right]$

Solution:

(i) Radius of cone $=\left(\frac{28}{2}\right) \mathrm{cm}=14 \mathrm{~cm}$

Let the height of the cone be $h$ (in cm).

Volume of cone $=9856 \mathrm{~cm}^{3}$

$\Rightarrow \frac{1}{3} \pi r^{2} h=9856 \mathrm{~cm}^{3}$

$\Rightarrow\left[\frac{1}{3} \times \frac{22}{7} \times(14)^{2} \times h\right] \mathrm{cm}^{3}=9856 \mathrm{~cm}^{3}$

$\Rightarrow h=48 \mathrm{~cm}$

Therefore, the height of the cone is $48 \mathrm{~cm}$.

(ii) Slant height $(l)$ of cone $=\sqrt{r^{2}+h^{2}}$

$=\left[\sqrt{(14)^{2}+(48)^{2}}\right] \mathrm{cm}$

$=[\sqrt{196+2304}] \mathrm{cm}$

$=50 \mathrm{~cm}$

Therefore, the slant height of the cone is $50 \mathrm{~cm}$.

(iii) CSA of cone $=\pi r l$

$=\left(\frac{22}{7} \times 14 \times 50\right) \mathrm{cm}^{2}$

$=2200 \mathrm{~cm}^{2}$

Therefore, the curved surface area of the cone is $2200 \mathrm{~cm}^{2}$.
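The arithmetic in the solution can be double-checked in a few lines (a sketch using the same π ≈ 22/7 approximation as the problem statement):

```python
import math

pi = 22 / 7                 # the approximation the problem asks us to assume
V = 9856.0                  # volume in cm^3
r = 28.0 / 2                # radius = diameter / 2 = 14 cm

h = 3 * V / (pi * r**2)     # from V = (1/3) * pi * r^2 * h  ->  h ≈ 48 cm
l = math.hypot(r, h)        # slant height sqrt(r^2 + h^2)   ->  l ≈ 50 cm
csa = pi * r * l            # curved surface area pi * r * l ->  ≈ 2200 cm^2
```

Up to floating-point rounding, the three results agree with the worked values 48 cm, 50 cm, and 2200 cm².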
http://mathhelpforum.com/algebra/100520-weights-print.html
# Weights

• Sep 4th 2009, 02:29 AM
travellingscotsman

Hello all. I'm trying to work out the total weight of a bundle of pipe that in total is 4200 feet long. Each joint is 40 foot long, so this means that I have 105 joints of pipe. The diameter is 13 3/8" and it's 54.5 pounds per foot. Any ideas?

• Sep 4th 2009, 04:05 AM
HallsofIvy

Quote: Originally Posted by travellingscotsman
> Hello all. I'm trying to work out the total weight of a bundle of pipe that in total is 4200 feet long. Each joint is 40 foot long, so this means that I have 105 joints of pipe. The diameter is 13 3/8" and it's 54.5 pounds per foot. Any ideas?

Since you have "pounds per foot", the diameter is not relevant. You have 4200 feet of pipe at 54.5 pounds per foot: the total weight is (4200)(54.5) = 228900 pounds, or 228900/2000 = 114.45 tons. I hope you are not hoping to lift that yourself!

• Sep 4th 2009, 04:08 AM
aidan

Quote: Originally Posted by travellingscotsman
> Hello all. I'm trying to work out the total weight of a bundle of pipe that in total is 4200 feet long. Each joint is 40 foot long, so this means that I have 105 joints of pipe. The diameter is 13 3/8" and it's 54.5 pounds per foot. Any ideas?

Yes. You state that you have 4200 feet of pipe that weighs 54.5 pounds per foot, and that you are trying to work out the total weight of this bundle of pipe.

Step 1: If you had ONLY 1 foot of pipe, how much would 1 foot of pipe weigh?

Step 2: If you had 2 (two) feet of pipe, how much would 2 feet of pipe weigh? You should be able to provide that answer. (Hint: 54.5 + 54.5 = 109)

Step 3: If you had 3 feet of pipe, how much would 3 feet of pipe weigh? Hint: $3 \times 54.5 = 163.5$

Step 4: If you had 4 feet of pipe, how much would 4 feet of pipe weigh? No hints here. You should be able to provide the answer.

At this point you should see a pattern developing. You will need to do this step business 4200 more times before the answer for the total weight appears.
DO NOT SKIP ANY INTERMEDIATE STEPS!
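HallsofIvy's computation (and the pattern aidan is hinting at) collapses to a single multiplication; a quick sketch:

```python
length_ft = 4200          # total length of pipe
lb_per_ft = 54.5          # weight per foot
joint_ft = 40             # each joint is 40 feet

joints = length_ft // joint_ft        # 105 joints
total_lb = length_ft * lb_per_ft      # 228900 lb in total
tons = total_lb / 2000                # 114.45 short tons
per_joint_lb = joint_ft * lb_per_ft   # 2180 lb per 40-foot joint

print(joints, total_lb, tons, per_joint_lb)
```

The per-joint figure is handy if the bundle has to be lifted one joint at a time.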
http://blogs.crikey.com.au/pollbludger/2012/11/10/seat-of-the-week-boothby/?comment_page=3/
## Seat of the week: Boothby

Last held by Labor in 1949, the southern Adelaide suburbs seat of Boothby has been trending in the party's direction since the early Howard years.

UPDATE (12/11/12): Essential Research has Labor gaining ground for the second week in a row to attain their best position since March last year. They now trail 52-48, down from 53-47, from primary votes of 37% for Labor (steady), 45% for the Coalition (down one) and 9% for the Greens (steady). Also featured are monthly personal approval ratings, which last time had both leaders up in the immediate aftermath of Julia Gillard's sexism and misogyny speech. Whereas Gillard has maintained her gains, her approval steady at 41% and disapproval down two to 49%, Tony Abbott has fallen to his worst net result ever, his approval down four to 33% and disapproval up four to a new low of 58%. Gillard's lead as preferred prime minister is up from 43-36 to 45-32, her best result since February 2011. Also canvassed are options on how the government might rein in the budget, with reducing or means testing the baby bonus and increasing tax for those on high incomes respectively coming out on top.

The southern Adelaide electorate of Boothby covers coastal suburbs from Brighton south to Marino, extending inland to the edge of the coastal plain at Myrtle Bank and the hills at Belair, Eden Hills, Bellevue Heights and Flagstaff Hill. The seat's Liberal lean is softened by the area around the defunct Tonsley Park Mitsubishi plant, the only part of the electorate with below average incomes and above average ethnic diversity.
The redistribution has shaved the Liberal margin from 0.8% to 0.3% by adding about 10,000 voters in Aberfoyle Park from Mayo in the south, and removing 4000 voters at Myrtle Bank to Sturt and 1500 at Edwardstown to Hindmarsh. Boothby was created when South Australia was first divided into electorates in 1903, at which time it was landlocked and extended north into the eastern suburbs. Its coastal areas were acquired when the neighbouring electorate of Hawker was abolished in 1993. Labor held the seat for the first eight years of its existence, and remained competitive until the Menzies government was elected in 1949. This began a long-term trend to the Liberals which peaked in the 1970s, when margins were consistently in double digits. Former Premier and Liberal Movement figurehead Steele Hall held the seat from 1981 until he was succeeded by Andrew Southcott in 1996. A positive swing in the difficult 2004 election had Labor hopeful of going one better in 2007, inspiring Right powerbrokers to recruit what they imagined to be a star candidate in Nicole Cornes, a minor Adelaide celebrity and wife of local football legend Graham Cornes. However, Cornes only managed a 2.4% swing against a statewide result of 6.8% after a series of disastrous campaign performances. Labor again had high hopes at the 2010 election, seeing in the seat a potential gain to balance anticipated losses in Queensland and New South Wales. However, while the Labor swing of 2.2% outperformed a statewide result of 0.8%, perhaps reflecting a suppressed vote in 2007, it fell 0.8% short of what was required. Andrew Southcott came to the seat at the age of 26 after winning preselection at the expense of fellow moderate Robert Hill, the faction's leading light in the Senate. Tony Wright of the Sydney Morning Herald wrote that the Right had built up strength in local branches with a view to unseating its hated rival Steele Hall, and when denied by his retirement turned its guns on Hill as a "surrogate".
Unlike Hill, who went on to become government leader in the Senate, Southcott has led an unremarkable parliamentary career, finally winning promotion after the 2007 election defeat to the Shadow Minister for Employment Participation, Apprenticeships and Training. However, he was demoted to parliamentary secretary when Tony Abbott became leader in December 2009, after backing Malcolm Turnbull in the leadership vote. Southcott’s preselection for the coming election was challenged by former state party president Chris Moriarty, following disquiet in the party over his fundraising record. However, Moriarty was only able to manage 35 votes in the February 2012 party ballot against 195 for Southcott, support for his challenge reportedly evaporating as the Kevin Rudd leadership challenge came to a head. Southcott will again face his Labor opponent from 2010, Annabel Digance, a former nurse and SA Water Board member factionally aligned with the Right. • 101 victoria Posted Saturday, November 10, 2012 at 10:15 am | Permalink WE KNOW how long governments in Victoria will run. Four years from polling day to polling day was an important reform of the Bracks government, providing certainty and an end to the often-paralysing speculation of when a premier's car would roll up the Government House drive to seek an election. Ted Baillieu should be especially grateful. • 102 Greensborough Growler Posted Saturday, November 10, 2012 at 10:17 am | Permalink WWP, The bright shining light reflecting off the tinfoil hat gathers no hysterial publicity or moss. • 103 WeWantPaul Posted Saturday, November 10, 2012 at 10:21 am | Permalink GG I just don’t understand the frothing at the mouth over a royal commission and the abuse and condemnation of anyone who doesn’t agree with the lynch mob immediately. I’m long used to PB being a place of contempt intolerance and hate directed against people of faith, but this is a new level even for here. 
• 104 Leroy Posted Saturday, November 10, 2012 at 10:21 am | Permalink Coalition floating two things through the AFR today. Nuclear Subs, and the latest reheat of the S&G story. http://www.afr.com/f/free/blogs/christopher_joye/coalition_leaders_float_nuclear_FTYU0PR4uJLeGinF94G5kI Coalition leaders float nuclear navy PUBLISHED: 9 hours 15 MINUTES AGO | UPDATE: 1 hour 47 MINUTES AGO Christopher Joye Top Coalition leaders want to open the debate over the purchase of nuclear submarines to replace the navy’s diesel fleet, a huge step up in Australia’s military capability in response to China’s plan to become a major maritime power in the Pacific Ocean. http://www.afr.com/p/national/what_gillard_knew_about_the_slush_JbGcILLo9O59yt3QGFT68H What Gillard knew about the ‘slush fund’ PUBLISHED: 9 hours 7 MINUTES AGO | UPDATE: 0 hour 0 MINUTES AGO Laura Tingle and Mark Skulley Julia Gillard had reason to be feeling good when federal Parliament rose last week. The polls were continuing to improve for the Prime Minister and the ALP. The release of the Asian Century white paper finally gave a structure to the government’s agenda for the next election. Ms Gillard held the ascendancy in Parliament over Tony Abbott, who was facing his own pressures and demons. But all was not well. Not only was the government in conflict with its cross benches about the budget, but the controversy over Gillard’s role in a 17-year-old union corruption scandal had been revived.] Both free articles. I read the second one carefully, it seemed thin on anything new. • 105 Greensborough Growler Posted Saturday, November 10, 2012 at 10:22 am | Permalink Diogs, I doubt Pell was talking about the NSW police which is what the good Inspector was asserting the other night. • 106 victoria Posted Saturday, November 10, 2012 at 10:23 am | Permalink wwp For me personally, i am sick of the catholic bashing on this site. I am getting to the point of no longer wishing to post here. 
• 107 guytaur Posted Saturday, November 10, 2012 at 10:23 am | Permalink GG Calling for a legal inquiry to get to the bottom of allegations is not the tin foil hat response. Tin foil hat brigade are convinced the allegations are true without evidence. Calling for an inquiry to obtain evidence is in fact very anti tin foil hat behaviour. I am for a Royal Commission precisely because we need transparency in this area. Remember, for justice to be done it must be seen to be done. It is certainly the case that justice has not been seen to be done here. • 108 my say Posted Saturday, November 10, 2012 at 10:25 am | Permalink C@tmomma Posted Friday, November 9, 2012 at 6:00 pm | Permalink my say, You are exactly the sort of devout Catholic that the Church has relied upon for a very long time to cover for them. If you really love your Church and your Faith and the Holy Trinity you will work hard to have these … a note for C@tmomma: a devout catholic is one that still goes to mass every sunday, and all the holidays; they whom i know well are rather elderly. my aunt is a methodist like my father and also attends her church every sunday. i dont go to church these days but still try to live my faith my own way. my thoughts on this is to point out not all people in the church are the same. you are a newcomer here, most of the people that used to be here really dont know me. one little story: i was once bullied by a boy at school that was not catholic and asked to produce my rosary beads. it was at a big sporting event, i think i was about 17. to his shock and amazement i produced them, i really had a laugh at the nonsense. i am married to a wonderful catholic man whose parents migrated here. my mother in law is what you would call devout. they had nowhere to live and the local priest took them in; no australian protestant offered them shelter. she was shunned because she was dutch and a catholic.
the first house they boarded at in sydney, the woman put a piece of string across the kitchen and told them never to cross that mark. this dutch lady had worked with my father in law in the dutch resistance movement, sending british pilots back to the uk and hiding them in the attic. she protected a family of 8 jews, and when she came here no one offered them shelter, so dont preach to me about what jesus said. o and before i go, i wondered last night if hands would come through the computer and burn me at the stake or send me to the tower of london. i hope labor wins the election, but i wont be blogging here any more. and i know my ears will be burning when you all have your little get together, well so be it. • 109 C@tmomma Posted Saturday, November 10, 2012 at 10:25 am | Permalink The attacks from the hard-arsed Right & Faux News, on Rachel Maddow & her sexuality (what that has to do with her analysis, I don’t know), have begun: http://www.advocate.com/politics/media/2012/11/09/fox-reporter-calls-maddow-angry-young-man Also a faint echo of the PM’s speech having percolated through the consciousness of Americans in this statement in the Comments: I apologise if what I say is offending some people, but misogyny is misogyny, and the gender of the person making the slur doesn’t change that. • 110 Diogenes Posted Saturday, November 10, 2012 at 10:25 am | Permalink GG Correct. He wasn’t referring to the latest allegations. • 111 Greensborough Growler Posted Saturday, November 10, 2012 at 10:26 am | Permalink WWP, Yes, must agree. It’s often a case of “off goes the head and on goes the pumpkin” for some of our esteemed posters when religion is discussed. • 112 Oakeshott Country Posted Saturday, November 10, 2012 at 10:27 am | Permalink I went to the school which is at the centre of the scandal in the Maitland-Newcastle diocese (I now call it St Peter Pheils). The diocese is bankrupt, 13 priests have been convicted, and my old headmaster was charged but died before getting to court.
The age of the priests is slightly younger than the parishioners and both are in the 70s. If there is a Catholic church in 30 years it will be a small and insignificant thing. I don’t see that a royal commission will tell us anything we don’t already know. In Maitland there was a ring of paedophile priests whose crimes were hidden by the hierarchy. No doubt paedophiles were attracted to the priesthood in Maitland because of this. There may still be some active paedophiles but the publicity, their age, small numbers and lack of contact with young people limit their ability to be active. • 113 Diogenes Posted Saturday, November 10, 2012 at 10:27 am | Permalink GG I should add that I think there should be a RC but it should include all faiths, not just Catholic. • 114 guytaur Posted Saturday, November 10, 2012 at 10:28 am | Permalink OC Watch last night’s Lateline. The last allegation to surface was 18 months ago. • 115 C@tmomma Posted Saturday, November 10, 2012 at 10:29 am | Permalink Gee, Campbell Newman is having such a positive effect on Queensland’s economy and the families that make it up. Not: http://ht.ly/faGp0 • 116 Greensborough Growler Posted Saturday, November 10, 2012 at 10:31 am | Permalink Diogs, Perhaps you could include the Jimmy Savile and Gary Glitter types as well. I’m not sure if bicycle riders should be included because they are pedallists. What about we investigate everybody. Just to be sure! • 117 WeWantPaul Posted Saturday, November 10, 2012 at 10:31 am | Permalink Tin foil hat brigade are convinced the allegations are true without evidence. Calling for an inquiry to obtain evidence is in fact very anti tin foil hat behaviour. There needs to be a reason why normal law enforcement is insufficient. We don’t have a royal commission into drug importation every day; we have one (if we ever have) when there is something substantial and important that has been shown to be beyond normal justice.
Perverting the course of justice is a very serious crime, and members of the clergy aren’t immune from it. Or is it you just want the normal laws of evidence and presumptions of innocence removed and the bonfires lit so we can have a good old fashioned burning at the stake. There would be some irony in that course of action of course. • 118 confessions Posted Saturday, November 10, 2012 at 10:31 am | Permalink victoria: Thanks for the link to Mega’s last OO column, he makes some very astute observations: Rudd was also a great leader after Lehman Brothers collapsed in 2008, just as Howard had been in the wake of Port Arthur in 1996 and the Bali bombings in 2002. But on most days Rudd was PM, he went looking for new ways to diminish the office through over-exposure. But I feel his view of the Hawke-Keating era is somewhat tinted by rose-coloured glasses. • 119 guytaur Posted Saturday, November 10, 2012 at 10:32 am | Permalink Meredith Bergman coming up soon News 24 (regular sparring segment) • 120 my say Posted Saturday, November 10, 2012 at 10:32 am | Permalink ps, how could i possible cover up stuff i know nothing about or ever encounted. • 121 my say Posted Saturday, November 10, 2012 at 10:32 am | Permalink ps, how could i possible cover up stuff i know nothing about or ever encounted. • 122 guytaur Posted Saturday, November 10, 2012 at 10:34 am | Permalink wewantpaul There is good reason. We have whistleblowers coming out telling of cover ups by the church hiding and destroying information needed for convictions on a criminal offence. These are only allegations, but do need robust due process of law to investigate. In the circumstances that is a Royal Commission. • 123 guytaur Posted Saturday, November 10, 2012 at 10:36 am | Permalink my say The circle of evil is not big. The evidence has been hidden from the vastly good majority. Its just like the police force is made up of vastly good people. Yet corruption can enter and undo the work of the good. 
• 124 Diogenes Posted Saturday, November 10, 2012 at 10:36 am | Permalink GG As a bare minimum, they should enact uniform mandatory reporting laws across Australia in the case of the clergy (just like we have for doctors, nurses, teachers, social workers etc in SA). The Church cannot investigate serious criminal allegations against itself. • 125 C@tmomma Posted Saturday, November 10, 2012 at 10:37 am | Permalink So you say Tony Jones went to St Pauls College at Sydney Uni and not St Johns: http://www.smh.com.au/technology/elite-college-students-proud-of-prorape-facebook-page-20091108-i3js.html I’m sure they weren’t as brazen when he was there but from little seeds, big trees grow: http://www.smh.com.au/technology/elite-college-students-proud-of-prorape-facebook-page-20091108-i3js.html • 126 confessions Posted Saturday, November 10, 2012 at 10:37 am | Permalink The so-called fiscal cliff explained: http://www.abc.net.au/news/2012-11-10/facts-on-the-us-fiscal-cliff/4364668 • 127 C@tmomma Posted Saturday, November 10, 2012 at 10:37 am | Permalink Sorry about the double link. Sometimes I preview, sometimes I don’t. • 128 WeWantPaul Posted Saturday, November 10, 2012 at 10:39 am | Permalink But I feel his view of the Hawke-Keating era is somewhat tinted by rose-coloured glasses. I agree it was a very different time and Cassidy and friends don’t understand times have changed. • 129 Oakeshott Country Posted Saturday, November 10, 2012 at 10:40 am | Permalink Guy There will no doubt be allegations some time this year but how is a RC going to help? • 130 WeWantPaul Posted Saturday, November 10, 2012 at 10:42 am | Permalink There is good reason. We have whistleblowers coming out telling of cover ups by the church hiding and destroying information needed for convictions on a criminal offence. These are only allegations, but do need robust due process of law to investigate. In the circumstances that is a Royal Commission. 
I disagree; they are allegations of perverting the course of justice that, if there is any actual evidence, the police should be prosecuting. • 131 guytaur Posted Saturday, November 10, 2012 at 10:42 am | Permalink my say Remember when the Anglican Church was under attack for the same thing: a Royal Commission was avoided by co-operation of the Church. It then brought down the then GG. The same is now happening to the Catholic Church. It is up to those in authority to cooperate with criminal investigations. Failing to do so will bring a Royal Commission at some stage to force those authorities to do this. This is about the concerns for victims and preventing future victims. Not an attack on the Religion and people of faith. There is a difference. • 132 my say Posted Saturday, November 10, 2012 at 10:44 am | Permalink ah victoria, the educated that tell us all the time about their degrees in this and that, that make one feel as if my life is a failure. my son often says mum, you and dad have three children that did so well in hsc they could of all got in to medicine, so the genes came from somewhere. like victoria i have had enough. yes i dont contribute high falooting stuff like some, my life is very grass roots, which i would suggest is the basis of most true labor people. i also understand now what living in northern ireland must of been like, no wonder my relies came here. • 133 Socrates Posted Saturday, November 10, 2012 at 10:47 am | Permalink Morning all. Two comments before I’m off to do the shopping. First I would support the need for a RC into the catholic church. It was precisely my experiences of the hierarchy in it in my twenties that caused me to become an atheist. There are a lot of sweet people at the grass roots level, but some virtual sociopaths at the top. Elaborate logics about saving souls allow them to ignore their consciences while covering up serious crime. Hiding paedophiles has been official policy since at least the early 80s.
It really is a criminal conspiracy. I can’t comment on the other churches. Collusion with catholic cops and lawyers is part of it. • 134 guytaur Posted Saturday, November 10, 2012 at 10:48 am | Permalink OC The problem seems to be systemic with the Catholic Church. Ours is not the only country this has happened with. There is enough evidence of people coming out making these allegations to warrant a Royal Commission. The Premier has already started an inquiry into the police aspect of this. However the whistleblower Mr Fox was clear the cover up was happening within the Church, denying investigators information needed as evidence for a criminal case. Destroying that evidence. An inquiry into police is not sufficient. • 135 WeWantPaul Posted Saturday, November 10, 2012 at 10:52 am | Permalink The Premier has already started an inquiry into the police aspect of this Shouldn’t you polish your tinfoil hats until the findings of that come down. • 136 Oakeshott Country Posted Saturday, November 10, 2012 at 10:54 am | Permalink One of the lawyers can confirm this, but my understanding of evidence at a royal commission compared to a court is: A. You are compelled to appear. B. You cannot take the 5th – you must answer even if you incriminate yourself. C. Unless you are declared an uncooperative witness, evidence given in the RC can’t be used against you. It is an enquiry rather than a court and would be a nice little way for offenders to come clean and be forgiven. I would much prefer for the police to do their job and to continue to gaol the buggers.
• 137 my say Posted Saturday, November 10, 2012 at 10:54 am | Permalink i would ask people who make these assumption how do you know no like most of us people only assume enjoy your huberis or what ever the high faluting word is as my irish grandmother would of said • 138 OzPol Tragic Posted Saturday, November 10, 2012 at 10:55 am | Permalink Is there some way a Commission of Inquiry can be set up with a panel of learned individuals, a la the Houston Inquiry, with a Judge, a Representative of the Catholic Church who is above reproach, and a Victim’s Representative from Broken Rites, who will do a thorough and thoughtful investigation into Priest/Child Abuse, without fear or favour or fireworks? NO! Definitly NOT! Any effective Commission, esp the Royal Commission it should be, MUST be completely independent. No Commission/ Inquiry which involves Stakeholders (in this case the Catholic Church & the victims of its predatory pedophiles) can possibly be completely independent. In addition, an effective Commission must be totally quarantined from outside influence, esp any influence, inc intimidation, Stakeholders and their supporters might exert. To understand why these provisions (especially the latter) must be met, consider 3 high-profile landmark RCs about which most Bludgers know: UK’s Leveson; Qld’s Fitzgerald; the Commonwealth’s Painters & Dockers now more commonly known as Bottom of the Harbour. In not one of those cases was the named Inquiry/ RC the first, or the second, or even the third into the problems they uncovered. They were, however, the first which were not only completely independent, but quarantined from stakeholder influence. 1. Leveson. In the UK, Murdoch first ran into the sort of trouble which should have shut him down, not recently but in the 80s. And in the early 1990s. And again in the late 1990s over hacking O company’s smart cards/ chips. And again in the early 2000s. 
More serious was when Prince William’s & Prince Harry’s phones were hacked, which spun off a more “thorough” inquiry and a Parliamentary Select Committee inquiry (soon shut down). Only when the scandalously compromised report of that “thorough” inquiry was handed down (2009) and the phone hacking scandal blew out very publicly did NewsIntel’s hacking victims, police victims, MPs who had been intimidated by Murdoch’s agents etc come forward, all fueling demands for a more thorough inquiry that Murdoch’s agents couldn’t manipulate. 2. Qld’s Fitzgerald was by no means the first into police & government corruption. Inquiries actually started in the 1960s. Qld had already held at least one on National Hotel prostitution & police corruption, one into the Whiskey Au Go Go fire, and several others. It would, however, take what (considering the past’s multiple successful legal actions against whistle-blowers and journos) were very courageous steps: Phil Dickie’s damning reports in the CM; Chris Masters’ devastating Moonlight State; and Joh BP’s being well out of the state (?US) & unable to intervene, for Acting Premier Bill Gunn to set up the Inquiry. 3. Costigan (Painters & Dockers) As well as being but the latest inquiry into union activities, or even into tax evasion, or into crime in relation to each, Fraser’s Costigan Commission (Painters’ & Dockers’ Union) would become a dire warning of the truth of the adage Never call a Royal Commission unless you definitely know what the outcome will be. Costigan’s investigations would uncover: crimes of Union members ... "taxation fraud, social security fraud, ghosting, compensation fraud, theft on a grand scale, extortion, the handling of massive importations of drugs, the shipments of armaments, all manner of violence and murder."
Despite the union's members being "careless of their reputation, glorying in its infamy", that very reputation attracted "employment by wealthy people outside their ranks who stoop to use their criminal prowess to achieve their own questionable ends." Those wealthy people outside their ranks would turn the Costigan RC’s spotlight on very senior Coalition members & backers; the Fraser Government’s handling of taxation – legislation as well as implementation, supervision and policing – and on the performance of Fraser’s own innovation, the Australian Federal Police. The moral of the above examples (only a small selection of the best-known ones)? The current Australian scandal – especially as it’s but a “local” example of what has proved to be a global scandal in many countries with RC institutions: the US and Irish scandals have been horrific & sickening (& involved long-term and pervasive intimidation of victims, whistle-blowers, victim support groups & government members); the German is said to have shocked the Pope (though many of the crimes took place during his “watch”) – demands a Royal Commission: a completely independent commission which is well and truly quarantined from any possibility of Stakeholder or other outsider influence, intrusion, esp intimidation (which have presented such major problems with so many earlier inquiries in so many countries). • 139 guytaur Posted Saturday, November 10, 2012 at 10:56 am | Permalink WeWantPaul Good try. Guess what? Calling for a Royal Commission to obtain evidence is not tinfoil hat behaviour. Tin foil hat behaviour is ignoring evidence obtained, e.g. the moon landing. It happened; it was not a hollywood fake. Evidence is solid. Denial of constant claims of abuse and of whistleblowers alleging cover up within any organisation is tin foil hat behaviour. Let’s have a properly conducted legal inquiry into the evidence to find out the truth. It’s the very antithesis of tin foil hat behaviour to call for evidence to base conclusions on.
• 140 Socrates Posted Saturday, November 10, 2012 at 10:57 am | Permalink Second comment applies to the US election result: Donald Trump posted a message on Twitter saying: “Congrats to @KarlRove on blowing $400 million this cycle. Every race @CrossroadsGPS ran ads in, the Republicans lost. What a waste of money.” The election day results showed Mr Rove's strategy of bringing in huge donations from a few wealthy benefactors and spending that money almost completely on television advertising failed. http://www.smh.com.au/world/republican-anger-over-rove-waste-of-money-20121109-293dp.html#ixzz2BltoU7ds Putting aside the satisfaction psephological types feel, and the schadenfreude induced by the republican implosion, I think the US election is a watershed moment for the value of TV advertising. Its influence is waning. Romney and Rove spent a bomb on tv, and got back nothing. Obama’s campaign was still huge and expensive, but more focused and made much better use of social media. Not a focus group in sight either. It was all about gathering data and responding directly via new media. This lesson surely applies in Australia. Time to talk to Jim Messina. http://www.brisbanetimes.com.au/world/victory-for-technology-20121109-293c5.html Off to do the shopping. Have a nice day. It is beautiful outside here in Adelaide. Don’t miss it. • 141 WeWantPaul Posted Saturday, November 10, 2012 at 11:03 am | Permalink Calling for a Royal Commission to obtain evidence is not tinfoil hat behaviour. Tin foil hat behaviour is ignoring evidence obtained, e.g. the moon landing. It happened; it was not a hollywood fake. Evidence is solid. I disagree completely. So essentially you want a RC because normal law enforcement can’t obtain sufficient evidence, so let’s suspend normal rules and demand people incriminate themselves, and if they don’t, well then let’s just jail them because we don’t like them. That is the tinfoil stuff.
• 142 Greensborough Growler Posted Saturday, November 10, 2012 at 11:04 am | Permalink Diogs, We are a Federation and the chances of uniform State Laws on anything is zero. However, Im of the understanding that the Church has to comply with all the same laws re dealing with children as other bodies. perhaps you know different? As WWP mentioned, the most serious aspect of the current assertions is the “perversion of justice”. I agree that if this can be proven, then the perpetrators should be punished. However, this can be handled under current processes and does not require an expensive white elephant called a RC. • 143 guytaur Posted Saturday, November 10, 2012 at 11:07 am | Permalink WeWantPaul Ah so you are a tin foil hat wearer not accepting the fact that mankind landed on the moon. No wonder you do not accept there is enough smoke to justify a RC to see if there really is a fire. • 144 guytaur Posted Saturday, November 10, 2012 at 11:09 am | Permalink GG No not true. For decades the usual processes have not been sufficient to uncover if the claims of “perversion of justice” are true. A RC will clear it up once and for all one way or the other. • 145 Greensborough Growler Posted Saturday, November 10, 2012 at 11:09 am | Permalink guytaur, A typically immature comment. Absolutely, pathetic. • 146 guytaur Posted Saturday, November 10, 2012 at 11:11 am | Permalink GG A typical immature response from you. Insult the messenger rather than look at the point. This because you know you are defending the indefensible. • 147 Diogenes Posted Saturday, November 10, 2012 at 11:12 am | Permalink GG However, Im of the understanding that the Church has to comply with all the same laws re dealing with children as other bodies. perhaps you know different? The problem is that the Church doesn’t have to mandatorily report child abuse claims. Bodies like hospitals, schools etc do and I think the legislation should include the Church. 
I agree there is a difficulty with getting uniform state laws but it has happened with other areas. • 148 guytaur Posted Saturday, November 10, 2012 at 11:16 am | Permalink “@MrDenmore: If there were evidence of systematic cover ups of child abuse in the Islamic church in Australia, would politicians prevaricate?” • 149 WeWantPaul Posted Saturday, November 10, 2012 at 11:17 am | Permalink Ah so you are a tin foil hat wearer not accepting the fact that mankind landed on the moon. No wonder you do not accept there is enough smoke to justify a RC to see if there really is a fire. You do realize making up really stupid untrue claims like I don’t accept man landed on the moon detracts from everything you say. the moon comparison was always a poor / weird metaphor but I left it alone rather than be unnecessarily critical, but I see you were using it as a primary school type gotcha …. *rolls eyes* • 150 guytaur Posted Saturday, November 10, 2012 at 11:18 am | Permalink “@denniallen: Abbott is very quiet over Catholic church child sex abuse/yet screams fro roof tops over Thomson/Slipper! Whats wrong with this picture?”
## Statistical Science

### The Use of Unlabeled Data in Predictive Modeling

#### Abstract

The incorporation of unlabeled data in regression and classification analysis is an increasing focus of the applied statistics and machine learning literatures, with a number of recent examples demonstrating the potential for unlabeled data to contribute to improved predictive accuracy. The statistical basis for this semisupervised analysis does not appear to have been well delineated; as a result, the underlying theory and rationale may be underappreciated, especially by nonstatisticians. There is also room for statisticians to become more fully engaged in the vigorous research in this important area of intersection of the statistical and computer sciences. Much of the theoretical work in the literature has focused, for example, on geometric and structural properties of the unlabeled data in the context of particular algorithms, rather than probabilistic and statistical questions. This paper overviews the fundamental statistical foundations for predictive modeling and the general questions associated with unlabeled data, highlighting the relevance of venerable concepts of sampling design and prior specification. This theory, illustrated with a series of central illustrative examples and two substantial real data analyses, shows precisely when, why and how unlabeled data matter.

#### Article information

- Source: Statist. Sci., Volume 22, Number 2 (2007), 189–205.
- First available in Project Euclid: 27 September 2007
- Permanent link: https://projecteuclid.org/euclid.ss/1190905518
- Digital Object Identifier: doi:10.1214/088342307000000032
- Mathematical Reviews number (MathSciNet): MR2408958
- Zentralblatt MATH identifier: 1246.62157

#### Citation

Liang, Feng; Mukherjee, Sayan; West, Mike. The Use of Unlabeled Data in Predictive Modeling. Statist. Sci. 22 (2007), no. 2, 189–205. doi:10.1214/088342307000000032.
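The abstract's central claim, that unlabeled data can sharpen a predictive model, is easiest to see in the classic generative setting. The sketch below is an illustrative toy of my own, not the paper's models: it fits a one-dimensional mixture of two unit-variance Gaussians by EM, initialising the class means from a handful of labeled points and then letting a larger pool of unlabeled points refine them. The function name and the simplifying assumptions (known variance, equal mixing weights) are mine.

```python
import math
import random

def em_two_gaussians(labeled, unlabeled, n_iter=50):
    """Semi-supervised EM for a 1-D mixture of two unit-variance Gaussians.

    labeled   -- list of (x, y) pairs with y in {0, 1}
    unlabeled -- list of x values with unknown class
    Returns the estimated class means (mu0, mu1). Mixing weights are
    fixed at 0.5 to keep the sketch short.
    """
    # Supervised initialisation: class means from the labeled points alone.
    mu = [
        sum(x for x, y in labeled if y == k) / sum(1 for _, y in labeled if y == k)
        for k in (0, 1)
    ]
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each unlabeled point.
        resp = []
        for x in unlabeled:
            p0 = math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = math.exp(-0.5 * (x - mu[1]) ** 2)
            resp.append(p1 / (p0 + p1))
        # M-step: weighted means over labeled (hard) and unlabeled (soft) points.
        num, den = [0.0, 0.0], [0.0, 0.0]
        for x, y in labeled:
            num[y] += x
            den[y] += 1.0
        for x, r in zip(unlabeled, resp):
            num[0] += (1.0 - r) * x
            num[1] += r * x
            den[0] += 1.0 - r
            den[1] += r
        mu = [num[k] / den[k] for k in (0, 1)]
    return mu[0], mu[1]

# Demo: 3 labeled points per class, 200 unlabeled points from the same mixture.
random.seed(42)
labeled = [(random.gauss(-2, 1), 0) for _ in range(3)] + \
          [(random.gauss(2, 1), 1) for _ in range(3)]
unlabeled = [random.gauss(-2, 1) for _ in range(100)] + \
            [random.gauss(2, 1) for _ in range(100)]
mu0, mu1 = em_two_gaussians(labeled, unlabeled)
# With this much unlabeled data the refined means sit close to the true
# values -2 and +2, tighter than the 3-point labeled estimates alone.
```

This is the mechanism behind the classical results cited in such work (e.g. the labeled/unlabeled efficiency analyses): under a generative model, unlabeled points carry information about the class-conditional densities, so they legitimately tighten the parameter estimates that the classifier is built from.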
http://www.marinamele.com/2014/03/document-your-django-projects.html
# Document your Django projects: reStructuredText and Sphinx

Check this post to learn how to document a Django project! You’ll get an introduction to the reStructuredText markup language, learn how to install Sphinx, a library that takes the docstrings of your code and compiles them into nice html pages, and configure it for a Django project. Documenting your project is very important, you know that — that’s why you’re reading this, aren’t you? 😉 For me, the most important reasons why you should document your projects are: • It will help you understand what you did — especially a few months after you wrote the code. • It will help others understand what you did — this way, they can help you improve your code or add some extensions and functionalities. Here, you’ll learn how to install and use Sphinx, a library that takes the docstrings of your code and compiles them into nice html pages. It’s used to create the official documentation of Python, Django, and most other popular Python packages. But how? How does it compile them into nice pages with headers, links and other common formats? Well, you have to write your docstrings using reStructuredText (frequently abbreviated as reST), a markup format. But don’t worry, that’s easy to learn! 😉 ## Install Sphinx Activate your virtual environment and install Sphinx, which is in the Python Package Index and can be installed via pip or easy_install:

    $ pip install sphinx

This command will also install the following packages: Pygments, docutils, Jinja2 and markupsafe. Don’t forget to add them to your requirements.txt file. If you have different requirements files for production, development and testing, I recommend you include Sphinx in the last two (not in production). ## Create the docs folder Now, we will create the docs folder using Sphinx:

    $ sphinx-quickstart

This will ask you several questions. I will show my answers using different colors: blue and red.
Make sure you answer the same as me when they are red 😉

    Enter the root path for documentation.
    > Root path for the documentation [.]: ./docs

    You have two options for placing the build directory for Sphinx output.
    Either, you use a directory "_build" within the root path, or you separate
    "source" and "build" directories within the root path.
    > Separate source and build directories (y/n) [n]: n

    Inside the root directory, two more directories will be created;
    "_templates" for custom HTML templates and "_static" for custom stylesheets
    and other static files. You can enter another prefix (such as ".") to
    replace the underscore.
    > Name prefix for templates and static dir [_]: _

    The project name will occur in several places in the built documentation.
    > Project name: myproject
    > Author name(s): Marina Mele

    Sphinx has the notion of a "version" and a "release" for the software.
    Each version can have multiple releases. For example, for Python the
    version is something like 2.5 or 3.0, while the release is something like
    2.5.1 or 3.0a1. If you don't need this dual structure, just set both to
    the same value.
    > Project version: 1.0
    > Project release [1.0]: 1.0

    The file name suffix for source files. Commonly, this is either ".txt"
    or ".rst". Only files with this suffix are considered documents.
    > Source file suffix [.rst]: .rst

    One document is special in that it is considered the top node of the
    "contents tree", that is, it is the root of the hierarchical structure
    of the documents. Normally, this is "index", but if your "index" document
    is a custom template, you can also set this to another filename.
    > Name of your master document (without suffix) [index]: index

    Sphinx can also add configuration for epub output:
    > Do you want to use the epub builder (y/n) [n]: n

    Please indicate if you want to use one of the following Sphinx extensions:
    > autodoc: automatically insert docstrings from modules (y/n) [n]: y
    > doctest: automatically test code snippets in doctest blocks (y/n) [n]: n
    > intersphinx: link between Sphinx documentation of different projects (y/n) [n]: n
    > todo: write "todo" entries that can be shown or hidden on build (y/n) [n]: n
    > coverage: checks for documentation coverage (y/n) [n]: y
    > pngmath: include math, rendered as PNG images (y/n) [n]: n
    > mathjax: include math, rendered in the browser by MathJax (y/n) [n]: n
    > ifconfig: conditional inclusion of content based on config values (y/n) [n]: n
    > viewcode: include links to the source code of documented Python objects (y/n) [n]: n

    A Makefile and a Windows command file can be generated for you so that you
    only have to run e.g. 'make html' instead of invoking sphinx-build directly.
    > Create Makefile? (y/n) [y]: y
    > Create Windows command file? (y/n) [y]: n

This will create a docs folder inside your project folder, containing the files conf.py, index.rst and Makefile, and the folders _build, _static and _templates (which will be empty). The autodoc extension is needed to import the docstrings of your project files into the documentation.

## Create the html files

Inside the docs folder, type:

    $ make html

This will create the html pages, which you’ll find in the _build/html folder. Open the index.html file and take a look at what has been created. You’ll see that the document is essentially empty, as you must tell Sphinx where your project is. So… let’s do that! Open the conf.py file (inside the docs folder), and after the lines:

    import sys
    import os

    # If extensions (or modules to document with autodoc) are in another directory,
    # add these directories to sys.path here. If the directory is relative to the
    # documentation root, use os.path.abspath to make it absolute, like shown here.
    #sys.path.insert(0, os.path.abspath('.'))

add the following:

    sys.path.insert(0, os.path.abspath('..'))
    from django.conf import settings
    settings.configure()

This will tell Sphinx where it should look for your project files. Now, inside the docs folder create a modules folder with a models.rst file inside:

    $ mkdir modules
    $ touch modules/models.rst

Edit this file and write the following:

    Models
    ======

    .. automodule:: myproject.myapp.models
        :members:

Where myproject.myapp.models is the path from the folder that contains the docs folder to your app’s models.py. This tells autodoc that it should read the models.py file looking for docstrings. However, we still need to include the models.rst file. To do this, open the file docs/index.rst and write:

    Contents:

    .. toctree::
        :maxdepth: 2

        modules/models

Now, let’s try again with

    $ make html

and look at your _build/html/index.html file. You should see a link to Models, containing all the models you have defined there, together with their methods! 😉 Now, you can add as many files as you want to document your whole project! Moreover, you should learn a little bit more about reStructuredText to make your docs nicer 😉 Don’t forget to share and +1 if useful, thanks! 😉
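To give you a quick, hypothetical taste of what such docstrings look like, here is a small sketch (the function and its names are made up for illustration, not taken from any real project) using the reST field lists that autodoc renders into a nice parameter table:

```python
def pool_volume(length_m, width_m, depth_m):
    """Return the volume of a rectangular pool.

    Sphinx's autodoc extension turns these reST field lists
    into a formatted description of parameters and return value.

    :param length_m: pool length in metres
    :param width_m: pool width in metres
    :param depth_m: average depth in metres
    :returns: volume in cubic metres
    """
    return length_m * width_m * depth_m
```

Any module documented this way only needs an `.. automodule::` entry, like the one shown above for models.py, to appear in the generated html.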
http://mathhelpforum.com/geometry/29916-math.html
1. ## math What is the perimeter of a triangle where the sides measure 6, 8, and 10 inches? What would be the radius of a circle with a diameter of 8 inches? 2. Originally Posted by shamekagilmore What is the perimeter of a triangle where the sides measure 6, 8, and 10 inches? What would be the radius of a circle with a diameter of 8 inches? The perimeter is the sum of all of a shape's sides. The radius of a circle is half the diameter. 3. ## solve this for me. Originally Posted by shamekagilmore What is the perimeter of a triangle where the sides measure 6, 8, and 10 inches? What would be the radius of a circle with a diameter of 8 inches? solution __________ 1) p = 6 + 8 + 10 = 24 inches. and 2) r = d/2 = 8/2 = 4 inches. clement okhale nigeria. my question: 1) given the matrices

    s = [3 1]      t = [ 2 3]
        [2 4]          [-1 1]

find: a) 2s - 3t b) s.t c) s + 4t d) 2s + 3t send it to cokhale@yahoo.com
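Assuming the garbled matrices in the last question were meant as the 2×2 matrices s = [[3, 1], [2, 4]] and t = [[2, 3], [-1, 1]] (a reconstruction, since the original post's layout was lost), parts (a)–(d) can be sketched with plain nested lists:

```python
# Assumed 2x2 matrices, reconstructed from the garbled forum post.
s = [[3, 1], [2, 4]]
t = [[2, 3], [-1, 1]]

def add(a, b):
    """Entrywise sum of two 2x2 matrices."""
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def scale(k, a):
    """Multiply every entry of a 2x2 matrix by the scalar k."""
    return [[k * a[i][j] for j in range(2)] for i in range(2)]

def matmul(a, b):
    """Matrix product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(add(scale(2, s), scale(-3, t)))  # a) 2s - 3t
print(matmul(s, t))                    # b) s.t
print(add(s, scale(4, t)))             # c) s + 4t
print(add(scale(2, s), scale(3, t)))   # d) 2s + 3t
```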
http://opensource.apple.com/source/bash/bash-80/bash/doc/article.ms
# article.ms

.de SE \" start example
.sp .5
.RS
.ft CR
.nf
..
.de EE \" end example
.fi
.sp .5
.RE
.ft R
..
.TL Bash \- The GNU shell* .AU Chet Ramey Case Western Reserve University chet@po.cwru.edu .FS .FE .NH 1 Introduction .PP .B Bash is the shell, or command language interpreter, that will appear in the GNU operating system. The name is an acronym for the \*QBourne-Again SHell\*U, a pun on Steve Bourne, the author of the direct ancestor of the current .UX shell \fI/bin/sh\fP, which appeared in the Seventh Edition Bell Labs Research version of \s-1UNIX\s+1. .PP Bash is an \fBsh\fP\-compatible shell that incorporates useful features from the Korn shell (\fBksh\fP) and the C shell (\fBcsh\fP), described later in this article. It is ultimately intended to be a conformant implementation of the IEEE POSIX Shell and Utilities specification (IEEE Working Group 1003.2). It offers functional improvements over sh for both interactive and programming use. .PP While the GNU operating system will most likely include a version of the Berkeley shell csh, Bash will be the default shell. Like other GNU software, Bash is quite portable. It currently runs on nearly every version of .UX and a few other operating systems \- an independently-supported port exists for OS/2, and there are rumors of ports to DOS and Windows NT. Ports to \s-1UNIX\s+1-like systems such as QNX and Minix are part of the distribution. .PP The original author of Bash was Brian Fox, an employee of the Free Software Foundation. The current developer and maintainer is Chet Ramey, a volunteer who works at Case Western Reserve University. .NH 1 What's POSIX, anyway? .PP .I POSIX is a name originally coined by Richard Stallman for a family of open system standards based on \s-1UNIX\s+1.
There are a number of aspects of \s-1UNIX\s+1 under consideration for standardization, from the basic system services at the system call and C library level to applications and tools to system administration and management. Each area of standardization is assigned to a working group in the 1003 series. .PP The POSIX Shell and Utilities standard has been developed by IEEE Working Group 1003.2 (POSIX.2).\(dd .FS \(ddIEEE, \fIIEEE Standard for Information Technology -- Portable Operating System Interface (POSIX) Part 2: Shell and Utilities\fP, 1992. .FE It concentrates on the command interpreter interface and utility programs commonly executed from the command line or by other programs. An initial version of the standard has been approved and published by the IEEE, and work is currently underway to update it. There are four primary areas of work in the 1003.2 standard: .IP \(bu Aspects of the shell's syntax and command language. A number of special builtins such as .B cd and .B exec are being specified as part of the shell, since their functionality usually cannot be implemented by a separate executable; .IP \(bu A set of utilities to be called by shell scripts and applications. Examples are programs like .I sed, .I tr, and .I awk. Utilities commonly implemented as shell builtins are described in this section, such as .B test and .B kill . An expansion of this section's scope, termed the User Portability Extension, or UPE, has standardized interactive programs such as .I vi and .I mailx; .IP \(bu A group of functional interfaces to services provided by the shell, such as the traditional \f(CRsystem()\fP C library function. There are functions to perform shell word expansions, perform filename expansion (\fIglobbing\fP), obtain values of POSIX.2 system configuration variables, retrieve values of environment variables (\f(CRgetenv()\fP\^), and other services; .IP \(bu A suite of \*Qdevelopment\*U utilities such as .I c89 (the POSIX.2 version of \fIcc\fP), and .I yacc. 
.PP Bash is concerned with the aspects of the shell's behavior defined by POSIX.2. The shell command language has of course been standardized, including the basic flow control and program execution constructs, I/O redirection and pipelining, argument handling, variable expansion, and quoting. The .I special builtins, which must be implemented as part of the shell to provide the desired functionality, are specified as being part of the shell; examples of these are .B eval and .B export . Other utilities appear in the sections of POSIX.2 not devoted to the shell which are commonly (and in some cases must be) implemented as builtin commands, such as .B kill and .B test . POSIX.2 also specifies aspects of the shell's interactive behavior as part of the UPE, including job control and command line editing. Interestingly enough, only \fIvi\fP-style line editing commands have been standardized; \fIemacs\fP editing commands were left out due to objections. .PP While POSIX.2 includes much of what the shell has traditionally provided, some important things have been omitted as being \*Qbeyond its scope.\*U There is, for instance, no mention of a difference between a login shell and any other interactive shell (since POSIX.2 does not specify a login program). No fixed startup files are defined, either \- the standard does not mention .I .profile . .NH 1 Basic Bash features .PP Since the Bourne shell provides Bash with most of its philosophical underpinnings, Bash inherits most of its features and functionality from sh. Bash implements all of the traditional sh flow control constructs (\fIfor\fP, \fIif\fP, \fIwhile\fP, etc.). All of the Bourne shell builtins, including those not specified in the POSIX.2 standard, appear in Bash. Shell \fIfunctions\fP, introduced in the SVR2 version of the Bourne shell, are similar to shell scripts, but are defined using a special syntax and are executed in the same process as the calling shell.
Bash has shell functions which behave in a fashion upward-compatible with sh functions. There are certain shell variables that Bash interprets in the same way as sh, such as .B PS1 , .B IFS , and .B PATH . Bash implements essentially the same grammar, parameter and variable expansion semantics, redirection, and quoting as the Bourne shell. Where differences appear between the POSIX.2 standard and traditional sh behavior, Bash follows POSIX. .PP The Korn Shell (\fBksh\fP) is a descendent of the Bourne shell written at AT&T Bell Laboratories by David Korn\(dg. It provides a number of useful features that POSIX and Bash have adopted. Many of the interactive facilities in POSIX.2 have their roots in the ksh: for example, the POSIX and ksh job control facilities are nearly identical. Bash includes features from the Korn Shell for both interactive use and shell programming. For programming, Bash provides variables such as .B RANDOM and .B REPLY , the .B typeset builtin, the ability to remove substrings from variables based on patterns, and shell arithmetic. .FS \(dgMorris Bolsky and David Korn, \fIThe KornShell Command and Programming Language\fP, Prentice Hall, 1989. .FE .B RANDOM expands to a random number each time it is referenced; assigning a value to .B RANDOM seeds the random number generator. .B REPLY is the default variable used by the .B read builtin when no variable names are supplied as arguments. The .B typeset builtin is used to define variables and give them attributes. Bash arithmetic allows the evaluation of an expression and the substitution of the result. Shell variables may be used as operands, and the result of an expression may be assigned to a variable. Nearly all of the operators from the C language are available, with the same precedence rules:
.SE
$ echo $((3 + 5 * 32))
163
.EE
.LP For interactive use, Bash implements ksh-style aliases and builtins such as .B fc (discussed below) and .B jobs . Bash aliases allow a string to be substituted for a command name.
They can be used to create a mnemonic for a \s-1UNIX\s+1 command name (\f(CRalias del=rm\fP), to expand a single word to a complex command (\f(CRalias news='xterm -g 80x45 -title trn -e trn -e -S1 -N &'\fP), or to ensure that a command is invoked with a basic set of options (\f(CRalias ls="/bin/ls -F"\fP). .PP The C shell (\fBcsh\fP)\(dg, originally written by Bill Joy while at Berkeley, is widely used and quite popular for its interactive facilities. Bash includes a csh-compatible history expansion mechanism, access to a stack of directories via the .B pushd , .B popd , and .B dirs builtins, and tilde expansion, to generate users' home directories. Tilde expansion has also been adopted by both the Korn Shell and POSIX.2. .FS \(dgBill Joy, An Introduction to the C Shell, \fIUNIX User's Supplementary Documents\fP, University of California at Berkeley, 1986. .FE .PP There were certain areas in which POSIX.2 felt standardization was necessary, but no existing implementation provided the proper behavior. The working group invented and standardized functionality in these areas, which Bash implements. The .B command builtin was invented so that shell functions could be written to replace builtins; it makes the capabilities of the builtin available to the function. The reserved word \*Q!\*U was added to negate the return value of a command or pipeline; it was nearly impossible to express \*Qif not x\*U cleanly using the sh language. There exist multiple incompatible implementations of the .B test builtin, which tests files for type and other attributes and performs arithmetic and string comparisons. POSIX considered none of these correct, so the standard behavior was specified in terms of the number of arguments to the command. POSIX.2 dictates exactly what will happen when four or fewer arguments are given to .B test , and leaves the behavior undefined when more arguments are supplied. Bash uses the POSIX.2 algorithm, which was conceived by David Korn.
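.PP For instance (a small illustration added here, not part of the original article): under the argument-count rules, a single argument makes .B test true exactly when the argument is non-null, and a leading \f(CR!\fP in the two-argument case negates the one-argument test:
.SE
$ test hello; echo $?
0
$ test ""; echo $?
1
$ test ! hello; echo $?
1
.EE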
.NH 2 Features not in the Bourne Shell .PP There are a number of minor differences between Bash and the version of sh present on most other versions of \s-1UNIX\s+1. The majority of these are due to the POSIX standard, but some are the result of Bash adopting features from other shells. For instance, Bash includes the new \*Q!\*U reserved word, the .B command builtin, the ability of the .B read builtin to correctly return a line ending with a backslash, symbolic arguments to the .B umask builtin, variable substring removal, a way to get the length of a variable, and the new algorithm for the .B test builtin from the POSIX.2 standard, none of which appear in sh. .PP Bash also implements the \*Q$(...)\*U command substitution syntax, which supersedes the sh backquote (\f(CR\(ga...\(ga\fP) construct. The \*Q$(...)\*U construct expands to the output of the command contained within the parentheses, with trailing newlines removed. The sh syntax is accepted for backwards compatibility, but the \*Q$(...)\*U form is preferred because its quoting rules are much simpler and it is easier to nest. .PP The Bourne shell does not provide such features as brace expansion, the ability to define a variable and a function with the same name, local variables in shell functions, the ability to enable and disable individual builtins or write a function to replace a builtin, or a means to export a shell function to a child process. .PP Bash has closed a long-standing shell security hole by not using the .B $IFS variable to split each word read by the shell, but splitting only the results of expansion (ksh and the 4.4 BSD sh have fixed this as well). Useful behavior such as a means to abort execution of a script read with the \*Q.\*U command using the \fBreturn\fP builtin or automatically exporting variables in the shell's environment to children is also not present in the Bourne shell. Bash provides a much more powerful environment for both interactive use and programming.
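.PP As a small added illustration (not from the original article), compare the backslash escaping that nesting the old backquote form requires with the new form:
.SE
$ echo `echo outer \e`echo inner\e``
outer inner
$ echo $(echo outer $(echo inner))
outer inner
.EE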
.NH 1 Bash-specific Features .PP This section details a few of the features which make Bash unique. Most of them provide improved interactive use, but a few programming improvements are present as well. Full descriptions of these features can be found in the Bash documentation. .NH 2 Startup Files .PP Bash executes startup files differently than other shells. The Bash behavior is a compromise between the csh principle of startup files with fixed names executed for each shell and the sh \*Qminimalist\*U behavior. An interactive instance of Bash started as a login shell reads .I ~/.bash_profile (the file .bash_profile in the user's home directory), if it exists. An interactive shell that is not a login shell reads .I ~/.bashrc . A non-interactive shell (one begun to execute a shell script, for example) reads no fixed startup file, but uses the value of the variable .B $ENV , if set, as the name of a startup file. The ksh practice of reading .B $ENV for every shell, with the accompanying difficulty of defining the proper variables and functions for interactive and non-interactive shells or having the file read only for interactive shells, was considered too complex. Ease of use won out here. Interestingly, the next release of ksh will change to reading .B $ENV only for interactive shells. .NH 2 New Builtin Commands .PP There are a few builtins which are new or have been extended in Bash. The .B enable builtin allows builtin commands to be turned on and off arbitrarily. To use the version of .I echo found in a user's search path rather than the Bash builtin, \f(CRenable -n echo\fP suffices. The .B help builtin provides quick synopses of the shell facilities without requiring access to a manual page. .B Builtin is similar to .B command in that it bypasses shell functions and directly executes builtin commands. Access to a csh-style stack of directories is provided via the .B pushd , .B popd , and .B dirs builtins. .B Pushd and .B popd insert and remove directories from the stack, respectively, and .B dirs lists the stack contents.
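.PP A short sketch of toggling a builtin, added here for illustration (not part of the original article):
.SE
$ enable -n echo     # now "echo" resolves via $PATH
$ enable echo        # restore the builtin version
.EE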
On systems that allow fine-grained control of resources, the .B ulimit builtin can be used to tune these settings. .B Ulimit allows a user to control, among other things, whether core dumps are to be generated, how much memory the shell or a child process is allowed to allocate, and how large a file created by a child process can grow. The .B suspend command will stop the shell process when job control is active; most other shells do not allow themselves to be stopped like that. .B Type, the Bash answer to .B which and .B whence, shows what will happen when a word is typed as a command:
.SE
$ type export
export is a shell builtin
$ type -t export
builtin
$ type bash
bash is /bin/bash
$ type cd
cd is a function
cd ()
{
    builtin cd ${1+"$@"} && xtitle $HOST: $PWD
}
.EE
.LP Various modes tell what a command word is (reserved word, alias, function, builtin, or file) or which version of a command will be executed based on a user's search path. Some of this functionality has been adopted by POSIX.2 and folded into the .B command utility. .NH 2 Editing and Completion .PP One area in which Bash shines is command line editing. Bash uses the .I readline library to read and edit lines when interactive. Readline is a powerful and flexible input facility that a user can configure to individual tastes. It allows lines to be edited using either emacs or vi commands, where those commands are appropriate. The full capability of emacs is not present \- there is no way to execute a named command with M-x, for instance \- but the existing commands are more than adequate. The vi mode is compliant with the command line editing standardized by POSIX.2. .PP Readline is fully customizable. In addition to the basic commands and key bindings, the library allows users to define additional key bindings using a startup file.
The .I inputrc file, which defaults to the file .I ~/.inputrc , is read each time readline initializes, permitting users to maintain a consistent interface across a set of programs. Readline includes an extensible interface, so each program using the library can add its own bindable commands and program-specific key bindings. Bash uses this facility to add bindings that perform history expansion or shell word expansions on the current input line. .PP Readline interprets a number of variables which further tune its behavior. Variables exist to control whether or not eight-bit characters are directly read as input or converted to meta-prefixed key sequences (a meta-prefixed key sequence consists of the character with the eighth bit zeroed, preceded by the .I meta-prefix character, usually escape, which selects an alternate keymap), to decide whether to output characters with the eighth bit set directly or as a meta-prefixed key sequence, whether or not to wrap to a new screen line when a line being edited is longer than the screen width, the keymap to which subsequent key bindings should apply, or even what happens when readline wants to ring the terminal's bell. All of these variables can be set in the inputrc file. .PP The startup file understands a set of C preprocessor-like conditional constructs which allow variables or key bindings to be assigned based on the application using readline, the terminal currently being used, or the editing mode. 
Users can add program-specific bindings to make their lives easier: I have bindings that let me edit the value of .B $PATH and double-quote the current or previous word:
.SE
# Macros that are convenient for shell interaction
$if Bash
# edit the path
"\eC-xp": "PATH=${PATH}\ee\eC-e\eC-a\eef\eC-f"
# prepare to type a quoted word -- insert open and close double
# quotes and move to just after the open quote
"\eC-x\e"": "\e"\e"\eC-b"
# Quote the current or previous word
"\eC-xq": "\eeb\e"\eef\e""
$endif
.EE
.LP There is a readline command to re-read the file, so users can edit the file, change some bindings, and begin to use them almost immediately. .PP Bash implements the .B bind builtin for more dynamic control of readline than the startup file permits. .B Bind is used in several ways. In .I list mode, it can display the current key bindings, list all the readline editing directives available for binding, list which keys invoke a given directive, or output the current set of key bindings in a format that can be incorporated directly into an inputrc file. In .I batch mode, it reads a series of key bindings directly from a file and passes them to readline. In its most common usage, .B bind takes a single string and passes it directly to readline, which interprets the line as if it had just been read from the inputrc file. Both key bindings and variable assignments may appear in the string given to .B bind . .PP The readline library also provides an interface for \fIword completion\fP. When the .I completion character (usually TAB) is typed, readline looks at the word currently being entered and computes the set of filenames of which the current word is a valid prefix. If there is only one possible completion, the rest of the characters are inserted directly, otherwise the common prefix of the set of filenames is added to the current word.
A second TAB character entered immediately after a non-unique completion causes readline to list the possible completions; there is an option to have the list displayed immediately. Readline provides hooks so that applications can provide specific types of completion before the default filename completion is attempted. This is quite flexible, though it is not completely user-programmable. Bash, for example, can complete filenames, command names (including aliases, builtins, shell reserved words, shell functions, and executables found in the file system), shell variables, usernames, and hostnames. It uses a set of heuristics that, while not perfect, is generally quite good at determining what type of completion to attempt. .NH 2 History .PP Access to the list of commands previously entered (the \fIcommand history\fP) is provided jointly by Bash and the readline library. Bash provides variables (\fB$HISTFILE\fP, \fB$HISTSIZE\fP, and \fB$HISTCONTROL\fP) and the .B history and .B fc builtins to manipulate the history list. The value of .B $HISTFILE specifies the file where Bash writes the command history on exit and reads it on startup. .B $HISTSIZE is used to limit the number of commands saved in the history. .B $HISTCONTROL provides a crude form of control over which commands are saved on the history list: a value of .I ignorespace means to not save commands which begin with a space; a value of .I ignoredups means to not save commands identical to the last command saved. \fB$HISTCONTROL\fP was named \fB$history_control\fP in earlier versions of Bash; the old name is still accepted for backwards compatibility. The .B history command can read or write files containing the history list and display the current list contents. The .B fc builtin, adopted from POSIX.2 and the Korn Shell, allows display and re-execution, with optional editing, of commands from the history list.
The readline library offers a set of commands to search the history
list for a portion of the current input line or a string typed by
the user.
Finally, the
.I history
library, generally incorporated directly into the readline library,
implements a facility for history recall, expansion, and
re-execution of previous commands very similar to csh (\*Qbang
history\*U, so called because the exclamation point introduces a
history substitution):
.SE
$ echo a b c d e
a b c d e
$ !! f g h i
echo a b c d e f g h i
a b c d e f g h i
$ !-2
echo a b c d e
a b c d e
$ echo !-2:1-4
echo a b c d
a b c d
.EE
.LP
The command history is only saved when the shell is interactive, so
it is not available for use by shell scripts.
.NH 2
New Shell Variables
.PP
There are a number of convenience variables that Bash interprets to
make life easier.
These include
.B FIGNORE ,
which is a set of filename suffixes identifying files to exclude
when completing filenames;
.B HOSTTYPE ,
which is automatically set to a string describing the type of
hardware on which Bash is currently executing;
.B command_oriented_history ,
which directs Bash to save all lines of a multiple-line command such
as a \fIwhile\fP or \fIfor\fP loop in a single history entry,
allowing easy re-editing; and
.B IGNOREEOF ,
whose value indicates the number of consecutive EOF characters that
an interactive shell will read before exiting \- an easy way to keep
yourself from being logged out accidentally.
The
.B auto_resume
variable alters the way the shell treats simple command names: if
job control is active, and this variable is set, single-word simple
commands without redirections cause the shell to first look for and
restart a suspended job with that name before starting a new
process.
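.PP
A hypothetical startup-file fragment putting these convenience
variables to use might look like this; the values are illustrative
only:

```shell
# Hypothetical ~/.bashrc fragment using the convenience variables
# described above (auto_resume and IGNOREEOF only matter in an
# interactive shell with job control active).
FIGNORE='.o:~'              # suffixes to skip during filename completion
IGNOREEOF=2                 # consecutive EOFs to read before exiting
command_oriented_history=1  # save multi-line commands as one entry
auto_resume=exact           # a bare job name restarts a stopped job
```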
.NH 2
Brace Expansion
.PP
Since sh offers no convenient way to generate arbitrary strings that
share a common prefix or suffix (filename expansion requires that
the filenames exist), Bash implements \fIbrace expansion\fP, a
capability picked up from csh.
Brace expansion is similar to filename expansion, but the strings
generated need not correspond to existing files.
A brace expression consists of an optional
.I preamble ,
followed by a pair of braces enclosing a series of comma-separated
strings, and an optional
.I postamble .
The preamble is prepended to each string within the braces, and the
postamble is then appended to each resulting string:
.SE
$ echo a{d,c,b}e
ade ace abe
.EE
.LP
As this example demonstrates, the results of brace expansion are not
sorted, as they are by filename expansion.
.NH 2
Process Substitution
.PP
On systems that can support it, Bash provides a facility known as
\fIprocess substitution\fP.
Process substitution is similar to command substitution in that its
specification includes a command to execute, but the shell does not
collect the command's output and insert it into the command line.
Rather, Bash opens a pipe to the command, which is run in the
background.
The shell uses named pipes (FIFOs) or the
.I /dev/fd
method of naming open files to expand the process substitution to a
filename which connects to the pipe when opened.
This filename becomes the result of the expansion.
Process substitution can be used to compare the outputs of two
different versions of an application as part of a regression test:
.SE
$ cmp <(old_prog) <(new_prog)
.EE
.NH 2
Prompt Customization
.PP
One of the more popular interactive features that Bash provides is
the ability to customize the prompt.
Both
.B $PS1
and
.B $PS2 ,
the primary and secondary prompts, are expanded before being
displayed.
Parameter and variable expansion is performed when the prompt string
is expanded, so any shell variable can be put into the prompt (e.g.,
.B $SHLVL ,
which indicates how deeply the current shell is nested).
Bash specially interprets characters in the prompt string preceded
by a backslash.
Some of these backslash escapes are replaced with the current time,
the date, the current working directory, the username, and the
command number or history number of the command being entered.
There is even a backslash escape to cause the shell to change its
prompt when running as root after an \fIsu\fP.
Before printing each primary prompt, Bash expands the variable
.B $PROMPT_COMMAND
and, if it has a value, executes the expanded value as a command,
allowing additional prompt customization.
For example, this assignment causes the current user, the current
host, the time, the last component of the current working directory,
the level of shell nesting, and the history number of the current
command to be embedded into the primary prompt:
.SE
$ PS1='\eu@\eh [\et] \eW($SHLVL:\e!)\e$ '
chet@odin [21:03:44] documentation(2:636)$ cd ..
chet@odin [21:03:54] src(2:637)$
.EE
.LP
The string being assigned is surrounded by single quotes so that if
it is exported, the value of
.B $SHLVL
will be updated by a child shell:
.SE
chet@odin [21:17:35] src(2:638)$ export PS1
chet@odin [21:17:40] src(2:639)$ bash
chet@odin [21:17:46] src(3:696)$
.EE
.LP
The \fB\e$\fP escape is displayed as \*Q\fB$\fP\*U when running as a
normal user, but as \*Q\fB#\fP\*U when running as root.
.NH 2
File System Views
.PP
Since Berkeley introduced symbolic links in 4.2 BSD, one of their
most annoying properties has been the \*Qwarping\*U to a completely
different area of the file system when using
.B cd ,
and the resultant non-intuitive behavior of \*Q\fBcd ..\fP\*U.
The \s-1UNIX\s+1 kernel treats symbolic links
.I physically .
When the kernel is translating a pathname in which one component is
a symbolic link, it replaces all or part of the pathname while
processing the link.
If the contents of the symbolic link begin with a slash, the kernel
replaces the pathname entirely; if not, the link contents replace
the current component.
In either case, the symbolic link is visible.
If the link value is an absolute pathname, the user finds himself in
a completely different part of the file system.
.PP
Bash provides a
.I logical
view of the file system.
In this default mode, command and filename completion and builtin
commands such as
.B cd
and
.B pushd
which change the current working directory transparently follow
symbolic links as if they were directories.
The
.B $PWD
variable, which holds the shell's idea of the current working
directory, depends on the path used to reach the directory rather
than its physical location in the local file system hierarchy.
For example:
.SE
$ cd /usr/local/bin
$ echo $PWD
/usr/local/bin
$ pwd
/usr/local/bin
$ /bin/pwd
/net/share/sun4/local/bin
$ cd ..
$ pwd
/usr/local
$ /bin/pwd
/net/share/sun4/local
$ cd ..
$ pwd
/usr
$ /bin/pwd
/usr
.EE
.LP
One problem with this, of course, arises when programs that do not
understand the shell's logical notion of the file system interpret
\*Q..\*U differently.
This generally happens when Bash completes filenames containing
\*Q..\*U according to a logical hierarchy which does not correspond
to their physical location.
For users who find this troublesome, a corresponding
.I physical
view of the file system is available:
.SE
$ cd /usr/local/bin
$ pwd
/usr/local/bin
$ set -o physical
$ pwd
/net/share/sun4/local/bin
.EE
.NH 2
Internationalization
.PP
One of the most significant improvements in version 1.13 of Bash was
the change to \*Qeight-bit cleanliness\*U.
Previous versions used the eighth bit of characters to mark whether
or not they were quoted when performing word expansions.
While this did not affect the majority of users, most of whom used
only seven-bit ASCII characters, some found it confining.
Beginning with version 1.13, Bash implemented a different quoting
mechanism that did not alter the eighth bit of characters.
This allowed Bash to manipulate files with \*Qodd\*U characters in
their names, but did nothing to help users enter those names, so
version 1.13 introduced changes to readline that made it eight-bit
clean as well.
Options exist that force readline to attach no special significance
to characters with the eighth bit set (the default behavior is to
convert these characters to meta-prefixed key sequences) and to
output these characters without conversion to meta-prefixed
sequences.
These changes, along with the expansion of keymaps to a full eight
bits, enable readline to work with most of the ISO-8859 family of
character sets, used by many European countries.
.NH 2
POSIX Mode
.PP
Although Bash is intended to be POSIX.2 conformant, there are areas
in which the default behavior is not compatible with the standard.
For users who wish to operate in a strict POSIX.2 environment, Bash
implements a \fIPOSIX mode\fP.
When this mode is active, Bash modifies its default operation where
it differs from POSIX.2 to match the standard.
POSIX mode is entered when Bash is started with the
.B -posix
option.
This feature is also available as an option to the \fBset\fP
builtin, \fBset -o posix\fP.
For compatibility with other GNU software that attempts to be
POSIX.2 compliant, Bash also enters POSIX mode if the variable
.B $POSIXLY_CORRECT
is set when Bash is started or assigned a value during execution.
.B $POSIX_PEDANTIC
is accepted as well, to be compatible with some older GNU utilities.
When Bash is started in POSIX mode, for example, it sources the file
named by the value of
.B $ENV
rather than the \*Qnormal\*U startup files, and does not allow
reserved words to be aliased.
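.PP
As a sketch, the three routes into POSIX mode can be compared; the
\fBshopt -o -p\fP query and the \fB--posix\fP spelling used here
come from later Bash versions and are shown for illustration only:

```shell
# Each command prints "set -o posix", confirming POSIX mode is on.
# (shopt and the --posix spelling postdate Bash 1.14; this is an
# illustrative sketch, not the paper's own example.)

bash --posix -c 'shopt -o -p posix'            # startup option
bash -c 'set -o posix; shopt -o -p posix'      # the set builtin
POSIXLY_CORRECT=1 bash -c 'shopt -o -p posix'  # environment variable
```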
.NH 1
New Features and Future Plans
.PP
There are several features introduced in the current version of
Bash, version 1.14, and a number under consideration for future
releases.
This section will briefly detail the new features in version 1.14
and describe several features that may appear in later versions.
.NH 2
New Features in Bash-1.14
.PP
The new features available in Bash-1.14 answer several of the most
common requests for enhancements.
Most notably, there is a mechanism for including non-visible
character sequences in prompts, such as those which cause a terminal
to print characters in different colors or in standout mode.
There was nothing preventing the use of these sequences in earlier
versions, but the readline redisplay algorithm assumed each
character occupied physical screen space and would wrap lines
prematurely.
.PP
Readline has a few new variables, several new bindable commands, and
some additional emacs mode default key bindings.
A new history search mode has been implemented: in this mode,
readline searches the history for lines beginning with the
characters between the beginning of the current line and the cursor.
The existing readline incremental search commands no longer match
identical lines more than once.
Filename completion now expands variables in directory names.
The history expansion facilities are now nearly completely
csh-compatible: missing modifiers have been added and history
substitution has been extended.
.PP
Several of the features described earlier, such as
.B "set -o posix"
and
.B $POSIX_PEDANTIC ,
are new in version 1.14.
There is a new shell variable,
.B OSTYPE ,
to which Bash assigns a value that identifies the version of
\s-1UNIX\s+1 it's running on (great for putting architecture-specific
binary directories into the \fB$PATH\fP).
Two variables have been renamed:
.B $HISTCONTROL
replaces
.B $history_control ,
and
.B $HOSTFILE
replaces
.B $hostname_completion_file .
In both cases, the old names are accepted for backwards
compatibility.
The ksh
.I select
construct, which allows the generation of simple menus, has been
implemented.
New capabilities have been added to existing variables:
.B $auto_resume
can now take values of
.I exact
or
.I substring ,
and
.B $HISTCONTROL
understands the value
.I ignoreboth ,
which combines the two previously acceptable values.
The
.B dirs
builtin has acquired options to print out specific members of the
directory stack.
The
.B $nolinks
variable, which forces a physical view of the file system, has been
superseded by the
.B \-P
option to the
.B set
builtin (equivalent to \fBset -o physical\fP); the variable is
retained for backwards compatibility.
The version string contained in
.B $BASH_VERSION
now includes an indication of the patch level as well as the
\*Qbuild version\*U.
Some little-used features have been removed: the
.B bye
synonym for
.B exit
and the
.B $NO_PROMPT_VARS
variable are gone.
There is now an organized test suite that can be run as a regression
test when building a new version of Bash.
.PP
The documentation has been thoroughly overhauled: there is a new
manual page on the readline library and the \fIinfo\fP file has been
updated to reflect the current version.
As always, as many bugs as possible have been fixed, although some
surely remain.
.NH 2
Other Features
.PP
There are a few features that I hope to include in later Bash
releases.
Some are based on work already done in other shells.
.PP
In addition to simple variables, a future release of Bash will
include one-dimensional arrays, using the ksh implementation of
arrays as a model.
Additions to the ksh syntax, such as \fIvarname\fP=( ... ) to assign
a list of words directly to an array and a mechanism to allow the
.B read
builtin to read a list of values directly into an array, would be
desirable.
Given those extensions, the ksh
.B "set \-A"
syntax may not be worth supporting (the
.B \-A
option assigns a list of values to an array, but is a rather
peculiar special case).
.PP
Some shells include a means of \fIprogrammable\fP word completion,
where the user specifies on a per-command basis how the arguments of
the command are to be treated when completion is attempted: as
filenames, hostnames, executable files, and so on.
The other aspects of the current Bash implementation could remain
as-is; the existing heuristics would still be valid.
Only when completing the arguments to a simple command would the
programmable completion be in effect.
.PP
It would also be nice to give the user finer-grained control over
which commands are saved onto the history list.
One proposal is for a variable, tentatively named
.B HISTIGNORE ,
which would contain a colon-separated list of commands.
Lines beginning with these commands, after the restrictions of
.B $HISTCONTROL
have been applied, would not be placed onto the history list.
The shell pattern-matching capabilities could also be available when
specifying the contents of
.B $HISTIGNORE .
.PP
One thing that newer shells such as
.B wksh
(also known as
.B dtksh )
provide is a command to dynamically load code implementing
additional builtin commands into a running shell.
This new builtin would take an object file or shared library
implementing the \*Qbody\*U of the builtin (\fIxxx_builtin()\fP for
those familiar with Bash internals) and a structure containing the
name of the new command, the function to call when the new builtin
is invoked (presumably defined in the shared object specified as an
argument), and the documentation to be printed by the
.B help
command (possibly present in the shared object as well).
It would manage the details of extending the internal table of
builtins.
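.PP
The array proposal described above was in fact adopted in later
versions of Bash; a minimal sketch of the \fIvarname\fP=( ... )
assignment as it works there:

```shell
# One-dimensional, ksh-style arrays (available from Bash 2.0 on).
colors=(red green blue)   # assign a list of words to an array
echo "${colors[1]}"       # indexing starts at zero: prints "green"
echo "${#colors[@]}"      # number of elements: prints "3"
```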
.PP
A few other builtins would also be desirable: two are the POSIX.2
.B getconf
command, which prints the values of system configuration variables
defined by POSIX.2, and a
.B disown
builtin, which causes a shell running with job control active to
\*Qforget about\*U one or more background jobs in its internal jobs
table.
Using
.B getconf ,
for example, a user could retrieve a value for
.B $PATH
guaranteed to find all of the POSIX standard utilities, or find out
how long filenames may be in the file system containing a specified
directory.
.PP
There are no implementation timetables for any of these features,
nor are there concrete plans to include them.
If anyone has comments on these proposals, feel free to send me
electronic mail.
.NH 1
Reflections and Lessons Learned
.PP
The lesson that has been repeated most often during Bash development
is that there are dark corners in the Bourne shell, and people use
all of them.
In the original description of the Bourne shell, quoting and the
shell grammar are both poorly specified and incomplete; subsequent
descriptions have not helped much.
The grammar presented in Bourne's paper describing the shell
distributed with the Seventh Edition of \s-1UNIX\s+1\(dg is so far
off that it does not allow the command \f(CWwho|wc\fP.
In fact, as Tom Duff states:
.QP
Nobody really knows what the Bourne shell's grammar is.
Even examination of the source code is little help.\(dd
.FS
\(dgS. R. Bourne, \*QUNIX Time-Sharing System: The UNIX Shell\*U,
\fIBell System Technical Journal\fP, 57(6), July-August, 1978,
pp. 1971-1990.
.FE
.FS
\(ddTom Duff, \*QRc \- A Shell for Plan 9 and \s-1UNIX\s+1
systems\*U, \fIProc. of the Summer 1990 EUUG Conference\fP, London,
July, 1990, pp. 21-33.
.FE
.LP
The POSIX.2 standard includes a \fIyacc\fP grammar that comes close
to capturing the Bourne shell's behavior, but it disallows some
constructs which sh accepts without complaint \- and there are
scripts out there that use them.
It took a few versions and several bug reports before Bash
implemented sh-compatible quoting, and there are still some
\*Qlegal\*U sh constructs which Bash flags as syntax errors.
Complete sh compatibility is a tough nut.
.PP
The shell is bigger and slower than I would like, though the current
version is substantially faster than previously.
The readline library could stand a substantial rewrite.
A hand-written parser to replace the current \fIyacc\fP-generated
one would probably result in a speedup, and would solve one glaring
problem: the shell could parse commands in \*Q$(...)\*U constructs
as they are entered, rather than reporting errors when the construct
is expanded.
.PP
As always, there is some chaff to go with the wheat.
Areas of duplicated functionality need to be cleaned up.
There are several cases where Bash treats a variable specially to
enable functionality available another way (\fB$notify\fP vs.
\fBset -o notify\fP and \fB$nolinks\fP vs. \fBset -o physical\fP,
for instance); the special treatment of the variable name should
probably be removed.
A few more things could stand removal; the
.B $allow_null_glob_expansion
and
.B $glob_dot_filenames
variables are of particularly questionable value.
The \fB$[...]\fP arithmetic evaluation syntax is redundant now that
the POSIX-mandated \fB$((...))\fP construct has been implemented,
and could be deleted.
It would be nice if the text output by the
.B help
builtin were external to the shell rather than compiled into it.
The behavior enabled by
.B $command_oriented_history ,
which causes the shell to attempt to save all lines of a multi-line
command in a single history entry, should be made the default and
the variable removed.
.NH 1
Availability
.PP
As with all other GNU software, Bash is available for anonymous FTP
from
.I prep.ai.mit.edu:/pub/gnu
and from other GNU software mirror sites.
The current version is in
.I bash-1.14.1.tar.gz
in that directory.
Use
.I archie
to find the nearest archive site.
The latest version is always available for FTP from
.I bash.CWRU.Edu:/pub/dist .
Bash documentation is available for FTP from
.I bash.CWRU.Edu:/pub/bash .
.PP
The Free Software Foundation sells tapes and CD-ROMs containing
Bash; send electronic mail to \f(CRgnu@prep.ai.mit.edu\fP or call
\f(CR+1-617-876-3296\fP.
.PP
Bash is also distributed with several versions of
\s-1UNIX\s+1-compatible systems.
It is included as /bin/sh and /bin/bash on several Linux
distributions (more about the difference in a moment), and as
contributed software in BSDI's BSD/386* and FreeBSD.
.FS
*BSD/386 is a trademark of Berkeley Software Design, Inc.
.FE
.PP
The Linux distribution deserves special mention.
There are two configurations included in the standard Bash
distribution: a \*Qnormal\*U configuration, in which all of the
standard features are included, and a \*Qminimal\*U configuration,
which omits job control, aliases, history and command line editing,
the directory stack and
.B pushd/popd/dirs ,
process substitution, prompt string special character decoding, and
the
.I select
construct.
This minimal version is designed to be a drop-in replacement for the
traditional \s-1UNIX\s+1 /bin/sh, and is included as the Linux
/bin/sh in several packagings.
.NH 1
Conclusion
.PP
Bash is a worthy successor to sh.
It is sufficiently portable to run on nearly every version of
\s-1UNIX\s+1 from 4.3 BSD to SVR4.2, and several \s-1UNIX\s+1
workalikes.
It is robust enough to replace sh on most of those systems, and
provides more functionality.
It has several thousand regular users, and their feedback has helped
to make it as good as it is today \- a testament to the benefits of
free software.
# Solved papers for NEET Physics Work, Energy, Power & Collision / कार्य, ऊर्जा और शक्ति NEET PYQ-Work Energy Power and Collision ### done NEET PYQ-Work Energy Power and Collision Total Questions - 49 • question_answer1) A force acts on a 3.0 g particle in such a way that the position of the particle as a function of time is given by $x=3t-4{{t}^{2}}+{{t}^{3}},$ where $x$ is in metre and $t$ in second. The work done during the first 4 s is: [AIPMT 1998] A) 570 mJ B) 450 mJ C) 490 mJ D) 528 mJ • question_answer2) Two equal masses ${{m}_{1}}$ and ${{m}_{2}}$ moving along the same straight line with velocities $+\text{ }3\text{ }m/s$ and $\,\,5\text{ }m/s$ respectively collide elastically. Their velocities after the collision will be respectively: [AIPMT 1998] A) $+\text{ }4\text{ }m/s$for both B) $-\,3\,\,m/s$ and $+\text{ }5\text{ }m/s$ C) $-\text{ }4\text{ }m/s$ and $+4\text{ }m/s~$ D) $-\text{ }5\text{ }m/s$ and $+\text{ }3\text{ }m/s$ • question_answer3) A weightless ladder 20 ft long rests against a frictionless wall at an angle of ${{60}^{o}}$ from the horizontal. A 150 pound man is 4 ft from the top of the ladder. A horizontal force is needed to keep it from slipping. Choose the correct magnitude of force from the following:                    [AIPMT 1998] A) 17.3 pound B) 100 pound C) 120 pound D) 150 pound • question_answer4) A rubber ball is dropped from a height of 5 m on a planet where the acceleration due to gravity is not known. On bouncing it rises to 1.8 m. The ball loses its velocity on bouncing by a factor of: [AIPMT 1998] A) $\frac{16}{25}$ B) $\frac{2}{5}$ C) $\frac{3}{5}$ D) $\frac{9}{25}$ • question_answer5) An engine exerts a force $\vec{F}=(20\hat{i}-3\hat{j}+5\hat{k})N$ and moves with velocity$\vec{v}=(6\hat{j}+20\hat{j}-3\hat{k})\,m/s$.  The power of the engine (in watt) is: [AIPMT 2000] A) 45 B) 75 C) 20 D) 10 • question_answer6)  A stone is attached to one end of a string and rotated in a vertical circle. 
If string breaks at the position of maximum tension, it will break at: [AIPMT 2000] A) A B) B C) C D) D • question_answer7) A man goes at the top of a smooth inclined plane. He releases a bag to fall freely and he himself slides on inclined plane to reach the bottom. If ${{v}_{1}}$ and ${{v}_{2}}$ are the velocities of the man and bag respectively, then: [AIPMT 2000] A) ${{v}_{1}}>{{v}_{2}}$ B) ${{v}_{1}}<{{v}_{2}}$ C) ${{v}_{1}}={{v}_{2}}$ D) ${{v}_{1}}$ and ${{v}_{2}}$ cannot be compared • question_answer8) A child is swinging a swing. Minimum and maximum heights of swing from earth’s surface are 0.75 m and 2 m respectively. The maximum velocity of this swing is:                       [AIPMT 2001] A) 5 m/s B) 10 m/s C) 15 m/s D) 20 m/s • question_answer9) A force of 250 N is required to lift a 75 kg mass through a pulley system. In order to lift the mass through 3 m, the rope has to be pulled through 12 m. The efficiency of system is: [AIPMT 2001] A) 50% B) 75% C) 33% D) 90% • question_answer10) Two springs A and B have force constants ${{k}_{A}}$ and ${{k}_{B}}$ such that ${{k}_{B}}=2\,{{k}_{A}}$. The four ends of the springs are stretched by the same force, If energy stored in spring A is E, then energy stored in spring B is:                         [AIPMT 2001] A) E/2 B) 2 E C) E D) 4 E • question_answer11) If kinetic energy of a body is increased by 300% then percentage change in momentum will be: [AIPMT 2002] A) 100% B) 150% C) 265% D) 73.2% • question_answer12) When a long spring is stretched by 2 cm, its potential energy is U. If the spring is stretched by 10 cm, the potential energy in it will be: [AIPMT 2003] A) 10 U B) 25 U C) U/5 D) 5 U • question_answer13)  A mass of 0.5 kg moving with a speed of 1.5 m/son a horizontal smooth surface, collides with a nearly weightless spring of force constant $k=50\text{ }N/m$. 
The maximum compression of the spring would be:[AIPMT (S) 2004] A) 0.15 m B) 0.12 m C) 1.5 m D) 0.5 m • question_answer14) A stone is tied to a string of length $l$ and is whirled in a vertical circle with the other end of the string as the centre. At a certain instant of time, the stone is at its lowest position and has a speed $u$. The magnitude of the change in velocity as it reaches a position where the string is horizontal {g being acceleration due to gravity) is:  [AIPMT (S) 2004] A) $\sqrt{2({{u}^{2}}-gl)}$ B) $\sqrt{{{u}^{2}}-gl}$ C) $u-\sqrt{{{u}^{2}}-2gl}$ D) $\sqrt{2gl}$ • question_answer15) A particle of mass ${{m}_{1}}$ is moving with a velocity ${{v}_{1}}$ and another particle of mass ${{m}_{2}}$ is moving with a velocity ${{v}_{2}}$. Both of them have the same momentum but their different kinetic energies are ${{E}_{1}}$ and ${{E}_{2}}$ respectively. If ${{m}_{1}}>{{m}_{2}}$ then: [AIPMT (S) 2004] A) ${{E}_{1}}<{{E}_{2}}$ B) $\frac{{{E}_{1}}}{{{E}_{2}}}=\frac{{{m}_{1}}}{{{m}_{2}}}$ C) ${{E}_{1}}>{{E}_{2}}$ D) ${{E}_{1}}={{E}_{2}}$ • question_answer16)  A force F acting on an object varies with distance $x$ as shown here. The force is in N and $x$ is in m. The work done by the force in moving the object from $x=0$ to $x=6\,m$ is: [AIPMT (S) 2005] A) 4.5 J B) 13.5 J C) 9.0 J D) 18.0 J • question_answer17) The potential energy of a long spring when stretched by 2 cm is U. If the spring is stretched by 8 cm the potential energy stored in it is: [AIPMT (S) 2006] A) 4 U B) 8 U C) 16 U D) U/4 • question_answer18) A body of mass 3 kg is under a constant force which causes a displacement s in metres in it, given by the relation $s=\frac{1}{3}\,{{t}^{2}},$ where t is in s. Work done by the force in 2 s is : A) $\frac{5}{19}\,J$ B) $\frac{3}{18}\,J$ C) $\frac{8}{3}\,J$ D) $\frac{19}{5}\,J$ • question_answer19) 300 J of work is done in sliding a 2 kg block up an inclined plane of height 10 m. 
Taking $g=10\,m/{{s}^{2}}$, work done against friction is : [AIPMT (S) 2006] A) 200 J B) 100 J C) zero D) 1000 J • question_answer20) A vertical spring with force constant k is fixed on a table. A ball of mass m at a height ft above the free upper end of the spring falls vertically on the spring, so that the spring is compressed by a distance d. The net work done in the process is:                              [AIPMT (S) 2007] A) $mg(h+d)+\frac{1}{2}k{{d}^{2}}$ B) $mg(h+d)-\frac{1}{2}k{{d}^{2}}$ C) $mg(h-d)-\frac{1}{2}k{{d}^{2}}$ D) $mg(h-d)+\frac{1}{2}k{{d}^{2}}$ • question_answer21) A roller coaster is designed such that riders experience “weightlessness” as they go round the top of a hill whose radius of curvature is 20 m. The speed of the car at the top of the hill is between            [AIPMPT (S) 2008] A) 14 m/s and 15 m/s B) 15 m/s  and 16 m/s C) 16 m/s and 17 m/s D) 13 m/s  and 14 m/s • question_answer22) A body of mass 1 kg is thrown upwards with a velocity $20\,m{{s}^{-1}}$. It momentarily comes to rest after attaining a height of 18 m. How much energy is lost due to air friction? $(g=10\,m{{s}^{-1}})$ [AIPMT (S) 2009] A) 20 J B) 30 J C) 40 J D) 10 J • question_answer23) An engine pumps water continuously through a hose. Water leaves the hose with a velocity v and m is the mass per unit length of the water jet. What is the rate at which kinetic energy is imparted to water? [AIPMT (S) 2009] A) $\frac{1}{2}\,m{{v}^{3}}$ B) $m{{v}^{3}}$ C) $\frac{1}{2}m{{v}^{2}}$ D) $\frac{1}{2}{{m}^{2}}{{v}^{2}}$ • question_answer24) A block of mass M is attached to the lower end of a vertical spring. The spring is hung from a ceiling and has force constant value k. The mass is released from rest with the spring initially un stretched. The maximum extension produced in the length of the spring will be [AIPMT (S) 2009] A) $Mg/k$ B) $-\,2\text{ }Mg/k~$ C) $4\,\,Mg/k$ D) $Mg/2k$ • question_answer25) A particle of mass M starting from rest undergoes uniform acceleration. 
If the speed acquired in time T is v, the power delivered to the particle is [AIPMT (M) 2010] A) $\frac{M{{v}^{2}}}{T}$ B) $\frac{1}{2}\frac{M{{v}^{2}}}{{{T}^{2}}}$ C) $\frac{M{{v}^{2}}}{{{T}^{2}}}$ D) $\frac{1}{2}\frac{M{{v}^{2}}}{T}$ • question_answer26) A mass m moving horizontally (along the x-axis) with velocity v collides and sticks to mass of 3 m moving vertically upward (along the y-axis) with velocity 2v. The final velocity of the combination is     [AIPMT (M) 2011] A) $\frac{1}{4}v\mathbf{\hat{i}}+\frac{3}{2}v\mathbf{\hat{j}}$ B) $\frac{1}{3}v\mathbf{\hat{i}}+\frac{2}{3}v\mathbf{\hat{j}}$ C) $\frac{2}{3}v\mathbf{\hat{i}}+\frac{1}{3}v\mathbf{\hat{j}}$ D) $\frac{3}{2}v\mathbf{\hat{i}}+\frac{1}{4}v\mathbf{\hat{j}}$ • question_answer27) A body projected vertically from the earth reaches a height equal to earth's radius before returning to the earth. The power exerted by the gravitational force is greatest                            [AIPMT (S) 2011] A) at the instant just before the body hits the earth B) it remains constant all through C) at the instant just after the body is projected D) at the highest position of the body • question_answer28) The potential energy of a system increase if work is done                                [AIPMT (S) 2011] A) by the system against a conservative force B) by the system against a neoconservative force C) upon the system by a conservative force D) upon the system by a neoconservative force • question_answer29)  Force F on a particle moving in a straight line varies with distance d as shown in the figure. The work done on the particle during its displacement of 12 m is                   [AIPMT (S) 2011] A) 21 J B) 26 J C) 13 J D) 18 J • question_answer30) A car of mass m starts from rest and accelerates so that the instantaneous power delivered to the car has a constant magnitude ${{p}_{0}}$. 
The instantaneous velocity of this car is proportional to                         [AIPMT (M) 2012] A) ${{t}^{2}}{{p}_{0}}$ B) ${{t}^{1/2}}$ C) ${{t}^{-1/2}}$ D) $t/\sqrt{m}$ • question_answer31) The potential energy of a particle in a force field is $U=\frac{A}{{{r}^{2}}}-\frac{A}{r},$ where A and B are positive constants and r is the distance of particle from the centre of the field. For stable equilibrium, the distance of the particle is[AIPMT (S) 2012] A) B/2A B) 2A/B C) A/B D) B/A • question_answer32)  Two spheres A and B of masses ${{m}_{1}}$ and ${{m}_{2}}$ respectively collide. A is at rest initially and B is moving with-velocity v along x-axis. After collision B has a velocity $\frac{v}{2}$ in a direction perpendicular to the original direction. The mass A moves after collision in the direction [AIPMT (S) 2012] A) same as that of B B) opposite to that of B C) $\theta ={{\tan }^{-1}}\left( \frac{1}{2} \right)$ to the x-axis D) $\theta ={{\tan }^{-1}}\left( \frac{-1}{2} \right)$ • question_answer33) A uniform force of $(3\mathbf{i}+\mathbf{j})\,N$ acts on a particle of mass 2 kg. Hence the particle is displaced from position $(2\mathbf{i}+\mathbf{k})\,m$ to position $(4\mathbf{i}+3\mathbf{j}-\mathbf{k})\,m$. The work done by the force on the particle is [NEET 2013] A) 9 J B) 6 J C) 13 J D) 15 J • question_answer34)  Two similar springs P and Q have spring constants ${{K}_{P}}$ and ${{K}_{Q}},$ such that ${{K}_{P}}>{{K}_{Q}}$. They are stretched, first by the same amount (case a), then by the same force (case b). 
The work done by the springs ${{W}_{P}}$ and ${{W}_{Q}}$ are related as, in case (a) and case (b), respectively [NEET 2015] A) ${{W}_{P}}={{W}_{Q}};{{W}_{P}}>{{W}_{Q}}$ B) ${{W}_{P}}={{W}_{Q}};{{W}_{P}}={{W}_{Q}}$ C) ${{W}_{P}}>{{W}_{Q}};{{W}_{Q}}>{{W}_{P}}$ D) ${{W}_{P}}<{{W}_{Q}};{{W}_{Q}}<{{W}_{P}}$ • question_answer35) A block of mass 10 kg, moving in the x-direction with a constant speed of $10\,m{{s}^{-1}},$ is subjected to a retarding force $F=0.1x\,J/m$ during its travel from $x=20\,m$ to $x=30\,m$. Its final KE will be [NEET 2015] A) 475 J B) 450 J C) 275 J D) 250 J • question_answer36) A particle of mass m is driven by a machine that delivers a constant power k watts. If the particle starts from rest, the force on the particle at time t is [NEET 2015] A) $\sqrt{\frac{mk}{2}}\,{{t}^{{}^{1}/{}_{2}}}$ B) $\sqrt{mk}\,{{t}^{{}^{-1}/{}_{2}}}$ C) $\sqrt{2mk}\,{{t}^{{}^{-1}/{}_{2}}}$ D) $\frac{1}{2}\sqrt{mk}\,{{t}^{{}^{-1}/{}_{2}}}$ • question_answer37) Two particles of masses ${{m}_{1}}$ and ${{m}_{2}}$ move with initial velocities ${{u}_{1}}$ and ${{u}_{2}}$. On collision, one of the particles gets excited to a higher level after absorbing energy $\varepsilon$. If the final velocities of the particles are ${{v}_{1}}$ and ${{v}_{2}},$ then we must have [NEET 2015] A) $m_{1}^{2}{{u}_{1}}+m_{2}^{2}{{u}_{2}}-\varepsilon =m_{1}^{2}{{v}_{1}}+m_{2}^{2}{{v}_{2}}$ B) $\frac{1}{2}{{m}_{1}}u_{1}^{2}+\frac{1}{2}{{m}_{2}}u_{2}^{2}=\frac{1}{2}{{m}_{1}}v_{1}^{2}+\frac{1}{2}{{m}_{2}}v_{2}^{2}-\varepsilon$ C) $\frac{1}{2}{{m}_{1}}u_{1}^{2}+\frac{1}{2}{{m}_{2}}u_{2}^{2}-\varepsilon =\frac{1}{2}{{m}_{1}}v_{1}^{2}+\frac{1}{2}{{m}_{2}}v_{2}^{2}$ D) $\frac{1}{2}m_{1}^{2}u_{1}^{2}+\frac{1}{2}m_{2}^{2}u_{2}^{2}+\varepsilon =\frac{1}{2}m_{1}^{2}v_{1}^{2}+\frac{1}{2}m_{2}^{2}v_{2}^{2}$ • question_answer38) A ball is thrown vertically downwards from a height of 20 m with an initial velocity ${{V}_{0}}$.
It collides with the ground, loses 50% of its energy in the collision and rebounds to the same height. The initial velocity ${{V}_{0}}$ is (Take $g=10\,m{{s}^{-2}}$) [NEET 2015 (Re)] A) $14\,\,m{{s}^{-1}}$ B) $20\,\,m{{s}^{-1}}$ C) $28\,\,m{{s}^{-1}}$ D) $10\,\,m{{s}^{-1}}$ • question_answer39) On a frictionless surface, a block of mass M moving at speed v collides elastically with another block of the same mass M which is initially at rest. After the collision, the first block moves at an angle $\theta$ to its initial direction and has a speed $\frac{v}{3}$. The second block's speed after the collision is [NEET 2015 (Re)] A) $\frac{2\sqrt{2}}{3}v$ B) $\frac{3}{4}v$ C) $\frac{3}{\sqrt{2}}v$ D) $\frac{\sqrt{3}}{2}v$ • question_answer40) What is the minimum velocity with which a body of mass m must enter a vertical loop of radius R so that it can complete the loop? [NEET 2016] A) $\sqrt{gR}$ B) $\sqrt{2gR}$ C) $\sqrt{3gR}$ D) $\sqrt{5gR}$ • question_answer41) A body of mass 1 kg begins to move under the action of a time-dependent force $\vec{F}=(2t\,\mathbf{\hat{i}}\,+3{{t}^{2}}\,\mathbf{\hat{j}})\,N,$ where $\mathbf{\hat{i}}$ and $\mathbf{\hat{j}}$ are unit vectors along the x and y axes. What power will be developed by the force at the time t? [NEET 2016] A) $(2{{t}^{2}}+3{{t}^{3}})W$ B) $(2{{t}^{2}}+4{{t}^{4}})W$ C) $(2{{t}^{3}}+3{{t}^{4}})W$ D) $(2{{t}^{3}}+3{{t}^{5}})W$ • question_answer42) Two blocks A and B of masses 3m and m respectively are connected by a massless and inextensible string. The whole system is suspended by a massless spring as shown in the figure. The magnitudes of the accelerations of A and B immediately after the string is cut are, respectively, [NEET 2017] A) $\frac{g}{3},\frac{g}{3}$ B) $g,\frac{g}{3}$ C) $\frac{g}{3},g$ D) $g,\,g$ • question_answer43) The bulk modulus of a spherical object is $'B'$.
If it is subjected to uniform pressure $'p',$ the fractional decrease in radius is [NEET 2017] A) $\frac{p}{3B}$ B) $\frac{p}{B}$ C) $\frac{B}{3p}$ D) $\frac{3p}{B}$ • question_answer44) One end of a string of length l is connected to a particle of mass 'm' and the other end is connected to a small peg on a smooth horizontal table. If the particle moves in a circle with speed 'v', the net force on the particle (directed towards the centre) will be (T represents the tension in the string) [NEET 2017] A) Zero B) T C) $T+\frac{m{{v}^{2}}}{l}$ D) $T-\frac{m{{v}^{2}}}{l}$ • question_answer45) Consider a drop of rain water having mass 1 g falling from a height of 1 km. It hits the ground with a speed of 50 m/s. Take g constant with a value $10\,m/{{s}^{2}}$. The work done by the (i) gravitational force and the (ii) resistive force of air is A) (i) 10 J, (ii) $-8.75\,J$ B) (i) $-10\,J$, (ii) $-8.25\,J$ C) (i) 1.25 J, (ii) $-8.25\,J$ D) (i) 100 J, (ii) 8.75 J • question_answer46) A moving block having mass m collides with another stationary block having mass 4m. The lighter block comes to rest after the collision. If the initial velocity of the lighter block is v, then the value of the coefficient of restitution (e) will be [NEET 2018] A) 0.8 B) 0.25 C) 0.5 D) 0.4 • question_answer47) Body A of mass 4m moving with speed u collides with another body B of mass 2m, at rest. The collision is head-on and elastic in nature. After the collision, the fraction of energy lost by the colliding body A is: [NEET 2019] A) $\frac{4}{9}$ B) $\frac{5}{9}$ C) $\frac{1}{9}$ D) $\frac{8}{9}$ • question_answer48) A force $F=20+10y$ acts on a particle in the y-direction, where F is in newton and y in metre. The work done by this force in moving the particle from $y=0$ to $y=1\,m$ is: [NEET 2019] A) 25 J B) 20 J C) 30 J D) 5 J • question_answer49) The energy required to break one bond in DNA is ${{10}^{-20}}\,J$.
This value in eV is nearly: [NEET 2020] A) 0.6 B) 0.06 C) 0.006 D) 6
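Several of the answers above reduce to one or two lines of arithmetic. A quick sketch of a few of them in Python (question numbering follows the list above; the elementary charge value 1.602×10⁻¹⁹ C is the standard constant):

```python
# Spot-checks for a few of the questions above (illustrative, SI units).

# Q26: perfectly inelastic 2D collision: m*v along x plus 3m*2v along y,
# shared by the combined mass 4m.
m, v = 1.0, 1.0
vx = m * v / (4 * m)            # v/4
vy = 3 * m * 2 * v / (4 * m)    # 3v/2  -> option A
assert (vx, vy) == (0.25, 1.5)

# Q47: head-on elastic collision of 4m on 2m at rest. A keeps the fraction
# ((m1 - m2)/(m1 + m2))^2 of its KE, so it loses 1 - (1/3)^2 = 8/9 (option D).
frac_lost = 1 - ((4 - 2) / (4 + 2)) ** 2
assert abs(frac_lost - 8 / 9) < 1e-12

# Q48: W = integral of (20 + 10*y) dy from 0 to 1 = 20 + 5 = 25 J (option A).
work = 20 * 1 + 10 * 1 ** 2 / 2
assert work == 25.0

# Q49: 1e-20 J expressed in eV (option B).
print(round(1e-20 / 1.602e-19, 2))  # 0.06
```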
# PEG – Price/Earnings to Growth Ratio

The PEG ratio is another way to try to identify undervalued high-growth stocks through a technical screen. It should not be used alone to select stocks; it should instead be complemented with a fundamental analysis of the market and the company, including a detailed discounted-FCF model to value the stock properly. The P/E and PEG ratios are defined as follows:

$PEG=\frac{P_{0} / E_{0}}{g}$ and $\frac{P_{0}}{E_{0}}=\frac{PR_{0}(1+g)}{r_{E}-g}$

Where:

P0 = Current Stock Price

E0 = Latest 12-Month Trailing Earnings per Share (EPS)

PR0 = Latest Payout Ratio of Earnings as Dividends and/or Stock Buybacks

rE = Current Cost of Equity for the Firm (obtain from the CAPM)

g = Future Growth Rate of Earnings (use fractional form for the P/E calculation, then whole % for the PEG. For example, a 5% growth rate should use 0.05 in the P/E formula and 5 in the PEG formula)

P/E can obviously be calculated directly by taking the current stock price and dividing it by the current earnings, but what the equations above teach us is that it is a function of the earnings paid to the investor versus retained, the growth of the firm (which is itself a function of the earnings that are retained), and the return rE required by shareholders to compensate them for the risk level of the firm's cash flows. That risk level reflects the stability of the firm's earnings, which in turn is set by the assets the firm has chosen to invest in. So the simple P/E equation quickly becomes a complex one involving numerous intertwined variables, all of which are directly impacted by executive management and the decisions it makes. This is why a good management team is critical for the success of the firm.

Why would we want to divide P/E by g when g is already a prominent component of P/E?
Well, P/E is usually high for a high-growth firm such as a bleeding-edge technology firm, and low for a low-growth firm such as a utility that only grows at the rate of population growth, and PEG attempts to normalize these, assisting us in finding the winners and losers across the full spectrum of P/Es. The importance of growth can clearly be seen from the equations above, since g adds to the numerator and takes away from the denominator. The problem with P/E is that it is not absolute, because it depends on the irrationality of humans and on imperfections in the market. So a high-growth company that is not priced correctly in the market (e.g., public opinion is temporarily out of favor with the firm but the firm's fundamentals are still solid) could have a P/E that is too low, and a low-earnings firm could have an irrational price (e.g., market hype has bid up the latest technology stock). A high or low P/E cannot be deemed "good" by itself; instead, a genuinely undervalued P/E is always excellent, and PEG is trying to help us decipher this.

By dividing P/E by g, we emphasize (and try to compensate for) the fact that a company's stock price is all about growth, making growth more material in our technical analysis. If a firm is overvalued by hype, the skew of its P/E will probably still be apparent in its PEG, since its g would not be large enough to overcome the hype (i.e., the high stock price), but a firm that is undervalued due to current market sentiment will benefit from PEG, since it will have a low P and a normal or high g. And hopefully any fairly valued company will experience a neutral impact from PEG.

The literature holds that anything with a PEG at or below unity (1) is fairly valued or undervalued, and so a good investment. What this means is that any stock whose g ≥ P/E is a stock worth investing in. It is clear from this relationship that P/E is being used as a proxy for growth.
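The mechanics of the screen are trivial to compute. A sketch follows; the inputs are hypothetical (not real market data), and g is passed in whole percent as described above:

```python
def pe_ratio(price: float, eps: float) -> float:
    """Trailing P/E: current price over latest 12-month trailing EPS."""
    return price / eps

def peg_ratio(price: float, eps: float, growth_pct: float) -> float:
    """PEG = (P/E) / g, with g expressed in whole percent (5% -> 5)."""
    return pe_ratio(price, eps) / growth_pct

# Hypothetical firm: $60 stock, $3.00 trailing EPS, 15% expected earnings growth.
pe = pe_ratio(60.0, 3.0)         # 20.0
peg = peg_ratio(60.0, 3.0, 15)   # 20/15 = 1.33..., above the unity threshold
print(pe, round(peg, 2))
```

At a PEG of about 1.33, this hypothetical stock would not pass the "PEG ≤ 1" screen.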
This author believes that this is like trying to fit a square peg into a round hole: it is true that P/E is dependent on growth, but it is also a function of earnings, payout ratio, and the cost of equity. This does not mean that PEG has no merit, though, and it can still provide guidance. A Motley Fool study of 3-year stock returns found that PEG did work as an initial screen of stocks (quoted here):

• 92% of companies with PEG ratios of less than 1 beat the market.
• 68% of companies with PEG ratios of between 1 and 2 beat the market.
• 47% of companies with PEG ratios greater than 2 beat the market.

Since P/E and PEG are based on earnings, the future growth rate should be that of expected future earnings, and can be derived using either of the following two equations:

1 – assume that g = RR × ROE, where RR is the retention ratio and ROE is the return on equity.

2 – assume that $g=\left(\frac{D_{n}}{D_{0}}\right)^{1/n}-1,$ where Dn is a future dividend at some time period n, and D0 is the current dividend. The assumption here is that as net income grows, the firm will also grow its dividends proportionally.

Since we are discussing sustainable growth, this implies that the firm must be reinvesting some funds back into itself: enough to sustain its depletion of assets (i.e., approximately equal to its depreciation rate), plus an additional amount above this to grow the firm beyond its current level. This means that in order to have g, the firm must have a PR below 100%.
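Both growth estimates are one-liners. A sketch with made-up inputs (a 60% retention ratio, a 12% ROE, and a dividend that grew from $1.00 to $1.40 over 5 years):

```python
# 1) Sustainable growth: g = RR * ROE.
retention_ratio, roe = 0.60, 0.12
g_sustainable = retention_ratio * roe   # 0.072, i.e. 7.2% per year
assert abs(g_sustainable - 0.072) < 1e-12

# 2) Dividend CAGR: g = (D_n / D_0)**(1/n) - 1.
d0, dn, n = 1.00, 1.40, 5
g_cagr = (dn / d0) ** (1 / n) - 1
print(round(g_cagr, 4))  # roughly 0.07, i.e. about 7% per year
```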
# On the stability of conservative discontinuous Galerkin/Hermite spectral methods for the Vlasov-Poisson system

We study a class of spatial discretizations for the Vlasov-Poisson system written as a hyperbolic system using Hermite polynomials. In particular, we focus on spectral methods and discontinuous Galerkin approximations. To obtain $L^2$ stability properties, we introduce a new weighted $L^2$ space, with a time-dependent weight. For the Hermite spectral form of the Vlasov-Poisson system, we prove conservation of mass, momentum and total energy, as well as global stability for the weighted $L^2$ norm. These properties are then discussed for several spatial discretizations. Finally, numerical simulations are performed with the proposed DG/Hermite spectral method to highlight its stability and conservation features.
## 1. Introduction

One of the simplest models currently applied in plasma physics simulations is the Vlasov-Poisson system. This system describes the evolution of charged particles in the case where the only interaction considered is the mean-field force created through electrostatic effects. The system consists of Vlasov equations for the phase space density $f_\beta$ of each particle species $\beta$ of charge $q_\beta$ and mass $m_\beta$,

(1.1) $\begin{cases}\dfrac{\partial f_\beta}{\partial t}+v\cdot\nabla_x f_\beta+\dfrac{q_\beta}{m_\beta}\,E\cdot\nabla_v f_\beta=0,\\ f_\beta(t=0)=f_{\beta,0},\end{cases}$

coupled to its self-consistent electric field $E=-\nabla_x\Phi$, which satisfies the Poisson equation

(1.2) $-4\pi\epsilon_0\,\Delta_x\Phi=\sum_\beta q_\beta\,n_\beta,\qquad\text{with}\quad n_\beta=\int_{\mathbb{R}^d}f_\beta\,dv,$

where $\epsilon_0$ is the vacuum permittivity. On the one hand, for a smooth and nonnegative initial data $f_{\beta,0}$, the solution to (1.1) remains smooth and nonnegative for all $t\ge 0$. On the other hand, for any function $G$, we have

$\frac{d}{dt}\int_{\mathbb{R}^{2d}}G(f_\beta(t))\,dx\,dv=0,\qquad\forall\, t\in\mathbb{R}^+,$

which leads to the conservation of mass, $L^p$ norms for $1\le p\le\infty$, and kinetic entropy,

$H(t):=\int_{\mathbb{R}^{2d}}f_\beta(t)\,\ln(f_\beta(t))\,dx\,dv=H(0),\qquad\forall\, t\ge 0.$

We also get the conservation of momentum and total energy

$E(t):=\sum_\beta\frac{m_\beta}{2}\int_{\mathbb{R}^{2d}}f_\beta(t)\,\|v\|^2\,dx\,dv+2\pi\epsilon_0\int_{\mathbb{R}^d}\|E\|^2\,dx=E(0),\qquad\forall\, t\ge 0.$
Numerical approximation of the Vlasov-Poisson system has been addressed since the sixties. Particle methods (PIC), consisting in approximating the plasma by a finite number of macro-particles, have been widely used [4]. They allow one to obtain satisfying results with a small number of particles, but a well-known drawback of this class of methods is their inherent numerical noise, which only decreases as $1/\sqrt{N}$ when the number $N$ of particles increases, preventing an accurate description of the distribution function in some specific applications. To overcome this difficulty, Eulerian solvers, that is, methods discretizing the Vlasov equation on a mesh of the phase space, can be considered. Many authors have explored their design, and an overview of many different techniques with their pros and cons can be found in [11]. Among them, we can mention finite volume methods [10], which are a simple and inexpensive option, but in general low order. Fourier-Fourier transform schemes [19] are based on a fast Fourier transform of the distribution function in phase space, but suffer from Gibbs phenomena if conditions other than periodic are considered. Standard finite element methods [31, 32] have also been applied, but may present numerical oscillations when approximating the Vlasov equation. Later, semi-Lagrangian schemes were also proposed [29], consisting in computing the distribution function at each grid point by following the characteristic curves backward. Although these schemes can achieve high order and also allow for large time steps, they require high-order interpolation to compute the origin of the characteristics, destroying the local character of the reconstruction. In the present article, using Hermite polynomials in the velocity variable, we write the Vlasov-Poisson system (1.1)–(1.2) as a hyperbolic system.
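The statistical sampling noise of particle methods decreases like $1/\sqrt{N}$ in the number of particles. A toy Monte Carlo estimate illustrates the scaling (this is only an illustration of sampling error, not a PIC code):

```python
import random
import statistics

random.seed(0)

def mc_mass_error(num_particles: int, trials: int = 200) -> float:
    """RMS error of a particle estimate of the mass in [0, 0.5]
    for particles drawn uniformly on [0, 1] (exact answer: 0.5)."""
    sq_errs = []
    for _ in range(trials):
        inside = sum(random.random() < 0.5 for _ in range(num_particles))
        sq_errs.append((inside / num_particles - 0.5) ** 2)
    return statistics.mean(sq_errs) ** 0.5

# Multiplying N by 100 divides the noise by about sqrt(100) = 10.
ratio = mc_mass_error(1_000) / mc_mass_error(100_000)
print(ratio)  # close to 10, i.e. error ~ 1/sqrt(N)
```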
This idea of using Galerkin methods with a small finite set of orthogonal polynomials, rather than discretizing the distribution function in velocity space, goes back to the 60's [1, 18]. More recently, the merit of using a rescaled orthogonal basis like the so-called scaled Hermite basis has been shown [8, 16, 28, 26, 30]. In [16], Holloway formalized two possible approaches. The first one, called symmetrically-weighted (SW), is based on standard Hermite functions as the basis in velocity and as test functions in the Galerkin method. It appears that this SW method cannot simultaneously conserve mass and momentum. It makes up for this deficiency by correctly conserving the $L^2$ norm of the distribution function, ensuring the stability of the method. In the second approach, called asymmetrically-weighted (AW), another set of test functions is used, leading to the simultaneous conservation of mass, momentum and total energy. However, the AW Hermite method does not conserve the $L^2$ norm of the distribution function and is therefore not numerically stable. In addition, the asymmetric Hermite method exactly solves the spatially uniform problem without any truncation error, a property not shared by either the traditional symmetric Hermite expansion or by finite difference methods. The aim of this work is to present a class of numerical schemes based on the AW Hermite methods and to provide a stability analysis. In what follows, we consider two types of spatial discretizations for the Vlasov-Poisson system (1.1)-(1.2), written as a hyperbolic system using Hermite polynomials in the velocity variable: spectral methods and a discontinuous Galerkin (DG) method. Concerning spectral methods, the Fourier basis is the natural choice for the spatial discretization when considering periodic boundary conditions. Spectral Galerkin and spectral collocation methods for the AW Fourier-Hermite discretization have been proposed in [8, 21, 24].
In [5], the authors study a time-implicit AW Fourier-Hermite method allowing exact conservation of charge, momentum and energy, and highlight that for some test cases this scheme can be significantly more accurate than the PIC method. For the SW Fourier-Hermite method, a convergence theory was proposed in [25]. In [20], the authors study conservation and stability properties of a generalized Hermite-Fourier semi-discretization, including as special cases the SW and AW approaches. Concerning discontinuous Galerkin methods, they are similar to finite element methods but use discontinuous polynomials, and are particularly well adapted to handling complicated boundaries, which may arise in many realistic applications. Due to their local construction, this type of method provides good local conservation properties without sacrificing the order of accuracy. They were already used for the Vlasov-Poisson system in [14, 6]. Optimal error estimates and the study of the conservation properties of a family of semi-discrete DG schemes for the Vlasov-Poisson system with periodic boundary conditions have been proved for the one-dimensional [2] and multi-dimensional [3] cases. In all these works, the DG method is employed using a phase space mesh. Here, we adopt this approach only in physical space, as in [13], with a Hermite approximation in the velocity variable. In [13], such schemes with discontinuous Galerkin spatial discretization are designed in such a way that conservation of mass, momentum and total energy is rigorously provable. As mentioned before, the main difficulty is to study the stability of approximations based on the asymmetrically-weighted Hermite basis. Indeed, this choice fails to preserve the $L^2$ norm of the approximate solution, and therefore to ensure long-time stability of the method. In this framework, the natural space to be considered is a weighted $L^2$ space, and there is no estimate of the associated norm for the solution to the Vlasov-Poisson system (1.1)-(1.2).
To overcome this difficulty, we introduce a new weighted space, with a time-dependent weight, allowing us to prove global stability of the solution in this space. Actually, this idea has already been employed in [22, 23] to stabilize Hermite spectral methods for linear diffusion equations and nonlinear convection-diffusion equations in unbounded domains, yielding stability and spectral convergence of the considered methods. Here, we define the weight as

(1.3) $\omega(t,v):=(2\pi)^{d/2}\,e^{(\alpha(t)\|v\|)^2/2},$

where $\alpha$ is a nonincreasing positive function which will be designed in such a way that a global stability estimate can be established in the following weighted space:

$L^2_{\omega(t)}:=\left\{u:\mathbb{R}^{2d}\to\mathbb{R}\ :\ \int_{\mathbb{R}^{2d}}|u(x,v)|^2\,\omega(t,v)\,dx\,dv<+\infty\right\},$

with the corresponding norm, that is

$\|u\|^2_{\omega(t)}=(2\pi)^{d/2}\int_{\mathbb{R}^{2d}}|u(x,v)|^2\,e^{(\alpha(t)\|v\|)^2/2}\,dx\,dv.$

Let us now determine the function $\alpha$. To do so, we compute the time derivative of $\|f_\beta(t)\|^2_{\omega(t)}$, $f_\beta$ being the solution of (1.1). Using the Vlasov equation (1.1) and the definition (1.3) of the weight $\omega$, one has

$\frac12\frac{d}{dt}\|f_\beta(t)\|^2_{\omega(t)}=-\int_{\mathbb{R}^{2d}}f_\beta\left(v\cdot\nabla_x f_\beta+\frac{q_\beta}{m_\beta}E\cdot\nabla_v f_\beta\right)\omega\,dx\,dv+\frac12\int_{\mathbb{R}^{2d}}\alpha\,\alpha'\,\|v\|^2 f_\beta^2\,\omega\,dv\,dx.$

Then, since

$\int_{\mathbb{R}^d}f_\beta\,E\cdot\nabla_v f_\beta\,\omega\,dv=-\frac12\int_{\mathbb{R}^d}\alpha^2 f_\beta^2\,E\cdot v\,\omega\,dv,$

we obtain

$\frac12\frac{d}{dt}\|f_\beta(t)\|^2_{\omega(t)}=\frac12\int_{\mathbb{R}^{2d}}f_\beta^2\left(\frac{q_\beta}{m_\beta}\,\alpha^2\,E\cdot v+\alpha\,\alpha'\,\|v\|^2\right)\omega\,dx\,dv.$

Applying now Young's inequality to the first term, we get, for $\gamma>0$,

$\frac12\frac{d}{dt}\|f_\beta(t)\|^2_{\omega(t)}\le\frac12\int_{\mathbb{R}^{2d}}f_\beta^2\left(\frac{\gamma}{2}\frac{q_\beta^2}{m_\beta^2}\,\alpha^4\,\|E\|_\infty^2\,\|v\|^2+\frac{1}{2\gamma}+\alpha\,\alpha'\,\|v\|^2\right)\omega\,dx\,dv.$

We notice that if $\alpha$ is such that

(1.4) $\alpha\,\alpha'+\frac{\gamma}{2}\frac{q_\beta^2}{m_\beta^2}\,\alpha^4\,\|E(t)\|_\infty^2=0,$

the first and third terms cancel each other, yielding

$\frac12\frac{d}{dt}\|f_\beta(t)\|^2_{\omega(t)}\le\frac{1}{4\gamma}\|f_\beta(t)\|^2_{\omega(t)}.$

Finally, applying Grönwall's lemma gives

$\|f_\beta(t)\|_{\omega(t)}\le\|f_{\beta,0}\|_{\omega(0)}\,e^{t/4\gamma}.$

It now remains to define $\alpha$ satisfying (1.4). We remark that

(1.5) $\alpha(t):=\alpha_0\left(1+\gamma\,\frac{q_\beta^2}{m_\beta^2}\int_0^t\|E(s)\|_\infty^2\,ds\right)^{-1/2}$

is a suitable choice, where $\alpha_0$ is the value of $\alpha$ at $t=0$ and corresponds to the initial scale of the distribution function, whereas $\gamma>0$ is a free parameter. This function is positive and nonincreasing, and the parameter $\gamma$ will practically be chosen sufficiently small to ensure that $\alpha$ does not decrease too fast towards $0$ as $t$ goes to infinity. Summarizing, we have established the following result.

###### Proposition 1.1.
Assuming that the initial data $f_{\beta,0}$ belongs to $L^2_{\omega(0)}$, the solution $f_\beta$ to (1.1) satisfies, for all $t\ge 0$:

$\|f_\beta(t)\|_{\omega(t)}\le\|f_{\beta,0}\|_{\omega(0)}\,e^{t/4\gamma},$

where $\gamma$ is the constant appearing in the definition (1.5) of $\alpha$. Notice that the weight $\omega$ depends on the solution to the Vlasov-Poisson system through $E$, hence it is mandatory to control the $L^\infty$ norm of $E$ in order for this estimate to make sense. In the next section, we introduce the formulation of the Vlasov equation using the Hermite basis in velocity, and prove the analogue of Proposition 1.1 for the obtained system. Then in Section 3, we discuss conservation and stability properties for a class of spatial discretizations including the Fourier spectral method and discontinuous Galerkin approximations. In Section 4, we introduce the time discretization that will be used to compute numerical results with the discontinuous Galerkin method. Finally, in Section 5 we present numerical results for the two stream instability, the bump-on-tail problem and an ion acoustic wave, to highlight the conservation and stability properties of our discretization.

## 2. Hermite spectral form of the Vlasov equation

For simplicity, we now set all the physical constants to one and consider the one-dimensional Vlasov-Poisson system for a single species with periodic boundary conditions in space, where the Vlasov equation reads

(2.1) $\frac{\partial f}{\partial t}+v\,\frac{\partial f}{\partial x}+E\,\frac{\partial f}{\partial v}=0,$

with time $t\ge 0$, position $x\in(0,L)$ and velocity $v\in\mathbb{R}$. Moreover, the self-consistent electric field $E$ is determined by the Poisson equation

(2.2) $\frac{\partial E}{\partial x}=\rho-\rho_0,$

where the density $\rho$ is given by

$\rho(t,x)=\int_{\mathbb{R}}f(t,x,v)\,dv,\qquad t\ge 0,\ x\in(0,L),$

and $\rho_0$ is a constant ensuring the quasi-neutrality condition of the plasma

$\int_0^L(\rho-\rho_0)\,dx=0.$

### 2.1. Hermite basis

We approximate the solution $f$ of (2.1)–(2.2) by a finite sum which corresponds to a truncation of a series

(2.3) $f_{N_H}(t,x,v)=\sum_{n=0}^{N_H-1}C_n(t,x)\,\Psi_n(t,v),$

where $N_H$ is the number of modes. The issue is then to determine a well-suited class of basis functions $\Psi_n$ and to find the expansion coefficients $C_n$.
Here, our aim is to obtain a control on a norm of $f_{N_H}$, in the spirit of that established in Proposition 1.1. We then choose the following basis of normalized, scaled, time-dependent, asymmetrically weighted Hermite functions:

(2.4) $\Psi_n(t,v)=\alpha(t)\,H_n(\alpha(t)v)\,\frac{e^{-(\alpha(t)v)^2/2}}{\sqrt{2\pi}},$

where $\alpha$ is the positive nonincreasing function given by

(2.5) $\alpha(t)=\alpha_0\left(1+\gamma\int_0^t\|E_{N_H}(s)\|_\infty^2\,ds\right)^{-1/2},$

with $E_{N_H}$ an approximation of the electric field that will be defined below. The functions $H_n$ are the Hermite polynomials defined by $H_{-1}(\xi)=0$, $H_0(\xi)=1$ and, for $n\ge 1$, the recursive relation

$\sqrt{n}\,H_n(\xi)=\xi\,H_{n-1}(\xi)-\sqrt{n-1}\,H_{n-2}(\xi),\qquad\forall\, n\ge 1.$

The Hermite basis (2.4) satisfies the following orthogonality property:

(2.6) $\int_{\mathbb{R}}\Psi_n(t,v)\,H_m(\alpha(t)v)\,dv=\delta_{n,m},$

where $\delta_{n,m}$ is the Kronecker delta function. Now the objective is to obtain an evolution equation for each mode $C_n$ by substituting the expression (2.3) for $f_{N_H}$ into the Vlasov equation (2.1) and using the orthogonality property (2.6). Thanks to the definition (2.4) of $\Psi_n$ and the properties of $H_n$, we compute the different terms of (2.1). The time derivative term is given by

$\partial_t f_{N_H}=\sum_{n=0}^{N_H-1}\left(\partial_t C_n\,\Psi_n-C_n\,\frac{\alpha'}{\alpha}\left(n\,\Psi_n+\sqrt{(n+1)(n+2)}\,\Psi_{n+2}\right)\right),$

the transport term is

$v\,\partial_x f_{N_H}=\sum_{n=0}^{N_H-1}\partial_x C_n\,\frac{1}{\alpha}\left(\sqrt{n+1}\,\Psi_{n+1}+\sqrt{n}\,\Psi_{n-1}\right),$

and finally the nonlinear term is

$E_{N_H}\,\partial_v f_{N_H}=-\sum_{n=0}^{N_H-1}E_{N_H}\,C_n\,\alpha\,\sqrt{n+1}\,\Psi_{n+1}.$

Then, taking $H_n(\alpha v)$ as test function and using the orthogonality property (2.6), we obtain an evolution equation for each mode $C_n$, $0\le n\le N_H-1$:

(2.7) $\partial_t C_n-\frac{\alpha'}{\alpha}\left(n\,C_n+\sqrt{(n-1)n}\,C_{n-2}\right)+\frac{1}{\alpha}\left(\sqrt{n}\,\partial_x C_{n-1}+\sqrt{n+1}\,\partial_x C_{n+1}\right)-E_{N_H}\,\alpha\,\sqrt{n}\,C_{n-1}=0,$

with the understanding that $C_n=0$ for $n<0$ and $n\ge N_H$. Meanwhile, we first observe that the density satisfies

$\rho_{N_H}=\int_{\mathbb{R}}f_{N_H}\,dv=C_0,$

and then the Poisson equation becomes

(2.8) $\frac{\partial E_{N_H}}{\partial x}=C_0-\rho_0.$

Observe that if we take $N_H=\infty$ in the expression (2.3), we get an infinite system (2.7)-(2.8) of equations for $C_n$ and $E_{N_H}$, which is formally equivalent to the Vlasov-Poisson system (2.1)-(2.2). In what follows, we rather consider a generic weak formulation of (2.7)–(2.8).
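The normalized recurrence and the orthogonality relation (2.6) are easy to check numerically. A small sketch follows; the quadrature interval, point count and the choice $\alpha=1$ are arbitrary assumptions for the check, not part of the method:

```python
import math

def hermite(n: int, xi: float) -> float:
    """Normalized Hermite polynomials via sqrt(n) H_n = xi H_{n-1} - sqrt(n-1) H_{n-2},
    with H_{-1} = 0 and H_0 = 1."""
    h_prev, h = 0.0, 1.0
    for k in range(1, n + 1):
        h_prev, h = h, (xi * h - math.sqrt(k - 1) * h_prev) / math.sqrt(k)
    return h

def psi(n: int, v: float, alpha: float = 1.0) -> float:
    """AW basis function (2.4): alpha H_n(alpha v) exp(-(alpha v)^2/2) / sqrt(2 pi)."""
    xi = alpha * v
    return alpha * hermite(n, xi) * math.exp(-xi * xi / 2) / math.sqrt(2 * math.pi)

def inner(n: int, m: int, alpha: float = 1.0, pts: int = 2001) -> float:
    """Trapezoidal approximation of the integral of Psi_n(v) H_m(alpha v) over R,
    truncated to [-10, 10] where the Gaussian factor is negligible."""
    a, b = -10.0, 10.0
    h = (b - a) / (pts - 1)
    total = 0.0
    for i in range(pts):
        v = a + i * h
        weight = 0.5 if i in (0, pts - 1) else 1.0
        total += weight * psi(n, v, alpha) * hermite(m, alpha * v)
    return total * h

# (2.6): the result is 1 for n == m and 0 otherwise.
print(round(inner(3, 3), 6), round(abs(inner(3, 5)), 6))  # 1.0 0.0
```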
Indeed, this will allow us to straightforwardly apply the obtained results to spatial discretizations with spectral methods (see Section 3.1). Let $X$ be a subspace of $L^2(0,L)$. We look for $C_n(t,\cdot)\in X$ such that, for all $\varphi_n\in X$,

(2.9) $\frac{d}{dt}\int_0^L C_n\,\varphi_n\,dx-\frac{\alpha'}{\alpha}\int_0^L\left(n\,C_n+\sqrt{(n-1)n}\,C_{n-2}\right)\varphi_n\,dx-\frac{1}{\alpha}\int_0^L\left(\sqrt{n}\,C_{n-1}+\sqrt{n+1}\,C_{n+1}\right)\varphi_n'\,dx-\alpha\sqrt{n}\int_0^L E_{N_H}\,C_{n-1}\,\varphi_n\,dx=0,\qquad 0\le n\le N_H-1,$

and for a couple $(E_{N_H},\Phi_{N_H})$, we have, for all test functions $\eta$ and $\zeta$,

(2.10) $\begin{cases}\displaystyle\int_0^L\Phi_{N_H}\,\eta'\,dx=\int_0^L E_{N_H}\,\eta\,dx,\\[6pt]\displaystyle-\int_0^L E_{N_H}\,\zeta'\,dx=\int_0^L(C_0-\rho_0)\,\zeta\,dx.\end{cases}$

In the rest of this section, we take for $X$ the space of smooth periodic functions on $(0,L)$, so that (2.9)–(2.10) is simply the weak formulation of (2.7)–(2.8).

### 2.2. Conservation properties

###### Proposition 2.1.

For any $N_H\ge 3$, consider the distribution function $f_{N_H}$ given by the truncated series

$f_{N_H}(t,x,v)=\sum_{n=0}^{N_H-1}C_n(t,x)\,\Psi_n(t,v),$

where $(C_n)_{0\le n\le N_H-1}$ is a solution to the Vlasov-Poisson system written as (2.9)-(2.10). Then mass, momentum and total energy are conserved, that is,

$\frac{d}{dt}\int_0^L\frac{C_k}{\alpha^k}\,dx=0,\qquad k=0,1,$

and, for the total energy,

$\mathcal{E}_{N_H}(t):=\frac12\int_0^L\left(\frac{1}{\alpha^2}\left(\sqrt{2}\,C_2+C_0\right)+|E_{N_H}|^2\right)dx=\mathcal{E}_{N_H}(0).$

###### Proof.

We consider the first three equations of (2.9): for all $\varphi_0$, $\varphi_1$, $\varphi_2$,

(2.11) $\begin{cases}\displaystyle\frac{d}{dt}\int_0^L C_0\,\varphi_0\,dx-\frac{1}{\alpha}\int_0^L C_1\,\varphi_0'\,dx=0,\\[6pt]\displaystyle\frac{d}{dt}\int_0^L C_1\,\varphi_1\,dx-\frac{\alpha'}{\alpha}\int_0^L C_1\,\varphi_1\,dx-\frac{1}{\alpha}\int_0^L\left(C_0+\sqrt{2}\,C_2\right)\varphi_1'\,dx-\alpha\int_0^L E_{N_H}\,C_0\,\varphi_1\,dx=0,\\[6pt]\displaystyle\frac{d}{dt}\int_0^L C_2\,\varphi_2\,dx-\frac{\alpha'}{\alpha}\int_0^L\left(2C_2+\sqrt{2}\,C_0\right)\varphi_2\,dx-\frac{1}{\alpha}\int_0^L\left(\sqrt{2}\,C_1+\sqrt{3}\,C_3\right)\varphi_2'\,dx-\sqrt{2}\,\alpha\int_0^L E_{N_H}\,C_1\,\varphi_2\,dx=0,\end{cases}$

which will be related to the conservation of mass, momentum and energy. Thus, let us look at the conservation properties from (2.11). First, the total mass is defined as

$\int_0^L\int_{\mathbb{R}}f_{N_H}(t,x,v)\,dv\,dx=\int_0^L C_0(t,x)\,dx,$

hence the conservation of mass directly comes from (2.11) by taking $\varphi_0=1$ as test function in the first equation. Then the momentum is given by

$\int_0^L\int_{\mathbb{R}}v\,f_{N_H}(t,x,v)\,dv\,dx=\int_0^L\frac{C_1(t,x)}{\alpha(t)}\,dx.$

We have

$\frac{d}{dt}\int_0^L\frac{C_1}{\alpha}\,dx=\int_0^L\left(\frac{\partial_t C_1}{\alpha}-\frac{\alpha'}{\alpha^2}\,C_1\right)dx,$

which, taking $\varphi_1=1$ in the second equation of (2.11), yields

$\frac{d}{dt}\int_0^L\frac{C_1}{\alpha}\,dx=\int_0^L E_{N_H}\,C_0\,dx.$

Now, choosing $\zeta=E_{N_H}$ in the second equation of (2.10) gives

$\frac{d}{dt}\int_0^L\frac{C_1}{\alpha}\,dx=\rho_0\int_0^L E_{N_H}\,dx-\int_0^L E_{N_H}\,\partial_x E_{N_H}\,dx=\rho_0\int_0^L E_{N_H}\,dx=0,$

which gives the conservation of momentum.
Finally, to prove the conservation of the total energy $\mathcal{E}_{N_H}$, we compute the variation of the kinetic energy defined as

$\mathcal{E}^K_{N_H}(t)=\frac12\int_0^L\int_{\mathbb{R}}v^2\,f_{N_H}(t,x,v)\,dv\,dx=\frac{1}{2\alpha^2}\int_0^L\left(\sqrt{2}\,C_2+C_0\right)dx.$

We have

$\frac{d\mathcal{E}^K_{N_H}(t)}{dt}=\frac{1}{2\alpha^2}\int_0^L\left(\sqrt{2}\,\partial_t C_2+\partial_t C_0\right)dx-\frac{\alpha'}{\alpha^3}\int_0^L\left(\sqrt{2}\,C_2+C_0\right)dx.$

Thus, combining the first equation of (2.11) with $\varphi_0=1$ and the third equation of (2.11) with $\varphi_2=1$, we get

(2.12) $\frac{d\mathcal{E}^K_{N_H}(t)}{dt}=\frac{1}{\alpha}\int_0^L E_{N_H}\,C_1\,dx.$

Finally, taking $\eta=C_1$ in the first equation of (2.10), we obtain

$\frac{d\mathcal{E}^K_{N_H}(t)}{dt}=\frac{1}{\alpha}\int_0^L\Phi_{N_H}\,\partial_x C_1\,dx.$

We now compute the time derivative of the potential energy defined by

$\mathcal{E}^P_{N_H}(t):=\frac12\int_0^L|E_{N_H}|^2\,dx.$

Using $\eta=\partial_t E_{N_H}$ as test function in the first equation of (2.10) and an integration by parts, we have

$\frac{d\mathcal{E}^P_{N_H}(t)}{dt}=\int_0^L\partial_t E_{N_H}\,E_{N_H}\,dx=\int_0^L\Phi_{N_H}\,\partial_x\partial_t E_{N_H}\,dx=-\int_0^L\partial_x\Phi_{N_H}\,\partial_t E_{N_H}\,dx.$

Now, choosing $\zeta=\Phi_{N_H}$ in the time derivative of the second equation of (2.10), and then $\varphi_0=\Phi_{N_H}$ in the first equation of (2.9), we finally get

$\frac{d\mathcal{E}^P_{N_H}(t)}{dt}=\int_0^L\partial_t C_0\,\Phi_{N_H}\,dx=\frac{1}{\alpha}\int_0^L C_1\,\partial_x\Phi_{N_H}\,dx=-\frac{1}{\alpha}\int_0^L\partial_x C_1\,\Phi_{N_H}\,dx=-\frac{d\mathcal{E}^K_{N_H}(t)}{dt}.$

This concludes the proof, since the total energy is the sum of the kinetic and potential energies. ∎

### 2.3. Weighted $L^2$ stability of $f_{N_H}$

We now establish the discrete counterpart of Proposition 1.1, namely the stability in an appropriately weighted $L^2$ norm. More precisely, with $\alpha$ defined by (2.5), we consider the weight $\omega(t,v):=\sqrt{2\pi}\,e^{(\alpha(t)v)^2/2}$, and denote by $\|\cdot\|_{\omega(t)}$ the corresponding weighted norm. Using the definition (2.3) of $f_{N_H}$, we have

$\|f_{N_H}(t)\|^2_{\omega(t)}=\sum_{n,m=0}^{N_H-1}\int_0^L\int_{\mathbb{R}}\alpha\,C_n\,C_m\,\Psi_m(t,v)\,H_n(\alpha(t)v)\,dv\,dx,$

which gives, using the orthogonality property (2.6), the following expression for the weighted norm of $f_{N_H}$:

(2.13) $\|f_{N_H}(t)\|^2_{\omega(t)}=\sum_{n=0}^{N_H-1}\alpha\int_0^L C_n^2\,dx.$

###### Proposition 2.2.

Assuming that $f_{N_H}(0)\in L^2_{\omega(0)}$, the distribution function $f_{N_H}$ given by the truncated series (2.3) satisfies the following stability estimate:

$\|f_{N_H}(t)\|_{\omega(t)}\le\|f_{N_H}(0)\|_{\omega(0)}\,e^{t/4\gamma},\qquad\forall\, t\ge 0,$

where $\gamma$ is the constant appearing in the definition (2.5) of $\alpha$.

###### Proof.

We compute the time derivative of $\|f_{N_H}(t)\|^2_{\omega(t)}$ defined by (2.13):

$\frac12\frac{d}{dt}\|f_{N_H}(t)\|^2_{\omega(t)}=\sum_{n=0}^{N_H-1}\left(\alpha\int_0^L\partial_t C_n\,C_n\,dx+\frac{\alpha'}{2}\int_0^L C_n^2\,dx\right).$
Thanks to equation (2.9) with $\varphi_n=C_n$, we then have to estimate

$$\frac12\frac{d}{dt}\sum_{n=0}^{N_H-1}\alpha(t)\int_0^LC_n(t,x)^2\,dx=\sum_{n=0}^{N_H-1}\int_0^L\alpha'\left[\left(n+\frac12\right)C_n^2+\sqrt{n(n-1)}\,C_nC_{n-2}\right]dx+\sum_{n=0}^{N_H-1}\int_0^L\left(\sqrt n\,C_{n-1}+\sqrt{n+1}\,C_{n+1}\right)\partial_xC_n\,dx+\sum_{n=0}^{N_H-1}\int_0^L\alpha^2\sqrt n\,E_{N_H}\,C_nC_{n-1}\,dx.\tag{2.14}$$

First of all, the transport term vanishes. Indeed, reindexing the sum over $n$ and using that $C_n=0$ for $n\ge N_H$, we have

$$\sum_{n=0}^{N_H-1}\int_0^L\left(\sqrt n\,C_{n-1}+\sqrt{n+1}\,C_{n+1}\right)\partial_xC_n\,dx=\sum_{n=0}^{N_H-1}\int_0^L\sqrt n\,\partial_x\left(C_{n-1}C_n\right)dx=0.$$

Then on the one hand, using again that $C_n=0$ for all $n\ge N_H$ and for $n<0$, we have

$$\sum_{n=0}^{N_H-1}\left[\left(n+\frac12\right)C_n^2+\sqrt{n(n-1)}\,C_nC_{n-2}\right]=\frac12\sum_{n=1}^{N_H+1}\left(\sqrt n\,C_n+\sqrt{n-1}\,C_{n-2}\right)^2.$$

On the other hand, we notice that

$$\sum_{n=0}^{N_H-1}\sqrt n\,C_nC_{n-1}=\frac12\sum_{n=1}^{N_H+1}C_{n-1}\left(\sqrt n\,C_n+\sqrt{n-1}\,C_{n-2}\right).$$

Gathering these two identities in (2.14), we get

$$\frac12\frac{d}{dt}\sum_{n=0}^{N_H-1}\alpha\int_0^LC_n^2\,dx=\frac{\alpha'}{2}\sum_{n=1}^{N_H+1}\int_0^L\left(\sqrt n\,C_n+\sqrt{n-1}\,C_{n-2}\right)^2dx+\frac12\sum_{n=1}^{N_H+1}\int_0^LE_{N_H}\,\alpha^2\,C_{n-1}\left(\sqrt n\,C_n+\sqrt{n-1}\,C_{n-2}\right)dx.$$

Applying Young's inequality in the second sum, this provides:

$$\frac12\frac{d}{dt}\sum_{n=0}^{N_H-1}\alpha\int_0^LC_n^2\,dx\le\frac{\alpha'}{2}\sum_{n=1}^{N_H+1}\int_0^L\left(\sqrt n\,C_n+\sqrt{n-1}\,C_{n-2}\right)^2dx+\frac12\sum_{n=1}^{N_H+1}\int_0^L\left(\frac{\gamma}{2}\|E_{N_H}\|^2_\infty\,\alpha^3\left(\sqrt n\,C_n+\sqrt{n-1}\,C_{n-2}\right)^2+\frac{1}{2\gamma}\,\alpha\,C_{n-1}^2\right)dx.$$

Yet, using the definition (2.5) of $\alpha$, we have that

$$\alpha'(t)=-\frac{\gamma}{2}\,\|E_{N_H}(t)\|^2_\infty\,\alpha(t)^3,$$

which yields

$$\frac12\frac{d}{dt}\sum_{n=0}^{N_H-1}\alpha\int_0^LC_n^2\,dx\;\le\;\frac{1}{4\gamma}\sum_{n=1}^{N_H+1}\alpha\int_0^LC_{n-1}^2\,dx\;\le\;\frac{1}{4\gamma}\sum_{n=0}^{N_H-1}\alpha\int_0^LC_n^2\,dx.$$

Using Grönwall's lemma, this gives the expected result. ∎

### 2.4. Control of α

Using the definition (2.13) of the weighted norm, the stability result established in Proposition 2.2 gives a stability result in $L^2$ for the coefficients $C_n$, provided that $\alpha$ is bounded from below by a positive constant. Due to the definition (2.5) of $\alpha$, this is achieved as soon as $\|E_{N_H}\|_\infty$ is controlled. This result is given in the following proposition.

###### Proposition 2.3.

Under the assumptions of Proposition 2.2, the solution of (2.10) satisfies: for all $T>0$, there exists a constant $C_T$ depending on $T$ such that for all $t\in[0,T]$,

$$\|E_{N_H}(t)\|_\infty\le C_T.$$

###### Proof.

Since we are in one space dimension, by the Sobolev inequality there exists a constant $c>0$ such that for all $t\ge0$,

$$\|E_{N_H}(t)\|^2_\infty\le c\,\|E_{N_H}(t)\|^2_{H^1}.$$

Moreover, since $E_{N_H}$ has zero mean, the Poincaré-Wirtinger inequality applies and then there exists $c'>0$ such that

$$\|E_{N_H}(t)\|^2_\infty\le c'\,\|\partial_xE_{N_H}(t)\|^2_2.$$
Taking $\zeta=\partial_xE_{N_H}$ in the second equation of (2.10) and applying the Cauchy-Schwarz inequality gives

$$\|\partial_xE_{N_H}(t)\|_2^2=\int_0^L\partial_xE_{N_H}\left(C_0-\rho_0\right)dx=\int_0^L\partial_xE_{N_H}\,C_0\,dx\le\|\partial_xE_{N_H}(t)\|_2\,\|C_0(t)\|_2,$$

and then one obtains

$$\|E_{N_H}(t)\|^2_\infty\le c'\,\|C_0(t)\|_2^2.$$

Hence, we need to control
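The two reindexing identities in the proof of Proposition 2.2 carry the whole estimate, so here is a quick numerical sanity check of them — a sketch only, assuming the truncation convention $C_n=0$ for $n<0$ and $n\ge N_H$, with random coefficients standing in for the $x$-dependent ones:

```python
import numpy as np

rng = np.random.default_rng(0)
NH = 8
C = rng.normal(size=NH)  # stand-ins for C_0, ..., C_{NH-1}

def c(n):
    # truncation convention: C_n = 0 for n < 0 and for n >= NH
    return C[n] if 0 <= n < NH else 0.0

# identity 1: sum of (n + 1/2) C_n^2 + sqrt(n(n-1)) C_n C_{n-2}
lhs1 = sum((n + 0.5) * c(n) ** 2 + np.sqrt(n * (n - 1)) * c(n) * c(n - 2)
           for n in range(NH))
rhs1 = 0.5 * sum((np.sqrt(n) * c(n) + np.sqrt(n - 1) * c(n - 2)) ** 2
                 for n in range(1, NH + 2))
assert np.isclose(lhs1, rhs1)

# identity 2: sum of sqrt(n) C_n C_{n-1}
lhs2 = sum(np.sqrt(n) * c(n) * c(n - 1) for n in range(NH))
rhs2 = 0.5 * sum(c(n - 1) * (np.sqrt(n) * c(n) + np.sqrt(n - 1) * c(n - 2))
                 for n in range(1, NH + 2))
assert np.isclose(lhs2, rhs2)
```

Both identities follow by shifting the summation index and absorbing the vanishing boundary terms, exactly as in the proof.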
http://mathhelpforum.com/geometry/16483-isoseles.html
2. Let $N$ be the midpoint of $\overline{BE}.$ $\overline{MN}\parallel\overline{BH}\implies\overline{NM}\perp\overline{AH}\implies\overline{AM}\perp\overline{HN}.$ Finally $\overline{HN}\parallel\overline{CE}\implies\overline{AM}\perp\overline{EC}~\blacksquare$
http://eprints.iisc.ernet.in/8139/
# Pressure-induced valence changes in mixed-valent systems

Chandran, Leena and Krishnamurthy, HR and Ramakrishnan, TV (1992) Pressure-induced valence changes in mixed-valent systems. In: Journal of Physics: Condensed Matter, 4 (34). pp. 7067-7094.

## Abstract

Mixed-valent systems based on Ce, Sm, Eu and Yb exhibit a wide range of behaviour with respect to valence changes under the application of pressure. We present a semi-phenomenological model for this behaviour based on the competition between the usual elastic energy cost and the magnetic energy gain due to valence fluctuations. For the latter we use a mean-field Anderson lattice description and incorporate the effects of pressure by introducing a volume dependence to the Anderson model parameters $\epsilon_f$ and $\Delta$. In contrast to existing models such as the Kondo Volume Collapse theory of Allen and Martin, which describes magnetic to non-magnetic transitions without sizable valence change (e.g. $\gamma$-Ce $\rightarrow$ $\alpha$-Ce), the Anderson lattice model developed here describes systems with both small and large valence changes; the transition can be continuous (e.g. $EuPd_2Si_2$) or discontinuous ($EuPd_2Si_2$ alloyed with Au).

Item Type: Journal Article. DOI: http://dx.doi.org/10.1088/0953-8984/4/34... Copyright of this article belongs to Institute of Physics.
https://myknowsys.com/blog/2012/03/316-mathematics.html
# Exponents

Follow the Knowsys Method and remember to read the problem, identify the bottom line, assess your options, and attack the problem. Then loop back to check that you answered the right question. For the vast majority of problems, you do not need to look at the answer choices before this point.

What is the largest possible integer value of n for which $5^{n}$ divides into $50^{7}$?

The bottom line is easy to find here: n = ? Now assess your options. You could look at the answer choices and plug them in, calculate each power of 5, and check whether $50^{7}$ divides evenly by it. But there must be a faster way! This is an exponent problem, so think about your exponent rules. If you can get the bases to match, finding the appropriate value of n will be easy. Fortunately, 50 is a multiple of 5. It is also a multiple of 25.

$50=2(5^{2})$

Therefore, $50^{7}=(2(5^{2}))^{7}$

Now you can apply the power-of-a-product and power-of-a-power rules.

$50^{7}=2^{7}(5^{2})^{7}=2^{7}(5^{14})$

Now you know that $5^{14}$ is a factor of $50^{7}$, and no higher power of 5 divides it, so the largest possible value of n is 14. Loop back to the answer choices.

(A) 2 (B) 7 (C) 9 (D) 10 (E) 14

The answer is (E).
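The prime-factor argument can be confirmed with a brute-force check (not the test-day method, just a verification):

```python
# Count how many times 5 divides 50**7 by repeated division.
N = 50 ** 7
n = 0
while N % 5 == 0:
    N //= 5
    n += 1
print(n)  # 14, matching answer choice (E)
```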
http://mathhelpforum.com/discrete-math/179895-number-functions-one-set-another.html
# Math Help - Number of functions from one set to another?

1. ## Number of functions from one set to another?

How many functions, injections, surjections, bijections and relations from A to B are there, when A = {a, b, c}, B = {0, 1}?

Edit: I know the answer should be 64, but I don't know how to arrive at that.

2. Originally Posted by posix_memalign How many functions, injections, surjections, bijections and relations from A to B are there, when A = {a, b, c}, B = {0, 1}? Edit: I know the answer should be 64, but I don't know how to arrive at that. I have no idea what you mean by 64. There are $2^3$ functions $A\to B.$ There are no injections $A\to B$, and therefore no bijections. There are 6 surjections. Because $\|A\times B\|=6$ there are $2^6$ relations from $A$ to $B$ if you allow the empty relation.

3. Originally Posted by Plato I have no idea what you mean by 64. There are $2^3$ functions $A\to B.$ There are no injections $A\to B$. Therefore, no bijections. There are 6 surjections. Because $\|A\times B\|=6$ there are $2^6$ relations from $A\to B$ if you allow the empty relation. Thanks, but why do you need the $\|$? Why isn't the cartesian product by itself sufficient?

4. Originally Posted by posix_memalign Thanks, but why do you need the $\|$? Why isn't the cartesian product by itself sufficient? $\#(X)=|X|=\|X\|$ — these are all common symbols standing for the number of elements in a given set.
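The counts in the accepted answer are small enough to enumerate directly; this sketch encodes each function A → B as a tuple of images of a, b, c:

```python
from itertools import product

A, B = ('a', 'b', 'c'), (0, 1)
funcs = list(product(B, repeat=len(A)))            # one image per element of A
assert len(funcs) == 2 ** 3                        # 8 functions

injections = [f for f in funcs if len(set(f)) == len(A)]
assert len(injections) == 0                        # |A| > |B|, so none (and no bijections)

surjections = [f for f in funcs if set(f) == set(B)]
assert len(surjections) == 6                       # 2^3 minus the 2 constant maps

relations = 2 ** (len(A) * len(B))                 # subsets of A x B, empty relation included
assert relations == 64                             # the 64 the asker remembered
```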
https://math.stackexchange.com/questions/1989365/find-square-and-triangular-numbers
Find square and triangular numbers

Numbers that are both square and triangular satisfy $n^2=\frac{m(m+1)}{2}$. Multiplying by 8, this can be expressed as $8n^2=4m(m+1)=4m^2+4m=(2m+1)^2-1$. Taking $a=2m+1$ and $b=2n$ the expression becomes $2b^2=a^2-1$, or $a^2-2b^2=1$. After a bit of factoring the previous equation becomes $1=(a-\sqrt{2}b)(a+\sqrt{2}b)$. One of the solutions is $(a,b)=(3,2)$, i.e. $(m,n)=(1,1)$. From here additional solutions can be found recursively: once there is a solution, say $(m,n)$, there is another $(1+im+jn, 1+km+ln)$ for some integers $i,j,k,l$. I need help proving this.

• Why do you say "previous equation has no integer solutions" when it clearly does (because you give the solution $a=3, b=2$ only two lines later)? – Gabriel Burns Oct 28 '16 at 17:46
• @GabrielBurns actually just the polynomial can't be expressed in integer terms – php-- Oct 28 '16 at 17:50
• It has something to do with "triangle" and "square" numbering of the elements of $\mathbb N^2$. – hamam_Abdallah Oct 28 '16 at 17:54
• I don't think there are other solutions since the "triangle" numbering is faster than the square. – hamam_Abdallah Oct 28 '16 at 17:56
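A numerical sketch of the recursion being asked about: composing a Pell solution with the fundamental solution $(3,2)$ sends $(a,b)\mapsto(3a+4b,\,2a+3b)$, which in the $(m,n)$ variables is $(m,n)\mapsto(3m+4n+1,\,2m+3n+1)$, i.e. $i=3$, $j=4$, $k=2$, $l=3$. The script below checks the first few iterates; it is a verification of the pattern, not the requested proof:

```python
# Compose solutions of a^2 - 2 b^2 = 1 with the fundamental solution (3, 2):
# (a, b) -> (3a + 4b, 2a + 3b).  In (m, n) = ((a - 1)/2, b/2) variables this is
# (m, n) -> (3m + 4n + 1, 2m + 3n + 1).
a, b = 3, 2
sols = []
for _ in range(5):
    m, n = (a - 1) // 2, b // 2
    assert a * a - 2 * b * b == 1          # Pell equation holds
    assert n * n == m * (m + 1) // 2       # n^2 is the m-th triangular number
    if sols:
        mp, nq = sols[-1]
        assert (m, n) == (3 * mp + 4 * nq + 1, 2 * mp + 3 * nq + 1)
    sols.append((m, n))
    a, b = 3 * a + 4 * b, 2 * a + 3 * b
print(sols)  # [(1, 1), (8, 6), (49, 35), (288, 204), (1681, 1189)]
```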
https://safe.nrao.edu/wiki/bin/view/Main/MurphyTestMathMode
## Test of Math Mode

Actually, both math and LaTeX2html. Taken from RefBendDelayCalc but with the figures left out.

## The ALMA Delay Server

The ALMA delay server is a software module which is responsible for distributing the delay correction for each antenna to the various bits of hardware in the antenna and to the correlator. The delay correction is defined as:

Δτ = τ_offset + τ_elec + τ_geom + τ_atm

where τ_offset is some arbitrary offset to make sure all delays are positive, τ_elec is a per-antenna constant representing the electronic delay, τ_geom is the geometric delay derived by making a call to CALC, and τ_atm is the atmospheric delay correction, which can be gotten from CALC but is (as of 2008/03/18) not used and set to zero. In the long term the atmospheric delay will be derived using ATM to calculate the zenith delay, coupled to an appropriate mapping function.

### Overview

The delay experienced by an incoming signal due to its propagation through the Earth's atmosphere is given by:

τ_atm = (1/c) ∫_S (n − 1) ds

where S is the path through the atmosphere and n is the refractive index of the atmosphere. Since n is very nearly unity for the Earth's atmosphere one normally uses the "refractivity" (N) instead of the index of refraction. Refractivity and refractive index are related as follows:

N = 10⁶ (n − 1)

Furthermore, the atmospheric delay can be separated into contributions due to the dry and wet atmosphere:

τ_atm = τ_dry + τ_wet

where τ_dry is the contribution due to dry air while τ_wet is the contribution due to wet air. In general τ_dry and τ_wet are parameterized in terms of a zenith contribution to the delay, which is dependent upon local atmospheric conditions (τ_z), and a "mapping function" m(E) which relates delays at an arbitrary elevation angle E to that at the zenith:

τ_i(E) = τ_z,i · m_i(E),  i ∈ {dry, wet}

Since the elevation angle E is the unrefracted source elevation, refraction effects are included in the mapping functions m(E). In the following I describe calculations of τ_z and m(E).

### Zenith Delay

The contribution to the atmospheric delay at the zenith (τ_z) is a measure of the integrated refractivity of the atmosphere at the zenith.
In general, the refractivity of moist air at microwave frequencies depends upon the permanent and induced dipole moments of the molecular species that make up the atmosphere. The primary species that make up the dry atmosphere, nitrogen and oxygen, do not have permanent dipole moments, so contribute to the refractivity via their induced dipole moments. Water vapour does have a permanent dipole moment. Permanent dipole moments contribute to the refractivity as P/T², while induced dipole moments contribute as P/T, where P is the partial pressure and T is the temperature of the species. A simple parameterization of the refractivity at the zenith is given by the Smith-Weintraub equation (Smith & Weintraub 1953):

N = k₁ (P_d / T) + k₂ (P_w / T) + k₃ (P_w / T²)

where P_d and P_w are the partial pressures due to dry and wet air, T is the temperature of the atmosphere, and k₁, k₂, and k₃ are constants. The pressures and temperature are usually taken to be equivalent to the local ambient values at the antenna station under study. The dry and wet air refractivities are then given by:

N_dry = k₁ (P_d / T)
N_wet = k₂ (P_w / T) + k₃ (P_w / T²)

The dry air contribution to this refractivity (N_dry) is primarily due to oxygen and nitrogen, and is nearly in hydrostatic equilibrium. Therefore, it does not depend upon the detailed behaviour of dry air pressure and temperature along the path through the atmosphere, and can be derived based on local atmospheric temperature and pressure measurements. The wet air refractivity (N_wet) can be inferred from local water vapour radiometry measurements. Alternatively, one can derive the total atmospheric refractivity using an atmospheric model (such as ATM) with local atmospheric conditions as input.

### Mapping Function

The simplest form for the mapping function m(E), which relates the delay at an arbitrary elevation angle E to that at the zenith, is given by the plane-parallel approximation for the Earth's atmosphere:

m(E) = 1 / sin(E)

This simple form is in fact inadequate, which led Marini (1972) to consider corrections to this simple functional form which accounted for the Earth's curvature.
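The Smith-Weintraub split above can be sketched as follows; the constants k₁, k₂, k₃ here are typical values quoted in the literature (in K/mb, K/mb and K²/mb respectively), not necessarily the ones used by CALC or the ALMA software:

```python
def refractivity(P_dry_mb, P_wet_mb, T_K, k1=77.6, k2=72.0, k3=3.75e5):
    """Zenith refractivity from the Smith-Weintraub parameterization.

    k1, k2, k3 are assumed typical literature values, not CALC's constants.
    Returns (N_dry, N_wet); the refractive index is n = 1 + 1e-6 * (N_dry + N_wet).
    """
    N_dry = k1 * P_dry_mb / T_K                             # induced dipoles: P/T
    N_wet = k2 * P_wet_mb / T_K + k3 * P_wet_mb / T_K ** 2  # permanent dipole: P/T and P/T^2
    return N_dry, N_wet

# rough sea-level conditions: 1000 mb dry, 10 mb water vapour, 15 C
N_dry, N_wet = refractivity(P_dry_mb=1000.0, P_wet_mb=10.0, T_K=288.0)
N = N_dry + N_wet
```

With these illustrative inputs N comes out around 320, i.e. n ≈ 1.00032, consistent with the statement that n is very nearly unity.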
Assuming an exponential atmospheric profile where the atmospheric refractivity varies exponentially with height above the antenna, Marini (1972) developed a continued fraction form for the mapping function:

m(E) = 1 / (sin E + a / (sin E + b / (sin E + c)))

where I include only the first three terms in the continued fraction. A slight modification to the Marini (1972) continued fraction functional form, which forces m(E) = 1 at the zenith, replaces sin E in the even-numbered terms (i.e. the second, fourth, sixth, etc.) with tan E. Chao (1974) introduced this modification by truncating the Marini (1972) form to include only two terms:

m(E) = 1 / (sin E + a / (tan E + b))

A more generalized continued-fraction form for the mapping function was developed by Yan and Ping (1995), in which the sin E arguments are replaced by a "normalized effective zenith argument" that includes the "normalized effective height" of the atmosphere; for an exponentially-distributed atmosphere this effective height can be evaluated in closed form. Normally the integration is started from the ground. The constants a, b, c, etc. in the continued fraction forms above are generally derived from analytic fits to ray-tracing results either for standard atmospheres or for observed atmospheric profiles based on radiosonde measurements. The mapping functions of Niell (1996) and Davis (1985) are derived in this way.

A physically more correct mapping function has been derived by Lanyi (1984). Unlike previous mapping functions, Lanyi does not fully separate the dry and wet contributions to the delay, which is a more physically correct approximation. It is based on an ideal model atmosphere whose temperature is constant from the surface to the inversion layer, then decreases linearly with height at the lapse rate from the inversion layer to the tropopause, and is assumed to be constant above the tropopause. This mapping function is designed to be a semi-analytic approximation to the atmospheric delay integral that retains an explicit temperature profile that can be determined using meteorological measurements.
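The plane-parallel form and a Chao-style two-term continued fraction can be compared numerically; the coefficients a and b below are illustrative dry-term values from the literature, not the ones adopted for ALMA:

```python
import math

def m_plane_parallel(E):
    """Plane-parallel mapping function, m(E) = 1 / sin(E)."""
    return 1.0 / math.sin(E)

def m_chao(E, a=0.00143, b=0.0445):
    """Two-term continued fraction with tan(E) in the second term,
    so that m(E) -> 1 exactly at the zenith (E = pi/2).
    a, b are illustrative dry-term coefficients."""
    return 1.0 / (math.sin(E) + a / (math.tan(E) + b))

for deg in (90, 60, 30, 10, 5):
    E = math.radians(deg)
    print(f"{deg:2d} deg: 1/sin = {m_plane_parallel(E):7.3f}, two-term = {m_chao(E):7.3f}")
```

The two forms agree at high elevation and diverge at low elevation, where the Earth's curvature matters and the plane-parallel approximation overestimates the air mass.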
The mapping function is expanded as a second-order polynomial in the dry and wet zenith delays, plus the largest third-order term. It is nonlinear in these two delays, and it also contains terms which couple them, thus including contributions which arise from the bending of the signal path through the atmosphere. The total atmospheric delay in the Lanyi (1984) model is the sum of dry, wet, and bending terms, each expressed in terms of moments of the refractivity. The bending terms are evaluated for the ideal model atmosphere and thus give the dependence of the delay on its four parameters — the surface temperature, the inversion layer height, the lapse rate, and the tropopause height — with standard values (appropriate for mid-latitudes) adopted where measurements are lacking. Therefore, the Lanyi (1984) model relies upon accurate surface meteorological measurements at the time of the observations to which the delay model is applied.

### Antenna Height Correction to Total Atmospheric Delay

In the calculation of the zenith atmospheric delay at an antenna it is assumed that the atmospheric properties (P, T, RH) are the values measured at the focal plane of the antenna. For example, in VLBI each station has a set of associated weather measurements which are used to calculate the zenith delay. For a clustered array like the VLA or ALMA, the effect of the differences in antenna focal plane height above some reference point needs to be accounted for. For the VLA (not EVLA), CALC was not used to calculate the atmospheric delay. The antenna height correction was incorporated with a simple atmospheric delay correction by correcting for the path difference between each VLA antenna and a reference point at the center of the array. For the VLA case, the extra atmospheric path (in ns) due to a difference in antenna height above the center-of-the-array reference point is a two-term expression involving the atmospheric refractivity, the atmospheric zenith delay calculated using the VLA weather station (which is located near the center of the array), and the geometric w coordinate of the antenna (in ns).
The first term is the antenna height correction to the zenith delay, while the second term is a simple atmospheric delay correction. For EVLA, CALC will be used to calculate both geometric and atmospheric delay. We believe (though have not confirmed) that CALC also calculates the antenna height correction (the first term above) given antenna heights relative to the reference point at the center of the array. ALMA will need to include this antenna height correction term. A simple estimate of the magnitude of the antenna height difference correction at the zenith can be gotten by assuming that the pressure P changes linearly with height. Then 52 cm of additional antenna height (the current difference in height between the two ATF antennas) out of a total atmospheric height of 8 km would correspond to a pressure change of:

ΔP ≈ (0.52 m / 8000 m) × 1053 mb ≈ 0.068 mb

where I have assumed P = 1053 mb. The dry term zenith atmospheric delay changes approximately like 2.3 mm/mb of pressure change. A pressure change of 0.068 mb corresponds to approximately 156 micron of path difference. This is consistent with alternate back-of-the-envelope calculations of this quantity.

### Differential Excess Atmospheric Delay Between Two Antennas

NOTE: The following is just an aside. Since CALC or any other analysis of the atmospheric delay at an antenna calculates the total integrated delay along the path of observation, the differential delay between two antennas is accounted for in any differencing calculations done during baseline determination. The differential delay induced in an interferometer by a horizontally stratified troposphere results from the difference in zenith angle of the source at the antennas. Thompson, Moran, and Swenson (2001), pp. 516-518 discuss the atmospheric delay induced along an interferometer baseline.
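The back-of-the-envelope height-correction estimate above reduces to two lines of arithmetic (keeping full precision gives ~157 μm, consistent with the ~156 μm quoted):

```python
# Linearized pressure change for a 52 cm height difference in an 8 km
# atmosphere at P = 1053 mb, then path at ~2.3 mm of dry zenith delay per mb.
dP_mb = (0.52 / 8000.0) * 1053.0
path_um = 2.3e3 * dP_mb          # 2.3 mm/mb = 2.3e3 um/mb
print(f"{dP_mb:.3f} mb -> {path_um:.0f} um")  # 0.068 mb -> 157 um
```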
The excess path length is given by:

L ≈ 10⁻⁶ N₀ ∫ e^(−h/h₀) dy

where N₀ is the refractivity at the Earth's surface, h is the height above the Earth's surface, h₀ is the atmospheric scale height, y is the length coordinate along the direction to the source, and E is the antenna elevation while observing the source. Note that refraction is neglected. One can relate h, y, E and the Earth's radius r₀ using the cosine rule on the triangle formed by r₀, r₀ + h and y (see Figure 13.4 in TMS, page 517; the figure is not reproduced here). Since h is small compared to r₀ within the troposphere, the quadratic term in h can be dropped, and since y is likewise small compared to r₀, the relation reduces to h ≈ y sin E + y² cos²E / (2r₀). (Thanks to Dick Thompson for filling in some of the details of this calculation.) Substituting this into the integral for L, the second term in the exponential is small and can be expanded with a Taylor series; integration then yields L as a function of E and the atmospheric parameters. Taking the derivative of L with respect to E and multiplying this derivative by the baseline length divided by the speed of light yields the atmospheric differential delay between two antennas separated by baseline D, where D is in m, h₀ is in km, and r₀ is in km. Note that one must calculate N₀ using a suitable atmospheric model which uses measurements of the local atmospheric pressure, temperature and relative humidity to derive the resultant differential residual delay.

-- PatrickMurphy - 2009-01-21

Topic revision: r2 - 2009-12-07, PatrickMurphy

Copyright © by the contributing authors. All material on this collaboration platform is the property of the contributing authors.
https://www.stevenabbott.co.uk/practical-coatings/Dot-Size.php
Dot Simulation

Quick Start

This is a basic simulation of what happens when idealised inkjet drops land on a surface and spread to an equilibrium contact angle - leaving out drop-to-drop interactions. The implications for the visual appearance and the printed optical density are significant.

Credits

Thanks to Neil Chilton at Printed Electronics Ltd in the UK who first showed me why these issues are important to real-world printing.
Small open areas give large reductions in OD

Suppose you had a thick ink coverage that should give an optical density (OD) of 3, absorbing 99.9% of the light. But 5% of the area has no ink because of problems with the dots. The effective light absorption is 94.9%, which translates to OD = -log10(1 - 0.949), which is 1.3. So small gaps between dots matter. It also means that if you had used 2/3 of the ink, so the total absorption was OD = 2, i.e. 99% of the light, the resulting OD would be OD = -log10(1 - 0.94), which is 1.22. So 1/3 of your ink is providing only 0.07 extra OD. If, in the OD = 2 case, the ink had instead been spread to cover up the holes, decreasing the total thickness by 5%, then the OD would be 1.9.
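The arithmetic above generalizes to a one-liner: average the transmitted fraction over inked and open areas, then convert back to OD. This is a sketch of the averaging idea only, not the app's per-pixel computation:

```python
import math

def effective_OD(od_ink, open_fraction):
    """OD of a print whose inked area has optical density od_ink and
    whose fraction open_fraction carries no ink at all."""
    absorbed = (1.0 - open_fraction) * (1.0 - 10.0 ** (-od_ink))
    return -math.log10(1.0 - absorbed)

print(round(effective_OD(3.0, 0.05), 2))  # 1.29
print(round(effective_OD(2.0, 0.05), 2))  # 1.23 (the text rounds the absorption, giving 1.22)
```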
So, getting the most OD for your ink requires knowledge of where it's going, and fixing any obvious unprinted areas.

Spherical caps

You know the drop volume V_Drop you are producing (the app, for convenience, calculates the drop diameter D_Drop), and you can use an estimate of the contact angle of the drop, θ, to calculate, via tedious spherical cap geometry (see Wiki Spherical Cap), the dot diameter D_Dot and maximum height H_Dot. The app shows 4 quarter dots spaced according to (scaled by) the input Print DPI. For the same size drop, if you increase Print DPI the relative size of the printed dot increases.

The thickness, and therefore the OD, of the dots is highest at their centres and falls to zero at their edges. If you decrease the contact angle θ then the dot gets larger, so H_Dot decreases and the OD decreases. So we start to see some trade-offs: better spreading reduces the absorption at any given position in the dot. Eventually the spreading dots start to overlap, giving larger OD at the overlaps. [Note that we are just exploring the core geometry; we are ignoring dot-on-dot spreading.]

Covering up the open areas

Although increasing dot diameter decreases the OD within the dot, it also decreases the amount of open area, so the overall OD generally increases. Once you have overlapped significantly at the centre, you get diminishing returns because ink-on-ink is less effective than ink-on-white. Doubling the thickness of a layer of OD = 1 gives OD = 2, but you've gone from 90% to 99% absorption - not a big return for an extra layer of ink. Doubling the thickness from OD = 0.5 to OD = 1 takes you from 68% to 90% absorption - a bigger effect for less ink.

Calculating the OD

You must first provide a Ref OD, which is the OD of a reference film of a constant height Ref H. You could, for example, make a drawdown of thickness Ref H and measure its OD.
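The spherical cap relations just described can be checked independently of the app. A small sketch (capGeometry is my own name; the formulas are the standard spherical cap ones):

```javascript
// Spherical cap geometry for a sessile drop: a drop of volume V (m^3)
// with contact angle theta (radians) sits as a cap of a sphere of
// radius r. The cap height is H = r(1 - cos theta), the dot diameter
// is D = 2*sqrt(H(2r - H)), and from the cap volume
// V = (pi/3)*H^2*(3r - H) one gets
// r = cbrt(3V / (pi*(2 + cos theta)*(1 - cos theta)^2)).
function capGeometry(V, theta) {
  const c = Math.cos(theta)
  const r = Math.cbrt(3 * V / (Math.PI * (2 + c) * Math.pow(1 - c, 2)))
  const HDot = r * (1 - c)
  const DDot = 2 * Math.sqrt(HDot * (2 * r - HDot))
  return { r, HDot, DDot }
}

// Sanity check: at theta = 90 degrees the cap is a hemisphere,
// so with V = (2/3)*pi*r^3 we should get H = r and D = 2r.
const g = capGeometry((2 / 3) * Math.PI, Math.PI / 2)
console.log(g.r.toFixed(3), g.HDot.toFixed(3), g.DDot.toFixed(3))
```

The hemisphere check is a quick way to convince yourself the cube-root expression has the constants in the right place.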
Given this, the OD at all parts of the dots is calculated from the thickness given by the spherical cap geometry and from any overlaps. The output tells you what fraction the dot diameter is of the grid spacing (Overlap - Grid) and of the diagonal spacing (Overlap - Diagonal), which is the bigger challenge. The output also provides the % of the area covered by isolated Single layers, by Multiple layers and by Open areas. At each point the thickness, and therefore the OD, is known, so we calculate the % of light absorbed. By summing these values and taking the average we know the average absorption, which can be converted to the effective OD of the print.

Drop volume effects

Although there are plenty of simplifying assumptions, you will find that the app is insightful. There are two ways to increase the OD of the final print:

1. Increase the DPI at a fixed drop volume
2. Increase the drop volume at a fixed DPI

Each approach has its advantages and disadvantages. The app lets you explore them so you can better think through your print strategy.
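The averaging step is the crux: absorbed fractions average, ODs do not. A small sketch of the idea (printOD is my own name, with the print idealized as a few uniform regions rather than the app's per-pixel map):

```javascript
// Average the absorbed fraction over regions, then convert back to OD.
// ODs themselves must not be averaged: a half-covered area at OD 2
// behaves nothing like a uniform layer of OD 1.
// regions: [{ od, fraction }, ...] with the fractions summing to 1.
function printOD(regions) {
  const absorbed = regions.reduce(
    (sum, rg) => sum + rg.fraction * (1 - Math.pow(10, -rg.od)), 0)
  return -Math.log10(1 - absorbed)
}

// Half the area at OD 2, half completely open:
console.log(printOD([{ od: 2, fraction: 0.5 }, { od: 0, fraction: 0.5 }]).toFixed(2))
// ...far below the naive average OD of 1.
```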
https://www.sparrho.com/item/shadow-of-charged-black-holes-in-gauss-bonnet-gravity/23cfcdf/
# Shadow of charged black holes in Gauss-Bonnet gravity

Research paper by Anish Das, Ashis Saha, Sunandan Gangopadhyay

Indexed on: 06 Sep '19. Published on: 04 Sep '19. Published in: arXiv - General Relativity and Quantum Cosmology

#### Abstract

In this paper, we investigate the effect of higher curvature corrections from Gauss-Bonnet gravity on the shadow of charged black holes in both $AdS$ and Minkowski spacetimes. The null geodesic equations are computed in $d=5$ spacetime dimensions by using the directions of symmetries and the Hamilton-Jacobi equation. With the null geodesics in hand, we then proceed to evaluate the celestial coordinates ($\alpha, \beta$) and the radius $R_s$ of the black hole shadow and represent it graphically. The effects of the charge $Q$ of the black hole and the Gauss-Bonnet parameter $\gamma$ on the radius of the shadow $R_s$ are studied in detail. It is observed that the Gauss-Bonnet parameter $\gamma$ affects the radius of the black hole shadow $R_s$ differently for the $AdS$ black hole spacetime in comparison to the black hole spacetime which is asymptotically flat. In particular, the radius of the black hole shadow increases with increase in the Gauss-Bonnet parameter in the case of the $AdS$ black hole spacetime and decreases in the case of the asymptotically flat black hole spacetime. We then introduce a plasma background in order to observe the change in the silhouette of the black hole shadow due to a change in the refractive index of the plasma medium. Finally, we study the effect of the Gauss-Bonnet parameter $\gamma$ on the energy emission rate of the black hole, which depends on the black hole shadow radius, and represent the results graphically.
http://www.propheti.ca/p7swrmbw/ea6dd0-range-of-absolute-value-function
# range of absolute value function

Posted on Jan 8, 2021 | No Comments

EXCEL

=SUMIF(B5:B9,">0")-SUMIF(B5:B9,"<0")

This formula uses the Excel SUMIF function to sum the positive and the negative numbers separately; subtracting the (negative) sum of the negatives gives the sum of the absolute values. Alternatively, =SUMPRODUCT(ABS(B5:B9)) uses the Excel SUMPRODUCT and ABS functions to sum the absolute values from the range B5:B9.

Suppose we want to know all possible returns on an investment if we could earn some amount of money within $200 of $600. The graph of an absolute value function may or may not intersect the horizontal axis, depending on how the graph has been shifted and reflected. See (Figure). You can find the absolute-value sum of a range of cells by manually entering the SUMIF formula into the fx bar: select cell A6 and input =SUMIF(A2:A4,">0")-SUMIF(A2:A4,"<0") in the function bar.
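Outside Excel, the same positive-minus-negative trick takes only a few lines; a sketch in JavaScript (absSum is my own helper name, not from the page):

```javascript
// Sum of absolute values, done the way the Excel formula does it:
// (sum of entries > 0) minus (sum of entries < 0).
// The second sum is negative or zero, so subtracting it adds its magnitude.
function absSum(values) {
  const pos = values.filter(v => v > 0).reduce((a, b) => a + b, 0)
  const neg = values.filter(v => v < 0).reduce((a, b) => a + b, 0)
  return pos - neg
}

console.log(absSum([3, -2, 5, -1]))  // 11, same as 3 + 2 + 5 + 1
```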
In this section, we will investigate absolute value functions. Distances in the universe can be measured in all directions; as such, it is useful to consider distance as an absolute value function. The absolute value function is commonly thought of as providing the distance a number is from zero on a number line, and the absolute value of zero is 0. In an absolute value equation, an unknown variable is the input of an absolute value function.

Do the graphs of absolute value functions always intersect the horizontal axis? No, they do not (Figure 8). We can keep in mind that if the graph continues beyond the portion we can see, the domain and range could be greater than the visible values. The range of the absolute value function is the set of possible output values, which are shown on the y-axis.

We can find that 5% of 680 ohms is 34 ohms. The absolute value of the difference between the actual and nominal resistance should not exceed the stated variability, so, with the resistance R in ohms, |R − 680| ≤ 34. We can draw a number line to represent the condition to be satisfied, then graph to find the points satisfying the absolute value inequality.
The absolute value function is commonly used to measure distances between points. To solve an absolute value equation, use |A| = B to write A = B or −A = B, assuming B > 0. An absolute value equation may therefore have one solution, two solutions, or no solutions; for instance, a function whose outputs are 0 when x = 1.5 or x = −2 crosses the horizontal axis at two points.

Example 1: f is a function given by f(x) = |x − 2|. Find the x and y intercepts of the graph of f, find the domain and range of f, and sketch the graph of f. Solution to Example 1: (a) the y intercept is given by (0, f(0)) = (0, |−2|) = (0, 2).

Absolute value expressions may not always involve equations; instead we may have an absolute value inequality of one of the forms |A| < B, |A| ≤ B, |A| > B or |A| ≥ B. For the investment question above, returns within $200 of $600 satisfy |x − 600| < 200, which means −200 < x − 600 < 200. Adding 600 throughout, −200 + 600 < x − 600 + 600 < 200 + 600, i.e. 400 < x < 800. Use the absolute value function in the same way to express the range of possible values of the actual resistance of a resistor.
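The three-part inequality manipulation, adding the same amount to all three parts, can be captured in a small sketch (solveAbsLess is a hypothetical helper name, not from the page):

```javascript
// Solve |x - A| < B for B > 0: the solutions are A - B < x < A + B.
// For B <= 0 the inequality has no solutions, since an absolute
// value can never be less than a non-positive number.
function solveAbsLess(A, B) {
  if (B <= 0) return null
  return { lower: A - B, upper: A + B }
}

// Returns within $200 of $600: |x - 600| < 200
console.log(solveAbsLess(600, 200))  // { lower: 400, upper: 800 }
```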
https://cburrell.wordpress.com/tag/johann-sebastian-bach/
## Posts Tagged ‘Johann Sebastian Bach’ ### Favourites of 2017: Music January 5, 2018 It seemed this year that I was treated to an avalanche of excellent music — much more than I could listen to with adequate attention. Of those recordings I devoted the most time to, I have selected for praise an even dozen. I proceed roughly chronologically. *** Ars Elaboratio Ensemble Scholastica (ATMA, 2017) In his short story “Pierre Menard, Author of the Quixote”, Borges imagines a writer who has become so immersed in the style and the world of Cervantes that he is able to reproduce, as an original work, a word-for-word replica of Don Quixote. This story has been brought irresistibly to mind as I’ve been listening to this truly wonderful and extraordinary recording from Ensemble Scholastica. What this all-female ensemble, based in Montreal, has done is perform newly composed elaborations of medieval plainchant in an impeccably medieval style. These elaborations include adding new monophonic material to the original, or adding additional voices, or instruments. Something like this past-meets-present concept has been done before, but usually the past and present are distinguishable to the ear as modern dissonances or cadences wander into the frame. What makes the music on Ars Elaboratio so intriguing is that there really is nothing modern to hear; for all we can tell, these could be original medieval compositions. I can imagine someone wondering about the point of doing this. Just as with Menard and his Quixote, context matters, and a modern medieval composition has different resonances than a medieval original. 
Such an experiment might, for instance, be a way of poking the eye of the notion, current in music circles as elsewhere, that originality is rooted in self-expression; or, to deny the idea that history moves and we have to move with it; or, as a spiritual exercise in humility, wherein musicians enter fully into the imaginative and aesthetic world of another time and place; or, as a way of honouring the beauty and wisdom of the texts by creating music that would have pleased and delighted their medieval authors; or, simply as an expression of love for the beauty of medieval music. In the notes accompanying the recording, the ensemble states their purpose as follows: “We wish to share with listeners the true beauty and intricacy of medieval music, in particular medieval liturgical traditions, the very roots of Western music. Our audiences thus have the chance to experience the remarkable joy and complexity of medieval spirituality and culture.” I, for one, thank them for their efforts, which have greatly delighted me. Here is a brief advertisement for the disc, in which one can hear excerpts: * Matteo da Perugia: Chansons Tetraktys (Olive, 2016) The number of people whose hearts go pitter-patter at the thought of a collection of music by Matteo da Perugia ought rightly to be legion, but is in fact probably somewhat closer to minuscule. This is just one of the numerous hardships which we must bear on behalf of our beleaguered times. I remember well the first time I heard one of his pieces, at a concert by the Huelgas Ensemble in Toronto; the music was so exquisite, so expressive and beguiling, that an audible gasp escaped the audience when the final note was sung, as though we’d all been holding our breath. 
Matteo was writing around the year 1400 and was a practitioner of what was then, and is still now, called the ars subtilior style — the subtle art — which is one of the most delightful of the medieval artistic byways awaiting discovery by listeners whose wanderlust leads them off well-beaten trails. His compositions belong to the courtly love tradition, being primarily settings of secular love poetry. Despite his name, he worked in and around the Duomo in Milan, and all of the music we have from him survives in a single manuscript. His music pops up now and again on early music recordings, but this is, to my knowledge, just the third recording devoted entirely to him, the earlier two being by the Huelgas Ensemble and Mala Punica, both of them superb interpreters. But Tetraktys have nothing to fear from the comparison. They have chosen to perform these pieces as vocal solos with instrumental accompaniment — not a mandatory choice, if comparisons with the other recordings are anything to go on — and much of the appeal of this recording lies in the singing of Stefanie True, a Canadian soprano who is otherwise unknown to me, but who earns high praise for the beautiful purity of her voice. Instrumental accompaniment from a trio of musicians includes medieval fiddles, harp, and organetto. The result is one of the more alluring and gorgeous discs of early music I’ve heard in a long while. Here is a brief excerpt of the ensemble during the recording process. It gives the flavour of what they are doing, but the sound on the CD is superior to what you hear here: * Secret History: Josquin / Victoria John Potter (ECM New Series, 2017) Years ago I drew up a list of my favourite music of the first decade of the 21st century, and near the top of the list I put a CD of music by Victoria, sung by Carlos Mena, in which the familiar intricate polyphony had been adapted for a single voice with instrumental accompaniment. 
I loved, and still love, everything about it — Mena’s creamy voice, the clarity of the musical texture, the limpid beauty of the vocal line. I’d never heard anything quite like it before — nor, for that matter, since. But now this new disc from John Potter and friends revisits the same musical territory, with marvellous results once again. Potter tells us in his notes that it was a fairly common practice in the 15th and 16th centuries for sacred polyphony to be adapted into tablature for lutenists and vihuelists, and even that the music of some composers, including Josquin, survives mostly in these intabulated sources. He is here joined by three vihuelas and a viola da gamba, as well as by the soprano Anna Maria Friman (of Trio Medieval) in performances of these intabulated versions of Victoria’s Missa surge propera, a collection of motets by Josquin, a motet by Mouton and another by Victoria again, some Gregorian chant, and some preludes for vihuela by Jacob Heringman, one of the musicians. It all sounds terrific. Once again, hearing the clarity of the vocal line pulled from what would normally be a dense polyphonic texture is a real delight, and there’s a wonderful intimacy about the whole affair, as though this sublime music were being re-imagined in one’s living room. For me, the recording as a whole doesn’t quite rise to the level of that earlier one by Carlos Mena, and this mainly because Potter, as good as he is (and, as a long-time member of the Hilliard Ensemble, he’s no slouch), simply doesn’t have the translucent voice that Mena does. The recording was made at the famous monastery of St Gerold in Austria, long favoured by ECM’s engineers, and the sound is impeccable. It was recorded in 2011, so ECM sat on it for 6 years before releasing it. I can’t imagine why. This is my favourite recording of the year. 
* Palestrina: Missa Papae Marcelli Odhecaton (Arcana, 2017) Of the making of records there is no end, and there can be few pieces of Renaissance polyphony that have been recorded more often than Palestrina’s famous Missa Papae Marcelli. One naturally wonders if it’s worth bothering to record it again. But, lo and behold, here comes Odhecaton to make us hear it again anew. This ensemble, which is new to me, has a truly wonderful way with this music: the singing is very assured, pitched low if I’m not mistaken, and it has a splendid gravitas — in happier times I could have called it masculine, and been understood to be saying something intelligible. I have listened to it with some amazement, because I’ve never heard Palestrina sung like this, with such stately grace, which we expect, and earthy texture, which we don’t. The disc also includes a number of motets and Gregorian antiphons, and it actually opens with Sicut cervus, a motet that every mother’s son knows forward and backward; yet, again, not like this. [review] Here is the whole of the Missa: * Monteverdi: Vespro della Beata Vergine La Compagnia del Madrigale, Cantica Symphonia, La Pifarescha, Giuseppe Maletto (Glossa, 2017) 2017 was a Monteverdi anniversary year, marking his 450th birthday. My plans to devote time to him largely failed, but I was able to hear this glorious new recording of his Vespers. This is a piece for which my appreciation has gradually grown over the years; it’s a sprawling, multi-faceted work that takes time to get to know, and as yet I feel that I’ve only begun to explore its many nooks and crannies. The musicians on this disc are an ace crew who will be recognized by early music aficionados. I must say that it is nice to have Italians performing the music of their countryman, and, quite in contradiction to the sometime-stereotype of period ensembles being rather dry and thin, they bring a stirring, full-bodied sound to their interpretation. 
The instruments, especially, are recorded with nice bloom, blending beautifully with the voices. I’ve long been fond of William Christie’s recording of this Vespers, with French forces, for its beauty and gentle tenderness, but this shows another side of this wonderful music. This video takes us behind the scenes at the recording sessions: * Bach: De Occulta Philosophia Emma Kirkby, Carlos Mena, José Miguel Moreno (Glossa, 1998) Here is a disc that would appear to have been produced just for me: my favourite soprano, Emma Kirkby, and my favourite counter-tenor, Carlos Mena, joining together to sing chorales of J.S. Bach, my favourite composer, over a performance of the Chaconne, my favourite composition (or, at least, having a fair claim), in an arrangement for the lute, my favourite obsolete instrument in the guitar family! You might remember the recording the Hilliard Ensemble made some years ago, in which they, following a purported “discovery” by musicologist Helga Thoene, did the same experiment: singing chorale fragments over the Chaconne, which was, allegedly, subtextually quoting them. I confess I don’t put any great faith in these musicological claims, but it hardly matters: as musical experiments go, this one is a winner. I liked the Hilliard’s performance, but I like this one even more: the intimacy of the lute, and the purity of the two voices, is entrancing. The Chaconne, mind you, only lasts a quarter-hour. The rest of the disc is filled out with Bach’s Sonata (BWV 1001) and Partita (BWV 1004), played on the lute by José Miguel Moreno. It’s all good, but it’s the chorale-laden Chaconne that is sublime. * Mozart: Don Giovanni Music Aeterna, Teodor Currentzis (Sony, 2016) Like everyone else, I have long been wedded to Giulini’s 1959 recording of this, the greatest opera, so much so that I’ve never felt any real desire to acquire another. 
But nothing in this vale of tears is perfect in every respect, and there was always a possibility, however slim, that somebody might come along and do the thing well enough, and differently enough, to give us, not so much a rival, but an alternative reading. And then along came Teodor Currentzis and his mad cadre of musicians in a bid to do just that. I say “mad” partly because of the conditions under which the recording took place: Currentzis had his singers and orchestra come to the Russian hinterland, where they stayed for weeks on end, living together, eating together, performing Don Giovanni hour after hour after hour, doing experiments, taking risks, going mad. The Guardian ran a nice feature that described the highly unusual working conditions. And I say “mad” also because of the results. Currentzis plays this score with ferocious energy; the strings slash, the brass blares, the timpani thunders. There is nothing at all genteel about it. The sound engineering is impressively vivid. The singing is fine, but for me it is the orchestral playing that is the real draw. That might seem an odd position to take on an opera recording, but we are in the realm of the odd. I understand the argument from those who say that this is an abuse of Mozart, who wrote at a time when elegance was prized and who could out-elegance anybody when he wanted to, but, on the other hand, this is Don Giovanni! If any opera can take this idiosyncratic, unrestrained treatment, it’s this one. An iconoclastic version could never replace Giulini, but considered as a compelling alternative view of this great music, this one is a success. Here is a ten-minute featurette on the making of this recording: * Mozart – Piano Concertos 20 & 27 Evgeny Kissin, Kremerata Baltica (EMI, 2010) I admit I’ve had a prejudice against Evgeny Kissin, whose status as a child prodigy led me to suspect that there was more of sentimentality behind his fame than solid musical achievement.
But this disc was recommended to me in glowing terms, and I decided to listen mainly because of the orchestra, Kremerata Baltica, whom I have long admired. It’s a corker! These concerti are old chestnuts, and they are often played with grace and politeness, but Kissin and his band tackle them with thunderous excitement. The sound is big, the orchestra plays with sharp attacks and tight rhythms, and Kissin is terrific at the keyboard. The performance has verve and sparkle. I don’t know if this is typical of Kissin or not, but, if so, I stand corrected. Here is Concerto No.20: * Wagner: Arias and Duets Birgit Nilsson, Hans Hotter, Philharmonia Orchestra, Leopold Ludwig (Testament, rec.1957/8) Birgit Nilsson has a claim to being one of the great Wagnerian sopranos of the twentieth century, and Hans Hotter can make a similar claim among bass-baritones. They were both in their prime for these recordings, made in 1957/58 in glowing sound that belies their age. Nilsson, especially, is majestic; her voice gleams, like a shaft of light penetrating the gloom. The sheer beauty of it is awe-inspiring. Hotter sings with tremendous gravitas as well, and he is a superb match for her in the long Act III duet from Die Walküre. The other selections are from Wagner’s earlier operas: Elsa’s Dream from Lohengrin, a long excerpt from Der Fliegende Holländer, and a soprano solo from Tannhäuser. This same music has been previously issued on EMI; probably this Testament release has been remastered, but I’ve listened to both and I can’t hear any substantial differences. In either case, this is one of the best Wagner recordings I’ve ever heard. [review] Here is the opening of Die Walküre, Act III, Scene III. Hotter is Wotan and Nilsson is Brünnhilde: * Brahms: Piano Works Arcadi Volodos (Sony, 2017) If, like me, you love those last, late piano pieces Brahms left us in his Op.116, 117, and 118, then I cannot recommend more highly these superb renditions by Arcadi Volodos.
Volodos is a pianist I haven’t followed very closely (though I love his account of Liszt’s virtuosic transmutation of the Wedding March!). His playing is muscular, and he makes a big, well-rounded sound. You might not think that would work all that well with these elegiac masterpieces, but these are winsome performances that I have greatly enjoyed. This is elite playing, not just technically but artistically, and this is a great disc. [review] * Messiaen: Turangalîla-Symphonie Steven Osborne, Bergen Philharmonic Orchestra, Juanjo Mena (Hyperion, 2012) Messiaen described this gigantic musical explosion as “a song of love, a hymn to joy”, and the joyous feeling he sought to capture as “superhuman, overflowing, dazzling, and abandoned”. Perhaps no better description of the symphony is possible. It is among the biggest, boldest, most outrageous, wildest examples of musical excess in the repertoire, and, as such, not the kind of thing I would normally be drawn to, but it’s the symphony’s spirit of unadulterated, supercharged love of life that wins me over. I’ve a few recordings in my collection, but this one from the Bergen Philharmonic, with Steven Osborne handling the difficult piano part, has delighted me to no end. The orchestral sound, which is the be-all and end-all of this piece, is wonderfully alive and vivid. A ravishing sonic experience. [Audio excerpts] * Weinberg: Chamber Symphonies Kremerata Baltica, Gidon Kremer (ECM New Series, 2017) For the past few years my year-end list of favourites has usually included something by Mieczyslaw Weinberg, and I have something this year too. Gidon Kremer has become a high-profile champion for Weinberg’s music, and in 2017 he, with Kremerata Baltica again, issued a two-disc set of Weinberg’s four chamber symphonies and the Piano Quintet. The Quintet is an early work (Op.18) that has become quite popular, having now been recorded more than any of Weinberg’s other music. 
Kremer and his crew give it a good hearing, and of course ECM’s sound engineering is outstanding. But for me the chamber symphonies are the real draw. They are late works (the earliest being his Op.145), and, as always with Weinberg, I feel they put me in touch with a man of great musical intelligence, overlooked for too long. Music to treasure. *** In years past I have written twice about my favourite music of the year: first classical and then popular. This year there were pop music records that interested me from Bob Dylan, Joan Osborne, Van Morrison, Sufjan Stevens, Joe Henry, Josh Ritter, Justin Townes Earle, Taylor Swift, Lee Ann Womack, and Neil Young, and some others too, but I either didn’t get around to hearing them, or didn’t hear enough of them to form a judgement. Maybe next year. ### Easter Sunday, 2017 April 16, 2017 I got me flowers to straw thy way; I got me boughs off many a tree: But thou wast up by break of day, And brought’st thy sweets along with thee. The Sunne arising in the East, Though he give light, & th’ East perfume; If they should offer to contest With thy arising, they presume. Can there be any day but this, Though many sunnes to shine endeavour? We count three hundred, but we misse: There is but one, and that one ever. – George Herbert (1633) Happy Easter! ### Here and there March 10, 2017 A few interesting, art-related things I’ve seen in the past few weeks: • The Christian moral imagination of Cormac McCarthy. • Alex Ross writes, in one of his increasingly rare non-politically-inflected columns, about Bach’s religious music. • The wonders of digital signal processing recreate the acoustics of Hagia Sophia in a modern concert hall. • The cultured life is “an escape from the tyranny of the present”. • In a similar vein, Roger Scruton praises the virtue of irrelevance, with special attention to the art of music. 
• Finally, a group of mad animators have brought to life Bosch’s Garden of Earthly Delights: ### Favourites in 2016: Classical music December 29, 2016 If 2016’s harvest of good pop music was slim pickings, my year in classical music has yielded a bumper crop. Over the past two months or so I’ve been slowly sifting my favourites, and I’ve arrived at a list of 10 discs that I’d like to praise today. This year I’ve decided to discuss them more or less in chronological order, so we’ll begin with medieval music and move forward. Not all of these are 2016 records, but most are of fairly recent vintage. I’ve chosen one of them as my “record of the year”, and another as a runner-up, but you’ll have to read through to find out which is which. Where possible I’ve added a link to a video or excerpt from the disc, and in some cases I’ve also added links to more detailed reviews by real music critics, like so: [Review]. *** St Hildegard: Ursula11 Psallentes (Le Bricoleur, 2011) 55m I’d like to begin with a collection of music by St Hildegard of Bingen. Ursula11 is the Internet-age title of the disc, a reference to the legend of St Ursula and her 11,000 companions martyred by marauding Huns. St Hildegard composed an office to celebrate the feast of these martyrs. This music has been recorded before, notably by the medieval music matriarchs Anonymous 4, but that disc has always struck me as one of their least successful, and I find this performance, by the women of Psallentes, far preferable. They sing a cappella, but they’ve done some interesting things with Hildegard’s monophonic compositions, for instance by layering the ecstatic flight of Hildegard’s vocal lines over more conventional recitation tones, or even by singing Hildegard’s music in canon. They have an exceptionally clear sound, light and flexible, and they keep the music, which can sometimes become lugubrious in the wrong hands, moving along at a brisk andante. The result is lovely on all counts.
The one drawback, with respect to Anonymous 4’s approach, is that the earlier disc embedded Hildegard’s music within the context of sung offices (Vigil, Lauds, Vespers), whereas Psallentes simply groups the pieces by liturgical function (antiphons, then responsories, then a sequence and a hymn). It doesn’t make as much sense, but it nonetheless sounds great. Here is a fragment of O rubor sanguinis, with a rather nice video to accompany it: ** Johannes Ciconia worked in Italy, mostly in Rome and Padua, around the turn of the fifteenth century, and died in 1412. His music is a rather eclectic blend of genres and styles — sacred and secular, with French and Italian influences — and it can be seen today as a kind of summing up of late medieval composition, with isorhythms, canons, hockets, poly-texting, and a variety of other delightful techniques popping up. Ciconia: Complete Works La Morra, Diabolus in Musica (Ricercar, 2010) 2h31m This two-disc set includes all of Ciconia’s surviving works. The first disc consists of his secular music, and is performed by La Morra; the second is reserved for his sacred music, and is performed by (ironically) Diabolus in Musica. These are both ace ensembles, among the best in the world in this complex medieval repertoire, and it almost goes without saying that they sound terrific. There’s a suppleness and grace to the performances that comes from long familiarity. Both ensembles experiment with adding instruments to the mix — instruments are not notated on surviving manuscripts, but there’s evidence that they were used in an improvisational manner. The secular music is treated with lutes, vielles, and early keyboard instruments; the sacred music is filled out by sackbuts and a cheerfully plangent chamber organ. No full Mass setting survives — through-composed Mass settings were still a relatively new idea at the time — but we do have a number of different settings of the Gloria and Credo preserved here, and they sound wonderful.
Perhaps surprisingly, this set is actually the second of Ciconia’s complete works! The previous one, by the Huelgas Ensemble (made in the early 1980s), is presently unavailable. Bits and pieces of his music have been recorded by a few dozen ensembles, and all of his motets have been sung by Mala Punica (and everything that Mala Punica touches turns to gold; that’s a great record). I thoroughly enjoyed this set, which earns that coveted trifecta: interesting music, superb performances, great sound. Here Diabolus in Musica performs Gloria Spiritus Et Alme: ** An intriguing development in the world of early music this year was the launch of ORA, a British ensemble consisting of a select set of eminent early music choristers. They have commissioned an extensive set of new compositions from contemporary composers, each of which is to relate in some way to a renaissance masterpiece. This is a splendid idea that comes close to fulfilling a fantasy of mine (which is that I might somehow be magically endowed with compositional talent, which talent I would apply in just this way). Apparently they plan to issue ten recordings over the next five years pairing these originals with their modern “reflections”, and 2016 saw the release of the first two. Upheld by Stillness ORA (Harmonia Mundi, 2016) 1h18m Volume 1 is entitled Upheld by Stillness and circles, broadly speaking, around the music of William Byrd. We get his setting of Psalm 137, Quomodo cantabimus? alongside the samely-psalmed motet by Philippe de Monte that inspired it (Super flumina Babylonis), and we hear his masterful Ave verum corpus, but the centerpiece is the Mass for Five Voices. The disc is then filled out with six new compositions: Roxanna Panufnik contributes a Kyrie after Byrd, Roderick Williams (the baritone) writes Ave Verum Corpus Reimagined, an extended meditation, with elaboration, on Byrd’s original, and Charlotte Bray gives us a marvellous Agnus Dei. 
Each of these hews fairly closely to Byrd’s model, both in text and texture, but the others on the disc are more loosely affiliated. Alexander L’Estrange’s Show Me, Dear Christe, for instance, combines parts of the Credo with excerpts from Byrd’s will and Donne’s poem. As one would expect, the quality of these modern “reflections” varies, and some of them I don’t much care for, but it’s still an excellent initiative, especially when the singing is this accomplished and the sound this pristine. [Review] [Review] Alas! The second volume in the series, entitled Refuge from the Flames, fails in my mind to live up to the promise of the first. Subtitled “Miserere and the Savonarola Legacy”, it explores music inspired by or somehow related to the Florentine preacher, and is centered on William Byrd’s Infelix ego, which sets a text written by Savonarola on the eve of his execution. Also included are some Italian secular songs, a few short motets, and two large-scale versions of the Miserere, one the famous setting by Allegri (although in an edited version that hasn’t been recorded before) and the other by James MacMillan. The second (and only other) modern piece on this disc is another setting of Infelix ego (after Byrd), this time by the talented young Latvian composer Ēriks Ešenvalds. So the music is great; it’s the singing that disappointed me. Technically it is above reproach, but there’s something missing. It sounds beautiful, yes, but somehow inert. I really wanted to like it. Alas! Here is a promotional video for the choir: ** Scattered Ashes Magnificat (Linn, 2016) 1h24m But if we were a little disappointed by that particular foray into the Miserere and the Savonarola legacy, comfort is at hand in the form of Scattered Ashes: Josquin’s Miserere and the Savonarola Legacy, a curiously similarly conceived record from Philip Cave and Magnificat. Actually, despite the near identical titles the music is mostly different.
Magnificat build their program around the expansive (17 min) setting of the Miserere by Josquin Desprez, which is given a dazzling performance, and fill it out with a variety of other 16th-century masterpieces, including another Miserere from Jean Lhéritier and two settings of Tristitia obsedit me by Le Jeune and Clemens non Papa (the same two as on ORA’s record). The Savonarolan aspect of the program enters in two settings of the eve-of-execution testament Infelix Ego by Byrd and Lassus. The program is filled out with pieces by Palestrina and Gombert. I’ve praised Magnificat before for the superb quality of their singing, and I’m happy to do so again: they have a tremendously rich sound, especially in the lower voices, which give them a wonderfully dark sonority, like aural velvet, smooth and luxurious. The soaring soprano lines pierce through this texture like shafts of white light. It’s gorgeous, and they sing with an intensity that was missing from ORA. [Review] Here the choir sings Gombert’s In te Domine speravi: ** Jones: Missa spes nostra Blue Heron (Blue Heron, 2015) 1h5m The American ensemble Blue Heron has been engaged in a long-term project to perform music from the Peterhouse Partbooks, a set of manuscripts copied c.1540 that preserve a number of works of pre-Reformation English polyphony that were otherwise destroyed by reformers. The manuscripts have been damaged and, in some cases, lost, so these performances are supported by a behind-the-scenes scholarly effort (by Nick Sandon) to reconstruct missing parts. The disc I’m discussing here is the fourth in a projected set of five. The centerpiece is a Mass by Robert Jones, Missa spes nostra, here given its world-premiere recording, and what a premiere! It’s a large-scale work, the four polyphonic sections of the Mass Ordinary being each about 10 minutes in duration.
(English composers of this period generally did not set the Kyrie polyphonically, and Blue Heron sing an aptly chosen Sarum plainchant one.) The Mass is book-ended in front by Ludford’s Ave cujus conceptio, another rarity that, to my knowledge, has been recorded only once before, and in back by an ambitious (18 min) Stabat mater by Robert Hunt, a work that survives only in the Peterhouse manuscripts and, again, has not been recorded before. So a big part of the draw here is the repertoire, which is “new” and, what will not surprise you if you’ve any familiarity with pre-Henrician English polyphony, breathtakingly beautiful, with long, lyrical melodic lines, soaring upper voices, and judicious control of texture to provide structure to these expansively conceived compositions. It’s therefore a nice bonus to find that the performances are as good as they are. The choir, of about a dozen voices, is a good size for these pieces. The sound is not big (and some considerable part of the music is scored for fewer than four parts), but it is precise and clean. I love this music. Here the ensemble sings the Credo from Robert Jones’ Missa spes nostra: ** Let’s move on now to baroque music. Bach: French Suites Murray Perahia (DG, 2016) 1h31m If you want to put me in a good mood, use the words “Bach”, “Murray”, and “Perahia” in the same sentence. Twenty years ago, when I was taking my first tentative steps into the world of classical music, among the first recordings I bought were Perahia’s then-new English Suites. They delighted and dazzled me then, as they delight and dazzle me now, and those records have an enduring special place in my heart. A few years afterward he made a recording of the Goldberg Variations, which to this day is my favourite of that great work. This year he gave us the French Suites. I’ve had a somewhat difficult relationship with these pieces; of all Bach’s keyboard works, they are probably my least favourite. I’m not sure why this is so. 
(It’s not because they are particularly “French”, because they’re not.) I find they don’t sing the way Bach’s music usually does, and the counterpoint often feels angular to me, as if it can’t quite generate momentum. I don’t know. I’ve never warmed to them. Well, I’m here to report that when Murray Perahia plays them they sound pretty wonderful. I’d like very much to put into words just what it is about his playing that can transmute (comparative) lead into gold, but I don’t know that I can. There are a hundred pianists who can play this music to the highest standards of technical perfection, and Perahia is one of them, but, to my ears, few who can infuse the music with that indefinable, elusive quality that makes it sing. This is my runner-up for favourite record of the year. [Review] Here is a video of Perahia playing the Courante from French Suite No.5: ** Bach: Motets St Jacobs Kammarkör, REbaroque (Proprius, 2015) 1h18m When people think of Bach’s choral music, they tend to think of the Passion settings and the cantatas, but his motets are great, life-giving music. The technical challenges they pose are formidable, requiring a choir that is quick on its feet, well-balanced, and capable of delivering long, laughing melismas without ceasing to sound joyful. They have been recorded many times, and I have a dozen or so performances in my collection, but this year I was impressed by this disc from St Jacobs Kammarkör, a Swedish choir I’d never heard of before (but which is evidently very accomplished), with orchestral support from REbaroque. Too often Bach’s motets can sound woolly, with too much vibrato obscuring the rapid-fire counterpoint, or ragged in tone, but not here: the performances are tight, confident, and effervescent. There were one or two moments I noticed where a high staccato note had an element of squeak in it, rather than being nicely rounded, but these were rare, and overall the impression left by St Jacobs Kammarkör is one of happy excellence.
The instruments add a welcome bit of colour without obscuring the choral textures. The recorded sound is clear, with little resonance but still nice space around the sound. ** Beethoven: Symphonies 5 & 7 Pittsburgh Symphony Orchestra Manfred Honeck (Reference, 2015) 1h11m There are so many recordings of these symphonies that it seems folly to keep making them. This might seem especially true of the present disc, which goes toe-to-toe with Carlos Kleiber’s famous 1975 record, which has long been regarded not just as a reference recording for these two symphonies, but as one of the greatest orchestral recordings ever made. But every so often the habit of revisiting these warhorses of the repertoire turns up just the right combination of musical instincts and recorded sound, and this disc from Manfred Honeck and the Pittsburgh Symphony Orchestra is one such case. The music sounds just as it should, but more so: the pacing is excellent, the playing is tight and expressive, and the sound is big and punchy. Even the final pages of No.5, which can sound laboriously comical in the wrong hands as the cadence resists resolution again and again, come across with tremendous crackle and excitement. I’m not going to claim that it unseats Kleiber, because it doesn’t, but it is an extremely good recording of these great pieces, well worth seeking out. Here is a brief promotional video for the record, with excerpts: ** Schubert: Winterreise Jon Vickers, Geoffrey Parsons (EMI, 1985?) 1h20m I tend to avoid recordings in which opera singers descend from the stage to sing parlour-room art-songs, just as I avoid (or would avoid, if occasion arose) elephants in tutus. In Schubert’s lieder, and especially in this beloved song cycle, my preferences run to lieder specialists — Fischer-Dieskau, Bostridge, Goerne — whose voices are calibrated to an intimate scale. Now, there is no more operatic an opera singer than Jon Vickers; he is Tristan, Otello, and Peter Grimes. 
In the realm of big voices there is none bigger. Therefore it was with considerable skepticism that I gave this 30-year-old recording of Winterreise a spin, just to see how badly it had turned out. Greatly to my surprise, I loved it. Yes, the voice is big, but he reins it in, and yes, the nuances that other singers give us are sometimes lost, but this is a remarkably intense performance. Vickers has such a commanding presence that even when he’s dialed his power way down he still grips my attention. Anyone who has heard his Peter Grimes knows that he can inhabit a desperate, wild-eyed man with terrifying credibility, and he brings something of that same character — much subtler, as befits the scale — to Schubert’s protagonist. It’s very much worth hearing. Here is a thoughtful old review of the disc from the New York Times, and here is Vickers singing “Frühlingstraum”: ** Flitting lightly over the bulk of the Romantic period, we alight on a branch of early modernism. Each of us, I suppose, can point to particular corners of the repertoire that, though they be little frequented, have a particular personal fascination. For me one such corner is the choral music of Stravinsky. Everyone loves the Symphony of Psalms, but beyond that masterpiece I believe this music is not very well known, and that is a shame, because it is quite marvellous in its own peculiar way. It is notable that the great bulk of it — if we can speak of ‘bulk’ in this sleek and slender context — is sacred music, a reflection largely of Stravinsky’s own devotion. (Here is a good overview.) This year I made a special effort to get to know this music better, and today I’ll highlight three particularly good records that, between them, cover most of the principal sacred choral pieces that he composed.
Stravinsky: Symphony of Psalms Collegium Vocale Gent Royal Flemish Philharmonic Philippe Herreweghe (Pentatone, 2010) 50m First up is a disc from Collegium Vocale Gent and the Royal Flemish Philharmonic, under the direction of Philippe Herreweghe. These musicians we usually associate with period-practice baroque, and especially with Bach’s choral music, of which they are exemplary interpreters. To hear them sing Stravinsky might therefore seem an odd fit, but in fact the opposite is true: their ability to produce a clear, cool sound, sans vibrato, with pin-point tuning serves Stravinsky’s music extremely well. (Stravinsky’s own recordings of this music, as well as those of his protege Robert Craft, are generally plagued by exactly the problems Herreweghe et al. avoid: wobbly tuning, ragged ensemble, and ugly tone.) The programme on the disc is a well-conceived one: we get the brief Monumentum pro Gesualdo, a late-period instrumental piece that serves as prelude; then his neo-classical Mass, written “out of personal necessity” in the 1940s; then, as something of a novelty, Stravinsky’s orchestral arrangement of Bach’s Canonic Variations on Vom Himmel Hoch (BWV 769), which is as delicious as you are imagining; and, finally, the mighty Symphony of Psalms. All of it is extremely well done, with the prime attraction probably being the Mass, which sounds splendid. Competition is fierce when it comes to the Symphony of Psalms, and this recording doesn’t displace my favourite (Pierre Boulez), but it’s nonetheless outstanding. Stravinsky: Threni Collegium Vocale Gent Royal Flemish Philharmonic Philippe Herreweghe (Phi, 2016) 47m Next is another disc from the same forces (from 2016, whereas the one just discussed was from 2010). 
In this case the focus falls on Stravinsky’s thorny late masterpieces, especially Threni, an adaptation of the Lamentations of Jeremiah which had been set by so many Renaissance composers, and Requiem Canticles, Stravinsky’s last completed work, and the one which was performed at his own funeral. Starting in the 1950s, his arch-nemesis Schoenberg safely six feet under, Stravinsky began to explore the possibilities of serialism, and these two works belong to that period. They are extremely difficult to sing, and, according to taste, nearly as hard to hear. Threni, in particular, has the character of a musical hair-shirt, even though Stravinsky has taken some pains to mitigate the most extreme ill effects of the serial regimen. (For instance, the liner notes point out that in one duet section the two soloists sing simultaneous but differing versions of the tone row, but in such a way that they always form a consonance.) This piece leans heavily on vocal soloists, so heavily that the few other recordings of the piece I have heard pretty much crushed them to dust; Herreweghe has chosen a brave and able group, including the wonderful bass Florian Boesch, and they find the music in this music, which is high praise. The Requiem Canticles, setting a selection of texts from the Latin Requiem, is also serial, but more approachable, and the choir delivers a performance that bests any other that I have heard. The clean, dispassionate tone allows the strange beauty of this music to stand out clearly. The programme is bookended by two shorter pieces. At the beginning we get The Dove Descending Breaks the Air, a fearsome setting of T.S. Eliot that, I laughed to learn, was Stravinsky’s contribution to the Cambridge Hymnal and intended for singing at school assemblies. It’s a wonderful piece, but good grief.
And, finally, the disc closes with Da Pacem Domine, a truly lovely little piece, very much in communion with the great stream of Russian sacred music, that falls even more gently on the ear given the terrors through which we have just passed. Stravinsky: Sacred Choral Works Netherlands Chamber Choir Schoenberg Ensemble Reinbert de Leeuw (Philips, 1999) 1h Finally, the best of the bunch is an older recording, from 1999, featuring the Netherlands Chamber Choir and Schoenberg Ensemble, under the direction of Reinbert de Leeuw. It includes some of the same music already discussed (in particular, the Mass and The Dove Descending Breaks the Air), but the principal work is the Cantata, composed in the early 1950s for unusual forces: soprano and tenor soloists, female chorus, and a smattering of instruments (flute, oboe, cor anglais, and cello). It is constructed around the Middle English Lyke-Wake Dirge. Again, this is challenging music for both performers and audience, and I’ve heard it sound pretty wretched. In this performance the chorus is good, as is the soprano soloist (Rosemary Hardy), but the crowning touch is that Ian Bostridge is the tenor. His lean, agile voice is absolutely perfect for the part, and he sings the heck out of it. It’s fantastic. The disc is rounded out by a variety of shorter works, including the Introitus (in memoriam T.S. Eliot), the Ave Maria, and a few others. The glory of this disc, apart from Ian Bostridge’s solo turn, is the choral sound, which is lush, smooth, and vibrant, with considerably more body than we get from Collegium Vocale Gent. It’s a nice alternative, and is especially well suited to the generally more amiable music programmed on this disc. What is missing from these discs? Chiefly the Canticum Sacrum. If you know of a good recording of that piece, I’d love to hear about it. In the meantime, these three give a superb overview of Stravinsky’s sacred music.
Here is a full performance of Threni, from the second disc above: ** Weinberg: Solo Violin Sonatas Linus Roth (Challenge, 2016) 1h15m For the past few years the music of Mieczyslaw Weinberg has appeared consistently on my list of annual favourites. He is a wonderful composer, largely unknown outside Russia until the last decade or so (mostly for political reasons, for as a Polish Jew the Soviets had little motive to champion his music to the West). The “Weinberg renaissance” continues, with quite a few record companies joining the fray: violin sonatas, symphonies, string quartets, an opera, ballet scores, flute sonatas, and his cello concerto were all issued in the past year or so. Of those that I have heard, my favourite is this set of the three sonatas for solo violin, played by Linus Roth. Roth has been something of a champion for Weinberg in recent years, having previously played the violin concerto and all five violin sonatas (with piano). His are not the first recordings of these fearsomely difficult pieces — Gidon Kremer recorded the third (Op.126) a couple of years ago, and the other two have been played by Yuri Kalnits on a set of recordings for Toccata Classics — but this is the first time they’ve been pulled together on one disc. Like the best of Weinberg’s music, these pieces are intense and intelligent. Writing for a single instrument leaves a composer nowhere to hide; he has to bring his best to it. The music spins out rapidly, with lightning quick changes in tempo, dynamics, and musical ideas. The technical challenges must be considerable; sometimes it seems incredible that all the music is coming from just one instrument. (There is lots of double-stopping, and maybe some higher-stopping too.) This is by no means music to relax to; it asks for all of the listener’s attention, and it practically sparks when it is played. But, as always with Weinberg, it is really music, through and through, top to bottom.
It doesn’t sing the way Bach’s solo violin music does, but it argues, laments, harangues, and delights in no small measure. On this recording the three sonatas, each of which runs about 20-30 minutes, are separated by transcriptions (for violin and piano) of Shostakovich’s Three Fantastic Dances. These provide a welcome change of texture to refresh the palate, and are a nice homage to the friendship the two composers shared. In short: fantastic music, beautifully played, and thoughtfully programmed.

**

Pärt: The Deer’s Cry
Vox Clamantis (ECM New, 2016) 1h02m

In 2012 my favourite record of the year was Filia Sion, a collection of mostly monophonic chant sung by an Estonian ensemble called Vox Clamantis. That record impressed me with its unusually sensitive ensemble singing and the spirit of “restful poise” that seemed to permeate the performances, and, as I can now report, the bloom is not off the rose: I return to that album regularly and with great enjoyment, and I have been waiting in expectation to hear what Vox Clamantis would do next. They returned this year with The Deer’s Cry, devoted entirely to the music of their countryman Arvo Pärt. Like chant, Pärt’s music calls for a delicacy of touch, an attentiveness, and a solemnity of manner that would seem to play to Vox Clamantis’ strengths. Suffice to say that those strengths are everywhere in evidence on this record: the singing is faultless, the interpretations are rapt, and the effect on the listener is one of a quiet and gentle intensity. This is ideal Pärt singing. I was not surprised, though I was delighted, to see that Pärt himself participated in the recording sessions, which took place in Tallinn’s Church of the Transfiguration.
The disc opens with “The Deer’s Cry”, a setting of the text more commonly known as St Patrick’s Breastplate (“Christ before me, Christ behind me, etc.”), and includes a number of Pärt’s best known compositions, including “Da Pacem Domine”, “Summa”, and the extended Gospel setting “And One of the Pharisees”. But there is unfamiliar, rarely recorded music here too, such as revised versions of “Virgencita” (written to honour Our Lady of Guadalupe) and “Alleluia-Tropus”. There are also two first-time recordings: “Drei Hirtenkinder aus Fátima” (in honour of Our Lady of Fatima) and “Habitare Fratres” (a newly composed piece that was written for and premiered by Vox Clamantis). The disc closes with one of Pärt’s greatest masterpieces, the “Prayer After the Canon”, the concluding section of his mighty Kanon Pokajanen; it is a piece that I can hardly hear without my eyes brimming with tears. In short, this is a superb overview of Pärt’s small- and mid-scale choral writing, focusing especially on fairly recent compositions, and sung to an exemplary standard. There are one or two cases in which there is another recording which I would prefer to this one — for instance, the Hilliard Ensemble’s treatment of “And One of the Pharisees” has yet to be surpassed — but all things considered this goes onto my shortlist of favourite Pärt recordings, and is my favourite record of 2016. Here is a promotional video with pictures and videos from the recording sessions, and here the ensemble sings Alleluia-Tropus:

**

Pärt: Kanon Pokajanen
Cappella Amsterdam, Daniel Reuss (Harmonia Mundi, 2016) 1h

The other great Pärt recording this year is from Cappella Amsterdam, led by Daniel Reuss, who sing the entirety of Kanon Pokajanen. For almost 20 years the reference recording for this great piece has been the one by the Estonian Philharmonic Chamber Choir, who premiered it and recorded it in the presence of the composer.
It’s a hugely ambitious composition, immensely powerful in effect, and it’s been a matter of some puzzlement to me that more choirs haven’t tackled it. Well, Cappella Amsterdam finally has, and they’ve done it very well. The singing is sensitive and expressive, delicate when it needs to be and full of roaring power when appropriate. The sound is even somewhat better than that enjoyed by the Estonians, which was always a bit recessed. It’s too early to say which of these recordings I’m ultimately going to enjoy more, but certainly this new one has earned a place at the table.

**

That was more than 10 records, but my target was 10 and I got close. A very good year!

### Favourites of 2014: Classical music
January 8, 2015

I had a good and rewarding year of listening. Much of my time was devoted to a few listening projects: for the Strauss anniversary year I listened to a big chunk of his operas (some of which I wrote about), and I listened chronologically to the symphonies and string quartets of both Shostakovich and Mieczyslaw Weinberg. In the cracks between these slabs, I enjoyed quite a few new, and new-ish, releases. Of those, the following were my favourites:

Transeamus
The Hilliard Ensemble (ECM New Series, 2014)

In December 2014 the Hilliard Ensemble gave their final concert, at last hanging up their tuning forks after 40 years of exquisite music-making. Though they had long since parted ways with their founder, Paul Hillier, and though the membership of the four-man ensemble has changed over the years — countertenor David James being the only original member still singing — they sustained a remarkably consistent sound and sensibility, and few, if any, vocal ensembles could match their technical excellence and artistic adventurousness. Their work has been important to me personally.
I had the privilege of hearing them live on two occasions, one of which (a performance of Arvo Pärt’s Miserere) I count among the great concert-going experiences of my life, and my music collection is littered with dozens of their recordings, many of which I hold close to my heart. I am sad to see them go. The Hilliard Ensemble has had two principal artistic faces: they are specialists in medieval and renaissance polyphony, and the bulk of their recorded legacy has been devoted to exploring that music, but they are also well-known for commissioning and championing the work of contemporary composers, most especially that of Arvo Pärt. On Transeamus, said to be their final recording, they return to their roots with a collection of carols from late medieval England. Some of the finest pre-Reformation English composers are represented, including William Cornysh and John Plummer, but most of these pieces are anonymous. The performances are excellent and frequently superb; I might prefer a little more swing in a jaunty carol like “Thomas Gemma Cantuariae” (Paul Hillier’s earlier recording with Theatre of Voices is my touchstone here), but hearing the Hilliards singing “Ecce quod natura” or Sheryngham’s marvellous “Ah, Gentle Jesu” makes clear why they have been ranked with the world’s great vocal ensembles. I miss them already. [Info] [Review]

*

Bach: Partitas
Igor Levit (Sony, 2014)

These days it can sometimes seem that the major classical labels do little more than reissue recordings from their glory days, or, when they do issue new recordings, their roster of artists seems to have been chosen based more on consideration of shapely figures than of artistic excellence. But then along comes a pianist like Igor Levit to undermine all such gloomy ruminations. Still in his 20s, he made his recording debut last year with a much-praised recording of the late Beethoven piano sonatas, and this year he followed it with this set of Bach’s six partitas (BWV 825–830).
These pieces don’t get as much attention as the Goldberg Variations or the Well-Tempered Clavier, much less Beethoven’s late sonatas, but Levit opens them up in a way that I have never heard before. As usual it is hard to put one’s finger on just what sets one pianist apart from another, especially at elite levels where technical proficiency is assured, but nonetheless Levit’s playing has a special quality: muscular, poised, self-effacing, yet somehow intensely inward-looking and contemplative. I find him mesmerizing, and heartily recommend this superb recording.

*

Ludford: Missa Inclina cor meum
Blue Heron, Scott Metcalfe (Blue Heron, 2013)

Blue Heron is an American choir that is engaged on a long-term project to record music from the Peterhouse Partbooks, one of the relatively few sources of pre-Reformation English polyphony to have escaped the bonfires of the reformers. This is the third volume in the series, and it is a jewel. Polyphony in England in the fifteenth century was clearly part of the same tradition as continental polyphony, but it was just as clearly an offshoot with its own distinctive qualities: there is a harmonic sweetness to the music, and the long, soaring soprano lines give the music an ecstatic quality that exceeds what one would typically have encountered on the continent. And this is music written on an ambitious scale: Nicholas Ludford’s Missa Inclina cor meum takes nearly 40 minutes just to present the Gloria, Credo, Sanctus, and Agnus Dei, and John Mason’s motet Ave fuit prima salus is 20 minutes long. I wish that I knew more about the context within which this music was originally written and performed. In any case, this is the first time these pieces have been recorded, and it has been worth the wait.
[Info]

*

Morales: Christmas Motets
Weser-Renaissance Bremen, Manfred Cordes (CPO, 2013)

A couple of years ago I praised a recording by Weser-Renaissance Bremen of Josquin’s music during my annual round-up, and here I am again with this disc of Christmas-themed music by Cristóbal de Morales. Morales was an important composer in sixteenth-century Spain, holding appointments in Avila and Toledo. He is probably best known today for his sublime setting of Parce mihi, Domine (made (relatively) famous by the Hilliard Ensemble in their collaboration with Jan Garbarek), but he was a prolific composer of masses, motets, and the like. This recording, with Manfred Cordes leading the choir, gathers together a set of motets on Christmas themes, ranging from settings of standard Christmas texts (O magnum mysterium, Puer natus est nobis) to pieces in honour of the Blessed Virgin (Sancta et immaculata virginitas, Salve nos stella maris). Some of the pieces are not directly associated with Christmas (Salve regina, for example), and others are actually more closely associated with other feasts (Missus est Gabriel, for instance, with the Annunciation). It must be said that the singing on this disc is spectacularly good. The pieces don’t pose any particularly dire technical challenges, but they do call for clarity, balance, and beauty of tone, and in these this choir is impeccable. As I said of their earlier Josquin recording, the sound has a burnished quality, as if glowing from within, and the recorded sound is immediate without being too close. It’s the single best recording of Morales’ music that I know of.

*

Guardian Angel
Rachel Podger (Channel, 2013)

I suppose it is possible that the prospect of 80 minutes of unaccompanied baroque violin playing might set some people on edge, but when the bow is wielded by Rachel Podger there is no need for concern.
She plays a variety of early baroque pieces which might have been — though whether they were in fact, I do not know — models for Bach’s more famous contributions to the repertoire. Two sonatas by Giuseppe Tartini (neither his most commonly heard “Devil’s Trill” sonata), one by Johann Georg Pisendel, and a few short pieces by Nicola Matteis were all new to me. Podger also includes a transcription for violin of one of Bach’s flute sonatas which, though it might be an odd choice from a programmatic point of view, is nonetheless wonderful to hear. The disc closes with a performance of Biber’s stunning Passacaglia (from his Rosary Sonatas), the piece which was arguably the pinnacle of solo violin music until Bach’s own Chaconne came along. Podger is one of the world’s greatest baroque musicians, and she plays like an angel. For what it’s worth, this disc won the recital award at last year’s BBC Music Magazine awards.

*

Invocation
Herbert Schuch (Naive, 2014)

A few excellent piano recitals came my way this year but I kept returning to this one, which features music inspired by the sound of bells. There are several pieces of French modernism with explicit bell-resonances — Ravel’s La vallée des cloches, Messiaen’s Cloches d’angoisse et larmes d’adieu and a piece inspired by it, Tristan Murail’s Cloches d’adieu, et un sourire… — but for me the chief attractions are the pieces by Liszt and Bach. Schuch plays selections from Liszt’s Harmonies poétiques et religieuses, including a moving performance of his glorious Bénédiction de Dieu dans la solitude, but the recital as a whole is held together by transcriptions of several of Bach’s beautiful chorales, played quietly and with great devotion. The overall feeling of the disc is one of meditative stillness, hushed and attentive. The sound is a bit distant for my liking, and the recording level is a bit low, but the playing and the choice of repertoire more than make up for it.
Here is a promotional video for the disc:

*

The Soviet Experience
Pacifica Quartet (Cedille, 2011-14)

Over the past few years the Pacifica Quartet has recorded a complete cycle of Shostakovich’s fifteen string quartets; the fourth and final volume appeared this year. The competition in this repertoire is tough: the famous (but incomplete) recordings by the Borodin Quartet are always in the back of one’s mind, and I have also long treasured the cycle by the Emerson String Quartet. But this new set deserves to be considered alongside them. The Pacifica Quartet plays with all the muscle and acerbity that one could wish for, really digging into the scores to bring out their nervous energy. The ensemble playing is immaculate, and the recorded sound is as clean as a whistle. It’s a superb collection of what is, almost certainly, the greatest chamber music of the twentieth century. And, as if that were not enough, each of the volumes in the set has been programmed with an additional quartet by one of Shostakovich’s contemporaries: Miaskovsky, Prokofiev, Weinberg, and Schnittke. Whether this broadening of focus is really enough to warrant the “The Soviet Experience” title under which the series has been proceeding is debatable, but the supplementary quartets do give one an opportunity to compare what Shostakovich was doing with what else was happening in Russian music at the time. And, as good as these other quartets are, it must be said that they renew one’s appreciation for just how colossally good Shostakovich was.
[Info] [Review] [Listen to samples]

***

Honourable mention:

Dvorak: Stabat Mater
Collegium Vocale Gent, Royal Flemish Philharmonic, Philippe Herreweghe (Phi, 2013) [Info][Promo video][Listen to samples]

Where Late the Sweet Birds Sang
Magnificat, Philip Cave (Linn, 2012) [Info][Review][Listen to samples]

Josquin: Missa Ave Maris Stella
Cappella Pratensis (Challenge, 2014) [Info][Listen to samples]

Schubert: Lieder
Ian Bostridge, Julius Drake (Wigmore Hall, 2014) [Info][Review][Listen to samples]

Bach: Transcriptions
Ensemble Contraste (La Dolce Vita, 2013) [Info][Review][Listen to samples]

Weinberg: Symphony No.10; Chamber Music
Kremerata Baltica, Gidon Kremer (ECM New Series, 2014) [Info][Review]

### Goldberg Variations: Aria
July 28, 2014

Bach died on this day in 1750. Here is Daniel Barenboim playing the Aria from the Goldberg Variations. A few years ago, when we had an old piano in the house, I spent a good deal of time trying to learn to play the opening bars of this piece. I never did get very far.

### Happy birthday, J.S. Bach
March 31, 2014

It’s Bach’s birthday, and, turning the tables, he has kindly offered us a gift. Here is the “Dona nobis pacem” section that concludes the Mass in B Minor: That’s Jordi Savall on the podium, leading La Capella Reial de Catalunya.

### Goldbergs, with commentary
January 14, 2014

In this short video Jeremy Denk talks us through one or two of the Goldberg variations. It’s an engaging little illustration of the simultaneous playfulness and formal structure of Bach’s music. I’d like to see more of this sort of thing.

### Favourites of 2013: Classical music
January 8, 2014

My music listening this year was anchored by a few large listening projects: I marked the anniversary years of Verdi, Wagner, and Britten by dedicating a good deal of time to hearing their major works again — or, in some cases, for the first time.
Given the composers involved, much of this music was opera, and I tried when possible to watch performances of their operas on DVD. I’ve written about some of that music in the consistently unpopular “Great moments in opera” series that I’ve been running (and a few more anniversary-related instalments will trickle out over the next month or two). I had planned a bunch of other focused listening projects for the year too — Beethoven’s symphonies, Shostakovich’s symphonies and string quartets, Schubert’s piano sonatas — but I didn’t get to them. They are bumped to 2014. In the meantime, I’d like to share notes on a few of the best recordings I heard for the first time this year. In most cases these are new or new-ish recordings, but not in all. The predominance of vocal music reflects my interests. The ordering of this list is capricious.

Weinberg: Complete Violin Sonatas
Linus Roth, Jose Gallardo (Challenge, 2013)

For the last few years the music of Mieczyslaw Weinberg has figured in my year-end accolades, and the same is true this year. This three-disc set is the first complete recorded set of Weinberg’s music for violin and piano, and what a treasure it is! Weinberg wrote six very substantial violin sonatas that exhibit the same musical intelligence and emotional heft that I have admired in his string quartets. As I said of the quartets, this music is “music all the way down”: no pedantry, no gimmicks, no self-conscious preoccupation with the music or its manner of composition — just good, smart, heart-felt music that is full of variety and endlessly interesting. I am happy to see Weinberg’s star rising higher on the strength of recordings like this one. Move over, Prokofiev.
Here is a brief video with musical excerpts and interviews with the musicians:

Elgar: The Apostles
Halle Orchestra, Sir Mark Elder (Halle, 2012)

This recording of Elgar’s oratorio about the life of Christ, from the calling of the apostles to the Ascension, won BBC Music Magazine’s “Recording of the Year”; there may have been some Anglo-centric prejudice informing that decision, but this is a terrific performance of a piece that hasn’t been very well served on record (and which, I suspect, might not finally be top-shelf music). The great fear with Elgar is that amateur British choral societies are going to get their hands on him, serving up bloated and sentimental renditions of his music before the potluck. It is amazing to hear this music sung as crisply and clearly as it is here, with a cool glow and as much dramatic emphasis as the music can bear without buckling. The singing is really tremendous, especially in the choral sections, and the sound is as clear and vivid as one could hope for. This recording has made me reconsider the merits of this piece, and made the reconsideration a pleasure. [Listen to excerpts]

Wagner
Jonas Kaufmann, Orchester der Deutschen Oper Berlin, Donald Runnicles (Decca, 2013)

Jonas Kaufmann, who glowers from the front cover of this CD, is considered one of the leading tenors in the opera world today, and he really is prodigiously gifted: a magnificent voice that rings from top to bottom, great power, and keen dramatic instincts. It is this last that has most impressed me on this disc of Wagner extracts. For all that Wagner was undoubtedly a great composer, it has nevertheless often seemed to me that his genius was principally manifest in his orchestral writing, and that his vocal lines were largely meandering eddies floating atop the surging currents, lacking dramatic shape and melodic interest in themselves.
I won’t say that this recording has changed my opinion about his melodic gifts, but it has certainly made me reconsider my assessment of the dramatic shape of his writing. Never before have I heard Wagner sung in a way that brought out the taut dramatic energy, the sheer poise and responsiveness of the part as much as Kaufmann does. He has helped me to hear Wagner with new appreciation, and that is enough to get this recording onto this list.

Libera Nos: The Cry of the Oppressed
Contrapunctus, Owen Rees (Signum, 2013)

The programme on this CD is a well-conceived one, gathering together a number of sixteenth- and seventeenth-century choral works on the themes of oppression and liberation by English and Portuguese composers. English Catholics in this period suffered persecution by the authorities, and Portugal was under the domination of the Spanish monarchy. Composers turned to these (mostly) liturgical texts to express their prayers for deliverance with a degree of personal feeling that is rare in public ecclesiastical music. The music is breathtakingly beautiful, of course, and the singing on this recording is very distinguished. Contrapunctus is a British choir formed in 2010; this is their first recording. They are a small ensemble of about ten voices, men and women, and they sing with astounding clarity and beauty; I don’t hear any problems anywhere. The multi-layered harmonic and rhythmic complexity of these pieces comes across sounding effortless (which it certainly is not) and, what is more important for this particular programme, there is nothing impersonal about the singing: it has a plaintive, striving quality that suits these pieces very well. Top shelf. [Listen to excerpts]

Ockeghem: Missa Mi-Mi
Cappella Pratensis, Rebecca Stewart (Ricercar, 1999)

It was a year or two ago that I discovered the Dutch ensemble Cappella Pratensis.
I liked them well enough to go searching through their back catalogue, and in this recording of Johannes Ockeghem’s Missa Mi-Mi I found a real gem. This Mass is one of Ockeghem’s most frequently recorded, and I have heard it many times, but never with this degree of translucence and calm repose. I tend to bristle at the common view that the music of this period is “relaxing” or “peaceful”, as though these frequently very difficult, intricate, and rigorously structured compositions were merely a kind of soporific. Yet in this case there would be something to that rough characterization, for this ensemble finds in this music a spaciousness and gentleness that lifts the eyes and touches the heart in a quite special way. The music breathes in long, slow rhythms, unhurried, as though content, at each moment, simply to be an expression of praise and a profusion of beauty. I don’t know that I’ve ever heard Ockeghem sung in this way before; I don’t know that I ever will again. The Mass is presented in a quasi-liturgical context, embedded within the Propers for the Mass of Holy Thursday, and the programme ends with Ockeghem’s magnificent motet Intemerata Dei mater. Here is the Kyrie of the Missa Mi-Mi:

Bach: Cantatas, Vol.55
Bach Collegium Japan, Masaaki Suzuki (BIS, 2013)

This disc is on this list not so much for its own merits — although it is exceptionally good — but for what it represents: the completion of Masaaki Suzuki and Bach Collegium Japan’s twenty-years-long project to record all of Bach’s surviving cantatas. Should I be ashamed to admit that I have collected all fifty-five volumes? Maybe so, but think of all the beer I didn’t buy. Japan might not be the country we think of first when we think of Bach (quite wrongly, perhaps), but the proof is in the pudding: the performances on this disc and across the whole set have been consistently excellent.
Suzuki’s approach to the music is “historically informed”, which means in practice that the choir is small and lithe, the textures light, and the rhythms sprightly. It’s Bach played and sung just the way I like it. Here is the Bach Collegium Japan performing one of the cantatas on this final disc. Bravo!

Whitacre: Sainte-Chapelle
Tallis Scholars (Gimell, 2013)

Eric Whitacre is one of the more successful young composers working today. As far as I know, he writes mostly choral music, in an accessible idiom within the reach of amateur choirs, and quite a few recordings of his music are now available. He was commissioned by the Tallis Scholars to write a piece to celebrate the 40th anniversary of their founding, and he came up with Sainte-Chapelle, a piece which imagines the stained-glass angels in that beautiful church singing the Sanctus. The piece was premiered early in 2013 and recorded shortly thereafter. It must be said that it is a gorgeous piece, growing in energy and luminosity as it goes. I had never before heard the Tallis Scholars sing anything other than Renaissance polyphony, but Whitacre’s writing respects their area of specialization, growing out of a plainchant melody just as so many Renaissance pieces do. I’ve played this recording so frequently this year that I cannot but include it on this year-end list.
***

Honourable mentions:

Ludford: Missa Regnum mundi
Blue Heron (Blue Heron, 2012) [Watch] [Listen]

Schubert: Nacht und Träume
Matthias Goerne, Alexander Schmalcz (Harmonia Mundi, 2011) [Listen]

Howells: Requiem
Choir of Trinity College, Cambridge; Stephen Layton (Hyperion, 2012) [Watch] [Listen]

Yoffe: Song of Songs
Rosamunde Quartett, Hilliard Ensemble (ECM New Series, 2011) [Listen]

Victoria: Officium Defunctorum
Collegium Vocale Gent, Philippe Herreweghe (Phi, 2013) [Listen]

Mahler: Symphony No.2 “Resurrection”
Philharmonia Orchestra, Benjamin Zander (Linn, 2013) [Listen]

Bremer Barock Consort, Manfred Cordes (CPO, 2007)

### Happy Birthday, Glenn Gould
September 25, 2012

Glenn Gould was born this day in 1932, which means that we are marking (what would have been) his 80th birthday. Gould is one of the few truly great musicians to have come from my country. He was a fascinating man, a complex man, with a winsome, if eccentric, manner, who had the gift of playing the piano like a — well, like both an angel and a fiend. Not everyone liked his playing, of course, but no-one could ignore it. Gould is especially associated with Toronto, the city in which I live, and for those who know where to look the place is haunted by him still. His piano sits just outside the performance hall in the CBC building downtown — the hall itself is called the “Glenn Gould Studio”, for that matter. I remember walking one day, a few years ago, in the Beaches neighbourhood and being surprised by a commemorative plaque in the front yard of one of the houses noting that it had been Gould’s house. My wife went into labour with our first child while we were eating in a diner which was a favourite of Gould’s. As a pianist, he played almost everything, from Gibbons to Webern, but of course he is especially known for his way with Johann Sebastian Bach.
I will not claim to be especially enamoured of his Bach playing; he is not the pianist I go to first when I go to Bach; yet I cannot deny that when I do hear him playing this music, it is an absorbing and fascinating experience. And so: happy birthday, Mr. Gould. Here is a film of him, as a fairly young man, playing the Contrapunctus IV from The Art of Fugue: (I do not know what is going on with the piano in this film. It is clearly a piano, but it has a jangly quality that is reminiscent of a harpsichord. A prepared piano?)
Most classical molecular dynamics (MD) simulations employ potential functions that do not account for the effects of induced electronic polarization between atoms, instead treating atoms as simple fixed point charges. Incorporating the influence of polarization in large-scale simulations is a critical challenge in the progress toward computations of increased fidelity, providing a more realistic and accurate representation of microscopic and thermodynamic properties. The Drude oscillator model represents induced electronic polarization by introducing an auxiliary particle attached to each polarizable atom via a harmonic spring. The advantage of the Drude model is that it preserves the simple particle-particle Coulomb electrostatic interaction employed in nonpolarizable force fields; its implementation in NAMD is thus more straightforward than alternative models for polarization. Performance results below show that the implementation of the Drude model maintains good parallel scalability, with an increase in computational cost of not more than twice that of using a nonpolarizable force field. The linked movie animates frames from a Drude water simulation. The Drude particles are shown as the smaller green spheres springing out from the oxygen atoms shown in red.

## Background

To model the polarizability $\alpha$ of a given atom with partial charge $q$, a mobile Drude particle carrying a charge $q_{\mathrm{D}}$ is introduced. The charge of the atom is replaced by $q-q_{\mathrm{D}}$ to preserve the net charge of the atom-Drude pair. The Drude particle is harmonically bound to the atomic particle with a large force constant $k_{\mathrm{D}}$. In the absence of a field, the Drude particle oscillates around the position of the atom $\vec{r}$, and the atom appears on average as a point charge $q$ at $\vec{r}$.
In the presence of an electric field $\vec{E}$, the Drude particle oscillates around a displaced position $\vec{d} = q_{\mathrm{D}} \vec{E}/k_{\mathrm{D}}$, and the average induced atomic dipole is $\vec{\mu} = q_{\mathrm{D}}^2 \vec{E}/k_{\mathrm{D}}$. It follows that the isotropic polarizability of the atom can be expressed as $\alpha=q_{\mathrm{D}}^2/k_{\mathrm{D}}$. Extending a molecular system by adding auxiliary Drude particles yields the polarizable force field, $U = U_{\text{self}} + U_{\text{bond}} + U_{\text{elec}} + U_{\text{LJ}}$, where $U_{\text{self}}$ represents the atom-Drude harmonic bonds, $U_{\text{bond}}$ is the contribution from the bonded internal energy terms (bonds, angles, and dihedrals), $U_{\text{elec}}$ represents all Coulombic electrostatic interactions (atom-atom, atom-Drude, and Drude-Drude), and $U_{\text{LJ}}$ is the Lennard-Jones "12-6" interaction. The electrostatic interactions for the 1-2 and 1-3 nuclei pairs with their associated Drude particles are treated with shielding functions designed by Thole [5, 6]. The force constant $k_{\mathrm{D}}$, which is assumed to be the same for all atoms without any loss of generality in the value of $\alpha$, is chosen such that the displacement $\vec{d}$ of the Drude particle remains much smaller than any interatomic distance, so that the resulting induced dipole $\vec{\mu}$ is almost equal to a point dipole. The self-consistent field (SCF) regime of induced polarization is obtained by allowing the Drude particles to relax to their local energy minima for any given fixed configuration of nuclei in the system. However, this procedure is computationally prohibitive. It is much more efficient to instead approximate the SCF energy surface using an extended Lagrangian dynamics strategy in which the Drude particles are allowed to carry kinetic energy.
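The point-dipole relations above are easy to check numerically. The following one-dimensional sketch (plain Python, reduced units; the values of q_D, k_D, and E are illustrative choices, not actual CHARMM Drude parameters) minimizes the energy of a single Drude particle on its spring in a uniform field and confirms that the displacement and induced dipole come out as d = q_D E / k_D and mu = alpha E:

```python
# Minimal 1-D check of the Drude point-dipole relations (reduced units;
# q_D, k_D, and E below are illustrative, not force-field values).
# A Drude particle of charge q_D on a spring of stiffness k_D in a
# uniform field E has energy
#   U(d) = 0.5 * k_D * d**2 - q_D * E * d,
# whose minimum should sit at d = q_D * E / k_D, giving an induced
# dipole mu = q_D * d = alpha * E with alpha = q_D**2 / k_D.

q_D = -1.5    # Drude charge (illustrative)
k_D = 500.0   # harmonic spring constant (illustrative)
E = 0.2       # applied field (illustrative)

alpha = q_D**2 / k_D   # isotropic polarizability

def energy(d):
    return 0.5 * k_D * d**2 - q_D * E * d

# Locate the energy minimum on a fine grid of displacements
n = 200_000
lo, hi = -0.01, 0.01
d_min = min((lo + i * (hi - lo) / n for i in range(n + 1)), key=energy)
mu = q_D * d_min       # induced dipole at the minimum

print(d_min, q_D * E / k_D)  # both approximately -0.0006
print(mu, alpha * E)         # both approximately 0.0009
```

Because the polarizability depends only on the ratio $q_{\mathrm{D}}^2/k_{\mathrm{D}}$, a force field can fix the spring constant globally (as the text notes) and fit the Drude charge per atom type to reproduce target polarizabilities.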
The Drude oscillators are maintained with a thermostat at a low temperature $T^{\star}$, chosen sufficiently small to leave only a small amount of kinetic energy in the atom-Drude oscillators, yet sufficiently large to allow the oscillators to continuously readjust their configurations to the room-temperature motion of the nuclei.

## Integration

The integration of the extended Lagrangian is performed using a dual thermostat to keep the Drude oscillators at a cold temperature $T^{\star}$ while the remaining warm degrees of freedom are maintained at a desired temperature $T$. In a previous implementation of the Drude model, this was achieved by designing a dual Nosé-Hoover thermostat acting on, and within, each nucleus-Drude pair [3]. In the present implementation for NAMD, this is achieved via dual stochastic Langevin thermostats. A Langevin thermostatting scheme is advantageous for high performance computing platforms because it can be implemented locally, avoiding the global communication of the kinetic temperatures $T$ and $T^{\star}$ required by the Nosé-Hoover scheme. Furthermore, a Langevin thermostatting scheme is extremely effective at equipartitioning energy, which confers great dynamical stability in large-scale heterogeneous biomolecular systems. The desired integration is obtained by separating the dynamics of each atom-Drude pair with coordinates $\vec{r}_i$ and $\vec{r}_{\D,i}$ in terms of the global motion of the center-of-mass $\vec{R}_i$ and the relative internal motion of the oscillator $\vec{d}_i = \vec{r}_{\D,i} - \vec{r}_i$.
Denoting $m_i$ as the total mass of the pair and $m_i' = m_\D (1 - m_\D / m_i)$ as the reduced mass of the oscillator, the equations of motion of the Drude-atom pair are

$$m_i \frac{\d^2}{\d t^2} \vec{R}_i = \vec{F}_{\vec{R},i} - \gamma \frac{\d}{\d t} \vec{R}_i + \vec{f}_i, \tag{1}$$

$$m_i' \frac{\d^2}{\d t^2} \vec{d}_i = \vec{F}_{\vec{d},i} - \gamma\,' \frac{\d}{\d t} \vec{d}_i + \vec{f}_i', \tag{2}$$

where $\vec{F}_{\vec{R},i} = -\partial\, U / \partial\, \vec{R}_i$ and $\vec{F}_{\vec{d},i} = -\partial\, U / \partial\, \vec{d}_i$ are the forces acting on the center-of-mass and on the reduced mass, respectively. The coupling to dual heat baths is modeled by the addition of damping and noise terms, where $\gamma$ and $\gamma\,'$ are the external and internal Langevin friction coefficients. The $\vec{f}_i$ and $\vec{f}_i'$ are time-dependent fluctuating random forces obeying the fluctuation-dissipation theorem, $\vec{f}_i = (2\gamma\, k_{\text{B}} T/m_i)^{1/2} R(t)$ and $\vec{f}_i' = (2\gamma\,' k_{\text{B}} T^{\star}/m_i')^{1/2} R'(t)$, where $R(t)$ and $R'(t)$ are univariate Gaussian random processes. The temperature $T$ of the "physical" thermostat is chosen to keep the atoms at the desired temperature, while the temperature $T^{\star}$ is set to a small value in order to reduce the thermal fluctuation of the Drude oscillators. If $T^{\star}$ is set to 0, the thermostat on the oscillators is eliminated. For low values of $T^{\star}$, the actual Drude oscillator temperature maintained by the thermostat will depend mostly on the balance between the energy dissipated by the damping term and the energy introduced by electrostatic coupling to the environment.
The forces on the centers-of-mass and on the displacements can be expressed in terms of the actual forces on the particles:

$$\vec{F}_{\vec{R},i} = -\frac{\partial\, U}{\partial\, \vec{r}_i} - \frac{\partial\, U}{\partial\, \vec{r}_{\D,i}}, \tag{3}$$

$$\vec{F}_{\vec{d},i} = -\Bigl( 1 - \frac{m_\D}{m_i} \Bigr) \frac{\partial\, U}{\partial\, \vec{r}_{\D,i}} + \Bigl( \frac{m_\D}{m_i} \Bigr) \frac{\partial\, U}{\partial\, \vec{r}_i}. \tag{4}$$

## Extensions to the force field

The Drude polarizable force field requires some extensions to the CHARMM force field. The Drude oscillators differ from typical spring bonds only in that they have an equilibrium length of zero. The Drude oscillators are optionally supplemented by a maximal bond length parameter, beyond which a quartic restraining potential is also applied. The force field is also extended by an anisotropic spring term that accounts for out-of-plane forces from a polarized atom and its covalently bonded neighbor with two more covalently bonded neighbors (similar in structure to an "improper" bond). The screened Coulomb correction of Thole is calculated between atom-Drude pairs that are otherwise excluded from nonbonded interactions. The new polarizable force field based on classical Drude oscillators also includes the use of off-centered massless interaction sites, i.e., "lone pairs" (LPs), to avoid the limitations of centrosymmetric-based Coulomb interactions [1]. Support for these massless sites requires some additional extension to NAMD. The coordinate of each LP site is constructed based on three "guide" atoms. The calculated forces on the massless LP must be transferred to the guide atoms, preserving total force and torque. After an integration step of velocities and positions, the position of the LP is updated based on the three guide atoms, along with additional geometry parameters that give displacement and in-plane and out-of-plane angles.
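As a quick numerical illustration of Eqs. (3) and (4), the sketch below transforms a pair of per-particle forces into the center-of-mass and displacement forces. It uses one dimension for brevity, and the masses and force values are illustrative only:

```python
# One-dimensional sketch of Eqs. (3)-(4): transforming the forces on an
# atom and its Drude particle into center-of-mass and displacement forces.
# Masses and forces are illustrative numbers, not real parameters.
m_D = 0.4   # Drude particle mass (hypothetical)
m_i = 16.0  # total mass of the atom-Drude pair (hypothetical)

# Per-particle forces: F_r = -dU/dr_i, F_rD = -dU/dr_{D,i}
F_r, F_rD = 2.0, -0.5

# Eq. (3): force on the center of mass is the sum of the particle forces
F_R = F_r + F_rD
# Eq. (4): force conjugate to the displacement d_i = r_{D,i} - r_i
F_d = (1.0 - m_D / m_i) * F_rD - (m_D / m_i) * F_r

assert abs(F_R - 1.5) < 1e-12
```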
## Implementation details

NAMD integrates the Langevin equation using the Brünger-Brooks-Karplus (BBK) method, a natural extension of the Verlet algorithm. Extending the atom-based Langevin equation to accommodate the dual thermostats for Drude oscillators required only small modifications to the existing source code. Because the integration of the centers-of-mass and the displacements is identical to the integration of the individual atoms, except for the update of the velocities with the thermostats, NAMD treats the entire system in standard atomic coordinates until the thermostats are applied to the velocities. At that point each atom-Drude oscillator pair is transformed to center-of-mass and relative displacement coordinates, the respective thermostats are applied to the transformed coordinates, and the coordinates are transformed back to standard atomic coordinates. Apart from the Drude particle attached to the oxygen, the SWM4-NDP Drude water model also imposes a rigid geometry, which is enforced via the Settle algorithm. The PSF file format has been extended to include Drude particles and LPs listed with the atoms. The number of columns in the "atoms" section is increased to include the alpha and thole parameters needed for the screened Coulomb correction of Thole. The Drude particles are required to immediately follow their parent atoms, and the LPs follow the hydrogens bonded to their parent atom. Two new sections are introduced: one giving the anisotropic spring interactions and associated force constants, and the other giving lone pair guide atoms and associated geometry parameters. The Drude oscillators (the $U_{\text{self}}$ term) are evaluated as part of the NAMD ComputeBonds class that performs evaluation of the typical spring bonds. The anisotropic spring interactions are evaluated in the new compute class ComputeAniso, with the energy added into the spring bond energy reduction.
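The back-and-forth coordinate transformation described above can be sketched as follows. The masses and coordinates are illustrative (this is not NAMD source code), one dimension is used for brevity, and velocities transform in the same way:

```python
# Sketch of the coordinate transformation applied around the thermostat
# step: atomic coordinates -> (center of mass, displacement) -> back.
# One dimension and illustrative masses; velocities transform the same way.
m_D, m_i = 0.4, 16.0      # Drude mass and total pair mass (hypothetical)
m_a = m_i - m_D           # mass of the parent atom

r, r_D = 1.00, 1.02       # atom and Drude particle positions

# Forward transform
R = (m_a * r + m_D * r_D) / m_i   # center of mass
d = r_D - r                       # relative displacement

# ... the dual thermostats would act on the velocities of R and d here ...

# Inverse transform back to atomic coordinates
r_back = R - (m_D / m_i) * d
r_D_back = R + (m_a / m_i) * d

assert abs(r_back - r) < 1e-12 and abs(r_D_back - r_D) < 1e-12
```

The round trip is exact, which is why the implementation can keep the whole system in atomic coordinates and only transform momentarily for the thermostat update.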
The screened Coulomb corrections of Thole are evaluated in the new compute class ComputeThole, with the energy added into the electrostatic energy reduction. For scalable parallelism, NAMD performs a spatial decomposition of atoms into "patches" [4]. NAMD further partitions the atoms into "hydrogen groups," which keep each hydrogen atom within the same spatial patch as the parent atom to which it is covalently bonded. This convention keeps each pair of atoms needed for imposing bond constraints together on the same processor. The LPs require a similar treatment. Since the force transfer and position updates must, respectively, follow and precede the force computation and communication phases, the cluster of atoms needed for an LP must be kept together on a processor for efficiency. To accomplish this, NAMD has been augmented to maintain "migration groups" as a superset of hydrogen groups, designed to keep the atoms in a group within the same patch, and thus on the same processor. Since migration groups are a little larger than hydrogen groups, the patch lengths must be padded at the edges with some extra space to ensure that migration groups separated by a patch do not come within the nonbonded cutoff distance of each other before the next atom migration, at which time an entire migration group might be moved to an adjacent patch. The set of guide atoms needed to determine the geometry of the LPs is set in the residue topology file (RTF). For maximum effectiveness, they are chosen to allow the splitting of large biomolecules (proteins, lipids, and nucleic acids) into the smallest possible independent migration groups. For example, in polypeptides and proteins this requires avoiding any overlap of guide atoms between successive residues. If patches become too large spatially (and too dense, due to the extra Drude particles and LPs), the existing NAMD two-away patch splitting can be used in any or all of the $x$-, $y$-, and $z$-dimensions.
## Performance results

The use of the Drude polarizable force field generally increases the computational density (and thus the cost) of simulating a system of atoms by about a factor of 5/3, due to the use of a water model having five charge sites (three atoms, one LP, and one Drude particle) rather than three. The additional work needed to calculate extra force terms and to redistribute the lone pair forces also scales linearly with the size of the system. Three basic systems that together include all features of the Drude model have been simulated: SWM4-NDP (water with four charge sites and a negative Drude particle [2]), decane, and N-methylacetamide (NMA). Figure 1 shows the parallel performance of NAMD on Blue Gene/P for all three systems modeled with and without polarizability. The test systems were simulated under isothermal-isobaric (NPT) conditions at 298.15 K. For each system, 20 000 steps of MD simulation were carried out for both the polarizable and nonpolarizable CHARMM force fields. For all three systems, the Drude computations scale well to 4096 processors (1 standard Blue Gene/P rack). For the nonpolarizable model, linear scaling holds only up to 2048 processors, due to the smaller number of particles, which is known to limit linear scaling. The SWM4-NDP system exhibits the largest relative speed ratio between the Drude model and the nonpolarizable model, about 1:2. The relative speed ratio of decane is about 1:1.6, and NMA has a ratio of about 1:1.8. These ratios demonstrate that the present implementation of the Drude model in NAMD is quite efficient, requiring no more than twice the computational cost of the nonpolarizable model.

## References

[1] E. Harder, V. M. Anisimov, I. V. Vorobyov, P. E. M. Lopes, S. Y. Noskov, A. D. MacKerell, and B. Roux. Atomic level anisotropy in the electrostatic modeling of lone pairs for a polarizable force field based on the classical Drude oscillator. Journal of Chemical Theory and Computation, 2(6):1587–1597, 2006.
[2] G. Lamoureux, E. Harder, I. V. Vorobyov, B. Roux, and A. D. MacKerell. A polarizable model of water for molecular dynamics simulations of biomolecules. Chemical Physics Letters, 418(1-3):245–249, 2006.

[3] G. Lamoureux and B. Roux. Modeling induced polarization with classical Drude oscillators: Theory and molecular dynamics simulation algorithm. Journal of Chemical Physics, 119(6):3025–3039, 2003.

[4] J. C. Phillips, R. Braun, W. Wang, J. Gumbart, E. Tajkhorshid, E. Villa, C. Chipot, R. D. Skeel, L. Kale, and K. Schulten. Scalable molecular dynamics with NAMD. Journal of Computational Chemistry, 26:1781–1802, 2005.

[5] B. T. Thole. Molecular polarizabilities calculated with a modified dipole interaction. Chemical Physics, 59:341–350, 1981.

[6] P. T. van Duijnen and M. Swart. Molecular and atomic polarizabilities: Thole's model revisited. Journal of Physical Chemistry A, 102(14):2399–2407, 1998.

## Sample simulation files

The Drude model is still under development, and the NAMD setup tools have not yet been extended to generate the necessary PSF for Drude simulation. Files for small sample systems including Drude water (SWM4-NDP), decane, and NMA are available below for downloading.

## Publications

High-performance scalable molecular dynamics simulations of a polarizable force field based on classical Drude oscillators in NAMD. Wei Jiang, David Hardy, James Phillips, Alex MacKerell, Klaus Schulten, and Benoit Roux. Journal of Physical Chemistry Letters, 2:87-92, 2011.

## Collaborators

- Benoit Roux (University of Chicago)
- Alexander D. MacKerell, Jr. (University of Maryland)
- Wei Jiang (Argonne National Laboratory)

Page created and maintained by David Hardy.
# Spline interpolation

(Redirected from Natural cubic spline)

In the mathematical subfield of numerical analysis, spline interpolation is a special form of interpolation where the interpolant is a piecewise polynomial called a spline. Spline interpolation is preferred over polynomial interpolation because the interpolation error can be made small even when using low-degree polynomials for the spline. Spline interpolation thus avoids the problem of Runge's phenomenon, which occurs when using high-degree polynomials.

## Definition

Given $n+1$ distinct knots $x_i$ such that

$$x_0 < x_1 < \cdots < x_{n-1} < x_n,$$

with $n+1$ knot values $y_i$, we are trying to find a spline function of degree $n$,

$$S(x) := \begin{cases} S_0(x) & x \in [x_0, x_1] \\ S_1(x) & x \in [x_1, x_2] \\ \vdots & \vdots \\ S_{n-1}(x) & x \in [x_{n-1}, x_n], \end{cases}$$

with each $S_i(x)$ a polynomial of degree $n$.

## Spline interpolant

Using polynomial interpolation, the polynomial of degree $n$ which interpolates the data set is uniquely defined by the data points. The spline of degree $n$ which interpolates the same data set is not uniquely defined, and we have to fill in $n-1$ additional degrees of freedom to construct a unique spline interpolant.

## Linear spline interpolation

Linear spline interpolation is the simplest spline interpolation. Graphically, we just connect the data points by straight lines; the spline is a polygon.
Algebraically, each $S_i$ is a linear function constructed as

$$S_i(x) = y_i + \frac{y_{i+1}-y_i}{x_{i+1}-x_i}(x-x_i).$$

The spline must be continuous at each data point, that is

$$S_{i-1}(x_i) = S_{i}(x_i), \qquad i=1,\ldots,n-1.$$

This is the case, as we can easily see:

$$S_{i-1}(x_i) = y_{i-1} + \frac{y_{i}-y_{i-1}}{x_{i}-x_{i-1}}(x_i-x_{i-1}) = y_i,$$

$$S_{i}(x_i) = y_i + \frac{y_{i+1}-y_i}{x_{i+1}-x_i}(x_i-x_i) = y_i.$$

## Quadratic spline interpolation

The quadratic spline can be constructed as

$$S_i(x) = y_i + z_i(x-x_i) + \frac{z_{i+1}-z_i}{2(x_{i+1}-x_i)}(x-x_i)^2.$$

The coefficients can be found by choosing a $z_0$ and then using the recurrence relation

$$z_{i+1} = -z_i + 2 \frac{y_{i+1}-y_i}{x_{i+1}-x_i}.$$

## Cubic spline interpolation

For a data set $\{x_i\}$ of $n+1$ points, we can construct a cubic spline with $n$ piecewise cubic polynomials between the data points. If

$$S(x)=\begin{cases} S_0(x), & x\in[x_0,x_1] \\ S_1(x), & x\in[x_1,x_2] \\ \vdots \\ S_{n-1}(x), & x\in[x_{n-1},x_n] \end{cases}$$

represents the spline function interpolating the function $f$, we require:

- the interpolating property, $S(x_i)=f(x_i)$;
- the splines to join up, $S_{i-1}(x_i) = S_i(x_i)$, $i=1,\ldots,n-1$;
- twice continuous differentiability, $S'_{i-1}(x_i) = S'_i(x_i)$ and $S''_{i-1}(x_i) = S''_i(x_i)$, $i=1,\ldots,n-1$.

For the $n$ cubic polynomials comprising $S$, this means determining $4n$ coefficients (since each cubic polynomial has four coefficients). The interpolating property gives $n+1$ conditions, and the joining and derivative conditions at the $n-1$ interior data points give $3(n-1)$ more, summing to $4n-2$ conditions. We require two further conditions, which can be imposed upon the problem for different reasons. One such choice results in the so-called clamped cubic spline, with

$$S'(x_0) = u, \qquad S'(x_n) = v$$

for given values $u$ and $v$.
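The linear spline construction above can be sketched in a few lines of Python (a minimal illustration, not taken from any spline library):

```python
# Minimal linear spline interpolant following the S_i formula above.
def linear_spline(xs, ys):
    """Return a function evaluating the linear spline through (xs, ys)."""
    def S(x):
        # Find the interval [x_i, x_{i+1}] containing x
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                slope = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
                return ys[i] + slope * (x - xs[i])
        raise ValueError("x outside knot range")
    return S

S = linear_spline([0.0, 1.0, 2.0], [0.0, 1.0, 0.0])
assert S(0.5) == 0.5 and S(1.5) == 0.5  # midpoints of each segment
assert S(1.0) == 1.0                    # continuity at the interior knot
```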
Alternatively, we can set

$$S''(x_0) = S''(x_n) = 0,$$

resulting in the natural cubic spline. The natural cubic spline is approximately the same curve as that created by the mechanical spline device. Clamped and natural cubic splines yield less oscillation about $f$ than any other twice continuously differentiable interpolating function. Another choice gives the periodic cubic spline:

$$S(x_0) = S(x_n), \qquad S'(x_0) = S'(x_n), \qquad S''(x_0) = S''(x_n).$$

### Interpolation using natural cubic spline

The natural cubic spline can be defined as

$$S_i(x) = \frac{z_{i+1} (x-x_i)^3 + z_i (x_{i+1}-x)^3}{6h_i} + \left(\frac{y_{i+1}}{h_i} - \frac{h_i}{6} z_{i+1}\right)(x-x_i) + \left(\frac{y_{i}}{h_i} - \frac{h_i}{6} z_i\right) (x_{i+1}-x),$$

where $h_i = x_{i+1} - x_i$. The coefficients can be found by solving the system of equations

$$\begin{cases} z_0 = 0 \\ h_{i-1} z_{i-1} + 2(h_{i-1} + h_i) z_i + h_i z_{i+1} = 6 \left( \dfrac{y_{i+1}-y_i}{h_i} - \dfrac{y_i-y_{i-1}}{h_{i-1}} \right), & i = 1, \ldots, n-1 \\ z_n = 0. \end{cases}$$

## Example

### Linear spline interpolation

Consider the problem of finding a linear spline for $f(x) = e^{-x^2}$ with the following knots:

$$(x_0,y_0) = \left(-1,\ e^{-1}\right), \quad (x_1,y_1) = \left(-\tfrac{1}{2},\ e^{-\frac{1}{4}}\right), \quad (x_2,y_2) = \left(0,\ 1\right), \quad (x_3,y_3) = \left(\tfrac{1}{2},\ e^{-\frac{1}{4}}\right), \quad (x_4,y_4) = \left(1,\ e^{-1}\right).$$
After directly applying the spline formula, we get the following spline:

$$S(x) = \begin{cases} e^{-1} + 2(e^{-\frac{1}{4}} - e^{-1})(x+1) & x \in [-1, -\frac{1}{2}] \\ e^{-\frac{1}{4}} + 2(1-e^{-\frac{1}{4}})(x+\frac{1}{2}) & x \in [-\frac{1}{2},0] \\ 1 + 2(e^{-\frac{1}{4}}-1)x & x \in [0,\frac{1}{2}] \\ e^{-\frac{1}{4}} + 2(e^{-1} - e^{-\frac{1}{4}})(x-\frac{1}{2}) & x \in [\frac{1}{2},1] \end{cases}$$

[Figure: the linear spline function (blue lines) plotted against the function it approximates (red).]
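The natural cubic spline construction above, i.e. solving the tridiagonal system for the $z_i$ (with $z_0 = z_n = 0$) and evaluating $S_i(x)$, can be sketched in pure Python. This is an illustrative implementation using the Thomas algorithm for the tridiagonal solve, not library code:

```python
def natural_cubic_spline(xs, ys):
    """Natural cubic spline through (xs, ys), with z_0 = z_n = 0."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for z_1 .. z_{n-1}: sub-diagonal h[k], diagonal
    # 2(h[k] + h[k+1]), super-diagonal h[k+1]; solved by the Thomas algorithm.
    a = [2.0 * (h[k] + h[k + 1]) for k in range(n - 1)]
    b = [6.0 * ((ys[k + 2] - ys[k + 1]) / h[k + 1]
                - (ys[k + 1] - ys[k]) / h[k]) for k in range(n - 1)]
    for k in range(1, n - 1):       # forward elimination
        w = h[k] / a[k - 1]
        a[k] -= w * h[k]
        b[k] -= w * b[k - 1]
    z = [0.0] * (n + 1)             # z[0] and z[n] stay zero (natural spline)
    for k in range(n - 2, -1, -1):  # back substitution
        z[k + 1] = (b[k] - h[k + 1] * z[k + 2]) / a[k]

    def S(x):
        for i in range(n):
            if xs[i] <= x <= xs[i + 1]:
                hi = h[i]
                return ((z[i + 1] * (x - xs[i]) ** 3
                         + z[i] * (xs[i + 1] - x) ** 3) / (6.0 * hi)
                        + (ys[i + 1] / hi - hi * z[i + 1] / 6.0) * (x - xs[i])
                        + (ys[i] / hi - hi * z[i] / 6.0) * (xs[i + 1] - x))
        raise ValueError("x outside knot range")
    return S

# The spline reproduces the knot values exactly
S = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0])
assert all(abs(S(xk) - yk) < 1e-9 for xk, yk in [(0, 0), (1, 1), (2, 0), (3, 1)])
```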
## Abstract

Connection patterns among Local Optima Networks (LONs) can inform heuristic design for optimisation. LON research has predominantly required complete enumeration of a fitness landscape, thereby restricting analysis to problems diminutive in size compared to real-life situations. LON sampling algorithms are therefore important. In this article, we study LON construction algorithms for the Quadratic Assignment Problem (QAP). We use machine learning on estimated LON features to predict search performance for competitive heuristics used in the QAP domain. The results show that by using random forest regression, LON construction algorithms produce fitness landscape features which can explain almost all search variance. We find that LON samples better relate to search than enumerated LONs do. The importance of fitness levels of sampled LONs in search predictions is crystallised. Features from LONs produced by different algorithms are combined in predictions for the first time, with promising results for this "super-sampling": a model to predict tabu search success explained 99% of variance. Arguments are made for the use-case of each LON algorithm and for combining the exploitative process of one with the exploratory optimisation of the other.

## 1 Introduction

A Local Optima Network (LON) (Ochoa et al., 2008) is a fitness landscape model used to understand or predict interaction between configuration spaces and search algorithms. LONs consist of local optima for nodes, and the edges are trajectories between local optima. Studying LON objects has brought colour and clarity to our understanding of how optimisation problems and search algorithms interact (Daolio et al., 2011; Ochoa and Veerapen, 2016; Herrmann, 2016; Hernando et al., 2017).
Principally, studies have enumerated the fitness landscape to build a comprehensive LON (Ochoa et al., 2008; Tomassini et al., 2008; Daolio et al., 2011; Verel et al., 2011; Herrmann et al., 2016; Veerapen et al., 2016; Verel et al., 2018). Their focus was on smaller problems, and every candidate solution was mapped to a local optimum within its basin of attraction. The baseline and the proof-of-concepts were necessary to establish LONs as promising tools. They have now attracted attention (Ochoa and Veerapen, 2018; Bozejko et al., 2018; Herrmann et al., 2018; Fieldsend, 2018; Chicano et al., 2018; Simoncini et al., 2018; Liu et al., 2019), and will likely be applied to increasing numbers of real (and much larger) problems. In anticipation of these requirements, refined LON sampling methods are needed.

The literature concerning how to sample LONs is in its embryonic stages. A few LON construction algorithms have been proposed, but have not been extensively tested. We describe here those proposed for the Quadratic Assignment Problem (QAP). Note that in lieu of formal naming by the various authors, we have labelled them as we see fit.

Two-Phase LON Sampling. Iclanzan et al. (2014) proposed a LON algorithm to sample $n$-dimensional QAP instances, which had separate phases for recording nodes and edges. Initially, local optima were found by hill-climbing from $2000 \times n$ starting solutions, and thereafter the local optima have a kick or perturbation applied. If the obtained local optimum is also in the node set, an edge is added (or, if it exists already, the edge weight is incremented). This algorithm is not used in our study as it seems to be a particular case of the two more recent proposals.

Markov-Chain LON Sampling. Ochoa and Herrmann (2018) introduced a LON algorithm for the QAP built with a competitive heuristic, Iterated Local Search (ILS) (Stützle, 2006).
For the remainder of this study, this is labelled Markov-Chain LON Sampling, to avoid confusion with the ILS optimisation heuristic itself. The sampling comprises multiple ILS runs, each being an adaptive walk on the local optima space. Every local optimum and connection between optima is recorded. The same framework for Travelling Salesman Problem LONs has been used with some success (Ochoa and Veerapen, 2018); this is also instrumented on top of a competitive heuristic in the domain, Chained Lin-Kernighan (Applegate et al., 2003).

Snowball LON Sampling. The same year, Verel et al. (2018) presented a Snowball LON Sampling algorithm. A random walk takes place on the local optima level. From each node in the walk, a recursive branching or snowballing samples the direct (locally optimal) neighbours. The local optima are found using kicks followed by hill-climbing, as in Markov-Chain LON Sampling.

The literature has not asked which method is more representative of the LON being sampled; that is, are the sampled landscapes similar to those induced by search heuristics during optimisation? There is also vagueness about whether particular LON construction algorithms relate more closely to particular search heuristics. A further open issue is that computational cost has not necessarily been considered with regard to LON construction algorithms. Can their efficiency therefore be compared? And finally, is sampling bias inherent within the methods themselves?

This work has a heavy focus on the relevance of sampled LONs to empirical search difficulty, which manifests as performance prediction using LON features as predictors. We argue this is fruitful in contrast to the analysis of LON features without aligning them with search variance, which seems arbitrary and counterintuitive to the purpose of LONs.
Search variance prediction is important because it helps ascribe algorithm proficiency to particular topological features and produces insight about how algorithmic operators interact with a problem.

The results and experiments of this study come in four parts. First, the comparison of LON samples with fully enumerated LONs; next, LON construction algorithms are provided a fixed computational budget; then LON methods are given various sets of input parameters; and finally the best features from each LON type are taken together into regression models. The first comparison of sampled versus enumerated LONs, in terms of predictive power for search variance, is given. For the first time, LON construction algorithms are given equal computational budget and the effect studied. We created a new set of features, which contains features obtained with the two different sampling methods, as opposed to sets of features coming from a single method.

The remainder of the article is structured as follows. Section 2 provides the baseline definitions to keep the work self-contained; our methods are in Section 3. Experimental setup is detailed in Section 4, followed by the results in Section 5, and discussion and conclusions in Sections 6 and 7, respectively.

## 2 Definitions

### 2.1 Fitness Landscapes

A fitness landscape is both a mathematical object and a metaphor for understanding optimisation processes. For evolutionary computation, a landscape is the tuple $(S,N,f)$, where $S$ is the set of all potential solutions; $N: S \longrightarrow 2^S$ is a function that defines the adjacency between solutions, that is, $N(s)$ is the set of neighbouring solutions of the solution $s$; and $f: S \longrightarrow \mathbb{R}$ is a fitness function which assigns a fitness value to each solution (Stadler, 2002).
When we consider that search heuristics typically use multiple operators (the operator corresponds to the fitness landscape component $N$ as a notion of connectivity between solutions), it becomes clear that search algorithms induce empirical fitness landscapes that are much more complex than the basic single-neighbourhood $(S,N,f)$ model. Two different search algorithms moving on the same configuration space can manufacture completely dissimilar topological landscape features.

### 2.2 Local Optima Networks

A subspace of the fitness landscape is considered, reducing the solution set exclusively to local optima. The neighbourhood now defines the accessibility between local optima, often based on the original neighbourhood relation of the fitness landscape. This is a Local Optima Network as per Ochoa et al. (2008).

#### 2.2.1 Nodes

The set of vertices, $LO$, consists of local optima (labelled with an index and corresponding fitness). For a minimisation problem, we define each such local optimum $lo_i$ to satisfy the condition of superior (that is, lower or equal) fitness over all other solutions, $x$, in its neighbourhood: $\forall x \in N(lo_i): f(lo_i) \leqslant f(x)$, where $N(lo_i)$ is the neighbourhood of $lo_i$.

#### 2.2.2 Edges

The network edges, $E$, are weighted and oriented. For Markov-Chain Sampling and Snowball Sampling LONs, we have an edge if combining perturbation and hill-climbing can transform the source to the destination. The edge weight is the probability of that transformation occurring. Formally, local optima $lo_i$ and $lo_j$ form the source and destination of an edge iff $w_{ij}>0$. Joining the nodes and edges induces a LON, $LON=(LO,E)$, a directed graph with nodes $lo_i \in LO$, where there exists an edge $e_{ij} \in E$, with weight $w_{ij}$, between two nodes $lo_i$ and $lo_j$ iff $w_{ij}>0$. Note that, in most cases, $w_{ij} \neq w_{ji}$.
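A LON of this kind can be represented directly as a weighted digraph. The tiny sketch below (with invented nodes, fitness values, and edge probabilities) illustrates the structure, including the asymmetry $w_{ij} \neq w_{ji}$ noted above:

```python
# Minimal LON representation as a weighted digraph (nested dicts).
# Nodes, fitness values, and edge probabilities are invented for illustration.
fitness = {"lo1": 10.0, "lo2": 7.0, "lo3": 4.0}  # fitness of each local optimum

# edges[src][dst] = probability that perturbation + hill-climbing moves
# the search from src to dst (the empirical edge weight w_ij)
edges = {
    "lo1": {"lo2": 0.6, "lo3": 0.4},
    "lo2": {"lo1": 0.1, "lo3": 0.9},  # note w_21 != w_12
    "lo3": {"lo3": 1.0},              # self-loop: kicks fall back into lo3
}

# Outgoing weights from each node form a probability distribution
for src, nbrs in edges.items():
    assert abs(sum(nbrs.values()) - 1.0) < 1e-12

# The digraph is oriented: weights are generally asymmetric
assert edges["lo1"]["lo2"] != edges["lo2"]["lo1"]
```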
### 2.3 Funnels

Funnels are documented fitness landscape features in combinatorial optimisation (Hains et al., 2011; Ochoa et al., 2017) but originate in the study of physical energy landscapes (Doye et al., 1999). The precise definition in evolutionary computation is an area of active research, but a series of papers (Ochoa et al., 2017; Ochoa and Veerapen, 2018; McMenemy et al., 2018) consider a funnel to be a basin of attraction at the local optima level. In Figure 1, we see multiple small basins; overall, though, these conform to two much larger basins. The large basins are funnels, and they contain many local optima organised in a fitness hierarchy. In this minimisation example, there exists a shallower suboptimal funnel (on the left) and a deeper, optimal funnel (on the right). In use for this work is the associated definition that a funnel is the collection of monotonically improving (in fitness) paths through local optima which terminate at a single local optimum (a funnel-floor). To find the paths, the funnel-floors are identified from a LON: simply the nodes with no outgoing improving edges (an edge is improving when its destination node has superior fitness to its source node). From those, a breadth-first search is conducted on the LON, exposing the set of paths which terminate at that funnel-floor. These paths together (both the nodes and the edges) comprise the funnel.

Figure 1: Abstracted from high-dimensional space, a suboptimal funnel (left) and the primary (optimal) funnel (right), for a minimisation problem.

## 3 Methodology

### 3.1 The Quadratic Assignment Problem

A Quadratic Assignment Problem (QAP) (Lawler, 1963) requires allocation of facilities to available locations. A permutation of facilities assigned to the indexed locations is the solution representation.
Instance specification for a QAP is formed by two matrices: one for the pairwise distances between locations, the other for the pairwise flows between facilities. Mathematically, with the problem dimension being $N$, the QAP is an assignment of $N$ facilities to $N$ locations, where each pair of locations has a distance $D_{ij}$ and each pair of facilities a flow value $F_{ij}$, encoded in the distance and flow matrices. A solution is a permutation of length $N$; its fitness is calculated by summing, over all pairs of assignments, the product of the distance and the flow. The objective function $g$ for a QAP solution $x$ is the minimisation of

$$g(x)=\sum_{i=1}^{N} \sum_{j=1}^{N} D_{ij} F_{x_i x_j}, \qquad x \in S,$$

where $x_i$ denotes the facility assigned to location $i$. The QAP is NP-hard and manifests in many real-world problems; it remains a testbed for new fitness landscape analysis (Tayarani-N and Prügel-Bennett, 2016; Verel et al., 2018; Ochoa and Herrmann, 2018). This reality, combined with the availability of QAP LON construction algorithms, led us to select the QAP for this study.

### 3.2 Instances

In our analysis, we consider the widely accepted benchmark library, the Quadratic Assignment Problem Library (QAPLIB) (Burkard et al., 1997). The library contains 128 instances, which also have the optimum fitness evaluations provided. We use 124 of these 128, omitting the largest three due to computational cost (tho150, tai150b, and tai256c), and also esc16f, because all entries in its distance matrix are zero. Our set results in $N \leqslant 128$. A set of sixty additional instances ($N=11$) not in QAPLIB also play a part in the results. Their size enables a complete enumeration of the LONs, which is required for comparison to sampled LONs of the same instances. As there is an insufficient number of small instances in the benchmark QAPLIB, this set is desirable to perform statistical analysis. QAP instances usually conform to four categories (Stützle, 2006), differentiated by the distributions in the distance and flow matrices.
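The QAP objective above translates directly into code. The sketch below uses the same convention as the formula ($x_i$ is the facility placed at location $i$); the toy matrices are invented for illustration:

```python
# Sketch of the QAP objective g(x) = sum_ij D[i][j] * F[x_i][x_j], where
# x[i] is the facility placed at location i. Toy matrices for illustration.
def qap_fitness(D, F, x):
    n = len(x)
    return sum(D[i][j] * F[x[i]][x[j]] for i in range(n) for j in range(n))

D = [[0, 5], [4, 0]]  # toy location-location distance matrix
F = [[0, 3], [2, 0]]  # toy facility-facility flow matrix

# Two possible assignments of 2 facilities to 2 locations:
assert qap_fitness(D, F, (0, 1)) == 5 * 3 + 4 * 2  # 23
assert qap_fitness(D, F, (1, 0)) == 5 * 2 + 4 * 3  # 22, the minimum
```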
The four categories are: uniform random distances and flows, random flows on grids, real-world problems, and random real-world-like problems. Below we describe our chosen instances in relation to these categories. Note that in QAPLIB, the problem dimension is given in the instance name.

Uniform random distances and flows. Distance and flow values for instances in this category are sampled uniformly at random; the spread of locations on the plane is random. From QAPLIB we use tai{12a, 15a, 17a, 20a, 25a, 30a, 35a, 40a, 50a, 60a, 64c, 80a, 100a} alongside rou{12, 15, 20}. Thirty of our size-11 generated instances also fall within this class.

Random flows on grids. Distance values are structured: locations lie on a square grid. Flow entries are random. From this category, the QAPLIB instances had{12, 14, 16, 18, 20}, nug{12, 14, 16a-b, 17, 18, 20, 21, 22, 24, 30}, scr{12, 15, 20}, sko{42, 56, 64, 72, 81, 90, 100a-f}, tho{30, 40}, and wil{50, 100} are considered.

Real-world. Instances of the QAP can be formulated from real-world problems, as is the case for some QAPLIB instances. In use in this study are the bur instances, which are built from stenotypist typing data; the kra instances, formulated to plan a hospital layout; the esc instances, for the testing of sequential circuits; the ste set, for fixing units on a grid; and els19, which deals with flows of hospital patients moving between departments. Our analysis includes the QAPLIB sets bur{a-h}, els19, esc{16a-e, g-j, 32e, 32g, 128}, kra{30a-b, 32}, lipa{20a-b, 30a-b, 40a-b, 50a-b, 60a-b, 70a-b, 80a-b, 90a-b}, and ste36{a-c}.

Random real-world-like. Although not strictly "real-world," these simulate real-world conditions by placing locations in clusters on the plane. In this way, distance values are either small (intra-cluster) or large (inter-cluster). Flows are random. From QAPLIB we use tai{12b, 15b, 20b, 25b, 30b, 35b, 40b, 50b, 60b, 80b, 100b}.
As above, thirty of our size-11 generated instances fall within this class.

Miscellaneous. The QAPLIB chr set does not fit neatly into a category; the only constraint is that the flows form a mathematical tree. The instances chr{12a-c, 15a-c, 18a-b, 20a-c, 22a-b, 25a} are used.

### 3.3 LON Algorithm 0: Exhaustive Enumeration

The method for exhaustively enumerating a local optima network was introduced alongside the model itself (Ochoa et al., 2008) and later adapted for the QAP (Daolio et al., 2011). LONs are enumerated using a best-improvement algorithm built on the elementary operator for the QAP: a pairwise exchange of facilities. The local optimum $h(x)$ of each solution $x$ is found by the best-improvement pivot rule, and in this way the nodes of the network are collected. Escape edges are defined according to a distance $d$ (the number of pairwise swaps between solutions) and a threshold $D > 0$. An edge $e_{ij}$ is traced between local optima $x_i$ and $x_j$ if a solution $s$ exists such that $d(s, x_i) \leq D$ and $h(s) = x_j$. The weight of this edge is

$$w_{ij} = |\{ s \in S : d(s, x_i) \leq D \text{ and } h(s) = x_j \}|.$$

This weight can be normalised by the number of solutions within reach at distance $D$, $|\{ s \in S : d(s, x_i) \leq D \}|$. In the present study, we set $D = 2$. The best-improvement algorithm is described in Algorithm 1.

### 3.4 LON Algorithm 1: Markov-Chain LON Sampling

We label the first LON sampling candidate (Ochoa and Herrmann, 2018) Markov-Chain LON Sampling for the duration of this study. LON nodes and edges are logged during multiple runs of an iterated local search (ILS). Each run starts at a random solution and hill-climbs to a local optimum, before "kicking" (perturbing) it with a large mutation. Hill-climbing is applied to the perturbed solution to obtain a local optimum, which becomes the input for the next perturbation and hill-climb cycle. The ILS used (Stützle, 2006) is competitive for the QAP. To build a LON, 200 runs of ILS track each local optimum and each movement between local optima.
Each of these traces is combined to form a single LON. The algorithm's process is given in Algorithm 2. Several parameters affect the sample: the number of ILS runs (runs), the termination condition (iterations without improvement, t), the pivot rule of hill-climbing (best improvement or first improvement, hc-type), and the strength of the kick or perturbation (k). The sample can be regarded as a joined group of adaptive walks at the local optima level. The multiple start points of the individual runs should, in principle, negate any issue of anisotropy (directional dependency) in the fitness landscape.

### 3.5 LON Algorithm 2: Snowball LON Sampling

The second LON algorithm candidate is called Snowball LON Sampling. Put concisely, it is a branching random walk at the local optima level. The process of snowballing has been used for decades in social science (Goodman, 1961) but was not considered for LON sampling until late 2018 (Verel et al., 2018).

Snowball LON Sampling starts at a random solution and hill-climbs to a local optimum (LO), which is taken to be the first node in a random walk over local optima. From this node, a recursive expansion of LO neighbours begins. Specifically, perturbations followed by hill-climbs are taken from the node to find LO neighbours, all of which are added to the growing sample. The process is then repeated for the neighbours, to find their own neighbours. Thereafter, Snowball LON Sampling returns to the first node on the main walk path and moves to a random LO neighbour (again by perturbation and hill-climbing), which becomes node two of the walk. The expansion of node two then begins. This continues until the random walk is complete. The pseudocode is shown in Algorithm 3. Three parameters tune the sample: the length of the walk (l), the number of neighbours included in an LO expansion (m), and the depth of the expansion (the number of steps away from the original path, d).
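The walk-and-expand procedure can be sketched abstractly. In the sketch below, `neighbour_lo` is a hypothetical stand-in for the real "perturb then hill-climb" step, operating on a toy space of integer-labelled local optima; l, m, and d are the parameters named above.

```python
import random

def snowball_sample(start, neighbour_lo, l, m, d):
    """Branching random walk over local optima (a sketch).
    neighbour_lo(x): a random neighbouring local optimum of x
    (in the real algorithm, perturbation followed by hill-climbing)."""
    nodes, edges = {start}, set()

    def expand(x, depth):
        # recursively attach m sampled neighbours, d levels deep
        if depth == 0:
            return
        for _ in range(m):
            y = neighbour_lo(x)
            nodes.add(y)
            edges.add((x, y))
            expand(y, depth - 1)

    current = start
    for _ in range(l):              # the main random walk of length l
        expand(current, d)          # snowball expansion around this node
        step = neighbour_lo(current)
        nodes.add(step)
        edges.add((current, step))
        current = step
    return nodes, edges

# toy space: 50 local optima labelled 0..49, neighbours a short hop away
rng = random.Random(0)
neighbour = lambda x: (x + rng.choice([1, 2, 3])) % 50
nodes, edges = snowball_sample(0, neighbour, l=10, m=3, d=2)
```

Note how the cost is fixed by the parameters (at most $l \cdot (m + m^2 + 1)$ neighbour evaluations here for $d = 2$), independent of the landscape — which is why, as Section 3.6 argues, node and edge counts of Snowball LONs are artefacts of the sampling configuration.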
### 3.6 LON Features

LON features are chosen with care for our study: a critical component of our analysis uses them as predictors in regression for performance prediction. Standard features taken from Markov-Chain LON Sampling LONs are the number of LON nodes sampled (number optima); the number of edges sampled (edges); the mean sampled fitness (mean fitness); the mean out-degree, that is, the mean number of outgoing edges per node (out-degree); and the diameter of the LON, the longest path between nodes (diameter).

Funnel-based LON features are also included. Our choices are the proportion of LON edges pointing at the global optimum funnel-floor (incoming global) and the mean funnel-floor fitness (funnel-floor fitness).

All features mentioned thus far are useful for Markov-Chain LON Sampling LONs but not for Snowball LONs. Snowball samples do not fit well with funnel metrics because heuristic trajectories are uniformly restricted by the nature of the sampling. In fact, the sampling would induce a consistent and increasingly large number of apparent "funnels" (according to our definition in Section 2.3). The short branching paths during node expansion would also lead to numerous nodes with no outgoing edges, which are technically defined as funnel-floors; in reality, improving transformations from these nodes are likely possible. Similarly, standard features such as the quantities of nodes, edges, and out-degrees in Snowball Sampling LONs are redundant as predictors; they are artefacts of the sampling parameters (the length of the random walk, the number of neighbours, and the depth of the snowball expansion).

We see in Figure 2 that even for a diminutive $N = 11$ instance, the LON construction algorithms capture rather different things. Taking a bird's-eye view, the Markov-Chain LON seems to be sparser in terms of nodes, edges, and edge density. In fact, the enumerated LON has 53 nodes and 1134 edges; the Markov-Chain LON sample has 36 nodes and 122 edges; and the Snowball LON sample has 43 nodes and 272 edges.
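The funnel-floor and incoming global definitions translate directly into code. Below is a minimal sketch over a LON held as plain dictionaries (the four-node fitness values and adjacency are hypothetical); minimisation is assumed, so an improving edge points to strictly lower fitness.

```python
def funnel_floors(fitness, out_edges):
    """Nodes with no outgoing improving edge (minimisation:
    improving means the destination has lower fitness)."""
    return [n for n, f in fitness.items()
            if not any(fitness[dst] < f for dst in out_edges.get(n, ()))]

def incoming_global(fitness, edge_list):
    """Proportion of LON edges whose destination is the
    global-optimum funnel-floor."""
    best = min(fitness, key=fitness.get)
    return sum(1 for _, dst in edge_list if dst == best) / len(edge_list)

# hypothetical four-node LON
fitness = {'a': 3.0, 'b': 2.0, 'c': 1.0, 'd': 2.0}
out_edges = {'a': ['b'], 'b': ['c'], 'd': ['c'], 'c': []}
edge_list = [('a', 'b'), ('b', 'c'), ('d', 'c')]
print(funnel_floors(fitness, out_edges))    # → ['c']
print(incoming_global(fitness, edge_list))  # 2 of 3 edges enter the optimum
```

The same pitfall discussed above is visible here: any node whose sampled out-edges happen to be missing (as in truncated Snowball expansions) is classified as a funnel-floor, whether or not improving moves exist in the full landscape.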
Figure 2: Three LONs extracted from the same $N = 11$ QAP instance (from the class "random real-like"). Each was produced with a different algorithm, as indicated in the subcaptions. Node size is proportional to fitness (larger is fitter). The global optimum is shown in red, all other nodes in grey.

For Snowball LON Sampling, features are based on the attributes encoded in the nodes and edges: the fitness distribution and the edge weight distribution. These are more appropriate to what the sample has to offer. Metrics based on the density and connection pattern of edges will not give meaningful information, because these are an artefact of the chosen sampling parameters rather than an inherent structure. In particular, funnel metrics calculated on Snowball LONs identified almost all nodes as funnel-floors. Instead, the chosen metric set includes the mean weight of self-loops (weight loops); the mean weight disparity of outgoing edges (weight disparity); and the fitness-fitness correlation between neighbours (fitness correlation). Statistics collected during snowballing are included too, namely: the mean length of a hill-climb to a local optimum (mean HC length); the maximum length of a hill-climb to a local optimum (maximum HC length); and the maximum number of paths to a local optimum (maximum HC paths).

## 4 Experimental Setup

In this section, we detail the setup of the LON construction algorithms, the predictive models used, and the QAP search algorithms.

### 4.1 Markov-Chain LON Sampling: Details

For the main QAPLIB study, seven parameter configurations are given to the Markov-Chain LON Sampling algorithm, producing seven Markov-Chain LONs for each QAPLIB instance.
Parameter choices are amongst those provided and suggested by the author of the base ILS heuristic (Stützle, 2006). They take the form {perturbation strength, local search type} and are: {$n/16$, first}; {$n/16$, best}; {$n/8$, first}; {$n/8$, best}; {$n/4$, first}; {$n/2$, first}; and {$3n/4$, first}. For all configurations, 200 runs are started from random solutions, so that diverse regions of the local optima space are sampled. Each run terminates when there has been no improvement for 1000 iterations. During preliminary runs, this condition was found to be large enough that local optima are not prematurely considered to be inescapable funnel-floors (recall Section 2.3), but low enough that the computational cost is reasonable.

Concerning the sensitivity of the results to the parameter choices: if the number of runs were lower, the relation from the constructed LONs to optimiser performance would be proportionally weaker. A lower value for the termination condition would produce LONs riddled with pseudo funnel-floors, presenting an image of the local optima space which is more complex than the empirical reality. The full process is given in Section 3 and Algorithm 2.

Seven sampling configurations for 124 problem instances give us 868 LONs for QAPLIB instances as produced by Markov-Chain LON Sampling. In addition, Markov-Chain LON Sampling LONs for the 60 synthetic size-11 instances and for the fixed-evaluations QAPLIB LONs were constructed using {$n/2$, first}.

### 4.2 Snowball LON Sampling: Details

Parameter configurations for Snowball LON Sampling take the form {l, m, d} and are as follows: {100, 20, 2}; {100, 30, 2}; {100, 50, 2}; {100, 60, 2}; {200, 30, 2}; {400, 30, 2}; and {400, 50, 2}. The choices are based on those suggested by the algorithm's author (Verel et al., 2018). To reiterate, we start once from a random solution and hill-climb to a local optimum, which becomes the first node of the random walk.
The algorithm terminates when the walk and its node expansions are complete, as described in Section 3 and Algorithm 3. As with Markov-Chain LON Sampling, there are 868 LONs (seven parameter sets $\times$ 124 instances) for QAPLIB produced by this algorithm. Snowball LON Sampling LONs for the synthetic size-11 instances and for the fixed-evaluations QAPLIB LONs were constructed using {100, 60, 2}.

### 4.3 Predictive Models: Details

Linear and random forest regression are used for algorithm performance prediction. Linear regression models describe the linear effect of predictors on the response variable; random forest is known for capturing nonlinear interactions between variables. For each, random repeated subsampling cross-validation is conducted for 100 iterations, each time shuffling the observations randomly, with an 80-20 training-test split. The predictors $x_j$ are centred and scaled to unit standard deviation as $(x_j - E(x_j)) / \mathrm{sd}(x_j)$. The random forest regressions use 500 decision trees; the number of features sampled as candidates during splits is $n/3$, where $n$ is the number of features.

For the bulk of our models (results Sections 5.2, 5.3, and 5.4), the $R^2$ is a reported statistic. This quantifies the amount of variance in the response variable (the obtained fitness after 1000 iterations, as a proportion of the desired fitness) which is explainable using the set of predictors. Also in use is the mean squared error (MSE), a measure of how accurate the model's predictions are. For the random forest regressions, the predictor importance rankings are reported.

A portion of our study compares LON enumerations with LON samples. For this, we have a different environment for modelling. Because the LON must be fully enumerable, the instances are bounded at $N = 11$ and are synthetic. There are 60 observations (feature vectors for 60 LONs), arising from two QAP classes (random "real-like" and uniformly random, respectively).
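The cross-validation protocol just described can be sketched as follows. To keep the sketch dependency-free, an ordinary least-squares fit stands in for the linear and random forest regressors of the study; the synthetic data are hypothetical.

```python
import numpy as np

def zscore(X):
    """Centre each predictor and scale it to unit standard deviation."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def repeated_subsampling_cv(X, y, iters=100, train_frac=0.8, seed=0):
    """Random repeated subsampling CV: shuffle, split 80-20,
    fit on the training part, record test MSE, repeat."""
    rng = np.random.default_rng(seed)
    mses = []
    for _ in range(iters):
        idx = rng.permutation(len(y))
        cut = int(train_frac * len(y))
        tr, te = idx[:cut], idx[cut:]
        A_tr = np.c_[np.ones(len(tr)), X[tr]]        # intercept column
        beta = np.linalg.lstsq(A_tr, y[tr], rcond=None)[0]
        pred = np.c_[np.ones(len(te)), X[te]] @ beta
        mses.append(float(np.mean((pred - y[te]) ** 2)))
    return float(np.mean(mses))

# hypothetical data: 60 observations, 2 predictors, near-linear response
rng = np.random.default_rng(1)
X = zscore(rng.normal(size=(60, 2)))
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.01, size=60)
mse = repeated_subsampling_cv(X, y)
```

In the study itself, the fitted model would instead be a linear or random forest regressor and the reported statistics $R^2$ and MSE would be averaged over the 100 repeats in the same way.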
Accounting for effects caused by QAP class is an objective for this set of experiments (the results are in Section 5.1), in particular because of the limited number of observations. These "random effects" can be controlled for in a hierarchical linear mixed model, also known as a mixed-effects model. To formalise the hierarchical modelling approach, let $y_{ik}$ be the search performance (for example, success rate or runtime) observed on instance $i$ from class $k$ ($k$ being real-like or uniform-random). The linear model is then:

$$y_{ik} = \beta_0 + \sum_{j=1}^{p} \beta_j x_{jik} + \alpha_k + \varepsilon_{ik}, \qquad \varepsilon_{ik} \sim N(0, \sigma^2),$$

where $x_{jik}$ is the value of predictor $j$ (e.g., number of local optima, number of funnels, etc.) for instance $i$ from class $k$; $\beta_j$ is its corresponding fixed effect on search performance; the $\alpha_k$ are the random effects conditional on problem class $k$, representing random deviations from the common intercept $\beta_0$; and finally the $\varepsilon_{ik}$ are the model residuals.

The summary statistics used for this subset of regression models (Section 5.1) are the conditional $R^2$, the marginal $R^2$, and the root mean squared error (RMSE), given as a proportion of the response variable's range. Conditional $R^2$ is the variance explained by the complete model. Marginal $R^2$ is the proportion of variance explained by only the fixed effects (Nakagawa and Schielzeth, 2013), negating or controlling for the random effects from QAP instance class; the fixed effects are the LON features. RMSE captures the standard deviation of the model's prediction errors (the residuals) and is therefore useful for estimating the variability of predictions. RMSE is in the same units as the response variable; for ease of interpretation, it is taken as a proportion of the total range of the response variable (search performance).

### 4.4 QAP Search Algorithms

For our regression response variables, search algorithm performance on the QAP instances is used.
The search algorithms are two competitive algorithms for the QAP: Improved Iterated Local Search (ILS; Stützle, 2006) and Robust Taboo Search (TS; Taillard, 1991). These are run 100 times on every QAP instance, and after 1000 iterations the obtained fitness is taken as a proportion of the desired fitness. This is our measure of how difficult the instance was for ILS or TS, and it is the response variable in the models seen in results Sections 5.2, 5.3, and 5.4. This metric does not hold up well for our $N = 11$ instances, however: they are easy for ILS and TS, meaning the obtained fitness is frequently the desired fitness. For those experiments (the models reported in results Section 5.1), the number of iterations to reach the global optimum is instead used as the response variable.

ILS comes with a wealth of viable parameter configurations. The choices for this study are first-improvement hill-climbing, in combination with a perturbation strength of $3n/4$ pairwise exchanges of facilities. This perturbation strength is known to perform well (Stützle, 2006) and is chosen to generate a sufficiently different solution from the previous local optimum.

## 5 Results

This section reports the experimental results, mostly in the form of regression model summaries. The primary aim is analysing the relationship between the LONs and search algorithm performance.

### 5.1 Approximating the LON Enumeration

#### 5.1.1 Features

Let us begin with a brief look at the relationships between features of LONs produced by the construction algorithms. Figure 3 plots the number of local optima in the LON (right) and the number of edges (left). Each scatterplot shows the feature (nodes or edges) from LON enumeration ($x$-axis) against the same feature from Markov-Chain LON Sampling ($y$-axis). The random "real-like" LONs are in red, uniform random in green.

Figure 3: Scatterplots for features of fully enumerated LONs ($x$-axis) versus features of sampled Markov-Chain LONs ($y$-axis).
The two plots share similar trends. The uniform random LONs (in green) suggest a reasonable correlation between the estimated and the true numbers of nodes and edges. The numbers of nodes and edges are, as expected, smaller when produced by the sampling algorithm than by the enumeration. Although less obvious from the plots, the same is true of the random real-like LONs (in red). In fact, taking all 60 LONs together, there is a 0.926 Pearson correlation between the enumerated node count and the sampled node count. The edge count correlation stands lower, at 0.793.

Snowball LON Sampling generates LONs with stronger still correlations: 0.996 with enumeration in terms of nodes and 0.977 for edges. These positive trends are seen in Figure 4. Let us clarify that extrapolating from these results to larger instances is difficult. Although Snowball LON Sampling often extracts a node set with size representative of the enumerated LON set, this might only be the case for very small instances. The algorithm is based on a single start point; therefore, the sample could potentially reflect a low-quality region of local optima.

Figure 4: Scatterplots for features of fully enumerated LONs ($x$-axis) versus features of sampled Snowball LONs ($y$-axis).

#### 5.1.2 Predictive Models

Consider now Table 1, which shows performance prediction models using features from the different types of LON sampling. In terms of explanation of variance in the {ILS, TS} responses (see the Marginal $R^2$ column), the models are somewhat weak after controlling for the random effects (allowing for differences between problem types). This can be seen by comparing the smaller Marginal $R^2$ values with the Conditional $R^2$ ones.
The unexplained variance in the response variable is likely due to the small number of observations (60), with fewer still belonging to each particular class (30 each). The purpose of the models, though, is comparison between LON construction algorithms, and the models do have low relative RMSE: typically one-fifth of the range of the response variable.

Taking the Marginal $R^2$ as an indication of quality, the two strongest models use sampled (Markov-Chain and Snowball) LONs to predict the ILS response on the problem. Notice that the Marginal $R^2$ and Conditional $R^2$ are equal for the former. This means there were no detectable random effects arising from the difference in problem class.

The result that sampled LONs explain more variance than enumerated ones is notable: intuition suggests that the enumerated LON (rows one and two in Table 1) would give superior information about the problem. Markov-Chain LON Sampling and Snowball LON Sampling are, however, based on operator combinations found in search algorithms used in practice on the QAP. Enumeration of LONs simply runs a local search from every possible solution to assign it a local optimum; this may not mirror actual algorithm-problem interactions. The result is interesting given that most LON work has analysed enumerated LONs (Ochoa et al., 2008; Daolio et al., 2011; Herrmann et al., 2016).

Table 1: Mixed-effects model summary statistics for the $N = 11$ LONs. Marginal (fixed effects) $R^2$, conditional (fixed and random effects) $R^2$, and root mean squared error are shown.
| LON Method | Response Variable | Marginal $R^2$ | Conditional $R^2$ | RMSE |
| --- | --- | --- | --- | --- |
| Exhaustive | ILS | 0.159 | 0.832 | 0.21 |
| Exhaustive | TS | 0.148 | 0.719 | 0.18 |
| Markov-Chain | ILS | 0.406 | 0.406 | 0.20 |
| Markov-Chain | TS | 0.220 | 0.816 | 0.20 |
| Snowball | ILS | 0.370 | 0.374 | 0.20 |
| Snowball | TS | 0.147 | 0.745 | 0.23 |

### 5.2 LON Sampling: Equal Computational Threshold

For this section, LONs were produced using a comparable computational budget: each LON algorithm was allowed 50,000 fitness function evaluations. We now investigate which provides more predictive information about search difficulty under these restricted conditions. Table 2 gives model summaries, including the LON algorithm used to produce the samples, the type of regression, the chosen response variable (ILS or TS, meaning the normalised fitness obtained on the problem), the $R^2$, and the mean squared error (MSE). Tables 3 and 4 then record the random forest predictor rankings for Markov-Chain LON Sampling and Snowball LON Sampling, respectively.

Looking over the ILS rows in Table 2, the MSE rates are comparable for the two LON construction algorithms; however, the $R^2$ values are higher for Markov-Chain than for Snowball under both types of regression. That said, Snowball LON features predicting TS with random forest regression form the strongest model shown, with around 52% of variance explained. This is especially interesting considering that the $R^2$ values are typically smaller for random forest than for linear regression here; this model is the only exception. Snowball LON Sampling is superior to its competitor when taking tabu search (TS) as the response variable, as can be seen by comparing rows 2 and 4 with rows 6 and 8.
The $R^2$ values are higher and the error rates smaller (by an order of magnitude) in the Snowball models, compared to those of Markov-Chain.

Table 2: Linear and random forest regression model summary statistics for the fixed-evaluations QAPLIB LONs. $R^2$ and MSE are given.

| LON Method | Regression | Response Variable | $R^2$ | MSE |
| --- | --- | --- | --- | --- |
| Markov-Chain | Linear | ILS | 0.471 | 0.002 |
| Markov-Chain | Linear | TS | 0.336 | 0.132 |
| Markov-Chain | RandomForest | ILS | 0.245 | 0.002 |
| Markov-Chain | RandomForest | TS | 0.154 | 0.144 |
| Snowball | Linear | ILS | 0.303 | 0.002 |
| Snowball | Linear | TS | 0.418 | 0.024 |
| Snowball | RandomForest | ILS | 0.230 | 0.002 |
| Snowball | RandomForest | TS | 0.521 | 0.013 |

Table 3: Predictor rankings for RF models using fixed-evaluations Markov-Chain QAPLIB LONs. Fitness features in italics.

| Predicting ILS | Importance Value | Predicting TS | Importance Value |
| --- | --- | --- | --- |
| incoming global | 0.058 | *mean fitness* | 2.130 |
| out-degree | 0.023 | *funnel-floor fitness* | 2.127 |
| number optima | 0.023 | edges | 1.742 |
| *mean fitness* | 0.022 | number optima | 1.736 |
| edges | 0.021 | out-degree | 0.349 |
| *funnel-floor fitness* | 0.019 | incoming global | 0.304 |

Table 4: Predictor rankings for RF models using fixed-evaluations Snowball QAPLIB LONs. Fitness features in italics.
| Predicting ILS | Importance Value | Predicting TS | Importance Value |
| --- | --- | --- | --- |
| maximum HC length | 0.043 | maximum HC length | 0.656 |
| weight disparity | 0.028 | *mean fitness* | 0.320 |
| *mean fitness* | 0.025 | mean HC length | 0.310 |
| maximum HC paths | 0.017 | *fitness correlation* | 0.273 |
| *fitness correlation* | 0.015 | weight disparity | 0.193 |
| weight loops | 0.014 | weight loops | 0.156 |
| mean HC length | 0.014 | maximum HC paths | 0.141 |

### 5.3 LONs to Predict QAPLIB Problem Difficulty

In this section, the set of 868 LON samples per LON algorithm (1,736 in total) for QAPLIB is used. Recall that these are associated with 124 QAPLIB instances, each having seven LONs produced per algorithm (differentiated by sampling parameter configuration). Table 5 shows the algorithm performance prediction model summaries, indicating the strength of the models in terms of $R^2$ and MSE. Tables 6 and 7 record predictor rankings for the random forest models for Markov-Chain LON samples and Snowball LON samples, respectively.

Rows 1 and 5 of Table 5 (linear regression to predict the ILS response) show that neither Markov-Chain nor Snowball LON features build a good model: the $R^2$ values are simply too weak. However, when we look at the equivalent random forest models (rows 3 and 7), we have strong models with around 64% (Markov-Chain) and 80% (Snowball) of variance explained, and very small relative MSE values. This could reflect the capability of regression trees to capture nonlinearity and complex interactions between predictors. The same trend is seen in the tabu search prediction models in Table 5: the linear models (rows 2 and 6) are weak, with small $R^2$ and comparably larger MSE.
The random forest results (rows 4 and 8) are strong, with 90% or more of variance explained by features of LONs produced by either LON construction algorithm.

Table 5: Linear and random forest regression model summary statistics for the full QAPLIB LON set. $R^2$ and MSE are given.

| LON Method | Regression | Response Variable | $R^2$ | MSE |
| --- | --- | --- | --- | --- |
| Markov-Chain | Linear | ILS | 0.043 | 0.002 |
| Markov-Chain | Linear | TS | 0.180 | 0.081 |
| Markov-Chain | RandomForest | ILS | 0.645 | 0.000 |
| Markov-Chain | RandomForest | TS | 0.925 | 0.008 |
| Snowball | Linear | ILS | 0.057 | 0.003 |
| Snowball | Linear | TS | 0.252 | 0.029 |
| Snowball | RandomForest | ILS | 0.804 | 0.000 |
| Snowball | RandomForest | TS | 0.922 | 0.003 |

Table 6: Predictor rankings for the main QAPLIB RF models using Markov-Chain LON samples. Fitness features in italics.

| Predicting ILS | Predicting TS |
| --- | --- |
| *mean fitness* | *mean fitness* |
| *funnel-floor fitness* | *funnel-floor fitness* |
| number optima | edges |
| incoming global | diameter |
| edges | number optima |
| diameter | out-degree |
| out-degree | incoming global |

Table 7: Predictor rankings for the main QAPLIB RF models using Snowball LONs.
| Predicting ILS | Predicting TS |
| --- | --- |
| *mean fitness* | *mean fitness* |
| maximum HC paths | *fitness correlation* |
| *fitness correlation* | maximum HC paths |
| mean HC length | weight disparity |
| weight disparity | maximum HC length |
| maximum HC length | mean HC length |
| weight loops | weight loops |

Taking Markov-Chain and Snowball as rivals for predicting the ILS response, Snowball is slightly superior; for TS, they are roughly indistinguishable in predictive power: Markov-Chain has slightly higher $R^2$ but also higher error.

Now let us consider the contributions of the Markov-Chain Sampling predictors in our models from Table 6. The variable importances are calculated as follows. For each tree, prediction accuracy is calculated on the subsample of observations not used in building that tree. To obtain the importance of a variable (predictor), a random shuffle is applied to that variable's values in the subsample, with everything else kept constant, and the prediction accuracy is tested again on these permuted data. The resulting accuracy decrease, averaged over the trees, is the variable importance for the predictor. Each column is for a random forest model, with predictors ordered from best (top) to worst (bottom).

Immediately apparent is the importance of the sampled fitness distribution. In fact, mean fitness is the top predictor for all four models. Funnel-floor fitness is the second-best predictor for both Markov models, again showing the role of sampled fitness levels. Out-degree is not an important Markov-Chain LON feature for algorithm performance prediction, ranking lowest and second-lowest in the models.

Table 7 reports the rankings for the Snowball LON Sampling RF models. Just as in Table 6, mean fitness is top-ranked in each.
Also important for predicting TS is fitness correlation, which appears second. In the middle are the local search metrics collected during the snowballing process and the weight disparity. Ranking lowest is weight loops, which pertains to unsuccessful escape attempts from local optima. These findings are unusual: the bulk of the LON literature focuses heavily on edge connection patterns and densities in the LON (Herrmann et al., 2016; Thomson et al., 2017; McMenemy et al., 2018), rather than simply the fitness distribution the sample reflects.

### 5.4 Best of Both Sampling Methods?

Markov-Chain LON Sampling and Snowball LON Sampling produce LONs which capture different fitness landscape information. Features to do with edge and node density, as well as those relating to path lengths, are redundant as predictors when calculated on a Snowball LON: they are prescribed by the sampling algorithm's parameters. We now put forward the speculation that the union of different features from the two LON types, for the same problem instances, might predict more accurately than using one LON type alone. The three best-ranked predictors (according to Section 5.3) for each response variable (ILS or TS) and each LON method (Markov-Chain or Snowball) are bundled together into single models. The results are in Table 8; the rankings are in Table 9.

Table 8 further shows that linear regression may not suit LON performance prediction models, with rows 1 and 2 showing weak $R^2$ values; the weakness suggests they miss nonlinear interactions between variables. This is in contrast to the random forest counterparts (rows 3 and 4), which are strikingly stronger than the linear models: they account for 97% (ILS) and 99% (TS) of search variance. The error rates (see the MSE column) are low, and the $R^2$ values surpass those seen when using features from one type of LON alone.

Table 8: Model summary statistics when using features from both LON types.
| Regression | Response Variable | $R^2$ | MSE |
| --- | --- | --- | --- |
| Linear | ILS | 0.076 | 0.003 |
| Linear | TS | 0.254 | 0.026 |
| Random Forest | ILS | 0.972 | 0.000 |
| Random Forest | TS | 0.991 | 0.000 |

Table 9: Predictor rankings for RF models using both LON types.

| Predicting ILS | Predicting TS |
| --- | --- |
| mean fitness (Snowball LONs) | mean fitness (Snowball LONs) |
| funnel-floor fitness | funnel-floor fitness |
| mean fitness (Markov LONs) | mean fitness (Markov LONs) |
| number optima | maximum HC paths |
| maximum HC length | fitness correlation |
| mean HC length | edges |

In Table 9, we show the six features ranked by importance in the RF models. As in Section 5.3, the prominence of the fitness features is obvious. Both models share the same top three features. The mean sampled Snowball LON fitness (mean fitness (Snowball LONs)) is top-ranked. In second place is funnel-floor fitness, calculated on Markov LONs only. In third place is the Markov LON mean fitness. As in Section 5.3, the role of the LON connectivity variable (edges) is weak, ranking last in its model.

## 6 Discussion

Markov-Chain LON Sampling has not previously been validated against "ground truth" enumerated LONs. Section 5.1 showed that for small ($N$ = 11) QAP instances, both Markov-Chain LON Sampling and Snowball LON Sampling produced accurate approximations of the number of nodes (local optima) and edges (optima transformations) when compared to the "ground-truth" fully enumerated LON. Markov-Chain LON Sampling did not find all of them, but the quantities correlated with the enumerated LON values. Snowball LON Sampling found almost the same number of nodes and edges as the enumerated LON.
Although this is encouraging, caution is needed before extrapolating to larger problems. Both LON construction algorithms are designed for large problems, and use a corresponding amount of computation. There may be a particular consideration when scaling Snowball LON Sampling to larger problems: the process uses short, restricted paths through local optima and is not based on optimisation. It is therefore argued that Snowball LON Sampling may be highly dependent on the random starting solution, since the rest of the sample is built around it. In a large fitness landscape, the obtained sample may actually consist of rather low-quality local optima, far from the promising regions near the global optimum. In future work, the intention is to examine the relationship between LON sampling parameter choices and problem dimension for both algorithms. Another intriguing observation from the sampling validation (Section 5.1) was that, generally, sampled LON features explained more search variance than those of the enumerated LONs. This is important because the vast majority of LON research (Ochoa et al., 2008; Daolio et al., 2011; Verel et al., 2011; Herrmann et al., 2016) has dissected fully enumerated LONs, even using their features for algorithm performance prediction (Daolio et al., 2012; Herrmann et al., 2018; Thomson et al., 2018). Our results suggest that LON construction algorithms may approximate or infer as-yet-unseen fitness landscapes better than best-improvement exhaustive enumeration of the local optima level. Intuitively, this makes sense: LON sampling algorithms use search operators, and sequences of operators, which overlap with those used by search algorithms to optimise QAP instances. In Section 5.2, we saw that capping the LON construction algorithms at 50,000 fitness function evaluations produces LONs whose features can explain up to around 50% of search variance.
Snowball LON Sampling seemed to infer the tabu search response variable better than its competitor; Markov-Chain Sampling LONs did slightly better on the ILS response. In Section 5.3, the main QAPLIB LONs showed solid predictive power for search algorithm performance. Both Markov-Chain LON Sampling and Snowball LON Sampling seem to produce samples that infer prospective fitness landscapes, with much of the variance in search being explained by the features. Sections 5.3 and 5.4 showed that random forests yield more promising regression models than linear regression. This suggests the use of RF in future algorithm performance prediction models built on fitness landscape features, in pursuit of capturing complex variable interactions. Another aspect of the results was the apparent importance of the fitness levels sampled by the LON construction algorithms, seen in Sections 5.3 and 5.4: metrics about the fitness distribution were repeatedly present in the top two predictors for the random forest models. The strongest models we saw came from combining the best features from the Markov-Chain LONs with the best features from the Snowball LONs in Section 5.4, accounting for 97% (ILS) and 99% (TS) of search variance. The suggestion is therefore to combine features from both LON construction algorithms as a "super-sample" in predictive models. In the future, the intention is to implement a combined LON algorithm which draws on both the exploitation of the Snowball algorithm and the exploration of the Markov algorithm. There are certainly limitations to the approach and to the results presented. The results are valid but currently hold only for our particular search operator choices, configurations of the LON construction algorithms, choices of QAP heuristic search algorithms, choice of how search success is quantified, chosen neighbourhood function, and the features taken from the LONs. There is also the issue of nondeterminism in the sampling algorithms.
## 7 Conclusions

Local Optima Network (LON) construction algorithms have been scrutinised to gain information about their ability to infer future fitness landscapes. The two most recent LON sampling algorithms were studied, here named Markov-Chain LON Sampling and Snowball LON Sampling. The aim was to compare them and assess the quality of the resultant LON samples for predicting performance on the underlying problems. We also used an algorithm for exhaustively enumerating LONs. All were applied to the Quadratic Assignment Problem, but the frameworks could easily be adapted to a different problem, in the manner of a metaheuristic framework. The QAP instance set included both benchmark and generated problems. The study began with validation of LON samples by comparison with "ground truth" (the enumerated LON for the same instance). Then Markov-Chain LON Sampling and Snowball LON Sampling were given 50,000 fitness evaluations to produce LONs of QAPLIB instances. After that, Markov-Chain LON Sampling and Snowball LON Sampling were given their default computation budgets (according to the specifications given by the LON algorithm authors). A large set of QAPLIB instances, of sizes up to $N=128$, had fourteen LONs mapped in this way. Finally, the three most important features for each type of LON sample were taken and joined into one model. The results suggested that for optimisation predictions made with LON features, random forests better capture the nonlinear effects between variables in the fitness landscape. Repeatedly, we saw the apparent key role of sampled fitness levels in explaining search algorithm variance. Much of the LON literature focuses on edge connectivity patterns rather than the fitness distribution amongst local optima: is this misguided? A surprising result was that the enumerated LON had less predictive power than the sampled LONs. An intuitive thought is that better prediction comes from more extensive fitness landscape information; this is not the case here.
An argument is made that this stems from the LON sampling algorithms sharing similar operators, or indeed sequences of operators, with the heuristics used in QAP research. This study ends with a note that there is an abundant field of work waiting to be done in this area, and we are excited to explore it. In particular, we acknowledge that our results are of course dependent on our algorithm configurations, neighbourhood function choice, choices of search algorithms, choice of problem domain, and so on. The pursuit of different choices is the next endeavour.

## References

Applegate, D., Cook, W., and Rohe, A. (2003). Chained Lin-Kernighan for large traveling salesman problems. INFORMS Journal on Computing, 15(1):82–92.

Bozejko, W., Gnatowski, A., Nizyński, T., Affenzeller, M., and Beham, A. (2018). Local optima networks in solving algorithm selection problem for TSP. In International Conference on Dependability and Complex Systems, pp. 83–93.

Burkard, R. E., Karisch, S. E., and Rendl, F. (1997). QAPLIB – a quadratic assignment problem library. Journal of Global Optimization, 10(4):391–403.

Chicano, F., Ochoa, G., Whitley, D., and Tinós, R. (2018). Enhancing partition crossover with articulation points analysis. In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 269–276.

Daolio, F., Tomassini, M., Vérel, S., and Ochoa, G. (2011). Communities of minima in local optima networks of combinatorial spaces. Physica A: Statistical Mechanics and Its Applications, 390(9):1684–1694.

Daolio, F., Verel, S., Ochoa, G., and Tomassini, M. (2012). Local optima networks and the performance of iterated local search. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, pp. 369–376.

Doye, J. P., Miller, M. A., and Wales, D. J. (1999). The double-funnel energy landscape of the 38-atom Lennard-Jones cluster. The Journal of Chemical Physics, 110(14):6896–6906.

Fieldsend, J. E. (2018).
Computationally efficient local optima network construction. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 1481–1488.

Goodman, L. A. (1961). Snowball sampling. The Annals of Mathematical Statistics, pp. 148–170.

Hains, D. R., Whitley, L. D., and Howe, A. E. (2011). Revisiting the big valley search space structure in the TSP. Journal of the Operational Research Society, 62(2):305–312.

Hernando, L., Daolio, F., Veerapen, N., and Ochoa, G. (2017). Local optima networks of the permutation flowshop scheduling problem: Makespan vs. total flow time. In 2017 IEEE Congress on Evolutionary Computation, pp. 1964–1971.

Herrmann, S. (2016). Determining the difficulty of landscapes by PageRank centrality in local optima networks. In Evolutionary Computation in Combinatorial Optimization, pp. 74–87.

Herrmann, S., Ochoa, G., and Rothlauf, F. (2016). Communities of local optima as funnels in fitness landscapes. In Proceedings of the 2016 Genetic and Evolutionary Computation Conference, pp. 325–331.

Herrmann, S., Ochoa, G., and Rothlauf, F. (2018). PageRank centrality for performance prediction: The impact of the local optima network model. Journal of Heuristics, 24(3):243–264.

Iclanzan, D., Daolio, F., and Tomassini, M. (2014). Data-driven local optima network characterization of QAPLIB instances. In Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 453–460.

Lawler, E. L. (1963). The quadratic assignment problem. Management Science, 9(4):586–599.

Liu, J., Abbass, H. A., and Tan, K. C. (2019). Problem difficulty analysis based on complex networks. In Evolutionary Computation and Complex Networks, pp. 39–52.

McMenemy, P., Veerapen, N., and Ochoa, G. (2018). How perturbation strength shapes the global structure of TSP fitness landscapes. In European Conference on Evolutionary Computation in Combinatorial Optimization, pp. 34–49.
Nakagawa, S., and Schielzeth, H. (2013). A general and simple method for obtaining $R^2$ from generalized linear mixed-effects models. Methods in Ecology and Evolution, 4(2):133–142.

Ochoa, G., and Herrmann, S. (2018). Perturbation strength and the global structure of QAP fitness landscapes. In Proceedings of the 15th International Conference, Part II, pp. 245–256.

Ochoa, G., Tomassini, M., Vérel, S., and Darabos, C. (2008). A study of NK landscapes' basins and local optima networks. In Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, pp. 555–562.

Ochoa, G., and Veerapen, N. (2016). Deconstructing the big valley search space hypothesis. In European Conference on Evolutionary Computation in Combinatorial Optimization, pp. 58–73.

Ochoa, G., and Veerapen, N. (2018). Mapping the global structure of TSP fitness landscapes. Journal of Heuristics, 24(3):265–294.

Ochoa, G., Veerapen, N., Daolio, F., and Tomassini, M. (2017). Understanding phase transitions with local optima networks: Number partitioning as a case study. In European Conference on Evolutionary Computation in Combinatorial Optimization, pp. 233–248.

Simoncini, D., Barbe, S., Schiex, T., and Verel, S. (2018). Fitness landscape analysis around the optimum in computational protein design. In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 355–362.

Stadler, P. F. (2002). Fitness landscapes. In Biological Evolution and Statistical Physics, pp. 183–204.

Stützle, T. (2006). Iterated local search for the quadratic assignment problem. European Journal of Operational Research, 174(3):1519–1539.

Taillard, É. (1991). Robust taboo search for the quadratic assignment problem. Parallel Computing, 17(4–5):443–455.

Tayarani-N, M.-H., and Prügel-Bennett, A. (2016). An analysis of the fitness landscape of travelling salesman problem.
Evolutionary Computation, 24(2):347–384.

Thomson, S. L., Daolio, F., and Ochoa, G. (2017). Comparing communities of optima with funnels in combinatorial fitness landscapes. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pp. 377–384.

Thomson, S. L., Verel, S., Ochoa, G., Veerapen, N., and McMenemy, P. (2018). On the fractal nature of local optima networks. In European Conference on Evolutionary Computation in Combinatorial Optimization, pp. 18–33.

Tomassini, M., Verel, S., and Ochoa, G. (2008). Complex-network analysis of combinatorial spaces: The NK landscape case. Physical Review E, 78(6):066114.

Veerapen, N., Ochoa, G., Tinós, R., and Whitley, D. (2016). Tunnelling crossover networks for the asymmetric TSP. In International Conference on Parallel Problem Solving from Nature, pp. 994–1003.

Verel, S., Daolio, F., Ochoa, G., and Tomassini, M. (2018). Sampling local optima networks of large combinatorial search spaces: The QAP case. In Parallel Problem Solving from Nature, pp. 257–268.

Verel, S., Daolio, F., Ochoa, G., and Tomassini, M. (2011). Local optima networks with escape edges. In International Conference on Artificial Evolution (Evolution Artificielle), pp. 49–60.
https://quantumcomputing.stackexchange.com/questions/5204/what-is-recursive-fourier-sampling-and-how-does-it-prove-separations-between-bqp
# What is recursive Fourier sampling and how does it prove separations between BQP and NP in the black-box model? ## Context: I was going through John Watrous' lecture Quantum Complexity Theory (Part 1) - CSSQI 2012. Around 48 minutes into the lecture, he presents the following: No relationship is known between $$\mathsf{BQP}$$ and $$\mathsf{NP}$$...they are conjectured to be incomparable. So far so good. Then I came across Scott Aaronson's answer to Can a parallel computer simulate a quantum computer? Is BQP inside NP? He mentions these points: • Suppose you insist on talking about decision problems only ("total" ones, which have to be defined for every input), as people traditionally do when defining complexity classes like P, NP, and BQP. Then we have proven separations between BQP and NP in the "black-box model" (i.e., the model where both the BQP machine and the NP machine get access to an oracle), as mmc alluded to. • And to reiterate, all of these separations are in the black-box model only. It remains completely unclear, even at a conjectural level, whether or not these translate into separations in the "real" world (i.e., the world without oracles). We don't have any clear examples analogous to factoring, of real decision problems in BQP that are plausibly not in NP. After years thinking about the problem, I still don't have a strong intuition either that BQP should be contained in NP in the "real" world or that it shouldn't be. According to his answer, it seems that in the "black-box model" we can show some relationship (as in non-overlapping) between $$\mathsf{BQP}$$ and $$\mathsf{NP}$$. He cites @mmc's answer, which says: There is no definitive answer due to the fact that no problem is known to be inside PSPACE and outside P. But recursive Fourier sampling is conjectured to be outside MA (the probabilistic generalization of NP) and has an efficient quantum algorithm. Check page 3 of this survey by Vazirani for more details. 
## Question:

It's not clear from @mmc's answer how the recursive Fourier sampling algorithm is related to the $$\mathsf{BQP}$$ class and how (or whether at all) it proves separations between $$\mathsf{BQP}$$ and $$\mathsf{NP}$$ in the "black-box model". In fact, I'm not even sure what "black-box model" means in this context, although I suspect that it refers to some kind of computation model involving quantum oracles. It would be great if someone could provide a brief and digestible summary of what the recursive Fourier sampling algorithm is and how it proves separations between $$\mathsf{BQP}$$ and $$\mathsf{NP}$$. Even a rough sketch of the idea behind the proof would be okay. I'm hoping this will give me a head start in understanding the papers linked by @mmc, which are otherwise rather dense for me at present.

I'll admit that I initially find the linked papers to be dense as well. However, to make some headway: a complete problem in $$\mathrm{NP}$$ can be phrased as "given a $$\mathsf{3SAT}$$ instance, does there exist a solution?" A complete problem in $$\mathrm{coNP}$$ is "given a $$\mathsf{3SAT}$$ instance, do all inputs satisfy it?" Problems in the polynomial hierarchy $$\mathrm{PH}$$ alternate "there exists" with "for all", with a constant number of alternations of $$\exists$$ and $$\forall$$. For me, the easiest problem in $$\mathrm{PH}$$ to think about is "is there a mate in $$n$$ for this board position?", because this is just "does there exist a move by white such that for all moves by black, there is a counter-move by white such that... white wins." Ponder the difficulty of such $$\forall x_1,\exists x_2\cdots$$ problems as compared to factoring, which is in $$\mathrm{NP}\cap\mathrm{coNP}$$. Turning to Recursive Fourier Sampling (RFS): much as in Simon's problem, with RFS we are only given oracle access to a function $$A$$. However, RFS involves a promise having just such an alternation of $$\exists$$ and $$\forall$$.
By going up to a small enough height (number of alternations of $$\exists$$ and $$\forall$$), the problem can be shown to be likely difficult for a classical computer to solve efficiently, but definitely easy for a quantum computer. Compare RFS to the "forrelation" problem, which I think is a little easier to understand, though its connection to $$\forall x_1,\exists x_2\cdots$$ is somewhat harder to see. Forrelation simply asks: given two black-box (oracle) functions $$f$$ and $$g$$, is there a correlation between $$f$$ and the Fourier transform of $$g$$? Here, we can simply Fourier transform $$g$$ and find the inner product with $$f$$. Forrelation is likely even completely outside the polynomial hierarchy. The relation to the $$\forall, \exists$$ alternation is less clear to me right now, but it effectively involves translating a tree of alternating $$\mathsf{AND}$$'s and $$\mathsf{OR}$$'s capable of solving forrelation into a sequence of alternating $$\forall$$ and $$\exists$$. Here $$f$$ and $$g$$ are simply black-box, without any structure. As far as I know, we don't yet know a good instantiation of $$f$$ or $$g$$ "in the real world", with $$f$$ and/or $$g$$ having a particular structure.
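To make the forrelation quantity concrete, here is a brute-force evaluation for small $$n$$. This is only a sketch: $$f$$ and $$g$$ are given as arbitrary $$\pm 1$$ vectors of length $$2^n$$, the normalisation $$\Phi = 2^{-3n/2}\sum_{x,y} f(x)(-1)^{x\cdot y}g(y)$$ follows the standard definition, and of course a real query algorithm would consult the oracles rather than read off whole truth tables.

```python
import numpy as np

def forrelation(f, g):
    """Brute-force forrelation of two +/-1-valued functions on {0,1}^n,
    given as length-2^n arrays (array index encodes the input bit string)."""
    N = len(f)  # N = 2^n
    total = 0.0
    for x in range(N):
        for y in range(N):
            # (-1)^(x.y), where x.y is the inner product of the bit strings
            dot = bin(x & y).count("1")
            total += f[x] * (-1) ** dot * g[y]
    return total / N ** 1.5  # N^(3/2) = 2^(3n/2)

# Sanity check: for f = g = all-ones, Phi = 2^(-n/2), e.g. 0.5 when n = 2.
print(forrelation(np.ones(4), np.ones(4)))  # 0.5
```

The double loop is exponential in $$n$$, which is exactly the point: classically the quantity seems hard to estimate from few queries, while a quantum algorithm estimates it with one Fourier transform and an inner product.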
https://socratic.org/questions/erica-bought-3-1-2-yards-of-fabric-if-she-uses-2-3-of-the-fabric-to-make-a-curta
# Erica bought 3 1/2 yards of fabric. If she uses 2/3 of the fabric to make a curtain, how much will she have left?

Oct 9, 2016

Erica is left with $1$ yard and $6$ inches of fabric.

#### Explanation:

When one uses $\frac{2}{3}$ of a whole piece, one divides the whole into $3$ pieces and uses $2$ of them. Hence what is left is $1$ piece out of $3$, which is equivalent to $\frac{1}{3}$.

Now Erica had $3 \frac{1}{2}$ yards of fabric, i.e. $\frac{7}{2}$ yards, so what is left is $\frac{7}{2} \times \frac{1}{3} = \frac{7}{6} = 1 \frac{1}{6}$ yards.

As a yard has $36$ inches, this is $1$ yard and $\frac{1}{6} \times 36 = 6$ inches.

Hence, Erica is left with $1$ yard and $6$ inches of fabric.
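The arithmetic can be double-checked with exact rational numbers (a quick sketch that just mirrors the steps above; `Fraction` avoids any floating-point rounding):

```python
from fractions import Fraction

total = Fraction(7, 2)           # 3 1/2 yards of fabric
left = total * Fraction(1, 3)    # one third remains after using 2/3
print(left)                      # 7/6 yards

yards = left // 1                # whole yards
inches = (left - yards) * 36     # a yard is 36 inches
print(yards, inches)             # 1 6
```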
https://nforum.ncatlab.org/discussion/339/strict-2category/
• CommentRowNumber1. • CommentAuthorEric • CommentTimeNov 6th 2009

I requested some more details at strict 2-category. It would be nice to have something describing how objects are categories, morphisms are ??? satisfying ???, 2-morphisms are ??? satisfying ???. I'm sure all the details could be unwrapped from the simple statement "a strict 2-category is a Cat-category", but then I need to learn what an enriched category is first, and then I need to see how that works in the case of a Cat-enriched category. Soon, I feel overwhelmed. A strict 2-category is probably not THAT hard to understand explicitly.

• CommentRowNumber2. • CommentAuthorTobyBartels • CommentTimeNov 6th 2009

It looks like Finn Lawler is writing this for you, Eric; if he doesn't finish it, then I will. (But you're right, it's not that hard, so I'm sure he will.) In the meantime, try Wikipedia, whose 2-categories are strict by default.

• CommentRowNumber3. • CommentAuthorFinnLawler • CommentTimeNov 6th 2009

Yup: just a short bit spelling out some of the definitions. I've dashed this off at home without references, so there may be some mistakes. By the way, on "... how objects are categories": they're not, in general. In the 2-category Cat they are, but arbitrary 2-categories, like their 1-cousins, are not required to have any particular structure on their objects.

• CommentRowNumber4. • CommentAuthorEric • CommentTimeNov 6th 2009

Ack! Bigons! I do not like bigons! :D Is there a definition of 2-category that does not rely on bigons?

• CommentRowNumber5. • CommentAuthorMike Shulman • CommentTimeNov 6th 2009

What have you got against bigons? The 2-cells in a 2-category (strict or weak) are bigons; that's basically the definition of a 2-category.
If you want some other shape of cell, then you've got something other than a 2-category. It's true that some definitions of n-category use cells of other shapes. However, once you have a composition operation that applies to all 1-cells, then a cell of any other shape can be regarded as a bigon by just composing up its source and target. So bigons are the most concise way to describe the structure, and the essential aspect of it, although it is sometimes convenient to also include cells of other shapes in order to describe the composition operations cleanly.

• CommentRowNumber6. • CommentAuthorTobyBartels • CommentTimeNov 6th 2009

OK, I think that I've filled in all of the details. And yes, Eric, the usual notion of strict 2-category is an inherently globular (bigonal) concept; you can call it a globular strict 2-category if you want to make that precise, but it's the usual default. There are, however, also simplicial and cubical strict 2-categories, and the weak notion of bicategory (while usually also defined globularly) is indifferent to the shapes used. Urs has considered these matters, mostly for omega-categories, at geometric shapes for higher categories.

• CommentRowNumber7. • CommentAuthorEric • CommentTimeNov 16th 2009

I asked a question related to this shape issue at strict 2-category.

• CommentRowNumber8. • CommentAuthorUrs • CommentTimeNov 16th 2009

I added a corresponding sentence below the query box. Generally, I'd say the answer to "Should we make xyz more explicit?" is always "Yes!"

• CommentRowNumber9. • CommentAuthorEric • CommentTimeNov 16th 2009

Thanks! I asked another question :) strict 2-category

• CommentRowNumber10. • CommentAuthorUrs • CommentTimeNov 16th 2009 • (edited Nov 16th 2009)

Okay, I have replied again. I tried to reply in such a way that you can remove the query box if you feel the question has been answered and we are left with proper entry text.

• CommentRowNumber11.
• CommentAuthorTobyBartels • CommentTimeApr 11th 2010

I have now put in details at bicategory to match the details at strict 2-category. I'm not sure that it was worth it, but there it is.

• CommentRowNumber12. • CommentAuthorUrs • CommentTimeApr 11th 2010

> I have now put in details at bicategory to match the details at strict 2-category. I'm not sure that it was worth it, but there it is.

Thanks, Toby. I think it's worth it. The nLab has or had some curious gaps when it came to the basic definitions of what the central object of interest here is supposed to be. I am glad to see these eventually being filled.

• CommentRowNumber13. • CommentAuthorzskoda • CommentTimeApr 11th 2010

Curiously enough, I was just yesterday thinking how a category theory resource such as the nLab has so little explicit standard detail in the bicategory entry. Minerva was listening!

• CommentRowNumber14. • CommentAuthorTodd_Trimble • CommentTimeJul 15th 2016

I added some more to strict 2-category: some details on the relation to sesquicategory, a bit of history, and some references. As spurred by the MO discussion here.

• CommentRowNumber15. • CommentAuthorMike Shulman • CommentTimeJul 15th 2016

Thank you! I might have gotten around to it too, but I'm glad you did.

• CommentRowNumber16. • CommentAuthorPeter Heinig • CommentTimeJul 18th 2017

added to strict 2-category two technical terms and a reference.

• CommentRowNumber17. • CommentAuthorUrs • CommentTimeAug 28th 2021

• CommentRowNumber18. • CommentAuthorUrs • CommentTimeAug 29th 2021

• CommentRowNumber19. • CommentAuthorUrs • CommentTimeAug 30th 2021

What's a good reference, if any, to point a general audience to for strict (2,1)-categories, hence $Grpd$-enriched categories? All intro/textbook references I have scanned so far speak only of strict 2-categories/$Cat$-enriched categories; while research-level articles that implicitly deal with $Grpd$-enriched categories don't make the concept explicit.
Not that there is any subtlety in restricting the $Cat$-enrichment to the $Grpd$-enrichment, but the lay person will still appreciate this being made explicit. If nothing else, it saves them from swallowing the usual weasel clauses on size issues.

• CommentRowNumber20. • CommentAuthorUrs • CommentTimeAug 30th 2021

ah, this here is not too bad:

• P. H. H. Fantham, E. J. Moore, Groupoid Enriched Categories and Homotopy Theory, Canadian Journal of Mathematics 35 3 (1983) 385–416 (doi:10.4153/CJM-1983-022-8)

1. Hi Urs! I don't know a nice reference either, but it seems $\mathsf{Grpd}$-enriched categories are also called "track categories", and searching for this name turns up a number of potentially nice references.

• CommentRowNumber22. • CommentAuthorTim_Porter • CommentTimeSep 1st 2021 • (edited Sep 1st 2021)

The references that Théo gives are good. The terminology 'track category' is current in the work of Hans Baues and his collaborators, including some of Hans' books. BTW there is also: 2-Groupoid Enrichments in Homotopy Theory and Algebra, K. H. Kamps and T. Porter, K-Theory 25(4) (2002): 373–409. ;-)

• CommentRowNumber23. • CommentAuthorUrs • CommentTimeSep 1st 2021

Thanks. I remember seeing the "track" terminology from when I was looking at those references doing Toda brackets as pasting diagrams in homotopy 2-categories (here). So I have added a pointer at strict (2,1)-category to a book by Baues (here) for this alternative terminology. Unfortunately, besides introducing alternative terminology, that book does not pause for a moment to recall what a Grpd-enriched category actually is. The same goes for the follow-ups that I have seen so far.

• CommentRowNumber24. • CommentAuthorTim_Porter • CommentTimeSep 1st 2021

Hans Baues' books often go in quite deeply quite quickly!

• CommentRowNumber25. • CommentAuthorUrs • CommentTimeSep 1st 2021

Most references are like this. That's why I am asking (#19) for those that are not.
https://nrich.maths.org/12856/page/1
# SEAMC Published Shorts The problems in this collection have been adapted from problems that have been set in South East Asian Mathematics Competitions ##### Age 14 to 16 ShortChallenge Level Two arcs are drawn in a right-angled triangle as shown. What is the length $r$? ### Rotation and Area ##### Age 14 to 16 ShortChallenge Level Point A is rotated to point B. Can you find the area of the triangle that these points make with the origin? ### Climbing Ropes ##### Age 14 to 16 ShortChallenge Level Given how much this 50 m rope weighs, can you find how much a 100 m rope weighs, if the thickness is different? ### Cuboid Perimeters ##### Age 14 to 16 ShortChallenge Level Can you find the volume of a cuboid, given its perimeters? ### Two in a Million ##### Age 14 to 16 ShortChallenge Level What is the highest power of 2 that divides exactly into 1000000? ### Winding Vine ##### Age 14 to 16 ShortChallenge Level A vine is growing up a pole. Can you find its length? ### Cube Factors ##### Age 14 to 16 ShortChallenge Level How many factors of $9^9$ are perfect cubes? ### Roots Near 9 ##### Age 14 to 16 ShortChallenge Level For how many integers $n$ is the difference between $\sqrt{n}$ and 9 less than 1? ### Clever Calculation ##### Age 14 to 16 ShortChallenge Level Find the shortcut to do this calculation quickly! ### Coal Truck ##### Age 14 to 16 ShortChallenge Level What percentage of the truck's final mass is coal? ### Similar Perimeter ##### Age 14 to 16 ShortChallenge Level What are the possible perimeters of the larger triangle? ### Semicircle Distance ##### Age 14 to 16 ShortChallenge Level Can you find the shortest distance between the semicircles given the area between them? ### Root Estimation ##### Age 14 to 16 ShortChallenge Level Which of these is the best approximation for this square root? ### Traffic Tunnel ##### Age 14 to 16 ShortChallenge Level Will these vehicles fit through this tunnel?
### Powerful Expressions ##### Age 14 to 16 ShortChallenge Level Put these expressions in order, from smallest to largest. ### Growing Triangle ##### Age 14 to 16 ShortChallenge Level If the base and height of a triangle are increased by different percentages, what happens to its area? ### XOXOXO ##### Age 14 to 16 ShortChallenge Level 6 tiles are placed in a row. What is the probability that no two adjacent tiles have the same letter on them? ### Diagonal Area ##### Age 14 to 16 ShortChallenge Level A square has area 72 cm$^2$. Find the length of its diagonal. ### Three Right Angles ##### Age 14 to 16 ShortChallenge Level Work your way through these right-angled triangles to find $x$. ### Square in a Circle in a Square ##### Age 14 to 16 ShortChallenge Level What is the ratio of the areas of the squares in the diagram? ### Strike a Chord ##### Age 14 to 16 ShortChallenge Level Can you work out the radius of a circle from some information about a chord? ### Pineapple Juice ##### Age 14 to 16 ShortChallenge Level What percentage of this orange drink is juice? ### Dolly Dolphin ##### Age 14 to 16 ShortChallenge Level Can you find Dolly Dolphin's average speed as she swims with and against the current? ### Candles ##### Age 14 to 16 ShortChallenge Level What is the ratio of the lengths of the candles? ### Four Circles ##### Age 14 to 16 ShortChallenge Level Can you find the radius of the larger circle in the diagram? ### Stolen Pension ##### Age 14 to 16 ShortChallenge Level How much money did the pensioner have before being robbed? ### Triangular Slope ##### Age 14 to 16 ShortChallenge Level Can you find the gradients of the lines that form a triangle? ### Petrol Stop ##### Age 14 to 16 ShortChallenge Level From the information given, can you work out how long Roberto drove for after putting petrol in his car? ### Great Power ##### Age 14 to 16 ShortChallenge Level Which is greater: $10^{250}$ or $6^{300}$? 
### A Third of the Area ##### Age 14 to 16 ShortChallenge Level The area of the small square is $\frac13$ of the area of the large square. What is $\frac xy$? ### Winning Marble ##### Age 14 to 16 ShortChallenge Level How can this prisoner escape? ### Dropouts ##### Age 14 to 16 ShortChallenge Level What percentage of students who graduate have never been to France? ### Fraction of Percentages ##### Age 14 to 16 ShortChallenge Level What is $W$ as a fraction of $Z?$ ### Smartphone Screen ##### Age 14 to 16 ShortChallenge Level Can you find the length and width of the screen of this smartphone in inches? ### Powerful 9 ##### Age 14 to 16 ShortChallenge Level What is the last digit of this calculation? ### Adding a Square to a Cube ##### Age 14 to 16 ShortChallenge Level If you take a number and add its square to its cube, how often will you get a perfect square? ### Elephants and Geese ##### Age 14 to 16 ShortChallenge Level Yesterday, at Ulaanbaatar market, a white elephant cost the same amount as 99 wild geese. How many wild geese cost the same amount as a white elephant today? ### Closer to Home ##### Age 14 to 16 ShortChallenge Level Which of these lines comes closer to the origin? ##### Age 14 to 16 ShortChallenge Level Can you find the radii of the small circles? ### Pay Attention ##### Age 14 to 16 ShortChallenge Level If some of the audience fell asleep for some of this talk, what was the average proportion of the talk that people heard? ### Tilted Aquarium ##### Age 14 to 16 ShortChallenge Level Can you find the depth of water in this aquarium? ### Power of Five ##### Age 14 to 16 ShortChallenge Level Powers with brackets, addition and multiplication ### Late for Work ##### Age 14 to 16 ShortChallenge Level What average speed should Ms Fanthorpe drive at to arrive at work on time? ### Nested Square Roots ##### Age 14 to 16 ShortChallenge Level Can you find the value of this expression, which contains infinitely nested square roots? 
### Triangular Intersection ##### Age 14 to 16 ShortChallenge Level What is the largest number of intersection points that a triangle and a quadrilateral can have? ### Changing Averages ##### Age 14 to 16 ShortChallenge Level Find the value of $m$ from these statements about a group of numbers ### The Roller and the Triangle ##### Age 14 to 16 ShortChallenge Level How much of the inside of this triangular prism can Clare paint using a cylindrical roller? ### Graph Triangles ##### Age 14 to 16 ShortChallenge Level Use the information about the triangles on this graph to find the coordinates of the point where they touch. ### Two Trains ##### Age 14 to 16 ShortChallenge Level Two trains started simultaneously, each travelling towards the other. How long did each train need to complete the journey? ### Folded Rectangle ##### Age 14 to 16 ShortChallenge Level Can you find the perimeter of the pentagon formed when this rectangle of paper is folded? ### Overlapping Ribbons ##### Age 14 to 16 ShortChallenge Level Two ribbons are laid over each other so that they cross. Can you find the area of the overlap? ### Boys and Girls ##### Age 14 to 16 ShortChallenge Level Can you find the total number of students in the school, given some information about ratios? ### Face Order ##### Age 14 to 16 ShortChallenge Level How many ways can these five faces be ordered? ### Inside a Parabola ##### Age 14 to 16 ShortChallenge Level A triangle of area 64 square units is drawn inside the parabola $y=k^2-x^2$. Find the value of $k$. ### Power of 3 ##### Age 14 to 16 ShortChallenge Level What power of 27 is needed to get the correct power of 3? ### Third Side ##### Age 14 to 16 ShortChallenge Level What are the possible lengths for the third side of this right-angled triangle? ### How Many Gorillas? ##### Age 14 to 16 ShortChallenge Level If the numbers in this news article have been estimated, then what is the largest number of gorillas that there could have been 10 years ago? 
### Square Overlap ##### Age 14 to 16 ShortChallenge Level The top square has been rotated so that the squares meet at a $60^\circ$ angle. What is the area of the overlap? ### Laps ##### Age 14 to 16 ShortChallenge Level On which of the hare's laps will she first pass the tortoise? ### Find the Factor ##### Age 14 to 16 ShortChallenge Level Find a factor of $2^{48}-1$.
https://studysoup.com/tsg/11311/calculus-early-transcendentals-1-edition-chapter-7-3-problem-11e
# Evaluate the following integrals. | Ch 7.3 - 11E

Calculus: Early Transcendentals | 1st Edition (ISBN: 9780321570567)

## Solution for problem 11E Chapter 7.3

Problem 11E

Evaluate the following integrals.

$$\int \frac{d x}{\left(16-x^{2}\right)^{1 / 2}}$$

Step-by-Step Solution:

Step 1

To evaluate this integral we use the method of integration by substitution (change of variable). But first let’s arrange the integral.

Step 2 of 2
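The step-by-step body above is cut off after Step 1. One standard route to the answer (a sketch added here, not the textbook's own worked solution) is the trigonometric substitution $x = 4\sin\theta$, which turns the square root into $4\cos\theta$:

```latex
\begin{aligned}
\int \frac{dx}{\left(16-x^{2}\right)^{1/2}}
  &= \int \frac{4\cos\theta \, d\theta}{\sqrt{16-16\sin^{2}\theta}}
     && x = 4\sin\theta,\ dx = 4\cos\theta \, d\theta \\
  &= \int \frac{4\cos\theta \, d\theta}{4\cos\theta}
   = \int d\theta = \theta + C \\
  &= \sin^{-1}\!\left(\frac{x}{4}\right) + C .
\end{aligned}
```

Differentiating $\sin^{-1}(x/4)$ gives back $1/\sqrt{16-x^{2}}$ on $|x|<4$, which confirms the antiderivative.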
https://math.stackexchange.com/questions/3982683/deriving-the-formula-for-calculating-the-length-of-the-project-of-the-vector-a
# Deriving the formula for calculating the length of the projection of the vector $a$ onto the vector $b$

My situation: I am currently taking a course in linear algebra and I am quite lost all the time. I have however miraculously reached projections in 2 dimensions. I am studying and learning both from course material and from this article, and I am stuck somewhere between pages 6 and 7.

My problem: I wanted to know how to derive the formula for the projection of a vector $a$ onto a linearly independent vector $b$, in order to understand, because I am so lost. Why are this text, linked above, and my course book defining the dot product between two vectors as: $$a\cdot b=|a|*|b|*\cos(\theta)$$ and then calculating the length of the projection of the vector $a$ onto the vector $b$ as: $$|a_b|=\frac{a\cdot b}{|b|}=\frac{a_x*b_x+a_y*b_y}{|b|}$$ However, the dot product is defined as being the product of the lengths $|a|$, $|b|$ and $\cos(\theta)$, where $\theta$ is the angle between the two vectors. If I follow my course book, the slideshows provided by my professor and the pdf link above, I am not in fact calculating the dot product and using it to calculate the length of the projection. In the example in the pdf I am calculating some random sum $a_x*b_x+a_y*b_y$ instead of $\sqrt{a_x^2+a_y^2}*\sqrt{b_x^2+b_y^2}*\cos(\theta)$. How do you derive $$\sqrt{a_x^2+a_y^2}*\sqrt{b_x^2+b_y^2}*\cos(\theta) = a_x*b_x+a_y*b_y$$ and why this change? And why is this equation shift not mentioned? Am I missing something?

• The law of cosines explains the equivalence. Jan 12, 2021 at 17:28
• @Randall, you are very correct! I scrolled up and looked at how the author derived the dot product using, among other things, the law of cosines and saw the equivalence at the bottom of the proof. Thanks a lot! You're a life saver Randall! Jan 12, 2021 at 18:06
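As the accepted comment notes, the law of cosines is what makes the two definitions agree. A quick numerical sanity check of that equivalence (a Python sketch; the example vectors and function names are ours, not from the question):

```python
import math

def dot(a, b):
    # coordinate form: a_x*b_x + a_y*b_y
    return a[0] * b[0] + a[1] * b[1]

def dot_geometric(a, b):
    # definition form: |a| * |b| * cos(theta), theta = angle between a and b
    theta = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])
    return math.hypot(*a) * math.hypot(*b) * math.cos(theta)

a, b = (3.0, 4.0), (2.0, -1.0)
print(dot(a, b))                      # 2.0
print(round(dot_geometric(a, b), 9))  # 2.0 (up to floating-point error)
print(dot(a, b) / math.hypot(*b))     # scalar projection of a onto b, 2/sqrt(5) ≈ 0.894
```

Both forms give the same number for any pair of vectors, which is exactly the identity the law of cosines proves; the last line is the projection length $(a\cdot b)/|b|$.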
http://tex.stackexchange.com/tags/baseline/hot
Tag Info 22 You need to use baseline on the tikzpicture, not on the nodes. As Tom Bombadil mentioned in the comments you can use current bounding box.center to put the center of the TikZ picture on the baseline. I would also add an yshift to center it around the middle of the = sign. -.5ex seems to do it. \documentclass{article} \usepackage{tikz} \begin{document} ... 20 As mentioned in the comments, \tikz[baseline=(X.base)]{% works for this \documentclass{article} \usepackage{pgf,tikz} \makeatletter \begin{document} \gdef\drawfontframe#1{% \tikz[baseline=(X.base)]{% \node[rectangle,draw,inner sep=0pt,outer sep=0pt] (X){#1}; \draw[red, line width=0.4pt] (X.text) circle(0.4pt)[fill=red] -- (X.base east);}% ... 15 4 This uses a pdftex \pdfliteral but you could use a special for xetex Here I've used two minipages side by side so that you can tell by the end that the baselines have returned to normal. On the left the region between the !! is raised by 2bp \documentclass{article} \begin{document} \centering \begin{minipage}[t]{.43\linewidth} One two three four one two ... 4 One way is to use the baseline key in the tikzpicture options to specify that each picture should be aligned to the baseline by a particular node name (here I used (A)): \documentclass{article} \usepackage{tikz} \usetikzlibrary{automata} \usetikzlibrary{positioning} \begin{document} \tikzstyle{negative} = [circle, minimum width=8pt, fill, inner sep=0pt] ... 3 this is the expected behavior. you have wrapped the text in a group without ending the paragraph, so the baselines applied are those for the surrounding environment. one double blackslash has no effect on this setting, but when the text is broken this way multiple times, all but the last force resolution of the baseline of the previous segment. in this ... 3 Is this what you want? 
\begin{tikzpicture} [mainbullet/.style={rectangle, minimum size=0.3cm, draw=orange!100, fill=orange!100, thick}, maintitle/.style={rectangle, opacity=0.5}] \node[mainbullet] (experiencebullet) at (0, -23) {}; \node[maintitle] (experiencetitle) [right=10mm of experiencebullet.south east, anchor=base west] {Experience}; ... 3 Without changing anything in your tree-syntax, the tikz-qtree package allows you to get the right result: \documentclass{scrreprt} \usepackage{tikz-qtree} \begin{document} \Tree[.table [.thead [.tr [.th [.\textit{Vorname} ] ] [.th [.\textit{Nachname} ] ] ] ] ... 3 Too long for a comment. The base line of the last text line on the second page is correctly aligned with the bottom of the text area. According to \maxdepth, TeX allows the descenders to stick outside the text area to get a proper alignment of the base lines of the last text lines on the pages. The last text line of the first page does not reach the ... 3 The height of the \tikz in the definition of \circled is responsible for the added line spacing. One can treat the height of an object as zero with the \smash{} macro. So here, I \smash{\tikz[...]{...}}. Of course, this now makes overlap a possibility. In an effort to combat this, I reduced the minimum size of the circle to 1.4em. % !TEX encoding = ... 3 In this case, you will have to tell TikZ where you want your baseline to be. Here you want it to be for example on your (a) node. Here is a code to provide it : \documentclass{report} \usepackage[english]{babel} \usepackage{tikz} \tikzset{x=1pt, y=1pt, z=1pt} \begin{document} \newcommand{\mypicture}{\begin{tikzpicture}[baseline=(a.base)] ... 3 Your problem is that by using \begin{tikzpicture} \begin{tikzfadingfrompicture} \end{tikzfadingfrompicture} \end{tikzpicture} you do nothing else than nesting two tikzpicture environments, which is known to be a source of trouble. However, it is not necessary to do this, since you name the tikzfadingfrompicture to reuse it. When putting both ... 
3 The example is not minimal. I've minimised it a bit further just by removing all the hyperref stuff. (I checked that this all has no relevance.) The basic problem involves the use of nested tikzpictures which are known to cause problems. Although not guaranteed to fail, failure is to be expected. Nesting should therefore (very nearly almost) always be ...
http://gap-packages.github.io/SingularInterface/
# SingularInterface

A GAP interface to Singular

Version 0.7.2

This project is maintained by Mohamed Barakat, Max Horn, Frank Lübeck

# GAP Package SingularInterface

The SingularInterface package provides a GAP interface to Singular, enabling direct access to the complete functionality of Singular. The current version of this package is version 0.7.2. For more information, please refer to the package manual. There is also a README file.

## Dependencies

This package requires at least GAP 4.7.2 as well as Singular 4.0.1. The following additional GAP packages are not required, but suggested:

## Obtaining the SingularInterface source code

The easiest way to obtain SingularInterface is to download the latest version using one of the download buttons on the left. If you would like to use the very latest “bleeding edge” version of SingularInterface, you can also do so, but you will need some additional tools:

• git
• autoconf
• automake
• libtool

must be installed on your system. You can then clone the SingularInterface repository as follows:

git clone https://github.com/gap-system/SingularInterface

## Installing SingularInterface

SingularInterface requires Singular 4.0.1 or later, and that Singular and GAP are compiled against the exact same version of the GMP library. The easiest way to achieve that is to compile Singular yourself, telling it to link against GAP’s version of GMP. Therefore, usually the first step towards compiling SingularInterface is to build such a special version of Singular. The following instructions should get you going.

1. Fetch the Singular source code. For your convenience, we provide two shell scripts which do this for you. If you want to use Singular 4.0.1, run ./fetchsingular If you want the development version, run ./fetchsingular.dev

2. Prepare Singular for compilation.
At this point, you need to know against which version of GMP your GAP library was linked. If it is a GMP version installed globally on your system, simply run:

./configuresingular

If it is the version of GMP shipped with GAP, run this instead:

./configuresingular --with-gmp=GAPDIR/bin/GAPARCH/extern/gmp

where GAPDIR should be replaced with the path to your GAP installation, and GAPARCH by the value of the GAParch variable in GAPDIR/sysinfo.gap

3. Compile Singular by running ./makesingular

4. Now we turn to SingularInterface. If you are using the git version of SingularInterface, you need to set up its build system first. To do this, run this command:

./autogen.sh

5. Prepare SingularInterface for compilation, by running

./configure --with-gaproot=GAPDIR \
    --with-libSingular=$PWD/singular/dst \
    CONFIGNAME=default64

where you should replace GAPDIR as above. If you know what you are doing, you can change your CONFIGNAME (but note that SingularInterface can only be used with 64-bit versions of GAP).

6. Compile SingularInterface:

make

7. To make sure everything worked, run the test suite:

make check

## Contact

You can contact the SingularInterface team by sending an email to gapsing AT mathematik DOT uni-kl DOT de

Bug reports and code contributions are highly welcome and can be submitted via our GitHub issues tracker respectively via pull requests.

## Feedback

For bug reports, feature requests and suggestions, please use the issue tracker.
https://en.wikipedia.org/wiki/Sealed_bid_auction
# Auction theory

Auction theory is an applied branch of economics which deals with how people act in auction markets and researches the properties of auction markets. There are many possible designs (or sets of rules) for an auction and typical issues studied by auction theorists include the efficiency of a given auction design, optimal and equilibrium bidding strategies, and revenue comparison. Auction theory is also used as a tool to inform the design of real-world auctions; most notably auctions for the privatization of public-sector companies or the sale of licenses for use of the electromagnetic spectrum.

## General idea

Auctions are characterized as transactions with a specific set of rules detailing resource allocation according to participants' bids. They are categorized as games with incomplete information because in the vast majority of auctions, one party will possess information related to the transaction that the other party does not (e.g., the bidders usually know their personal valuation of the item, which is unknown to the other bidders and the seller).[1] Auctions take many forms, but they share the characteristic that they are universal and can be used to sell or buy any item. In many cases, the outcome of the auction does not depend on the identity of the bidders (i.e., auctions are anonymous). Most auctions have the feature that participants submit bids, amounts of money they are willing to pay. Standard auctions require that the winner of the auction is the participant with the highest bid. A nonstandard auction does not require this (e.g., a lottery).

## Types of auction

Main article: Auction § Types

There are traditionally four types of auction that are used for the allocation of a single item:

• First-price sealed-bid auctions in which bidders place their bid in a sealed envelope and simultaneously hand them to the auctioneer.
The envelopes are opened and the individual with the highest bid wins, paying the amount bid.
• Second-price sealed-bid auctions (Vickrey auctions) in which bidders place their bid in a sealed envelope and simultaneously hand them to the auctioneer. The envelopes are opened and the individual with the highest bid wins, paying a price equal to the second-highest bid.
• Open ascending-bid auctions (English auctions) in which participants make increasingly higher bids, each stopping bidding when they are not prepared to pay more than the current highest bid. This continues until no participant is prepared to make a higher bid; the highest bidder wins the auction at the final amount bid. Sometimes the lot is only actually sold if the bidding reaches a reserve price set by the seller.
• Open descending-bid auctions (Dutch auctions) in which the price is set by the auctioneer at a level sufficiently high to deter all bidders, and is progressively lowered until a bidder is prepared to buy at the current price, winning the auction.

Most auction theory revolves around these four "basic" auction types. However, other auction types have also received some academic study (see Auction Types).

### Benchmark model

The benchmark model for auctions, as defined by McAfee and McMillan (1987), offers a generalization of auction formats, and is based on four assumptions:

1. All of the bidders are risk-neutral.
2. Each bidder has a private valuation for the item independently drawn from some probability distribution.
3. The bidders possess symmetric information.
4. The payment is represented as a function of only the bids.

The benchmark model is often used in tandem with the Revelation Principle, which states that each of the basic auction types is structured such that each bidder has incentive to report their valuation honestly. The two are primarily used by sellers to determine the auction type that maximizes the expected price.
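The winner and payment rules of the two sealed-bid formats can be captured in a few lines of code (a toy sketch; the function name and example bids are illustrative, not from any standard library):

```python
def sealed_bid_auction(bids, price_rule="first"):
    """Return (winner index, price paid) for a sealed-bid auction.

    price_rule="first": the winner pays their own bid.
    price_rule="second": the winner pays the second-highest bid (Vickrey).
    """
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = ranked[0]
    price = bids[winner] if price_rule == "first" else bids[ranked[1]]
    return winner, price

bids = [30, 45, 25]
print(sealed_bid_auction(bids, "first"))   # (1, 45)
print(sealed_bid_auction(bids, "second"))  # (1, 30)
```

Both rules pick the same winner (the highest bidder, as in any standard auction); only the payment differs.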
This optimal auction format is defined such that the item will be offered to the bidder with the highest valuation at a price equal to their valuation, but the seller will refuse to sell the item if they expect that all of the bidders' valuations of the item are less than their own.[1]

Relaxing each of the four main assumptions of the benchmark model yields auction formats with unique characteristics:

• Risk-averse bidders incur some kind of cost from participating in risky behaviors, which affects their valuation of a product. In sealed-bid first-price auctions, risk-averse bidders are more willing to bid more to increase their probability of winning, which, in turn, increases their expected utility. This allows sealed-bid first-price auctions to produce higher expected revenue than English and sealed-bid second-price auctions.
• In formats with correlated values—where the bidders’ values for the item are not independent—one of the bidders perceiving their value of the item to be high makes it more likely that the other bidders will perceive their own values to be high. A notable example of this instance is the Winner’s curse, where the results of the auction convey to the winner that everyone else estimated the value of the item to be less than they did. Additionally, the linkage principle allows revenue comparisons amongst a fairly general class of auctions with interdependence between bidders' values.
• The asymmetric model assumes that bidders are separated into two classes that draw valuations from different distributions (e.g., dealers and collectors in an antiques auction).
• In formats with royalties or incentive payments, the seller incorporates additional factors, especially those that affect the true value of the item (e.g., supply, production costs, and royalty payments), into the price function.[1]

## Game-theoretic models

A game-theoretic auction model is a mathematical game represented by a set of players, a set of actions (strategies) available to each player, and a payoff vector corresponding to each combination of strategies. Generally, the players are the buyer(s) and the seller(s). The action set of each player is a set of bid functions or reservation prices (reserves). Each bid function maps the player's value (in the case of a buyer) or cost (in the case of a seller) to a bid price. The payoff of each player under a combination of strategies is the expected utility (or expected profit) of that player under that combination of strategies.

Game-theoretic models of auctions and strategic bidding generally fall into either of the following two categories. In a private value model, each participant (bidder) assumes that each of the competing bidders obtains a random private value from a probability distribution. In a common value model, the participants have equal valuations of the item, but they do not have perfectly accurate information about this valuation. In lieu of knowing the exact valuation of the item, each participant can assume that any other participant obtains a random signal, which can be used to estimate the true valuation, from a probability distribution common to all bidders.[2] Usually, but not always, a private values model assumes that the values are independent across bidders, whereas a common value model usually assumes that the values are independent up to the common parameters of the probability distribution. A more general category for strategic bidding is the affiliated values model, in which the bidder's total utility depends on both their individual private signal and some unknown common value.
Both the private value and common value models can be perceived as extensions of the general affiliated values model.[3]

[Figure: Ex-post equilibrium in a simple auction market.]

When it is necessary to make explicit assumptions about bidders' value distributions, most of the published research assumes symmetric bidders. This means that the probability distribution from which the bidders obtain their values (or signals) is identical across bidders. In a private values model which assumes independence, symmetry implies that the bidders' values are independently and identically distributed (i.i.d.). An important example (which does not assume independence) is Milgrom and Weber's "general symmetric model" (1982).[4][5] One of the earliest published theoretical papers addressing properties of auctions among asymmetric bidders is Keith Waehrer's 1999 article.[6] Later published research includes Susan Athey's 2001 Econometrica article,[7] as well as Reny and Zamir (2004).[8]

The first formal analysis of auctions was by William Vickrey (1961). Vickrey considers two buyers bidding for a single item. Each buyer's value, v, is an independent draw from a uniform distribution with support [0,1]. Vickrey showed that in the sealed-bid first-price auction it is an equilibrium bidding strategy for each bidder to bid half his valuation. With more bidders, all drawing a value from the same uniform distribution, it is easy to show that the symmetric equilibrium bidding strategy is ${\displaystyle B(v)=\left({\frac {n-1}{n}}\right)v}$. To check that this is an equilibrium bidding strategy, we must show that if it is the strategy adopted by the other n-1 buyers, then it is a best response for buyer 1 to adopt it also. Note that buyer 1 wins with probability 1 with a bid of (n-1)/n, so we need only consider bids on the interval [0,(n-1)/n]. Suppose buyer 1 has value v and bids b. If buyer 2's value is x, he bids B(x).
Therefore buyer 1 beats buyer 2 if

${\displaystyle B(x)=\left({\frac {n-1}{n}}\right)x<b,}$

that is, if

${\displaystyle x<\left({\frac {n}{n-1}}\right)b.}$

Since x is uniformly distributed, buyer 1 bids higher than buyer 2 with probability nb/(n-1). To be the winning bidder, buyer 1 must bid higher than all the other bidders (who bid independently). Then his win probability is

${\displaystyle w(b)=\Pr\{b_{2}<b,\ldots ,b_{n}<b\}=\left({\frac {n}{n-1}}\right)^{n-1}b^{n-1}.}$

Buyer 1's expected payoff is his win probability times his gain if he wins. That is,

${\displaystyle U(b)=w(b)(v-b)=\left({\frac {n}{n-1}}\right)^{n-1}b^{n-1}(v-b)=\left({\frac {n}{n-1}}\right)^{n-1}(b^{n-1}v-b^{n}).}$

It is readily confirmed by differentiation that U(b) takes on its maximum at

${\displaystyle B(v)=\left({\frac {n-1}{n}}\right)v.}$

It is not difficult to show that B(v) is the unique symmetric equilibrium. Lebrun (1996)[9] provides a general proof that there are no asymmetric equilibria.

## Revenue equivalence

Main article: Revenue equivalence

One of the major findings of auction theory is the celebrated revenue equivalence theorem. Early equivalence results focused on a comparison of revenue in the most common auctions. The first such proof, for the case of two buyers and uniformly distributed values, was by Vickrey (1961). Riley and Samuelson (1981) proved a much more general result. (Quite independently and soon after, this was also derived by Myerson (1981).) The revenue equivalence theorem states that any allocation mechanism or auction that satisfies the four main assumptions of the benchmark model will lead to the same expected revenue for the seller (and player i of type v can expect the same surplus across auction types).[1] Relaxing these assumptions can provide valuable insights for auction design. Decision biases can also lead to predictable non-equivalencies.
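Both the equilibrium bid B(v) = ((n-1)/n)v and revenue equivalence can be checked numerically under the benchmark assumptions (risk-neutral bidders with i.i.d. uniform[0,1] private values). A minimal Monte Carlo sketch — the function name is illustrative, not from the text:

```python
import random

def expected_revenues(n_bidders, n_auctions, seed=0):
    """Average seller revenue in first-price (equilibrium bidding) and
    second-price (truthful bidding) sealed-bid auctions with i.i.d.
    uniform[0,1] private values."""
    rng = random.Random(seed)
    shade = (n_bidders - 1) / n_bidders  # first-price equilibrium B(v) = ((n-1)/n) v
    first = second = 0.0
    for _ in range(n_auctions):
        values = sorted(rng.random() for _ in range(n_bidders))
        first += shade * values[-1]      # winner pays own shaded bid
        second += values[-2]             # winner pays second-highest value
    return first / n_auctions, second / n_auctions

fp, sp = expected_revenues(n_bidders=5, n_auctions=200_000)
# Both averages approach (n-1)/(n+1) = 2/3, as revenue equivalence predicts.
```

With n = 5, the first-price revenue is shade times the expected highest value, (4/5)(5/6) = 2/3, and the second-price revenue is the expected second-highest value, 4/6 = 2/3 — the same number, as the theorem requires.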
Additionally, if some bidders are known to have a higher valuation for the lot, techniques such as price-discriminating against such bidders will yield higher returns. In other words, if a bidder is known to value the lot at $X more than the next highest bidder, the seller can increase their profits by charging that bidder $X - Δ (a sum just slightly below what that bidder is willing to pay) more than any other bidder (or, equivalently, a special bidding fee of $X - Δ). This bidder will still win the lot, but will pay more than would otherwise be the case.[1]

## Winner's curse

The winner's curse is a phenomenon which can occur in common value settings—when the actual values to the different bidders are unknown but correlated, and the bidders make bidding decisions based on estimated values. In such cases, the winner will tend to be the bidder with the highest estimate, but the results of the auction will show that the remaining bidders' estimates of the item's value are less than that of the winner, giving the winner the impression that they "bid too much".[1]

In an equilibrium of such a game, the winner's curse does not occur because the bidders account for the bias in their bidding strategies. Behaviorally and empirically, however, the winner's curse is a common phenomenon (cf. Richard Thaler).

## JEL classification

In the Journal of Economic Literature Classification System, C7 is the classification for Game Theory and D44 is the classification for Auctions.[10]

## Footnotes

1. McAfee, R. Preston; McMillan, John (1987). "Auctions and Bidding". Journal of Economic Literature. 25 (2): 699–738. JSTOR 2726107.
2. ^ Watson, Joel (2013). "Chapter 27: Lemons, Auctions, and Information Aggregation". Strategy: An Introduction to Game Theory, Third Edition. New York, NY: W.W. Norton & Company. pp. 360–377. ISBN 978-0-393-91838-0.
3. ^ Li, Tong; Perrigne, Isabelle; Vuong, Quang (2002). "Structural Estimation of the Affiliated Private Value Auction Model".
The RAND Journal of Economics. 33 (2): 171–193. JSTOR 3087429.
4. ^ Milgrom, P., and R. Weber (1982) "A Theory of Auctions and Competitive Bidding," Econometrica Vol. 50 No. 5, pp. 1089–1122.
5. ^ Because bidders in real-world auctions are rarely symmetric, applied scientists began to research auctions with asymmetric value distributions beginning in the late 1980s. Such applied research often depended on numerical solution algorithms to compute an equilibrium and establish its properties. Preston McAfee and John McMillan (1989) simulated bidding for a government contract in which the cost distribution of domestic firms is different from the cost distribution of the foreign firms ("Government Procurement and International Trade," Journal of International Economics, Vol. 26, pp. 291–308). One of the publications based on the earliest numerical research is Dalkir, S., J. W. Logan, and R. T. Masson, "Mergers in Symmetric and Asymmetric Noncooperative Auction Markets: The Effects on Prices and Efficiency," published in Vol. 18 of The International Journal of Industrial Organization (2000, pp. 383–413). Other pioneering research includes Tschantz, S., P. Crooke, and L. Froeb, "Mergers in Sealed versus Oral Auctions," published in Vol. 7 of The International Journal of the Economics of Business (2000, pp. 201–213).
6. ^ K. Waehrer (1999) "Asymmetric Auctions With Application to Joint Bidding and Mergers," International Journal of Industrial Organization 17: 437–452.
7. ^ Athey, S. (2001) "Single Crossing Properties and the Existence of Pure Strategy Equilibria in Games of Incomplete Information," Econometrica Vol. 69 No. 4, pp. 861–890.
8. ^ Reny, P., and S. Zamir (2004) "On the Existence of Pure Strategy Monotone Equilibria in Asymmetric First-Price Auctions," Econometrica, Vol. 72 No. 4, pp. 1105–1125.
9. ^ Lebrun, Bernard (1996) "Existence of an equilibrium in first price auctions," Economic Theory, Vol. 7 No. 3, pp. 421–443.
10.
^ "Journal of Economic Literature Classification System". American Economic Association. Retrieved 2008-06-25. (D: Microeconomics, D4: Market Structure and Pricing, D44: Auctions)
# CvStruct

class CvStruct(*args)

GetFEM CvStruct object. General constructor for CvStruct objects.

Methods:

- basic_structure(): Get the simplest convex structure. For example, the 'basic structure' of the 6-node triangle is the canonical 3-noded triangle.
- char(): Output a string description of the CvStruct.
- dim(): Get the dimension of the convex structure.
- display(): Display a short summary for a CvStruct object.
- face(F): Return the convex structure of the face F.
- facepts(F): Return the list of point indices for the face F.
- nbpts(): Get the number of points of the convex structure.
# Roots of American industrialization, 1790-1860

February 5, 2013, 9:56 pm

[Figure: Whitney's cotton gin. Source: National Museum of American History]

## The Puzzle of Industrialization

In a society which is predominantly agricultural, how is it possible for industrialization to gain a foothold? One view is that the demand of farm households for manufactures spurs industrialization, but such an outcome is not guaranteed. What if farm households can meet their own food requirements, and they choose to supply some of their needs for manufactures by engaging in small-scale craft production in the home? They might supplement this production with limited purchases of goods from local craftworkers and purchases of luxuries from other countries. This local economy would be relatively self-sufficient, and there is no apparent impetus to alter it significantly through industrialization, that is, the growth of workshop and factory production for larger markets. Others would claim that limited gains might come from specialization, once demand passed some small threshold. Finally, it has been argued that if the farmers are impoverished, some of them would be available for manufacturing and this would provide an incentive to industrialize. However, this argument begs the question as to who would purchase the manufactures. One possibility is that non-farm rural dwellers, such as tradespeople, innkeepers, and professionals, as well as a small urban population, might provide an impetus to limited industrialization.

## The Problem with the "Impoverished Agriculture" Theory

The industrialization of the eastern United States from 1790 to 1860 raises similar conundrums. For a long time, scholars thought that the agriculture was mostly poor quality. Thus, the farm labor force left agriculture for workshops, such as those which produced shoes, or for factories, such as the cotton textile mills of New England.
These manufactures provided employment for women and children, who otherwise had limited productive possibilities because the farms were not economical. Yet, the market for manufactures remained mostly in the East prior to 1860. Consequently, it is unclear who would have purchased the products to support the growth of manufactures before 1820, as well as to undergird the large-scale industrialization of the East during the two decades following 1840. Even if the impoverished-agriculture explanation of the East's industrialization is rejected, we are still left with the curiosity that as late as 1840, about eighty percent of the population lived in rural areas, though some of them were in nonfarm occupations. In brief, the puzzle of eastern industrialization between 1790 and 1860 can be resolved - the East had a prosperous agriculture. Farmers supplied low-cost agricultural products to rural and urban dwellers, and this population demanded manufactures, which were supplied by vigorous local and subregional manufacturing sectors. Some entrepreneurs shifted into production for larger market areas, and this transformation occurred especially in sectors such as shoes, selected light manufactures produced in Connecticut (such as buttons, tinware, and wooden clocks), and cotton textiles. Transportation improvements exerted little impact on these agricultural and industrial developments, primarily because the lowly wagon served effectively as a transport medium and much of the East's most prosperous areas were accessible to cheap waterway transportation. The metropolises of Boston, New York, Philadelphia, and, to a lesser extent, Baltimore, and the satellites of each (together, each metropolis and its satellites is called a metropolitan industrial complex), became leading manufacturing centers, and other industrial centers emerged in prosperous agricultural areas distant from these complexes. 
The East industrialized first, and, subsequently, the Midwest began an agricultural and industrial growth process which was underway by the 1840s. Together, the East and the Midwest constituted the American Manufacturing Belt, which was formed by the 1870s, whereas the South failed to industrialize commensurately.

## Synergy between Agriculture and Manufacturing

The solution to the puzzle of how industrialization can occur in a predominantly agricultural economy recognizes the possibility of synergy between agriculture and manufacturing. During the first three decades following 1790, prosperous agricultural areas emerged in the eastern United States. Initially, these areas were concentrated near the small metropolises of Boston, New York, and Philadelphia, and in river valleys such as the Connecticut Valley in Connecticut and Massachusetts, the Hudson and Mohawk Valleys in New York, the Delaware Valley bordering Pennsylvania and New Jersey, and the Susquehanna Valley in eastern Pennsylvania. These agricultural areas had access to cheap, convenient transport which could be used to reach markets; the farms supplied the growing urban populations in the cities and some of the products were exported. Furthermore, the farmers supplied the nearby, growing non-farm populations in the villages and small towns who provided goods and services to farmers. These non-farm consumers included retailers, small mill owners, teamsters, craftspeople, and professionals (clergy, physicians, and lawyers). Across every decade from 1800 to 1860, the number of farm laborers grew, thus testifying to the robustness of eastern agriculture (see Table 1). And, this increase occurred in the face of an expanding manufacturing sector, as increasing numbers of rural dwellers left the farms to work in the factories, especially after 1840.
Even New England, the region which presumably was the epitome of declining agriculture, witnessed a rise in the number of farm laborers all the way up to 1840, and, as of 1860, the drop off from the peak was small. Massachusetts and Connecticut, which had vigorous small workshops and increasing numbers of small factories before 1840, followed by a surge in manufacturing after 1840, matched the trajectory of farm laborers in New England as a whole. The numbers in these two states peaked in 1840 and fell off only modestly over the next twenty years. The Middle Atlantic region witnessed an uninterrupted rise in the number of farm laborers over the sixty-year period. New York and Pennsylvania, the largest states, followed slightly different paths. In New York, the number of farm laborers peaked around 1840 and then stabilized near that level for the next two decades, whereas in Pennsylvania the number of farm laborers rose in an uninterrupted fashion.

Table 1: Number of Farm Laborers by Region and Selected States, 1800-1860

| Region/State | 1800 | 1810 | 1820 | 1830 | 1840 | 1850 | 1860 |
|---|---|---|---|---|---|---|---|
| New England | 228,100 | 257,700 | 303,400 | 353,800 | 389,100 | 367,400 | 348,100 |
| Massachusetts | 73,200 | 72,500 | 73,400 | 78,500 | 87,900 | 80,800 | 77,700 |
| Connecticut | 50,400 | 49,300 | 51,500 | 55,900 | 57,000 | 51,400 | 51,800 |
| Middle Atlantic | 375,700 | 471,400 | 571,700 | 715,000 | 852,800 | 910,400 | 966,600 |
| New York | 111,800 | 170,100 | 256,000 | 356,300 | 456,000 | 437,100 | 449,100 |
| Pennsylvania | 112,600 | 141,000 | 164,900 | 195,200 | 239,000 | 296,300 | 329,000 |
| East | 831,900 | 986,800 | 1,178,500 | 1,422,600 | 1,631,000 | 1,645,200 | 1,662,800 |

The farmers, retailers, professionals, and others in these prosperous agricultural areas accumulated capital which became available for other economic sectors, and manufacturing was one of the most important to receive this capital.
Entrepreneurs who owned small workshops and factories obtained capital to turn out a wide range of goods such as boards, boxes, utensils, building hardware, furniture, and wagons, which were in demand in the agricultural areas. And, some of these workshops and factories enlarged their market areas to a subregion as they gained production efficiencies; but, this did not account for all industrial development. Selected manufactures such as shoes, tinware, buttons, and cotton textiles were widely demanded by urban and rural residents of prosperous agricultural areas and by residents of the large cities. These products were high value relative to their weight; thus, the cost to ship them long distances was low. Astute entrepreneurs devised production methods and marketing approaches to sell these goods in large market areas, including New England and the Middle Atlantic regions of the East.

## Manufactures Which Were Produced for Large Market Areas

### Shoes and Tinware

Small workshops turned out shoes. Massachusetts entrepreneurs devised an integrated shoe production complex based on a division of labor among shops, and they established a marketing arm of wholesalers, principally in Boston, who sold the shoes throughout New England, to the Middle Atlantic, and to the South (particularly, to slave plantations). Businesses in Connecticut drew on the extensive capital accumulated by the well-to-do rural and urban dwellers of that state and moved into tinware, plated ware, buttons, and wooden clocks. These products, like shoes, also were manufactured in small workshops, but a division of labor among shops was less important than the organization of production within shops. Firms producing each good tended to agglomerate in a small subregion of the state. These clusters arose because entrepreneurs shared information about production techniques and specialized skills which they developed, and this knowledge was communicated as workers moved among shops.
Initially, a marketing system of peddlers emerged in the tinware sector, and they sold the goods, first throughout Connecticut, and then they extended their travels to the rest of New England and to the Middle Atlantic. Workshops which made other types of light, high-value goods soon took advantage of the peddler distribution system to enlarge their market areas. At first, these peddlers operated part-time during the year, but as the supply of goods increased and market demand grew, peddlers operated for longer periods of the year and they traveled farther.

### Cotton Textiles

Cotton textile manufacturing was an industry built on low-wage, especially female, labor; presumably, this industry offered opportunities in areas where farmers were unsuccessful. Yet, similar to the other manufactures which enlarged their market areas to the entire East before 1820, cotton textile production emerged in prosperous agricultural areas. That is not surprising, because this industry required substantial capital, technical skills, and, initially, nearby markets. These requirements were met in rich farming areas, which also could draw on wealthy merchants in large cities who contributed capital and provided sale outlets beyond nearby markets as output grew. The production processes in cotton textile manufacturing, however, diverged from the approaches to making shoes and small metal and wooden products. From the start, production processes included textile machinery, which initially consisted of spinning machines to make yarn, and later (after 1815), weaving machines and other mechanical equipment were added. Highly skilled mechanics were required to build the machines and to maintain them. The greater capital requirements for cotton mills, compared to shoes and small goods' manufactures in Connecticut, meant that merchant wholesalers and wealthy retailers, professionals, mill owners, and others, were important underwriters of the factories.
Starting in the 1790s, New England, and, especially, Rhode Island, housed the leaders in early cotton textile manufacturing. Providence merchants funded some of the first successful cotton spinning mills, and they drew on the talents of Samuel Slater, an immigrant British machinist. He trained many of the first important textile mechanics, and investors in various parts of Rhode Island, Connecticut, Massachusetts, New Hampshire, and New York hired them to build mills. Between 1815 and 1820, power-loom weaving began to be commercially feasible, and this effort was led by firms in Rhode Island and, especially, in Massachusetts. Boston merchants, starting with the Boston Manufacturing Company at Waltham, devised a business plan which targeted large-scale, integrated cotton textile manufacturing, with a marketing/sales arm housed in a separate firm. They enlarged their effort significantly after 1820, and much of the impetus to the growth of the cotton textile industry came from the success entrepreneurs had in lowering the cost of production.

## The Impact of Transportation Improvements

Following 1820, government and private sources invested substantial sums in canals, and after 1835, railroad investment increased rapidly. Canals required huge volumes of low-value commodities in order to pay operating expenses, cover interest on the bonds which were issued for construction, and retire the bonds at maturity. These conditions were only met in the richest agricultural and resource (lumbering and coal mining, for example) areas traversed by the Erie and Champlain Canals in New York and the coal canals in eastern Pennsylvania and New Jersey. The vast majority of the other canals failed to yield benefits for agriculture and industry, and most were costly debacles. Early railroads mainly carried passengers, especially within fifty to one hundred miles of the largest cities - Boston, New York, Philadelphia, and Baltimore.
Industrial products were not carried in large volumes until after 1850; consequently, railroads built before that time had little impact on industrialization in the East. Canals and railroads had minor impacts on agricultural and industrial development because the lowly wagon provided withering competition. Wagons offered flexible, direct connections between origins and destinations, without the need to transship goods, as was the case with canals and railroads; these modes required wagons at their end points. Within a distance of about fifty miles, the cost of wagon transport was competitive with alternative transport modes, so long as the commodities were high value relative to their weight. And, infrequent transport of these goods could occur over distances of as much as one hundred miles. This applied to many manufactures, and agricultural commodities could be raised to high value by processing prior to shipment. Thus, wheat was turned into flour, corn and other grains were fed to cattle and pigs and these were processed into beef and pork prior to shipment, and milk was converted into butter and cheese. Most of the richest agricultural and industrial areas of the East were less than one hundred miles from the largest cities or these areas were near low-cost waterway transport along rivers, bays, and the Atlantic Coast. Therefore, canals and railroads in these areas had difficulty competing for freight, and outside these areas the limited production generated little demand for long-distance transport services.

## Agricultural Prosperity Continues

After 1820, eastern farmers seized the increasing market opportunities in the prosperous rural areas as nonfarm processing expanded and village and small town populations demanded greater amounts of farm products.
The large number of farmers who were concentrated around the rapidly growing metropolises (Boston, New York, Philadelphia, and Baltimore) and near urban agglomerations such as Albany-Troy, New York, developed increasing specialization in urban market goods such as fluid milk, fresh vegetables, fruit, butter, and hay (for horse transport). Farmers farther away responded to competition by shifting into products which could be transported long distances to market, including wheat into flour, cattle which walked to market, or pigs which were converted into pork. During the winter these farms sent butter, and cheese was a specialty which could be lucrative for long periods of the year when temperatures were cool. These changes swept across the East, and, after 1840, farmers increasingly adjusted their production to compete with cheap wheat, cattle, and pork arriving over the Erie Canal from the Midwest. Wheat growing became less profitable, and specialized agriculture expanded, such as potatoes, barley, and hops in central New York and cigar tobacco in the Connecticut Valley. Farmers near the largest cities intensified their specialization in urban market products, and as the railroads expanded, fluid milk was shipped longer distances to these cities. Farmers in less accessible areas and on poor agricultural land which was infertile or too hilly, became less competitive. If these farmers and their children stayed, their incomes declined relative to others in the East, but if they moved to the Midwest or to the burgeoning industrial cities of the East, they had the chance of participating in the rising prosperity. 
## Metropolitan Industrial Complexes

Table 2: Manufacturing Employment in the Metropolitan Industrial Complexes of New York, Philadelphia, Boston, and Baltimore as a Percentage of National Manufacturing Employment in 1840

| | Metropolis | Satellites | Complex |
|---|---|---|---|
| New York | 4.1% | 3.4% | 7.4% |
| Philadelphia | 3.9 | 2.9 | 6.7 |
| Boston | 0.5 | 6.6 | 7.1 |
| Baltimore | 2.0 | 0.2 | 2.3 |
| Four Complexes | 10.5 | 13.1 | 23.5 |

Note: Metropolitan county is defined as the metropolis for each complex and "outside" comprises nearby counties; those included in each complex were the following. New York: metropolis (New York, Kings, Queens, Richmond); outside (Connecticut: Fairfield; New York: Westchester, Putnam, Rockland, Orange; New Jersey: Bergen, Essex, Hudson, Middlesex, Morris, Passaic, Somerset). Philadelphia: metropolis (Philadelphia); outside (Pennsylvania: Bucks, Chester, Delaware, Montgomery; New Jersey: Burlington, Gloucester, Mercer; Delaware: New Castle). Boston: metropolis (Suffolk); outside (Essex, Middlesex, Norfolk, Plymouth). Baltimore: metropolis (Baltimore); outside (Anne Arundel, Harford).

Also, by 1840, prosperous agricultural areas farther from these complexes, such as the Connecticut Valley in New England, the Hudson Valley, the Erie Canal Corridor across New York state, and southeastern Pennsylvania, housed significant amounts of manufacturing in urban places. At the intersection of the Hudson and Mohawk rivers, the Albany-Troy agglomeration contained one of the largest concentrations of manufacturing outside the metropolitan complexes. And, industrial towns such as Utica, Syracuse, Rochester, and Buffalo were strung along the Erie Canal Corridor. Many of the manufactures (such as furniture, wagons, and machinery) served subregional markets in the areas of prosperous agriculture, but some places also developed specialization in manufactures (textiles and hardware) for larger regional and interregional market areas (the East as a whole).
The Connecticut Valley, for example, housed many firms which produced cotton textiles, hardware, and cutlery.

## Manufactures for Eastern and National Markets

### Shoes

In several industrial sectors whose firms had expanded to regional, and even multiregional, markets in the East before 1820, firms intensified their penetration of eastern markets and reached markets in the rapidly growing Midwest between 1820 and 1860. In eastern Massachusetts, a production complex of shoe firms innovated methods of organizing output within and among firms, and they developed a wide array of specialized tools and components to increase productivity and to lower manufacturing costs. In addition, a formidable wholesaling, marketing, and distribution complex, headed by Boston wholesalers, pushed the ever-growing volume of shoes into sales channels which reached throughout the nation. Machinery did not come into use until the 1850s, and, by 1860, Massachusetts accounted for half of the value of the nation's shoe production.

### Cotton Textiles

In contrast, machinery constituted an important factor of production which drove down the price of cotton textile goods, substantially enlarging the quantity consumers demanded. Before 1820, most of the machinery innovations improved the spinning process for making yarn, and in the five years following 1815, innovations in mechanized weaving generated an initial substantial drop in the cost of production as the first integrated spinning-weaving mills emerged. During the next decade and a half the price of cotton goods collapsed by over fifty percent as large integrated spinning-weaving mills became the norm for the production of most cotton goods. Therefore, by the mid-1830s vast volumes of cotton goods were pouring out of textile mills, and a sophisticated set of specialized wholesaling firms, mostly concentrated in Boston, and secondarily, in New York and Philadelphia, channeled these items into the national market.
Prior to 1820, the cotton textile industry was organized into three cores. The Providence core dominated and the Boston core occupied second place; both of these were based mostly on mechanized spinning. A third core in the city of Philadelphia was based on hand spinning and weaving. Within about fifteen years after 1820, the Boston core soared to a commanding position in cotton textile production as a group of Boston merchants and their allies relentlessly replicated their business plan at various sites in New England, including at Lowell, Chicopee, and Taunton in Massachusetts, at Nashua, Manchester, and Dover in New Hampshire, and at Saco in Maine. The Providence core continued to grow, but its investors did not seem to fully grasp the strategic, multi-faceted business plan which the Boston merchants implemented. Similarly, investors in an emerging core within about fifty to seventy-five miles of New York City in the Hudson Valley and northern New Jersey did not seem to fully understand the Boston merchants' plan, and these New York City area firms never reached the scale of the firms of the Boston core. The Philadelphia core enlarged to nearby areas southwest of the city and in Delaware, but these firms stayed small, and the Philadelphia firms created a small-scale, flexible production system which turned out specialized goods, not the mass-market commodity textiles of the other cores.

### Capital Investment in Cotton Textiles

The distribution of capital investment in cotton textiles across the regions and states of the East between 1820 and 1860 captures the changing prominence of the cores of cotton textile production (see Table 3). The New England and the Middle Atlantic regions contained approximately similar shares (almost half each) of the nation's capital investment. However, during the 1820s the cotton textile industry restructured to a form which was maintained for the next three decades.
New England's share of capital investment surged to about seventy percent, and it maintained that share until 1860, whereas the Middle Atlantic region's share fell to around twenty percent by 1840 and remained near that until 1860. The rest of the nation, primarily the South, reached about ten percent of total capital investment around 1840 and continued at that level for the next two decades. Massachusetts became the leading cotton textile state by 1831 and Rhode Island, the early leader, gradually slipped to a level of about ten percent by the 1850s; New Hampshire and Pennsylvania housed approximately similar shares as Rhode Island by that time.

Table 3. Capital Invested in Cotton Textiles by Region and State as a Percentage of the Nation, 1820-1860

| Region/state | 1820 | 1831 | 1840 | 1850 | 1860 |
|---|---|---|---|---|---|
| New England | 49.6% | 69.8% | 68.4% | 72.3% | 70.3% |
| Maine | 1.6 | 1.9 | 2.7 | 4.5 | 6.1 |
| New Hampshire | 5.6 | 13.1 | 10.8 | 14.7 | 12.8 |
| Vermont | 1.0 | 0.7 | 0.2 | 0.3 | 0.3 |
| Massachusetts | 14.3 | 31.7 | 34.1 | 38.2 | 34.2 |
| Connecticut | 11.6 | 7.0 | 6.2 | 5.7 | 6.7 |
| Rhode Island | 15.4 | 15.4 | 14.3 | 9.0 | 10.2 |
| Middle Atlantic | 46.2 | 29.5 | 22.7 | 17.3 | 19.0 |
| New York | 18.8 | 9.0 | 9.6 | 5.6 | 5.5 |
| New Jersey | 4.7 | 5.0 | 3.4 | 2.0 | 1.3 |
| Pennsylvania | 6.3 | 9.3 | 6.5 | 6.1 | 9.3 |
| Delaware | 4.0 | 0.9 | 0.6 | 0.6 | 0.6 |
| Maryland | 12.4 | 5.3 | 2.6 | 3.0 | 2.3 |
| Rest of nation | 4.3 | 0.7 | 9.0 | 10.4 | 10.7 |
| Nation | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% |
| Total capital (thousands) | $10,783 | $40,613 | $51,102 | $74,501 | $98,585 |

### Connecticut's Industries

In Connecticut, industrialists built on their successful production and sales prior to 1820 and expanded into a wider array of products which they sold in the East and South, and, after 1840, they acquired more sales in the Midwest. This success was not based on a mythical "Yankee ingenuity," which, typically, has been framed in terms of character.
Instead, this ingenuity rested on fundamental assets: a highly educated population linked through wide-ranging social networks which communicated information about technology, labor opportunities, and markets; and abundant supplies of capital in the state, which supported the entrepreneurs. The peddler distribution system provided efficient sales channels into the mid-1830s, but, after that, firms took advantage of more traditional wholesaling channels. In some sectors, such as the brass industry, firms followed the example of the large Boston-core textile firms, and the brass companies founded their own wholesale distribution agencies in Boston and New York City. The achievements of Connecticut's firms were evident by 1850. As a share of the nation's value of production, they accounted for virtually all of the clocks, pins, and suspenders, close to half of the buttons and rubber goods, and about one-third of the brass foundry products, Britannia and plated ware, and hardware.

## Difficulty of Duplicating Eastern Methods in the Midwest

The East industrialized first, building on a prosperous agriculture, as some of its entrepreneurs shifted into the national-market manufactures of shoes, cotton textiles, and the diverse goods turned out in Connecticut. These industrialists made this shift prior to 1820, and they enhanced their dominance of these products during the subsequent two decades. Manufacturers in the Midwest did not have sufficient intraregional markets to begin producing these goods before 1840; therefore, they could not compete in these national-market manufactures. Eastern firms had developed technologies and organizations of production and created sales channels which could not be readily duplicated, and these light, high-value goods were transported cheaply to the Midwest.
When midwestern industrialists faced choices about which manufactures to enter, the eastern light, high-value goods were being sold in the Midwest at prices which were so low that it was too risky for midwestern firms to attempt to compete. Instead, these firms moved into a wide range of local and regional market manufactures which also existed in the East, but which cost too much to transport to the Midwest. These goods included lumber and food products (e.g., flour and whiskey), bricks, chemicals, machinery, and wagons. ## The American Manufacturing Belt ### The Midwest Joins the American Manufacturing Belt after 1860 Between 1840 and 1860, Midwestern manufacturers made strides in building an industrial infrastructure, and they were positioned to join with the East to constitute the American Manufacturing Belt, the great concentration of manufacturing which would sprawl from the East Coast to the edge of the Great Plains. This Belt became mostly set within a decade or so after 1860, because technologies and organizations of production and of sales channels had lowered costs across a wide array of manufactures, and improvements in transportation (such as an integrated railroad system) and communication (such as the telegraph) reduced distribution costs. Thus, increasing shares of industrial production were sold in interregional markets. ## Lack of Industrialization in the South Although the South had prosperous farms, it failed to build a deep and broad industrial infrastructure prior to 1860, because much of its economy rested on a slave agricultural system. In this economy, investments were heavily concentrated in slaves rather than in an urban and industrial infrastructure. Local and regional demand remained low across much of the South, because slaves were not able to freely express their consumption demands and population densities remained low, except in a few agricultural areas. 
Thus, the market thresholds for many manufactures were not met, and, if thresholds were met, the demand was insufficient to support more than a few factories. By the 1870s, when the South had recovered from the Civil War and its economy was reconstructed, eastern and midwestern industrialists had built strong positions in many manufactures. And, as new industries emerged, the northern manufacturers had the technological and organizational infrastructure and distribution channels to capture dominance in the new industries. In a similar fashion, the Great Plains, the Southwest, and the West were settled too late for their industrialists to be major producers of national market goods. Manufacturers in these regions focused on local and regional market manufactures. Some low wage industries (such as textiles) began to move to the South in significant numbers after 1900, and the emergence of industries based on high technology after 1950 led to new manufacturing concentrations which rested on different technologies. Nonetheless, the American Manufacturing Belt housed the majority of the nation's industry until the middle of the twentieth century.
Disclaimer: This article is taken wholly from, or contains information that was originally published by, the EH.Net. Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information. The use of information from the EH.Net should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content.
https://brilliant.org/problems/properties-of-electric-transformer/
# Properties of an electric transformer

The above is a schematic diagram of an electric transformer, where the primary AC voltage is $$220\text{ V}$$ and the secondary coil is connected to an electric heating instrument. The secondary voltage and current intensity are $$110\text{ V}$$ and $$8\text{ A},$$ respectively. Assume there is no energy loss in this electric transformer. Which of the following statements is correct?

$$a)$$ The current intensity flowing in the primary winding is $$4\text{ A}.$$

$$b)$$ The number of turns in the secondary winding is twice as many as that in the primary winding.

$$c)$$ The primary coil always transfers $$880\text{ W}$$ of electric power to the secondary coil without reference to any heating instrument.
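As a worked sketch of the reasoning (my own addition, not part of the original problem page), the ideal-transformer relations implied by the no-energy-loss assumption give:

```latex
% Ideal transformer: power is conserved, and the voltage ratio
% equals the turns ratio.
\begin{align*}
P_{\text{out}} &= V_s I_s = 110\,\mathrm{V} \times 8\,\mathrm{A} = 880\,\mathrm{W},\\
I_p &= \frac{P_{\text{out}}}{V_p} = \frac{880\,\mathrm{W}}{220\,\mathrm{V}} = 4\,\mathrm{A},\\
\frac{N_s}{N_p} &= \frac{V_s}{V_p} = \frac{110}{220} = \frac{1}{2}.
\end{align*}
```

So the primary current is indeed 4 A, the secondary winding has half (not twice) the primary's turns, and the 880 W transfer holds only for this particular load, since an ideal transformer's power throughput is set by whatever is connected to the secondary.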
https://ncatlab.org/nlab/show/abstract%20model%20theory
# nLab abstract model theory

## Idea

Abstract model theory is the study of the general properties of the model theory of extensions of (classical untyped) first-order logic. Originally motivated by Lindström's theorem, which characterizes first-order logic, the field has subsequently been extended to provide alternative characterizations and include different logics within its range.

The basic concept of abstract model theory is that of an abstract logic, which is a triple $\mathcal{L}=(S,\Phi ,\models)$ where $\models$ is a binary relation between the class of $\mathcal{L}$-structures $S$ and the class of $\mathcal{L}$-sentences $\Phi$, to be thought of as a minimalistic version of the satisfaction relation.

## Remark

As the logical relations studied by abstract model theory are of a functorial nature, some category theory entered the picture already in Barwise (1974). The theory of institutions, aka institution-independent model theory (Diaconescu 2008), constitutes abstract categorical model theory proper. In a similar abstract categorical vein is the functorial approach to geometric theories described in Johnstone (2002, sec. B4.2).

## References

• Jon Barwise, Axioms for abstract model theory, Annals of Mathematical Logic 7, pp. 221-265, 1974.
• Barwise, Feferman (eds.), Model-theoretic Logics, Springer, Heidelberg, 1985 (freely available online: toc).
• Răzvan Diaconescu, Institution-independent Model Theory, Birkhäuser, Basel, 2008.
• Marta García-Matos, Jouko Väänänen, Abstract Model Theory as a Framework for Universal Logic, Logica Universalis, 2005, pp. 1-33 (draft).
• Peter Johnstone, Sketches of an Elephant, vol. I, Oxford UP, 2002.
• Jouko Väänänen, The Craig Interpolation Theorem and abstract model theory, Synthese 164:401 (2008) (freely available online).

Last revised on September 18, 2018 at 09:11:55.
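The triple $(S,\Phi,\models)$ from the Idea section can be made concrete in a few lines. The following sketch (class and variable names are my own, not nLab's) instantiates it with truth assignments as structures and atomic propositions as sentences:

```python
class AbstractLogic:
    """An abstract logic L = (S, Phi, |=): a class of structures, a class
    of sentences, and a binary satisfaction relation between them."""

    def __init__(self, structures, sentences, satisfies):
        self.structures = structures   # class S of L-structures
        self.sentences = sentences     # class Phi of L-sentences
        self.satisfies = satisfies     # the relation |=

    def models(self, structure, sentence):
        # "A |= phi"
        return self.satisfies(structure, sentence)


# Instance: propositional structures are truth assignments,
# sentences are atomic proposition names, and |= is lookup.
S = [{"p": True, "q": False}, {"p": False, "q": True}]
Phi = ["p", "q"]
L = AbstractLogic(S, Phi, lambda A, phi: A[phi])
```

Lindström-style questions then become questions about which closure and compactness properties such a triple satisfies, independent of any particular syntax.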
http://www.numerical-tours.com/matlab/denoisingadv_5_mathmorph/
# Mathematical Morphology

This numerical tour explores mathematical morphology of binary images.

## Installing toolboxes and setting up the path

You need to download the following files: signal toolbox and general toolbox. You need to unzip these toolboxes in your working directory, so that you have toolbox_signal and toolbox_general in your directory.

For Scilab users: you must replace the Matlab comment '%' by its Scilab counterpart '//'.

Recommendation: You should create a text file named for instance numericaltour.sce (in Scilab) or numericaltour.m (in Matlab) to write all the Scilab/Matlab commands you want to execute. Then, simply run exec('numericaltour.sce'); (in Scilab) or numericaltour; (in Matlab) to run the commands.

Execute this line only if you are using Matlab.

getd = @(p)path(p,path); % scilab users must *not* execute this

Then you can add the toolboxes to the path.

getd('toolbox_signal/');
getd('toolbox_general/');

## Binary Images and Structuring Element

Here we process binary images using local operators defined by a structuring element, which is here chosen to be a discrete disk of varying radius.

n = 256;

Display.

clf; imageplot(M);

Make it binary.

M = double(M>.45);

Display.

clf; imageplot(M);

Round structuring element.

wmax = 7;
[Y,X] = meshgrid(-wmax:wmax, -wmax:wmax);
normalize = @(x)x/sum(x(:));
strel = @(w)normalize( double( X.^2+Y.^2<=w^2 ) );

Exercise 1: (check the solution) Display structuring elements of increasing sizes.

exo1;

## Dilation

A dilation corresponds to taking the maximum value of the image around each pixel, over a region equal to the structuring element. It can be implemented using a convolution with the structuring element followed by a thresholding.

dillation=@(x,w)double(perform_convolution(x,strel(w))>0);
Md = dillation(M,2);

Display.

clf; imageplot(Md);

Exercise 2: (check the solution) Test with structuring elements of increasing size.
exo2;

## Erosion

An erosion corresponds to taking the minimum value of the image around each pixel, over a region equal to the structuring element: a pixel survives only if the structuring element fits entirely inside the shape. It can be implemented using a convolution with the structuring element followed by a thresholding.

errosion=@(x,w)double( perform_convolution(x,strel(w))>=.999 );
Me = errosion(M,2);

Display.

clf; imageplot(Me);

Exercise 3: (check the solution) Test with structuring elements of increasing size.

exo3;

## Opening

An opening smooths the boundary of objects (and removes small objects) by performing an erosion and then a dilation.

Define a shortcut.

opening = @(x,w)dillation(errosion(x,w),w);

Perform the opening, here using a very small disk.

w = 1;
Mo = opening(M,w);

Display.

clf; imageplot(Mo);

Exercise 4: (check the solution) Test with structuring elements of increasing size.

exo4;
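The same convolution-plus-threshold trick translates directly out of Matlab. Here is a small self-contained Python/NumPy sketch (my own port, not part of the tour) of the dilation, erosion, and opening operators above, with an unnormalized structuring element:

```python
import numpy as np

def disk(w):
    # discrete disk structuring element of radius w (unnormalized)
    r = np.arange(-w, w + 1)
    X, Y = np.meshgrid(r, r)
    return (X**2 + Y**2 <= w**2).astype(float)

def conv2_same(M, K):
    # naive 'same'-size 2-D filtering with zero padding;
    # K is symmetric here, so correlation equals convolution
    kh, kw = K.shape
    ph, pw = kh // 2, kw // 2
    P = np.pad(M, ((ph, ph), (pw, pw)))
    out = np.zeros_like(M, dtype=float)
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            out[i, j] = np.sum(P[i:i + kh, j:j + kw] * K)
    return out

def dilation(M, w):
    # max over the structuring element: any overlap with the shape lights up
    return (conv2_same(M, disk(w)) > 0).astype(float)

def erosion(M, w):
    # min over the structuring element: the element must fit entirely inside
    s = disk(w)
    return (conv2_same(M, s) >= s.sum() - 1e-9).astype(float)

def opening(M, w):
    # erosion followed by dilation: smooths boundaries, removes small objects
    return dilation(erosion(M, w), w)
```

A single isolated pixel illustrates the behavior: dilation grows it into a radius-w disk, while an opening removes it entirely.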
https://lojban.pw/cll/uncll-1.2.4/xhtml_section_chunks/section-scalar-negation.html
## 15.3. Scalar Negation

Let us now consider some other types of negation. For example, when we say:

Example 15.29. The chair is not brown.

we make a positive inference – that the chair is some other color. Thus, it is legitimate to respond:

Example 15.30. It is green.

Whether we agree that the chair is brown or not, the fact that the statement refers to color has significant effect on how we interpret some responses. If we hear the following exchange:

Example 15.31. The chair is not brown. Correct. The chair is wooden.

we immediately start to wonder about the unusual wood that isn't brown. If we hear the exchange:

Example 15.32. Is the chair green? No, it is in the kitchen.

we are unsettled because the response seems to be a non-sequitur. But since it might be true and it is a statement about the chair, one can't say it is entirely irrelevant!

What is going on in these statements is something called scalar negation. As the name suggests, scalar negation presumes an implied scale. A negation of this type not only states that one scalar value is false, but implies that another value on the scale must be true. This can easily lead to complications. The following exchange seems reasonably natural (a little suspension of disbelief in such inane conversation will help):

Example 15.33. That isn't a blue house. Right! That is a green house.

We have acknowledged a scalar negation by providing a correct value which is another color in the set of colors permissible for houses. While a little less likely, the following exchange is also natural:

Example 15.34. That isn't a blue house. Right! That is a blue car.

Again, we have acknowledged a scalar negation, and substituted a different object in the universe of discourse of things that can be blue. Now, if the following exchange occurs:

Example 15.35. That isn't a blue house. Right! That is a green car.

we find the result unsettling.
This is because it seems that two corrections have been applied when there is only one negation. Yet out of context, blue house and green car seem to be reasonably equivalent units that should be mutually replaceable in a sentence. It's just that we don't have a clear way in English to say:

Example 15.36. That isn't a blue-house.

aloud so as to clearly imply that the scalar negation is affecting the pair of words as a single unit. Another even more confusing example of scalar negation is the sentence:

Example 15.37. John didn't go to Paris from Rome.

Might Example 15.37 imply that John went to Paris from somewhere else? Or did he go somewhere else from Rome? Or perhaps he didn't go anywhere at all: maybe someone else did, or maybe there was no event of going whatsoever. One can devise circumstances where any one, two or all three of these statements might be inferred by a listener.

In English, we have a clear way of distinguishing scalar negation from predicate negation that can be used in many situations. We can use the partial word non- as a prefix. But this is not always considered good usage, even though it would render many statements much clearer. For example, we can clearly distinguish

Example 15.38. That is a non-blue house.

from the related sentence

Example 15.39. That is a blue non-house.

Example 15.38 and Example 15.39 have the advantage that, while they contain a negative indication, they are in fact positive assertions. They say what is true by excluding the false; they do not say what is false. We can't always use non- though, because of the peculiarities of English's grammar. It would sound strange to say:

Example 15.40. John went to non-Paris from Rome.

or

Example 15.41. John went to Paris from non-Rome.

although these would clarify the vague negation. Another circumlocution for English scalar negation is other than, which works where non- does not, but is wordier.
Finally, we have natural language negations that are called polar negations, or opposites:

Example 15.42. John is moral

Example 15.43. John is immoral

To be immoral is much more than to just be not moral: it implies the opposite condition. Statements like Example 15.43 are strong negations which not only deny the truth of a statement, but assert its opposite. Since opposite implies a scale, polar negations are a special variety of scalar negations. To examine this concept more closely, let us draw a linear scale, showing two examples of how the scale is used:

    Affirmations (positive)             Negations (negative)
    |-----------|-----------|-----------|-----------|
    All         Most        Some        Few         None
    Excellent   Good        Fair        Poor        Awful

Some scales are more binary than the examples we diagrammed. Thus we have not necessary or unnecessary being the polar opposite of necessary. Another scale, especially relevant to Lojban, is interpreted based on situations modified by one's philosophy: not true may be equated with false in a bi-valued truth-functional logic, while in tri-valued logic an intermediate between true and false is permitted, and in fuzzy logic a continuous scale exists from true to false. The meaning of not true requires a knowledge of which variety of truth scale is being considered. We will define the most general form of scalar negation as indicating only that the particular point or value in the scale or range is not valid and that some other (unspecified) point on the scale is correct. This is the intent expressed in most contexts by not mild, for example. Using this paradigm, contradictory negation is less restrictive than scalar negation – it says that the point or value stated is incorrect (false), and makes no statement about the truth of any other point or value, whether or not on the scale. In English, scalar negation semantically includes phrases such as other than, reverse of, or opposite from expressions and their equivalents.
More commonly, scalar negation is expressed in English by the prefixes non-, un-, il-, and im-. Just which form and permissible values are implied by a scalar negation is dependent on the semantics of the word or concept which is being negated, and on the context. Much confusion in English results from the uncontrolled variations in meaning of these phrases and prefixes. In the examples of Section 15.4, we will translate the general case of scalar negation using the general formula other than when a phrase is scalar-negated, and non- when a single word is scalar-negated.
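The distinction between scalar and polar negation can be made mechanical with a toy model (the representation and function names are my own, not from the CLL) over the quantifier scale diagrammed in this section:

```python
# The quantifier scale from the diagram in this section.
SCALE = ["All", "Most", "Some", "Few", "None"]

def scalar_negation(value):
    # "not X" with scalar force: X is false, and some OTHER
    # (unspecified) point on the scale is true
    return [v for v in SCALE if v != value]

def polar_negation(value):
    # opposite: reflect the value to the other end of the scale
    return SCALE[len(SCALE) - 1 - SCALE.index(value)]
```

Scalar negation of "All" yields the set {Most, Some, Few, None}, while polar negation commits to the single opposite point "None" — which is why the polar reading is the stronger claim.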
https://zbmath.org/?q=an%3A1094.62077
Conditional Akaike information for mixed-effects models. (English) Zbl 1094.62077

Summary: This paper focuses on the Akaike information criterion, AIC, for linear mixed-effects models in the analysis of clustered data. We make the distinction between questions regarding the population and questions regarding the particular clusters in the data. We show that the AIC in current use is not appropriate for the focus on clusters, and we propose instead the conditional Akaike information and its corresponding criterion, the conditional AIC, cAIC. The penalty term in cAIC is related to the effective degrees of freedom $$\rho$$ for a linear mixed model proposed by J. S. Hodges and D. J. Sargent [ibid. 88, No. 2, 367–379 (2001; Zbl 0984.62045)]; $$\rho$$ reflects an intermediate level of complexity between a fixed-effects model with no cluster effect and a corresponding model with fixed cluster effects. The cAIC is defined for both maximum likelihood and residual maximum likelihood estimation. A pharmacokinetics data application is used to illuminate the distinction between the two inference settings, and to illustrate the use of the conditional AIC in model selection.

MSC:

62J05 Linear regression; mixed models
62B10 Statistical aspects of information-theoretic topics
62P10 Applications of statistics to biology and medical sciences; meta analysis
62J10 Analysis of variance and covariance (ANOVA)

Software: MEMSS
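To make the penalty term concrete: for a model $y = X\beta + Zb + \varepsilon$ with shrunken random effects, the effective degrees of freedom $\rho$ can be computed as the trace of the hat matrix of the equivalent penalized least-squares problem. The sketch below (my own illustration with an assumed variance ratio `lam`, not code from the paper) shows $\rho$ interpolating between the two extremes the summary describes — no cluster effect versus fixed cluster effects:

```python
import numpy as np

def effective_df(X, Z, lam):
    """Effective degrees of freedom rho = trace of the hat matrix of
    min ||y - X b - Z u||^2 + lam * ||u||^2,
    where lam = sigma^2 / sigma_b^2 shrinks the random effects u."""
    C = np.hstack([X, Z])
    p, q = X.shape[1], Z.shape[1]
    D = np.diag(np.r_[np.zeros(p), np.ones(q)])   # penalize only u
    H = C @ np.linalg.solve(C.T @ C + lam * D, C.T)  # hat matrix: y -> y_hat
    return np.trace(H)

# 6 observations in 2 clusters of 3: an intercept plus random cluster effects
X = np.ones((6, 1))
Z = np.kron(np.eye(2), np.ones((3, 1)))
```

With `lam` near 0 (no shrinkage) rho approaches the degrees of freedom of the fixed-cluster-effects model; with `lam` large, the cluster effects vanish and rho approaches the fixed-effects-only count. The conditional AIC then uses rho in the penalty in place of the marginal parameter count.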
https://puzzling.stackexchange.com/questions/92512/another-picture-problem
# Another picture problem

Another picture puzzle. Replace the question mark. Once again, the information needed is there. So, try to replace it!

There are exactly 26 vertical bands in the "question" image. If we map these to letters of the alphabet in the obvious way and guess that subdivision means multiple copies of a letter, we get AEEEHIIMNNSSTTTTW, an anagram of WHAT IS TEN TIMES TEN.
https://dsp.stackexchange.com/questions/55962/how-to-take-inverse-fft-of-windowed-and-callibrated-fft-data
# How to take the inverse FFT of windowed and calibrated FFT data? [closed]

I have done windowing and an FFT on a signal, but when I try to recover the original raw signal I am unable to do it. Can anybody help me? How do I take the inverse FFT of windowed and calibrated FFT data? Thanks in advance.

## closed as unclear what you're asking by Marcus Müller, MBaz, lennon310, jojek♦ Mar 19 at 8:37

Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.

• It's a bit unclear what you need help with. The IFFT is nearly the same operation as the FFT, and if you've done the FFT with some existing library, that library most definitely brings an IFFT itself. – Marcus Müller Mar 13 at 10:45
• BTW, $\text{IFFT} \lbrace W(f) \ast X(f) \rbrace \neq x(t)$. Windowing modifies the signal, so you can't get the "original raw" signal $x(t)$ back. – MBaz Mar 13 at 13:42
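Since the thread never shows code, here is a short NumPy sketch (mine, not from the answers) of both comments above: the IFFT exactly undoes the FFT, so what comes back is the *windowed* signal; the windowing itself can only be divided out where the window is safely nonzero.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)     # the "original raw" signal
w = np.hanning(N)              # analysis window
X = np.fft.fft(x * w)          # spectrum of the *windowed* signal

# The IFFT recovers the windowed signal exactly (up to float error) ...
xw = np.fft.ifft(X).real
assert np.allclose(xw, x * w)

# ... but x itself is only recoverable where the window is not tiny;
# near the edges, w -> 0 and dividing by it amplifies errors without bound.
safe = w > 1e-3
x_rec = np.where(safe, xw / np.where(safe, w, 1.0), 0.0)
assert np.max(np.abs(x_rec[safe] - x[safe])) < 1e-8
```

This is why analysis pipelines normally use overlapping windows and overlap-add reconstruction rather than dividing by the window.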
https://www.chemicalforums.com/index.php?topic=86263.0
### Topic: Gas Equilibrium Constant

#### mystreet123
« on: June 01, 2016, 06:46:53 AM »

If a reaction involves both gases and an aqueous solution, should we use Kp or Kc for the equilibrium constant? How do we include the concentration of the aqueous species in a gas equilibrium constant?

#### Rhyming Chemist
« Reply #1 on: June 01, 2016, 07:04:51 AM »

You've asked an interesting question. I'd go with Kc, but I'm going on a hunch. The page below advises that solids and liquids should be omitted from equilibrium constant expressions.

http://chemwiki.ucdavis.edu/Core/Physical_Chemistry/Equilibria/Chemical_Equilibria/The_Equilibrium_Constant

This page, meanwhile, features examples of equilibria exclusively involving gases yet uses the Kc constant.

http://chemwiki.ucdavis.edu/Core/Physical_Chemistry/Equilibria/Chemical_Equilibria/The_Equilibrium_Constant/Calculating_An_Equilibrium_Concentration_From_An_Equilibrium_Constant/Writing_Equilibrium_Constant_Expressions_involving_solids_and_liquids

I couldn't find any explanations of heterogeneous equilibria involving both aqueous and gaseous species, suggesting it's either insanely complicated, or not appreciably different from heterogeneous equilibria which only involve one or the other. In other words, leave out solids and liquids and go with Kc. I suspect it's the insanely complicated option and may involve activities. Is there any particular reason you want to know?

#### mystreet123
« Reply #2 on: June 01, 2016, 07:19:07 AM »

I'm taking the A level, but I guess this is not included in the specification if it is that complicated?

#### mjc123
« Reply #3 on: June 01, 2016, 07:25:31 AM »

There's nothing to stop you using a hybrid equilibrium constant including aqueous concentrations and gas pressures, especially since the usual standard state for solutions is 1 M concentration and for gases 1 atm pressure. For example, the Nernst equation (which is related to equilibrium constants) for the H+/H2 electrode would be

$$E = E^\circ + \frac{RT}{F}\ln\frac{[\mathrm{H^+}]}{p_{\mathrm{H_2}}^{1/2}}$$
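A quick numeric sketch of mjc123's point (the function name and example conditions are mine, purely illustrative): because each quantity is implicitly divided by its own standard state (1 M, 1 atm), mixing a concentration and a pressure in one expression is consistent.

```python
import math

R = 8.314       # gas constant, J/(mol*K)
F = 96485.0     # Faraday constant, C/mol
T = 298.15      # temperature, K
E_STD = 0.0     # standard potential of the H+/H2 electrode, V

def nernst_h2(h_conc, p_h2):
    """E = E deg + (RT/F) ln([H+]/p_H2^(1/2)); [H+] in M, p_H2 in atm."""
    return E_STD + (R * T / F) * math.log(h_conc / math.sqrt(p_h2))

# At standard conditions (1 M, 1 atm) the logarithm vanishes:
assert nernst_h2(1.0, 1.0) == 0.0
# At pH 7 under 1 atm H2, E is about -0.414 V:
print(round(nernst_h2(1e-7, 1.0), 3))
```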
https://support.numxl.com/hc/en-us/articles/215671323
# Airline Model

The airline model is a special, but often used, case of the multiplicative ARIMA model. For a given seasonality length ($s$), the airline model is defined by four (4) parameters: $\mu$, $\sigma$, $\theta$ and $\Theta$.

$$(1-L^s)(1-L)Y_t = \mu + (1-\theta L)(1-\Theta L^s)a_t$$

OR

$$Z_t = (1-L^s)(1-L)Y_t = \mu + (1-\theta L)(1-\Theta L^s)a_t$$

OR

$$Z_t = \mu -\theta \times a_{t-1}-\Theta \times a_{t-s} +\theta\times\Theta \times a_{t-s-1}+ a_t$$

Where:

• $s$ is the length of seasonality.
• $\mu$ is the model mean.
• $\theta$ is the coefficient of the first lagged innovation.
• $\Theta$ is the coefficient of the s-lagged innovation.
• $\left [a_t\right ]$ is the innovations time series.

## Notes

1. The airline model can be viewed as a "cascade" of two models:
   1. The first model is non-stationary: $$(1-L^s)(1-L)Y_t = Z_t$$
   2. The second model is wide-sense stationary: $$Z_t = \mu + (1-\theta L)(1-\Theta L^s)a_t$$
2. The stationary component is a special form of the moving average model.
3. The airline model of order ($s$) has 4 free parameters: $\mu\,,\sigma\,,\theta\,,\Theta$.
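To make the cascade in note 1 concrete, here is a small NumPy sketch (the function and parameter values are illustrative, not part of NumXL): simulate the stationary series $Z_t$, then invert $(1-L)$ with an ordinary cumulative sum and $(1-L^s)$ with a seasonal one to obtain $Y_t$.

```python
import numpy as np

def simulate_airline(n, s=12, mu=0.0, sigma=1.0, theta=0.4, Theta=0.3, seed=0):
    """Simulate Y_t (and Z_t) from the airline model; parameters are examples."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, sigma, n + s + 1)   # innovations a_t, with warm-up
    t = np.arange(s + 1, n + s + 1)
    # Stationary part: Z_t = mu + a_t - theta a_{t-1} - Theta a_{t-s} + theta Theta a_{t-s-1}
    z = (mu + a[t] - theta * a[t - 1]
         - Theta * a[t - s] + theta * Theta * a[t - s - 1])
    # Invert (1-L) (ordinary integration), then (1-L^s) (seasonal integration):
    y = np.cumsum(z)
    for i in range(s, n):
        y[i] += y[i - s]
    return y, z

y, z = simulate_airline(200, s=12)
# Re-differencing y recovers z for t >= s+1, confirming the cascade structure:
s = 12
yd = y[s+1:] - y[s:-1] - y[1:-s] + y[:-s-1]
assert np.allclose(yd, z[s+1:])
```

The final assertion checks that $(1-L^s)(1-L)Y_t = Z_t$ holds for the simulated path, i.e. that the two stages of the cascade really are inverses of the differencing operators.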
http://eeer.org/journal/view.php?number=790
Environ Eng Res > Volume 21(3); 2016 > Article

Shim, Lee, Lee, and Kwak: Experimental study on capture of carbon dioxide and production of sodium bicarbonate from sodium hydroxide

### Abstract

Global warming due to greenhouse gases is an issue of great concern today. Fossil fuel power plants, especially coal-fired thermal power plants, are a major source of carbon dioxide emission. In this work, carbon capture and utilization using sodium hydroxide was studied experimentally. Application to the flue gas of a coal-fired power plant is considered. Carbon dioxide, reacting with an aqueous solution of sodium hydroxide, could be converted to sodium bicarbonate (NaHCO3). A bench-scale unit of a reactor system was designed for this experiment. The capture scale of the reactor system was 2 kg of carbon dioxide per day. The detailed operational conditions could be determined. The purity of the produced sodium bicarbonate was above 97% and the absorption rate of CO2 was above 95% throughout the experiments using this reactor system. The results obtained in this experiment contain useful information for the construction and operation of a commercial-scale plant. Through this experiment, the possibility of carbon capture for coal power plants using sodium hydroxide could be confirmed.

### 1. Introduction

Over the past several decades, there has been growing international interest in limiting the emission of carbon dioxide and reducing greenhouse gases in the atmosphere. Excessive greenhouse gases in the atmosphere are responsible for various environmental problems, such as the increasing number of ocean storms and rising sea levels [1]. Carbon dioxide (CO2) is the major contributor to global warming.
According to a report by the Intergovernmental Panel on Climate Change (IPCC), the concentration of CO2 in the atmosphere may rise to 570 ppm, which would cause a rise in the mean global temperature of 1.9°C and an increase in the mean sea level of 3.8 m [1]. An important part of the increase in CO2 emission comes from the expansion of modern agriculture and irrigation systems [2, 3]. To increase agricultural production and expand cultivated area, water management systems should be carefully planned, and the climate change due to global warming should be considered [4, 5]. Carbon capture and storage (CCS) has been widely recognized as an effective strategy for reducing emissions of greenhouse gases and mitigating global warming [6, 7]. In the CCS process, carbon dioxide emitted from a point source, such as a coal-fired power plant, is captured and transported to storage sites. One of the most promising technologies among carbon capture processes is the post-combustion amine process [8–11]. The amine process has been a proven method for a relatively long time, and it is close to commercialization. However, it requires large-scale storage sites, and steam is needed for regeneration of the amine absorbent, which imposes an energy penalty [12]. Recently, utilization of carbon dioxide (carbon capture and utilization, CCU) has attracted attention [13]. CCU does not require storage sites, unlike CCS. CCU could be a cost-effective option for the reduction of greenhouse gas because the capture costs could be offset by the products. CO2 could be converted to valuable materials through chemical reaction with a reduction agent [14]. CO2 could be sequestered as a carbonate or converted to usable materials, such as construction materials, through mineralization by suitable minerals [15]. This research studied the capture of carbon dioxide emissions from coal-fired power plants using sodium hydroxide (NaOH) experimentally.
Carbon absorption in NaOH solution has been studied since the 1940s [16, 17]. Recently, researchers have focused on the capture of carbon dioxide in ambient air using the rapid reactivity of NaOH [18, 19]. CO2 could be converted to sodium bicarbonate (NaHCO3) through reaction with aqueous NaOH. The carbon capture capacity of NaOH has been investigated at mild concentrations (1–5% NaOH solution) [20]. Sodium bicarbonate has a very wide range of uses. It can be used as baking soda, a cleaning agent, or a pH buffer in the chemical industry or the medical/pharmaceutical industry. It can also be used as a desulfurization or denitrification agent in flue gas treatment. The sodium hydroxide required for carbon capture can be produced by a chlor-alkali process (a process for electrolysis of NaCl) [21]. Economic evaluation of a commercial-scale (100 tCO2/d) plant using this process was already performed using the internal rate of return (IRR) and net present value (NPV) methods [22]. Based on this economic evaluation, it could be concluded that this process is a cost-effective technology option for the reduction of greenhouse gas. The former works were theoretical studies or lab-scale experiments [16, 20]. A bench-scale reactor system (2 kgCO2/d) was designed for this experiment. Through this experiment, the optimum operation conditions of the reactor system could be determined.

### 2.1. Reaction and Thermodynamics

Reaction chemistry and thermodynamics are major considerations in reaction engineering design. Carbon dioxide is a typical acid gas. First, gaseous CO2 is dissolved in water, and it forms carbonic acid (H2CO3). The carbonic acid donates a proton (hydrogen ion, H+) and forms a bicarbonate ion (HCO3−). The bicarbonate ion donates a proton and generates a carbonate ion (CO32−).

##### (1)

$$\mathrm{CO_2(g) + H_2O \leftrightarrow H_2CO_3(aq)}$$

##### (2)

$$\mathrm{H_2CO_3(aq) \leftrightarrow HCO_3^-(aq) + H^+(aq)}$$

##### (3)

$$\mathrm{HCO_3^-(aq) \leftrightarrow CO_3^{2-}(aq) + H^+(aq)}$$

The equilibrium constants [23] at room temperature for Eq.
(2) and (3) could be written as

##### (4)

$$K_1=\frac{[\mathrm{HCO_3^-}]\,[\mathrm{H^+}]}{[\mathrm{H_2CO_3}]}=10^{-6.73}, \qquad K_2=\frac{[\mathrm{CO_3^{2-}}]\,[\mathrm{H^+}]}{[\mathrm{HCO_3^-}]}=10^{-10.26}$$

If we introduce the pseudo-steady state condition (in this condition, $C_{total} \equiv \mathrm{[H_2CO_3] + [HCO_3^-] + [CO_3^{2-}]} = \text{constant}$), the mole fraction of each carbonate species can be expressed as a function of pH. The mole fraction of each carbonate species is plotted in Fig. 1, which is called a Bjerrum plot [24].

Reaction 1

##### (5)

$$\mathrm{2NaOH(aq)+CO_2(g) \rightarrow Na_2CO_3(aq)+H_2O(l)}$$

Reaction 2

##### (6)

$$\mathrm{Na_2CO_3(aq)+H_2O(l)+CO_2(g) \rightarrow 2NaHCO_3(aq)}$$

Overall reaction

##### (7)

$$\mathrm{2NaOH(aq)+2CO_2(g) \rightarrow 2NaHCO_3(aq)}$$

CO2 reacts with aqueous NaOH and forms sodium carbonate (Na2CO3) and then sodium bicarbonate (NaHCO3), in turn. These CO2 absorption reactions are expressed by Eq. (5), (6) and (7). The thermodynamic parameters of each reaction are listed in Table 1. Reactions 1 and 2 are both exothermic (enthalpy of reaction ΔH < 0) and spontaneous (Gibbs free energy of reaction ΔG < 0). However, the Gibbs free energy value of reaction 2 is much smaller in magnitude than that of reaction 1, so the driving force of reaction 2 is small. Therefore, the reaction rate of reaction 2 is very slow in comparison with that of reaction 1.

### 2.2. Design of Reactor

The reaction rates and reaction conditions, such as pH, of reactions 1 and 2 are totally different. Therefore, the two reactions needed to be carried out in separate reactors for efficient reaction. Two or more reactors connected in series were required to provide separate reaction conditions. The Hatta number (Ha) [26] is a dimensionless number that represents the ratio of the rate of reaction in the liquid film to the rate of transport through the film:

##### (8)

$$Ha=\frac{\sqrt{k_n\, C_{B,bulk}\, D_{CO_2}}}{k_L}$$

$k_n$ and $C_{B,bulk}$ represent the rate constant of each reaction and the concentration of the reactant (NaOH or Na2CO3), respectively. $D_{CO_2}$ and $k_L$ represent the diffusion coefficient of CO2 and the mass transfer coefficient, respectively.
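The Hatta-number reasoning can be sketched numerically. In the snippet below, all physical values are rough order-of-magnitude assumptions of mine (not data from the paper): a typical CO2 diffusivity in water, a typical liquid-side mass transfer coefficient, and the literature-order rate constant for CO2 + OH−.

```python
import math

def hatta(k_n, c_b, d_co2, k_l):
    """Ha = sqrt(k_n * C_B,bulk * D_CO2) / k_L, per Eq. (8)."""
    return math.sqrt(k_n * c_b * d_co2) / k_l

D_CO2 = 2.0e-9   # m^2/s, diffusivity of CO2 in water (typical, assumed)
K_L   = 1.0e-4   # m/s, liquid-side mass transfer coefficient (typical, assumed)
C_B   = 3.75e3   # mol/m^3, roughly a 15 wt.% NaOH solution (assumed)

ha_fast = hatta(8.4, C_B, D_CO2, K_L)    # CO2 + OH-: k ~ 8.4 m^3/(mol s)
ha_slow = hatta(1e-6, C_B, D_CO2, K_L)   # hypothetical slow reaction 2
# Fast reaction: Ha >> 1, reaction completes inside the liquid film;
# slow reaction: Ha < 0.3, which calls for a bubble column with large holdup.
assert ha_fast > 0.3 and ha_slow < 0.3
print(round(ha_fast, 1), round(ha_slow, 4))
```

With these assumed values the fast reaction lands in the Ha ~ 10^2 regime and the slow one below 0.3, matching the regime split the next paragraph uses to justify the combined bubble/packed column design.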
If the reaction rate is small and Ha is smaller than 0.3, a bubble column reactor should be introduced. As mentioned previously, the reaction rate of reaction 2 is very slow compared to that of reaction 1. Ha for reaction 1 is $10^2$–$10^3$, while that of reaction 2 is $10^{-3}$–$10^{-2}$ [27]. Therefore, a bubble column reactor should be used for efficient reaction. The micro-sized holes of the spargers for making bubbles, which were small enough to provide a sufficient interface, were always clogged due to the precipitation of sodium carbonate. Bubbles could not be made small enough to secure a sufficient contact interface area. In addition, reaction 2 is very slow and requires additional space time. Thus, we combined a bubble column and a packed bed column into one reactor.

Fig. 2 is a schematic diagram of the reactor system. The inner diameter of the column was 8 cm, and a structured packing was used in the packed column. The height of the packing tower was 300–700 mm; 0.635 cm stainless steel tubing was used for liquid transport, and Teflon tubing was used for gas transport. Each reactor had a recirculation pump (Tuthill PGS 68) and a feeding pump (Masterflex L/S easy load). A pH meter, a thermometer (Orion Star A211), and a pressure gauge were mounted on each reactor. At the upper end of each reactor, a CO2 analyzer (KONICS KN-2000W) was installed.

### 3. Experimental Methods

NaOH aqueous solution was prepared by dissolving NaOH powder (SAMCHUN Chemical, 98%) in distilled water. The concentration of the absorbent solution used in this experiment was 15 wt.%. The solubility of Na2CO3 (the intermediate product of this process) is less than 20 wt.% at room temperature [28]. Therefore, if the concentration of the absorbent is higher than 15 wt.%, the tubing system and gas sparger of the reactor could be clogged by precipitation of Na2CO3. On the other hand, if the concentration is not high enough, the absorption rate is slow. The flow rate of the NaOH solution was 10–15 mL/min.
For the feed gas, 14 volume % carbon dioxide gas balanced with nitrogen gas was used. (The flue gas of a coal-fired power plant contains 14 volume % CO2.) The flow rate of the feed gas was 5 L/min, which is equivalent to 2 kgCO2/d. The pH of the liquid at the bottom of the reactors and the CO2 concentration in the gas after each reactor ($C_{out,n}$) were measured every 5 min. As shown in Fig. 1, pH is a major indicator of the progress of the reaction. The pH value of the first reactor was maintained at pH 8–9, while that of the second reactor was maintained at pH 9–10, and that of the third reactor was kept higher than pH 12. Therefore, the concentration of bicarbonate ions in reactor 1 and the CO2 absorption rate in reactor 3 could be maximized. The CO2 absorption rate ($\gamma_n$) is defined as

##### (9)

$$\gamma_n=100\times\left(1-\frac{C_{out,n}\,(100-C_0)}{C_0\,(100-C_{out,n})}\right)$$

Here, $C_0$ is the volume percent of CO2 in the inlet gas of the reactor system, and $C_{out,n}$ is the volume percent of CO2 in the outlet gas of the n'th reactor. For the purity analysis of the product (sodium bicarbonate), quantitative X-ray diffraction was used.

To achieve the optimum condition of the bench-scale reactor system, three variables were selected:

1. height of each packed column
2. gas flow rate and gas injection point
3. injection point of the mother liquid

The height of each packed column is an important variable for economic design of the reactor. To avoid over-spec or under-spec design, the height of the packed column should be determined. The gas flow rate and gas injection point must also be determined to optimize the reactor system, as must the injection point of the mother liquid for stable and economic operation. Therefore, these three variables were varied to optimize the bench-scale reactor system. Even though this reactor system is a bench-scale unit, the experimental results could be influenced by various disturbances or uncontrollable conditions.
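Eq. (9) is easy to sanity-check numerically. The sketch below (function name and values are mine, for illustration, not measured data) also shows why the rescaling is needed: absorbing CO2 shrinks the total gas stream, so the raw ratio $C_{out}/C_0$ would understate the absorption.

```python
def absorption_rate(c0, c_out):
    """CO2 absorption rate in % per Eq. (9); c0, c_out are CO2 volume percentages."""
    return 100.0 * (1.0 - c_out * (100.0 - c0) / (c0 * (100.0 - c_out)))

assert absorption_rate(14.0, 14.0) == 0.0     # nothing absorbed
assert absorption_rate(14.0, 0.0) == 100.0    # everything absorbed
# 14 vol% in, 1 vol% out: ~93.8% absorbed, versus the naive 1 - 1/14 = 92.9%
print(round(absorption_rate(14.0, 1.0), 1))
```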
The ambient temperature could influence the reaction rate, and moisture remaining in the gas after the demoisturizer could affect the measurement of CO2 concentration. To reduce the influence of these disturbances or uncontrollable conditions, all experiments were conducted for about 2 h, and the results were used as averaged values. The results during the initial 30 min, when the reactor system was not yet in the steady state, were excluded.

### 4. Results and Discussion

The default operating results were obtained before testing was carried out to determine the optimal operation parameters. In the default operation, the gas flow rate was 5 L/min, and the mixed gas was injected only through reactor 1. The mother liquid was not returned to the reactor system. The heights of the packing towers were 700 mm, 500 mm and 500 mm, respectively. The liquid level of the bubble column part at the bottom of each reactor was 15 cm.

Fig. 3 shows the default operating results. The respective average pH levels of the three reactors were 9.0, 10.2, and 12.5. The total CO2 absorption ratio ($\gamma_3$) was about 99%. Generally, in post-combustion carbon capture, a 90% capture rate is the required absorption rate. The CO2 absorption rate ($\gamma_3$) of the default operation was therefore more than sufficient. We could also confirm that operation was very stable after the initial 20 min. The purity of the sodium bicarbonate powder produced in this operation was measured by XRD quantitative analysis, and it was 97.3%. The purity of a commercial sodium bicarbonate chemical (Sigma-Aldrich) was also measured for comparison, and it was 97.8%. The purity of the product was comparable to that of the commercial chemical for laboratory experiments.

### 4.1. Height of Packed Column in Each Reactor

The height of the packed column was varied to find the optimal column height. The heights of the column modules we used in this study were 300 mm, 400 mm and 500 mm; the column height could be varied in the combinations described in Table 2.
The pH of reactor 1 was critical to the high purity of the product. In evaluating the performance of reactors 1 and 2, the pH of reactor 1 could be an important indicator. Fig. 4 shows the pH of reactor 1 in relation to the height of the column in reactor 1 or reactor 2. As shown in Fig. 1, the pH of reactor 1 should be 8–9. If the height of the column in reactor 1 was shorter than 700 mm, the pH of reactor 1 was higher than 9. However, the pH of reactor 1 did not go below 9 even when the height of the column in reactor 1 was greater than 700 mm. Therefore, it can be concluded that the proper height for the column in reactor 1 is 700 mm and, for the same reason, the proper height for the column in reactor 2 is 400 mm. The height of the column in reactor 3 had almost no effect on the pH of reactor 1.

The primary role of reactor 3 was absorbing CO2 under a high pH condition. More than 60% of the CO2 was absorbed in reactor 3. Therefore, the total absorption rate ($\gamma_3$) could be a good indicator for evaluating the performance of reactor 3. Fig. 5 shows the total absorption rate according to the height of the column in reactor 3. Even though the shortest column (300 mm) was used, the absorption rate was high enough. The total absorption rate of CO2 was more than 97% regardless of the height of the column in reactor 3. The reaction rate of reaction 1 (the carbonation reaction, Eq. (5)) was fast, and the bubble column part at the bottom of the reactor could absorb a large amount of CO2.

### 4.2. Gas Flow Rate and Gas Injection Point

The gas flow rate was increased to determine the capacity of the reactor system. The gas injection point in this experiment was the bottom of reactor 1, as shown in Fig. 2. The gas flow rate was increased from 5 L/min to 10 L/min, in other words, from 2 kgCO2/d to 4 kgCO2/d. When the flow rate was increased above 4 kgCO2/d, the reactor could not withstand the pressure and the experiment could not be conducted due to leakage.
The results are shown in Fig. 6. When the gas flow rate was increased, the total absorption rate ($\gamma_3$) decreased, and the operation of the reactor system was not stable. In particular, in the case of 10 L/min (4 kgCO2/d), stabilization of the reactor system took longer than in the other cases. In this case, the pH of reactor 3 needed to be maintained above 13 to maintain the absorption rate. Because this high-pH solution was transported to reactor 2, the pH of reactor 2 was also high, and stabilization took a long time.

By injecting additional gas into reactor 2, we attempted to make more use of the absorption capacity of reactor 2. A schematic diagram is shown in Fig. 7. We compared the results with the case of one injection point. As shown in Fig. 8, if the total gas flow rate was not changed (5 L/min) and 50% of the gas was injected into reactor 2, $\gamma_3$ was 98.7%, the same absorption rate as in the case in which all of the gas was injected into reactor 1. However, if the gas flow rate into reactor 1 was 5 L/min and additional gas was injected into reactor 2, $\gamma_3$ was lower than in the case in which all of the gas was injected into reactor 1. The gas injected into reactor 2 did not pass through reactor 1; therefore, less CO2 was absorbed in comparison with the case in which all of the gas was injected into reactor 1. Additional absorption of CO2 in reactor 2 was expected but could not be observed.

### 4.3. Return of the Mother Liquid

In the precipitation tank, sodium bicarbonate was separated from the mother liquid because the solubility of sodium bicarbonate is much lower than that of sodium carbonate. The mother liquid separated from the precipitate is a solution of sodium carbonate and sodium bicarbonate. For a commercial plant, because the yield is important from an economic perspective, recovery of the mother liquid is essential. The operation state of the reactor system was examined when the mother liquid was injected.
We also examined which reactor would be the proper injection point for the mother liquid. The pH of the mother liquid was around 9, equal to the pH of reactor 1. The injection flow rate of the mother liquid was 30 mL/min. Mother liquid injection into reactor 3 was excluded because the mother liquid could reduce the pH of reactor 3 and thereby reduce the absorption rate. The operation results for return of the mother liquid are shown in Fig. 10.

If the mother liquid was returned to reactor 1, the pH in reactor 2 and $\gamma_2$ did not change, as shown in Fig. 10(a). The pH of reactor 1 also was not affected because the mother liquid had almost the same pH as reactor 1. The $\gamma_3$ value was also less affected than in the case of the mother liquid returning to reactor 2. The operation status was very stable, and the average $\gamma_3$ was barely affected during the entire operation time. The average $\gamma_3$ was 97.2%. The purity of the product in this case was 97.1%. The yield of the reaction in this system was about 80%.

If the mother liquid was returned to reactor 2 (Fig. 10(b)), the pH in reactor 2 and the absorption rate in reactor 2 ($\gamma_2$) decreased. However, the operation status was stable, and the average $\gamma_3$ was not reduced during the entire period of operation. The average $\gamma_3$ was 95.4%. The purity of the product (i.e., sodium bicarbonate) in this case was 95.0%. Comparing the two cases, we can conclude that returning the mother liquid to reactor 1 is better than returning it to reactor 2 in terms of both product purity and CO2 absorption rate.

### 5. Conclusions

In this study, a carbon capture and utilization method using sodium hydroxide was investigated experimentally. For this experiment, a unique series reactor system was designed. The height of each reactor, the gas flow rate and injection point, and the return of the mother liquid were investigated. The results can be summarized as follows.

1.
For our reactor system, the proper heights of the columns in reactors 1, 2 and 3 are 700 mm, 400 mm, and 300 mm, respectively.
2. The gas flow rate can be increased to 4 kgCO2/d (twice the designed capacity), and gas injection into reactor 1 is better than gas injection into reactor 2.
3. Through return of the mother liquid, the reaction yield was confirmed to be about 80%, and the best injection point of the mother liquid was determined to be reactor 1.

Even though these results were obtained on a bench-scale reactor system, they contain useful information for the construction and operation of a commercial-scale plant. Through this study, this process was confirmed to be a feasible option for the reduction of greenhouse gas.

### Acknowledgments

This research was supported by Korea East West Power Co. (EWP).

### References

1. Climate Change 2014: Impacts, Adaptation, and Vulnerability. IPCC; 2014.
2. Mohammad V. A comprehensive study on irrigation management in Asia and Oceania. Arch Agron Soil Sci. 2015;61:1247–1271.
3. Mohammad V. Future of agricultural water management in Africa. Arch Agron Soil Sci. 2015;61:907–927.
4. Mohammad V, Mohammad GS, Eslamian S. Surface irrigation simulation models: a review. Int J Hydrol Sci Technol. 2015;5:51–70.
5. Maryam MK, Mohammad GS, Mohammad V. Simulation of open- and closed-end border irrigation systems using SIRMOD. Arch Agron Soil Sci. 2015;61:929–941.
6. Technology roadmap: Carbon capture and storage. IEAGHG; 2013.
7. Wee JH, Kim J, Song I, Song B, Choi K. Reduction of carbon-dioxide emission applying carbon capture and storage (CCS) technology to power generation and industry sectors in Korea. J KSEE. 2008;30:961–972.
8. Rochelle GT. Amine scrubbing for CO2 capture. Science. 2009;325:1652–1654.
9. Cousins A, Wardhaugh LT, Feron PHM. A survey of process flow sheet modifications for energy efficient CO2 capture from flue gases using chemical absorption. Int J Greenh Gas Con. 2011;5:605–619.
10.
Lee JH, Kwak NS, Lee IY, et al. Performance and economic analysis of commercial-scale coal-fired power plant with post-combustion CO2 capture. Korean J Chem Eng. 2015;32:800–807.
11. Rao AB, Rubin ES. Identifying cost-effective CO2 control levels for amine-based CO2 capture systems. Ind Eng Chem Res. 2006;45:2421–2429.
12. Wang M, Lawal A, Stephenson P, Sidders J, Ramshaw C. Post-combustion CO2 capture with chemical absorption: a state-of-the-art review. Chem Eng Res Des. 2011;89:1609–1624.
13. Markewitz P, Kuckshinrichs W, Leitner W, et al. Worldwide innovations in the development of carbon capture technologies and the utilization of CO2. Energ Environ Sci. 2012;5:7281–7305.
14. Park SE, Chang JS, Lee KW. Carbon dioxide utilization for global sustainability. Proceedings of the 7th international conference on carbon dioxide utilization. Elsevier; 2014.
15. Geerlings H, Zevenhoven R. CO2 mineralization - bridge between storage and utilization of CO2. Annu Rev Chem Biomol Eng. 2013;4:103–117.
16. Tepe JB, Dodge BF. Absorption of carbon dioxide by sodium hydroxide solutions in a packed column. Trans Am Inst Chem Eng. 1943;39:255.
17. Spector NA, Dodge BF. Removal of carbon dioxide from atmospheric air. Trans Am Inst Chem Eng. 1946;42:827.
18. Keith DW, Ha-Duong M, Stolaroff JK. Climate strategy with CO2 capture from the air. Climatic Change. 2006;74:17–45.
19. Stolaroff JK, Keith DW, Lowry GV. Carbon dioxide capture from atmospheric air using sodium hydroxide spray. Envir Sci Technol. 2008;42:2728–2735.
20. Yoo M, Han SJ, Wee JH. Carbon dioxide capture capacity of sodium hydroxide aqueous solution. J Environ Manage. 2013;114:512.
21. Jones JD, Al Yablonsky. Carbon dioxide sequestration methods using group 2 silicates and chlor-alkali processes. US Patent Application 14/004,095.
22. Lee JH, Lee DW, Jang SG, et al. Economic evaluations for the carbon dioxide-involved production of high-value chemicals. Korean Chem Eng Res (HWAHAK KONGHAK). 2014;52:347–354.
23.
Wolf-Gladrow DA, Zeebe RE, Klaas C, Kortzinger A, Dickson AG. Total alkalinity: the explicit conservative expression and its application to biogeochemical processes. Mar Chem. 2007;106:287–300.
24. Andersen CB. Understanding carbonate equilibria by measuring alkalinity in experimental and natural systems. J Geosci Educ. 2002;50:389–403.
25. Ebbing DD. General chemistry. Boston: Houghton Mifflin Company; 1990.
26. Hatta S. Technological reports of Tohoku University. Tohoku: Tohoku University; 1932. 10:p. 613–622.
27. Hecht K. Microreactors for gas/liquid reactions: the role of surface properties. Karlsruhe: Universität Karlsruhe; 2014.
28. Kobe K, Sheehy T. Thermochemistry of sodium carbonate and its solutions. Ind Eng Chem. 1948;40:99–102.

##### Fig. 1
pH vs. the mole fraction of carbonate species (Bjerrum plot).

##### Fig. 2
Schematic diagram of the reactor system. Solid line represents line for liquid transfer and dashed line represents line for gas transfer.

##### Fig. 3
Experimental results obtained with the base operation conditions.

##### Fig. 4
Height of packed column in reactor 1 or reactor 2 vs. pH of reactor 1.

##### Fig. 5
Height of packed column in reactor 3 vs. total carbon absorption rate.

##### Fig. 6
Experimental results of double gas flow rate (10 L/min) condition. The gas was injected only into reactor 1.

##### Fig. 7
Schematic diagram of additional gas injection into reactor 2.

##### Fig. 8
Total gas flow rate vs. absorption rate. If additional gas was injected into reactor 2, not into reactor 1, the absorption rate decreased.

##### Fig. 9
Schematic diagram of the reactor system for return of the mother liquid. The dotted line represents the line for transfer of the mother liquid. Case 1 is return of the mother liquid into reactor 1, and case 2 is return of the mother liquid into reactor 2.

##### Fig. 10
Experimental results for mother liquid return: (a) mother liquid return to reactor 1 and (b) mother liquid return to reactor 2.
The gray shaded area represents the time of mother liquid return.

##### Table 1
Thermodynamic Parameters of Each Reaction at Room Temperature [25]

|            | ΔH (kJ/mol) | ΔS (kJ/(mol·K)) | ΔG (kJ/mol) = ΔH − TΔS |
|------------|-------------|-----------------|------------------------|
| Reaction 1 | −169.8      | −0.137          | −128.97                |
| Reaction 2 | −129.1      | −0.334          | −29.56                 |

##### Table 2
Height of Column in Each Reactor

|        | Reactor 1 | Reactor 2 | Reactor 3 |
|--------|-----------|-----------|-----------|
| Case 1 | 500 mm    | 400 mm    | 400 mm    |
| Case 2 | 1000 mm   | 400 mm    | 400 mm    |
| Case 3 | 700 mm    | 300 mm    | 300 mm    |
| Case 4 | 700 mm    | 400 mm    | 300 mm    |
| Case 5 | 700 mm    | 500 mm    | 300 mm    |
| Case 6 | 500 mm    | 400 mm    | 700 mm    |
| Case 7 | 700 mm    | 400 mm    | 500 mm    |
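As a quick arithmetic check on Table 1, the Gibbs energy column follows from $\Delta G = \Delta H - T\Delta S$; a short Python sketch, taking "room temperature" as roughly 298 K (the exact temperature value is my assumption, not stated in the table):

```python
# Check Table 1: dG = dH - T*dS, assuming room temperature T = 298 K.
T = 298.0  # K (assumed; the source table only says "room temperature")

reactions = {
    "Reaction 1": (-169.8, -0.137),  # (dH in kJ/mol, dS in kJ/(mol*K))
    "Reaction 2": (-129.1, -0.334),
}

for name, (dH, dS) in reactions.items():
    dG = dH - T * dS  # kJ/mol
    print(f"{name}: dG = {dG:.2f} kJ/mol")
```

This reproduces about −128.97 and −29.57 kJ/mol, matching the tabulated Gibbs energies to within rounding.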
https://mathstodon.xyz/@oil/100751988878998595
Watch "How Did You React To The Code Of Conduct News? Proud Of Your Actions?" on YouTube - youtu.be/krG4O9GHUHU A Mastodon instance for maths people. The kind of people who make $\pi z^2 \times a$ jokes. Use \( and \) for inline LaTeX, and \[ and \] for display mode.
https://www.research.ed.ac.uk/en/publications/improved-measurement-of-bto%CF%81%CF%810-and-determination-of-the-quark-mix
# Improved Measurement of $B^+\to\rho^+\rho^0$ and Determination of the Quark-Mixing Phase Angle $\alpha$

The BaBar Collaboration, Philip Clark

Research output: Contribution to journal › Article › peer-review

## Abstract

We present improved measurements of the branching fraction ${\cal B}$, the longitudinal polarization fraction $f_L$, and the direct $CP$ asymmetry ${\cal A}_{CP}$ in the $B$ meson decay channel $B^+\to\rho^+\rho^0$. The data sample was collected with the BaBar detector at SLAC. The results are ${\cal B}(B^+\to\rho^+\rho^0)=(23.7\pm1.4\pm1.4)\times10^{-6}$, $f_L=0.950\pm0.015\pm0.006$, and ${\cal A}_{CP}=-0.054\pm0.055\pm0.010$, where the uncertainties are statistical and systematic, respectively. Based on these results, we perform an isospin analysis and determine the CKM weak phase angle $\alpha$ to be $(92.4^{+6.0}_{-6.5})^{\circ}$.

Original language: English
Journal: Physical Review Letters
DOI: https://doi.org/10.1103/PhysRevLett.102.141802
Published: 22 Jan 2009

• hep-ex
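As a side illustration (not part of the paper itself): when a single error bar is wanted, statistical and systematic uncertainties such as those quoted for ${\cal B}$ are commonly combined in quadrature, assuming they are uncorrelated. A minimal Python sketch for the branching fraction:

```python
import math

# Branching fraction from the abstract: (23.7 ± 1.4(stat) ± 1.4(syst)) × 10^-6
value, stat, syst = 23.7, 1.4, 1.4  # all in units of 10^-6

# Quadrature sum, assuming the two uncertainties are uncorrelated
total = math.hypot(stat, syst)
print(f"B = ({value} ± {total:.1f}) × 10^-6")        # total ≈ 2.0
print(f"relative precision ≈ {total / value:.1%}")   # ≈ 8.4%
```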
https://piping-designer.com/index.php/disciplines/chemical/corrosion
## Corrosion Engineering

Corrosion, abbreviated as CRSN, is the thinning of a pipe wall that is typically caused by a chemical reaction from a corroding fluid or agent and is limited almost exclusively to metal products.  Examples of non-metallic corrosion include the dissolution of ceramic materials or the discoloration and weakening of polymers by the sun's ultraviolet light. The corrosion resistance of a pipe or a metal is the ability of the material to resist the corrosive effects of its environment.  Internal corrosion is caused by the reaction of the fluid in a pipe with the pipe proper.  External corrosive forces would be the reaction of the pipe with the soil or air.  By isolating the flow line from its corrosive environment, for example by lining or wrapping the pipe, the corrosive effects can be mitigated.

Corrosion is a very common chemical reaction, typically in the form of oxidation.  The loss of material mass reflects a natural tendency to revert to the natural state when an exposed surface comes in contact with a gas or liquid.  Most materials are susceptible to corrosion.

### Science Branches

Science › Applied Science › Engineering › Chemical Engineering › Corrosion Engineering

## Oil Field Corrosion

Corrosion in the oil field is caused by carbon dioxide, hydrogen sulfide or organic acids dissolved in the produced fluids.

## Variables Affecting Corrosion

### Temperature

Like many chemical reactions, the rate of corrosion generally increases when the temperature increases.  As a rule of thumb, the reaction rate doubles for every ten degree Celsius increase in temperature.

### Pressure

The main concern with pressure and its effect on corrosion is that under lower pressures, dissolved corrosive gases can break out of solution, which may increase the corrosivity.

### Velocity

Velocity's usual contribution is that under very low flows, localized corrosion (pitting) is likely.
With casing gases in CVR systems, corrosion will likely present on the bottom of the pipe as condensate precipitates out of the casing gas as it cools; the condensate moves slowly relative to the gas in the line. With very high velocities, the corrosion can present as corrosion-erosion.

## Forms of Corrosion

• General Corrosion
• Galvanic Corrosion  -  Caused by dissimilar metals
• Pitting  -  Localized on the metal surface
• Crevice  -  A portion of the surface is isolated from the environment
• Intergranular  -  Corrosion at the grain boundaries
• Selective Leaching  -  Only one metal in an alloy is attacked
• Stress Corrosion Cracking
• Corrosion Erosion  -  Velocity-aggravated corrosion
• Corrosion Fatigue  -  Corrosion under cyclic loads
• Cavitation  -  Corrosion from gas bubble formation and immediate collapse

## Corrosion Engineering Glossary

### A

• Abradable Coating  -  Gives wear resistance against highly abrasive material rubbed against it, while leaving the underlying material damage free.
• Acid  -  A chemical substance that yields hydrogen ions ($$H^+$$) when dissolved in water.
• Acid Embrittlement  -  A form of hydrogen embrittlement that may be introduced in some metals by acid.
• Acid Rain  -  Atmospheric precipitation with a pH below 3.6 to 5.7.
• Acrylic  -  Resin polymerized from acrylic acid, methacrylic acid, esters of these acids, or acrylonitrile.
• Annealing  -  A heating and controlled cooling operation to impart specific desirable properties, generally concerned with subsequent fabrication of the alloy.
• Anion  -  A negatively charged ion.
• Anode  -  The electrode from which electrons flow away (negative charge) and at which corrosion or oxidation of the material occurs.
• Atmospheric Corrosion  -  A gradual degradation or alteration of a material by contact with substances present in the atmosphere, such as oxygen, carbon dioxide, water vapor, sulfur and chlorine compounds.

### B

• Barrier Coating  -  A protective layer of material that prevents the contact of corrosive elements.
• Base  -  A substance that releases hydroxyl ions when dissolved in water.
• Beach Marks  -  Macroscopic progression marks on a fatigue fracture or stress-corrosion cracking surface that indicate successive positions of the advancing crack front.
• Black Oxide  -  A black finish on a metal produced by immersing it in hot oxidizing salts or salt solutions.
• Brackish Water  -  Water having salinity values ranging from approximately 500 to 5,000 parts per million.
• Brine  -  Seawater containing a higher concentration of dissolved salt than that of ordinary ocean water.
• Brittle Fracture  -  Separation of a solid accompanied by little or no macroscopic plastic deformation.
• Buffer  -  A chemical substance which stabilizes pH values in solutions.
• Buffer Capacity  -  A measure of the capacity of a solution or liquid to neutralize acids or bases.

### C

• Camera Pig  -  A configuration pig with a camera and light source for recording the inside of the pipeline.
• Cathode  -  The electrode toward which electrons flow, reducing the corrosion or oxidation of the material.
• Cathodic Polarisation  -  The electrochemical changing of an electrode's potential, moving it in a non-corroding negative direction.
• Cleaning Pig  -  A utility pig with brushes, cups, and scrapers for cleaning foreign matter from the inside of the pipeline.
• Corrosion  -  The thinning of a pipe wall that is typically caused by a chemical reaction from a corroding fluid or agent and is limited almost exclusively to metal products.
• Corrosion Allowance  -  The amount of material in a pipe or vessel that is available for corrosion without affecting the pressure-containing integrity.
• Corrosion Coupon  -  Used to monitor the corrosion rate of a material in a process.
• Corrosion Embrittlement  -  Embrittlement in certain alloys caused by exposure to a corrosive environment.
• Corrosion Fatigue  -  Combined action of corrosion and fatigue in which locally corroded areas act as stress concentrators, causing failure at the point of stress concentration and exposing new metal surfaces to corrosion.
• Corrosion Inhibitor  -  A substance that slows down the chemical reaction rate of corrosion on metal that is exposed to the environment.
• Corrosion Mapping  -  An ultrasonic method that identifies and maps corroded areas in a pipeline by the varying material thickness.
• Corrosion Resistance  -  The ability of a material to resist chemical destruction from an environment.
• Crack  -  Cracks can come from fatigue, girth welds, or seam welds.
• Chalking  -  Surface loss of color and gloss in a coating from degradation of the binder by the UV components in sunlight.  Seen as a white deposit on the cured coating surface.
• Current  -  The rate of flow of electricity in a circuit, measured in amperes.

### D

• Deactivation  -  The prior removal of the active corrosive constituents, usually oxygen, from a corrosive liquid by controlled corrosion of expendable metal or by other chemical means.
• Defective Weld  -  A weld having one or more defects.
• Diffusion  -  The spread of gases, liquids, or solids from areas of high concentration to areas of low concentration.
• Diluent  -  A liquid used in coatings to reduce the consistency and make the coating flow more easily.
• Direct Current  -  An electric current that flows in only one direction.
• Dry Corrosion  -  See gaseous corrosion or hot corrosion.

### E

• Elastic Modulus  -  The ratio of the stress applied to a body or substance to the resulting strain within the elastic limits.
• Electrochemical Corrosion  -  Localized corrosion that results from exposure of an assembly of dissimilar metals in contact with or coupled with one another.
• Electrode  -  Referred to as the anode or cathode, whichever is appropriate.
• Electrode Potential  -  The potential of an electrode in an electrolyte as measured against a reference electrode.
• Electrolyte  -  A chemical substance or mixture, liquid or solid (normally liquid), which conducts electric current.
• External Corrosion  -  When the outside of a pipe is decayed or eroded by chemical or electrochemical processes or any other environmental conditions.

### F

• Faraday's Law of Induction  -  States that whenever a conductor is placed in a varying magnetic field, an electromotive force is induced.
• Flux  -  Chemicals used to protect metals from oxidation.
• Free Corrosion Potential  -  Corrosion potential in the absence of net electric current flowing to or from the metal surface.
• Fretting Corrosion  -  Takes place where there is friction between two metal surfaces.

### G

• Galvanic Corrosion  -  Corrosive action occurring when two dissimilar metals are in contact and are joined by a solution capable of conducting an electric current, a condition which causes a flow of electric current and corrosion of the more anodic of the two metals.
• Gaseous Corrosion (Dry Corrosion or Hot Corrosion)  -  Corrosion with gas as the only corrosive agent and without any aqueous phase on the surface of the metal.
• Gouging  -  Mechanical removal of metal from the surface of the pipe.

### H

• Hardness  -  The property of a material that enables it to resist plastic deformation, usually by penetration.
• Heat Transfer  -  The movement of thermal energy from one body or system to another as a result of a temperature difference.
• Holiday  -  A discontinuity in painted or coated surfaces.
• Hot Corrosion  -  See dry corrosion or gaseous corrosion.

### I

• Incomplete Fusion  -  A weld discontinuity where complete fusion did not occur between the weld metal and the faces or adjoining weld metal.
• Incomplete Weld  -  A defect in the solder joint that causes cracks or damage to the bond.
• In-line Inspection  -  When the pipeline is inspected by examining the interior of the pipe.
• Inhibitor  -  Can reduce the corrosion rate by providing a protective film.
• Instrumented Pig  -  A tool with instruments such as recorders and sensors to examine the inside of the pipe.
• Intergranular Corrosion  -  Usually of stainless steels or certain nickel-base alloys; occurs as the result of sensitization in the heat-affected zone during the welding process.
• Internal Corrosion  -  The thinning of the interior pipe wall that is typically caused by a chemical reaction from a corroding fluid or agent and is limited almost exclusively to metal products.
• Ion  -  An atom or molecular particle having a net charge.  Positively charged ions are cations and negatively charged ions are anions.
• Isolation Gasket  -  Used to stop the current flow across metallic pipelines by separating two flanges.

### L

• Lacquer  -  A fast drying, usually clear coating that is highly flammable and dries by solvent evaporation only.
• Lamellar Corrosion  -  A form of corrosion in which the expanding corrosion products stack up as layers.
• Lenz's Law  -  States that the direction of the current induced in a conductor by a changing magnetic field opposes the change that produced it.

### M

• Magnetic Flux  -  The number of magnetic field lines passing through a given closed surface.
• Mapping Pig  -  A configuration pig used to produce an elevation and plan view of the pipeline route, with collected data that can be analysed from inertial sensing or some other technology.
• Material Hardness  -  The property of a material that enables it to resist plastic deformation, usually by penetration.
• Mechanical Properties  -  Those properties that reveal the reaction, elastic or plastic, of a material to an applied stress, or that involve the relationship between stress and strain.

### N

• Natural Gas  -  Gaseous fuel occurring in nature.
• Neutralizer  -  A common designation for alkaline materials such as calcite or magnesia used in the neutralization of acid waters.

### O

• Ohm  -  A unit of resistance.
• Ohm's Law  -  The relationship between power, voltage, current, and resistance.
• Oxidation  -  The loss of electrons in a chemical reaction in which an element combines with oxygen.  Oxidation and reduction always occur at the same time in equal amounts.

### P

• pH  -  Affects the corrosion rate by affecting the reaction rate at cathodes and anodes.
• Physical Properties  -  Those properties familiarly discussed in physics, for example density, electrical conductivity, and thermal expansion coefficient, exclusive of those described under mechanical properties.
• Pitting  -  A non-uniform corrosion of a metal, not in the form of cracks, whereby a number of cavities are formed in the surface.
• Porosity  -  Happens when a contaminant or gas is absorbed into the weld puddle.

### R

• Rupture  -  There are numerous reasons a rupture can happen, depending on the material: age, brittleness, corrosion, internal pressure, movement, etc.
• Rust  -  A corrosion product consisting primarily of hydrated iron oxide.

### S

• Sacrificial Coating  -  A coating that provides corrosion protection wherein the coating material corrodes in preference to the substrate.
• Salt Fog Test  -  See salt spray test.
• Salt Spray Test  -  An accelerated corrosion test in which the metal specimens are exposed to a fine mist of salt water solution.
• Saline Water  -  Water containing an excessive amount of dissolved salts.
• Silicone  -  A resin used in the binders of coatings.
• Shear Stress  -  Tends to deform the material by breaking rather than stretching, without changing the volume, by restraining the object.
• Skinning  -  The formation of a thin, tough film on the surface of a liquid paint.
• Shrinkage  -  A decrease in dimensions of a coating during processing.
• Shrinkage Stress  -  The residual stress in a coating caused by shrinkage during processing.
• Smart Pig  -  Collects information internally about the pipeline with electronic components.
• Solute  -  A substance which is dissolved in and by a solvent.
• Specific Gravity  -  The ratio of the density of any substance to that of another substance.
• Strain  -  The deformation, stretched or compressed, of a material compared to its original length.
• Strain Rate  -  The time rate of straining for the usual tensile test.
• Stray Current  -  The flow of electric current into the ground from the leakage of industrial currents.
• Stress  -  The force per unit area of cross-section.
• Stress Corrosion Cracking  -  The combined effect of tensile stress and a corrosive environment.
• Sweet Corrosion: Carbon Dioxide  -  A weakly acidic gas found in condensate, crude oil, natural gas, and produced water; it becomes corrosive when dissolved in water.

### T

• Tensile Strength  -  The maximum stress a material can withstand while being stretched before failing.
• Tension  -  The force (pulling or stretching) acting on a material.
• Toughness  -  The ability of a material to absorb considerable energy without fracturing.

### U

• Ultrasonic Testing  -  Used to measure the pipe wall thickness perpendicular to the pipe.
• Utility Pig  -  Used to perform pipeline cleaning of debris and unwanted materials.

### V

• Volt  -  A unit of electrical pressure.
• Voltage  -  One volt is the amount of pressure that will cause one ampere of current through one ohm of resistance.
• Voltage Drop  -  When the voltage at the end of the cable is less than at the beginning of the cable.
• Voltage Rating  -  The maximum voltage at which a cable or insulated conductor can be safely maintained during continuous use in a normal manner.

### W

• Water Conductivity  -  The ability of water to conduct an electric current.
• Waterlogged  -  Saturated with water.
• Water Table  -  The underground boundary between the surface of the soil and the area where groundwater fills the cracks and openings in the rocks and sand.
• Weld Crack  -  Cracks can appear on the surface, inside the weld, or in the heat-affected zone.
• Weld Decay  -  See intergranular corrosion.
• Welding Defects  -  Blow hole, defect of joint shape, incomplete fusion, overlap, slag inclusion, undercut, weld crack.
• Well Integrity  -  The application of technical, operational, and organizational solutions to reduce the risk of uncontrolled release of formation fluids throughout the life cycle of a well.

### Y

• Yield Strength  -  Yield strength, abbreviated as $$\sigma$$ (Greek symbol sigma), also called yield stress, is the minimum stress that leads to permanent deformation of the material.
https://learnche.org/3P4/Assignment_2_-_2014
# Assignment 2 - 2014 Due date(s): 27 January 2014, in class (PDF) Assignment questions (PDF) Assignment solutions Assignment objectives: gaining an excellent understanding of dynamic systems; using computer models and Laplace transforms to infer and describe the model behaviour over time. Question 1 [10] For each transfer function below, what can you say about the time-domain function, $$H(t)$$, where $$0 \leq t \leq \infty$$? Do not solve for $$H(t)$$ analytically. Your answers should include the initial value, final value, and a description of the stability, smoothness, and oscillatory behaviour. 1. $$H(s) = \dfrac{6(s+2)}{(s^2 + 9s + 20)(s+4)}$$ 2. $$H(s) = \dfrac{10s^2 -3 }{(s^2 - 6s + 10)(s+2)}$$ 3. $$H(s) = \dfrac{16s+5}{s^2+9}$$ Question 2 [4] Find the inverse transform of: $Y(s) = \dfrac{s+2}{s(s+1)^2}$ Question 3 [20] We want to determine the dynamic response of the liquid level in the open tank shown in the figure. The flow entering is determined from an upstream process (we cannot control that flow rate). The flow out is determined by the valve percent opening, with a constant speed, centrifugal pump. (The level in the tank in this system does not significantly affect the flow out because of the high pressure supplied by the pump.) Some data and details are given in the following: • Initial flow in = $$2.0\,\text{m}^3\text{.min}^{-1}$$ • Initial flow out = $$2.0\,\text{m}^3\text{.min}^{-1}$$ • Total tank volume (when full) = $$10.0\,\text{m}^3$$ • Initial tank level = 50% full • Tank cross-sectional area = $$3.0\,\text{m}^2$$ • Assume that the tank is initially at steady state with the flows in and out equal. A step change is introduced in the inlet flow; it changes from $$2.0\,\text{m}^3\text{.min}^{-1}$$ to $$1.0\,\text{m}^3\text{.min}^{-1}$$. No change is made to the valve opening. 1. Without using a computer or analytically derived expression, draw the plot of how you expect the liquid level to change once the step change is made. 
[There are no grades for this part of the question, so don't modify your graph later on.] 2. Determine the actual dynamic response of the liquid level using a computer simulation. 3. Based on your graph and numerical results, what do you conclude about whether the level should be controlled using the feedback principle by adjusting the valve in the output after the pump? 4. The liquid is water at 25°C, and we want to measure the level within ± 1 percent of its true value. What level sensor would you recommend? Give a brief explanation with a citation for information used in your answer. [Hint: use information available at http://pc-education.mcmaster.ca/ and follow links to the Instrumentation section.] 5. The flow rate out of the tank must be measured with an accuracy of ± 1 percent from about 0.25 to $$2.0\,\text{m}^3\text{.min}^{-1}$$. What flow sensor would you recommend? Give a brief explanation with a citation for information used in your answer. Question 4 [12] A heat exchanger system has the differential equation below, which shows how the temperature changes with a change in flow rate, $$q(t)$$. $2 \dfrac{dT}{dt} = - T + 5 q$ The temperature $$T$$ and flow $$q$$ are already in deviation form. 1. $$q(t)$$ is changed from 0 to 2.0 at time $$t=0$$. Sketch the response up to the point where the temperature reaches its new steady state. 2. What is this steady state? 3. How long does it take to reach within 0.1 degree of the new steady state? 4. If instead $$q(t)$$ is changed from 0 to 4.0 at time $$t=0$$, sketch the response again, and report the new steady-state value of the temperature. 5. Superimpose the plots from part 1 and part 4 of this question and explain why the curves have the same shape. Question 5 [15] In this question, you will reinforce the material learned in Chemical Engineering 3E04. The system that you will consider is given in Figure 4. It is a CSTR where the reaction $$\text{A} \longrightarrow \text{B}$$ is occurring.
You do not have to derive the component material balance, i.e., the differential equation in the figure. 1. First, you must determine the initial value for the reactor concentration (which is the same as the concentration leaving the reactor), $$C_A$$. The system is initially at steady state, with $$C_{A,0} = 0.5\,\text{mol.m}^{-3}$$. You may solve this part analytically or numerically. Hint: you will solve a quadratic equation. 2. Starting from this steady state, determine the dynamic response of the concentration $$C_A$$ in the reactor if a step increase of $$C_{A,0}$$ is made from $$C_{A,0} = 0.5\,\text{mol.m}^{-3}$$ to $$C_{A,0} = 1.0\,\text{mol.m}^{-3}$$. For example, the above occurs when your manager has asked you to increase the concentration of species B that you are producing. 3. How long does it take for $$C_{A}(t)$$ to reach a new steady state? 4. You want to reach the new steady state faster. Explain how you can decrease the time taken. Perform a new simulation with the change(s) implemented, and superimpose the original time-domain plot from part 2 to show how much faster your response is. Work ahead: in the next tutorial and assignment you will create deviation variables $$C_A'$$ for outlet concentration, and $$C_{A0}'$$ for inlet concentration. Create a linearized model, invert it with the Laplace transform, and compare the linearized (approximate) model to the actual model.
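As an illustration of the kind of simulation Question 3 part 2 asks for: the tank is a pure integrator, $$A\,dh/dt = F_{in} - F_{out}$$, with the outlet flow fixed by the pump and valve. A minimal Python sketch (the variable names, Euler scheme, and step size are my own choices, not prescribed by the assignment):

```python
# Tank level response to the step in inlet flow (Question 3, part 2).
A = 3.0        # tank cross-sectional area, m^2
F_in = 1.0     # inlet flow after the step, m^3/min
F_out = 2.0    # outlet flow (valve opening unchanged), m^3/min
h = 5.0 / A    # initial level, m: 50% full of 10 m^3 -> 5 m^3 of liquid

dt = 0.01      # Euler time step, min
t = 0.0
t_empty = None
while t < 6.0:
    # Mass balance on the tank: A * dh/dt = F_in - F_out.
    # The level cannot go below empty, so clamp at zero.
    h = max(h + dt * (F_in - F_out) / A, 0.0)
    t += dt
    if abs(t - 3.0) < dt / 2:
        print(f"level after 3 min: {h:.3f} m")
    if t_empty is None and h == 0.0:
        t_empty = t
print(f"tank runs empty at about t = {t_empty:.2f} min")
```

Because the outlet stays at 2.0 m³/min while the inlet drops to 1.0 m³/min, the level falls linearly and the tank runs dry in about 5 minutes; that behaviour is what the feedback-control discussion in part 3 should address.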
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition/chapter-r-review-of-basic-concepts-r-4-factoring-polynomials-r-4-exercises-page-43/12
## Precalculus (6th Edition) $3(5r-9)$ Note that: $15r=(3)(5)(r)$ and $27=(3)(3)(3)$. Thus, the greatest common factor is $3$. Factor out the GCF to obtain: $15r-27=3(5r-9)$
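The GCF step is easy to confirm programmatically (a quick sketch; Python's stdlib `math.gcd` is assumed):

```python
from math import gcd

# Greatest common factor of the coefficients 15 and 27
g = gcd(15, 27)
print(g)  # 3, matching 15r - 27 = 3(5r - 9)

# spot-check the factorization at a sample value of r
r = 7
assert 15 * r - 27 == g * (5 * r - 9)
```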
http://mathhelpforum.com/algebra/80409-help-required-print.html
Help required
• March 24th 2009, 08:44 AM shanram74
Help required
A travel agency gives the following discounts to their customers:
0-6 days in advance - 0%
A person pays $1050 for his ticket. Had he purchased the ticket 1 day later, he would have paid $210 more. How many days before did he purchase the ticket?
Can someone help me solve this problem?
• March 24th 2009, 09:15 AM running-gag
Hi
You know that one day later he would not have paid the same price. Therefore he bought his ticket either 7 days or 14 days before.
If it is 7 days before, then he paid with a 10% discount (he paid 0.9 times the normal price). Therefore 0.9p = 1050 => p = 1050/0.9 = 1166.67.
The day after he would have paid the normal price p (because no reduction), which is 1166.67. But you know that he would have paid 210 more than 1050 = 1260. And 1166.67 is not equal to 1260. Therefore the hypothesis is wrong.
If it is 14 days before ... your turn!
• March 24th 2009, 09:16 AM
Percentages
Hello shanram74
Quote: Originally Posted by shanram74
A travel agency gives the following discounts ... A person pays $1050 for his ticket. Had he purchased the ticket 1 day later, he would have paid $210 more. How many days before did he purchase the ticket?
Welcome to Math Forum!
If you get a 10% discount, you pay only 90% of the full price. This means that if you pay $x$, the full price is $\frac{100}{90}x$. So could $1050 be 90% of the full price? No, because $\frac{100}{90}\times 1050$ gives an inexact decimal answer. (Try it and see!)
On the other hand, if you get a 25% discount, you pay only 75% of the full price. So this time, if you pay $x$, the full price is $\frac{100}{75}x$. How does this work with $1050? Well, $\frac{100}{75}\times 1050 = 1400$. This is better!
So it looks as if the full price is $1400, and he's just managed to get a 25% discount. If he'd been a day later, he'd only have got a 10% discount...
Can you finish it off now?
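Picking up where the last reply leaves off, the arithmetic can be checked in a few lines (assuming, as the replies imply, a 25% discount at 14+ days and 10% at 7-13 days; the full discount table above was truncated in extraction):

```python
# Hypothesis from the thread: ticket bought 14 days ahead at a 25% discount.
full_price = 1050 * 100 // 75           # 0.75 * p = 1050  ->  p = 1400
one_day_later = full_price * 90 // 100  # a day later, only the 10% discount applies

print(full_price, one_day_later)        # 1400 1260
assert one_day_later - 1050 == 210      # exactly $210 more, as the problem states
```

So the ticket was bought 14 days in advance, completing the argument sketched in the replies.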
https://www.zbmath.org/?q=an%3A0308.46054
zbMATH — the first resource for mathematics Representations of the CAR generated by representations of the CCR. Fock case. (English) Zbl 0308.46054 MSC: 46L05 General theory of $$C^*$$-algebras 46N99 Miscellaneous applications of functional analysis 47D03 Groups and semigroups of linear operators 22D25 $$C^*$$-algebras and $$W^*$$-algebras in relation to group representations Full Text: References: [1] Heisenberg, W.: Introduction to the unified field theory of elementary particles. London: Interscience 1966 · Zbl 0205.57502 [2] Streater, R.F., Wilde, I.F.: Nucl. Phys. B24, 561 (1970) · doi:10.1016/0550-3213(70)90445-1 [3] Kalnay, A.J., MacCotrina, E., Kademova, K.V.: Int. J. Theor. Phys.7, 9 (1973) · doi:10.1007/BF02412656 [4] Rzewuski, J.: Field theory, part II. PWN Warsaw: Illife Books Ltd. 1969 [5] Berezin, F.A.: Methods of second quantization. Moscow: Nauka 1965 (in Russian) · Zbl 0131.44805 [6] Rzewuski, J.: Rep. Math. Phys.1, 195 (1971) [7] Garbaczewski, P., Rzewuski, J.: Rep. Math. Phys.6, 423 (1974) · Zbl 0325.46071 · doi:10.1016/S0034-4877(74)80007-8 [8] Garbaczewski, P.: Rep. Math. Phys.7, 9 (1975) · doi:10.1016/0034-4877(75)90037-3 [9] Emch, G.G.: Algebraic methods in statistical mechanics and quantum field theory. London: Wiley, Interscience 1972 · Zbl 0235.46085 [10] Rohrlich, F.: In: Analytic methods in mathematical physics. Newton, R.G., Gilbert, R.P. (Ed.). New York: Gordon and Breach 1970 · Zbl 0194.29903 [11] Jost, R.: The general theory of quantized fields. Moscow: Mir 1967 (in Russian) · Zbl 0127.19105 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://gmatclub.com/forum/in-the-figure-point-d-divides-side-bc-of-triangle-abc-into-segments-126934-60.html
# In the figure, point D divides side BC of triangle ABC into segments

Manager Joined: 08 Jul 2018 Posts: 66
Re: In the figure, point D divides side BC of triangle ABC into segments 05 Oct 2018, 02:12
vigrah wrote: say angle CAB = y. Since the sum of the angles in a triangle is 180, x + y + 45 = 180, so x + y = 135 (equation 1). Line AD divides BC in a 2:1 ratio, hence x + (2/3)y + 60 = 180, so x + (2/3)y = 120 (equation 2). Solving equations 1 and 2 we get x = 75.
How did you take the value of angle CAD to be (2/3)y?
Intern Joined: 13 Jul 2018 Posts: 10
Re: In the figure, point D divides side BC of triangle ABC into segments 17 Oct 2018, 05:36
In this, why can't we take angle ADB = 120 degrees, which would give angle DAB = 15 degrees, and since BD = 1/2 CD, angle DAC = 30 degrees? What's wrong with this approach?
Manager Joined: 10 Aug 2009 Posts: 63
In the figure, point D divides side BC of triangle ABC into segments 20 Dec 2018, 12:30
arpitalewe wrote: In this, why can't we take angle ADB = 120 degrees, which would give angle DAB = 15 degrees, and since BD = 1/2 CD, angle DAC = 30 degrees? What's wrong with this approach?
This is wrong. The property you are referring to works for angles and sides within a single triangle, but here you are comparing two different triangles. However, you can use the angle bisector theorem here, according to which: $$CD/DB = \sin(\angle DAC)/\sin(\angle DAB)$$ Now you need to know the value of $$\sin 15^\circ$$, find double that value, and then find the angle whose sine equals it. Here is the link from wiki for the angle bisector theorem: https://en.wikipedia.org/wiki/Angle_bisector_theorem
https://www.shaalaa.com/question-bank-solutions/let-a-3-6-9-12-696-699-and-b-7-14-21-287-294-find-no-of-ordered-pairs-of-a-b-such-that-a-a-b-b-a-b-and-a-b-is-odd-number-system-entrance-exam_103018
# Let A = {3, 6, 9, 12, ..., 696, 699} and B = {7, 14, 21, ..., 287, 294}. Find the number of ordered pairs (a, b) such that a ∈ A, b ∈ B, a ≠ b and a + b is odd. - Mathematics MCQ

Solve the following question and mark the best possible option.
Let A = { 3, 6, 9, 12, ......., 696, 699} and B = {7, 14, 21, .........., 287, 294}
Find the number of ordered pairs (a, b) such that a ∈ A, b ∈ B, a ≠ b and a + b is odd.
• 4879
• 4893
• 2436
• 2457

#### Solution

A has 699/3 = 233 elements, of which 116 are even and 117 are odd. B has 294/7 = 42 elements, of which 21 are even and 21 are odd.
A ∩ B = { 21, 42, ........, 273, 294 }, so n(A ∩ B) = 14.
For the choice of a and b, two cases arise:
Case I: a is even and b is odd. Number of possible pairs = ${}^{116}C_1 \times {}^{21}C_1 = 116 \times 21$
Case II: a is odd and b is even. Number of possible pairs = ${}^{117}C_1 \times {}^{21}C_1 = 117 \times 21$
But there are 14 cases where a = b and a, b ∈ A ∩ B. So, required answer = 116 × 21 + 117 × 21 − 14 = 4879.
Concept: Number System (Entrance Exam)
Is there an error in this question or solution?
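As a quick cross-check of the parity counts, the pairs can be enumerated directly; note that a pair with a = b has the even sum 2a, so once a + b is required to be odd the a ≠ b condition never actually excludes anything:

```python
# A: multiples of 3 up to 699; B: multiples of 7 up to 294
A = range(3, 700, 3)
B = range(7, 295, 7)

# ordered pairs with a != b and a + b odd (a != b kept to mirror the problem statement)
count = sum(1 for a in A for b in B if a != b and (a + b) % 2 == 1)
print(count)  # 116*21 + 117*21 = 4893
```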
https://brilliant.org/discussions/thread/help-with-triangle-problems-please/
# Help with Triangle problems please!

Q1. ABC is a triangle with Angle B > 2 Angle C. D is a point on BC such that AD bisects Angle BAC and AB = CD. Prove that Angle BAC = $$72^{\circ}$$

Q2. AD, BE and CF are medians of a triangle ABC. Prove that 2(AD+BE+CF) < 3(AB+BC+CA) < 4(AD+BE+CF)

Q3. In Triangle ABC, AD is the bisector of Angle BAC. Prove that AB > BD.

Note by Mehul Arora 3 years, 4 months ago

Sort by:
- 3 years, 4 months ago For the 1st problem.
- 3 years, 4 months ago If you draw the diagram for the 3rd question you get $$\angle ADB = \dfrac{\angle A}{2} + \angle C$$ whereas $$\angle BAD = \dfrac{\angle A}{2}$$. Since $$\angle ADB > \angle BAD$$, and the side opposite the larger angle in triangle ABD is longer, $$AB > BD$$.
- 3 years, 4 months ago @Anik Mandal Thanks! ^_^ I was not really able to figure it out. It was an easy problem though. Thanks so much :)
- 3 years, 4 months ago Welcome bro. Did you get the first two?
- 3 years, 4 months ago Nah, Not really :/ :/
- 3 years, 4 months ago Not that poor in geometry.. xD
- 3 years, 4 months ago Haha, I know xD Neither am I. Idk why I was unable to figure this out :/
- 3 years, 4 months ago
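Q3 is easy to sanity-check numerically before proving it: place a triangle in coordinates, locate D from the angle bisector theorem (BD/DC = AB/AC), and compare AB with BD. A randomized sketch (the sampling ranges and `math.dist`, Python 3.8+, are my choices):

```python
import math, random

random.seed(1)

def check_once():
    # random non-degenerate triangle with A at the origin and B on the x-axis
    A = (0.0, 0.0)
    B = (random.uniform(1, 5), 0.0)
    C = (random.uniform(-3, 3), random.uniform(1, 4))
    ab = math.dist(A, B)
    ac = math.dist(A, C)
    bc = math.dist(B, C)
    # the bisector from A meets BC at D with BD/DC = AB/AC, so BD = BC*AB/(AB+AC)
    bd = bc * ab / (ab + ac)
    return ab > bd

assert all(check_once() for _ in range(1000))
print("AB > BD held in all 1000 random triangles")
```

The check also suggests the proof: AB > BD is equivalent to AB + AC > BC, which is the triangle inequality.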
https://www.quantumstudy.com/physics/magnetic-effect-of-current-10/
# Force on a current carrying wire in a magnetic field

$\displaystyle \vec{F} = q(\vec{v}\times \vec{B})$

We can say

$\displaystyle d\vec{F} = dq(\vec{v}\times \vec{B} )$

$\displaystyle d\vec{F} = dq\left(\frac{\vec{dl}}{dt} \times \vec{B}\right)$

$\displaystyle d\vec{F} = I(\vec{dl}\times \vec{B} )$

Strictly, this force acts on the charge carriers within the length dl. However, this force is transferred, by collisions, to the wire as a whole, a force which, moreover, is capable of doing work on the wire. The net force on a wire is found by integrating along its length. A corollary of this is that there is no net force on a current-carrying loop in a uniform magnetic field, since in this case $\displaystyle \oint \vec{dl} = 0$.

### Fleming's left-hand rule

The direction of the force $\vec{F} = I(\vec{L} \times \vec{B})$ is given by Fleming's left-hand rule: with the left hand, point the index finger in the direction of the magnetic field and the middle finger in the direction of the current; the outstretched thumb then shows the direction of the force on the conductor.

Example : A conductor of length 2.5 m with one end located at z = 0, x = 4 m carries a current of 12 A parallel to the negative y-axis. Find the magnetic field in the region if the force on the conductor is 1.2 × 10⁻² N in the direction (−î + k̂)/√2
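The closing example can be finished numerically. With the wire along −ĵ, matching components in F = I(L × B) fixes the field components perpendicular to the wire; the component along the wire (B_y) produces no force and cannot be determined, so it is taken as zero here. A sketch:

```python
import math

I = 12.0                          # current, A
L = (0.0, -2.5, 0.0)              # length vector, m (wire along -y)
F_mag = 1.2e-2                    # given force magnitude, N

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# F = I (L x B) with F = F_mag * (-1, 0, 1)/sqrt(2) gives
# Bx = Bz = F_mag / (I * |L| * sqrt(2)); By is undetermined (taken as 0).
b = F_mag / (I * 2.5 * math.sqrt(2))
B = (b, 0.0, b)                   # ~2.83e-4 T in each of x and z

F = tuple(I * c for c in cross(L, B))
print(B, F)
```

Plugging B back in reproduces a force of magnitude 1.2 × 10⁻² N along (−î + k̂)/√2, confirming the solution.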
https://math.stackexchange.com/questions/3207441/let-x-y-x-1-x-2-ldots-be-i-i-d-and-phix-y-a-test-function-does-frac
# Let $X,Y,X_1,X_2,\ldots$ be i.i.d. and $\phi(x,y)$ a test function. Does $\frac{1}{N^2}\sum_{i,j}\phi(X_i,X_j)\to\mathbb E\phi(X,Y)$ a.s.?

Suppose we are given a distribution $\mu$ on $\mathbb R^d$, and a smooth function $\phi:\mathbb R^d\times\mathbb R^d\to\mathbb R$ with compact support. Let $X_i$ be i.i.d. random variables with distribution $\mu$. Then is it the case that $$\frac{1}{N^2}\sum_{i,j=1}^N\phi(X_i,X_j)\to\int_{\mathbb R^d\times\mathbb R^d}\!\phi(x,y)\,\mathrm d\mu(x)\,\mathrm d\mu(y)?$$ For single-variable $\phi$, this is just the strong law of large numbers, but I don't quite see how to prove it here.

• Can't you use integration by parts and the Glivenko-Cantelli theorem? Write the sum as the double integral of the empirical cdf against some mixed partial derivative of $\phi$? – kimchi lover Apr 29 '19 at 22:28

Let $f(X_1,\ldots,X_n) = \frac{\sum_{i,j}\phi(X_i,X_j)}{n^2}$. Since $\phi$ is smooth with compact support, it is bounded by some $k \in \mathbb{R}^{+}$. Therefore, for every $i$, $|f(X_1,\ldots,X_i,\ldots,X_n)-f(X_1,\ldots,X_i^*,\ldots,X_n)| \leq \frac{2k}{n}$. It follows from McDiarmid's inequality that $$P(|f(X_1,\ldots,X_n)-E[f(X_1,\ldots,X_n)]| \geq \epsilon) \leq 2\exp(-0.5\epsilon^2k^{-2}n)$$ Also observe that $$\theta_n := E[f(X_1,\ldots,X_n)] = \frac{nE[\phi(X_1,X_1)] + n(n-1)E[\phi(X_1,X_2)]}{n^2}$$ and $$\theta := E[\phi(X_1,X_2)] = \lim_n \theta_n.$$ \begin{align*} \sum_{n}P(|f(X_1,\ldots,X_n)-\theta| \geq \epsilon) &\leq \sum_{n}P(|f(X_1,\ldots,X_n)-\theta_n| \geq \epsilon - |\theta_n-\theta|) \\ &\leq \sum_n 2\exp(-0.5(\epsilon - |\theta_n-\theta|)^2k^{-2}n) < \infty \end{align*} It follows from Borel-Cantelli that $f(X_1,\ldots,X_n)$ converges a.s. to $\theta$.

• +1. This proof relies on the existence of the bound $k$. Can you comment on, or perhaps even better give a counter-example, where $\phi$ is unbounded in such a way that $f()$ does not converge to $\theta$?
E.g., I would imagine $E[f()]$ still converges to $\theta$ (even if $\phi$ is not bounded) so any counterexample may only violate convergence a.s.? – antkam Apr 30 '19 at 21:02 • For example, if $X_{1},\ldots,X_{n}$ are i.i.d. $N(0,1)$ and $\phi(x,y) = xy^{-1}$, then $\phi(X_i,X_j) \sim Cauchy$ for every $i \neq j$. Therefore, $f(X_1,\ldots,X_n)$ does not converge. – madprob May 1 '19 at 0:02 • However, note that OP states that $\phi$ is smooth and has a compact support. Therefore, $\phi$ is bounded. – madprob May 1 '19 at 0:04 • I understand that the OP stmt implies $\phi$ is bounded, and your proof is correct. I was just wondering what examples of unbounded $\phi$ would make the convergence false. Thanks for bringing up Cauchy -- it is an interesting example. In that case, does RHS $= 0$ (by symmetry?), or is RHS also undefined? If RHS is undefined, this example generalizes to any $\phi$ with an undefined mean. I was more wondering if there is a case where e.g. RHS is defined and finite, but LHS either does not converge, or converges to a different value. – antkam May 1 '19 at 1:31 • The same result should hold whenever there is a finite constant $B$ such that $E[\phi(X_i,X_j)^2] \leq B$ for all $i,j$ (including $i=j$). It may also hold if we only require $E[|\phi(X_i,X_j)|]\leq B$ for all $i,j$ but that would require a lot more work. – Michael May 4 '19 at 23:01
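The convergence claimed in the question is easy to check empirically. A small Monte Carlo sanity check (my own illustration, not from the thread): take $X_i$ uniform on $[0,1]$ and $\phi(x,y)=xy$, which is bounded on the unit square, so the limit is $E[X]E[Y]=0.25$.

```python
# Monte Carlo sanity check of (1/N^2) * sum_{i,j} phi(X_i, X_j) -> E[phi(X, Y)].
# Illustrative choice (not from the thread): X_i ~ Uniform[0,1], phi(x, y) = x*y,
# so the limiting value is E[X] * E[Y] = 0.25.

import random

random.seed(0)
N = 2000
xs = [random.random() for _ in range(N)]

# For phi(x, y) = x*y the double sum factorises:
# (1/N^2) * sum_{i,j} x_i * x_j = (mean of xs)^2,
# which keeps the check O(N) instead of O(N^2).
mean = sum(xs) / N
estimate = mean * mean

assert abs(estimate - 0.25) < 0.03
print(estimate)
```

With $N=2000$ the estimate lands well within a few standard errors of $0.25$, consistent with the almost-sure convergence proved in the accepted answer.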
https://mathoverflow.net/questions/243477/is-there-a-way-to-embed-clifford-algebras-into-the-corresponding-tensor-algebra
# Is there a way to embed Clifford algebras into the corresponding tensor algebra?

There are simple and straightforward embeddings of the underlying vector space $V$ into its corresponding tensor algebra $\mathcal{T}(V)$ and any of its corresponding Clifford algebras $\mathcal{Cl}_q(V)$ (where $q$ denotes the quadratic form defining the Clifford algebra). This fact is what makes both tensor analysis and geometric (Clifford) algebra compatible with ordinary vector algebra or calculus.

However, even though any $\mathcal{Cl}_q(V)$ can be formed as a quotient of $\mathcal{T}(V)$, or perhaps because of that fact, no $\mathcal{Cl}_q(V)$ seems to be "compatible" with $\mathcal{T}(V)$ in a simple way, at least in the sense that there does not appear to exist any "simple" embedding of $\mathcal{Cl}_q(V)$ into $\mathcal{T}(V)$, such that we could use all of the structure and geometric intuition afforded by the $\mathcal{Cl}_q(V)$ framework while still working in the most general possible space, $\mathcal{T}(V)$.

This seems like a big problem to me, because even if one isn't interested in tensors of rank $>2$, rank 2 tensors (also known as matrices) are ubiquitous, and I have the impression that the greater compatibility of vector algebra with matrix algebra is the main obstacle to more widespread implementation of geometric algebra. (I.e. because unlike vector algebra, geometric algebra doesn't "play nice" with matrix algebra or tensor algebra in general.) This seems like a major pedagogical problem, since it seems like many subjects could be much more easily understood through the prism of geometric algebra, but the (seeming) incompatibility of any $\mathcal{Cl}_q(V)$ with $\mathcal{T}(V)$ vastly (and arguably rightly) dampens any enthusiasm to pursue such an approach.

Even if there can not exist any simple embedding, what about a simple way to switch between the two systems? I am also doubtful in this regard, since the definition of exterior algebra and outer products in terms of tensor products is very unwieldy (a sum over the symmetric group).
And the exterior algebra is the most degenerate type of Clifford algebra possible. Here is a similar question on Math.SE, corroborating my claim that every multivector corresponds to a tensor but not vice versa: What is the relationship of tensor and multivector. These two documents explain how to represent some multivectors as tensors for the special case that $V=\mathbb{R}^3$ and $q=I$ i.e. just the identity, so that the inner product is just the dot product: (1) (2).

The best I could find so far is the following (see page 7 of this document): Given $$\mathcal{I}_q(V):= \left\{ \sum_k A_k \otimes (v\otimes v -q(v))\otimes B_k: v \in V,\ A_k, B_k \in \mathcal{T}(V) \right\}$$ we have for $A,B \in \mathcal{Cl}_q(V)$: $AB = A \otimes B + \mathcal{I}_q(V)$, where $AB$ denotes the Clifford (geometric) product and $\otimes$ is of course the tensor product.

EDIT: Thinking about Oscar Cunningham's comments below, I think we can write $$\mathcal{T}(V) = \mathcal{Cl}_q(V) \oplus \mathcal{I}_q(V)$$ (or something that's actually mathematically correct but similar "in spirit"). $\mathcal{Cl}_q(V)$ are exactly those tensors which have a unique embedding as an element of the Clifford algebra, and $\mathcal{I}_q(V)$ consists of exactly those tensors which can not be represented in the Clifford algebra, and hence are mapped to 0 by the quotient map. Thus the problem reduces to: Given any arbitrary tensor $t \in \mathcal{T}(V)$, determine its (unique since direct sum?) representation as $$t= C + \tau,$$ where $C \in \mathcal{Cl}_q(V)$ is a member of the Clifford algebra, and $\tau \in \mathcal{I}_q(V)$ is a member of the ideal $\mathcal{I}_q(V)=\{ \sum_k A_k \otimes (v\otimes v -q(v))\otimes B_k: v \in V,\ A_k, B_k \in \mathcal{T}(V) \}$. Since we have this explicit representation for $\mathcal{I}_q(V)$, I think the problem might be much easier than I originally anticipated.
For example, if $t$ is a rank 4 tensor in some four dimensional vector space $V$, we could write something like $$t=\sum_{\sigma \in S_4} \left[e_{\sigma(1)}\otimes(\langle e_{\sigma(2)},e_{\sigma(3)}\rangle_q)\otimes e_{\sigma(4)}\right] + \sum_{\sigma\in S_4} \left[e_{\sigma(1)}\otimes(e_{\sigma(2)}\otimes e_{\sigma(3)} - \langle e_{\sigma(2)}, e_{\sigma(3)} \rangle_q )\otimes e_{\sigma(4)}\right],$$ where $e_1, e_2, e_3, e_4$ are basis vectors and $\langle \cdot, \cdot\rangle_q$ is the inner product formed from the quadratic form $q$ via polarization. Of course, I am going somewhat out on a limb here in assuming — i.e. I do not know how to prove — that $$\sum_{\sigma \in S_4} \left[e_{\sigma(1)}\otimes(\langle e_{\sigma(2)},e_{\sigma(3)}\rangle_q)\otimes e_{\sigma(4)}\right]=C \in \mathcal{Cl}_q(V)$$ or that $$\sum_{\sigma\in S_4} \left[e_{\sigma(1)}\otimes(e_{\sigma(2)}\otimes e_{\sigma(3)} - \langle e_{\sigma(2)}, e_{\sigma(3)} \rangle_q )\otimes e_{\sigma(4)}\right]=\tau\in \mathcal{I}_q(V).$$

The other strategy I was thinking of was to represent tensors of rank 2 or greater only via their isomorphisms with multilinear maps, since then that would come down to representing them as functions of (contravariant) vectors, a function of $k$ vectors, a $k$-linear map, for a $k$ tensor. This would work insofar as all contravariant vectors have canonical embeddings in both the Clifford and tensor algebras, but I think we still have as unresolved the problem of how to represent covariant vectors (row vectors) as well as general covariant/mixed variance tensors.

Some possible problems: The 0 element of a Clifford/geometric algebra has all grades (i.e. for any $k$ it is a $k$-blade/multivector), whereas I don't think this is true in the tensor algebra (at least thinking in terms of representing tensors via k-dimensional arrays -- you can have the scalar 0, but also 0 column vectors, 0 row vectors, 0 matrices, etc.).
Intuitively, one wants to think of a geometric/Clifford algebra as "symmetric algebra + exterior algebra" (commutative inner product $\frac{1}{2}(v\otimes w + w \otimes v)$ and anticommutative outer product $\frac{1}{2}(v\otimes w - w \otimes v)$), especially since (at least for vector spaces over fields of characteristic zero, which is what I personally am interested in) both the symmetric and exterior algebras can be identified with certain subclasses of tensors (see: Wikipedia - symmetric algebra vs. symmetric tensors). But how do we do this for a general Clifford algebra? Maybe $$vw = v \otimes w + \mathcal{I}_q(V) = \langle v, w \rangle_q + v \wedge w = \frac{1}{2}(v\otimes w + w \otimes v + \mathcal{I}_q(V))+\frac{1}{2}(v \otimes w - w \otimes v)?$$ It's clear that $v \wedge w$ can always be identified as a subalgebra of the Clifford algebra, and this sort of definition somewhat explains that, but then is it true that the inner product generated by polarization of $q$ must equal $\frac{1}{2}(v\otimes w + w \otimes v + \mathcal{I}_q(V))$? And even if it does, does that mean that the inner product of a Clifford algebra is commutative if and only if $q$ is a symmetric quadratic form? Since in general the product derived from a quadratic form via polarization need only be bilinear, but not necessarily symmetric (i.e. commutative) unless $q$ is symmetric too.

Related:

- Which concepts in differential geometry cannot be represented using geometric algebra? (The answer is essentially: tensors.)
- How would one express the result of a tensor product (of two vectors) in the geometric algebra?
- Is geometric algebra isomorphic to tensor algebra? (No. Most tensors do not admit of a unique representation in geometric algebra due to the quotient structure.)
- What is the hierarchy of algebraic objects meant to capture geometric intuition? (Tensor algebras are essentially the most general possible, even though geometric algebras are in general less unwieldy.)
• If $q$ is an inner product, do $\mathcal T(V)$ and $\mathcal{Cl}_q(V)$ get a canonical inner product? If so we could take the adjoint of the quotient map... – Oscar Cunningham Jul 1 '16 at 20:16

• @OscarCunningham I suppose we get a canonical inner product by taking the polarization of the quadratic form $q$. Could you elaborate on "taking the adjoint of the quotient map"? I don't understand. – Chill2Macht Jul 1 '16 at 20:20

• There's a canonical quotient map $\mathcal T(V)\rightarrow \mathcal{Cl}_q(V)$. Since $\mathcal T(V)$ is an inner product space, I think that $\mathcal{Cl}_q(V)$ has to be isomorphic to the space orthogonal to the kernel of this map. – Oscar Cunningham Jul 1 '16 at 20:29

• I don't think ${\cal T}(V)$ has any interesting finite-dimensional subalgebras. – მამუკა ჯიბლაძე Jul 2 '16 at 5:44

• Might the embedding of a Clifford algebra into the endomorphisms of an exterior algebra, as described in the accepted answer for mathoverflow.net/questions/68378/clifford-algebra-non-zero, be helpful for you? – KConrad Jul 2 '16 at 9:55

As K. Conrad points out, this question has actually been answered already on MathOverflow by user MTS: see these answers here and here. The essential idea is this: despite the fact that Clifford algebras have non-zero quadratic form in general, we can still "piggyback" on the embedding of the exterior algebra in the tensor algebra to represent the Clifford algebra, regardless of the choice of quadratic form.

Note: The embedding of the exterior algebra into the tensor algebra can be given by defining the exterior=wedge product in terms of the tensor product. Discussion of two canonical ways of doing this are given here on MathOverflow (the top answer being again by MTS) as well as on Math.SE. Beautiful and rigorous proofs of this fact are given in the two answers (1) (2) linked to above by user MTS; in what follows I will merely try to motivate this result for novices like me.

1.
First, we might expect this result intuitively by considering the decomposition of the Clifford product for vectors into an "inner" and "outer" product. The former is the inner product generated by polarization of the quadratic form $q$ and the latter is "equivalent" in some sense to the exterior/wedge product from the exterior algebra. Since the inner product is grade-reducing, while the outer product is grade-increasing, we might expect that the inner product would be "incapable of producing new basis elements", hence the only basis elements of the Clifford algebra we should expect are those created by the corresponding exterior algebra, i.e. there is some reason to expect a priori that a given Clifford algebra is "no more rich than" making the corresponding exterior algebra an inner product space, i.e. by defining a quadratic form on the exterior algebra.

2.

MTS's answers make this suspicion rigorous by showing that the ideals for the Clifford algebra and the exterior algebra are sufficiently similar so that the quotients of the tensor algebra by them can be related in a straightforward manner. Let us motivate this fact by considering $\mathcal{I}_q(V)$ explicitly. Remember that $$\mathcal{I}_q(V):= \left\{ \sum_k A_k \otimes (v \otimes v - q(v)) \otimes B_k : v \in V,\ A_k, B_k \in \mathcal{T}(V) \right\}$$ Using the distributivity of the tensor product (e.g. here) I rewrite this as $$\mathcal{I}_q(V) = \left\{ \sum_k \left[A_k \otimes v \otimes v \otimes B_k - A_k \otimes q(v) \otimes B_k \right] : v \in V,\ A_k, B_k \in \mathcal{T}(V)\right\} \\ \overset{?}{=} \left\{ \sum_k \left[A_k \otimes v \otimes v \otimes B_k - q(v) \cdot( A_k \otimes B_k) \right] : v \in V,\ A_k, B_k \in \mathcal{T}(V)\right\}$$ In any case, the motivating idea here is that the right term arising from the quadratic form, always being of strictly lower grade than the left term, should in some sense be "negligible" compared to the left term (MTS's second answer makes this idea explicit and rigorous).
In other words: $$\mathcal{I}_q(V) \approx \left\{ \sum_k A_k \otimes v \otimes v \otimes B_k: v \in V,\ A_k, B_k \in \mathcal{T}(V) \right\}$$ where the right hand side is obviously $\mathcal{I}_0(V)$, the ideal which forms the exterior algebra.

3.

In the very simple case of geometric algebra over $\mathbb{R}^n$, this fact is used all the time in creating a basis for the geometric algebra. Namely, it is stated that $\mathbb{G}^n \cong \mathbb{R}^{2^n}$ (vector space isomorphism), with the basis being given by: $$\{e_1, \dots, e_n, e_{\sigma_2(1)}\wedge e_{\sigma_2(2)}, e_{\sigma_3(1)}\wedge e_{\sigma_3(2)} \wedge e_{\sigma_3(3)}, \dots, e_1 \wedge \dots \wedge e_n : \sigma_i \in A_i \}$$ where $A_i$ denotes the set of all even permutations of $i$ elements selected from $\{1, \dots, n\}$ and $\{e_1,\dots,e_n\}$ is an orthonormal basis of $\mathbb{R}^n$, orthogonal with respect to the inner product of $\mathbb{G}^n$. It should be evident that this basis is isomorphic to the basis of the exterior algebra over $\mathbb{R}^n$.

Since in any Clifford algebra $vw = v \wedge w$ ($vw$ denoting the Clifford/geometric product) if and only if $\langle v, w \rangle_q=0$, i.e. if and only if $v$ and $w$ are orthogonal with respect to the inner product induced by $q$ via polarization, the possible choices of orthonormal basis will in general depend on the choice of quadratic form for the geometric algebra. Nevertheless, the one-to-one correspondence of the bases with the basis for the exterior algebra over $\mathbb{R}^n$ will still remain and will be sufficient to generate a vector space isomorphism, since vector space isomorphisms only depend on the linear independence and dimensions of the bases, and not their orthogonality under an inner product.
In other words, we can always use the vector space basis of the exterior algebra over $\mathbb{R}^n$ as a vector space basis for the geometric algebra $\mathbb{G}^n$ regardless of our choice of quadratic form, although this basis will not always be orthogonal. Hence by identifying the geometric algebra with the exterior algebra + inner product structure, we get a linear embedding of the geometric algebra into the tensor algebra "for free" by using the exterior algebra's embedding. (See p.4 of this document for discussion of the canonical basis of a geometric algebra generated by an orthonormal basis of $\mathbb{R}^n$.)
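The point that the exterior-algebra basis carries the Clifford product can be made concrete in a few lines. The following sketch is my own illustration (assuming $V=\mathbb{R}^2$ with the Euclidean form $q(e_i)=+1$, not code from the linked answers): multivectors are stored on the exterior basis $\{1, e_1, e_2, e_1\wedge e_2\}$, encoded as sorted index tuples, and the geometric product is computed by sorting concatenated indices with sign bookkeeping and contracting repeated indices via $e_i e_i = q(e_i) = 1$.

```python
# Geometric (Clifford) product on the exterior-algebra basis of R^2,
# assuming the Euclidean quadratic form q(e_i) = +1 (illustrative sketch).
# A multivector is a dict mapping a sorted index tuple (a basis blade)
# to its coefficient, e.g. {(1, 2): 1} represents e1 ^ e2.

def blade_mul(a, b):
    """Multiply basis blades a, b (sorted index tuples); return (sign, blade)."""
    sign, idx = 1, list(a) + list(b)
    changed = True
    while changed:
        changed = False
        for k in range(len(idx) - 1):
            if idx[k] > idx[k + 1]:        # anticommute: e_j e_i = -e_i e_j
                idx[k], idx[k + 1] = idx[k + 1], idx[k]
                sign = -sign
                changed = True
            elif idx[k] == idx[k + 1]:     # contract: e_i e_i = q(e_i) = +1
                del idx[k:k + 2]
                changed = True
                break
    return sign, tuple(idx)

def gp(x, y):
    """Geometric product of two multivectors (dicts blade -> coefficient)."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, blade = blade_mul(ba, bb)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c != 0}

e1, e2 = {(1,): 1}, {(2,): 1}
e12 = gp(e1, e2)        # {(1, 2): 1}, the pseudoscalar of G^2
print(gp(e1, e1))       # → {(): 1}      e1 e1 = q(e1) = 1
print(gp(e2, e1))       # → {(1, 2): -1} anticommutes with e1 e2
print(gp(e12, e12))     # → {(): -1}     (e1 e2)^2 = -1
```

Note that the data structure is exactly the exterior basis; only the contraction rule depends on $q$, which is the "piggybacking" described above.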
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/studia-mathematica/all/169/1/90189/characterizations-of-p-superharmonic-functions-on-metric-spaces
## Characterizations of $p$-superharmonic functions on metric spaces

### Volume 169 / 2005

Studia Mathematica 169 (2005), 45-62

MSC: Primary 31C45; Secondary 31C05, 35J60, 49J27.

DOI: 10.4064/sm169-1-3

#### Abstract

We show the equivalence of some different definitions of $p$-superharmonic functions given in the literature. We also provide several other characterizations of $p$-superharmonicity. This is done in complete metric spaces equipped with a doubling measure and supporting a Poincaré inequality. There are many examples of such spaces. A new one given here is the union of a line (with the one-dimensional Lebesgue measure) and a triangle (with a two-dimensional weighted Lebesgue measure). Our results also apply to Cheeger $p$-superharmonic functions and in the Euclidean setting to $\cal A$-superharmonic functions, with the usual assumptions on $\cal A$.

#### Authors

- Anders Björn, Department of Mathematics
https://api-project-1022638073839.appspot.com/questions/how-do-you-evaluate-3-ln0-2
How do you evaluate $3^{\ln 0.2}$?

Feb 24, 2017

$3^{\ln 0.2} = 0.171$

Explanation: Let $3^{\ln 0.2} = x$. Taking the natural log of both sides gives $\ln x = \ln 0.2 \cdot \ln 3$, and hence $$x = e^{\ln 0.2 \,\ln 3} = e^{-1.6094 \times 1.0986} = e^{-1.7681} \approx 0.171$$
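The chain of equalities is easy to verify numerically (a quick sanity check added here, not part of the original answer):

```python
# Check that 3^(ln 0.2) equals e^(ln 0.2 * ln 3), and evaluate it.
import math

direct = 3 ** math.log(0.2)                      # 3^(ln 0.2)
via_exp = math.exp(math.log(0.2) * math.log(3))  # e^(ln 0.2 * ln 3)

assert abs(direct - via_exp) < 1e-12
print(round(direct, 3))  # → 0.171
```

Both routes agree to machine precision, confirming the rounded answer 0.171.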
https://biodynamo.org/notebooks/ST13-dynamic-scheduling.html
# Dynamic scheduling

Author: Lukas Breitwieser

This tutorial demonstrates that behaviors and operations can be added and removed during the simulation. This feature provides maximum flexibility to control which functions will be executed during the lifetime of a simulation.

Let's start by setting up BioDynaMo notebooks.

Define a helper variable.

We define a standalone operation TestOp which prints out that it got executed and which removes itself from the list of scheduled operations afterwards. The same principles apply also for agent operations.

Let's define a little helper function which creates a new instance of TestOp and adds it to the list of scheduled operations.

Let's define a new behavior b2 which prints out when it gets executed and which adds a new operation with name OP2 to the simulation if a condition is met. In this scenario the condition is defined as simulation time step == 1.

We define another behavior b1 which prints out when it gets executed, removes itself from the agent, and adds behavior b2 to the agent.

Now all required building blocks are ready. Let's define the initial model: a single agent with behavior b1. We also add a new operation to the simulation.

Let's simulate one iteration and think about the expected output.

- Since we initialized our only agent with behavior b1, we expect to see a line B1 0-0.
- Furthermore, b1 will print a line to inform us that it removed itself from the agent, and that it added behavior b2 to the agent.
- Because changes are applied immediately (using the default InPlaceExecCtxt), B2 will also be executed. However, the condition inside b2 is not met.
- Next we expect an output from OP1 telling us that it got executed.
- Lastly, we expect an output from OP1 telling us that it removed itself from the simulation.

Let's simulate another iteration. This time we only expect output from B2. Remember that B1 and OP1 have been removed in the last iteration.
This time the condition in B2 is met and we expect to see an output line telling us that a new instance of TestOp with name OP2 has been added to the simulation.

Let's simulate another iteration. This time we expect an output from B2, whose condition is not met in this iteration, and from OP2 that it got executed and removed itself from the simulation.

Let's simulate one last iteration. OP2 removed itself in the last iteration. Therefore, only B2 should be left. The condition of B2 is not met.

In summary: we initialized the simulation with B1 and OP1. In iteration:

1. B1 removed, B2 added, OP1 removed
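The add/remove mechanics above can be sketched in a few lines of plain Python. This is an illustration of the scheduling idea only — it is not the BioDynaMo C++ API, and unlike the default in-place execution context it applies changes at the next step by iterating over a snapshot of the schedule:

```python
# Self-modifying scheduler sketch: behaviors/operations may add or remove
# entries during a step. Each step iterates over a snapshot, so changes
# take effect in the following step (a simplification of BioDynaMo's
# in-place execution context, which applies changes immediately).

log = []
scheduled = []

def b1(t):
    log.append(f"B1@{t}")
    scheduled.remove(b1)       # B1 removes itself ...
    scheduled.append(b2)       # ... and adds B2

def b2(t):
    log.append(f"B2@{t}")
    if t == 1:                 # condition met: schedule a new operation OP2
        scheduled.append(op2)

def op1(t):
    log.append(f"OP1@{t}")
    scheduled.remove(op1)      # OP1 removes itself after one execution

def op2(t):
    log.append(f"OP2@{t}")
    scheduled.remove(op2)      # OP2 also removes itself after one execution

scheduled += [b1, op1]         # initial model: B1 and OP1
for t in range(4):
    for entry in list(scheduled):  # snapshot: edits apply next step
        entry(t)

print(log)
# → ['B1@0', 'OP1@0', 'B2@1', 'B2@2', 'OP2@2', 'B2@3']
```

The trace mirrors the tutorial's story: B1 and OP1 fire once and disappear, B2 runs every step, and OP2 appears after the step-1 condition and fires exactly once.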
https://zbmath.org/authors/?q=ai%3Aliu.jianzhou
# zbMATH — the first resource for mathematics ## Liu, Jianzhou Compute Distance To: Author ID: liu.jianzhou Published as: Liu, J.; Liu, J. Z.; Liu, Jian-Zhou; Liu, Jianzhou Documents Indexed: 196 Publications since 1975, including 2 Books all top 5 #### Co-Authors 10 single-authored 27 Zhang, Juan 10 Huang, Rong 10 Zhu, Li 9 Xie, Qingming 8 He, Anqi 6 Huang, Zejun 5 Liu, Yu 5 Wang, Leilei 4 Huang, Zhuohong 3 Cao, Yanhua 3 Ding, Biwen 3 Guo, Aili 3 Huang, Yunqing 3 Li, Quanbing 3 Mo, Hongmin 3 Xu, Yinghong 3 Xue, Yuan 3 Zhang, Fuzhen 2 He, Lingli 2 Huang, Hao 2 Liang, Kaifu 2 Liao, Anping 2 Tu, Gen 2 Tuo, Qing 2 Xiong, Liang 2 Zhou, Lixin 1 Bai, Ying 1 Chu, Shan 1 Han, Junlin 1 Huang, Liping 1 Kong, Xu 1 Kuang, Qiaoying 1 Li, Guangqi 1 Li, Jicheng 1 Liu, Yu 1 Long, Shunchao 1 Luo, Fangfang 1 Lyu, Zhenhua 1 Ma, Yanbo 1 Pan, Jinsong 1 Quan, Hongzheng 1 Sze, Nung-Sing 1 Tang, Hua 1 Wan, Jian 1 Wang, Jian 1 Wang, Jie 1 Wang, Li 1 Wang, Li 1 Wang, Li 1 Wang, Yanpei 1 Xi, Boyan 1 Yang, Chunlei 1 Yang, Zhongpeng 1 Yuan, Xiuyu 1 Zha, Yaling 1 Zhang, Chaoquan 1 Zhang, Yuelang 1 Zhou, Xueyong all top 5 #### Serials 17 Linear Algebra and its Applications 10 Chinese Journal of Engineering Mathematics 8 Applied Mathematics and Computation 6 Numerical Mathematics 6 Natural Science Journal of Xiangtan University 5 Journal of Mathematical Research & Exposition 5 Journal of Inequalities and Applications 4 International Journal of Control 4 International Journal of Computer Mathematics 4 ELA. The Electronic Journal of Linear Algebra 4 Asian Journal of Control 3 Mathematics in Practice and Theory 3 IMA Journal of Mathematical Control and Information 3 SIAM Journal on Matrix Analysis and Applications 2 Acta Mathematica Sinica 2 IEEE Transactions on Automatic Control 2 Journal of Mathematics. Wuhan University 2 Acta Mathematicae Applicatae Sinica 2 Applied Mathematics and Mechanics. (English Edition) 2 Mathematica Applicata 2 Applied Mathematics. 
Series A (Chinese Edition) 2 Computational and Applied Mathematics 1 Computers & Mathematics with Applications 1 International Journal of Systems Science 1 Journal of the Franklin Institute 1 Linear and Multilinear Algebra 1 Automatica 1 BIT 1 Journal of Computational and Applied Mathematics 1 Mathematica Numerica Sinica 1 Chinese Annals of Mathematics. Series A 1 Acta Scientiarum Naturalium Universitatis Jilinensis 1 Journal of Engineering Mathematics (Xi’an) 1 Mathematical Problems in Engineering 1 Journal of Mathematical Study 1 Journal of Northwest Normal University. Natural Science 1 Nonlinear Analysis. Modelling and Control 1 Applied Mathematics E-Notes 1 Journal of Applied Mathematics and Computing 1 Advances in Difference Equations 1 Journal of University of Science and Technology of Suzhou. Natural Science Edition 1 International Journal of Information & Systems Sciences 1 Journal of Mathematical Inequalities 1 Numerical Mathematics: Theory, Methods and Applications

#### Fields

96 Linear and multilinear algebra; matrix theory (15-XX) 36 Numerical analysis (65-XX) 16 Systems theory; control (93-XX) 2 Ordinary differential equations (34-XX) 2 Difference and functional equations (39-XX) 1 Combinatorics (05-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Operator theory (47-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Probability theory and stochastic processes (60-XX) 1 Statistics (62-XX) 1 Biology and other natural sciences (92-XX)

#### Citations contained in zbMATH

92 Publications have been cited 492 times in 369 Documents.

Correlation analysis of non-probabilistic convex model and corresponding structural reliability technique. Zbl 1230.74240 Jiang, C.; Han, X.; Lu, G. Y.; Liu, J.; Zhang, Z.; Bai, Y. C. 2011 A fuzzy adaptive differential evolution algorithm. Zbl 1076.93513 Liu, J.; Lampinen, J. 2005 Computational techniques for complex transport phenomena. Zbl 0920.76003 Shyy, W.; Thakur, S.
S.; Ouyang, H.; Blosch, E.; Liu, J. 1998 Some properties on Schur complements of H-matrices and diagonally dominant matrices. Zbl 1068.15004 Liu, Jianzhou; Huang, Yunqing 2004 Disc separation of the Schur complement of diagonally dominant matrices and determinantal bounds. Zbl 1107.15022 Liu, Jianzhou; Zhang, Fuzhen 2005 The Schur complements of generalized doubly diagonally dominant matrices. Zbl 1051.15016 Liu, Jianzhou; Huang, Yunqing; Zhang, Fuzhen 2004 On a threshold autoregression with conditional heteroscedastic variances. Zbl 0921.62113 Liu, J.; Li, W. K.; Li, C. W. 1997 Robust solutions for network design under transportation cost and demand uncertainty. Zbl 1153.90341 Mudchanatongsuk, S.; Ordóñez, F.; Liu, J. 2008 The dominant degree and disc theorem for the Schur complement of matrix. Zbl 1189.15023 Liu, Jianzhou; Huang, Zejun; Zhang, Juan 2010 Iterative algorithms for the minimum-norm solution and the least-squares solution of the linear matrix equations $$A_1XB_1 + C_1X^TD_1 = M_1, A_2XB_2 + C_2 X^TD_2 = M_2$$. Zbl 1250.65059 Liang, Kaifu; Liu, Jianzhou 2011 Some properties of Schur complements and diagonal-Schur complements of diagonally dominant matrices. Zbl 1133.15020 Liu, Jianzhou; Li, Jicheng; Huang, Zhuohong; Kong, Xu 2008 Solutions of the generalized Sylvester matrix equation and the application in eigenstructure assignment. Zbl 1303.93090 Yang, Chunlei; Liu, Jianzhou; Liu, Yu 2012 Nonsingular almost strictly sign regular matrices. Zbl 1246.15032 Huang, Rong; Liu, Jianzhou; Zhu, Li 2012 The extension of Roth’s theorem for matrix equations over a ring. Zbl 0880.15016 Huang, Liping; Liu, Jianzhou 1997 A new nonlinear interval programming method for uncertain problems with dependent interval variables. Zbl 1338.90487 Jiang, C.; Zhang, Z. G.; Zhang, Q. F.; Han, X.; Xie, H. C.; Liu, J. 2014 New solution bounds for the continuous algebraic Riccati equation. 
Zbl 1259.15021 Liu, Jianzhou; Zhang, Juan; Liu, Yu 2011 The Schur complements of $$\gamma$$-diagonally and product $$\gamma$$-diagonally dominant matrix and their disc separation. Zbl 1186.15016 Liu, Jianzhou; Huang, Zejun 2010 Design of sliding mode control for neutral delay systems with perturbation in control channels. Zbl 1277.93023 Niu, Yugang; Jia, T.; Huang, J.; Liu, J. 2012 Some inequalities for singular values and eigenvalues of generalized Schur complements of products of matrices. Zbl 0941.15017 Liu, Jianzhou 1999 A minimum principle and estimates of the eigenvalues for Schur complements of positive semidefinite Hermitian matrices. Zbl 0885.15010 Liu, Jianzhou; Zhu, Li 1997 Modelling experts’ attitudes in group decision making. Zbl 1269.91033 Palomares, I.; Liu, J.; Xu, Y.; Martínez, L. 2012 Theorems on Schur complement of block diagonally dominant matrices and their application in reducing the order for the solution of large scale linear systems. Zbl 1231.15017 Liu, Jianzhou; Huang, Zhuohong; Zhu, Li; Huang, Zejun 2011 A new upper bound for the eigenvalues of the continuous algebraic Riccati equation. Zbl 1207.15015 Liu, Jianzhou; Zhang, Juan; Liu, Yu 2010 On Schur complements of sign regular matrices of order $$k$$. Zbl 1217.15041 Huang, Rong; Liu, Jianzhou 2010 Some inequalities for eigenvalues of Schur complements of Hermitian matrices. Zbl 1108.15019 Liu, Jianzhou; Huang, Yunqing; Liao, Anping 2006 Mechanical properties of single-walled carbon nanotube bundles as bulk materials. Zbl 1086.74010 Liu, J. Z.; Zheng, Q.-S.; Wang, L.-F.; Jiang, Q. 2005 Spectral indicator method for a non-selfadjoint Steklov eigenvalue problem. Zbl 1422.65361 Liu, J.; Sun, J.; Turner, T. 2019 Inversion algorithms for large-scale geophysical electromagnetic measurements. Zbl 1181.35317 Abubakar, A.; Habashy, T. M.; Li, M.; Liu, J. 2009 Stopping at the maximum of geometric Brownian motion when signals are received. Zbl 1094.60030 Guo, Xin; Liu, J. 
2005 Statistical inference based on non-smooth estimating functions. Zbl 1064.62026 Tian, L.; Liu, J.; Zhao, Y.; Wei, L. J. 2004 Finite block method in fracture analysis with functionally graded materials. Zbl 1403.74084 Li, J.; Liu, J. Z.; Korakianitis, T.; Wen, P. H. 2017 The Schur complement of strictly doubly diagonally dominant matrices and its application. Zbl 1248.15018 Liu, Jianzhou; Zhang, Juan; Liu, Yu 2012 Trace inequalities for matrix products and trace bounds for the solution of the algebraic Riccati equations. Zbl 1187.15023 Liu, Jianzhou; Zhang, Juan; Liu, Yu 2009 New trace bounds for the product of two matrices and their applications in the algebraic Riccati equation. Zbl 1176.15030 Liu, Jianzhou; Zhang, Juan 2009 A new trace bound for a general square matrix product. Zbl 1366.15007 Liu, Jianzhou; He, Lingli 2007 Buckling of shallow spherical shells including the effects of transverse shear deformation. Zbl 1045.74524 Li, Q. S.; Liu, J.; Tang, J. 2003 Inter-transactional association rules for multi-dimensional contexts for prediction and their application to studying meteorological data. Zbl 0974.68043 Feng, L.; Dillon, T.; Liu, J. 2001 Variable structure control using system decomposition. Zbl 0770.93015 Zohdy, M.; Fadali, M. S.; Liu, J. 1992 The Nekrasov diagonally dominant degree on the Schur complement of Nekrasov matrices and its applications. Zbl 1426.15060 Liu, Jianzhou; Zhang, Juan; Zhou, Lixin; Tu, Gen 2018 New matrix bounds and iterative algorithms for the discrete coupled algebraic Riccati equation. Zbl 1380.93163 Liu, Jianzhou; Wang, Li; Zhang, Juan 2017 Uncertain buckling and reliability analysis of the piezoelectric functionally graded cylindrical shells based on the nonprobabilistic convex model. Zbl 1359.74083 Bi, R. G.; Han, X.; Jiang, C.; Bai, Y. C.; Liu, J. 2014 New upper and lower eigenvalue bounds for the solution of the continuous algebraic Riccati equation. 
Zbl 1286.93085 Liu, Jianzhou; Zhang, Juan 2014 Joint inversion approaches for geophysical electromagnetic and elastic full-waveform data. Zbl 1239.86009 Abubakar, A.; Gao, G.; Habashy, T. M.; Liu, J. 2012 A semi-analytical method for bending, buckling, and free vibration analyses of sandwich panels with square-honeycomb cores. Zbl 1271.74095 Liu, J.; Cheng, Y. S.; Li, R. F.; Au, F. T. K. 2010 Matrix bounds for the solution of the continuous algebraic Riccati equation. Zbl 1210.15016 Zhang, Juan; Liu, Jianzhou 2010 Periodic solutions of evolution equations. Zbl 1071.34056 Ezzinbi, K.; Naito, T.; Minh, N.; Liu, J. 2004 Wavelet theory and its application to pattern recognition. Zbl 0969.68133 Tang, Y. Y.; Yang, L. H.; Liu, J.; Ma, H. 2000 Some inequalities for Schur complements. Zbl 0941.15018 Liu, Jianzhou; Wang, Jian 1999 Some improvement of Oppenheim’s inequality for $$M$$-matrices. Zbl 0874.15015 Liu, Jianzhou; Zhu, Li 1997 Global exponential stability of positive periodic solutions for a cholera model with saturated treatment. Zbl 1416.92166 Quan, Hongzheng; Zhou, Xueyong; Liu, Jianzhou 2018 Distributed adaptive high-gain extended Kalman filtering for nonlinear systems. Zbl 1381.93096 Rashedi, M.; Liu, J.; Huang, B. 2017 The closure property of $$\mathcal{H}$$-tensors under the Hadamard product. Zbl 1375.15039 Zhou, Lixin; Liu, Jianzhou; Zhu, Li 2017 Propagation behavior of elastic waves in one-dimensional piezoelectric/piezomagnetic phononic crystal with line defects. Zbl 1346.74047 Pang, Y.; Jiao, F.; Liu, J. 2014 The improved disc theorems for the Schur complements of diagonally dominant matrices. Zbl 1279.15016 Zhang, Juan; Liu, Jianzhou; Tu, Gen 2013 Bounds for the eigenvalues of the continuous algebraic Riccati equation. Zbl 1260.93068 Liu, Jianzhou; Zhang, Juan 2011 Upper solution bounds of the continuous coupled algebraic Riccati matrix equation. 
Zbl 1245.93059 Liu, Jianzhou; Zhang, Juan 2011 New lower solution bounds for the continuous algebraic Riccati equation. Zbl 1223.15021 Zhang, Juan; Liu, Jianzhou 2011 Criteria and Schur complements of $$H$$-matrices. Zbl 1188.15026 Liu, Jianzhou; Zhang, Fuzhen 2010 $$\alpha$$-diagonally dominant and criteria for generalized strictly diagonally dominant matrices. Zbl 1199.15097 Zhang, Yuelang; Mo, Hongmin; Liu, Jianzhou 2009 Love waves in a smart functionally graded piezoelectric composite structure. Zbl 1397.74106 Liu, J.; Cao, X. S.; Wang, Z. K. 2009 Parametric estimation of mixtures of two uniform distributions. Zbl 1169.62011 Hussein, A.; Liu, J. 2009 Simple criteria for generalized diagonally dominant matrices. Zbl 1151.65024 Liu, Jianzhou; He, Anqi 2008 Some results on Oppenheim’s inequalities for M-matrices. Zbl 0961.15015 Yang, Zhongpeng; Liu, Jianzhou 2000 Assessment of grid interface treatments for multi-block incompressible viscous flow computation. Zbl 0888.76057 Liu, J.; Shyy, W. 1996 $$Z$$-eigenvalue inclusion theorem of tensors and the geometric measure of entanglement of multipartite pure states. Zbl 1449.15026 Xiong, Liang; Liu, Jianzhou 2020 Multi-scale modelling of granular materials: numerical framework and study on micro-structural features. Zbl 07037447 Liu, J.; Bosco, E.; Suiker, A. S. J. 2019 The solution bounds and fixed point iterative algorithm for the discrete coupled algebraic Riccati equation applied to automatic control. Zbl 1397.93072 Liu, Jianzhou; Wang, Li; Zhang, Juan 2017 New upper matrix bounds with power form for the solution of the continuous coupled algebraic Riccati matrix equation. Zbl 1365.93472 Liu, Jianzhou; Wang, Yanpei; Zhang, Juan 2017 A stochastic scaled boundary finite element method. Zbl 1439.74445 Long, X. Y.; Jiang, C.; Yang, C.; Han, X.; Gao, W.; Liu, J. 2016 Several criteria for judging $$H$$- and non-$$H$$-matrices. 
Zbl 1384.15010 Liu, Jianzhou; Wang, Leilei; Lyu, Zhenhua 2016 A contact detection algorithm for multi-sphere particles by means of two-level-grid-searching in DEM simulations. Zbl 1352.74202 Fang, Z. Q.; Hu, G. M.; Du, J.; Fan, Z.; Liu, J. 2015 Using pulsar timing arrays and the quantum normalization condition to constrain relic gravitational waves. Zbl 06281203 Tong, M. L.; Zhang, Y.; Zhao, W.; Liu, J. Z.; Zhao, C. S.; Yang, T. G. 2014 Temporal multiscale approach for nanocarrier motion with simultaneous adhesion and hydrodynamic interactions in targeted drug delivery. Zbl 1377.76049 Radhakrishnan, R.; Uma, B.; Liu, J.; Ayyaswamy, P. S.; Eckmann, D. M. 2013 The Bunch-Kaufman factorization of symmetric matrices signature similar to sign regular matrices. Zbl 1305.65112 Huang, Rong; Liu, Jianzhou; Zhu, Li; Pan, Jinsong 2013 New matrix bounds, an existence uniqueness and a fixed-point iterative algorithm for the solution of the unified coupled algebraic Riccati equation. Zbl 1255.15011 Zhang, Juan; Liu, Jianzhou 2012 Stability and robustness analysis of a linear time-periodic system subjected to random perturbations. Zbl 1248.93171 Redkar, Sangram; Liu, J.; Sinha, S. C. 2012 Meshless study of dynamic failure in shells. Zbl 1254.74117 Liu, J. 2011 The existence uniqueness and the fixed iterative algorithm of the solution for the discrete coupled algebraic Riccati equation. Zbl 1227.93075 Liu, Jianzhou; Zhang, Juan 2011 On zero-pattern invariant properties of structured matrices. Zbl 1225.15028 Huang, Rong; Liu, Jianzhou 2011 Characterizations of inverse $$M$$-matrices with special zero patterns. Zbl 1195.15037 Huang, Rong; Liu, Jianzhou; Sze, Nung-Sing 2010 New estimates for the solution of the Lyapunov matrix differential equation. Zbl 1217.93135 Zhang, Juan; Liu, Jianzhou 2010 An interleaved iterative criterion for $$H$$-matrices. Zbl 1132.65023 Liu, Jianzhou; He, Anqi 2007 A new algorithmic characterization of $$H$$-matrices. 
Zbl 1115.65046 Liu, Jianzhou; He, Anqi 2006 Linear correlation between fractal dimension of EEG signal and handgrip force. Zbl 1116.92039 Liu, J. Z.; Yang, Q.; Yao, B.; Brown, R. W.; Yue, G. H. 2005 High-resolution schemes for bubbling flow computations. Zbl 1163.76394 Zhao, X.; Richards, P. G.; Zhang, S. J.; Liu, J. 2005 Transient anti-plane crack problem of a functionally graded piezoelectric strip bonded to elastic layers. Zbl 1063.74088 Chen, J.; Soh, A. K.; Liu, J.; Liu, Z. X. 2004 Fault detection method for nonlinear systems based on probabilistic neural network filtering. Zbl 1031.93083 Liu, J.; Scherpen, J. M. A. 2002 Parallel algorithms for particles-turbulence two-way interaction direct numerical simulation. Zbl 1007.68208 Ling, W.; Liu, J.; Chung, J. N.; Crowe, C. T. 2002 Some Löwner partial orders of Schur complements and Kronecker products of matrices. Zbl 0936.15016 Liu, Jianzhou 1999 Some estimates for correlation coefficients of a system of random vectors. Zbl 0920.62070 Liu, Jianzhou 1999 Some interlacing properties of Schur complements of quaternion self-conjugate matrices. Zbl 0926.15015 Liu, Jianzhou; Yuan, Xiuyu 1998 Porohyperelastic-transport-swelling theory, material properties and finite element models for large arteries. Zbl 0968.74561 Simon, B. R.; Kaufman, M. V.; Liu, J.; Baldwin, A. L. 1998

#### Cited by 737 Authors

26 Liu, Jianzhou 12 Ma, Changfeng 11 Peña, Juan Manuel 11 Shyy, Wei 11 Zhang, Juan 8 Alonso, Pedro 8 Serrano, María Luisa 6 Huang, Rong 6 Li, Yaotang 6 Zhang, Huamin 5 Cvetković, Ljiljana 5 Tseng, Chienchou 5 Wang, Xiaojun 5 Zhang, Chengyi 4 Cline, Daren B. H. 4 Gao, Cunchen 4 Jiang, Chao 4 Kao, Yonggui 4 Li, Chaoqian 4 Li, Yunlong 4 Liu, Zhen 4 Nedović, Maja 4 Wang, Guoyu 4 Wang, Lei 4 Wu, Aiguo 4 Xie, Yajun 4 Zhao, Jianxing 3 Duan, Guangren 3 Feng, Gang 3 Huang, Na 3 Huang, Zejun 3 Huang, Zhuohong 3 Kang, Zhan 3 Kumar Das, Manab 3 Li, Gang 3 Li, Jicheng 3 Liao, Anping 3 Ni, Bingyu 3 Pramanik, Shantanu 3 Wang, Chong 3 Wang, Feng 3 Xu, Menghui 3 Zheng, Bing 2 Arcoumanis, C. 2 Banu, S. Mehar 2 Baragona, Roberto 2 Bayoumi, Ahmed M. E.
2 Bi, Hai 2 Bi, Rengui 2 Blanchini, Franco 2 Boizot, Nicolas 2 Bru, Rafael 2 Busvelle, Éric 2 Chen, Shencan 2 Chiou, Suh-Wen 2 Chung, Byung Do 2 Deng, Zhongmin 2 Dick, Erik 2 Ding, Feng 2 Du, Wenbo 2 Ezzinbi, Khalil 2 Farid, Farid O. 2 Gao, Lei 2 Garbey, Marc 2 Garde, Henrik 2 Gavaises, Manolis 2 Giménez, Isabel 2 Grigorenko, Alexander Ya. 2 Guo, Zhaopu 2 Gupta, Amit Kumar 2 Han, Xu 2 Hao, Peng 2 Ho, Wenhsien 2 Hu, Hao 2 Huang, Baohua 2 Huang, Yunqing 2 Huang, Zhengge 2 Jiang, Cheng-Dong 2 Jiang, Yanping 2 Knudsen, Kim M. 2 Kundalwal, S. I. 2 Lempa, Jukka 2 Li, Wai Keung 2 Li, Ying 2 Li, Yueqiu 2 Liang, Xia 2 Ling, Shiqing 2 Liu, Jie 2 Liu, Qilong 2 Liu, Xiaoxiao 2 Lu, Quan 2 Luo, Shuanghua 2 Lv, Changqing 2 Mallipeddi, Rammohan 2 Mannseth, Trond 2 Medeiros, Marcelo C. 2 Meng, Zeng 2 Nerinckx, Krista 2 Ni, Boyu 2 Poss, Michael ...and 637 more Authors

#### Cited in 123 Serials

30 Applied Mathematics and Computation 23 Linear Algebra and its Applications 19 Applied Mathematical Modelling 14 Computer Methods in Applied Mechanics and Engineering 13 Journal of Computational and Applied Mathematics 11 Journal of Computational Physics 10 Acta Mechanica 10 Asian Journal of Control 9 Computers & Mathematics with Applications 8 Journal of Inequalities and Applications 7 Journal of the Franklin Institute 7 Linear and Multilinear Algebra 7 European Journal of Operational Research 7 Mathematical Problems in Engineering 6 Automatica 6 Information Sciences 6 Computers & Operations Research 5 International Journal for Numerical Methods in Fluids 5 Acta Mechanica Sinica 4 Computers and Fluids 4 International Journal of Control 4 Journal of Global Optimization 4 Journal of Applied Mathematics and Computing 3 Advances in Applied Probability 3 International Journal of Heat and Mass Transfer 3 Inverse Problems 3 Networks 3 International Journal of Computer Mathematics 3 Computational and Applied Mathematics 3 Soft Computing 3 European Journal of Mechanics. A.
Solids 3 Journal of Applied Mathematics 3 Advances in Difference Equations 3 Networks and Spatial Economics 2 International Journal of Systems Science 2 Journal of Fluid Mechanics 2 Scandinavian Journal of Statistics 2 Journal of Econometrics 2 Journal of Time Series Analysis 2 Numerical Algorithms 2 Journal of Statistical Computation and Simulation 2 Pattern Recognition 2 Computational Statistics and Data Analysis 2 Computational Optimization and Applications 2 International Applied Mechanics 2 Annals of Mathematics and Artificial Intelligence 2 Complexity 2 Engineering Analysis with Boundary Elements 2 Nonlinear Dynamics 2 Econometric Theory 2 Nonlinear Analysis. Modelling and Control 2 International Journal of Computational Methods 2 Journal of Industrial and Management Optimization 2 Inverse Problems and Imaging 2 Algorithms 2 Journal of Control Science and Engineering 2 International Journal of Structural Stability and Dynamics 2 Open Mathematics 1 Biological Cybernetics 1 The Canadian Journal of Statistics 1 General Relativity and Gravitation 1 International Journal of Solids and Structures 1 Journal of Engineering Mathematics 1 Journal of Mathematical Analysis and Applications 1 Journal of the Mechanics and Physics of Solids 1 Mathematical Notes 1 The Annals of Statistics 1 Applied Mathematics and Optimization 1 BIT 1 International Journal for Numerical Methods in Engineering 1 Journal of Statistical Planning and Inference 1 Naval Research Logistics 1 Journal of Information & Optimization Sciences 1 Systems & Control Letters 1 Statistics & Probability Letters 1 Circuits, Systems, and Signal Processing 1 Applied Mathematics and Mechanics. 
(English Edition) 1 Acta Applicandae Mathematicae 1 Applied Numerical Mathematics 1 Econometric Reviews 1 Computational Mechanics 1 Mathematical and Computer Modelling 1 Journal of Scientific Computing 1 Japan Journal of Industrial and Applied Mathematics 1 The Annals of Applied Probability 1 SIAM Journal on Applied Mathematics 1 Stochastic Processes and their Applications 1 Mathematical Programming. Series A. Series B 1 Applied Mathematics. Series B (English Edition) 1 Numerical Linear Algebra with Applications 1 International Journal of Numerical Methods for Heat & Fluid Flow 1 Georgian Mathematical Journal 1 Journal of Inverse and Ill-Posed Problems 1 Top 1 International Transactions in Operational Research 1 Journal of Vibration and Control 1 Differential Equations and Dynamical Systems 1 Abstract and Applied Analysis 1 Journal of Combinatorial Optimization 1 Mechanism and Machine Theory ...and 23 more Serials

#### Cited in 32 Fields

107 Linear and multilinear algebra; matrix theory (15-XX) 103 Numerical analysis (65-XX) 67 Operations research, mathematical programming (90-XX) 60 Mechanics of deformable solids (74-XX) 47 Systems theory; control (93-XX) 37 Statistics (62-XX) 30 Fluid mechanics (76-XX) 22 Computer science (68-XX) 19 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 17 Ordinary differential equations (34-XX) 15 Probability theory and stochastic processes (60-XX) 14 Partial differential equations (35-XX) 10 Biology and other natural sciences (92-XX) 7 Calculus of variations and optimal control; optimization (49-XX) 6 Information and communication theory, circuits (94-XX) 5 Difference and functional equations (39-XX) 4 Dynamical systems and ergodic theory (37-XX) 4 Mechanics of particles and systems (70-XX) 4 Optics, electromagnetic theory (78-XX) 3 Harmonic analysis on Euclidean spaces (42-XX) 3 Geophysics (86-XX) 2 Combinatorics (05-XX) 2 Special functions (33-XX) 2 Operator theory (47-XX) 2 Classical thermodynamics, heat transfer (80-XX) 1 Mathematical logic and foundations (03-XX) 1 Associative rings and algebras (16-XX) 1 Measure and integration (28-XX) 1 Functional analysis (46-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Relativity and gravitational theory (83-XX) 1 Astronomy and astrophysics (85-XX)
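Many of the most-cited titles above concern Schur complements of diagonally dominant matrices. As a hedged aside, the basic phenomenon those papers sharpen — that the Schur complement of a strictly diagonally dominant matrix is again strictly diagonally dominant — can be checked numerically; the 3×3 matrix below is a hypothetical example invented purely for illustration, not taken from any of the listed papers:

```python
import numpy as np

# A strictly diagonally dominant matrix (hypothetical example):
# each diagonal entry exceeds the sum of the off-diagonal
# absolute values in its row.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.5, 1.0, 3.0]])

# Partition A as [[A11, B], [C, D]] with A11 the leading 1x1 block,
# and form the Schur complement S = D - C * A11^{-1} * B.
A11 = A[:1, :1]
B, C, D = A[:1, 1:], A[1:, :1], A[1:, 1:]
S = D - C @ np.linalg.inv(A11) @ B

# Check strict diagonal dominance of S row by row.
n = S.shape[0]
dominant = all(
    abs(S[i, i]) > sum(abs(S[i, j]) for j in range(n) if j != i)
    for i in range(n)
)
print(dominant)  # True
```

Here S works out to [[4.75, 1.75], [0.875, 2.875]], and both rows remain strictly dominant, consistent with the classical inheritance result.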
https://academia.stackexchange.com/questions
# All Questions

35,172 questions

### What should I do if a professor yells at me and a fellow student in class?
So I'm taking a storyboarding class at my college and we have a big project coming up. I've been working on it a lot and talked about the story with my professor, and he seemed to really enjoy it. We'...

### What are the disadvantages of marrying during a PhD? [closed]
I know that marriage, like education, is just a part of life, one of its great sacraments. I want to know the disadvantages of marrying during ...

### How to compensate students who face technical issues in online exams
I am teaching a college-level course, and a few students had technical issues during exam 1 (held online) and were not able to complete various parts of the exam. One option that I could think of is ...

### How to use a single equation number with flalign in LaTeX? [migrated]
I have the following code to output an equation. I want to use a single equation number for this whole block. How can we do it?

### Is it okay to give students advice on managing academic work?
My journey through academia has been a hard-fought battle against myself and all my worst traits. I'm now finishing my PhD and concurrently teaching courses in which I inevitably come across students ...

### Adding co-author without consent of all co-authors [closed]
I have been collaborating with an individual on a scientific paper. I have documents showing that this individual has shared the paper content with other people. Also, the individual has added co-...
https://physics.stackexchange.com/questions/458107/confusion-regarding-hookes-law
# Confusion regarding Hooke's law

I am used to seeing Hooke's law in the form $$F = kx$$ but another one of my books gives the equation $$F = \frac{\lambda x}{l}$$ where $$l$$ is the unstretched length, $$x$$ is the extension and $$\lambda$$ is the modulus of elasticity. My confusion lies in what the modulus of elasticity actually is, as I initially thought it was Young's modulus but then realised its units are newtons. I don't understand what it is actually measuring, as it's not something like 'newtons per [unit]'. I would understand if the modulus were $$\frac{N}{l}$$, as that seems to make sense, but the $$\lambda$$ on its own does not. Any help?

Physicists tend to use the spring constant $$k$$ and mathematicians the modulus of elasticity $$\lambda$$. Suppose that there is a specification for a type of spring comprising the material of which it is made, the diameter of the wire of which the spring is made, the diameter of the helix, etc. Then all springs with this specification, irrespective of their length, will have the same modulus of elasticity $$\lambda$$, but the spring constant $$k$$ will depend on the length of the spring.

Suppose that a spring of length $$L$$ extends by $$y$$ when a force $$F$$ is applied. The spring constant of this spring is $$k_1= \frac {F}{y}$$. The same is true of another spring of the same length with the same specification. Connecting the two springs in series, with total length $$2L$$, means that when a force $$F$$ is applied each spring extends by the same amount $$y$$, so the total extension of the two springs is $$2y$$. The spring constant of this longer spring is therefore $$k_2=\frac{F}{2y}$$, which is half the spring constant of one spring, i.e. $$k_2=\frac 12 k_1$$.

Now consider the modulus of elasticity. For one spring $$\lambda_1= \frac{FL}{y}$$, and for two springs in series $$\lambda_2= \frac{F\cdot 2L}{2y}= \frac{FL}{y} =\lambda_1$$. Thus the modulus of elasticity does not depend on the length of the spring.
Young's modulus is somewhat different, as it is a property of the material from which the spring is made, irrespective of the size and shape of the material.

If you glue two elastic bars together end to end and apply a lengthwise force to them, the force will make each of them stretch or compress according to Hooke's law. The total change in length of the aggregate bar is the sum of the changes in length of each of them, so the compound bar follows Hooke's law with a $$1/k$$ that is the sum of the $$1/k$$s of the individual bars. Conversely, if you take a uniform bar and cut it into two identical pieces, the $$k$$ of each piece must (by symmetry) be twice the original $$k$$. By extension of these arguments, for a particular combination of material, cross section, etc., the $$k$$ of a bar is inversely proportional to its length. The proportionality constant is the modulus of elasticity you find in the second equation. It is the same as $$k\cdot l$$, so its unit is $$\rm (N/m)\cdot m$$, where the meters cancel out and leave only newtons. Intuitively, units of force correspond to thinking of it as "if Hooke's law worked for arbitrarily large displacements (which it doesn't), how much force would be necessary to compress a bar of this material and thickness to length $$0$$?" [This doesn't work if the bar is so long that it starts to buckle under compression, or so short that non-uniformity near its ends begins to matter -- for example, friction with whatever we use to push on the ends might prevent the material from expanding in the lateral direction when we compress it, so it may appear stiffer than it ought to be according to its modulus of elasticity. But it's a valid approximation for a useful range of lengths.]

• Thanks! This answer, along with another thing I read about deriving it from Young's modulus, helped a lot. – DevinJC Jan 31 '19 at 21:51
• I think you've made a slip in the first para. $k$ as usually defined is halved if you put 2 wires or springs in series. – Philip Wood Jan 31 '19 at 21:53
• @PhilipWood: Indeed, I had $k$ confused with $k^{-1}$ in much of my thinking. I hope it is more correct now. – hmakholm left over Monica Jan 31 '19 at 22:02

$$\lambda$$ is proportional to Young's modulus times the cross-sectional area of the spring wire, so it has units of force. Actually, when a spring extends, it is because of relative shear rotations of the wire cross sections, which couple with the spring helix shape to translate into an axial extension. So it is actually the shear modulus of the metal (which is proportional to Young's modulus) that determines spring axial stiffness.
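The series-spring argument above can be checked numerically. A minimal sketch, assuming the relations from the answer (k = F/extension, λ = k·L); the values k = 100 N/m and L = 0.5 m are made-up illustrations, not from the thread:

```python
def series_constant(k1, k2):
    """Effective spring constant of two springs in series: 1/k = 1/k1 + 1/k2."""
    return 1.0 / (1.0 / k1 + 1.0 / k2)

k = 100.0    # N/m, spring constant of one spring (illustrative value)
L = 0.5      # m, its unstretched length (illustrative value)
lam = k * L  # modulus of elasticity, lambda = k * L, in newtons

# Two identical springs in series: the spring constant halves...
k_series = series_constant(k, k)
assert abs(k_series - k / 2) < 1e-9

# ...but the modulus of elasticity of the doubled-length spring is unchanged:
lam_series = k_series * (2 * L)
assert abs(lam_series - lam) < 1e-9
```

This reproduces the answer's conclusion: k depends on length, λ does not.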
http://www.khanacademy.org/math/algebra/systems-of-eq-and-ineq/systems-with-substitution
# Solving systems with substitution

7 videos, 1 skill

This tutorial is focused on solving systems through substitution. This is covered in several other tutorials, but this one focuses on substitution with more examples than you can shake a dog at. As always, pause the video and try to solve before Sal does.

- Example 1: Solving systems by substitution (video, 3:44)
- Example 2: Solving systems by substitution (video, 4:06)
- Example 3: Solving systems by substitution (video, 6:04)
- The substitution method (video, 4:39)
- Substitution method 2 (video, 3:44)
- Substitution method 3 (video, 5:58)
- Practice using substitution for systems (video, 4:21)
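The substitution procedure the videos demonstrate can be sketched in a few lines. The system below (y = 2x + 1 and 3x + y = 11) is an invented example, not one taken from the videos:

```python
# Solve  y = 2x + 1  and  3x + y = 11  by substitution.

# Step 1: the first equation already gives y in terms of x: y = 2x + 1.
# Step 2: substitute into the second: 3x + (2x + 1) = 11  ->  5x = 10.
x = (11 - 1) / (3 + 2)
# Step 3: back-substitute into the first equation to recover y.
y = 2 * x + 1

# Both original equations should hold:
assert y == 2 * x + 1
assert 3 * x + y == 11
```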
http://mathhelpforum.com/calculus/19968-normal-surface.html
# Thread: Normal to a Surface

1. ## Normal to a Surface

I'd be VERY grateful for any help anyone can offer! Find the normal vector to the surface described by: I really have no idea where to begin. I can find the partial derivatives but not sure if I even need to use them. Thanks for any help!

2. Ummmm, this is an ellipsoid. There are going to be many many normal vectors to this. (Any vector is normal to it at some point on the surface, in fact.) Do you have a specific point you need to find a normal at?

-Dan
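The poster's surface equation was an image that didn't survive extraction, but the reply identifies it as an ellipsoid, and the poster's instinct about partial derivatives is right: for any level surface F(x, y, z) = c, a normal vector at a point is the gradient of F there. A sketch for a generic ellipsoid x²/a² + y²/b² + z²/c² = 1 (the semi-axes a, b, c are placeholders, not values from the thread):

```python
def ellipsoid_normal(x, y, z, a, b, c):
    """Normal to x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 at (x, y, z): the gradient
    of F = x^2/a^2 + y^2/b^2 + z^2/c^2, evaluated at the point."""
    return (2 * x / a**2, 2 * y / b**2, 2 * z / c**2)

# Sanity check: for a = b = c = 1 (the unit sphere) the normal is radial,
# i.e. proportional to the position vector of the point itself.
n = ellipsoid_normal(0.6, 0.8, 0.0, 1, 1, 1)
assert n == (1.2, 1.6, 0.0)
```

As Dan notes, the normal depends on which point of the surface you pick; this function makes that dependence explicit.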
https://entirelyuseless.com/tag/james-ross/
# Idealized Idealization On another occasion, I discussed the Aristotelian idea that the act of the mind does not use an organ. In an essay entitled Immaterial Aspects of Thought, James Ross claims that he can establish the truth of this position definitively. He summarizes the argument: Some thinking (judgment) is determinate in a way no physical process can be. Consequently, such thinking cannot be (wholly) a physical process. If all thinking, all judgment, is determinate in that way, no physical process can be (the whole of) any judgment at all. Furthermore, “functions” among physical states cannot be determinate enough to be such judgments, either. Hence some judgments can be neither wholly physical processes nor wholly functions among physical processes. Certain thinking, in a single case, is of a definite abstract form (e.g. N x N = N²), and not indeterminate among incompossible forms (see I below). No physical process can be that definite in its form in a single case. Adding cases even to infinity, unless they are all the possible cases, will not exclude incompossible forms. But supplying all possible cases of any pure function is impossible. So, no physical process can exclude incompossible functions from being equally well (or badly) satisfied (see II below). Thus, no physical process can be a case of such thinking. The same holds for functions among physical states (see IV below). In essence, the argument is that squaring a number and similar things are infinitely precise processes, and no physical process is infinitely precise. Therefore squaring a number and similar things are not physical processes. The problem is unfortunately with the major premise here. Squaring a number, and similar things, in the way that we in fact do them, are not infinitely precise processes. Ross argues that they must be: Can judgments really be of such definite “pure” forms? 
They have to be; otherwise, they will fail to have the features we attribute to them and upon which the truth of certain judgments about validity, inconsistency, and truth depend; for instance, they have to exclude incompossible forms or they would lack the very features we take to be definitive of their sorts: e.g., conjunction, disjunction, syllogistic, modus ponens, etc. The single case of thinking has to be of an abstract “form” (a “pure” function) that is not indeterminate among incompossible ones. For instance, if I square a number–not just happen in the course of adding to write down a sum that is a square, but if I actually square the number–I think in the form “N x N = N².” The same point again. I can reason in the form, modus ponens (“If p then q”; “p”; “therefore, q”). Reasoning by modus ponens requires that no incompossible forms also be “realized” (in the same sense) by what I have done. Reasoning in that form is thinking in a way that is truth-preserving for all cases that realize the form. What is done cannot, therefore, be indeterminate among structures, some of which are not truth preserving. That is why valid reasoning cannot be only an approximation of the form, but must be of the form. Otherwise, it will as much fail to be truth-preserving for all relevant cases as it succeeds; and thus the whole point of validity will be lost. Thus, we already know that the evasion, “We do not really conjoin, add, or do modus ponens but only simulate them,” cannot be correct. Still, I shall consider it fully below.

“It will as much fail to be truth-preserving for all relevant cases as it succeeds” is an exaggeration here. If you perform an operation which approximates modus ponens, then that operation will be approximately truth preserving. It will not be equally truth preserving and not truth preserving.
I have noted many times in the past, as for example here, here, here, and especially here, that following the rules of syllogism does not in practice infallibly guarantee that your conclusions are true, even if your premises are in some way true, because of the vagueness of human thought and language. In essence, Ross is making a contrary argument: we know, he is claiming, that our arguments infallibly succeed; therefore our thoughts cannot be vague. But it is empirically false that our arguments infallibly succeed, so the argument is mistaken right from its starting point. There is also a strawmanning of the opposing position here insofar as Ross describes those who disagree with him as saying that “we do not really conjoin, add, or do modus ponens but only simulate them.” This assumes that unless you are doing these things perfectly, rather than approximating them, then you are not doing them at all. But this does not follow. Consider a triangle drawn on a blackboard. Consider which of the following statements is true: 1. There is a triangle drawn on the blackboard. 2. There is no triangle drawn on the blackboard. Obviously, the first statement is true, and the second false. But in Ross’s way of thinking, we would have to say, “What is on the blackboard is only approximately triangular, not exactly triangular. Therefore there is no triangle on the blackboard.” This of course is wrong, and his description of the opposing position is wrong in the same way. Naturally, if we take “triangle” as shorthand for “exact rather than approximate triangle” then (2) will be true. And in a similar way, if take “really conjoin” and so on as shorthand for “really conjoin exactly and not approximately,” then those who disagree will indeed say that we do not do those things. 
But this is not a problem unless you are assuming from the beginning that our thoughts are infinitely precise, and Ross is attempting to establish that this must be the case, rather than claiming to take it as given. (That is, the summary takes it as given, but Ross attempts throughout the article to establish it.) One could attempt to defend Ross’s position as follows: we must have infinitely precise thoughts, because we can understand the words “infinitely precise thoughts.” Or in the case of modus ponens, we must have an infinitely precise understanding of it, because we can distinguish between “modus ponens, precisely,” and “approximations of modus ponens“. But the error here is similar to the error of saying that one must have infinite certainty about some things, because otherwise one will not have infinite certainty about the fact that one does not have infinite certainty, as though this were a contradiction. It is no contradiction for all of your thoughts to be fallible, including this one, and it is no contradiction for all of your thoughts to be vague, including your thoughts about precision and approximation. The title of this post in fact refers to this error, which is probably the fundamental problem in Ross’s argument. Triangles in the real world are not perfectly triangular, but we have an idealized concept of a triangle. In precisely the same way, the process of idealization in the real world is not an infinitely precise process, but we have an idealized concept of idealization. Concluding that our acts of idealization must actually be ideal in themselves, simply because we have an idealized concept of idealization, would be a case of confusing the way of knowing with the way of being. It is a particularly confusing case simply because the way of knowing in this case is also materially the being which is known. But this material identity does not make the mode of knowing into the mode of being. 
We should consider also Ross’s minor premise, that a physical process cannot be determinate in the way required: Whatever the discriminable features of a physical process may be, there will always be a pair of incompatible predicates, each as empirically adequate as the other, to name a function the exhibited data or process “satisfies.” That condition holds for any finite actual “outputs,” no matter how many. That is a feature of physical process itself, of change. There is nothing about a physical process, or any repetitions of it, to block it from being a case of incompossible forms (“functions”), if it could be a case of any pure form at all. That is because the differentiating point, the point where the behavioral outputs diverge to manifest different functions, can lie beyond the actual, even if the actual should be infinite; e.g., it could lie in what the thing would have done, had things been otherwise in certain ways. For instance, if the function is x(*)y = (x + y, if y < 10^40 years, = x + y +1, otherwise), the differentiating output would lie beyond the conjectured life of the universe. Just as rectangular doors can approximate Euclidean rectangularity, so physical change can simulate pure functions but cannot realize them. For instance, there are no physical features by which an adding machine, whether it is an old mechanical “gear” machine or a hand calculator or a full computer, can exclude its satisfying a function incompatible with addition, say quaddition (cf. Kripke’s definition of the function to show the indeterminacy of the single case: quus, symbolized by the plus sign in a circle, “is defined by: x quus y = x + y, if x, y < 57, =5 otherwise”) modified so that the differentiating outputs (not what constitutes the difference, but what manifests it) lie beyond the lifetime of the machine. The consequence is that a physical process is really indeterminate among incompatible abstract functions. 
Extending the list of outputs will not select among incompatible functions whose differentiating “point” lies beyond the lifetime (or performance time) of the machine. That, of course, is not the basis for the indeterminacy; it is just a grue-like illustration. Adding is not a sequence of outputs; it is summing; whereas if the process were quadding, all its outputs would be quadditions, whether or not they differed in quantity from additions (before a differentiating point shows up to make the outputs diverge from sums). For any outputs to be sums, the machine has to add. But the indeterminacy among incompossible functions is to be found in each single case, and therefore in every case. Thus, the machine never adds. There is some truth here, and some error here. If we think about a physical process in the particular way that Ross is considering it, it will be true that it will always be able to be interpreted in more than one way. This is why, for example, in my recent discussion with John Nerst, John needed to say that the fundamental cause of things had to be “rules” rather than e.g. fundamental particles. The movement of particles, in itself, could be interpreted in various ways. “Rules,” on the other hand, are presumed to be something which already has a particular interpretation, e.g. adding as opposed to quadding. On the other hand, there is also an error here. The prima facie sign of this error is the statement that an adding machine “never adds.” Just as according to common sense we can draw triangles on blackboards, so according to common sense the calculator on my desk can certainly add. This is connected with the problem with the entire argument. Since “the calculator can add” is true in some way, there is no particular reason that “we can add” cannot be true in precisely the same way. Ross wishes to argue that we can add in a way that the calculator cannot because, in essence, we do it infallibly; but this is flatly false. We do not do it infallibly. 
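Kripke's quus function, as quoted above, is concrete enough to write down. The sketch below merely illustrates the "differentiating point": on small arguments every output of quus agrees with addition, so no finite run of small cases can distinguish the two functions:

```python
def quus(x, y):
    """Kripke's quus, as quoted above: x + y if both x, y < 57, else 5."""
    return x + y if x < 57 and y < 57 else 5

# quus agrees with + on every pair of arguments below the threshold...
assert all(quus(x, y) == x + y for x in range(57) for y in range(57))

# ...and diverges only past the differentiating point:
assert quus(60, 2) == 5
```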
Considered metaphysically, the problem here is ignorance of the formal cause. If physical processes were entirely formless, they indeed would have no interpretation, just as a formless human (were that possible) would be a philosophical zombie. But in reality there are forms in both cases. In this sense, Ross’s argument comes close to saying “human thought is a form or formed, but physical processes are formless.” Since in fact neither is formless, there is no reason (at least established by this argument) why thought could not be the form of a physical process.
https://heliportliptov.sk/parting-gift-mjijvo/geometry-calculator-math-papa-800bcb
Mathematics is definitely among the top fears of students across the globe. Although the educational system presents numerous opportunities for students to enjoy developing new skills, excelling at sports, and practicing public speaking, it seems that nothing is working when it comes to mathematics. Breaking down a problem is one thing; graphing is another, and graphing gives you a better visual understanding of the problem.

HOW TO USE THE CALCULATOR: Just type your problem into the text box. Enter all problems without the unit. Then, after the calculator gives you the answer, put it next to the answer.

Example #1: Suppose you are looking at a right triangle and the side opposite the right angle is missing.

Get help on your algebra problems with the MathPapa Algebra Calculator, on the web or with the app. Related tools include a factor calculator, a derivative calculator, a free trigonometric-equation calculator that solves trigonometric equations step by step, and a free solid-geometry calculator that calculates characteristics of solids (3D shapes) step by step.
A geometric sequence has the form $a_1, a_1 r, a_1 r^2, \ldots$ You need to provide the first term of the sequence ($$a_1$$), the constant ratio between two consecutive values of the sequence ($$r$$), and the number of steps further in the sequence ($$n$$).

Graphing provides a visual structure to what you're learning in mathematics. To help strengthen your overall math and graphing skills, below is a collection of 118 graphing calculators separated by skill level and type. In the Practice section, what you're getting are a variety of math problems that you will practice solving yourself.

Use the Pythagorean theorem calculator to find the hypotenuse or either of the other two legs. Point #2: Enter point #2 in the boxes that say x2, y2. Click on the "Calculate" button to solve for all unknown variables. There is a complete solution delivered for each issue, to satisfy every teacher or student, and you get a calculator that is able to solve most algebra problems you put in.
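The geometric-sequence calculator's inputs described above (a₁, r, n) determine the n-th term by a_n = a₁·r^(n−1). A minimal sketch of that computation (the function name is mine, not MathPapa's):

```python
def geometric_term(a1, r, n):
    """n-th term of the geometric sequence a1, a1*r, a1*r^2, ... (n is 1-based)."""
    return a1 * r ** (n - 1)

# The sequence 3, 6, 12, 24, ... has a1 = 3 and r = 2:
assert geometric_term(3, 2, 1) == 3
assert geometric_term(3, 2, 4) == 24
```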
The equation solver allows you to enter your problem and solve the equation to see the result; the integral calculator allows you to enter your problem and complete the integration to see the result. Trigonometry Calculator - Right Triangles: Enter all known variables (sides a, b and c; angles A and B) into the text boxes. Just enter your values and let the calculator do the rest!

MathPapa is a beginner-level algebraic calculator, available on iOS and Android, that allows you to solve simple to somewhat complex algebraic problems. The algebra calculator encompasses all of the functions that simplify math at any level. You can also add, subtract, multiply, and divide fractions, as well as convert to a decimal and work with mixed numbers and reciprocals; the Fraction Calculator will reduce a fraction to its simplest form. Once you press the "Calculate" button, you'll see a variety of math lines underneath, which explain to you how that conclusion was reached. For example, enter 3x+5=17 into the text box to get a step-by-step explanation of how to solve 3x+5=17.

MATH SYMBOLS: Here are some symbols that the calculator understands: + (Addition), - (Subtraction).

Continuing Example #1: the legs measure 11 and 60. Use the Pythagorean theorem calculator to check your answers.
Instructions: This algebraic calculator will allow you to compute elements of a geometric sequence. The application solves every algebraic problem, including those with fractions, roots and powers; you can also use parentheses, decimal numbers and the number Pi. The functionality allows for manipulation of mathematical variables and symbols with just a few clicks. Simply point your camera and snap a photo, or type your math homework question, for step-by-step answers.

Guidelines to use the calculator: When entering numbers, do not use a slash ("/" or "\"). Point #1: Enter point #1 in the boxes that say x1, y1.

First, use the Pythagorean theorem to solve the problem. The best thing you can do is let your students know you know about this math app, you know they are using it, and you can tell when they are using it. Show them ways they can use this math app to help them better understand math.
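Example #1 above (a right triangle with legs 11 and 60 and the hypotenuse missing) works out as follows. This is a plain sketch of the Pythagorean-theorem step, not the calculator's actual code:

```python
import math

# Pythagorean theorem: for legs a, b and hypotenuse c,  c = sqrt(a^2 + b^2).
a, b = 11, 60
c = math.sqrt(a**2 + b**2)

# 11^2 + 60^2 = 121 + 3600 = 3721 = 61^2, so the hypotenuse is 61.
assert c == 61.0
```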
Example 1: to simplify $(\sqrt{2}-1)(\sqrt{2}+1)$ type (r2 - 1)(r2 + 1). This website uses cookies to ensure you get the best experience. Pythagoras of Samos! This is a free online math calculator together with a variety of other free math calculators that compute standard deviation, percentage, fractions, and time, along with hundreds of other calculators addressing finance, fitness, health, and more. Math Papa Algebra Calculator. Calculators for plane geometry, solid geometry and trigonometry. Side Length of Tangent & Secant of a Circle. Interactive, free online graphing calculator from GeoGebra: graph functions, plot data, drag sliders, and much more! To enter a value, click inside one of the text boxes. These may be used to check homework answers, practice or explore with various values for deep understanding. Back to Geometry Calculator. Learn more Accept. With millions of users and billions of problems solved, Mathway is like a private tutor in the palm of your hand, providing instant homework help anywhere, anytime. Geometry Calculators and Solvers. Easy to use online geometry calculators and solvers for various topics in geometry such as calculate area, volume, distance, points of intersection. Rounding square roots to 3 decimal, math solver', fraction equations, distributive law powerpoint lesson. HOW TO USE THE CALCULATOR: Just type your problem into the text box. We also offer step by step solutions. "Geometry" is advanced application for solving geometry problems. For example, enter 3x+5=17 into the text box to get a step-by-step explanation of how to solve 3x+5=17. Formulas for common areas, volumes and surface areas. March 9, 2019 By Math Solver Leave a Comment. Learn more Accept. Many students suffering through their math integration to see the result this Pythagorean theorem to. The result getting are a calculator that is able to solve 3x+5=17 solve trigonometric equations step-by-step this website uses to! 
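The Pythagorean-theorem example mentioned above (legs of 11 and 60) is easy to check without any calculator app. A minimal Python sketch of what such a calculator computes:

```python
import math

# Right triangle with legs 11 and 60: the hypotenuse follows from
# c = sqrt(a^2 + b^2) = sqrt(121 + 3600) = sqrt(3721).
a, b = 11, 60
c = math.hypot(a, b)
print(c)  # 61.0
```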
https://crypto.stackexchange.com/questions/81172/can-you-correct-my-understanding-of-argon2
# Can you correct my understanding of Argon2? [closed]

I'm a bit new to the cryptography field (completely new), and could really use your guidance. Please correct my understanding, as defined by the following statements.

1. The Argon2 hash function has won the Password Hashing Competition, and is therefore believed to be the best solution at the moment for safely hashing passwords. It is recommended to be used instead of bcrypt or pbkdf2.
2. Argon2 requires additional configuration, through parallelism factors, memory cost, iterations and hash length, in order to be used most effectively. However, default parameters are usually just fine.
3. Argon2 also requires a 'salt' for security reasons - this one is meant to be generated using Cryptographically Secure Pseudo Random Number Generators (CSPRNGs for short), but any reasonably random value can be used here.
4. Some implementations of Argon2 (such as this one) allow passing secret keys and additional data to the function, but these are optional and are not required for the resulting hash to be secure.
5. Since any reasonably random value can be used as a salt, the result of another hashing function, such as Blake3, could also be used as a salt.
6. If I want to store both the email hash and the password hash securely in a database, so that a potential attacker has as little information about the user as possible, I could:
   1. Store the registration date (RD) of a user
   2. Generate the Argon2 hash of the email address, using the Blake3 hash of the RD as the salt
   3. Generate the Argon2 hash of the password, using the Blake3 hash of the Argon2 hash of the email as the salt
7. The point above would not compromise the security offered by the Argon2 hash, and would make it quite difficult for anyone, even those with direct access to the database and the source code, to figure out what the email and the password of the user actually are - but it would make it trivial to verify whether the user's input was correct or not.

Any criticism, comments, thoughts, opinions (grounded in facts) - are more than welcome.

Correction: as Marc pointed out, 6.2 doesn't make any sense - it would make it (nearly) impossible to look the email up for a registered user. If we substitute step 6.2 with a random scrambling of the user's email - such as a Fisher-Yates Shuffle, repeated for X iterations to make it consume time, using a random generator from this function, with a seed number derived from the email once again - would the rest still make sense?

• Comments are not for extended discussion; this conversation has been moved to chat. – Maarten Bodewes Jun 6 at 10:46
• I'm voting to close this question because this is seeking help for a specific use of Argon2. That the accepted answer provides this is nice, but it doesn't address the question about understanding Argon2 and is not of much use to other users. – Maarten Bodewes Jun 6 at 10:50

A) Don't hash the email. Store it as plain text or don't store it at all. If it is hashed, then looking it up will be impossible: one would need to iterate through every single record in the database, and on average 50% of the records would need to be tested each time. Password hashing needs to be expensive; suppose it takes 1 s. Then, with 1 000 000 users, checking a single login will take on average 500 000 s, which is about 6 days. Even for a small database with 1 000 users, this will take 500 s (more than 8 minutes) every time a user logs in. No user will accept that.
B) Store the login ID as plain text, exactly as entered by the user, without any transformation or obfuscation. If you use any transformation, it must produce the same result every time, otherwise you will not be able to look the login up in the database. And if the result is always the same, then an attacker can easily generate the same code as well. That means not more security, but more obscurity - and thus more error-prone code.

C) Where random data are needed, use a random generator and don't add anything else. The RD is not random; thus Blake3(RD) is also not random; thus the salt is not random. Instead, use a random generator. Furthermore, adding complexity or making an algorithm more obscure does not make it more secure, not even a little. Also don't make the email address or any other fixed data part of the secret. Such data are not really random: each user has a more or less limited number of such attributes (nobody generates a new email address every day), which is why they don't contribute much entropy. Using them makes the algorithm more obscure and gives a false feeling of security.

D) Try to keep the algorithm as simple as possible. Think of Kerckhoffs's principle. Adding obscurity does not give any more security, but more obscurity means more complexity in the implementation, which makes the implementation more error-prone and thus actually less secure.

E) Separate different goals clearly. Don't mix them.
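Point C is the easiest to get right in code: generate the salt with a CSPRNG and nothing else. Argon2 itself lives in the third-party argon2-cffi package (whose `PasswordHasher` generates the salt for you), so this sketch uses the standard library's memory-hard scrypt KDF as a stand-in; the salt handling is the same either way, and the cost parameters here are illustrative, not a tuning recommendation:

```python
import hashlib
import secrets

def hash_password(password, salt=None):
    """Hash a password with a fresh CSPRNG salt (never derived from user data)."""
    if salt is None:
        salt = secrets.token_bytes(16)  # CSPRNG, as point C recommends
    # Memory-hard KDF; n/r/p play the role of Argon2's memory/parallelism knobs.
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(password, salt, digest):
    # Recompute with the stored salt and compare in constant time.
    return secrets.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("hunter2")
print(verify("hunter2", salt, digest))   # True
print(verify("wrong", salt, digest))     # False
```

Note that the salt is stored alongside the digest in plain form; its job is uniqueness per record, not secrecy.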
http://www.gradesaver.com/textbooks/math/other-math/basic-college-mathematics-9th-edition/chapter-2-multiplying-and-dividing-fractions-2-1-basics-of-fractions-2-1-exercises-page-116/13
# Chapter 2 - Multiplying and Dividing Fractions - 2.1 Basics of Fractions - 2.1 Exercises: 13 $\frac{7}{5}$ #### Work Step by Step This figure consists of two shapes: One which is divided into five but shaded completely, representing 1 whole (the same as $\frac{5}{5}$) and another which has two of its five equal parts shaded, representing $\frac{2}{5}$. In total, $1\frac{2}{5}$ is shaded. To represent the mixed number $1\frac{2}{5}$ as a fraction, simply add ($\frac{5}{5}$) and ($\frac{2}{5}$) to get ($\frac{7}{5}$).
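The same conversion can be checked with Python's fractions module:

```python
from fractions import Fraction

# 1 2/5 as a whole part plus a fractional part: 5/5 + 2/5 = 7/5
whole = Fraction(5, 5)
part = Fraction(2, 5)
print(whole + part)  # 7/5
```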
http://blog.christianperone.com/2019/01/mle/
# A sane introduction to maximum likelihood estimation (MLE) and maximum a posteriori (MAP)

It is frustrating to learn about principles such as maximum likelihood estimation (MLE), maximum a posteriori (MAP) and Bayesian inference in general. The main reason behind this difficulty, in my opinion, is that many tutorials assume previous knowledge, use implicit or inconsistent notation, or are even addressing a completely different concept, thus overloading these principles. Those issues make it very confusing for newcomers to understand these concepts, and I’m often confronted by people who were unfortunately misled by many tutorials. For that reason, I decided to write a sane introduction to these concepts and elaborate more on their relationships and hidden interactions while trying to explain every step of the formulations. I hope to bring something new to help people understand these principles.

## Maximum Likelihood Estimation

Maximum likelihood estimation is a method or principle used to estimate the parameter or parameters of a model given one or more observations. It is abbreviated as MLE and is also known as the method of maximum likelihood. From this name you probably already understood that this principle works by maximizing the likelihood; therefore, the key to understanding maximum likelihood estimation is to first understand what a likelihood is and why someone would want to maximize it in order to estimate model parameters. Let’s start with the definition of the likelihood function for the continuous case:

$$\mathcal{L}(\theta | x) = p_{\theta}(x)$$

The left term means “the likelihood of the parameters $$\theta$$, given data $$x$$”. Now, what does that mean? It means that in the continuous case, the likelihood of the model $$p_{\theta}(x)$$ with the parametrization $$\theta$$ and data $$x$$ is the probability density function (pdf) of the model, with that particular parametrization, evaluated at the observed data.
Although this is the most used likelihood representation, you should pay attention to the fact that the notation $$\mathcal{L}(\cdot | \cdot)$$ in this case doesn’t mean the same as the conditional notation, so be careful with this overload, because it is always implicitly stated and it is often a source of confusion. Another representation of the likelihood that is often used is $$\mathcal{L}(x; \theta)$$, which is better in the sense that it makes it clear that it’s not a conditional; however, it makes it look like the likelihood is a function of the data and not of the parameters.

The model $$p_{\theta}(x)$$ can be any distribution, and to make things concrete, let’s say that we are assuming that the data generating distribution is a univariate Gaussian distribution, which we define below:

\begin{align}
p(x) & \sim \mathcal{N}(\mu, \sigma^2) \\
p(x; \mu, \sigma^2) & = \frac{1}{\sqrt{2\pi\sigma^2}} \exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu}{\sigma}\bigg)^2 \bigg] }
\end{align}

If you plot this probability density function with different parametrizations, you’ll get something like the plots below, where the red distribution is the standard Gaussian $$p(x) \sim \mathcal{N}(0, 1.0)$$:

As you can see in the probability density function (pdf) plot above, the likelihood at various realizations of $$x$$ is shown on the y-axis. Another source of confusion here is that people usually take this as a probability, because they usually see these plots of normals and the likelihood is always below 1; however, the probability density function doesn’t give you probabilities but densities. The constraint on the pdf is that it must integrate to one:

$$\int_{-\infty}^{+\infty} f(x)dx = 1$$

So, it is completely normal to have densities larger than 1 at many points for many different distributions.
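Both claims are easy to check numerically with a narrow Gaussian (σ = 0.1, a value chosen here for illustration): the density at the mean is well above 1, yet a crude Riemann sum over a ±10σ window still integrates to 1. A short self-contained sketch:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# A density is not a probability: for sigma = 0.1 the peak is well above 1.
peak = gaussian_pdf(0.0, 0.0, 0.1)
print(round(peak, 3))  # 3.989

# Yet the pdf still integrates to 1 (crude Riemann sum over [-1, 1],
# which covers +/- 10 standard deviations).
step = 0.001
area = sum(gaussian_pdf(-1 + i * step, 0.0, 0.1) * step for i in range(2000))
print(round(area, 3))  # 1.0
```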
Take for example the pdf of the Beta distribution below:

As you can see, the pdf shows densities above one for many parametrizations of the distribution, while still integrating to 1 and following the second axiom of probability: the unit measure.

So, returning to our original principle of maximum likelihood estimation, what we want is to maximize the likelihood $$\mathcal{L}(\theta | x)$$ for our observed data. What this means in practical terms is that we want to find the parameters $$\theta$$ of our model for which the likelihood that this model generated our data is maximized; we want to find which parameters of this model are most plausible to have generated this observed data, or which parameters make this sample most probable. For the case of our univariate Gaussian model, what we want is to find the parameters $$\mu$$ and $$\sigma^2$$, which for convenient notation we collapse into a single parameter vector:

$$\theta = \begin{bmatrix}\mu \\ \sigma^2\end{bmatrix}$$

because these are the statistics that completely define our univariate Gaussian model. So let’s formulate the problem of maximum likelihood estimation:

\begin{align}
\hat{\theta} &= \mathrm{arg}\max_\theta \mathcal{L}(\theta | x) \\
&= \mathrm{arg}\max_\theta p_{\theta}(x)
\end{align}

This says that we want to obtain the maximum likelihood estimate $$\hat{\theta}$$ that approximates $$p_{\theta}(x)$$ to an underlying “true” distribution $$p_{\theta^*}(x)$$ by maximizing the likelihood of the parameters $$\theta$$ given data $$x$$. You shouldn’t confuse a maximum likelihood estimate $$\hat{\theta}(x)$$, which is a realization of the maximum likelihood estimator for the data $$x$$, with the maximum likelihood estimator $$\hat{\theta}$$, so pay attention to disambiguate them in your head.
However, we need to incorporate multiple observations into this formulation, and by adding multiple observations we end up with a complex joint distribution:

$$\hat{\theta} = \mathrm{arg}\max_\theta p_{\theta}(x_1, x_2, \ldots, x_n)$$

that needs to take into account the interactions between all observations. And here is where we make a strong assumption: we state that the observations are independent. For independent random variables the following holds:

$$p_{\theta}(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} p_{\theta}(x_i)$$

which means that since $$x_1, x_2, \ldots, x_n$$ don’t contain information about each other, we can write the joint probability as a product of their marginals. Another assumption that is made is that these random variables are identically distributed, which means that they came from the same generating distribution, which allows us to model them with the same distribution parametrization. Given these two assumptions, also known as IID (independently and identically distributed), we can formulate our maximum likelihood estimation problem as:

$$\hat{\theta} = \mathrm{arg}\max_\theta \prod_{i=1}^{n} p_{\theta}(x_i)$$

Note that MLE doesn’t require you to make these assumptions; however, many problems will appear if you don’t, such as different distributions for each sample or having to deal with joint probabilities. Given that in many cases the densities we multiply can be very small, the product above can quickly become vanishingly small. Here is where the logarithm function makes its way to the likelihood.
The log function is a strictly monotonically increasing function that preserves the location of the extrema and has a very nice property:

$$\log ab = \log a + \log b$$

where the logarithm of a product is the sum of the logarithms, which is very convenient for us, so we’ll apply the logarithm to the likelihood and maximize what is called the log-likelihood:

\begin{align}
\hat{\theta} &= \mathrm{arg}\max_\theta \prod_{i=1}^{n} p_{\theta}(x_i) \\
&= \mathrm{arg}\max_\theta \sum_{i=1}^{n} \log p_{\theta}(x_i) \\
\end{align}

As you can see, we went from a product to a summation, which is much more convenient. Another reason for applying the logarithm is that we often take the derivative and solve for the parameters, and it is much easier to differentiate a summation than a product. We can also conveniently average the log-likelihood (we’re just multiplying by a constant):

\begin{align}
\hat{\theta} &= \mathrm{arg}\max_\theta \sum_{i=1}^{n} \log p_{\theta}(x_i) \\
&= \mathrm{arg}\max_\theta \frac{1}{n} \sum_{i=1}^{n} \log p_{\theta}(x_i) \\
\end{align}

This is also convenient because it takes out the dependency on the number of observations. We also know that, through the law of large numbers, the following holds as $$n\to\infty$$:

$$\frac{1}{n} \sum_{i=1}^{n} \log \, p_{\theta}(x_i) \approx \mathbb{E}_{x \sim p_{\theta^*}(x)}\left[\log \, p_{\theta}(x) \right]$$

As you can see, we’re approximating the expectation with the empirical expectation defined by our dataset $$\{x_i\}_{i=1}^{n}$$. This is an important point and it is usually implicitly assumed. The weak law of large numbers can be bounded using a Chebyshev bound, and if you are interested in concentration inequalities, I’ve made an article about them here where I discuss the Chebyshev bound.
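The averaged summation form is easy to verify numerically. A small sketch with hypothetical IID Gaussian data and known σ: scanning candidate values of μ, the average log-likelihood peaks exactly at the sample mean (the well-known closed-form MLE for a Gaussian mean):

```python
import math

data = [1.0, 2.0, 3.0, 2.5, 1.5]  # hypothetical IID observations
sigma = 1.0                        # assumed known

def avg_log_likelihood(mu):
    # (1/n) * sum_i log p_theta(x_i) for a Gaussian with fixed sigma
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - 0.5 * ((x - mu) / sigma) ** 2 for x in data) / len(data)

# Scan a grid of candidate means; the maximizer matches the sample mean.
candidates = [i / 100 for i in range(100, 301)]  # 1.00 .. 3.00
mu_hat = max(candidates, key=avg_log_likelihood)
print(mu_hat)                  # 2.0
print(sum(data) / len(data))   # 2.0
```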
To finish our formulation, given that we usually minimize objectives, we can formulate the same maximum likelihood estimation as the minimization of the negative of the log-likelihood:

$$\hat{\theta} = \mathrm{arg}\min_\theta -\mathbb{E}_{x \sim p_{\theta^*}(x)}\left[\log \, p_{\theta}(x) \right]$$

which is exactly the same thing, with the negation turning the maximization problem into a minimization problem.

### The relation of maximum likelihood estimation with the Kullback–Leibler divergence from information theory

It is well known that maximizing the likelihood is the same as minimizing the Kullback–Leibler divergence, also known as the KL divergence. This is very interesting because it links a measure from information theory with the maximum likelihood principle. The KL divergence is defined as:

$$D_{KL}( p || q)=\int p(x)\log\frac{p(x)}{q(x)} \ dx$$

There are many intuitions for understanding the KL divergence; I personally like the likelihood-ratio perspective, but there is plenty of material about it that you can easily find, and it is out of the scope of this introduction. The KL divergence is basically the expectation of the log-likelihood ratio under the $$p(x)$$ distribution.
What we’re doing below is just rephrasing it by using some identities and properties of the expectation:

\begin{align}
D_{KL}[p_{\theta^*}(x) \, \Vert \, p_\theta(x)] &= \mathbb{E}_{x \sim p_{\theta^*}(x)}\left[\log \frac{p_{\theta^*}(x)}{p_\theta(x)} \right] \\
\label{eq:logquotient} &= \mathbb{E}_{x \sim p_{\theta^*}(x)}\left[\log \,p_{\theta^*}(x) - \log \, p_\theta(x) \right] \\
\label{eq:linearization} &= \underbrace{\mathbb{E}_{x \sim p_{\theta^*}(x)} \left[\log \, p_{\theta^*}(x) \right]}_{\text{Negative entropy of } p_{\theta^*}(x)} - \underbrace{\mathbb{E}_{x \sim p_{\theta^*}(x)}\left[\log \, p_{\theta}(x) \right]}_{\text{Expected log-likelihood}}
\end{align}

In the formulation above, we’re first using the fact that the logarithm of a quotient is equal to the difference of the logs of the numerator and denominator (equation $$\ref{eq:logquotient}$$). After that we use the linearity of the expectation (equation $$\ref{eq:linearization}$$), which tells us that $$\mathbb{E}\left[X + Y\right] = \mathbb{E}\left[X\right]+\mathbb{E}\left[Y\right]$$. In the end, we are left with two terms: the first one is the negative entropy of the true distribution, and the second one is the expected log-likelihood that we saw earlier, here entering with a minus sign.
If we want to minimize the KL divergence with respect to $$\theta$$, we can ignore the first term, since it doesn’t depend on $$\theta$$ in any way, and in the end we have exactly the same maximum likelihood formulation that we saw before: $$\begin{eqnarray} \require{cancel} \theta^* &=& \mathrm{arg}\min_\theta \cancel{\mathbb{E}_{x \sim p_{\theta^*}(x)} \left[\log \, p_{\theta^*}(x) \right]} - \mathbb{E}_{x \sim p_{\theta^*}(x)}\left[\log \, p_{\theta}(x) \right]\\ &=& \mathrm{arg}\min_\theta -\mathbb{E}_{x \sim p_{\theta^*}(x)}\left[\log \, p_{\theta}(x) \right] \end{eqnarray}$$ ### The conditional log-likelihood A very common scenario in Machine Learning is supervised learning, where we have data points $$x_n$$ and their labels $$y_n$$ building up our dataset $$D = \{ (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n) \}$$, and where we’re interested in estimating the conditional probability of $$\textbf{y}$$ given $$\textbf{x}$$, or more precisely $$P_{\theta}(Y \vert X)$$. To extend the maximum likelihood principle to the conditional case, we just have to write it as: $$\hat{\theta} = \mathrm{arg}\min_\theta -\mathbb{E}_{(x, y) \sim p_{\theta^*}(x, y)}\left[\log \, p_{\theta}(y \vert x) \right]$$ And then it can easily be specialized to formulate linear regression: $$y \vert x \sim \mathcal{N}(x^T \theta, \sigma^2) \\ \sum_{i=1}^{n} \log p_{\theta}(y_i \vert x_i) = -n \log \sigma - \frac{n}{2} \log{2\pi} - \sum_{i=1}^{n}{\frac{( x_i^T \theta - y_i )^2}{2\sigma^2}}$$ In that case, you can see that we end up with a sum of squared errors that has the same optimum location as the mean squared error (MSE). So you can see that minimizing the MSE is equivalent to maximizing the likelihood of a Gaussian model. ### Remarks on the maximum likelihood The maximum likelihood estimation has very interesting properties, but it gives us only point estimates, which means that we cannot reason about the distribution of these estimates. 
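The MSE/Gaussian-likelihood equivalence above can be checked numerically. In this sketch (my own, with arbitrary synthetic data), the ordinary least squares solution, which minimizes the MSE, is also the minimizer of the Gaussian negative log-likelihood: perturbing it along any coordinate only increases the NLL.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 3
X = rng.normal(size=(n, d))
theta_true = np.array([2.0, -1.0, 0.5])
y = X @ theta_true + rng.normal(scale=0.3, size=n)

def gaussian_nll(theta, sigma=0.3):
    """Negative log-likelihood of y | X under N(X theta, sigma^2)."""
    resid = y - X @ theta
    return (n * np.log(sigma) + n / 2 * np.log(2 * np.pi)
            + np.sum(resid**2) / (2 * sigma**2))

# The MSE minimizer (ordinary least squares) ...
theta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# ... is also the NLL minimizer: any coordinate perturbation increases it.
for j in range(d):
    bump = np.zeros(d)
    bump[j] = 0.05
    assert gaussian_nll(theta_ols) < gaussian_nll(theta_ols + bump)
```

Note that the value of `sigma` only rescales the sum-of-squares term and shifts the NLL by a constant, which is why it does not move the location of the optimum.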
In contrast, Bayesian inference can give us a full distribution over parameters and therefore allows us to reason about the posterior distribution. I’ll write more about Bayesian inference and sampling methods such as the ones from the Markov Chain Monte Carlo (MCMC) family, but I’ll leave this for another article; right now I’ll continue by showing the relationship of the maximum likelihood estimator with the maximum a posteriori (MAP) estimator. ## Maximum a posteriori Although maximum a posteriori estimation, also known as MAP, also provides us with a point estimate, it is a Bayesian concept that incorporates a prior over the parameters. We’ll also see that MAP has a strong connection with the regularized MLE estimation. We know from the Bayes rule that we can get the posterior from the product of the likelihood and the prior, normalized by the evidence: \begin{align} p(\theta \vert x) &= \frac{p_{\theta}(x) p(\theta)}{p(x)} \\ \label{eq:proport} &\propto p_{\theta}(x) p(\theta) \end{align} In equation $$\ref{eq:proport}$$, since we’re only interested in optimization, we can drop the normalizing evidence $$p(x)$$ and keep a posterior known up to a proportionality constant, which is very convenient because the marginalization over $$p(x)$$ involves integration and is intractable in many cases. \begin{align} \theta_{MAP} &= \mathop{\rm arg\,max}\limits_{\theta} p_{\theta}(x) p(\theta) \\ &= \mathop{\rm arg\,max}\limits_{\theta} \left[ \prod_{i=1}^{n} p_{\theta}(x_i) \right] p(\theta) \\ &= \mathop{\rm arg\,max}\limits_{\theta} \sum_{i=1}^{n} \underbrace{\log p_{\theta}(x_i)}_{\text{Log-likelihood}} + \underbrace{\log p(\theta)}_{\text{Log-prior}} \end{align} In the formulation above, we just follow the same steps as described earlier for the maximum likelihood estimator: we assume independent and identically distributed samples and then apply the logarithm to switch from a product to a summation. 
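As a numerical sketch of this MAP objective (my own illustration with arbitrary synthetic data), placing a Gaussian prior $$\mathcal{N}(0, \tau^2)$$ on $$\theta$$ in a Gaussian likelihood model turns the negative log-posterior into a sum of squared errors plus an L2 penalty, so the MAP estimate is exactly the closed-form ridge regression solution.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 4
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + rng.normal(scale=0.5, size=n)

sigma2, tau2 = 0.25, 1.0   # likelihood noise variance, Gaussian prior variance
lam = sigma2 / tau2        # implied L2 penalty strength

def neg_log_posterior(theta):
    """-log p(theta | X, y) up to a constant: Gaussian NLL + Gaussian log-prior."""
    resid = y - X @ theta
    return np.sum(resid**2) / (2 * sigma2) + np.sum(theta**2) / (2 * tau2)

# Ridge closed form: theta_map solves (X'X + lam I) theta = X'y,
# which is where the gradient of the negative log-posterior vanishes.
theta_map = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

for j in range(d):
    bump = np.zeros(d)
    bump[j] = 0.05
    assert neg_log_posterior(theta_map) < neg_log_posterior(theta_map + bump)
```

The penalty strength `lam = sigma2 / tau2` makes the trade-off explicit: a tighter prior (smaller `tau2`) means stronger regularization.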
As you can see in the final formulation, this is equivalent to the maximum likelihood estimation with the log-prior term added. We can also easily recover the exact maximum likelihood estimator by using a uniform prior $$p(\theta) \sim \textbf{U}(\cdot, \cdot)$$. This means that every possible value of $$\theta$$ is equally weighted, so the prior only contributes a constant: \begin{align} \theta_{MAP} &= \mathop{\rm arg\,max}\limits_{\theta} \sum_i \log p_{\theta}(x_i) + \log p(\theta) \\ &= \mathop{\rm arg\,max}\limits_{\theta} \sum_i \log p_{\theta}(x_i) + \text{constant} \\ &= \underbrace{\mathop{\rm arg\,max}\limits_{\theta} \sum_i \log p_{\theta}(x_i)}_{\text{Equivalent to maximum likelihood estimation (MLE)}} \end{align} And there you are: the MAP with a uniform prior is equivalent to MLE. It is also easy to show that a Gaussian prior recovers the L2-regularized MLE, which is quite interesting, given that it provides insights and a new perspective on the regularization terms that we usually use. I hope you liked this article! The next one will be about Bayesian inference with posterior sampling, where we’ll show how we can reason about the posterior distribution and not only about point estimates as in MAP and MLE. – Christian S. Perone Cite this article as: Christian S. Perone, "A sane introduction to maximum likelihood estimation (MLE) and maximum a posteriori (MAP)," in Terra Incognita, 02/01/2019, http://blog.christianperone.com/2019/01/mle/. ## 9 thoughts to “A sane introduction to maximum likelihood estimation (MLE) and maximum a posteriori (MAP)” 1. Thomas Paula says: Excellent explanation! Thanks for talking about an important subject in a simple manner. 2. Roger Granada says: Awesome! One of the best explanations I’ve ever seen of MLE, its connection to KL Divergence and MAP. Thanks for sharing it. 3. Anonymous says: Great post. Thank you! 
However I think that in equations (16) and (17) the expectation is not following distribution p but following a uniform distribution. Eqs (19)-(21) are fine so I’m not sure about the claim of equation (23). In any case, this is a nice post I would recommend. 4. Christopher Howlin says: Great explanation, one of the clearest I have read. In the MAP derivation, the uniform prior used is not defined over the support (which I assume is -/+ infinity). In this case the choice of prior seems to work because we can just drop it as a constant from the optimisation. If instead you wanted to sample from the prior predictive distribution, or marginalise over theta, then would the resulting distributions be valid (i.e. integrate to 1)? 5. Anonymous says: Thanks for the effort. Second line in equation 25 should be log probability on the left-hand side, I guess. And on the right-hand side, you are missing the square term. 6. Suresh says: Thanks for an excellent presentation. 7. Anonymous says: You’re making absolutely no sense in that paragraph after (7). You begin by saying that the maximum likelihood estimate is θ^ (excuse my lack of better formatting) and you end by saying that one shouldn’t mistake it with θ^ (the exact same notation) which is now the maximum likelihood estimator. In the middle of the paragraph you say that the maximum likelihood estimate is now θ^(x) (a different notation than before) which also makes no sense as you’ve defined θ^ to be a vector (given (5) and (6)) and thus not a function. Please correct these mistakes.
https://www.healthaffairs.org/doi/10.1377/hlthaff.2020.01011
# Mortality Rates From COVID-19 Are Lower In Unionized Nursing Homes

Affiliations:

1. Adam Dean ([email protected]) is an assistant professor of political science at George Washington University, in Washington, D.C.
2. Atheendar Venkataramani is an assistant professor in the Division of Health Policy, Perelman School of Medicine, University of Pennsylvania, in Philadelphia, Pennsylvania.
3. Simeon Kimmel is an assistant professor in the School of Medicine, Boston University and Boston Medical Center, in Boston, Massachusetts.

PUBLISHED: Free Access. https://doi.org/10.1377/hlthaff.2020.01011

## Abstract

More than 40 percent of all reported coronavirus disease 2019 (COVID-19) deaths in the United States have occurred in nursing homes. As a result, health care workers’ access to personal protective equipment (PPE) and infection control policies in nursing homes have received increased attention. However, it is not known whether the presence of health care worker unions in nursing homes is associated with COVID-19 mortality rates. Therefore, we used cross-sectional regression analysis to examine the association between the presence of health care worker unions and COVID-19 mortality rates in 355 nursing homes in New York State. Health care worker unions were associated with a 1.29-percentage-point reduction in mortality, which represents a 30 percent relative decrease in the COVID-19 mortality rate compared with facilities without these unions. Unions were also associated with greater access to PPE, one mechanism that may link unions to lower COVID-19 mortality rates. 
Amid the coronavirus disease 2019 (COVID-19) global pandemic, as of September 23, 2020, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) had caused infections in almost seven million people and resulted in more than 200,000 deaths in the United States.1 Nursing home residents have been disproportionately affected, accounting for more than 40 percent of documented deaths in the country.2,3 New York State has suffered more than 6,600 COVID-19 deaths in nursing homes, which is more than any state other than New Jersey.3

Investigations into outbreaks in individual nursing homes have identified several factors leading to increased risk for infection and resulting death among nursing home residents.4–7 First, nursing homes care for elderly people with medical comorbidities who have an increased risk for death from COVID-19. Second, nursing home residents live close together, and staff members have direct contact with residents and each other. These staff members provide direct care to multiple residents and may work in multiple facilities or provide home care to earn additional income.8,9 These cumulative direct contacts may increase risk for SARS-CoV-2 infection and spread. Third, asymptomatic spread of SARS-CoV-2 eluded early infection control strategies, which focused on isolating only staff and residents with symptoms. Centers for Medicare and Medicaid Services (CMS) guidelines now recommend personal protective equipment (PPE), including N95 respirators and eye shields, for staff, as well as universal testing in facilities with confirmed COVID-19 infections.10 However, equipment and test shortages, as well as challenges with implementing infection control plans, have limited the adoption of these recommendations.11

Under such circumstances, labor unions representing health care workers perform several functions that may reduce SARS-CoV-2 transmission. 
Unions generally demand high staff-to-patient ratios, paid sick leave, and higher wage and benefit levels that reduce staff turnover.12–14 They educate workers about their health and safety rights, work to ensure that such rights are enforced, demand that employers mitigate known hazards, and give workers a collective voice that can improve communication with employers.14–17 In the specific context of the COVID-19 pandemic in New York, labor unions advocated for access to PPE and new infection control policies.18 Health care worker unions have been shown to improve the occupational safety of health care workers and, in some cases, overall patient outcomes.14,19 However, it is not known how the presence of unions has affected COVID-19 mortality rates among residents in nursing homes.

We hypothesized that labor union representation among health care workers in nursing homes would be associated with reduced resident mortality rates. Although labor unions may influence COVID-19 mortality rates in numerous ways, we hypothesized that two important mechanisms were successful demands for PPE and reduced COVID-19 infection rates. By increasing access to PPE, labor unions may reduce the spread of COVID-19 between health care workers and nursing home residents, thus reducing COVID-19 mortality rates for residents.

## Study Data And Methods

### Study Design And Data Sources

We conducted cross-sectional regression analyses to estimate the association between the presence of a health care worker union and COVID-19 mortality rates in nursing homes in New York State during the early months of the 2020 COVID-19 pandemic. We used publicly available data from the New York State Department of Health on COVID-19 mortality. 
We used proprietary data from 1199SEIU United Healthcare Workers East, the International Brotherhood of Teamsters, and the Communication Workers of America (CWA), as well as publicly available data from the New York State Nurses Association (NYSNA), to determine whether a labor union represented health care workers in each facility.

### Study Sample

We included all New York State nursing homes for which the New York State Department of Health reported data on confirmed COVID-19 deaths. Nursing homes that reported zero deaths were included in the cohort, whereas facilities for which the department did not report data were excluded. Facilities for which there were missing data on key covariates were also excluded.

### Main Outcome

The main outcome was the percentage of nursing home residents who died from COVID-19. To calculate this percentage, we used nursing home–level data on confirmed COVID-19 deaths made available by the New York State Department of Health for the period March 1–May 31, 2020. These data include all COVID-19 deaths that occurred inside the facility, but not deaths that occurred after a resident was discharged to a hospital. The denominator was the total number of beds in each facility as a proxy for the number of residents in the facility.20

### Secondary Outcomes

The two secondary outcomes were nursing home access to PPE and nursing home COVID-19 infection rates. We obtained data from CMS on whether or not each facility reported having a one-week supply of N95 respirators, eye shields, surgical masks, gowns, gloves, and hand sanitizer on hand on May 24 and May 31, 2020 (the earliest two weeks of data available). Facilities were defined as having access to a given type of PPE if they reported having a one-week supply in both weeks. We also obtained nursing home–level data on confirmed COVID-19 infections per 1,000 residents from CMS from January 1 through May 31, 2020. 
### Primary Explanatory Variable

Facilities with NYSNA, 1199SEIU, Teamsters, or CWA unions representing health care workers were defined as having a union in our cohort. NYSNA only represents registered nurses, whereas the other unions represent workers throughout nursing homes, including nurse aides, dietitians, and maintenance workers. In every nursing home organized by 1199SEIU, the union represents certified nursing assistants, who provide direct care to residents (Mindy Berman, 1199SEIU regional communications director, personal communication, July 5, 2020). We gathered data on the union status of nursing homes through interviews with labor union representatives, conducted between May 6 and July 8, 2020. The union status of all nursing homes remained constant throughout our period of study.

### Covariates

To address potential confounders, we gathered data on nursing home– and area-level characteristics previously associated with poor outcomes from COVID-19. We obtained nursing home–level data on the average age of residents, percentage of residents who were obese, Resource Utilization Groups-III nursing case-mix index of resident acuity, total number of beds, number of occupied beds, occupancy rates, staff-hours-to-resident-days ratios (for registered nurses, certified nursing assistants, and licensed practical nurses), percentage of residents whose primary support came from Medicaid or Medicare, overall Five-Star Quality Rating System score, and chain and for-profit status. 
We gathered data on total and occupied beds from the New York State Department of Health NYS Nursing Home Profiles, based on the most recent occupancy reports (85 percent reported their occupancy on March 25, 2020).20 CMS provides data on the Five-Star Quality Rating System (updated April 1, 2020), and all other nursing home–level characteristics are available from Brown University’s LTCfocus project, which includes data through 2017.21 We also obtained county-level data on confirmed cases of COVID-19 and population from USAFacts (through May 31, 2020).22

### Statistical Analyses

We first estimated descriptive statistics for the main outcome, primary explanatory variable, and all covariates for four groups: facilities for which the New York State Department of Health reported data, facilities for which the department did not report data, cohort facilities with unions, and cohort facilities without unions. Statistical differences by reporting and union status were ascertained using t-tests and Z-tests. To examine the association between the proportion of residents who died from COVID-19 and the presence of health care worker unions, we then estimated cross-sectional ordinary least squares regression models both with and without adjustment for county- and facility-level variables. In adjusted models, we included county-level confirmed COVID-19 cases per capita to adjust for the prevalence of disease in the surrounding county, and population to account for the possibility that more populous counties may contain more unidentified cases. Because COVID-19 mortality rates are known to increase along with age and comorbidities, we included the average age of residents, the percentage of residents who were obese, average resident acuity, and the percentage of residents whose primary support came from Medicare (people receiving care after an acute inpatient hospitalization). 
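The adjusted models described here are ordinary least squares regressions with heteroskedasticity-robust standard errors. As a rough illustration only (entirely synthetic data, a made-up union indicator, a single stand-in covariate, and an assumed effect size; this is not the authors' dataset or code), the HC1 "sandwich" covariance estimator behind such robust confidence intervals can be computed directly:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 355                                   # cohort size from the article
union = rng.integers(0, 2, size=n)        # hypothetical union indicator
covar = rng.normal(size=n)                # stand-in for one adjusted covariate
# Simulated mortality rate (%) with an assumed union effect of -1.3 points.
mortality = 5.5 - 1.3 * union + 0.5 * covar + rng.normal(scale=4, size=n)

# OLS fit: intercept, union coefficient, covariate coefficient.
X = np.column_stack([np.ones(n), union, covar])
beta = np.linalg.solve(X.T @ X, X.T @ mortality)

# HC1 heteroskedasticity-robust ("sandwich") covariance of beta.
resid = mortality - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)    # sum_i e_i^2 x_i x_i'
cov_hc1 = n / (n - X.shape[1]) * XtX_inv @ meat @ XtX_inv
se = np.sqrt(np.diag(cov_hc1))

# 95% CI for the union coefficient, as reported in the article's tables.
ci_union = (beta[1] - 1.96 * se[1], beta[1] + 1.96 * se[1])
```

In a real analysis one would add the full covariate set and, as in the article's sensitivity checks, consider clustered or bootstrap variants of the same sandwich construction.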
As COVID-19 has disproportionately affected low-income people, we also adjusted for the percentage of residents whose primary source of insurance was Medicaid. Because the quality of care may influence COVID-19 mortality rates, we adjusted for staff-hours-to-resident-days ratios for registered nurses, certified nursing assistants, and licensed practical nurses.23 We also adjusted for overall Five-Star Quality Rating System score, which is based on health inspections, staffing, and fifteen different physical and clinical measures for nursing home residents, as previous research suggests that COVID-19 death rates may be lower in higher-rated facilities.2,10 Similarly, we adjusted for chain and for-profit status, as previous research associates these ownership characteristics with the quality of care.24,25 We also adjusted for each facility’s occupancy rate, as empty beds may have facilitated the isolation of COVID-19-positive residents. Finally, we adjusted for the number of occupied beds, as unionization and COVID-19 infection may both be more likely in large facilities. Even with these adjustments, the available data and observational research strategy preclude strong causal interpretations because of bias from unmeasured confounders and selection into our study sample.

### Secondary Analyses

As secondary analyses, we used cross-sectional ordinary least squares regression analyses to explore two mechanisms that may link labor unions to lower COVID-19 mortality rates in nursing homes: access to PPE and reduced COVID-19 infection rates. First, we examined the association between facility union status and access to six types of PPE: N95 respirators, eye shields, surgical masks, gowns, gloves, and hand sanitizer. We adjusted for the same county- and facility-level variables as the main analysis, with the exception of resident obesity, which is unlikely to confound access to PPE. 
Second, we examined the association between union status and COVID-19 infection rates while adjusting for the same county- and facility-level variables as the main analysis. We computed 95% confidence intervals derived from heteroscedasticity-robust standard errors for all regression models. All analyses were conducted using R, version 1.0.153, and Stata 15.0.

### Sensitivity Analyses

Prior research suggests that there may be COVID-19 deaths in nursing homes for which the New York State Department of Health did not report data, raising concerns about selection bias in our cohort.2,26 To address these concerns, we conducted two robustness checks. First, we used an ordinary least squares model to assess for factors associated with reporting COVID-19 death data to the New York State Department of Health. Second, we used inverse probability weighting to adjust for selection bias in the data from nursing homes that did report data on COVID-19 deaths. Specifically, we used estimates of the predicted probability of reporting to the New York State Department of Health from logistic regression models in constructing inverse probability weights in our main regression model. We also considered the possibility that nonreporting facilities did not have any COVID-19 deaths by coding these facilities to have zero deaths. This is likely a conservative assumption, given that facilities were mandated to report to the state even if they had no deaths from COVID-19. To account for the possibility that COVID-19 deaths were correlated across nursing homes in the same labor market, we calculated commuting zone–level wild cluster bootstrap robust standard errors.27 To address concerns about regional variation in COVID-19 risk factors, as well as to account for the possibility that COVID-19 deaths were correlated across facilities within the same county, we included region-level fixed effects and calculated standard errors clustered at the county level. 
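The inverse probability weighting idea used in the sensitivity analysis can be illustrated with a toy simulation (entirely synthetic, not the study data; the "group" variable and reporting probabilities are invented for the example): when one type of facility reports more often, the naive mean over reported outcomes is biased, and reweighting each reported observation by the inverse of its reporting probability removes that bias.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
group = rng.integers(0, 2, size=n)               # hypothetical facility type
y = 1.0 + group + rng.normal(scale=0.5, size=n)  # outcome; full-sample mean = 1.5

# Non-random reporting: group-1 facilities report far more often.
p_report = np.where(group == 1, 0.9, 0.3)
reported = rng.random(n) < p_report

# Naive mean over reporters is pulled toward group 1 (roughly 1.75 here).
naive = y[reported].mean()

# Inverse probability weighting: each reporter stands in for 1/p peers.
weights = 1.0 / p_report[reported]
ipw = np.sum(weights * y[reported]) / np.sum(weights)
```

In the study itself the reporting probabilities are not known and are instead estimated with a logistic regression on facility characteristics, which is where modeling assumptions enter.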
We also performed Oster’s coefficient stability test to assess the robustness of our results to other unmeasured confounders.28 Last, we estimated the models from our secondary analyses using only the nursing homes from our original cohort that reported data on PPE and COVID-19 infection rates. These sensitivity analyses are further described in the online appendix.29

### Limitations

Our study had several limitations that motivate future work. First, even with the inclusion of a rich set of covariates and sensitivity analyses, the observational study design precluded causal interpretations. Second, our study was conducted during an early phase of the COVID-19 pandemic, and the estimated associations may change over time. Similarly, our findings might not be generalizable to unions and other patient outcomes before or after the pandemic. Third, our study was limited by missing data on confirmed COVID-19 deaths in many nursing homes in New York State.30 Data collection during the pandemic faces many obstacles, and it is not possible to know how many COVID-19 deaths occurred in facilities that were excluded from the New York State Department of Health data. Fourth, New York State Department of Health data only include confirmed COVID-19 deaths that occurred inside facilities. We were therefore unable to adjust for the possibility that unionized health care workers may transfer residents to hospitals earlier, and thereby reduce their facility’s COVID-19 mortality rate. Fifth, although our data on unionized facilities cover the largest health care worker unions in New York, a small number of facilities may have been misclassified. Sixth, because of the lack of data on PPE early in the pandemic, we were unable to adjust our main results for access to PPE. Seventh, we were unable to gather data on the race and ethnicity of people who died from COVID-19 or for health care workers, thus limiting our ability to examine racial disparities. 
Eighth, many of the nursing home–level covariates were last measured in 2017, which may introduce measurement error. Our main outcome also may suffer potential measurement error because of our reliance on total beds to proxy for nursing home residents. Measurement error in both cases may bias estimates of the association between unionization and COVID-19 death rates either downward or upward. Last, the findings of this study might not be generalizable outside of New York State.

## Study Results

### Descriptive Statistics

We identified 621 nursing homes in New York State, 385 of which are included in the New York State Department of Health’s report on COVID-19 mortality. Thirty facilities were excluded because of missing data on covariates, resulting in a study cohort of 355 nursing homes. Health care worker unions were present in 246 of 355 nursing homes in our sample (239 affiliated with 1199SEIU, 4 affiliated with NYSNA, 1 affiliated with the International Brotherhood of Teamsters, and 2 affiliated with both 1199SEIU and NYSNA). Facilities with health care worker unions had residents who were younger and less obese, who had higher acuity scores, and who were less likely to be white and more likely to be insured by Medicare or Medicaid (exhibit 1). Unionized facilities were also more likely to be for profit, were less likely to be associated with a chain, had lower licensed-practical-nurse-to-resident ratios, and were located in more populous counties with higher per capita rates of confirmed COVID-19 cases. At the county level, the percentage of nursing homes that were unionized ranged from 0 to 100 (exhibit 2). 
**Exhibit 1** Descriptive statistics (means) for New York State nursing homes

| Variables (mean) | COVID-19 deaths reported (n = 385) | COVID-19 deaths not reported (n = 236) | Unionized (n = 246) | Nonunionized (n = 109) |
| --- | --- | --- | --- | --- |
| **County** | | | | |
| Population | 1,285,134 | 392,288** | 1,446,815 | 863,978** |
| COVID-19 cases per capita | 0.02 | 0.01** | 0.03 | 0.02** |
| **Facility** | | | | |
| Average age of residents (years) | 79.59 | 80.96** | 78.35 | 81.59** |
| Residents who were obese (%) | 23.89 | 29.73** | 23.53 | 25.08** |
| Resident acuity | 1.23 | 1.20** | 1.24 | 1.20** |
| White residents (%) | 65.12 | 90.79** | 58.39 | 77.58** |
| Primary payer: Medicaid (%) | 60.88 | 56.94** | 63.82 | 57.19** |
| Primary payer: Medicare (%) | 14.39 | 10.96** | 15.07 | 11.72** |
| Ratio of staff to residents: RNs | 0.48 | 0.57 | 0.47 | 0.47 |
| Ratio of staff to residents: LPNs | 0.79 | 0.95** | 0.74 | 0.90** |
| Ratio of staff to residents: CNAs | 2.30 | 2.41** | 2.26 | 2.33 |
| Five-Star Quality Rating System score | 3.33 | 3.15 | 3.35 | 3.21 |
| Occupancy rate | 0.91 | 0.85** | 0.92 | 0.90 |
| No. of occupied beds | 203.47 | 112.76** | 215.31 | 199.45 |
| For profit^a | 0.69 | 0.49** | 0.78 | 0.51** |
| Chain^a | 0.11 | 0.24** | 0.08 | 0.18** |

The first two columns cover all nursing homes (N = 621); the last two columns cover the nursing homes in the cohort (n = 355). SOURCE Authors’ analysis of data on confirmed COVID-19 deaths in nursing homes from the New York State Department of Health; union representation from 1199SEIU United Healthcare Workers East, New York State Nurses Association, the International Brotherhood of Teamsters, and Communication Workers of America; and covariates from the Centers for Medicare and Medicaid Services, Brown University’s LTCfocus project, and USAFacts. NOTES All variables are for nursing homes except where county is indicated. T-tests calculated to compare the means across union and nonunion facilities for all other continuous measures. Noncohort nursing homes have missing data for the nursing home–level variables; therefore, the means and standard deviations for “Reported” and “Not reported” nursing homes are calculated using fewer than 385 and 236 observations, respectively. Resident acuity is a numeric score corresponding to the Resource Utilization Group-III nursing case-mix index. Higher numbers indicate residents who require greater resources. 
RN is registered nurse. LPN is licensed practical nurse. CNA is certified nursing assistant. ^a Z-tests calculated for the binary measures of “For profit” and “Chain.” **$p<0.05$

There were 3,298 confirmed COVID-19 fatalities in nursing homes in New York State through May 31, 2020. The facility with the highest number of confirmed COVID-19 deaths had 82 deaths, and the highest proportion of deaths among residents was 62 of 160, or 38.8 percent. Eleven nursing homes in our sample reported zero confirmed COVID-19 deaths (data not shown). At the county level, average COVID-19 mortality rates in nursing homes varied from a low of 0.0 percent to a high of 12.9 percent (exhibit 3). In the 246 unionized nursing homes, 3.72 percent of residents died from COVID-19, whereas in the 109 nonunionized facilities, 5.53 percent of residents died from the disease (data not shown).

### Main Regression Results

In regression analyses, we found that the presence of a labor union representing health care workers was associated with a 1.29-percentage-point reduction (95% CI: −2.405, −0.172; $p$ = 0.024) in the proportion of facility residents who died from COVID-19. Estimated coefficients were similar in models with and without covariates (exhibit 4), and the statistical significance of our findings was substantively unchanged regardless of how confidence intervals were calculated. Because the mean proportion of facility residents who died from COVID-19 during our study period was 4.279 percent, the covariate-adjusted estimates suggest that the presence of a labor union was associated with a 30 percent relative decrease in the COVID-19 mortality rate compared with facilities without health care worker unions. 
**Exhibit 4** Multivariate model of COVID-19 mortality rates

| Variable | Coefficient |
| --- | --- |
| **Primary explanatory variable** | |
| Union | −1.289** |
| **County** | |
| Population | 0.503 |
| COVID-19 cases per capita | 41.960 |
| **Facility** | |
| Average age of residents (years) | 0.166**** |
| Residents who were obese (%) | 0.067 |
| Resident acuity | 1.301 |
| White residents (%) | 0.027*** |
| Primary payer: Medicaid (%) | 0.016 |
| Primary payer: Medicare (%) | 0.050 |
| Ratio of staff to residents: RNs | 0.411 |
| Ratio of staff to residents: LPNs | 1.128 |
| Ratio of staff to residents: CNAs | −1.301*** |
| **Five-Star Quality Rating System score** | |
| Two-star rating | −0.819 |
| Three-star rating | −0.462 |
| Four-star rating | −0.028 |
| Five-star rating | 0.671 |
| Occupancy rate | 2.135 |
| No. of occupied beds | −0.001 |
| For-profit | −0.462 |
| Chain | 2.429** |
| N | 355 |
| R² | 0.300 |
| Adjusted R² | 0.284 |
| Residual standard deviation | 4.067 |

SOURCE Authors’ analysis of data on confirmed COVID-19 deaths in nursing homes from the New York State Department of Health; union representation from 1199SEIU United Healthcare Workers East, New York State Nurses Association, the International Brotherhood of Teamsters, and Communication Workers of America; and covariates from the Centers for Medicare and Medicaid Services, Brown University’s LTCfocus project, and USAFacts. NOTES Results based on ordinary least squares regression. Univariate model (model 1) regressed COVID-19 mortality rates against union status. Key regression statistics for the univariate model: intercept, 5.529 (95% confidence interval: 4.436, 6.622), $p$ < 0.001; coefficient on “union,” −1.805 (95% CI: −2.988, −0.621), $p$ = 0.003; N = 355; R² = 0.033; adjusted R² = 0.030; residual standard deviation = 4.733. The 95% confidence intervals for the model were calculated with robust standard errors. Resident acuity is defined in the notes to exhibit 1. RN is registered nurse. LPN is licensed practical nurse. CNA is certified nursing assistant. **$p<0.05$ ***$p<0.01$ ****$p<0.001$.

### Secondary Analyses

**Personal Protective Equipment Access:** In our secondary analysis of PPE access, we used data from a larger cohort of nursing homes. 
Of the 418 facilities reporting data on PPE, nineteen were excluded because of missing data on covariates, resulting in a sample of 399 nursing homes (appendix exhibit A1).29 In the resulting sample of facilities, 83 percent reported having N95 respirators, 92 percent eye shields, 96 percent surgical masks, 84 percent gowns, 96 percent gloves, and 95 percent hand sanitizer (data not shown).29 In regression analyses, we found that the presence of a labor union was associated with an 11.5-percentage-point increase (95% CI: 2.1, 20.9; $p$ = 0.017) in the probability of a facility having access to N95 respirators and a 6.7-percentage-point increase (95% CI: 0.3, 13.0; $p$ = 0.039) in the probability of having access to eye shields (exhibit 5). Unions were therefore associated with a 13.8 percent relative increase in access to N95 respirators and a 7.3 percent relative increase in access to eye shields. Labor union status was not a significant predictor of access to other types of PPE.

Exhibit 5: Union coefficients from OLS models of PPE access (n = 399) and the COVID-19 infection rate (n = 371)

               N95          Eye       Surgical                         Hand       Infection
               respirators  shields   masks      Gowns     Gloves     sanitizer  rate
  Union        0.115**      0.067**   −0.010     0.030     −0.021     0.000      −50.089**
  R2           0.102        0.082     0.056      0.049     0.066      0.070      0.154
  Adjusted R2  0.057        0.036     0.009      0.001     0.019      0.023      0.106

SOURCE Authors' analysis of data on availability of PPE and COVID-19 infection rates from the Centers for Medicare and Medicaid Services (CMS); union representation from 1199SEIU United Healthcare Workers East, New York State Nurses Association, the International Brotherhood of Teamsters, and Communication Workers of America; and covariates from CMS, the New York State Department of Health, Brown University's LTCfocus project, and USAFacts. NOTES Results based on ordinary least squares regression. The 95% confidence intervals for the model were calculated with robust standard errors.
**$p<0.05$.

COVID-19 Infection Rates: In our secondary analysis of COVID-19 infection rates, we used data from the 371 nursing homes in this cohort that reported data on COVID-19 infection rates. In the resulting sample, the average COVID-19 infection rate was 119.4 per 1,000 residents, and 148 of these nursing homes had infection rates of zero, minimizing concerns that CMS dropped facilities without COVID-19 infections. In regression analyses, we found that the presence of a labor union was associated with a 50.1-point decrease in the number of COVID-19 infections per 1,000 residents (95% CI: −96.2, −3.9; $p$ = 0.034) (exhibit 5). Because the mean COVID-19 infection rate during our study period was 119.4 per 1,000 residents, the covariate-adjusted estimates suggest that the presence of a labor union was associated with a 42 percent relative decrease in the COVID-19 infection rate.

Sensitivity Analyses

In ordinary least squares models examining whether the New York State Department of Health reported data on COVID-19 deaths for a given facility, the presence of a health care worker union was not a statistically significant predictor (appendix exhibit A2).29 Analyses using inverse probability weighting to address potential nonrandom selection into reporting COVID-19 deaths yielded similar estimates of the association between COVID-19 deaths and unionization (appendix exhibit A3).29 Our results remained statistically significant when we estimated commuting zone–level wild cluster bootstrap robust standard errors (appendix exhibit A4).29 Findings were also robust when we adjusted for region-level fixed effects and clustered standard errors at the county level (appendix exhibit A5).29 Our results were similar even under the conservative assumption that all nonreporting nursing homes experienced zero COVID-19 deaths (appendix exhibit A6).29 Simulating the potential effect of additional unmeasured confounders did not reverse the substantive
finding (appendix exhibit A7).29 Last, the results of our secondary analyses were similar when we used only the nursing homes from our original cohort that also reported data on PPE and COVID-19 infection rates (appendix exhibits A8 and A9).29

Discussion

Among 355 nursing homes in New York State for which data on COVID-19 mortality rates were available, the presence of a health care worker union was associated with a 30 percent lower mortality rate from COVID-19 among nursing home residents. The findings were robust to adjustment for a range of covariates and to specification checks for bias from missing data. We also found that nursing homes with labor unions had greater access to PPE and lower COVID-19 infection rates—two important mechanisms that may link unions to lower COVID-19 mortality rates. Specifically, unions were associated with a 13.8 percent relative increase in access to N95 respirators and a 7.3 percent relative increase in access to eye shields. We also found that unions were associated with a 42 percent relative decrease in COVID-19 infection rates among nursing home residents. However, more research is needed to understand the numerous mechanisms through which unions may influence COVID-19 mortality rates, such as staff training, reduced use of part-time workers, implementation of infection protocols, and giving workers a collective voice in the workplace.9,14–17,31 Amid the COVID-19 pandemic, unions advocated for supplies and policies that protect staff and residents from SARS-CoV-2 infection. As more than 40 percent of all COVID-19 deaths have occurred in nursing homes,2,3 there is an urgent need to understand the factors that protect residents and staff.
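The relative effect sizes quoted above are each absolute regression coefficient divided by the corresponding baseline mean. A quick arithmetic check (the PPE baselines are the rounded shares reported earlier, so the N95 ratio lands at 13.9 rather than the article's 13.8 percent, which presumably used unrounded data):

```python
# Each relative effect = coefficient from exhibit 5 (or exhibit 4) divided
# by the baseline mean reported in the text.
rel_n95 = 0.115 / 0.83          # N95 coefficient / baseline share with N95s
rel_eye = 0.067 / 0.92          # eye-shield coefficient / baseline share
rel_infection = 50.089 / 119.4  # infection coefficient / mean infection rate
rel_mortality = 1.289 / 4.279   # union coefficient / mean mortality rate

print(round(100 * rel_n95, 1), round(100 * rel_eye, 1),
      round(100 * rel_infection), round(100 * rel_mortality))  # 13.9 7.3 42 30
```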
Although our study design precluded causal interpretations, our results suggest that unions may have reduced COVID-19 deaths among nursing home residents by successfully demanding PPE for health care workers. These are especially important contributions, given that early research on COVID-19 in nursing homes found that only facility size and location, rather than quality metrics, were associated with COVID-19 outbreaks.2,11 Our finding that unions are associated with reduced COVID-19 mortality rates in nursing homes is consistent with previous findings that unions improve safety and health standards for workers,15,16,32 help coenforce those standards with employers,33 and also reduce workplace injuries34,35 and accidental deaths.36 Health care worker unions, in particular, are also associated with improved patient outcomes.14,19,37 Our study identified additional facility-level factors associated with COVID-19 mortality rates in nursing homes. Certified nursing assistant staffing ratios were associated with lower COVID-19 mortality rates, while the average age of residents and the percentage of facility residents who were White were both associated with higher COVID-19 mortality rates. However, this finding regarding race was not statistically significant when we included region-level fixed effects (appendix exhibit A5).29 This suggests that the result may have been driven by unmeasured confounders that varied across regions. Unfortunately, missing race and ethnicity data for people who died from COVID-19 limited our ability to further examine racial disparities. We also found that chain nursing homes were associated with higher COVID-19 mortality rates. Previous research similarly finds that nursing home chains are associated with lower-quality care.16,17,38 Our study had three main strengths that improve our understanding of the relationships among labor unions, access to PPE, and COVID-19 infection and mortality in nursing homes. 
First, we combined several data sources to identify unionized nursing homes in New York while also adjusting for facility and community covariates. Second, we performed important sensitivity analyses to address the New York State Department of Health's selective reporting of COVID-19 deaths in nursing homes. We found that the presence of a union was not associated with the reporting of data on COVID-19 deaths. Our results were also robust when we used inverse probability weighting to address potential nonrandom selection and accounted for potential unmeasured confounders. Third, to our knowledge, this study was the first to demonstrate that labor unions were associated with reduced COVID-19 infections and deaths in an essential industry. Lack of data on COVID-19 infections by occupation has thus far hindered research on whether unions can protect workers and the public.

Our results have significant implications for stakeholders concerned with COVID-19 mortality in nursing homes. Health care worker unions were associated with reduced mortality rates in the initial COVID-19 surge in the United States. Future surges of COVID-19 infections in regions with fewer unionized nursing homes are therefore particularly worrisome.

Conclusion

Residents in US nursing homes have been disproportionately affected by COVID-19. In New York State, the presence of a health care worker labor union was associated with a 30 percent relative decrease in the COVID-19 mortality rate compared with facilities without unions. Health care worker unionization may play an important role in ensuring access to appropriate PPE and implementing infection control policies that protect vulnerable nursing home residents.
ACKNOWLEDGMENTS

Simeon Kimmel consulted for Abt Associates on a Massachusetts Department of Public Health–funded project to improve access to medications for opioid use disorder in skilled nursing facilities. An unedited version of this article was published online September 10, 2020, as a Fast Track Ahead Of Print article. That version is available in the online appendix.

NOTES

• 1 Centers for Disease Control and Prevention. United States COVID-19 cases and deaths by state [Internet]. Atlanta (GA): CDC; 2020 [cited 2020 Sep 10]. Available from: https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/cases-in-us.html
• 2 Abrams HR, Loomer L, Gandhi A, Grabowski DC. Characteristics of U.S. nursing homes with COVID-19 cases. J Am Geriatr Soc. 2020;68(8):1653–6.
• 3 Henry J. Kaiser Family Foundation. State data and policy actions to address coronavirus [Internet]. San Francisco (CA): KFF; 2020 Sep 9 [cited 2020 Sep 10]. Available from: https://www.kff.org/health-costs/issue-brief/state-data-and-policy-actions-to-address-coronavirus/
• 4 Roxby AC, Greninger AL, Hatfield KM, Lynch JB, Dellit TH, James A, et al. Detection of SARS-CoV-2 among residents and staff members of an independent and assisted living community for older adults—Seattle, Washington, 2020. MMWR Morb Mortal Wkly Rep. 2020;69(14):416–8.
• 5 Chow EJ, Schwartz NG, Tobolowsky FA, Zacks RLT, Huntington-Frazier M, Reddy SC, et al. Symptom screening at illness onset of health care personnel with SARS-CoV-2 infection in King County, Washington. JAMA. 2020;323(20):2087–9.
• 6 McMichael TM, Clark S, Pogosjans S, Kay M, Lewis J, Baer A, et al. COVID-19 in a long-term care facility—King County, Washington, February 27–March 9, 2020. MMWR Morb Mortal Wkly Rep. 2020;69(12):339–42.
• 7 Mosites E, Parker EM, Clarke KEN, Gaeta JM, Baggett TP, Imbert E, et al.; COVID-19 Homelessness Team. Assessment of SARS-CoV-2 infection prevalence in homeless shelters—four U.S. cities, March 27–April 15, 2020. MMWR Morb Mortal Wkly Rep. 2020;69(17):521–2.
• 8 DePasquale N, Bangerter LR, Williams J, Almeida DM. Certified nursing assistants balancing family caregiving roles: health care utilization among double- and triple-duty caregivers. Gerontologist. 2016;56(6):1114–23.
• 9 Travers J, Herzig CTA, Pogorzelska-Maziarz M, Carter E, Cohen CC, Semeraro PK, et al. Perceived barriers to infection prevention and control for nursing home certified nursing assistants: a qualitative study. Geriatr Nurs. 2015;36(5):355–60.
• 10 Centers for Medicare and Medicaid Services. Toolkit on state actions to mitigate COVID-19 prevalence in nursing homes. Version 9 [Internet]. Baltimore (MD): CMS; 2020 Sep [cited 2020 Sep 10]. Available from: https://www.cms.gov/files/document/covid-toolkit-states-mitigate-covid-19-nursing-homes.pdf
• 11 McGarry BE, Grabowski DC, Barnett ML. Severe staffing and personal protective equipment shortages faced by nursing homes during the COVID-19 pandemic. Health Aff (Millwood). 2020;39(10):1812–21.
• 12 Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288(16):1987–93.
• 13 Spetz J. Nursing wage premiums in large hospitals: what explains the size-wage effect? AHSR FHSR Annu Meet Abstr. 1996;13:100–1.
• 14 Ash M, Seago JA. The effect of registered nurses' unions on heart-attack mortality. ILR Rev. 2004;57(3):422–42.
• 15 Morantz AD. Coal mine safety: do unions make a difference? ILR Rev. 2013;66(1):88–116.
• 16 Sojourner AJ, Yang J. Effects of union certification on workplace-safety enforcement: regression-discontinuity evidence. ILR Rev. 2020 Sep 3. [Epub ahead of print].
• 17 Freeman RB, Medoff JL. What do unions do? New York (NY): Basic Books; 1984.
• 18 McNamara A. Nurses across the country protest lack of protective equipment. CBS News [serial on the Internet]. 2020 Mar 28 [cited 2020 Sep 10]. Available from: https://www.cbsnews.com/news/health-care-workers-protest-lack-of-protective-equipment-2020-03-28/
• 19 Dube A, Kaplan E, Thompson O. Nurse unions and patient outcomes. ILR Rev. 2016;69(4):803–33.
• 20 New York State Department of Health. NYS health profiles [Internet]. Albany (NY): The Department; 2020 [cited 2020 Sep 10]. Available from: https://profiles.health.ny.gov/nursing_home/index#5.42/42.868/-76.809
• 21 Brown School of Public Health. Long-term care: facts on care in the US [Internet]. Providence (RI): LTCfocus; [cited 2020 Sep 10]. Available from: http://ltcfocus.org/
• 22 US coronavirus cases and deaths. USAFacts [serial on the Internet]. 2020 [last updated 2020 Sep 9; cited 2020 Sep 10]. Available from: https://usafacts.org/visualizations/coronavirus-covid-19-spread-map/
• 23 Gorges RJ, Konetzka RT. Staffing levels and COVID-19 cases and outbreaks in U.S. nursing homes. J Am Geriatr Soc. 2020 Aug 8. [Epub ahead of print].
• 24 Gray BH, McNerney WJ. For-profit enterprise in health care, the Institute of Medicine study. N Engl J Med. 1986;314(23):1523–8.
• 25 Ben-Ner A, Karaca-Mandic P, Ren T. Ownership and quality in markets with asymmetric information: evidence from nursing homes. BE J Econ Anal Policy. 2012;12(1):42.
• 26 Cenziper D, Jacobs J, Mulcahy S. Nearly 1 in 10 nursing homes nationwide report coronavirus cases. Washington Post [serial on the Internet]. 2020 Apr 20 [cited 2020 Sep 10]. Available from: https://www.washingtonpost.com/business/2020/04/20/nearly-one-10-nursing-homes-nationwide-report-coronavirus-outbreaks/
• 27 Colin Cameron A, Gelbach JB, Miller DL. Bootstrap-based improvements for inference with clustered errors. Rev Econ Stat. 2008;90(3):414–27.
• 28 Oster E. Unobservable selection and coefficient stability: theory and evidence. J Bus Econ Stat. 2019;37(2):187–204.
• 29 To access the appendix, click on the Details tab of the article online.
• 30 Knauss T. NY didn't count nursing home coronavirus victims for weeks; then, a stumbling rush for a death toll. Syracuse.com [serial on the Internet]. 2020 May 19 [last updated 2020 May 30; cited 2020 Sep 10]. Available from: https://www.syracuse.com/coronavirus/2020/05/ny-didnt-count-nursing-home-coronavirus-victims-for-weeks-then-a-stumbling-rush-for-a-death-toll.html
• 31 Berridge C, Lima J, Schwartz M, Bishop C, Miller SC. Leadership, staff empowerment, and the retention of nursing assistants: findings from a survey of U.S. nursing homes. J Am Med Dir Assoc. 2020;21(9):1254–1259.e2.
• 32 Zoorob M. Does "right to work" imperil the right to health? The effect of labour unions on workplace fatalities. Occup Environ Med. 2018;75(10):736–8.
• 33 Fine J. Enforcing labor standards in partnership with civil society: can co-enforcement succeed where the state alone has failed? Polit Soc. 2017;45(3):359–88.
• 34 Kleiner MM, Weil D. Evaluating the effectiveness of National Labor Relations Act remedies: analysis and comparison with other workplace penalty policies. In: Estlund CL, Wachter ML, editors. Research handbook on the economics of labor and employment law. Cheltenham (UK): Edward Elgar; 2012. p. 209–47.
• 35 Clarke SP, Rockett JL, Sloane DM, Aiken LH. Organizational climate, staffing, and safety equipment as predictors of needlestick injuries and near-misses in hospital nurses. Am J Infect Control. 2002;30(4):207–16.
• 36 Eisenberg-Guyot J, Mooney SJ, Hagopian A, Barrington WE, Hajat A. Solidarity and disparity: declining labor union density and changing racial and educational mortality inequities in the United States. Am J Ind Med. 2020;63(3):218–31.
• 37 Sojourner AJ, Frandsen BR, Town RJ, Grabowski DC, Chen MM. Impacts of unionization on quality and productivity. ILR Rev. 2015;68(4):771–806.
• 38 Davis MA. Nursing home ownership revisited: market, cost and quality relationships. Med Care. 1993;31(11):1062–8.
https://www.physicsforums.com/threads/boundary-value-problem.361808/
Homework Help: Boundary value problem

1. Dec 8, 2009 EvilKermit

1. The problem statement, all variables and given/known data
a) Solve the BVP, where A is a real number.
b) For what values of A does there exist a unique solution? What is the solution?
c) For what values of A do there exist infinitely many solutions?
d) For what values of A do there exist no solutions?

2. Relevant equations
y'' + y = A + sin(2x)
y(0) = y'(π/2) = 2

3. The attempt at a solution
y = yh + yp
Characteristic equation: λ² + 1 = 0
yh = c1*cos(x) + c2*sin(x)
Try yp = A + B*sin(2x):
yp'' = −4B*sin(2x)
A + sin(2x) = A − 3B*sin(2x), so B = −1/3
yp = A − (1/3)*sin(2x)
y = A − (1/3)*sin(2x) + c1*cos(x) + c2*sin(x)
y' = −(2/3)*sin(2x) − c1*sin(x) + c2*cos(x)
2 = A + c1
2 = 2/3 + c2, c2 = 8/3
y = A − (1/3)*sin(2x) + c1*cos(x) + (8/3)*sin(x)

I'm confused about answering the questions. A could be any real number, since one can solve for c1. How can I give "the" solution? There is a unique solution for each value of A, so I would have to write infinitely many solutions. And there is no value of A for which there are infinitely many solutions or no solutions. If A is defined, what would the answer be?

2. Dec 9, 2009 HallsofIvy

So c1 = 2 − A and
y = A − (1/3)*sin(2x) + (2 − A)*cos(x) + (8/3)*sin(x)

3. Dec 9, 2009 EvilKermit

So:
y = A − (1/3)*sin(2x) + (2 − A)*cos(x) + (8/3)*sin(x)

c) For what values of A do there exist infinitely many solutions?
d) For what values of A do there exist no solutions?

Would c and d then be that no values of A give infinitely many solutions or no solution?

4. Dec 10, 2009

Yes.
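As an independent cross-check on the algebra in the thread (not part of the original posts): the derivative of the particular term −(1/3) sin 2x is −(2/3) cos 2x, and with that derivative both boundary conditions turn out to constrain c1 while c2 drops out of both. A short numeric verification:

```python
import math

# General solution from the thread: y = A - (1/3) sin 2x + c1 cos x + c2 sin x.
# Its derivative is y' = -(2/3) cos 2x - c1 sin x + c2 cos x.

def y(x, A, c1, c2):
    return A - math.sin(2 * x) / 3 + c1 * math.cos(x) + c2 * math.sin(x)

def ode_residual(x, A, c1, c2, h=1e-4):
    # central-difference check that y'' + y - (A + sin 2x) is ~0
    ypp = (y(x + h, A, c1, c2) - 2 * y(x, A, c1, c2) + y(x - h, A, c1, c2)) / h ** 2
    return ypp + y(x, A, c1, c2) - (A + math.sin(2 * x))

def yprime(x, A, c1, c2, h=1e-5):
    return (y(x + h, A, c1, c2) - y(x - h, A, c1, c2)) / (2 * h)

A, c1, c2 = 1.0, 2.5, -4.0  # arbitrary values: the ODE holds for any choice
print(abs(ode_residual(0.7, A, c1, c2)) < 1e-5)                 # True
print(abs(y(0, A, c1, c2) - (A + c1)) < 1e-12)                  # True: y(0) = A + c1
print(abs(yprime(math.pi / 2, A, c1, c2) - (2/3 - c1)) < 1e-8)  # True: y'(pi/2) = 2/3 - c1
# c2 appears in neither boundary value, so y(0) = y'(pi/2) = 2 constrains
# only A and c1, leaving c2 free.
```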
https://mathematica.stackexchange.com/questions/101156/stretching-a-parametric-plot
# Stretching a parametric plot [closed]

I am trying to plot the simple parametric function given by

ParametricPlot[{0.5 x (1 - x^2), 0.5 x (1 + x^2)}, {x, 0, 0.75},
 AxesLabel -> {a, b}]

Mathematica plots it, but the graph is squashed. Is there a way to stretch the axes (in particular the a-axis) so it looks more presentable? Thanks.

## closed as off-topic by m_goldberg, Dr. belisarius, MarcoB, user9660, xzczd Dec 4 '15 at 5:43

The users who voted to close gave this reason: the question arises from a simple mistake or is easily answered from the documentation and is unlikely to help future visitors.

• Try AspectRatio -> 1/GoldenRatio or AspectRatio -> 1 – eldo Dec 4 '15 at 0:21
• Works well enough. Thanks! – Gregory Dec 4 '15 at 0:24
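For context on why the default rendering looks squashed: over the plotted parameter range the first coordinate spans only about a third of what the second does, so drawing both axes at their true scale gives a tall, narrow figure. A quick check of the extents (plain Python, independent of Mathematica):

```python
# Extents of {a(x), b(x)} = {0.5 x (1 - x^2), 0.5 x (1 + x^2)} on 0 <= x <= 0.75,
# sampled on a fine grid. Both coordinates start at 0 at x = 0.
xs = [i * 0.75 / 1000 for i in range(1001)]
a_vals = [0.5 * x * (1 - x * x) for x in xs]
b_vals = [0.5 * x * (1 + x * x) for x in xs]

print(round(max(a_vals), 3), round(max(b_vals), 3))  # 0.192 0.586
```

Options such as `AspectRatio -> 1/GoldenRatio` or `AspectRatio -> 1`, as suggested in the comments, override the true-scale aspect and stretch the a-axis.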
https://www.projecteuclid.org/euclid.aos/1151418250
## The Annals of Statistics

### Strong invariance principles for sequential Bahadur–Kiefer and Vervaat error processes of long-range dependent sequences

#### Abstract

In this paper we study strong approximations (invariance principles) of the sequential uniform and general Bahadur–Kiefer processes of long-range dependent sequences. We also investigate the strong and weak asymptotic behavior of the sequential Vervaat process, that is, the integrated sequential Bahadur–Kiefer process, properly normalized, as well as that of its deviation from its limiting process, the so-called Vervaat error process. It is well known that the Bahadur–Kiefer and the Vervaat error processes cannot converge weakly in the i.i.d. case. In contrast to this, we conclude that the Bahadur–Kiefer and Vervaat error processes, as well as their sequential versions, do converge weakly to a Dehling–Taqqu type limit process for certain long-range dependent sequences.

#### Article information

Source: Ann. Statist., Volume 34, Number 2 (2006), 1013–1044.
Dates: First available in Project Euclid: 27 June 2006
Permanent link: https://projecteuclid.org/euclid.aos/1151418250
Digital Object Identifier: doi:10.1214/009053606000000164
Mathematical Reviews number (MathSciNet): MR2283402
Zentralblatt MATH identifier: 1113.60034

#### Citation

Csörgő, Miklós; Szyszkowicz, Barbara; Wang, Lihong. Strong invariance principles for sequential Bahadur–Kiefer and Vervaat error processes of long-range dependent sequences. Ann. Statist. 34 (2006), no. 2, 1013–1044. doi:10.1214/009053606000000164. https://projecteuclid.org/euclid.aos/1151418250
http://answers.gazebosim.org/question/20217/physics-problem-with-simple-movement-of-grasped-object/
# Physics problem with simple movement of grasped object

The robot tool is a prismatic gripper with four rods that slip into matching holes on the object to be grasped. Using this gripper to move the object works fine for motion perpendicular to the rod axes. In the video, for example, the rods are parallel to the world x-axis, so motion in the z-y plane is fine; the robot lifts the object and then translates in the negative y-direction without trouble.

I think the problem is that the friction force is not being computed well: when the robot translates in the x-direction, the object seems to stay put until it reaches the end of the rods, and then it goes berserk. (If you look closely in the second video you can see that the gripper starts to move to the right while the object stays still. I guess the object crosses the camera in that video, so we get to see the holes and rods inside the object.)

There is a second reason I think the friction calculation is to blame: a different pick-and-place simulation I made this year had problems lifting objects via friction. See Question 1, Question 2, and Question 3.

========== update: This video, where I replaced the gripper with a rigid forklift-type attachment, lends weight, I think, to the idea that the problem is due to physics (friction?) and not the ROS controller or something. ==========

I've tried using all four physics engines and modifying physics tags in the world file, as well as collision surface tags in both the object sdf and the gripper urdf. It's apparent that changing these parameters has some effect (though mu1 and mu2 don't make any difference), but the problem persists. The files are pasted below.

It also seems to me that this could be a problem to do with Gazebo 9, which is what I moved to because the previous project had a mobile base that could not move due to this bug with Gazebo 7 and ros_control.
Since this simulation does not have a mobile base I will try reverting to Gazebo 7 to see if that fixes the gripping problem.

update: The behavior is identical with Gazebo 7. I've seen the same thing on two computers running Gazebo 9 and now one computer running Gazebo 7. An additional tidbit: if the mass of the object is too high (though still less than 1) it will fall during the upward motion. So that's fun :-/

World file:

```xml
<sdf version='1.5'>
  <world name='default'>
    <!-- A global light source -->
    <include>
      <uri>model://sun</uri>
    </include>
    <!-- A ground plane -->
    <include>
      <uri>model://ground_plane</uri>
    </include>
    <!-- Set physics parameters to hopefully improve contact behavior -->
    <!-- Default max_step_size is 0.001 -->
    <!-- Product of real_time_update_rate and max_step_size gives target realTimeFactor, I think. Default rate is 1000. -->
    <!-- Default iters is 50 -->
    <physics type="ode">
      <max_step_size>0.0005</max_step_size>
      <real_time_update_rate>2000</real_time_update_rate>
      <ode>
        <solver>
          <iters>100</iters>
        </solver>
      </ode>
    </physics>
  </world>
</sdf>
```

Object sdf:

```xml
<sdf version='1.6'>
  <model name='object'>
    <link name='object ...
```
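For what it's worth, the world file's comment about the real-time factor checks out arithmetically. A quick sanity check (plain Python; the variable names simply mirror the SDF tags):

```python
# Gazebo's target real-time factor is max_step_size * real_time_update_rate:
# simulated seconds advanced per wall-clock second, if the physics keeps up.
max_step_size = 0.0005          # seconds of sim time per physics step (default 0.001)
real_time_update_rate = 2000    # physics steps per wall-clock second (default 1000)

target_real_time_factor = max_step_size * real_time_update_rate
print(target_real_time_factor)  # -> 1.0, i.e. the sim aims to run at real time
```

So halving the step size while doubling the update rate keeps the simulation at real time, just with finer (and hopefully more stable) contact resolution.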
https://www.physicsforums.com/threads/did-i-do-this-integral-right.63953/
Homework Help: Did I do this integral right? 1. Feb 15, 2005 UrbanXrisis $$\int \frac {e^x+4}{e^x}dx =?$$ Here's what I did: $$\int \frac {e^x+4}{e^x}dx = \int e^{-x}(e^x+4)dx$$ $$\int e^{-x}(e^x+4)dx =\int 1+4e^{-x} = x-\frac {4e^{-x+1}}{x+1}$$ Did I do this correctly? Is there a more simplified answer? 2. Feb 15, 2005 Galileo Rewriting $\frac{e^x+4}{e^x}=1+4e^{-x}$ was correct. Check the antiderivative of $e^{-x}$. Your answer is not correct. You can easily check it by differentiating it. Mind the difference between $x^a$ where the base is the variable and $a^x$ where the base is constant and the exponent is the variable. 3. Feb 15, 2005 UrbanXrisis since $$\int e^{x} = e^{x}+C$$ then... $$\int 1+4e^{-x} = x+4e^{-x}+C$$ is that correct? 4. Feb 15, 2005 Jameson When you differentiate $$x + 4e^{-x} + C$$ you get $$1 - 4e^{-x}$$ , so the integral is actually $$\int 1 + 4e^{-x} dx = x - 4e^{-x} + C$$ 5. Feb 15, 2005 UrbanXrisis I dont understand where the negative came from 6. Feb 15, 2005 Jameson The integral of $$e^x dx = {e^x} + C$$ The integral of $$e^{-x}dx = -e^{-x} + C$$. Differentiating that answer you find that $$\frac {d}{dx} -e ^{-x} = e^{-x}$$ 7. Feb 15, 2005 dextercioby Think of it as an $e^{u}$ and apply the method of substitution: $$-x=u$$ Daniel. P.S.That's how u end up with the minus. 8. Feb 15, 2005 UrbanXrisis $$\int \frac {e^x}{e^x+4}dx =?$$ Here's what I did: $$= \int e^{x}(e^x+4)^{-1}dx$$ subsitute: $$u=e^x+4$$ $$du=e^x dx$$ $$\int u^{-1}du =ln(e^x+4)$$ Did I do this correctly? Is there a more simplified answer? 9. Feb 15, 2005 NateTG Looks good to me. 10. Feb 15, 2005 dextercioby Don't forget the constant of integration. Daniel.
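Both results arrived at in the thread can be confirmed symbolically. A quick check with sympy (my addition, not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')

# First integral: (e^x + 4)/e^x = 1 + 4 e^{-x}
f1 = (sp.exp(x) + 4) / sp.exp(x)
F1 = x - 4 * sp.exp(-x)              # Jameson's corrected antiderivative
assert sp.simplify(sp.diff(F1, x) - f1) == 0

# Second integral: e^x/(e^x + 4), via the substitution u = e^x + 4
f2 = sp.exp(x) / (sp.exp(x) + 4)
F2 = sp.log(sp.exp(x) + 4)           # UrbanXrisis's substitution result
assert sp.simplify(sp.diff(F2, x) - f2) == 0

print("both antiderivatives check out")
```

Differentiating each candidate and subtracting the integrand is exactly the "check it by differentiating" advice from the thread, just automated.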
https://cs184.eecs.berkeley.edu/sp20/lecture/4-49/transforms
Lecture 4: Transforms (49)

FLinesse: Another closely connected field in which the Rodrigues formula is commonly used to express a general rotation is robotics! (movement at rotational joints)
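As a small illustration (my own sketch, not course material): the Rodrigues formula rotates a vector v about a unit axis k by an angle theta,

```python
import numpy as np

# Rodrigues rotation formula:
#   v_rot = v cos(theta) + (k x v) sin(theta) + k (k . v)(1 - cos(theta))
def rodrigues(v, k, theta):
    k = k / np.linalg.norm(k)                       # ensure the axis is a unit vector
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1 - np.cos(theta)))

# Rotating the x-axis 90 degrees about the z-axis gives the y-axis,
# exactly what a single revolute robot joint does.
v_rot = rodrigues(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), np.pi / 2)
print(np.round(v_rot, 6))  # -> [0. 1. 0.]
```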
https://blender.stackexchange.com/questions/50489/copy-a-node-based-material-to-another-object-and-then-freely-edit-the-second-on
# Copy a node-based material to another object, and then freely edit the second one?

I have a scene with two mountains; one is blue, and I would like the other one (the small one) to be pink. In edit mode, the objects have been separated (P). When I click on the second object and add a new material with the "+" sign of the material tab, it creates a new material. Then I SHIFT + RMB the materials, to be able to "Copy Material to Others".

I then go into the new instance of the material (by clicking on the small mountain) and change the color to green (I know I said pink... forgive me)...

... and all the colors change! But I thought I was editing a separate material; I just wanted the small mountain to change. Even the other instances of materials I created with the "+" button have all suddenly changed to green. Every one of them. On both meshes.

So, are node-based materials like physics simulations, in that "there can be only one"? Obviously it's possible to recreate materials from scratch every time, but... there's gotta be a way to "unlink" these material settings, so that I can make a new object, apply this material and then change the color or texture, based on the existing tree?

Here is another example of trying to duplicate a material, where all the materials change together when I only edit one of them:

• You have either 2 objects that use the same datablock (this can happen e.g. if linked duplicating via Alt+D) or one material for 2 objects. Related - blender.stackexchange.com/questions/23386/… or blender.stackexchange.com/questions/2852/… – Mr Zak Apr 11 '16 at 10:26
• In your last gif it is affected by the same problem - you use the same material for both slots. This is covered in both the answers added here and the linked ones. The main thing to look at is the number of users to the right of the material's name.
– Mr Zak Apr 14 '16 at 11:57

First, you don't need to separate your mountains into different objects to assign different materials to different parts; you can have multiple materials on the same object, using several material slots per object. Just go to the Properties Window > Materials and in the list press the + button to create a new slot and assign a new material to it. In edit mode you can then select which part of the mesh you want associated with that material by selecting the corresponding faces, selecting the desired material slot and then pressing the Assign button.

As for the materials themselves, you can create a new one from scratch or you can base a new material off an existing one. You should start by naming your materials properly, so as to avoid an endless list of Material.### that doesn't give you a clue of what you are doing. Then, if you want to base a material on an existing one, choose it from that list (a descriptive name will help a lot here); after selecting it, a small number should appear next to it. That number indicates how many objects are using the material you just assigned. By clicking on that number you make the material a single user, which means it becomes a new, unique and independent material not used by any other object, but keeping the same settings and node setup as the original.

If this workflow doesn't suit you, you can always add a new material from scratch, go to the original one you want to copy from, and copy-paste the nodes from one to the other with Ctrl + C, Ctrl + V.

• I accepted your answer, but I am still finding the problem in some cases... I added an animated screen grab at the end of the question to show the two materials changing together even after duplication.
– MicroMachine Apr 14 '16 at 3:07
• You did not press the number button as described in my answer; you didn't make it single user, so what you are doing in that screen grab is using the same material in two different slots of the same object. You are simply renaming the material, so you see the name changing in both places. You have to press the + button first to make it a unique material. – Duarte Farrajota Ramos Apr 14 '16 at 14:01
• Just to make clear, I mean the + button in front of the name box, to add a new material, as opposed to the one on the side that will add a new Material Slot – Duarte Farrajota Ramos Apr 14 '16 at 14:04
• Hi, I've edited my original question with a last gif where I follow your advice thoroughly (renaming, clicking on the "2"), and where all the materials change again when I think I'm only editing one. Thanks for your help and patience! – MicroMachine Apr 15 '16 at 20:48
• @DuarteFarrajotaRamos I have thousands of objects for which I need to update their materials' nodes/links using Blender's Python API. I am pretty new to nodes stuff in Cycles and I haven't been able to wrap my head around how I can do that. I wonder, do you know how one can do that? If so, could you please take a look at my question here and see if you can offer a solution? – Amir Mar 8 '18 at 17:04

This has probably already been solved by the OP, but I came across this problem myself, and searching for an answer I stumbled upon this post.
And for the sake of those eventually looking to solve the same problem, I'd like to add an important detail.

The answer given by Duarte Farrajota Ramos is correct, yet the OP said he was still having trouble, and in fact so did I after following those instructions (I was already doing all of that even before seeing the answer, and it wasn't working). The problem here is the nested/grouped nodes inside the material: they have their own users as well. If you only make the main material (in the material tab) single-user, alterations on the first level of nodes won't be passed on to the original materials, but if you alter the inner layers, that is, things inside the groups that compose the material, it WILL change the original material you used as a base.

Note, this will happen even if you create a material from scratch and copy/paste the nodes from another material, because the data of the inner groups is still being shared between materials; only the outer layer is in "single user mode".

To solve this, simply create the new material and make it single-user on the material tab by clicking its users number (if there is one); this makes the outer node layer single-user. Then go into node edit mode, look for the grouped nodes and do the same: you'll see they have their own users number. Click on that, and now you can alter whatever you want, and nothing will leak out of that material.

Material tab: you can see there are no users, yet I was still getting my base material changed.

Node editor: here I found the problem, the inner node group was being shared with 8 other materials.

This is indeed some really basic stuff, but for newcomers it might prove to be a challenge, because it may just pass unnoticed that you must clear the users from all layers. Hope I can help someone. Cheers!
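The behavior described in this thread follows from materials being shared datablocks with a user count. As an illustration (a toy model in plain Python, deliberately not the bpy API), here is why editing one slot appears to edit "all" materials until the datablock is made single-user; the same logic applies recursively to node groups, which carry their own user counts:

```python
# Toy model of Blender-style shared datablocks (illustration only, not bpy).
class Datablock:
    def __init__(self, name, settings):
        self.name = name
        self.settings = dict(settings)   # snapshot of the settings
        self.users = 0

class Slot:
    """A material slot: a pointer to a (possibly shared) datablock."""
    def __init__(self, datablock):
        datablock.users += 1
        self.datablock = datablock

    def make_single_user(self):
        # Equivalent to clicking the user-count button: copy the datablock
        # so further edits stay local to this slot.
        if self.datablock.users > 1:
            self.datablock.users -= 1
            copy = Datablock(self.datablock.name + ".001", self.datablock.settings)
            copy.users = 1
            self.datablock = copy

shared = Datablock("Mountain", {"color": "blue"})
big, small = Slot(shared), Slot(shared)           # both slots share one datablock

small.datablock.settings["color"] = "green"       # edits the SHARED data...
print(big.datablock.settings["color"])            # -> green (both changed)

small.make_single_user()                          # now copy-on-demand
small.datablock.settings["color"] = "pink"        # edits only the copy
print(big.datablock.settings["color"])            # -> green
print(small.datablock.settings["color"])          # -> pink
```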
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-7-exponential-functions-7-4-exponential-growth-and-decay-exercises-page-349/12
## Calculus (3rd Edition)

The decay constant is $4.27\times 10^{-4}$.

The half-life is given by $T_{1/2}=\frac{\ln 2}{k}$, so $$1622= \frac{\ln 2}{k}\Longrightarrow k= \frac{\ln 2}{1622}=4.27\times 10^{-4}.$$ So the decay constant is $4.27\times 10^{-4}$.
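A quick numeric check of the computation above (Python; 1622 is the half-life given in the exercise, the textbook value for radium-226):

```python
import math

# Half-life T = ln 2 / k, so the decay constant is k = ln 2 / T.
T = 1622                      # half-life (years)
k = math.log(2) / T
print(f"{k:.2e}")             # -> 4.27e-04
```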
http://www.physicsforums.com/showthread.php?t=488308&page=4
## Diffeomorphisms Quote by fzero Not necessarily, though you should be a bit more careful about setting things up. For example $$\xi^{a;b}$$ is not itself an antisymmetric 2-form. Well you do use the fact that it's a Killing vector. The fact that it's spacelike should also be tied to the fact that you're using a spacelike hypersurface. Surely from Killing's equation, $\nabla_a \xi_b = - \nabla_b \xi_a$ will be antisymmetric? Do I need to do anything about the two surfaces then? Quote by fzero The stress tensor for a perfect fluid is (175). I have been trying to take the $\nu=0$ component of $T^{\mu \nu}{}_{; \mu}=0$ but it simply will not work! Is this the way I should go about it? Quote by fzero The solutions for $$a(\eta)$$ are (427). What you claim clearly doesn't happen for $$\eta>0$$ in the flat and open universes. But just from a plot of sinh, it looks like a smiley face (i.e. positive second derivatie), no? Quote by fzero Yes, the horizon is composed of the boundary of the present positions of particles which were in casual contact with us at some time in the past. Again, I don't have the figures, but the one on p. 130 is probably relevant. So the particle horizon is the boundary of the past light cone? Recognitions: Gold Member Homework Help Quote by latentcorpse Surely from Killing's equation, $\nabla_a \xi_b = - \nabla_b \xi_a$ will be antisymmetric? Do I need to do anything about the two surfaces then? OK that'll do. As for the two surfaces, you need to figure out what your surface $$\Omega$$ is. Is $$\partial \Omega$$ necessarily connected? I have been trying to take the $\nu=0$ component of $T^{\mu \nu}{}_{; \mu}=0$ but it simply will not work! Is this the way I should go about it? It looks like you need to compute a trace of one of the Christoffel symbols. But just from a plot of sinh, it looks like a smiley face (i.e. positive second derivatie), no? $$\ddot{a}$$ is the 2nd derivative w.r.t. coordinate time, not $$\eta$$. 
I can't see the figures, so I don't know if they should be helpful to you. So the particle horizon is the boundary of the past light cone? The figure is probably clearer than I would put into words. The horizon is a boundary on spacetime at our present expansion parameter. To determine the points that make up the horizon, you need to follow the worldlines of the particles that crossed our past light cone at some earlier time. Quote by fzero OK that'll do. As for the two surfaces, you need to figure out what your surface $$\Omega$$ is. Is $$\partial \Omega$$ necessarily connected? I thought S is the surface given in the formula for J_S. Then $\partial \Omega=S$ and $\Omega$ would just be the enclosed 3-sphere? Quote by fzero The figure is probably clearer than I would put into words. The horizon is a boundary on spacetime at our present expansion parameter. To determine the points that make up the horizon, you need to follow the worldlines of the particles that crossed our past light cone at some earlier time. Well from the figure, it appears to me that the particle horizon actually is the boundary of the past light cone. But if this is so, why don't they just define it as this? This is why I'm having doubts about my interpretation. But I can't think of an example where they wouldn't be equal. Also, for the $\nabla_\mu T^{\mu \nu}$ question, can I take $u^\mu=(1,0,0,0)$ as we are assuming a comoving observer. And apparently they have $u^\alpha=\delta^\alpha{}_0$. However, even though our notes say we can do this for a comoving observer, I find that $\nabla_\mu T^{\mu \nu}=0$ $\nabla_\mu ( \rho u^\mu u^\nu ) + \nabla_\mu ( p u^\mu u^\nu) - \nabla_\mu (pg^{\mu \nu})=0$ Taking $\nu=0$ we get $\nabla_0 ( \rho u^0) + \nabla_i ( \rho u^i) + \nabla_0 ( pu^0) + \nabla_i ( p u^i) - \nabla_0p$ $\frac{\partial \rho}{\partial t}=0$ which is clearly incomplete so something isn't right... Thanks. 
Recognitions: Gold Member Homework Help Quote by latentcorpse Also, for the $\nabla_\mu T^{\mu \nu}$ question, can I take $u^\mu=(1,0,0,0)$ as we are assuming a comoving observer. And apparently they have $u^\alpha=\delta^\alpha{}_0$. However, even though our notes say we can do this for a comoving observer, I find that $\nabla_\mu T^{\mu \nu}=0$ $\nabla_\mu ( \rho u^\mu u^\nu ) + \nabla_\mu ( p u^\mu u^\nu) - \nabla_\mu (pg^{\mu \nu})=0$ Taking $\nu=0$ we get $\nabla_0 ( \rho u^0) + \nabla_i ( \rho u^i) + \nabla_0 ( pu^0) + \nabla_i ( p u^i) - \nabla_0p$ $\frac{\partial \rho}{\partial t}=0$ which is clearly incomplete so something isn't right... Thanks. If you take the calculation another line you should find something like $$\dot{\rho} + (\rho +p){\Gamma^0}_{00}=0$$. Quote by fzero If you take the calculation another line you should find something like $$\dot{\rho} + (\rho +p){\Gamma^0}_{00}=0$$. How did you manage to get any $p$ terms surviving? All of mine cancel.
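The thread's disagreement about where the pressure terms come from can be settled by direct computation. Below is a sympy sketch (my own, not the course notes' derivation; conventions assumed: signature (-,+,+,+), flat FRW in cosmic time, comoving perfect fluid) of the $\nu=0$ component of $\nabla_\mu T^{\mu\nu}=0$. Note that in these coordinates $\Gamma^0{}_{00}=0$; the $(\rho+p)$ contribution enters through $\Gamma^i{}_{i0}$ and $\Gamma^0{}_{ii}$, giving the standard continuity equation $\dot\rho+3(\dot a/a)(\rho+p)=0$.

```python
import sympy as sp

# Flat FRW metric ds^2 = -dt^2 + a(t)^2 (dx^2 + dy^2 + dz^2)
t = sp.symbols('t')
x, y, z = sp.symbols('x y z')
X = [t, x, y, z]
a = sp.Function('a')(t)
rho, p = sp.Function('rho')(t), sp.Function('p')(t)

g = sp.diag(-1, a**2, a**2, a**2)
ginv = g.inv()

def Gamma(l, m, n):
    # Christoffel symbols Gamma^l_{mn} of the metric g
    return sp.Rational(1, 2) * sum(
        ginv[l, k] * (sp.diff(g[k, m], X[n]) + sp.diff(g[k, n], X[m])
                      - sp.diff(g[m, n], X[k]))
        for k in range(4))

u = [1, 0, 0, 0]   # comoving 4-velocity u^mu
T = sp.Matrix(4, 4, lambda m, n: (rho + p) * u[m] * u[n] + p * ginv[m, n])

# nabla_mu T^{mu 0} = d_mu T^{mu 0} + Gamma^mu_{mu k} T^{k 0} + Gamma^0_{mu k} T^{mu k}
div0 = sum(
    sp.diff(T[m, 0], X[m])
    + sum(Gamma(m, m, k) * T[k, 0] for k in range(4))
    + sum(Gamma(0, m, k) * T[m, k] for k in range(4))
    for m in range(4))

print(sp.simplify(div0))   # simplifies to rho' + 3*(a'/a)*(rho + p)
```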
https://arduino.stackexchange.com/questions/26285/attiny44-millis-not-working-with-16-mhz-external-clock
# ATtiny44 millis() not working with 16 MHz external clock

I'm using an ATtiny44 with the Arduino IDE according to this tutorial: http://highlowtech.org/?p=1695

I have a problem with millis(). When I use the internal 1 MHz clock it works correctly, but when I use an external 16 MHz clock, millis() takes much longer than one second to become divisible by 1000. I tested it with an LCD and the Hello World sketch, modified to correspond to the ATtiny's pins.

Does millis() actually work on ATtinys? Why does it work properly at 1 MHz and not at 16 MHz, which is what the Arduino platform uses?

• Did you forget to unprogram CKDIV8? – Ignacio Vazquez-Abrams Jul 13 '16 at 8:20
• Testing whether millis() is divisible by 1000 is a terrible idea. With a 16 MHz clock millis does not count every millisecond: it is updated only every 1024 µs and occasionally jumps by 2 ms. – Edgar Bonet Jul 13 '16 at 9:41
• Please post the Fuses config – Talk2 Jul 13 '16 at 10:40
• If you are specific about 'much more time', that will help people to explain the specific problem you observed. As you can see, there are several possible interpretations of the problem. – Sean Houlihane Jul 13 '16 at 12:06

When millis() was written, it had to assume what the input clock was. There is no way to detect the speed of the input clock: to do so would require another clock! So the writers of the function defined a starting constant, set it to 1000000, and required that anyone who changed the clock also change that constant. Find the constant, and set it to 16000000. Voilà!
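To put numbers on the two points above (assuming the stock AVR Arduino core's Timer0 configuration of prescaler 64 with an 8-bit counter, which is an assumption about the core, not something stated in the question):

```python
# Edgar Bonet's figure: millis() is driven by Timer0 overflows, so at 16 MHz
# it is only updated every 64 * 256 CPU cycles = 1024 microseconds; testing
# millis() % 1000 == 0 can therefore miss updates entirely.
F_CPU = 16_000_000                        # actual crystal frequency, Hz
overflow_us = 64 * 256 * 1e6 / F_CPU      # microseconds per Timer0 overflow
print(overflow_us)                        # -> 1024.0

# The answer's point: if the core was compiled for a different clock constant
# than the clock actually fitted, every timing figure is off by the ratio.
assumed_hz = 1_000_000
scale = F_CPU // assumed_hz
print(scale)                              # -> 16, i.e. timing off by 16x
```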
https://physics.stackexchange.com/questions/633494/what-do-we-exactly-mean-when-we-say-that-a-problem-has-an-analytical-solution
# What do we exactly mean when we say that a problem has an analytical solution?

What do we actually mean when we say that a certain problem has or does not have an analytical solution? I ask this because some systems that are said to have an analytical solution are actually no better off than some systems that are said not to have one. I will give two examples.

The standard harmonic oscillator, whose equation of motion is $$m\ddot{x}=-kx$$, has the general solution $$x(t)=A\cos(\omega t+\phi)=A\cos\delta(t),$$ where $$\omega=\sqrt{\frac{k}{m}}$$ and $$\delta(t)=\omega t + \phi$$. However, that cosine function is defined as $$\cos\delta=\sum_{n=0}^\infty \; (-1)^n \; \frac{\delta^{2n}}{(2n)!},$$ which can only be calculated approximately. How is this different from the simple pendulum, whose solution is defined in terms of Jacobi elliptic functions and can also be calculated only approximately?

Another example: the free particle with linear drag. The equation of motion is $$m\dot{v}=-bv$$. The solution with initial velocity $$u$$ is $$v(t)=ue^{-bt/m}$$. Here, $$e$$ is an irrational number and thus the velocity cannot be calculated exactly. Moreover, for any complicated problem, we may define a function to be the solution of that problem and tabulate it.

So, why do we say the harmonic oscillator has an analytic solution but the pendulum does not? What makes a solution analytical? Also, what does it mean physically for a system to have no analytical solution?

Wikipedia defines an analytic solution as "a mathematical expression constructed using well-known operations that lend themselves readily to calculation". As you say, this simply begs the question of what counts as a "well-known operation". By convention, the exponential, logarithmic, trigonometric and hyperbolic trigonometric functions and their inverses would certainly qualify. Hypergeometric functions, elliptic integrals, Bessel functions etc. are more of a grey area.
Broadly speaking, I would say that if a function has been studied widely enough to be given a name and a notation, and to have its values tabulated in the DLMF or similar references, then you can count it as a "well-known operation". The distinction between systems that have or do not have an analytic solution is entirely a matter of convention, and has no physical significance whatsoever.

However, that cosine function is defined as $$\cos (\delta)=\sum_{n=0}^{\infty}(-1)^{n} \frac{\delta^{2 n}}{(2 n) !}$$, which can only be calculated approximately.

That's not correct. $$\cos\delta$$ is an analytic function and, like all analytic functions, is infinitely differentiable, so it can be rewritten as a convergent series expansion. A strict answer could be that an analytical solution is one that can be written in "closed form", that is, using a finite number of operations. From my experience (which is not much) the election of naming a solution as analytical or not could be in some cases arbitrary. It could depend, in some cases, on whether you can do some analytical manipulation of interest with the solutions; in that case, you can call them analytical (although strictly they aren't). For example: solving the Schrödinger equation for a free particle in spherical coordinates is the same as solving the Bessel equation, whose solutions are strictly not analytical in this sense, but could be said to be analytical. Checking my answer on Wikipedia I discovered the term "analytic expression", which may be the origin of the confusion in this terminology topic.

• Your logic is slightly off here: not all functions which are infinitely differentiable admit a series expansion. This is the distinction between analyticity and $C^\infty$. – Richard Myers May 2 at 21:09
• The reason I say it is that often the outputs of the cosine function are irrational numbers. – Don Al May 2 at 23:36
• @RichardMyers Thank you for your clarification. I thought all $C^{\infty}$ functions admit a series expansion!
– LongJohn May 3 at 8:13 • "the election of naming a solution as analytical or not could be in some cases arbitrary" -- that can be the case, but the election of calling a function analytical or not is not arbitrary; it is either complex-differentiable (and thus equal to its Taylor series) or not. – Emilio Pisanty May 3 at 10:18 • Regarding Richard's comments, the easiest example is $f(x)=e^{-1/x}$ for $x>0$ (and set to zero for $x\leq 0$). The function is smooth (all derivatives exist, and are zero) at zero, but the Taylor series fails on the right-hand side of that. But, on the other hand, if you impose the stronger condition of being analytical (i.e. having a single complex derivative in a neighbourhood of the point in question) then it does follow that the function's Taylor series converges to the function. – Emilio Pisanty May 3 at 10:22
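As a footnote to the series in the question: even a modest truncation of the Taylor series already matches a library cosine to double precision, which is the practical sense in which cos "lends itself readily to calculation". A quick sketch:

```python
import math

# Truncated Taylor series from the question: sum (-1)^n x^{2n} / (2n)!
def cos_series(x, terms=20):
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

# 20 terms agree with math.cos to better than 1e-12 on moderate arguments.
for x in (0.5, 1.0, 3.0):
    assert abs(cos_series(x) - math.cos(x)) < 1e-12
print("series agrees with math.cos to 1e-12")
```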
https://aptitude.gateoverflow.in/4993/cat-2010-question-41
Which of the following statements is wrong?

1. Depreciation expense is the lowest for the food industry
2. Power and fuel expenses are the 5th largest item in the expenditure of diversified industries.
3. Electricity industry earns more other income as a percentage of total income compared to other industries.
4. Raw material cost is the largest item of expense in all industry sectors

1
http://zbmath.org/?q=an:1250.35112
# zbMATH — the first resource for mathematics

A Dirichlet problem with singular and supercritical nonlinearities. (English) Zbl 1250.35112

The author proves the existence of positive solutions in $W_0^{1,2}(\Omega) \cap L^{\infty}(\Omega)$ for a Dirichlet problem with singular and supercritical nonlinearities.
Reviewer: Jiaqi Mo (Wuhu)

##### MSC:
35J66 Nonlinear boundary value problems for nonlinear elliptic equations
35B09 Positive solutions of PDE
https://byjus.com/perimeter-of-an-ellipse-calculator/
# Perimeter Of An Ellipse Calculator

Perimeter of an Ellipse Calculator is a free online tool that displays the value of the perimeter of an ellipse for the given radii. BYJU’S online perimeter of an ellipse calculator tool makes the calculation faster, and it displays the perimeter for the given vertical and horizontal radius in a fraction of seconds.

## How to Use the Perimeter of an Ellipse Calculator?

The procedure to use the perimeter of an ellipse calculator is as follows:

Step 1: Enter the vertical and horizontal radius in the respective input field
Step 2: Now click the button “Calculate” to get the ellipse perimeter
Step 3: Finally, the value of the perimeter of the ellipse will be displayed in the output field

### What is Meant by Perimeter of an Ellipse?

In conic sections, an ellipse is one of the important curves; it surrounds two focal points and has two radii, namely the semi-major axis and the semi-minor axis. The perimeter of an ellipse is defined as the distance around the boundary of the ellipse. The longest chord of the ellipse is called the major axis, and the chord which is perpendicular to it and bisects the major axis is called the minor axis. The exact perimeter of an ellipse has no closed-form expression (it involves a complete elliptic integral), so calculators use an approximation such as:

$$\begin{array}{l}P \approx 2\pi\sqrt{\frac{r_1^2+r_2^2}{2}}\end{array}$$

Here, r1 and r2 represent the radii (i.e. the vertical and horizontal semi-axes).
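As a sketch of how such a calculator might evaluate the formula above (the function name is mine, not BYJU’S):

```python
import math

def ellipse_perimeter_approx(r1, r2):
    # P ≈ 2*pi*sqrt((r1^2 + r2^2) / 2), the approximation quoted above.
    return 2 * math.pi * math.sqrt((r1 ** 2 + r2 ** 2) / 2)

# Sanity check: for a circle (r1 == r2 == r) the formula reduces to 2*pi*r.
print(ellipse_perimeter_approx(3, 3))  # ≈ 18.85, i.e. 2*pi*3
print(ellipse_perimeter_approx(3, 5))
```

Keep in mind this is only an approximation; the exact perimeter requires the complete elliptic integral of the second kind.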
https://ai.stackexchange.com/questions/7149/how-to-solve-problem-pairwise-grouping-to-maximise-score
# How to solve problem: pairwise grouping to maximise score Sorry, the title is bad because I don't even know what to call this problem. I have a set of n objects {obj_0, obj_1, ......, obj_(n-1)}, where n is an even number. Any two objects can be paired together to produce an output score. So for instance, you might take obj_j and obj_k, and pair them together giving a score of S_j,k. All scores are independent, so the previous example doesn't tell you anything about what the score for combining obj_j and obj_i, S_j,i might be. There is no ordering in the combination, so S_j,i and S_i,j are the same. All scores for all pairing possibilities are known. The whole set of objects is to be taken and organised into pairs (leaving no objects unpaired). The total score, S_tot is the sum of all scores of individual pairs. What's the most efficient way to find the score-maximising pairing configuration for a large set of such objects? (does this problem have a name?) Is there a method which works with the version of this problem where objects are grouped into triplets?
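To make the problem concrete, here is a brute-force sketch (all names are mine) that enumerates every way of splitting the set into pairs and keeps the highest-scoring one. It is only feasible for small n, since there are (n-1)·(n-3)···1 pairings:

```python
def all_pairings(items):
    # Recursively enumerate every way to split `items` into unordered pairs.
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in all_pairings(remaining):
            yield [(first, partner)] + tail

def best_pairing(items, score):
    # score(a, b) == score(b, a); maximise the summed pair scores.
    return max(all_pairings(items),
               key=lambda pairs: sum(score(a, b) for a, b in pairs))

# Toy example with 4 objects and a symmetric score table.
table = {frozenset({0, 1}): 3, frozenset({0, 2}): 1, frozenset({0, 3}): 2,
         frozenset({1, 2}): 2, frozenset({1, 3}): 1, frozenset({2, 3}): 3}
score = lambda a, b: table[frozenset({a, b})]
print(best_pairing([0, 1, 2, 3], score))  # total score 6
```

For large n this corresponds to maximum-weight perfect matching on a general graph, for which polynomial-time algorithms (e.g. the blossom algorithm) exist; the triplet version has no such structure and is harder.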
https://www.shaalaa.com/question-bank-solutions/simple-problems-single-events-two-dice-are-rolled-together-find-probability-getting-multiple-2-one-die-odd-number-other-die_29821
# Two Dice Are Rolled Together. Find the Probability of Getting: a Multiple of 2 on One Die and an Odd Number on the Other Die. - ICSE Class 10 - Mathematics

Concept: Simple Problems on Single Events

#### Question

Two dice are rolled together. Find the probability of getting a multiple of 2 on one die and an odd number on the other die.

#### Solution

In throwing a single die, the possible outcomes are {1, 2, 3, 4, 5, 6}. For two dice, n(S) = 6 × 6 = 36.

E = event of getting a multiple of 2 on one die and an odd number on the other
= {(2,1), (2,3), (2,5), (4,1), (4,3), (4,5), (6,1), (6,3), (6,5), (1,2), (3,2), (5,2), (1,4), (3,4), (5,4), (1,6), (3,6), (5,6)}

n(E) = 18

Probability of getting a multiple of 2 on one die and an odd number on the other = n(E)/n(S) = 18/36 = 1/2
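The count of 18 favourable outcomes can be checked by brute force, e.g. with this short script (mine, not part of the original solution):

```python
from fractions import Fraction

# Enumerate all 36 ordered outcomes of two dice; favourable outcomes have
# an even number on exactly one die and an odd number on the other.
favourable = [(a, b) for a in range(1, 7) for b in range(1, 7)
              if (a % 2 == 0) != (b % 2 == 0)]
print(len(favourable))                 # 18
print(Fraction(len(favourable), 36))   # 1/2
```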
https://socratic.org/questions/when-naming-this-structure-why-would-the-name-be-3-methylcyclohexanol-and-not-1-
# When naming this structure, why would the name be 3-methylcyclohexanol and not 1-methyl-3-cyclohexanol?

Thank you!

Feb 10, 2018

Does not the hydroxyl substituent take priority?

#### Explanation:

This is an alcohol derivative... the ipso carbon attached to $OH$ is the NUMBER ONE CARBON, and the methyl group takes the NUMBER THREE position on the ring. And note that this compound could potentially generate a number of diastereomers, given that $C_1$ and $C_3$ are stereogenic, i.e. bound to 4 different residues.... Substituent ordering takes place on the basis of the atomic number, $Z$, of the substituent...
http://physics.stackexchange.com/questions/81317/how-to-get-t-from-av/81318
# How to get $t$ from $a(v)$?

I read that if we have acceleration given as a function of velocity, we can calculate time as $$t(v) = t_0 + \int_{v_0}^{v} \frac{dv}{a(v)}.$$ Why?

You have $a(v) = \frac{dv}{dt}$. By separating the variables you get $dt = \frac{dv}{a(v)}$. Now you just integrate between $v_0$ and $v$ to obtain the equation you wrote.

Really, @Yola answered your question directly, so I want to be a little more complete. I want to show how to deal with problems of $a(v)$ and $a(x)$.

1. Acceleration as a function of speed $$t= t_0 + \int_{v_0}^v \frac{1}{a(v)}\,{\rm d} v \\ x = x_0 + \int_{v_0}^v \frac{v}{a(v)}\,{\rm d}v$$

2. Acceleration as a function of position/displacement $$\frac{v^2}{2} - \frac{v_0^2}{2} = \int_{x_0}^x a(x)\,{\rm d} x \\ t -t_0 = \int_{x_0}^x \frac{1}{v(x)} \,{\rm d}x$$
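As a quick illustration of case 1, the integral for $t(v)$ can be evaluated numerically. The sketch below is my own, using a hypothetical linear-drag law $a(v) = -kv$, for which the integral has the closed form $t - t_0 = \frac{1}{k}\ln\frac{v_0}{v}$, so we can check the trapezoid-rule result against it:

```python
import math

def time_to_reach(v0, v, a_of_v, steps=100000):
    # Numerically evaluate t - t0 = ∫_{v0}^{v} dv / a(v) with the trapezoid rule.
    h = (v - v0) / steps
    total = 0.5 * (1 / a_of_v(v0) + 1 / a_of_v(v))
    for i in range(1, steps):
        total += 1 / a_of_v(v0 + i * h)
    return total * h

# Hypothetical example: linear drag a(v) = -k*v slowing a body from 10 m/s
# to 2 m/s; the exact answer is (1/k) * ln(v0 / v).
k = 0.5
t_numeric = time_to_reach(10.0, 2.0, lambda v: -k * v)
t_exact = math.log(10.0 / 2.0) / k
print(t_numeric, t_exact)
```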
https://leanprover-community.github.io/archive/stream/116395-maths/topic/use.3F.html
## Stream: maths

### Topic: use?

#### Ashvni Narayanan (Oct 27 2020 at 09:18):

I am trying to prove ?m_1 ∈ submodule.comap (localization_map.lin_coe (fraction_ring.of A)) I.val and I have g': y ∈ submodule.comap (localization_map.lin_coe (fraction_ring.of A)) I.val where ?m_1 must be of type A, and y is of type A. However, none of exact, refine, assumption or use work. Any help is appreciated. Thank you!

#### Anne Baanen (Oct 27 2020 at 09:19):

Try convert g'

Doesn't work :(

#### Anne Baanen (Oct 27 2020 at 09:20):

Does it fail or result in new goals?

#### Ashvni Narayanan (Oct 27 2020 at 09:20):

invalid type ascription, term has type ?m_1 but is expected to have type ?m_1 ∈ submodule.comap (localization_map.lin_coe (fraction_ring.of A)) I.val

#### Ashvni Narayanan (Oct 27 2020 at 09:21):

Well the next goal is to prove that ?m_1 : A.

#### Anne Baanen (Oct 27 2020 at 09:21):

Hmm, could you post a #mwe? It looks like Lean is getting confused earlier on.

#### Ashvni Narayanan (Oct 27 2020 at 09:24):

Does this help? If not, I'll make an mwe.

#### Anne Baanen (Oct 27 2020 at 09:26):

That should be enough, thanks! One moment please...

#### Anne Baanen (Oct 27 2020 at 09:30):

Aha, the problem seems to be this: you define y in one subgoal, but it should be used in another subgoal.

#### Anne Baanen (Oct 27 2020 at 09:32):

(I'm guessing you did this because you got errors when writing cases (g : ∃ (x : localization (non_zero_divisors A)), x ∈ I)?)

#### Ashvni Narayanan (Oct 27 2020 at 09:32):

I did try rotate, but that ended up giving me other issues with using cases, hence I did not proceed..

#### Ashvni Narayanan (Oct 27 2020 at 09:32):

Anne Baanen said: (I'm guessing you did this because you got errors when writing cases (g : ∃ (x : localization (non_zero_divisors A)), x ∈ I)?)

Haha yes exactly

#### Anne Baanen (Oct 27 2020 at 09:36):

Hmm, I don't know how to fix the proof because I'm not quite sure what you're trying to prove here actually.
Your goal ↥(submodule.comap (localization_map.lin_coe (fraction_ring.of A)) I.val) means "we can compute an element of the ideal" (let's just call it J), but surely we have 0 \in J? #### Ashvni Narayanan (Oct 27 2020 at 09:39): What I actually want to prove is that if we have a fractional ideal I, and I \leq 1, then (the preimage of) I must be an ideal of A. #### Anne Baanen (Oct 27 2020 at 09:40): Ah, this turns out to be called submodule.comap, as in the function you used to construct your goal :) #### Anne Baanen (Oct 27 2020 at 09:42): (In fact, this holds for all fractional ideals: everything outside of A is just discarded by comap.) #### Ashvni Narayanan (Oct 27 2020 at 09:46): Ah I see, hence the I \le 1 is not needed! #### Ashvni Narayanan (Oct 27 2020 at 09:46): Apologies for this, and thank you for your help! :) #### Anne Baanen (Oct 27 2020 at 09:47): You're welcome! Last updated: May 11 2021 at 16:22 UTC
https://cor3ntin.github.io/posts/stack_queue/
# Stranded with a C++ compiler and a bunch of queues

A friend had a phone interview for a job in a company that I won’t name (it’s Microsoft). One of the questions was about describing how he would write a stack, only using standard queues. I was confounded, because long before an algorithm could form in my mind, I had already decided that there was no solution that would actually be useful in any real-life scenario.

```cpp
template <typename T, typename Container = std::queue<T>>
class stack {
public:
    void push(const T &);
    void pop();
    T& top();
    std::size_t size() const;
    bool empty() const;
private:
    void transfer();
    Container a, b;
};

template <typename T, typename Container>
void stack<T, Container>::push(const T& t) {
    a.push(t);
}

template <typename T, typename Container>
void stack<T, Container>::pop() {
    transfer();
    a.pop();
    std::swap(a, b);
}

template <typename T, typename Container>
void stack<T, Container>::transfer() {
    while(a.size() > 1) {
        T t = a.front();
        a.pop();
        b.push(t);
    }
}
```

That's the only solution I could find; to be honest, I was too lazy to come up with the algorithm myself, but it’s really straightforward. It has $\mathcal{O}( n )$ complexity, and… let’s just say it does not really scale. But it’s quite an interesting algorithm nonetheless.

See, for a huge company to ask this question to every candidate, I can only assume one former employee found themselves stranded on an island, with a bunch of queues. Their survival depended on having a stack, they failed to come up with the proper solution and died. It’s the only explanation that makes sense to me; the other explanation would be that large companies ask really stupid & meaningless interview questions, and, well… that’s just silly.

Then, my friend told me the next question was about creating a queue using stacks. Sure, why not?
```cpp
template <typename T, typename Container>
class queue {
public:
    void push(const T &);
    void pop();
    T& front();
    std::size_t size() const;
    bool empty() const;
private:
    void transfer();
    Container a, b;
};

template <typename T, typename Container>
void queue<T, Container>::push(const T& t) {
    a.push(t);
}

template <typename T, typename Container>
void queue<T, Container>::pop() {
    transfer();
    b.pop();
}

template <typename T, typename Container>
void queue<T, Container>::transfer() {
    if(b.empty()) {
        while(!a.empty()) {
            T t = a.top();
            a.pop();
            b.push(t);
        }
    }
}
```

My friend and I debated the complexity of this algorithm. I explained to him it was n². If our hero was stranded on an island, they could not have standard stacks shipped their way by Amazon, and would have had to use what they had: a stack made of queues. Of course, our unfortunate hero had a stock of standard queues to begin with, but maybe they couldn’t use them, for some reason. After all, they didn’t invent them themselves, so it was better to rewrite them anyway.

```cpp
template <typename T>
using MyQueue = queue<T, stack<T>>;
```

By that point, the poor castaway recognized a knife would have been more useful than a standard container, and they realized their death was all but certain. And, as the hunger and their impending doom led to dementia, they started to wonder… can we go deeper?

After all, it is good practice to have good, solid foundations, and a bit of judiciously placed redundancy never hurts.

```cpp
template <typename T>
using MyQueue = queue<T, stack<T, queue<T, stack<T, std::queue<T>>>>>;
```

The structure has the property of being self-tested and grows exponentially more robust at the rate of 2^n, which could prove very useful for critical applications. We can however lament that 4 levels is a bit arbitrary and limited. Fortunately, I made the assumption that our hero has with them a C++ compiler. That may be a depressing consideration when you haven’t drunk for 3 days, but isn’t metaprogramming fantastic?
After a bit of tinkering, cursing and recursing, it is possible to create a queue of stacks - or a stack of queues - of arbitrary depth.

```cpp
namespace details {
    template <typename T, typename...Args>
    struct outer {
        using type = queue<T, Args...>;
    };
    template <typename T, typename...Args>
    struct outer<T, stack<Args...>> {
        using type = queue<T, stack<Args...>>;
    };
    template <typename T, typename...Args>
    struct outer<T, queue<Args...>> {
        using type = stack<T, queue<Args...>>;
    };

    template <unsigned N, typename T>
    struct stack_generator {
        using type = typename outer<T, typename stack_generator<N-1, T>::type>::type;
    };
    template <unsigned N, typename T>
    struct queue_generator {
        using type = typename outer<T, typename queue_generator<N-1, T>::type>::type;
    };
    template <typename T>
    struct stack_generator<0, T> {
        using type = queue<T>;
    };
    template <typename T>
    struct queue_generator<0, T> {
        using type = stack<T>;
    };

    // Keep the nesting depth odd, so the outermost container is of the requested kind.
    constexpr unsigned adjusted_size(unsigned i) {
        return i % 2 == 0 ? i + 1 : i;
    }
}

template <typename T, unsigned N>
using stack = typename details::stack_generator<details::adjusted_size(N), T>::type;
template <typename T, unsigned N>
using queue = typename details::queue_generator<details::adjusted_size(N), T>::type;
```

They are pretty cool and easy to use:

```cpp
stack<int, 13> stack;
queue<int, 13> queue;
```

On the system it was tested with, $N=13$ was sadly the maximum possible value for which the program would not crash at runtime - the deepest level consists of 8192 queues. The compiler was unable to compile a program for $N > 47$; at that point the generated executable weighed merely 240MB. I expect these issues to be resolved as the present solution - for which a Microsoft employee probably gave their life - gains in popularity. However, for $N > 200$, the author reckons that the invention of hardware able to withstand the heat death of the universe is necessary.

You may be wondering if you should use those containers in your next application? Definitely! Here are some suggestions.
• An internet-enabled toaster: A sufficiently big value of $N$ should let you use the CPU as the sole heating element, leading to a slimmer and more streamlined design, as well as reducing manufacturing costs.
• In an authentication layer, as the system has a natural protection against brute-force attacks. $N$ should be at least inversely proportional to the minimum entropy of your stupid password creation rules. The presented solution is however not sufficient to prevent DDoS attacks.
• Everywhere you wondered if you should use a vector but used a linked list instead.
https://www.gamedev.net/topic/634931-drawinstanced-problem/
## DrawInstanced problem

### #1 lomateron (Posted 26 November 2012 - 06:18 PM)

I suppose DrawInstanced()'s 4th parameter, the last parameter, works like this... If I have

```
DrawInstanced(1, 80000, 0, 1);
```

it should draw 79999 instances, and when drawing the first instance, in the vertex shader, the SV_InstanceID variable starts with 1 instead of 0. But it doesn't matter what number I put in the 4th parameter; it does the same thing as

```
DrawInstanced(1, 80000, 0, 0);
```

... it can even be bigger than 80000 and there is no difference. I just want to know if what I think the 4th parameter does is correct.

### #2 Such1 (Posted 26 November 2012 - 09:01 PM)

It doesn't add to SV_InstanceId; the only difference is on the instance part of the vertex buffer. SV_InstanceID 0 should get the data from the second instance in the vertex buffer. Hope I made myself clear.

### #3 lomateron (Posted 26 November 2012 - 10:41 PM)

I have never thought about instances inside a vertex buffer; where can I find more about that? I thought an instance could only be a whole vertex buffer.

### #4 Such1 (Posted 27 November 2012 - 12:00 AM)

Sorry, I guess you didn't understand. You have this (hypothetically):

```
VERTEX* vertexBuffer0;
INSTANCE* vertexBuffer1;
```

That 4th parameter changes which element will be the SV_InstanceID data. If you say the 4th parameter is x, the hypothetical formula would be:

```
InstanceData = vertexBuffer1[SV_InstanceID + x];
```

I think this way is easier to understand.

### #5 lomateron (Posted 27 November 2012 - 02:15 AM)

Wait!... I still don't understand. Isn't the data the same in all instances?
"data" is the vertex buffer i dont understand your second post you could tell me what DrawInstanced( 1, 80000,0,1 ); is really doing to explain me, supposing that the vertex buffer just has one vertice in it... or 3 if that 4th parameter doesn't works when you have just 1 vertice. Edited by lomateron, 27 November 2012 - 02:18 AM. ### #6CryZe  Members Posted 27 November 2012 - 07:37 AM Think of it this way (pseudo code): void DrawInstanced(..., int vertexCount, int instanceCount, ...) { for (int instanceId = 0; instanceId < instanceCount; instanceId++) { for (int vertexId = 0; vertexId < vertexCount; vertexId++) { Vertex vertex; for (int inputLayoutIndex = 0; inputLayoutIndex < inputLayout.getCount(); inputLayoutIndex++) { InputLayoutElement element = inputLayout.get(inputLayoutIndex); int vertexBufferIndex = element.getVertexBufferIndex(); int vertexBufferOffset = element.getVertexBufferOffset(); VertexBuffer vertexBuffer = vertexBuffers[vertexBufferIndex]; int stride = vertexBuffer.getByteStride(); Object value; if (element.getClassification() == Classification.Instance) { value = vertexBuffer[instanceId * stride + vertexBufferOffset]; } else { value = vertexBuffer[vertexId * stride + vertexBufferOffset]; } String semantic = element.getSemantic(); vertex.setValue(semantic, value); } VertexShader(vertex); } } } That means that your input layout could look like this: [Semantic, VertexBufferIndex, VertexBufferOffset, Classification] ["VERTEX_VALUE_0", 0, 0, Vertex] ["VERTEX_VALUE_1", 0, 8, Vertex] ["VERTEX_VALUE_2", 0, 16, Vertex] ["INSTANCE_VALUE_0", 1, 0, Instance] ["INSTANCE_VALUE_1", 1, 8, Instance] You than simply use 2 vertex buffers: VertexBuffer 0: [[vertexValue0, vertexValue1, vertexValue2], [vertexValue0, vertexValue1, vertexValue2], [vertexValue0, vertexValue1, vertexValue2], ...] VertexBuffer 1: [[instanceValue0, instanceValue1], [instanceValue0, instanceValue1], [instanceValue0, instanceValue1], ...] 
And it basically takes a Cartesian product of both sets to call the vertex shader.

Edited by CryZe, 27 November 2012 - 07:57 AM.

### #7 Such1  Members

Posted 27 November 2012 - 03:32 PM

When you use instanced drawing you have 2 vertex buffers, one for the vertices and the other for the instance data. I guess that's what you are confused about.

### #8 kauna  Members

Posted 27 November 2012 - 05:34 PM

When you use instanced drawing you have 2 vertex buffers, one for the vertices and the other for the instance data. I guess that's what you are confused about.

You don't necessarily need a second vertex stream for the instancing data. DrawInstanced provides an instance ID in the shader, and it may be used to index a constant buffer or a generic buffer object. Cheers!

Edited by kauna, 27 November 2012 - 05:34 PM.

### #9 Such1  Members

Posted 27 November 2012 - 11:31 PM

You can, but I think the vertex buffer is a better idea. And the 4th parameter only matters if you are using a vertex buffer to pass instance data.

### #10 hupsilardee  Members

Posted 29 November 2012 - 09:14 PM

You can, but I think the vertex buffer is a better idea.

Sometimes you might not even need any per-instance data; you could generate it in the vertex shader. For example, if I was drawing an NxN square of objects I might do

// game code
int squareSide = 10;
SetVertexShaderInt("SquareSide", squareSide);
DrawInstanced(numVertices, squareSide*squareSide);

// shader
int SquareSide;
VertexShader(instanceID : SV_INSTANCEID)
{
    instance_pos.x = instanceID / SquareSide;
    instance_pos.z = instanceID % SquareSide;
    ...
}

Fairly contrived example, I concede.
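Summing up the thread: the 4th argument of DrawInstanced (StartInstanceLocation in D3D11) offsets where per-instance data is fetched from an instanced vertex buffer; it does not change SV_InstanceID or the number of instances drawn. A minimal Python sketch of that behavior, in the spirit of the pseudocode above (not real D3D API code; the buffer layout is simplified to plain lists):

```python
# Sketch of DrawInstanced's parameters: SV_InstanceID still runs
# 0..instance_count-1, but per-instance vertex-buffer reads are
# offset by start_instance (the 4th parameter).

def draw_instanced(vertex_count, instance_count,
                   start_vertex, start_instance,
                   vertex_data, instance_data):
    """Return the (SV_InstanceID, per-vertex, per-instance) tuples
    the vertex shader would be invoked with."""
    invocations = []
    for sv_instance_id in range(instance_count):
        for vid in range(vertex_count):
            v = vertex_data[start_vertex + vid]
            # The 4th parameter offsets the *fetch*, not SV_InstanceID:
            inst = instance_data[start_instance + sv_instance_id]
            invocations.append((sv_instance_id, v, inst))
    return invocations

vertices = ["v0"]
instances = ["inst0", "inst1", "inst2", "inst3"]

# start_instance = 0: SV_InstanceID 0 and 1 read inst0 and inst1
a = draw_instanced(1, 2, 0, 0, vertices, instances)
# start_instance = 1: SV_InstanceID is still 0 and 1, but they read inst1, inst2
b = draw_instanced(1, 2, 0, 1, vertices, instances)
print(a)  # [(0, 'v0', 'inst0'), (1, 'v0', 'inst1')]
print(b)  # [(0, 'v0', 'inst1'), (1, 'v0', 'inst2')]
```

This also explains the original question: DrawInstanced( 1, 80000, 0, 1 ) still draws 80000 instances; only the instance-data fetch is shifted by one, which is invisible unless an instanced vertex buffer is bound.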
http://facetimeforandroidd.com/standard-error/mean-error-formula.php
# Mean Error Formula

I want to give you a working knowledge first. I just took the square root of both sides of this equation. But our standard deviation is going to be less in either of these scenarios.

Standard error of the mean: this section will focus on the standard error of the mean. So here, when n is 20, the standard deviation of the sampling distribution of the sample mean is going to be 1. All of the names I just mentioned mean the same thing: the standard deviation of the sampling distribution of the sample mean. It is useful to compare the standard error of the mean for the age of the runners versus the age at first marriage, as in the graph.

Note: the standard error and the standard deviation of small samples tend to systematically underestimate the population standard error and deviation; the standard error of the mean is a biased estimator of the population standard error. The term may also be used to refer to an estimate of that standard deviation, derived from the particular sample used to compute the estimate. It doesn't matter what our n is. If one survey has a standard error of $10,000 and the other has a standard error of $5,000, then the relative standard errors are 20% and 10% respectively. Now, if I do that 10,000 times, what do I get? The area between each z* value and the negative of that z* value is the confidence percentage (approximately). This formula may be derived from what we know about the variance of a sum of independent random variables:[5] if $X_1, X_2, \ldots, X_n$ are $n$ independent observations from a population with mean $\mu$ and standard deviation $\sigma$, the variance of their sum is $n\sigma^2$. I'm just making that number up. However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean describes bounds on a random sampling process.
With statistics, I'm always struggling whether I should be formal in giving you rigorous proofs, but I've come to the conclusion that it's more important to get the working knowledge first. The MSE is the second moment (about the origin) of the error, and thus incorporates both the variance of the estimator and its bias. For the purpose of hypothesis testing or estimating confidence intervals, the standard error is primarily of use when the sampling distribution is normally distributed, or approximately normally distributed. For example, the z*-value is 1.96 if you want to be about 95% confident. The larger your n, the smaller the standard deviation of the sample mean. That might be better.

The true standard error of the mean, using σ = 9.27, is $\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}} = \frac{9.27}{\sqrt{16}} = 2.32$. And if it confuses you, let me know. It could look like anything. Maybe scroll over. The mean of all possible sample means is equal to the population mean. But if I know the variance of my original distribution, and if I know what my n is (how many samples I'm going to take every time before I average them), then I know the variance of the sampling distribution. However, one can use other estimators for $\sigma^2$ which are proportional to $S_{n-1}^2$, and an appropriate choice can always give a lower mean squared error. If our n is 20, it's still going to be 5. We just keep doing that. Greek letters indicate that these are population values. Moreover, this formula works for positive and negative ρ alike.[10] See also unbiased estimation of standard deviation for more discussion. The distribution of the mean age in all possible samples is called the sampling distribution of the mean.
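The $\sigma/\sqrt{n}$ computation quoted above is easy to check numerically; a quick sketch using the passage's own values (σ = 9.27, n = 16):

```python
import math

# Standard error of the mean: sigma / sqrt(n).
# Values are the ones used in the passage: sigma = 9.27, n = 16.
def standard_error(sigma, n):
    return sigma / math.sqrt(n)

se = standard_error(9.27, 16)
print(round(se, 2))  # 2.32
```

The same function reproduces the later 9.3 / 5 example: standard_error(9.3, 25) gives 1.86.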
Mean squared error is the negative of the expected value of one specific utility function, the quadratic utility function, which may not be the appropriate utility function to use under a given set of circumstances.

Now, I know what you're saying. I'll do another video, or pause and repeat, or whatever. The denominator is the sample size reduced by the number of model parameters estimated from the same data: (n-p) for p regressors, or (n-p-1) if an intercept is used.[3] You want to estimate the average weight of the cones they make over a one-day period, including a margin of error. As you increase your sample size, every time you do the average, two things are happening. Also, be sure that statistics are reported with their correct units of measure, and if they're not, ask what the units are. But to really make the point that you don't have to have a normal distribution, I like to use crazy ones. So if I take 9.3 divided by 5, what do I get? 1.86, which is very close to 1.87. For a Gaussian distribution this is the best unbiased estimator (that is, it has the lowest MSE among all unbiased estimators), but not, say, for a uniform distribution. And if we did it with an even larger sample size (let me do that in a different color). Note: the Student's t distribution is a good approximation of the Gaussian when the sample size is over 100. Notice in this example, the units are ounces, not percentages!
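The margin-of-error idea mentioned above (z* times the standard error of the mean) can be sketched the same way; the cone-weight numbers below are hypothetical, chosen only to illustrate the formula, not taken from the passage:

```python
import math

# Margin of error at ~95% confidence: z* times the standard error of
# the mean (the passage's z* = 1.96).
def margin_of_error(sample_sd, n, z_star=1.96):
    return z_star * sample_sd / math.sqrt(n)

weights_sd = 0.5   # hypothetical sample standard deviation, in ounces
n = 100            # hypothetical number of cones weighed
moe = margin_of_error(weights_sd, n)
print(round(moe, 3))  # 0.098
```

So with these made-up numbers, the one-day average cone weight would be reported as the sample mean plus or minus about 0.1 ounces — and, as the passage stresses, in ounces, not percentages.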
https://xfel.tind.io/record/1716
Record Details Title: Electronic structure and core-level spectra of light actinide dioxides in the dynamical mean-field theory Affiliation(s): EuXFEL staff, Other Author group: Theory Topic: Scientific area: Abstract: The local-density approximation combined with the dynamical mean-field theory (LDA+DMFT) is applied to the paramagnetic phase of light actinide dioxides: $\mathrm{UO_2}$, $\mathrm{NpO_2}$, and $\mathrm{PuO_2}$. The calculated band gaps and the valence-band electronic structure are in very good agreement with the optical absorption experiments as well as with the photoemission spectra. The hybridization of the actinide $5f$ shell with the $2p$ states of oxygen is found to be relatively large; it increases the filling of the $5f$ orbitals from the nominal ionic configurations with two, three, and four electrons to nearly half-integer values 2.5, 3.4, and 4.4. The large hybridization leaves an imprint also on the core-level photoemission spectra in the form of satellite peaks. It is demonstrated that these satellites are accurately reproduced by the LDA+DMFT calculations. Imprint: American Physical Society, 2015 Journal Information: Physical Review B, 92 (8), 085125 (2015) Language(s): English