# Stabilisation methods¶
When using a continuous Galerkin discretisation in advection-dominated problems, it may be necessary to stabilise the advection term in the momentum equation.
The implementation of the stabilisation methods can be found in the file stabilisation.py.
## Streamline upwind¶
This method adds some upwind diffusion in the direction of the streamlines. The term is given by
$\int_{\Omega} \frac{\bar{k}}{||\mathbf{u}||^2}(\mathbf{u}\cdot\nabla\mathbf{w})(\mathbf{u}\cdot\nabla\mathbf{u})$
which is added to the LHS of the momentum equation. The term $$\bar{k}$$ takes the form
$\bar{k} = \frac{1}{2}\left(\frac{1}{\tanh(\mathrm{Pe})} - \frac{1}{\mathrm{Pe}}\right)||\mathbf{u}||\Delta x$
where
$\mathrm{Pe} = \frac{||\mathbf{u}||\Delta x}{2\nu}$
is the Péclet number, and $$\Delta x$$ is the size of each element.
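A minimal sketch of how this term might be assembled in UFL/FEniCS-style syntax (an assumption for illustration only; the actual stabilisation.py may be organised differently):

```python
from dolfin import *  # legacy FEniCS/DOLFIN namespace (assumed backend)

mesh = UnitSquareMesh(32, 32)
V = VectorFunctionSpace(mesh, "CG", 1)
u = TrialFunction(V)          # velocity unknown in the momentum equation
w = TestFunction(V)           # test function
u_adv = Function(V)           # advecting velocity (e.g. from the previous iteration)
nu = Constant(1.0e-3)         # kinematic viscosity
dx_elem = CellDiameter(mesh)  # element size Delta x (CellSize in older DOLFIN versions)

u_norm = sqrt(dot(u_adv, u_adv) + 1.0e-14)   # ||u||, regularised to avoid division by zero
Pe = u_norm * dx_elem / (2.0 * nu)           # element Peclet number
k_bar = 0.5 * (1.0 / tanh(Pe) - 1.0 / Pe) * u_norm * dx_elem

# Streamline-upwind term, added to the LHS of the momentum equation
F_su = (k_bar / u_norm**2) * inner(dot(u_adv, nabla_grad(w)),
                                   dot(u_adv, nabla_grad(u))) * dx
```

The key point is that the extra diffusion acts only along $$\mathbf{u}\cdot\nabla$$, i.e. along the streamlines, so no crosswind diffusion is introduced. |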
# I Differential forms as a basis for covariant antisym. tensors
Tags:
1. May 8, 2017
### Physics_Stuff
In a text I am reading (that I unfortunately can't find online) it says:
"[...] differential forms should be thought of as the basis of the vector space of totally antisymmetric covariant tensors. Changing the usual basis $$dx^{\mu_1} \otimes ... \otimes dx^{\mu_n}$$ with $$dx^{\mu_1} \wedge ... \wedge dx^{\mu_n}$$ of some covariant tensor we can extract its totally antisymmetric part
$$T= \frac{1}{n!}T_{\mu_1 ... \mu_n}\hspace{1pt} d x^{\mu_1} \wedge ... \wedge d x^{\mu_n}= \frac{1}{n!}T_{[\mu_1 ... \mu_n]}\hspace{1pt} d x^{\mu_1} \wedge ... \wedge d x^{\mu_n}."$$
What is the point here? Is T an arbitrary tensor with n covariant components, or must T already be antisymmetric in order for this expression to hold? In order to know the components $$T_{\mu_1 ... \mu_n}$$ of T, so we can use the expression on the RHS above, we must already know what the tensor T looks like? Then, what is the point of such a decomposition?
2. May 8, 2017
### Staff: Mentor
$T$ is an arbitrary tensor. Until now, this doesn't say anything more than $T$ is a multi-dimensional scheme of numbers. In the first step, you say, these numbers represent coordinates. So the question is, according to which basis? As you answer "covariant multilinear forms", it means $T$ is interpreted according to a basis $dx^{\mu_1} \otimes \ldots \otimes dx^{\mu_n}$. It is still the same scheme of numbers. Now you say "but my multilinear forms are alternating differential forms". This means you pass from the tensor algebra $\mathcal{T}(V^*)$ onto the homomorphic image of its Graßmann algebra $\Lambda(V^*)$. It means, the basis vectors are now alternating differential forms and $T_{\mu_1 \ldots \mu_n}$ the coordinates of $T$ according to this basis. It is still the same scheme of numbers, however, interpreted as an element of the algebra of alternating differential forms (with a normalization factor).
Your question is as if you had asked whether $(1,2)$ is a point, a line, a tangent, a slope, a linear mapping or a differential form. It is whatever you want it to be. The usual way to get there is of course the opposite direction: given an alternating differential form $T$, what are its coordinates according to the basis $dx^{\mu_1} \wedge \ldots \wedge dx^{\mu_n}\;$?
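A concrete low-dimensional illustration of this change of basis (an added worked example, not part of the quoted thread): in two dimensions take an arbitrary covariant tensor $T = T_{\mu\nu}\, dx^{\mu}\otimes dx^{\nu}$. Since $dx^{\mu}\wedge dx^{\nu} = -\,dx^{\nu}\wedge dx^{\mu}$, relabelling the dummy indices in $\frac{1}{2!}T_{\mu\nu}\,dx^{\mu}\wedge dx^{\nu}$ shows that only the antisymmetric part of the coefficients survives:
$$\frac{1}{2!}\,T_{\mu\nu}\, dx^{\mu}\wedge dx^{\nu} = \frac{1}{2}\left(T_{12}-T_{21}\right) dx^{1}\wedge dx^{2} = \frac{1}{2!}\,T_{[\mu\nu]}\, dx^{\mu}\wedge dx^{\nu},$$
so the symmetric part of an arbitrary $T_{\mu\nu}$ drops out automatically; this is the sense in which the wedge basis "extracts" the totally antisymmetric part. |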
# I About normalization of periodic wave function
1. Sep 29, 2016
### KFC
Hi all,
I am reading something on wave functions in quantum mechanics. I am thinking of a situation where we have particles distributed over a periodic potential such that the wave function is periodic as well. For example, it could be a superposition of a series of equal-amplitude plane waves with different wave numbers (some positive and some negative) so as to give a form with $f(x+2\pi)=f(x)$. In this case, I wonder how we normalize the wave function. I tried the following, but it gives something close to zero because the integral is very large
$f [\int_{-\infty}^{+\infty}|f|^2dx]^{-1}$
But since it is periodic, do you think I should normalize the wave function with the normalization factor computed in one period as follows:
$\int_{-\pi}^{+\pi}|f|^2dx$
2. Sep 29, 2016
### vanhees71
That's very simple to answer. Since $\int_{\mathbb{R}} \mathrm{d} x |f(x)|^2$ doesn't exist in this case, it is not a wave function that describes a physical state, and thus you never ever need to consider it let alone normalize it.
If you have in mind the momentum eigenstates, you should realize that these are not wave functions but generalized functions which allow you transform from the position representation to momentum representation and vice versa. Here you normalize them "to a $\delta$ distribution". The momentum eigenstates are given by the equation
$$\hat{p} u_p(x)=-\mathrm{i} \partial_x u_p(x)=p u_p(x) \; \Rightarrow\; u_p(x)=N_p \exp(\mathrm{i} x p).$$
To "normalize" these functions conveniently you use
$$\int_{\mathbb{R}} \mathrm{d} x u_{p}^*(x) u_{p'}(x)=N_p^* N_{p'} \int_{\mathbb{R}} \mathrm{d} x \exp[\mathrm{i} x(p-p')]=2 \pi \delta(p-p') |N_p|^2 \stackrel{!}{=} \delta(p-p') \;\Rightarrow \; N_p=\frac{1}{\sqrt{2 \pi}},$$
up to an irrelevant phase factor. So for convenience one uses
$$u_p(x)=\frac{1}{\sqrt{2 \pi}} \exp(\mathrm{i} p x).$$
Then the momentum-space wave function is given by the Fourier transformation of the position-space wave function, i.e.,
$$\tilde{\psi}(p)=\int_{\mathbb{R}} \mathrm{d} x\, u_p^*(x) \psi(x),$$
which is inverted by
$$\psi(x)=\int_{\mathbb{R}} \mathrm{d}p u_p(x) \tilde{\psi}(p).$$
3. Sep 30, 2016
### KFC
Thanks for your reply. I am still reading it, but I am still confused about some parts. Since you mention momentum space, I wonder if the following is physically possible or not. Taking a crystal as an example, in the texts they always start the discussion with a periodic lattice in position space, so the k space is also periodic. So if k space is periodic, is it possible to send some wave in some form onto the crystal such that the wave in k space is periodic? If that's possible, how do we normalize the wave in k space? It is confusing me. I keep picturing that in k space we may see a Gaussian at every single reciprocal lattice site, but such Gaussians repeat from and to infinity, so they don't add up to a finite value. In your example, you consider the delta function and derive the normalization factor, but that's still for a plane wave. What I am thinking of is something periodic in k space but not a plane wave.
4. Sep 30, 2016
### vanhees71
Sorry, I misunderstood your question. It's not about periodic wave functions but particles in a periodic potential as models of crystals. This is a bit more complicated. So have a look in some solid-state physics book (like Ashcroft&Mermin) on Bloch states. |
### Home > PC > Chapter 5 > Lesson 5.2.1 > Problem5-53
5-53.
The surface area $S$ of a sphere is directly proportional to the square of the radius $r$
1. Express $S$ as a function of $r$.
Do not forget the constant of proportionality ($k$).
$S\left(r\right) = kr^{2}$
2. Solve for the particular value of $k$ if the surface area is $16π \text{ cm}^{2}$ when the radius is $2$ cm. Then find the surface area when the radius is $3$ cm.
Substitute the given point $\left(r,S\right)$ into your function in part (a) to find $k$.
Then use the resulting equation to find the surface area when $r = 3$ cm.
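A quick symbolic check of this arithmetic (a sketch using Python/sympy, which the lesson itself does not use):

```python
from sympy import symbols, solve, Eq, pi

k, r = symbols('k r', positive=True)
S = k * r**2                                   # part (a): S(r) = k*r^2

k_val = solve(Eq(S.subs(r, 2), 16 * pi), k)[0] # substitute the point (r, S) = (2, 16*pi)
print(k_val)                                   # constant of proportionality: 4*pi
print(k_val * 3**2)                            # surface area at r = 3 cm: 36*pi
```

With $k = 4\pi$ the function is the familiar sphere formula $S(r) = 4\pi r^{2}$. |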
We know how to measure the range of an angle using degrees, which can also be divided into the submultiple minutes and seconds.
But there is another way of measuring angles. It can be done by using units called radians.
One radian is the angle you get when an arc of length equal to the radius is laid along the circle. Let's see an illustration to understand it better:
So, a radian denotes an angle whose corresponding arc has the same length as the radius. And so, a full angle has $$2\pi$$ radians, a straight angle has $$\pi$$ radians, and a right angle has $$\dfrac{\pi}{2}$$ radians.
This is deduced from the fact that the total length of a circumference is:
$$L=2 \cdot \pi \cdot r$$
where $$r$$ is the radius of such circumference.
Therefore, a full rotation is $$2\pi$$ times the length of the radius, and considering that a full rotation is $$360^\circ$$, now we have a way of changing from one measure to another: $$2 \cdot \pi$$ radians $$=360^\circ$$ (a whole turn).
The conversion factors that we will use to change from one to another will be:
• to convert from degrees to radians:
$$N^\circ=N^\circ \cdot \dfrac{2\pi \ \mbox{radians}}{360^\circ}= \dfrac{N \cdot 2 \pi}{360}$$ radians, where $$N$$ is the number of degrees that we want to express in radians.
• to convert from radians to degrees:
$$M$$ radians = $$M \ \mbox{radians} \cdot \dfrac{360^\circ}{2 \pi \ \mbox{radians}}= \Big(\dfrac{M \cdot 360}{2 \pi} \Big)^\circ$$ where $$M$$ is the number of radians that we want to express in degrees.
Let's write $$270^\circ$$ in radians:
Taking the degrees-to-radians conversion factor we have: $$270^\circ \cdot \frac{2\pi \ \mbox{radians}}{360^\circ}= \frac{270 \cdot 2 \pi}{360} \ \mbox{radians}= \frac{3}{2} \pi \ \mbox{radians}$$ When we express quantities in radians, we usually write $$\pi$$ instead of its numerical value. If one is going to put it in number form, rounding to $$3.1416$$ will be fine. For example: $$\frac{3}{2} \pi = \frac{3}{2} \cdot 3.1416 = 4.7124 \ \mbox{radians}$$
Let's write $$45^\circ$$ in radians: $$45^\circ \cdot \frac{2 \pi \ \mbox{radians}}{360^\circ}=\frac{45 \cdot 2\pi}{360} \ \mbox{radians}= \frac { \pi}{4} \ \mbox{radians},$$ which in numbers is approximately $$\frac { \pi}{4}= \frac{3.1416}{4}=0.7854 \ \mbox{radians}$$
Now let's write $$3\pi$$ radians in degrees:
Like before, we take the conversion factor, but now the one that takes us from radians to degrees, and we obtain:
$$3\pi \ \mbox{radians}= 3\pi\cdot\frac{360^\circ}{2\pi}=540^\circ$$
Let's write $$\dfrac {6\pi}{5} \ \mbox{radians}$$ in degrees: $$\dfrac{6}{5}\pi \ \mbox{radians}= \frac{6}{5}\pi \cdot \frac{360^\circ}{2\pi}=216^\circ$$
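The two conversion factors translate directly into code (a small Python sketch added for illustration):

```python
import math

def deg_to_rad(n_degrees):
    """Convert degrees to radians with the factor 2*pi radians = 360 degrees."""
    return n_degrees * 2 * math.pi / 360      # equivalent to math.radians(n_degrees)

def rad_to_deg(m_radians):
    """Convert radians to degrees with the factor 360 degrees = 2*pi radians."""
    return m_radians * 360 / (2 * math.pi)    # equivalent to math.degrees(m_radians)

print(deg_to_rad(270))              # 4.712... = (3/2)*pi
print(deg_to_rad(45))               # 0.785... = pi/4
print(rad_to_deg(3 * math.pi))      # 540.0
print(rad_to_deg(6 * math.pi / 5))  # 216.0 (up to floating-point rounding)
```

The built-ins math.radians and math.degrees implement exactly these factors. |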
# Subtracting off first coordinate to get divergence-free vector field
Let $v$ be a vector field on $\mathbb{R}^n$. Show that $v$ can be written as a sum $v=f_1\dfrac{\partial}{\partial x_1}+w$ where $w$ is a divergence-free vector field.
Suppose $v=v_1\dfrac{\partial}{\partial x_1}+v_2\dfrac{\partial}{\partial x_2}+\ldots+v_n\dfrac{\partial}{\partial x_n}$, where $v_i:\mathbb{R}^n\rightarrow\mathbb{R}$.
Then we want to choose $f_1$ so that $$v-f_1\dfrac{\partial}{\partial x_1}=(v_1-f_1)\dfrac{\partial}{\partial x_1}+v_2\dfrac{\partial}{\partial x_2}+\ldots+v_n\dfrac{\partial}{\partial x_n}$$ is divergence-free, i.e. this quantity is zero when we actually take these partial derivatives (rather than using them as just symbols).
So, take any point $p\in\mathbb{R}^n$. Then we can evaluate the real number $$q(p)=\dfrac{\partial v_1(p)}{\partial x_1}+\dfrac{\partial v_2(p)}{\partial x_2}+\ldots+\dfrac{\partial v_n(p)}{\partial x_n}$$
If I set $f_1(p)=q(p)x_1$, this yields $v-f_1\dfrac{\partial}{\partial x_1}=0$ at point $p$. Since this holds for any point $p$, we have $v-f_1\dfrac{\partial}{\partial x_1}=0$.
EDIT: This solution is currently wrong, because $q(p)$ is not a constant, so I have to differentiate using the product rule. How can I fix it?
• Looks good. Though, I would write $q(p)$ rather than just $q$ so that it is clear that $q$ is a function of $p$ as well! – Tom Nov 23 '13 at 22:07
• @Tom Just edited to incorporate your suggestion. Thanks! – JJ Beck Nov 24 '13 at 0:00
• No problem! Also, don't forget that $x_1$ is a function as well, so $f(p) = q(p)x_1(p)$ or simply $f = q\,x_1$ interpreted by standard pointwise function multiplication. – Tom Nov 24 '13 at 0:06
• @Tom Actually now I'm getting confused. If $f_1(p)=q(p)x_1(p)$, will it be true that $\dfrac{\partial f_1(p)}{\partial x_1}=q(p)$? – JJ Beck Nov 24 '13 at 0:14
• Oh... you make a good point! If you don't have $q$ constant, you'll have to use the product rule. But, if you do set $q$ constant, the divergence of $w$ will only be certain to vanish at $p$.. – Tom Nov 24 '13 at 0:17
You want $$\frac{\partial f_1}{\partial x_1}(p) = q(p) .$$ Let $$f_1(x_1,x_2,\dots,x_n) = \int_0^{x_1} q(\xi,x_2,\dots,x_n) \, d\xi .$$ Here $p = (x_1,x_2,\dots,x_n)$. All is perfectly legitimate because you are working on $\mathbb R^n$. Probably on some other manifolds you could not make this work.
• You mean integral from $0$ to $x_1$, right? – JJ Beck Nov 24 '13 at 4:49
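A concrete check of this construction (my own sketch in sympy, with a made-up vector field on $\mathbb{R}^2$; the thread itself does not use code):

```python
import sympy as sp

x1, x2, xi = sp.symbols('x1 x2 xi')

# A concrete (hypothetical) vector field v = (v1, v2) on R^2
v1 = x1**2 * x2
v2 = sp.sin(x1) + x2

q = sp.diff(v1, x1) + sp.diff(v2, x2)            # q = div v
f1 = sp.integrate(q.subs(x1, xi), (xi, 0, x1))   # f1 = integral_0^{x1} q(xi, x2) d(xi)

w1, w2 = v1 - f1, v2                             # w = v - f1 * d/dx1
print(sp.simplify(sp.diff(w1, x1) + sp.diff(w2, x2)))   # prints 0: w is divergence-free
```

The same computation works for any smooth $v$ on $\mathbb{R}^n$, which is why the answer notes that integrating along the $x_1$-axis is unproblematic there. |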
# Can't Finish My First Face
#### thnbgr
##### Member
Hi,
I just bought a Rubik's Cube yesterday and have been experimenting with it, but I can't seem to finish my first face. I end up with two stickers left most of the time, sometimes one. Are there any solutions to this? |
# More Bandstructure Discussion
#### Presentation Transcript
More Bandstructure Discussion
### Model Bandstructure Problem: One-dimensional, “almost free” electron model (easily generalized to 3D!) (BW, Ch. 2 & Kittel’s book, Ch. 7)
• “Almost free” electron approach to bandstructure.
1 e⁻ Hamiltonian: H = p²/(2mo) + V(x); p → -iħ(d/dx)
V(x) = V(x + a) = effective potential, with period a (the lattice repeat distance)
GOAL
• Solve the Schrödinger Equation: Hψ(x) = εψ(x)
Periodic potential V(x) ⇒
ψ(x) must have the Bloch form:
ψk(x) = e^(ikx) uk(x), with uk(x) = uk(x + a)
• The set of vectors in “k space” of the form G = (2nπ/a),
(n = integer) are called Reciprocal Lattice Vectors
• Expand the potential in a Fourier series:
Due to periodicity, only wavevectors for which k = G enter the sum.
V(x) = V(x + a) ⇒ V(x) = ∑G VG e^(iGx) (1)
The VG depend on the functional form of V(x)
V(x) is real ⇒ V(x) = 2 ∑G>0 VG cos(Gx)
• Expand the wavefunction in a Fourier series in k:
ψ(x) = ∑k Ck e^(ikx) (2)
Put V(x) from (1) & ψ(x) from (2) into the Schrödinger Equation:
• The Schrödinger Equation: Hψ(x) = εψ(x) or
[-{ħ²/(2mo)}(d²/dx²) + V(x)]ψ(x) = εψ(x)
Insert the Fourier series for both V(x) & ψ(x)
• Manipulation (see BW or Kittel) gets,
For each Fourier component of ψ(x):
(λk - ε)Ck + ∑G VG Ck-G = 0 (3)
where λk = (ħ²k²)/(2mo) (the free electron energy)
• Eq. (3) is the k space Schrödinger Equation
A set of coupled, homogeneous, algebraic equations for the Fourier components of the wavefunction. Generally, this is intractable: there are an infinite number of Ck!
• The k space Schrödinger Equation is:
(λk - ε)Ck + ∑G VG Ck-G = 0 (3)
where λk = (ħ²k²)/(2mo) (the free electron energy)
• Generally, (3) is intractable! An infinite # of Ck! But, in practice, we need only a few.
Solution: The determinant of the coefficients of the Ck is set to 0:
That is, it is an infinite (∞ × ∞) determinant!
• Aside: Another proof of Bloch’s Theorem: Assume (3) is solved. Then ψ has the form: ψk(x) = ∑G Ck-G e^(i(k-G)x) or
ψk(x) = (∑G Ck-G e^(-iGx)) e^(ikx) ≡ uk(x) e^(ikx)
where uk(x) = ∑G Ck-G e^(-iGx)
It’s easy to show that uk(x) = uk(x + a)
⇒ ψk(x) is of the Bloch form!
• The k space Schrödinger Equation:
(λk - ε)Ck + ∑G VG Ck-G = 0 (3)
where λk = (ħ²k²)/(2mo) (the free electron energy)
• Eq. (3) is a set of simultaneous, linear, algebraic equations connecting the Ck-Gfor all reciprocal lattice vectors G.
• Note: If VG = 0 for all reciprocal lattice vectors G, then
ε = λk = (ħ²k²)/(2mo)
⇒ Free electron energy “bands”.
• The k space Schrödinger Equation is:
(λk - ε)Ck + ∑G VG Ck-G = 0 (3)
where λk = (ħ²k²)/(2mo) (the free electron energy)
= Kinetic Energy of the electron in the periodic potential V(x)
• Consider the Special Case:
All VG are small in comparison with the kinetic energy λk, except for
G = (2π/a) & for k at the 1st BZ boundary, k = (π/a)
For k away from the BZ boundary, the energy band is the free electron parabola: ε(k) = λk = (ħ²k²)/(2mo)
For k at the BZ boundary, k = (π/a), Eq. (3) is a
2 × 2 determinant
• In this special case: As a student exercise (see Kittel), show that, for k at the BZ boundary k = (π/a), the k space Schrödinger Equation becomes 2 algebraic equations:
(λ - ε) C(π/a) + V C(-π/a) = 0
V C(π/a) + (λ - ε) C(-π/a) = 0
where λ = (ħ²π²)/(2a²mo); V = V(2π/a) = V(-2π/a)
• Solutions for the bands ε at the BZ boundary are:
ε = λ ± V
(from the 2 × 2 determinant):
Away from the BZ boundary the energy band ε is a free electron parabola. At the BZ boundary there is a splitting:
A gap opens up! εG ≡ ε+ - ε- = 2V
• Now, let's look in more detail at k near (but not at!) the BZ boundary, to get the k dependence of ε near the BZ boundary: Messy! Student exercise (see Kittel) to show that the
Free Electron Parabola
SPLITS
into 2 bands, with a gap between:
ε±(k) = (ħ²π²)/(2a²mo) ± V
+ ħ²[k - (π/a)]²/(2mo) · [1 ± (ħ²π²)/(a²moV)]
This also assumes that |V| >> ħ²(π/a)[k - (π/a)]/mo.
For the more general, complicated solution, see Kittel!
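As a cross-check of the 2 × 2 result quoted above, here is a small numerical sketch (my own illustration, not from the slides) that diagonalizes the two-component k-space Hamiltonian coupling Ck and Ck-G for a single retained Fourier component V:

```python
import numpy as np

# Natural units for illustration only (an assumption): hbar = m0 = a = 1
hbar, m0, a = 1.0, 1.0, 1.0
V = 0.5                      # the single retained Fourier component V_(2*pi/a)
G = 2 * np.pi / a

ks = np.linspace(-np.pi / a, np.pi / a, 201)   # first Brillouin zone
bands = []
for k in ks:
    lam1 = hbar**2 * k**2 / (2 * m0)           # free-electron energy of C_k
    lam2 = hbar**2 * (k - G)**2 / (2 * m0)     # free-electron energy of C_(k-G)
    H = np.array([[lam1, V],
                  [V,    lam2]])
    bands.append(np.linalg.eigvalsh(H))        # lower and upper band at this k
bands = np.array(bands)

# At the BZ boundary k = pi/a the two free-electron energies are degenerate,
# so the eigenvalues are lambda ± V and the gap is 2V:
print(bands[-1])                       # ≈ [λ - V, λ + V], with λ = ħ²π²/(2a²m0) ≈ 4.93
print(bands[-1, 1] - bands[-1, 0])     # ≈ 1.0 = 2V
```

Away from the boundary the lower eigenvalue tracks the free-electron parabola, in line with the statements above.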
Almost Free e⁻ Bandstructure: (Results from Kittel for the lowest two bands)
[Figure: the two lowest bands versus k, compared with the free-electron parabola ε = (ħ²k²)/(2mo); the splitting at the BZ boundary is set by V.]
### Brief Interlude: General Bandstructure Discussion (1d, but easily generalized to 3d). Relate bandstructure to classical electronic transport
Given an energy band ε(k) (a Schrödinger Equation eigenvalue):
The Electron is a Quantum Mechanical Wave
• From Quantum Mechanics, the energy ε(k) & the frequency ω(k) are related by: ε(k) ≡ ħω(k) (1)
• Now, from Classical Wave Theory, the wave group velocity v(k) is defined as: v(k) ≡ [dω(k)/dk] (2)
• Combining (1) & (2) gives: ħ v(k) = [dε(k)/dk]
• The QM wave (quasi-)momentum is: p ≡ ħk
• Now, a simple “Quasi-Classical” Transport Treatment!
• “Mixing up” classical & quantum concepts!
• Assume that the QM electron responds to an EXTERNAL force, F, CLASSICALLY (as a particle). That is, assume that
Newton’s 2nd Law is valid: F = (dp/dt) (1)
• Combine this with the QM momentum p = ħk & get:
F = ħ(dk/dt) (2)
Combine (1) with the classical momentum p = mv:
F = m(dv/dt) (3)
Equate (2) & (3) & also for v in (3) insert the QM group velocity:
v(k) = ħ⁻¹[dε(k)/dk] (4)
• So, this “Quasi-classical” treatment gives
F = ħ(dk/dt) = m(d/dt)[v(k)] = m(d/dt)[ħ⁻¹dε(k)/dk] (5)
or, using the chain rule of differentiation:
ħ(dk/dt) = m ħ⁻¹(dk/dt)(d²ε(k)/dk²) (6)
Note!! (6) can only be true if the e⁻ mass m is given by
m ≡ ħ²/[d²ε(k)/dk²] (& NOT mo!) (7)
m ≡ EFFECTIVE MASS of the e⁻ in the band ε(k) at wavevector k. Notation: m = m* = me
• The Bottom Line is: Under the influence of an external force F,
The e⁻ responds Classically (according to Newton’s 2nd Law) BUT with a Quantum Mechanical Mass m*, not mo!
• m ≡ the EFFECTIVE MASS of the e⁻ in band ε(k) at wavevector k
m ≡ ħ²/[d²ε(k)/dk²]
• Mathematically,
m ∝ [curvature of ε(k)]⁻¹
• This is for 1d. It is easily shown that:
m ∝ [curvature of ε(k)]⁻¹
also holds in 3d!!
In that case, the 2nd derivative is taken along specific directions in 3d k space & the effective mass is actually a 2nd rank tensor.
m ∝ [curvature of ε(k)]⁻¹
Obviously, we can have m > 0 (positive curvature)
or m < 0 (negative curvature)
• Consider the case of negative curvature:
m < 0 for electrons
For transport & other properties, the charge to mass ratio (q/m) often enters.
For bands with negative curvature, we can either
1. Treat electrons (q = -e) with me < 0
Or 2. Treat holes (q = +e) with mh > 0
### Consider again the Krönig-Penney Model, in the Linear Approximation for L(ε/Vo). The lowest 2 bands are:
[Figure: the lowest two Krönig-Penney bands, with the regions of negative and positive effective mass me indicated.]
• The linear approximation for L(ε/Vo) does not give accurate effective masses at the BZ edge, k = (π/a).
For k near this value, we must use the exact L(ε/Vo) expression.
• It can be shown (S, Ch. 2) that, in the limit of small barriers
(|Vo| << ε), the exact expression for the Krönig-Penney effective mass at the BZ edge is: m = moεG[2(ħ²π²)/(moa²) ± εG]⁻¹
with: mo = free electron mass, εG = band gap at the BZ edge.
+ “conduction band”(positive curvature) like:
- “valence band”(negative curvature) like:
### For Real Materials, 3d Bands
The Krönig-Penney model results (near the BZ edge):
m = moεG[2(ħ²π²)/(moa²) ± εG]⁻¹
This is obviously too simple for real bands!
• A careful study of this table finds that, for real materials, m ∝ εG also! NOTE: In general, (m/mo) << 1 |
# Permutations & Functions
This is an assignment question I received a week ago.
A function $f:\{1, 2, \dots ,n\} \to \{1, 2, \dots, n\}$ which is a bijection is also called a permutation. Let $P_n$ be the set of all permutations on $\{1, 2, \dots , n\}$. Define the relation $\sim$ on $P_n$ by $f\sim g$ iff there exists a permutation $h$ such that $f = h \circ g \circ h^{-1}$.
a) Need to show that it is an equivalence relation.
My Approach:
For a relation to be an equivalence relation, it must be reflexive, symmetric, and transitive.
Reflexive:
Let $f \sim f \Longleftrightarrow$ there exists $h$ such that $f = h \circ f \circ h^{-1}$.
My question (this may be silly) is: is $h \circ f \circ h^{-1} = f$? I claimed that it was, and thus if $h \circ f \circ h^{-1} = f$, then it is reflexive.
Symmetric:
Let $f \sim g \Longleftrightarrow$ there exists $h$ such that $f = h \circ g \circ h^{-1}$. Let $g \sim f \Longleftrightarrow$ there exists $h_2$ (may be a different $h$) such that $g = h \circ f \circ h^{-1}$.
If $f = h \circ g \circ h^{-1}$ then $f = g$.
And since $g = h \circ f \circ h^{-1}$ then $g = f$.
Clearly this is reflexive.
I would like to know if I am in the right direction.
Regards,
Julian.
-
• You are generally going in the right direction. For reflexive, the permutations do not commute, so $h \circ f \circ h^{-1}$ does not necessarily equal $f$. But you can find a specific $h$ that works. For symmetric, when you write $g$ you should write $g = h_2 \circ f \circ h_2^{-1}$. Again, because the permutations do not commute you cannot conclude (and it is not generally true) that $f=g$. But given $f = h \circ g \circ h^{-1}$ you should be able to find an $h_2$. For transitive, you assume $f = h \circ g \circ h^{-1}$ and $g= h_2 \circ k \circ h_2^{-1}$. Now can you find $h_3$ such that $f = h_3 \circ k \circ h_3^{-1}$?
@JulianPark: That is exactly what I was thinking. You have shown $f \sim f$ as you wanted. Another $h$ that works is $f$ itself, as does $f^{-1}$ (which need not be distinct from $f$) – Ross Millikan Nov 2 '12 at 3:19
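A small concrete check of these facts (my own sketch with sympy's permutation class; the thread itself is purely pen-and-paper):

```python
from sympy.combinatorics import Permutation

g = Permutation([1, 2, 0, 3])      # a permutation of {0, 1, 2, 3} (sympy is 0-indexed)
h = Permutation([3, 0, 2, 1])
f = h * g * h**-1                  # so f ~ g via a conjugation

print(f == g)                      # False: conjugation does NOT generally fix a permutation

# Reflexivity: the identity conjugates f to itself, so f ~ f
e = Permutation(list(range(4)))
print(e * f * e**-1 == f)          # True

# Symmetry: h2 = h**-1 conjugates f back to g, so g ~ f
h2 = h**-1
print(g == h2 * f * h2**-1)        # True
```

This mirrors the hints above: reflexivity needs a specific choice of $h$ (the identity works), and symmetry follows by conjugating with the inverse. |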
# Antiderivative and Intergrals
1. Apr 3, 2008
### viper2308
1. The problem statement, all variables and given/known data
$$\int$$ (X-1) $$\sqrt{X}$$ dx
3. The attempt at a solution
It is multiple choice; I believe the answer is either (2/5)x^(5/2)-(2/3)x^(3/2)+c or
(1/2)x^2+2x^(3/2)-x+c
I have tried to find the derivatives of both these answers, yet neither of them gave me back the correct integrand, so I must be doing something wrong. To do it the right way I have tried a u-substitution, but I just can't figure it out.
2. Apr 3, 2008
### rocomath
$$\int(x-1)\sqrt xdx$$
What was your first step?
Hint: Distribute the $$\sqrt x$$
3. Apr 3, 2008
### viper2308
I distributed the $$\sqrt{x}$$ and replaced with u's I then got
$$\int$$ u-u^(1/2). This lets me see where the -2/3x^(3/2) comes from, but I still don't understand where the 2/5x^(5/2) comes from.
4. Apr 3, 2008
### Snazzy
You don't need to replace with u's. What are the rules for multiplying variables with the same base with exponents?
5. Apr 3, 2008
### viper2308
Thank you, I forgot how to distribute for a second.
6. Apr 4, 2008
### Schrodinger's Dog
This seems a fairly straightforward case of multiplying out the brackets and solving using the sum rule of integrals:
$$\int \left(f \pm g\right) \,dx = \int f \,dx \pm \int g \,dx\rightarrow$$
$$\int (x-1)\sqrt{x}\,dx\rightarrow \int x^{\frac{3}{2}}-x^{\frac{1}{2}}\,dx=\frac{2}{5}x^{\frac{5}{2}}-\frac{2}{3}x^{\frac{3}{2}}+c$$
No need to use the u unless the question asks you to? Or am I missing something here?
Last edited: Apr 4, 2008
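A quick symbolic confirmation of that result (my own addition; the thread itself does everything by hand):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
F = sp.integrate((x - 1) * sp.sqrt(x), x)
print(F)                                                  # 2*x**(5/2)/5 - 2*x**(3/2)/3
print(sp.simplify(sp.diff(F, x) - (x - 1) * sp.sqrt(x)))  # 0, so F is a valid antiderivative
```

Differentiating the result recovers the integrand, which is also the quickest way to check a multiple-choice option.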
7. Apr 4, 2008
### Gib Z
I'm quite surprised they have multiple-choice anti derivative questions! I mean, if one doesn't really know how to integrate it they can just differentiate every option and see which one matches.
8. Apr 4, 2008
### Schrodinger's Dog
It sounds like calculus for dummies. I've never heard of multiple choice exams either? Not in A' Level or anywhere else? |
# 0.7 Compressed sensing (Page 3/5)
Page 3 / 5
The second statement of the theorem differs from the first in the following respect: when $M<2K$, there will necessarily exist $K$ -sparse signals $x$ that cannot be uniquely recovered from the $M$ -dimensional measurement vector $y=\Phi x$ . However, these signals form a set of measure zero within the set of all $K$ -sparse signals and can safely be avoided if $\Phi$ is randomly generated independently of $x$ .
Unfortunately, as discussed in Nonlinear Approximation from Approximation , solving this ${\ell }_{0}$ optimization problem is prohibitively complex. Yet another challenge is robustness; in the setting ofTheorem "Recovery via ℓ 0 optimization" , the recovery may be very poorly conditioned. In fact, both of these considerations (computational complexity and robustness) can be addressed, but atthe expense of slightly more measurements.
## Recovery via convex optimization
The practical revelation that supports the new CS theory is that it is not necessary to solve the ${\ell }_{0}$ -minimization problem to recover $\alpha$ . In fact, a much easier problem yields an equivalent solution (thanks again to the incoherency of thebases); we need only solve for the ${\ell }_{1}$ -sparsest coefficients $\alpha$ that agree with the measurements $y$ [link] , [link] , [link] , [link] , [link] , [link] , [link] , [link]
$\stackrel{^}{\alpha }=argmin{\parallel \alpha \parallel }_{1}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\text{s.t.}\phantom{\rule{4.pt}{0ex}}y=\Phi \Psi \alpha .$
As discussed in Nonlinear Approximation from Approximation , this optimization problem, also known as Basis Pursuit [link] , is significantly more approachable and can be solved with traditionallinear programming techniques whose computational complexities are polynomial in $N$ .
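For intuition, here is a toy Basis Pursuit recovery written as a linear program (a sketch of my own, taking $\Psi$ to be the identity so that $x=\alpha$ ; none of the notation below comes from the module itself):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 128, 40, 5                       # signal length, measurements, sparsity
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true

# min ||x||_1 s.t. Phi x = y, posed with auxiliary variables t >= |x|:
# variables z = [x, t]; minimize sum(t) subject to x - t <= 0, -x - t <= 0, Phi x = y
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[ np.eye(N), -np.eye(N)],
                 [-np.eye(N), -np.eye(N)]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([Phi, np.zeros((M, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_hat = res.x[:N]
print(np.max(np.abs(x_hat - x_true)))      # typically near solver precision: exact recovery
```

With $M=8K$ random Gaussian measurements the recovery here is essentially exact, which is consistent with the oversampling discussion that follows.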
There is no free lunch, however; according to the theory, more than $K+1$ measurements are required in order to recover sparse signals via Basis Pursuit. Instead, one typically requires $M\ge cK$ measurements, where $c>1$ is an oversampling factor . As an example, we quote a result asymptotic in $N$ . For simplicity, we assume that the sparsity scales linearly with $N$ ; that is, $K=SN$ , where we call $S$ the sparsity rate .
Theorem
[link] , [link] , [link] Set $K=SN$ with $0<S\ll 1$ . Then there exists an oversampling factor $c\left(S\right)=O\left(log\left(1/S\right)\right)$ , $c\left(S\right)>1$ , such that, for a $K$ -sparse signal $x$ in the basis $\Psi$ , the following statements hold:
1. The probability of recovering $x$ via Basis Pursuit from $\left(c\left(S\right)+ϵ\right)K$ random projections, $ϵ>0$ , converges to one as $N\to \infty$ .
2. The probability of recovering $x$ via Basis Pursuit from $\left(c\left(S\right)-ϵ\right)K$ random projections, $ϵ>0$ , converges to zero as $N\to \infty$ .
In an illuminating series of recent papers, Donoho and Tanner [link] , [link] , [link] have characterized the oversampling factor $c\left(S\right)$ precisely (see also "The geometry of Compressed Sensing" ). With appropriate oversampling, reconstruction via Basis Pursuit is also provably robust tomeasurement noise and quantization error [link] .
We often use the abbreviated notation $c$ to describe the oversampling factor required in various settings even though $c\left(S\right)$ depends on the sparsity $K$ and signal length $N$ .
A CS recovery example on the Cameraman test image is shown in [link] . In this case, with $M=4K$ we achieve near-perfect recovery of the sparse measured image.
|
# A discussion on the two limit cases of sin
The infinite discontinuity is the result of the first two conditions not holding true this is also a non removable discontinuity as the limit does not exist at x can be removed by defining the function value equal to the limit, as the limit exists in this case sequence and series. A rigorous evaluation of two special limits proof of limit(sin(theta)/theta,theta = 0) = 1 proof of in many cases, however, this is not enough to evaluate a limit the two special limits and were introduced. Evaluating a multiple integral involves expressing it as an iterated integral that is to say the first integration in this case is so simple that one can write down the result as easily as the integral it does not accept variable limits thus there are two ways to use dblquad. This article explains what special cases of calculus limit problems are and shows how to solve them by giving free calculus help what are special cases of limit problems for examples of how to solve limits of the above two forms. Limit of a function notation the discussion of the limit concept is facilitated by using a special notation the quantities in (3) and (4) are also referred to as one-sided limits two-sided limitsif both the left-hand limit and the right-hand limit. Chapter 6 limits of functions and in that case every l2r would be a limit we can rephrase the - de nition of limits in terms of neighborhoods the limit lim x0 sin 1 x corresponding to the function f: r nf0gr given by f(x) = sin. Single case study analyses offer empirically-rich, context-specific, holistic accounts and contribute to both theory-building and, to a lesser extent, theory-testing.
Moved permanently the document has moved here. Disclosing policy limits in liability claims: can failure to provide policy limits information in the absence of litigation establish a case for bad faith two cases addressing this question are the foundation of this article: exhaustion of limits discussion added to contractual risk. In this case is completely inside the second interval for the function and so there are values of y on both sides of that are also inside this interval this the limits of the two outer functions are / limits / computing limits [practice problems] [assignment. The law of sines is one of two trigonometric equations commonly applied to this formula becomes the planar formula at the limit one obtains respectively the euclidean, spherical, and hyperbolic cases of the law of sines described above let p k (r) indicate the circumference of a.
Eventual and extreme) bounds on the sequence. Also see why steady states are impossible overshoot loop: evolution under the maximum power. Calculus 2 integration techniques: in this case, the key step will be u = sin x, dv = e^x dx, ∫ e^x sin x dx = e^x sin x - ∫ e^x cos x dx, du = cos x, v = e^x; the integral we have now is the. What mathematicians mean by indeterminate form is that in some cases we think about it as having one value when evaluating the limit sin[x]^x (which is 1 as x goes to 0), we say it is equal to x^x (since sin[x] and x go to 0 at the same rate); the discussion of 0^0 is very old.
Limit of a function does not necessarily exists possible cases of non-existing limits would be when at least one of the one-sided limits does not exist. Limitations of forgiveness: two passages seem to limit god's forgiveness they are christ's discussion of the unpardonable sin in both cases the sin is excluded from the customary forgiveness which is extended to sins of all other classes. Listed here are a couple of basic limits and the standard limit laws which, when used in conjunction, can find most limits they are listed for standard, two-sided limits, but they work for all forms of limits. Sin is a differentiable function on r and sin(x) is a differentiable function on r hence, sin(l) is a differentiable function on ir \ {0} since it is a composite function of two.
## A discussion on the two limit cases of sin
Am i right to think that this should be the case for any function, where the thanks for the discussion and help calculus limits y=r\sin\theta$, and plug it it you get$\lim\limits_{r\to 0} \frac{r^3(cos^3\theta+sin^3\theta)}{r^2(cos^2\theta-sin^2\theta)} =\lim\limits_{r\to. Differential calc: limits learn with flashcards, games, and more — for free.
• Solutions to limits of functions as x approaches a constant solution 1 : click here to return to the list of problems the limit does not exist click here to return to the list of problems solution 13 : (make the replacement so that.
• Strategies for evaluating limits there are several approaches used to find limits it is also necessary to determine whether the result is valid for a two-sided limit in some of these cases.
• Limits of trigonometric functions return to contents go to problems & solutions 1 remark that trigonometric identities such as sin 2 x + cos 2 x = 1 or sin (x + y) are obtained by using this limit remark 31.
• Assignment 5 solution james mcivor 1 stewart 14216 [5 pts] find the limit or say why it does not exist: lim (xy)(00) x2 sin2 y x2 + 2y2 solution: the limit is equal to zero.
• This generalization includes as special cases limits on an interval sometimes this criterion is used to establish the non-existence of the two-sided limit of a function on r by showing that the one-sided limits either one way to define the limit of a function is in terms of the limit of.
The formal definition of a definite integral is stated in terms of the limit of a riemann in general there are 4 cases to consider to express a rational function as the sum of two or more partial fractions case 1 in this section we will consider two types of integrals known as improper. The statement of the law of sines a proof of the law of sines trigonometry from the very beginning the topics this problem has two solutions not only is angle cba a solution this is the case a b sin 45° = /2 therefore, b sin a = 2 /2 =. Strategies for evaluating limits there are several approaches used to find limits it is also necessary to determine whether the result is valid for a two-sided limit $and the sine function is present, the special case$\lim\limits{\theta\to 0}\dfrac{\sin\theta. The derivative of $\sin x$ 3 a hard limit 4 the derivative of $\sin x$, continued 5 two examples 2 the fundamental theorem of calculus 3 functions consisting of products of the sine and cosine can be integrated by using substitution and trigonometric identities. Physics forums - the fusion of science and community magnetic flux is the [img]paper discussion: solar system expansion and strong equivalence principle as seen by the nasa messenger mission antonio genova, erwan mazarico, sander goossens.
|
# I Interesting maths problems -- can you share some?
1. Mar 4, 2016
### moriheru
I would like to think about some interesting problems or interesting theorems to which one could find a proof. If you should know any I would be delighted if you could share them. Thank you very much.
2. Mar 4, 2016
### Staff: Mentor
Last edited: Mar 4, 2016
3. Mar 4, 2016
### Staff: Mentor
Another source of math problems for me has been the book:
Math 1001 by Prof Elwes
Its a survey of a large variety of math topics using a few paragraphs to describe each one. For some he'll mention that its an unsolved question or theorem.
https://www.amazon.com/Maths-1001-D...UTF8&qid=1457135502&sr=8-1&keywords=math+1001
It's pretty cheap too, at $20 or less, with the paperback at $13 |
# MathML Basic Elements
• The most basic elements of MathML are: mrow, mi, mo and mn.
Basic Elements
Index of elements and descriptions:
1. <mrow> element: The MathML <mrow> element is used to group any number of sub-expressions horizontally.
2. <mi> element: The MathML <mi> element is used to specify an identifier.
3. <mo> element: The MathML <mo> element is used to specify an operator in a broad sense.
4. <mn> element: The MathML <mn> element is used to specify a numeric literal.
For example, to write x + y = 5, the equivalent MathML code is:
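(Reconstructed example; the page's original code sample is not preserved in this copy.)

```xml
<!-- A minimal rendering of x + y = 5 with the four basic elements -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mrow>
    <mi>x</mi>
    <mo>+</mo>
    <mi>y</mi>
    <mo>=</mo>
    <mn>5</mn>
  </mrow>
</math>
```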
## <mrow> element
• The MathML <mrow> element is used to group any number of sub expressions horizontally.
## <mi> element
• The MathML <mi> element is used to specify an identifier. For example: the name of a variable, a constant, a function, etc.
• It automatically renders the identifier using an italic font if the identifier is one character long; otherwise the name is rendered in a normal, upright font.
## <mo> element
• The MathML <mo> element is used to specify an operator in a broad sense. For example: the addition operator '+', a fence operator '{', or a separator ','.
• The appropriate amount of space is added on the left and on the right of an <mo> element based on the textual contents of this element.
• For example: if, in the above expression, you replace <mo>+</mo> by <mo>,</mo>, this suppresses the space to the left of the <mo> element.
## <mn> element
• The MathML <mn> element is used to specify a numeric literal.
• For example: PI should be specified as <mi>PI</mi> and not as <mn>PI</mn>, while 3.14 should be specified as <mn>3.14</mn> and not as <mi>3.14</mi>. |
what is the significance of the crack in the house?
questions from the text "The Fall of the House of Usher" |
# NT is quite popular
Number Theory Level 3
Find the largest positive integer $$n$$ for which $$n^3+100$$ is divisible by $$n+10$$.
|
# Tensor rank of anti-symmetric tensor
Let $V$ be a vector space of dimension $n$. Let us consider $V^{\otimes n}=V\otimes V \ldots \otimes V$. This vector space contains the one-dimensional subspace $\wedge^n V$. My question is: is something known about the tensor rank of a generator of $\wedge^n V$?
More formally, let $e_1, e_2,\ldots e_n$ be a basis for $V$; then the question is what is known about the tensor rank of:$$T=\sum_{\sigma \in S_n}(-1)^{sign(\sigma)} e_{\sigma(1)}\otimes e_{\sigma(2)} \otimes \ldots \otimes e_{\sigma(n)}.$$
The trivial upper bound on the tensor rank of this form is $n!$. Is any better upper bound known?
As far as I know, without the $(-1)^{sign(\sigma)}$ (i.e. for the symmetric form) an upper bound of $2^n$ is known.
-
What is the tensor rank? – Sasha Dec 20 '12 at 5:37
@Sasha, see here for a definition: its.caltech.edu/~matilde/WeitzMa10Abstract.pdf – Qfwfq Dec 20 '12 at 10:01
I don't know the answer to your question, but I know that there is quite a lot of work on computing and bounding tensor ranks in the algebraic geometry community. You might try writing to any of M. Catalisano, A.V. Geramita, A. Gimigliano, J.M. Landsberg, and/or Jerzy Weyman and asking them your question. (I don't know that they read MO, so they might not otherwise know about it.) – Robert Bryant Dec 21 '12 at 16:59
For symmetric tensors, I think your problem is called 'Waring Problem for polynomials'. Specifically, identifying symmetric tensors with polynomials, the Waring problem asks- given a homogeneous polynomial of degree d, what is the minimum number of d-th powers of a linear polynomial that are needed to write the given polynomial. The generic number has been known for a while and is called (i hope i'm remembering correctly) the Alexander-Hirshowitz theorem. The problem of given a monomial, how many dth forms are needed to write it was just solved and is on the arxiv. – aginensky Jan 17 '13 at 17:15
Here is a link - arxiv.org/abs/1110.0745 . I think the rank of the 'determinant' considered as a symmetric tensor must be known, but I don't know it! – aginensky Jan 17 '13 at 17:17 |
# American Institute of Mathematical Sciences
2011, 2011(Special): 485-494. doi: 10.3934/proc.2011.2011.485
## Transport and generation of macroscopically modulated waves in diatomic chains
1 Weierstraß-Institut für Angewandte Analysis und Stochastik, Mohrenstraße 39, D-10117 Berlin
Received August 2010 Revised April 2011 Published October 2011
We derive and justify analytically the dynamics of a small macroscopically modulated amplitude of a single plane wave in a nonlinear diatomic chain with stabilizing on-site potentials including the case where a wave generates another wave via self-interaction. More precisely, we show that in typical chains acoustical waves can generate optical but not acoustical waves, while optical waves are always closed with respect to self-interaction.
Citation: Johannes Giannoulis. Transport and generation of macroscopically modulated waves in diatomic chains. Conference Publications, 2011, 2011 (Special) : 485-494. doi: 10.3934/proc.2011.2011.485
|
# A chord of length 16 cm is drawn in a circle of radius 10 cm.
Question:
A chord of length 16 cm is drawn in a circle of radius 10 cm. Find the distance of the chord from the centre of the circle.
Solution:
Let AB be the chord of the given circle with centre O and a radius of 10 cm.
Then AB =16 cm and OB = 10 cm
From O, draw OM perpendicular to AB.
We know that the perpendicular from the centre of a circle to a chord bisects the chord.
$\therefore B M=\left(\frac{16}{2}\right) \mathrm{cm}=8 \mathrm{~cm}$
In the right ΔOMB, we have:
$OB^2 = OM^2 + MB^2$ (Pythagoras theorem)
$\Rightarrow 10^2 = OM^2 + 8^2$
$\Rightarrow 100 = OM^2 + 64$
$\Rightarrow OM^2 = (100 - 64) = 36$
$\Rightarrow O M=\sqrt{36} \mathrm{~cm}=6 \mathrm{~cm}$
Hence, the distance of the chord from the centre is 6 cm. |
## Intermediate Algebra (6th Edition)
Published by Pearson
# Chapter 5 - Section 5.4 - Multiplying Polynomials - Exercise Set: 96
#### Answer
$2x$ and $3x$ cannot be added together as $3x$ has to be multiplied to $12-x$.
#### Work Step by Step
$2x$ and $3x$ cannot be added together as $3x$ has to be multiplied to $12-x$. The correct solution is: $=2x+(3x)(12)-(3x)(x) \\=2x+36x-3x^2 \\=38x-3x^2 \\=-3x^2+38x$
|
# Operator Theory Seminar
Speaker:
Ishan Ishan, Vanderbilt University
Topic:
Von Neumann equivalence
Abstract:
The notion of measure equivalence of groups was introduced by Gromov as the measurable counterpart to the topological notion of quasi-isometry. Another well-studied notion is that of $W^*\!$-equivalence, which states that two groups $\Gamma$ and $\Lambda$ are $W^*\!$-equivalent if they have isomorphic group von Neumann algebras, i.e., $L\Gamma\cong L\Lambda$. We introduce a coarser equivalence, which we call von Neumann equivalence, and show that it encapsulates both measure equivalence and $W^*\!$-equivalence. If time permits, we will also show that the new and wide class of groups, called properly proximal groups, introduced by Rémi Boutonnet, Adrian Ioana, and Jesse Peterson is also stable under von Neumann equivalence, thereby yielding the first examples of non-inner-amenable, non-properly proximal groups. This is based on joint work with Jesse Peterson and Lauren Ruth.
Event Date:
April 6, 2021 - 1:30pm to 2:20pm
Location:
Online
Calendar Category:
Seminar
Seminar Category:
Operator Theory |
# 2 Inequalitie Questions :s (1wiv fractions, other with modulus)
• October 6th 2009, 02:04 PM
Kevlar
2 Inequalitie Questions :s (1wiv fractions, other with modulus)
Doing some private study from textbook and come across these 2 questions i need help with.
1.
$\frac{3}{3x-2}>\frac{1}{x+4}$
My working
$3(x+4)>3x-2$
$x+4>x-\frac{2}{3}$
Now I've got the answer, but I'm not sure of the steps after this :s
2.
$x^2<2+|x|$
No idea how to start this one; I tried moving the 2 over, then got confused! (Sleepy)
• October 6th 2009, 02:53 PM
pickslides
Quote:
Originally Posted by Kevlar
Doing some private study from textbook and come across these 2 questions i need help with.
1.
$\frac{3}{3x-2}>\frac{1}{x+4}$
My working
$3(x+4)>3x-2$
$x+4>x-\frac{2}{3}$
now what i've got the answer but i'm not sure of the steps after this :s
What you have done isn't bad but is this the actual question?
Do you think it can be solved? |
# Usage of Hidden Markov Models
I have a set of questions regarding how HMMs are used.
Context: there is a stream of real numbers or real number vectors (e.g. data from a phone accelerometer) and the goal is to detect that an action has just happened based on the data from this stream. Only one type of action is considered, i.e. the device is either performing the action or not. An example of an action could be "drawing" a circle in the air with the device while not performing the action is, for example, just carrying the device in the pocket. I'm aware that there are other ways to do this than HMMs but I'm asking this question also to understand HMMs more so for the sake of this question please suppose that using HMMs is required.
I see two ways of doing this
• multiple HMMs (i.e. the speech recognition way) - two HMMs (one for the action and for a non-action), the action is detected if the action-model has higher probability of generating the observed sequence than the non-action model - this would also mean that the signal would have to be processed in a sliding window fashion, running the whole window through the HMMs to get the probabilities
• single HMM with tracking state - a single HMM with states corresponding to the stages of the (non-)action and then the forward algorithm (updated with each new observation as they come in from the stream) is used to get the probabilities of the states and if the action-is-happening-right-now state (or action-just-ended state) has the highest probability, the action is detected
Which of these ways is used in practice or which of these makes the most sense? What are the requirements for training data for each of the above approaches (provided they both make sense in the first place).
Regarding the training...
• The Baum-Welch training uses only the observations and the state model is fitted to the observations, i.e. there is no clear interpretation of what the states actually represent. Therefore it cannot be used for the single HMM approach, am I correct?
• If I wanted to use the single-model approach (provided it makes sense at all) I would need to actually know the hidden states of the system for the training sequences, am I correct?
• If it is possible to label the training sequence(s) with the state (or what I think is the state), i.e. an in-action state and not-in-action state, or even action-is-starting, in-action, action-is-ending and non-action states, how would I train the single HMM?
I'm a total newbie in HMMs so if the questions are weird or very basic, you know why. Thanks for any answers that help me shine light on these issues.
• The context you provide isn't very clear to me, in terms of motivating why you would want to use an HMM. You say you have a stream of real-number values, but what are these values and what hidden state are you supposing they are manifest representations of? You say you want to "detect an action" from the signal, but then you talk about blinking lights and tapping your phone on a table, which doesn't make sense in the context of "detecting" anything. Perhaps if you were more clear about WHY you want to use an HMM and what data you have to work with, people could give clearer advice. – Ryan Simmons Jul 3 '18 at 13:02
• That said, your description of "multiple HMMs" also isn't clear to me. Are you referring to multi-stream HMMs, or something else entirely? If you are referring to multi-stream HMMs, I don't think your description of the method is accurate, nor would it apply to your situation. From what you describe, you only have a SINGLE stream of observed information, whereas multistream HMM is trying to make inference based on MULTIPLE streams of observed information. It doesn't make sense to try to apply this to "action" vs. "non-action", since those are just complementary (non-independent) states. – Ryan Simmons Jul 3 '18 at 13:09
• @RyanSimmons I have updated the Context section of the question. Regarding multiple HMMs - if I understood the basics of speech recognition with HMMs correctly, it works in such a way that the observation sequence (acoustic features in that case) is passed to all the HMMs and each of the HMMs is trained to recognize a speech unit (phone, word... depending on the application). The unit is then chosen based on the probabilities of generating the observation sequence that is computed from all the HMMs. Using this way in my case would be like there are just two words - an action and a non-action. – zegkljan Jul 3 '18 at 13:14
• (1/2): Multistream HMMs have multiple output processes; i.e. multiple streams of OBSERVED variables. In speech recognition, this is represented by multiple modalities (acoustic features, phonetic features, syntactic features, and in some contexts things like facial expressions, etc.). The idea is that you are trying to extract information on some finite set of latent features based on information across these multiple observed modalities. This does not sound like your context, to me. (cont...) – Ryan Simmons Jul 3 '18 at 13:39
• (2/2): You have a single stream of observed information, by which you are trying to infer information on a single latent feature (the occurrence of an action). It doesn't make sense to me, as the problem is described, to try and fit multiple HMMs to this structure. In fact, that model wouldn't be identified. Action and non-action are not independent states, since one is a complement of the other; they are only describable by a single parameter (the probability of an action, since 1 minus this probability is the probability of a non-action). – Ryan Simmons Jul 3 '18 at 13:41
1. Which of these ways is used in practice or which of these makes the most sense?
As I have mentioned in the comments above, unless I am misunderstanding, your description of "multiple HMMs" is likely inaccurate, and certainly inappropriate for the problem as you describe it. It seems to me that when you say "multiple HMMs" you are referring to what are usually called "multi-stream HMMs". Multistream HMMs have multiple output processes; i.e. multiple streams of OBSERVED variables. In speech recognition, this is represented by multiple modalities (acoustic features, phonetic features, syntactic features, and in some contexts things like facial expressions, etc.). The idea is that you are trying to extract information on some finite set of latent features based on information across these multiple observed modalities. This does not sound like your context, to me.
You have a single stream of observed information, by which you are trying to infer information on a single latent feature (the occurrence of an action). It doesn't make sense to me, as the problem is described, to try and fit multiple HMMs to this structure. In fact, that model wouldn't be identified. Action and non-action are not independent states, since one is a complement of the other; they are only describable by a single parameter (the probability of an action, since 1 minus this probability is the probability of a non-action).
1. The Baum-Welch training uses only the observations and the state model is fitted to the observations, i.e. there is no clear interpretation of what the states actually represent. Therefore it cannot be used for the single HMM approach, am I correct?
No. Baum-Welch is simply an estimation algorithm. It is in fact the default for any simple HMM. Interpretation of the states has nothing to do with what algorithm is used to estimate the parameters; interpretation should be based on substantive knowledge of your data, and you should use this to inform how you construct your model.
1. If I wanted to use the single-model approach (provided it makes sense at all) I would need to actually know the hidden states of the system for the training sequences, am I correct?
If you actually knew the hidden states of your system, there would be no point in running a hidden Markov model! The entire point of an HMM is that you cannot observe the hidden states of the system, so you must make inference on them using some set of manifest variables which you can observe. If you can observe the hidden states, then you can just directly estimate the transition probabilities.
1. If it is possible to label the training sequence(s) with the state (or what I think is the state), i.e. an in-action state and not-in-action state, or even action-is-starting, in-action, action-is-ending and non-action states, how would I train the single HMM?
This question is a little unclear to me. As I said above, if you actually knew the states, then you don't have a HMM anymore. Otherwise, it sounds a little bit like you are describing a supervised training, where you may have some set of data on which you have complete knowledge but on future sets of data you will no longer have this knowledge. But it is also possible (and, I would argue, more common) to use unsupervised training, where you don't have any such labels. There is also a literature on semi-supervised training, where you have incomplete labeling.
Regardless, it sounds to me like you should start with a more basic overview of how training works in the context of HMMs. This thread has a number of useful links to papers and worked examples.
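To make the unsupervised route concrete, here is a minimal sketch using the Python library hmmlearn (my choice of tool here, not something prescribed by the question; the toy data and the choice of two Gaussian states are purely illustrative):

```python
import numpy as np
from hmmlearn import hmm

# Toy single stream of real-valued observations, shape (n_samples, 1)
rng = np.random.default_rng(0)
obs = np.concatenate([rng.normal(0.0, 1.0, 200),   # quiet segment
                      rng.normal(3.0, 1.0, 50),    # "action"-like segment
                      rng.normal(0.0, 1.0, 200)]).reshape(-1, 1)

# Unsupervised training: Baum-Welch (EM) estimates the transition matrix,
# the Gaussian emission parameters and the initial state distribution.
model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
model.fit(obs)

# Decoding: Viterbi gives the most likely hidden-state sequence, which is
# how you would label new, live data after training.
states = model.predict(obs)
print(model.transmat_)
print(states[:20])
```

Which of the two learned states corresponds to "action" is something you still have to decide yourself (e.g., by looking at the learned means), precisely because the training is unsupervised.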
• "If you actually knew the hidden states of your system, there would be no point in running a hidden Markov model!" - I was talking about the training data only. I have some fixed amount of observation sequences for which I know the hidden states. However, the states of the "live system" are actually hidden and therefore I need to use an HMM. "But it is also possible (and, I would argue, more common) to use unsupervised training" - please elaborate. If unsupervised, how would I find out the label for the live data then? – zegkljan Jul 3 '18 at 14:14
• Did you follow the link I provided? There are plenty of examples, there. You have to understand that training in and of itself is only about estimating parameters, which can be done with or without labels. In unsupervised training, you fit an HMM to your training data set to estimate the parameters; you can then validate or test this model using whatever other data you have, as in any other statistical model. Fundamentally, supervised training is no different, only you have more information to work with in order to estimate these model parameters. – Ryan Simmons Jul 3 '18 at 14:39
I will try to answer the questions separately below but would also refer you to a question I have asked on the subject some years ago, where - I believe - the setting is similar to what you describe. I will intentionally not repeat what is covered in that thread.
Which of these ways is used in practice or which of these makes the most sense? What are the requirements for training data for each of the above approaches (provided they both make sense in the first place).
In brief, for the event detection setting I have tried both the multiple HMM and single HMM approaches and did not get very far. Conceptually HMM's are more suited for settings where the states themselves are latent / hidden and you do not actually have intuition of what they may / may not represent; how many states there are etc. Put more succinctly, an HMM is an unsupervised learning algorithm with a temporal dimension.
In most cases it is just a Gaussian Mixture model with a state-transition component in the likelihood function. A very good reference on this is Pattern Recognition and Machine Learning by Chris Bishop, chapters 9 and 13.
The Baum-Welch training uses only the observations and the state model is fitted to the observations, i.e. there is no clear interpretation of what the states actually represent. Therefore it cannot be used for the single HMM approach, am I correct?
Baum-Welch is just the EM algorithm in this context, you use it to fit a single HMM, for multiple HMM's you would need to repeat the process. You can also try to fit a HMM (as you can any model) by using a stochastic optimisation process like MCMC, etc. In an HMM there is no clear representation of what the states represent not due to Baum-Welch but due to the unsupervised nature of the approach. In other words, you are just temporally clustering similar observations, taking also into account how they transition between themselves. Theoretically, there is no guarantee that the results of the clustering will align with your implicit labelling (event / no event). You can force it to do so by feature-engineering, but then you are trying to fit a solution to a problem it is not originally meant for IMHO.
If I wanted to use the single-model approach (provided it makes sense at all) I would need to actually know the hidden states of the system for the training sequences, am I correct?
If you knew the hidden states, they are not hidden. The very discussion of event/non-event resonates with supervised labelling. One commonly encountered issue in this context is the large imbalance in the number of event versus non-event observations. Events are usually rare occurrences (like in my question).
If it is possible to label the training sequence(s) with the state (or what I think is the state), i.e. an in-action state and not-in-action state, or even action-is-starting, in-action, action-is-ending and non-action states, how would I train the single HMM?
As mentioned above, from the moment you are talking about labelling, you would be better off, especially if your events are fixed lengths, by using a classifier with temporal features (e.g. going back x number of timesteps). Even more suitable are the weight sharing neural network algorithms like recurrent neural networks as you point out or convolutional neural networks (not just useful for image processing) with varying window/convolution sizes.
• The actions (events) are not fixed lengths. If they were, I could just make a classifier that would take a window from the signal as an input and spit out action/non-action as an output. – zegkljan Jul 3 '18 at 14:01
• The recurrent architectures (GRU or LSTM) or conv nets with varying window sizes will likely take care of that for you. – Zhubarb Jul 3 '18 at 14:02
• I'm also looking for something minimalistic so that it could run reasonably well on, say, arduino. HMMs seemed like the way to go but I have never actually used them before, hence my question. Thanks for the insights though. – zegkljan Jul 3 '18 at 14:05
• HMM's are not minimalistic by any means! They are (the simplest form of) dynamic bayesian networks. Notoriously difficult to fit. EM is not a robust process, not guaranteed to find the global optimum (unlike likelihood maximisation). – Zhubarb Jul 3 '18 at 14:06
• I'm talking about using a finished trained model. The training is, of course, hard but that would be done offline, not on the device. The inference, however, is quite easy if the number of states is small. – zegkljan Jul 3 '18 at 14:17
People always mix up "state", "event" and "event boundary".
Philosophically, "event" means the sharp change of state. So an "event detection" would mean detecting some transition with low probability.
Cognitively, psychologically and also looking at LSTM and predictive coding theory, you have a relatively stable predictive model of the input. Within a chunk of "event" (with a different meaning here), your model doesn't change much. When your model can no longer predict the input, this chunk of "event" breaks and a new event and a new model comes up.
Whatever terms people use, you can see the isomorphic structure.
In your description, "event" actually seems to mean a specific state. It's unnecessary to create a lot of parameters if a simpler model is already enough to describe your problem.
So at each time point you have a hidden state which can take the value "Event" or "Non-event". Between adjacent hidden states a transition matrix $A$ parametrizes $P(H_{i+1}|H_i)$ with higher probability to stay in the same state and lower probability to change (the usual sense of "event").
Probability density distributions of signal under hidden state "Event" and "Non-event" are different. Which means $p(S_i|H_i=Event)\neq p(S_i|H_i=Nonevent)$. We call it $B$.
$\pi$ specifies the probability distribution of the initial state $H_0$.
You have all the $S_i$ observed. What is left is to find the most likely sequence of $H_i$, i.e., the one with the highest joint probability, which you can compute by applying the chain rule.
Each $H_i$ corresponds to one time point, but the states will form chunks given the way you specify $A$ (more likely to stay than to change). The start and end of an action are essentially the first and last action hidden states in a row.
You can of course also call $p(S_i|H_i=Event)$ and $p(S_i|H_i=Nonevent)$ two different models. Then I think your two ways mean the same thing.
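As a rough sketch of what that decoding step can look like in Python (every parameter value below is made up purely for illustration; in practice $A$, $B$, and $\pi$ would come from training or from your domain knowledge):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical parameters for the two-state model sketched above:
# state 0 = "non-action", state 1 = "action".
pi = np.array([0.95, 0.05])                  # initial distribution
A = np.array([[0.99, 0.01],                  # sticky transitions: staying in a
              [0.05, 0.95]])                 # state is more likely than switching
emit = [norm(loc=0.0, scale=1.0),            # p(S_i | H_i = non-action)
        norm(loc=3.0, scale=1.0)]            # p(S_i | H_i = action)

def viterbi(signal):
    """Most likely hidden-state sequence for a 1-D signal (log domain)."""
    n, k = len(signal), len(pi)
    logA, logpi = np.log(A), np.log(pi)
    logB = np.array([[d.logpdf(s) for d in emit] for s in signal])  # (n, k)
    delta = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    delta[0] = logpi + logB[0]
    for t in range(1, n):
        scores = delta[t - 1][:, None] + logA        # scores[from, to]
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[t]
    path = np.zeros(n, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path                                      # runs of 1s = "action" chunks

signal = np.r_[np.random.normal(0, 1, 100),
               np.random.normal(3, 1, 30),
               np.random.normal(0, 1, 100)]
print(viterbi(signal))
```

The first and last index of each run of 1s then give you the start and end of a detected action.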
• You are right that "event" is not a good word. I edited my question to not use the word "event" but "action" instead as that is closer to what I'm trying to achieve. However, your answer does not actually answer almost any of my questions. I know how HMMs work but I don't know how to actually use them. The two ways are definitely not the same thing - in the multiple-models there are multiple separate HMMs and I look how well each of them describes the observations while in the single-model there is only one HMM and I look at the probable hidden state given the observations. – zegkljan Jul 3 '18 at 12:58
• if you have two separate HMMs, what are the hidden states of each HMM? If each HMM has only one state that doesn't change, it's essentially a reduced version of 1 HMM - the temporal relation between hidden states is ignored. – Xiaoxiong Lin Jul 3 '18 at 13:25
• I have never said that the two HMMs would have only one state. They would have some number of states chosen such that they would fit the training sequences well. I don't know what are the hidden states - if I understand Baum-Welch training correctly there is no representation of the states, they (the transition probabilities) are just arranged as to maximize the likelihood that the training data are explained by the model. But I'm asking about this (in a way) in my question too. – zegkljan Jul 3 '18 at 13:32
• sorry I misread your question. I think Baum-Welch algorithm can also be used for the 1HMM case, it automatically gives hidden states, just maybe not being what you expected - "action/non-action". For the 2 HMM case, I still don't see how it's doable. you are still not capturing the transition between action/non-action, by only comparing which is more likely at each time point. Unless you wrap these 2 HMMs up into a larger HMM. Then it comes back to what I said in the answer, with each small HMM being a distribution generating mechanism for $p(S_i|H_i=Event)$ – Xiaoxiong Lin Jul 3 '18 at 13:44 |
# 2.2: The Infinitude of Primes
We now show that there are infinitely many primes. There are several ways to prove this result. An alternative proof to the one presented here is given as an exercise. The proof we will provide was presented by Euclid in his book the Elements.
There are infinitely many primes.
We present the proof by contradiction. Suppose there are finitely many primes $$p_1, p_2, ...,p_n$$, where $$n$$ is a positive integer. Consider the integer $$Q$$ such that
$Q=p_1p_2...p_n+1.$
By Lemma 3, $$Q$$ has at least one prime divisor, say $$q$$. If we prove that $$q$$ is not one of the primes listed then we obtain a contradiction. Suppose now that $$q=p_i$$ for $$1\leq i\leq n$$. Thus $$q$$ divides $$p_1p_2...p_n$$ and as a result $$q$$ divides $$Q-p_1p_2...p_n$$. Therefore $$q$$ divides 1. But this is impossible since there is no prime that divides 1 and as a result $$q$$ is not one of the primes listed.
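The argument can be illustrated numerically. The following small Python check (using sympy; purely an illustration, not part of the proof) forms $$Q$$ from the first few primes and verifies that none of its prime factors appear in the original list.

```python
from sympy import prime, factorint

# Form Q = p_1 * p_2 * ... * p_n + 1 from the first n primes and factor it;
# every prime factor of Q is necessarily absent from the original list.
n = 6
primes = [prime(i) for i in range(1, n + 1)]       # [2, 3, 5, 7, 11, 13]
Q = 1
for p in primes:
    Q *= p
Q += 1
factors = factorint(Q)                              # {59: 1, 509: 1}
print(Q, factors, all(q not in primes for q in factors))
```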
The following theorem discusses the large gaps between primes. It simply states that there are arbitrary large gaps in the series of primes and that the primes are spaced irregularly.
Given any positive integer $$n$$, there exist $$n$$ consecutive composite integers.
Consider the sequence of integers
$(n+1)!+2, (n+1)!+3,...,(n+1)!+n, (n+1)!+n+1$
Notice that every integer in the above sequence is composite because $$k$$ divides $$(n+1)!+k$$ if $$2\leq k\leq n+1$$, since $$k$$ divides $$(n+1)!$$.
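The construction is easy to verify computationally; the sketch below (again purely illustrative) lists the $$n$$ consecutive composite integers produced by the proof for $$n=5$$.

```python
from math import factorial
from sympy import isprime

# For n = 5, the integers (n+1)!+2, ..., (n+1)!+(n+1) are five consecutive
# composites, since k divides (n+1)!+k for every 2 <= k <= n+1.
n = 5
block = [factorial(n + 1) + k for k in range(2, n + 2)]
print(block)                              # [722, 723, 724, 725, 726]
print(any(isprime(m) for m in block))     # False: all of them are composite
```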
## Exercises
1. Show that the integer $$Q_n=n!+1$$, where $$n$$ is a positive integer, has a prime divisor greater than $$n$$. Conclude that there are infinitely many primes. Notice that this exercise is another proof of the infinitude of primes.
2. Find the smallest five consecutive composite integers.
3. Find one million consecutive composite integers.
4. Show that there are no prime triplets other than 3,5,7.
## Contributors
• Dr. Wissam Raji, Ph.D., of the American University in Beirut. His work was selected by the Saylor Foundation’s Open Textbook Challenge for public release under a Creative Commons Attribution (CC BY) license. |
## Solving inhomogeneous wave equation
Yep, and
$$\sum_{n=1}^{\infty} \frac{1}{(2n-1)^{4}} =\frac{\pi^{4}}{96}$$
$$\Rightarrow \sum_{n=1}^{\infty} \frac{1}{(2n-1)^{4}} \overset{n=m+1}{=} \sum_{m=0}^{\infty} \frac{1}{(2(m+1)-1)^{4}} = \sum_{m=0}^{\infty} \frac{1}{(2m+1)^{4}}=\frac{\pi^{4}}{96}$$
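A quick numerical sanity check of that value (just a partial-sum comparison in Python, not something the derivation needs):

```python
import numpy as np

# Partial sum of 1/(2n-1)^4 compared against pi^4/96.
n = np.arange(1, 10_000)
print(np.sum(1.0 / (2 * n - 1) ** 4), np.pi ** 4 / 96)
```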
Thank you SO much for your patience. Tomorrow I am going through this problem from the start again to make sure I get each step.
Quote by wahoo2000 Thank you SO much for your patience. Tomorrow I am going through this problem from the start again to make sure I get each step.
You're welcome. The critical point of the problem was the part about the boundary conditions for w(x,t). Choosing the right boundary conditions (by choosing boundary conditions for S) makes the solution much easier. In general, you should try to set the conditions on S such that w(x,t) vanishes at one endpoint (x=0 in this case).
# American Institute of Mathematical Sciences
July 2009, 23(3): 617-638. doi: 10.3934/dcds.2009.23.617
## Front propagation in a noisy, nonsmooth, excitable medium
1 Department of Mathematics, University of Michigan, Ann Arbor, MI, 48109, United States
2 Department of Mathematics, Michigan State University, East Lansing, MI, 48824, United States
Received October 2007 Revised August 2008 Published November 2008
We consider the impact of noise on the stability and propagation of fronts in an excitable medium with a piece-wise smooth, discontinuous ignition process. In a neighborhood of the ignition threshold the system interacts strongly with noise and the front can lose monotonicity, resulting in multiple crossings of the ignition threshold. We adapt the renormalization group methods developed for coherent structure interaction, a key step being to determine pairs of function spaces for which the ignition function is Fréchet differentiable, but for which the associated semi-group, $S(t)$, is integrable at $t=0$. We parameterize a neighborhood of the front solution through a dynamic front position and a co-dimension one remainder. The front evolution and the asymptotic decay of the remainder are on the same time scale; the RG approach shows that the remainder becomes asymptotically small, in terms of the noise strength and regularity, and the front propagation is driven by a competition between the ignition process and the noise.
Citation: Mohar Guha, Keith Promislow. Front propagation in a noisy, nonsmooth, excitable medium. Discrete & Continuous Dynamical Systems - A, 2009, 23 (3) : 617-638. doi: 10.3934/dcds.2009.23.617
# Doctor
For the Fallout 3 and Fallout: New Vegas skill, see Medicine.
Doctor
Games: Fallout, Fallout 2, Fallout Tactics
Modifies: Chance to heal crippled limbs
Governed by: Perception, Intelligence
Initial value: 15% + 1% × (Perception + Intelligence)/2
Related perks: Medic, Healer
Related traits: Good Natured
The healing of major wounds and crippled limbs. Without this skill, it will take a much longer period of time to restore crippled limbs to use.
— In-game description
Doctor is a Fallout, Fallout 2 and Fallout Tactics skill. In Fallout 3 and Fallout: New Vegas, Doctor was merged with First Aid into the Medicine skill.
## Starting level
The starting level formula for all games:
15% + 1% × (Perception + Intelligence)/2
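Expressed as code, the starting value could be computed as in the sketch below (whether the game truncates or rounds the half point is an assumption here, not something confirmed above):

```python
def doctor_starting_level(perception: int, intelligence: int) -> int:
    # 15% base plus 1% per point of the averaged stats; integer division
    # is assumed for the half point.
    return 15 + (perception + intelligence) // 2

print(doctor_starting_level(6, 7))   # 21 (%)
```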
## Uses
Crippled limbs do not heal on their own, only with this skill or visiting a doctor and paying for the service. The doctor skill can heal crippled limbs or replenish hit points with successful use. The skill can only be used successfully 3 times every 24 hours; failed uses do not count toward this limit. In Fallout, where one is on a time limit from the start of the game, some use of this skill can save time vs Waiting.
In addition to healing crippled limbs, the doctor skill also heals 4-10 hit points per successful use, and can be used even when one couldn't wait due to enemy proximity. The amount of healing can be increased by the Healer perk, which adds 2-5 extra hit points in Fallout, or 4-10 extra in Fallout 2 and Fallout Tactics.
The main use of this skill is to allow access to the Living Anatomy perk (requires 60% Doctor), especially if one is an unarmed character. A doctor skill of 76% is also required to access the Vault City medical terminals and learn about the combat implants. |
Biogeosciences, 17, 2397–2424, 2020
https://doi.org/10.5194/bg-17-2397-2020
Research article | 05 May 2020
# Summarizing the state of the terrestrial biosphere in few dimensions
Guido Kraemer1,2,3,4, Gustau Camps-Valls2, Markus Reichstein1,3, and Miguel D. Mahecha1,3,4
• 1Max Planck Institute for Biogeochemistry, Department for Biogeochemical Integration, 07745 Jena, Germany
• 2Image Processing Laboratory, Universitat de València, 46980 Paterna (València), Spain
• 3German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, 04103 Leipzig, Germany
• 4Remote Sensing Centre for Earth System Research, Leipzig University, 04103 Leipzig, Germany
Correspondence: Guido Kraemer (gkraemer@bgc-jena.mpg.de)
Abstract
In times of global change, we must closely monitor the state of the planet in order to understand the full complexity of these changes. In fact, each of the Earth's subsystems – i.e., the biosphere, atmosphere, hydrosphere, and cryosphere – can be analyzed from a multitude of data streams. However, since it is very hard to jointly interpret multiple monitoring data streams in parallel, one often aims for some summarizing indicator. Climate indices, for example, summarize the state of atmospheric circulation in a region. Although such approaches are also used in other fields of science, they are rarely used to describe land surface dynamics. Here, we propose a robust method to create global indicators for the terrestrial biosphere using principal component analysis based on a high-dimensional set of relevant global data streams. The concept was tested using 12 explanatory variables representing the biophysical state of ecosystems and land–atmosphere fluxes of water, energy, and carbon. We find that three indicators account for 82 % of the variance of the selected biosphere variables in space and time across the globe. While the first indicator summarizes productivity patterns, the second indicator summarizes variables representing water and energy availability. The third indicator represents mostly changes in surface albedo. Anomalies in the indicators clearly identify extreme events, such as the Amazon droughts (2005 and 2010) and the Russian heat wave (2010). The anomalies also allow us to interpret the impacts of these events. The indicators can also be used to detect and quantify changes in seasonal dynamics. Here we report, for instance, increasing seasonal amplitudes of productivity in agricultural areas and arctic regions. We assume that this generic approach has great potential for the analysis of land surface dynamics from observational or model data.
# 1 Introduction
Today, humanity faces negative global impacts of land use and land cover change, global warming (IPCC2014), and associated losses of biodiversity, to only mention the most prominent transformations. Over the past decades, new satellite missions along with the continuous collection of ground-based measurements and the integration of both have increased our capacity to monitor the Earth's surface enormously. However, there are still large knowledge gaps limiting our capacity to monitor and understand the current transformations of the Earth system.
Many recent changes due to increasing anthropogenic activity are manifested in long-term transformations. One prominent example is “global greening”, which has been attributed to fertilization effects, temperature increases, and land use intensification. It is also known that phenological patterns change in the wake of climate change. However, these phenological patterns vary regionally. In “cold” ecosystems one may find decreased seasonal amplitudes of primary production due to warmer winters. Elsewhere, seasonal amplitude may increase in agricultural areas, for example, due to the so-called “green revolution”. Another change in terrestrial land surface dynamics is induced by increasing frequencies and magnitudes of extreme events. The consequences for land ecosystems have yet to be fully understood and require novel detection and attribution methods tailored to the problem. While extreme events are typically only temporary deviations from a normal trajectory, ecosystems may change their qualitative state permanently, for example shift from grassland to shrubland. Such shifts or tipping points can be induced by changing environmental conditions or direct human influence, and they pose yet another problem that needs to be considered. The question we address here is how to uncover and summarize changes in land surface dynamics in a consistent framework. The idea is to simultaneously take advantage of a large array of global data streams, without addressing each observed phenomenon in a specific domain only. We seek to develop an integrated, generic approach to uncover changes in land surface dynamics.
The problem of identifying patterns of change in high-dimensional data streams is not new. Extracting the dominant features from high-dimensional observations is a well-known problem in many disciplines. One approach is to manually define indicators that are known to represent important properties, such as the “Bowen ratio” (Bowen1926; a more complete description of the concept is given in Sect. 3.3). Another one consists in using machine learning to extract unique, and ideally independent, features from the data. In the climate sciences, for instance, it is common to summarize atmospheric states using empirical orthogonal functions (EOFs), also known as principal component analysis (PCA). The rationale is that dimensionality reduction only retains the main data features, which makes them more easily accessible for analysis. One of the most prominent examples is the description of the El Niño–Southern Oscillation (ENSO) dynamics in the multivariate ENSO index (MEI), an indicator describing the state of the regional circulation patterns at a certain point in time. The MEI is a very successful index that can be easily interpreted and used in a variety of ways; most basically it provides a measure for the intensity and duration of the different quasi-cyclic ENSO events, but it can also be associated with its characteristic impacts, e.g., seasonal warming, changes in seasonal temperatures, and overall dryness in the Pacific Northwest of the United States; drought-related fires in the Brazilian Amazon; and crop yield anomalies.
In plant ecology, indicators based on dimensionality reduction methods are used to describe changes to species assemblages along unknown gradients. The emerging gradients can be interpreted using additional environmental constraints, or based on internal plant community dynamics. It is also common to compress satellite-based Earth observations via dimensionality reduction to get a notion of the underlying dynamics of terrestrial ecosystems. For instance, it has been shown that one can understand the impacts of droughts and heat waves based on a compressed view of the relevant vegetation indices. In general, dimensionality reduction is the method of choice to compress high-dimensional observations into a few (ideally) independent components with little loss of information.
Understanding changes in land–atmosphere interactions is a complex problem, as all aforementioned patterns of change may occur and interact: land cover change may alter biophysical properties of the land surface, such as (surface) albedo, with consequences for the energy balance. Long-term trends in temperature, water availability, or fertilization may impact productivity patterns and biogeochemical processes. In fact, these land surface dynamics have implications for multiple dimensions and require monitoring of biophysical state variables such as leaf area index, albedo, etc., as well as associated land–atmosphere fluxes of carbon, water, and energy.
Here, we aim to summarize these high-dimensional surface dynamics and make them accessible for subsequent interpretations and analyses such as mean seasonal cycles (MSCs), anomalies, trend analyses, breakpoint analyses, and the characterization of ecosystems. Specifically, we seek a set of uncorrelated, yet comprehensive, state indicators. We want to have a set of very few indicators that represent the most dominant features of the above-described temporal ecosystem dynamics. These indicators should also be uncorrelated, so that one can study the system state by looking and interpreting each indicator independently. The approach should also give an idea of the general complexity contained in the available data streams. If more than a single indicator is required to describe land surface dynamics accurately, then these indicators shall describe very different aspects. While one indicator may describe global patterns of change, others could be only relevant in certain regions, for certain types of ecosystems, or for specific types of impacts. The indicators shall have a number of desirable properties: (1) represent the overall state of observations comprising the system in space and time, (2) carry sufficient information to allow for reconstructing the original observations faithfully from these indicators, (3) be of much lower dimensionality than the number of observed variables, and (4) allow intuitive interpretations.
In this work, we first introduce a method to create such indicators, and then we apply the method to a global set of variables describing the biosphere. Finally, to prove the effectiveness of the method, we interpret the resulting set of indicators and explore the information contained in the indicators by analyzing them in different ways and relating them to well-known phenomena.
Table 1: Variables used to describe the biosphere. For a description of the variables, see Appendix A.
# 2 Methods
## 2.1 Data
Table 1 gives an overview of the data streams used in this analysis (for a more detailed description see Appendix A). For an effective joint analysis of more than a single variable, the variables have to be harmonized and brought to a single grid in space and time. The Earth System Data Lab (ESDL; https://www.earthsystemdatalab.net, last access: 23 April 2020) curates a comprehensive set of data streams to describe multiple facets of the terrestrial biosphere and associated climate system. The data streams are harmonized as analysis-ready data on a common spatiotemporal grid (equirectangular grid, 0.25° in space and 8 d in time, 2001–2011), forming a 4D hypercube, which we call a “data cube”. The ESDL not only curates Earth system data, but also comes with a toolbox to analyze these data efficiently. For this study, we chose all available variables in the ESDL v1.0 (the most recent version available at the time of analysis), divided the available variables into meteorological and biospheric variables, and discarded the atmospheric variables. We also discarded variables with distributions that are badly suited for a linear PCA (e.g., burned area contains mostly zeros) and variables with too many missing values. The only dataset that was added post hoc was fAPAR, which represents an important aspect of vegetation that was not available in the data cube at the time of analysis (it is part of the most recent version of the data cube).
Several of the datasets used here are derived from flux tower measurements. The flux towers are not equally distributed in climate space; i.e., there are many flux towers in temperate areas but much fewer in tropic and arctic regions, which may lead to less accurate data in these regions. These datasets also exclude large arid areas such as the Sahara and Gobi deserts and parts of the Arabian Peninsula, which may affect the resulting loadings of the PCA slightly.
In this study, each variable was normalized globally to zero mean and unit variance to account for the different units of the variables, i.e., to express all variables in standard deviations from the mean as a common unit. Because the area of a pixel changes with latitude in the equirectangular coordinate system used by the ESDL, the pixels were weighted according to the represented surface area. Only spatiotemporal pixels without any missing values were considered in the calculation of the covariance matrix.
## 2.2 Dimensionality reduction with PCA
As a method for dimensionality reduction, we used a modified principal component analysis to summarize the information contained in the observed variables. PCA transforms the set of $d$ centered and, in this case, standardized variables into a subset of $p$, $1 \le p \le d$, principal components (PCs). Each component is uncorrelated with the other components, while the first PCs explain the largest fraction of variance in the data.
The data streams consist of $d=12$ observed variables at the same time and location. Each observation is defined in a $d$-dimensional space, $\mathbf{x}_i\in\mathbb{R}^d$, and we define the dataset by collecting all samples in the matrix $\mathbf{X}=[\mathbf{x}_1|\cdots|\mathbf{x}_n]\in\mathbb{R}^{d\times n}$. The observations are repeated in space and time and lie on a grid of $\text{lat}\times\text{long}\times\text{time}$. In our case, we have $n=|\text{lat}|\times|\text{long}|\times|\text{time}|=720\times 1440\times 506=524{,}620{,}800$ observations, where $|\cdot|$ denotes the cardinality of the dimension. Note that the actual number of observations was lower, $n=106{,}360{,}156$, because we considered land points only and removed missing values.
The fundamental idea of PCA is to project the data to a space of lower dimensionality that preserves the covariance structure of the data. Hence, the fundament of a PCA is the computation of a covariance matrix, Q. When all variables are centered to global zero mean and normalized to unit variance, the covariance matrix can in principle be estimated as
$$\mathbf{Q}=\frac{1}{n-1}\mathbf{X}\mathbf{X}^{T}=\frac{1}{n-1}\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}^{T}.\qquad\text{(1)}$$
However, in our case the data cube lies on a regular 0.25° grid, and estimating Q as above would lead to overestimating the influence of dynamics in relatively small pixels of high latitudes compared to lower latitudes where each data point represents a larger area. Hence, one needs a weighted approach to calculate the covariance matrix,
$$\mathbf{Q}=\frac{1}{w}\sum_{i=1}^{n}w_{i}\mathbf{x}_{i}\mathbf{x}_{i}^{T},\qquad\text{(2)}$$
where $w_i=\cos(\text{lat}_i)$ and $\text{lat}_i$ is the latitude of observation $i$, $w=\sum_{i=1}^{n}w_i$ is the total weight, and $n$ is the total number of observations. Equation (2) has the additional property that it can be computed sequentially on very big datasets, such as our Earth System Data Cube, by consecutively adding observations to an initial estimate.
Note that the actual calculation of the covariance matrix is even more complicated, because summing up many floating-point numbers one by one can lead to large inaccuracies due to precision issues of floating-point numbers and instabilities of the naive algorithm (the same holds for the implementations of the sum function in most software used for numerical computing). Here, we used the Julia package WeightedOnlineStats.jl (https://doi.org/10.5281/zenodo.3360311, repository: https://github.com/gdkrmr/WeightedOnlineStats.jl/, last access: 23 April 2020), implemented by the first author of this paper, which uses numerically stable algorithms for summation, higher-precision numbers, and a map-reduce scheme that further minimizes floating-point errors.
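A minimal sketch of the latitude-weighted covariance of Eq. (2) in plain NumPy is given below; unlike the WeightedOnlineStats.jl implementation used here, it sums naively in memory and ignores the streaming and numerical-stability aspects discussed above, so it is only meant to illustrate the weighting.

```python
import numpy as np

def weighted_covariance(X, lat):
    """Latitude-weighted covariance (Eq. 2); X is (n, d), already centered
    and standardized, lat holds the latitude of each observation in degrees."""
    w = np.cos(np.deg2rad(lat))            # pixel-area weights
    Xw = X * w[:, None]
    return (Xw.T @ X) / w.sum()            # (d, d)

# Toy example with random data standing in for the data cube observations.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 12))
lat = rng.uniform(-60.0, 80.0, size=1000)
Q = weighted_covariance(X, lat)
print(Q.shape)
```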
Based on this weighted and numerically stable covariance matrix, the PCA can be computed using an eigendecomposition of the covariance matrix,
$$\mathbf{Q}=\mathbf{V}\mathbf{\Lambda}\mathbf{V}^{T}\in\mathbb{R}^{d\times d}.\qquad\text{(3)}$$
In this case, the covariance matrix $\mathbf{Q}$ is equal to the correlation matrix because we standardized the variables to unit variance. $\mathbf{\Lambda}$ is a diagonal matrix with the eigenvalues, $\lambda_1,\dots,\lambda_d$, in the diagonal in decreasing order, and $\mathbf{V}\in\mathbb{R}^{d\times d}$ is the matrix with the corresponding eigenvectors in columns. $\mathbf{V}$ can project new incoming input data $\mathbf{x}_i$ (centered and standardized) onto the retained PCs,
$$\mathbf{y}_{i}=\mathbf{V}^{T}\mathbf{x}_{i}\in\mathbb{R}^{d},\qquad\text{(4)}$$
where $\mathbf{y}_i$ is the projection of the observation $\mathbf{x}_i$ onto the $d$ PCs.
The canonical measure of the quality of a PCA is the fraction of explained variance by each component, $\sigma_i^2$, calculated as
$$\sigma_{i}^{2}=\frac{\lambda_{i}}{\sum_{i=1}^{d}\lambda_{i}}.\qquad\text{(5)}$$
To get a more complete measure of the accuracy of the PCA, we used the “reconstruction error” in addition to the fraction of explained variance. PCA allows a simple projection of an observation onto the first p PCs and a consecutive reconstruction of the observations from this p-dimensional projection. This is achieved by
$$\mathbf{Y}_{p}=\mathbf{V}_{p}^{T}\mathbf{X}\in\mathbb{R}^{p\times n}\quad\text{and}\quad\mathbf{X}_{p}=\mathbf{V}_{p}\mathbf{Y}_{p}\in\mathbb{R}^{d\times n},\qquad\text{(6)}$$
where $\mathbf{Y}_p$ is the projection onto the first $p$ PCs, $\mathbf{V}_p$ the matrix with columns consisting of the eigenvectors belonging to the $p$ largest eigenvalues, and $\mathbf{X}_p$ the observations reconstructed from the first $p$ PCs.
The reconstruction error, $\mathbf{e}_i$, was calculated for every point, $\mathbf{x}_i$, in the space–time domain based on the reconstructions from the first $p$ principal components:
$$\mathbf{e}_{i}=\mathbf{V}_{p}\mathbf{V}_{p}^{T}\mathbf{x}_{i}-\mathbf{x}_{i}\in\mathbb{R}^{d}.\qquad\text{(7)}$$
As this error is explicit in space, time, and variable, it allows for disentangling the contribution of each of these domains to the total error. This can be achieved by estimating the (weighted) mean square error,
$$\mathrm{MSE}=\frac{1}{w}\sum_{i}w_{i}\mathbf{e}_{i}^{2}.\qquad\text{(8)}$$
This approach can give a better insight into the compositions of the error than a single global error estimate based on the eigenvalues.
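The following sketch illustrates the eigendecomposition (Eq. 3), the projection onto the leading components (Eqs. 4 and 6), and the per-variable weighted reconstruction error (Eqs. 7 and 8). It assumes `Q`, `X`, and `lat` from the previous snippet and uses a row-per-observation convention instead of the column convention of the equations.

```python
import numpy as np

def pca_from_covariance(Q, p):
    """Eigendecomposition of the covariance; returns eigenvalues, the first
    p eigenvectors, and the explained-variance fractions (Eq. 5)."""
    eigval, eigvec = np.linalg.eigh(Q)        # ascending order
    order = np.argsort(eigval)[::-1]
    lam, V = eigval[order], eigvec[:, order]
    return lam, V[:, :p], lam / lam.sum()

lam, Vp, explained = pca_from_covariance(Q, p=3)
Y = X @ Vp                                    # projections, shape (n, p)
X_rec = Y @ Vp.T                              # reconstruction from p PCs
err = X_rec - X                               # Eq. (7), per observation and variable
w = np.cos(np.deg2rad(lat))
mse = np.sum(w[:, None] * err**2, axis=0) / w.sum()   # Eq. (8), per variable
print(explained[:3], mse)
```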
## 2.3 Pixel-wise analyses of time series
The principal components estimated as described above are ideally low-dimensional representations of the land surface dynamics that require further interpretation. These components have temporal dynamics that need to be understood in detail. One crucial question is how the dynamics of a system of interest deviate from its expected behavior at some point in time. A classical approach is inspecting the “anomalies” of a time series, i.e., the deviation from the mean seasonal cycle at a certain day of year.
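As an illustration, mean-seasonal-cycle anomalies for a single pixel's indicator time series could be computed as in the sketch below (synthetic data on an 8 d time axis; the real data cube uses 46 fixed 8 d periods per year, which this simple day-of-year grouping only approximates).

```python
import numpy as np
import pandas as pd

# Synthetic indicator time series for one pixel, sampled every 8 days.
idx = pd.date_range("2001-01-01", "2011-12-31", freq="8D")
pc1 = pd.Series(np.sin(2 * np.pi * idx.dayofyear / 365)
                + 0.1 * np.random.randn(len(idx)), index=idx)

msc = pc1.groupby(pc1.index.dayofyear).mean()          # mean seasonal cycle
anomaly = pc1 - msc.reindex(pc1.index.dayofyear).to_numpy()
print(anomaly.head())
```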
Another key description of such system dynamics are trends. We estimated trends of the indicators as well as of their seasonal amplitude using the Theil–Sen estimator. The advantage of the Theil–Sen estimator is its robustness to up to 29.3 % of outliers (Theil1950a, b, c; Sen1968), while ordinary least-squares regression is highly sensitive to such values. The calculation of the estimator consists simply in computing the median of the slopes spanned by all possible pairs of points,
$$\mathrm{slope}_{ij}=\frac{z_{i}-z_{j}}{t_{i}-t_{j}},\qquad\text{(9)}$$
where $z_i$ is the value of the response variable at time step $i$ and $t_i$ the time at time step $i$. In our experiments, we computed the slopes separately per pixel and principal component with time as the predictor and the value of the principal component as the response variable.
To test the slopes for significance, we used the Mann–Kendall statistics (Mann1945; Kendall1970) and adjusted the resulting p values with the Benjamini–Hochberg method to control for the false discovery rate . Slopes with an adjusted p<0.05 were deemed significant.
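One possible per-pixel realization of this procedure is sketched below, using SciPy's Theil–Sen estimator, Kendall's tau as the Mann–Kendall test statistic, and statsmodels for the Benjamini–Hochberg adjustment; it is an illustration of the workflow, not necessarily the exact code used for the analysis.

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau
from statsmodels.stats.multitest import multipletests

def trend_analysis(series, t, alpha=0.05):
    """Theil-Sen slope (Eq. 9) and Mann-Kendall p value per pixel, with
    Benjamini-Hochberg adjustment across pixels; series is (n_pixels, n_time)."""
    slopes, pvals = [], []
    for z in series:
        slope, *_ = theilslopes(z, t)        # median of pairwise slopes
        _, p = kendalltau(t, z)              # Mann-Kendall via Kendall's tau
        slopes.append(slope)
        pvals.append(p)
    significant, p_adj, *_ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return np.array(slopes), p_adj, significant

t = np.arange(506) * 8 / 365.25              # time axis in years
series = np.random.randn(10, 506) + 0.05 * t # ten pixels with a weak trend
print(trend_analysis(series, t)[2])
```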
To identify disruptions in trajectories, breakpoint detection provides a good framework for analysis. For the estimation of breakpoints, the generalized fluctuation test framework was used: it relies on recursive residuals such that a breakpoint is identified when the mean of the recursive residuals deviates from zero. We relied on an existing implementation of this framework. For practical reasons, here we only focus on the largest breakpoint.
Figure 1: Example polygons and their areas, Eq. (10); the arrows indicate the directionality. (a) Clockwise polygon with a negative area. (b) Counterclockwise polygon with a positive area. (c) Chaotic polygon with a very low area. (d) Polygon with a single intersection and both a clockwise and counterclockwise portion. The clockwise portion is slightly larger than the counterclockwise portion; therefore the area is slightly negative.
The analysis of a different type of dynamic considers bivariate relations. In the context of oscillating signals it is particularly instructive to quantify their degree of phase shift and direction – even if both signals are not linearly related. A “hysteresis” would be such a pattern, describing how the pathways A→B and B→A between states A and B differ. We estimated hysteresis by calculating the area inside the polygon formed by the mean seasonal cycle of the combinations of two components.
$$\mathrm{Area}=\frac{1}{2}\sum_{i=1}^{n}x_{i}\left(y_{i+1}-y_{i-1}\right),\qquad\text{(10)}$$
where $n=46$ is the number of time steps in a year, and $x_i$ and $y_i$ are the mean seasonal cycle of two PCs at time step $i$. The polygon is circular; i.e., the indices wrap around the edges of the polygon so that $x_0=x_n$ and $x_{n+1}=x_1$. This formula gives the actual area inside the polygon only if it is non-self-intersecting and the vertices run counterclockwise. If the vertices run clockwise, the area is negative. If the polygon is shaped like an 8, the clockwise and counterclockwise parts will (partially) cancel each other out. Trajectories that have larger amplitudes will also tend to have larger areas, as illustrated in Fig. 1.
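Equation (10) is the shoelace formula applied to the closed mean-seasonal-cycle loop; a small sketch of the signed-area computation (illustrative only) is given below.

```python
import numpy as np

def hysteresis_area(x, y):
    """Signed area of the closed MSC polygon in a PC-PC plane (Eq. 10);
    x and y are the 46 mean-seasonal-cycle values of two components."""
    x, y = np.asarray(x), np.asarray(y)
    return 0.5 * np.sum(x * (np.roll(y, -1) - np.roll(y, 1)))

# A counterclockwise loop gives a positive area (close to pi for a unit circle).
theta = np.linspace(0, 2 * np.pi, 46, endpoint=False)
print(hysteresis_area(np.cos(theta), np.sin(theta)))
```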
# 3 Results and discussion
In the following, we first briefly present and discuss the quality of the global dimensionality reduction (Sect. 3.1) and interpret the individual components from an ecological point of view (Sect. 3.2). We summarize the global dynamics that we uncovered in the low-dimensional space (Sect. 3.3). We characterize the contained seasonal dynamics (Sect. 3.4), including spatial patterns of hysteresis (Sect. 3.5). We then describe global anomalies of the identified trajectories (Sect. 3.6) and discuss the identified anomalies in depth based on local phenomena (Sect. 3.7). Finally, we present global trends and their breakpoints (Sect. 3.7).
Figure 2: (a) Fraction of explained variance of the PCA by component. The knee at component three suggests that components four and higher do not contribute much to total variance. (b) Rotation matrix of the global PCA model (also called loadings, Eq. 4). The columns of the rotation matrix describe the linear combinations of the (centered and standardized) original variables that make up the principal components. PC1 is dominated by primary-productivity-related variables, PC2 by variables describing water availability, and PC3 by variables describing albedo. Values of the rotation matrix are clamped to the range [−0.5, 0.5]; the actual range of the values is [−0.73, 0.74] and [−0.46, 0.54] for the first three components.
## 3.1 Quality of the PCA
Figure 2a shows the explained fraction of variance (Eq. 5) for the global PCA based on the entire data cube. The two leading components explain 73 % of the variance from the 12 variables; each additional component contributes relatively little additional variance (PC3 contributes 9 % and all subsequent PCs less than 7 %). This results in a “knee” at component 3, which suggests that two indicators are sufficient to capture the major global dynamics of the terrestrial land surface, but we will also consider the third component in the following analyses (Cattell1966).
Figure 3: Reconstruction error of the data cube using varying numbers of principal components aggregated by the mean squared error. Reconstruction errors aggregated over all time steps and variables are shown in the left column: (a) using only the first component, (c) using the first two, and (e) using the first three. Corresponding right plots (b, d, f) show the mean reconstruction error aggregated by latitude.
We estimated the reconstruction error sequentially up to the first three principal components (Fig. 3). Regions that do not fit the model well show a higher reconstruction error. Considering one component only, the highest reconstruction errors appear in high latitudes but decrease strongly with each additional component and nearly vanish if the third component is included.
## 3.2 Interpretation of the PCA
The first PC summarizes variables that are closely related to primary productivity (GPP, LE, NEE, fAPAR) and therefore are highly interrelated (see Fig. 2b). The energy for photosynthesis comes from solar radiation, and fAPAR is an indicator for the fraction of light used for photosynthesis. The available photosynthetic radiation is used by photosynthesis to fix CO2 and to produce sugars that maintain the metabolism of the plant. The total uptake of CO2 is reflected in GPP, which is also closely related to water consumption. The flow of water within the plant is not only essential to enable photosynthesis but also drives the transport of nutrients from the roots. The uplift of water in the plant is ultimately driven by transpiration – together with evaporation from soil surfaces one can observe the integrated latent energy needed for the phase transition (LE). However, ecosystems also respire; CO2 is produced by plants in energy-consuming processes as well as by the decomposition of dead organic materials via soil microbes and other heterotrophic organisms. This total respiration can be observed as terrestrial ecosystem respiration (TER). The difference between GPP and TER is the net ecosystem exchange (NEE) rate of CO2 between ecosystems and the atmosphere . GPP and TER are also well represented in the first dimension.
The second component represents variables related to the surface hydrology of ecosystems (see Fig. 2b). Surface moisture, evaporative stress, root-zone soil moisture, and sensible heat are all essential indicators for the state of plant-available water. While surface moisture is a rather direct measure, evaporative stress is a modeled quantity summarizing the level of plant stress: a value of zero means that there is no water available for transpiration, while a value of 1 means that transpiration equals the potential transpiration . Root-zone soil moisture is the moisture content of the root zone in the soil, the moisture directly available for root uptake. If this quantity is below the wilting point, there is no water available for uptake by the plants. Sensible heat is the exchange of energy by a change in temperature; if there is enough water available, then most of the surface heat will be lost due to evaporation (latent heat), and with decreasing water availability more of the surface heat will be lost due to sensible heat, making this an indicator of dryness as well.
We observe that the third component is most strongly related to albedo (Fig. 2b). Albedo describes the overall reflectiveness of a surface. Here we refer to broadband (400–3000 nm) surface albedo; for an exact definition see Appendix A. Light surfaces, such as snow and sand, reflect most of the incoming radiation, while surfaces that have a high liquid water content or active vegetation absorb most of the incoming radiation. Local changes to albedo can be due to many causes, e.g., snowfall, vegetation greening and browning, or land use change.
The relation of PC3 to productivity and hydrology is opposite to what we would expect from an albedo axis. Because vegetation uses radiation as an energy source, albedo is negatively correlated with the productivity of vegetation, hence the negative correlation of albedo with PC1. Given that water also absorbs radiation, we can observe a negative correlation of albedo with PC2 (see Fig. 2b). We observe that PC1 and PC2 are positively correlated with PC3 on the positive portion of their axes (see Fig. 4d and f), which means counterintuitively that the index representing albedo is positively correlated with primary productivity and moisture content. Finally we can observe that PC1 and PC2 have a much higher reconstruction error in snow-covered regions, which is strongly improved by adding PC3 (see Fig. 3f). Therefore the third component should be regarded mostly as a binary variable that introduces snow cover, as the other information that is usually associated with albedo is already contained in the first two components.
Figure 4: Trajectories of some points (colored lines) and the area-weighted density over principal components one and two (the gray background shading shows the density) for (a, c, e) the raw trajectories and (b, d, f) the mean seasonal cycle. The trajectories are shown in the space of PC1–PC2 (first row), PC1–PC3 (second row), and PC2–PC3 (third row). The trajectories were chosen to cover a large area in the space of the first two principal components. Some of the trajectories have an arrow indicating the direction. The numbers illustrate the value of some variables; for units see Table 1. Description of the points is as follows. Red: tropical rain forest, 2.625° S, 67.625° W; blue: maritime climate, 52.375° N, 7.375° E; green: monsoon climate, 22.375° N, 82.375° E; purple: subtropical, 34.875° N, 117.625° W; orange: continental climate, 52.375° N, 44.875° E; yellow: arctic climate, 72.375° N, 119.875° E.
## 3.3 Distribution of points in PCA space
The bivariate distribution of the first two principal components forms a “triangle” (gray background in Fig. 4a). At the high end of PC1 we find one point of the triangle in which ecosystems have a high primary productivity (high values of GPP, fAPAR, LE, TER, and evaporation), mostly limited by radiation. On the lower end of the first principal component we find the other two points of the triangle describing two alternative states of low productivity. These can happen either when the second principal component coincides with temperature limitation (the negative extreme of the second principal component) as seen in the lower left corner of the distribution in Fig. 4a and b or due to water limitation (positive extreme of the second principal component, the upper left corner in Fig. 4a). This pattern reflects the two essential global limitations of GPP in terrestrial ecosystems .
Both components form a subspace in which most of the variability of ecosystems takes place. Component one describes productivity and component two the limiting factors to productivity. Therefore, we can see that most ecosystems with high values on component one (a high productivity) are at the approximate center of component two. When ecosystems are found outside the center of component two, they have lower values on component one (lower productivity) because they are limited by water or temperature (see Fig. 4b).
Figure 5: The background shading shows the distribution of the mean seasonal cycle of the spatial points (see Fig. 4). The contour lines represent the reconstruction of the variables from the first two principal components. The reconstructed variables are (a) latent heat (LE), (b) sensible heat (H), and (c) $\log_{10}(\mathrm{H}/\mathrm{LE})$, the base-10 logarithm of the Bowen ratio. Note that LE and H have been considered in the construction of the PCs and hence are a linear function of the PCs. The Bowen ratio, instead, was not considered here and clearly responds in a nonlinear form.
To further interpret the triangle we analyze how the Bowen ratio embeds in the space of the first two dimensions. Energy fluxes from the surface into the atmosphere can represent either a radiative transfer (sensible heat) or evaporation (latent heat). Their ratio is the “Bowen ratio”, $B=\frac{\mathrm{H}}{\mathrm{LE}}$ (see also Fig. 5). When water is available, most of the available energy will be dissipated by evaporation, $B<1$, resulting in a high latent heat flux. Otherwise, the transfer by latent heat will be low and most of the incoming energy has to be dissipated via sensible heat, $B>1$. In higher latitudes, there is relatively limited incoming radiation and temperatures are low; therefore there is not much energy to be dissipated and both heat fluxes are low. A high sensible heat flux is an indicator of water limitation.
## 3.4 Seasonal dynamics
The leading principal components represent most of the variability of the space spanned by the observed variables, summarizing the state of a spatiotemporal pixel efficiently. This means that the PCs track the state of a local ecosystem over time (Fig. 4a) or, in the case of the mean seasonal cycle, time of the year (Fig. 4b). For a representation of the state of the first three components in time and space, see Appendix Fig. B1.
A first inspection reveals a substantial overlap of seasonal cycles of very different regions of the world. We also see that very different ecosystems may reach very similar states in the course of the season, even though their seasonal dynamics are very different. For instance, a midlatitude pixel (blue trajectory in Fig. 4) shows very similar characteristics to tropical forests during peak growing season. This indicates that an ecosystem of the midlatitudes can reach similar levels of productivity and water availability as a tropical rain forest (see also Appendix Fig. C1). Likewise, for the first two components, many high-latitude areas show similar characteristics to midlatitude areas during winter (low latent and sensible energy release as well as low GPP), and many dry areas such as deserts show similar characteristics to areas with a pronounced dry season, e.g. the Mediterranean.
Depending on their position on Earth, ecosystem states can shift from limitation to growth during the year (Fig. 4b). For example, the orange trajectory in Fig. 4, an area close to Moscow, shifts from a temperature-limited state in winter to a state of very high productivity during summer. Other ecosystems remain in a single limitation state with only slight shifts, such as the red trajectory in Fig. 4. In the corner of maximum productivity of the distribution, we find tropical forests characterized by a very low seasonality. We also observe that very different ecosystems can have very similar characteristics during their peak growing season; e.g. the green (northeast India), blue (northwest Germany), and orange (close to Moscow) trajectories closely resemble the red trajectory during their peak growing seasons.
The third component shows a different picture. Due to a consistent winter snow cover in higher latitudes, the albedo there is much higher and the amplitude of the mean seasonal cycle is much larger than in other ecosystems. Other areas show comparatively little variance on the third component, and there the third component is even positively correlated with productivity and moisture content, which is the opposite of what would be expected from an albedo axis.
Figure 6. Mean seasonal cycle of the first three principal components (in columns) during the seasons (in rows). Left column: first principal component. Middle column: second principal component. Right column: third principal component. Rows from top to bottom: equally spaced intervals during the year. Values have been clamped to 0.7 times their range to increase contrast.
The global pattern of the first principal component follows the productivity cycles during summer and winter (Fig. 6, left column) of the Northern Hemisphere, with positive values (high productivity, green) during summer and negative values (low productivity, brown) during winter. The tropics show high productivity all year. The global pattern shows the well-known green wave (Schwartz, 1994, 1998) because the first dimension integrates over all variables that correlate with plant productivity.
The second principal component (Fig. 6, middle column) tracks water deficiency: red and light red areas indicate water deficiency, light blue areas indicate excess water, and dark blue areas indicate growth limitation due to cold. Areas which are temperature limited during winter but have a growing season during summer, such as boreal forests, change from dark blue in winter to light blue during the growing season. Areas which have low productivity during a dry season change their coloring from red to light red during the growing season, e.g. the northwest of Mexico and the southwest of the United States.
The third principal component (Fig. 6, right column) tracks surface reflectance. Therefore we can see the highest values in the arctic region during winter, and other areas vary much less in their reflectance throughout the year. Again, the third component shows a counterintuitive behavior in the midlatitudes, as it is positively correlated with productivity and therefore shows the opposite behavior of what would be expected from an indicator tracking albedo.
Although the principal components are globally uncorrelated, they covary locally (see Fig. D1). Ecosystems with a dry season have a negative covariance between PC1 and PC2, while ecosystems that cease productivity in winter have a positive covariance. Cold arid steppes and boreal climates show a negative covariance between PC1 and PC3. While other ecosystems with a strong seasonal cycle show a positive correlation, many tropical ecosystems do not show a large covariance. A very similar picture emerges for the covariance of PC2 and PC3: boreal and steppe ecosystems show a negative covariance, while most other ecosystems show a more or less pronounced positive covariance, again depending on the strength of the seasonality.
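The local-versus-global distinction can be made concrete with a small, purely illustrative sketch (hypothetical arrays, not the study's data): PC scores that are uncorrelated when pooled over all space–time pixels can still covary over time at a single grid cell.

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_lat, n_lon = 46 * 11, 18, 36           # hypothetical grid and time axis
pc1 = rng.standard_normal((n_time, n_lat, n_lon))
pc2 = rng.standard_normal((n_time, n_lat, n_lon))

# Covariance over time at every grid cell (Fig. D1 uses the mean seasonal cycles).
cov_local = ((pc1 - pc1.mean(axis=0)) * (pc2 - pc2.mean(axis=0))).mean(axis=0)

# Pooled over all pixels and time steps, real PC scores are uncorrelated by
# construction, so for the actual indicators this value is ~0.
cov_global = np.cov(pc1.ravel(), pc2.ravel())[0, 1]
```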
Observing the mean seasonal cycle of the principal components gives us a tool to characterize ecosystems and may also serve as a basis for further analysis, such as a global comparison of ecosystems.
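A minimal sketch of how such a per-pixel characterization could be derived from an indicator time series follows; the eight-daily layout with 46 steps per year and the synthetic values are assumptions for illustration only.

```python
import numpy as np

# Synthetic PC time series at one pixel: 11 years of 46 eight-daily steps.
rng = np.random.default_rng(7)
n_years, n_steps = 11, 46
seasonal = np.sin(np.linspace(0, 2 * np.pi, n_steps, endpoint=False))
pc = seasonal + 0.3 * rng.standard_normal((n_years, n_steps))

# Mean seasonal cycle: average each within-year step over all years.
msc = pc.mean(axis=0)

# Simple ecosystem characteristics derived from the MSC.
amplitude = msc.max() - msc.min()      # strength of the seasonal cycle
peak_step = int(msc.argmax())          # timing of the seasonal maximum
```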
Figure 7. The area inside the mean seasonal cycles of (a) PC1–PC2, (b) PC1–PC3, and (c) PC2–PC3. The area is positive if the direction is counterclockwise and negative if the direction is clockwise. Most of the trajectories need a strong seasonal cycle to show a pronounced hysteresis effect. If the mean seasonal cycle intersects itself, the areas cancel each other out, e.g. the green trajectory of Fig. 4b.
## 3.5 Hysteresis
The alternative return path between ecosystem states forming the hysteresis loops arises from the ecosystem tracking seasonal changes in the environmental conditions, e.g. summer–winter or dry–rainy seasons (Fig. 4b). Hysteresis is a common occurrence in ecological systems. For instance, a hysteresis loop can be found when plotting soil respiration against soil temperature. The sensitivity of soil respiration to soil temperature changes seasonally due to changing soil moisture and photosynthesis (by supplying carbon to the rhizosphere), producing a seasonally changing hysteresis effect. Biological variables also show hysteresis effects in their relations with atmospheric variables; for example, a hysteresis effect has been found between seasonal NEE, temperature, and a number of other ecosystem- and climate-related variables. Here we look at the mean seasonal cycles of pairs of indicators and the area they enclose.
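The signed area referred to here can be computed with the shoelace formula. The following sketch is illustrative only (synthetic seasonal cycle, hypothetical sampling), but it reproduces the sign convention of Fig. 7: positive for counterclockwise loops, negative for clockwise loops.

```python
import numpy as np

def signed_loop_area(x, y):
    """Signed area enclosed by the closed seasonal trajectory (x[i], y[i]).

    Shoelace formula: positive for counterclockwise loops, negative for
    clockwise loops, and close to zero when the trajectory intersects itself
    and the partial areas cancel (e.g. the green trajectory in Fig. 4b)."""
    x, y = np.asarray(x), np.asarray(y)
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

# Hypothetical mean seasonal cycle of PC1 and PC2 at one grid cell
# (46 eight-daily steps forming one closed loop).
t = np.linspace(0, 2 * np.pi, 46, endpoint=False)
pc1_msc = np.cos(t)
pc2_msc = 0.3 * np.sin(t)                 # counterclockwise -> positive area
print(signed_loop_area(pc1_msc, pc2_msc))
```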
The orange trajectory (area close to Moscow) in Fig. 4b shows that the paths between maximum and minimum productivity can be very different, in contrast to the blue trajectory located in the northwest of Germany which also has a very pronounced yearly cycle but shows no such effect. Figure 4 also indicates that the area inside the mean seasonal cycles of PC1–PC2 and PC1–PC3 shows important characteristics while hysteresis in PC2–PC3 is a much less pronounced feature; i.e., we can only see a pronounced area inside the yellow curve in Fig. 4f.
The trajectories that show a more pronounced counterclockwise hysteresis effect in PC1–PC2 (Fig. 7a) are areas with a warm and temperate climate and partially those that have a snow climate with warm summers, i.e., areas that have pronounced growing, dry, and wet seasons and therefore shift their limitations more strongly during the year. This means the moisture reserves are depleted during the growing season, and therefore the return path has higher values on the second principal component (the climatic zones are taken from the Köppen–Geiger classification). We can also see that areas with dry winters tend to have a clockwise hysteresis effect, e.g. many areas in East Asia. Due to the humid summers there is no increasing water limitation during the summer months, which causes a decrease in PC2 instead of an increase. Other areas with clockwise hysteresis can be found in the winter-dry areas of the Andes and the winter-dry areas north and south of the African rain forests. Tropical rain forests do not show any hysteresis effect due to their low seasonality. In general we can say that the area inside the mean seasonal cycle trajectory of PC1–PC2 depends mostly on water availability in the growing and non-growing seasons, i.e., the contrast of wet summer and dry winter vs. dry summer and wet winter.
The hysteresis effect on PC1–PC3 (Fig. 7b) shows a pronounced counterclockwise MSC trajectory mostly in warm temperate climates with dry summers, while it shows a clockwise MSC trajectory in most other areas; again tropical rain forests are an exception due to their low seasonality. The most pronounced clockwise MSC trajectories can be found in tundra climates in arctic latitudes, where we have a consistent winter snow cover and a very short growing period. A counterclockwise rotation can be found in summer-dry areas, such as the Mediterranean and California, but also in some more humid areas, such as the southeast United States and the southeast coast of Australia. In these areas we find a decrease in PC3 during the non-growing phase, which probably corresponds to a drying out of the vegetation and soils.
The hysteresis effect on PC2–PC3 (Fig. 7c) mostly depends on latitude. There is a large counterclockwise effect in the very northern parts, due to the large amplitude of PC3. The amplitude gets smaller further south until the rotation reverses in winter dry areas at the northern and southern extremes of the tropics and disappears at the equatorial humid rain forests.
We can see that the hysteresis of pairs of indicators represents large-scale properties of climatic zones. The enclosed area and the direction of the rotation provide interesting information. Hysteresis can provide information on the seasonal availability of water, seasonal dry periods, or snowfall. With the method presented here, we cannot observe intersecting trajectories, which would probably provide even more interesting insights (e.g. the green trajectory in Fig. 4b).
Figure 8. Anomalies of the first three principal components. The brown–green contrast shows the anomalies on PC1, i.e. relatively low productivity or greening, respectively. The blue–red contrast shows the anomalies on PC2, i.e. relative wetness or dryness, respectively. The brown–purple contrast shows the anomalies on PC3, i.e. a relative deviation in albedo. Panels (a), (e), and (i) are maps showing the anomalies of PC1–PC3, respectively, on 1 January 2001. Panels (b), (c), and (d) show longitudinal cuts of PC1–PC3, respectively, at the red vertical line in (a). The effects of the floods on the Horn of Africa (2006) and the Russian heat wave (2010) are highlighted by circles. Panels (f), (g), and (h) show longitudinal cuts of PC1–PC3, respectively, at the red vertical line in (e). Strong droughts in the Amazon during 2005 and 2010 can be observed as large red spots on the fringes of the Amazon basin (highlighted by circles). Panels (j), (k), and (l) show longitudinal cuts of PC1–PC3, respectively, at the red vertical line in (i). A strong snowfall event affecting central and southern China is marked by circles.
## 3.6 Anomalies of the trajectories
The deviation of the trajectories from their mean seasonal cycle should reveal anomalies and extreme events. These anomalies have a directional component, which makes them interpretable in the same way as the original PCs. Therefore one can infer the state of the ecosystem during an anomaly. For instance, the well-known Russian heat wave in summer 2010 appears in Fig. 8 as a dark brown spot in the southern part of the affected area, indicating lower productivity, and as a thin green line in the northern parts, indicating increased productivity. This confirms earlier reports in which only the southern agricultural ecosystems were negatively affected by the heat wave, while the northern, predominantly forest ecosystems rather benefited from the heat wave in terms of primary productivity.
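A sketch of this anomaly computation at a single pixel is given below; the series is synthetic and the temporal layout is assumed, but the operation, i.e., subtracting the mean seasonal cycle from the full series, is the one described above.

```python
import numpy as np

rng = np.random.default_rng(2)
n_years, n_steps = 11, 46
pc = rng.standard_normal((n_years, n_steps))   # synthetic PC series at one pixel

msc = pc.mean(axis=0)                          # mean seasonal cycle
anomaly = (pc - msc).ravel()                   # deviation from the MSC over time

# Large joint excursions of the PC1-PC3 anomalies mark events such as the
# 2010 Russian heat wave or the 2005/2010 Amazon droughts (Fig. 8).
```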
Another example of an extreme event that we find in the PCs is the very wet November rainy season of 2006 in the Horn of Africa after a very dry rainy season in the previous year. This event was reported to bring heavy rainfall and flooding events which caused an emergency for the local population but also increased ecosystem productivity. The rainfall event appears as green and blue spots in Fig. 8b and c, preceded by the drought events which appear as red and brown spots.
Figure 8f and g also show the strong drought events in the Amazon; in particular, the droughts of 2005 and 2010 appear strongly north and south of the Amazon basin. The central Amazon basin does not show these strong events, because the observable response of the ecosystem was buffered by the large water storage capacity of the central Amazon basin.
Another extreme event that can be seen is the extreme snow and cold event affecting central and southern China in January 2008, causing the temporary displacement of 1.7 million people and economic losses of approximately USD 21 billion. This event shows up clearly on PC2 and PC3 as cold and light anomalies, respectively (see Fig. 8k and l).
Figure 9. Trajectories of the first two principal components for single pixels. (a) Deforestation increases the seasonal amplitude of the first two PCs (Brazilian rain forest, 9.5° S, 63.5° W). The red line shows the trajectory before 2003 and the blue line the trajectory from 2003 onward. A strong increase in seasonal amplitude can be observed after 2003. (b) The heat wave is clearly visible in the trajectory (red, Russian heat wave, summer 2010, 56° N, 45.5° E). (c) Rainfall in the short rainy season (November–December) influences agricultural yield and can cause flooding (extreme flooding after drought, November 2006, 3° N, 45.5° E). (d) The European heat wave in summer 2003 was one of the strongest on record (France, 47.2° N, 3.8° E). The mean seasonal cycle of the trajectories is shown in purple.
## 3.7 Single trajectories
Observing single trajectories can give insight into past events that happened at a certain place, such as extreme events or permanent changes in ecosystems. The creation of trajectories is an old method used by ecologists, mostly on species assembly data of local communities, to observe how the composition changes over time. In this context, we observe how the states of the ecosystems inside a grid cell shift over time, which comprises a much larger area than a local community but is probably also less sensitive to very localized impacts than a community-level analysis. One of the main differences between the method applied here and the classical ecological indicators is that the trajectories observed here are embedded into the space spanned by a single global PCA, and therefore we can compare a much broader range of ecosystems directly.
Figure 10. (a, c, e) Trends in PC1–PC3, respectively (2001–2011). (b, d, f) Bivariate distribution of trends. Trends were calculated using the Theil–Sen estimator. Panels (a), (c), and (e) show significant trends only (p<0.05, Benjamini–Hochberg adjusted).
The seasonal amplitude of the trajectory in the Brazilian Amazon increases due to deforestation and crop growth cycles. Figure 9a shows an area in the Brazilian Amazon in Rondônia (9.5° S, 63.5° W) which was affected by large-scale land use change and deforestation. It can be seen that the seasonal amplitude increases strongly after the beginning of 2003. This increased amplitude could be due to any of the following reasons or a combination of them: deforestation decreases the water storage capability and dries out soils, causing larger variability in ecosystem productivity during periods of no rain; large-scale deforestation can cause a shift in local-scale circulation patterns, causing lower local precipitation; and crop growth and harvest cause an increased amplitude in the cycle of productivity. An analysis of the trajectory can point to the nature of the change; however, finding the exact causes of the change requires a deeper analysis.
The 2010 Russian heat wave has a very clear signal in the trajectories. Figure 9b shows the deviation of the trajectory during the Russian heat wave (red line) in an area east of Moscow (56° N, 45.5° E). In the southern grass- and croplands, the heat wave caused the productivity to drop significantly during summer due to a depletion of soil moisture. In the northern forested parts affected, the heat wave caused an increase in ecosystem productivity during spring due to higher temperatures combined with sufficient water availability. This shows the compound nature of this extreme event (see Fig. 8a). The analysis of the trajectory points directly towards the different types of extremes and responses that happened in the biosphere during the heat wave.
The variability of rainfall during the November rainy season in the Horn of Africa is visible in the trajectory of Fig. 9c (3° N, 45.5° E), in which the marked points correspond to November of each observed year. The November rain has implications for food security because the second crop season depends on it. In 2006, the rainfall events were unusually strong and caused widespread flooding and disaster but also higher ecosystem productivity (see also Fig. 8). This was especially devastating because it followed a long drought that caused crop failures. Note also the two rainy seasons in the mean seasonal cycle (purple line in Fig. 9c).
The 2003 European heat wave is reflected in the trajectories just like the 2010 Russian heat wave. Figure 9d shows the trajectory during the August 2003 heat wave in Europe (France, 47.2° N, 3.8° E). The heat wave was unprecedented and caused large-scale environmental, health, and economic losses. The 2010 Russian heat wave was stronger, but its strongest parts were located in eastern Europe (see Fig. 8), while the center of the 2003 heat wave was located in France.
As we have seen here, observing single trajectories in reduced space can give us important insights into ecosystem states and changes that occur. While the trajectories can point us towards abnormal events, they can only be the starting points for deeper analysis to understand the details of such state changes.
## 3.8 Trends in trajectories
The accumulation of CO2 in the atmosphere should cause an increase in global productivity of plants due to CO2 fertilization, while larger and more frequent droughts and other extremes may counteract this trend. Satellite observations and models have shown that during the last decades the world's ecosystems have greened up during growing seasons. This is explained by CO2 fertilization, nitrogen deposition, climate change, and land cover change. Tropical forests especially showed strong greening trends during the growing season.
General patterns of trends that can be observed are a positive trend (higher productivity) on the first principal component in many arctic regions. Many of these regions also show a wetness trend, with the notable exception of the western parts of Alaska, which have become drier. This is important because wildfires play a major role in these ecosystems. These changes are also accompanied by a decrease in PC3 due to a loss in snow cover. A large-scale dryness trend can also be observed across large parts of western Russia. Increasing productivity can also be observed for large parts of the Indian subcontinent and eastern Australia. Negative trends in the first component can also be observed: they are generally smaller and appear in regions around the Amazon and the Congo Basin, but also in parts of western Australia. The main difference between previous analyses and the observations presented here is that previous studies typically looked only at trends during the growing season, while this analysis uses the entire time series to calculate the slope.
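A hedged sketch of how such per-pixel trend estimates could be obtained is given below, using the Theil–Sen estimator and a Benjamini–Hochberg adjustment as stated in the caption of Fig. 10; the use of a Kendall-tau-based p value and all array shapes are assumptions for illustration, not a description of the study's actual implementation.

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau
from statsmodels.stats.multitest import multipletests

# Hypothetical stack of indicator time series: (time, n_pixels).
rng = np.random.default_rng(3)
t = np.arange(11 * 46)
pcs = rng.standard_normal((t.size, 100)) + 0.001 * t[:, None]

slopes, pvals = [], []
for series in pcs.T:
    slope, _, _, _ = theilslopes(series, t)   # robust Theil-Sen slope
    _, p = kendalltau(t, series)              # rank-based p value (assumption)
    slopes.append(slope)
    pvals.append(p)

# Benjamini-Hochberg false-discovery-rate adjustment across all pixels,
# keeping only slopes whose adjusted p value is below 0.05 (cf. Fig. 10).
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
significant_slopes = np.where(reject, slopes, np.nan)
```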
In the Amazon basin, we find a dryness trend accompanied by a decrease in productivity and a slight increase in PC3. In the Congo Basin, we find a wetness trend and an increasing productivity in the northern parts, while the southern part and the woodland south of the Congo Basin show a strong dryness trend with decreased productivity. This differs from earlier reports of a widespread browning of vegetation in the entire Congo Basin for the April–May–June seasons during the period 2000–2012. Such a widespread browning is not reflected in our data, especially when compared to the areas surrounding the Congo Basin. We can find only minor browning effects inside the basin, and our findings are more in line with the reported global greening, which shows a browning mostly outside the Congo Basin.
In eastern Australia we find a strong wetness and greenness trend, which reflects the Australian “millennium drought”, ongoing since the mid-1990s and peaking in 2002, followed by the extreme floods of 2010–2011.
Large parts of the Indian subcontinent show a trend towards higher productivity and an overall wetter climate. The greening trend in India happens mostly over irrigated cropland. However, browning trends over natural vegetation have been observed but do not emerge in our analysis. A very notable greening and wetness trend can be observed in Myanmar due to an increase in intense rainfall events and storms, although the central part experienced some strong droughts at the same time. In Myanmar we also find one of the strongest trends in PC3 outside of the Arctic.
In large parts of the Arctic, a trend towards higher productivity can be observed. Vegetation models attribute this general increase in productivity to CO2 fertilization and climate change. These changes also alter the characteristics of the seasonal cycles; for instance, a decreased seasonal amplitude of surface temperature over northern latitudes has been reported as a result of winter warming.
The seasonal amplitude of atmospheric CO2 concentrations has been increasing because climate change causes longer growing seasons and changing vegetation cover in northern ecosystems. Therefore we checked for trends in the seasonal amplitude of the components, but because each time series only consists of 11 values (one amplitude per year), we could not find significant slopes after adjusting the p values for the false discovery rate. However, there were many significant slopes with the unadjusted p values; see the appendix, Fig. E1.
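Under the same assumptions as the previous sketch, the amplitude trend test reduces to one value per year, which illustrates why so few pixels remain significant after the false-discovery-rate adjustment.

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau

rng = np.random.default_rng(4)
pc1 = rng.standard_normal((11, 46))               # synthetic series at one pixel

amplitude = pc1.max(axis=1) - pc1.min(axis=1)     # one amplitude per year
years = np.arange(11)

slope, _, _, _ = theilslopes(amplitude, years)
_, p = kendalltau(years, amplitude)
# With only 11 points per pixel, few p values survive a false-discovery-rate
# adjustment across all pixels (cf. Fig. E1).
```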
Another way to detect changes to the biosphere consists in the detection of breakpoints, which has been applied successfully to detect changes in global normalized difference vegetation index (NDVI) time series or, more generally, to detect changes in time series. A proof-of-concept analysis can be found in Fig. F1. We hope that applying this method to the indicators instead of single variables can detect a wider range of breakpoints while analyzing only a single time series.
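As a purely illustrative sketch, a single breakpoint can be located by searching for the split that best separates a series into two constant-mean segments; this is a crude stand-in for illustration and not the structural-change tests of the literature listed in the references (e.g. Brown et al., 1975; Kuan and Hornik, 1995).

```python
import numpy as np

def largest_breakpoint(x, min_seg=23):
    """Index of the split that best separates x into two constant-mean
    segments (a crude single-breakpoint search for illustration only)."""
    x = np.asarray(x, dtype=float)
    best_i, best_cost = None, np.inf
    for i in range(min_seg, x.size - min_seg):
        cost = ((x[:i] - x[:i].mean()) ** 2).sum() \
             + ((x[i:] - x[i:].mean()) ** 2).sum()
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# Synthetic indicator series with a level shift (e.g. after deforestation).
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 406)])
print(largest_breakpoint(x))   # close to index 100
```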
## 3.9 Relations to other PCA-type analyses
One of the most popular applications of PCA in meteorology is the calculation of EOFs, which typically applies PCA to a single variable, i.e., to a dataset with the dimensions $\mathrm{lat}\times\mathrm{long}\times\mathrm{time}$, although EOFs can be calculated from multiple variables. EOFs can be calculated in S mode and T mode. If we matricize our data cube so that we have time in rows and $\mathrm{lat}\times\mathrm{long}\times\mathrm{variables}$ in columns, then S mode PCA works on the correlation matrix of the combined variable and space dimension. In T mode, the PCA works on the correlation matrix formed by the time dimension (Wilks, 2011). The PCA presented here works slightly differently: (1) we performed a different matricization ($\mathrm{lat}\times\mathrm{long}\times\mathrm{time}$ in rows and variables in columns), and (2) the PCA works on the correlation matrix formed by the variables. Therefore in this framework we could call this a V mode PCA.
Ecological analyses usually use PCA with matrices of the shape objects × descriptors. When the PCA is calculated on the correlation matrix formed by the objects, it is called a Q mode analysis; when the PCA is applied to the correlation matrix formed by the descriptors (variables), it is called an R mode analysis. The PCA carried out in this study is closest to an R mode analysis. In the present case the descriptors are the various data streams and the objects are the spatiotemporal pixels.
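To make the “V mode” matricization described above concrete, the following sketch reshapes a hypothetical data cube and runs a PCA on the correlation matrix (i.e., on z-scored variables); the cube dimensions are invented, and the area weighting and gap handling used in the study are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data cube: (time, lat, lon, variable).
rng = np.random.default_rng(6)
cube = rng.standard_normal((46, 18, 36, 12))

# "V mode": matricize to (time * lat * lon) rows and variable columns ...
X = cube.reshape(-1, cube.shape[-1])

# ... then run the PCA on the correlation matrix, i.e. on z-scored variables.
Xz = StandardScaler().fit_transform(X)
pca = PCA()
scores = pca.fit_transform(Xz)              # indicator values per space-time pixel
explained = pca.explained_variance_ratio_   # variance captured by each indicator
```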
Using PCA as a method for dimensionality reduction means that we are assuming linear relations among features. A nonlinear method could possibly be more efficient in reducing the number of variables but would also have significant disadvantages. In particular, nonlinear methods typically require tuning specific parameters, objective criteria are often lacking, a proper weighting of observations is difficult, the methods are often not reversible, and it is harder to interpret the resulting indicators due to their nonlinear nature. The salient feature of PCA is that an inverse projection is well defined and allows for a deeper inspection of the errors, which is not the case for nonlinear methods, which learn a highly flexible transformation that is hard to invert. Therefore interpretability of the transform in meaningful physical units in the input space is often not possible. In the machine-learning community, this problem is known as the “pre-imaging problem” and is a matter of current research.
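The invertibility argument can likewise be illustrated with a short sketch on hypothetical data: the variables are reconstructed from only the leading components by the inverse linear map, and the reconstruction error remains interpretable in the (standardized) units of the inputs, which is the property that nonlinear methods with a pre-imaging problem typically lack.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
X = rng.standard_normal((5000, 12))   # hypothetical standardized data matrix

pca = PCA().fit(X)
scores = pca.transform(X)

# Keep only the first three components and map back to variable space.
k = 3
X_rec = scores[:, :k] @ pca.components_[:k, :] + pca.mean_

# Per-variable reconstruction error, still expressed in the (standardized)
# units of the input variables.
rmse = np.sqrt(((X - X_rec) ** 2).mean(axis=0))
```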
## 4 Conclusions
To monitor the complexity of the changes occurring in times of an increasing human impact on the environment, we used PCA to construct indicators from a large number of data streams that track ecosystem state in space and time on a global scale. We showed that a large part of the variability of the terrestrial biosphere can be summarized using three indicators. The first emerging indicator represents carbon exchange, the second indicator shows the availability of water in the ecosystem, while the third indicator mostly represents a binary variable that indicates the presence of snow cover. The distribution in the space of the first two principal components reflects the general limitations of ecosystem productivity. Ecosystem production can be limited by either water or energy.
The first three indicators can detect many well-known phenomena without analyzing variables separately due to their compound nature. We showed that the indicators are capable of detecting seasonal hysteresis effects in ecosystems, as well as breakpoints, e.g. large-scale deforestation. The indicators can also track other changes to the seasonal cycle such as patterns of changes to the seasonal amplitudes and trends in ecosystems. Deviations from the mean seasonal cycle of the trajectories indicate extreme events such as the large-scale droughts in the Amazon during 2005 and 2010 and the Russian heat wave of 2010. The events are detected in a similar fashion as with classical multivariate anomaly detection methods while directly providing information on the underlying variables.
Using multivariate indicators, we gain a high-level overview of phenomena in ecosystems, and the method therefore provides an interesting tool for analyses where it is required to capture a wide range of phenomena which are not necessarily known a priori. Future research should consider nonlinearities, adding data streams describing other important biosphere variables (e.g. related to biodiversity and habitat quality), and including different subsystems, such as the atmosphere or the anthroposphere.
Appendix A: Description of variables
Variables used describing the biosphere can be found in Table 1. Here we provide a more complete description of all variables.
Black-sky albedo is the reflected fraction of total incoming radiation under direct hemispherical reflectance, i.e., direct illumination. This dataset is the broadband surface albedo including the visible, the near-infrared, and the shortwave-infrared spectrum (400–3000 nm). It is derived from the SPOT4-VEGETATION, SPOT5-VEGETATION2, and MERIS satellite sensors.
White-sky albedo is the reflected fraction of total incoming radiation under bihemispherical reflectance, i.e., diffuse illumination. Together with black-sky albedo it can be used to estimate the albedo under different illumination conditions. This dataset is the broadband surface albedo including the visible, the near-infrared, and the shortwave-infrared spectrum (400–3000 nm). It is derived from the SPOT4-VEGETATION, SPOT5-VEGETATION2, and MERIS satellite sensors.
Evaporation (mm d−1) is the amount of water evaporated per day, depending on the amount of available water and energy. This dataset is based on the GLEAMv3 model, using satellite data from ESA CCI and SMOS to derive a number of variables.
Evaporative stress is the modeled water stress for plants: 0 means that the vegetation has no water available for transpiration and 1 means that transpiration equals potential transpiration. This dataset is based on the GLEAMv3 model, using satellite data from ESA CCI and SMOS to derive a number of variables.
fAPAR is the fraction of absorbed photosynthetically active radiation, a proxy for plant productivity. This dataset is based on the GlobAlbedo dataset (http://globalbedo.org, last access: 23 April 2020) and the MODIS fAPAR and leaf area index (LAI) products.
Gross primary productivity (GPP; gC m−2 d−1) is the total amount of carbon fixed by photosynthesis. This dataset is derived from upscaling eddy covariance tower observations to a global scale using machine-learning methods.
Terrestrial ecosystem respiration (TER; gC m−2 d−1) is the total amount of carbon respired by the ecosystem, including autotrophic and heterotrophic respiration. This dataset is derived from upscaling eddy covariance tower observations to a global scale using machine-learning methods.
Net ecosystem exchange (NEE; gC m−2 d−1) is the total exchange of carbon of the ecosystem with the atmosphere, $\mathrm{NEE}=\mathrm{GPP}-\mathrm{TER}$. This dataset is derived from upscaling eddy covariance tower observations to a global scale using machine-learning methods.
Latent energy (LE; W m−2) is the amount of energy lost by the surface due to evaporation. This dataset is derived from upscaling eddy covariance tower observations to a global scale using machine-learning methods.
Sensible heat (H; W m−2) is the amount of energy lost by the surface due to direct heating of the air. This dataset is derived from upscaling eddy covariance tower observations to a global scale using machine-learning methods.
Root-zone soil moisture (m3 m−3) is the moisture content of the root zone. This dataset is based on the GLEAMv3 model, using satellite data from ESA CCI and SMOS to derive a number of variables.
Surface soil moisture (mm3 mm−3) is the soil moisture content at the soil surface. This dataset is based on the GLEAMv3 model, using satellite data from ESA CCI and SMOS to derive a number of variables.
Appendix B: Time–space patterns of Components 1–3
Figure B1. Time and space patterns of PC1–PC3, where the cut points are the same as in Fig. 8. The brown–green contrast shows the state of PC1, from low to high productivity. The blue–red contrast shows the state of PC2, from cold to dry. The brown–purple contrast shows the state of PC3, from dark to light. Panels (a), (e), and (i) are maps showing the state of PC1–PC3, respectively, on 1 January 2001. Panels (b), (c), and (d) show longitudinal cuts of PC1–PC3, respectively, at the red vertical line in (a). Panels (f), (g), and (h) show longitudinal cuts of PC1–PC3, respectively, at the red vertical line in (e). Panels (j), (k), and (l) show longitudinal cuts of PC1–PC3, respectively, at the red vertical line in (i).
Appendix C: Mean seasonal cycle extrema
Figure C1. The minimum (a, c, e) and maximum (b, d, f) mean seasonal cycles of GPP (a, b), latent heat (c, d), and sensible heat (e, f). This illustrates the similarity of possibly very different ecosystems in terms of productivity and limitations. During peak growing season, many midlatitude areas have a similar productivity and latent energy release as tropical rain forests (b, d). The highest maximum seasonal sensible heat loss can be found in dry areas around the world and is lowest in areas with a wet climate such as tropical rain forests and maritime climates (f).
Appendix D: Spatial covariances of the components
Figure D1. Pairwise covariances of the mean seasonal cycles of the first three principal components by space. (a) cov(PC1, PC2), (b) cov(PC1, PC3), and (c) cov(PC2, PC3). The bar charts show the distribution of the covariances. It can be seen that although two principal components are globally uncorrelated by construction, they covary locally.
Appendix E: Changes in the seasonal amplitude
Figure E1. Trends in the amplitude of the yearly cycle, 2001–2011. Only Theil–Sen estimators for significant slopes (p<0.05, unadjusted) are shown. Because there is only a single amplitude per year and therefore only 11 data points per time series, the Benjamini–Hochberg-adjusted p values are not significant.
Appendix F: Breakpoints in trajectories
Figure F1. Breakpoint detection, (a) on PC1, (b) on PC2, and (c) on PC3. The color indicates the year of the biggest breakpoint if a significant breakpoint was found, and gray indicates that no significant breakpoint was found.
As the environmental conditions change, due to climate change and human intervention, the local ecosystems may change gradually or abruptly. Detecting these changes is very important for monitoring the impact of climate change and land use change on the ecosystems. We applied breakpoint detection to the trajectories (Fig. F1).
Breakpoints on the first component were found in the entire Amazon, and the largest breakpoint is dated to the year 2005 during the large drought event. The entire eastern part of Australia shows its largest breakpoint towards the end of the time series because of a La Niña event, which caused lower temperatures and higher rainfall than usual during the years 2010 and 2011.
Code and data availability
The data are available and can be processed at https://www.earthsystemdatalab.net/index.php/interact/data-lab/ (last access: 30 March 2020). The exact dataset and a docker container to reproduce the analysis can be found under https://doi.org/10.5281/zenodo.3733766. The code to reproduce this analysis is available under https://doi.org/10.5281/zenodo.3733783 (Kraemer, 2020) and at https://github.com/gdkrmr/summarizing_the_state_of_the_biosphere (last access: 23 April 2020).
Author contributions
GK and MDM designed the study in collaboration with MR and GCV. GK conducted the analysis and wrote the manuscript with contributions from all co-authors.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
We thank Fabian Gans and German Poveda for useful discussions. We thank Jake Nelson for proofreading a previous version of the manuscript. We thank Gregory Duveiller and the three anonymous reviewers for very helpful suggestions and Kirsten Thonicke for editorial advice that improved the manuscript greatly.
Financial support
This study is funded by the Earth System Data Lab – a project by the European Space Agency. Miguel D. Mahecha and Markus Reichstein have been supported by the Horizon 2020 EU project BACI under grant agreement no. 640176. Gustau Camps-Valls' work has been supported by the EU under the ERC consolidator grant SEDAL-647423.
The article processing charges for this open-access publication were covered by the Max Planck Society.
Review statement
This paper was edited by Kirsten Thonicke and reviewed by Gregory Duveiller and three anonymous referees.
References
Abatzoglou, J. T., Rupp, D. E., and Mote, P. W.: Seasonal Climate Variability and Change in the Pacific Northwest of the United States, J. Clim., 27, 2125–2142, https://doi.org/10.1175/JCLI-D-13-00218.1, 2014. a
Anav, A., Friedlingstein, P., Beer, C., Ciais, P., Harper, A., Jones, C., Murray-Tortarolo, G., Papale, D., Parazoo, N. C., Peylin, P., Piao, S., Sitch, S., Viovy, N., Wiltshire, A., and Zhao, M.: Spatiotemporal patterns of terrestrial gross primary production: A review: GPP Spatiotemporal Patterns, Rev. Geophys., 53, 785–818, https://doi.org/10.1002/2015RG000483, 2015. a, b
Aragão, L. E. O. C., Anderson, L. O., Fonseca, M. G., Rosan, T. M., Vedovato, L. B., Wagner, F. H., Silva, C. V. J., Silva Junior, C. H. L., Arai, E., Aguiar, A. P., Barlow, J., Berenguer, E., Deeter, M. N., Domingues, L. G., Gatti, L., Gloor, M., Malhi, Y., Marengo, J. A., Miller, J. B., Phillips, O. L., and Saatchi, S.: 21st Century Drought-Related Fires Counteract the Decline of Amazon Deforestation Carbon Emissions, Nat. Commun., 9, 146–149, https://doi.org/10.1038/s41467-017-02771-y, 2018. a
Ardisson, P.-L., Bourget, E., and Legendre, P.: Multivariate Approach to Study Species Assemblages at Large Spatiotemporal Scales: The Community Structure of the Epibenthic Fauna of the Estuary and Gulf of St. Lawrence, Can. J. Fish. Aquat. Sci., 47, 1364–1377, https://doi.org/10.1139/f90-156, 1990. a
Arenas-Garcia, J., Petersen, K. B., Camps-Valls, G., and Hansen, L. K.: Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods, IEEE Signal Processing Magazine, 30, 16–29, https://doi.org/10.1109/MSP.2013.2250591, 2013. a
Babst, F., Poulter, B., Bodesheim, P., Mahecha, M. D., and Frank, D. C.: Improved tree-ring archives will support earth-system science, Nat. Ecol. Evol., 1, 1–2, 2017. a
Baldocchi, D. D.: How Eddy Covariance Flux Measurements Have Contributed to Our Understanding of Global Change Biology, Glob. Change Biol., 26, 242–260, https://doi.org/10.1111/gcb.14807, 2020. a, b
Barriopedro, D., Fischer, E. M., Luterbacher, J., Trigo, R. M., and García-Herrera, R.: The Hot Summer of 2010: Redrawing the Temperature Record Map of Europe, Science, 332, 220–224, https://doi.org/10.1126/science.1201224, 2011. a
Beisner, B., Haydon, D., and Cuddington, K.: Alternative Stable States in Ecology, Front. Ecol. Environ., 1, 376–382, https://doi.org/10.1890/1540-9295(2003)001[0376:ASSIE]2.0.CO;2, 2003. a
Benjamini, Y. and Hochberg, Y.: Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing, J. Roy. Stat. Soc. B, 57, 289–300, 1995. a
Berger, M., Moreno, J., Johannessen, J. A., Levelt, P. F., and Hanssen, R. F.: ESA's Sentinel Missions in Support of Earth System Science, Remote Sens. Environ., 120, 84–90, https://doi.org/10.1016/j.rse.2011.07.023, 2012. a
Blonder, B., Moulton, D. E., Blois, J., Enquist, B. J., Graae, B. J., Macias-Fauria, M., McGill, B., Nogué, S., Ordonez, A., Sandel, B., and Svenning, J.-C.: Predictability in Community Dynamics, Ecol. Lett., 20, 293–306, https://doi.org/10.1111/ele.12736, 2017. a
Bowen, I. S.: The Ratio of Heat Losses by Conduction and by Evaporation from Any Water Surface, Phys. Rev., 27, 779–787, https://doi.org/10.1103/PhysRev.27.779, 1926. a, b
Brown, R. L., Durbin, J., and Evans, J. M.: Techniques for Testing the Constancy of Regression Relationships over Time, J. Roy. Stat. Soc. B, 37, 149–192, 1975. a
Cattell, R. B.: The Scree Test For The Number Of Factors, Multivar. Behav. Res., 1, 245–276, https://doi.org/10.1207/s15327906mbr0102_10, 1966. a
Chapin, F. S., Woodwell, G. M., Randerson, J. T., Rastetter, E. B., Lovett, G. M., Baldocchi, D. D., Clark, D. A., Harmon, M. E., Schimel, D. S., Valentini, R., Wirth, C., Aber, J. D., Cole, J. J., Goulden, M. L., Harden, J. W., Heimann, M., Howarth, R. W., Matson, P. A., McGuire, A. D., Melillo, J. M., Mooney, H. A., Neff, J. C., Houghton, R. A., Pace, M. L., Ryan, M. G., Running, S. W., Sala, O. E., Schlesinger, W. H., and Schulze, E.-D.: Reconciling Carbon-Cycle Concepts, Terminology, and Methods, Ecosystems, 9, 1041–1050, https://doi.org/10.1007/s10021-005-0105-7, 2006. a
Chen, C., Park, T., Wang, X., Piao, S., Xu, B., Chaturvedi, R. K., Fuchs, R., Brovkin, V., Ciais, P., Fensholt, R., Tømmervik, H., Bala, G., Zhu, Z., Nemani, R. R., and Myneni, R. B.: China and India Lead in Greening of the World through Land-Use Management, Nature Sustainability, 2, 122–129, https://doi.org/10.1038/s41893-019-0220-7, 2019. a
Ciais, P., Reichstein, M., Viovy, N., Granier, A., Ogée, J., Allard, V., Aubinet, M., Buchmann, N., Bernhofer, C., Carrara, A., Chevallier, F., Noblet, N. D., Friend, A. D., Friedlingstein, P., Grünwald, T., Heinesch, B., Keronen, P., Knohl, A., Krinner, G., Loustau, D., Manca, G., Matteucci, G., Miglietta, F., Ourcival, J. M., Papale, D., Pilegaard, K., Rambal, S., Seufert, G., Soussana, J. F., Sanz, M. J., Schulze, E. D., Vesala, T., and Valentini, R.: Europe-Wide Reduction in Primary Productivity Caused by the Heat and Drought in 2003, Nature, 437, 529–533, https://doi.org/10.1038/nature03972, 2005. a
de Jong, R., de Bruin, S., de Wit, A., Schaepman, M. E., and Dent, D. L.: Analysis of Monotonic Greening and Browning Trends from Global NDVI Time-Series, Remote Sens. Environ., 115, 692–702, https://doi.org/10.1016/j.rse.2010.10.011, 2011. a, b
Díaz, S., Settele, J., Brondízio, E. S., Ngo, H. T., Agard, J., Arneth, A., Balvanera, P., Brauman, K. A., Butchart, S. H. M., Chan, K. M. A., Garibaldi, L. A., Ichii, K., Liu, J., Subramanian, S. M., Midgley, G. F., Miloslavich, P., Molnár, Z., Obura, D., Pfaff, A., Polasky, S., Purvis, A., Razzaque, J., Reyers, B., Chowdhury, R. R., Shin, Y.-J., Visseren-Hamakers, I., Willis, K. J., and Zayas, C. N.: Pervasive Human-Driven Decline of Life on Earth Points to the Need for Transformative Change, Science, 366, 6471, https://doi.org/10.1126/science.aax3100, 2019. a
Disney, M., Muller, J.-P., Kharbouche, S., Kaminski, T., Voßbeck, M., Lewis, P., and Pinty, B.: A New Global fAPAR and LAI Dataset Derived from Optimal Albedo Estimates: Comparison with MODIS Products, Remote Sens., 8, 1–29, https://doi.org/10.3390/rs8040275, 2016. a, b
Doughty, C. E., Metcalfe, D. B., Girardin, C. a. J., Amézquita, F. F., Cabrera, D. G., Huasco, W. H., Silva-Espejo, J. E., Araujo-Murakami, A., da Costa, M. C., Rocha, W., Feldpausch, T. R., Mendoza, A. L. M., da Costa, A. C. L., Meir, P., Phillips, O. L., and Malhi, Y.: Drought impact on forest carbon dynamics and fluxes in Amazonia, Nature, 519, 78–82, https://doi.org/10.1038/nature14213, 2015. a
Feldpausch, T. R., Phillips, O. L., Brienen, R. J. W., Gloor, E., Lloyd, J., Lopez-Gonzalez, G., Monteagudo-Mendoza, A., Malhi, Y., Alarcón, A., Dávila, E. A., Alvarez-Loayza, P., Andrade, A., Aragao, L. E. O. C., Arroyo, L., C, G. A. A., Baker, T. R., Baraloto, C., Barroso, J., Bonal, D., Castro, W., Chama, V., Chave, J., Domingues, T. F., Fauset, S., Groot, N., Coronado, E. H., Laurance, S., Laurance, W. F., Lewis, S. L., Licona, J. C., Marimon, B. S., Marimon-Junior, B. H., Bautista, C. M., Neill, D. A., Oliveira, E. A., dos Santos, C. O., Camacho, N. C. P., Pardo-Molina, G., Prieto, A., Quesada, C. A., Ramírez, F., Ramírez-Angulo, H., Réjou-Méchain, M., Rudas, A., Saiz, G., Salomão, R. P., Silva-Espejo, J. E., Silveira, M., ter Steege, H., Stropp, J., Terborgh, J., Thomas-Caesar, R., van der Heijden, G. M. F., Martinez, R. V., Vilanova, E., and Vos, V. A.: Amazon Forest Response to Repeated Droughts, Global Biogeochem. Cy., 30, 964–982, https://doi.org/10.1002/2015GB005133, 2016. a
Flach, M., Gans, F., Brenning, A., Denzler, J., Reichstein, M., Rodner, E., Bathiany, S., Bodesheim, P., Guanche, Y., Sippel, S., and Mahecha, M. D.: Multivariate anomaly detection for Earth observations: a comparison of algorithms and feature extraction techniques, Earth Syst. Dynam., 8, 677–696, https://doi.org/10.5194/esd-8-677-2017, 2017. a
Flach, M., Sippel, S., Gans, F., Bastos, A., Brenning, A., Reichstein, M., and Mahecha, M. D.: Contrasting biosphere responses to hydrometeorological extremes: revisiting the 2010 western Russian heatwave, Biogeosciences, 15, 6067–6085, https://doi.org/10.5194/bg-15-6067-2018, 2018. a, b, c, d
Folke, C., Carpenter, S., Walker, B., Scheffer, M., Elmqvist, T., Gunderson, L., and Holling, C.: Regime Shifts, Resilience, and Biodiversity in Ecosystem Management, Annu. Rev. Ecol. Evol. S., 35, 557–581, https://doi.org/10.1146/annurev.ecolsys.35.021103.105711, 2004. a
Forkel, M., Carvalhais, N., Verbesselt, J., Mahecha, M., Neigh, C., Reichstein, M., Forkel, M., Carvalhais, N., Verbesselt, J., Mahecha, M. D., Neigh, C. S. R., and Reichstein, M.: Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology, Remote Sens., 5, 2113–2144, https://doi.org/10.3390/rs5052113, 2013. a
Forkel, M., Migliavacca, M., Thonicke, K., Reichstein, M., Schaphoff, S., Weber, U., and Carvalhais, N.: Codominant Water Control on Global Interannual Variability and Trends in Land Surface Phenology and Greenness, Glob. Change Biol., 21, 3414–3435, https://doi.org/10.1111/gcb.12950, 2015. a
Forkel, M., Carvalhais, N., Rodenbeck, C., Keeling, R., Heimann, M., Thonicke, K., Zaehle, S., and Reichstein, M.: Enhanced Seasonal CO2 Exchange Caused by Amplified Plant Productivity in Northern Ecosystems, Science, 351, 696–699, https://doi.org/10.1126/science.aac4971, 2016. a, b
Foster, A. C., Armstrong, A. H., Shuman, J. K., Shugart, H. H., Rogers, B. M., Mack, M. C., Goetz, S. J., and Ranson, K. J.: Importance of Tree- and Species-Level Interactions with Wildfire, Climate, and Soils in Interior Alaska: Implications for Forest Change under a Warming Climate, Ecol. Modell., 409, 108765, https://doi.org/10.1016/j.ecolmodel.2019.108765, 2019. a
García-Herrera, R., Díaz, J., Trigo, R. M., Luterbacher, J., and Fischer, E. M.: A Review of the European Summer Heat Wave of 2003, Crit. Rev. Env. Sci. Tec., 40, 267–306, https://doi.org/10.1080/10643380802238137, 2010. a
Gaumont-Guay, D., Black, T. A., Griffis, T. J., Barr, A. G., Jassal, R. S., and Nesic, Z.: Interpreting the Dependence of Soil Respiration on Soil Temperature and Water Content in a Boreal Aspen Stand, Agr. Forest Meteorol., 140, 220–235, https://doi.org/10.1016/j.agrformet.2006.08.003, 2006. a
Graven, H. D., Keeling, R. F., Piper, S. C., Patra, P. K., Stephens, B. B., Wofsy, S. C., Welp, L. R., Sweeney, C., Tans, P. P., Kelley, J. J., Daube, B. C., Kort, E. A., Santoni, G. W., and Bent, J. D.: Enhanced Seasonal Exchange of CO2 by Northern Ecosystems Since 1960, Science, 341, 1085–1089, https://doi.org/10.1126/science.1239207, 2013. a
Hao, Z., Zheng, J., Ge, Q., and Wang, W.: Historical Analogues of the 2008 Extreme Snow Event over Central and Southern China, Clim. Res., 50, 161–170, https://doi.org/10.3354/cr01052, 2011. a
Hendon, H. H., Lim, E.-P., Arblaster, J. M., and Anderson, D. L. T.: Causes and Predictability of the Record Wet East Australian Spring 2010, Clim. Dynam., 42, 1155–1174, https://doi.org/10.1007/s00382-013-1700-5, 2014. a
Higham, N. J.: The Accuracy of Floating Point Summation, SIAM J. Sci. Comput., 14, 783–799, https://doi.org/10.1137/0914050, 1993. a
Horridge, M., Madden, J., and Wittwer, G.: The Impact of the 2002–2003 Drought on Australia, J. Policy Model., 27, 285–308, https://doi.org/10.1016/j.jpolmod.2005.01.008, 2005. a
Huang, K., Xia, J., Wang, Y., Ahlström, A., Chen, J., Cook, R. B., Cui, E., Fang, Y., Fisher, J. B., Huntzinger, D. N., Li, Z., Michalak, A. M., Qiao, Y., Schaefer, K., Schwalm, C., Wang, J., Wei, Y., Xu, X., Yan, L., Bian, C., and Luo, Y.: Enhanced Peak Growth of Global Vegetation and Its Key Mechanisms, Nat. Ecol. Evol., 2, 1897–1905, https://doi.org/10.1038/s41559-018-0714-0, 2018. a
IPBES: Summary for Policymakers of the Global Assessment Report on Biodiversity and Ecosystem Services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, Summary for policymakers, IPBES, 39 pp., 2019. a
IPCC: Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Tech. rep., IPCC, Geneva, Swizerland, 2014. a
Ivits, E., Horion, S., Fensholt, R., and Cherlet, M.: Drought Footprint on European Ecosystems between 1999 and 2010 Assessed by Remotely Sensed Vegetation Phenology and Productivity, Glob. Change Biol., 20, 581–593, https://doi.org/10.1111/gcb.12393, 2014. a
Jolly, W. M., Cochrane, M. A., Freeborn, P. H., Holden, Z. A., Brown, T. J., Williamson, G. J., and Bowman, D. M. J. S.: Climate-Induced Variations in Global Wildfire Danger from 1979 to 2013, Nat. Commun., 6, 7537, https://doi.org/10.1038/ncomms8537, 2015. a
Jung, M., Koirala, S., Weber, U., Ichii, K., Gans, F., Camps-Valls, G., Papale, D., Schwalm, C., Tramontana, G., and Reichstein, M.: The FLUXCOM Ensemble of Global Land-Atmosphere Energy Fluxes, Sci. Data, 6, 1–14, https://doi.org/10.1038/s41597-019-0076-8, 2019. a, b, c, d, e, f, g
Keeling, C. D., Chin, J. F. S., and Whorf, T. P.: Increased Activity of Northern Vegetation Inferred from AtmosphericCO2 Measurements, Nature, 382, 146–149, https://doi.org/10.1038/382146a0, 1996. a
Kendall, M. G.: Rank Correlation Methods, Griffin, London, 202 pp., 1970. a
Khanna, J., Medvigy, D., Fueglistaler, S., and Walko, R.: Regional dry-season climate changes due to three decades of Amazonian deforestation, Nat. Clim. Change, 7, 200–204, https://doi.org/10.1038/nclimate3226, 2017. a
Kottek, M., Grieser, J., Beck, C., Rudolf, B., and Rubel, F.: World Map of the Köppen-Geiger climate classification updated, Meteorol. Z., 15, 259–263, https://doi.org/10.1127/0941-2948/2006/0130, 2006. a
Kraemer, G.: gdkrmr/summarizing_the_state_of_the_biosphere v1.1.1, Zenodo, https://doi.org/10.5281/zenodo.3733783, 2020. a
Kraemer, G., Reichstein, M., and Mahecha, M. D.: dimRed and coRanking – Unifying Dimensionality Reduction in R, R J., 10, 342–358, https://doi.org/10.32614/RJ-2018-039, 2018. a, b
Kraemer, G., Camps-Valls, G., Reichstein, M., and Mahecha, M. D.: Summarizing the state of the terrestrial biosphere in few dimensions, Zenodo, https://doi.org/10.5281/zenodo.3733766, 2020. a
Kuan, C.-M. and Hornik, K.: The Generalized Fluctuation Test: A Unifying View, Economet. Rev., 14, 135–161, https://doi.org/10.1080/07474939508800311, 1995. a
Legendre, P. and Legendre, L.: Numerical Ecology: Second English Edition, Dev. Environ. Model., 20, 852 pp., 1998. a, b
Legendre, P., Planas, D., and Auclair, M.-J.: Succession des communautés de gastéropodes dans deux milieux différant par leur degré d'eutrophisation, Can. J. Zool., 62, 2317–2327, https://doi.org/10.1139/z84-339, 1984. a
Lenton, T. M., Held, H., Kriegler, E., Hall, J. W., Lucht, W., Rahmstorf, S., and Schellnhuber, H. J.: Tipping Elements in the Earth's Climate System, P. Natl. Acad. Sci. USA, 105, 1786–1793, https://doi.org/10.1073/pnas.0705414105, 2008. a
Mahecha, M. D., Martínez, A., Lischeid, G., and Beck, E.: Nonlinear Dimensionality Reduction: Alternative Ordination Approaches for Extracting and Visualizing Biodiversity Patterns in Tropical Montane Forest Vegetation Data, Ecol. Inform., 2, 138–149, https://doi.org/10.1016/j.ecoinf.2007.05.002, 2007a. a, b
Mahecha, M. D., Reichstein, M., Lange, H., Carvalhais, N., Bernhofer, C., Grünwald, T., Papale, D., and Seufert, G.: Characterizing Ecosystem-Atmosphere Interactions from Short to Interannual Time Scales, Biogeosciences, 4, 743–758, https://doi.org/10.5194/bg-4-743-2007, 2007b. a
Mahecha, M. D., Gans, F., Sippel, S., Donges, J. F., Kaminski, T., Metzger, S., Migliavacca, M., Papale, D., Rammig, A., and Zscheischler, J.: Detecting Impacts of Extreme Events with Ecological in Situ Monitoring Networks, Biogeosciences, 14, 4255–4277, https://doi.org/10.5194/bg-14-4255-2017, 2017. a
Mahecha, M. D., Gans, F., Brandt, G., Christiansen, R., Cornell, S. E., Fomferra, N., Kraemer, G., Peters, J., Bodesheim, P., Camps-Valls, G., Donges, J. F., Dorigo, W., Estupinan-Suarez, L. M., Gutierrez-Velez, V. H., Gutwin, M., Jung, M., Londoño, M. C., Miralles, D. G., Papastefanou, P., and Reichstein, M.: Earth system data cubes unravel global multivariate dynamics, Earth Syst. Dynam., 11, 201–234, https://doi.org/10.5194/esd-11-201-2020, 2020. a
Mann, H. B.: Nonparametric Tests Against Trend, Econometrica, 13, 245–259, https://doi.org/10.2307/1907187, 1945. a
Martens, B., Miralles, D. G., Lievens, H., Schalie, R. v. d., Jeu, R. A. M. d., Fernández-Prieto, D., Beck, H. E., Dorigo, W. A., and Verhoest, N. E. C.: GLEAM v3: satellite-based land evaporation and root-zone soil moisture, Geosci. Model Dev., 10, 1903–1925, https://doi.org/10.5194/gmd-10-1903-2017, 2017. a, b, c, d, e, f, g, h, i
Metzger, M. J., Bunce, R. G. H., Jongman, R. H. G., Sayre, R., Trabucco, A., and Zomer, R.: A High-resolution Bioclimate Map of the World: A Unifying Framework for Global Biodiversity Research and Monitoring, Glob. Ecol. Biogeogr., 22, 630–638, https://doi.org/10.1111/geb.12022, 2013. a
Mika, S., Scholkopf, B., Smola, A., Muller, K., Scholz, M., and Ratsch, G.: Kernel PCA and De-Noising in Feature Spaces, in: Advances In Neural Information Processing Systems, edited by: Kearns, M. S., Solla, S. A., and Cohn, D. A., Vol. 11 of Advances in Neural Information Processing Systems, 12th Annual Conference on Neural Information Processing Systems (NIPS), Denver, CO, 30 November–5 December 1998, 536–542, 1999. a
Miralles, D. G., Teuling, A. J., van Heerwaarden, C. C., and Vilà-Guerau de Arellano, J.: Mega-heatwave temperatures due to combined soil desiccation and atmospheric heat accumulation, Nat. Geosci., 7, 345–349, https://doi.org/10.1038/ngeo2141, 2014. a
Muller, J.-P., Lewis, P., Fischer, J., North, P., and Framer, U.: The ESA GlobAlbedo Project for mapping the Earth’s land surface albedo for 15 years from European sensors, Geophys. Res. Abstr., 13, EGU2011-10969, 2011. a, b, c, d
Najafi, E., Pal, I., and Khanbilvardi, R.: Climate Drives Variability and Joint Variability of Global Crop Yields, Sci. Total Environ., 662, 361–372, https://doi.org/10.1016/j.scitotenv.2019.01.172, 2019. a
Nasahara, K. N. and Nagai, S.: Review: Development of an in Situ Observation Network for Terrestrial Ecological Remote Sensing: The Phenological Eyes Network (PEN), Ecol. Res., 30, 211–223, https://doi.org/10.1007/s11284-014-1239-x, 2015. a
Nicholls, N.: The Changing Nature of Australian Droughts, Climatic Change, 63, 323–336, https://doi.org/10.1023/B:CLIM.0000018515.46344.6d, 2004. a
Nicholson, S. E.: A detailed look at the recent drought situation in the Greater Horn of Africa, J. Arid Environ., 103, 71–79, https://doi.org/10.1016/j.jaridenv.2013.12.003, 2014. a
Papale, D., Black, T. A., Carvalhais, N., Cescatti, A., Chen, J., Jung, M., Kiely, G., Lasslop, G., Mahecha, M. D., Margolis, H., Merbold, L., Montagnani, L., Moors, E., Olesen, J. E., Reichstein, M., Tramontana, G., van Gorsel, E., Wohlfahrt, G., and Ráduly, B.: Effect of Spatial Sampling from European Flux Towers for Estimating Carbon and Water Fluxes with Artificial Neural Networks, J. Geophys. Res.-Biogeo., 120, 1941–1957, https://doi.org/10.1002/2015JG002997, 2015. a
Parmesan, C.: Ecological and Evolutionary Responses to Recent Climate Change, Ann. Rev. Ecol. Evol. S., 37, 637–669, https://doi.org/10.1146/annurev.ecolsys.37.091305.110100, 2006. a
Pearson, K.: On Lines and Planes of Closest Fit to Systems of Points in Space, Philos. Mag., 2, 559–572, 1901. a
Piao, S., Wang, X., Park, T., Chen, C., Lian, X., He, Y., Bjerke, J. W., Chen, A., Ciais, P., Tømmervik, H., Nemani, R. R., and Myneni, R. B.: Characteristics, drivers and feedbacks of global greening, Nat. Rev. Earth Environ., 1, 14–27, https://doi.org/10.1038/s43017-019-0001-x, 2019. a
Piao, S., Wang, X., Wang, K., Li, X., Bastos, A., Canadell, J. G., Ciais, P., Friedlingstein, P., and Sitch, S.: Interannual Variation of Terrestrial Carbon Cycle: Issues and Perspectives, Glob. Change Biol., 26, 300–318, https://doi.org/10.1111/gcb.14884, 2020. a
Rao, M., Saw Htun, Platt, S. G., Tizard, R., Poole, C., Than Myint, and Watson, J. E. M.: Biodiversity Conservation in a Changing Climate: A Review of Threats and Implications for Conservation Planning in Myanmar, AMBIO, 42, 789–804, https://doi.org/10.1007/s13280-013-0423-5, 2013. a
Reichstein, M., Bahn, M., Ciais, P., Frank, D., Mahecha, M. D., Seneviratne, S. I., Zscheischler, J., Beer, C., Buchmann, N., Frank, D. C., Papale, D., Rammig, A., Smith, P., Thonicke, K., van der Velde, M., Vicca, S., Walz, A., and Wattenbach, M.: Climate extremes and the carbon cycle, Nature, 500, 287–295, https://doi.org/10.1038/nature12350, 2013. a
Renner, M., Brenner, C., Mallick, K., Wizemann, H.-D., Conte, L., Trebs, I., Wei, J., Wulfmeyer, V., Schulz, K., and Kleidon, A.: Using Phase Lags to Evaluate Model Biases in Simulating the Diurnal Cycle of Evapotranspiration: A Case Study in Luxembourg, Hydrol. Earth Syst. Sci., 23, 515–535, https://doi.org/10.5194/hess-23-515-2019, 2019. a
Richardson, A. D., Braswell, B. H., Hollinger, D. Y., Burman, P., Davidson, E. A., Evans, R. S., Flanagan, L. B., Munger, J. W., Savage, K., Urbanski, S. P., and Wofsy, S. C.: Comparing Simple Respiration Models for Eddy Flux and Dynamic Chamber Data, Agr. Forest Meteorol., 141, 219–234, https://doi.org/10.1016/j.agrformet.2006.10.010, 2006. a
Rosenfeld, D., Zhu, Y., Wang, M., Zheng, Y., Goren, T., and Yu, S.: Aerosol-Driven Droplet Concentrations Dominate Coverage and Water of Oceanic Low-Level Clouds, Science, 363, eaav0566, https://doi.org/10.1126/science.aav0566, 2019. a
Sarmah, S., Jia, G., and Zhang, A.: Satellite View of Seasonal Greenness Trends and Controls in South Asia, Environ. Res. Lett., 13, 034026, https://doi.org/10.1088/1748-9326/aaa866, 2018. a
Schimel, D. and Schneider, F. D.: Flux Towers in the Sky: Global Ecology from Space, New Phytol., 224, 570–584, https://doi.org/10.1111/nph.15934, 2019. a
Schwartz, M. D.: Monitoring Global Change with Phenology: The Case of the Spring Green Wave, Int. J. Biometeorol., 38, 18–22, https://doi.org/10.1007/BF01241799, 1994. a
Schwartz, M. D.: Green-Wave Phenology, Nature, 394, 839–840, https://doi.org/10.1038/29670, 1998. a, b
Sen, P. K.: Estimates of the Regression Coefficient Based on Kendall's Tau, J. Am. Stat. Assoc., 63, 1379–1389, https://doi.org/10.2307/2285891, 1968. a
Sippel, S., Reichstein, M., Ma, X., Mahecha, M. D., Lange, H., Flach, M., and Frank, D.: Drought, Heat, and the Carbon Cycle: A Review, Current Climate Change Reports, 4, 266–286, https://doi.org/10.1007/s40641-018-0103-4, 2018. a
Sitch, S., Friedlingstein, P., Gruber, N., Jones, S. D., Murray-Tortarolo, G., Ahlström, A., Doney, S. C., Graven, H., Heinze, C., Huntingford, C., Levis, S., Levy, P. E., Lomas, M., Poulter, B., Viovy, N., Zaehle, S., Zeng, N., Arneth, A., Bonan, G., Bopp, L., Canadell, J. G., Chevallier, F., Ciais, P., Ellis, R., Gloor, M., Peylin, P., Piao, S. L., Le Quéré, C., Smith, B., Zhu, Z., and Myneni, R.: Recent Trends and Drivers of Regional Sources and Sinks of Carbon Dioxide, Biogeosciences, 12, 653–679, https://doi.org/10.5194/bg-12-653-2015, 2015. a
Song, X.-P., Hansen, M. C., Stehman, S. V., Potapov, P. V., Tyukavina, A., Vermote, E. F., and Townshend, J. R.: Global Land Change from 1982 to 2016, Nature, 560, 639–643, https://doi.org/10.1038/s41586-018-0411-9, 2018. a, b
Steffen, W., Richardson, K., Rockström, J., Cornell, S. E., Fetzer, I., Bennett, E. M., Biggs, R., Carpenter, S. R., de Vries, W., de Wit, C. A., Folke, C., Gerten, D., Heinke, J., Mace, G. M., Persson, L. M., Ramanathan, V., Reyers, B., and Sörlin, S.: Planetary Boundaries: Guiding Human Development on a Changing Planet, Science, 347, 1259855-1–1259855-10, https://doi.org/10.1126/science.1259855, 2015. a
Stine, A. R., Huybers, P., and Fung, I. Y.: Changes in the Phase of the Annual Cycle of Surface Temperature, Nature, 457, 435–440, https://doi.org/10.1038/nature07675, 2009. a, b
Tang, J., Baldocchi, D. D., and Xu, L.: Tree Photosynthesis Modulates Soil Respiration on a Diurnal Time Scale, Glob. Change Biol., 11, 1298–1304, https://doi.org/10.1111/j.1365-2486.2005.00978.x, 2005. a
Theil, H.: A Rank-Invariant Method of Linear and Polynomial Regression Analysis, I, II, III, Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen, 53, 386–392, 1950a. a
Theil, H.: A Rank-Invariant Method of Linear and Polynomial Regression Analysis, I, II, III, Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen, 53, 521–525, 1950b. a
Theil, H.: A Rank-Invariant Method of Linear and Polynomial Regression Analysis, I, II, III, Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen, 53, 1397–1412, 1950c. a
Tramontana, G., Jung, M., Schwalm, C. R., Ichii, K., Camps-Valls, G., Ráduly, B., Reichstein, M., Arain, M. A., Cescatti, A., Kiely, G., Merbold, L., Serrano-Ortiz, P., Sickert, S., Wolf, S., and Papale, D.: Predicting carbon dioxide and energy fluxes across global FLUXNET sites with regression algorithms, Biogeosciences, 13, 4291–4313, https://doi.org/10.5194/bg-13-4291-2016, 2016. a, b, c, d, e, f, g, h, i, j, k
Van Der Maaten, L., Postma, E., and Van den Herik, J.: Dimensionality Reduction: A Comparative Review, J. Mach. Learn. Res., 10, 66–71, 2009. a
van der Maaten, L., Schmidtlein, S., and Mahecha, M. D.: Analyzing Floristic Inventories with Multiple Maps, Ecol. Inform., 9, 1–10, https://doi.org/10.1016/j.ecoinf.2012.01.005, 2012. a
Verbesselt, J., Hyndman, R., Newnham, G., and Culvenor, D.: Detecting Trend and Seasonal Changes in Satellite Image Time Series, Remote Sens. Environ., 114, 106–115, https://doi.org/10.1016/j.rse.2009.08.014, 2010. a
Wilks, D. S.: Chapter 12 – Principal Component (EOF) Analysis, in: International Geophysics, edited by Wilks, D. S., vol. 100 of Statistical Methods in the Atmospheric Sciences, Academic Press, 519–562, https://doi.org/10.1016/B978-0-12-385022-5.00012-9, 2011. a
Wingate, L., Ogée, J., Cremonese, E., Filippa, G., Mizunuma, T., Migliavacca, M., Moisy, C., Wilkinson, M., Moureaux, C., Wohlfahrt, G., Hammerle, A., Hörtnagl, L., Gimeno, C., Porcar-Castell, A., Galvagno, M., Nakaji, T., Morison, J., Kolle, O., Knohl, A., Kutsch, W., Kolari, P., Nikinmaa, E., Ibrom, A., Gielen, B., Eugster, W., Balzarolo, M., Papale, D., Klumpp, K., Köstner, B., Grünwald, T., Joffre, R., Ourcival, J.-M., Hellstrom, M., Lindroth, A., George, C., Longdoz, B., Genty, B., Levula, J., Heinesch, B., Sprintsin, M., Yakir, D., Manise, T., Guyon, D., Ahrends, H., Plaza-Aguilar, A., Guan, J. H., and Grace, J.: Interpreting Canopy Development and Physiology Using a European Phenology Camera Network at Flux Sites, Biogeosciences, 12, 5995–6015, https://doi.org/10.5194/bg-12-5995-2015, 2015. a
Wolter, K. and Timlin, M. S.: El Niño/Southern Oscillation Behaviour since 1871 as Diagnosed in an Extended Multivariate ENSO Index (MEI.Ext), Int. J. Climatol., 31, 1074–1087, https://doi.org/10.1002/joc.2336, 2011. a
Yan, T., Song, H., Wang, Z., Teramoto, M., Wang, J., Liang, N., Ma, C., Sun, Z., Xi, Y., Li, L., and Peng, S.: Temperature Sensitivity of Soil Respiration across Multiple Time Scales in a Temperate Plantation Forest, Sci. Total Environ., 688, 479–485, https://doi.org/10.1016/j.scitotenv.2019.06.318, 2019. a
Zeileis, A., Leisch, F., Hornik, K., and Kleiber, C.: Strucchange: An R Package for Testing for Structural Change in Linear Regression Models, J. Stat. Softw., 7, 1–38, https://doi.org/10.18637/jss.v007.i02, 2002. a
Zeng, N., Zhao, F., Collatz, G. J., Kalnay, E., Salawitch, R. J., West, T. O., and Guanter, L.: Agricultural Green Revolution as a Driver of Increasing Atmospheric CO2 Seasonal Amplitude, Nature, 515, 394–397, https://doi.org/10.1038/nature13893, 2014. a
Zhang, Q., Phillips, R. P., Manzoni, S., Scott, R. L., Oishi, A. C., Finzi, A., Daly, E., Vargas, R., and Novick, K. A.: Changes in Photosynthesis and Soil Moisture Drive the Seasonal Soil Respiration-Temperature Hysteresis Relationship, Agr. Forest Meteorol., 259, 184–195, https://doi.org/10.1016/j.agrformet.2018.05.005, 2018. a
Zhou, L., Tian, Y., Myneni, R. B., Ciais, P., Saatchi, S., Liu, Y. Y., Piao, S., Chen, H., Vermote, E. F., Song, C., and Hwang, T.: Widespread Decline of Congo Rainforest Greenness in the Past Decade, Nature, 509, 86–90, https://doi.org/10.1038/nature13265, 2014. a, b
Zhu, Z., Piao, S., Myneni, R. B., Huang, M., Zeng, Z., Canadell, J. G., Ciais, P., Sitch, S., Friedlingstein, P., Arneth, A., Cao, C., Cheng, L., Kato, E., Koven, C., Li, Y., Lian, X., Liu, Y., Liu, R., Mao, J., Pan, Y., Peng, S., Peñuelas, J., Poulter, B., Pugh, T. A. M., Stocker, B. D., Viovy, N., Wang, X., Wang, Y., Xiao, Z., Yang, H., Zaehle, S., and Zeng, N.: Greening of the Earth and Its Drivers, Nat. Clim. Change, 6, 791–795, https://doi.org/10.1038/nclimate3004, 2016. a, b, c, d, e |
In which case is an electron removed from a bonding molecular orbital?
(a) $O_2$ to $O_2^+$ $\quad$ (b) $N_2$ to $N_2^+$ $\quad$ (c) $NO$ to $NO^+$ $\quad$ (d) $O_2$ to $O_2^-$
$N_2$ to $N_2^+$. Electronic configuration of $N_2$: $\sigma 1s^2\,\sigma^\ast 1s^2\,\sigma 2s^2\,\sigma^\ast 2s^2\,\pi 2p_y^2 = \pi 2p_z^2\,\sigma 2p_x^2$
Electronic configuration of $N_2^+$: $\sigma 1s^2\,\sigma^\ast 1s^2\,\sigma 2s^2\,\sigma^\ast 2s^2\,\pi 2p_y^2 = \pi 2p_z^2\,\sigma 2p_x^1$
The electron lost in forming $N_2^+$ therefore comes from the bonding $\sigma 2p_x$ orbital, whereas $O_2 \to O_2^+$ and $NO \to NO^+$ remove an electron from an antibonding $\pi^\ast 2p$ orbital, and $O_2 \to O_2^-$ gains an electron. Hence (b) is the correct answer.
# Write the polynomial $P(x)=x^{2}$, if possible, as a linear combination of the polynomials $1+x$, $2+x^{2}$, $-x$.
Question
Polynomials
Write the polynomial $P(x)=x^{2}$, if possible, as a linear combination of the polynomials $1+x$, $2+x^{2}$, $-x$.
2021-01-14
Let $x^2 = a(1+x) + b(2+x^2) + c(-x)$. Expanding the right-hand side gives $(a+2b) + (a-c)x + bx^2$, so comparing coefficients of powers of $x$ we get
$a + 2b = 0 \quad \cdots (1)$
$a - c = 0 \quad \cdots (2)$
$b = 1 \quad \cdots (3)$
From (3) we have $b = 1$. From (1), $a = -2b = -2$, and from (2), $c = a = -2$. Check: $-2(1+x) + (2+x^2) - 2(-x) = x^2$. Thus $x^2$ can be written as a linear combination of $1+x$, $2+x^2$ and $-x$.
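As a quick sanity check on the coefficient matching above, here is a minimal Python sketch (not part of the original answer) that solves the same linear system numerically; the matrix columns are the coefficient vectors of $1+x$, $2+x^2$ and $-x$ in the basis $\{1, x, x^2\}$.

```python
import numpy as np

# Columns: coefficients (constant, x, x^2) of 1+x, 2+x^2 and -x.
A = np.array([[1.0, 2.0, 0.0],
              [1.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
rhs = np.array([0.0, 0.0, 1.0])  # coefficient vector of x^2

a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)  # expected: -2.0, 1.0, -2.0
```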
# Harmonic Triangle
### Why do this problem?
This problem provides a fraction-based challenge for students who already possess a good understanding of fraction addition and subtraction, and it leads to algebraic manipulation of that same process.
### Possible approach
Silently, and with the full class attention (!) begin writing the triangle on the board, slowly, row by row. Students can put up hands when they know what is coming next. Allow whispered explanations, until everyone seems to have some idea, then invite explanations from students.
In pairs let students generate as much of the pyramid array as they can.
Bring the group together and ask about what is easy/hard, and any short cuts/observations anyone has made. Suggest that students work on one diagonal at a time, and redefine their task as finding, and trying to prove, general methods for calculating numbers in this table (for example, can they establish the second number in the $46$th row? The $n$th row? What about the third numbers?).
Note: How far this problem goes will depend on the confidence students have in using algebra to represent and explore generality. The general term in the second diagonal should be accessible to most students who can manage algebraic fractions, and also to many who can't but who can reason generally from the patterns in the numerical values.
There is no rush to finalise a proof for any term in the array, the algebra involved isn't completely simple and the reasoning based on the algebra needs to be thorough. But this is an excellent context in which to sense generality while proof requires some care and imagination.
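For teachers who want to check students' rows quickly, here is a minimal Python sketch (not part of the original resource). It assumes the triangle in question is the Leibniz harmonic triangle, i.e. row $n$ starts and ends with $1/n$ and each entry is the sum of the two entries immediately below it, which gives the recurrence $T(n,k) = T(n-1,k-1) - T(n,k-1)$.

```python
from fractions import Fraction

def harmonic_triangle(num_rows):
    """Generate rows of the harmonic (Leibniz) triangle as exact fractions."""
    rows = []
    for n in range(1, num_rows + 1):
        row = [Fraction(1, n)]                 # each row starts with 1/n
        for k in range(2, n + 1):
            # T(n, k) = T(n-1, k-1) - T(n, k-1)
            row.append(rows[-1][k - 2] - row[-1])
        rows.append(row)
    return rows

for row in harmonic_triangle(5):
    print(" ".join(str(f) for f in row))
```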
### Key questions
• What's the reason why your pattern must continue? How sure are you?
• Try to find the first case where it doesn't work.
• If you think it will continue indefinitely, can that be proved?
### Possible extension
Press students to justify their conjectures using algebraic reasoning, extended gradually to cover all terms across a row.
### Possible support
Learners might find it useful to use these cards to recreate the triangle as a group activity. There are a few fractions missing and the blank cards can be completed to fill in the gaps.
Number Pyramids and More Number Pyramids are useful less-demanding challenges, using mainly whole numbers, and based on a similar structure.
An alternative, easier task working with unit fractions is Egyptian Fractions
This could be a replacement for, or a preliminary to Harmonic Triangles.
Gordon Davis, who teaches at Colyton Grammar School in Devon, UK said:
"After introducing the structure of the triangle briefly, I gave groups sugar paper and slips of paper with all the fractions that they would need for the first seven rows of the triangle. There was a massive amount of mental calculation, as students organised and stuck down their fractions.
It worked well as there was a lot of space on the sugar paper for students to note down any observations they had. I asked them to highlight any of these comments that they could prove to be true always.
We then spent most of the lesson trying to establish the terms on the 100th row." |
# A stabilized finite volume element formulation for sedimentation-consolidation processes
A model of sedimentation-consolidation processes in so-called clarifier-thickener units is given by a parabolic equation describing the evolution of the local solids concentration coupled with a version of the Stokes system for an incompressible fluid describing the motion of the mixture. In cylindrical coordinates, and if an axially symmetric solution is assumed, the original problem reduces to two space dimensions. This poses the difficulty that the subspaces for the construction of a numerical scheme involve weighted Sobolev spaces. A novel finite volume element method is introduced for the spatial discretization, where the velocity field and the solids concentration are discretized on two different dual meshes. The method is based on a stabilized discontinuous Galerkin formulation for the concentration field, and a multiscale stabilized pair of $\mathbb{P}_1$-$\mathbb{P}_1$ elements for velocity and pressure, respectively. Numerical experiments illustrate properties of the model and the satisfactory performance of the proposed method. |
# A random, smooth ellipse in TikZ
I've tried to draw a random, smooth ellipse in TikZ with decorations and I came up with something like this, but the endpoints don't match:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{decorations}
\usetikzlibrary{decorations.pathmorphing}
\begin{document}
\begin{tikzpicture}[decoration={random steps,segment length=12.5mm,amplitude=6mm}]
\draw[decorate,rounded corners=4mm] (0,0) ellipse (1.3cm and 2cm);
\end{tikzpicture}
\end{document}
Repeating the construction several times suggests that the gap is a systematic error that doesn't come from the randomness:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{decorations}
\usetikzlibrary{decorations.pathmorphing}
\begin{document}
\begin{tikzpicture}[decoration={random steps,segment length=12.5mm,amplitude=6mm}]
\foreach \i in {1,...,10} {\draw[decorate,rounded corners=4mm] (0,0) ellipse (1.3cm and 2cm);}
\end{tikzpicture}
\end{document}
What's the problem? How can I fix it?
You have to choose dimensions for segment length, amplitude and rounded corners carefully.
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{decorations}
\usetikzlibrary{decorations.pathmorphing}
\begin{document}
\begin{tikzpicture}[decoration={random steps,segment length=3pt,amplitude=2pt}]
\draw[decorate,rounded corners=1pt] (0,0) ellipse (1.3cm and 2cm);
\end{tikzpicture}
\begin{tikzpicture}[decoration={random steps,segment length=3pt,amplitude=1pt}]
\draw[decorate,rounded corners=1pt] (5,0) ellipse (1.3cm and 2cm);
\end{tikzpicture}
\end{document}
• How do you know which parameters to choose? Do you suspect some kind of relation between them? (Like one being a multiple of the other) – Turion Feb 6 '15 at 15:55
• @Turion I did just trial and error! – user11232 Feb 6 '15 at 23:32 |
# Game
It's done enough!
Standalone PC/OSX builds are pending.
Kudos to Peter Queckenstedt (@scutanddestroy) for doing an amazing job on the Proctor, Hillary, and Trump.
### Post-Mortem:
This has been a positive experience. I love games that actually have nontrivial interactions in them and completely open-ended text inputs. I'm a fan of interactive fiction, but hate that feeling when you're digging around and grasping for action words like some sort of textual pixel-hunt.
The language processing systems in DS2016 aren't particularly complicated, but they're simpler than I'd like. In the first week of the jam I started writing a recurrent neural network to parse and analyze the sentiment of the player's comments. I realized, perhaps too late, that there wasn't enough clean data for me to accurately gauge the sentiment and map it to social groups. Instead, I wrote a basic multinomial naive Bayes classifier that takes a sentence, tokenizes it, and maps it to 'like' or 'dislike'. Each group has its own classifier and tokenizer, so I could program demographics with a base voting likelihood and give each of them a few sentences on the "agrees with" and "disagrees with" sides, then have them automatically parse and change their feelings towards the player.
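To make the idea concrete, here is a rough Python sketch of a per-demographic 'like'/'dislike' naive Bayes classifier in the spirit described above. It is illustrative only; the game itself is written in Kotlin/libGDX, and the class name, seed sentences and smoothing details here are my own assumptions rather than the actual DS2016 code.

```python
import math
from collections import Counter

class DemographicSentiment:
    """Tiny multinomial naive Bayes: classifies a sentence as 'like' or 'dislike'
    for a single demographic group."""

    def __init__(self, agrees_with, disagrees_with):
        self.counts = {"like": Counter(), "dislike": Counter()}
        self.totals = {"like": 0, "dislike": 0}
        for sentence in agrees_with:
            self._train(sentence, "like")
        for sentence in disagrees_with:
            self._train(sentence, "dislike")

    @staticmethod
    def _tokenize(sentence):
        return sentence.lower().split()

    def _train(self, sentence, label):
        for token in self._tokenize(sentence):
            self.counts[label][token] += 1
            self.totals[label] += 1

    def classify(self, sentence):
        vocab_size = len(set(self.counts["like"]) | set(self.counts["dislike"]))
        scores = {}
        for label in ("like", "dislike"):
            log_prob = 0.0
            for token in self._tokenize(sentence):
                # Laplace smoothing so unseen words don't zero out the score.
                p = (self.counts[label][token] + 1) / (self.totals[label] + vocab_size)
                log_prob += math.log(p)
            scores[label] = log_prob
        return max(scores, key=scores.get)

# One classifier per demographic, seeded with a few "agrees"/"disagrees" sentences.
evangelicals = DemographicSentiment(
    agrees_with=["family values matter", "faith guides my decisions"],
    disagrees_with=["religion has no place in politics"])
print(evangelicals.classify("my faith and family come first"))  # -> 'like'
```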
A usability change that came in later than one would guess was as follows: I had originally grabbed the demographic with the largest emotional response to a comment and displayed them with the sentiment change. Unfortunately, this turned out to over-exaggerate one particularly noisy group. Another change, shortly thereafter, was masking the exact amount of the change. Instead of saying +1.05% opinion, it simply became "+Conservatives" or "-Hipsters". This was visually far easier to parse and I think helped the overall readability of the game.
There is still a call to add some more direct public opinion tracking in the game, letting players know in closer to real time how they're doing among the demographics. I may find it in myself to introduce that.
The last interesting aspect that I noticed during playtesting: I had slightly over-tuned the language models to my style of writing. Instead of opining on matters at any length, people were making enormous run-on sentences which appealed to every demographic at the same time. These statements, often self-contradictory, were not something I expected or handled well. I found the game to be rather difficult, but it looks like playtesters had a dandy time making the states go all blue.
It's time again for Little Awful Jam! The theme is 'Weird History'. Make a game about folk lore, something strange that happened in history, or some corruption of events. This is my game design doc.
### Game Design Doc
The Pitch: The idea on which I've settled is Debate Simulator 2016, where you play a presidential candidate stepping up to the podium to square off against our current commander-in-chief.
The Gameplay: The gameplay consists of prompts and free responses. Your goal is to appeal to your voting base and to excite them enough to go out and vote. Alternatively, you can go 100% offensive and do nothing but verbally tear down your opponent. Your feedback will consist of your approval rating and your citizen motivation. Don't motivate people and they won't get out and vote, even if they like you. Motivate people to vote and don't get them to like you and you're sure to lose.
The Challenge: Do you know your stuff? Can you overcome the Evangelical block? How do you tacitly approve of bodily autonomy without making it seem like you approve of bodily autonomy?
Free-form Ideas:
• Pick your alignment. Left-Democrat. Centrist-Democrat. Independent. Centrist-Republican. This will change the difficulty by having different demographic groups start with different opinions of you.
• End of game: show the election map and the polls. Use real demographic data to show how things played out.
• Generate realistic text for Donald Trump by randomly mashing together words.
• Simple NLP for the player to classify sentiment and subject, including prompt text for context.
Look and Feel: 2D single-stage pixel art with largely static sprites and a camera that pans between the player and the challenger. Aiming for 640x480 resolution with 2x upscaling. No fancy particles. Minimal sprite talking animation. Animated text.
Tools: Sadly, I won't be using Godot for this. Much as I love the engine, there is so much here that requires a more robust coding language that I need to do it in libGDX with Kotlin.
Project Progression:
• Skeleton libGDX game with Kotlin. 'Hello World'.
• Scene stack and placeholder sprites. Basic game loop.
• Demographic data and player input processing + scoring.
• Opponent responses + emotional meter.
• Minimum Viable Product
Godot is a really honkin' neat engine. If you haven't tried it, I strongly recommend playing around with it. Take a look at https://godotengine.org.
I found myself in a position where I needed to build a native library. Here's my experience doing that on Windows. I can't attest to the accuracy or repeatability of these steps, but I'm leaving them here so I can revisit them when I need to. Just remember: GDNative is a way to call into shared libraries from Godot. NativeScript is the other way -- native code that can call into Godot.
### Prerequisites and Setting Up
#### You will need:
• Microsoft Visual Studio
• Python 3 + pip (or scons installed)
• Git
• A really good reason to need to build a native library
Godot is built with Scons. The process is relatively painless compared to the dependency hell that you can get into when building other tools, but it's not without challenges. I'm going to assume that you've installed Microsoft Visual Studio and can run the following on the command line:
cl.exe
(scons) D:\Source\TerminusExperiment\CPU_v1>cl
Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24215.1 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
usage: cl [ option... ] filename... [ /link linkoption... ]
If you don't see that, you'll probably need to search for a shortcut to "VS2015 x64 Native Tools Command Prompt". That will, in turn, include a script to call the following bat file: "%comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat"" amd64"
#### CHECKPOINT: Visual Studio is installed.
Next is Scons. I'm not going to go into any depth about installing and setting up Python on Windows, but I've had more luck using Chocolatey than Anaconda. Install Python 3, pip, and virtualenv.
Make a new virtual environment somewhere with python -m venv my_scons_venv (Mine is called just 'scons' and is stored in C:\Users\Jo\virtualenvs).
Activate the new virtualenv. If you're on Windows, that means calling C:\Users\Jo\virtualenvs\scons\Scripts\activate. (This is approximately equivalent to Linux or OSX's . ./scons/bin/activate)
Install scons in your virtual environment. pip install scons
#### CHECKPOINT: Scons is installed. You can build Godot.
Now we'll pull the Godot source. There may be a way to make do without this, but I've not had luck.
I keep my projects in D:\Source. I opened my command prompt and did git clone https://github.com/godotengine/godot.git
Get some coffee while the repo comes down.
Change into the Godot directory. Build Godot with scons platform=windows.
Wait.
You should see your executables in "D:\Source\godot\bin". Try double clicking on the tools.64.exe if it's built. Fun, eh?
### Building The CPP Shared Library
Go back to your source folder. For me, that's "D:\Source". Now we'll clone godot-cpp so we can build our .lib file. git clone https://github.com/GodotNativeTools/godot-cpp
We're going to edit the SConstruct file.
I set my "godot_headers_path" to godot_headers_path = ARGUMENTS.get("headers", os.getenv("GODOT_HEADERS", "D:\\Source\\godot\\modules\\gdnative\\include"))
Note that it might be necessary to use double backslashes because Windows uses the wrong slash direction for their paths. Note that godot_headers_path points into the Godot build we cloned and into the GDNative module's include folder.
Update the "godot_bin_path" to point to our executable. godot_bin_path = ARGUMENTS.get("godotbinpath", os.getenv("GODOT_BIN_PATH", "D:\\Source\\godot\\bin\\godot.windows.tools.64.exe"))
Invoke scons platform=windows generate-headers=yes.
There will be a short span while the lib is created. When it's all done, check your bin folder. You should see "godot_cpp_bindings.lib".
#### CHECKPOINT: You've got the godot_cpp_bindings library built
Make a new folder. I put it in my project directory, "D:\Source\Terminus\CPU_v1\". CPU_v1 will be my native module. My game involves doing some CPU emulation.
Into that directory, copy D:\Source\godot-cpp\include. Also make a folder called 'lib' and put "godot_cpp_bindings.lib" in it.
Your directory structure should look like this:
D:\Source\TerminusExperiment\CPU_v1
- include
|- core
| |- AABB.hpp
| \- ...
|- AcceptDialog.hpp
|- AnimatedSprite.hpp
\- ...
- lib
\- godot_cpp_bindings.lib
- src
\- init.cpp (THIS IS YOUR CPP FILE! Get a sample one from [x].)
Finally, we can build our CPP file using this command in the root of CPU_v1: cl /Fosrc\init.obj /c src\init.cpp /TP /nologo -EHsc -D_DEBUG /MDd /I. /Iinclude /Iinclude\core /ID:\Source\godot\modules\gdnative\include
Make a good note of those trailing '/I's. They specify the include folders. If you get a message about "Missing whatever.h" then you've got one wrong.
/Fosrc\init.obj specifies the output object. /c src\init.cpp specifies our source file.
#### CHECKPOINT: We have our .obj file from our init.cpp!
Last step, we can link our two objects together. cl /LD lib\godot_cpp_bindings.lib src\init.obj /link /DLL /OUT:init.dll
This will take our lib and our source object and will produce init.dll -- something we can use in Godot's Native library.
Shout out to MathJax/ASCIIMath for being awesome.
### Motivation and Problem Statement
We're launching a ball with a great deal of force at a wall. We can describe our wall with four points in 3D space: a = [x, y, z], b = [x, y, z], and so on for c and d. Our ball travels along a straight line path called bbL. Its origin is p_0 and it's moving towards p_1.
There is some redundancy. I picked four points for our wall because it makes intuitive sense, but we're going to be applying this method on only three of those four points, the triangle Delta abc. If you feel the compulsion to run this on a square, you can easily extend the approach to two triangles.
Let's begin with describing our plane. There are a few different formulations of the plane in 3D. For our purposes, the 'normal + offset' configuration is the easiest. We'll figure out the normal (think a straight line pointing out of the wall) from our three points.
### A quick review of dot and cross
I'm going to assume that the idea of cross-product and dot product are familiar, but here's a very quick review in the interest of completeness.
a = (x, y, z)
b = (x, y, z)
a o. b = a_x * b_x + a_y * b_y + a_z * b_z
a ox b = [[a_y*b_z - a_z*b_y], [a_z*b_x - a_x*b_z], [a_x*b_y - a_y*b_x]]
Note that the dot product is a scalar and the cross product is a vector.
One other thing to realize: the dot product of orthogonal vectors is zero. The cross product of two vectors produces a vector that's orthogonal to both. If that's not clear, don't worry.
### The Normal
Let's get back to the wall. We've got a, b, and c and we want to figure out the normal. If these three points make up an infinite plane, then the normal will jut out of it straight towards us. Recall (or look at the notes above) that the cross product of two vectors makes an orthogonal vector. We can convert our three points to two vectors by picking one to be the start. Let's say our two new vectors are r = b-a and s = c-a. That means our normal, bbN is just r ox s! And since we picked a as our origin, our formula for the plane is (P - a) o. bbN = 0 for some point P. Put differently, if we have some point P and it's on the plane, when we plug it into that formula along with a and bbN we'll get zero.
### Enter: The Line
We mentioned before that our line bbL has a start point of p_0 and an end of p_1. This means if a point P is on the line, then there's some value t where P = p_0 + t*(p_1 - p_0). Now comes the fun part. We want to figure out where this line intersects with our plane (if it does). To do that, we'll plug in the formula for a point on our line into the formula for a point on our plane.
(P - a) o. bbN = 0 // Point on plane.
(((p_0 + t*(p_1 - p_0)) - a) o. bbN = 0 // Replace P with the formula.
(((p_0 + t*(p_1 - p_0)) o. bbN - a o. bbN = 0 // Distribute the dot.
(((p_0 + t*(p_1 - p_0)) o. bbN = a o. bbN // Add a o. bbN to both sides, effectively moving it to the right.
p_0 o. bbN + t*(p_1 - p_0) o. bbN = a o. bbN // Distribute again.
t*(p_1 - p_0) o. bbN = a o. bbN - p_0 o. bbN // Subtract p_0 o. bbN from both sides.
t*(p_1 - p_0) o. bbN = (a - p_0) o. bbN // Pull out the dot product.
t = ((a - p_0) o. bbN) / ((p_1 - p_0) o. bbN) // Divide by (p_1 - p_0) o. bbN on both sides.
t = (bbN o. (a - p_0))/(bbN o. (p_1 - p_0))
If the denominator is zero, there's no solution. This happens when the line segment is parallel to the plane, i.e. its direction is perpendicular to the normal bbN. Otherwise, we can plug t back into our line equation to get some point on the plane!
### Inside the Triangle
We have a point P that's on the plane and the line, but is it inside the triangle defined by Deltaabc? There's a fairly easy way to check for that. If you've got a triangle, as we have, then any point in that triangle can be described as some combination of a + u*(b-a) + v*(c-a), where u and v are in the interval [0,1]. If u or v is less than zero, it means they're outside the triangle. If they're greater than one, it means they're outside the triangle. If their sum is greater than one, it means they're outside, too. So we just have to find some u and v for P = a + u*(b-a) + v*(c-a).
### Systems of Equations
It might not seem possible. We have two unknowns and only one equation. However, there's something we've overlooked. P = a + u*(b-a) + v*(c-a) actually has three equations. We've been using a shorthand for our points, but u and v are scalars. Really, we should be looking for a solution for this:
P = a + u*(b-a) + v*(c-a)
P - a = u*(b-a) + v*(c-a)
[[b_x - a_x, c_x - a_x], [b_y - a_y, c_y - a_y], [b_z - a_z, c_z - a_z]] * [[u],[v]] = [[P_x - a_x], [P_y - a_y], [P_z - a_z]]
BAM! Two unknowns, three equations. You might also recognize this to be of the form bbAx=b. You'd be correct. If there were three unknowns and three equations, we could have been fancy and used Cramer's Rule. It's not a hard thing to solve, however.
bbbAx = b
bbbA^TbbbAx = bbbA^Tb // Multiply both sides by bbbA^T so that bbbA^TbbbA is a square matrix.
(bbbA^TbbbA)^-1 bbbA^TbbbA x = (bbbA^TbbbA)^-1 bbbA^T b // Since it's square, it probably has an inverse.
bbbI x = (bbbA^TbbbA)^-1 bbbA^T b // Cancel the inverse.
x = (bbbA^TbbbA)^-1 bbbA^T b // Simplify.
And now we've got x in terms that we know (or can calculate)!
### Inverse of a Square Matrix
(bbbA^TbbbA)^-1 looks like a mess, but it's not as bad as it seems. I'm going to multiply it out and simplify it again.
bbbA^T bbbA = [[b_x - a_x, b_y - a_y, b_z - a_z],[c_x - a_x, c_y - a_y, c_z - a_z]] * [[b_x - a_x, c_x - a_x], [b_y - a_y, c_y - a_y], [b_z - a_z, c_z - a_z]] = an unholy mess.
To cut down on that, I'm going to let b-a = r and c-a = s. If we rewrite the above using that, we get something more manageable.
bbbA^T bbbA = [[r_x, r_y, r_z],[s_x, s_y, s_z]] * [[r_x, s_x], [r_y, s_y], [r_z, s_z]]
Since we're basically multiplying things component wise, we can reuse our code for dot product!
bbbA^T bbbA = [[r o. r, r o. s], [r o. s, s o. s]]
That's an easy to calculate 2x2 matrix. As an added bonus, there's a closed-form solution for the inverse of a 2D matrix. You can probably work it out yourself easily enough, but we've gone through a lot already, so here's the solution:
if bbbA = [[a, b], [c, d]] => bbbA^-1 = 1/(ad-bc) [[d, -b], [-c, a]]
So we calculate r o. r, r o. s, and s o. s and plug them into the inverse matrix. Then we multiply the inverse and bbbA^Tb et voila: we've got our values for u and v.
bbbA^T bbbA = [[r o. r, r o. s], [r o. s, s o. s]]
I'm running out of letters!
alpha = r o. r
beta = r o. s
gamma = r o. s
delta = s o. s
(bbbA^T bbbA)^-1 = 1/(alpha * delta- beta * gamma) * [[delta, -(beta)], [-(gamma), alpha]]
And in all its glory:
(bbbA^T bbbA)^-1 bbbA^T b = 1/(alpha * delta - beta * gamma) * [[delta, -(beta)], [-(gamma), alpha]] * [[r_x, r_y, r_z],[s_x, s_y, s_z]] * [[P_x - a_x], [P_y - a_y], [P_z - a_z]] = [[u],[v]]
Whew.
### Closing: Just Show Me The Code
The moment for which you've been waiting. Here's an EMScript6 implementation of the Point and Triangle objects.
A Gist is available on GitHub at https://gist.github.com/JosephCatrambone/578c22f6e507dc52420752013a45b92b.js or you can play with this interactively on JSFiddle: |
# Potential of mean force
When examining a system computationally, one may be interested in knowing how the free energy changes as a function of some inter- or intramolecular coordinate (such as the distance between two atoms or a torsional angle). The free energy surface along the chosen coordinate is referred to as the potential of mean force (PMF). If the system of interest is in a solvent, the PMF also incorporates the solvent effects.[1]
## General description
The PMF can be obtained in Monte Carlo or Molecular Dynamics simulations which examine how a system's energy changes as a function of some specific reaction coordinate parameter. For example, it may examine how the system's energy changes as a function of the distance between two residues, or as a protein is pulled through a lipid bilayer. It can be a geometrical coordinate or a more general energetic (solvent) coordinate. Often PMF simulations are used in conjunction with umbrella sampling because typically the PMF simulation will fail to adequately sample the system space as it proceeds.[2]
## Mathematical description
The Potential of Mean Force[3] of a system with N particles is, by construction, the potential that gives the average force acting on a particle j, averaged over all configurations of the particles n+1...N, while the configuration of the particles 1...n is held fixed:
${\displaystyle -\nabla _{j}w^{(n)}\,=\,{\frac {\int e^{-\beta V}(-\nabla _{j}V)dq_{n+1}\dots dq_{N}}{\int e^{-\beta V}dq_{n+1}\dots dq_{N}}},~j=1,2,\dots ,n}$
Above, ${\displaystyle -\nabla _{j}w^{(n)}}$ is the averaged force, i.e. "mean force" on particle j. And ${\displaystyle w^{(n)}}$ is the so-called potential of mean force. For ${\displaystyle n=2}$, ${\displaystyle w^{(2)}(r)}$ is the average work needed to bring the two particles from infinite separation to a distance ${\displaystyle r}$. It is also related to the radial distribution function of the system, ${\displaystyle g(r)}$, by:[4]
${\displaystyle g(r)=e^{-\beta w^{(2)}(r)}}$
## Application
The potential of mean force ${\displaystyle w^{(2)}}$ is usually applied in the Boltzmann inversion method as a first guess for the effective pair interaction potential that ought to reproduce the correct radial distribution function in a mesoscopic simulation.[5] Lemkul et al. have used steered molecular dynamics simulations to calculate the potential of mean force to assess the stability of Alzheimer's amyloid protofibrils.[6] Gosai et al. have also used umbrella sampling simulations to show that potential of mean force decreases between thrombin and its aptamer (a protein-ligand complex) under the effect of electrical fields.[7] |
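As a small illustration of the Boltzmann inversion mentioned above, here is a minimal NumPy sketch that recovers ${\displaystyle w^{(2)}(r)=-k_{B}T\ln g(r)}$ from a tabulated radial distribution function. The Boltzmann constant value and the units below are assumptions; pick whatever matches your simulation.

```python
import numpy as np

K_B = 0.0019872041  # kcal/(mol*K); assumed units, adjust to your force field

def pmf_from_rdf(r, g_r, temperature=300.0):
    """Boltzmann inversion: w(r) = -k_B * T * ln g(r).

    r, g_r : arrays holding the radial distribution function.
    Bins where g(r) == 0 are returned as NaN to avoid log(0).
    """
    g = np.asarray(g_r, dtype=float)
    w = np.full_like(g, np.nan)
    positive = g > 0
    w[positive] = -K_B * temperature * np.log(g[positive])
    return w

# Example: an ideal-gas-like g(r) of exactly 1 gives w(r) = 0 everywhere.
r = np.linspace(0.1, 10.0, 100)
print(pmf_from_rdf(r, np.ones_like(r)))
```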
# Construct hypothesis test
Given some data $X_{1},X_{2},\ldots ,X_{n}$ we are interested in constructing a $\textbf{consistent}$ hypothesis test for $H_{0}:\theta =\theta _{0}$ vs. $H_{1}:\theta \neq \theta_{0}$. Suppose that a weak convergence result holds, such as $\alpha_{n}(T_{n}-\theta)\rightarrow X$ in distribution under $H_{0}$. Furthermore, the distribution of $X$ may be known. So is the following testing procedure appropriate?
reject $H_{0}$ if $\left|T_{n}\right|\geq c\alpha_{n}^{-1}+\theta_{0}$ for some $c$?
If yes, why? if not, why? Again, I am only interested in constructing a consistent test.
• Is this for some class? Can you be more explicit about where your difficulties are? – Glen_b -Reinstate Monica Jul 20 '17 at 10:24
• @Glen_b I added some further information. – Xarrus Jul 20 '17 at 14:59
No that doesn't work, because your manipulation of terms didn't maintain the relationship between the components correctly.
You have something that's asymptotically a pivotal quantity $Q_n=α_n(T_n−θ)$ (asymptotically it's distributed as $X$, where $F_X(x)$ doesn't depend on $\theta$). Work out your limits on $Q_n(\theta)$ using $F$ and back out the asymptotic limits on $T_n$.
Consider the situation under the null ($\theta=\theta_0$). You can find two quantiles $x_l$ and $x_u$ where $F_X(x_l) + 1-F_X(x_u)$ is the desired significance level (I'm here assuming this is continuous; it's a teeny bit more fiddly in the general case, because you would include $p(x_u)$ in there, or write it as $1-F_X(x_u^-)\,$), and then reject when $\alpha_n(T_n-\theta_0)$ lies outside $(x_l,x_u)$.
You can then convert that to a rejection rule directly in terms of $T_n$.
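To make that concrete, here is a minimal Python sketch of the rejection rule (my own addition, not part of the original answer). Purely for illustration it assumes the limiting distribution $X$ is standard normal; with a different known limit law you would substitute its quantile function for `norm.ppf`.

```python
from scipy.stats import norm

def reject_null(t_n, theta0, alpha_n, level=0.05):
    """Reject H0: theta = theta0 using the asymptotic pivot alpha_n * (T_n - theta0)."""
    x_l = norm.ppf(level / 2.0)        # lower quantile of the limit law X
    x_u = norm.ppf(1.0 - level / 2.0)  # upper quantile of the limit law X
    q_n = alpha_n * (t_n - theta0)     # observed value of the pivot
    return q_n < x_l or q_n > x_u

# Equivalently, directly in terms of T_n:
#   reject if  T_n < theta0 + x_l / alpha_n  or  T_n > theta0 + x_u / alpha_n
```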
Beware - there's nothing in your question that established that the distribution of $X$ is symmetric.
• For a level $\alpha$ test it should work as follows: The Type I Error should equal $\alpha$. That is, $1-(F_{X}(\frac{x_{i}}{\alpha_{n}}+\theta_{0})-F_{X}(\frac{x_{u}}{\alpha_{n}}+\theta_{0}))=\alpha$ yields the choice of $(x_{l},x_{u})$. But is it possible to simplify the whole stuff, if I am only interested in consistency of the test? – Xarrus Jul 20 '17 at 19:01
• Your rejection rule seems to rely on an assumption that $|T_n|$ will converge to $\theta_0$ if the null hypothesis is true. ... but even leaving that aside are you really saying you don't care what the level is? – Glen_b -Reinstate Monica Jul 20 '17 at 19:38
• In a first step, yes. I just want to construct a consistent test. I don't care about $\alpha$. – Xarrus Jul 20 '17 at 19:42
• Then simply reject every time; you'll always reject the null when its false. – Glen_b -Reinstate Monica Jul 20 '17 at 19:50
• Ok, then it doesn't make sense what I have intended. How then I have to work out the test, if I want a consistent one? – Xarrus Jul 20 '17 at 19:53 |
# Circular Cylinder
MVCALC-RIEUBF
Let $S$ be a surface whose equation in cylindrical coordinates is $r=3\sin \theta$.
What kind of surface is $S$?
A
A plane.
B
A sphere.
C
A right circular cylinder.
D
A hyperboloid.
E
A hyperbolic paraboloid. |
# Totally confused, I need your help. I did a ton of calculations in this problem. In the image bel...
Totally confused, I need your help. I did a ton of calculations in this problem. In the image below, I've provided the question details, and the table I've filled out online (shows what I've already gotten right with checkmarks, and that which I've gotten wrong with X's.) I for the life of me cannot solve this question. I'll love you forever if you explain thoroughly how you came to each conclusion without leaving out any details or shortcuts as I'm really needing help here. Please help.
12. Reporting a Classified Balance Sheet [LO 2-4]
The following are the transactions of Spotlighter, Inc., for the month of January:
a. Borrowed $5,540 from a local bank on a note due in six months.
b. Received $6,230 cash from investors and issued common stock to them.
c. Purchased $2,600 in equipment, paying $1,000 cash and promising the rest on a note due in one year.
d. Paid $1,100 cash for supplies.
e. Bought and received $1 of supplies on account.
# Conditional convergence of a product of a conditionally convergent series
1. Apr 12, 2009
1. The problem statement, all variables and given/known data
1. Find a sequence $a_n$ such that $\sum a_n$ is conditionally convergent but $\sum (a_n)^3$ does not converge conditionally.
3. The attempt at a solution
We are confused about the definition of conditional convergence. Importantly, does absolute convergence also count as conditional convergence? If not, then the sum of the simple sequence $a_n = (-1)^n \cdot \frac{1}{n}$ is conditionally convergent while the sum of its cube is absolutely convergent.
However, if absolute convergence does count as conditional convergence, then we need to find $a_n$ such that $\sum (a_n)^3$ diverges.
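For the candidate sequence named above, the cube can be checked in one line (a worked step added for clarity, using the standard convergence of the $p$-series with $p = 3$):
$$\sum_n \left(\frac{(-1)^n}{n}\right)^3 \;=\; \sum_n \frac{(-1)^n}{n^3}, \qquad \sum_n \frac{1}{n^3} < \infty,$$
so the cubed series converges absolutely.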
### Session T26: Focus Session: Computational Frontiers in Quantum Spin Systems II
2:30 PM–5:30 PM, Wednesday, February 29, 2012
Room: 257B
Chair: Hans Gerd Evertz, Technical University of Graz
Abstract ID: BAPS.2012.MAR.T26.13
### Abstract: T26.00013 : Monte Carlo study of a $U(1)\times U(1)$ system with $\pi$-statistical interaction
5:18 PM–5:30 PM
#### Authors:
Scott Geraedts
(California Institute of Technology)
Olexei Motrunich
(California Institute of Technology)
We study a $U(1)\times U(1)$ system with two species of loops with mutual $\pi$-statistics in (2+1) dimensions. We are able to reformulate the model in a way that can be studied by Monte Carlo and we determine the phase diagram. In addition to a phase with no loops, we find two phases with only one species of loop proliferated. The model has a self-dual line, a segment of which separates these two phases. Everywhere on the segment, we find the transition to be first-order, signifying that the two loop systems behave as immiscible fluids when they are both trying to condense. Moving further along the self-dual line, we find a phase where both loops proliferate, but they are only of even strength, and therefore avoid the statistical interactions. We study another model which does not have this phase, and also find first-order behavior on the self-dual segment.
To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2012.MAR.T26.13 |
An instrument to identify computerised primary care research networks, genetic and disease registries prepared to conduct linked research: TRANSFoRm International Research Readiness (TIRRE) survey
1. Emily Jennings,
2. Simon de Lusignan,
3. Georgios Michalakidis, PhD Data Analytics,
4. Paul Krause,
5. Frank Sullivan,
6. Harshana Liyanage and
7. Brendan C. Delaney
1. Hon. Research Assistant, Department of Clinical and Experimental Medicine, University of Surrey, Guildford, UK
2. Professor, Primary Care and Clinical Informatics, Department of Clinical and Experimental Medicine, University of Surrey, Guildford, UK
3. Department of Computer Science, University of Surrey, Guildford, UK
4. Professor, Complex Systems, Department of Computer Science, University of Surrey, Guildford, UK
5. Professor, Primary Care, Population Health Sciences, University of Dundee, Dundee, UK
6. Research Fellow, Department of Clinical and Experimental Medicine, University of Surrey, Guildford, UK
7. Professor, Primary Care Research, Department of Surgery and Cancer, Imperial College London, St Mary’s Campus, London, UK
1. Author address for correspondence: Simon de Lusignan, Professor, Primary Care and Clinical Informatics, Department of Clinical and Experimental Medicine, University of Surrey, Guildford GU2 7XH, UK; s.lusignan{at}surrey.ac.uk
## Abstract
Purpose The Translational Research and Patients safety in Europe (TRANSFoRm) project aims to integrate primary care with clinical research whilst improving patient safety. The TRANSFoRm International Research Readiness survey (TIRRE) aims to demonstrate data use through two linked data studies and by identifying clinical data repositories and genetic databases or disease registries prepared to participate in linked research.
Method The TIRRE survey collects data at micro-, meso- and macro-levels of granularity; to fulfil data, study specific, business, geographical and readiness requirements of potential data providers for the TRANSFoRm demonstration studies. We used descriptive statistics to differentiate between demonstration-study compliant and non-compliant repositories. We only included surveys with >70% of questions answered in our final analysis, reporting the odds ratio (OR) of positive responses associated with a demonstration-study compliant data provider.
Results We contacted 531 organisations within the European Union (EU). Two declined to supply information; 56 made a valid response and a further 26 made a partial response. Of the 56 valid responses, 29 were databases of primary care data, 12 were genetic databases and 15 were cancer registries. The demonstration-compliant primary care sites made 2098 positive responses compared with 268 in non-use-case compliant data sources [OR: 4.59, 95% confidence interval (CI): 3.93–5.35, p < 0.008]; for genetic databases: 380:44 (OR: 6.13, 95% CI: 4.25–8.85, p < 0.008) and cancer registries: 553:44 (OR: 5.87, 95% CI: 4.13–8.34, p < 0.008).
Conclusions TIRRE comprehensively assesses the preparedness of data repositories to participate in specific research projects. Multiple contacts about hypothetical participation in research identified few potential sites.
• medical informatics
• family practice
• medical records systems
• electronic health records
• diabetes mellitus
• Barrett’s disease
## ABBREVIATIONS
IT - information technology; EMR - electronic medical records; TRANSFoRm - Translational Research and Patients safety in Europe; TIRRE survey - TRANSFoRm International Research Readiness survey; IBM SPSS - IBM Statistical Package for Social Sciences; eHR - electronic health record; OR - odds ratio; ICD - International Classification of Disease; ICPC - International Classification of Primary Care; SNOMED - Systematised Nomenclature of Medicine; CTv3 - Clinical Terms Version 3; ATC - Anatomical Therapeutic Chemical; HL7 - Health Level-7; RIM - Reference Information Model; CDISC - Clinical Data Interchange Standards Consortium; BRIDG - Biomedical Research Integrated Domain Group; CSV - Comma Separated Values; CPRD - Clinical Practice Research Datalink.
## INTRODUCTION
Large databases of health data are widely used for research but less often combined.1 Linked data facilitates better measurement of clinical performance and patient health outcomes in health care systems.2 Technical challenges of linking data are mostly considered to be the key barrier of integrating disparate heterogeneous data sources.3 Data privacy legislations can considerably hinder research in a multinational setting.4 Data collected within primary care have been computerised since the 1990s5 with data widely used for research,6 but with relatively little linkage of data beyond disease-specific programmes in individual localities. In the United States, the federal electronic medical records mandate aims not only to save money but also to modernise health information technology (IT). A team of RAND Corporation researchers projected in 2005 that a move towards health IT could potentially save $81 billion. However, this saving has far from materialised and despite the recommendations, spending in the US has increased over the past 9 years by$800 billion.7 The increase in spending was, in part, attributed to the slow adoption of health IT systems that are neither interoperable nor easy to use.
The Translational Research and Patient Safety in Europe (TRANSFoRm) project aims to reduce barriers to conducting research using routine healthcare data across Europe.810 The European eHealth Action Plan prioritises interoperability between health records so that internationally comparable data can be collected on the quality of care and for research.11 The TRANSFoRm International Research Readiness (TIRRE) survey was developed and designed to collect information about these data sources with the primary aim of assessing the preparedness of disease registries, throughout Europe, to conduct linked research using the TRANSFoRm project (Appendix 1). The TRANSFoRm requirements for the TIRRE instrument were that it could assess the feasibility of conducting two simulated studies (use-cases): one on the genetics of response to oral anti-diabetic medication; the other on the relationship between anti-indigestion medication, Barrett’s disease, oesophageal cancer and the quality of life. The ‘use-cases’ were designed to capture how primary care recorded oesophageal reflux might be a prodrome of cancer; and any genetic predisposition to complications of people with type 2 diabetes.6
## METHOD
### Sampling and data collection
Our initial contact was to the health ministry of each EU country and to National Primary Care Organisations. Subsequent strategies included trying to identify sites through Internet and Medline searches, and snowball sampling through contacts made or work references. We also contacted National and European informatics and research networks. We identified sites across Europe willing to participate in the survey by contacting them through email or web-form and we then followed this up with a phone call. We exported these data from the completed online questionnaires directly into either Microsoft Excel or into Statistical Package for Social Sciences (IBM SPSS). We categorised 'non-compliance' as a respondent who partially completed the online survey, answering <70% of the questions; or a respondent with whom we had made initial telephone contact but who was then unavailable for their telephone interview or failed to proceed to online completion of the survey. A major component of the workload in this project involved identifying potential survey respondents.
### Micro-, meso- and macro-level
The broad scope of the survey emerged from a series of workshops and is composed of a wide range of questions designed to assess how data might be linked, the data itself, extraction methods and social and organisational influences.15,16 The final instrument contained 160 questions divided into a framework which consisted of micro-, meso-, macro- and study-specific levels.
• The first section covered micro-level issues and was concerned with the data source, the data itself, metadata, the potential for linkage or achieving semantic interoperability between data sources17 and details of how many studies have been published using the data.
• The meso-level explored the data extraction,18 the architecture for the computerised medical record and other data repositories,19 audit trails and the size of the database.
• The macro-issues related to the nature of the health system, socio-cultural factors and issues relating to the funding, purpose and restrictions on the use of the data.
• Study-specific questions make up the final part of the survey instrument (Supplementary data file, Table S1), these were designed to identify sites that were eligible to participate in the use-cases in pairs of primary care and genetic, or primary care and cancer registry data.
We described the coding systems used to store data, including drug dictionaries and any standards used (the aim was to determine whether there were a small number of possible combinations of coded data to identify within data repositories and the mechanisms for achieving interoperability), the number and details of eHR vendors, vendors of communications and data processing applications routinely used (including their international scope, coding systems offered and if they had common data export formats) along with organisational, policy, cultural or legislative restrictions on data reuse.
### Use-case specific
We analysed the process of conducting two use-cases and defined the studies using a framework which defined the micro-, meso- and macro-levels of data and process information required to conduct successful linked research, where multiple data sources are semantically integrated. We summarised the sites eligible to participate in the use-cases in pairs of primary care and genetic or primary care and cancer registry data. If the database can support a use-case, we consider the site a use-case compliant site; if it cannot, we define it as a non-use-case site. Registries were only eligible if they provided a valid response to the questionnaire. We required as much of the survey to be completed as possible, as each part of it was determined from our requirements analysis. We defined a valid response to be one which answered >70% of the questions. Key compulsory answer questions which defined compliance provided information such as valid contact details, a link to another dataset, size of the dataset, data model and details of the coding system, the likely lead time in any approval process and that they have use-case variables available. All sections of the questionnaire provided significant and useful information to determine if the database was use-case specific.
### Reporting and analysis
We compared the responses from databases that proved eligible to participate in the use-cases with those who were not. We wanted to explore whether it was more likely that those associated with eligibility would give a positive response to questions than those who were not deemed eligible. A valid response provided by the respondent is considered a positive response. The purpose of this exercise was to identify any questions that were not purposeful and to reduce the number of questions. We identified and reviewed any questions that were not answered positively by any of the use-case eligible respondents on the basis that they were not discriminatory of eligibility to participate in either of the studies.
### Statistical methods
We used descriptive statistics (i.e. measures of frequency) to describe response rates and quote odds-ratios (ORs), 95% confidence intervals (CIs) and used tests of proportion to report whether sections of the questionnaire helped to discriminate between those able to conduct the use-case or not.
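For readers unfamiliar with the calculation, here is a minimal Python sketch (not from the paper) showing how an odds ratio and a Wald 95% CI can be obtained from a 2×2 table of positive/negative responses; the counts below are hypothetical, since the paper reports only the positive-response totals.

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
                       positive  negative
    compliant sites       a         b
    non-compliant sites   c         d
    """
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, (lower, upper)

# Hypothetical counts for illustration only.
print(odds_ratio_with_ci(2098, 1500, 268, 880))
```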
### Ethics statement
There was no formal ethics board review. This survey only seeks to report information about the capacity and capability of information sources to be combined to conduct research studies and does not involve any access to personal data. However, the TIRRE survey does check whether data sources collect individual consent and if they contain strong identifiers and if there are restrictions on the use of data.
## RESULTS
### Sample and data collection for use-case specific defined studies
We made many contacts but received few responses. We contacted 531 different organisations, and later individuals in EU countries (including eHR vendors) and received 56 valid responses. Of the health ministries we contacted, seven provided useful information and a further five responded but could not provide any helpful information. Only two site representatives declined to participate at this stage (Supplementary data file, Table S2).
### eHR vendors
We also collected details of the national or international eHR vendors with a significant presence in one or more EU countries. We contacted 17 companies identified initially, as well as any reported by survey respondents. Nine of these eHR vendors had a presence in more than one country. Two of those contacted started to complete the TIRRE survey instrument but failed to complete the questionnaire. We also approached nine vendors listed by site representatives who completed the questionnaire but they once again expressed no interest in participating in the survey. They did suggest they might consider completing the survey in the future if and when we had something more definite to offer. Few vendors responded; however, when they did reply to the survey, their responses to the questions posed provided useful detail.
### Telephone and online completion of the survey
Of the 531 organisations we contacted, 45 respondents commenced but did not complete the TIRRE survey online (Supplementary data file, Table S3) and 26 made a partial response during telephone enquiries but were then either unavailable for their telephone interview or failed to go ahead and complete the online survey. The initial telephone interviews took 1.5 hours and, with experience, still took 50–75 minutes. The feedback from the pilot survey suggested that the process took too long and that there was very little incentive for the respondent to complete the survey. While this drawback could have biased the responses collected, we consider it a valuable lesson for similar database profiling activities in the future.
### Completion of the survey
The valid surveys were on average returned with 76% of the questions completed, and this was consistent across the three respondent groups. Looking at the survey by category, the Data source and Record system sections were the only ones that fell below the 75% level (many sections were returned with above 90% of the questions completed). The main reason for this was the variation in the skip logic for individual respondents in these sections of the questionnaire (Supplementary data file, Table S4). There was little difference between the sites which we had identified as eligible to participate in the use-cases and those we had identified as not eligible (77% use-case sites versus 75% non-use-case sites).
### Results micro-level data
The greater the number of coding systems in use, the harder it will be to achieve semantic interoperability; therefore, the micro-level data collection was primarily concerned with collecting information about the coding systems the repositories used. We found that the WHO International Classification of Disease (ICD)20 was the most common coding system used by 71% (n = 39) of respondents. ICD-10 (n = 32) was used by 82%; 13% (n = 5) used ICD-9; 23% (n = 9) used an ICD modification and 5% (n = 2) did not respond (Supplementary data file, Table S5).
The second most used coding system was the WHO International Classification of Primary Care (ICPC), this was used by 20% (n = 11) of respondents. Eighty-two percent (n = 9) of those using ICPC used ICPC-2 and 18% (n = 2) used ICPC-1, none reported using an ICPC modification (Supplementary data file, Table S6). The third most common coding system was the Systematised Nomenclature of Medicine (SNOMED),21 which was used by 13% (n = 7) of all the respondents; 44% (n = 4) of those using SNOMED used the Clinical Terms version; 33% (n = 3) used the Reference Terminology version and 22% (n = 2) did not respond (Supplementary data file, Table S7).
One of the least common coding systems used was the Read Coding system [version2 – 5-byte and the Clinical Terms Version 3 (CTv3)] and these were only used by the seven UK repositories. They represented 9% (n = 5) and 4% (n = 2) of all respondents, respectively; 87% (n = 50) did not respond (Supplementary data file, Table S8).
The survey highlighted that there was a great variety in the number of drug dictionaries utilised by the repositories, and this is one potential barrier to achieving semantic interoperability. Sixty percent (n = 33) of respondents said that they have a coding system for drugs (Primary care 83%, n = 24; Cancer 33%, n = 5; Genetic 36%, n = 4). Of these, 76% (n = 25) use the Anatomical Therapeutic Chemical classification system;22 9% (n = 3) use Multilex; 12% (n = 4) responded ‘other’ and 3% (n = 1) responded ‘no data’ (Supplementary data file, Table S9). We were interested to know whether it was possible to extract information about the administration of drugs, and we asked respondents if it was possible to extract data about daily dose and administration route from their database. Only around one-third of the primary care and cancer registries could extract data of this nature, while none of the genetic databases held this information (Supplementary data file, Table S10).
The survey was designed to assess what systems the registries had in place to achieve interoperability and to ensure data quality. Thirty-four percent (n = 19) of respondents had no system at all; only 5% (n = 3) used Health Level-7 (HL-7), an international interoperability organisation whose Reference Information Model underpins much interoperability in healthcare; 2% (n = 1) used the Clinical Data Interchange Standards Consortium (CDISC);23 none used the Biomedical Research Integrated Domain Group (BRIDG)24 and 52% (n = 29) used an ‘In-house or other’ system (Table 1). Nearly all (93%, n = 52) of the respondents either had no system in place, used an in-house system or provided no data.
Table 1 Systems used to ensure data quality
### Data collection meso- and socio-cultural levels
Data extraction at this level was concerned with record level issues. The majority of respondents (82%; n = 45) have the ability to extract data in standardised formats such as Comma Separated Values, Excel and full text. All of the respondents have at least one appropriate format. The data collected have a wide application and this is reflected by the diverse nature of the information stored within these repositories which ranges from research to mortality records (Table 2).
Table 2 The aims of the data source for the data collected
The respondents reported that socio-cultural influences had a small but significant impact on the validity of their data. These factors included ethical, religious and legal factors (Table 3); these might delay or prevent participation in the TRANSFoRm studies.
Table 3 Socio-cultural influences on the validity of the data
Socio-cultural factors, which include legal and ethical constraints, influences on diagnosis, and organisational components of the health system from which the data originate, are often barriers to conducting research. In summary, 71% (n = 39) of respondents use ICD and 20% (n = 11) use ICPC; however, 86% (n = 48) do not use one of the three main systems for ensuring data quality, and 29% opt instead for an in-house system. Very few sites are adopting national standards for interoperability in linking data. Whilst multiple drug dictionaries were used, 66% (n = 10) of cancer repositories did not use one. Extract formats for data were standardised and only 3% (n = 6) of respondents chose to use a non-standard format. Data were not forthcoming from eHR vendors (n = 40). Repositories had a broad range of applications for their data, the most important being research (49%, n = 51). The most common socio-cultural influences that could potentially affect the validity of their data were ethical (10%, n = 7) and social (10%, n = 7) factors, although 49% (n = 36) reported no social issues at all.
### Difference in response depending on eligibility
Data sources that were not use-case eligible tended to produce far fewer positive responses than those that were eligible. Overall, the repositories identified as potentially being use-case eligible made 2098 positive responses to questionnaire items compared with 268 from non-use-case eligible data sources (OR: 4.59; 95% CI: 3.93–5.35; p < 0.008); for genetic databases, the respective figures were 380:44 (OR: 6.13; 95% CI: 4.25–8.85; p < 0.008) and for cancer registries, they were 553:44 (OR: 5.87; 95% CI: 4.13–8.34; p < 0.008); the full results are in Table 4.
Table 4 Positive responses to the questionnaire sections – comparing non-use case eligible and use-case-eligible data sources
### Data repositories capable of participation in the survey
Of the 56 valid responses, there were 15 pairs eligible to complete one or other of the use-cases. The 56 valid responses were made up of 29 databases of routine primary care data, 12 genetic databases and 15 cancer registries. From the valid responses, we were able to identify the location of databases with the potential to participate in the research studies. We identified five locations for linking primary care databases with genetic databases and 10 for linking primary care databases with cancer registries. The 15 eligible sites were spread across 11 countries (Supplementary data file, Table S11).
### Details of the eligible sites
The sites had a total of around 1.5 million potential patients eligible to participate in this research: over 30,000 in the genetics of diabetes use-case and over 1 million in the Barrett's disease, oesophageal cancer and prescription of 30 medicines used to treat dyspepsia use-case. The country of origin, the website for these sites, the main coding system used and the expected delay in ethical approval are shown in Tables 5 and 6. We sometimes found contradictions: data sources indicated that they could supply linked data, but several of the participants were, on closer questioning, only linking on a pilot basis; we have shaded in grey the sites which are not currently active. The outcome of this process is that we have identified one fully functional location able to run the diabetes use-case (Table 5) and five pairs of locations able to run the Barrett's disease use-case (Table 6). The one able to run the diabetes use-case is the Wellcome Type 2 Diabetes study group in Scotland. The five locations that can run the second use-case are: Finland, Germany (Bremen), Norway, UK (General Practice Research Database) and UK, Scotland (pilot).
Table 5 The eligible sites for conducting the diabetes TRANSFoRm use-cases
Table 6 The eligible sites for conducting Barrett’s disease TRANSFoRm use-cases
## DISCUSSION
### Principal findings
The TIRRE survey has been completed by 56 data repositories across Europe and six outside the EU. We have developed a usable instrument which can assess their potential to take part in linked data research. There were no equivalent international sites available to conduct this type of research. A challenge was to get databases to complete the questionnaire; when we did get a response, the completeness of information gathered was high and proved useful in identifying their potential to participate in linked research. Meso- and macro-level questions were important discriminators between use-case and non-use-case eligible data sources. There are currently no other survey instruments available to enable brokerage between databases potentially willing to participate in research. Micro-level questions informed us about the data and its granularity.
### Implications of the findings
The TIRRE survey is the first step towards assessing the potential of a database for linkage. It can identify a data source's suitability in terms of data availability and readiness to participate in a study. Whilst the initial focus of TIRRE was on linking data sources (which were important and consistent), the meso- and macro-level factors generally had higher ORs for predicting use-case eligibility.
Different coding systems have varying levels of granularity. For example, at the time of this study, neither ICD-10 nor ICPC differentiated between types of diabetes according to the latest WHO classification. ICD-10 differentiates insulin-dependent and non-insulin-dependent diabetes, rather than the Type 1 (insulin for survival) and Type 2 diabetes used in the latest classifications, although we acknowledge that this has been updated in later releases.
### Comparison with the literature
It is possible to draw comparisons between the complexity of this task and existing successful projects that involve linking data. However, the successful data repositories in the UK have all been based on a single vendor of GP eHR system. The Clinical Practice Research Datalink previously extracted data only from a single vendor, In-Practice Systems, though it is expanding this to all UK vendors;25 Q-Research is based on the EMIS system,26 and other UK research networks (The Health Improvement Network27 and ResearchOne28) follow the same pattern. The only exception to this in the UK is the Royal College of General Practitioners (RCGP) Research and Surveillance Centre (RSC);29 this network extracts data from all the different brands of medical record system. It has published a cohort profile about patients with diabetes in the RCGP RSC database, one of the TRANSFoRm use-case areas.30 Notwithstanding the RCGP RSC success, the relatively simple task of linking data from this small number of brands of computer system within the UK has proved challenging, both in terms of creating a summary care record31 and in developing a common data extraction system.32
### Limitations of the method
Any initial screening process will need to be followed up by a detailed assessment of whether the dataset needed for a given study can be elicited from the data repositories. There was no real incentive for data repositories to supply us with the data required, as there was not a reciprocal offer of benefit. As a consequence, our results inevitably underestimate the number of sites where this type of research can be conducted. We propose that future projects should consider including incentives in their budget. An effective method to reduce the impact of this self-selection bias could be to approach databases with a partially completed survey (using information available in the public domain) in order to encourage participation. Furthermore, the collected data could be shared publicly as a metadata registry that would facilitate advertising data offered by organisations for prospective studies. We also recommend limiting surveys to 30–40 questions to improve the response rate.
### Call for further research
We need to conduct test-retest studies to assess the reliability of the survey instrument. The reliability test could be carried out by repeating the data collection after a period of time. While this would help to validate the instrument, it would also potentially reduce any bias introduced by the specific person responding to the survey. We should conduct simulated and real studies with data extractions to test its validity. However, conducting real studies may be affected by the availability of funding. Alternatively, we can promote reuse of the instrument in other projects within the research area.
## CONCLUSIONS
A large, complex set of data is needed to know whether it will be possible to link primary care data with either a disease registry or a genetic database. This complex set of data can be classified either by level of granularity or as a business or data requirement.
The TIRRE instrument is a useful tool that can be used to assess general suitability and readiness to participate in linked research studies. With increased use, it is likely that TIRRE will evolve further, but its use needs to be embedded in a concrete ‘offer’ and business case rather than a one-off research study.
## Acknowledgements
We thank Paul van Royen for his comments on the manuscript; IMIA and EFMI for supporting their primary health care informatics working groups; and Antonis Ntasioudis for his contribution to this research. TRANSFoRm is supported by the European Commission – DG INFSO (FP7 2477).
## Appendix 1 Details of the TRANSFoRm work tasks
Table S1 Categories of data collection and min-to-max number of questions; skip logic reduces the number of questions that each type of respondent might answer
Table S2 Number of contacts and valid responses
Table S3 Number of contacts and valid responses
Table S4 Completion of the questionnaire
Table S5 Coding systems information (ICD)
Table S6 Coding systems information (ICPC)
Table S7 Coding systems information (SNOMED)
Table S8 Reading coding systems usage (CTv3 and Read codes version used)
Table S9 Coding systems for drugs
Table S10 Extraction of drug information from data provided
Table S11 Location of respondents and eligible sites
# Changeset 13679
Timestamp:
10/27/11 11:56:07 (10 years ago)
Message:
Minor manual revisions (in progress commit).
File:
1 edited
r13665:
 \begin{document}
 \title{A Manual for Armed Bear Common Lisp}
-\date{October 22, 2011}
+\date{October 27, 2011}
 \author{Mark~Evenson, Erik~Huelsmann, Alessio~Stalla, Ville~Voutilainen}
 \subsection{Version}
-This manual corresponds to abcl-1.0.0-dev, released on October 22, 2011.
+This manual corresponds to abcl-1.0.0, released on October 22, 2011.
 \subsection{License}
 abcl.jar'' or possibly abcl-1.0.0.jar'' if one is using a versioned package
 from your system vendor. This byte archive can be executed
-under the control of a suitable JVM by using the -jar'' option to parse the
-manifest, and select the named class (\code{org.armedbear.lisp.Main}) for
-execution, viz:
+under the control of a suitable JVM \footnote{Java Virtual Machine} by using
+the -jar'' option to parse the manifest, and select the class named therein
+\code{org.armedbear.lisp.Main}'' for execution, viz:
 \begin{listing-shell}
 \end{listing-shell}
-N.b. for the proceeding command to work, the java'' executable needs
+\emph{N.b.} for the proceeding command to work, the java'' executable needs
 to be in your path.
 as SLIME \footnote{SLIME is the Superior Lisp Mode for Interaction under
 Emacs}) the invocation is wrapped in a Bourne shell script
-under UNIX or a DOS command script under Windows so that ABCL may be
+under \textsc{UNIX} or a \textsc{DOS} command script under Windows so that ABCL may be
 executed simply as:
 \section{Initialization}
-If the ABCL process is started without the --noinit'' flag, it
+If the \textsc{ABCL} process is started without the --noinit'' flag, it
 attempts to load a file named .abclrc'' located in the user's home
 directory and then interpret its contents.
MTM/beta - Maple Help
MTM
beta
Beta function
Calling Sequence beta(X,Y)
Parameters
X, Y - array or expression
Description
• beta(X,Y) calls the Beta command.
• If X or Y is an rtable, table, set, or list, calls are made in a pairwise, or elementwise, way.
Examples
> $\mathrm{with}\left(\mathrm{MTM}\right):$
> $X≔⟨1,2,3⟩$
${X}{≔}\left[\begin{array}{c}{1}\\ {2}\\ {3}\end{array}\right]$ (1)
> $Y≔⟨4,5,6⟩$
${Y}{≔}\left[\begin{array}{c}{4}\\ {5}\\ {6}\end{array}\right]$ (2)
> $\mathrm{\beta }\left(X,Y\right)$
$\left[\begin{array}{c}\frac{{1}}{{4}}\\ \frac{{1}}{{30}}\\ \frac{{1}}{{168}}\end{array}\right]$ (3)
Compatibility
• The MTM[beta] command was introduced in Maple 2021.
• For more information on Maple 2021 changes, see Updates in Maple 2021. |
# How do you multiply (x + 5)(x - 4)?
Jul 8, 2015
${x}^{2} + x - 20$
#### Explanation:
To multiply two polynomials, you can use the FOIL (First, Outer, Inner, Last) method.
$\left(x + 5\right) \left(x - 4\right) = \underbrace{x \cdot x}_{\text{First}} + \underbrace{x \cdot (-4)}_{\text{Outer}} + \underbrace{5 \cdot x}_{\text{Inner}} + \underbrace{5 \cdot (-4)}_{\text{Last}}$
$\left(x + 5\right) \left(x - 4\right) = {x}^{2} - 4 x + 5 x - 20$
Simplify the last equation to get the answer:
${x}^{2} + x - 20$ |
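If you want to check the expansion by machine, here is a small sketch using the sympy library (assumed to be installed); it is just a verification, not part of the FOIL method itself.

```python
from sympy import symbols, expand

x = symbols('x')
print(expand((x + 5) * (x - 4)))  # x**2 + x - 20, matching the result above
```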
Q:
Find the odd number in the following number series? 24, 4, 13, 41, 151, 640
A) 4 B) 640 C) 41 D) 13
Explanation:
The given number series follows the pattern that,
24×0 + 4 = 4
4×1 + 9 = 13
13×2 + 16 = 42 (not 41)
42×3 + 25 = 151
151×4 + 36 = 640
Therefore, the odd number in the given series is 41
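A quick way to verify this kind of answer is to regenerate the series from the stated rule and compare it with the given terms. The Python sketch below does that for this series; the rule (multiply by 0, 1, 2, ... and add 4, 9, 16, ...) is the one described above.

```python
given = [24, 4, 13, 41, 151, 640]

# Rebuild the series: next term = previous * k + (k + 2) ** 2 for k = 0, 1, 2, ...
expected = [given[0]]
for k in range(len(given) - 1):
    expected.append(expected[-1] * k + (k + 2) ** 2)

for g, e in zip(given, expected):
    print(g, e, "<- odd one out" if g != e else "")
```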
Q:
7, 11, 19, 35, ?
Find the next number in the given number series?
A) 131 B) 94 C) 83 D) 67
Explanation:
Here the given series 7, 11, 19, 35, ?
follows a pattern of (multiply by 2, then subtract 3), i.e.,
7
7 x 2 - 3 = 11
11 x 2 - 3 = 19
19 x 2 - 3 = 35
35 x 2 - 3 = 67
67 x 2 - 3 = 131
Hence the next number in the given number series is 67.
Q:
Find the Odd Number in the given Number Series?
3, 6.5, 14, 29, 64, 136
A) 14 B) 64 C) 29 D) 136
Explanation:
The Given number series 3, 6.5, 14, 29, 64, 136 follows a pattern that
3
3 x 2 + 0.5 = 6.5
6.5 x 2 + 1 = 14
14 x 2 + 2 = 30 $\ne$ 29
30 x 2 + 4 = 64
64 x 2 + 8 = 136
Thus the wrong number in the series is 29
Q:
What comes next in this sequence
196, 169, 144, 121, 100, 81, ?
A) 77 B) 74 C) 67 D) 64
Explanation:
The given number series follows a pattern that
196, 169, 144, 121, 100, 81, ?
-27 -25 -23 -21 -19 -17
=> 81 - 17 = 64
Therefore, the series is 196, 169, 144, 121, 100, 81, 64.
Q:
Find the missing number in the series
1, 6, 15, ?, 45, 66, 91
A) 24 B) 28 C) 32 D) 26
Explanation:
Here the given series 1, 6, 15, ?, 45, 66, 91 follows a
pattern in which the differences are +5, +9, +13, +17, +21, +25.
So the difference increases by 4 with every term.
The missing number in the series will be 15 + (9 + 4) = 15 + 13 = 28.
Q:
Find the sum of the Arithmetic Series upto 36 terms
2, 5, 8, 11,...
A) 3924 B) 1962 C) 1684 D) 1452
Explanation:
Arithmetic Series ::
An Arithmetic Series is a series of numbers in which each term increases by a constant amount.
How to find the sum of the Arithmetic Sequence or Series for the given Series ::
When the series contains a large number of terms, it is impractical to add them manually. You can quickly find the sum of any arithmetic sequence by multiplying the average of the first and last term by the number of terms in the sequence.
That is given by Sn = n(a1 + an)/2, where n = number of terms, a1 = first term, an = last term.
Here the last term is given by an = a1 + (n - 1)d, where d = common difference.
Now the given Arithmetic Series is
2, 5, 8, 11,...
Here a1 = 2, d = 3, n = 36
Now, a36 = 2 + (36 - 1) x 3 = 107
Now, the sum to 36 terms is given by S36 = 36 x (2 + 107)/2 = 1962
Therefore, the sum to 36 terms of the series 2, 5, 8, 11,... is 1962.
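A short Python sketch (standard library only) confirming the closed-form sum against a brute-force addition:

```python
a1, d, n = 2, 3, 36                      # first term, common difference, terms

a_n = a1 + (n - 1) * d                   # last term: 2 + 35*3 = 107
s_n = n * (a1 + a_n) // 2                # closed form: n*(a1 + an)/2 = 1962

assert s_n == sum(a1 + k * d for k in range(n))  # brute-force check
print(a_n, s_n)                          # 107 1962
```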
Q:
Find the missing number?
15, 30, ? , 40, 8, 48
A) 21 B) 8 C) 19 D) 10
Explanation:
The given number series is 15, 30, ?, 40, 8, 48
Here the number series follows a pattern that
15
15 x 2 = 30
30 $÷$ 3 = 10
10 x 4 = 40
40 $÷$ 5 = 8
8 x 6 = 48
Hence, the missing number in the series is 10.
Q:
Find the Odd One Out?
3 4 14 48 576 27648
A) 4 B) 14 C) 48 D) 27648
Explanation:
Given Number series is :
3 4 14 48 576 27648
It follows a pattern that,
3
4
4 x 3 = 12 but not 14
12 x 4 = 48
48 x 12 = 576
576 x 48 = 27648
Hence, the odd one in the given Number Series is 14.
Q:
Find the Odd One Out of the following Number Series?
3, 14, 36, 67, 113, 168
A) 14 B) 36 C) 67 D) 113
Explanation:
Here the given number series is 3, 14, 36, 67, 113, 168
It follows a pattern that
3
3 + 1 x 11 = 3 + 11 = 14
14 + 2 x 11 = 14 + 22 = 36
36 + 3 x 11 = 36 + 33 = 69
69 + 4 x 11 = 69 + 44 = 113
113 + 5 x 11 = 113 + 55 = 168
Hence, the wrong number is 67. |
Question
Express the complex number $$\dfrac{2+i}{3-4i}$$ in $$a+ib$$ form.
Solution
Given the complex number $$\dfrac{2+i}{3-4i}$$. Multiplying the numerator and denominator by the conjugate of the denominator, $$\dfrac{2+i}{3-4i}=\dfrac{(2+i)(3+4i)}{3^2-16i^2}=\dfrac{2+11i}{25}=\dfrac{2}{25}+i\dfrac{11}{25}.$$ This is the required $$a+ib$$ form.
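As a numerical cross-check, Python's built-in complex type gives the same real and imaginary parts:

```python
z = (2 + 1j) / (3 - 4j)
print(z)                 # (0.08+0.44j)
print(2 / 25, 11 / 25)   # 0.08 0.44, i.e. a = 2/25 and b = 11/25
```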
How do you find the GCF of 42 and 56?
Jan 4, 2017
The GCF of $42$ and $56$ is $14$.
Explanation:
One method for finding the GCF of two numbers goes as follows:
• Divide the larger number by the smaller to get a quotient and remainder.
• If the remainder is $0$ then the smaller number is the GCF.
• Otherwise repeat with the remainder and the smaller number.
So in our example:
$\frac{56}{42} = 1 \text{ }$ with remainder $14$
$\frac{42}{14} = 3 \text{ }$ with remainder $0$
So the GCF of $56$ and $42$ is $14$ |
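The division-with-remainder procedure above is the Euclidean algorithm; a small Python sketch of it:

```python
def gcf(a: int, b: int) -> int:
    """Greatest common factor by repeated division with remainder."""
    while b != 0:
        a, b = b, a % b   # replace (number, divisor) with (divisor, remainder)
    return a

print(gcf(56, 42))  # 14
print(gcf(42, 56))  # 14 (order does not matter)
```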
#### Elasticity of supply calculator
5.1 Price elasticity of demand and price elasticity of supply
Price elasticity of demand calculator | Good Calculators
Price elasticity of supply: calculation methods, types, factors
Price elasticity of supply | Intelligent Economist
Price elasticity of supply | formula | calculator | example
Price elasticity of supply
Using calculus to calculate price elasticity of supply
How to calculate price elasticity of supply (PES) | YouTube
4.1 Calculating elasticity – Principles of Microeconomics
How to calculate elasticity of supply | Bizfluent
How to calculate point price elasticity of demand with examples
Price elasticity of supply and demand (PED or ED) calculator
Microeconomics calculator
Elasticity of supply (video) | Khan Academy
Calculating price elasticities using the midpoint formula
Price elasticity of supply | Boundless Economics
# Quantum Foundations
This series consists of talks in the area of Foundations of Quantum Theory. Seminar and group meetings will alternate.
## Seminar Series Events/Videos
Currently there are no upcoming talks in this series.
## Physics, Logic and Mathematics of Time
Friday May 02, 2014
Consider discrete physics with a minimal time step taken to be tau. A time series of positions q, q', q'', ... has two classical observables: position (q) and velocity (q'-q)/tau. They do not commute, for observing position does not force the clock to tick, but observing velocity does force the clock to tick. Thus if VQ denotes first observe position, then observe velocity, and QV denotes first observe velocity, then observe position, we have
VQ: (q'-q)q/tau
QV: q'(q'-q)/tau
## Categories of Convex Sets and C*-Algebras
Tuesday Apr 15, 2014
The start of the talk will be an outline how the ordinary notions of quantum theory translate into the category of C*-algebras, where there are several possible choices of morphisms. The second half will relate this to a category of convex sets used as state spaces.
## Beyond local causality: causation and correlation after Bell
Tuesday Apr 08, 2014
There is now a remarkable mathematical theory of causation. But applying this theory to a Bell scenario implies the Bell inequalities, which are violated in experiment. We alleviate this tension by translating the basic definitions of the theory into the framework of generalised probabilistic theories. We find that a surprising number of results carry over: the d-separation criterion for conditional independence (the no-signalling principle on steroids), and even certain quantitative limits on correlations.
## Incompatibility of observables in quantum theory and other probabilistic theories
Tuesday Apr 01, 2014
We introduce a new way of quantifying the degrees of incompatibility of two observables in a probabilistic physical theory and, based on this, a global measure of the degree of incompatibility inherent in such theories. This opens up a flexible way of comparing probabilistic theories with respect to the nonclassical feature of incompatibility. We show that quantum theory contains observables that are as incompatible as any probabilistic physical theory can have.
## Ambiguities in order-theoretic formulations of thermodynamics
Friday Mar 21, 2014
Since the 1909 work of Carathéodory, an axiomatic approach to thermodynamics has gained ground which highlights the role of the binary relation of adiabatic accessibility between equilibrium states. A feature of Carathéodory's system is that the version therein of the second law contains an ambiguity about the nature of irreversible adiabatic processes, making it weaker than the traditional Kelvin-Planck statement of the law.
## A histories perspective on bounding quantum correlations
Tuesday Mar 18, 2014
There has recently been much interest in finding simple principles that explain the particular sets of experimental probabilities that are possible with quantum mechanics in Bell-type experiments. In the quantum gravity community, similar questions had been raised, about whether a certain generalisation of quantum mechanics allowed more than quantum mechanics in this regard. We now bring these two strands of work together to see what can be learned on both sides.
## Seeing is Believing: Direct Observation of a General Quantum State
Monday Mar 17, 2014
Central to quantum theory, the wavefunction is a complex distribution associated with a quantum system. Despite its fundamental role, it is typically introduced as an abstract element of the theory with no explicit definition. Rather, physicists come to a working understanding of it through its use to calculate measurement outcome probabilities through the Born Rule. Tomographic methods can reconstruct the wavefunction from measured probabilities.
## Psi-epistemic models are exponentially bad at explaining the distinguishability of quantum states
Tuesday Feb 18, 2014
The status of the quantum state is perhaps the most controversial issue in the foundations of quantum theory. Is it an epistemic state (representing knowledge, information, or belief) or an ontic state (a direct reflection of reality)? In the ontological models framework, quantum states correspond to probability measures over more fundamental states of reality. The quantum state is then ontic if every pair of pure states corresponds to a pair of measures that do not overlap, and is otherwise epistemic.
## Does the Quantum Particle know its own Energy?
Tuesday Jan 21, 2014
If a wave function does not describe microscopic reality, then what does? Reformulating quantum mechanics in path-integral terms leads to a notion of "precluded event" and thence to the proposal that quantal reality differs from classical reality in the same way as a set of worldlines differs from a single worldline. One can then ask, for example, which sets of electron trajectories correspond to a Hydrogen atom in its ground state and how they differ from those of an excited state.
## Noncontextuality without determinism and admissible (in)compatibility relations: revisiting Specker's parable.
Tuesday Jan 14, 2014
The purpose of this talk is twofold: First, following Spekkens, to motivate noncontextuality as a natural principle one might expect to hold in nature and introduce operational noncontextuality inequalities motivated by a contextuality scenario first considered by Ernst Specker. These inequalities do not rely on the assumption of outcome-determinism which is implicit in the usual Kochen-Specker (KS) inequalities.
Simple and Compound Interest
Use A = P(1 + r)^t to solve for A.
Simple and Compound Interest
Suppose you are re-negotiating an allowance with your parents. Currently you are given $25 per week, but it is the first of June, and you have started mowing the lawn and taking out the trash every week, and you think your allowance should be increased. Your father considers the situation and makes you the following offer: "I tell you what, son. I will give you three options for your allowance, you tell me which you would like."
"Option A: You keep the $25 per week."
"Option B: You take $15 this week, then $16 next week, and so on. I'll continue adding $1 per week until New Year's."
"Option C: I'll give you 1 penny this week, and then double your allowance each week until the first of October, then keep it at that rate."
Which option would you choose?
Simple and Compound Interest
Simple interest is interest which accrues based only on the principal of an investment or loan. The simple interest is calculated as a percent of the principal.
Simple Interest: \begin{align*}i = p \cdot r \cdot t\end{align*}. The variable i is the interest, p represents the principal amount, r represents the interest rate, and t represents the amount of time (in years) the interest has been accruing.
For example, say you borrow $2,000 from a family member, and you insist on repaying with interest. You agree to pay 5% interest, and to pay the money back in 3 years.
The interest you will owe will be 2000(0.05)(3) = $300. This means that when you repay your loan, you will pay $2,300. Note that the interest you pay after 3 years is not 5% of the original loan, but 15%, as you paid 5% of $2,000 each year for 3 years. Now let's consider an example in which interest is compounded. Say that you invest $2,000 in a bank account, and it earns 5% interest annually. How much is in the account after 3 years?
Compound interest: \begin{align*}A(t) = p \cdot (1 +r)^{t}\end{align*}
Here, A(t) is the amount in the account after t years, p is the principal (the initial investment), and r is the interest rate. Note that we use \begin{align*}(1 + r)\end{align*} instead of just \begin{align*}r\end{align*}, so we can find the entire amount in the account, not just the interest paid.
\begin{align*}A(t) = 2000 \cdot (1.05)^{3}\end{align*}
After three years, you will have $2315.25 in the account, which means that you will have earned $315.25 in interest.
Compounding results in more interest because the principal on which the interest is calculated increased each year. Another way to look at it is that compounding creates more interest because you are earning interest on interest, and not just on the principal.
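To make the comparison concrete, here is a minimal Python sketch (not part of the original lesson) that evaluates both formulas for the $2,000, 5%, 3-year example above.

```python
def simple_interest(p, r, t):
    """Interest only: i = p * r * t."""
    return p * r * t

def compound_amount(p, r, t):
    """Total in the account with annual compounding: A = p * (1 + r) ** t."""
    return p * (1 + r) ** t

p, r, t = 2000, 0.05, 3
print(round(simple_interest(p, r, t), 2))       # 300.0  -> repay 2300 in total
print(round(compound_amount(p, r, t), 2))       # 2315.25 in the account
print(round(compound_amount(p, r, t) - p, 2))   # 315.25 of interest earned
```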
Examples
Example 1
Earlier, you were asked which allowance option you would choose.
Assume you want to make the most money possible by the end of the year. Assume also that there are 24 weeks left.
Option A = \begin{align*}25 \cdot 24 = \$600\end{align*} total
Option B = \begin{align*}15 + 16 + 17 + \ldots + 39 = \$609\end{align*} total
Option C (assuming 16 weeks until Oct.) = \begin{align*}1 \cdot (2^{16}) = \$655.36\end{align*} each week after Oct 1.
It is entirely possible that dear old dad didn't take exponential growth seriously enough; he may need a second job!
Example 2
Use the formula for compound interest to determine the amount of money in an investment after 20 years, if you invest $2000, and the interest rate is 5% compounded annually.
The investment will be worth $5306.60.
A(t) = P(1 + r)^t
A(20) = 2000(1.05)^20
A(20) = $5306.60
Example 3
How long will it take for $2000, invested at 5% compounded annually, to reach $7,000?
If we graph the function A(t) = 2000(1.05)^t, we can see the values for any number of years.
If you graph this function using a graphing calculator, you can determine the value of the investment by tracing along the function, or by pressing <TRACE> on your graphing calculator and then entering an x value.
You can also choose an investment value you would like to reach, and then determine the number of years it would take to reach that amount. Find the intersection of the exponential function with the line y = 7000.
You can see here that the line and the curve intersect at a little less than x = 26. Therefore it would take almost 26 years for the investment to reach $7000.
Example 4
What is the value of an investment after 20 years, if you invest $2000, and the interest rate is 5% compounded continuously?
The more often interest is compounded, the more it increases, but there is a limit. Each time you increase the number of compoundings, you decrease the fraction of the annual interest that is applied to each compounding. Eventually, the differences become so small as to be negligible. This is known as continuous compounding.
The function A(t) = Pe^{rt} is the formula we use to calculate the amount of money when interest is continuously compounded, rather than interest that is compounded at discrete intervals, such as monthly or quarterly.
A(t) = Pe^{rt}
A(20) = 2000e^{0.05(20)}
A(20) = 2000e^{1}
A(20) = $5436.56
Example 5
Compare the values of the investments shown in the table. If everything else is held constant, how does the compounding influence the value of the investment?
Principal r n t
a. $4,000 .05 1 (annual) 8
b. $4,000 .05 4 (quarterly) 8
c. $4,000 .05 12 (monthly) 8
d. $4,000 .05 365 (daily) 8
e. $4,000 .05 8760 (hourly) 8
Use the compound interest formula. For this example, n is the quantity that changes: \begin{align*}A(8) = 4000 \left (1 + \frac{.05} {n}\right )^{8n}\end{align*}
Principal r n t A
a. $4,000 .05 1 (annual) 8 $5909.82
b. $4,000 .05 4 (quarterly) 8 $5952.52
c. $4,000 .05 12 (monthly) 8 $5962.34
d. $4,000 .05 365 (daily) 8 $5967.14
e. $4,000 .05 8760 (hourly) 8 $5967.29
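As a cross-check on the table above, this short Python sketch (standard library only) recomputes each row and the continuous-compounding limit.

```python
import math

# $4000 at 5% for 8 years, compounded n times per year.
P, r, t = 4000, 0.05, 8
for n in (1, 4, 12, 365, 8760):
    A = P * (1 + r / n) ** (n * t)
    print(f"n = {n:5d}: A = {A:.2f}")

# Continuous compounding, A = P * e^(r*t), is the upper limit (about 5967.30).
print(f"continuous: A = {P * math.exp(r * t):.2f}")
```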
Consider the graph of the function \begin{align*}f(x) = 4000 \left (1 + \frac{.05} {x}\right )^{8x}\end{align*}, which gives the value of the investment as a function of the number of compoundings per year, x.
The graph seems to indicate that the function has a horizontal asymptote at $6000. However, if we zoom in, we can see that the horizontal asymptote is closer to $5967. What does this mean? It means that for an investment of $4000, at 5% interest, for 8 years, compounding more and more frequently will never result in more than about $5968.00.
Example 6
Determine the value of each investment.
1. You invest $5000 in an account that gives 6% interest, compounded monthly. How much money do you have after 10 years?
$5000, invested for 10 years at 6% interest, compounded monthly:
\begin{align*}A(t) = P \left (1 + \frac{r} {n}\right )^{nt}\end{align*}
\begin{align*}A(10) = 5000 \left (1 + \frac{.06} {12}\right )^{12\cdot 10}\end{align*}
\begin{align*}A(10) = 5000 \left (1.005\right )^{120}\end{align*}
\begin{align*}A(10) = \$9096.98\end{align*}
2. You invest $10,000 in an account that gives 2.5% interest, compounded quarterly. How much money do you have after 10 years?
$10,000, invested for 10 years at 2.5% interest, compounded quarterly. Quarterly compounding means that interest is compounded four times per year, so in the equation, n = 4.
\begin{align*}A(t) = P \left (1 + \frac{r} {n}\right )^{nt}\end{align*}
\begin{align*}A(10) = 10000 \left( 1 + \frac{.025} {4}\right )^{4 \cdot 10}\end{align*}
\begin{align*}A(10) = 10000 (1.00625)^{40}\end{align*}
\begin{align*}A(10) = \$12,830.30\end{align*}
In each example, the value of the investment after 10 years depends on three quantities: the principal of the investment, the number of compoundings per year, and the interest rate.
Example 7
How long will it take $2000 to grow to $25,000 at a 5% interest rate?
It will take about 50 years:
A(t) = Pe^{rt}
25,000 = 2000e^{.05t}
12.5 = e^{.05t} (divide both sides by 2000)
ln 12.5 = ln e^{.05t} (take the ln of both sides)
ln 12.5 = .05t ln e (use the power property of logs)
ln 12.5 = .05t × 1 (ln e = 1)
ln 12.5 = .05t (isolate t)
\begin{align*}t = \frac{\ln 12.5} {.05} \approx 50.5\end{align*}
Review
1. What is the formula for figuring simple interest?
2. What is the formula for figuring compound interest?
3. If someone invested $4500.00, how much would they have earned after 4 years, at a simple interest rate of 2%?
4. Kyle opened up a savings account in July. He deposited $900.00. The bank pays a simple interest rate of 5% annually. What is Kyle's balance at the end of 4 years?
5. After having an account for 6 years, how much money does Roberta have in the account, if her original deposit was $11,000, and her bank's yearly simple interest rate is 8.4%?
6. Tom called his bank today to check on his savings account balance. He was surprised to find a balance of $6600, when he started the account with just $5000.00 8 years ago. Based on this data, what percentage rate has the bank been paying on the account?
7. Julie opened a 4% interest account with a bank that compounds the interest quarterly. If Julie were to deposit $3000.00 into the account at the beginning of the year, how much could she expect to have at the end of the year?
8. Susan has had a savings account for a few years now. The bank has been paying her simple interest at a rate of 5%. She has earned $45.00 on her initial deposit of $300.00. How long has she had the account?
9. What is the balance on a deposit of $818.00 earning 5% interest compounded semiannually for 5 years?
10. Karen made a decent investment. After 4 years she had $3250.00 in her account and expects to have $16,250 after another 4 years. Her savings account is a compounding interest account. How much was her original deposit?
11. What is the yearly simple interest rate that Ken earns, if after only three months he earned $16.00 on an initial $800.00 deposit?
12. Write an expression that correctly represents the balance on an account after 7 years, if the account was compounded yearly at a rate of 5%, with an initial balance of $1000.00.
13. Caryl gives each of his three kids $3000.00, and they each use it to open up savings accounts at three different banks. Georgia, his oldest, is earning 3% annually at her bank. Kirk earns 7% annually at his bank. Lottie's bank is paying her an annual rate of 4%. At the end of 6 years, how much will each of them have in their respective accounts?
14. Kathy receives an inheritance check for $3000.00 and decides to put it in a savings account so she can send her daughter to college when she gets older. After looking, she finds an account that pays compounding interest annually at a rate of 14%. The balance on the account can be represented by a function, where x is the time in years. Write a function, and then use it to determine how much will be in the account at the end of 7 years.
15. Stan is late on his car payment. The finance company charges 3% interest per month it is late. His monthly payment is $300.00. What is the total amount he will owe if he pays the August first bill October first? (Assume he was able to make his September bill on time.)
Today, you get your first credit card. It charges 12.49% interest on all purchases and compounds that interest monthly. Within one day you max out the credit limit of $1,200.00.
1. If you pay the monthly accrued interest plus $50.00 towards the initial $1,200 amount every month, how much will you still owe at the end of the first 12 months?
2. How much will you have paid in total at the end of the year?
You are preparing for retirement. You invest $10,000 for 5 years, in an account that compounds monthly at 12% per year. However, unless this money is in an IRA or other tax-free vehicle, with zero inflation, you also have an annual tax payment of 30% on the earned interest.
2. Now take into account that the money loses 3% spending value per year due to inflation, how much is what you have saved really worth at the end of the 5 years?
To see the Review answers, open this PDF file and look for section 3.11.
Notes/Highlights Having trouble? Report an issue.
Color Highlighted Text Notes
Vocabulary Language: English
TermDefinition
Accrue Accrue means "increase in amount or value over time." If interest accrues on a bank account, you will have more money in your account. If interest accrues on a loan, you will owe more money to your lender.
Compound interest Compound interest refers to interest earned on the total amount at the time it is compounded, including previously earned interest.
Continuous compounding Continuous compounding refers to a loan or investment with interest that is compounded constantly, rather than on a specific schedule. It is equivalent to infinitely many but infinitely small compounding periods.
Principal The principal is the amount of the original loan or original deposit.
Rate The rate is the percentage at which interest accrues. |
# High-order numerical solution of viscous Burgers' equation using a Cole-Hopf barycentric Gegenbauer integral pseudospectral method
We present a novel, high-order numerical method to solve the viscous Burgers' equation with smooth initial and boundary data. The proposed method combines the Cole-Hopf transformation with well-conditioned integral reformulations to reduce the problem to either a single easy-to-solve integral equation with no constraints, or an integral equation subject to a single integral boundary condition. Fully exponential convergence rates are established in both spatial and temporal directions by embracing a full Gegenbauer collocation scheme based on Gegenbauer-Gauss (GG) mesh grids using apt Gegenbauer parameter values and the latest technology of barycentric Gegenbauer differentiation and integration matrices. The global collocation matrices of the reduced algebraic linear systems were derived, allowing direct linear system solvers to be used. Rigorous error and convergence analyses are presented in addition to two easy-to-implement pseudocodes of the proposed computational algorithms. We further show three numerical tests to support the theoretical investigations and demonstrate the superior accuracy of the method even when the viscosity parameter $\nu \to 0$, in the absence of any adaptive strategies typically required for adaptive refinements.
# fcla / src / section-SSLE.xml
Solving Systems of Linear Equations
We will motivate our study of linear algebra by considering the problem of solving several linear equations simultaneously. The word solve tends to get abused somewhat, as in solve this problem. When talking about equations we understand a more precise meaning: find all of the values of some variable quantities that make an equation, or several equations, true.
Systems of Linear Equations
Solving two (nonlinear) equations
Suppose we desire the simultaneous solutions of the two equations,
You can easily check by substitution that $x=\tfrac{\sqrt{3}}{2},\;y=\tfrac{1}{2}$ and $x=-\tfrac{\sqrt{3}}{2},\;y=-\tfrac{1}{2}$ are both solutions. We need to also convince ourselves that these are the only solutions. To see this, plot each equation on the $xy$-plane, which means to plot $(x,\,y)$ pairs that make an individual equation true. In this case we get a circle centered at the origin with radius 1 and a straight line through the origin with slope $\tfrac{1}{\sqrt{3}}$. The intersections of these two curves are our desired simultaneous solutions, and so we believe from our plot that the two solutions we know already are indeed the only ones. We like to write solutions as sets, so in this case we write the set of solutions as
In order to discuss systems of linear equations carefully, we need a precise definition. And before we do that, we will introduce our periodic discussions about Proof Techniques. Linear algebra is an excellent setting for learning how to read, understand and formulate proofs. But this is a difficult step in your development as a mathematician, so we have included a series of short essays containing advice and explanations to help you along. These will be referenced in the text as needed, and are also collected as a list you can consult when you want to return to re-read them. (Which is strongly encouraged!)
With a definition next, now is the time for the first of our proof techniques. So study . We'll be right here when you get back. See you in a bit.
System of Linear Equations
A system of linear equations is a collection of $m$ equations in the variable quantities $x_1,\,x_2,\,x_3,\ldots,x_n$ of the form, where the values of $a_{ij}$, $b_i$ and $x_j$, $1\leq i\leq m$, $1\leq j\leq n$, are from the set of complex numbers, $\complex{\null}$.
Don't let the mention of the complex numbers, $\complex{\null}$, rattle you. We will stick with real numbers exclusively for many more sections, and it will sometimes seem like we only work with integers! However, we want to leave the possibility of complex numbers open, and there will be occasions in subsequent sections where they are necessary. You can review the basic properties of complex numbers in , but these facts will not be critical until we reach .
Now we make the notion of a solution to a linear system precise.
Solution of a System of Linear Equations
A solution of a system of linear equations in $n$ variables, $\scalarlist{x}{n}$ (such as the system given in ), is an ordered list of $n$ complex numbers, $\scalarlist{s}{n}$ such that if we substitute $s_1$ for $x_1$, $s_2$ for $x_2$, $s_3$ for $x_3$, , $s_n$ for $x_n$, then for every equation of the system the left side will equal the right side, i.e., each equation is true simultaneously.
More typically, we will write a solution in a form like $x_1=12$, $x_2=-7$, $x_3=2$ to mean that $s_1=12$, $s_2=-7$, $s_3=2$ in the notation of . To discuss all of the possible solutions to a system of linear equations, we now define the set of all solutions. (So is now applicable, and you may want to go and familiarize yourself with what is there.)
Solution Set of a System of Linear Equations
The solution set of a linear system of equations is the set which contains every solution to the system, and nothing more.
Be aware that a solution set can be infinite, or there can be no solutions, in which case we write the solution set as the empty set, $\emptyset=\set{}$ (). Here is an example to illustrate using the notation introduced in and the notion of a solution ().
Notation for a system of equations
Given the system of linear equations, we have $n=4$ variables and $m=3$ equations. Also,
Additionally, convince yourself that $x_{1}=-2$, $x_{2}=4$, $x_{3}=2$, $x_{4}=1$ is one solution (), but it is not the only one! For example, another solution is $x_{1}=-12$, $x_{2}=11$, $x_{3}=1$, $x_{4}=-3$, and there are more to be found. So the solution set contains at least two elements.
We will often shorten the term system of linear equations to system of equations leaving the linear aspect implied. After all, this is a book about linear algebra.
Possibilities for Solution Sets
The next example illustrates the possibilities for the solution set of a system of linear equations. We will not be too formal here, and the necessary theorems to back up our claims will come in subsequent sections. So read for feeling and come back later to revisit this example.
Three typical systems
Consider the system of two equations with two variables,
If we plot the solutions to each of these equations separately on the $x_{1}x_{2}$-plane, we get two lines, one with negative slope, the other with positive slope. They have exactly one point in common, $(x_1,\,x_2)=(3,\,-1)$, which is the solution $x_1=3$, $x_2=-1$. From the geometry, we believe that this is the only solution to the system of equations, and so we say it is unique.
Now adjust the system with a different second equation,
A plot of the solutions to these equations individually results in two lines, one on top of the other! There are infinitely many pairs of points that make both equations true. We will learn shortly how to describe this infinite solution set precisely (see , ). Notice now how the second equation is just a multiple of the first.
One more minor adjustment provides a third system of linear equations,
A plot now reveals two lines with identical slopes, parallel lines. They have no points in common, and so the system has a solution set that is empty, $S=\emptyset$.
This example exhibits all of the typical behaviors of a system of equations. A subsequent theorem will tell us that every system of linear equations has a solution set that is empty, contains a single solution or contains infinitely many solutions (). yielded exactly two solutions, but this does not contradict the forthcoming theorem. The equations in are not linear because they do not match the form of , and so we cannot apply in this case.
Equivalent Systems and Equation Operations
With all this talk about finding solution sets for systems of linear equations, you might be ready to begin learning how to find these solution sets yourself. We begin with our first definition that takes a common word and gives it a very precise meaning in the context of systems of linear equations.
Equivalent Systems
Two systems of linear equations are equivalent if their solution sets are equal.
Notice here that the two systems of equations could look very different (i.e., not be equal), but still have equal solution sets, and we would then call the systems equivalent. Two linear equations in two variables might be plotted as two lines that intersect in a single point. A different system, with three equations in two variables, might have a plot that is three lines, all intersecting at a common point, with this common point identical to the intersection point for the first system. By our definition, we could then say these two very different looking systems of equations are equivalent, since they have identical solution sets. It is really like a weaker form of equality, where we allow the systems to be different in some respects, but we use the term equivalent to highlight the situation when their solution sets are equal.
With this definition, we can begin to describe our strategy for solving linear systems. Given a system of linear equations that looks difficult to solve, we would like to have an equivalent system that is easy to solve. Since the systems will have equal solution sets, we can solve the easy system and get the solution set to the difficult system. Here come the tools for making this strategy viable.
Equation Operations
Given a system of linear equations, the following three operations will transform the system into a different one, and each operation is known as an equation operation.
1. Swap the locations of two equations in the list of equations.
2. Multiply each term of an equation by a nonzero quantity.
3. Multiply each term of one equation by some quantity, and add these terms to a second equation, on both sides of the equality. Leave the first equation the same after this operation, but replace the second equation by the new one.
These descriptions might seem a bit vague, but the proof or the examples that follow should make it clear what is meant by each. We will shortly prove a key theorem about equation operations and solutions to linear systems of equations.
We are about to give a rather involved proof, so a discussion about just what a theorem really is would be timely. Stop and read first.
In the theorem we are about to prove, the conclusion is that two systems are equivalent. By this translates to requiring that solution sets be equal for the two systems. So we are being asked to show that two sets are equal. How do we do this? Well, there is a very standard technique, and we will use it repeatedly through the course. If you have not done so already, head to and familiarize yourself with sets, their operations, and especially the notion of set equality, and the nearby discussion about its use.
Equation Operations Preserve Solution Sets
If we apply one of the three equation operations of to a system of linear equations (), then the original system and the transformed system are equivalent.
We take each equation operation in turn and show that the solution sets of the two systems are equal, using the definition of set equality ().
1. It will not be our habit in proofs to resort to saying statements are obvious, but in this case, it should be. There is nothing about the order in which we write linear equations that affects their solutions, so the solution set will be equal if the systems only differ by a rearrangement of the order of the equations.
2. Suppose $\alpha\neq 0$ is a number. Let's choose to multiply the terms of equation $i$ by $\alpha$ to build the new system of equations, Let $S$ denote the solutions to the system in the statement of the theorem, and let $T$ denote the solutions to the transformed system.
1. Show $S\subseteq T$. Suppose $(x_1,\,x_2,\,\,x_3,\,\ldots,x_n)=(\beta_1,\,\beta_2,\,\,\beta_3,\,\ldots,\beta_n)\in S$ is a solution to the original system. Ignoring the $i$-th equation for a moment, we know it makes all the other equations of the transformed system true. We also know that which we can multiply by $\alpha$ to get This says that the $i$-th equation of the transformed system is also true, so we have established that $(\beta_1,\,\beta_2,\,\,\beta_3,\,\ldots,\beta_n)\in T$, and therefore $S\subseteq T$.
2. Now show $T\subseteq S$. Suppose $(x_1,\,x_2,\,\,x_3,\,\ldots,x_n)=(\beta_1,\,\beta_2,\,\,\beta_3,\,\ldots,\beta_n)\in T$ is a solution to the transformed system. Ignoring the $i$-th equation for a moment, we know it makes all the other equations of the original system true. We also know that which we can multiply by $\tfrac{1}{\alpha}$, since $\alpha\neq 0$, to get This says that the $i$-th equation of the original system is also true, so we have established that $(\beta_1,\,\beta_2,\,\,\beta_3,\,\ldots,\beta_n)\in S$, and therefore $T\subseteq S$. Locate the key point where we required that $\alpha\neq 0$, and consider what would happen if $\alpha=0$.
3. Suppose $\alpha$ is a number. Let's choose to multiply the terms of equation $i$ by $\alpha$ and add them to equation $j$ in order to build the new system of equations, Let $S$ denote the solutions to the system in the statement of the theorem, and let $T$ denote the solutions to the transformed system.
1. Show $S\subseteq T$. Suppose $(x_1,\,x_2,\,x_3,\,\ldots,x_n)=(\beta_1,\,\beta_2,\,\beta_3,\,\ldots,\beta_n)\in S$ is a solution to the original system. Ignoring the $j$-th equation for a moment, we know this solution makes all the other equations of the transformed system true. Using the fact that the solution makes the $i$-th and $j$-th equations of the original system true, we find $$(\alpha a_{i1}+a_{j1})\beta_1+(\alpha a_{i2}+a_{j2})\beta_2+\dots+(\alpha a_{in}+a_{jn})\beta_n = \alpha\left(a_{i1}\beta_1+a_{i2}\beta_2+\dots+a_{in}\beta_n\right)+\left(a_{j1}\beta_1+a_{j2}\beta_2+\dots+a_{jn}\beta_n\right) = \alpha b_i + b_j.$$ This says that the $j$-th equation of the transformed system is also true, so we have established that $(\beta_1,\,\beta_2,\,\beta_3,\,\ldots,\beta_n)\in T$, and therefore $S\subseteq T$.
2. Now show $T\subseteq S$. Suppose $(x_1,\,x_2,\,x_3,\,\ldots,x_n)=(\beta_1,\,\beta_2,\,\beta_3,\,\ldots,\beta_n)\in T$ is a solution to the transformed system. Ignoring the $j$-th equation for a moment, we know it makes all the other equations of the original system true. We then find $$a_{j1}\beta_1+a_{j2}\beta_2+\dots+a_{jn}\beta_n = \left((\alpha a_{i1}+a_{j1})\beta_1+\dots+(\alpha a_{in}+a_{jn})\beta_n\right) - \alpha\left(a_{i1}\beta_1+\dots+a_{in}\beta_n\right) = (\alpha b_i + b_j) - \alpha b_i = b_j.$$ This says that the $j$-th equation of the original system is also true, so we have established that $(\beta_1,\,\beta_2,\,\beta_3,\,\ldots,\beta_n)\in S$, and therefore $T\subseteq S$.
Why didn't we need to require that $\alpha\neq 0$ for this row operation? In other words, how does the third statement of the theorem read when $\alpha=0$? Does our proof require some extra care when $\alpha=0$? Compare your answers with the similar situation for the second row operation. (See .)
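The theorem can also be checked numerically on a specific system. The sketch below builds a hypothetical $3\times 3$ system (not one from the text), applies the third equation operation to it, and confirms that the solution does not change.

```python
import numpy as np

# A hypothetical 3x3 system Ax = b with a unique solution.
A = np.array([[1.0,  2.0,  1.0],
              [2.0, -1.0,  1.0],
              [1.0,  1.0, -2.0]])
b = np.array([8.0, 3.0, -3.0])
x_before = np.linalg.solve(A, b)

# Equation operation 3: add alpha times equation 0 to equation 2, on both sides.
alpha = -4.0
A2, b2 = A.copy(), b.copy()
A2[2] += alpha * A[0]
b2[2] += alpha * b[0]
x_after = np.linalg.solve(A2, b2)

print(np.allclose(x_before, x_after))   # True: the solution set is unchanged
```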
is the necessary tool to complete our strategy for solving systems of equations. We will use equation operations to move from one system to another, all the while keeping the solution set the same. With the right sequence of operations, we will arrive at a simpler equation to solve. The next two examples illustrate this idea, while saving some of the details for later.
Three equations, one solution
We solve the following system by a sequence of equation operations. $\alpha=-1$ times equation 1, add to equation 2: $\alpha=-2$ times equation 1, add to equation 3: $\alpha=-2$ times equation 2, add to equation 3: $\alpha=-1$ times equation 3: which can be written more clearly as
This is now a very easy system of equations to solve. The third equation requires that $x_3=4$ to be true. Making this substitution into equation 2 we arrive at $x_2=-3$, and finally, substituting these values of $x_2$ and $x_3$ into the first equation, we find that $x_1=2$. Note too that this is the only solution to this final system of equations, since we were forced to choose these values to make the equations true. Since we performed equation operations on each system to obtain the next one in the list, all of the systems listed here are all equivalent to each other by . Thus $(x_1,\,x_2,\,x_3)=(2,-3,4)$ is the unique solution to the original system of equations (and all of the other intermediate systems of equations listed as we transformed one into another).
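As a small illustration of the back-substitution step (a made-up triangular system, not the one above, whose unique solution is also $(2,\,-3,\,4)$):

```python
# Hypothetical triangular system, used only to illustrate back-substitution:
#   x1 +  x2 +   x3 = 3
#         x2 + 2*x3 = 5
#                x3 = 4
x3 = 4
x2 = 5 - 2*x3        # -3
x1 = 3 - x2 - x3     #  2
print(x1, x2, x3)    # 2 -3 4, the unique solution described above
```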
Three equations, infinitely many solutions
The following system of equations made an appearance earlier in this section (), where we listed one of its solutions. Now, we will try to find all of the solutions to this system. Do not concern yourself too much about why we choose this particular sequence of equation operations, just believe that the work we do is all correct. $\alpha=-1$ times equation 1, add to equation 2: $\alpha=-3$ times equation 1, add to equation 3: $\alpha=-5$ times equation 2, add to equation 3: $\alpha=-1$ times equation 2: $\alpha=-2$ times equation 2, add to equation 1: which can be written more clearly as
What does the equation $0=0$ mean? We can choose any values for $x_1$, $x_2$, $x_3$, $x_4$ and this equation will be true, so we only need to consider further the first two equations, since the third is true no matter what. We can analyze the second equation without consideration of the variable $x_1$. It would appear that there is considerable latitude in how we can choose $x_2$, $x_3$, $x_4$ and make this equation true. Let's choose $x_3$ and $x_4$ to be anything we please, say $x_3=a$ and $x_4=b$.
Now we can take these arbitrary values for $x_3$ and $x_4$, substitute them in equation 1, to obtain Similarly, equation 2 becomes
So our arbitrary choices of values for $x_3$ and $x_4$ ($a$ and $b$) translate into specific values of $x_1$ and $x_2$. The lone solution given in was obtained by choosing $a=2$ and $b=1$. Now we can easily and quickly find many more (infinitely more). Suppose we choose $a=5$ and $b=-2$, then we compute and you can verify that $(x_1,\,x_2,\,x_3,\,x_4)=(-17,\,13,\,5,\,-2)$ makes all three equations true. The entire solution set is written as $$S=\left\{(-1-2a+3b,\,4+a-2b,\,a,\,b) \mid a\in\mathbb{C},\ b\in\mathbb{C}\right\}$$
It would be instructive to finish off your study of this example by taking the general form of the solutions given in this set and substituting them into each of the three equations and verify that they are true in each case ().
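The parametric description is easy to evaluate mechanically; the sketch below substitutes a few choices of $a$ and $b$ into the expressions above, and checking the results against the three original equations is exactly the exercise suggested.

```python
def solution(a, b):
    """Evaluate the parametric description of the solution set given above."""
    return (-1 - 2*a + 3*b, 4 + a - 2*b, a, b)

print(solution(2, 1))    # (-2, 4, 2, 1): the solution listed earlier
print(solution(5, -2))   # (-17, 13, 5, -2)
print(solution(0, 0))    # (-1, 4, 0, 0): yet another solution
```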
In the next section we will describe how to use equation operations to systematically solve any system of linear equations. But first, read one of our more important pieces of advice about speaking and writing mathematics. See .
Before attacking the exercises in this section, it will be helpful to read some advice on getting started on the construction of a proof. See .
Getting Started
Sage is a powerful system for studying and exploring many different areas of mathematics. In the next section, and the majority of the remaining sections, we will include short descriptions and examples using Sage. You can read a bit more about Sage in the Preface. If you are not already reading this in an electronic version, you may want to investigate obtaining the worksheet version of this book, where the examples are live and editable. Most of your interaction with Sage will be by typing commands into a compute cell. That's a compute cell just below this paragraph. Click once inside the compute cell and you will get a more distinctive border around it, a blinking cursor inside, plus a cute little evaluate link below it.
At the cursor, type 2+2 and then click on the evaluate link. Did a 4 appear below the cell? If so, you've successfully sent a command off for Sage to evaluate and you've received back the (correct) answer.
Here's another compute cell. Try evaluating the command factorial(300). Hmmmmm. That is quite a big integer! The slashes you see at the end of each line mean the result is continued onto the next line, since there are 615 digits in the result.
To make new compute cells, hover your mouse just above another compute cell, or just below some output from a compute cell. When you see a skinny blue bar across the width of your worksheet, click and you will open up a new compute cell, ready for input. Note that your worksheet will remember any calculations you make, in the order you make them, no matter where you put the cells, so it is best to stay organized and add new cells at the bottom.
Try placing your cursor just below the monstrous value of $300!$ that you have. Click on the blue bar and try another factorial computation in the new compute cell.
Each compute cell will show output due to only the very last command in the cell. Try to predict the following output before evaluating the cell. a = 10 b = 6 a = a + 20 a 30 The following compute cell will not print anything since the one command does not create output. But it will have an effect, as you can see when you execute the subsequent cell. Notice how this uses the value of b from above. Execute this compute cell once. Exactly once. Even if it appears to do nothing. If you execute the cell twice, your credit card may be charged twice. b = b + 50 Now execute this cell, which will produce some output. b + 20 76 So b came into existence as 6. Then a cell added 50. This assumes you only executed this cell once! In the last cell we create b+20 (but do not save it) and it is this value that is output.
You can combine several commands on one line with a semi-colon. This is a great way to get multiple outputs from a compute cell. The syntax for building a matrix should be somewhat obvious when you see the output, but if not, it is not particularly important to understand now. f(x) = x^8 - 7*x^4; f x |--> x^8 - 7*x^4 f; print ; f.derivative() x |--> x^8 - 7*x^4 ]]> x |--> 8*x^7 - 28*x^3 g = f.derivative() g.factor() 4*(2*x^4 - 7)*x^3 Some commands in Sage are functions, an example is factorial() above. Other commands are methods of an object and are like characteristics of objects, examples are .factor() and .derivative() as methods of a function. To comment on your work, you can open up a small word-processor. Hover your mouse until you get the skinny blue bar again, but now when you click, also hold the SHIFT key at the same time. Experiment with fonts, colors, bullet lists, etc and then click the Save changes button to exit. Double-click on your text if you need to go back and edit it later.
Open the word-processor again to create a new bit of text (maybe next to the empty compute cell just below). Type all of the following exactly, but do not include any backslashes that might precede the dollar signs in the print version: Pythagorean Theorem: \$c^2=a^2+b^2\$ and save your changes. The symbols between the dollar signs are written according to the mathematical typesetting language known as TeX cruise the internet to learn more about this very popular tool. (Well, it is extremely popular among mathematicians and physical scientists.)
Much of our interaction with sets will be through Sage lists. These are not really sets they allow duplicates, and order matters. But they are so close to sets, and so easy and powerful to use that we will use them regularly. We will use a fun made-up list for practice, the quote marks mean the items are just text, with no special mathematical meaning. Execute these compute cells as we work through them. zoo = ['snake', 'parrot', 'elephant', 'baboon', 'beetle'] zoo ['snake', 'parrot', 'elephant', 'baboon', 'beetle'] So the square brackets define the boundaries of our list, commas separate items, and we can give the list a name. To work with just one element of the list, we use the name and a pair of brackets with an index. Notice that lists have indices that begin counting at zero. This will seem odd at first and will seem very natural later. zoo[2] 'elephant' We can add a new creature to the zoo, it is joined up at the far right end. zoo.append('ostrich'); zoo ['snake', 'parrot', 'elephant', 'baboon', 'beetle', 'ostrich'] We can remove a creature. zoo.remove('parrot') zoo ['snake', 'elephant', 'baboon', 'beetle', 'ostrich'] We can extract a sublist. Here we start with element 1 (the elephant) and go all the way up to, but not including, element 3 (the beetle). Again a bit odd, but it will feel natural later. For now, notice that we are extracting two elements of the lists, exactly $3-1=2$ elements. mammals = zoo[1:3] mammals ['elephant', 'baboon'] Often we will want to see if two lists are equal. To do that we will need to sort a list first. A function creates a new, sorted list, leaving the original alone. So we need to save the new one with a new name. newzoo = sorted(zoo) newzoo ['baboon', 'beetle', 'elephant', 'ostrich', 'snake'] zoo.sort() zoo ['baboon', 'beetle', 'elephant', 'ostrich', 'snake'] Notice that if you run this last compute cell your zoo has changed and some commands above will not necessarily execute the same way. If you want to experiment, go all the way back to the first creation of the zoo and start executing cells again from there with a fresh zoo.
A construction called a list comprehension is especially powerful, especially since it almost exactly mirrors notation we use to describe sets. Suppose we want to form the plural of the names of the creatures in our zoo. We build a new list, based on all of the elements of our old list. plurality_zoo = [animal+'s' for animal in zoo] plurality_zoo ['baboons', 'beetles', 'elephants', 'ostrichs', 'snakes'] Almost like it says: we add an s to each animal name, for each animal in the zoo, and place them in a new list. Perfect. (Except for getting the plural of ostrich wrong.)
One final type of list, with numbers this time. The range() function will create lists of integers. In its simplest form an invocation like range(12) will create a list of 12 integers, starting at zero and working up to, but not including, 12. Does this sound familiar? dozen = range(12); dozen [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] Here are two other forms, that you should be able to understand by studying the examples. teens = range(13, 20); teens [13, 14, 15, 16, 17, 18, 19] decades = range(1900, 2000, 10); decades [1900, 1910, 1920, 1930, 1940, 1950, 1960, 1970, 1980, 1990] There is a Save button in the upper-right corner of your worksheet. This will save a current copy of your worksheet that you can retrieve from within your notebook again later, though you have to re-execute all the cells when you re-open the worksheet later.
There is also a File drop-down list, on the left, just above your very top compute cell (not be confused with your browser's File menu item!). You will see a choice here labeled Save worksheet to a file... When you do this, you are creating a copy of your worksheet in the sws format (short for Sage WorkSheet). You can email this file, or post it on a website, for other Sage users and they can use the Upload link on their main notebook page to incorporate a copy of your worksheet into their notebook.
There are other ways to share worksheets that you can experiment with, but this gives you one way to share any worksheet with anybody almost anywhere.
We have covered a lot here in this section, so come back later to pick up tidbits you might have missed. There are also many more features in the notebook that we have not covered.
1. How many solutions does the system of equations $3x + 2y = 4$, $6x + 4y = 8$ have? Explain your answer.
2. How many solutions does the system of equations $3x + 2y = 4$, $6x + 4y = -2$ have? Explain your answer.
3. What do we mean when we say mathematics is a language?
Find a solution to the system in where $x_3=6$ and $x_4=2$. Find two other solutions to the system. Find a solution where $x_1=-17$ and $x_2=14$. How many possible answers are there to each of these questions? Each archetype () that is a system of equations begins by listing some specific solutions. Verify the specific solutions listed in the following archetypes by evaluating the system of equations with the solutions listed.
, , , , , , , , ,
Find all solutions to the linear system: Solving each equation for $y$, we have the equivalent system Setting these expressions for $y$ equal, we have the equation $5 - x = 2x - 3$, which quickly leads to $x = \frac{8}{3}$. Substituting for $x$ in the first equation, we have $y = 5 - x = 5 - \frac{8}{3} = \frac{7}{3}$. Thus, the solution is $x = \frac{8}{3}$, $y = \frac{7}{3}$. Find all solutions to the linear system: Find all solutions to the linear system: Find all solutions to the linear system: Find all solutions to the linear system: A three-digit number has two properties. The tens-digit and the ones-digit add up to 5. If the number is written with the digits in the reverse order, and then subtracted from the original number, the result is $792$. Use a system of equations to find all of the three-digit numbers with these properties. Let $a$ be the hundreds digit, $b$ the tens digit, and $c$ the ones digit. Then the first condition says that $b+c=5$. The original number is $100a+10b+c$, while the reversed number is $100c+10b+a$. So the second condition is 792=\left(100a+10b+c\right)-\left(100c+10b+a\right)=99a-99c So we arrive at the system of equations Using equation operations, we arrive at the equivalent system We can vary $c$ and obtain infinitely many solutions. However, $c$ must be a digit, restricting us to ten values (0 9). Furthermore, if $c>1$, then the first equation forces $a>9$, an impossibility. Setting $c=0$, yields $850$ as a solution, and setting $c=1$ yields $941$ as another solution. Find all of the six-digit numbers in which the first digit is one less than the second, the third digit is half the second, the fourth digit is three times the third and the last two digits form a number that equals the sum of the fourth and fifth. The sum of all the digits is 24. (From The MENSA Puzzle Calendar for January 9, 2006.) Let $abcdef$ denote any such six-digit number and convert each requirement in the problem statement into an equation. In a more standard form this becomes Using equation operations (or the techniques of the upcoming ), this system can be converted to the equivalent system Clearly, choosing $f=0$ will yield the solution $abcde=563910$. Furthermore, to have the variables result in single-digit numbers, none of the other choices for $f$ ($1,\,2,\,\ldots,\,9$) will yield a solution. Driving along, Terry notices that the last four digits on his car's odometer are palindromic. A mile later, the last five digits are palindromic. After driving another mile, the middle four digits are palindromic. One more mile, and all six are palindromic. What was the odometer reading when Terry first looked at it? Form a linear system of equations that expresses the requirements of this puzzle. (Car Talk Puzzler, National Public Radio, Week of January 21, 2008) (A car odometer displays six digits and a sequence is a palindrome if it reads the same left-to-right as right-to-left.) 198888 is one solution, and David Braithwaite found 199999 as another. Each sentence below has at least two meanings. Identify the source of the double meaning, and rewrite the sentence (at least twice) to clearly convey each meaning.
1. They are baking potatoes.
2. He bought many ripe pears and apricots.
3. She likes his sculpture.
4. I decided on the bus.
1. Does baking describe the potato or what is happening to the potato?
Those are potatoes that are used for baking.
The potatoes are being baked.
2. Are the apricots ripe, or just the pears? Parentheses could indicate just what the adjective ripe is meant to modify. Were there many apricots as well, or just many pears?
He bought many pears and many ripe apricots.
He bought apricots and many ripe pears.
3. Is sculpture a single physical object, or the sculptor's style expressed over many pieces and many years?
She likes his sculpture of the girl.
She likes his sculptural style.
4. Was a decision made while in the bus, or was the outcome of a decision to choose the bus. Would the sentence I decided on the car, have a similar double meaning?
I made my decision while on the bus.
I decided to ride the bus.
Discuss the difference in meaning of each of the following three almost identical sentences, which all have the same grammatical structure. (These are due to Keith Devlin.)
1. She saw him in the park with a dog.
2. She saw him in the park with a fountain.
3. She saw him in the park with a telescope.
We know the dog belongs to the man, and the fountain belongs to the park. It is not clear if the telescope belongs to the man, the woman, or the park.
The following sentence, due to Noam Chomsky, has a correct grammatical structure, but is meaningless. Critique its faults. Colorless green ideas sleep furiously. (Chomsky, Noam. Syntactic Structures, The Hague/Paris: Mouton, 1957. p. 15.) In adjacent pairs the words are contradictory or inappropriate. Something cannot be both green and colorless, ideas do not have color, ideas do not sleep, and it is hard to sleep furiously. Read the following sentence and form a mental picture of the situation. The baby cried and the mother picked it up. What assumptions did you make about the situation? Did you assume that the baby and mother are human?
Did you assume that the baby is the child of the mother?
Did you assume that the mother picked up the baby as an attempt to stop the crying?
Discuss the difference in meaning of the following two almost identical sentences, which have nearly identical grammatical structure. (This antanaclasis is often attributed to the comedian Groucho Marx, but has earlier roots.)
1. Time flies like an arrow.
2. Fruit flies like a banana.
This problem appears in a middle-school mathematics textbook: Together Dan and Diane have \$20. Together Diane and Donna have \$15. How much do the three of them have in total? (Transition Mathematics, Second Edition, Scott Foresman Addison Wesley, 1998. Problem 51.19.) If $x$, $y$ and $z$ represent the money held by Dan, Diane and Donna, then $y=15-z$ and $x=20-y=20-(15-z)=5+z$. We can let $z$ take on any value from $0$ to $15$ without any of the three amounts being negative, since presumably middle-schoolers are too young to assume debt.
Then the total capital held by the three is $x+y+z=(5+z)+(15-z)+z=20+z$. So their combined holdings can range anywhere from \$20 (Donna is broke) to \$35 (Donna is flush).
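Since $x$ and $y$ are pinned down by $z$, the feasible holdings can also be enumerated directly; a short sketch of the same arithmetic:

```python
# Donna's amount z ranges over 0..15 dollars; the other two follow from it.
for z in range(16):
    y = 15 - z           # Diane
    x = 5 + z            # Dan
    assert x + y == 20 and y + z == 15
    total = x + y + z    # always 20 + z
print("combined holdings range from", 20 + 0, "to", 20 + 15)   # 20 to 35
```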
Solutions to the system in are given as (x_1,\,x_2,\,x_3,\,x_4)=(-1-2a+3b,\,4+a-2b,\,a,\,b) Evaluate the three equations of the original system with these expressions in $a$ and $b$ and verify that each equation is true, no matter what values are chosen for $a$ and $b$. We have seen in this section that systems of linear equations have limited possibilities for solution sets, and we will shortly prove that describes these possibilities exactly. This exercise will show that if we relax the requirement that our equations be linear, then the possibilities expand greatly. Consider a system of two equations in the two variables $x$ and $y$, where the departure from linearity involves simply squaring the variables. After solving this system of non-linear equations, replace the second equation in turn by $x^2+2x+y^2=3$, $x^2+y^2=1$, $x^2-4x+y^2=-3$, $-x^2+y^2=1$ and solve each resulting system of two equations in two variables. (This exercise includes suggestions from .) The equation $x^2-y^2=1$ has a solution set by itself that has the shape of a hyperbola when plotted. Four of the five different second equations have solution sets that are circles when plotted individually (the last is another hyperbola). Where the hyperbola and circles intersect are the solutions to the system of two equations. As the size and location of the circles vary, the number of intersections varies from four to one (in the order given). The last equation is a hyperbola that opens in the other direction. Sketching the relevant equations would be instructive, as was discussed in .
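The intersection counts for the four listed replacements can be confirmed with a computer algebra system; a small SymPy sketch (the first system's second equation is not included here, only the four replacements named above):

```python
from sympy import solve, symbols

x, y = symbols('x y', real=True)
hyperbola = x**2 - y**2 - 1

# The four replacement second equations listed above, each written as expr = 0.
replacements = [
    x**2 + 2*x + y**2 - 3,
    x**2 + y**2 - 1,
    x**2 - 4*x + y**2 + 3,
    -x**2 + y**2 - 1,
]

for second in replacements:
    points = solve([hyperbola, second], [x, y])
    print(len(points), "real intersection(s):", points)
# The circle cases give 3, 2 and 1 intersections; the second hyperbola gives none.
```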
asks you to formulate a definition of what it means for a whole number to be odd. What is your definition? (Do not say the opposite of even.) Is $6$ odd? Is $11$ odd? Justify your answers by using your definition. We can say that an integer is odd if when it is divided by $2$ there is a remainder of 1. So $6$ is not odd since $6=3\times 2+0$, while $11$ is odd since $11=5\times 2 + 1$. Explain why the second equation operation in requires that the scalar be nonzero, while in the third equation operation this restriction on the scalar is not present. is engineered to make true. If we were to allow a zero scalar to multiply an equation then that equation would be transformed to the equation $0=0$, which is true for any possible values of the variables. Any restrictions on the solution set imposed by the original equation would be lost.
Notice the location in the proof of where the expression $\frac{1}{\alpha}$ appears; this explains the prohibition on $\alpha=0$ in the second equation operation.
Outlook: Exicure Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating.
Dominant Strategy : Hold
Time series to forecast n: 06 Jan 2023 for (n+6 month)
Methodology : Supervised Machine Learning (ML)
## Abstract
Exicure Inc. Common Stock prediction model is evaluated with Supervised Machine Learning (ML) and Factor1,2,3,4 and it is concluded that the XCUR stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Hold
## Key Points
1. How do you pick a stock?
3. What statistical methods are used to analyze data?
## XCUR Target Price Prediction Modeling Methodology
We consider Exicure Inc. Common Stock Decision Process with Supervised Machine Learning (ML) where A is the set of discrete actions of XCUR stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
F(Factor)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Supervised Machine Learning (ML)) X S(n):→ (n+6 month) $\stackrel{\to }{S}=\left({s}_{1},{s}_{2},{s}_{3}\right)$
n:Time series to forecast
p:Price signals of XCUR stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
## XCUR Stock Forecast (Buy or Sell) for (n+6 month)
Sample Set: Neural Network
Stock/Index: XCUR Exicure Inc. Common Stock
Time series to forecast n: 06 Jan 2023 for (n+6 month)
According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Hold
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for Exicure Inc. Common Stock
1. IFRS 16, issued in January 2016, amended paragraphs 2.1, 5.5.15, B4.3.8, B5.5.34 and B5.5.46. An entity shall apply those amendments when it applies IFRS 16.
2. The accounting for the time value of options in accordance with paragraph 6.5.15 applies only to the extent that the time value relates to the hedged item (aligned time value). The time value of an option relates to the hedged item if the critical terms of the option (such as the nominal amount, life and underlying) are aligned with the hedged item. Hence, if the critical terms of the option and the hedged item are not fully aligned, an entity shall determine the aligned time value, ie how much of the time value included in the premium (actual time value) relates to the hedged item (and therefore should be treated in accordance with paragraph 6.5.15). An entity determines the aligned time value using the valuation of the option that would have critical terms that perfectly match the hedged item.
3. An entity shall assess at the inception of the hedging relationship, and on an ongoing basis, whether a hedging relationship meets the hedge effectiveness requirements. At a minimum, an entity shall perform the ongoing assessment at each reporting date or upon a significant change in the circumstances affecting the hedge effectiveness requirements, whichever comes first. The assessment relates to expectations about hedge effectiveness and is therefore only forward-looking.
4. For the purpose of applying paragraph 6.5.11, at the point when an entity amends the description of a hedged item as required in paragraph 6.9.1(b), the amount accumulated in the cash flow hedge reserve shall be deemed to be based on the alternative benchmark rate on which the hedged future cash flows are determined.
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
## Conclusions
Exicure Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Exicure Inc. Common Stock prediction model is evaluated with Supervised Machine Learning (ML) and Factor1,2,3,4 and it is concluded that the XCUR stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Hold
### XCUR Exicure Inc. Common Stock Financial Analysis*
Rating Short-Term Long-Term Senior
Outlook*Ba1Ba1
Income StatementB2Ba2
Balance SheetB2C
Leverage RatiosCBaa2
Cash FlowCaa2Ba2
Rates of Return and ProfitabilityB2Caa2
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 84 out of 100 with 881 signals.
## References
1. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., Short/Long Term Stocks: FOX Stock Forecast. AC Investment Research Journal, 101(3).
2. Tibshirani R. 1996. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 58:267–88
3. Wu X, Kumar V, Quinlan JR, Ghosh J, Yang Q, et al. 2008. Top 10 algorithms in data mining. Knowl. Inform. Syst. 14:1–37
4. Swaminathan A, Joachims T. 2015. Batch learning from logged bandit feedback through counterfactual risk minimization. J. Mach. Learn. Res. 16:1731–55
5. Clements, M. P. D. F. Hendry (1996), "Intercept corrections and structural change," Journal of Applied Econometrics, 11, 475–494.
6. Candès EJ, Recht B. 2009. Exact matrix completion via convex optimization. Found. Comput. Math. 9:717
7. Bewley, R. M. Yang (1998), "On the size and power of system tests for cointegration," Review of Economics and Statistics, 80, 675–679.
Frequently Asked QuestionsQ: What is the prediction methodology for XCUR stock?
A: XCUR stock prediction methodology: We evaluate the prediction models Supervised Machine Learning (ML) and Factor
Q: Is XCUR stock a buy or sell?
A: The dominant strategy among neural network is to Hold XCUR Stock.
Q: Is Exicure Inc. Common Stock stock a good investment?
A: The consensus rating for Exicure Inc. Common Stock is Hold and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of XCUR stock?
A: The consensus rating for XCUR is Hold.
Q: What is the prediction period for XCUR stock?
A: The prediction period for XCUR is (n+6 month)
# CS 5220
## Parallelism and locality in simulation
### Lumped parameter systems
## 17 Sep 2015

### Lumped parameter simulations

Examples include:

- SPICE-level circuit simulation
  - nodal voltages vs. voltage distributions
- Structural simulation
  - beam end displacements vs. continuum field
- Chemical concentrations in stirred tank reactor
  - mean concentrations vs. spatially varying

Typically involves ordinary differential equations (ODEs), or with constraints (differential-algebraic equations, or DAEs). Often (not always) *sparse*.
### Sparsity
Consider system of ODEs $x' = f(x)$ (special case: $f(x) = Ax$)
• Dependency graph has edge $(i,j)$ if $f_j$ depends on $x_i$
• Sparsity means each $f_j$ depends on only a few $x_i$
• Often arises from physical or logical locality
• Corresponds to $A$ being a sparse matrix (mostly zeros)
### Sparsity and partitioning
Want to partition sparse graphs so that
• Subgraphs are same size (load balance)
• Cut size is minimal (minimize communication)
### Types of analysis

Consider $x' = f(x)$ (special case: $f(x) = Ax + b$).

- Static analysis ($f(x_*) = 0$)
  - Boils down to $Ax = b$ (e.g. for Newton-like steps)
  - Can solve directly or iteratively
  - Sparsity matters a lot!
- Dynamic analysis (compute $x(t)$ for many values of $t$)
  - Involves time stepping (explicit or implicit)
  - Implicit methods involve linear/nonlinear solves
  - Need to understand stiffness and stability issues
- Modal analysis (compute eigenvalues of $A$ or $f'(x_*)$)

### Explicit time stepping

- Example: forward Euler
- Next step depends only on earlier steps
- Simple algorithms
- May have stability/stiffness issues

### Implicit time stepping

- Example: backward Euler
- Next step depends on itself and on earlier steps
- Algorithms involve solves — complication, communication!
- Larger time steps, each step costs more
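A minimal sketch of the explicit/implicit distinction for the linear model problem $x' = Ax$, written in Python with a made-up $2\times 2$ matrix (a real code would use a compiled language and sparse storage):

```python
import numpy as np

def forward_euler_step(A, x, dt):
    # Explicit: the next step uses only already-known values.
    return x + dt * (A @ x)

def backward_euler_step(A, x, dt):
    # Implicit: the next step satisfies (I - dt*A) x_new = x, so each step needs a solve.
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - dt * A, x)

A = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])      # small hypothetical test system
x = np.array([1.0, 0.0])
print(forward_euler_step(A, x, 0.1))
print(backward_euler_step(A, x, 0.1))
```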
### A common kernel

In all these analyses, spend lots of time in sparse matvec:

- Iterative linear solvers: repeated sparse matvec
- Iterative eigensolvers: repeated sparse matvec
- Explicit time marching: matvecs at each step
- Implicit time marching: iterative solves (involving matvecs)

We need to figure out how to make matvec fast!

### An aside on sparse matrix storage

- Sparse matrix $\implies$ mostly zero entries
- Can also have “data sparseness” — representation with less than $O(n^2)$ storage, even if most entries nonzero
- Could be implicit (e.g. directional differencing)
- Sometimes explicit representation is useful
- Easy to get lots of indirect indexing!
- Compressed sparse storage schemes help
### Example: Compressed sparse row storage
This can be even more compact:
• Could organize by blocks (block CSR)
• Could compress column index data (16-bit vs 64-bit)
• Various other optimizations — see OSKI |
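A minimal sketch of CSR storage and the matvec kernel it implies, in Python for clarity (a production kernel would be written in C or Fortran); the $3 \times 3$ matrix is a made-up example:

```python
import numpy as np

def csr_matvec(rowptr, colind, vals, x):
    """y = A @ x with A stored as CSR: row pointers, column indices, nonzero values."""
    y = np.zeros(len(rowptr) - 1)
    for i in range(len(y)):
        for k in range(rowptr[i], rowptr[i + 1]):
            y[i] += vals[k] * x[colind[k]]    # the indirect indexing mentioned above
    return y

# Example matrix:  [[4, 0, 1],
#                   [0, 3, 0],
#                   [2, 0, 5]]
rowptr = np.array([0, 2, 3, 5])
colind = np.array([0, 2, 1, 0, 2])
vals   = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
print(csr_matvec(rowptr, colind, vals, np.array([1.0, 1.0, 1.0])))   # [5. 3. 7.]
```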
## Table made from a Nominal Scale - What does it mean?
I am struggling to understand tables created from nominal scales. I understand what a nominal scale is, but a table of results for it just baffles me.
e.g. (I can't draw this table in words, so here is a link to a picture of it; I assure you it's safe)
http://lh3.ggpht.com/_-yzZfP8L-ss/TO...le%20table.png
I really don't understand what on earth this table is telling me. Help please. |
# Twenty-One Cent Trick
\$19.95
When properly presented, this is one of the cleverest of coin tricks! Four coins are shown on a table or desk. They are two nickels, a penny and a dime. The total value is twenty-one cents. The magician picks up one coin at a time and places them into the palm of his left hand. He counts the value of each coin as it is placed into the palm, until all coins or a total of twenty-one cents are in the palm.
He now removes a nickel from his hand and places it into his right pants pocket. He closes his left hand over the “remaining coins” and asks how much is left. The answer should be sixteen cents. However, when the fist is opened, the sixteen cents are gone! He then removes the nickel from his pants pocket and it may be freely examined.
# Phase shifts in scattering theory
I have been studying scattering theory in Sakurai's quantum mechanics. The phase shift in scattering theory has been a major conceptual and computational stumbling block for me.
How (if at all) does the phase shift relate to the scattering amplitude?
Also, any literature or book references that might be more accessible than Sakurai would be greatly appreciated.
-
What do you need to know? It's used in partial wave analysis, a common orthogonal expansion. Any function can be decomposed into infinitely many partial waves; the different partial waves correspond physically to different angular momenta. The phase shifts come up as one of the constants that need to be determined from the boundary conditions for each partial wave. The scattering amplitude can be expanded in terms of the phase shifts of the waves and spherical harmonics. I am not writing this as an answer and cluttering it with equations because it's there in all standard texts, e.g. Griffiths. – yayu Apr 6 '11 at 4:00
Also, I don't think that Sakurai is a good way to learn these topics if you are learning about them for the first time. Try the more accessible texts first. I would recommend Shankar\Griffiths. – yayu Apr 6 '11 at 4:09
I think, in retrospect my real problem is not really understanding the partial wave expansion. – Cogitator Apr 6 '11 at 14:23
upvoting your question as I think a good explanation for partial waves will be good for the site.. you may wish to change your question slightly perhaps to get new answers though – yayu Apr 6 '11 at 17:42
And I'm adding a +100 bounty. – Carl Brannen Apr 8 '11 at 21:13
Suppose you treat scattering of a particle in a central potential. This means that the Hamiltonian $H$ commutes with the angular momentum operators $L^2$ and $L_z$. Hence, you can find simultaneous eigenfunctions $\psi_{k,l,m}$. You might know, for example from the solution of the hydrogen atom, that these functions can be expressed in terms of the spherical harmonics: $$\psi_{k,l,m}(x) = R_{k,l}(r) Y^l_m(\theta, \varphi)$$ where the radial part satisfies $$\frac{1}{r^2} \frac{d}{dr} \left( r^2 \frac{dR_{k,l}}{dr}\right) +\left(k^2 - U(r) - \frac{l(l+1)}{r^2}\right) R_{k,l} = 0$$ with $U(r) = (2m/\hbar^2) V(r)$, your central potential, and $k$ is the particle's wavenumber, i.e., $E = \frac{\hbar^2 k^2}{2m}.$
The first step is to look for a special case with simple solutions. This would be the free particle, with $U(r) = 0$. Then, the radial equation is a special case of Bessel's equation. The solutions are the spherical Bessel functions $j_l(kr)$ and $n_l(kr)$, where the $j_l$ are regular at the origin whereas the $n_l$ are singular at the origin. Hence, for a free particle, the solutions are superpositions of the $j_l$: $$\psi(x) = \sum_{l,m} a_{l,m} j_l(kr) Y^l_m(\theta, \varphi)$$
If we also have axial symmetry, only $m = 0$ is relevant. Then we can rewrite the spherical harmonics using Legendre polynomials. This will lead to $$\psi(x) = \sum_{l} A_{l} j_l(kr) P_l(\cos \theta)$$ One important special case of such an expansion is the Rayleigh plane wave expansion $$e^{ikz} = \sum_l (2l+1) i^l j_l(kr) P_l(\cos\theta)$$ which we will need in the next step.
We move away from free particles and consider scattering from a potential with a finite range (this excludes Coulomb scattering!). So, $U(r) = 0$ for $r > a$ where $a$ is the range of the potential. For simplicity, we assume axial symmetry. Then, outside the range, the solution must be again that of a free particle. But this time, the origin is not included in the range, so we can (and, in fact, must) include the $n_l(kr)$ solutions to the Bessel equations: $$\psi(r) = \sum_l (a_l j_l(kr) + b_l n_l(kr)) P_l(\cos \theta)$$ Note how the solution for a given $l$ has two parameters $a_l$ and $b_l$. We can think of another parametrization: $a_l = A_l \cos\delta_l$ and $b_l = -A_l \sin \delta_l$. The reason for doing this becomes apparent in the next step:
The spherical Bessel functions have long range approximations: $$j_l(kr) \sim \frac{\sin(kr - l\pi/2)}{kr}$$ $$n_l(kr) \sim -\frac{\cos(kr - l\pi/2)}{kr}$$ which we can insert into the wavefunction to get a long range approximation. After some trigonometry, we get $$\psi(r) \sim \sum_l \frac{A_l}{kr} \sin(kr - l\pi/2 + \delta_l) P_l(\cos \theta)$$ So, this is what our wavefunction looks like for large $r$. But we already know how it should look: if the incoming scattered particle is described as a plane wave in $z$-direction, it is related to the scattering amplitude $f$ via $$\psi(\vec{x}) \sim e^{ikz} + f(\theta) \frac{e^{ikr}}{r}.$$ Obviously, both forms for writing down a long-range approximation for $\psi$ should give the same result, so we use the Rayleigh plane wave expansion to rewrite the latter form. We also rewrite the $\sin$ function using complex exponentials. The ensuing calculations are a bit tedious, but not complicated in themselves. You just insert the expansions. What we can do afterwards is compare the coefficients in both expressions for the same terms, e.g. equating the coefficients for $e^{-ikr}P_l(\cos\theta)$ will give you $$A_l = (2l+1)i^l e^{i\delta_l}$$ whereas equating coefficients for $e^{ikr}$ gives you $$f(\theta) = \frac{1}{2ik} \sum_l (2l+1) \left( e^{2i\delta_l} - 1 \right) P_l(\cos \theta).$$
Interpretation of the Phase Shift: Remember the long range limit of the wavefunction. It led to an expression for the $l$-th radial wavefunction in the long-range of $$u_l(r) = kr\psi_l(r) \sim A_l \sin(kr - l\pi/2 +\delta_l).$$ For a free particle, the phase shift $\delta_l$ would be $0$. One could therefore say that the phase shift measures how far the asymptotic solution of your scattering problem is displaced at the origin from the asymptotic free solution.
Interpretation of the Partial Wave Expansion: In the literature, you will often come across terms such as $s$-wave scattering. The partial wave expansion decomposes the scattering process into the scattering of incoming waves with definite angular momentum quantum number. It explains in which way $s$-, $p$-, $d$-waves etc. are affected by the potential. For low energy scattering, only the first few $l$-quantum numbers are affected. If all but the first term are discarded, only the $s$-waves take part in the scattering process. This is an approximation that is, for example, made in the scattering of the atoms in a Bose-Einstein condensate.
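To make the formulas concrete, here is a short numerical sketch. It assumes hard-sphere scattering of radius $a$ (a case not treated above), for which requiring the exterior wavefunction to vanish at $r=a$ gives $\tan\delta_l = j_l(ka)/n_l(ka)$:

```python
import numpy as np
from scipy.special import eval_legendre, spherical_jn, spherical_yn

# Hard-sphere illustration (my own example): psi_l(a) = 0 outside the sphere
# gives tan(delta_l) = j_l(ka) / n_l(ka).
k, a, lmax = 0.1, 1.0, 8
ls = np.arange(lmax + 1)
delta = np.arctan(spherical_jn(ls, k * a) / spherical_yn(ls, k * a))

# f(theta) = (1/k) * sum_l (2l+1) exp(i delta_l) sin(delta_l) P_l(cos theta)
theta = np.pi / 3
f = np.sum((2 * ls + 1) * np.exp(1j * delta) * np.sin(delta)
           * eval_legendre(ls, np.cos(theta))) / k

# Total cross section from the phase shifts.
sigma = 4 * np.pi / k**2 * np.sum((2 * ls + 1) * np.sin(delta)**2)
print(f, sigma)    # sigma is close to 4*pi*a**2, the familiar low-energy hard-sphere value
```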
-
"For a typical potential" is a very vague statement. It would be better perhaps to say "in the limit of low energy scattering". – Raphael R. Apr 13 '11 at 21:40
Beautiful and clear like a crystalline water! – Nogueira Aug 27 at 2:01 |
# Dirac equation and path integral
Hello
How does one get the propagator for the Dirac equation in (1+1) dimensions, and so forth? And what about Feynman's Checkerboard (or Chessboard) model?
# If a > 0, is t^a > w^a ?
If a > 0, is t^a > w^a ? [#permalink]
If a > 0, is $$t^a$$ > $$w^a$$?
(1) t > w
(2) t = 2w
Re: If a > 0, is t^a > w^a ? [#permalink]
This is a typical GMAT type question that you can expect...
For such questions remember to consider:
1. Values <-1
2. -1 < Values < 0
3. 0 < Values < 1
4. 1 < Values
And you will never get such questions wrong!!
Option1: The values differ when T,W are +VE and when they are -VE
Option 2: This is a TRAP.
All those who will substitute the value of 't' and cancel out the terms on LHS & RHS - WILL GET THIS WRONG.
You can cancel the terms ONLY if they are positive.
Combining 1 & 2,
We are left with only +ve values, for which the given inequality always holds.
Hence C.
Re: If a > 0, is t^a > w^a ? [#permalink]
Following property tested:
If x>0, a>b>0, then a^x>b^x
In this question : a = x, t= a, w=b
S1 - No idea about the sign of t & w - Insufficient
S2 - Again no idea about the sign of t & w - Insufficient
Combining - (and using another concept - NEVER cancel the variables on both sides of equality/ inequality) the statements - we know t>w and w>0 so, C is the answer.
Re: If a > 0, is t^a > w^a ? [#permalink]
Solution:
Statement1 : t > w. We dont know if the numbers are positive or negative . Anyhow lets check both the cases.
If, t = 4 and w = 2. Since a>0, here $$t^{a} > w^{a}$$.
But, if t = -2, w = -4. Then t > w.Then for a=3, $$t^{a} > w^{a}$$ and for a=2, $$t^{a} < w^{a}$$.
Therefore, Insufficient.
Statement2 : t=2w. If t = -4, w = -2. Then t = 2w. Then for a=3, $$t^{a} < w^{a}$$ and for a=2, $$t^{a} > w^{a}$$.
Therefore, Insufficient.
Combined : We know that t and w can only be positive. Therefore, $$t^{a} > w^{a}$$ holds for all possible values.
Sufficient.
Option C
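The case analysis in this thread can be sanity-checked numerically; a small Python sketch using the same test values:

```python
# a > 0; is t**a > w**a ?  Test the cases discussed above.
cases = [
    ("S1, both positive",  4,  2),   # t > w
    ("S1, both negative", -2, -4),   # t > w
    ("S2, both negative", -4, -2),   # t = 2w
    ("S1 and S2 together", 4,  2),   # t = 2w and t > w force w > 0
]
for label, t, w in cases:
    print(label, [(a, t**a > w**a) for a in (1, 2, 3)])
# The negative cases answer both "yes" and "no" as a varies, so each statement alone
# is insufficient; together they force 0 < w < t, and the answer is always "yes".
```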
If a > 0, is t^a > w^a ? [#permalink]
Forget conventional ways of solving math questions. In DS, Variable approach is the easiest and quickest way to find the answer without actually solving the problem. Remember equal number of variables and equations ensures a solution.
If a > 0, is t^a > w^a ?
(1) t > w
(2) t = 2w
In the original condition there are 3 variables (a, t, w) and 1 equation (a>0), thus in order to match the number of variables and equations we need 2 more equations. Since there is 1 each in 1) and 2), C is likely the answer. In the actual calculation, using both 1) & 2) we get t=2w>w --> w>0, t>0, thus the answer is yes and the conditions are sufficient. Therefore the answer is C.
Multilinearity and Linear Algebra
I can't find a source online that clearly states the properties of a multilinear function in relation to linear algebra (I say this because I am in an introductory linear algebra class, and this is not included in the textbook). I realized today while studying for the midterm exam tomorrow that I don't know the correct properties of a multilinear function.
Faced with expanding the multilinear function $f(a\vec{e}_1+b\vec{e}_2, c\vec{e}_1+d\vec{e}_2, g\vec{e}_1+h\vec{e}_2 )$ I would have written
$$f(a\vec{e}_1+b\vec{e}_2, c\vec{e}_1+d\vec{e}_2, g\vec{e}_1+h\vec{e}_2 ) = f(a\vec{e}_1, c\vec{e}_1, g\vec{e}_1 ) + f(b\vec{e}_2, d\vec{e}_2, h\vec{e}_2 )=acgf(\vec{e}_1, \vec{e}_1, \vec{e}_1) + bdhf(\vec{e}_2, \vec{e}_2, \vec{e}_2)$$
which is incorrect. I discovered this when looking over the solutions given to an assignment. It seems the correct expansion is
$$f(a\vec{e}_1+b\vec{e}_2, c\vec{e}_1+d\vec{e}_2, g\vec{e}_1+h\vec{e}_2 )=$$
$$acgf(\vec{e}_1, \vec{e}_1, \vec{e}_1) + achf(\vec{e}_1, \vec{e}_1, \vec{e}_2)$$
$$+ adgf(\vec{e}_1, \vec{e}_2, \vec{e}_1) + adhf(\vec{e}_1, \vec{e}_2, \vec{e}_2)$$$$+ bcgf(\vec{e}_2, \vec{e}_1, \vec{e}_1) + bchf(\vec{e}_2, \vec{e}_1, \vec{e}_2)$$$$+ bdgf(\vec{e}_2, \vec{e}_2, \vec{e}_1) + bdhf(\vec{e}_2, \vec{e}_2, \vec{e}_2)$$
What is the procedure to correctly expand a multilinear function as done above? Any help would be appreciated.
-
Multlinear is linear with respect to each variable. – Sigur Feb 5 '13 at 0:14
Here's how to think about it "symbolically": $f((ae_1+be_2) \otimes (ce_1+de_2) \otimes (ge_1+he_2))=f(acg e_1 \otimes e_1 \otimes e_1 + \cdots)=acg f(e_1,e_1,e_1)+\cdots$ In fact, this has a name. – wj32 Feb 5 '13 at 0:19
Let's build up from a multilinear function of two vectors before going to three.
$$f(u + v, w + x) = f(u + v, w) + f(u+v, x)$$
That just exploits linearity in the second argument. Now exploit linearity in the first.
$$f(u+v, w) + f(u+v, x) = f(u, w) + f(v, w) + f(u, x) + f(v, x)$$
Just apply linearity on each separate argument and you should be fine.
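A small SymPy sketch of that bookkeeping, writing $F[i,j,k]$ for $f(\vec{e}_i, \vec{e}_j, \vec{e}_k)$ (the symbol names here are mine):

```python
from itertools import product
from sympy import IndexedBase, symbols

a, b, c, d, g, h = symbols('a b c d g h')
F = IndexedBase('F')          # F[i, j, k] stands for f(e_i, e_j, e_k)

u = {1: a, 2: b}              # coordinates of a*e1 + b*e2
v = {1: c, 2: d}              # coordinates of c*e1 + d*e2
w = {1: g, 2: h}              # coordinates of g*e1 + h*e2

# Multilinearity: expand one slot at a time, keeping one basis vector per slot.
expansion = sum(u[i] * v[j] * w[k] * F[i, j, k]
                for i, j, k in product((1, 2), repeat=3))
print(expansion)   # a*c*g*F[1,1,1] + a*c*h*F[1,1,2] + ... + b*d*h*F[2,2,2], eight terms
```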
- |
# Equality of Mappings/Examples/Rotation of Plane 180 Degrees Clockwise and Anticlockwise
## Example of Equality of Mappings
Let $\Gamma$ denote the Cartesian plane.
Let $R_{180}: \Gamma \to \Gamma$ denote the rotation of $\Gamma$ about the origin anticlockwise through $180 \degrees$.
Let $R_{-180}: \Gamma \to \Gamma$ denote the rotation of $\Gamma$ about the origin clockwise through $180 \degrees$.
Then:
$R_{180} = R_{-180}$
## Proof
The domains and codomains of both $R_{180}$ and $R_{-180}$ are the same:
$\Dom {R_{180} } = \Dom {R_{-180} } = \Gamma$
$\Cdm {R_{180} } = \Cdm {R_{-180} } = \Gamma$
Then note that for all $\tuple {x, y}$:
$R_{180} \tuple {x, y} = \tuple {-x, -y}$
and:
$R_{-180} \tuple {x, y} = \tuple {-x, -y}$
The result follows by Equality of Mappings.
$\blacksquare$ |
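A quick numerical check with the standard rotation matrix $R(\theta)$ (an illustration, not part of the formal proof above):

```python
import numpy as np

def R(deg):
    """Rotation of the plane about the origin by deg degrees (anticlockwise positive)."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

p = np.array([3.0, -2.0])                 # an arbitrary point (x, y)
print(R(180) @ p, R(-180) @ p)            # both give (-x, -y) = [-3.  2.]
print(np.allclose(R(180), R(-180)))       # True: the two rotations are the same mapping
```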
# Hooke's Law Calculator
Created by Bogna Szyk
Reviewed by Steven Wooding
Last updated: Dec 21, 2022
We created the Hooke's law calculator (spring force calculator) to help you determine the force in any spring that is stretched or compressed. You can also use it as a spring constant calculator if you already know the force. Read on to get a better understanding of the relationship between these values and to learn the spring force equation.
## Hooke's law and spring constant
Hooke's law deals with springs (meet them at our spring calculator!) and their main property - the elasticity. Each spring can be deformed (stretched or compressed) to some extent. When the force that causes the deformation disappears, the spring comes back to its initial shape, provided the elastic limit was not exceeded.
Hooke's law states that for an elastic spring, the force and displacement are proportional to each other. It means that as the spring force increases, the displacement increases, too. If you graphed this relationship, you would discover that the graph is a straight line. Its inclination depends on the constant of proportionality, called the spring constant. It always has a positive value.
## Spring force equation
Knowing Hooke's law, we can write it down it the form of a formula:
$F = -k Δx$
where:
• $F$ — The spring force (in $\mathrm{N}$);
• $k$ — The spring constant (in $\mathrm{N/m}$); and
• $Δx$ is the displacement (positive for elongation and negative for compression, in $\mathrm{m}$).
Where did the minus come from? Imagine that you pull a spring to your right, making it stretch. A force arises in the spring, but where does it want the spring to go? To the right? If it were so, the spring would elongate to infinity. The force resists the displacement and has a direction opposite to it, hence the minus sign. This concept is similar to the one we explain at the potential energy calculator, and is analogous to the elastic potential energy.
🙋 Did you know? the rotational analog of spring constant is known as rotational stiffness: meet this concept at our rotational stiffness calculator.
## How to use the Hooke's law calculator
1. Choose a value of spring constant - for example, $80\ \mathrm{N/m}$.
2. Determine the displacement of the spring - let's say, $0.15\ \mathrm{m}$.
3. Substitute them into the formula: $F = -kΔx = -80\cdot 0.15\ \mathrm{N} = -12\ \mathrm{N}$. The minus sign means the spring force of magnitude $12\ \mathrm{N}$ opposes the displacement.
4. Check the units! $\mathrm{N/m \cdot m} = \mathrm{N}$.
5. You can also use the Hooke's law calculator in advanced mode, inserting the initial and final length of the spring instead of the displacement.
6. You can now calculate the acceleration that the spring has when coming back to its original shape using our Newton's second law calculator.
You can use Hooke's law calculator to find the spring constant, too. Try this simple exercise - if the force is equal to $60\ \mathrm{N}$, and the length of the spring decreased from $15\ \mathrm{cm}$ to $10\ \mathrm{cm}$, what is the spring constant?
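In case you want to check your result, here is one way to work the exercise (using magnitudes): the displacement is $|Δx| = 0.15\ \mathrm{m} - 0.10\ \mathrm{m} = 0.05\ \mathrm{m}$, so $k = \frac{|F|}{|Δx|} = \frac{60\ \mathrm{N}}{0.05\ \mathrm{m}} = 1200\ \mathrm{N/m}$.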
## FAQ
### Does Hooke's law apply to rubber bands?
Yes, rubber bands obey Hooke's law, but only for small applied forces. This limit depends on the band's physical properties, mainly its cross-sectional area: rubber bands with a greater cross-sectional area can bear greater applied forces than those with smaller cross-sectional areas.
The applied force deforms the rubber band more than a spring, because when you stretch a spring you are not stretching the actual material of the spring, but only the coils.
### Why is there a minus in the equation of Hooke's law?
The negative sign in the equation F = -kΔx indicates the action of the restoring force in the spring.
When we are stretching the spring, the restoring force acts in the opposite direction to the displacement, hence the minus sign. It wants the spring to come back to its initial position, and so restore it.
### What is the applied force if spring displacement is 0.7 m?
Let's consider the spring constant to be 40 N/m. Then the spring (restoring) force is F = -40 × 0.7 N = -28 N, i.e. a force of magnitude 28 N for a 0.7 m displacement.
The formula to calculate the applied force in Hooke's law is:
F = -kΔx
where:
F is the spring force (in N);
k is the spring constant (in N/m); and
Δx is the displacement (positive for elongation and negative for compression, in m).
### What happens if a string reaches its elastic limit?
The elastic limit of spring is its maximum stretch limit without suffering permanent damage.
When force is applied to stretch a spring, it can return to its original state once you stop applying the force, as long as you stay below the elastic limit. But if you continue to apply the force beyond the elastic limit, the spring will not return to its original pre-stretched state and will be permanently damaged.
0 votes
# Choose the correct answer in area lying in the first quadrant and bounded by the circle $x^2 + y^2 = 4$ and the lines $x = 0$ and $x = 2$ is
$(a)\;\pi\qquad(b)\;2 \pi\qquad(c)\;3 \pi\qquad(d)\;4 \pi$
## 1 Answer
0 votes
Toolbox:
• The area bounded by the curve $f(x)$, the x-axis and the ordinates $x=a$, $x=b$ is given by $A=\int_a^b y\;dx=\int_a^{b}f(x)\,dx.$
Here the area of the region bounded by the line x=0 and x=2 and the circle $x^2+y^2=4$ is the shaded portion as shown in the fig:
Hence $A=\int_0^2y dx$.
Here $y=\sqrt{4-x^2}$.
$A=\int_0^2\sqrt {4-x^2}dx.$
On integrating we get,
$A=\Big[\frac{x}{2}\sqrt{4-x^2}+\frac{4}{2}\sin^{-1}\big(\frac{x}{2}\big)\Big]_0^2$
On applying limits we get,
$A=\frac{2}{2}\sqrt{4-4}+\frac{4}{2}\sin^{-1}\big(\frac{2}{2}\big)$
$\;\;\;=0+2.\frac{\pi}{2}$
$\;\;\;=\pi\; sq.units.$
Hence A is the correct answer.
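(As a quick sanity check, not part of the original solution: the region is a quarter of a disc of radius $2$, so its area is $\frac{1}{4}\pi r^2=\frac{1}{4}\pi\cdot 4=\pi$ sq. units, which matches option (a).)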
answered Dec 20, 2013 by
# Can voting be disabled in the FAQ?
+ 0 like - 0 dislike
1258 views
On the FAQ, a few posts have been voted on, which has lead to the collapse of the entire structure of the FAQ.
Can voting be disabled, but ONLY in the FAQ specifically?
To fix the problem temporarily, I have voted on the posts which have not been voted on. But this isn't a permanent solution.
+ 2 like - 0 dislike
This is extremely difficult. I am actually trying to disable rep gained or lost from votes, which is just terrible (I would know even less favorable expressions). Let me see what I can do.
Could it eventually be an idea to put the FAQ into a custom page and leave the actual question just for propositions to new entries?
answered Mar 18, 2014 by (0 points)
@polarkernel I was about to say that the problem with moving everything into a custom page is that it is hard for users to give feedback on policies, but then of course, each such feedback can be posted as a separate question on meta. So that's a good idea.
Or maybe even, the question can retain the policies and FAQ just for the sake of discussion, and the official policies and FAQ can be moved into a custom page.
# All Questions
655 views
### Connection between properties of Dynamical and Ergodic Systems
Hi All While studying Topological and Ergodic Dynamics, I've got quite preplexed by the different Properties a system might have (minimality, regionally recurring, transitivity, mixing, ergodic, ...
330 views
### Variation formula of a metric [closed]
In Terry Tao's notes on the Poincare Conjecture, he makes a jump I can't understand. From differentiating the identity $g^{\alpha \beta}g_{\beta \gamma} = \delta^\alpha_\gamma$ we obtain the ...
498 views
### Any rigorous way to claim that sums with repeat summands are few?
Let $B \subset \mathbb{Z}^+$. Define $r_{B,h}(n)$ to be the number of ways of writing $n$ as the sum of $h$ elements of $B$ and $R_{B,h}(n)$ the number of ways to write $n$ as the sum of $h$ DISTINCT ...
87 views
### Is it possible to represent non-linear ranking type constraints as equivalent linear constraints?
I have formulated a linear program with binary indicator variables $z_i(a)$ which is equal to $1$ if the $i^{th}$ document is of rank $a$ and $0$ otherwise. The other variables in the linear ...
584 views
### Are there any natural recursively but not primitive-recursively axiomatized theories?
In principle, we could have a recursively axiomatized theory for which the property numbers-an-axiom (even relative to some routine Gödel numbering scheme) is recursive but not primitive recursive. ...
1k views
### Source needed (at final-year undergrad level) for the double cover of SO(3) by SU(2)
This is a bit of an ill-defined question, and I feel I should have been able to resolve it by combining Google with a few library trips, but I'm having difficulty narrowing down the search results to ...
2k views
### In what sense is the étale topology equivalent to the Euclidean topology?
I have heard it said more than once—on Wikipedia, for example—that the étale topology on the category of, say, smooth varieties over $\mathbb{C}$, is equivalent to the Euclidean topology. I have not ...
2k views
### Definition of infinite permutations
I've been trying to find a definition of an infinite permutation on-line without much success. Does there exist a canonical definition or are there various ways one might go about defining this? The ...
3k views
### Methods for solving Pell's equation?
It is known that the minimum solution of Pell's equation $x^2-dy^2=\pm1$ can be found from the continued fraction expansion of $\sqrt d$. Are there other methods for finding the minimum (or any other) ...
895 views
### The relationship between low dimensional topology and dynamics
I am just curious how dynamics get connected with low dimensional topology. Or it is just that we have now powerful computing machines therefore it is natural to use them on topological problems. What ...
803 views
### How Does a Borel Subgroup Know Which Weights Are Dominant
Let $G$ be a simple group (say $SL_n$) and let $B$ be a Borel subgroup (say upper triangular matrices). Then all irreducible representations of $G$ are induced from one-dimensional representations of ...
231 views
### Turning a measurable function to a bijection
Let $f:(0,1)\rightarrow (0,1)$ be a borel measurable function such that for every $y$ in $(0,1)$ , $f^{-1}(y)$ is a borel set and $\mu(f^{-1}(y))=0$ and also $\mu (f((0,1)))=1$ where $\mu$ is the ...
818 views
### Inverting a covariance matrix numerically stable
Given an $n\times n$ covariance matrix $C$ where $n$ around $250$, I need to calculate $x\cdot C^{-1}\cdot x^t$ for many vectors $x \in \mathbb{R}^n$ (the problem comes from approximating noise by an ...
504 views
### Infinite direct products and derived subgroups
Suppose $G_1, G_2, \dots, G_n, \dots$ are groups (I use countable sequences, though the question is also applicable for uncountable collections of groups). Suppose G is the unrestricted external ...
880 views
### constants in Gamma factors in functional equation for zeta functions.
Usually the Riemann zeta function $\zeta(s)$ gets multiplied by a "gamma factor" to give a function $\xi(s)$ satisfying a functional equation $\xi(s)=\xi(1-s)$. If I changed this gamma factor by a ...
35 views
### Euler equation formula [on hold]
when I am using Euler equation for Fourier transform integrals of type $\int_{-\infty}^{\infty} dx f(x) exp[ikx]$ I am getting following integrals: $\int_{-\infty}^{\infty} dx f(x) cos(kx)$ (for ...
74 views
### $F[[T]] \times F[[1/T]]$ fundamental domain, show compactness
Let $p$ be a prime number. What is the easiest way to see that $(\mathbb{F}_p((T)) \times \mathbb{F}_p((1/T)))/\mathbb{F}_p[T, 1/T]$ is compact? Here $\mathbb{F}_p[T, 1/T]$ is embedded in ...
100 views
### Is the boundary of an open, regular, bounded, path-connected, and simply connected set a Jordan curve
Trying to find weakest condition on an open bounded set to apply Carathéodory's theorem. My bounded open sets can be assumed to be pretty well-behaved, but I wonder if the above conditions are ...
91 views
### Identities involving sums of Catalan numbers
The $n$-th Catalan number is defined as $C_n:=\frac{1}{n+1}\binom{2n}{n}=\frac{1}{n}\binom{2n}{n+1}$. I have found the following two identities involving Catalan numbers, and my question is if ...
126 views
### Model over DVR for smooth projective curves
Let $C$ be a smooth, projective, geometrically irreducible curve of genus at least $2$ over a complete discrete valued field $F$ of characteristic zero (not necessarily algebraically closed). Let $R$ ...
90 views
### Expected size of determinant of $AA^T$ for non-square random $A$
If $A$ is chosen uniformly at random over all possible $m$ by $n$ (0,1)-matrices, what is the expected size of the absolute value of the determinant of $AA^T$. We can assume $m < n$ and all ... |
# Is $\frac{\mathrm d}{\mathrm dx}$ an operation?
What exactly is an operation? I understand that multiplication and addition are operations, but what about the derivative ($$\frac{\mathrm d}{\mathrm dx}$$).
Can an operation be a relation of expressions, or just of numbers?
• Yes, we usually think of differentiation as an operation! One way you can think about it is to take any polynomial expression (or something like a Taylor series of a non-polynomial expression) and re-express it as a (potentially-infinite) vector, where the value in the nth position is the coefficient on the $x^n$ term, starting at zero for the constant. Then you can express differentiation as this big (potentially-infinite) matrix, and differentiating this expression vector is just multiplying it by the matrix to produce a new vector. We always use the word “operator” for such things. – Jack Crawford Jun 7 at 0:30
• Note: addition and multiplication are binary operations – J. W. Tanner Jun 7 at 0:42
It all depends on your definitions. Wikipedia defines an operation in the following way.
In mathematics, an operation is a calculation from zero or more input values (called operands) to an output value.
Since functions aren't really values, I wouldn't consider differentiation to be an operation, since it's an association between input functions and output functions, not input values and output values.
Operators are a generalization of the notion of an operation. They need not necessarily input or output values. More generally, the can input and output values of any prespecified set. According to Wikipedia, the definition of an operator is as follows.
In mathematics, an operator is generally a mapping that acts on elements of a space to produce elements of another space (possibly the same space, sometimes required to be the same space).
There are numerous sorts of classes of operators that have been given names in mathematics. One class that differentiation falls under is that of linear operators, which are operators acting on vector spaces which satisfy a particular property (you can check out more information here). In the context of differentiation, the key property that differentiation satisfies in order to be able to regard it as a linear operator is the following one, which holds for any functions $$f,g$$ and values $$a,b$$.
$$\frac{d}{dx}\left( af(x)+bg(x)\right) = a\frac{d}{dx}f(x) + b\frac{d}{dx}g(x)$$
• If an operator can take maps values of any set to values from any other set, what is the difference between an operator and a relation? – Frasch Jun 7 at 0:47
• @Frasch Briefly, an operator is a map whereas a relation is an element from one (or more than one) set (acted on by a map) to be mapped to the co-domain by the map. Quote: 'In mathematics, a binary relation over two sets A and B is a set of ordered pairs (a, b) consisting of elements a of A and elements b of B; in short, it is a subset of the Cartesian product A × B. It encodes the information of relation: an element a is related to an element b if and only if the pair (a, b) belongs to the set.' en.wikipedia.org/wiki/Binary_relation – Mathematicing Jun 7 at 0:50
In mathematics, an operation on the set $$X$$ is a function from a power of $$X$$ to $$X$$; that is, a function that takes some number of elements of $$X$$ and returns an element of $$X$$.
You can have finitary operations, that take a finite number of arguments. For example, the usual sum, difference, product of real numbers is an operation on the real numbers. The function that takes three $$3$$-dimensional vectors in $$\mathbb{R}^3$$, $$\mathbf{u}$$, $$\mathbf{v}$$, and $$\mathbf{z}$$, and returns the vector $$(\mathbf{u}\times\mathbf{v})\times\mathbf{z}$$ is an operation on $$\mathbb{R}^3$$. And the function that takes a positive real number $$x$$ and sends it to $$\frac{1}{x}$$ is an operation on the positive real numbers.
A finitary operation has an arity, which is the number of arguments it takes. Sum, difference, product, are binary (or $$2$$-ary) operations. The one I described on $$\mathbb{R}^3$$ is a ternary (or $$3$$-ary) operation. The function $$x\mapsto \frac{1}{x}$$ on the positive reals is a unary (or $$1$$-ary) operation. (There are even nullary, or $$0$$-ary operations, but those are tricky, so forget about them for now).
There are also infinitary operations, that take infinitely many arguments, but let’s also ignore those for now.
Now, the first thing to note is that when you look at the differentiation “function”, $$\frac{d}{dx}$$, this takes as inputs functions and has functions as outputs. That’s a good step. But the next question is: what is our set?
Can’t be the set of all continuous functions, because not every continuous function has a derivative. It can’t be the set of differentiable functions, because the derivative of a differentiable function does not have to be differentiable, etc.
So, in the abstract, your question has no answer until we know what collection of functions you are thinking about.
Here are some collections of functions on which $$\frac{d}{dx}$$ is, indeed, a unary operation:
1. The collection of all analytic functions $$\mathbb{R}\to\mathbb{R}$$; these are functions that have Taylor expansions around any point. The derivative of such a function is again such a function.
2. The collection of infinitely differentiable functions (slightly larger than the collection of analytic functions).
3. The collection of all polynomial functions (since the derivative of a polynomial function is a polynomial function).
4. The collection of all functions expressible as polynomials in $$\sin(x)$$ and $$\cos(x)$$.
There are other such collections on which $$\frac{d}{dx}$$ is an operation. In fact, many such collections form a vector space, and $$\frac{d}{dx}$$ will be a linear operator (a linear transformation from the vector space to itself).
For other sets, such as the collection of all differentiable real valued functions of real variable, you technically don’t have an operation, because the image of such a function may fall “outside” the initial set.
• I am reminded of your other, more famous answer... speaking of which, by "slightly larger than the collection of analytic functions", did you have things like $\exp(-x^{-2})$ in mind? – J. M. is a poor mathematician Aug 15 at 5:21
• @J.M.isapoormathematician: I wasn’t thinking of any particular example (I just know that there are functions that are smooth but not analytic). A common example is the function that is $0$ for $x\leq 0$, and $\exp(-\frac{1}{x})$ for $x\gt 0$. – Arturo Magidin Aug 15 at 5:53
$$\frac{d}{dx}$$ is a linear operator.
And expanding on the comment made by Jack Crawford, a linear operator has a matrix representation, which depends completely on the choice of basis for the finite vector space on which it acts.
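To make this concrete, here is a small illustration (a sketch added here, not taken from the answers above): restrict $\frac{d}{dx}$ to polynomials of degree at most $3$ and use the basis $\{1, x, x^2, x^3\}$, so a polynomial $a_0 + a_1x + a_2x^2 + a_3x^3$ becomes the coefficient vector $(a_0, a_1, a_2, a_3)$. Then
$$\begin{pmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 3\\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} a_0\\ a_1\\ a_2\\ a_3 \end{pmatrix} = \begin{pmatrix} a_1\\ 2a_2\\ 3a_3\\ 0 \end{pmatrix},$$
which is exactly the coefficient vector of $$\frac{d}{dx}\left(a_0 + a_1x + a_2x^2 + a_3x^3\right) = a_1 + 2a_2x + 3a_3x^2.$$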
• That’s only true when the vector space is finite dimensional (unless you extend the notion of “matrix representation” to “infinite matrices”). In addition, it is wrong to talk about the basis of a vector space, as most vector spaces have many bases (in fact, the only ones that have a unique basis are the zero dimensional space over any field, and a one dimensional space over the field of 2 elements). – Arturo Magidin Jun 7 at 1:01
• @ArturoMagidin Should have worded it better - good catch. – Mathematicing Jun 7 at 1:06 |
## Paul Johnson
## 2013-04-01
## regression-doublelog-1.R
## Question. What do you get when you "log both sides" of a
## regression model?
## In economics and biology, the double-log model is very
## common. It describes an interactive, multiplicative process.
##
## With just one predictor, the theory is
## y = b0 * x^b1 * exp(e)
## where I wish I could add subscripts on y, x, and e.
## Using the rules of logs, that simplifies to the linear
## model. If there were predictors x1,x2, and so forth,
## with exponents.
## Recall the log laws
## 1. log(x^b1) = b1 * log(x)
## 2. log(c*d) = log(c) + log(d)
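## Putting those two log laws together: taking logs of both sides of
##     y = b0 * x^b1 * exp(e)
## gives the linear model that is actually fit further below
## (comment added for clarity):
##     log(y) = log(b0) + b1*log(x) + e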
set.seed(123123)
dat <- data.frame( x = runif(1000))
## The 3 parameters are b0, b1, and stde
b0 <- 0.1
b1 <- 1.2
stde <- 2
dat$y <- b0 * dat$x^(b1) * exp(rnorm(1000, m = 0, s = stde))
plot(y ~ x, data = dat, main="No Apparent Relationship?")
## Interesting. Looks like nothing.
m0 <- lm(y ~ x, data = dat)
summary(m0)
##
## Call:
## lm(formula = y ~ x, data = dat)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.73 -0.40 -0.16 0.01 48.36
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.102 0.113 -0.90 0.37
## x 0.840 0.197 4.26 2.2e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.8 on 998 degrees of freedom
## Multiple R-squared: 0.0179, Adjusted R-squared: 0.0169
## F-statistic: 18.2 on 1 and 998 DF, p-value: 2.21e-05
## One of the usual transformations we try is the log, which
## is justified either by idea of changing a "skewed" variable to
## a more normal one, or by the desire to fit the interactive
## model above.
## Test out plot's built-in antilogger
plot(y ~ x, data = dat, log = "xy")
## Better fix the labels, urgently
plot(y ~ x, data = dat, log = "xy", xlab = "log(x)", ylab = "log(y)")
## Previous same as this, this is more usual to the way I would do this:
plot(log(y) ~ log(x), data = dat)
m1 <- lm(log(y) ~ log(x), data = dat)
summary(m1)
##
## Call:
## lm(formula = log(y) ~ log(x), data = dat)
##
## Residuals:
## Min 1Q Median 3Q Max
## -6.157 -1.393 0.071 1.349 6.308
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -2.3610 0.0905 -26.1 <2e-16 ***
## log(x) 1.1979 0.0634 18.9 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.01 on 998 degrees of freedom
## Multiple R-squared: 0.263, Adjusted R-squared: 0.263
## F-statistic: 357 on 1 and 998 DF, p-value: <2e-16
abline(m1, col = "red")
## The challenge of interpretation is to get predicted
## values in a scale that interests us.
## I notice termplot does the same thing that rockchalk::plotCurves
## will do
termplot(m1, partial = TRUE, se = TRUE)
library(rockchalk)
##
## Attaching package: 'rockchalk'
##
## The following object is masked from 'package:MASS':
##
## mvrnorm
## needs version 1.7.2 or newer
m1pc <- plotCurves(m1, plotx = "x", interval = "conf")
## if your rockchalk is not on the development path, can still
## construct that manually. Here's how
nd <- data.frame(x = plotSeq(dat$x, 40))
pdat <- predict(m1, newdata = nd, interval = "conf")
head(pdat)
##        fit     lwr     upr
## 1  -10.895 -11.666 -10.125
## 2   -6.716  -7.065  -6.367
## 3   -5.904  -6.176  -5.632
## 4   -5.424  -5.653  -5.196
## 5   -5.083  -5.283  -4.883
## 6   -4.817  -4.997  -4.638
nd <- cbind(nd, pdat)
nd
##            x     fit     lwr     upr
## 1  0.0008053 -10.895 -11.666 -10.125
## 2  0.0263761  -6.716  -7.065  -6.367
## 3  0.0519469  -5.904  -6.176  -5.632
## 4  0.0775177  -5.424  -5.653  -5.196
## 5  0.1030885  -5.083  -5.283  -4.883
## 6  0.1286593  -4.817  -4.997  -4.638
## 7  0.1542301  -4.600  -4.764  -4.437
## 8  0.1798009  -4.417  -4.569  -4.264
## 9  0.2053717  -4.257  -4.401  -4.114
## 10 0.2309425  -4.117  -4.253  -3.980
## 11 0.2565133  -3.991  -4.123  -3.859
## 12 0.2820841  -3.877  -4.006  -3.749
## 13 0.3076549  -3.773  -3.899  -3.647
## 14 0.3332257  -3.677  -3.803  -3.552
## 15 0.3587965  -3.589  -3.714  -3.464
## 16 0.3843673  -3.506  -3.631  -3.382
## 17 0.4099381  -3.429  -3.555  -3.304
## 18 0.4355089  -3.357  -3.483  -3.230
## 19 0.4610797  -3.288  -3.417  -3.160
## 20 0.4866505  -3.224  -3.354  -3.094
## 21 0.5122213  -3.162  -3.294  -3.031
## 22 0.5377921  -3.104  -3.238  -2.970
## 23 0.5633629  -3.048  -3.185  -2.912
## 24 0.5889337  -2.995  -3.134  -2.857
## 25 0.6145045  -2.944  -3.085  -2.803
## 26 0.6400753  -2.896  -3.039  -2.752
## 27 0.6656461  -2.849  -2.994  -2.703
## 28 0.6912168  -2.803  -2.952  -2.655
## 29 0.7167876  -2.760  -2.911  -2.609
## 30 0.7423584  -2.718  -2.871  -2.565
## 31 0.7679292  -2.677  -2.833  -2.522
## 32 0.7935000  -2.638  -2.796  -2.480
## 33 0.8190708  -2.600  -2.761  -2.439
## 34 0.8446416  -2.563  -2.727  -2.400
## 35 0.8702124  -2.528  -2.693  -2.362
## 36 0.8957832  -2.493  -2.661  -2.325
## 37 0.9213540  -2.459  -2.630  -2.289
## 38 0.9469248  -2.426  -2.599  -2.254
## 39 0.9724956  -2.394  -2.569  -2.219
## 40 0.9980664  -2.363  -2.541  -2.186
plot(log(y) ~ x, data = dat, col = gray(.7))
lines(fit ~ x, data = nd, col = "red")
lines(lwr ~ x, data = nd, col = "red", lty = 2)
lines(upr ~ x, data = nd, col = "red", lty = 2)
## I really want to plot this in the (x,y) plane
## not the log(x) or log(y). Here's an easy way
## using the results from before.
nd$ypred <- exp(nd$fit)
head(nd)
## x fit lwr upr ypred
## 1 0.0008053 -10.895 -11.666 -10.125 1.854e-05
## 2 0.0263761 -6.716 -7.065 -6.367 1.212e-03
## 3 0.0519469 -5.904 -6.176 -5.632 2.729e-03
## 4 0.0775177 -5.424 -5.653 -5.196 4.408e-03
## 5 0.1030885 -5.083 -5.283 -4.883 6.202e-03
## 6 0.1286593 -4.817 -4.997 -4.638 8.087e-03
titl <- "double-log regression in x,y space"
plot(y ~ x, data = dat, col = gray(.8), main = titl)
lines(ypred ~ x, data = nd, col = "red", lwd = 2)
## The student can go back to the beginning and
## change the coefficients to see how these plots
## might change. |
Warning: This is an old version. The latest stable version is Version 11.0.1.
# class decoder¶
Scope: kodo_slide
## Brief description¶
Implementation of a complete Random Linear Network coding sliding window decoder.
## Member functions (public)¶
decoder ()
decoder (decoder && other)
decoder & operator= (decoder && other)
decoder (const decoder & other)
decoder & operator= (const decoder & other)
~decoder ()
void reset ()
void configure (finite_field field, uint32_t symbol_size)
finite_field field () const
uint64_t symbol_size () const
uint64_t stream_symbols () const
uint64_t stream_lower_bound () const
uint64_t stream_upper_bound () const
uint64_t push_front_symbol (uint8_t * symbol)
uint64_t pop_back_symbol ()
uint64_t window_symbols () const
uint64_t window_lower_bound () const
uint64_t window_upper_bound () const
void set_window (uint64_t lower_bound, uint64_t symbols)
uint64_t coefficient_vector_size () const
void set_seed (uint64_t seed_value)
void generate (uint8_t * coefficients)
void read_symbol (uint8_t * symbol, uint8_t * coefficients)
void read_source_symbol (uint8_t * symbol, uint64_t index)
uint64_t symbols_missing () const
uint64_t symbols_partially_decoded () const
uint64_t symbols_decoded () const
uint64_t rank () const
bool is_symbol_decoded (uint64_t index) const
void set_log_stdout ()
void set_zone_prefix (const std::string & zone_prefix)
## Member Function Description¶
decoder ()
Default constructor.
decoder (decoder && other)
R-value copy constructor.
decoder & operator= (decoder && other)
R-value move assign operator.
decoder (const decoder & other)
Copy constructor (disabled). This type is only movable.
decoder & operator= (const decoder & other)
Copy assign operator (disabled). This type is only movable.
~decoder ()
Destructor.
void reset ()
Resets the coder and ensures that the object is in a clean state. A coder may be reset many times.
void configure (finite_field field, uint32_t symbol_size)
Configures the decoder with the given parameters. This must be called before anything else. If needed, configure can be called again; this is useful for reusing an existing coder. Note that a reconfiguration always implies a reset, so the coder will be in a clean state after the operation.
Parameter field:
the chosen finite field
Parameter symbol_size:
the size of a symbol in bytes
finite_field field ()
Returns:
The finite field used.
uint64_t symbol_size ()
Returns:
The size of a symbol in the stream in bytes.
uint64_t stream_symbols ()
Returns:
The total number of symbols known at the decoder. The number of symbols in the decoding window MUST be less than or equal to this number. The total range of valid symbol indices is
for (uint64_t i = 0; i < stream_symbols(); ++i)
{
std::cout << i + stream_lower_bound() << "\n";
}
uint64_t stream_lower_bound ()
Returns:
The index of the oldest symbol known by the decoder. This symbol may not be inside the window but can be included in the window if needed.
uint64_t stream_upper_bound ()
Returns:
The upper bound of the stream. The range of valid symbol indices goes from [ decoder::stream_lower_bound() , decoder::stream_upper_bound() ). Note the stream is a half-open interval, going from decoder::stream_lower_bound() to decoder::stream_upper_bound() - 1.
uint64_t push_front_symbol (uint8_t * symbol)
Adds a new symbol to the front of the decoder. Increments the number of symbols in the stream and increases the decoder::stream_upper_bound() .
Parameter symbol:
Pointer to the symbol. Note, the caller must ensure that the memory of the symbol remains valid as long as the symbol is included in the stream. Once the symbol is popped from the stream, the caller is responsible for freeing the memory if needed.
Returns:
The stream index of the symbol being added.
uint64_t pop_back_symbol ()
Remove the “oldest” symbol from the stream. Increments the decoder::stream_lower_bound() .
Returns:
The index of the symbol being removed
uint64_t window_symbols ()
Returns:
The number of symbols currently in the coding window. The window must be within the bounds of the stream.
uint64_t window_lower_bound ()
Returns:
The index of the “oldest” symbol in the coding window.
uint64_t window_upper_bound ()
Returns:
The upper bound of the window. The range of valid symbol indices goes from [ decoder::window_lower_bound() , decoder::window_upper_bound() ). Note the window is a half-open interval, going from decoder::window_lower_bound() to decoder::window_upper_bound() - 1.
void set_window (uint64_t lower_bound, uint64_t symbols)
The window represents the symbols which will be included in the next decoding. The window cannot exceed the bounds of the stream. Example: if window_lower_bound = 4 and window_symbols = 3, the symbol indices 4, 5, 6 will be included.
Parameter lower_bound:
Sets the index of the oldest symbol in the window.
Parameter symbols:
Sets number of symbols within the window.
uint64_t coefficient_vector_size ()
Returns:
The size of the coefficient vector in the current window in bytes. The number of coefficients is equal to the number of symbols in the window. The size in bits of each coefficient depends on the finite field chosen. A custom coding scheme can be implemented by generating the coding vector manually. Alternatively the built-in generator can be used. See decoder::set_seed (…) and decoder::generate (…).
void set_seed (uint64_t seed_value)
Seed the internal random generator function. If the same seed is used on the encoder and the decoder, the exact same set of coefficients will be generated.
Parameter seed_value:
A value for the seed.
void generate (uint8_t * coefficients)
Generate coding coefficients for the symbols in the coding window according to the specified seed (see decoder::set_seed (…)).
Parameter coefficients:
Buffer where the coding coefficients should be stored. This buffer must be decoder::coefficient_vector_size() large in bytes.
void read_symbol (uint8_t * symbol, uint8_t * coefficients)
Decodes a coded symbol according to the coding coefficients. Both buffers may be modified during this call. The reason for this is that the decoder will directly operate on the provided memory for performance reasons. Before calling this function you need to instruct the decoder how to map the coding coefficients to the stream. This is done using the decoder::set_window() function. When reading a coded symbol from the encoder, these are the typical operations performed:
1. Read the seed and encoding window from the incoming packet
2. Call the decoder::set_seed() and decoder::set_window() functions to update the state of the decoder.
3. Call the decoder::generate() to generate the coding coefficients
4. Pass the encoded symbol and the coding coefficients to the decoder::read_symbol() function.
Parameter symbol:
Buffer representing a coded symbol.
Parameter coefficients:
The coding coefficients used to create the encoded symbol
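A minimal usage sketch of that read path, using only the member functions documented on this page; the helper function name, variable names and the surrounding I/O are illustrative assumptions, and the library header include is not shown because its path is not given here.
#include <cstdint>
#include <vector>
// The kodo_slide decoder header must also be included; its exact path is not
// stated in this documentation, so it is omitted here.

// Decode one incoming coded packet (hypothetical helper, for illustration only).
void handle_coded_packet(kodo_slide::decoder& decoder,
                         uint64_t seed,
                         uint64_t window_lower_bound,
                         uint64_t window_symbols,
                         uint8_t* coded_symbol)
{
    // Steps 1-2: the seed and the encoding window were read from the packet;
    // mirror them in the decoder's state.
    decoder.set_window(window_lower_bound, window_symbols);
    decoder.set_seed(seed);

    // Step 3: re-generate the coding coefficients used by the encoder.
    std::vector<uint8_t> coefficients(decoder.coefficient_vector_size());
    decoder.generate(coefficients.data());

    // Step 4: hand the coded symbol and the coefficients to the decoder.
    decoder.read_symbol(coded_symbol, coefficients.data());
}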
void read_source_symbol (uint8_t * symbol, uint64_t index)
Add a source symbol at the decoder. A source symbol is a unit of data originating from the source that has not been encoded. It is not necessary to call the decoder::set_window() function before reading a source symbol. However, the symbol must be within the stream i.e. the following conditions must hold:
uint8_t* data = "... some data ...";
uint64_t index = 32; // The index of the source symbol
assert(index >= decoder.stream_lower_bound());
assert(index < decoder.stream_upper_bound());
Parameter symbol:
Buffer containing the source symbol’s data.
Parameter index:
The index of the source symbol in the stream
uint64_t symbols_missing ()
Returns:
The number of missing symbols in the stream.
uint64_t symbols_partially_decoded ()
Returns:
The number of partially decoded symbols in the stream.
uint64_t symbols_decoded ()
Returns:
The number of decoded symbols in the stream.
uint64_t rank ()
The rank of a decoder indicates how many symbols have been partially or fully decoded. This number is also equivalent to the number of pivot elements we have in the stream.
Returns:
The rank of the decoder
bool is_symbol_decoded (uint64_t index)
Parameter index:
Index of the symbol to check.
Returns:
True if the symbol is decoded (i.e. it corresponds to a source symbol), and otherwise false.
void set_log_stdout ()
Enables logging in a stack. The output will be written to standard out.
void set_zone_prefix (const std::string & zone_prefix)
Sets a zone prefix for the logging output. The zone prefix will be appended to all the output. This makes it possible to have two stacks that both log to standard out, but still differentiate the output.
Parameter zone_prefix:
The zone prefix to append to all logging zones |
# P, Q and R were partners and the balance of their capital accounts on 1st April 2015 were Rs. 8,00,000 (credit); Rs. 5,00,000 (credit) and Rs. 20,000 (debit) respectively. As per the terms of partnership agreement interest on capitals is to be allowed @ 10% p.a. and is to be charged on drawings @ 12% p.a. Partners withdrew as follows: (i) P withdrew Rs. 10,000 p.m. at the end of each month; (ii) Q withdrew Rs. 1,20,000 out of capital on 1st January 2016 (iii) R withdrew Rs. 1,20,000 during the year. The profit for the year ended 31st March 2016 amounted to Rs.4,30,000 You are required to prepare journal entries and partner's capital accounts.
Re: Paper and slides on indefiniteness of CH
Dear Sy,
Thanks so much for your patient responses to my elementary questions! I now see that I was viewing those passages in your BSL paper through the wrong lens, but rather than detailing the sources of my previous errors, I hope you’ll forgive me in advance for making some new ones. As I now (mis?)understand your picture, it goes roughly like this…
We reject any ‘external’ truth to which we must be faithful, but we also deny that the resulting ‘true-in-V’ arises strictly out of the practice (as my Arealist would have it). One key is that ‘true-in-V’ is answerable, not to a realist ontology or some sort of ‘truth value realism’, but to various intrinsic considerations. The other key is that it’s also answerable to a certain restricted portion of the practice, the de facto set-theoretic claims. These are the ones that ‘due to the role that they play in the practice of set theory and, more generally, of mathematics, should not be contradicted by any further candidate for a set-theoretic statement that may be regarded as ultimate and unrevisable’ (p. 80). (Is it really essential that these statements be ‘ultimate and unrevisable’? Isn’t it enough that they’re the ones we accept for now, reserving the right to adjust our thinking as we learn more?) These include ZFC and the consistency of LCs.
The intrinsic constraints aren’t limited to items that are ‘implicit in the concept of set’. They also include items ‘implicit in the concept of a set-theoretic universe’. (This sounds reminiscent of Tony’s reading in ‘Gödel’s conceptual realism’. Do you find this congenial?) One of the items present in the latter concept is a notion of maximality. The new intrinsic considerations arise at this point, when we begin to consider, not just V, but a range of different ‘pictures of V’ and their interrelations in the hyperuniverse. When we do this, we come to see that the vague principle of maximality derived from the concept of a set-theoretic universe can be made more precise — hence the schema of Logical Maximality and its various instances.
At this point, we have the de facto part of practice and various maximality principles (and more, but let’s stick with this example for now). If the principles conflict with the de facto part, they’re rejected. Of the survivors, they’re further tested by their ability to settle independent questions.
Is this at least a bit closer to the story you want to tell?
All best,
Pen
Re: Paper and slides on indefiniteness of CH
Dear Sy,
There is no retreat from my view that the concept of the continuum (qua the set of arbitrary subsets of the natural numbers) is an inherently vague or indefinite one, since any attempt to make it definite (e.g. via L or an L-like inner model) runs counter to what it is supposed to be about. I talk here about the concept of the continuum, not the supposed continuum itself, as a confirmed anti-platonist. Mathematics in my view is about intersubjectively shared (human) conceptions of idealized structures, not any supposed such structures in and of themselves. See my article “Conceptions of the continuum” (Intellectica 51 (2009), 169-189).
I can’t have claimed that I have established that CH is neither a definite mathematical problem nor a definite logical problem, since one can’t say precisely what such problems are in either case. Rather, as workers in mathematics and logic, we generally know one when we see one. So, the Goldbach conjecture and the Riemann Hypothesis (not “Reimann” as has appeared elsewhere in this exchange) are definite mathematical problems. And the decidability of the first order theory of the reals with exponentiation is a definite logical problem. (Logical problems make use of the concept of formal language and are relative to models or axioms.) Even though CH has the appearance of a definite mathematical problem, it has ceased to be one for all intents and purposes because it was long recognized that only logical considerations could be brought to bear to settle it, if at all. So then what would make it a definite logical problem? Something as definite as: CH is true in L. I can’t exclude that some time in the future, some model or axiom system will be produced that will be as canonical in nature for some concept of set as L is for the concept of hereditarily predicatively definable set. But I’m not holding my breath either.
I don’t know whether your concept of set-theoretical truth can be assimilated to Maddy’s A-realism, but in either case I see it as trying to have your platonist cake without eating it. It allows you to accept CH v not-CH, but so what?
Best,
Sol
Re: Paper and slides on indefiniteness of CH
pre-PS: Thanks Sol for correcting my spelling. My problem with German has plagued me my entire academic life.
Dear Sy,
I think we are getting to the point where we are simply talking past one another. Also the nesting of messages is making this thread somewhat difficult to follow (perhaps a line break issue or a platform issue).
You have made an important point for me: a rich structure theory together with Goedelian ‘success’ is insufficient to convince number-theorists that ERH is true, and by analogy these criteria should not suffice to convince set-theorists that PD is true.
Unless there is something fundamentally different about LC, which there is.
Many (well at least 2) set theorists are convinced that PD is true. The issue is why do you think Con PD is true. You have yet to give any coherent reason for this. You responded:
The only ‘context’ needed for Con PD is the empirical calibration provided by a strict ‘hierarchy’ of consistency strengths. That makes no assumptions about PD.
Such a position is rather dubious to me. The consistency hierarchy is credible evidence for the consistency of LC only in the context of large cardinals as potentially true axioms. Remove that context (as IMH and its variants all do) then why is the hierarchy evidence for anything?
Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.
Next point and within the discussion on strong rank maximality. I wrote:
Question 1: How can we even be sure that there is no pairwise incompatibility here which argues against the very concept of the $\Pi_2$ consequences of strong rank maximality?
and you responded:
Let T be the set of first-order sentences which hold in all strongly maximal universes. These are precisely the first-order consequences of ‘strong maximality’. Then any two sentences in T are compatible with each other.
I realized after sending the message I should have elaborated on what I had in mind on the incompatibility issue and so I will do so here. I imagine many the followers of this thread (if any are left) will want to skip this.
Incompatibility
Let me explain the sort of incompatibility I am concerned with.
Suppose $M$ is strongly rank maximal. One might have a $\Pi_2$-sentence $\phi_1$ certified by a rank preserving extension of $M$ with X-cardinals and a $\Pi_2$-sentence $\phi_2$ certified by a rank preserving extension with Y-cardinals.
What if X-cardinals and Y-cardinals are mutually incompatible or worse, the existence of X-cardinals implies $\phi_2$ cannot hold (or vice-versa). Then how could $\phi_1\wedge\phi_2$ be certified? If the certifiable $\Pi_2$-sentences are not closed under finite conjunction then there is a problem.
Let $N_X$ be a rank-preserving extension of $M$ with a proper class of X-cardinals which certifies $\phi_1$. Let’s call this a good witness if $\phi_1$ holds in all the set-generic extensions of $N_X$ and all the $\Pi_2$-sentences which hold in all the set-generic extensions of $N_X$, are deemed certified by $N_X$ (this is arguably reasonable given the persistence of large cardinals under small forcing).
Similarly let's suppose that $N_Y$ is a rank-preserving extension of $M$ with a proper class of Y-cardinals which certifies $\phi_2$ and is a good witness.
Assuming the $\Omega$ Conjecture is provable (and recall our base theory is ZFC + a proper class of Woodin cardinals) then one of the following must hold:
1. $\phi_1 \wedge \phi_2$ holds in all the set-generic extensions of $N_X$ (and so $N_X$ certifies $\phi_1\wedge \phi_2$).
2. $\phi_1 \wedge \phi_2$ holds in all the set-generic extensions of $N_Y$ (and so $N_Y$ certifies $\phi_1\wedge \phi_2$).
To me this is a remarkable fact. I see no way to prove it at this level of generality without the $\Omega$ Conjecture.
You wrote:
You have it backwards. The point of beginning with arbitrary extensions is precisely to avoid any kind of bias. If one deliberately chooses a more restricted notion of extension to anticipate a later desire to have large cardinals then this is a very strong and in my view unacceptable bias towards the existence of large cardinals.
I completely disagree. Having more models obscures truth that is my whole point.
Moving on, I want to return to the inner model issue and illustrate an even deeper sense (beyond correctness issues) in which the Inner Model Program is not just about inner models.
Consider the following variation of the inner model program. This is simply the definable version of your “internal consistency” question which you have explored quite bit.
Question: Suppose that there exists a proper class of X-cardinals. Must there exist an inner model N with a proper class of X-cardinals such that $N \subseteq \text{HOD}$?
(Of course, if one allows more than the existence of a proper class of X cardinals then there is a trivial solution so here it is important that one is only allowed to use the given large cardinals).
For “small” large cardinals even at the level of Woodin cardinals I know of no positive solution that does not use fine-structure theory.
Define a cardinal $\delta$ to be $n$-hyper-extendible if $\delta$ is extendible relative to the $\Sigma_n$-truth predicate.
Theorem: Suppose that HOD Conjecture is true. Suppose that for each $n$, there is an $n$-hyper-extendible cardinal. Then for each $n$ there is an $n$-hyper extendible cardinal in HOD (this is a scheme of course).
The HOD Conjecture could have an elementary proof (if there is an extendible cardinal). This does not solve the inner model problem for hyper-extendible cardinals or even shed any light on the inner model problem.
Finally you wrote:
The HP focuses on truth, not on consistency. It seems that the next generation of axioms will not be of the large cardinal variety (consistency appears to be already exhausted; I think it likely that Reinhardt cardinals are inconsistent even without AC) but concern new and subtle forms of absoluteness / powerset maximality.
I agree on Reinhardt cardinals. But obviously disagree on the route to new hierarchies. Certainly HP has yet to indicate any promise for being able to reach new levels of consistency strength since even reaching the level of “ZFC + infinitely many Woodin cardinals” looks like a serious challenge for HP. It would be interesting to even see a conjecture along these lines.
Perhaps the most pressing challenge is to justify large cardinal existence as a consequence of well-justified criteria for the selection of preferred universes. This requires a new idea. Some have suggested structural reflection, but I don’t find this convincing due to the arbitrariness in the relationship between V and its reflected versions.
I am not asking how HP could justify the existence of large cardinals. I am simply asking how HP is ever going to even argue for the consistency of just PD (which you have already declared a “truth”). If HP cannot do this then how is it ever credibly going to make progress on the issue of truth in set theory?
However one conceives of truth in set theory, one must have answers to:
1. Is PD true?
2. Is PD consistent?
You have examples of how HP could lead to answering the first question. But no examples of how HP could ever answer the second question. Establishing Con LC for levels past PD looks even more problematic.
There is strong meta-mathematical evidence that the only way to ultimately answer 2. with “yes” is to answer 1. with “yes”. This takes us back to my basic confusion about the basis for your conviction in Con PD.
The fundamental technology (core-model methods) which is used in establishing the “robustness” of the consistency hierarchy which you cite as evidence, shows that whenever “ZFC + infinitely many Woodin cardinals” is established as a lower bound for some proposition (such as PFA, failure of square at singular strong limits, etc), that proposition implies PD. For these results (PFA, $\square$ etc.) there are no other lower bound proofs known. There is a higher level consistency hierarchy (which is completely obscured by your more-is-better approach to the hyper-universe).
You also cite strictness of the hierarchy as an essential component of the evidence, which you must in light of the ERH example, and so the lower bound results are key in your view. Yet as indicated above, for the vast majority (if not all) of these lower bound results, once one is past the level of Con PD, one is actually inferring PD. It seems to me that by your own very criteria, this is a far stronger argument for PD then HP is ever going to produce for the negation of PD.
All those comments aside, we have an essential disagreement at the very outset. I insist that any solution to CH must be in the context of strong rank maximality (and assuming the provability of the $\Omega$ Conjecture this becomes a perfectly precise criterion). You insist that this is too limited in scope and that we should search outside this “box”.
I agree that there are interesting models outside this box. But I strongly disagree that V is one of them.
Regards,
Hugh
Re: Paper and slides on indefiniteness of CH
Dear Penny,
On Wed, 6 Aug 2014, Penelope Maddy wrote:
As I now (mis?)understand your picture, it goes roughly like this …

We reject any ‘external’ truth to which we must be faithful, but we also deny that the resulting ‘true-in-V’ arises strictly out of the practice (as my Arealist would have it). One key is that ‘true-in-V’ is answerable, not to a realist ontology or some sort of ‘truth value realism’, but to various intrinsic considerations. The other key is that it’s also answerable to a certain restricted portion of the practice, the de facto set-theoretic claims. These are the ones that ‘due to the role that they play in the practice of set theory and, more generally, of mathematics, should not be contradicted by any further candidate for a set-theoretic statement that may be regarded as ultimate and unrevisable’ (p. 80). (Is it really essential that these statements be ‘ultimate and unrevisable’? Isn’t it enough that they’re the ones we accept for now, reserving the right to adjust our thinking as we learn more?) These include ZFC and the consistency of LCs.

The intrinsic constraints aren’t limited to items that are ‘implicit in the concept of set’. They also include items ‘implicit in the concept of a set-theoretic universe’. (This sounds reminiscent of Tony’s reading in ‘Gödel’s conceptual realism’. Do you find this congenial?) One of the items present in the latter concept is a notion of maximality. The new intrinsic considerations arise at this point, when we begin to consider, not just V, but a range of different ‘pictures of V’ and their interrelations in the hyperuniverse. When we do this, we come to see that the vague principle of maximality derived from the concept of a set-theoretic universe can be made more precise — hence the schema of Logical Maximality and its various instances.

At this point, we have the de facto part of practice and various maximality principles (and more, but let’s stick with this example for now). If the principles conflict with the de facto part, they’re rejected. Of the survivors, they’re further tested by their ability to settle independent questions.

Is this at least a bit closer to the story you want to tell?
Yes, but as my views have evolved slightly since Tatiana and I wrote the BSL paper I’d like to take the liberty (see below) of fine-tuning and enhancing the picture you present above. My apologies for these modifications, but I understand that changes in one’s point of view are not prohibited in philosophy?
As you say, I take “true in V” to be free of any realist ontology: there is no fixed class of objects constituting the elements of the universe of all sets. But this does not prevent us from having a conception of this universe or from making assertions about what is true in it. My notion of set-theoretic truth (truth in V) consists of those conclusions we can draw based upon intrinsic features of the relevant concepts. The relevant concepts include of course the concept of “set”, but also (and this is a special aspect of the HP) the concept of “set-theoretic universe” (“picture of V”).
Intrinsic features of the concept of set include (and in my view are limited to) what one can derive from the maximal iterative concept (together with some other basic features of sets), resulting in the axioms of ZFC together with reflection principles. These are concerned with “internal” features of V.
To understand intrinsic features of the concept of set-theoretic universe we need a context in which we may compare universes and this is provided by the hyperuniverse. The hyperuniverse admits only countable universes (countable pictures of V) but by Löwenheim-Skolem this will suffice, as our aim is to clarify the truth of first-order statements about V. An example of an intrinsic feature of the concept of universe is its “maximality”. This is already expressed by “internal” features of a universe based on the maximal iterative concept. But in the HP it is also expressed by “external” features of a universe based on its relationship with other universes (“maximal” = “as large as possible” and the hyperuniverse provides a meaning to the term “possible universe”).
With this setup we can then instantiate “maximality”, for example, in various ways as a precise mathematical criterion phrased in terms of the “logic of the hyperuniverse”. The IMH (powerset maximality) and $\#$-generation (ordinal maximality) are examples, but there are others which strengthen these or synthesise two or more criteria together (IMH for $\#$-generated universes for example). The “preferred universes” for a given criterion are those which obey it and first-order statements that hold in all such preferred universes become candidates for axioms of set theory.
With this procedure we have a way of arriving at axiom-candidates that are based on intrinsic features of the concepts of set and set-theoretic universe. A point worth making is that our notions of V and hyperuniverse are interconnected; neither is burdened by an ontology yet they are inseparable, as the hyperuniverse is defined with reference to V and our understanding of truth in V is influenced by the (intrinsically-based) preferences we impose on elements of the hyperuniverse.
What has changed in my perspective since the BSL paper (I cannot speak for Tatiana) regards the “ultimate” nature of what the programme reveals about truth and the relationship between the programme and set-theoretic practice. Penny, you are perfectly right to ask:
Is it really essential that these statements be ‘ultimate and unrevisable’? Isn’t it enough that they’re the ones we accept for now, reserving the right to adjust our thinking as we learn more?
At the time we wrote the paper we were thinking almost exclusively of the IMH, which contradicts the existence of inaccessible cardinals. This is of course a shocking outcome of a reasoned procedure based on the concept of “maximality”! This caused us to rethink the role of large cardinals in set-theoretic practice and to support the conclusion that in fact the importance of large cardinals in set theoretic practice derives from their existence in inner models, not in V. Indeed, I still support that conclusion and on that basis Tatiana and I were prepared to declare the first-order consequences of the IMH as being ultimate truths.
But what I came to realise is that the IMH deals only with "powerset maximality" and it is compelling to also introduce "ordinal maximality" into the picture. (I should have come to that conclusion earlier, as indeed the existence of inaccessible cardinals is derivable from the intrinsic maximal iterative concept of set!) There are various ways to formalise ordinal maximality as a mathematical criterion: If we take the line that Peter Koellner has advocated then we arrive at something I'll call KM (for Koellner maximality) which roughly speaking asserts the existence of omega-Erdos cardinals. A much stronger form due to Honzik and myself is $\#$-generation, which roughly speaking asserts the existence of any large cardinal notion compatible with V = L. Now IMH + KM is inconsistent but we can "synthesise" IMH with KM to create a new criterion IMH(KM), which is consistent. Similarly we can consistently formulate the synthesis IMH($\#$-generation) of IMH with $\#$-generation. Unfortunately IMH(KM) does not change much, as it yields the inconsistency of large cardinals just past $\omega$-Erdos, and so again we contradict large cardinal existence. But the surprise is that IMH($\#$-generation) is a synthesised form of powerset maximality with ordinal maximality which is compatible with all large cardinals (even supercompacts!), and one can argue that $\#$-generation is the "correct" mathematical formulation of ordinal maximality.
This was an important lesson for me and strongly confirms what you suggested: In the HP (Hyperuniverse Programme) we are not able to declare ultimate and unrevisable truths. Instead it is a dynamic process of exploration of the different ways of instantiating intrinsic features of universes, learning their consequences and synthesising criteria together with the long-term goal of converging towards a stable notion of "preferred universe". At each stage in the process, the first-order statements which hold in the preferred universes can be regarded as legitimate axiom candidates, providing an approximation to "ultimate and unrevisable truth" which may be modified as new ideas arise in the formulation of mathematical criteria for preferred universes. Indeed the situation is even more complex, as in the course of the programme we may wish to consider other intrinsic features of universes (I have ignored "omniscience" in this discussion), giving rise to a new set of mathematical criteria to be considered. And it is of course too early to claim that the process really will converge towards a unique notion of "preferred universe" and not to more than one such notion (fortunately there are as of yet no signs of such a bifurcation as "synthesis" appears to be a very powerful and successful way of combining criteria).
Finally: Why do I refer to “axiom candidates” and not to “axioms” when I mention first-order properties shared by preferred universes? This is out of respect for “set-theoretic practice”. As you know my aim is to base truth wholly on intrinsic considerations, independent of what may be the current trends in the mathematics of set theory. In my BSL paper we try to fix a concept of defacto truth and set the ground rule that such truth cannot be violated. My view now is rather different. I see that the HP is the correct source for axiom candidates which must then be tested against current set-theoretic practice. There is no naturalist leaning here, as I am in no way allowing set-theoretic practice to influence the choice of axiom-candidates; I am only allowing a certain veto power by the mathematical community. The ideal situation is if an (intrinsically-based) axiom candidate is also evidenced by set-theoretic practice; then a strong case can be made for its truth.
But I am very close to dropping this last “veto power” idea in favour of the following (which I already mentioned to Sol in an earlier mail): Perhaps we should accept the fact that set-theoretic truth and set-theoretic practice are quite independent of each other and not worry when we see conflicts between them. Maybe the existence of measurable cardinals is not “true” but set theory can proceed perfectly well without taking this into consideration. In the converse direction I simply repeat what I said recently to Hugh:
The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!
The HP is about intrinsic sources of truth and we have no a priori guarantee that the results of the programme will fit well with current set-theoretic practice. What to do about that is however unclear to me at the moment.
All the best, thanks again for your interest,
Sy
Re: Paper and slides on indefiniteness of CH
Dear Sol,
On Wed, 6 Aug 2014, Solomon Feferman wrote:
Dear Sy,
There is no retreat from my view that the concept of the continuum (qua the set of arbitrary subsets of the natural numbers) is an inherently vague or indefinite one, since any attempt to make it definite (e.g. via L or an L-like inner model) runs counter to what it is supposed to be about. I talk here about the concept of the continuum, not the supposed continuum itself, as a confirmed anti-platonist. Mathematics in my view is about intersubjectively shared (human) conceptions of idealized structures, not any supposed such structures in and of themselves. See my article “Conceptions of the continuum” (Intellectica 51 (2009), 169-189).
I fully agree with all of this. Indeed, the concept of continuum is inherently vague.
But elsewhere you have gone further by claiming that CH is an inherently vague problem! This is a much stronger claim. Indeed, we have many theorems about inherently vague concepts: Take Koenig’s theorem that the continuum cannot have size $\aleph_\omega$. Given that we don’t really understand what the continuum is, it is remarkable that we can prove something so nontrivial about its size!
I can’t have claimed that I have established that CH is neither a definite mathematical problem nor a definite logical problem, since one can’t say precisely what such problems are in either case. Rather, as workers in mathematics and logic, we generally know one when we see one. So, the Goldbach conjecture and the Riemann Hypothesis (not “Reimann” as has appeared elsewhere in this exchange) are definite mathematical problems. And the decidability of the first order theory of the reals with exponentiation is a definite logical problem. (Logical problems make use of the concept of formal language and are relative to models or axioms.) Even though CH has the appearance of a definite mathematical problem, it has ceased to be one for all intents and purposes because it was long recognized that only logical considerations could be brought to bear to settle it, if at all. So then what would make it a definite logical problem? Something as definite as: CH is true in L. I can’t exclude that some time in the future, some model or axiom system will be produced that will be as canonical in nature for some concept of set as L is for the concept of hereditarily predicatively definable set. But I’m not holding my breath either.
So as I understand it your claim of the inherent vagueness of CH was based solely on the lack of available methods for showing that it is a definite logical problem, and not on a belief that such methods cannot exist. Is that right? If so, then I suppose that your claim was simply intended as a provocative challenge to the set theory community!
Consider the following. The IMH asserts that if we enlarge V to an outer model W (i.e. universe with the same ordinals) then any sentence true in an inner model of W is also true in an inner model of V. (To fully make sense of this we need the HP but for the present discussion that can be ignored.) The SIMH is the same statement but where the sentences in question are allowed to include “absolute” parameters. (A parameter is absolute if it is definable by a fixed parameter-free formula in all cardinal-preserving outer models of V.) For the sake of argument, suppose that the SIMH is consistent (I don’t know if it is).
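Schematically, and glossing over the hyperuniverse formalities needed to make this fully precise, the IMH can be displayed as follows (this is only a rough rendering of the statement above, not an official formulation):

$$\text{IMH:}\quad \text{for every outer model } W \text{ of } V \text{ and every sentence } \varphi,\ \text{if } \varphi \text{ holds in some inner model of } W \text{ then } \varphi \text{ holds in some inner model of } V,$$

and the SIMH is the same schema with $\varphi$ allowed to contain absolute parameters in the sense just described.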
The SIMH implies the negation of CH (by an easy argument). In other words the decidability of CH is reduced to a logical problem of absoluteness within the hyperuniverse (when things are formulated properly). Does this argument (via an axiom system which is canonical for the concept of set-theoretic universe) suggest to you that your claim of the inherent vagueness of CH may be in doubt?
(Remark for the large cardinal lovers: If the SIMH is consistent then surely it can be consistently modified to incorporate large cardinals together with ordinal-maximality.)
I don’t know whether your concept of set-theoretical truth can be assimilated to Maddy’s A-realism, but in either case I see it as trying to have your platonist cake without eating it. It allows you to accept CH v not-CH, but so what?
I’m not sure that I get your point here, but I do believe that there is no difficulty with a fully epistemological, Platonism-free concept of truth. With no ontology we can still have a mental picture of the universe of sets, just as you have a mental picture of the inherently vague continuum. The concept of “truth in V” that I have in mind evolves as this picture is clarified through the exploration of intrinsic features of the concepts of set and set-theoretic universe. As hinted above with the SIMH it is perfectly possible that not-CH will be a byproduct of this investigation.
Do you see a problem with this approach to truth?
Thanks again and best wishes,
Sy
Re: Paper and slides on indefiniteness of CH
Dear Hugh,
OK, let’s go for just one more exchange of comments and then try to bring this to a conclusion by agreeing on a summary of our perspectives. I already started to prepare such a summary but do think that one last exchange of views would be valuable.
You have made an important point for me: a rich structure theory together with Gödelian “success” is insufficient to convince number-theorists that ERH is true, and by analogy these criteria should not suffice to convince set-theorists that PD is true.
Unless there is something fundamentally different about LC which there is.
My point here has nothing to do with large cardinals. I am just saying that the tests analogous to those used to argue in favour of PD (success and structure theory) are inadequate in the number theory context. Doesn’t that cast doubt on the use of those tests to justify PD?
Many (well at least 2) set theorists are convinced that PD is true. The issue is why do you think Con PD is true. You have yet to give any coherent reason for this. You responded:
The only “context” needed for Con PD is the empirical calibration provided by a strict “hierarchy” of consistency strengths. That makes no assumptions about PD.
Such a position is rather dubious to me. The consistency hierarchy is credible evidence for the consistency of LC only in the context of large cardinals as potentially true axioms. Remove that context (as IMH and its variants all do) then why is the hierarchy evidence for anything?
My argument is “proof-theoretic”: the consistency strengths in set theory are organised by the consistency strengths of large cardinal axioms. And we have good evidence for the strictness of this hierarchy. There is nothing semantic here.
Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.
I guess you mean Con(Reinhardt without AC). Why would you conjecture in this setting that RH is false? I thought that you had evidence of statements of consistency strength below a Reinhardt cardinal but above that of large cardinals with AC? With such evidence I would indeed conjecture that RH is true; wouldn’t you?
I am not asking how HP could justify the existence of large cardinals. I am simply asking how HP is ever going to even argue for the consistency of just PD (which you have already declared a “truth”). If HP cannot do this then how is it ever credibly going to make progress on the issue of truth in set theory?
Again, I don’t think we need to justify the consistency of large cardinals, the “empirical proof theory” takes care of that.
Yes, theoretically the whole edifice of large cardinal consistency could collapse, even at a measurable, we simply have to live with that, but I am not really worried. There is just too much evidence for a strict hierarchy of consistency strengths going all the way up to the level of supercompactness, using quasi-lower bounds instead of core model lower bounds. This reminds me of outdated discussions of how to justify the consistency of second-order arithmetic through ordinal analysis. The ordinal analysis is important, but no longer necessary for the justification of consistency.
However one conceives of truth in set theory, one must have answers to:
1. Is PD true?
I don’t know.
2. Is PD consistent?
Yes.
You have examples of how HP could lead to answering the first question. But no examples of how HP could ever answer the second question. Establishing Con LC for levels past PD looks even more problematic.
It is not my intention to try to use the HP to justify the already-justified consistency of large cardinals.
There is strong meta-mathematical evidence that the only way to ultimately answer (2) with “yes” is to answer (1) with “yes”. This takes us back to my basic confusion about the basis for your conviction in Con PD.
Note that the IMH yields inner models with measurables but does not imply Pi-1-1 determinacy. This is a “local” counterexample to your suggestion that to get Con(Definable determinacy) we need to get Definable determinacy.
We have had this exchange several times already. Let’s agree to (strongly) disagree on this point.
The fundamental technology (core-model methods) which is used in establishing the “robustness” of the consistency hierarchy which you cite as evidence, shows that whenever “ZFC + infinitely many Woodin cardinals” is established as a lower bound for some proposition (such as PFA, failure of square at singular strong limits, etc), that proposition implies PD. For these results (PFA, square etc.) there are no other lower bound proofs known. There is a higher level consistency hierarchy (which is completely obscured by your more-is-better approach to the hyper-universe).
You also cite strictness of the hierarchy as an essential component of the evidence, which you must in light of the ERH example, and so the lower bound results are key in your view. Yet as indicated above, for the vast majority (if not all) of these lower bound results, once one is past the level of Con PD, one is actually inferring PD. It seems to me that by your own very criteria, this is a far stronger argument for PD then HP is ever going to produce for the negation of PD.
Again: It is not clear that the HP will give not-PD! It is a question of finding appropriate criteria that will yield PD, perhaps criteria that will yield enough large cardinals.
As far as the strictness of the consistency hierarchy we can use quasi-lower bounds, we don’t need the lower bounds coming from core model theory.
And as I have been trying to say, building core model theory into a programme for the investigation of set-theoretic truth like HP is an inappropriate incursion of set-theoretic practice into an intrinsically-based context.
All those comments aside, we have an essential disagreement at the very outset. I insist that any solution to CH must be in the context of strong rank maximality (and assuming the provability of the $\Omega$ Conjecture this becomes a perfectly precise criterion). You insist that this is too limited in scope and that we should search outside this “box”.
No, we may be able to stay “within the box” as you put it:
I said that SIMH(large cardinals + $\#$-generation) might be what we are looking for; the problems are to intrinsically justify large cardinals and to prove the consistency of this criterion. Would you be happy with that solution?
Best,
Sy
Re: Paper and slides on indefiniteness of CH
Dear Sy,
I’m very pleased that my paper has led to such a rich exchange and that it has brought out the importance of clarifying one’s aims in the ongoing development of set theory. Insofar as it might affect my draft, I still have much to absorb in the exchange thus far, and there will clearly be some aspects of it that are beyond my current technical competence. In any case, I agree it would be good to bring the exchange to a conclusion with a summary of positions.
In the meantime, to help me understand better, here is a question about HP: if I understand you properly, if HP is successful, it will show the consistency of the existence of large large cardinals in inner models. Then how would it be possible to establish the success of HP without assuming the consistency of large large cardinals in V? If so, isn’t the program circular? If not, it appears that one would be getting something from nothing.
Best,
Sol
Re: Paper and slides on indefiniteness of CH
Dear Sy,
Ok one more round. This is a short one since you did not raise many new questions etc. in your last response.
On Aug 7, 2014, at 9:32 AM, Sy David Friedman wrote:
Unless there is something fundamentally different about LC which there is.
My point here has nothing to do with large cardinals. I am just saying that the tests analogous to those used to argue in favour of PD (success and structure theory) are inadequate in the number theory context. Doesn’t that cast doubt on the use of those tests to justify PD?
Absolutely not, given the special nature of LC.
Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.
I guess you mean Con(Reinhardt without AC).
Of course, I thought that was clear.
Why would you conjecture in this setting that RH is false?
Because I think “Reinhardt without AC” is inconsistent. The Oracle could be malicious after all.
(Aside: I actually think that “ZF + Reinhardt + extendible” is inconsistent. The situation for “ZF + Reinhardt” is a bit less clear to me at this stage. But this distinction is not really relevant to this discussion, e.g. everything in these exchanges could have been in the context of super-Reinhardt).
I thought that you had evidence of statements of consistency strength below a Reinhardt cardinal but above that of large cardinals with AC?
I am not sure what you are referring to here. The hierarchy of axioms past I0 that I have discussed in the JML papers are all AC based.
With such evidence I would indeed conjecture that RH is true; wouldn’t you?
This seems an odd position. Suppose that the Oracle matched 100 number theoretic ($\Pi^0_1$) sentences with the consistency of variations of the notion of Reinhardt cardinals. This increases one’s confidence in these statements?
Again, I don’t think we need to justify the consistency of large cardinals, the “empirical proof theory” takes care of that.
Yes, theoretically the whole edifice of large cardinal consistency could collapse, even at a measurable, we simply have to live with that, but I am not really worried. There is just too much evidence for a strict hierarchy of consistency strengths going all the way up to the level of supercompactness, using quasi-lower bounds instead of core model lower bounds. This reminds me of outdated discussions of how to justify the consistency of second-order arithmetic through ordinal analysis. The ordinal analysis is important, but no longer necessary for the justification of consistency.
However one conceives of truth in set theory, one must have answers to:
1) Is PD true?
I don’t know.
2) Is PD consistent?
Yes.
You have examples of how HP could lead to answering the first question. But no examples of how HP could ever answer the second question. Establishing Con LC for levels past PD looks even more problematic.
It is not my intention to try to use the HP to justify the already-justified consistency of large cardinals.
There is strong meta-mathematical evidence that the only way to ultimately answer (2) with “yes” is to answer (1) with “yes”. This takes us back to my basic confusion about the basis for your conviction in Con PD.
Note that the IMH yields inner models with measurables but does not imply Pi-1-1 determinacy. This is a “local” counterexample to your suggestion that to get Con(Definable determinacy) we need to get Definable determinacy.
But I have not suggested that to get Con(Definable determinacy) one needs to get Definable determinacy. I have suggested that to get Con PD one needs to get PD. (For me, PD is boldface PD, perhaps you have interpreted PD as light-face PD).
The local/global issue is not present at the level you indicate. It only occurs past the level of 1 Woodin cardinal, I have said this repeatedly.
Why? If $0^\#$ exists then it is unique. $M_1^\#$ (the analog of $0^\#$) at the next projective level has a far more subtle uniqueness.
(For those unfamiliar with the notation: $M_1$ is the “minimum” fine-structural inner model with 1 Woodin cardinal and the notion of minimality makes perfect sense for iterable models through elementary embeddings).
The iterable $M_1^\#$ is unique but the iterable $M_1^\#$ implies all sets have sharps. In fact in the context of all sets have sharps, the existence of $M_1^\#$ is equivalent to the existence of a proper class inner model with a Woodin cardinal.
Without a background of sharps there are examples where there are no definable inner models past the level of 1 Woodin cardinal no matter what inner models one assumes exist. The example is not contrived, it is $L[x]$ for a Turing cone of $x$ and this example lies at the core of the consistency proof of IMH.
The inner model program for me has come down to one main conjecture (the Ultimate-L conjecture) and two secondary conjectures, the $\Omega$ Conjecture and the HOD Conjecture. These are not vague conjectures, they are each precisely stated. None of these conjectures involves any concept of fine-structure or related issues.
The stage is also set for the possibility of an anti-inner model theorem. A refutation of the $\Omega$ Conjecture would in my view be such an anti-inner model theorem and there are other possibilities.
So the entire program as presently conceived is for me falsifiable.
If the Ultimate-L Conjecture is provable then I think this makes a far more compelling case for LC than anything coming out of HP for denying LC. I would (perhaps unwisely) go much further. If the Ultimate-L Conjecture is provable then there is an absolutely compelling case for CH and in fact for V = Ultimate L. (The precise formulation of V = Ultimate L is already specified, it is again not some vague axiom).
How about this: We each identify a critical conjecture whose proof we think absolutely confirms our position and whose refutation we also admit sends us back to “square 1”. For me it is the Ultimate-L Conjecture.
HP is still in its infancy so this may not be a fair request. So maybe we have to wait on this. But you should at least be able to articulate why you think HP even has a chance.
Aside: IMH simply traces back to Turing determinacy as will $\text{IMH}^*$. For each real $x$ let $M_x$ be the minimum model of ZFC containing $x$. The theory of $M_x$ is constant on a cone as is its second order theory. Obviously this (Turing) stable theory will have a rich structure theory. But this is just one instance of many analogous stable theories (this is the power of PD and beyond) and HP is just borrowing this. It is also a theorem that Turing-PD is equivalent to PD.
But why should this have anything to do with V.
Here is a question: Why is not the likely scenario simply that HP ends up stair-stepping up to PD and that the ultimate conclusion of the entire enterprise is simply yet another argument for PD?
Regards,
Hugh
Re: Paper and slides on indefiniteness of CH
Dear Sol,
On Thu, 7 Aug 2014, Solomon Feferman wrote:
Dear Sy,
I’m very pleased that my paper has led to such a rich exchange and that it has brought out the importance of clarifying one’s aims in the ongoing development of set theory. Insofar as it might affect my draft, I still have much to absorb in the exchange thus far, and there will clearly be some aspects of it that are beyond my current technical competence. In any case, I agree it would be good to bring the exchange to a conclusion with a summary of positions.
Thanks again for triggering the discussion with your interesting paper.
In the meantime, to help me understand better, here is a question about HP: if I understand you properly, if HP is successful, it will show the consistency of the existence of large large cardinals in inner models.
To be clear, the HP does not produce a single criterion for preferred universes, but a family of them, and each must be analysed for its consequences. But many such criteria will indeed produce inner models with at least measurable cardinals and I would conjecture that an inner model with a Woodin cardinal should also come out. However the programme achieves this only via the core model theory and not directly on its own. In particular I see no scenario for it to produce an inner model with a supercompact, as the core model theory seems unable to do that.
On the other hand all of the criteria seem to be compatible with the existence of arbitrarily large cardinals in inner models, even if they fail to produce such inner models.
However I don’t consider the creation of inner models with large cardinals, or even the confirmation of the consistency of large cardinals, to be a central goal of the programme. The programme will likely have more valuable consequences for understanding problems like CH whose undecidability does not hinge on large cardinal assumptions.
Then how would it be possible to establish the success of HP without assuming the consistency of large large cardinals in V? If so, isn’t the program circular? If not, it appears that one would be getting something from nothing.
The answer is given by core model theory: Without assuming the consistency of large cardinals one can use this theory to show that various set-theoretic properties yield inner models with large cardinals. A nice example is the failure of the singular cardinal hypothesis, which without any further assumptions produces inner models with many measurable cardinals.
All the best,
Sy
Re: Paper and slides on indefiniteness of CH
Dear Sy,
Thanks for these clarifications and amendments! (The only changes of mind that strike me as objectionable are the ones where the person pretends it hasn’t happened.) I’m still keen to formulate a nice tight summary of your approach, and then to raise a couple of questions, so let me take another shot at it. Here’s a revised version of the previous summary:
We reject any ‘external’ truth to which we must be faithful, but we also deny that the resulting ‘true-in-V’ arises strictly out of the practice (as my Arealist would have it). One key is that ‘true-in-V’ is answerable, not to a realist ontology or some sort of ‘truth value realism’, but to various intrinsic considerations. The other key is that it’s also answerable to a certain restricted portion of the practice, the de facto set-theoretic claims. These are the ones that need to be taken seriously as we evaluate any candidate for a new set theoretic axiom or principle. They include ZFC and the consistency of LCs.
The intrinsic constraints aren’t limited to items that are implicit in the concept of set. One of the items present in this concept is a notion of maximality. The new intrinsic considerations arise when we begin to consider, in addition, the concept of the hyperuniverse. One of the items present in this concept is a new notion of maximality, building on the old, that generates the schema of Logical Maximality and its various instances (and more, set aside for now).
At this point, we have the de facto part of practice and various maximality principles. If the principles conflict with the de facto part, they’re subject to serious question (see below). They’re further tested by their ability to settle independent questions. Once we’re settled on a principle, we use it to define ‘preferred universe’ and count as ‘true-in-V’ anything that’s true in all preferred universes.
I hope this has inched a bit closer! Assuming so, here are the two questions I wanted to pose to you:
• What is the status of ‘the concept of set’ and ‘the concept of set-theoretic universe’?
This might sound like a nicety of interest only to philosophers, but there’s a real danger of falling into something ‘external’, something too close for comfort to an ontology of abstracta or a form of truth-value realism.
• The challenge we friends of extrinsic justifications like to put to defenders of intrinsic justifications is this: suppose some candidate principle generates a lot of deep-looking mathematics, but conflicts with intrinsically generated principles; would you really want to say ‘gee, that’s too bad, but we have to jettison that deep-looking mathematics’? (I’d argue that this isn’t entirely hypothetical. Choice was initially controversial largely because it conflicted with one strong theme in the contemporary concept of set, namely, the idea that a set is determined by a property. The mathematics generated by Choice was so irresistible that (much of the) mathematical community switched to the iterative conception. Trying to shut down attractive mathematical avenues has been a fool’s errand in the history of mathematics.)
You’ve had some pretty interesting things to say about this! This remark to Hugh, which you repeat, was what made me realize I’d so badly misunderstood you the first time around:
The basic problem with what you are saying is that you are letting set-theoretic practice dictate the investigation of set-theoretic truth!
And these remarks to Sol also jumped out:
Another very interesting question concerns the relationship between truth and practice. It is perfectly possible to develop the mathematics of set theory without consideration of set-theoretic truth. Indeed Saharon has suggested that ZFC exhausts what we can say regarding truth but of course that does not force him to work just in ZFC. Conversely, the HP makes it clear that one can investigate truth in set theory quite independently from set-theoretic practice; indeed the IMH arose from such an investigation and some would argue that it conflicts with set-theoretic practice (as it denies the existence of inaccessibles). So what is the relationship between truth and practice? If there are compelling arguments that the continuum is large and measurable cardinals exist only in inner models but not in V will this or should this have an effect on the development of set theory? Conversely, should the very same compelling arguments be rejected because their consequences appear to be in conflict with current set-theoretic practice?
And today, to me, you add:
I see that the HP is the correct source for axiom *candidates* which must then be tested against current set-theoretic practice. There is no naturalist leaning here, as I am in no way allowing set-theoretic practice to influence the choice of axiom-candidates; I am only allowing a certain veto power by the mathematical community. The ideal situation is if an (intrinsically-based) axiom candidate is also evidenced by set-theoretic practice; then a strong case can be made for its truth.
But I am very close to dropping this last “veto power” idea in favour of the following (which I already mentioned to Sol in an earlier mail): Perhaps we should accept the fact that set-theoretic truth and set-theoretic practice are quite independent of each other and not worry when we see conflicts between them. Maybe the existence of measurable cardinals is not “true” but set theory can proceed perfectly well without taking this into consideration.
Let me just make two remarks on all this. First, if you allow the practice to have ‘veto power’, I don’t see how you aren’t allowing it to influence the choice of principles. Second, if you don’t allow the practice to have ‘veto power’, but you also don’t demand that the practice conform to truth (as I was imagining in my generic challenge to intrinsic justification given above), then — to put it bluntly — who cares about truth? I thought the whole project was to gain useful guidance for the future development of set theory.
All best,
Pen |
# Megalithic culture
Megalithic culture (from ancient Greek μέγας mégas "large" and λίθος líthos "stone") is an archaeological and ethnographic term that has been controversial in the history of research. In particular, the common origin of all megalithic cultures has been questioned.
The term megalithic culture has several meanings:
1. In connection with the name of an ethnic group ("tribe") or an archaeological culture, it can refer to all cultural phenomena associated with the construction and use of monuments made of large stones. In 2001 Dominik Bonatz spoke of a megalithic culture on Nias (Indonesia). Childe (1946) speaks of various megalithic cultures.
2. The idea of a culture of large-stone construction spread over great distances, sometimes worldwide, created by diffusion and linked by further shared features. Temporal differences between the various megalithic phenomena are explained by the duration of the migrations and the distances covered. This theory is primarily associated with the name of the English cultural anthropologist William James Perry (1887–1949). In a narrower geographical context, Oscar Montelius (1843–1921) and Sophus Müller (1846–1934) also used a migration model for the spread of the megalithic culture, which is said to have penetrated from the Orient via North Africa to Western Europe and from there further north and east. Carl Schuchhardt (1859–1943) reversed the direction of spread and derived the Greek tholoi from Western European models.
3. The idea that building with large stones (or building large stone structures) is associated with a particular ideology, even if the building traditions are not necessarily genetically related. The ethnographer Adolf Ellegard Jensen (1899–1965) connects large stone buildings with a "pronounced cult of the dead and ancestor worship". This idea is related to Frobenius's cultural morphology.
4. "Megalithic culture" was used as a synonym for funnel beaker culture or rather its north, west and east group. The term was linked to the idea of a “megalithic people”. According to Ernst Wahle and Hermann Güntert , this emerged from a mixture of immigrant Germans and the "megalithic people". Güntert equates the " battle ax people " with the Indo-Europeans ; they would have subjugated the "megalithic peasant nobility" who had introduced agriculture in this area. Güntert assumed that this megalithic people spoke a language that was related to Basque , Etruscan and " Aegean "; However, some of their words would have survived, including the New High German words Flint, Felsen, Halle and Burg .
Karl Josef Narr points out that ethnography and archaeology work with different definitions of "megalithic culture". He points out that "prehistoric megalithism does not coincide with any group of forms that can be worked out by archaeological means, nor can it with some probability be shown to be rooted in such a complex."
## Current research on the spread of megalithic structures
[Image: Map with find regions marked in yellow]
The possibility of dating the structures widespread in Western Europe and the Mediterranean region by the radiocarbon method has brought the hypotheses back closer to the idea of a common origin.
"There are two competing hypotheses for the origin of megaliths in Europe. The conventional view from the late 19th and early 20th centuries was of a single-source diffusion of megaliths in Europe from the Near East through the Mediterranean and along the Atlantic coast. Following early radiocarbon dating in the 1970s, an alternative hypothesis arose of regional independent developments in Europe. "
- B. Schulz Paulsson : Radiocarbon dates and Bayesian modeling support maritime diffusion model for megaliths in Europe. In: Proceedings of the National Academy of Sciences of the United States of America . 11th February 2019.
“New analyses revealed striking indications of a gradual spread of the megalithic idea out of a center of origin, which probably began in northwestern Europe before 4500 BC.”
According to her report in the journal »PNAS« (Proceedings of the National Academy of Sciences of the United States of America), the Neolithic researcher Bettina Schulz Paulsson from the University of Gothenburg evaluated “2410 sites with radiocarbon dates, based on samples that had already been examined in the context of the megalithic buildings, and artifacts of the same age from neighboring cultures. […] Apparently the earliest megalithic structures emerged in the northwest of what is now France in the early 5th millennium BC, within only around 200 to 300 years.”
A pattern of “three waves of propagation originating in north-west France”, spreading along sea routes, can be made out: “‘They were moving over the seaway, taking long distance journeys along the coasts’, says Schulz Paulsson. This fits with other research she has carried out on megalithic art in Brittany, which shows engravings of many boats, some large enough for a crew of 12.”
Schulz Paulsson found a total of 35,000 megalithic objects. The monuments examined are in Scandinavia, the British Isles, Brittany, northern Spain, Corsica and Sardinia as well as southern Italy and Malta. Very early forms can be found in the Paris Basin (Passy type).
The German natural scientist Helmut Tributsch (Freie Universität Berlin), who also included historical considerations in his research and came to similar conclusions as Schulz Paulsson back in the 1980s, pointed to megalithic buildings “on the coastline of North Africa between Morocco (stone circle) and Tunisia”: “But they have not yet been investigated.”
## Interpretations
### Spiritual-religious interpretations
For Andrew Sherratt, megalithic buildings are the main characteristic of peasant cultures such as the Funnel Beaker culture (TRB) of northern Central Europe and represent their values and a mythical-theistic world of belief. Megalithic structures were endowed with a sanctity that was also adopted by subsequent cultures and expressed the meaning the place held for the peasants; they were the scene of regular rituals and ceremonies and were erected in the hope that they would persist beyond the annual cycles of life into infinity, as places serving as a collective memory and as a sacred shaping of the landscape, which sometimes developed into central shrines with a strong binding effect for the community. Only the initially far more mobile Corded Ware people broke with this tradition and moved on to small, individual graves. The circular structures of the British Isles, the so-called henge monuments, in turn have astronomical references.
According to the Encyclopedia Britannica, the custom may be based on a cult of the dead and ancestors, to whom such stones gave a certain permanence and monumental form. In some cases it was believed that the ancestors lived in them. Individual stones such as the menhirs are more difficult to explain; however, where they were given human form, they could have been symbols of the seat of the ancestors. A uniform interpretation of all megalithic monuments is not possible, however, and it is certainly also wrong to speak of a proper megalithic religion; rather, megalithic monuments are better seen as one great manifestation of ideas that may have differed considerably, but in which the cult of the dead played an important role. Hermann Müller-Karpe takes a similar view, especially after evaluating accompanying finds, idols, anthropomorphic steles, ritual objects and iconographic motifs such as bull horns, which in his opinion reveal a cultic significance for the Iberian megaliths, together with a religious hope of salvation that included the hope of eternity in a new way, in the form of an explicit belief in an afterlife. In addition, they were apparently places where the transformation of the dead into ancestors took place, but where the world of the dead was separated from that of the living; it is often noticeable that the graves were laid out without a visual connection to the settlements and activity areas of the living.
Klaus Schmidt judges the megalithic complexes with their large sculptures at the early Neolithic Göbekli Tepe in Anatolia as follows: "When looking for comparisons for the anthropomorphic pillars of the Stone Age, one quickly comes across the European menhirs and their Middle Eastern counterparts, the mazzeben (Hebrew plural: mazzebot) of the Semitic cultures. It should be noted that menhirs and mazzeben can best be interpreted as the dwelling of a numen - a revered deity or a spirit of the dead - without it being possible to prove that the Stone Age pillars correspond in any way with these younger phenomena." From this he draws the conclusion that Göbekli Tepe is to be seen as a "monument of the cult of the dead".
Correspondingly, Victor Maag judges of the much younger Chalcolithic megaliths of Palestine (around 4000 BC) that they were sacred sites which were taken over by later peoples of Palestine such as the Canaanites and Israelites and adapted to their own views. From the creators of the mazzeben, whom he calls a "people of the spirits of the dead", they are said to have taken over the custom of sleeping at such stones in order to receive prophetic dreams, as described for example in the Hebrew Bible in the Ephraimitic cult legend of the patriarch Jacob, to whom the god El appeared at the stone of Bethel (the dream of the ladder to heaven, Gen. 28:10–22), after which the stone became a cult center. However, such a menhir was probably only erected for outstanding dead. "Dolmens were built for them as stone houses, a single large rock tooth or a slab of rock was set up for them to settle in, or their grave was surrounded with a cromlech as a barrier, because the former 'power' of the deceased could still be felt at it. Within this magical circle - at least through a corresponding ritual - the dead were banned so that they would not roam about. Whole clans may also have buried their dead within individual cromlechs. Such cromlechs - the Semites they met in Palestine called them Gilgal ("circle") - often enclose one or more mazzeben", which in his view makes his explanation more likely.
### Sociological interpretations
Studies and experiments have shown how advanced the technical knowledge of the dolmen builders may have been. In a 1979 experiment it took 200 people to pull and erect a stone block of 32 tons, still much lighter than the 100-ton blocks of other monuments; however, it is not certain that this corresponds to prehistoric methods. The transport of such blocks, often many kilometers from the quarry to the building site (at Stonehenge up to 380 km), required sophisticated logistics that were available only to a well-organized larger community. Andrew Sherratt points out, however, that large buildings like the European megalithic tombs could in principle also have been built by small communities without a hierarchical social structure. Whether by large, hierarchically organized groups or by small, less stratified ones, the social significance of this collective work must have been considerable. Large buildings that only larger and well-organized groups of people could erect are to be understood as a collective achievement. In any case, the place and the events there must have been so important for the community that individuals contributed this enormous amount of labor, without which some structures would be inconceivable; in this sense the monuments are also regarded as monuments of settling down, in some cases of supra-regional importance. They sometimes connected neighboring communities with one another through rituals or even covered the land like a net, with visual contact between them, as shown for example by the Swedish and North German megalithic tombs of the 4th millennium BC. They thus served as ritual centers of a new religion conditioned by the rural way of life, with whose help the megalithic farmers took possession of the arable land that now had to feed them. And they served as markers of the territory that had to be asserted against other groups, as Colin Renfrew in particular suspected. However, whether the economic transition to agriculture and animal husbandry, the so-called Neolithic Revolution, was the sole trigger of megalithism remains questionable, especially for its early phase on the Atlantic coast of northern Europe, because there are no settlements that can be assigned to the megalithic structures.
The fact that relatively few burials were found in some of the tombs may also indicate that a social and probably religious hierarchy existed in some regions; in certain places (Bougon in France and Knowth in Ireland) this is particularly evident. Regulated clearing-out processes are also conceivable, and in acidic soils, such as in large parts of Ireland and in the northern European lowlands, bone preservation is not to be expected anyway. Klaus Schmidt sees the buildings at Göbekli Tepe as the beginnings of a society based on the division of labor, one of the preconditions for a peasant economy. According to Chris Scarre, a concentration process can be observed in Wessex in the late Neolithic, which culminated in Stonehenge, the construction of which took millions of hours of work.
According to more recent studies, other factors could also have played a role in the use of such monuments. For example, Stonehenge is believed to have served as a medical center to which the sick made pilgrimages in search of healing, since the medical knowledge of the time, and the people who held it, were concentrated there.
### Technical (mathematical) interpretations
The natural sciences generally reject religious and sociological interpretations because of their speculative nature: "Quite apart from such rather functional attempts at interpretation, a surprisingly true-to-scale design and regularity in elevation and position has been demonstrated for recognized megalithic monuments."
[Image: Possibly the burial chamber examined]
What is certain is that "under no circumstances [...] were the builders of the megalithic monuments at work without a plan or merely at random. [...] Even in the transition from the Neolithic to the Bronze Age, a comparatively highly developed measuring and construction technology must have been available."
[Image: Long grave Manio I]
The author refers to an investigation that took a building ensemble at Kermario near Carnac as its starting point: there, a mound with a side length of 26.8 meters was piled up over a stone burial chamber:
[Image: Probably the single stone (menhir) mentioned in relation to the long grave]
“At a neighboring grave monument, the long grave Manio I, there are several stone settings in the form of arcs belonging to circles with diameters of 11.6 m and 37.9 m. Now it is astonishing that all these numerical values or basic measures can be derived from one another using comparatively simple calculations: thus 26.8 × √3/4 = 11.6 and 26.8 × √2 = 37.9. The product 37.9 × √2 = 53.7, in turn, yields a new measured value that occurs several times in the Manio I megalithic complex. It is, for example, identical to the distance between a large single stone (menhir) and the long grave and, doubled, also gives the radius of a further construction circle: 2 × 53.6 = 107 m.”
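For illustration, these relations can be checked numerically; in the following minimal sketch the placement of the square-root factors is a reconstruction from the quoted values (the original notation was garbled in transmission), not an authoritative restatement of the study:

```python
import math

# Measurements quoted above for the Kermario mound and the Manio I complex (in metres).
chamber_side = 26.8            # side length of the mound over the burial chamber
arc_diameters = (11.6, 37.9)   # diameters of the circles behind the stone arcs at Manio I

# The claimed derivations of the basic measures from one another:
print(chamber_side * math.sqrt(3) / 4)   # ~11.6  -> smaller arc diameter
print(chamber_side * math.sqrt(2))       # ~37.9  -> larger arc diameter
print(arc_diameters[1] * math.sqrt(2))   # ~53.6  -> recurring measure (menhir distance)
print(2 * 53.6)                          # 107.2  -> radius of the further construction circle
```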
[Image: Petit-Ménec stone row]
#### Extended investigations
The investigation considered such results "very unlikely given an arbitrary geometry on the part of the megalith builders" and therefore examined further megalithic monuments in the area around Carnac: "The largest megalithic stone circle in continental Europe, northeast of Manio I [...] has a radius of approximately 116 m. At a short distance from it, a stone arc (or unfinished cromlech) is known which belongs to a circle with a radius of 379 m. The distance from the center of this circle to Manio I is about 1160 m, the distance from there to the westernmost point of the stone rows of Petit Ménec is almost exactly 1070 m. The two last-mentioned distances at the same time form the larger cathetus and the hypotenuse of a right-angled triangle whose sides stand in the astonishingly accurate ratio of 5:12:13 to one another and thus even form a Pythagorean triangle."
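The 5:12:13 claim is easy to verify from the quoted distances, assuming (which the text does not state explicitly) that the 1160 m line is the hypotenuse and the 1070 m line the larger cathetus:

```python
import math

hypotenuse = 1160.0       # centre of the 379 m circle to Manio I
larger_cathetus = 1070.0  # Manio I to the westernmost point of the Petit-Menec rows

shorter_cathetus = math.sqrt(hypotenuse**2 - larger_cathetus**2)
print(round(shorter_cathetus, 1))   # ~448.0 m

# Dividing the three sides by 5, 12 and 13 gives nearly equal unit lengths,
# i.e. the triangle is close to the Pythagorean 5:12:13 shape:
print(shorter_cathetus / 5, larger_cathetus / 12, hypotenuse / 13)  # ~89.6, ~89.2, ~89.2
```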
#### Mathematical determinations
However, the calculated values themselves need not be obtained by extracting roots; all the lengths used can be obtained by a "simple procedure without calculation. All that is required is a constructive series of squares, in which the diagonal of the starting square becomes the side length of the following square. [...] There is therefore the well-founded impression that the construction of the various megalithic complexes was carried out quite consistently on the basis of fixed units and relationships - far more systematically and deliberately than superficial observation of the individual monuments at first reveals. [...] From a cultural-historical point of view it is an extraordinarily remarkable fact that more than 4000 years ago in Europe it was possible to lay out such precise figures as circles, ellipses or parallelograms with fixed dimensions, even in uneven and confusing terrain."
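The square series described in the quotation amounts to multiplying a starting side length by √2 at each step. Taking the 26.8 m side of the Kermario mound as the starting square (an assumption made here purely for illustration), the series reproduces several of the measures discussed above:

```python
import math

side = 26.8   # starting side length in metres (assumed: the Kermario mound)
for step in range(4):
    diagonal = side * math.sqrt(2)
    print(f"square {step}: side {side:6.1f} m, diagonal {diagonal:6.1f} m")
    side = diagonal   # the diagonal becomes the side of the next square
```

Successive sides come out as roughly 26.8, 37.9, 53.6 and 75.8 m, with 107 m as the final diagonal, so the values 37.9 m, 53.6 m and 107 m from the Manio I measurements all appear in the series.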
## Literature
General
• Karl W. Beinhauer, Gabriel Cooney, Christian E. Guksch, Susan Kus (eds.): Studies on megalithics. State of research and ethnoarchaeological perspectives. The Megalithic Phenomenon. Recent Research and Ethnoarchaeological Approaches (= contributions to the prehistory and early history of Central Europe. Volume 21). Verlag Beier & Beran, Weißbach 1999, ISBN 3-930036-36-3 .
• Glyn Edmund Daniel , Poul Kjærum (Ed.): Megalithic graves and ritual. Papers presented at the III. Atlantic Colloquium, Moesgård 1969 (= Jysk Arkaeologisk Selskabs skrifter. Volume 11). Gyldendal, Copenhagen 1973, ISBN 87-00-08861-7 .
• Glyn Daniel, John Davies Evans, Barry W. Cunliffe, Colin Renfrew : Antiquity and Man. Thames & Hudson, London 1981, ISBN 0-500-05040-6 .
• Timothy Darvill, M. Malone: Megaliths from Antiquity. Antiquity, Cambridge 2003, ISBN 0-9539762-2-X .
• German Archaeological Institute , Madrid Department (Ed.): Problems of megalithic grave research. Lectures on the 100th birthday of Vera Leisner. W. de Gruyter, Madrid 1990, ISBN 3-11-011966-8 ( limited online version ).
• Emil Hoffmann: Lexicon of the Stone Age. CH Beck Verlag, Munich 1999, ISBN 3-406-42125-3 .
• Roger Joussaume : Des dolmens pour les morts. Les mégalithismes à travers le monde. Hachette littérature, Paris 1985, ISBN 978-2-01-008877-3 (= Dolmens for the dead. Megalith-building throughout the world. Cornell University Press, London 1988, ISBN 978-0-8014-2156-3 ).
• Wolfgang Korn : Megalithic Cultures in Europe. Enigmatic monuments of the Stone Age. Theiss Verlag, Stuttgart 2005, ISBN 978-3-8062-1553-3 .
• Jean Pierre Mohen, Jean Guilaine: Megaliths. In: The great picture atlas of archeology . Orbis Verlag, Munich 1991, p. 46 f., ISBN 3-572-01022-5 ; Original edition: Encyclopaedia Universalis, Paris 1985.
• Hermann Müller-Karpe : Fundamentals of early human history, Vol. 1: From the beginnings to the 3rd millennium BC Chr. Theiss Verlag, Stuttgart 1998, ISBN 3-8062-1309-7 .
• Mark Patton: Statements in Stone, Monuments and Society in Neolithic Brittany. Routledge, London 1993, ISBN 0-415-06729-4 .
• Sibylle von Reden: The Megalithic Cultures. DuMont, Cologne 1978, 1982, ISBN 3-7701-1055-2 .
• Chris Scarre (Ed.): World Atlas of Archeology. Südwest Verlag, Munich 1990, ISBN 3-517-01178-9 . OA 1988 Times Books Ltd.
• Andrew Sherratt (Ed.): The Cambridge Encyclopedia of Archeology. Christian Verlag, Munich 1980, ISBN 3-88472-035-X .
• Jürgen E. Walkowitz: The megalithic syndrome. European cult sites of the Stone Age (= contributions to the prehistory and early history of Central Europe. Vol. 36). Beier & Beran, Langenweißbach 2003, ISBN 3-930036-70-3 .
• The New Encyclopedia Britannica , 15th edition. Encyclopedia Britannica Corp., Chicago 1993, ISBN 0-85229-571-5 .
Iberian Peninsula and Mediterranean Basin
• G. Camps: Les dolmens marocains. In: Libyca. Volume 13 (1965), Algiers, pp. 235-247. .
• Francisco Javier Fernández Conde: La Iglesia de Asturias en la Alta Edad Media . Oviedo 1972.
• Antonio C. Floriano: Restauración del culto cristiano en Asturias en la iniciación de la Reconquista . Oviedo 1949.
• Joachim von Freeden : Malta and the architecture of its megalithic temples. Scientific Book Society, Darmstadt 1993, ISBN 3-534-11012-9 .
• Heinz Günter Horn (Ed.): The Numider. Horsemen and kings north of the Sahara. Rheinlandverlag, Cologne 1979.
• Philine Kalb: Megalithics on the Iberian Peninsula and North Africa. In: Karl W. Beinhauer (Ed.), U. a .: Studies on megalithics. State of research and ethnoarchaeological perspectives. In: Contributions to the prehistory and early history of Central Europe. Langenweißbach 21.1999, 115-122.
• Georg Leisner , Vera Leisner : The megalithic tombs of the Iberian Peninsula. The south . Roman-Germanic research, Volume 17. Verlag von Walter de Gruyter & Co., Berlin 1943.
• Georg Leisner, Vera Leisner: The megalithic tombs of the Iberian Peninsula. The west . Madrid Research, Volume 1, 1st delivery. Walter de Gruyter & Co., Berlin 1956.
• Georg Leisner, Vera Leisner: The megalithic tombs of the Iberian Peninsula. The west . Madrid Research, Volume 1, 2nd delivery. Walter de Gruyter & Co., Berlin 1959.
• Vera Leisner: The megalithic tombs of the Iberian Peninsula. The west . Madrid Research, Volume 1, 3rd delivery. Walter de Gruyter & Co., Berlin 1965.
• Vera Leisner, Philine Kalb: The megalithic tombs of the Iberian Peninsula. The west . Madrid Research, Volume 1, 4th delivery. Walter de Gruyter & Co., Berlin 1998, ISBN 3-11-014907-9 .
• Sigrid Neubert : The temples of Malta. The mystery of megalithic buildings, second edition, Gustav Lübbe Verlag, Bergisch Gladbach 1994. ISBN 3-7857-0758-4 .
• Klaus Schmidt : You built the first temple. The enigmatic sanctuary of the Stone Age hunters. Verlag CH Beck, Munich 2006, ISBN 3-406-53500-3 .
Western Europe
Central and Northern Europe
• Ingrid Falktoft Andersen: Vejviser til Danmarks oldtid . 1994, ISBN 87-89531-10-8 .
• Lars Bägerfeldt: Megalitgravarna i Sverige. Type, tid, rum och social miljö. 2nd edition, Arkeo Förlaget, Gamleby 1992, ISBN 91-86742-45-0 .
• Jan Albert Bakker : The TRB West Group. Studies in the Chronology and Geography of the Makers of Hunebeds and Tiefstich Pottery (= Cingula. Volume 5). Universiteit van Amsterdam, Amsterdam 1979, ISBN 978-90-70319-05-2 ( online ).
• Jan Albert Bakker: The Dutch Hunebedden. Megalithic Tombs of the Funnel Beaker Culture. International Monographs in Prehistory, Ann Arbor 1992, ISBN 1-879621-02-9 .
# Can I enter code directly from a source file?
In a project I'm working on, all the files are in the same folder (source code, papers, images, etc.).
In the documentation, I'm including some code with minted which is really great, but I want to do something like this:
\begin{minted}{c}
\input{main.c}
\end{minted}
I know that won't work, but you can see what I want to do. The reason for this is that I don't want to update the documentation every time I change a source file. The idea is to leave main.c as it is, and include that file automatically in the documentation.
Try \inputminted{c}{main.c} or generally \inputminted[options]{language}{filename} -- see page 4 of the minted manual.
That happened to me because of reading the manual too fast :S – Tomas Nov 8 '10 at 7:14
I have used listings in the past and am pleased with the results. It supports several different programming languages, and is simple, yet powerful:
\lstset{language=C}
\lstinputlisting{funkyalg.c}
Listings is great, but IMO it isn't as powerful as minted, because minted relies on pygmentize for the syntax highlighting. – Tomas Nov 8 '10 at 15:54
You might consider \usepackage{fancyvrb}. It lets you do simple things like \VerbatimInput{hello.c} or fancy things like
\fvset{frame=single,numbers=left,numbersep=3pt} \VerbatimInput{hello.c} (see page 20 of the manual).
# Tangent and BiTangent equation?
## Recommended Posts
Hi all! Can anyone help me find a function, other than http://www.terathon.com/code/tangent.html (I just can't figure that one out...), that can compute Tangents and BiTangents for any given mesh with known UVs? thx
That is pretty straightforward, simple code. Perhaps you might want to borrow Eric's book from a library, and read the text that describes the math behind the code?
I didn't know about the book; it would help.
Maybe someone can help me figure this one out then, if it is not too much trouble...
There are 2 things I don't understand:
first:
why does he use a vector4 to find the tangents? what is the w for?
second:
I was expecting a *vector3 for the tangents and another one for the bitangents... where are the bitangents?
There is no hard requirement to precompute and cache the bitangent, since you can compute it using the already-stored normal and tangent vectors. Yes, it would perhaps save runtime (in the GPU or wherever) to cache the bitangents, but that would require more storage, and perhaps more data sent across the AGP or PCI-Express bus in a texture.... One makes a tradeoff decision! I think nVidia in their developer docs suggest just computing the bitangent in a shader.
The w is part of a homogeneous vector. It is used to enable graphics engines to treat vectors (as in direction vectors, normal vectors, etc.) and positions (locations measured relative to some fixed coordinate origin) in a uniform manner in code. For a reasonable introduction to homogeneous coordinates, see the following:
OpenGL Red Book - Appendix F
you were straight on the ball with this post!
thx a million!
hmm, I'm trying to make a version for quads also...

for (Int i = 0; i < l_mesh->GetQuadCount(); ++i)
{
    Int i1 = l_quad.a;
    Int i2 = l_quad.b;
    Int i3 = l_quad.c;
    Int i4 = l_quad.d;

    const Vector3 &v1 = m_Vertex[i1];
    const Vector3 &v2 = m_Vertex[i2];
    const Vector3 &v3 = m_Vertex[i3];
    const Vector3 &v4 = m_Vertex[i4];

    const Vector2 &w1 = m_UV[0][i1];
    const Vector2 &w2 = m_UV[0][i2];
    const Vector2 &w3 = m_UV[0][i3];
    const Vector2 &w4 = m_UV[0][i4];

    Float x1 = v2.x - v1.x;
    Float x2 = v3.x - v1.x;
    Float x3 = v4.x - v1.x;
    Float y1 = v2.y - v1.y;
    Float y2 = v3.y - v1.y;
    Float y3 = v4.y - v1.y;
    Float z1 = v2.z - v1.z;
    Float z2 = v3.z - v1.z;
    Float z3 = v4.z - v1.z;

    Float s1 = w2.x - w1.x;
    Float s2 = w3.x - w1.x;
    Float s3 = w4.x - w1.x;
    Float t1 = w2.y - w1.y;
    Float t2 = w3.y - w1.y;
    Float t3 = w4.y - w1.y;

    Float r = 1.0f / (s1 * t2 - s2 * t1);
    Vector3 l_dir ( (t2 * x1 - t1 * x2) * r,
                    (t2 * y1 - t1 * y2) * r,
                    (t2 * z1 - t1 * z2) * r );

    l_temp[i1] += l_dir;
    l_temp[i2] += l_dir;
    l_temp[i3] += l_dir;
    l_temp[i4] += l_dir;
}

I'm wondering what r and l_dir should be now... any hint would be welcome!
In this particular case, the w-coordinate of the tangent is just being used to store a handedness value. It doesn't have anything to do with the tangent itself, but it tells you which way the bitangent is pointing with respect to the normal and tangent.
I don't think there's a straightforward way to extend the tangent calculation to quads. You can't be sure that the tangent directions are the same for each group of three vertices.
Oh great, Eric all mighty!
I've chosen to make Tangents and BiTangents members of my primitives!
that said, I use the following:
for (Int i = 0; i < m_VertexCount; ++i)
{
    // Gram-Schmidt orthogonalize.
    m_Tangent[i] = (m_Tangent[i] - m_Normal[i] * m_Normal[i].DotProduct(m_Tangent[i])).Normalize();
    m_BiTangent[i] = (m_Normal[i].CrossProduct(m_Tangent[i])).Normalize();
}
is this correct for the BiTangents? It works fine with a plane... I'd just like to be sure!
thx eric!
Yes, your code will work if all your triangles use a right-handed mapping. You don't need to normalize the bitangent if the normal and tangent are already normalized and orthogonal. If you're using 4D tangents with the handedness in the w-coordinate, then you should multiply the bitangent by tangent.w after the cross product (to flip its direction in the left-handed case).
[Edited by - Eric Lengyel on June 14, 2006 6:36:09 PM] |
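To make that last point concrete, here is a minimal sketch (in Scala, with a made-up Vec3 type; in practice this reconstruction is often done in the vertex shader, as mentioned above) of rebuilding the bitangent from a unit normal, a unit tangent and the handedness stored in the tangent's w component:

// Minimal 3-component vector with only the operations needed here (illustrative only)
case class Vec3(x: Float, y: Float, z: Float) {
  def *(s: Float): Vec3 = Vec3(x * s, y * s, z * s)
  def cross(o: Vec3): Vec3 =
    Vec3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x)
}

// Rebuild the bitangent: cross the normal with the tangent, then flip it
// when the handedness (tangent.w, either +1 or -1) says the mapping is left-handed.
def bitangent(normal: Vec3, tangent: Vec3, handedness: Float): Vec3 =
  (normal cross tangent) * handedness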
# 2pir – Comprehensive Explanation and Detailed Examples
2pir is the circumference of a circle.
The circumference (or the perimeter) of a circle is the total length of the circle’s boundary. The circumference is a linear measure, and its units are mostly given as centimeters, meters or inches.
A circle is a closed round figure, and all the points on the circle’s boundary are equidistant from the center of the circle. In geometry, we are only interested in calculating the area and circumference of the circle. In this topic, we will discuss the circumference of the circle, its proof and related examples.
## What Is 2pir?
$2\pi r$ is the formula for the circumference of a circle, and the circumference of a circle is the product of two constants: “$2$” and “$\pi$;” while “$r$” is the radius of the circle.
You will also encounter the question: is 2pir the area of the circle? The answer is no; the area of the circle is $\pi r^{2}$.
If we cut open a circle, lay it out in a straight line, and measure its length, we get the total length of the circle's boundary. Since the circle is a closed, curved figure, we need a formula to calculate this boundary, and that is where $2\pi r$ helps us.
The important elements of the circle used to calculate its area and circumference are:
1. Center of the circle
2. Diameter of the circle
3. Radius of the circle
Center of the circle: The center of the circle is the fixed point of the circle situated equidistant from every point on the boundary of the circle.
Diameter of the circle: The circle's diameter is the total distance from one point of the circle to another, provided the drawn line crosses the center of the circle. So it is a line that touches both ends of the circle while passing through the center. It is denoted as "$D$" and is equal to twice the radius, $D = 2r$.
Radius of the circle: The circle’s radius is the total distance from any point on the boundary of the circle to the center of the circle and is represented as “$r$”.
## How To Prove That the Circumference of a Circle Is 2pir
The circumference of the circle is the total length of the boundary of the circle, and it cannot be calculated by using a ruler or scale as we do for other geometrical figures. The circle has a curved shape, and we have to use the formula to calculate the circle’s circumference. In deriving the 2pir formula as the circumference of the circle, we use a constant value $\pi$ and a variable value of radius “$r$”.
The constant $\pi$ has the approximate value $3.14159$, or $\dfrac{22}{7}$. The value of $\pi$ is the ratio of the circumference of the circle to the diameter of the circle.
$\pi = \dfrac{C}{D}$ (1)
Here,
C = circumference of the circle
D = Diameter of the circle
The formula for the diameter of the circle is given as:
$D = 2r$
So, plugging the value of “D” in equation “1”:
$\pi = \dfrac{C}{2r}$
$C = 2.\pi.r$
Hence, the circumference of the circle is given as $2.\pi.r$
### Alternative Proof
Consider a circle centered at the origin with radius "r" in the X-Y plane.
We can write the equation for the circle as:
$x^{2} + y^{2} = r^{2}$
Where
x = point on X-axis
y = point on Y-axis
r = radius of the circle
If we take only the first-quadrant arc of the circle, we can compute its length and multiply by four to obtain the full circumference.
$L = 4 \int_{0}^{\dfrac{\pi}{2}}\sqrt{(x'(\theta))^{2}+ (y'(\theta))^{2}}\, d\theta$
Here,
$x = r\cos\theta$
$y = r\sin\theta$
$x'(\theta) = -r\sin\theta$
$y'(\theta) = r\cos\theta$
$L = 4 \int_{0}^{\dfrac{\pi}{2}}\sqrt{(-r\sin\theta)^{2}+ (r\cos\theta)^{2}}\, d\theta$
$L = 4 \int_{0}^{\dfrac{\pi}{2}}\sqrt{r^{2}(\sin^{2}\theta + \cos^{2}\theta)}\, d\theta$
$L = 4 \int_{0}^{\dfrac{\pi}{2}}\sqrt{r^{2}}\, d\theta$
$L = 4 \int_{0}^{\dfrac{\pi}{2}} r\, d\theta$
$L = 4r\,[\theta]_{0}^{\dfrac{\pi}{2}}$
$L = 4r \cdot \dfrac{\pi}{2}$
$L = 2\pi r$.
## Why Is Circumference 2pir and Not Pid?
We usually use $2\pi r$ instead of $\pi d$ as a circle is usually given in terms of its radius rather than diameter. Note that the diameter $d$ is equal to twice the radius, i.e., $d=2r$, so we can write $2\pi r = \pi d$, and both formulas are equally valid.
## 2pir Calculator
To calculate the circumference, we need the value of $\pi$ and radius. We already know that the value of $\pi$ is given as $\dfrac{22}{7}$, while the value of the radius is either given or we calculate it if we are given the area of the circle.
If we are given the value of the diameter instead of the radius, we will first calculate the value of the radius by using the relation between radius and diameter, $r = \dfrac{D}{2}$.
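As a sketch of that procedure (the object and method names below are just illustrative), the three cases, radius given, diameter given, or area given, might look like this:

object Circumference {
  // Circumference from the radius: C = 2 * pi * r
  def fromRadius(r: Double): Double = 2 * math.Pi * r

  // If the diameter is given, first recover the radius with r = D / 2
  def fromDiameter(d: Double): Double = fromRadius(d / 2)

  // If the area is given, recover the radius from A = pi * r^2
  def fromArea(a: Double): Double = fromRadius(math.sqrt(a / math.Pi))
}

// Example 1 and Example 2 below: about 125.66 cm and 75.40 cm
// (the worked examples round pi to 3.14, so they show 125.6 and 75.36)
println(Circumference.fromRadius(20))
println(Circumference.fromDiameter(24))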
## Applications of the Circumference of the Circle
Here are some real-life applications of the circumference of the circle:
1. This formula will be used whenever we encounter a circular shape in real life.
2. The wheel is considered to be one of the best inventions in human history. The circumference formula is essential in designing the model of a wheel.
3. The formula is used in solving different trigonometric problems, especially equations of the circle.
4. The hub of a ceiling fan has a circular shape, so we have to use this formula to calculate the perimeter of the hub.
5. Coins, buttons and circular clock faces are all applications of the circle's circumference, and we have to use this formula while designing them.
6. The $2\pi r$ formula is also used in the calculation of the average speed of an object moving in a circular path: one revolution covers a distance of $2\pi r$, so the average speed is $\dfrac{2\pi r}{t}$, where $t$ is the time for one revolution.
### Example 1:
If the circle’s radius is 20 cm, what will be the circle’s circumference?
### Solution:
Radius of the circle $= 20 cm$
Circumference of the circle $= 2.\pi.r$
C $= 2 \pi . 20$
C $= 125.6$ cm
### Example 2:
If the circle’s diameter is 24cm, what will be the circle’s circumference?
### Solution:
Diameter $= 24$
Radius of the circle $= \dfrac{24}{2} = 12$
Circumference of the circle $= 2.\pi.r$
$C = 2 \pi.12$
$C = 75.36 cm$
### Example 3:
The perimeter of a square-shaped thread is $250 cm$. If the same thread is used to form a circle, what will be the circumference of the circle? You are also required to calculate the radius and diameter of the circle.
### Solution:
We know that the perimeter of the square thread = the total amount of thread used to create the square. This will also be equal to the circumference of the circle because if we use the same thread to form the circle, the length of the circumference will remain the same.
Circumference of the circle $= 250$ cm
$C = 2.\pi.r$
$250 = 2\times \pi \times r$
$r = \dfrac{250}{2\pi} \cong 39.8$ cm
Diameter of the circle $= 2r \cong 79.6$ cm
### Example 4:
The difference between the circumference and diameter of a football is $10$ cm. What will be the radius of the football?
### Solution:
Let the radius of the football $= r$
As given in the statement, circumference – diameter $= 10$ cm
Circumference of the football $= 2.\pi.r$
Diameter of the football $= 2.r$
$2. \pi . r – 2r = 10$
$r ( 2\pi – 2) = 10$
$r ( 4.28 ) = 10$
$r = \dfrac{10}{4.28} = 2.34$ cm approx.
### Example 5:
A shepherd wants to build a circular boundary to keep his cattle safe from hounds and predators. What will be the total estimated cost if the $30$ meter radius of the circular boundary is charged at $\$15$ per meter?
### Solution:
We will calculate the total length of the circular boundary and then multiply it by $\$15$.
Circumference of the boundary $= 2.\pi.r$
$C = 2 \times 3.14 \times 30$
$C = 188.4$ meter
Total cost of the circular boundary $= 188.4\ m \times \dfrac{\$15}{m} = \$2826$
## 2pir vs pi r^2
The main difference between these is that the circumference given as $2\pi r$ is the total length of the boundary of the circle, while the area enclosed by a circle of radius $r$ is given as $\pi r^2$. Many students confuse the circumference of the circle with the area of the circle and their corresponding formulas. Remember that circumference is a length and its units are measured in centimeters, meters, etc, while the units of area are meters-squared or centimeter-squared, etc.
### Example 6:
Calculate the value of 2pir and $2\pi r^2$ if the area of the circle is $64 cm ^{2}$.
### Solution:
The formula for area of the circle is given as:
Area of the circle $= \pi r^{2}$
$64 = 3.14 \times r^{2}$
$r^{2} = 20.38$
$r = 4.51 cm$ approx
$2.pi.r = 2 \times 3.14 \times 4.51 = 28.32$ cm approx.
$2.pi. r^{2} = 2 \times 3.14\times 20.38 = 128 cm^{2}$ approx
The value of 2pir and $2\pi r^2$ can be calculated using 2pir and 2pir^2 calculator as well.
### Practice Questions:
1. The wheel of a car has a radius of $7$ meters. Ignoring friction and other factors, if the car’s wheel rotates once, what will be the distance covered by the vehicle?
2. Mr. Alex is working as a teacher in a school and he took his class to a summer camp near a forest. There was a huge tree near the camp house, and Mr. Alex promised the class a box of chocolates if they could calculate the tree’s diameter without using scale tape. The circumference of the tree is $48.6$ ft. Help the class determine the diameter of the tree.
3. A copper wire is bent to form a square shape. The area of the square is $100 cm^{2}$. If the same wire is bent to form a circle, what will be the circle’s radius?
4. Suppose the area of a circular track is $64 m^{2}$. What will be the circumference of the track?
### Answers:
1.
The radius of the wheel is $= 7 meters$
Distance covered during one rotation of wheel = circumference of the wheel
C $= 2.\pi.r$
$C = 2 \times 3.14 \times 7 = 43.96$ meters
2.
Circumference of the tree $= 48.6$ ft
$C = 2.\pi.r$
$48.6 = 2 \times 3.14 \times r$
$48.6 = 6.28 \times r$
$r = \dfrac{48.6}{6.28} \cong 7.74 ft$
Diameter of the tree $= 2\times r = 2 \times 7.74 = 15.48$ ft.
3.
All sides of the square are the same. Let us name all the sides as “a”.
Area of the square $= a^{2}$
Area of the square $= 100 cm^{2}$
$a^{2} = 100$
$a = 10$ cm
Perimeter of the square $= 4\times a = 4 \times 10 = 40 cm$.
If the same wire is used to form a circle, the overall length of the boundary or the surface remains the same. Hence, the circumference of the circle $= 40$ cm.
$C = 2.\pi.r$
$40 = 2.\pi.r$
$r = 6.37$ cm
4.
Area of the circular track $= 64 m^{2}$
Formula for area of the circle $= \pi.r^{2}$
$r^{2} = \dfrac{64}{3.14} \cong 20.38$
$r = \sqrt{20.38}$
$r \cong 4.51$ meter
Circumference of the circular track $= 2.\pi.r$
$C = 2\pi\times 4.51 \cong 28.3$ meter
# What is the next number for this sequence and the rule(s) that describes it?
What are the next four numbers and the rule that generates the sequence?
0, 2, 8, 24, 64, 160, 384, 896, 2048, ...
Explain the hints and how you reached the answer once you have got it.
Hint 1:
The sequence continues indefinitely.
Hint 2:
These are all whole numbers. No fractions or negatives are in the sequence.
Hint 3:
Number representations are significant.
Hint 4:
Consider unusual math operations.
Hint 5:
Perhaps my own SE activity might be helpful...?
Next four numbers are
4608, 10240, 22528, 49152
As the rule that generates the sequence is
$f(n) = n \times 2^n$
I got this simply because
There's a suspiciously high amount of powers of 2 in the sequence
But I have no idea what the hints mean.
OP's edit for further explanation and clarification:
My background is in programming, so the unusual operation is bitshifting, and my generating rule was to left-shift the binary representation of n by n bits.
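A one-line sketch (in Scala, where << is the left-shift operator) makes the rule concrete:

// n << n shifts the binary representation of n left by n bits, i.e. n * 2^n
val terms = (0 to 12).map(n => n.toLong << n)
println(terms.mkString(", "))
// 0, 2, 8, 24, 64, 160, 384, 896, 2048, 4608, 10240, 22528, 49152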
• This works... but I agree. Perhaps he was looking for something different, since these aren't unusual operations. – Shuri2060 Apr 6 '16 at 18:52
• @QuestionAsker If this isn't the answer, then you could argue that n<0 doesn't work for hint 2, and the 'real' sequence would. – Lacklub Apr 6 '16 at 19:13
• It's not quite the way I got the sequence, but I guess that is the formula. – Jed Schaaf Apr 6 '16 at 19:58 |
# Writing a ring buffer in Scala
People coming from dynamic languages like Ruby or primitive languages like Go tend to miss the point of Scala. They tend to think in terms of mutability and often aren’t familiar with data structures beyond lists and hashes, with types and side effects being seen as fairly unimportant considerations. To try and illustrate the different way you need to think when writing Scala I’m going to write a ring buffer, as I’ve seen a few people around the office implementing them as a learning exercise.
It’ll be quite different to what you’d write in most mainstream languages.
Scala developers prefer immutable types to mutable ones because knowing they can’t change makes them much easier to reason about, particularly once concurrency gets involved. People new to Scala tend to worry unduly about the performance of immutable data structures; you can actually write very efficient immutable collections if you make them persistent so that’s the approach we’ll take.
To further illustrate how to write idiomatic Scala I’m going to structure this article the way I tend to write code, which is types first. This is an alien concept to people used to dynamic languages or languages with only basic type systems, but in languages with powerful type systems it makes sense: You encode your knowledge of how the program should work as types as far as possible, and then you fill in the blanks.
## Specification
Before we write any code at all, we should write a specification. It’s much quicker and easier to discuss and iterate on a specification than code. It’s amazing how many programmers miss this step out, but you shouldn’t. Writing at least a small specification will make things faster overall.
You could write these requirements out as tests, and I would if this was for a real project, but it takes up quite a bit of space and it’s not really the point of this post so I won’t here.
1. The ring can hold elements of any type.
2. The capacity must be greater than zero, and is fixed when the ring is created.
3. The size may vary between zero (empty) and the capacity (full).
4. When an element is pushed to a non-full ring the new element is added to the end and the size is increased by one.
5. When an element is pushed to a full ring the oldest element is discarded, the new element is added to the end, and the size is unchanged.
6. When an element is popped from a non-empty ring the oldest element is removed and returned, and the size is decreased by one.
7. If an attempt is made to pop an element from an empty ring the program should crash.
Although these requirements are short and easy to read, they’re also comprehensive and detail what each operation should do to things like the size of the buffer. I referred back to these quite a number of times when writing both the interface and the implementation.
## Interface
The first requirement can be implemented with a type parameter. Because we know the ring buffer is going to be immutable, values will only come out of it and therefore we should be able to make the parameter covariant with an + annotation. As a first-order approximation, if values only come out (e.g. return values) then the type parameter can be covariant, and if they only go in (e.g. method parameters) it can be contravariant1.
Making the ring covariant with regards to A means that if we have a Ring[Giraffe] then we can assign it to a variable of type Ring[Animal], or pass it to a method where a Ring[Animal] is required. Variance makes generic types more convenient and flexible for users, so is worth thinking about.
class Ring[+A]
As soon as we start modelling capacity and size we run into a bit of trouble. Scala doesn't have refinement types built-in (we can't say that capacity is an integer greater than zero) and there's no way to model the relationship between the two numbers in the type system (that size must always be <= capacity) so we'll have to settle for modelling them as plain old integers.
Although we may end up making these val, we’ll start with def because it gives us more flexibility in the implementation. It doesn’t affect the interface because Scala implements the uniform access principle so a parameterless def is the same as a val from the caller’s point of view.
def capacity: Int = ???
def size: Int = ???
Things start getting more complex when we look at the push method. Because the ring is immutable it can’t modify any state, so the method needs to return a new ring that has the pushed element added. Your first intuition might be to define the push method like this:
def push(a: A): Ring[A] = ???
Unfortunately that leads to a compiler error because we declared the type parameter A covariant but we’re trying to use the type in an input position as a method parameter:
Error: covariant type A occurs in contravariant position in type A of value a
def push(a: A): Ring[A] = ???
To resolve this we could remove the variance annotation, but there is a better way. Because we’re returning a new ring, we can change the type of it from the push method! Think about it this way: If you have a Ring[Giraffe] and you try to push an Animal then because all giraffes are animals it would be safe to return a Ring[Animal].
As such, we can use a new type parameter B constrained to be a supertype2 of A, which makes the push method more flexible and allows the type parameter to stay covariant:
def push[B >: A](b: B): Ring[B] = ???
That’s a decent definition, but it doesn’t fully encode what might happen; we know that an element might be discarded if the ring is full. The user of the ring may want to know which element got discarded so we can return that from the method as well, wrapped in an option to indicate that there might be no discard.
def push[B >: A](b: B): (Option[A], Ring[B]) = ???
Next the pop method, which needs to return a 2-tuple of the popped element and the new ring with that element removed. This method will throw a NoSuchElementException if the ring is empty and ideally this should be encoded into the type signature, but that isn’t the idiom in the built-in Scala collections so we’ll copy them and have an ‘invisible’ exception.
def pop: (A, Ring[A]) = ???
Finally, a more convenient pop method which only pops if the collection is non-empty so users don’t need to check the size of the collection before calling it. Here we could wrap just the value in an option, but as there’s no need to change the ring if the pop doesn’t happen it’s more accurate to encode the whole thing as an option.
def popOption: Option[(A, Ring[A])] = ???
With that, our minimal interface for the ring buffer is complete:
class Ring[+A] {
def capacity: Int = ???
def size: Int = ???
def push[B >: A](b: B): (Option[A], Ring[B]) = ???
def pop: (A, Ring[A]) = ???
def popOption: Option[(A, Ring[A])] = ???
}
## Implementation
If you look at the requirements for the ring, the elements are ordered and they are handled in a first-in first-out (FIFO) manner. This should remind you of a queue, and indeed we can treat a ring as a bounded queue where enqueueing an element may also cause an element to be dequeued.
I’ve chosen to make the ring a case class so we get equality for free. This will expose the queue in the class’s interface, but I don’t care too much as it is a specialised type of queue and that abstraction doesn’t need to be hidden. Plus, as the queue is immutable, it can be safely exposed without worrying anybody might modify it.
The size being explicit might surprise you given queues already have a size method we could pass through to. However, the size method on immutable queues is O(N) because they actually count all the elements when you query the size, but it’s important to the runtime complexity of the ring class that querying the size is O(1).
import scala.collection.immutable.Queue
case class Ring[+A](capacity: Int, size: Int, queue: Queue[A])
The push method is pretty much a transliteration of the English language requirements. Here you can see why it’s important that size is O(1) otherwise pushing an element would be an O(N) operation. The code can be made to look a bit cleaner using queue’s head and tail methods rather than dequeue but it makes the performance rather harder to reason about so I’ve done it this way.
def push[B >: A](b: B): (Option[A], Ring[B]) =
if (size < capacity) (None, Ring(capacity, size + 1, queue.enqueue(b)))
else queue.dequeue match {
case (h, t) => (Some(h), Ring(capacity, size, t.enqueue(b)))
}
The pop method is easiest implemented in terms of the optional version. If you are after the maximum performance this isn’t the best approach as it requires the allocation of an additional option instance which is immediately discarded, but it’s an ‘obviously correct’ implementation which for most programs is better than a marginally faster but more complex one.
def pop: (A, Ring[A]) = popOption.getOrElse(throw new NoSuchElementException)
Finally popOption, which can be implemented in terms of the equivalent dequeueOption method on the queue. One of Scala’s little warts is that lambdas with tuple arguments need to use case to destructure them. That’s going to be fixed in Scala 3, but for now we’ll just have to live with it.
def popOption: Option[(A, Ring[A])] = queue.dequeueOption.map {
case (h, t) => (h, Ring(capacity, size - 1, t))
}
Here’s the final code listing, with the constructor made private as it’s easy to make mistakes using it which violate the invariants of the class, and instead a couple of factory methods in the companion object to construct empty rings or rings with initial elements.
import scala.collection.immutable.Queue
case class Ring[+A] private (capacity: Int, size: Int, queue: Queue[A]) {
def push[B >: A](b: B): (Option[A], Ring[B]) =
if (size < capacity) (None, Ring(capacity, size + 1, queue.enqueue(b)))
else queue.dequeue match {
case (h, t) => (Some(h), Ring(capacity, size, t.enqueue(b)))
}
def pop: (A, Ring[A]) = popOption.getOrElse(throw new NoSuchElementException)
def popOption: Option[(A, Ring[A])] = queue.dequeueOption.map {
case (h, t) => (h, Ring(capacity, size - 1, t))
}
}
object Ring {
def empty[A](capacity: Int): Ring[A] = Ring(capacity, 0, Queue.empty)
def apply[A](capacity: Int)(xs: A*): Ring[A] = {
val elems = if (xs.size <= capacity) xs else xs.takeRight(capacity)
Ring(capacity, elems.size, Queue(elems: _*))
}
}
I chose to use multiple parameter lists for apply because otherwise a ring which is initialised with integer elements is confusing as the capacity blends into the elements:
val ring = Ring(4, 3, 2, 1) // single parameter list; which is the capacity?
val ring = Ring(4)(3, 2, 1) // multiple parameter lists; capacity is clear
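As a quick usage sketch (the values are arbitrary), the behaviour required by the specification falls out directly:

val r0 = Ring.empty[Int](2)        // capacity 2, empty
val (d1, r1) = r0.push(1)          // d1 == None, r1 holds [1]
val (d2, r2) = r1.push(2)          // d2 == None, r2 holds [1, 2] and is now full
val (d3, r3) = r2.push(3)          // d3 == Some(1): the oldest element was discarded
val (oldest, r4) = r3.pop          // oldest == 2, r4 holds [3]
val last = r4.popOption.map(_._1)  // Some(3); popping the now-empty ring again would yield None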
## Performance
I said this would be an efficient implementation, and it is. All of the operations have amortized O(1) performance in both time and space. However, it can be tricky to understand why this is. First, we need to understand the implementation of immutable queues.
The trick to an efficient immutable queue is using a pair of singly-linked lists, one as an input buffer, and one as an output buffer.
in: []
out: []
As elements are enqueued a new node is prepended to the input buffer which is an O(1) operation because it just requires a couple of pointer changes. Enqueuing 1, 2, 3 then 4 would cause the buffers to look like this:
in: [4]->[3]->[2]->[1]->[]
out: []
If we now want to pop an element it would be inefficient to take the last element of the input buffer as that’s an O(N) operation for a single element. Instead the input buffer is reversed and stored as the output buffer, which is an O(N) operation:
in: []
out: [1]->[2]->[3]->[4]->[]
It’s now possible to pop up to four elements from the output buffer as O(1) operations because popping the head off a singly linked list is O(1).
As such, for N elements enqueued and dequeued there are 2N O(1) operations and 1 O(N) operation. Here the O(N) reverse operation is equivalent to N O(1) operations so overall enqueue/dequeue is equivalent to 3N O(1) operations for N elements, or 3 O(1) operations per element. As constant factors (i.e. the 3) aren’t considered in big-O notation this means the operations are amortized O(1). Amortized means that not every operation will be O(1), but that the overall performance works out that way.
As our ring is implemented entirely in terms of enqueuing and dequeueing elements with no loops, this analysis must therefore apply to our ring so we can see that its operations are also amortized O(1).
## Summary
This post walks through the process of building a ring buffer in Scala, but the actual class itself isn't the important thing. What I want to show is the process for developing in very strongly typed languages, which differs significantly from dynamic languages where the approach tends to be evolutionary, or less strongly typed languages where the types don't play as big a part: write a small specification first, encode as much of it as possible in the types of the interface, only then fill in the implementation, and finally reason about the performance of what you've built.
1. For a much more thorough treatment of covariance and contravariance, read Eric Lippert’s eleven-part series on the subject. It’s written with C# in mind but everything is applicable to Scala too. It’s much more complicated than you might think. Even then, that’s only dealing with the natural variance of the parameter; in certain advanced cases you may want to override the compiler’s checks by using the @uncheckedVariance annotation. ↩︎
2. It’s not strictly correct to say supertype here, as the >: constraint actually imposes that the type is ‘bigger’ without necessarily implying an inheritance relationship. For example, Animal is a bigger type than Giraffe because there are more types that fit within it and there is an obvious subtyping relationship, but List[Animal] is a bigger type than List[Giraffe] because they are assignment-compatible even though there is no subtyping relationship. Most of the time this distinction isn’t important. ↩︎ |
# Footnotes
## Basic Footnotes
For basic footnotes, simply use \footnote[reference]{footnote text}. The reference is optional, and can be used to refer to the same footnote again. Footnotes can be referenced with the usual \in and \at macros (see References), or the note itself can be reproduced with \note[reference]. For example:
This\footnote[footA]{Or that, if you prefer.} is a sentence with a footnote\footnote{Actually,
two footnotes; this one and \in{footnote}[footA] on \at{page}[footA], denoted by \note[footA].}.
Thanks to Oblomov, it's also possible to use footnotes in footnotes, as in this example.
This\footnote{Or that\footnote{Or possibly even the other.}, if you prefer.} is a sentence
with a footnote.
## Footnote Numbering
You can setup the exact behaviour of footnotes as usual with \setupfootnotes. For example, to use footnotes with standard footnote symbols (which ConTeXt has defined as the conversion "set 2"), with the footnote counter resetting on each page, one would use the following:
\setupfootnotes[way=bypage, conversion=set 2]
This produces the following footnotes, using the text of the previous example.
## Alternate Footnote Locations
The \setupfootnotes command offers some options for the placement of footnotes; for instance, the location=columns option places the footnotes in a single column (of a multicolumn page) rather than across the whole page. The location=text option places the footnotes in text at a location specified by \placefootnotes; this can be easily used to create endnotes, or even to place footnotes after each paragraph or subsection.
\setupfootnotes[location=text]
This\footnote[footA]{Or that, if you prefer.} is a sentence with a footnote\footnote{Actually,
two footnotes; this one and footnote \note[footA].}.
\placefootnotes
This is some more text, with more footnotes\footnote{Specifically, this one.}.
\placefootnotes
## Footnote Formatting
Footnotes can be placed in multiple columns, using the n=number option in \setupfootnotes.
\setupfootnotes[n=3]
This\footnote[footA]{Or that\footnote{Or the other.}, if you prefer.} is a sentence
with a footnote\footnote{Actually, two footnotes; this one and \in{footnote}[footA]
on \at{page}[footA], denoted by \note[footA].}.
TODO: This is ugly, and points up some ConTeXt bugs that need to be fixed. (See: To-Do List)
## Footnotes in Floats
Floats cannot include normal footnotes, because they are likely to float to another page from the page on which they were defined, thus getting the footnotes out of order. Thus, to include footnotes in a float, one must use local footnotes. This table, which uses the \placelegend command to create a place for the footnotes, illustrates the process:
\startlocalfootnotes[n=2]
\placetable{A table with footnotes.}
\placelegend
{\starttable[|l|r|]
\HL
\VL One\footnote{First} \VL Two\footnote{Second} \VL\FR
\VL Three\footnote{Third} \VL Four\footnote{Fourth} \VL\LR
\HL
\stoptable}
{\placelocalfootnotes}
\stoplocalfootnotes
## Placing Footnotes Manually
TODO: This doesn't seem to be working quite right yet. A ConTeXt bug, or a wrong answer? (See: To-Do List)
In some cases, ConTeXt's footnoting system may not be able to do exactly what you want. For instance, you may want to place a footnote in a table so that the footnote appears with the rest of the footnotes on the page, or you may want to create a footnote to a footnote to a footnote. Many of these cases can be handled by using the \footnotetext command (which creates a footnote without placing the corresponding symbol in the text) and the \note command (which places the footnote symbol in the text, but does not create a footnote).
For example, to create a footnote to a footnote to a footnote, all but the first footnotes are created with \footnotetext commands, which are placed in the main text -- thereby ensuring that the footnotes are numbered and appear in the correct order. Then, these footnotes are referenced by \note commands within the relevant footnotes. In this example, the lines are broken for clarity; note the % at the end of each line to prevent spurious spaces in the text.
This%
\footnote{Or that\note[footB], if you prefer.}%
\footnotetext[footB]{Or possibly even the other\note[footC].}%
\footnotetext[footC]{It could be something entirely different.}
is a sentence with nested footnotes\note[footB]\note[footC]. |
# Homework Help: Infinite limit as X tends to infinity
1. Mar 30, 2010
### IdanH14
1. The problem statement, all variables and given/known data
I am required to express in an $$\varepsilon - \delta$$ way what I'm supposed to prove in the case $$\lim_{x \rightarrow \infty} f(x) = \infty$$
2. Relevant equations
None.
3. The attempt at a solution
So first, intuitively I thought that what this means is that $$f(x)$$ is bigger than any arbitrary number when $$x$$ is bigger than any arbitrary number. So I attempted to combine the $$\varepsilon - \delta$$ definitions of when $$x$$ tends to infinity and when limit $$f(x)$$ tends to infinity.
I came up with this:
$$\lim_{x \rightarrow \infty} f(x) = \infty$$ if for every $$M>0$$ there exists $$N>0$$ so that for every $$x>M$$, $$f(x)>N$$.
I am unsure of whether it's the correct definition. Anyone can verify that?
2. Mar 30, 2010
### tiny-tim
Welcome to PF!
Hi IdanH14! Welcome to PF!
(have a delta: δ and an epsilon: ε and an infinity: ∞ )
Yes , except it's the other way round …
no matter how large N is, we can find an M above which f(x) > N.
(you get the same result if you use 1/f, 1/δ, and 1/ε)
3. Mar 30, 2010
### IdanH14
Thanks! :)
Let me summarize it to see if I got it. It should be
For every N>0 there exists M>0, so that for every x>M, f(x)>N
Right?
Last edited: Mar 30, 2010
4. Mar 30, 2010
### tiny-tim
Right!
5. Mar 30, 2010
### IdanH14
I like you usage of icons. I think I'll adopt it. ;)
Now, I'm trying to solve an exercise in my math book with this principle. Despite the fact that I've already learned infinite limit arithmetic, I'm required to prove that $$\lim_{x\rightarrow \infty} x \cos\frac{1}{x} = \infty$$ in this cumbersome way. So I think I have a solution, but again, my insecurities creep in.
So I noticed that the bigger x gets, the closer $$\cos\frac{1}{x}$$ gets to 1. If I choose $$M=N$$ then, unless $$N=\infty$$, I'll get something that's smaller than N, because x is multiplied by something which is close to 1 but doesn't equal 1. But if $$M=2N$$ then the problem is eliminated. Almost.
Why almost? Because if $$\frac{1}{x}>1$$ then $$cos\frac{1}{x}$$ is potentially getting farther from 1. So, what I was thinking was to say $$M=max(1,2N)$$ and then problem solved.
I hope I'm clear enough. Is this a valid proof? Did I find for every N>0 an M>0 that meet the requirements?
Thanks :)
Last edited: Mar 30, 2010
6. Mar 30, 2010
### tiny-tim
That's correct.
I can work out why you're then using M = 2N (because of course M = N doesn't quite work), but you haven't actually specified why M = 2N does work.
(for example, does it work for any N?) |
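One way to make the missing step explicit (using only that cosine is decreasing on $$[0, \frac{\pi}{2}]$$): if $$x > M = \max(1, 2N)$$ then $$0 < \frac{1}{x} < 1$$, so $$\cos\frac{1}{x} > \cos 1 > \frac{1}{2}$$, and therefore $$x\cos\frac{1}{x} > 2N \cdot \frac{1}{2} = N$$.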
When studying Thermodynamics, we make much use of adiabatic processes where a change is made so fast that no significant heat can flow. For example, the compression stroke of a diesel engine.
In quantum mechanics, it seems they turn the definition on its head: adiabatic is defined as a change to the system that occurs gradually compared to the evolution of the unperturbed system. An example is a pendulum of period T with its length changing slowly compared to T.
Are these two definitions somehow the same? Are they even related? I can't make sense of this. |
# Another non-hom. diffeq
1. Jul 15, 2011
Is there an analytic solution for:
y"(c+dx+ex^2) + ay + b = 0,
y (x=0) = Ts
y'(x=L) = 0
where a,b,c,d,e are all constants?
2. Jul 15, 2011
### pmsrw3
I don't understand this DE. Are you saying that the second derivative of y, evaluated at c+dx+ex^2, is equal to -ay(x) - b, where y is now evaluated at x? What physical mechanism could give such a remote relationship? What about the domain? Over what range of x is this equation supposed to hold? Are c, d, and e such that c + dx + ex^2 precisely maps this range to itself? If not, how do we deal with the regions where y''(c+dx+ex^2) is defined but y(x) isn't, or vice versa?
3. Jul 15, 2011
Good questions.
I am investigating the properties of a sensor I am making, which involves the simultaneous interactions of convective heat transfer, joule heating, and potential theory. I am trying to build an analytic model that approximates all of these physics for simple geometries, namely a pipe with a pipe thickness. The major contributing factor is that the pipe material's resistance changes with temperature.
Basically here is what happens for the simple geometry of a coaxial setup:
1. Inside pipe surface is connected to ground, outside pipe surface is given a voltage.
2. Fluid goes through pipe.
3. Current flows from the outer pipe surface to the inner surface.
4. The pipe heats up.
5. The fluid advects some heat downstream.
6. A downstream section of pipe touches slightly hotter fluid because of the advection.
7. This downstream section of pipe heats up a little bit more.
8. The resistance of the downstream pipe section increases.
9. The current in the downstream section is reduced.
10. Less heat is produced in the downstream section.
11. Less heat is conducted into the fluid...
12. Less heat is advected...
etc...
So there is some balance that exists between all of this. I've already solved the heat transfer math; all it needs is heat flux as a function of the inner pipe temperature. This looks like
q"(Ts) = A*Ts + B
making the simplification that the resistance changes linearly with temperature.
Now, finding q"(Ts) for a coaxial setup is easy.
1. The e-field is the same everywhere radially AND axially. The E-field is E(z)=V/(Ro-Ri) where z is the distance in the radial direction from Ri to Ro. Actually, this is a constant, so E=V/(Ro-Ri) (z isn't needed as a variable).
Assuming that the pipe is insulated T'(z=Z)=0 and that T(z=0)=Ts, the diffEQ is the one you helped me solve before:
T" + AT + b = 0. Oh yeah, big Z is equal to Ro-Ri to make things easier.
Anyway, the heat flux turns out to be
q"(Ts) = Aj*Ts + Bj
Aj = k*a/(sqrt(|a|) * tan(h)(sqrt(|a|Z)
Bj = k*b/(sqrt(|a|) * tan(h)(sqrt(|a|Z)
a = alpha/k*E(z)^2
b = beta/k*E(z)^2
where
E(z) = V/(Ro-Ri) const.
and alpha is in [Siemens]/[m*K] and beta is in [Siemens]/[m]
Simple and neat solution.
Now I want to explore the geometry when, instead of there being an inner and outer conductor in the pipe, the pipe is split in half (like a C) and the conductors are placed on the edges of the C. I call this "split". The e-field is only the same axially, no longer radially. Therefore, z must come into play somehow. In this case, E(z)=(pi*V)/(Ri+z*Z) (or something like this, my notes are in my office).
The DiffEQ becomes
T" + a/(c+dz+ez^2)*T + b/(c+dz+ez^2) = 0
because of the squaring action of E(z), or more simply
T"*(c+dz+ez^2) + a*T + b = 0
where I am reusing a through e as dummy variables.
That's where it comes from. How would I go about solving this? The domain of z must be > 0 because it is a real distance (pipe thickness). a can be positive or negative, and b,c,d,e > 0 always. T > 0 [Kelvins]
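Before reaching for special functions, one cheap sanity check is numerical: because the ODE is linear, two initial-value solves can be superposed so that T'(Z) = 0 holds exactly. A minimal sketch (in Scala, with placeholder constants rather than the sensor's real values):

object SplitPipeBVP {
  // Placeholder constants; substitute the real a, b, c, d, e, Ts and Z = Ro - Ri.
  val (a, b, c, d, e) = (-2.0, 5.0, 1.0, 0.5, 0.2)
  val ts = 300.0
  val zEnd = 0.01
  val steps = 2000

  // T'' from the ODE (c + d z + e z^2) T'' + a T + b = 0; drop b for the homogeneous solve.
  def tpp(z: Double, t: Double, inhom: Boolean): Double =
    -(a * t + (if (inhom) b else 0.0)) / (c + d * z + e * z * z)

  // RK4 integration of (T, T') from z = 0 to z = Z; returns (T(Z), T'(Z)).
  def integrate(t0: Double, dt0: Double, inhom: Boolean): (Double, Double) = {
    val h = zEnd / steps
    var t = t0; var dt = dt0; var z = 0.0
    for (_ <- 0 until steps) {
      val k1t = dt;               val k1d = tpp(z, t, inhom)
      val k2t = dt + h / 2 * k1d; val k2d = tpp(z + h / 2, t + h / 2 * k1t, inhom)
      val k3t = dt + h / 2 * k2d; val k3d = tpp(z + h / 2, t + h / 2 * k2t, inhom)
      val k4t = dt + h * k3d;     val k4d = tpp(z + h, t + h * k3t, inhom)
      t += h / 6 * (k1t + 2 * k2t + 2 * k3t + k4t)
      dt += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
      z += h
    }
    (t, dt)
  }

  def main(args: Array[String]): Unit = {
    // u: solve with the b term, u(0) = Ts, u'(0) = 0
    // v: homogeneous solve,     v(0) = 0,  v'(0) = 1
    val (_, duZ) = integrate(ts, 0.0, inhom = true)
    val (_, dvZ) = integrate(0.0, 1.0, inhom = false)
    // T = u + lambda * v keeps T(0) = Ts and makes T'(Z) = duZ + lambda * dvZ = 0
    val lambda = -duZ / dvZ
    println(s"required initial slope T'(0) = $lambda")
  }
}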
4. Jul 15, 2011
### pmsrw3
I think what's confusing me is that you never explicitly give the argument of T (or y in the original). Originally you wrote
$$y^{\prime\prime}\left(c+dx+ex^2\right) + ay + b = 0$$
which would ordinarily be understood, not as y'' multiplied by c+dx+ex^2, but y'' evaluated at c+dx+ex^2. But If I understand correctly what you just wrote, your DE is actually
$$(c+dz+ez^2)T^{\prime\prime}\left(x\right) + a*T\left(x\right) + b = 0$$
where x is along the pipe, and z is perpendicular to the pipe (and so would y be -- I'm not sure why it's OK to leave that out).
There are a lot of reasons why this looks wrong to me, so I'm guessing I still don't understand correctly.
5. Jul 15, 2011
it's actually
(c + dz + ez^2)*T"(z) + a*T(z) + b = 0
T(z=0) = Ts
T'(z=Z) = 0
Don't let my explanation confuse you; forget the axial direction x for now; right now I'm solving a one dimensional ODE (radially only) with lots of constants, shown above.
I'm scared of the word "Bessel", but it might have to come into play here. This is where I have no experience.
Last edited: Jul 15, 2011
6. Jul 15, 2011
### pmsrw3
Oh, boy, you're going to love this. No Bessel functions, but -- are you ready?
$$T(z)\to \left((b+a \text{Ts}) \left((a+2 e) (c+Z (d+e Z)) \, _2F_1\left(\frac{7 e+\sqrt{e (e-4 a)}}{4 e},\frac{1}{4} \left(7-\frac{\sqrt{e (e-4 a)}}{e}\right);3;-\frac{4 e (c+Z (d+e Z))}{d^2-4 c e}\right)-2 \left(d^2-4 c e\right) \, _2F_1\left(\frac{3 e+\sqrt{e (e-4 a)}}{4 e},\frac{1}{4} \left(3-\frac{\sqrt{e (e-4 a)}}{e}\right);2;-\frac{4 e (c+Z (d+e Z))}{d^2-4 c e}\right)\right) G_{2,2}^{2,0}\left(-\frac{4 e (c+z (d+e z))}{d^2-4 c e}| \begin{array}{c} \frac{1}{4} \left(5-\frac{\sqrt{e-4 a}}{\sqrt{e}}\right),\frac{1}{4} \left(\frac{\sqrt{e-4 a}}{\sqrt{e}}+5\right) \\ 0,1 \end{array} \right)+8 e \left((b+a \text{Ts}) (c+z (d+e z)) \, _2F_1\left(\frac{3 e+\sqrt{e (e-4 a)}}{4 e},\frac{1}{4} \left(3-\frac{\sqrt{e (e-4 a)}}{e}\right);2;-\frac{4 e (c+z (d+e z))}{d^2-4 c e}\right)-b c \, _2F_1\left(\frac{3 e+\sqrt{e (e-4 a)}}{4 e},\frac{1}{4} \left(3-\frac{\sqrt{e (e-4 a)}}{e}\right);2;-\frac{4 c e}{d^2-4 c e}\right)\right) G_{2,2}^{2,0}\left(-\frac{4 e (c+Z (d+e Z))}{d^2-4 c e}| \begin{array}{c} \frac{1}{4}-\frac{\sqrt{e-4 a}}{4 \sqrt{e}},\frac{1}{4} \left(\frac{\sqrt{e-4 a}}{\sqrt{e}}+1\right) \\ 0,0 \end{array} \right)+b \left(2 \left(d^2-4 c e\right) \, _2F_1\left(\frac{3 e+\sqrt{e (e-4 a)}}{4 e},\frac{1}{4} \left(3-\frac{\sqrt{e (e-4 a)}}{e}\right);2;-\frac{4 e (c+Z (d+e Z))}{d^2-4 c e}\right)-(a+2 e) (c+Z (d+e Z)) \, _2F_1\left(\frac{7 e+\sqrt{e (e-4 a)}}{4 e},\frac{1}{4} \left(7-\frac{\sqrt{e (e-4 a)}}{e}\right);3;-\frac{4 e (c+Z (d+e Z))}{d^2-4 c e}\right)\right) G_{2,2}^{2,0}\left(-\frac{4 c e}{d^2-4 c e}| \begin{array}{c} \frac{5}{4}-\frac{\sqrt{e-4 a}}{4 \sqrt{e}},\frac{1}{4} \left(\frac{\sqrt{e-4 a}}{\sqrt{e}}+5\right) \\ 0,1 \end{array} \right)\right)/\left(a \left(8 c e \, _2F_1\left(\frac{3 e+\sqrt{e (e-4 a)}}{4 e},\frac{1}{4} \left(3-\frac{\sqrt{e (e-4 a)}}{e}\right);2;-\frac{4 c e}{d^2-4 c e}\right) G_{2,2}^{2,0}\left(-\frac{4 e (c+Z (d+e Z))}{d^2-4 c e}| \begin{array}{c} \frac{1}{4}-\frac{\sqrt{e-4 a}}{4 \sqrt{e}},\frac{1}{4} \left(\frac{\sqrt{e-4 a}}{\sqrt{e}}+1\right) \\ 0,0 \end{array} \right)+\left((a+2 e) (c+Z (d+e Z)) \, _2F_1\left(\frac{7 e+\sqrt{e (e-4 a)}}{4 e},\frac{1}{4} \left(7-\frac{\sqrt{e (e-4 a)}}{e}\right);3;-\frac{4 e (c+Z (d+e Z))}{d^2-4 c e}\right)-2 \left(d^2-4 c e\right) \, _2F_1\left(\frac{3 e+\sqrt{e (e-4 a)}}{4 e},\frac{1}{4} \left(3-\frac{\sqrt{e (e-4 a)}}{e}\right);2;-\frac{4 e (c+Z (d+e Z))}{d^2-4 c e}\right)\right) G_{2,2}^{2,0}\left(-\frac{4 c e}{d^2-4 c e}| \begin{array}{c} \frac{5}{4}-\frac{\sqrt{e-4 a}}{4 \sqrt{e}},\frac{1}{4} \left(\frac{\sqrt{e-4 a}}{\sqrt{e}}+5\right) \\ 0,1 \end{array} \right)\right)\right)$$
Here's the same thing in Mathematica's native form (useful because the names identify special functions you've probably never heard of):
$$T[z] -> ((b + a Ts) (-2 (d^2 - 4 c e) Hypergeometric2F1[( 3 e + Sqrt[e (-4 a + e)])/(4 e), 1/4 (3 - Sqrt[e (-4 a + e)]/e), 2, -((4 e (c + Z (d + e Z)))/(d^2 - 4 c e))] + (a + 2 e) (c + Z (d + e Z)) Hypergeometric2F1[(7 e + Sqrt[e (-4 a + e)])/( 4 e), 1/4 (7 - Sqrt[e (-4 a + e)]/e), 3, -((4 e (c + Z (d + e Z)))/( d^2 - 4 c e))]) MeijerG[{{}, {1/ 4 (5 - Sqrt[-4 a + e]/Sqrt[e]), 1/4 (5 + Sqrt[-4 a + e]/Sqrt[e])}}, {{0, 1}, {}}, -(( 4 e (c + z (d + e z)))/(d^2 - 4 c e))] + 8 e (-b c Hypergeometric2F1[(3 e + Sqrt[e (-4 a + e)])/(4 e), 1/4 (3 - Sqrt[e (-4 a + e)]/e), 2, -((4 c e)/(d^2 - 4 c e))] + (b + a Ts) (c + z (d + e z)) Hypergeometric2F1[(3 e + Sqrt[e (-4 a + e)])/( 4 e), 1/4 (3 - Sqrt[e (-4 a + e)]/e), 2, -((4 e (c + z (d + e z)))/( d^2 - 4 c e))]) MeijerG[{{}, {1/4 - Sqrt[-4 a + e]/( 4 Sqrt[e]), 1/4 (1 + Sqrt[-4 a + e]/Sqrt[e])}}, {{0, 0}, {}}, -((4 e (c + Z (d + e Z)))/(d^2 - 4 c e))] + b (2 (d^2 - 4 c e) Hypergeometric2F1[(3 e + Sqrt[e (-4 a + e)])/( 4 e), 1/4 (3 - Sqrt[e (-4 a + e)]/e), 2, -((4 e (c + Z (d + e Z)))/(d^2 - 4 c e))] - (a + 2 e) (c + Z (d + e Z)) Hypergeometric2F1[(7 e + Sqrt[e (-4 a + e)])/( 4 e), 1/4 (7 - Sqrt[e (-4 a + e)]/e), 3, -((4 e (c + Z (d + e Z)))/( d^2 - 4 c e))]) MeijerG[{{}, {5/4 - Sqrt[-4 a + e]/( 4 Sqrt[e]), 1/4 (5 + Sqrt[-4 a + e]/Sqrt[e])}}, {{0, 1}, {}}, -((4 c e)/( d^2 - 4 c e))])/(a (8 c e Hypergeometric2F1[( 3 e + Sqrt[e (-4 a + e)])/(4 e), 1/4 (3 - Sqrt[e (-4 a + e)]/e), 2, -((4 c e)/( d^2 - 4 c e))] MeijerG[{{}, {1/4 - Sqrt[-4 a + e]/( 4 Sqrt[e]), 1/4 (1 + Sqrt[-4 a + e]/Sqrt[e])}}, {{0, 0}, {}}, -((4 e (c + Z (d + e Z)))/( d^2 - 4 c e))] + (-2 (d^2 - 4 c e) Hypergeometric2F1[( 3 e + Sqrt[e (-4 a + e)])/(4 e), 1/4 (3 - Sqrt[e (-4 a + e)]/e), 2, -((4 e (c + Z (d + e Z)))/(d^2 - 4 c e))] + (a + 2 e) (c + Z (d + e Z)) Hypergeometric2F1[( 7 e + Sqrt[e (-4 a + e)])/(4 e), 1/4 (7 - Sqrt[e (-4 a + e)]/e), 3, -((4 e (c + Z (d + e Z)))/(d^2 - 4 c e))]) MeijerG[{{}, {5/4 - Sqrt[-4 a + e]/(4 Sqrt[e]), 1/4 (5 + Sqrt[-4 a + e]/Sqrt[e])}}, {{0, 1}, {}}, -(( 4 c e)/(d^2 - 4 c e))]))$$
# Multi-portfolio time consistency for set-valued convex and coherent risk measures
## Author(s): Feinstein, Zachary; Rudloff, Birgit
To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1dr21
| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Feinstein, Zachary | - |
| dc.contributor.author | Rudloff, Birgit | - |
| dc.date.accessioned | 2020-02-24T20:52:59Z | - |
| dc.date.available | 2020-02-24T20:52:59Z | - |
| dc.date.issued | 2015-01 | en_US |
| dc.identifier.citation | Feinstein, Zachary, Rudloff, Birgit. (2015). Multi-portfolio time consistency for set-valued convex and coherent risk measures. Finance and Stochastics, 19 (1), 67 - 107. doi:10.1007/s00780-014-0247-6 | en_US |
| dc.identifier.issn | 0949-2984 | - |
| dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1dr21 | - |
| dc.description.abstract | Equivalent characterizations of multi-portfolio time consistency are deduced for closed convex and coherent set-valued risk measures on L^p_d(Ω, F, P) with image space in the power set of L^p_d(Ω, F_t, P). In the convex case, multi-portfolio time consistency is equivalent to a cocycle condition on the sum of minimal penalty functions. In the coherent case, multi-portfolio time consistency is equivalent to a generalized version of stability of the dual variables. As examples, the set-valued entropic risk measure with constant risk aversion coefficient is shown to satisfy the cocycle condition for its minimal penalty functions, the set of superhedging portfolios in markets with proportional transaction costs is shown to have the stability property and in markets with convex transaction costs is shown to satisfy the composed cocycle condition, and a multi-portfolio time consistent version of the set-valued average value at risk, the composed AV@R, is given and its dual representation deduced. | en_US |
| dc.format.extent | 67 - 107 | en_US |
| dc.language.iso | en_US | en_US |
| dc.relation.ispartof | Finance and Stochastics | en_US |
| dc.rights | Author's manuscript | en_US |
| dc.title | Multi-portfolio time consistency for set-valued convex and coherent risk measures | en_US |
| dc.type | Journal Article | en_US |
| dc.identifier.doi | doi:10.1007/s00780-014-0247-6 | - |
| dc.date.eissued | 2014-10-18 | en_US |
| dc.identifier.eissn | 1432-1122 | - |
| pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US |
Files in This Item:
File Description SizeFormat |
What is the optimal value function of the scaled version of the reward function?
Consider the reward function $$r(s, a)$$ with optimal state-action value function $$q_*(s, a)$$. What would be the optimal state-action value function of $$c r(s, a)$$, for $$c \in \mathbb{R}$$? Would it be $$c q_*(s, a)$$?
The Bellman optimality equation is given by
$$q_*(s,a) = \sum_{s' \in \mathcal{S}, r \in \mathcal{R}}p(s',r \mid s,a)(r + \gamma \max_{a'\in\mathcal{A}(s')}q_*(s',a')) \tag{1}\label{1}.$$
If the reward is multiplied by a constant $$c \in \mathbb{R}$$ with $$c > 0$$, then the new optimal action-value function is given by $$cq_*(s, a)$$.
To prove this, we just need to show that equation \ref{1} holds when the reward is $$cr$$ and the action-value is $$c q_*(s, a)$$.
\begin{align} c q_*(s,a) &= \sum_{s' \in \mathcal{S}, r \in \mathcal{R}}p(s',r \mid s,a)(c r + \gamma \max_{a'\in\mathcal{A}(s')} c q_*(s',a')) \tag{2}\label{2} \end{align}
Given that $$c > 0$$, then $$\max_{a'\in\mathcal{A}(s')} c q_*(s',a') = c\max_{a'\in\mathcal{A}(s')}q_*(s',a')$$, so $$c$$ can be taken out of the $$\operatorname{max}$$ operator. Therefore, the equation \ref{2} becomes
\begin{align} c q_*(s,a) &= \sum_{s' \in \mathcal{S}, r \in \mathcal{R}}p(s',r \mid s,a)(c r + \gamma c \max_{a'\in\mathcal{A}(s')} q_*(s',a')) \\ &= \sum_{s' \in \mathcal{S}, r \in \mathcal{R}}c p(s',r \mid s,a)(r + \gamma \max_{a'\in\mathcal{A}(s')} q_*(s',a')) \\ &= c \sum_{s' \in \mathcal{S}, r \in \mathcal{R}} p(s',r \mid s,a)(r + \gamma \max_{a'\in\mathcal{A}(s')} q_*(s',a')) \\ q_*(s,a) &= \sum_{s' \in \mathcal{S}, r \in \mathcal{R}} p(s',r \mid s,a)(r + \gamma \max_{a'\in\mathcal{A}(s')} q_*(s',a')) \tag{3}\label{3} \end{align} where the last step divides both sides by $$c$$. This is exactly the Bellman optimality equation in \ref{1}, which implies that, when the reward is given by $$cr$$, $$c q_*(s,a)$$ is the solution to the Bellman optimality equation. Consequently, in this case, the set of optimal policies does not change.
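As an informal numerical check of the $$c > 0$$ case, one can run Q-value iteration on a small randomly generated MDP with rewards $$r$$ and $$c\,r$$ and compare the results. The sketch below is illustrative only; the MDP, its size and the discount factor are made up for the test and are not part of the question:

```python
import numpy as np

# Illustrative check on a small random MDP (all names and sizes are made up).
rng = np.random.default_rng(0)
nS, nA, gamma, c = 4, 2, 0.9, 3.0
P = rng.random((nS, nA, nS))
P /= P.sum(axis=2, keepdims=True)   # p(s' | s, a)
R = rng.random((nS, nA))            # expected reward r(s, a)

def q_star(R):
    """Iterate the Bellman optimality operator until (approximate) convergence."""
    Q = np.zeros((nS, nA))
    for _ in range(1000):
        Q = R + gamma * P @ Q.max(axis=1)
    return Q

# Scaling the reward by c > 0 scales the optimal action-value function by c.
print(np.allclose(q_star(c * R), c * q_star(R)))   # True
```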
If $$c=0$$, then \ref{2} becomes $$0=0$$, which is true.
If $$c < 0$$, then $$\max_{a'\in\mathcal{A}(s')} c q_*(s',a') = c\min_{a'\in\mathcal{A}(s')}q_*(s',a')$$, so equation \ref{3} becomes
\begin{align} q_*(s,a) &= \sum_{s' \in \mathcal{S}, r \in \mathcal{R}} p(s',r \mid s,a)(r + \gamma \min_{a'\in\mathcal{A}(s')} q_*(s',a')) \end{align}
which is not equal to the Bellman optimality equation in \ref{1}. |
diophantus
Almost sure functional central limit theorem for non-nestling random walk in random environment
09 Apr 2007 math.PR arxiv.org/abs/0704.1022
Abstract. We consider a non-nestling random walk in a product random environment. We assume an exponential moment for the step of the walk, uniformly in the environment. We prove an invariance principle (functional central limit theorem) under almost every environment for the centered and diffusively scaled walk. The main point behind the invariance principle is that the quenched mean of the walk behaves subdiffusively.
# Journées de topologie quantique
The Journées de topologie quantique (quantum topology days) aim to bring together, in a format close to that of a research-group seminar, researchers who are interested in quantum topology in a fairly broad sense. The meetings alternate between Paris and Dijon. The first edition took place on Monday 21 November 2022 in Paris.
Paris
TBA
## Past sessions
### 21 November 2022
• Pedro Vaz: From the categorification of Verma modules to link homologies
Abstract: In this talk I will explain the categorification of Verma modules and their use in the construction of link invariants in 3-dimensional space and in the solid torus. The results presented come from collaborations with G. Naisse and A. Lacabanne.
• Cristina Palmer-Anghel: A globalisation of the Jones and Alexander polynomials from configurations on arcs and ovals in the punctured disc
Abstract: The Jones and Alexander polynomials are two important knot invariants and our aim is to see them from a topological model given by a graded intersection in a configuration space. Bigelow and Lawrence showed a topological model for the Jones polynomial, using arcs and figure eights in the punctured disc. On the other hand, the Alexander polynomial can be obtained from intersections between ovals and arcs. We present a common topological viewpoint which sees both invariants, based on ovals and arcs in the punctured disc. The model is constructed from a graded intersection between two explicit Lagrangians in a configuration space. It is a polynomial in two variables, recovering the Jones and Alexander polynomials through specialisations of coefficients. Then, we prove that the intersection before specialisation is (up to a quotient) an invariant which globalises these two invariants, given by an explicit interpolation between the Jones polynomial and Alexander polynomial. We also show how to obtain the quantum generalisation, coloured Jones and coloured Alexander polynomials, from a graded intersection between two Lagrangians in a symmetric power of a surface.
• Julien Korinman: Representations of skein algebras
Abstract: The aim of this talk is to present recent progress towards the classification of representations of skein algebras at roots of unity. The indecomposable representations of these algebras (conjecturally) form the building blocks of algebraic objects called TQFTs, which contain invariants of knots and 3-manifolds as well as representations of mapping class groups of surfaces. I will first define skein algebras and relate them to character varieties, and then present general methods from representation theory (Azumaya loci, the theory of Poisson orders) which connect the representation theory of skein algebras to the Poisson geometry of character varieties. In the end, we obtain an 'almost' complete description of the weight representations of skein algebras. This is joint work with H. Karuo.
### 13 March 2023
• Rinat Kashaev: Generalized TQFT's from local fields
Abstract: Based on the theory of quantum dilogarithms over locally compact Abelian groups, I will talk about a particular example of a quantum dilogarithm associated with a local field $F$ which leads to a generalized 3d TQFT based on the combinatorial input of ordered $\Delta$-complexes. The associated invariants of 3-manifolds are expected to be specific counting invariants of representations of $\pi_1$ into the group $PSL_2F$. This is an ongoing project in collaboration with Stavros Garoufalidis.
• Eilind Karlsson: Deformation quantisation and skein categories
Abstract: I will start by reviewing deformation quantisation of algebras, and explain how we in a similar spirit can define deformation quantisation of categories. The motivation is to understand how deformation quantisation interacts with categorical factorization homology, or more explicitly: how deformation quantisation interacts with “gluing” local observables to obtain global observables. One important and well-known example of factorization homology is given by skein categories, which I will briefly introduce. We generalise the theory of skein categories to fit into the deformation quantisation setting, and use it as a running example. This is based on joint work (in progress) with Corina Keller, Lukas Müller and Jan Pulmann.
• Rhea Palak Bakshi: Skein modules, torsion, and framing changes of links
Abstract: Skein modules are invariants of 3-manifolds which were introduced by Józef H. Przytycki (and independently by Vladimir Turaev) in 1987 as generalisations of the Jones, HOMFLYPT, and Kauffman bracket polynomial link invariants in the 3-sphere to arbitrary 3-manifolds. Over time, skein modules have evolved into one of the most important objects in knot theory and quantum topology, having strong ties with many fields of mathematics such as algebraic geometry, hyperbolic geometry, and the Witten-Reshetikhin-Turaev 3-manifold invariants, to name a few. One avenue in the study of skein modules is determining whether they reflect the geometry or topology of the manifold, for example, whether the module detects the presence of incompressible or non-separating surfaces in the manifold. Interestingly enough this presence manifests itself in the form of torsion in the skein module. In this talk we will discuss various skein modules which detect the presence of non-separating surfaces. We will focus on the framing skein module and show that it detects the presence of non-separating 2-spheres in a 3-manifold by way of torsion.
The following data was collected for the reaction between carbon monoxide and nitrogen dioxide....
Question:
The following data were collected for the reaction between carbon monoxide and nitrogen dioxide. The time recorded corresponds to a change in NO concentration of 2.7×10⁻³ M.
NO2 + CO ---> NO + CO2
Trial   [NO2] (M)   [CO] (M)   Time (s)
1       0.020       0.010      296
2       0.030       0.010      197
3       0.020       0.020      74
A) Calculate the rate for each of the three trials?
B) Determine the order of the reaction with respect to NO2.
C) Determine the order of the rxn with respect to CO.
D) What is the rate constant?
E) What is the rate law equation for this reaction?
Rate constant
It is a coefficient relating the rate of a chemical reaction at a given temperature to the concentrations of the reactants. It changes with temperature, and its units depend on the overall order of the reaction.
A)
rate 1 = (2.7 × 10⁻³ M) / (296 s) = 9.1216 × 10⁻⁶ M/s
For trial 2:
rate 2 = (2.7 × 10⁻³ M) / (197 s)...
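The remaining parts follow from the same data. Below is a sketch of the full arithmetic in Python, assuming (as in part A above) that the rate of each trial is the fixed concentration change divided by the recorded time; the variable names are illustrative:

```python
from math import log

# Sketch of parts A-E (values from the table; rate taken as d[NO]/dt over the run).
d_no = 2.7e-3                        # M, fixed change in [NO]
trials = {1: (0.020, 0.010, 296.0),  # trial: ([NO2], [CO], time in s)
          2: (0.030, 0.010, 197.0),
          3: (0.020, 0.020, 74.0)}

# A) rate of each trial
rates = {t: d_no / time for t, (no2, co, time) in trials.items()}
print(rates)                         # ~9.1e-6, 1.4e-5, 3.6e-5 M/s

# B) order in NO2 from trials 1 and 2 (only [NO2] changes)
m = log(rates[2] / rates[1]) / log(trials[2][0] / trials[1][0])
# C) order in CO from trials 1 and 3 (only [CO] changes)
n = log(rates[3] / rates[1]) / log(trials[3][1] / trials[1][1])
print(round(m), round(n))            # 1 2

# D) rate constant from trial 1
k = rates[1] / (trials[1][0] ** round(m) * trials[1][1] ** round(n))
print(k)                             # ~4.6 (units M^-2 s^-1)

# E) with these numbers the rate law is: rate = k * [NO2] * [CO]**2
```

With the numbers in the table this comes out first order in NO2 and second order in CO, giving rate = k[NO2][CO]² with k ≈ 4.6 M⁻² s⁻¹.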
Learn the difference between rate constant and rate law. Explore how to use the rate law equation to find the reaction order for one and two reactants. |
# Switch Case Statements
Sometimes conditional logic may be too complex for if statements. In these situations, the issue isn’t that it’s impossible to write the logic as nested if statements, but that to do so could result in confusing code.
In these situations, we can use case statements to check each of our conditions in turn and process commands based on those conditions.
The case syntax looks like this:
```bash
case $string in
  pattern_1)
    command
    ;;
  pattern_2)
    alternate command
    ;;
  *)
    default command
    ;;
esac
```

Let’s break this down. First, we start with case followed by the variable or expression we want to test and then in. Next, we have our case patterns against which we want to check our variable or expression. We use the ) symbol to signify the end of each pattern. After each pattern, you can then specify one or more commands you want to execute in the event that the pattern matches the expression or variable, terminating each clause with ;;. As our last switch, it is common practice to have a default condition which is defined by having * as the pattern. Finally, we signify the end of our case statement by closing it with esac (case typed backwards!). Here is a simple example:

```bash
#!/usr/bin/env bash
fruit="pineapple"

case $fruit in
  apple)
    echo "Your apple will cost 35p"
    ;;
  pear)
    echo "Your pear will cost 41p"
    ;;
  peach)
    echo "Your peach will cost 50p"
    ;;
  pineapple)
    echo "Your pineapple will cost 75p"
    ;;
  *)
    echo "Unknown fruit"
    ;;
esac
```
First, we set our variable, fruit, to have the value “pineapple”. We then compare this against several conditions looking to see whether the value of our variable matches the pattern provided. In the event that none of the patterns match our fruit, we have a default response “Unknown fruit”.
As one of the patterns is indeed “pineapple” we meet that condition and return:
Your pineapple will cost 75p
Create a Bash script called farm.sh that uses a case statement to perform the following functions (a sample solution is sketched after the list):
1. Stores a command line argument in a variable called animal
2. Use a case switch statement which has the following conditions and responses
• When the user enters cow, return “Here, moo”
• When the user enters sheep, return “There a baa”
• When the user enters duck, return “Everywhere a quack”
• Otherwise, return “Old MacDonald had a farm” |
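One possible solution that meets the requirements above (a sketch; other layouts are equally valid):

```bash
#!/usr/bin/env bash
# farm.sh -- one possible solution to the exercise above

animal=$1            # store the first command line argument

case $animal in
  cow)
    echo "Here, moo"
    ;;
  sheep)
    echo "There a baa"
    ;;
  duck)
    echo "Everywhere a quack"
    ;;
  *)
    echo "Old MacDonald had a farm"
    ;;
esac
```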
# Why are melting points of transition metals high?
Transition metals are dense metals with high melting and boiling points. The main reason is the strength of their metallic bonding: in addition to the ns electrons, electrons from the (n−1)d subshell are also available for delocalisation into the ‘sea’ of electrons that holds the metal lattice together. The more delocalised electrons per atom, the stronger the electrostatic attraction between the metal cations and the electron sea, and the more energy is needed to pull the atoms apart. This is why a Group 1 metal such as sodium, which contributes only one delocalised electron per atom, has a comparatively low melting point, while a very high temperature is needed to bring a transition metal to its melting or boiling point.
The number of unpaired d electrons in the outer shells is a good indicator of the strength of these bonds. Across a row of the d-block the melting points rise to a maximum around the middle of the series (near vanadium and chromium in the 3d row, close to the d⁵ configuration) and then fall again as the d orbitals fill and fewer unpaired electrons are available for bonding; manganese and technetium show anomalously low values. Tungsten, rhenium, osmium, tantalum and molybdenum are among the highest-melting metals of all. Other factors, such as crystal structure and atomic weight, also influence the exact melting point.
The same strong metallic bonding explains the other characteristic bulk properties of the transition metals: high densities, high molar enthalpies of fusion, and hardness, all high in comparison with the main-group metals. In addition, the availability of vacant d orbitals allows relatively small non-metal atoms to occupy holes in the metal lattice, forming interstitial compounds.
posted: Afrika 2013 |
# How many
How many two-digit numbers greater than 30 can we create from the digits 0, 1, 2, 3, 4, 5? Digits cannot be repeated within a number.
Correct result:
n = 14
#### Solution:
$n=1 \cdot \ 5 + 2 \cdot \ 6 - 3=14$
(With tens digit 3 there are 5 numbers using a different second digit; with tens digit 4 or 5 there are 2 · 6 candidates; subtracting the three excluded numbers 30, 44 and 55 leaves 14.)
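A brute-force enumeration confirms the count (a quick sketch, not part of the original solution):

```python
# Brute-force check of the result: enumerate all candidate two-digit numbers.
digits = [0, 1, 2, 3, 4, 5]
count = sum(1 for tens in digits for units in digits
            if tens != units and 10 * tens + units > 30)
print(count)   # 14
```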
## Next similar math problems:
• Soccer balls
Pupils in one class want to buy two soccer balls together. If each of them brings 12.50 euros, they will be 100 euros short; if each brings 16 euros, 12 euros will be left over. How many students are in the class?
• The string
They cut 113 cm from the string and divided the rest in a ratio of 5: 6.5: 8: 9.5. The longest part measured 38 cm. Find the original length of the string.
• Pascal's law
Please calculate according to Pascal's law. Krupp's machines were known for their large size. In 1861, a blacksmith's steam hydraulic press was put into operation in Essen. What was the cross-sectional area of the larger piston if a compressive force of 1
• 120 nuts
Divide 120 nuts in a ratio of 4: 6.
• Dance ensembles
4 dance ensembles were dancing at the festival. None had less than 10 and more than 20 members. All dancers from some of the two ensembles were represented in each dance. First, 31 participants were on the stage, then 32, 34, 35, 37, and 38. How many danc
• Operations with sets
The set B - A has half as many elements as the set A - B and a quarter as many elements as the set A ∩ B. How many times more elements does the set A have than the set B?
• Pagans
Jano and Michael ate pagans. Jano ate three more than Michael. The product of their counts (numbers) is 180. How many pagans did each of them eat?
There are 4 roads from city A to city B. There are 5 roads from city B to city C. How many different routes can we come from city A to city C via city B?
• Two gears
Two gears with 13 and 7 teeth rotate locked into each other. How many turns does a big wheel have to make for both wheels to be in the starting position again?
• Ten persons
Ten persons: each person shakes hands with every other person. How many handshakes were there?
• Three ships
There are three ships moored in the port, which sail together. The first ship returns after two weeks, the second after four weeks, and the third after eight weeks. In how many weeks the ships will meet in the port for the first time? How many times have
• Drawing from a hat
When drawing numbers from a hat from 1 to 35, we select random given numbers. What is the probability that the drawn numbers will be divisible by 8 and 2?
• Sum of seven
The sum of seven consecutive odd natural numbers is 119. Determine the smallest of them.
• Big numbers
How many natural numbers less than 10 to the sixth can be written in numbers: a) 9.8.7 b) 9.8.0
• You have
You have 4 reindeer and you want to have 3 fly your sleigh. You always have your reindeer fly in a single-file line. How many different ways can you arrange your reindeer?
• How many 4
How many 4 digit numbers that are divisible by 10 can be formed from the numbers 3, 5, 7, 8, 9, 0 such that no number repeats?
• Ratio
Alena collected 7.8 kg of blueberries, 2.6 kg of blackberries, and 3.9 kg of cranberries. Express the ratio in the smallest natural numbers in this order. |
We know that F = $$\frac{G M m}{r^{2}}$$ as weight of a body is the force with which a body is attracted towards the earth, From the equation (1) and (2), we get Answer: A student thought that two bricks tied together would fall faster than a single one under the action of gravity. Will this ratio remain the same if (i) one of the objects is hollow and the other one is solid; and (ii) both of them are hollow, size remaining the same in each case? The standard kilogram is the mass of a block of a platinum alloy kept at the international bureau of weights and measures near Paris in France. Yes, fluids have weight. We know from Newton’s second law of motion that the force is the product of mass and acceleration. Answer: The moon will begin to move in a straight line in the direction in which it was moving at that instant because the circular motion of moon is due to centripetal force provided by the gravitational force of the earth. Answer: If the mass of a body is 9.8 kg on the earth, what would be its mass on the moon? (a) Downward Gravitation Class 9 Extra Questions Short Answer Questions-I. u = 0 + gt2 Question 1. On the earth, a stone is thrown from a height in a direction parallel to the earth’s surface while another stone is simultaneously dropped from the same height. (b) At point B Define the standard kilogram. Question 1. Since, the two forces, i.e., Fp and FQ are equal, thus from (1) and (2), (e) Acceleration = -10 ms-2 (g) At which point does the bal 1 have the same speed as when it was thrown? (b) At which point is the bal 1 stationary? They both hit the ground at the same time. Given m1 = 3 kg; m2 = 12 kg Answer: Question 3. On the moon’s surface, the acceleration due to gravity is 1.67 ms-2. Question 5. Weight is not a constant quantity. (i) Original weight = $$\frac{G M m}{r^{2}}$$, where M is the mass of the earth. Yes. What will be its speed when it has fallen 100 m? h2 = $$\frac{1}{2} g t_{2}^{2}$$, And the mass of body Q=m2 Answer: Question 7. [NCERT Exemplar] (c) At point B u = 0 ms-1, h = 49 m, Acceleration in downward direction = g 0 = u – gt1 Original weight, W0 = mg = mG $$\frac{M}{R^{2}}$$ ∴ 49 = 0 × t + $$\frac{1}{2}$$ × 9.8 × t2 Calculate the average density of the earth in terms of g, G and R. It means that the force on the apple due to earth’s attraction is equal to that on the earth due to apple’s attraction. Why does formation of tides takes place in sea or ocean? Mass is the quantity of matter contained in the body. Question 14. Find out the ratio of time they would take in reaching the ground. Free Question Bank for 11th Class Physics Gravitation Gravitation Conceptual Problems Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. Why? Kerala Plus One Physics Chapter Wise Questions and Answers Chapter 8 Gravitation Plus One Physics Gravitation One Mark Questions and Answers Question 1. Prove that if a body is thrown vertically upward, the time of ascent is equal to the time of descent. Browse from thousands of Gravitation questions and answers (Q&A). 1. Weight of an object is directly proportional to the mass of the earth and inversely proportional to the square of the radius of the earth, i.e., Question 4. Register online for Science tuition on Vedantu.com to score more marks in your examination. So, when dropped from the same height a body reaches the ground quicker at the poles than at the equator. 
Suppose that the radius of the earth becomes twice of its original radius without any change in its mass. What will happen to the gravitational force between two bodies if the masses of one body is doubled? For both the stones Acceleration, a = g = 9.8 m/s2 v2 = 2gs = 2 × 9.8 × 100 = 1960 Does the apple also attract the earth? Question 13. Question 2. Q.17 Does the gravitational force same for two objects inside and outside the water? Answer: Question 4. Answer: (ii) Weight = $$\frac{G M m}{r^{2}}$$, where r is the radius of the earth. Where, R is the distance of the body from the centre of the earth. How is gravitation different from gravity? Question 1. Time taken to reach the ground, t = 4 s The Following Section consists of Gravitation Questions on Physics. FQ = $$\frac{G \times M_{e} \times m_{2}}{R^{2}}$$ ……. A force of 20 N acts upon a body whose weight is 9.8 N. What is the mass of the body and how much is its acceleration? Question 2. Answer: Here, initial velocity, u = 0 F = mg …(2) Why does a body orbiting in space possess zero weight with respect to a spaceship? Acceleration, g = – 9.8 m/s2 The acceleration produced in the motion of a body falling under the force of gravity is called acceleration due to gravity. and radius of the moon, Rm = 1740 km Why? The ‘G’ is a universal constant, i.e., its value is the same (i.e. The acceleration due to gravity is more at the poles than at the equator. Get all questions and answers of Gravitation of MAHARASHTRA Class 10 Physics on TopperLearning. Free Question Bank for NEET Physics Gravitation. 1. Initial velocity, u = 0 Molecules in air in the atmosphere are attracted by gravitational force of the earth. If it does, why does the earth not move towards the apple? or t = $$\frac{0.5}{9.8}$$ = 0.05 s. Question 11. Weight is the force of gravity acting on the body. W = $$\frac{G M m}{(2 r)^{2}}$$ Answer: Question 3. = $$20 \sqrt{2}$$ms-1, Question 6. Professionals, Teachers, Students and Kids Trivia Quizzes to test your knowledge on the subject. (d) What is the ball’s acceleration at point C? (c) At which point is the bal 1 at its maximum height? Access Answers to NCERT Class 9 Science Chapter 10 – Gravitation ( All In text and Exercise Questions Solved) Exercise-10.1 Page: 134. Suppose gravity of earth suddenly becomes zero, then which direction will the moon begin to move if no other celestial body affects it? OTP has been sent to your mobile number and is valid for one hour (i) We know v2 – u2 If the radius of the moon is 1.74 × 106 m, calculate the mass of the moon. radius of the earth, Re = 6400 km Q.18 What is the weight of a body of mass 1 kg? When M changes to new mass M’ Answer The universal law of gravitation states that every object in the universe attracts every other object with a force called the gravitational force.The force acting between two objects is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centers. Answer: Its value also varies from one celestial body to another. But we know, acceleration ∝ 1/m. When a body is thrown vertically upwards, what is its final velocity? ∴ v = u Is weight a force? Question 16. ∴ For the second stone, (i) How high will it go before it begins to fall? The mass of the Sun is 2 x 10 30 kg and that of the Earth is 6 x 10 24 kg. State the universal law of gravitation. Thus a big stone will fall with the same acceleration as a small stone. ⇒ Time of ascent = Time of descent. 
The position of required point is at a distance of 4 m from mass of 3 kg. A small value of G indicates that the force of gravitational attraction between two ordinary sized objects is a very weak force. Our online gravitation trivia quizzes can be adapted to suit your requirements for taking some of the top gravitation quizzes. Question 18. Does the apple also attract the earth? Answer: Question 9. [NCERT Exemplar] v = u + gt1 Zero. Upward motion Answer: What does a small value of G indicate? As the mass of the earth is very large as compared to that of the apple, the acceleration experienced by the earth will be so small that it will not be noticeable. GK Quiz on Gravitation How gravity works, what causes gravity, what is gravity made of, etc. Find the position of that point. 1. Question 20. Answer: Ask your doubt of gravitation and get answer from subject experts and students on TopperLearning. We know that g = $$\frac{G M}{R^{2}}$$ or M = $$\frac{g R^{2}}{G}$$ For the second stone, Find out the speed with which he threw the second stone. Here are some practice questions that you can try. For example, given the weight of, and distance between, two objects, you can calculate how large the force of gravity is between them. Here, s = 100 m, u = 0 Two bodies of masses 3 kg and 12 kg are placed at a distance 12 m. A third body of mass 0.5 kg is to be placed at such a point that the force acting on this body is zero. Answer: Take the Quiz and improve your overall Physics. Gravitation Class 9 Extra Questions Numericals. the time taken by the second stone to reach the ground is one second less than that taken by the first stone as both the stones reach the ground at the same time. Answer: Similarly, the force of attraction between the earth and the body Q is given by Gravitation is the force of attraction between any two bodies while gravity refers to attraction between any body and the earth. When hypothetically M becomes 4 M and R becomes $$\frac{R}{2}$$. (2) If the earth attracts two objects with equal force, can we say that their masses must be equal? State the universal law of gravitation. Answer: Answered by Expert ICSE IX Physics Upthrust in Fluids, Archimedes' Principle and Floatation. Ratio will not change in either case because acceleration remains the same. (b) time of fall of the body? It’s what keeps us safe on the ground, as opposed to floating up through the air. The radius of the earth at the poles is 6357 km, the radius at the equator is 6378 km. t2 = u/g …(2) , Difficult word of chapter the tale of melon city. This solution contains questions, answers, images, step by step explanations of the complete Chapter 10 titled Gravitation of Science taught in class 9. At the equator. Because the value of acceleration due to gravity (g) on the moon’s surface is nearly l/6th to that of the surface of the earth. It will remain the same on the moon, i.e., 9.8 kg. ⇒ t2 = $$\frac{98}{9.8}$$ = 10 Therefore, the packets fall slowly at equator in comparison to the poles. $$\frac{W_{m}}{W_{e}}=\frac{M_{m}}{M_{e}} \times \frac{R_{e}^{2}}{R_{m}^{2}}$$ …. New mass, M’=M + 10% of M Both stones will take the same time to reach the ground because the two stones fall from the same height. (f) What is the bal l’s acceleration at point B? Answer: (b) No. The weight will become 16 times. Answer: Why can one jump higher on the surface of the moon than on the earth? The acceleration due to gravity does not depend upon the mass of the stone or body. Height of the building, h = ? Enter OTP. 
Both C and D; [(Kg-m/Sec 2) = N] Q.19 Weight of free fall object is. Answer: [NCERT Exemplar] Gravity itself is a natural phenomenon. When r changes to 2r, the new weight is given by The force (F) of gravitational attraction on a body of mass m due to earth of mass M and radius R is given by As u = 0, h1 = $$\frac{1}{2} g t_{1}^{2}$$ We know that G = 6.67 × 10-11 Nm2 kg-2 i.e., the second stone was thrown downward with a speed of 12.1 ms-1. Answer: [NCERT Exemplar] = 0 – (0.5)2 = 2 × (-9.8) × h Isaac Newton. A comprehensive database of gravitation quizzes online, test your knowledge with gravitation quiz questions. Suppose the mass of the earth somehow increases by 10% without any change in its size. So, both the stones will reach the ground at the same time when dropped simultaneously. Now, mass of the earth, Me = 6 × 1024 kg Question 12. Question 1. Question 10. Answer: Here, g = 1.67 ms-2, R = 1.74 × 106 m and G = 6.67 × 10-2 Nm2 kg-2, Question 2. Question 11. This test will determine your knowledge of the subject. The mass of the Earth M is 5.98 x 1024 kg and the radius of the Earth R is 6.38 x 106 m. (3 marks) ∴ New weight becomes 4 times. As per the universal law of gravitation, the force of attraction between the earth and the body P is given by, Answer: These questions are quite useful to prepare the objective type questions for … Question 1. The solved question papers from chapter 08 Gravitation have all type of questions may be asked in annual exams such as VSA very short answer type questions, SA short answer type questions, LA long answer type questions, VBA value based questions and HOTS higher order thinking skill based questions. “Every body in the universe attracts every other body with a force which is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.” Let us consider two bodies A and B of masses m1 and m2 which are separated by a distance r. Then the force of gravitation (F) acting on the two bodies is given by or, v = $$\sqrt{800}$$ Question 4. In a hypothetical case, if the diameter of the earth becomes half of its present value and its mass becomes four times of its present value, then how would the weight of any object on the surface of the earth be affected? The entire NCERT textbook questions have been solved by best teachers for you. where G is a constant known as universal gravitational constant. Answer: MCQ quiz on Gravitation multiple choice questions and answers on gravitation MCQ questions quiz on gravitation objectives questions with answer test pdf. ⇒ Average density of the earth, D = $$\frac{\text { Mass }}{\text { Volume }}=\frac{g R^{2}}{G \times V e}$$ New weight becomes 1.1 times. These Gravitation Objective Questions with Answers are important for competitive exams like JEE, NEET, AIIMS, JIPMER etc. If the radius of the earth becomes twice of its original radius, then Suppose the mass of the moon is Mm and its radius is Rm. A stone is dropped from the top of a 40 m high tower. A: Yes B: No. The radius of the earth is 6400 km and g = 10 m/s². Answer: h = 1.27 cm Question 4. Answer: According to Newton’s third law of … This force depends on the product of the masses of the planet and sun and the distance between them. At what place on the earth’s surface is the weight of a body minimum? The accepted value of G is 6.67x 1CT-11 Nm 2 kg-2. Gravitation Class 11 MCQs Questions with Answers. Answer. The earth attracts an apple. 
v = u + gt2 QUESTION 6 (2002 EXAM) (a) Calculate the magnitude of the gravitational acceleration g at the Earth's surface, using Newton's second law and the law of universal gravitation. (d) Acceleration = 10 ms-2 Answer: The tides in the sea formed by the rising and falling of water level in the sea are due to the gravitational force of attraction which the sun and the moon exert on the water surface in the sea. So, v2 = u2 + 2gs, Let P and Q be the two bodies, Become a part of our community of millions and ask any question that you do not find in our Gravitation Q&A library. Answer: Gravitational constant is numerically equal to the force of attraction between two masses of 1 kg that are separated by a distance of 1 m. As we know s = ut + $$\frac{1}{2}$$ gt2 It is different at different places on the surface of the earth. Question 19. Acceleration due to gravity is the acceleration acquired by a body due to the earth’s gravitational pull on it. TopperLearning’s Experts and Students has answered all of Gravitation of MAHARASHTRA Class 10 Physics questions in detail. Then force acting on m3 due to m1, is equal and opposite to the force acting on m3 due to m2. A ball is thrown up with a speed of 0.5 m/s. So, acceleration = $$\frac{\text { Force }}{\text { Mass }}=\frac{20}{1}$$ = 20 m/s2, Question 3. 0 = 0.5 – 9.8t What is the source of centripetal force that a planet requires to revolve around the sun? G = F Show that the weight of an object on the moon is $$\frac{1}{6}$$ th of its weight on the earth. If you are a student of class 9 who is using NCERT Textbook to study Science, then you must come across Chapter 10 Gravitation. As the body falls back to the earth with the same velocity it was thrown vertically upwards. Answer: Answer: Mention any four phenomena that the universal law of gravitation was able to explain. Answer: Question 1. or F = G × $$\frac{m_{1} m_{2}}{r^{2}}$$ ……..(3) Answer: The ratio of their accelerations due to gravity at the surface is (R 1 /R 2) 2 (R 2 /R 1) 2 ... Click the "Score and Show Answer(s)" button for explanations. ∴ W = $$\frac{G M m}{r^{2}}$$ In order that a body of 5 kg weighs zero at … So, the earth does not fall into the sun. Question 9. Very Short Answer Type Questions. [NCERT Exemplar] Question 5. Thus, the packets will remain in air for longer time interval, when it is dropped at the equator. where Me = mass of earth and Re = radius of earth. Question 8. It occurs only when a body is in a state of free fall under the effect of gravity alone. Answer: One second later, he throws another stone. Question 10. Fill in the blanks : _____ of an object is the force of gravity acting on it.. Class 9 - Physics - Gravitation . F = $$G \frac{m M}{R^{2}}$$ …..(1) Observe the graph and answer the following questions. According to Newton’s third law of motion, action and reaction are equal and opposite. Asked by Umesh 1st December 2017 9:49 AM . (e) What is the ball’s acceleration at point A? Give three differences between acceleration due to gravity (g) and universal gravitational constant (G). Gravitational force. Identical packets are dropped from two aeroplanes—one above the equator and other above the north pole, both at height h. Assuming all conditions to be identical, will those packets take same time to reach the surface of earth? From a cliff of 49 m high, a man drops a stone. Calculate its speed after 2 s. Also find the speed with which the stone strikes the ground. Numericals Question 4. Assume that g = 10 m/s2 and that there is no air resistance. 
We = $$\frac{G M_{e} m}{R_{e}^{2}}$$ …(2) Give one example each of central force and non-central force. At the poles. $$\frac{G \times M e \times m_{1}}{R^{2}}=\frac{G \times M_{e} \times m_{2}}{R^{2}}$$ Here, if the masses mx and m2 of the two bodies are of 1 kg and the distance (r) between them is 1 m, then putting m1 = 1 kg, m2 = 1 kg and r = 1 m in the above formula, we get Calculate the value of acceleration due to gravity g using the relation between g and G. Combining (1) and (2), we get Practice questions The gravitational force between […] Thus, equation (3) becomes, Then weight becomes Wn = mG $$\frac{4 M}{\left(\frac{R}{2}\right)^{2}}$$ = (16 m G) $$\frac{M}{R^{2}}$$ = 16 × W0 Both the bodies fall with the same acceleration towards the surface of the earth. (f) Acceleration = 10 ms-2 It is different at different places. How does the weight of an object vary with respect to mass and radius of the earth? Question 7. Who formulated the universal law of gravitation? At what place on the earth’s surface is the weight of a body maximum? This is because acceleration due to gravity is independent of the mass of the falling body. Weightlessness is a state when an object does not weigh anything. Question 15. The constant ‘G’ is universal because it is independent of the nature and sizes of bodies, the space where they are kept and at the time at which the force is considered. Answer: Fp = $$\frac{G \times M_{e} \times m_{1}}{R^{2}}$$ …..(1) MCQs on CBSE Class 9 Science Chapter- Gravitation are provided here with answers. Yes. i.e., weight will increase by 10%. Class 9 | Science | Chapter 10 |Gravitation| NCERT Solutions. 1. Zero. Let the mass, m3 = 0.5 kg be placed at a distance of ‘x’ m from m1, as shown in figure. Which stone would reach the ground first and why? The value of ‘g’ at the equator of the earth is lesser than that at poles. ∴ F = ma Explain why all of them do not fall into the earth just like an apple falling from a tree. Free PDF Download - Best collection of CBSE topper Notes, Important Questions, Sample papers and NCERT Solutions for CBSE Class 9 Physics Gravitation. ans F ∝ $$\frac{1}{r^{2}}$$ ..(2) (ii) How long will it take to reach that height? mass of the moon, Mm = 7.4 × 1022 kg 3. Dividing equation (1) by (2), we get A: mass of the object × gravitational acceleration B: Zero Free PDF download of Important Questions with solutions for CBSE Class 9 Science Chapter 10 - Gravitation prepared by expert Science teachers from latest edition of CBSE(NCERT) books. Balbharati solutions for Science and Technology Part 1 10th Standard SSC Maharashtra State Board chapter 1 (Gravitation) include all questions with solution and detail explanation. It is applied to all the body present in universe It is constant of proportionality in Newton’s universal law of gravitation. It also implies that a body orbiting in space has zero weight with respect to a spaceship. Weight of the same body on the earth’s surface will be Weight of a body ∝ $$\frac{M}{R^{2}}$$ Q1. Prove that if the earth attracts two bodies placed at same distance from the centre of the earth with the same force, then their masses are equal. [NCERT Exemplar] Define acceleration due to gravity. i.e., weight will be reduced to one-fourth of the original. or h = $$\frac{0.25}{19.6}$$ = 0.0127 m Answer: The way the world works is exciting. 3. (ii) As v = u2 + 2 gs v = 0 + gt2 Do fluids possess weight? (G = 6.67 × 1011 Nm2kg-2) = $$\frac{G M m}{4 r^{2}}=\frac{W}{4}$$ Draw areal velocity versus time graph for mars. 
When does an object show weightlessness? Answer. When a body is dropped from a height, what is its initial velocity? Answer: Answer: Given, Mass of the Sun, M = 2 x 10 30 kg Calculate the height of the building. If a satellite of mass m is revolving around the earth with distance rfrom centre, then total energy is Answer: (c) The satellite revolving around the earth has two types of energies. F ∝ m,1m2 and F ∝ $$\frac{1}{r^{2}}$$ Question 2. Thus, the gravitational constant G is numerically equal to the force of gravitation which exists between two bodies of unit masses kept at a unit distance from each other. The two bricks like a single body, fall with the same speed to reach the ground at the same time in case of free fall. Do you agree with his hypothesis or not? Previous: Contents: Next: Answer: From (1) and (2), we get t1 = t2 The time taken for a body is less if the acceleration due to gravity is more when the initial velocities and the distance travelled are the same. Answer: Zero. Question 3. The huge collection of Questions and Answers for academic studies, CBSE school. It is denoted by ‘g’. For the first stone or D = $$\frac{g R^{2}}{G \frac{4}{3} \pi R^{3}}=\frac{3 g}{4 \pi G R}$$, Question 6. How much do you know about gravitation? Answer: It is the measure of inertia of the body. Question 21. A stone dropped from the roof of a building takes 4s to reach the ground. 6.7 × 10-11 Nm2 kg-2) everywhere in the universe. Suppose the radius of the earth becomes twice of its present radius without any change in its mass, what will happen to your weight? Answer: Using physics, you can calculate the gravitational force that is exerted on one object by another object. 1. [NCERT Exemplar] Question 2. The astronaut and the spaceship are orbiting with same acceleration hence, the body does not exert any force on the sides of the spaceship. Weight, W = mg, m = $$\frac{W}{g}$$, m = $$\frac{9.8}{9.8}$$ = 1 kg Physics: Multiple Choice Questions on Gravitation. Why is ‘G’ called the universal gravitational constant? Question 3. When body is at a distance V from centre of the earth then g = $$\frac{G M}{r^{2}}$$. State the universal law of gravitation. Therefore, the body appears to be floating weightlessly. And Radius of the earth, Re = 6.4 × 106 m. Question 7. ⇒ m1 = m2, Question 3. Take g = 9.8 m/s2. (Where Ve is the volume of the earth) The entire NCERT textbook questions have been solved by best teachers for you. NCERT Solutions for Class 6, 7, 8, 9, 10, 11 and 12, Extra Questions for Class 9 Science Chapter 10 Gravitation. t1 = $$\frac{u}{g}$$ …(1) Two objects of masses ml and m2 having the same size are dropped simultaneously from heights h1 and h2, respectively. If a body of mass m is placed on the surface of moon, then weight of the body on the moon is How does the force of attraction between the two bodies depend upon their masses and distance between them? Question 8. Question 5. Question 4. Final speed, v = 0 Is the time taken by a body to rise to the highest point equal to the time taken to fall from the same height? If it does, why does the earth not move towards the apple? This hypothesis is not correct. F ∝ m1 × m2…(1) Is the acceleration due to gravity acting on a freely falling body directly proportional to the (a) mass of the body? i.e., First stone would take 3.16 s to reach the ground. 
## Test styles
Contents
Contrary to popular belief, Lorem Ipsum is not simply random text. It has roots in a piece of classical Latin literature from 45 BC, making it over 2000 years old. Richard McClintock, a Latin professor at Hampden-Sydney College in Virginia, looked up one of the more obscure Latin words, consectetur, from a Lorem Ipsum passage, and going through the cites of the word in classical literature, discovered the undoubtable source. Lorem Ipsum comes from sections 1.10.32 and 1.10.33 of “de Finibus Bonorum et Malorum” (The Extremes of Good and Evil) by Cicero, written in 45 BC. This book is a treatise on the theory of ethics, very popular during the Renaissance. The first line of Lorem Ipsum, “Lorem ipsum dolor sit amet..”, comes from a line in section 1.10.32. $c = \pm\sqrt{a^2 + b^2}$ and $$f(x)=\int_{-\infty}^{\infty} \hat{f}(\xi) e^{2 \pi i \xi x} d \xi$$
Note
Logical consistency
A given set of propositions is consistent if and only if it is possible for each member of the set to be true at the same time. It is inconsistent just if this is not the case.
Saul Kripke
There is no mathematical substitute for philosophy.
Example
1. Anyone who takes astrology seriously is crazy.
2. Jane is my sister and no sister of mine has a crazy person for a husband.
3. Richard is Jane’s husband and he checks his horoscope every morning.
4. Anyone who checks their horoscope takes astrology seriously.
$$c = \pm\sqrt{a^2 + b^2}$$
$f(x)=\int_{-\infty}^{\infty} \hat{f}(\xi) e^{2 \pi i \xi x} d \xi$
const letters = ['A', 'B', 'C'];

function quadratic(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      console.log(array[i]);
    }
  }
}
# An Introduction to Randomized Sketching (draft)
In this post, I will make an introductory presentation about sketching, a statistical technique to handle large datasets. First, I will give the intuitive idea behind sketching, which is also the most important and valuable part of this post. Then, I will describe the various sketching algorithms in detail. Finally, I will give a non-exhaustive list of theoretical results concerning the soundness of sketching. Since this post is an introduction, I will build my presentation around Ordinary Least Square, which is the first topic in every machine learning course and arguably the most popular technique.
• Intuitive Ideas
• Ordinary Least Square
• Projection
• Subsampling
• Partial Sketching
• Sketching Matrices
• Random Projection
• Random Sampling
• Theoretical Guarantees
• Algorithmic View
• Statistical View
• Other Results
## Intuitive Ideas
#### Ordinary Least Square
Let us consider the classic Ordinary Least Square problem. Given a dataset $(X, Y) \in \mathbb{R}^{n \times d} \times \mathbb{R}^n$, where $n \gg d$, we want to calculate a $\hat{\beta} \in \mathbb{R}^d$, optimal in the following sense: [ \hat{\beta} \in \arg\min_\beta \lVert Y - X\beta \rVert_2. ]
For simplicity, we assume $\text{rank} (X)=d$, which is true for most applications. Otherwise, we can reduce the dimension so that this assumption could be valid. When $X$ is of full rank, $\hat{\beta}$ is unique and has the following closed-form formulation: [ \hat{\beta} = X^\dagger Y = (X^T X)^{-1} X^TY, ] where $X^\dagger := (X^T X)^{-1} X^T$ is the Moore-Penrose inverse.
The computation involves four steps:
1. $X^T X$, which is a matrix-matrix multiplication, yielding $O(nd^2)$ time complexity.
2. The inverse of $X^T X$, which yields $O(d^3)$ time complexity.
3. $X^T Y$, which is a matrix-vector multiplication, yielding $O(nd)$ time complexity.
4. The multiplication of $(X^T X)^{-1}$ and $X^T Y$, which is again a matrix-vector multiplication, yielding $O(d^2)$ time complexity.
Therefore, the dominant part, or the bottleneck, is the 1st step—$X^T X$. For many modern applications, $n$ may be on the order of $10^6 – 10^9$ and $d$ may be on the order of $10^3 – 10^4$ [1]. Even though the time complexity is only linear in $n$, the gigantic $n$ makes the computation challenging.
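To make the cost concrete, here is a minimal numpy sketch of the plain normal-equation solve. The sizes, random data, and variable names are made-up illustration choices, not anything prescribed by the text.

```python
import numpy as np

# Illustrative only: synthetic data with n >> d.
rng = np.random.default_rng(0)
n, d = 50_000, 50
X = rng.standard_normal((n, d))
Y = rng.standard_normal(n)

gram = X.T @ X                             # step 1: O(n d^2), the bottleneck
beta_hat = np.linalg.solve(gram, X.T @ Y)  # steps 2-4: O(d^3) + O(n d) + O(d^2)
```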
For this sake, statisticians have been thinking of reducing the sample size into a more manageable $r$, where $d \lesssim r \ll n$. They came up with two solutions, with one being the subsampling method and the other being the projection method. In the following, I will talk about the projection method first, since it is less straightforward.
### Projection
Here, we consider each column of $X$ and $Y$ as a point in $\mathbb{R}^n$, and we have $d+1$ points. Then, reducing sample size is equivalent to projecting these points into $\mathbb{R}^r$. Of course, we need this projection to preserve the relationship between $X$ and $Y$, so that we could recover the desired $\beta$ later.
The existence of such projection is suggested by the Johnson–Lindenstrauss lemma [2]:
For a set $A$ of $m$ fixed points in $\mathbb{R}^n$ and $\varepsilon>0$, there exists a linear map $S: \mathbb{R}^n \rightarrow \mathbb{R}^r$, where $r=O(\tfrac{\log m}{\varepsilon^2})$, such that for any point $u \in A$, we have [ (1-\varepsilon) \lVert u \rVert_2 \le \lVert S(u) \rVert_2 \le (1+\varepsilon) \lVert u \rVert_2. ]
This lemma claims that there exists a linear projection preserving the Euclidean norm of a bunch of points. The existence of such a projection is a strong hint that we could learn a desired $\beta$ in the lower-dimensional space.
Note: Michael W. Mahoney’s description of the Johnson–Lindenstrauss lemma is inaccurate. The probability clause therein is redundant.
The proof of this lemma adopts a probability approach. We construct a random linear projection and then prove that the probability of this random projection satisfying the above property is nonzero. Hence, we prove the existence. Since it is a non-constructing proof, we do not know what this desired projection is. We only know that if we try randomly (and not so hard), we can luckily finish with that desired projection.
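As a rough numerical illustration of this behaviour (not part of the original proof), one can project a handful of fixed points with a rescaled Gaussian matrix and look at the norm distortion; all sizes below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m = 10_000, 500, 20
points = rng.standard_normal((n, m))          # m fixed points in R^n (as columns)
S = rng.standard_normal((r, n)) / np.sqrt(r)  # rescaled Gaussian projection

orig = np.linalg.norm(points, axis=0)
proj = np.linalg.norm(S @ points, axis=0)
print(np.max(np.abs(proj / orig - 1)))        # typically a few percent at these sizes
```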
Assuming now we have got this desired projection $S$, we want to solve the following problem: [ \hat{\beta}_ {S} := \arg\min_{\beta} \lVert SY - SX\beta \rVert_2, ] which has this closed-form expression: [ \hat{\beta}_ {S} = (SX)^\dagger SY ] Notice how we just replaced $X$ and $Y$ with $SX$ and $SY$ respectively. Therefore, the time complexity is $O(rd^2) \ll O(nd^2)$ once we have completed the projection. The tricky part is that the projection process is a matrix-matrix multiplication, which itself yields $O(nrd) \succsim O(nd^2)$ time complexity.
Warning: $(SX)^\dagger = ((SX)^T SX)^{-1} (SX)^T$ only if $\text{rank}(SX) = d$.
Luckily, not all projections are born equal. Some projections, such as the Hadamard transform [3], are fast to compute.
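Putting the pieces together, a naive "sketch-and-solve" with a dense Gaussian projection might look as follows. This is only a sketch on assumed synthetic data; in practice the S @ X step would be replaced by one of the fast transforms discussed below.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, r = 20_000, 20, 500
X = rng.standard_normal((n, d))
Y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

S = rng.standard_normal((r, n)) / np.sqrt(r)
SX, SY = S @ X, S @ Y                            # O(n r d): the dominant cost here
beta_S, *_ = np.linalg.lstsq(SX, SY, rcond=None)
```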
### Subsampling
Probably the most straightforward way of reducing sample size is subsampling. Statisticians use sample distribution to approximate the population distribution, which is their way to understand the world. Ordinary Least Square is used to learn the relationship between $X$ and $Y$, which amounts to learning the joint distribution of $(X, Y)$. Thus, subsampling is a natural candidate for sketching.
Talking about sampling, one can immediately think about uniform sampling, which amounts to sampling each data point with equal probability. This method can work, but we can do better, because not every data point is born equal. We can adopt importance sampling by giving each data point a weight proportional to its importance.
This technique is very similar to novel writing. In a novel, you don’t write about the life of an average Joe; instead, you write about typical characters. If a character is a miser, he must be the most miserly person in the world.
Then, how to decide the importance of a data point? Statisticians use the leverage score. The leverage scores of a matrix $X$ are defined as the diagonal entries of the hat matrix $H=X (X^T X)^{-1} X^T$. One way to remember this formula is to be aware of the fact that [ \widehat{Y} = HY, ] where $\widehat{Y}$ is the predicted output. The hat matrix is like putting a “hat” on $Y$. Also, this fact leads us to realize that the hat matrix is the (Jacobian) derivative of $\widehat{Y}$ with respect to $Y$: [ \frac{\partial \widehat{Y}}{\partial Y} = H. ] In particular, the $i$-th leverage score is the derivative of $\widehat{Y}_i$ with respect to $Y_i$: [ \frac{\partial \widehat{Y}_i}{\partial Y_i}. ] The larger this derivative is, the more sensitive $\widehat{Y}_i$ is to $Y_i$, and the more important the $i$-th data point will be.
With Singular Value Decomposition, we can represent the leverage score with a more concise form. It is just the $\ell_2$-norm of each row of the left singular matrix. Given the singular value decomposition $X=U \Sigma V^T$, we have [ H = U U^T, ] and thus [ H_{ii} = \lVert U_{i \cdot} \rVert_2^2 ]
It is easy to prove that the sum of all leverage scores, or equivalently, the trace of the hat matrix is equal to $d$. [ \text{tr}(H) = \text{tr}(U U^T) = \text{tr}(U^T U) = \text{tr}(I_d) = d. ] Incidentally, the trace of the hat matrix is also the squared Frobenius norm of $U$, i.e. $\lVert U \rVert_F^2$.
Once we get the leverage scores, we can either use them directly as the sampling probability (i.e., $\pi_i = \tfrac{H_{ii}}{d}$) or make a tradeoff between them and a prior (oftentimes the uniform distribution): [ \pi_i = (1-\theta) \tfrac{H_{ii}}{d} + \theta q_i, ] where $q$ is the prior and $\theta \in [0,1]$ is an arbitrary constant.
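A small sketch of this recipe, assuming the exact (not approximated) leverage scores are affordable; theta and the sizes are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 5_000, 20
X = rng.standard_normal((n, d))

U, _, _ = np.linalg.svd(X, full_matrices=False)      # thin SVD, U is n x d
leverage = np.sum(U**2, axis=1)                      # H_ii; these sum to d
theta = 0.1
pi = (1 - theta) * leverage / d + theta * (1.0 / n)  # mixture with a uniform prior
```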
In terms of the time complexity, the bottleneck is (like the projection method) to compute the leverage scores. According to Raskutti and Mahoney, the naive way is to perform a QR decomposition to compute an orthogonal basis for $X$ [4], which yields $O(nd^2)$ time complexity and provides no benefit compared to the original problem. Luckily, one can do the above QR decomposition in an approximate way and hence accelerate the computation [5].
Subsampling sometimes also involves a rescaling step, whose meaning will be evident in the next subsection.
### Partial Sketching
The above two methods are full sketching. In this subsection, I will first summarize them and then proceed to the so-called partial sketching.
Similar to the projection method, which is characterized by the projection matrix $S$, the subsampling method can also be characterized by a subsampling matrix. The only difference is that the projection matrix is a dense matrix, whereas the subsampling matrix is a sparse matrix with each row having a single nonzero entry.
Let us denote $\widetilde{X}:=SX$ and $\widetilde{Y}:=SY$, the full sketching methods can be universally formulated as [ \hat{\beta}_ \text{F} := \arg\min_{\beta} \lVert \widetilde{Y} - \widetilde{X}\beta \rVert_2. ] When $\widetilde{X}$ is of full rank, we have furthermore [ \hat{\beta}_ \text{F} = (\widetilde{X}^T \widetilde{X})^{-1} \widetilde{X}^T \widetilde{Y}. ] Noticing that $\widetilde{X}^T \widetilde{Y}$ only brings us marginal benefit compared to $X^T Y$ (from $O(n)$ to $O(r)$), we can apply the sketching only on the first part. This idea gives us the partial sketching: [ \hat{\beta}_ \text{P} = (\widetilde{X}^T \widetilde{X})^{-1} X^T Y. ]
Then, $\widetilde{X}^T \widetilde{X}$ can be regarded as an estimator of $X^T X$. In particular, if $S$ is a subsampling matrix with rescaling, $\widetilde{X}^T \widetilde{X}$ is an unbiased estimator of $X^T X$: $\mathbb{E}[\widetilde{X}^T \widetilde{X} | X^T X ] = X^T X$ It will be proved in the next section after we mathematically formulate the subsampling matrix and the rescaling matrix.
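A minimal sketch of the partial-sketching estimator, reusing a sketched matrix SX produced by any of the methods above; the helper name is hypothetical.

```python
import numpy as np

def partial_sketch_ols(X, Y, SX):
    """beta_P = (SX^T SX)^{-1} X^T Y: only the Gram matrix is sketched."""
    gram_tilde = SX.T @ SX            # O(r d^2) instead of O(n d^2)
    return np.linalg.solve(gram_tilde, X.T @ Y)
```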
## Sketching Matrices
### Random Projection
There are so far 4 choices of projection matrices (the first two have high time complexity and thus are not practical for sketching):
• Gaussian random variables
• Rademacher random variables
• Randomized Hadamard transform
• Clarkson-Woodruff sketching
Gaussian random variables. In this case, the entries of the projection matrix $S$ are i.i.d. Gaussian random variables. That is, $S_{ij} \sim \mathcal{N}(0, \sigma^2)$.
Rademacher random variables. The Rademacher distribution is very similar to the Bernoulli distribution, with two exceptions though. The Bernoulli distribution has the support on $\{0, 1\}$, whereas the Rademacher distribution has the support on $\{-1, +1\}$. The Bernoulli distribution has the parameter $\theta$ characterizing the probability of $1$, whereas the Rademacher distribution puts equal probability on $+1$ and $-1$. In this case, the projection matrix consists of entries of independent Rademacher random variables.
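For concreteness, here are the two dense sub-Gaussian sketching matrices as tiny numpy helpers; the 1/sqrt(r) scaling is one common convention (so that the sketch preserves norms in expectation), not something fixed by the text.

```python
import numpy as np

def gaussian_sketch(r, n, rng):
    return rng.standard_normal((r, n)) / np.sqrt(r)

def rademacher_sketch(r, n, rng):
    return rng.choice([-1.0, 1.0], size=(r, n)) / np.sqrt(r)
```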
Randomized Hadamard transform.[3] Before adding the randomization in, let us first talk about the (deterministic and linear) Hadamard transform, aka Walsh–Hadamard transform. The Hadamard transform $H_m$ is a $2^m \times 2^m$ matrix, dubbed the Hadamard matrix, which defines a linear transform in a $2^m$-dimensional space. The Hadamard matrix is symmetric and consists purely of $\pm 1$, and it has a special structure in that adjacent rows or columns confront each other equally with the same signs and opposite signs. For instance, in [ H_1 = \begin{pmatrix} +1 & +1 \\
+1 & -1 \end{pmatrix}, ] the 2nd row confronts the 1st row first with $+1$ against $+1$ and then $-1$ against $+1$. The Hadamard matrices can be explicitly constructed with the recursive formula: [ H_{m+1} := H_1 \otimes H_m = \begin{pmatrix} H_m & H_m \\
H_m & -H_m \end{pmatrix}. ] A naive application of Hadamard transform involves multiplication by an $n \times n$ matrix, yielding $O(n^2)$ time complexity. Luckily, like the Fourier transform, it has a fast version: a divide-and-conquer strategy yields only $O(n \log n)$ time complexity. That is all for the deterministic Hadamard transform. As for the randomized Hadamard transform, it is just the subsampling + Hadamard transform. Denote it as $S_\text{Had}$, which is defined as $S_\text{Had}:=S_\text{unif}HD$, where $D \in \mathbb{R}^{n \times n}$ is a diagonal matrix with random equiprobable $\pm 1$ entries, $H \in \mathbb{R}^{n \times n}$ is the Hadamard matrix, and $S_\text{unif} \in \mathbb{R}^{r \times n}$ is a uniform sampling matrix. Therefore, the randomized Hadamard transform is simply 1) randomly flipping the sample sign, 2) project on the basis defined by the Hadamard transform, and then 3) uniform subsampling. The time complexity is $O(dn \log n)$.
Note: Daniel Ahfock et al. got the time complexity of the randomized Hadamard transform wrong in their Table 1 [6].
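One possible implementation of the randomized Hadamard sketch is given below; it assumes $n$ is a power of two (otherwise one would zero-pad), and the scaling constants follow one common convention. The function names are made up for this sketch.

```python
import numpy as np

def fwht(a):
    """Unnormalised fast Walsh-Hadamard transform along axis 0 (length must be 2^m)."""
    h, n = 1, a.shape[0]
    while h < n:
        for i in range(0, n, 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h] = x + y
            a[i + h:i + 2 * h] = x - y
        h *= 2
    return a

def srht_sketch(X, r, rng):
    n = X.shape[0]                          # assumed to be a power of two here
    D = rng.choice([-1.0, 1.0], size=n)     # random sign flips
    HX = fwht(D[:, None] * X) / np.sqrt(n)  # O(d n log n)
    rows = rng.choice(n, size=r, replace=False)
    return np.sqrt(n / r) * HX[rows]        # uniform subsampling with rescaling
```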
Clarkson-Woodruff sketching.[7] The sketching matrix associated with Clarkson-Woodruff sketching is one having a single randomly chosen nonzero entry in each column, and that nonzero entry takes the value on $\pm 1$ with equal probability. In other words, Clarkson-Woodruff sketching is simply taking all $n$ rows of $(X, Y)$ and randomly smashing them together until it remains only $r$ rows. The time complexity is $O(nd)$, the lowest among all four.
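A minimal CountSketch-style implementation of this idea is below: every input row is added, with a random sign, to one of the r output rows, which costs O(nd). Again, names and conventions are illustrative.

```python
import numpy as np

def clarkson_woodruff(A, r, rng):
    n = A.shape[0]
    buckets = rng.integers(0, r, size=n)         # which output row each input row hits
    signs = rng.choice([-1.0, 1.0], size=n)      # the random +/- 1 sign
    SA = np.zeros((r, A.shape[1]))
    np.add.at(SA, buckets, signs[:, None] * A)   # accumulate, O(n d)
    return SA
```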
### Random Sampling
Let us suppose that we have already got the leverage scores and are prepared to do the subsampling. The sampling probability could be $\{\pi_i\}_ {i=1,\ldots,n}$, with $\pi_i = \tfrac{H_{ii}}{d}$, or it could be a convex combination between the leverage score and a prior. Denote $W \in \mathbb{R}^{r \times n}$ as the weighting matrix, with each row independently following a multinomial distribution parameterized by $\{\pi_i\}_ {i=1,\ldots,n}$. Then, we have $\Pr(W_{ij}=1)=\pi_j$. The rescaling matrix $R \in \mathbb{R}^{n \times n}$ is a diagonal matrix with $R_{ii} = \tfrac{1}{\sqrt{r \pi_i}}$. The final sketching matrix is the product of the weighting matrix and the rescaling matrix $S=WR$.
Now let us prove that $\mathbb{E}[\widetilde{X}^T \widetilde{X} | X^T X ] = X^T X$ as we claimed earlier. \begin{align} \mathbb{E} [\widetilde{X}^T \widetilde{X} | X^T X] &= \mathbb{E}[X^T R^T W^T W R X| X^T X ] \\ &= X^T R^T \mathbb{E}[ W^T W | X^T X ] RX \\ &= X^T R^T \mathbb{E}[ W^T W] RX. \end{align} Denote $A:=\mathbb{E}[W^T W]$, then we have $A_{ii} = \mathbb{E} \sum_{k=1}^r W_{ki}^2 = \sum_{j=1}^r \pi_i = r \pi_i,$ and $A_{ij} = 0$ for $i \neq j$ because $W_{ki}$ and $W_{kj}$ cannot simultaneously be $1$ (each row of $W$ has a single $1$). Thus $A$ is a diagonal matrix, just like $R$, so we have $R^T A R = I_{n \times n}$. The proof is completed.
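A quick Monte Carlo sanity check of this unbiasedness claim (not a proof, and with arbitrary sizes) could look like this:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, r, reps = 2_000, 5, 200, 500
X = rng.standard_normal((n, d))

U, _, _ = np.linalg.svd(X, full_matrices=False)
pi = np.sum(U**2, axis=1) / d                       # exact leverage-score probabilities

acc = np.zeros((d, d))
for _ in range(reps):
    idx = rng.choice(n, size=r, replace=True, p=pi)
    Xs = X[idx] / np.sqrt(r * pi[idx])[:, None]     # subsampling + rescaling
    acc += Xs.T @ Xs
print(np.max(np.abs(acc / reps - X.T @ X)))         # shrinks as reps grows
```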
## Theoretical Guarantees
There are two families of ways to evaluate the sketching effect. One is the algorithmic view, and the other is the statistical view. I will present both views and use Raskutti and Mahoney’s paper [1] as an example. After that, I will briefly mention some other results.
### Algorithmic View
The algorithmic view sees the data $(X, Y)$, and it wants to compute the optimal $\hat{\beta}$ so that $\lVert Y - X\hat{\beta} \rVert_2$ is the smallest possible. In terms of sketching, it wants $\hat{\beta}_S$ to achieve the same effect as $\hat{\beta}$. Mathematically, $\frac{\lVert Y - X\hat{\beta}_S \rVert_2^2}{\lVert Y - X\hat{\beta}\rVert_2^2}$ is expected to be close to 1.
The above quantity depends on both $X$ and $Y$, which means that we are at the mercy of what nature offers us. Naturally, we want to quantify the worst-case performance. For this sake, we define the worst-case efficiency of a sketching matrix $S$: [ C_\text{WC}(S) := \sup_Y \frac{\lVert Y - X\hat{\beta}_S\rVert_2^2}{\lVert Y - X\hat{\beta} \rVert_2^2}. ] We take the supremum only over $Y$ and not over $X$ because we consider that the design $X$ is for us to choose and that only the measurement $Y$ is uncontrollable. In the cases where both $X$ and $Y$ are uncontrollable, we may want to take the supremum over both $X$ and $Y$.
### Statistical View
The statistical view does not see $(X, Y)$ as what it is. Instead, it considers $(X, Y)$ to be generated by some model $Y = X\beta + \varepsilon$, where $\varepsilon$ is a noise vector with good properties. The unobservable $\beta$ is important in that $\mathbb{E}[Y|X] = X\beta$. In other words, it does not want $X\hat{\beta}$ to be close to $Y$; it wants that quantity to be close to $X\beta$. In terms of sketching, it wants $\frac{\lVert X(\beta - \hat{\beta}_S) \rVert_2^2}{\lVert X(\beta - \hat{\beta}) \rVert_2^2}$ to be close to 1.
The above quantity depends on $X$ and $Y$ (because of $\hat{\beta}$ and $\hat{\beta}_ S$), where $X$ is oftentimes considered as the fixed design. This time, we do not take a supremum over $Y$, since now we have a model governing the (conditional) distribution of $Y$. Instead, we take expectations. For this sake, we define the prediction efficiency of a sketching matrix $S$: [ C_\text{PE}(S) := \frac{\mathbb{E}_\varepsilon \lVert X(\beta - \hat{\beta}_S)\rVert_2^2}{\mathbb{E} _\varepsilon \lVert X(\beta - \hat{\beta}) \rVert_2^2} ]
In this subsection, I will selectively present some results in Raskutti and Mahoney’s paper [1]. I chose this paper because their results are unambiguously stated and easy to digest. In the following, I will summarize their results in a table, which includes theoretical guarantees associated with sampling with rescaling $S_\text{R}$, sampling without rescaling $S_\text{NR}$, sub-Gaussian projection $S_\text{SGP}$, and randomized Hadamard projection $S_\text{Had}$.
Note: A sub-Gaussian random variable is one with its tail equal to or thinner than the one of a Gaussian random variable. Obviously, Gaussian random variables and random variables with finite support are all sub-Gaussian. Therefore, the Gaussian sketching matrix and the Rademacher sketching matrix mentioned earlier are both sub-Gaussian projections.
There are two elements important to interpret the table. The first is that these theoretical guarantees hold only with probability. The reason is that no matter $C_\text{WC}(S)$ or $C_\text{PE}(S)$, they are all random variables because of $S$. Especially for $C_\text{PE}(S)$, it takes expectation, but only on $\varepsilon$ not on $S$. The other element is that these guarantees all come with a cost: they all require a minimum number of rows $r$ for the sketching matrix.
| $S$ | $C_\text{WC}$ | $C_\text{PE}$ | $r$ |
| --- | --- | --- | --- |
| $S_\text{R}$ | $1+O(\tfrac{d}{r})$ | $O(\tfrac{n}{r})$ | $\Omega(d \log d)$ |
| $S_\text{NR}$ | $1+O(\tfrac{d}{r})$ | $O(\tfrac{k}{r})$ | $\Omega(d \log d)$ |
| $S_\text{SGP}$ | $1+O(\tfrac{d}{r})$ | $O(\tfrac{n}{r})$ | $\Omega(\log n)$ |
| $S_\text{Had}$ | $1+O(\tfrac{d}{r} \log nd)$ | $O(\tfrac{n}{r} \log nd)$ | $\Omega(d \log n(\log d + \log\log n))$ |
The performance guarantee of $S_\text{NR}$ needs a special condition. That is, the distribution of the leverage scores is severely skewed. A large portion is concentrated on $k$ data points. Hence, there is the $k$ appearing in the numerator of $C_\text{PE}(S_\text{NR})$.
Note: I did not present the residual efficiency in their paper, for I think their proof (Lemma 1 therein) is flawed. I wrote a letter to Prof. Mahoney but did not receive any reply. In general, I cannot guarantee everything I cited are absolutely correct, but if I have a doubt, I will not convey it to my readers.
The above results give the upper bounds of the prediction efficiency. One can ask whether they are tight. Pilanci and Wainwright proved that they are indeed tight [8].
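To get a feel for these rates, one can run a small simulation of the prediction efficiency for a Gaussian sketch under a toy linear model; everything below (model, noise level, sizes) is an assumed illustration, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, r = 5_000, 10, 200
X = rng.standard_normal((n, d))
beta = rng.standard_normal(d)

num = den = 0.0
for _ in range(200):
    Y = X @ beta + rng.standard_normal(n)
    beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    S = rng.standard_normal((r, n)) / np.sqrt(r)
    beta_S, *_ = np.linalg.lstsq(S @ X, S @ Y, rcond=None)
    num += np.sum((X @ (beta - beta_S))**2)
    den += np.sum((X @ (beta - beta_hat))**2)
print(num / den)   # roughly on the order of n / r, consistent with the table
```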
### Other Results
The above results focus exclusively on the prediction power of $\hat\beta_S$. One can also question the closeness of $\hat\beta_S$ to $\hat\beta$. Denote the total sum of squares as $\text{TSS}:=Y^T Y$, the residual sum of squares of the original OLS problem as $\text{RSS} := \lVert Y-X\hat{\beta} \rVert_2^2$, and the model sum of squares of the original OLS problem as $\text{MSS} := \lVert X\hat{\beta} \rVert_2^2$. Under certain conditions, we have [9] [ \lVert \hat\beta_F - \hat\beta \rVert_2^2 = O \left(\frac{\text{RSS}}{\sigma_\min^2(X)} \right) ] for the full sketching estimator, where $\sigma_\min(X)$ is the smallest singular value of $X$.
Also, under certain conditions, we have [6] [ \lVert \hat\beta_P - \hat\beta \rVert_2^2 = O \left(\frac{\text{MSS}}{\sigma_\min^2(X)} \right) ] for the partial sketching estimator.
## References
1. Extensions of Lipschitz mappings into a Hilbert space
2. Faster least squares approximation 2
3. Matrix Computations
4. Fast approximation of matrix coherence and statistical leverage
5. Low rank approximation and regression in input sparsity time
6. Iterative Hessian Sketch: Fast and Accurate Solution Approximation for Constrained Least-Squares
7. Improved approximation algorithms for large matrices via random projections
Written on November 12, 2021 |
Suppose that work hours in New Zombie are 300 in year 1 and productivity is $14 per hour worked. What is New Zombie’s real GDP? $_____ |
ziChange {DAMisc} R Documentation
## Maximal First Differences for Zero-Inflated Models
### Description
Calculates the change in predicted counts or optionally the predicted probability of being in the zero-count group, for maximal discrete changes in all covariates holding all other variables constant at typical values.
### Usage
ziChange(obj, data, typical.dat = NULL, type = "count")
### Arguments
obj: A model object of class zeroinfl.
data: Data frame used to fit object.
typical.dat: Data frame with a single row containing values at which to hold variables constant when calculating first differences. These values will be passed to predict, so factors must take on a single value, but have all possible levels as their levels attribute.
type: Character string of either ‘count’ (to obtain changes in predicted counts) or ‘zero’ (to obtain changes in the predicted probability of membership in the zero group).
### Details
The function calculates the changes in predicted counts, or optionally the predicted probability of being in the zero group, for maximal discrete changes in the covariates. This function works with polynomials specified with the poly function. It also works with multiplicative interactions of the covariates by virtue of the fact that it holds all other variables at typical values. By default, typical values are the median for quantitative variables and the mode for factors. The way the function works with factors is a bit different. The function identifies the two most different levels of the factor and calculates the change in predictions for a change from the level with the smallest prediction to the level with the largest prediction.
### Value
A list with the following elements:
diffs: A matrix of calculated first differences
minmax: A matrix of values that were used to calculate the predicted changes
### Author(s)
Dave Armstrong
[Package DAMisc version 1.7.2 Index] |
## CSDN Blog
Audit records include information about the operation that was audited, the user who performed the operation, and the date and time of the operation. Depending on the type of auditing you choose, you can write audit records to data dictionary tables, called the database audit trail, or in operating system files, called the operating system audit trail.
If you choose to write audit records to the database audit trail, Oracle Database writes the audit records to the SYS.AUD$ table for default and standard auditing, and to the SYS.FGA_LOG$ table for fine-grained auditing. Both of these tables reside in the SYSTEM tablespace and are owned by the SYS schema. You can check the contents of these tables by querying the following data dictionary views:
• DBA_AUDIT_TRAIL for the SYS.AUD$ contents
• DBA_FGA_AUDIT_TRAIL for the SYS.FGA_LOG$ contents
• DBA_COMMON_AUDIT_TRAIL for both SYS.AUD$ and SYS.FGA_LOG$ contents
"Finding Information About Audited Activities" describes more data dictionary views that you can use to view to contents of the SYS.AUD$ and SYS.FGA_LOG$ tables.
If you choose to write audit records to an operating system file, you can write them to either a text file or to an XML file. You can check the contents of an audit XML file by querying the V\$XML_AUDIT_TRAIL data dictionary view. |
# nLab stable (infinity,1)-category
# Contents
## Idea
A stable (∞,1)-category $C$ is a pointed $\left(\infty ,1\right)$-category with finite limits which is stable under forming loop space objects:
$C$ has a zero object and the corresponding loop (∞,1)-functor
$\Omega : C \to C$
is an equivalence with inverse the suspension object functor
$C \leftarrow C : \Sigma \,.$
This means that the objects of a stable $\left(\infty ,1\right)$-category are stable in the sense of stable homotopy theory: they behave as if they were spectra.
Indeed, every $\left(\infty ,1\right)$-category with finite limits has a free stabilization to a stable $\left(\infty ,1\right)$-category $\mathrm{Stab}\left(C\right)$, and the objects of $\mathrm{Stab}\left(C\right)$ are the spectrum objects of $C$.
The homotopy category of an (∞,1)-category of a stable $\infty$-category is a triangulated category.
Notice that the definition of triangulated categories is involved and their behaviour is bad, whereas the definition of stable $\infty$-category is simple and natural. The complexity and bad behavior of triangulated categories comes from them being the decategorification of a structure that is natural in higher category theory.
## Definition
As with ordinary categories, an object in a (infinity,1)-category is a zero object if it is both initial object and a terminal object. An $\left(\infty ,1\right)$-category with a zero object is a pointed $\left(\infty ,1\right)$-category.
###### Definition
In a pointed (∞,1)-category $C$ with zero object $0$, the kernel of a morphism $g:Y\to Z$ is the (∞,1)-pullback
$\begin{array}{ccc} \mathrm{ker}(g) & \to & Y \\ \downarrow & & \downarrow^{g} \\ 0 & \to & Z \end{array}$
(so that $\mathrm{ker}\left(g\right)\to Y\stackrel{g}{\to }Z$ is a fibration sequence)
and the cokernel of $f:X\to Y$ is the (∞,1)-pushout
$\begin{array}{ccc} X & \stackrel{f}{\to} & Y \\ \downarrow & & \downarrow \\ 0 & \to & \mathrm{coker}(f) \end{array} \,.$
An arbitrary commuting square in $C$ of the form
$\begin{array}{ccc} X & \stackrel{f}{\to} & Y \\ \downarrow & & \downarrow^{g} \\ 0 & \to & Z \end{array}$
is a triangle in $C$. A pullback triangle is called an exact triangle and a pushout triangle a coexact triangle. By the universal property of pullback and pushout, to any triangle are associated canonical morphisms $X\to \mathrm{ker}\left(g\right)$ and $\mathrm{coker}\left(f\right)\to Z$. In particular, for every exact triangle there is a canonical morphism $\mathrm{coker}\left(\mathrm{ker}\left(g\right)\to Y\right)\to Z$ and for every coexact triangle there is a canonical morphism $X\to \mathrm{ker}\left(Y\to \mathrm{coker}\left(f\right)\right)$.
###### Definition
A stable $\left(\infty ,1\right)$-category is a pointed $\left(\infty ,1\right)$-category such that
• for every morphism in $C$ kernel and cokernel exist;
• every exact triangle is coexact and vice versa, i.e. every morphism is the cokernel of its kernel and the kernel of its cokernel.
###### Remark
The notion of stable $\infty$-category should not be confused with that of a stably monoidal $\infty$-category. A connection between the terms is that the stable (∞,1)-category of spectra is the prototypical stable $\infty$-category, while connective spectra (not all spectra) can be identified with stably groupal $\infty$-groupoids, aka infinite loop spaces or ${E}_{\infty }$-spaces.
## Constructions in stable $\infty$-categories
### Looping and delooping
The relevance of the axioms of a stable $\left(\infty ,1\right)$-category is that they imply that not only does every object $X$ have a loop space object $\Omega X$ defined by the exact triangle
$\begin{array}{ccc} \Omega X & \to & 0 \\ \downarrow & & \downarrow \\ 0 & \to & X \end{array}$
but also that, conversely, every object $X$ has a suspension object $\Sigma X$ defined by the coexact triangle
$\begin{array}{ccc} X & \to & 0 \\ \downarrow & & \downarrow \\ 0 & \to & \Sigma X \end{array} \,.$
These arrange into $\left(\infty ,1\right)$-endofunctors
$\Omega : C \to C$
$\Sigma : C \to C$
which are autoequivalences of $C$ that are inverses of each other.
### Stabilization
For every pointed $\left(\infty ,1\right)$-category with finite limits which is not yet stable there is its free stabilization (see there for more details):
a stable $\left(\infty ,1\right)$-category $\mathrm{Sp}\left(C\right)$ that can be defined as the limit in the (∞,1)-category of (∞,1)-categories
$\mathrm{Sp}(C) := \mathrm{holim}( \cdots \to C \stackrel{\Omega}{\to} C \stackrel{\Omega}{\to} C ) \,.$
For $C=$ Top the $\left(\infty ,1\right)$-category of topological spaces, $\mathrm{Sp}\left(\mathrm{Top}\right)$ is the familiar stable (∞,1)-category of spectra (whose homotopy category is the stable homotopy category) used in stable homotopy theory (which gives stable $\left(\infty ,1\right)$-categories their name).
Moreover, every derived category of an abelian category is the triangulated homotopy category of a stable $\left(\infty ,1\right)$-category.
Hence stable homotopy theory and homological algebra are both special cases of the theory of stable $\left(\infty ,1\right)$-categories.
## Properties
### The homotopy category: triangulated categories
The homotopy category $\mathrm{Ho}\left(C\right)$ of a stable $\left(\infty ,1\right)$-category $C$ – its decategorification to an ordinary category – is less well behaved than the original stable $\left(\infty ,1\right)$-category, but remembers a shadow of some of its structure: this shadow is the structure of a triangulated category on $\mathrm{Ho}\left(C\right)$
• the translation functor $T:\mathrm{Ho}\left(C\right)\to \mathrm{Ho}\left(C\right)$ comes from the suspension functor $\Sigma :C\to C$;
• the distinguished triangles in $\mathrm{Ho}\left(C\right)$ are pieces of the fibration sequences in $C$.
For details see StabCat, section 3.
Alternately, one can first pass to a stable derivator, and thence to a triangulated category. Any suitably complete and cocomplete $\left(\infty ,1\right)$-category has an underlying derivator, and the underlying derivator of a stable $\left(\infty ,1\right)$-category is always stable—while the underlying category of any stable derivator is triangulated. But the derivator retains more useful information about the original stable $\left(\infty ,1\right)$-category than does its triangulated homotopy category.
### Models
In direct analogy to how a general (∞,1)-category may be presented by a model category, a stable $\left(\infty ,1\right)$-category may be presented by a
or a
There are further variants and special cases of these models. The following three concepts are equivalent to each other and special cases of the above models, or equivalent in characteristic 0.
A triangulated category linear over a field $k$ can canonically be refined to
• a stable $\left(\infty ,1\right)$-category.
If $k$ has characteristic 0, then all these three concepts become equivalent.
### Stabilization and localization of presheaf $\left(\infty ,1\right)$-categories
###### Proposition
Let $C$ and $D$ be (∞,1)-categories and $\mathrm{Func}\left(C,D\right)$ the (∞,1)-category of (∞,1)-functors between them.
Its stabilization is equivalent to the functor category into the stabilization of $D$:
$\mathrm{Stab}(\mathrm{Func}(C,D)) \simeq \mathrm{Func}(C,\mathrm{Stab}(D)) \,.$
In particular, in the case $D=$ ∞Grpd for which $\mathrm{Func}\left(C,D\right)=\mathrm{Func}\left(C,\infty \mathrm{Grpd}\right)=:{\mathrm{PSh}}_{\left(\infty ,1\right)}\left(C\right)$ is the (∞,1)-category of (∞,1)-presheaves we have with $\mathrm{Stab}\left(\infty \mathrm{Grpd}\right)=\mathrm{Sp}$ (the stable (∞,1)-category of spectra) that
$\mathrm{Stab}(\mathrm{PSh}_{(\infty,1)}(C)) \simeq \mathrm{Func}(C,\mathrm{Sp}) \,.$
This is StabCat, example 10.13 .
###### Proposition
(“stable Giraud theorem”)
Every stable and presentable (∞,1)-category $C$ is equivalent to an (accessible) left-exact localization of a stabilized $\left(\infty ,1\right)$-presheaf $\left(\infty ,1\right)$-category: there exists a small $\left(\infty ,1\right)$-category $E$ and an adjunction
$C \stackrel{\overset{\mathrm{lex}}{\leftarrow}}{\hookrightarrow} \mathrm{Stab}(\mathrm{PSh}(E)) \simeq \mathrm{Func}(E,\mathrm{Sp}) \,.$
This is StabCat prop 15.9.
This is the stable analog of the statement that every (∞,1)-category of (∞,1)-sheaves is a left exact localization of an $\left(\infty ,1\right)$-category of presheaves.
See at sheaf of spectra and model structure on presheaves of spectra for more.
In terms of (stable) model categories, something like an analog of this statement is (Schwede-Shipley, theorem 3.3.3).
## References
The abstract (∞,1)-category theoretical notion was introduced and studied in
This appears in a more comprehensive context of higher algebra as section 1 of
A brief introduction is in
• Yonatan Harpaz, Introduction to stable $\infty$-categories, October 2013 (pdf)
A diagram of the interrelation of all the models for stable $\left(\infty ,1\right)$-categories with a useful list of literature for each are these seminar notes:
For discussion of the stable model category models of stable $\infty$-categories see
# On the power of macros: a dynamic lazy let
Recently I have been working on making moldable-emacs easier to extend. One of the challenges was to define common variables in a single place. For example, I wanted to make this code
(:given (:fn (let ((a (run-something))
                   (b (run-something-else)))
               ...))
 :then (:fn (let ((a (run-something))
                  (b (run-something-else)))
              ...)))
into this one
(:let ((a (run-something))
       (b (run-something-else)))
 :given (:fn ...)
 :then (:fn ...))
The second piece of code saves a bit of copy-paste when I write molds. I also need a and b to get calculated lazily for the :given clause: if there is something like (and nil a b), I want to skip calculating the bindings (because doing so may be useless).
Since I am creating a little Domain Specific Language for defining molds, your Lisp-senses should scream: macros!
Let's start easy.
If you want to write a macro that wraps things in a let, it is simple.
(defmacro with-my-let (&rest body)
  `(let ((a (+ 1 2)))
     ,@body))
(with-my-let (+ a 1))
4
This is good and easy because we know a in advance. In my case I realize the bindings of the let only at run-time. We then need as input the list of bindings.
(defmacro with-my-let (let-bindings &rest body)
  `(let (,@let-bindings)
     ,@body))
(with-my-let ((a (+ 1 2))) (+ a 1))
4
So far so good! And what if I get let-bindings as a variable?
(defmacro with-my-let (let-bindings &rest body)
  `(let (,@let-bindings)
     ,@body))
(let ((let-bindings '((a (+ 1 2)))))
(with-my-let let-bindings (+ a 1)))
This breaks because the with-my-let macro expands to the following.
(let let-bindings
(+ a 1))
This happens because macros don't evaluate their arguments. But even if they did, let-bindings obtains its value only at run time! This is the first challenge: get the bindings at run time.
We need to do this in two steps:
1. "pause" the generation of the code until run time
2. at run time inject the value in the code
Here is how it looks.
(defmacro with-my-let (let-bindings &rest body)
  `(funcall
    (lambda (bindings body)
      (eval `(let* ,bindings ;; here ,bindings = ((a (+ 1 2)))
               ,@body)))
    ,let-bindings ;; here ,let-bindings = let-bindings-var
    ',body))
(let ((let-bindings-var '((a (+ 1 2)))))
(with-my-let let-bindings-var (+ a 1)))
4
The trick is a function that takes the let-bindings variable we pass. This function is somewhat like a macro: it produces a sexp itself (the `(let* ...) bit)! But it evaluates it as well (the eval bit).
Well, let me show you how it expands:
(let ((let-bindings-var '((a (+ 1 2)))))
(funcall
(lambda
(bindings body)
(eval
`(let* ,bindings ,@body)))
let-bindings-var
'((+ a 1))))
I must confess: it took me some time to fully understand what I did when I wrote it!
Now let's get to the fun bit: what if we want to have lazy bindings? By lazy I mean bindings getting a value only at the latest possible moment. This means that if we don't use a value, we don't invest any time in producing it!
Emacs is so cool that it already has a way to do that: thunk.el. This library provides thunk-let*, which does exactly what we need: it makes all bindings lazy!
So the macro will change only slightly to become amazing!
(require 'thunk)

(defmacro with-my-let (let-bindings &rest body)
  `(funcall
    (lambda (bindings body)
      (eval `(thunk-let* ,bindings ;; here ,bindings = ((a (+ 1 2)))
               ,@body)
            t))
    ,let-bindings ;; here ,let-bindings = let-bindings-var
    ',body))
(let ((let-bindings-var '((a (+ 1 2))
(slow-poke (sleep-for 20)))))
(with-my-let let-bindings-var (+ a 1)))
4
If you try this code, you will see that you will skip slow-poke's long sleep time! All we needed to do was to substitute our let* with thunk-let* AND make sure that sexp is evaluated in a lexical context. You can do that by giving eval an extra argument.
How amazing is this macro?! Well if it is not, let me know because I would still like to improve it, if possible.
And keep in mind that your body must be inline! For example, this cannot work:
(defun f (x)
(+ a x))
(let ((let-bindings-var '((a (+ 1 2))
(slow-poke (sleep-for 20)))))
(with-my-let let-bindings-var (f 1)))
So if you have something like that, you have to pass the binding or inject the function code in the body.
Now that I gave you that caveat, we are done!
Thanks for sticking around so far, and hopefully you will find inspiration to write your own useful (lazy?!) macros!
Happy macro-ing! |
# Homework Help: Cauchy Schwarz proof with alternative dot product definition
1. Aug 7, 2013
### dustbin
1. The problem statement, all variables and given/known data
Does the Cauchy Schwarz inequality hold if we define the dot product of two vectors $A,B \in V_n$ by $\sum_{i=1}^n |a_ib_i|$? If so, prove it.
2. Relevant equations
The Cauchy-Schwarz inequality: $(A\cdot B)^2 \leq (A\cdot A)(B\cdot B)$. Equality holds iff one of the vectors is a scalar multiple of the other.
3. The attempt at a solution
The result is trivial if either vector is the zero vector. Assume $A,B\neq O$.
Let $A',B'\in V_n$ s.t. $a'_i = |a_i|$ and $b'_i = |b_i|$ for each $i=1,2,...,n$. Let $\theta$ be the angle between $A'$ and $B'$. Notice that $A\cdot B = A'\cdot B'$, $||A|| = ||A'||$, $||B|| = ||B'||$, and $A\cdot B > 0$. Then $$A\cdot B = ||A||\,||B||\,\cos\theta \implies A\cdot B = ||A||\,||B||\,|\cos\theta| \leq ||A||\,||B||$$ since $0\leq |\cos\theta|\leq 1$. Then $(A\cdot B)^2 \leq ||A||^2||B||^2 = (A\cdot A)(B\cdot B)$. This proves the inequality.
We now show that equality holds iff $B = kA$ for some $k\in\mathbb{R}$.
($\Longrightarrow$) We prove this direction by the contrapositive. If $B\neq k A$ for any $k\in\mathbb{R}$, then $B' \neq |k| A'$. Hence $\theta \neq \alpha \pi$ for any $\alpha \in \mathbb{Z}$. Thus $0 < |\cos\theta| < 1$ and therefore $A\cdot B < ||A||\,||B||$. In other words, equality does not hold.
($\Longleftarrow$) If $A = kB$ for some $k\in\mathbb{R}$, then $B' = |k| A'$. Hence $\theta = \alpha \pi$ for some $\alpha \in \mathbb{Z}$. Therefore $\cos\theta = \pm 1$. Thus equality holds.
Does this look okay?
Last edited: Aug 7, 2013
2. Aug 7, 2013
### verty
I may have missed something but I think you redefine $A \cdot B$ then try to use the fact that $A \cdot B = ||A||\,||B||\,\cos\theta$, but this is negative if the angle between them is obtuse.
3. Aug 7, 2013
### dustbin
But don't $A',B'$ still satisfy $A'\cdot B' = ||A'||\,||B'||\,\cos\theta$, where $\theta$ is the angle between $A', B'$? As well, $A\cdot B = A'\cdot B'$, $||A|| = ||A'||$, and $||B|| = ||B'||$.
4. Aug 7, 2013
### verty
Oh, you should call it $\theta'$ then, it is the angle between $A'$ and $B'$, not between $A$ and $B$.
5. Aug 7, 2013
### dustbin
Okay. I thought it was clear. Thanks for pointing that out!
Does the proof look okay, then?
6. Aug 7, 2013
### verty
Well, you say the inequality is proved, but the original inequality has a square on the left. Something is amiss.
7. Aug 7, 2013
### dustbin
Edited. What do you think?
8. Aug 7, 2013
### verty
I think you're done. You were asked to prove the inequality and I see no gaps. My work here is done :).
# Hard derivative
1. Dec 12, 2013
### dawozel
1. The problem statement, all variables and given/known data
$\psi_2 = A(2\alpha x^2 - 1)e^{-\alpha x^2/2}$
First, calculate $d\psi_2/dx$, using A for A, x for x, and a for α.
Second, calculate $d^2\psi_2/dx^2$.
3. The attempt at a solution
so I got the first derivative correct, it was
A((4*a*x)*exp((-a*x^2)/2) +(2*a*x^2 -1)*(-a*x*exp((-a*x^2)/2)))
but I can't seem to calculate the second derivative correctly. I'm getting
A((4*a)*(exp((-ax^2)/2)) + (4*a*x)*(-a*x*exp((-ax^2)/2)) +(4*a*x)*(-a*x*exp((-ax^2)/2)) +(2*a*x^2 -1)*(a^2*x^2*exp((-ax^2)/2)))
but this is incorrect; am I missing something?
2. Dec 12, 2013
### Staff: Mentor
This is just about impossible to read. I could take a guess at what you're trying to say, but I shouldn't have to. Take a look at this, especially #2: https://www.physicsforums.com/showthread.php?t=617567.
3. Dec 12, 2013
### dawozel
My bad, this is the second derivative:
$A((4a)(exp(( - ax^2 ) / 2)) + (4ax) * ( - ax * exp(( - ax^2) /2)) + (4ax) * ( - a xexp(( - ax^2 ) / 2)) + (2ax - 1) (a^2x^2 *exp(( - ax^2 ) / 2)))$
4. Dec 12, 2013
### Staff: Mentor
That's a lot better.
Is this ψ2?
Here's what you have, cleaned up a little more, using more LaTeX and fewer parentheses.
$$A\big((4a)(e^{-(a/2)x^2}) + (4ax)(-ax\,e^{-(a/2)x^2}) + (4ax)(-ax\,e^{-(a/2)x^2}) + (2ax - 1)(a^2x^2\,e^{-(a/2)x^2})\big)$$
This ought to be at least close to what you have.
Last edited: Dec 12, 2013
5. Dec 12, 2013
### Hepth
Take a look at your last term, and consider the Product Rule. You missed something.
6. Dec 12, 2013
### scurty
This is the first derivative, correct?
$A[4axe^{-ax^2/2}+(2ax^2-1)(-axe^{-ax^2/2})]$
If so, in your computation of the second derivative, you need to perform a product rule within a product rule.
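In symbols: the second term of the first derivative is itself a product of two $x$-dependent factors, so differentiating it again needs its own product rule,
$$\frac{d}{dx}\Big[(2\alpha x^2-1)\big(-\alpha x\,e^{-\alpha x^2/2}\big)\Big] = (4\alpha x)\big(-\alpha x\,e^{-\alpha x^2/2}\big) + (2\alpha x^2-1)\,\frac{d}{dx}\big(-\alpha x\,e^{-\alpha x^2/2}\big),$$
and that last derivative contributes two terms of its own.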
7. Dec 12, 2013
### dawozel
Yes, that's my first derivative.
8. Dec 12, 2013
### dawozel
So my first derivative was simplified to $\frac{d}{dx}\psi_2(x)=A\left[4\alpha x e^{-\frac{1}{2}\alpha x^2} - (2\alpha x^2 - 1)\alpha x e^{-\frac{1}{2}\alpha x^2}\right]$
and if I factor out the $\alpha x \exp(-\frac{1}{2}\alpha x^2)$
i get that
$\frac{d}{dx}\psi_2(x) = A(\alpha x \exp(-\frac{1}{2}\alpha x^2))(5-2ax^2)$
so my second derivative should be
$A(((aexp((-1/2)ax^2) + (-a^2x^2exp((-1/2)ax^2)) +(4ax))$
is this right?
Last edited: Dec 12, 2013
9. Dec 12, 2013
### scurty
Looks good to me. Can you differentiate that now?
10. Dec 12, 2013
### dawozel
so my second derivative should be
$A(((aexp((-1/2)ax^2) + (-a^2x^2exp((-1/2)ax^2)) +(4ax))$
is this right or am i still missing a product rule?
11. Dec 12, 2013
### scurty
Still missing a bit. Try doing this in two steps. Let $f(x) = \alpha x \exp(-\frac{1}{2}\alpha x^2)$ and $g(x) = 5-2ax^2$
What is the derivative of $\frac{d}{dx}\psi_2(x) = Af(x)g(x)$? (simple application of product rule)
After that, calculate $f'(x)$ and $g'(x)$ and then plug everything into the second derivative formula you got.
I know this is a lot of tedious work but I hope you'll be able to see why you were leaving out the terms you did.
12. Dec 12, 2013
### dawozel
Hmmmm, I may be forgetting to multiply $f'(x)$ by $g(x)$ and vice versa.
Is the derivative closer to
$A((((aexp((-1/2)ax^2) + (-a^2x^2exp((-1/2)ax^2))(5-2ax^2) + (4ax)( \alpha x \exp(-\frac{1}{2}\alpha x^2))$
13. Dec 13, 2013
### scurty
That looks better. You were just forgetting to put in that very last term. One minor issue is a sign error: the derivative of $5-2\alpha x^2$ is $-4\alpha x$, so you need a minus sign in that one place.
And there's a bunch of parentheses; I'll just assume that they line up correctly. Just double check them if you are submitting them for homework.
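For reference, if the factored first derivative $A\,\alpha x\, e^{-\frac{1}{2}\alpha x^2}(5-2\alpha x^2)$ above is taken as the starting point, then carrying the product rule through and collecting terms should give something like
$$\frac{d^2\psi_2}{dx^2} = A\,\alpha\, e^{-\frac{1}{2}\alpha x^2}\left(2\alpha^2 x^4 - 11\alpha x^2 + 5\right),$$
which is a convenient form for double checking the sign of each term before submitting.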
# Why is the Hawaiian Earring closed?
The Hawaiian Earring $X$ is the union of the circles $\left(x-\frac{1}{n}\right)^2+y^2=\left(\frac{1}{n}\right)^2$, $n=1,2,3,\dots$, with the subspace topology from the plane.
I want to show that $X$ is closed.
I note that $X$ is a countable union of closed sets, which is not necessarily closed. However, I've seen a theorem like this:
The union of a locally finite collection of closed sets is closed.
But again, the Hawaiian Earring is not a locally finite collection of closed sets: every neighborhood of the origin meets infinitely many of the circles.
I know there may be some problem-specific proofs. But I want to know whether there is a general theorem like the one above that shows $X$ is closed, because I feel that there is something common to problems like this but I can't figure out what it is.
EDIT: I want to know if there is a theorem of the following kind.
When a (possibly infinite) collection of closed sets satisfies condition XXX, then their union is still closed.
Can you see a reasonably easy way to show that its complement is open? – Qiaochu Yuan May 3 '11 at 3:52
@Qiaochu: I think so. Since all the shapes are circles, I think it's not hard to show that the complement is open using the relationships between the radii and the distances. But I want to know a general method for when the shapes are not circles and the situation is more general. See the EDIT. – Roun May 3 '11 at 4:31
It's true that $X$ is a union of closed sets, but it's also an intersection of closed sets: namely, it's the intersection of $X$ plus a closed ball of radius $\frac{1}{n}$ about the origin for all $n$. (This is the closed-set version of the open-set argument I was hinting at in the comments.)
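Spelled out a bit (a sketch, writing $C_k$ for the $k$-th circle and $\bar B(0,r)$ for the closed ball of radius $r$ about the origin): every point of $C_k$ is within $\frac{2}{k}$ of the origin, so $C_k \subseteq \bar B(0,\frac{1}{n})$ whenever $k \geq 2n$, and hence
$$X \cup \bar B\left(0,\tfrac{1}{n}\right) = \bar B\left(0,\tfrac{1}{n}\right) \cup \bigcup_{k < 2n} C_k,$$
a finite union of closed sets, which is closed. Moreover $X = \bigcap_{n \geq 1}\big(X \cup \bar B(0,\tfrac{1}{n})\big)$, since a point lying in every such set but not in $X$ would have to be within $\frac{1}{n}$ of the origin for every $n$, i.e. be the origin itself, which already belongs to $X$.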
The point here is that $X$ is locally a finite union of closed sets except at the origin, and one can "approach" the origin using the above intersection. I don't know if there's a particularly productive general statement to be made about this situation. |