Sensible heat

Sensible heat is heat exchanged by a body or thermodynamic system in which the exchange of heat changes the temperature of the body or system, and some macroscopic variables of the body or system, but leaves certain other macroscopic variables, such as volume or pressure, unchanged.
Usage
The term is used in contrast to a latent heat, which is the amount of heat exchanged that is hidden, meaning it occurs without change of temperature. For example, during a phase change such as the melting of ice, the temperature of the system containing the ice and the liquid is constant until all ice has melted. Latent and sensible heat are complementary terms.
The sensible heat of a thermodynamic process may be calculated as the product of the body's mass (m) with its specific heat capacity (c) and the change in temperature (ΔT):

$$Q_{\text{sensible}} = m \, c \, \Delta T.$$
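As a small numerical illustration of this formula, the following sketch computes the sensible heat needed to warm a quantity of water; the specific heat value is a standard textbook figure and the other numbers are arbitrary.

```python
# Sensible heat Q = m * c * dT for heating water (illustrative values).
m = 2.0          # mass of water, kg
c = 4186.0       # specific heat capacity of water, J/(kg*K)
dT = 30.0        # temperature rise, K

Q = m * c * dT   # sensible heat exchanged, J
print(f"Q = {Q / 1000:.1f} kJ")   # about 251 kJ
```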
Sensible heat and latent heat are not special forms of energy. Rather, they describe exchanges of heat under conditions specified in terms of their effect on a material or a thermodynamic system.
In the writings of the early scientists who provided the foundations of thermodynamics, sensible heat had a clear meaning in calorimetry. James Prescott Joule characterized it in 1847 as an energy that was indicated by the thermometer.
Both sensible and latent heats are observed in many processes while transporting energy in nature. Latent heat is associated with changes of state, measured at constant temperature, especially the phase changes of atmospheric water vapor, mostly vaporization and condensation, whereas sensible heat directly affects the temperature of the atmosphere.
In meteorology, the term 'sensible heat flux' means the conductive heat flux from the Earth's surface to the atmosphere. It is an important component of Earth's surface energy budget. Sensible heat flux is commonly measured with the eddy covariance method.
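As a rough illustration of the eddy covariance estimate mentioned above, the sketch below computes a sensible heat flux as the product of air density, specific heat, and the covariance of vertical wind and temperature fluctuations; the constants and the synthetic data are assumptions for illustration only.

```python
import numpy as np

rho_air = 1.2    # near-surface air density, kg/m^3 (assumed)
c_p = 1005.0     # specific heat of air at constant pressure, J/(kg*K)

# Synthetic high-frequency samples of vertical wind w (m/s) and temperature T (K)
rng = np.random.default_rng(0)
w = 0.1 * rng.standard_normal(10_000)
T = 293.0 + 0.5 * w + 0.02 * rng.standard_normal(10_000)

w_prime = w - w.mean()
T_prime = T - T.mean()
H = rho_air * c_p * np.mean(w_prime * T_prime)   # sensible heat flux, W/m^2
print(f"Estimated sensible heat flux: {H:.1f} W/m^2")
```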
See also
Eddy covariance flux (eddy correlation, eddy flux)
Enthalpy
Thermodynamic databases for pure substances
References
Atmospheric thermodynamics
Thermodynamics
Action at a distance

In physics, action at a distance is the concept that an object's motion can be affected by another object without being in physical contact with it; that is, the non-local interaction of objects that are separated in space. Coulomb's law and Newton's law of universal gravitation are based on action at a distance.
Historically, action at a distance was the earliest scientific model for gravity and electricity and it continues to be useful in many practical cases. In the 19th and 20th centuries, field models arose to explain these phenomena with more precision. The discovery of electrons and of special relativity led to new action-at-a-distance models providing alternatives to field theories. Under our modern understanding, the four fundamental interactions (gravity, electromagnetism, the strong interaction and the weak interaction) in all of physics are not described by action at a distance.
Categories of action
In the study of mechanics, action at a distance is one of three fundamental actions on matter that cause motion. The other two are direct impact (elastic or inelastic collisions) and actions in a continuous medium as in fluid mechanics or solid mechanics.
Historically, physical explanations for particular phenomena have moved between these three categories over time as new models were developed.
Action-at-a-distance and actions in a continuous medium may be easily distinguished when the medium dynamics are visible, like waves in water or in an elastic solid. In the case of electricity or gravity, no medium is required. In the nineteenth century, criteria like the effect of actions on intervening matter, the observation of a time delay, the apparent storage of energy, or even the possibility of a plausible mechanical model for action transmission were all accepted as evidence against action at a distance. Aether theories were alternative proposals to replace apparent action-at-a-distance in gravity and electromagnetism, in terms of continuous action inside an (invisible) medium called "aether".
Direct impact of macroscopic objects seems visually distinguishable from action at a distance. If however the objects are constructed of atoms, and the volume of those atoms is not defined and atoms interact by electric and magnetic forces, the distinction is less clear.
Roles
The concept of action at a distance plays multiple roles in physics, and it can co-exist with other models according to the needs of each physical problem.
One role is as a summary of physical phenomena, independent of any understanding of the cause of such an action. For example, astronomical tables of planetary positions can be compactly summarized using Newton's law of universal gravitation, which assumes the planets interact without contact or an intervening medium. As a summary of data, the concept does not need to be evaluated as a plausible physical model.
Action at a distance also acts as a model explaining physical phenomena even in the presence of other models. Again in the case of gravity, hypothesizing an instantaneous force between masses allows the return time of comets to be predicted as well as predicting the existence of previously unknown planets, like Neptune. These triumphs of physics predated the alternative more accurate model for gravity based on general relativity by many decades.
Introductory physics textbooks discuss central forces, like gravity, with models based on action-at-a-distance, without discussing the cause of such forces or the issues with the concept until the topics of relativity and fields are reached. For example, see The Feynman Lectures on Physics on gravity.
History
Early inquiries into motion
Action-at-a-distance as a physical concept requires identifying objects, distances, and their motion. In antiquity, ideas about the natural world were not organized in these terms. Objects in motion were modeled as living beings. Around 1600, the scientific method began to take root. René Descartes held a more fundamental view, developing ideas of matter and action independent of theology. Galileo Galilei wrote about experimental measurements of falling and rolling objects. Johannes Kepler's laws of planetary motion summarized Tycho Brahe's astronomical observations. Many experiments with electrical and magnetic materials led to new ideas about forces. These efforts set the stage for Newton's work on forces and gravity.
Newtonian gravity
In 1687 Isaac Newton published his Principia which combined his laws of motion with a new mathematical analysis able to reproduce Kepler's empirical results. His explanation was in the form of a law of universal gravitation: any two bodies are attracted by a force proportional to their mass and inversely proportional to the square of the distance between them. Thus the motions of planets were predicted by assuming forces working over great distances.
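In modern notation (the symbols here are the conventional ones rather than Newton's own), the law can be stated as

$$F = G \, \frac{m_1 m_2}{r^2},$$

where $m_1$ and $m_2$ are the two masses, $r$ is the distance between them, and $G$ is the gravitational constant.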
This mathematical expression of the force did not imply a cause. Newton considered action-at-a-distance to be an inadequate model for gravity; in his correspondence with Richard Bentley he famously described the idea that one body could act on another at a distance through a vacuum, without any mediation, as "so great an absurdity" that no one competent in philosophical thinking could fall into it.
Metaphysical scientists of the early 1700s strongly objected to the unexplained action-at-a-distance in Newton's theory. Gottfried Wilhelm Leibniz complained that the mechanism of gravity was "invisible, intangible, and not mechanical". Moreover, initial comparisons with astronomical data were not favorable. As mathematical techniques improved throughout the 1700s, the theory showed increasing success, predicting the date of the return of Halley's comet and aiding the discovery of planet Neptune in 1846. These successes and the increasingly empirical focus of science towards the 19th century led to acceptance of Newton's theory of gravity despite distaste for action-at-a-distance.
Electrical action at a distance
Electrical and magnetic phenomena also began to be explored systematically in the early 1600s. In William Gilbert's early theory of "electric effluvia," a kind of electric atmosphere, he rules out action-at-a-distance on the grounds that "no action can be performed by matter save by contact".
However, subsequent experiments, especially those by Stephen Gray, showed electrical effects over distance. Gray developed an experiment called the "electric boy", demonstrating electric transfer without direct contact.
Franz Aepinus was the first to show, in 1759, that a theory of action at a distance for electricity provides a simpler replacement for the electric effluvia theory. Despite this success, Aepinus himself considered the nature of the forces to be unexplained: he did "not approve of the doctrine which assumes the possibility of action at a distance", setting the stage for a shift to theories based on aether.
By 1785 Charles-Augustin de Coulomb showed that two electric charges at rest experience a force inversely proportional to the square of the distance between them, a result now called Coulomb's law. The striking similarity to gravity strengthened the case for action at a distance, at least as a mathematical model.
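In modern notation (again with conventional symbols assumed), Coulomb's law reads

$$F = k_e \, \frac{|q_1 q_2|}{r^2},$$

where $q_1$ and $q_2$ are the charges, $r$ is their separation, and $k_e$ is the Coulomb constant.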
As mathematical methods improved, especially through the work of Pierre-Simon Laplace, Joseph-Louis Lagrange, and Siméon Denis Poisson, more sophisticated mathematical methods began to influence the thinking of scientists. The concept of potential energy applied to small test particles led to the concept of a scalar field, a mathematical model representing the forces throughout space. While this mathematical model is not a mechanical medium, the mental picture of such a field resembles a medium.
Fields as an alternative
It was Michael Faraday who first suggested that action at a distance, even in the form of a (mathematical) potential field, was inadequate as an account of electric and magnetic forces. Faraday, an empirical experimentalist, cited three reasons in support of some medium transmitting electrical force: 1) electrostatic induction across an insulator depends on the nature of the insulator, 2) cutting a charged insulator causes opposite charges to appear on each half, and 3) electric discharge sparks are curved at an insulator. From these reasons he concluded that the particles of an insulator must be polarized, with each particle contributing to continuous action. He also experimented with magnets, demonstrating lines of force made visible by iron filings. However, in both cases his field-like model depends on particles that interact through an action-at-a-distance: his mechanical field-like model has no more fundamental physical cause than the long-range central field model.
Faraday's observations, as well as others, led James Clerk Maxwell to a breakthrough formulation in 1865, a set of equations that combined electricity and magnetism, both static and dynamic, and which included electromagnetic radiation – light. Maxwell started with elaborate mechanical models but ultimately produced a purely mathematical treatment using dynamical vector fields. The sense that these fields must be set to vibrate to propagate light set off a search of a medium of propagation; the medium was called the luminiferous aether or the aether.
In 1873 Maxwell addressed action at a distance explicitly. He reviewed Faraday's lines of force, carefully pointing out that Faraday himself did not provide a mechanical model of these lines in terms of a medium. Nevertheless, the many properties of these lines of force imply that they "must not be regarded as mere mathematical abstractions". Faraday himself viewed these lines of force as a model, a "valuable aid" to the experimentalist, a means to suggest further experiments.
In distinguishing between different kinds of action, Faraday suggested three criteria: 1) do additional material objects alter the action? 2) does the action take time? and 3) does it depend upon the receiving end? For electricity, Faraday knew that all three criteria were met for electric action, but gravity was thought to meet only the third one. After Maxwell's time a fourth criterion, the transmission of energy, was added, thought also to apply to electricity but not gravity. With the advent of new theories of gravity, the modern account would give gravity all of the criteria except dependence on additional objects.
Fields fade into spacetime
The success of Maxwell's field equations led to numerous efforts in the later decades of the 19th century to represent electrical, magnetic, and gravitational fields, primarily with mechanical models. No model emerged that explained all of the existing phenomena; in particular, there was no good model for stellar aberration, the apparent shift in the positions of stars with the Earth's relative velocity. The best models required the aether to be stationary while the Earth moved, but experimental efforts to measure the effect of Earth's motion through the aether found no effect.
In 1892 Hendrik Lorentz proposed a modified aether based on the emerging microscopic molecular model rather than the strictly macroscopic continuous theory of Maxwell. Lorentz investigated the mutual interaction of moving solitary electrons within a stationary aether. He rederived Maxwell's equations in this way but, critically, in the process he changed them to represent the wave in the coordinates of the moving electrons. He showed that the wave equations had the same form if they were transformed using a particular scaling factor,

$$\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}},$$

where $v$ is the velocity of the moving electrons and $c$ is the speed of light. Lorentz noted that if this factor were applied as a length contraction to moving matter in a stationary aether, it would eliminate any effect of motion through the aether, in agreement with experiment.
In 1899, Henri Poincaré questioned the existence of an aether, showing that the principle of relativity prohibits the absolute motion assumed by proponents of the aether model. He named the transformation used by Lorentz the Lorentz transformation, but interpreted it as a transformation between two inertial frames with relative velocity $v$. This transformation makes the electromagnetic equations look the same in every uniformly moving inertial frame. Then, in 1905, Albert Einstein demonstrated that the principle of relativity, applied to the relativity of simultaneity and the constancy of the speed of light, precisely predicts the Lorentz transformation. This theory of special relativity quickly became the modern concept of spacetime.
Thus the aether model, initially so very different from action at a distance, slowly changed to resemble simple empty space.
In 1905, Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational waves. However, until 1915 gravity stood apart as a force still described by action-at-a-distance. In that year, Einstein showed that a field theory of spacetime, general relativity, consistent with relativity can explain gravity. New effects resulting from this theory were dramatic for cosmology but minor for planetary motion and physics on Earth.
Einstein himself noted Newton's "enormous practical success".
Modern action at a distance
In the early decades of the 20th century Karl Schwarzschild, Hugo Tetrode, and Adriaan Fokker independently developed non-instantaneous models for action at a distance consistent with special relativity. In 1949 John Archibald Wheeler and Richard Feynman built on these models to develop a new field-free theory of electromagnetism.
While Maxwell's field equations are generally successful, the Lorentz model of a moving electron interacting with the field encounters mathematical difficulties: the self-energy of the moving point charge within the field is infinite. The Wheeler–Feynman absorber theory of electromagnetism avoids the self-energy issue. They interpret Abraham–Lorentz force, the apparent force resisting electron acceleration, as a real force returning from all the other existing charges in the universe.
The Wheeler–Feynman theory has inspired new thinking about the arrow of time and about the nature of quantum non-locality. The theory has implications for cosmology; it has been extended to quantum mechanics. A similar approach has been applied to develop an alternative theory of gravity consistent with general relativity. John G. Cramer has extended the Wheeler–Feynman ideas to create the transactional interpretation of quantum mechanics.
"Spooky action at a distance"
Albert Einstein wrote to Max Born about issues in quantum mechanics in 1947 and used a phrase translated as "spooky action at a distance", and in 1964, John Stewart Bell proved that quantum mechanics predicts stronger statistical correlations in the outcomes of certain far-apart measurements than any local theory possibly could. The phrase has since been used as a description for the cause of the non-classical correlations between physically separated measurements of entangled quantum states. The correlations are predicted by quantum mechanics (the Bell theorem) and verified by experiments (the Bell test). Rather than a postulate like Newton's gravitational force, this use of "action-at-a-distance" concerns observed correlations which cannot be explained with localized particle-based models. Describing these correlations as "action-at-a-distance" requires assuming that particles became entangled and then traveled to distant locations, an assumption that is not required by quantum mechanics.
Force in quantum field theory
Quantum field theory does not need action at a distance. At the most fundamental level only four forces are needed, and each is described as resulting from the exchange of specific bosons. Two are short range: the strong interaction mediated by mesons and the weak interaction mediated by the weak bosons; two are long range: electromagnetism mediated by the photon and gravity hypothesized to be mediated by the graviton. However, the entire concept of force is of secondary concern in advanced modern particle physics. Energy forms the basis of physical models, and the word action has shifted away from implying a force to a specific technical meaning, an integral over the difference between kinetic energy and potential energy.
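In that technical sense (with conventional symbols assumed here), the action of a trajectory between times $t_1$ and $t_2$ is

$$S = \int_{t_1}^{t_2} \left( T - V \right) dt,$$

the time integral of the Lagrangian, i.e. of the kinetic energy minus the potential energy.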
See also
References
External links
Force
Concepts in physics
Center-of-momentum frame

In physics, the center-of-momentum frame (COM frame), also known as zero-momentum frame, is the inertial frame in which the total momentum of the system vanishes. It is unique up to velocity, but not origin. The center of momentum of a system is not a location, but a collection of relative momenta/velocities: a reference frame. Thus "center of momentum" is short for "center-of-momentum frame".
A special case of the center-of-momentum frame is the center-of-mass frame: an inertial frame in which the center of mass (which is a single point) remains at the origin. In all center-of-momentum frames, the center of mass is at rest, but it is not necessarily at the origin of the coordinate system. In special relativity, the COM frame is necessarily unique only when the system is isolated.
Properties
General
The center of momentum frame is defined as the inertial frame in which the sum of the linear momenta of all particles is equal to 0. Let S denote the laboratory reference system and S′ denote the center-of-momentum reference frame. Using a Galilean transformation, the particle velocity in S′ is

$$\mathbf{v}' = \mathbf{v} - \mathbf{V}_c,$$

where

$$\mathbf{V}_c = \frac{\sum_i m_i \mathbf{v}_i}{\sum_i m_i}$$

is the velocity of the mass center. The total momentum in the center-of-momentum system then vanishes:

$$\sum_i m_i \mathbf{v}_i' = \sum_i m_i \left(\mathbf{v}_i - \mathbf{V}_c\right) = \sum_i m_i \mathbf{v}_i - \mathbf{V}_c \sum_i m_i = 0.$$
Also, the total energy of the system is the minimal energy as seen from all inertial reference frames.
Special relativity
In relativity, the COM frame exists for an isolated massive system. This is a consequence of Noether's theorem. In the COM frame the total energy of the system is the rest energy, and this quantity (when divided by the factor c², where c is the speed of light) gives the rest mass (invariant mass) of the system:

$$m_0 = \frac{E_0}{c^2}.$$

The invariant mass of the system is given in any inertial frame by the relativistic invariant relation

$$m_0^2 c^2 = \left(\frac{E}{c}\right)^2 - \|\mathbf{p}\|^2,$$

but for zero momentum the momentum term (p/c)² vanishes and thus the total energy coincides with the rest energy.

Systems that have nonzero energy but zero rest mass (such as photons moving in a single direction, or, equivalently, plane electromagnetic waves) do not have COM frames, because there is no frame in which they have zero net momentum. Due to the invariance of the speed of light, a massless system must travel at the speed of light in any frame, and always possesses a net momentum. Its energy is, in every reference frame, equal to the magnitude of momentum multiplied by the speed of light:

$$E = p c.$$
Two-body problem
An example of the usage of this frame is given below – in a two-body collision, not necessarily elastic (an elastic collision being one in which kinetic energy is conserved). The COM frame can be used to find the momentum of the particles much more easily than in a lab frame: the frame where the measurement or calculation is done. The situation is analyzed using Galilean transformations and conservation of momentum (for generality, rather than kinetic energies alone), for two particles of mass m1 and m2, moving at initial velocities (before collision) u1 and u2 respectively. The transformations are applied to take the velocity of the frame from the velocity of each particle, from the lab frame (unprimed quantities) to the COM frame (primed quantities):

$$\mathbf{u}_1' = \mathbf{u}_1 - \mathbf{V}, \qquad \mathbf{u}_2' = \mathbf{u}_2 - \mathbf{V},$$

where V is the velocity of the COM frame. Since V is the velocity of the COM, i.e. the time derivative of the COM location R (position of the center of mass of the system):

$$\mathbf{V} = \frac{d\mathbf{R}}{dt} = \frac{m_1 \mathbf{u}_1 + m_2 \mathbf{u}_2}{m_1 + m_2},$$

so at the origin of the COM frame, R′ = 0, this implies

$$m_1 \mathbf{u}_1' + m_2 \mathbf{u}_2' = \boldsymbol{0}.$$
The same results can be obtained by applying momentum conservation in the lab frame, where the momenta are p1 and p2:

$$\mathbf{p}_1 + \mathbf{p}_2 = m_1 \mathbf{u}_1 + m_2 \mathbf{u}_2 = (m_1 + m_2)\mathbf{V},$$

and in the COM frame, where it is asserted definitively that the total momenta of the particles, p1' and p2', vanishes:

$$\mathbf{p}_1' + \mathbf{p}_2' = m_1 \mathbf{u}_1' + m_2 \mathbf{u}_2' = \boldsymbol{0}.$$
Using the COM frame equation to solve for V returns the lab frame equation above, demonstrating any frame (including the COM frame) may be used to calculate the momenta of the particles. It has been established that the velocity of the COM frame can be removed from the calculation using the above frame, so the momenta of the particles in the COM frame can be expressed in terms of the quantities in the lab frame (i.e. the given initial values):

$$\mathbf{p}_1' = m_1 \mathbf{u}_1' = m_1\left(\mathbf{u}_1 - \mathbf{V}\right) = \frac{m_1 m_2}{m_1 + m_2}\left(\mathbf{u}_1 - \mathbf{u}_2\right), \qquad \mathbf{p}_2' = m_2 \mathbf{u}_2' = \frac{m_1 m_2}{m_1 + m_2}\left(\mathbf{u}_2 - \mathbf{u}_1\right);$$

notice the relative velocity in the lab frame of particle 1 to 2 is

$$\Delta\mathbf{u} = \mathbf{u}_1 - \mathbf{u}_2,$$

and the 2-body reduced mass is

$$\mu = \frac{m_1 m_2}{m_1 + m_2},$$

so the momenta of the particles compactly reduce to

$$\mathbf{p}_1' = \mu\,\Delta\mathbf{u}, \qquad \mathbf{p}_2' = -\mu\,\Delta\mathbf{u}.$$
This is a substantially simpler calculation of the momenta of both particles; the reduced mass and relative velocity can be calculated from the initial velocities in the lab frame and the masses, and the momentum of one particle is simply the negative of the other. The calculation can be repeated for final velocities v1 and v2 in place of the initial velocities u1 and u2, since after the collision the velocities still satisfy the above equations:

$$\mathbf{V} = \frac{d\mathbf{R}}{dt} = \frac{m_1 \mathbf{v}_1 + m_2 \mathbf{v}_2}{m_1 + m_2},$$

so at the origin of the COM frame, R′ = 0, this implies after the collision

$$m_1 \mathbf{v}_1' + m_2 \mathbf{v}_2' = \boldsymbol{0}.$$
In the lab frame, the conservation of momentum fully reads:

$$m_1 \mathbf{u}_1 + m_2 \mathbf{u}_2 = m_1 \mathbf{v}_1 + m_2 \mathbf{v}_2 = (m_1 + m_2)\mathbf{V} = M\mathbf{V}.$$

This equation does not imply that

$$m_1 \mathbf{u}_1 = m_1 \mathbf{v}_1 \quad \text{and} \quad m_2 \mathbf{u}_2 = m_2 \mathbf{v}_2;$$

instead, it simply indicates the total mass M multiplied by the velocity of the centre of mass V is the total momentum P of the system:

$$\mathbf{P} = \mathbf{p}_1 + \mathbf{p}_2 = M\mathbf{V}.$$
Similar analysis to the above obtains

$$\mathbf{p}_1' = \mu\,\Delta\mathbf{v} = -\mathbf{p}_2',$$

where the final relative velocity in the lab frame of particle 1 to 2 is

$$\Delta\mathbf{v} = \mathbf{v}_1 - \mathbf{v}_2.$$
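As a quick numerical check of the relations above, the following sketch (with values chosen arbitrarily for illustration) computes the COM velocity, the reduced mass, and the COM-frame momenta for two particles, and verifies that they are equal and opposite.

```python
import numpy as np

# Arbitrary illustrative values (1D for simplicity)
m1, m2 = 2.0, 3.0          # masses, kg
u1, u2 = 5.0, -1.0         # initial lab-frame velocities, m/s

V = (m1 * u1 + m2 * u2) / (m1 + m2)   # velocity of the COM frame
mu = m1 * m2 / (m1 + m2)              # reduced mass
du = u1 - u2                          # relative velocity of particle 1 to 2

p1_com = m1 * (u1 - V)                # momentum of particle 1 in the COM frame
p2_com = m2 * (u2 - V)                # momentum of particle 2 in the COM frame

print(f"V = {V}, mu = {mu}, mu*du = {mu * du}")
print(f"p1' = {p1_com}, p2' = {p2_com}")      # p1' = mu*du, p2' = -mu*du
assert np.isclose(p1_com, mu * du) and np.isclose(p1_com, -p2_com)
```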
See also
Laboratory frame of reference
Breit frame
References
Classical mechanics
Coordinate systems
Frames of reference
Geometric centers
Kinematics
Momentum
Virial theorem

In statistical mechanics, the virial theorem provides a general equation that relates the average over time of the total kinetic energy of a stable system of discrete particles, bound by a conservative force (where the work done is independent of path), with that of the total potential energy of the system. Mathematically, the theorem states

$$\langle T \rangle = -\frac{1}{2} \sum_{k=1}^{N} \langle \mathbf{F}_k \cdot \mathbf{r}_k \rangle,$$

where T is the total kinetic energy of the N particles, F_k represents the force on the kth particle, which is located at position r_k, and angle brackets represent the average over time of the enclosed quantity. The word virial for the right-hand side of the equation derives from vis, the Latin word for "force" or "energy", and was given its technical definition by Rudolf Clausius in 1870.
The significance of the virial theorem is that it allows the average total kinetic energy to be calculated even for very complicated systems that defy an exact solution, such as those considered in statistical mechanics; this average total kinetic energy is related to the temperature of the system by the equipartition theorem. However, the virial theorem does not depend on the notion of temperature and holds even for systems that are not in thermal equilibrium. The virial theorem has been generalized in various ways, most notably to a tensor form.
If the force between any two particles of the system results from a potential energy V(r) = α rⁿ that is proportional to some power n of the interparticle distance r, the virial theorem takes the simple form

$$2 \langle T \rangle = n \langle V_\text{TOT} \rangle.$$

Thus, twice the average total kinetic energy ⟨T⟩ equals n times the average total potential energy ⟨V_TOT⟩. Whereas V(r) represents the potential energy between two particles of distance r, V_TOT represents the total potential energy of the system, i.e., the sum of the potential energy V(r) over all pairs of particles in the system. A common example of such a system is a star held together by its own gravity, where n equals −1.
History
In 1870, Rudolf Clausius delivered the lecture "On a Mechanical Theorem Applicable to Heat" to the Association for Natural and Medical Sciences of the Lower Rhine, following a 20-year study of thermodynamics. The lecture stated that the mean vis viva of the system is equal to its virial, or that the average kinetic energy is equal to the average potential energy. The virial theorem can be obtained directly from Lagrange's identity as applied in classical gravitational dynamics, the original form of which was included in Lagrange's "Essay on the Problem of Three Bodies" published in 1772. Karl Jacobi's generalization of the identity to N bodies and to the present form of Laplace's identity closely resembles the classical virial theorem. However, the interpretations leading to the development of the equations were very different, since at the time of development, statistical dynamics had not yet unified the separate studies of thermodynamics and classical dynamics. The theorem was later utilized, popularized, generalized and further developed by James Clerk Maxwell, Lord Rayleigh, Henri Poincaré, Subrahmanyan Chandrasekhar, Enrico Fermi, Paul Ledoux, Richard Bader and Eugene Parker. Fritz Zwicky was the first to use the virial theorem to deduce the existence of unseen matter, which is now called dark matter. Richard Bader showed the charge distribution of a total system can be partitioned into its kinetic and potential energies that obey the virial theorem. As another example of its many applications, the virial theorem has been used to derive the Chandrasekhar limit for the stability of white dwarf stars.
Illustrative special case
Consider N = 2 particles with equal mass m, acted upon by mutually attractive forces. Suppose the particles are at diametrically opposite points of a circular orbit with radius r. The velocities are v1(t) and v2(t) = −v1(t), which are normal to the forces F1(t) and F2(t) = −F1(t). The respective magnitudes are fixed at v and F. The average kinetic energy of the system in an interval of time from t1 to t2 is

$$\langle T \rangle = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \sum_{k=1}^{2} \tfrac{1}{2} m \left|\mathbf{v}_k(t)\right|^2 dt = m v^2.$$

Taking the center of mass as the origin, the particles have positions r1(t) and r2(t) = −r1(t) with fixed magnitude r. The attractive forces act in opposite directions as positions, so F1(t) · r1(t) = F2(t) · r2(t) = −F r. Applying the centripetal force formula F = m v²/r results in

$$-\frac{1}{2} \sum_{k=1}^{2} \langle \mathbf{F}_k \cdot \mathbf{r}_k \rangle = -\frac{1}{2}\left(-F r - F r\right) = F r = \frac{m v^2}{r} \cdot r = m v^2 = \langle T \rangle,$$

as required. Note: if the origin is displaced, we would obtain the same result, because the dot product of the displacement with the equal and opposite forces results in a net cancellation.
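As a quick numerical check of the theorem, the sketch below integrates a circular orbit in an inverse-square potential, with arbitrary units and parameters, and verifies that 2⟨T⟩ is approximately equal to −⟨V⟩.

```python
import numpy as np

# Integrate one particle on a circular orbit in V(r) = -G*M*m/r and check 2<T> ~= -<V>.
G, M, m = 1.0, 1.0, 1.0           # illustrative units
r0 = 1.0
v0 = np.sqrt(G * M / r0)          # circular-orbit speed

pos = np.array([r0, 0.0])
vel = np.array([0.0, v0])
dt, steps = 1e-3, 100_000
T_sum = V_sum = 0.0

for _ in range(steps):
    r = np.linalg.norm(pos)
    acc = -G * M * pos / r**3     # acceleration from the inverse-square force
    # leapfrog (kick-drift-kick) step, which keeps the orbit stable
    vel += 0.5 * dt * acc
    pos += dt * vel
    r = np.linalg.norm(pos)
    vel += 0.5 * dt * (-G * M * pos / r**3)
    T_sum += 0.5 * m * vel @ vel
    V_sum += -G * M * m / r

print(f"2<T> = {2 * T_sum / steps:.4f},  -<V> = {-V_sum / steps:.4f}")
```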
Statement and derivation
Although the virial theorem depends on averaging the total kinetic and potential energies, the presentation here postpones the averaging to the last step.
For a collection of N point particles, the scalar moment of inertia I about the origin is defined by the equation

$$I = \sum_{k=1}^{N} m_k \left|\mathbf{r}_k\right|^2 = \sum_{k=1}^{N} m_k r_k^2,$$

where m_k and r_k represent the mass and position of the kth particle, and r_k = |r_k| is the position vector magnitude. The scalar G is defined by the equation

$$G = \sum_{k=1}^{N} \mathbf{p}_k \cdot \mathbf{r}_k,$$

where p_k is the momentum vector of the kth particle. Assuming that the masses are constant, G is one-half the time derivative of this moment of inertia:

$$\frac{1}{2} \frac{dI}{dt} = \frac{1}{2} \frac{d}{dt} \sum_{k=1}^{N} m_k \, \mathbf{r}_k \cdot \mathbf{r}_k = \sum_{k=1}^{N} m_k \, \frac{d\mathbf{r}_k}{dt} \cdot \mathbf{r}_k = \sum_{k=1}^{N} \mathbf{p}_k \cdot \mathbf{r}_k = G.$$

In turn, the time derivative of G can be written

$$\frac{dG}{dt} = \sum_{k=1}^{N} \mathbf{p}_k \cdot \frac{d\mathbf{r}_k}{dt} + \sum_{k=1}^{N} \frac{d\mathbf{p}_k}{dt} \cdot \mathbf{r}_k = \sum_{k=1}^{N} m_k \left|\frac{d\mathbf{r}_k}{dt}\right|^2 + \sum_{k=1}^{N} \mathbf{F}_k \cdot \mathbf{r}_k = 2T + \sum_{k=1}^{N} \mathbf{F}_k \cdot \mathbf{r}_k,$$

where m_k is the mass of the kth particle, F_k = dp_k/dt is the net force on that particle, and T is the total kinetic energy of the system according to the velocity v_k = dr_k/dt of each particle:

$$T = \frac{1}{2} \sum_{k=1}^{N} m_k v_k^2 = \frac{1}{2} \sum_{k=1}^{N} m_k \left|\frac{d\mathbf{r}_k}{dt}\right|^2.$$
Connection with the potential energy between particles
The total force F_k on particle k is the sum of all the forces from the other particles j in the system:

$$\mathbf{F}_k = \sum_{j=1}^{N} \mathbf{F}_{jk},$$

where F_jk is the force applied by particle j on particle k. Hence, the virial can be written

$$-\frac{1}{2} \sum_{k=1}^{N} \mathbf{F}_k \cdot \mathbf{r}_k = -\frac{1}{2} \sum_{k=1}^{N} \sum_{j=1}^{N} \mathbf{F}_{jk} \cdot \mathbf{r}_k.$$

Since no particle acts on itself (i.e., F_jj = 0 for 1 ≤ j ≤ N), we split the sum in terms below and above this diagonal and we add them together in pairs:

$$\sum_{k=1}^{N} \mathbf{F}_k \cdot \mathbf{r}_k = \sum_{k=1}^{N} \sum_{j<k} \left( \mathbf{F}_{jk} \cdot \mathbf{r}_k + \mathbf{F}_{kj} \cdot \mathbf{r}_j \right) = \sum_{k=1}^{N} \sum_{j<k} \mathbf{F}_{jk} \cdot \left( \mathbf{r}_k - \mathbf{r}_j \right),$$

where we have assumed that Newton's third law of motion holds, i.e., F_jk = −F_kj (equal and opposite reaction).

It often happens that the forces can be derived from a potential energy V_jk that is a function only of the distance r_jk between the point particles j and k. Since the force is the negative gradient of the potential energy, we have in this case

$$\mathbf{F}_{jk} = -\nabla_{\mathbf{r}_k} V_{jk} = -\frac{dV_{jk}}{dr_{jk}} \left( \frac{\mathbf{r}_k - \mathbf{r}_j}{r_{jk}} \right),$$

which is equal and opposite to F_kj, the force applied by particle k on particle j, as may be confirmed by explicit calculation. Hence,

$$\sum_{k=1}^{N} \mathbf{F}_k \cdot \mathbf{r}_k = \sum_{k=1}^{N} \sum_{j<k} \mathbf{F}_{jk} \cdot \left( \mathbf{r}_k - \mathbf{r}_j \right) = -\sum_{k=1}^{N} \sum_{j<k} \frac{dV_{jk}}{dr_{jk}} \frac{\left( \mathbf{r}_k - \mathbf{r}_j \right)^2}{r_{jk}} = -\sum_{k=1}^{N} \sum_{j<k} \frac{dV_{jk}}{dr_{jk}} r_{jk}.$$

Thus, we have

$$\frac{dG}{dt} = 2T + \sum_{k=1}^{N} \mathbf{F}_k \cdot \mathbf{r}_k = 2T - \sum_{k=1}^{N} \sum_{j<k} \frac{dV_{jk}}{dr_{jk}} r_{jk}.$$
Special case of power-law forces
In a common special case, the potential energy V between two particles is proportional to a power n of their distance r:

$$V_{jk} = \alpha \, r_{jk}^{\,n},$$

where the coefficient α and the exponent n are constants. In such cases, the virial is given by the equation

$$-\frac{1}{2} \sum_{k=1}^{N} \mathbf{F}_k \cdot \mathbf{r}_k = \frac{1}{2} \sum_{k=1}^{N} \sum_{j<k} \frac{dV_{jk}}{dr_{jk}} r_{jk} = \frac{1}{2} \sum_{k=1}^{N} \sum_{j<k} n V_{jk} = \frac{n}{2} V_\text{TOT},$$

where V_TOT is the total potential energy of the system:

$$V_\text{TOT} = \sum_{k=1}^{N} \sum_{j<k} V_{jk}.$$

Thus, we have

$$\frac{dG}{dt} = 2T + \sum_{k=1}^{N} \mathbf{F}_k \cdot \mathbf{r}_k = 2T - n V_\text{TOT}.$$

For gravitating systems the exponent n equals −1, giving Lagrange's identity

$$\frac{dG}{dt} = \frac{1}{2} \frac{d^2 I}{dt^2} = 2T + V_\text{TOT},$$
which was derived by Joseph-Louis Lagrange and extended by Carl Jacobi.
Time averaging
The average of this derivative over a duration of time, τ, is defined as

$$\left\langle \frac{dG}{dt} \right\rangle_\tau = \frac{1}{\tau} \int_0^\tau \frac{dG}{dt} \, dt = \frac{G(\tau) - G(0)}{\tau},$$

from which we obtain the exact equation

$$\left\langle \frac{dG}{dt} \right\rangle_\tau = 2 \langle T \rangle_\tau + \sum_{k=1}^{N} \langle \mathbf{F}_k \cdot \mathbf{r}_k \rangle_\tau.$$

The virial theorem states that if ⟨dG/dt⟩_τ = 0, then

$$2 \langle T \rangle_\tau = -\sum_{k=1}^{N} \langle \mathbf{F}_k \cdot \mathbf{r}_k \rangle_\tau.$$

There are many reasons why the average of the time derivative might vanish. One often-cited reason applies to stably-bound systems, that is to say systems that hang together forever and whose parameters are finite. In that case, velocities and coordinates of the particles of the system have upper and lower limits, so that G is bounded between two extremes, G_min and G_max, and the average goes to zero in the limit of infinite τ:

$$\lim_{\tau \to \infty} \left| \left\langle \frac{dG}{dt} \right\rangle_\tau \right| = \lim_{\tau \to \infty} \frac{\left| G(\tau) - G(0) \right|}{\tau} \le \lim_{\tau \to \infty} \frac{G_\max - G_\min}{\tau} = 0.$$

Even if the average of the time derivative of G is only approximately zero, the virial theorem holds to the same degree of approximation.

For power-law forces with an exponent n, the general equation holds:

$$\langle T \rangle_\tau = -\frac{1}{2} \sum_{k=1}^{N} \langle \mathbf{F}_k \cdot \mathbf{r}_k \rangle_\tau = \frac{n}{2} \langle V_\text{TOT} \rangle_\tau.$$

For gravitational attraction, n equals −1, and the average kinetic energy equals half of the average negative potential energy:

$$\langle T \rangle_\tau = -\frac{1}{2} \langle V_\text{TOT} \rangle_\tau.$$
This general result is useful for complex gravitating systems such as solar systems or galaxies.
A simple application of the virial theorem concerns galaxy clusters. If a region of space is unusually full of galaxies, it is safe to assume that they have been together for a long time, and the virial theorem can be applied. Doppler effect measurements give lower bounds for their relative velocities, and the virial theorem gives a lower bound for the total mass of the cluster, including any dark matter.
If the ergodic hypothesis holds for the system under consideration, the averaging need not be taken over time; an ensemble average can also be taken, with equivalent results.
In quantum mechanics
Although originally derived for classical mechanics, the virial theorem also holds for quantum mechanics, as first shown by Fock using the Ehrenfest theorem.
Evaluate the commutator of the Hamiltonian

$$H = V\bigl(\{X_i\}\bigr) + \sum_n \frac{P_n^2}{2m}$$

with the position operator X_n and the momentum operator

$$P_n = -i\hbar \frac{d}{dX_n}$$

of particle n:

$$\left[ H, X_n P_n \right] = X_n \left[ H, P_n \right] + \left[ H, X_n \right] P_n = i\hbar X_n \frac{dV}{dX_n} - i\hbar \frac{P_n^2}{m}.$$

Summing over all particles, one finds for

$$Q = \sum_n X_n P_n$$

the commutator amounts to

$$\frac{i}{\hbar} \left[ H, Q \right] = 2T - \sum_n X_n \frac{dV}{dX_n},$$

where T = Σ_n P_n²/(2m) is the kinetic energy. The left-hand side of this equation is just dQ/dt, according to the Heisenberg equation of motion. The expectation value ⟨dQ/dt⟩ of this time derivative vanishes in a stationary state, leading to the quantum virial theorem,

$$2 \langle T \rangle = \left\langle \sum_n X_n \frac{dV}{dX_n} \right\rangle.$$
Pokhozhaev's identity
In the field of quantum mechanics, there exists another form of the virial theorem, applicable to localized solutions of the stationary nonlinear Schrödinger equation or Klein–Gordon equation: Pokhozhaev's identity, also known as Derrick's theorem.
Let g(s) be continuous and real-valued, with g(0) = 0.
Denote G(s) = ∫₀ˢ g(t) dt.
Let

$$u \in L^\infty_{\mathrm{loc}}(\mathbb{R}^n), \qquad \nabla u \in L^2(\mathbb{R}^n), \qquad G(u(\cdot)) \in L^1(\mathbb{R}^n), \qquad n \in \mathbb{N},$$

be a solution to the equation

$$-\nabla^2 u = g(u),$$

in the sense of distributions.

Then u satisfies the relation

$$\frac{n-2}{2} \int_{\mathbb{R}^n} \left| \nabla u(x) \right|^2 dx = n \int_{\mathbb{R}^n} G\bigl(u(x)\bigr) \, dx.$$
In special relativity
For a single particle in special relativity, it is not the case that T = ½ p · v. Instead, it is true that T = (γ − 1) m c², where γ is the Lorentz factor

$$\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$$

and β = v/c. We have

$$\frac{1}{2} \mathbf{p} \cdot \mathbf{v} = \frac{1}{2} \gamma m v^2 = \frac{\gamma \beta^2}{2} m c^2.$$

The last expression can be simplified to

$$\frac{1}{2} \mathbf{p} \cdot \mathbf{v} = \left( \frac{\gamma + 1}{2 \gamma} \right) T.$$

Thus, under the conditions described in earlier sections (including Newton's third law of motion, F_jk = −F_kj, despite relativity), the time average for N particles with a power-law potential is

$$\frac{n}{2} \langle V_\text{TOT} \rangle_\tau = \left\langle \sum_{k=1}^{N} \left( \frac{\gamma_k + 1}{2 \gamma_k} \right) T_k \right\rangle_\tau.$$

In particular, the ratio of kinetic energy to potential energy is no longer fixed, but necessarily falls into an interval:

$$\frac{2 \langle T_\text{TOT} \rangle}{n \langle V_\text{TOT} \rangle} \in \left[ 1, 2 \right),$$

where the more relativistic systems exhibit the larger ratios.
Examples
The virial theorem has a particularly simple form for periodic motion. It can be used to perform perturbative calculation for nonlinear oscillators.
It can also be used to study motion in a central potential. If the central potential is of the form U(r) ∝ rⁿ, the virial theorem simplifies to 2⟨T⟩ = n⟨U⟩. In particular, for gravitational or electrostatic (Coulomb) attraction, n = −1 and 2⟨T⟩ = −⟨U⟩.
Driven damped harmonic oscillator
Analysis based on. For a one-dimensional oscillator with mass m, position x(t), driving force F cos(ωt), spring constant k, and damping coefficient γ, the equation of motion is

$$m \ddot{x} = -k x - \gamma \dot{x} + F \cos(\omega t).$$

When the oscillator has reached a steady state, it performs a stable oscillation x = X cos(ωt + φ), where X is the amplitude and φ is the phase angle.

Applying the virial theorem, we have ⟨m ẋ²⟩ = ⟨k x² + γ x ẋ − x F cos(ωt)⟩, which simplifies to

$$F \cos\varphi = m\left(\omega_0^2 - \omega^2\right) X,$$

where ω₀ = √(k/m) is the natural frequency of the oscillator.

To solve for the two unknowns X and φ, we need another equation. In steady state, the power lost per cycle is equal to the power gained per cycle: ⟨γ ẋ²⟩ = ⟨ẋ F cos(ωt)⟩, which simplifies to

$$F \sin\varphi = -\gamma \omega X.$$

Now we have two equations that yield the solution

$$X = \frac{F}{\sqrt{m^2\left(\omega_0^2 - \omega^2\right)^2 + \gamma^2 \omega^2}}, \qquad \tan\varphi = -\frac{\gamma \omega}{m\left(\omega_0^2 - \omega^2\right)}.$$
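As a numerical cross-check of this result, the sketch below integrates the equation of motion directly and compares the late-time amplitude with the closed-form expression for X; all parameter values are illustrative.

```python
import numpy as np

# Integrate m x'' = -k x - g x' + F cos(w t) and compare the steady-state
# amplitude with X = F / sqrt(m^2 (w0^2 - w^2)^2 + g^2 w^2).
m, k, g, F, w = 1.0, 4.0, 0.3, 1.0, 1.5      # illustrative parameters
w0 = np.sqrt(k / m)

dt, t_end = 1e-3, 200.0
x, v = 0.0, 0.0
amp = 0.0
for i in range(int(t_end / dt)):
    t = i * dt
    a = (-k * x - g * v + F * np.cos(w * t)) / m
    v += a * dt
    x += v * dt                      # semi-implicit Euler step
    if t > 0.8 * t_end:              # record amplitude after transients decay
        amp = max(amp, abs(x))

X_pred = F / np.sqrt(m**2 * (w0**2 - w**2)**2 + g**2 * w**2)
print(f"numerical amplitude ~ {amp:.4f}, predicted X = {X_pred:.4f}")
```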
Ideal gas law
Consider a container filled with an ideal gas consisting of N point masses. The force applied to the point masses is the negative of the force applied to the wall of the container, which takes the form dF = −n̂ P dA, where n̂ is the unit normal vector pointing outwards and P is the pressure. Then the virial theorem states

$$\langle T \rangle = -\frac{1}{2} \left\langle \sum_k \mathbf{F}_k \cdot \mathbf{r}_k \right\rangle = \frac{P}{2} \oint_{\partial V} \hat{\mathbf{n}} \cdot \mathbf{r} \, dA.$$

By the divergence theorem,

$$\oint_{\partial V} \hat{\mathbf{n}} \cdot \mathbf{r} \, dA = \int_V \nabla \cdot \mathbf{r} \, dV = 3 \int_V dV = 3V.$$

And since the average total kinetic energy is $\langle T \rangle = \frac{3}{2} N k_B \mathcal{T}$ (writing $\mathcal{T}$ for the temperature to avoid confusion with the kinetic energy), we have

$$P V = N k_B \mathcal{T}.$$
Dark matter
In 1933, Fritz Zwicky applied the virial theorem to estimate the mass of Coma Cluster, and discovered a discrepancy of mass of about 450, which he explained as due to "dark matter". He refined the analysis in 1937, finding a discrepancy of about 500.
Theoretical analysis
He approximated the Coma cluster as a spherical "gas" of N stars of roughly equal mass m, which gives a total kinetic energy of T = ½ N m ⟨v²⟩, where ⟨v²⟩ is the mean squared stellar velocity. The total gravitational potential energy of the cluster is of order U ∼ −G(Nm)²/R, so that the virial theorem gives ⟨T⟩ = −½⟨U⟩. Assuming the motions of the stars are statistically the same over a long enough time (ergodicity), the time averages may be replaced by averages over the stars observed at one instant.

Zwicky estimated U as the gravitational potential energy of a uniform ball of constant density, giving

$$U = -\frac{3}{5} \frac{G (N m)^2}{R}.$$

So by the virial theorem, the total mass of the cluster is

$$N m = \frac{5 \langle v^2 \rangle R}{3 G}.$$
Data
Zwicky estimated that there are N = 800 galaxies in the cluster, each having observed stellar mass m = 10⁹ M☉ (suggested by Hubble), and that the cluster has radius R = 10⁶ light-years. He also measured the radial velocities of the galaxies by Doppler shifts in galactic spectra to be about 1000 km/s. Assuming equipartition of kinetic energy, ⟨v²⟩ = 3⟨v_r²⟩.

By the virial theorem, the total mass of the cluster should then be about 3.6 × 10¹⁴ M☉. However, the observed (luminous) mass is Nm = 8 × 10¹¹ M☉, meaning the total mass is about 450 times the observed mass.
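The arithmetic behind this estimate can be reproduced directly; the sketch below uses the historical order-of-magnitude inputs quoted above.

```python
# Reproduce the order-of-magnitude virial mass estimate for the Coma cluster.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
ly = 9.461e15          # light-year, m

N, m_gal = 800, 1e9 * M_sun     # number of galaxies and stellar mass per galaxy
R = 1e6 * ly                    # cluster radius
sigma_r = 1.0e6                 # radial velocity dispersion, m/s (1000 km/s)

v_sq = 3 * sigma_r**2                     # equipartition: <v^2> = 3 <v_r^2>
M_virial = 5 * v_sq * R / (3 * G)         # virial mass of the cluster
M_observed = N * m_gal

print(f"virial mass   ~ {M_virial / M_sun:.2e} M_sun")
print(f"observed mass ~ {M_observed / M_sun:.2e} M_sun")
print(f"ratio ~ {M_virial / M_observed:.0f}")   # roughly 450
```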
Generalizations
Lord Rayleigh published a generalization of the virial theorem in 1900 which was partially reprinted in 1903. Henri Poincaré proved and applied a form of the virial theorem in 1911 to the problem of formation of the Solar System from a proto-stellar cloud (then known as cosmogony). A variational form of the virial theorem was developed in 1945 by Ledoux. A tensor form of the virial theorem was developed by Parker, Chandrasekhar and Fermi. The following generalization of the virial theorem has been established by Pollard in 1964 for the case of the inverse square law:
Otherwise, a boundary term must be added.
Inclusion of electromagnetic fields
The virial theorem can be extended to include electric and magnetic fields. The result is
where is the moment of inertia, is the momentum density of the electromagnetic field, is the kinetic energy of the "fluid", is the random "thermal" energy of the particles, and are the electric and magnetic energy content of the volume considered. Finally, is the fluid-pressure tensor expressed in the local moving coordinate system
and is the electromagnetic stress tensor,
A plasmoid is a finite configuration of magnetic fields and plasma. With the virial theorem it is easy to see that any such configuration will expand if not contained by external forces. In a finite configuration without pressure-bearing walls or magnetic coils, the surface integral will vanish. Since all the other terms on the right hand side are positive, the acceleration of the moment of inertia will also be positive. It is also easy to estimate the expansion time τ. If a total mass M is confined within a radius R, then the moment of inertia is roughly MR², and the left hand side of the virial theorem is MR²/τ². The terms on the right hand side add up to about pR³, where p is the larger of the plasma pressure or the magnetic pressure. Equating these two terms and solving for τ, we find

$$\tau \sim \frac{R}{c_s},$$

where c_s is the speed of the ion acoustic wave (or the Alfvén wave, if the magnetic pressure is higher than the plasma pressure). Thus the lifetime of a plasmoid is expected to be on the order of the acoustic (or Alfvén) transit time.
Relativistic uniform system
In case when in the physical system the pressure field, the electromagnetic and gravitational fields are taken into account, as well as the field of particles’ acceleration, the virial theorem is written in the relativistic form as follows:
where the value exceeds the kinetic energy of the particles by a factor equal to the Lorentz factor of the particles at the center of the system. Under normal conditions we can assume that , then we can see that in the virial theorem the kinetic energy is related to the potential energy not by the coefficient , but rather by the coefficient close to 0.6. The difference from the classical case arises due to considering the pressure field and the field of particles’ acceleration inside the system, while the derivative of the scalar is not equal to zero and should be considered as the material derivative.
An analysis of the integral theorem of generalized virial makes it possible to find, on the basis of field theory, a formula for the root-mean-square speed of typical particles of a system without using the notion of temperature:
where is the speed of light, is the acceleration field constant, is the mass density of particles, is the current radius.
Unlike the virial theorem for particles, for the electromagnetic field the virial theorem is written as follows:
where the energy considered as the kinetic field energy associated with four-current , and
sets the potential field energy found through the components of the electromagnetic tensor.
In astrophysics
The virial theorem is frequently applied in astrophysics, especially relating the gravitational potential energy of a system to its kinetic or thermal energy. Some common virial relations are, approximately,

$$\frac{3}{5} \frac{G M}{R} \approx \frac{3}{2} \frac{k_B T}{m_p} \approx \frac{1}{2} v^2,$$

for a mass M, radius R, velocity v, and temperature T. The constants are Newton's constant G, the Boltzmann constant k_B, and the proton mass m_p. Note that these relations are only approximate, and often the leading numerical factors (e.g. 3/5 or 1/2) are neglected entirely.
Galaxies and cosmology (virial mass and radius)
In astronomy, the mass and size of a galaxy (or general overdensity) is often defined in terms of the "virial mass" and "virial radius" respectively. Because galaxies and overdensities in continuous fluids can be highly extended (even to infinity in some models, such as an isothermal sphere), it can be hard to define specific, finite measures of their mass and size. The virial theorem, and related concepts, provide an often convenient means by which to quantify these properties.
In galaxy dynamics, the mass of a galaxy is often inferred by measuring the rotation velocity of its gas and stars, assuming circular Keplerian orbits. Using the virial theorem, the velocity dispersion σ can be used in a similar way. Taking the kinetic energy (per particle) of the system as T ∼ ½ σ², and the potential energy (per particle) as U ∼ −GM/R, we can write

$$\frac{G M}{R} \approx \sigma^2.$$

Here R is the radius at which the velocity dispersion is being measured, and M is the mass within that radius. The virial mass and radius are generally defined for the radius at which the velocity dispersion is a maximum, i.e.

$$\frac{G M_\text{vir}}{R_\text{vir}} \approx \sigma_\max^2.$$
As numerous approximations have been made, in addition to the approximate nature of these definitions, order-unity proportionality constants are often omitted (as in the above equations). These relations are thus only accurate in an order of magnitude sense, or when used self-consistently.
An alternate definition of the virial mass and radius is often used in cosmology, where it refers to the radius of a sphere, centered on a galaxy or a galaxy cluster, within which virial equilibrium holds. Since this radius is difficult to determine observationally, it is often approximated as the radius within which the average density is greater, by a specified factor, than the critical density

$$\rho_\text{crit} = \frac{3 H^2}{8 \pi G},$$

where H is the Hubble parameter and G is the gravitational constant. A common choice for the factor is 200, which corresponds roughly to the typical over-density in spherical top-hat collapse (see Virial mass), in which case the virial radius is approximated as

$$r_\text{vir} \approx r_{200},$$

the radius within which the average density is 200 times the critical density. The virial mass is then defined relative to this radius as

$$M_\text{vir} \approx M_{200} = \frac{4}{3} \pi r_{200}^3 \cdot 200\,\rho_\text{crit}.$$
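A small numerical sketch of these definitions follows; the Hubble parameter value and the halo radius are assumptions chosen only for illustration.

```python
import math

# Critical density and M200 for an assumed r200.
G = 6.674e-11                    # m^3 kg^-1 s^-2
H0 = 70 * 1e3 / 3.086e22         # Hubble parameter: 70 km/s/Mpc in 1/s (assumed)
M_sun = 1.989e30                 # kg
Mpc = 3.086e22                   # m

rho_crit = 3 * H0**2 / (8 * math.pi * G)        # critical density, kg/m^3
r200 = 1.0 * Mpc                                # assumed virial radius of a halo
M200 = (4 / 3) * math.pi * r200**3 * 200 * rho_crit

print(f"rho_crit ~ {rho_crit:.2e} kg/m^3")
print(f"M200 ~ {M200 / M_sun:.2e} M_sun for r200 = 1 Mpc")
```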
Stars
The virial theorem is applicable to the cores of stars, by establishing a relation between gravitational potential energy and thermal kinetic energy (i.e. temperature). As stars on the main sequence convert hydrogen into helium in their cores, the mean molecular weight of the core increases and it must contract to maintain enough pressure to support its own weight. This contraction decreases its potential energy and, the virial theorem states, increases its thermal energy. The core temperature increases even as energy is lost, effectively a negative specific heat. This continues beyond the main sequence, unless the core becomes degenerate, since that causes the pressure to become independent of temperature and the virial relation with n = −1 no longer holds.
See also
Virial coefficient
Virial stress
Virial mass
Chandrasekhar tensor
Chandrasekhar virial equations
Derrick's theorem
Equipartition theorem
Ehrenfest theorem
Pokhozhaev's identity
References
Further reading
External links
The Virial Theorem at MathPages
Gravitational Contraction and Star Formation, Georgia State University
Physics theorems
Dynamics (mechanics)
Solid mechanics
Concepts in physics
Equations of astronomy
Ionization

Ionization (or ionisation, specifically in Britain, Ireland, Australia and New Zealand) is the process by which an atom or a molecule acquires a negative or positive charge by gaining or losing electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an ion. Ionization can result from the loss of an electron after collisions with subatomic particles, collisions with other atoms, molecules, electrons, positrons, protons, antiprotons and ions, or through the interaction with electromagnetic radiation. Heterolytic bond cleavage and heterolytic substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the internal conversion process, in which an excited nucleus transfers its energy to one of the inner-shell electrons, causing it to be ejected.
Uses
Everyday examples of gas ionization occur within a fluorescent lamp or other electrical discharge lamps. It is also used in radiation detectors such as the Geiger-Müller counter or the ionization chamber. The ionization process is widely used in a variety of equipment in fundamental science (e.g., mass spectrometry) and in medical treatment (e.g., radiation therapy). It is also widely used for air purification, though studies have shown harmful effects of this application.
Production of ions
Negatively charged ions are produced when a free electron collides with an atom and is subsequently trapped inside the electric potential barrier, releasing any excess energy. The process is known as electron capture ionization.
Positively charged ions are produced by transferring an amount of energy to a bound electron in a collision with charged particles (e.g. ions, electrons or positrons) or with photons. The threshold amount of the required energy is known as ionization potential. The study of such collisions is of fundamental importance with regard to the few-body problem, which is one of the major unsolved problems in physics. Kinematically complete experiments, i.e. experiments in which the complete momentum vector of all collision fragments (the scattered projectile, the recoiling target-ion, and the ejected electron) are determined, have contributed to major advances in the theoretical understanding of the few-body problem in recent years.
Adiabatic ionization
Adiabatic ionization is a form of ionization in which an electron is removed from or added to an atom or molecule in its lowest energy state to form an ion in its lowest energy state.
The Townsend discharge is a good example of the creation of positive ions and free electrons due to ion impact. It is a cascade reaction involving electrons in a region with a sufficiently high electric field in a gaseous medium that can be ionized, such as air. Following an original ionization event, due for example to ionizing radiation, the positive ion drifts towards the cathode, while the free electron drifts towards the anode of the device. If the electric field is strong enough, the free electron gains sufficient energy to liberate a further electron when it next collides with another molecule. The two free electrons then travel towards the anode and gain sufficient energy from the electric field to cause impact ionization when the next collisions occur; and so on. This is effectively a chain reaction of electron generation, and is dependent on the free electrons gaining sufficient energy between collisions to sustain the avalanche.
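The exponential growth of such an avalanche is commonly characterized by the first Townsend coefficient α, the number of ionizing collisions per electron per unit length; the sketch below is only an illustration, with α, the gap, and the seed count chosen arbitrarily.

```python
import math

# Townsend avalanche multiplication: n(d) = n0 * exp(alpha * d).
alpha = 5.0e2    # first Townsend coefficient, ionizing collisions per metre (assumed)
d = 0.01         # electrode gap, m (assumed)
n0 = 10          # seed electrons from the original ionization event (assumed)

n = n0 * math.exp(alpha * d)
print(f"electrons arriving at the anode: ~{n:.3g}")
```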
Ionization efficiency is the ratio of the number of ions formed to the number of electrons or photons used.
Ionization energy of atoms
The trend in the ionization energy of atoms is often used to demonstrate the periodic behavior of atoms with respect to the atomic number, as summarized by ordering atoms in Mendeleev's table. This is a valuable tool for establishing and understanding the ordering of electrons in atomic orbitals without going into the details of wave functions or the ionization process. An example is presented in the figure to the right. The periodic abrupt decrease in ionization potential after rare gas atoms, for instance, indicates the emergence of a new shell in alkali metals. In addition, the local maxima in the ionization energy plot, moving from left to right in a row, are indicative of s, p, d, and f sub-shells.
Semi-classical description of ionization
Classical physics and the Bohr model of the atom can qualitatively explain photoionization and collision-mediated ionization. In these cases, during the ionization process, the energy of the electron exceeds the energy difference of the potential barrier it is trying to pass. The classical description, however, cannot describe tunnel ionization since the process involves the passage of electron through a classically forbidden potential barrier.
Quantum mechanical description of ionization
The interaction of atoms and molecules with sufficiently strong laser pulses or with other charged particles leads to ionization to singly or multiply charged ions. The ionization rate, i.e. the ionization probability per unit time, can be calculated using quantum mechanics. (Classical methods are also available, such as the Classical Trajectory Monte Carlo Method (CTMC), but they are not universally accepted and are often criticized by the community.) There are two classes of quantum mechanical methods: perturbative and non-perturbative ones, such as time-dependent coupled-channel or time-independent close-coupling methods, in which the wave function is expanded in a finite basis set. There are numerous options available, e.g. B-splines or Coulomb wave packets. Another non-perturbative method is to solve the corresponding Schrödinger equation fully numerically on a lattice.
In general, the analytic solutions are not available, and the approximations required for manageable numerical calculations do not provide accurate enough results. However, when the laser intensity is sufficiently high, the detailed structure of the atom or molecule can be ignored and analytic solution for the ionization rate is possible.
Tunnel ionization
Tunnel ionization is ionization due to quantum tunneling. In classical ionization, an electron must have enough energy to make it over the potential barrier, but quantum tunneling allows the electron simply to go through the potential barrier instead of going all the way over it, because of the wave nature of the electron. The probability of an electron's tunneling through the barrier drops off exponentially with the width of the potential barrier. Therefore, an electron with a higher energy can make it further up the potential barrier, leaving a much thinner barrier to tunnel through and thus a greater chance to do so. In practice, tunnel ionization is observable when the atom or molecule is interacting with strong near-infrared laser pulses. This process can be understood as one by which a bound electron is ionized through the absorption of more than one photon from the laser field. This picture is generally known as multiphoton ionization (MPI).
Keldysh modeled the MPI process as a transition of the electron from the ground state of the atom to the Volkov states. In this model the perturbation of the ground state by the laser field is neglected and the details of atomic structure in determining the ionization probability are not taken into account. The major difficulty with Keldysh's model was its neglect of the effects of the Coulomb interaction on the final state of the electron. As can be seen in the figure, the Coulomb field is not very small in magnitude compared to the potential of the laser at larger distances from the nucleus. This is in contrast to the approximation made by neglecting the potential of the laser at regions near the nucleus. Perelomov et al. included the Coulomb interaction at larger internuclear distances. Their model (which we call the PPT model) was derived for a short-range potential and includes the effect of the long-range Coulomb interaction through the first-order correction in the quasi-classical action. Larochelle et al. have compared the theoretically predicted ion versus intensity curves of rare gas atoms interacting with a Ti:Sapphire laser with experimental measurements. They have shown that the total ionization rate predicted by the PPT model fits the experimental ion yields very well for all rare gases in the intermediate regime of the Keldysh parameter.
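For orientation, the Keldysh parameter that separates the multiphoton (γ ≫ 1) and tunneling (γ ≪ 1) regimes can be estimated from the laser intensity, wavelength, and ionization potential. The sketch below uses standard atomic-unit conversions; the numerical inputs are chosen only as an example.

```python
import math

def keldysh_parameter(intensity_W_cm2, wavelength_nm, ip_eV):
    """Keldysh parameter gamma = omega * sqrt(2 * Ip) / E0, in atomic units."""
    E0 = math.sqrt(intensity_W_cm2 / 3.51e16)   # peak field in atomic units
    omega = (1239.84 / wavelength_nm) / 27.211  # photon energy in atomic units
    ip = ip_eV / 27.211                         # ionization potential in atomic units
    return omega * math.sqrt(2 * ip) / E0

# Example: Ti:Sapphire pulse (800 nm) at 1e14 W/cm^2 ionizing argon (Ip ~ 15.76 eV)
gamma = keldysh_parameter(1e14, 800, 15.76)
print(f"Keldysh parameter ~ {gamma:.2f}")   # close to 1: intermediate regime
```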
The rate of MPI of an atom with an ionization potential $E_i$ in a linearly polarized laser with frequency $\omega$ is given by
where
$\gamma = \omega \sqrt{2 E_i} / E$ is the Keldysh parameter,
,
$E$ is the peak electric field of the laser and
.
The coefficients , and are given by
The coefficient is given by
where
Quasi-static tunnel ionization
The quasi-static tunneling (QST) is the ionization whose rate can be satisfactorily predicted by the ADK model, i.e. the limit of the PPT model when $\gamma$ approaches zero. The rate of QST is given by

As compared to the PPT rate, the absence of the summation over n, which represents the different above-threshold ionization (ATI) peaks, is remarkable.
Strong field approximation for the ionization rate
The calculations of the PPT model are done in the E-gauge, meaning that the laser field is taken as electromagnetic waves. The ionization rate can also be calculated in the A-gauge, which emphasizes the particle nature of light (absorbing multiple photons during ionization). This approach was adopted by the Krainov model, based on the earlier works of Faisal and Reiss. The resulting rate is given by
where:
with $U_p$ being the ponderomotive energy,
is the minimum number of photons necessary to ionize the atom,
is the double Bessel function,
with the angle between the momentum of the electron, p, and the electric field of the laser, F,
FT is the three-dimensional Fourier transform, and
incorporates the Coulomb correction in the SFA model.
Population trapping
In calculating the rate of MPI of atoms only transitions to the continuum states are considered. Such an approximation is acceptable as long as there is no multiphoton resonance between the ground state and some excited states. However, in real situation of interaction with pulsed lasers, during the evolution of laser intensity, due to different Stark shift of the ground and excited states there is a possibility that some excited state go into multiphoton resonance with the ground state. Within the dressed atom picture, the ground state dressed by photons and the resonant state undergo an avoided crossing at the resonance intensity . The minimum distance, , at the avoided crossing is proportional to the generalized Rabi frequency, coupling the two states. According to Story et al., the probability of remaining in the ground state, , is given by
where is the time-dependent energy difference between the two dressed states. In interaction with a short pulse, if the dynamic resonance is reached in the rising or the falling part of the pulse, the population practically remains in the ground state and the effect of multiphoton resonances may be neglected. However, if the states go onto resonance at the peak of the pulse, where , then the excited state is populated. After being populated, since the ionization potential of the excited state is small, it is expected that the electron will be instantly ionized.
In 1992, de Boer and Muller showed that Xe atoms subjected to short laser pulses could survive in the highly excited states 4f, 5f, and 6f. These states were believed to have been excited by the dynamic Stark shift of the levels into multiphoton resonance with the field during the rising part of the laser pulse. Subsequent evolution of the laser pulse did not completely ionize these states, leaving behind some highly excited atoms. We shall refer to this phenomenon as "population trapping".
We mention the theoretical calculation that incomplete ionization occurs whenever there is parallel resonant excitation into a common level with ionization loss. We consider a state such as 6f of Xe, which consists of 7 quasi-degenerate levels in the range of the laser bandwidth. These levels along with the continuum constitute a lambda system. The mechanism of the lambda-type trapping is schematically presented in the figure. At the rising part of the pulse (a) the excited state (with two degenerate levels 1 and 2) is not in multiphoton resonance with the ground state. The electron is ionized through multiphoton coupling with the continuum. As the intensity of the pulse is increased, the excited state and the continuum are shifted in energy due to the Stark shift. At the peak of the pulse (b) the excited states go into multiphoton resonance with the ground state. As the intensity starts to decrease (c), the two states are coupled through the continuum and the population is trapped in a coherent superposition of the two states. Under subsequent action of the same pulse, due to interference in the transition amplitudes of the lambda system, the field cannot ionize the population completely and a fraction of the population will be trapped in a coherent superposition of the quasi-degenerate levels. According to this explanation the states with higher angular momentum, which have more sublevels, would have a higher probability of trapping the population. In general the strength of the trapping will be determined by the strength of the two-photon coupling between the quasi-degenerate levels via the continuum. In 1996, using a very stable laser and by minimizing the masking effects of the focal-region expansion with increasing intensity, Talebpour et al. observed structures on the curves of singly charged ions of Xe, Kr and Ar. These structures were attributed to electron trapping in the strong laser field. A more unambiguous demonstration of population trapping has been reported by T. Morishita and C. D. Lin.
Non-sequential multiple ionization
The phenomenon of non-sequential ionization (NSI) of atoms exposed to intense laser fields has been a subject of many theoretical and experimental studies since 1983. The pioneering work began with the observation of a "knee" structure on the Xe2+ ion signal versus intensity curve by L’Huillier et al. From the experimental point of view, the NS double ionization refers to processes which somehow enhance the rate of production of doubly charged ions by a huge factor at intensities below the saturation intensity of the singly charged ion. Many, on the other hand, prefer to define the NSI as a process by which two electrons are ionized nearly simultaneously. This definition implies that apart from the sequential channel there is another channel which is the main contribution to the production of doubly charged ions at lower intensities. The first observation of triple NSI in argon interacting with a 1 μm laser was reported by Augst et al. Later, systematically studying the NSI of all rare gas atoms, the quadruple NSI of Xe was observed. The most important conclusion of this study was the observation of the following relation between the rate of NSI to any charge state and the rate of tunnel ionization (predicted by the ADK formula) to the previous charge states;
where is the rate of quasi-static tunneling to the i-th charge state and are some constants depending on the wavelength of the laser (but not on the pulse duration).
Two models have been proposed to explain the non-sequential ionization; the shake-off model and electron re-scattering model. The shake-off (SO) model, first proposed by Fittinghoff et al., is adopted from the field of ionization of atoms by X rays and electron projectiles where the SO process is one of the major mechanisms responsible for the multiple ionization of atoms. The SO model describes the NSI process as a mechanism where one electron is ionized by the laser field and the departure of this electron is so rapid that the remaining electrons do not have enough time to adjust themselves to the new energy states. Therefore, there is a certain probability that, after the ionization of the first electron, a second electron is excited to states with higher energy (shake-up) or even ionized (shake-off). We should mention that, until now, there has been no quantitative calculation based on the SO model, and the model is still qualitative.
The electron rescattering model was independently developed by Kuchiev, Schafer et al., Corkum, Becker and Faisal, and Faisal and Becker. The principal features of the model can be understood easily from Corkum's version. Corkum's model describes the NS ionization as a process whereby an electron is tunnel ionized. The electron then interacts with the laser field, where it is accelerated away from the nuclear core. If the electron has been ionized at an appropriate phase of the field, it will pass by the position of the remaining ion half a cycle later, where it can free an additional electron by electron impact. Only half of the time is the electron released with the appropriate phase; the other half of the time it never returns to the nuclear core. The maximum kinetic energy that the returning electron can have is 3.17 times the ponderomotive potential of the laser. Corkum's model places a cut-off limit on the minimum intensity (the ponderomotive potential is proportional to intensity) where ionization due to re-scattering can occur.
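To make the intensity dependence in Corkum's picture concrete, the short sketch below evaluates the ponderomotive potential and the 3.17 Up return-energy cutoff for assumed laser parameters. The conversion factor is the standard one for Up in eV with intensity in W/cm² and wavelength in micrometres; the function name and numbers are chosen for this example only.

# Ponderomotive potential Up and the 3.17*Up maximum return energy of the
# re-scattered electron in Corkum's model.
# Standard conversion: Up [eV] ~ 9.33e-14 * I [W/cm^2] * (wavelength [um])^2
def ponderomotive_energy_ev(intensity_w_cm2, wavelength_um):
    return 9.33e-14 * intensity_w_cm2 * wavelength_um**2

# Assumed example: a Ti:sapphire laser (0.8 um) focused to 1e14 W/cm^2
up = ponderomotive_energy_ev(1.0e14, 0.8)
print(f"Up = {up:.1f} eV, maximum return energy = {3.17 * up:.1f} eV")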
The re-scattering model in Kuchiev's version (Kuchiev's model) is quantum mechanical. The basic idea of the model is illustrated by Feynman diagrams in figure a. First both electrons are in the ground state of an atom. The lines marked a and b describe the corresponding atomic states. Then the electron a is ionized. The beginning of the ionization process is shown by the intersection with a sloped dashed line, where the MPI occurs. The propagation of the ionized electron in the laser field, during which it absorbs other photons (ATI), is shown by the full thick line. The collision of this electron with the parent atomic ion is shown by a vertical dotted line representing the Coulomb interaction between the electrons. The state marked with c describes the ion excitation to a discrete or continuum state. Figure b describes the exchange process. Kuchiev's model, contrary to Corkum's model, does not predict any threshold intensity for the occurrence of NS ionization.
Kuchiev did not include the Coulomb effects on the dynamics of the ionized electron. This resulted in an underestimation of the double ionization rate by a huge factor. In the approach of Becker and Faisal (which is equivalent to Kuchiev's model in spirit), this drawback does not exist. In fact, their model is more exact and does not suffer from the large number of approximations made by Kuchiev. Their calculated results fit the experimental results of Walker et al. perfectly. Becker and Faisal have also been able to fit the experimental results on the multiple NSI of rare gas atoms using their model. As a result, electron re-scattering can be taken as the main mechanism for the occurrence of the NSI process.
Multiphoton ionization of inner-valence electrons and fragmentation of polyatomic molecules
The ionization of inner-valence electrons is responsible for the fragmentation of polyatomic molecules in strong laser fields. According to a qualitative model, the dissociation of the molecules occurs through a three-step mechanism:
MPI of electrons from the inner orbitals of the molecule which results in a molecular ion in ro-vibrational levels of an excited electronic state;
Rapid radiationless transition to the high-lying ro-vibrational levels of a lower electronic state; and
Subsequent dissociation of the ion to different fragments through various fragmentation channels.
Short-pulse-induced molecular fragmentation may be used as an ion source for high-performance mass spectrometry. The selectivity provided by a short-pulse-based source is superior to that expected when using conventional electron-ionization-based sources, in particular when the identification of optical isomers is required.
Kramers–Henneberger frame
The Kramers–Henneberger frame is the non-inertial frame moving with the free electron under the influence of the harmonic laser pulse, obtained by applying a translation to the laboratory frame equal to the quiver motion of a classical electron in the laboratory frame. In other words, in the Kramers–Henneberger frame the classical electron is at rest. Starting in the lab frame (velocity gauge), we may describe the electron with the Hamiltonian:
In the dipole approximation, the quiver motion of a classical electron in the laboratory frame for an arbitrary field can be obtained from the vector potential of the electromagnetic field:
where for a monochromatic plane wave.
By applying a transformation to the laboratory frame equal to the quiver motion one moves to the ‘oscillating’ or ‘Kramers–Henneberger’ frame, in which the classical electron is at rest. By a further phase-factor transformation, made for convenience, one obtains the ‘space-translated’ Hamiltonian, which is unitarily equivalent to the lab-frame Hamiltonian and contains the original potential centered on the oscillating point:
The utility of the KH frame lies in the fact that in this frame the laser–atom interaction can be reduced to the form of an oscillating potential energy, where the natural parameters describing the electron dynamics are the laser frequency and the quiver (“excursion”) amplitude obtained from the quiver motion.
From here one can apply Floquet theory to calculate quasi-stationary solutions of the TDSE. In high-frequency Floquet theory, to lowest order in the high-frequency expansion the system reduces to the so-called ‘structure equation’, which has the form of a typical energy-eigenvalue Schrödinger equation containing the ‘dressed potential’ (the cycle-average of the oscillating potential). The interpretation of the dressed potential is as follows: in the oscillating frame the nucleus has an oscillatory motion along the quiver trajectory, and the dressed potential can be seen as the potential of the nuclear charge smeared out along this trajectory.
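As a sketch of the standard form of these statements (written in atomic units and the dipole approximation; the symbols A for the vector potential, α(t) for the quiver trajectory and T for the optical period are introduced here only for illustration):

\[
H_{\text{lab}} = \tfrac{1}{2}\bigl(\mathbf{p} + \mathbf{A}(t)\bigr)^{2} + V(\mathbf{r}),
\qquad
H_{\text{KH}} = \tfrac{1}{2}\,\mathbf{p}^{2} + V\bigl(\mathbf{r} + \boldsymbol{\alpha}(t)\bigr),
\qquad
\boldsymbol{\alpha}(t) = \int^{t}\!\mathbf{A}(t')\,\mathrm{d}t',
\]
\[
V_{0}(\mathbf{r}) = \frac{1}{T}\int_{0}^{T} V\bigl(\mathbf{r} + \boldsymbol{\alpha}(t)\bigr)\,\mathrm{d}t .
\]

Here V0 denotes the cycle-averaged (dressed) potential that enters the structure equation.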
The KH frame is thus employed in theoretical studies of strong-field ionization and atomic stabilization (a predicted phenomenon in which the ionization probability of an atom in a high-intensity, high-frequency field actually decreases for intensities above a certain threshold) in conjunction with high-frequency Floquet theory.
Dissociation – distinction
A substance may dissociate without necessarily producing ions. As an example, the molecules of table sugar dissociate in water (sugar is dissolved) but exist as intact neutral entities. Another subtle event is the dissociation of sodium chloride (table salt) into sodium and chloride ions. Although it may seem to be a case of ionization, in reality the ions already exist within the crystal lattice. When salt is dissociated, its constituent ions are simply surrounded by water molecules and their effects are visible (e.g. the solution becomes electrolytic). However, no transfer or displacement of electrons occurs.
See also
Above threshold ionization
Chemical ionization
Electron ionization
Ionization chamber – Instrument for detecting gaseous ionization, used in ionizing radiation measurements
Ion source
Photoionization
Thermal ionization
Townsend avalanche – The chain reaction of ionization occurring in a gas with an applied electric field
References
External links
Ions
Molecular physics
Atomic physics
Physical chemistry
Quantum chemistry
Mass spectrometry | 0.778287 | 0.99742 | 0.77628 |
Beer–Lambert law | The Beer–Bouguer–Lambert (BBL) extinction law is an empirical relationship describing the attenuation in intensity of a radiation beam passing through a macroscopically homogenous medium with which it interacts. Formally, it states that the intensity of radiation decays exponentially in the absorbance of the medium, and that said absorbance is proportional to the length of beam passing through the medium, the concentration of interacting matter along that path, and a constant representing said matter's propensity to interact.
The extinction law's primary application is in chemical analysis, where it underlies the Beer–Lambert law, commonly called Beer's law. Beer's law states that a beam of visible light passing through a chemical solution of fixed geometry experiences absorption proportional to the solute concentration. Other applications appear in physical optics, where it quantifies astronomical extinction and the absorption of photons, neutrons, or rarefied gases.
Forms of the BBL law date back to the mid-eighteenth century, but it only took its modern form during the early twentieth.
History
The first work towards the BBL law began with astronomical observations Pierre Bouguer performed in the early eighteenth century and published in 1729. Bouguer needed to compensate for the refraction of light by the earth's atmosphere, and found it necessary to measure the local height of the atmosphere. The latter, he sought to obtain through variations in the observed intensity of known stars. When calibrating this effect, Bouguer discovered that light intensity had an exponential dependence on length traveled through the atmosphere (in Bouguer's terms, a geometric progression).
Bouguer's work was then popularized in Johann Heinrich Lambert's Photometria in 1760. Lambert expressed the law, which states that the loss of light intensity when it propagates in a medium is directly proportional to intensity and path length, in a mathematical form quite similar to that used in modern physics. Lambert began by assuming that the intensity I of light traveling into an absorbing body would be given by the differential equation −dI/dx = μI, which is compatible with Bouguer's observations. The constant of proportionality μ was often termed the "optical density" of the body. As long as μ is constant along a distance d, the exponential attenuation law I = I0 exp(−μd) follows from integration.
In 1852, August Beer noticed that colored solutions also appeared to exhibit a similar attenuation relation. In his analysis, Beer does not discuss Bouguer and Lambert's prior work, writing in his introduction that "Concerning the absolute magnitude of the absorption that a particular ray of light suffers during its propagation through an absorbing medium, there is no information available." Beer may have omitted reference to Bouguer's work because there is a subtle physical difference between color absorption in solutions and astronomical contexts. Solutions are homogeneous and do not scatter light at common analytical wavelengths (ultraviolet, visible, or infrared), except at entry and exit. Thus the attenuation of light within a solution is reasonably approximated as due to absorption alone. In Bouguer's context, atmospheric dust or other inhomogeneities could also scatter light away from the detector. Modern texts combine the two laws because scattering and absorption have the same effect. Thus a scattering coefficient μs and an absorption coefficient μa can be combined into a total extinction coefficient μ = μs + μa.
Importantly, Beer also seems to have conceptualized his result in terms of a given thickness' opacity, writing "If is the coefficient (fraction) of diminution, then this coefficient (fraction) will have the value for double this thickness." Although this geometric progression is mathematically equivalent to the modern law, modern treatments instead emphasize the logarithm of , which clarifies that concentration and path length have equivalent effects on the absorption. An early, possibly the first, modern formulation was given by Robert Luther and Andreas Nikolopulos in 1913.
Mathematical formulations
There are several equivalent formulations of the BBL law, depending on the precise choice of measured quantities. All of them state that, provided that the physical state is held constant, the extinction process is linear in the intensity of radiation and amount of radiatively-active matter, a fact sometimes called the fundamental law of extinction. Many of them then connect the quantity of radiatively-active matter to a length traveled and a concentration or number density . The latter two are related by Avogadro's number: .
A collimated beam (directed radiation) with cross-sectional area will encounter particles (on average) during its travel. However, not all of these particles interact with the beam. Propensity to interact is a material-dependent property, typically summarized in an absorptivity or scattering cross-section. These almost exhibit another Avogadro-type relationship. The factor of ln 10 appears because physicists tend to use natural logarithms and chemists decadal logarithms.
Beam intensity can also be described in terms of multiple variables: the intensity or radiant flux . In the case of a collimated beam, these are related by , but is often used in non-collimated contexts. The ratio of intensity (or flux) in to out is sometimes summarized as a transmittance coefficient .
When considering an extinction law, dimensional analysis can verify the consistency of the variables, as logarithms (being nonlinear) must always be dimensionless.
Formulation
The simplest formulation of Beer's law relates the optical attenuation of a physical material containing a single attenuating species of uniform concentration c to the optical path length ℓ through the sample and the absorptivity ε of the species. This expression is A = εℓc. The quantities so equated are defined to be the absorbance A, which depends on the logarithm base. The corresponding Naperian absorbance is then given by multiplying A by ln 10.
If multiple species in the material interact with the radiation, then their absorbances add. Thus a slightly more general formulation is that where the sum is over all possible radiation-interacting ("translucent") species, and indexes those species.
In situations where length may vary significantly, absorbance is sometimes summarized in terms of an attenuation coefficient
In atmospheric science and radiation shielding applications, the attenuation coefficient may vary significantly through an inhomogeneous material. In those situations, the most general form of the Beer–Lambert law states that the total attenuation can be obtained by integrating the attenuation coefficient over small slices of the beamline. These formulations then reduce to the simpler versions when there is only one active species and the attenuation coefficients are constant.
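A minimal numerical sketch of these formulations (the function names and numbers are illustrative assumptions; numpy is used for the integration over an inhomogeneous medium):

import numpy as np

def absorbance(eps, ell, c):
    # decadic absorbance A = eps * ell * c for a single attenuating species
    # eps: molar attenuation coefficient [L mol^-1 cm^-1], ell: path length [cm],
    # c: amount concentration [mol L^-1]
    return eps * ell * c

def transmittance(A):
    # fraction of the incident flux that is transmitted, T = 10**(-A)
    return 10.0 ** (-A)

def transmitted_fraction_inhomogeneous(mu_of_z, z):
    # general form: integrate a z-dependent Napierian attenuation coefficient
    # along the beamline and return exp(-integral)
    return np.exp(-np.trapz(mu_of_z(z), z))

A = absorbance(eps=1.5e4, ell=1.0, c=2.0e-5)              # assumed values, gives A = 0.3
print(A, transmittance(A))                                 # T is about 0.5
print(transmitted_fraction_inhomogeneous(lambda z: 0.1 + 0.05 * z,
                                          np.linspace(0.0, 2.0, 201)))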
Derivation
There are two factors that determine the degree to which a medium containing particles will attenuate a light beam: the number of particles encountered by the light beam, and the degree to which each particle extinguishes the light.
Assume that a beam of light enters a material sample. Define z as an axis parallel to the direction of the beam. Divide the material sample into thin slices, perpendicular to the beam of light, with thickness dz sufficiently small that one particle in a slice cannot obscure another particle in the same slice when viewed along the z direction. The radiant flux Φ of the light that emerges from a slice is reduced, compared to that of the light that entered, by dΦ(z) = −μ(z)Φ(z) dz, where μ(z) is the (Napierian) attenuation coefficient, which yields the following first-order linear ordinary differential equation: dΦ/dz = −μ(z)Φ(z).
The attenuation is caused by the photons that did not make it to the other side of the slice because of scattering or absorption. The solution to this differential equation is obtained by multiplying the integrating factorthroughout to obtainwhich simplifies due to the product rule (applied backwards) to
Integrating both sides and solving for for a material of real thickness , with the incident radiant flux upon the slice and the transmitted radiant flux givesand finally
Since the decadic attenuation coefficient is related to the (Napierian) attenuation coefficient by we also have
To describe the attenuation coefficient in a way independent of the number densities of the attenuating species of the material sample, one introduces the attenuation cross section, which has the dimension of an area; it expresses the likelihood of interaction between the particles of the beam and the particles of the species in the material sample:
One can also use the molar attenuation coefficients, which are related to the attenuation cross sections through the Avogadro constant, to describe the attenuation coefficient in a way independent of the amount concentrations of the attenuating species of the material sample:
Validity
Under certain conditions the Beer–Lambert law fails to maintain a linear relationship between attenuation and concentration of analyte. These deviations are classified into three categories:
Real—fundamental deviations due to the limitations of the law itself.
Chemical—deviations observed due to specific chemical species of the sample which is being analyzed.
Instrument—deviations which occur due to how the attenuation measurements are made.
There are at least six conditions that need to be fulfilled in order for the Beer–Lambert law to be valid. These are:
The attenuators must act independently of each other.
The attenuating medium must be homogeneous in the interaction volume.
The attenuating medium must not scatter the radiation—no turbidity—unless this is accounted for as in DOAS.
The incident radiation must consist of parallel rays, each traversing the same length in the absorbing medium.
The incident radiation should preferably be monochromatic, or have at least a width that is narrower than that of the attenuating transition. Otherwise a spectrometer as detector for the power is needed instead of a photodiode which cannot discriminate between wavelengths.
The incident flux must not influence the atoms or molecules; it should only act as a non-invasive probe of the species under study. In particular, this implies that the light should not cause optical saturation or optical pumping, since such effects will deplete the lower level and possibly give rise to stimulated emission.
If any of these conditions are not fulfilled, there will be deviations from the Beer–Lambert law.
The law tends to break down at very high concentrations, especially if the material is highly scattering. Absorbance within the range of 0.2 to 0.5 is ideal to maintain linearity in the Beer–Lambert law. If the radiation is especially intense, nonlinear optical processes can also cause variances. The main reason, however, is that the concentration dependence is in general non-linear and Beer's law is valid only under certain conditions, as shown by the derivation above. For strong oscillators and at high concentrations the deviations are stronger. If the molecules are closer to each other, interactions can set in. These interactions can be roughly divided into physical and chemical interactions. Physical interactions do not alter the polarizability of the molecules as long as the interaction is not so strong that light and molecular quantum states intermix (strong coupling), but they cause the attenuation cross sections to be non-additive via electromagnetic coupling. Chemical interactions, in contrast, change the polarizability and thus the absorption.
In solids, attenuation is usually the sum of an absorption coefficient (creation of electron–hole pairs) and a scattering coefficient (for example Rayleigh scattering, if the scattering centers are much smaller than the incident wavelength). Also note that for some systems we can put 1/λ (one over the inelastic mean free path) in place of the attenuation coefficient.
Applications
In plasma physics
The BBL extinction law also arises as a solution to the BGK equation.
Chemical analysis by spectrophotometry
The Beer–Lambert law can be applied to the analysis of a mixture by spectrophotometry, without the need for extensive pre-processing of the sample. An example is the determination of bilirubin in blood plasma samples. The spectrum of pure bilirubin is known, so the molar attenuation coefficient ε is known. Measurements of the decadic attenuation coefficient μ10 are made at one wavelength λ that is nearly unique for bilirubin and at a second wavelength in order to correct for possible interferences. The amount concentration c is then given by c = μ10(λ)/ε(λ).
For a more complicated example, consider a mixture in solution containing two species at amount concentrations c1 and c2. The decadic attenuation coefficient at any wavelength λ is given by μ10(λ) = ε1(λ)c1 + ε2(λ)c2.
Therefore, measurements at two wavelengths yield two equations in two unknowns and will suffice to determine the amount concentrations c1 and c2, as long as the molar attenuation coefficients of the two components, ε1 and ε2, are known at both wavelengths. This system of two equations can be solved using Cramer's rule. In practice it is better to use linear least squares to determine the two amount concentrations from measurements made at more than two wavelengths. Mixtures containing more than two components can be analyzed in the same way, using a minimum of N wavelengths for a mixture containing N components.
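The procedure can be sketched numerically as follows, with hypothetical molar attenuation coefficients and concentrations; solving the overdetermined system by linear least squares reduces to Cramer's rule when exactly two wavelengths are used:

import numpy as np

# Rows: wavelengths; columns: species. Entries are assumed products of the molar
# attenuation coefficient and the path length (hypothetical values).
E = np.array([[9000.0,  300.0],
              [4000.0, 2500.0],
              [ 500.0, 7000.0]])

c_true = np.array([2.0e-5, 5.0e-5])     # "unknown" amount concentrations to recover
A_measured = E @ c_true                 # simulated absorbance at each wavelength

# least-squares solution of E c = A
c_fit, *_ = np.linalg.lstsq(E, A_measured, rcond=None)
print(c_fit)                            # recovers c_true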
The law is used widely in infra-red spectroscopy and near-infrared spectroscopy for analysis of polymer degradation and oxidation (also in biological tissue) as well as to measure the concentration of various compounds in different food samples. The carbonyl group attenuation at about 6 micrometres can be detected quite easily, and degree of oxidation of the polymer calculated.
In-atmosphere astronomy
The Bouguer–Lambert law may be applied to describe the attenuation of solar or stellar radiation as it travels through the atmosphere. In this case, there is scattering of radiation as well as absorption. The optical depth for a slant path is τ′ = mτ, where τ refers to a vertical path, m is called the relative airmass, and for a plane-parallel atmosphere it is determined as m = sec θ, where θ is the zenith angle corresponding to the given path. The Bouguer–Lambert law for the atmosphere is usually written
where each is the optical depth whose subscript identifies the source of the absorption or scattering it describes:
refers to aerosols (that absorb and scatter);
are uniformly mixed gases (mainly carbon dioxide (CO2) and molecular oxygen (O2) which only absorb);
is nitrogen dioxide, mainly due to urban pollution (absorption only);
are effects due to Raman scattering in the atmosphere;
is water vapour absorption;
is ozone (absorption only);
is Rayleigh scattering from molecular oxygen and nitrogen (responsible for the blue color of the sky);
the selection of the attenuators which have to be considered depends on the wavelength range and can include various other compounds. This can include tetraoxygen, HONO, formaldehyde, glyoxal, a series of halogen radicals and others.
m is the optical mass or airmass factor, a term approximately equal (for small and moderate values of θ) to 1/cos θ, where θ is the observed object's zenith angle (the angle measured from the direction perpendicular to the Earth's surface at the observation site). This equation can be used to retrieve τa, the aerosol optical thickness, which is necessary for the correction of satellite images and also important in accounting for the role of aerosols in climate.
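A short sketch of the atmospheric form, using hypothetical vertical optical depths for a few attenuators and the plane-parallel airmass approximation m ≈ 1/cos θ:

import numpy as np

def atmospheric_transmission(zenith_angle_deg, optical_depths):
    # Bouguer-Lambert attenuation through the atmosphere:
    # T = exp(-m * sum of vertical optical depths), with m ~ 1/cos(theta)
    m = 1.0 / np.cos(np.radians(zenith_angle_deg))
    return np.exp(-m * sum(optical_depths.values()))

# hypothetical vertical optical depths at some visible wavelength
taus = {"aerosol": 0.10, "Rayleigh": 0.14, "ozone": 0.03, "water_vapour": 0.02}
for angle in (0.0, 60.0):
    print(angle, atmospheric_transmission(angle, taus))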
See also
Applied spectroscopy
Atomic absorption spectroscopy
Absorption spectroscopy
Cavity ring-down spectroscopy
Clausius–Mossotti relation
Infra-red spectroscopy
Job plot
Laser absorption spectrometry
Lorentz–Lorenz relation
Logarithm
Polymer degradation
Scientific laws named after people
Quantification of nucleic acids
Tunable diode laser absorption spectroscopy
Transmittance#Beer–Lambert law
References
External links
Beer–Lambert Law Calculator
Beer–Lambert Law Simpler Explanation
Eponymous laws of physics
Scattering, absorption and radiative transfer (optics)
Spectroscopy
Electromagnetic radiation
Visibility | 0.777449 | 0.998489 | 0.776274 |
Work (physics) | In science, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force.
For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction.
Both force and displacement are vectors. The work done is given by the dot product of the two vectors, where the result is a scalar. When the force is constant and the angle θ between the force and the displacement is also constant, then the work done is given by W = F ⋅ s = Fs cos θ.
If the force is variable, then work is given by the line integral W = ∫ F ⋅ ds, where ds is the tiny change in displacement vector.
Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy.
History
The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Mechanics), in which he showed the underlying mathematical similarity of the machines as force amplifiers. He was the first to explain that simple machines do not create energy, only transform it.
Early concepts of work
Although work was not formally used until 1826, similar concepts existed before then. Early names for the same concept included moment of activity, quantity of action, latent live force, dynamic effect, efficiency, and even force. In 1637, the French philosopher René Descartes wrote:
In 1686, the German philosopher Gottfried Leibniz wrote:
In 1759, John Smeaton described a quantity that he called "power" "to signify the exertion of strength, gravitation, impulse, or pressure, as to produce motion." Smeaton continues that this quantity can be calculated if "the weight raised is multiplied by the height to which it can be raised in a given time," making this definition remarkably similar to Coriolis's.
Etymology
According to the 1957 physics textbook by Max Jammer, the term work was introduced in 1826 by the French mathematician Gaspard-Gustave Coriolis as "weight lifted through a height", which is based on the use of early steam engines to lift buckets of water out of flooded ore mines. According to Rene Dugas, French engineer and historian, it is to Solomon of Caux "that we owe the term work in the sense that it is used in mechanics now".
Units
The SI unit of work is the joule (J), named after English physicist James Prescott Joule (1818-1889), which is defined as the work required to exert a force of one newton through a displacement of one metre.
The dimensionally equivalent newton-metre (N⋅m) is sometimes used as the measuring unit for work, but this can be confused with the measurement unit of torque. Usage of N⋅m is discouraged by the SI authority, since it can lead to confusion as to whether the quantity expressed in newton-metres is a torque measurement, or a measurement of work.
Another unit for work is the foot-pound, which comes from the English system of measurement. As the unit name suggests, it is the product of pounds for the unit of force and feet for the unit of displacement. One joule is equivalent to approximately 0.7376 ft⋅lb.
Non-SI units of work include the newton-metre, erg, the foot-pound, the foot-poundal, the kilowatt hour, the litre-atmosphere, and the horsepower-hour. Due to work having the same physical dimension as heat, occasionally measurement units typically reserved for heat or energy content, such as therm, BTU and calorie, are used as a measuring unit.
Work and energy
The work done by a constant force of magnitude F on a point that moves a displacement s in a straight line in the direction of the force is the product W = Fs.
For example, if a force of 10 newtons (F = 10 N) acts along a point that travels 2 metres (s = 2 m), then W = Fs = (10 N)(2 m) = 20 J. This is approximately the work done lifting a 1 kg object from ground level to over a person's head against the force of gravity.
The work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance.
Work is closely related to energy. Energy shares the same unit of measurement with work (joules) because the energy from the object doing work is transferred to the other objects it interacts with when work is being done. The work–energy principle states that an increase in the kinetic energy of a rigid body is caused by an equal amount of positive work done on the body by the resultant force acting on that body. Conversely, a decrease in kinetic energy is caused by an equal amount of negative work done by the resultant force. Thus, if the net work is positive, then the particle's kinetic energy increases by the amount of the work. If the net work done is negative, then the particle's kinetic energy decreases by the amount of work.
From Newton's second law, it can be shown that work on a free (no fields), rigid (no internal degrees of freedom) body, is equal to the change in kinetic energy corresponding to the linear velocity and angular velocity of that body,
The work of forces generated by a potential function is known as potential energy and the forces are said to be conservative. Therefore, work on an object that is merely displaced in a conservative force field, without change in velocity or rotation, is equal to minus the change of potential energy of the object,
These formulas show that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions, and units, of energy.
The work/energy principles discussed here are identical to electric work/energy principles.
Constraint forces
Constraint forces determine the object's displacement in the system, limiting it within a range. For example, in the case of a slope plus gravity, the object is stuck to the slope and, when attached to a taut string, it cannot move in an outwards direction to make the string any 'tauter'. The constraint eliminates all displacements in that direction; that is, the velocity in the direction of the constraint is limited to 0, so that the constraint forces do not perform work on the system.
For a mechanical system, constraint forces eliminate movement in directions that characterize the constraint. Thus the virtual work done by the forces of constraint is zero, a result which is only true if friction forces are excluded.
Fixed, frictionless constraint forces do not perform work on the system, as the angle between the motion and the constraint forces is always 90°. Examples of workless constraints are: rigid interconnections between particles, sliding motion on a frictionless surface, and rolling contact without slipping.
For example, in a pulley system like the Atwood machine, the internal forces on the rope and at the supporting pulley do no work on the system. Therefore, work need only be computed for the gravitational forces acting on the bodies. Another example is the centripetal force exerted inwards by a string on a ball in uniform circular motion; it constrains the ball to circular motion, restricting its movement away from the centre of the circle. This force does zero work because it is perpendicular to the velocity of the ball.
The magnetic force on a charged particle is , where is the charge, is the velocity of the particle, and is the magnetic field. The result of a cross product is always perpendicular to both of the original vectors, so . The dot product of two perpendicular vectors is always zero, so the work , and the magnetic force does not do work. It can change the direction of motion but never change the speed.
Mathematical calculation
For moving objects, the quantity of work/time (power) is integrated along the trajectory of the point of application of the force. Thus, at any instant, the rate of the work done by a force (measured in joules/second, or watts) is the scalar product of the force (a vector), and the velocity vector of the point of application. This scalar product of force and velocity is known as instantaneous power. Just as velocities may be integrated over time to obtain a total distance, by the fundamental theorem of calculus, the total work along a path is similarly the time-integral of instantaneous power applied along the trajectory of the point of application.
Work is the result of a force on a point that follows a curve X, with a velocity v, at each instant. The small amount of work δW that occurs over an instant of time dt is calculated as δW = F ⋅ v dt, where F ⋅ v is the power over the instant dt. The sum of these small amounts of work over the trajectory of the point yields the work,
where C is the trajectory from x(t1) to x(t2). This integral is computed along the trajectory of the particle, and is therefore said to be path dependent.
If the force is always directed along this line, and the magnitude of the force is F, then this integral simplifies to
W = ∫C F ds,
where s is displacement along the line. If F is constant, in addition to being directed along the line, then the integral simplifies further to
W = F ∫C ds = Fs,
where s is the displacement of the point along the line.
This calculation can be generalized for a constant force that is not directed along the line, followed by the particle. In this case the dot product F ⋅ ds = F cos θ ds, where θ is the angle between the force vector and the direction of movement, that is
W = Fs cos θ.
When a force component is perpendicular to the displacement of the object (such as when a body moves in a circular path under a central force), no work is done, since the cosine of 90° is zero. Thus, no work can be performed by gravity on a planet with a circular orbit (this is ideal, as all orbits are slightly elliptical). Also, no work is done on a body moving circularly at a constant speed while constrained by mechanical force, such as moving at constant speed in a frictionless ideal centrifuge.
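A numerical sketch of the line-integral definition above: sample a trajectory in time, form the instantaneous power F ⋅ v, and integrate it over time. The force field and trajectory below are arbitrary illustrations, not taken from the text.

import numpy as np

# Work as the time integral of the instantaneous power P(t) = F(x(t)) . v(t)
t = np.linspace(0.0, 2.0, 2001)
x = np.column_stack((np.cos(t), np.sin(t), 0.2 * t))   # an assumed helical path
v = np.gradient(x, t, axis=0)                           # velocity along the path

def force(position):
    return np.array([0.0, 0.0, -9.81])                  # constant force, e.g. gravity on a 1 kg mass

power = np.array([force(p) @ vel for p, vel in zip(x, v)])
work = np.trapz(power, t)                               # W = integral of F . v dt
print(work)                                             # about -9.81 * 0.4 J; only the vertical displacement matters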
Work done by a variable force
Calculating the work as "force times straight path segment" would only apply in the most simple of circumstances, as noted above. If force is changing, or if the body is moving along a curved path, possibly rotating and not necessarily rigid, then only the path of the application point of the force is relevant for the work done, and only the component of the force parallel to the application point velocity is doing work (positive work when in the same direction, and negative when in the opposite direction of the velocity). This component of force can be described by the scalar quantity called scalar tangential component (F cos θ, where θ is the angle between the force and the velocity). And then the most general definition of work can be formulated as follows:
Thus, the work done for a variable force can be expressed as a definite integral of force over displacement.
If the displacement as a variable of time is given by x(t), then the work done by the variable force from t1 to t2 is:
Thus, the work done for a variable force can be expressed as a definite integral of power over time.
Torque and rotation
A force couple results from equal and opposite forces, acting on two different points of a rigid body. The sum (resultant) of these forces may cancel, but their effect on the body is the couple or torque T. The work of the torque is calculated as δW = T ⋅ ω dt, where T ⋅ ω is the power over the instant dt. The sum of these small amounts of work over the trajectory of the rigid body yields the work,
This integral is computed along the trajectory of the rigid body with an angular velocity that varies with time, and is therefore said to be path dependent.
If the angular velocity vector maintains a constant direction, then it takes the form,
where φ is the angle of rotation about the constant unit vector. In this case, the work of the torque becomes,
where C is the trajectory from φ(t1) to φ(t2). This integral depends on the rotational trajectory φ(t), and is therefore path-dependent.
If the torque is aligned with the angular velocity vector so that,
and both the torque and angular velocity are constant, then the work takes the form,
This result can be understood more simply by considering the torque as arising from a force of constant magnitude F, being applied perpendicularly to a lever arm at a distance r, as shown in the figure. This force will act through the distance along the circular arc s = rφ, so the work done is W = Fs = Frφ.
Introduce the torque τ = Fr, to obtain W = τφ,
as presented above.
Notice that only the component of torque in the direction of the angular velocity vector contributes to the work.
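As a small numerical illustration with assumed values, the work of a constant torque about a fixed rotation axis reduces to the product of its axial component and the rotation angle:

import numpy as np

torque = np.array([0.0, 0.0, 2.0])      # assumed torque of 2 N*m about the z-axis
axis = np.array([0.0, 0.0, 1.0])        # unit vector of the constant rotation axis
phi = np.pi                             # total rotation angle (half a turn)

# only the component of torque along the rotation axis does work: W = (T . axis) * phi
work = (torque @ axis) * phi
print(work)                             # about 6.28 J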
Work and potential energy
The scalar product of a force and the velocity of its point of application defines the power input to a system at an instant of time. Integration of this power over the trajectory of the point of application, , defines the work input to the system by the force.
Path dependence
Therefore, the work done by a force on an object that travels along a curve is given by the line integral:
where defines the trajectory and is the velocity along this trajectory.
In general this integral requires that the path along which the velocity is defined, so the evaluation of work is said to be path dependent.
The time derivative of the integral for work yields the instantaneous power,
Path independence
If the work for an applied force is independent of the path, then the work done by the force, by the gradient theorem, defines a potential function which is evaluated at the start and end of the trajectory of the point of application. This means that there is a potential function , that can be evaluated at the two points and to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is
The function is called the potential energy associated with the applied force. The force derived from such a potential function is said to be conservative. Examples of forces that have potential energies are gravity and spring forces.
In this case, the gradient of work yields
and the force F is said to be "derivable from a potential."
Because the potential defines a force at every point in space, the set of forces is called a force field. The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity of the body, that is
Work by gravity
In the absence of other forces, gravity results in a constant downward acceleration of every freely moving object. Near Earth's surface the acceleration due to gravity is g = 9.8 m⋅s⁻² and the gravitational force on an object of mass m is Fg = mg. It is convenient to imagine this gravitational force concentrated at the center of mass of the object.
If an object with weight is displaced upwards or downwards a vertical distance , the work done on the object is:
where Fg is weight (pounds in imperial units, and newtons in SI units), and Δy is the change in height y. Notice that the work done by gravity depends only on the vertical movement of the object. The presence of friction does not affect the work done on the object by its weight.
In space
The force of gravity exerted by a mass on another mass is given by
where is the position vector from to and is the unit vector in the direction of .
Let the mass move at the velocity ; then the work of gravity on this mass as it moves from position to is given by
Notice that the position and velocity of the mass are given by
where and are the radial and tangential unit vectors directed relative to the vector from to , and we use the fact that Use this to simplify the formula for work of gravity to,
This calculation uses the fact that
The function
is the gravitational potential function, also known as gravitational potential energy. The negative sign follows the convention that work is gained from a loss of potential energy.
Work by a spring
Consider a spring that exerts a horizontal force that is proportional to its deflection in the x direction independent of how a body moves. The work of this spring on a body moving along the space with the curve , is calculated using its velocity, , to obtain
For convenience, consider that contact with the spring occurs at t = 0; then the integral of the product of the distance x and the x-velocity, x·vx, over time is ½x². The work is the product of the distance times the spring force, which is also dependent on distance; hence the result, whose magnitude is ½kx².
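A numerical cross-check of this result, with an assumed spring constant and an arbitrary motion; the sign convention here takes the spring force as −kx, so the spring does negative work on a body moved away from its natural length:

import numpy as np

k = 100.0                                # assumed spring constant, N/m
t = np.linspace(0.0, 1.0, 2001)
x = 0.05 * t**2                          # an arbitrary motion starting at the natural length
v = np.gradient(x, t)                    # x-velocity

work_numeric = np.trapz(-k * x * v, t)   # W = integral of F * v dt with F = -k x
work_closed_form = -0.5 * k * x[-1]**2   # -(1/2) k x^2 at the final deflection
print(work_numeric, work_closed_form)    # both about -0.125 J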
Work by a gas
The work done by a body of gas on its surroundings is W = ∫ P dV, evaluated from Va to Vb, where P is pressure, V is volume, and Va and Vb are the initial and final volumes.
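For instance, the work done by an ideal gas expanding isothermally can be obtained by integrating P dV numerically; the value of nRT and the volumes below are assumed for illustration:

import numpy as np

nRT = 2494.2                        # assumed: about 1 mol of ideal gas near 300 K, in joules
Va, Vb = 0.010, 0.020               # initial and final volumes, m^3

V = np.linspace(Va, Vb, 10001)
p = nRT / V                         # ideal-gas isotherm, p = nRT / V

work_numeric = np.trapz(p, V)       # W = integral of p dV done by the gas on its surroundings
work_closed_form = nRT * np.log(Vb / Va)
print(work_numeric, work_closed_form)    # both about 1.73 kJ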
Work–energy principle
The principle of work and kinetic energy (also known as the work–energy principle) states that the work done by all forces acting on a particle (the work of the resultant force) equals the change in the kinetic energy of the particle. That is, the work W done by the resultant force on a particle equals the change in the particle's kinetic energy, W = ΔEk = ½mv2² − ½mv1²,
where v1 and v2 are the speeds of the particle before and after the work is done, and m is its mass.
The derivation of the work–energy principle begins with Newton's second law of motion and the resultant force on a particle. Computation of the scalar product of the force with the velocity of the particle evaluates the instantaneous power added to the system.
(Constraints define the direction of movement of the particle by ensuring there is no component of velocity in the direction of the constraint force. This also means the constraint forces do not add to the instantaneous power.) The time integral of this scalar equation yields work from the instantaneous power, and kinetic energy from the scalar product of acceleration with velocity. The fact that the work–energy principle eliminates the constraint forces underlies Lagrangian mechanics.
This section focuses on the work–energy principle as it applies to particle dynamics. In more general systems work can change the potential energy of a mechanical device, the thermal energy in a thermal system, or the electrical energy in an electrical device. Work transfers energy from one place to another or one form to another.
Derivation for a particle moving along a straight line
In the case the resultant force F is constant in both magnitude and direction, and parallel to the velocity of the particle, the particle is moving with constant acceleration a along a straight line. The relation between the net force and the acceleration is given by the equation F = ma (Newton's second law), and the particle displacement s can be expressed by the equation
s = (v2² − v1²)/(2a),
which follows from v2² = v1² + 2as (see Equations of motion).
The work of the net force is calculated as the product of its magnitude and the particle displacement. Substituting the above equations, one obtains:
W = Fs = ma · (v2² − v1²)/(2a) = ½mv2² − ½mv1² = ΔEk.
Other derivation:
In the general case of rectilinear motion, when the net force is not constant in magnitude, but is constant in direction, and parallel to the velocity of the particle, the work must be integrated along the path of the particle:
General derivation of the work–energy principle for a particle
For any net force acting on a particle moving along any curvilinear path, it can be demonstrated that its work equals the change in the kinetic energy of the particle by a simple derivation analogous to the equation above. It is known as the work–energy principle:
The identity requires some algebra.
From the identity and definition
it follows
The remaining part of the above derivation is just simple calculus, same as in the preceding rectilinear case.
Derivation for a particle in constrained movement
In particle dynamics, a formula equating work applied to a system to its change in kinetic energy is obtained as a first integral of Newton's second law of motion. It is useful to notice that the resultant force used in Newton's laws can be separated into forces that are applied to the particle and forces imposed by constraints on the movement of the particle. Remarkably, the work of a constraint force is zero, therefore only the work of the applied forces need be considered in the work–energy principle.
To see this, consider a particle P that follows the trajectory with a force acting on it. Isolate the particle from its environment to expose constraint forces , then Newton's Law takes the form
where is the mass of the particle.
Vector formulation
Note that n dots above a vector indicate its nth time derivative.
The scalar product of each side of Newton's law with the velocity vector yields
because the constraint forces are perpendicular to the particle velocity. Integrate this equation along its trajectory from the point to the point to obtain
The left side of this equation is the work of the applied force as it acts on the particle along the trajectory from time to time . This can also be written as
This integral is computed along the trajectory of the particle and is therefore path dependent.
The right side of the first integral of Newton's equations can be simplified using the following identity
(see product rule for derivation). Now it is integrated explicitly to obtain the change in kinetic energy,
where the kinetic energy of the particle is defined by the scalar quantity,
Tangential and normal components
It is useful to resolve the velocity and acceleration vectors into tangential and normal components along the trajectory , such that
where
Then, the scalar product of velocity with acceleration in Newton's second law takes the form
where the kinetic energy of the particle is defined by the scalar quantity,
The result is the work–energy principle for particle dynamics,
This derivation can be generalized to arbitrary rigid body systems.
Moving in a straight line (skid to a stop)
Consider the case of a vehicle moving along a straight horizontal trajectory under the action of a driving force and gravity that sum to . The constraint forces between the vehicle and the road define , and we have
For convenience let the trajectory be along the X-axis, so and the velocity is , then , and , where Fx is the component of F along the X-axis, so
Integration of both sides yields
If is constant along the trajectory, then the integral of velocity is distance, so
As an example consider a car skidding to a stop, where k is the coefficient of friction and W is the weight of the car. Then the force along the trajectory is Fx = −kW. The velocity v of the car can be determined from the length s of the skid using the work–energy principle,
kWs = Wv²/(2g), or v = √(2kgs).
This formula uses the fact that the mass of the vehicle is m = W/g.
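A quick sketch of this skid-to-stop estimate, with an assumed friction coefficient and skid length:

import numpy as np

def skid_speed(k, s, g=9.81):
    # work-energy principle for a skid to a stop: k*W*s = (1/2)*(W/g)*v**2  =>  v = sqrt(2*k*g*s)
    return np.sqrt(2.0 * k * g * s)

# assumed values: dry-road friction coefficient 0.7, skid marks 30 m long
v = skid_speed(k=0.7, s=30.0)
print(v, v * 3.6)                   # about 20.3 m/s, i.e. roughly 73 km/h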
Coasting down an inclined surface (gravity racing)
Consider the case of a vehicle that starts at rest and coasts down an inclined surface (such as mountain road), the work–energy principle helps compute the minimum distance that the vehicle travels to reach a velocity , of say 60 mph (88 fps). Rolling resistance and air drag will slow the vehicle down so the actual distance will be greater than if these forces are neglected.
Let the trajectory of the vehicle following the road be which is a curve in three-dimensional space. The force acting on the vehicle that pushes it down the road is the constant force of gravity , while the force of the road on the vehicle is the constraint force . Newton's second law yields,
The scalar product of this equation with the velocity, , yields
where is the magnitude of . The constraint forces between the vehicle and the road cancel from this equation because , which means they do no work.
Integrate both sides to obtain
The weight force W is constant along the trajectory and the integral of the vertical velocity is the vertical distance, therefore,
Recall that V(t1)=0. Notice that this result does not depend on the shape of the road followed by the vehicle.
In order to determine the distance along the road assume the downgrade is 6%, which is a steep road. This means the altitude decreases 6 feet for every 100 feet traveled—for angles this small the sin and tan functions are approximately equal. Therefore, the distance s in feet down a 6% grade to reach the velocity V is at least
s = V²/(2g · 0.06) ≈ 8.3 V²/g ≈ 2000 ft.
This formula uses the fact that the weight of the vehicle is W = mg.
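The same estimate in code form, with g in ft/s² and the 6% grade treated as sin θ ≈ 0.06, as in the text (rolling resistance and drag are neglected):

def coasting_distance(v_target, grade, g=32.2):
    # work-energy principle with only gravity acting: W*grade*s = (1/2)*(W/g)*v**2
    # => s = v**2 / (2 * g * grade)
    return v_target**2 / (2.0 * g * grade)

print(coasting_distance(v_target=88.0, grade=0.06))   # about 2000 ft to reach 60 mph (88 ft/s)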
Work of forces acting on a rigid body
The work of forces acting at various points on a single rigid body can be calculated from the work of a resultant force and torque. To see this, let the forces F1, F2, ..., Fn act on the points X1, X2, ..., Xn in a rigid body.
The trajectories of Xi, i = 1, ..., n are defined by the movement of the rigid body. This movement is given by the set of rotations [A(t)] and the trajectory d(t) of a reference point in the body. Let the coordinates xi i = 1, ..., n define these points in the moving rigid body's reference frame M, so that the trajectories traced in the fixed frame F are given by
The velocity of the points along their trajectories are
where is the angular velocity vector obtained from the skew symmetric matrix
known as the angular velocity matrix.
The small amount of work by the forces over the small displacements can be determined by approximating the displacement by so
or
This formula can be rewritten to obtain
where F and T are the resultant force and torque applied at the reference point d of the moving frame M in the rigid body.
References
Bibliography
External links
Work–energy principle
Energy properties
Scalar physical quantities
Mechanical engineering
Mechanical quantities
Force
Length | 0.777127 | 0.998887 | 0.776262 |
Euclidean vector | In mathematics, physics, and engineering, a Euclidean vector or simply a vector (sometimes called a geometric vector or spatial vector) is a geometric object that has magnitude (or length) and direction. Euclidean vectors can be added and scaled to form a vector space. A vector quantity is a vector-valued physical quantity, including units of measurement and possibly a support, formulated as a directed line segment. A vector is frequently depicted graphically as an arrow connecting an initial point A with a terminal point B, and denoted by
A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "carrier". It was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space.
Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can all be described with vectors. Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances (except, for example, position or displacement), their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors.
History
The vector concept, as it is known today, is the result of a gradual development over a period of more than 200 years. About a dozen people contributed significantly to its development. In 1835, Giusto Bellavitis abstracted the basic idea when he established the concept of equipollence. Working in a Euclidean plane, he made equipollent any pair of parallel line segments of the same length and orientation. Essentially, he realized an equivalence relation on the pairs of points (bipoints) in the plane, and thus erected the first space of vectors in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum of a real number (also called scalar) and a 3-dimensional vector. Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. As complex numbers use an imaginary unit to complement the real line, Hamilton considered the vector to be the imaginary part of a quaternion:
Several other mathematicians developed vector-like systems in the middle of the nineteenth century, including Augustin Cauchy, Hermann Grassmann, August Möbius, Comte de Saint-Venant, and Matthew O'Brien. Grassmann's 1840 work Theorie der Ebbe und Flut (Theory of the Ebb and Flow) was the first system of spatial analysis that is similar to today's system, and had ideas corresponding to the cross product, scalar product and vector differentiation. Grassmann's work was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton. His 1867 Elementary Treatise on Quaternions included extensive treatment of the nabla or del operator ∇. In 1878, Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product. This approach made vector calculations available to engineers—and others working in three dimensions and skeptical of the fourth.
Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901, Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures, which banished any mention of quaternions in the development of vector calculus.
Overview
In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a relative direction. It is formally defined as a directed line segment, or arrow, in a Euclidean space. In pure mathematics, a vector is defined more generally as any element of a vector space. In this context, vectors are abstract entities which may or may not be characterized by a magnitude and a direction. This generalized definition implies that the above-mentioned geometric entities are a special kind of abstract vectors, as they are elements of a special kind of vector space called Euclidean space. This particular article is about vectors strictly defined as arrows in Euclidean space. When it becomes necessary to distinguish these special vectors from vectors as defined in pure mathematics, they are sometimes referred to as geometric, spatial, or Euclidean vectors.
A Euclidean vector may possess a definite initial point and terminal point; such a condition may be emphasized calling the result a bound vector. When only the magnitude and direction of the vector matter, and the particular initial or terminal points are of no importance, the vector is called a free vector. The distinction between bound and free vectors is especially relevant in mechanics, where a force applied to a body has a point of contact (see resultant force and couple).
Two arrows and in space represent the same free vector if they have the same magnitude and direction: that is, they are equipollent if the quadrilateral ABB′A′ is a parallelogram. If the Euclidean space is equipped with a choice of origin, then a free vector is equivalent to the bound vector of the same magnitude and direction whose initial point is the origin.
The term vector also has generalizations to higher dimensions, and to more formal approaches with much wider applications.
Further information
In classical Euclidean geometry (i.e., synthetic geometry), vectors were introduced (during the 19th century) as equivalence classes under equipollence, of ordered pairs of points; two pairs and being equipollent if the points , in this order, form a parallelogram. Such an equivalence class is called a vector, more precisely, a Euclidean vector. The equivalence class of is often denoted
A Euclidean vector is thus an equivalence class of directed segments with the same magnitude (e.g., the length of the line segment ) and same direction (e.g., the direction from to ). In physics, Euclidean vectors are used to represent physical quantities that have both magnitude and direction, but are not located at a specific place, in contrast to scalars, which have no direction. For example, velocity, forces and acceleration are represented by vectors.
In modern geometry, Euclidean spaces are often defined from linear algebra. More precisely, a Euclidean space is defined as a set to which is associated an inner product space of finite dimension over the reals and a group action of the additive group of which is free and transitive (See Affine space for details of this construction). The elements of are called translations. It has been proven that the two definitions of Euclidean spaces are equivalent, and that the equivalence classes under equipollence may be identified with translations.
Sometimes, Euclidean vectors are considered without reference to a Euclidean space. In this case, a Euclidean vector is an element of a normed vector space of finite dimension over the reals, or, typically, an element of the real coordinate space equipped with the dot product. This makes sense, as the addition in such a vector space acts freely and transitively on the vector space itself. That is, is a Euclidean space, with itself as an associated vector space, and the dot product as an inner product.
The Euclidean space is often presented as the standard Euclidean space of dimension . This is motivated by the fact that every Euclidean space of dimension is isomorphic to the Euclidean space More precisely, given such a Euclidean space, one may choose any point as an origin. By Gram–Schmidt process, one may also find an orthonormal basis of the associated vector space (a basis such that the inner product of two basis vectors is 0 if they are different and 1 if they are equal). This defines Cartesian coordinates of any point of the space, as the coordinates on this basis of the vector These choices define an isomorphism of the given Euclidean space onto by mapping any point to the -tuple of its Cartesian coordinates, and every vector to its coordinate vector.
Examples in one dimension
Since the physicist's concept of force has a direction and a magnitude, it may be seen as a vector. As an example, consider a rightward force F of 15 newtons. If the positive axis is also directed rightward, then F is represented by the vector 15 N, and if positive points leftward, then the vector for F is −15 N. In either case, the magnitude of the vector is 15 N. Likewise, the vector representation of a displacement Δs of 4 meters would be 4 m or −4 m, depending on its direction, and its magnitude would be 4 m regardless.
In physics and engineering
Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has magnitude, has direction, and which adheres to the rules of vector addition. An example is velocity, the magnitude of which is speed. For instance, the velocity 5 meters per second upward could be represented by the vector (0, 5) (in 2 dimensions with the positive y-axis as 'up'). Another quantity represented by a vector is force, since it has a magnitude and direction and follows the rules of vector addition. Vectors also describe many other physical quantities, such as linear displacement, linear acceleration, angular acceleration, linear momentum, and angular momentum. Other physical vectors, such as the electric and magnetic field, are represented as a system of vectors at each point of a physical space; that is, a vector field. Examples of quantities that have magnitude and direction, but fail to follow the rules of vector addition, are angular displacement and electric current. Consequently, these are not vectors.
In Cartesian space
In the Cartesian coordinate system, a bound vector can be represented by identifying the coordinates of its initial and terminal point. For instance, the points and in space determine the bound vector pointing from the point on the x-axis to the point on the y-axis.
In Cartesian coordinates, a free vector may be thought of in terms of a corresponding bound vector, in this sense, whose initial point has the coordinates of the origin . It is then determined by the coordinates of that bound vector's terminal point. Thus the free vector represented by (1, 0, 0) is a vector of unit length—pointing along the direction of the positive x-axis.
This coordinate representation of free vectors allows their algebraic features to be expressed in a convenient numerical fashion. For example, the sum of the two (free) vectors (1, 2, 3) and (−2, 0, 4) is the (free) vector (1 − 2, 2 + 0, 3 + 4) = (−1, 2, 7).
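This componentwise arithmetic is easy to check numerically. The following is a minimal sketch, assuming Python with NumPy (a tooling choice not made by the original text):

import numpy as np

# Free vectors represented by their Cartesian components.
a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.0, 4.0])

# Vector addition is performed component by component.
print(a + b)  # [-1.  2.  7.]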
Euclidean and affine vectors
In the geometrical and physical settings, it is sometimes possible to associate, in a natural way, a length or magnitude and a direction to vectors. In addition, the notion of direction is strictly associated with the notion of an angle between two vectors. If the dot product of two vectors is defined—a scalar-valued product of two vectors—then it is also possible to define a length; the dot product gives a convenient algebraic characterization of both angle (a function of the dot product between any two non-zero vectors) and length (the square root of the dot product of a vector by itself). In three dimensions, it is further possible to define the cross product, which supplies an algebraic characterization of the area and orientation in space of the parallelogram defined by two vectors (used as sides of the parallelogram). In any dimension (and, in particular, higher dimensions), it is possible to define the exterior product, which (among other things) supplies an algebraic characterization of the area and orientation in space of the n-dimensional parallelotope defined by n vectors.
In a pseudo-Euclidean space, a vector's squared length can be positive, negative, or zero. An important example is Minkowski space (which is important to our understanding of special relativity).
However, it is not always possible or desirable to define the length of a vector. This more general type of spatial vector is the subject of vector spaces (for free vectors) and affine spaces (for bound vectors, as each represented by an ordered pair of "points"). One physical example comes from thermodynamics, where many quantities of interest can be considered vectors in a space with no notion of length or angle.
Generalizations
In physics, as well as mathematics, a vector is often identified with a tuple of components, or list of numbers, that act as scalar coefficients for a set of basis vectors. When the basis is transformed, for example by rotation or stretching, then the components of any vector in terms of that basis also transform in an opposite sense. The vector itself has not changed, but the basis has, so the components of the vector must change to compensate. The vector is called covariant or contravariant, depending on how the transformation of the vector's components is related to the transformation of the basis. In general, contravariant vectors are "regular vectors" with units of distance (such as a displacement), or distance times some other unit (such as velocity or acceleration); covariant vectors, on the other hand, have units of one-over-distance such as gradient. If you change units (a special case of a change of basis) from meters to millimeters, a scale factor of 1/1000, a displacement of 1 m becomes 1000 mm—a contravariant change in numerical value. In contrast, a gradient of 1 K/m becomes 0.001 K/mm—a covariant change in value (for more, see covariance and contravariance of vectors). Tensors are another type of quantity that behave in this way; a vector is one type of tensor.
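As a minimal numerical sketch of the unit-change example above (assuming Python with NumPy; the metre-to-millimetre factor of 1000 is taken directly from the text):

import numpy as np

scale = 1.0 / 1000.0                           # change of basis: metres -> millimetres

displacement_m = np.array([1.0, 0.0, 0.0])     # a 1 m displacement (contravariant)
gradient_K_per_m = np.array([1.0, 0.0, 0.0])   # a 1 K/m gradient (covariant)

# Contravariant components transform opposite to the basis scaling.
displacement_mm = displacement_m / scale       # becomes 1000 mm
# Covariant components transform with the basis scaling.
gradient_K_per_mm = gradient_K_per_m * scale   # becomes 0.001 K/mm

print(displacement_mm[0], gradient_K_per_mm[0])  # 1000.0 0.001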
In pure mathematics, a vector is any element of a vector space over some field and is often represented as a coordinate vector. The vectors described in this article are a very special case of this general definition, because they are contravariant with respect to the ambient space. Contravariance captures the physical intuition behind the idea that a vector has "magnitude and direction".
Representations
Vectors are usually denoted in lowercase boldface, as in , and , or in lowercase italic boldface, as in a. (Uppercase letters are typically used to represent matrices.) Other conventions include or a, especially in handwriting. Alternatively, some use a tilde (~) or a wavy underline drawn beneath the symbol, e.g. , which is a convention for indicating boldface type. If the vector represents a directed distance or displacement from a point A to a point B (see figure), it can also be denoted as or AB. In German literature, it was especially common to represent vectors with small fraktur letters such as .
Vectors are usually shown in graphs or other diagrams as arrows (directed line segments), as illustrated in the figure. Here, the point A is called the origin, tail, base, or initial point, and the point B is called the head, tip, endpoint, terminal point or final point. The length of the arrow is proportional to the vector's magnitude, while the direction in which the arrow points indicates the vector's direction.
On a two-dimensional diagram, a vector perpendicular to the plane of the diagram is sometimes desired. These vectors are commonly shown as small circles. A circle with a dot at its centre (Unicode U+2299 ⊙) indicates a vector pointing out of the front of the diagram, toward the viewer. A circle with a cross inscribed in it (Unicode U+2297 ⊗) indicates a vector pointing into and behind the diagram. These can be thought of as viewing the tip of an arrow head on and viewing the flights of an arrow from the back.
In order to calculate with vectors, the graphical representation may be too cumbersome. Vectors in an n-dimensional Euclidean space can be represented as coordinate vectors in a Cartesian coordinate system. The endpoint of a vector can be identified with an ordered list of n real numbers (n-tuple). These numbers are the coordinates of the endpoint of the vector, with respect to a given Cartesian coordinate system, and are typically called the scalar components (or scalar projections) of the vector on the axes of the coordinate system.
As an example in two dimensions (see figure), the vector from the origin O = (0, 0) to the point A = (2, 3) is simply written as
The notion that the tail of the vector coincides with the origin is implicit and easily understood. Thus, the more explicit notation is usually deemed not necessary (and is indeed rarely used).
In three-dimensional Euclidean space (or ), vectors are identified with triples of scalar components:
also written,
This can be generalised to n-dimensional Euclidean space (or ).
These numbers are often arranged into a column vector or row vector, particularly when dealing with matrices, as follows:
Another way to represent a vector in n-dimensions is to introduce the standard basis vectors. For instance, in three dimensions, there are three of them:
These have the intuitive interpretation as vectors of unit length pointing up the x-, y-, and z-axis of a Cartesian coordinate system, respectively. In terms of these, any vector a in can be expressed in the form:
or
where a1, a2, a3 are called the vector components (or vector projections) of a on the basis vectors or, equivalently, on the corresponding Cartesian axes x, y, and z (see figure), while a1, a2, a3 are the respective scalar components (or scalar projections).
In introductory physics textbooks, the standard basis vectors are often denoted instead (or , in which the hat symbol typically denotes unit vectors). In this case, the scalar and vector components are denoted respectively ax, ay, az, and ax, ay, az (note the difference in boldface). Thus,
The notation ei is compatible with the index notation and the summation convention commonly used in higher level mathematics, physics, and engineering.
Decomposition or resolution
As explained above, a vector is often described by a set of vector components that add up to form the given vector. Typically, these components are the projections of the vector on a set of mutually perpendicular reference axes (basis vectors). The vector is said to be decomposed or resolved with respect to that set.
The decomposition or resolution of a vector into components is not unique, because it depends on the choice of the axes on which the vector is projected.
Moreover, the use of Cartesian unit vectors such as as a basis in which to represent a vector is not mandated. Vectors can also be expressed in terms of an arbitrary basis, including the unit vectors of a cylindrical coordinate system or spherical coordinate system. The latter two choices are more convenient for solving problems which possess cylindrical or spherical symmetry, respectively.
The choice of a basis does not affect the properties of a vector or its behaviour under transformations.
A vector can also be broken up with respect to "non-fixed" basis vectors that change their orientation as a function of time or space. For example, a vector in three-dimensional space can be decomposed with respect to two axes, respectively normal, and tangent to a surface (see figure). Moreover, the radial and tangential components of a vector relate to the radius of rotation of an object. The former is parallel to the radius and the latter is orthogonal to it.
In these cases, each of the components may be in turn decomposed with respect to a fixed coordinate system or basis set (e.g., a global coordinate system, or inertial reference frame).
Properties and operations
The following section uses the Cartesian coordinate system with basis vectors
and assumes that all vectors have the origin as a common base point. A vector a will be written as
Equality
Two vectors are said to be equal if they have the same magnitude and direction. Equivalently they will be equal if their coordinates are equal. So two vectors
and
are equal if
Opposite, parallel, and antiparallel vectors
Two vectors are opposite if they have the same magnitude but opposite direction; so two vectors
and
are opposite if
Two vectors are equidirectional (or codirectional) if they have the same direction but not necessarily the same magnitude.
Two vectors are parallel if they have the same or opposite direction but not necessarily the same magnitude.
Addition and subtraction
The sum of a and b of two vectors may be defined as
The resulting vector is sometimes called the resultant vector of a and b.
The addition may be represented graphically by placing the tail of the arrow b at the head of the arrow a, and then drawing an arrow from the tail of a to the head of b. The new arrow drawn represents the vector a + b, as illustrated below:
This addition method is sometimes called the parallelogram rule because a and b form the sides of a parallelogram and a + b is one of the diagonals. If a and b are bound vectors that have the same base point, this point will also be the base point of a + b. One can check geometrically that a + b = b + a and (a + b) + c = a + (b + c).
The difference of a and b is
Subtraction of two vectors can be geometrically illustrated as follows: to subtract b from a, place the tails of a and b at the same point, and then draw an arrow from the head of b to the head of a. This new arrow represents the vector (-b) + a, with (-b) being the opposite of b, see drawing. And (-b) + a = a − b.
Scalar multiplication
A vector may also be multiplied, or re-scaled, by any real number r. In the context of conventional vector algebra, these real numbers are often called scalars (from scale) to distinguish them from vectors. The operation of multiplying a vector by a scalar is called scalar multiplication. The resulting vector is
Intuitively, multiplying by a scalar r stretches a vector out by a factor of r. Geometrically, this can be visualized (at least in the case when r is an integer) as placing r copies of the vector in a line where the endpoint of one vector is the initial point of the next vector.
If r is negative, then the vector changes direction: it flips around by an angle of 180°. Two examples (r = −1 and r = 2) are given below:
Scalar multiplication is distributive over vector addition in the following sense: r(a + b) = ra + rb for all vectors a and b and all scalars r. One can also show that a − b = a + (−1)b.
Length
The length, magnitude or norm of the vector a is denoted by ‖a‖ or, less commonly, |a|, which is not to be confused with the absolute value (a scalar "norm").
The length of the vector a can be computed with the Euclidean norm ‖a‖ = √(a1² + a2² + a3²),
which is a consequence of the Pythagorean theorem since the basis vectors e1, e2, e3 are orthogonal unit vectors.
This happens to be equal to the square root of the dot product, discussed below, of the vector with itself:
Unit vector
A unit vector is any vector with a length of one; normally unit vectors are used simply to indicate direction. A vector of arbitrary length can be divided by its length to create a unit vector. This is known as normalizing a vector. A unit vector is often indicated with a hat as in â.
To normalize a vector , scale the vector by the reciprocal of its length ‖a‖. That is:
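As a sketch of both computations, the Euclidean length and the normalization to a unit vector, assuming Python with NumPy:

import numpy as np

a = np.array([3.0, 4.0, 12.0])

length = np.sqrt(np.dot(a, a))   # Euclidean norm via the dot product; here 13.0
a_hat = a / length               # normalization: scale by the reciprocal of the length

print(length, np.linalg.norm(a_hat))  # 13.0 1.0 (the unit vector has length one)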
Zero vector
The zero vector is the vector with length zero. Written out in coordinates, the vector is , and it is commonly denoted , 0, or simply 0. Unlike any other vector, it has an arbitrary or indeterminate direction, and cannot be normalized (that is, there is no unit vector that is a multiple of the zero vector). The sum of the zero vector with any vector a is a (that is, ).
Dot product
The dot product of two vectors a and b (sometimes called the inner product, or, since its result is a scalar, the scalar product) is denoted by a ∙ b, and is defined as a ∙ b = ‖a‖ ‖b‖ cos θ,
where θ is the measure of the angle between a and b (see trigonometric function for an explanation of cosine). Geometrically, this means that a and b are drawn with a common start point, and then the length of a is multiplied with the length of the component of b that points in the same direction as a.
The dot product can also be defined as the sum of the products of the components of each vector, that is, a ∙ b = a1b1 + a2b2 + a3b3.
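A quick numerical check that the two definitions agree, assuming Python with NumPy; the 45° angle between the chosen vectors is an illustrative assumption:

import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])          # makes a 45-degree angle with a

# Component definition: sum of the products of the components.
dot_components = float(np.dot(a, b))   # 1.0

# Geometric definition: |a| |b| cos(theta) with theta = 45 degrees.
dot_geometric = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(np.pi / 4)

print(dot_components, round(dot_geometric, 12))  # both 1.0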
Cross product
The cross product (also called the vector product or outer product) is only meaningful in three or seven dimensions. The cross product differs from the dot product primarily in that the result of the cross product of two vectors is a vector. The cross product, denoted a × b, is a vector perpendicular to both a and b and is defined as
where θ is the measure of the angle between a and b, and n is a unit vector perpendicular to both a and b which completes a right-handed system. The right-handedness constraint is necessary because there exist two unit vectors that are perpendicular to both a and b, namely, n and (−n).
The cross product a × b is defined so that a, b, and a × b also becomes a right-handed system (although a and b are not necessarily orthogonal). This is the right-hand rule.
The length of a × b can be interpreted as the area of the parallelogram having a and b as sides.
The cross product can be written as
For arbitrary choices of spatial orientation (that is, allowing for left-handed as well as right-handed coordinate systems) the cross product of two vectors is a pseudovector instead of a vector (see below).
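A short sketch of these properties (assuming Python with NumPy): the result of the cross product is perpendicular to both arguments, and its length equals the area of the parallelogram they span.

import numpy as np

a = np.array([2.0, 0.0, 0.0])
b = np.array([0.0, 3.0, 0.0])

c = np.cross(a, b)                     # right-handed: points along +z here

print(c)                               # [0. 0. 6.]
print(np.dot(c, a), np.dot(c, b))      # 0.0 0.0  (perpendicular to both)
print(np.linalg.norm(c))               # 6.0, the area of the 2-by-3 rectangle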
Scalar triple product
The scalar triple product (also called the box product or mixed triple product) is not really a new operator, but a way of applying the other two multiplication operators to three vectors. The scalar triple product is sometimes denoted by (a b c) and defined as:
It has three primary uses. First, the absolute value of the box product is the volume of the parallelepiped which has edges that are defined by the three vectors. Second, the scalar triple product is zero if and only if the three vectors are linearly dependent, which can be easily proved by considering that in order for the three vectors to not make a volume, they must all lie in the same plane. Third, the box product is positive if and only if the three vectors a, b and c are right-handed.
In components (with respect to a right-handed orthonormal basis), if the three vectors are thought of as rows (or columns, but in the same order), the scalar triple product is simply the determinant of the 3-by-3 matrix having the three vectors as rows
The scalar triple product is linear in all three entries and anti-symmetric in the following sense:
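As a numerical sketch of the determinant form described above (assuming Python with NumPy):

import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])
c = np.array([0.0, 0.0, 3.0])

box_as_det = np.linalg.det(np.array([a, b, c]))   # determinant with the vectors as rows
box_as_dot = np.dot(a, np.cross(b, c))            # equivalently a . (b x c)

print(box_as_det, box_as_dot)   # both 6.0: the volume of the 1 x 2 x 3 parallelepiped

Swapping any two of the rows changes the sign of the result, illustrating the antisymmetry, and three linearly dependent vectors give zero.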
Conversion between multiple Cartesian bases
All examples thus far have dealt with vectors expressed in terms of the same basis, namely, the e basis {e1, e2, e3}. However, a vector can be expressed in terms of any number of different bases that are not necessarily aligned with each other, and still remain the same vector. In the e basis, a vector a is expressed, by definition, as
The scalar components in the e basis are, by definition,
In another orthonormal basis n = {n1, n2, n3} that is not necessarily aligned with e, the vector a is expressed as
and the scalar components in the n basis are, by definition,
The values of p, q, r, and u, v, w relate to the unit vectors in such a way that the resulting vector sum is exactly the same physical vector a in both cases. It is common to encounter vectors known in terms of different bases (for example, one basis fixed to the Earth and a second basis fixed to a moving vehicle). In such a case it is necessary to develop a method to convert between bases so the basic vector operations such as addition and subtraction can be performed. One way to express u, v, w in terms of p, q, r is to use column matrices along with a direction cosine matrix containing the information that relates the two bases. Such an expression can be formed by substitution of the above equations to form
Distributing the dot-multiplication gives
Replacing each dot product with a unique scalar gives
and these equations can be expressed as the single matrix equation
This matrix equation relates the scalar components of a in the n basis (u,v, and w) with those in the e basis (p, q, and r). Each matrix element cjk is the direction cosine relating nj to ek. The term direction cosine refers to the cosine of the angle between two unit vectors, which is also equal to their dot product. Therefore,
By referring collectively to e1, e2, e3 as the e basis and to n1, n2, n3 as the n basis, the matrix containing all the cjk is known as the "transformation matrix from e to n", or the "rotation matrix from e to n" (because it can be imagined as the "rotation" of a vector from one basis to another), or the "direction cosine matrix from e to n" (because it contains direction cosines). The properties of a rotation matrix are such that its inverse is equal to its transpose. This means that the "rotation matrix from e to n" is the transpose of "rotation matrix from n to e".
The properties of a direction cosine matrix, C are:
the determinant is unity, |C| = 1;
the inverse is equal to the transpose;
the rows and columns are orthogonal unit vectors, therefore their dot products are zero.
The advantage of this method is that a direction cosine matrix can usually be obtained independently by using Euler angles or a quaternion to relate the two vector bases, so the basis conversions can be performed directly, without having to work out all the dot products described above.
By applying several matrix multiplications in succession, any vector can be expressed in any basis so long as the set of direction cosines is known relating the successive bases.
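A minimal sketch of such a conversion, assuming Python with NumPy; the particular n basis (the e basis rotated by 30° about e3) is an illustrative assumption, not something specified in the text:

import numpy as np

theta = np.radians(30.0)

e = np.eye(3)                                          # e basis vectors as rows
n = np.array([[ np.cos(theta), np.sin(theta), 0.0],    # n basis: e rotated about e3
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

C = n @ e.T                                 # direction cosine matrix, C[j, k] = n_j . e_k

p_q_r = np.array([1.0, 2.0, 3.0])           # components of a in the e basis
u_v_w = C @ p_q_r                           # components of the same vector in the n basis

# The inverse of a rotation matrix is its transpose, so converting back recovers p, q, r.
print(np.allclose(C.T @ u_v_w, p_q_r))      # True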
Other dimensions
With the exception of the cross and triple products, the above formulae generalise to two dimensions and higher dimensions. For example, addition generalises to two dimensions as
and in four dimensions as
The cross product does not readily generalise to other dimensions, though the closely related exterior product does, whose result is a bivector. In two dimensions this is simply a pseudoscalar
A seven-dimensional cross product is similar to the cross product in that its result is a vector orthogonal to the two arguments; there is however no natural way of selecting one of the possible such products.
Physics
Vectors have many uses in physics and other sciences.
Length and units
In abstract vector spaces, the length of the arrow depends on a dimensionless scale. If it represents, for example, a force, the "scale" is of physical dimension length/force. Thus there is typically consistency in scale among quantities of the same dimension, but otherwise scale ratios may vary; for example, if "1 newton" and "5 m" are both represented with an arrow of 2 cm, the scales are 1 m:50 N and 1:250 respectively. Equal length of vectors of different dimension has no particular significance unless there is some proportionality constant inherent in the system that the diagram represents. Also length of a unit vector (of dimension length, not length/force, etc.) has no coordinate-system-invariant significance.
Vector-valued functions
Often in areas of physics and mathematics, a vector evolves in time, meaning that it depends on a time parameter t. For instance, if r represents the position vector of a particle, then r(t) gives a parametric representation of the trajectory of the particle. Vector-valued functions can be differentiated and integrated by differentiating or integrating the components of the vector, and many of the familiar rules from calculus continue to hold for the derivative and integral of vector-valued functions.
Position, velocity and acceleration
The position of a point x = (x1, x2, x3) in three-dimensional space can be represented as a position vector whose base point is the origin
The position vector has dimensions of length.
Given two points x = (x1, x2, x3), y = (y1, y2, y3) their displacement is a vector
which specifies the position of y relative to x. The length of this vector gives the straight-line distance from x to y. Displacement has the dimensions of length.
The velocity v of a point or particle is a vector; its length gives the speed. For constant velocity the position at time t will be
where x0 is the position at time t = 0. Velocity is the time derivative of position. Its dimensions are length/time.
The acceleration a of a point is a vector which is the time derivative of velocity. Its dimensions are length/time².
Force, energy, work
Force is a vector with dimensions of mass×length/time² (SI unit the newton, kg⋅m⋅s⁻²), and Newton's second law is the scalar multiplication
Work is the dot product of force and displacement
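A short numerical sketch of both statements (assuming Python with NumPy; the numbers are illustrative):

import numpy as np

m = 2.0                                  # mass in kg
a = np.array([0.0, 0.0, -9.81])          # acceleration in m/s^2
F = m * a                                # Newton's second law: force in newtons

d = np.array([0.0, 0.0, -3.0])           # displacement in metres
W = np.dot(F, d)                         # work in joules

print(F)   # [  0.     0.   -19.62]
print(W)   # 58.86, since force and displacement are parallel here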
Vectors, pseudovectors, and transformations
An alternative characterization of Euclidean vectors, especially in physics, describes them as lists of quantities which behave in a certain way under a coordinate transformation. A contravariant vector is required to have components that "transform opposite to the basis" under changes of basis. The vector itself does not change when the basis is transformed; instead, the components of the vector make a change that cancels the change in the basis. In other words, if the reference axes (and the basis derived from it) were rotated in one direction, the component representation of the vector would rotate in the opposite way to generate the same final vector. Similarly, if the reference axes were stretched in one direction, the components of the vector would reduce in an exactly compensating way. Mathematically, if the basis undergoes a transformation described by an invertible matrix M, so that a coordinate vector x is transformed to , then a contravariant vector v must be similarly transformed via . This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities. For example, if v consists of the x, y, and z-components of velocity, then v is a contravariant vector: if the coordinates of space are stretched, rotated, or twisted, then the components of the velocity transform in the same way. On the other hand, for instance, a triple consisting of the length, width, and height of a rectangular box could make up the three components of an abstract vector, but this vector would not be contravariant, since rotating the box does not change the box's length, width, and height. Examples of contravariant vectors include displacement, velocity, electric field, momentum, force, and acceleration.
In the language of differential geometry, the requirement that the components of a vector transform according to the same matrix of the coordinate transition is equivalent to defining a contravariant vector to be a tensor of contravariant rank one. Alternatively, a contravariant vector is defined to be a tangent vector, and the rules for transforming a contravariant vector follow from the chain rule.
Some vectors transform like contravariant vectors, except that when they are reflected through a mirror, they gain an additional minus sign. A transformation that switches right-handedness to left-handedness and vice versa like a mirror does is said to change the orientation of space. A vector which gains a minus sign when the orientation of space changes is called a pseudovector or an axial vector. Ordinary vectors are sometimes called true vectors or polar vectors to distinguish them from pseudovectors. Pseudovectors occur most frequently as the cross product of two ordinary vectors.
One example of a pseudovector is angular velocity. Driving in a car, and looking forward, each of the wheels has an angular velocity vector pointing to the left. If the world is reflected in a mirror which switches the left and right side of the car, the reflection of this angular velocity vector points to the right, but the angular velocity vector of the wheel still points to the left, corresponding to the minus sign. Other examples of pseudovectors include magnetic field, torque, or more generally any cross product of two (true) vectors.
This distinction between vectors and pseudovectors is often ignored, but it becomes important in studying symmetry properties.
See also
Affine space, which distinguishes between vectors and points
Banach space
Clifford algebra
Complex number
Coordinate system
Covariance and contravariance of vectors
Four-vector, a non-Euclidean vector in Minkowski space (i.e. four-dimensional spacetime), important in relativity
Function space
Grassmann's Ausdehnungslehre
Hilbert space
Normal vector
Null vector
Parity (physics)
Position (geometry)
Pseudovector
Quaternion
Tangential and normal components (of a vector)
Tensor
Unit vector
Vector bundle
Vector calculus
Vector notation
Vector-valued function
Notes
References
Mathematical treatments
.
.
.
.
Physical treatments
External links
Online vector identities (PDF)
Introducing Vectors A conceptual introduction (applied mathematics)
Kinematics
Abstract algebra
Vector calculus
Linear algebra
Concepts in physics
Vectors (mathematics and physics)
Analytic geometry
Euclidean geometry | 0.778558 | 0.997041 | 0.776254 |
Uncertainty principle | The uncertainty principle, also known as Heisenberg's indeterminacy principle, is a fundamental concept in quantum mechanics. It states that there is a limit to the precision with which certain pairs of physical properties, such as position and momentum, can be simultaneously known. In other words, the more accurately one property is measured, the less accurately the other property can be known.
More formally, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the product of the accuracy of certain related pairs of measurements on a quantum system, such as position, x, and momentum, p. Such paired-variables are known as complementary variables or canonically conjugate variables.
First introduced in 1927 by German physicist Werner Heisenberg, the formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928:
σx σp ≥ ħ/2,
where ħ is the reduced Planck constant.
The quintessentially quantum mechanical uncertainty principle comes in many forms other than position–momentum. The energy–time relationship is widely used to relate quantum state lifetime to measured energy widths but its formal derivation is fraught with confusing issues about the nature of time. The basic principle has been extended in numerous directions; it must be considered in many kinds of fundamental physical measurements.
Position-momentum
It is vital to illustrate how the principle applies to relatively intelligible physical situations since it is indiscernible on the macroscopic scales that humans experience. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily.
Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized at the same time. A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation , where is the wavenumber.
In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable is performed, then the system is in a particular eigenstate of that observable. However, the particular eigenstate of the observable need not be an eigenstate of another observable : If so, then it does not have a unique associated measurement for it, as the system is not in an eigenstate of that observable.
Visualization
The uncertainty principle can be visualized using the position- and momentum-space wavefunctions for one spinless particle with mass in one dimension.
The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized so the possible momentum components the particle could have are more widespread. Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread. These wavefunctions are Fourier transforms of each other: mathematically, the uncertainty principle expresses the relationship between conjugate variables in the transform.
Wave mechanics interpretation
According to the de Broglie hypothesis, every object in the universe is associated with a wave. Thus every object, from an elementary particle to atoms, molecules and on up to planets and beyond, is subject to the uncertainty principle.
The time-independent wave function of a single-moded plane wave of wavenumber k0 or momentum p0 is
The Born rule states that this should be interpreted as a probability density amplitude function in the sense that the probability of finding the particle between a and b is
In the case of the single-mode plane wave, is 1 if and 0 otherwise. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet.
On the other hand, consider a wave function that is a sum of many waves, which we may write as
where An represents the relative contribution of the mode pn to the overall total. The figures to the right show how with the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes
with representing the amplitude of these modes and is called the wave function in momentum space. In mathematical terms, we say that is the Fourier transform of and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta.
One way to quantify the precision of the position and momentum is the standard deviation σ. Since is a probability density function for position, we calculate its standard deviation.
The precision of the position is improved, i.e. reduced σx, by using many plane waves, thereby weakening the precision of the momentum, i.e. increased σp. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound.
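This tradeoff can be checked numerically. The sketch below (assuming Python with NumPy, and setting ħ = 1) builds a Gaussian wave packet, obtains the momentum-space wave function with a discrete Fourier transform, and evaluates σx σp, which stays at (for a Gaussian) or above the Kennard bound of 1/2:

import numpy as np

hbar = 1.0
x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]

sigma = 2.0                                        # width of the position-space packet
psi_x = np.exp(-x**2 / (4 * sigma**2))             # Gaussian wave packet
psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)    # normalize

p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx) * hbar   # momentum grid, p = hbar k
dp = abs(p[1] - p[0])
psi_p = np.fft.fft(psi_x) * dx                     # momentum-space wave function (up to a phase)
psi_p /= np.sqrt(np.sum(np.abs(psi_p)**2) * dp)    # normalize

def spread(values, density, step):
    mean = np.sum(values * density) * step
    return np.sqrt(np.sum((values - mean)**2 * density) * step)

sigma_x = spread(x, np.abs(psi_x)**2, dx)
sigma_p = spread(p, np.abs(psi_p)**2, dp)

print(sigma_x * sigma_p)   # approximately 0.5 = hbar/2 for a Gaussian

Narrowing the packet (smaller sigma) decreases σx but increases σp, keeping the product at or above ħ/2.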
Proof of the Kennard inequality using wave mechanics
We are interested in the variances of position and momentum, defined as
Without loss of generality, we will assume that the means vanish, which just amounts to a shift of the origin of our coordinates. (A more general proof that does not make this assumption is given below.) This gives us the simpler form
The function can be interpreted as a vector in a function space. We can define an inner product for a pair of functions u(x) and v(x) in this vector space:
where the asterisk denotes the complex conjugate.
With this inner product defined, we note that the variance for position can be written as
We can repeat this for momentum by interpreting the function as a vector, but we can also take advantage of the fact that and are Fourier transforms of each other. We evaluate the inverse Fourier transform through integration by parts:
where in the integration by parts, the cancelled term vanishes because the wave function vanishes at infinity, and the final two integrations re-assert the Fourier transforms. Often the term is called the momentum operator in position space. Applying Plancherel's theorem and then Parseval's theorem, we see that the variance for momentum can be written as
The Cauchy–Schwarz inequality asserts that
The modulus squared of any complex number z can be expressed as
we let and and substitute these into the equation above to get
All that remains is to evaluate these inner products.
Plugging this into the above inequalities, we get
or taking the square root
with equality if and only if p and x are linearly dependent. Note that the only physics involved in this proof was that and are wave functions for position and momentum, which are Fourier transforms of each other. A similar result would hold for any pair of conjugate variables.
Matrix mechanics interpretation
In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. When considering pairs of observables, an important quantity is the commutator. For a pair of operators and , one defines their commutator as
In the case of position and momentum, the commutator is the canonical commutation relation
The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let be a right eigenstate of position with a constant eigenvalue . By definition, this means that Applying the commutator to yields
where is the identity operator.
Suppose, for the sake of proof by contradiction, that is also a right eigenstate of momentum, with constant eigenvalue . If this were true, then one could write
On the other hand, the above canonical commutation relation requires that
This implies that no quantum state can simultaneously be both a position and a momentum eigenstate.
When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable. For example, if a particle's position is measured, then the state amounts to a position eigenstate. This means that the state is not a momentum eigenstate, however, but rather it can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations,
As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle.
Examples
Quantum harmonic oscillator stationary states
Consider a one-dimensional quantum harmonic oscillator. It is possible to express the position and momentum operators in terms of the creation and annihilation operators:
Using the standard rules for creation and annihilation operators on the energy eigenstates,
the variances may be computed directly,
The product of these standard deviations is then
In particular, the above Kennard bound is saturated for the ground state , for which the probability density is just the normal distribution.
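A numerical sketch (assuming Python with NumPy, and setting ħ = m = ω = 1): build truncated creation/annihilation matrices, form x and p, and evaluate σx σp in the first few energy eigenstates; the product is (n + 1/2)ħ, so only the ground state saturates the Kennard bound.

import numpy as np

N = 40                                       # truncation of the Fock space
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator (truncated matrix)
adag = a.conj().T                            # creation operator

# With hbar = m = omega = 1: x = (a + a^dag)/sqrt(2), p = i (a^dag - a)/sqrt(2)
x = (a + adag) / np.sqrt(2)
p = 1j * (adag - a) / np.sqrt(2)

for n in (0, 1, 2):                          # energy eigenstates |0>, |1>, |2>
    ket = np.zeros(N)
    ket[n] = 1.0
    var_x = (ket @ (x @ x) @ ket - (ket @ x @ ket)**2).real
    var_p = (ket @ (p @ p) @ ket - (ket @ p @ ket)**2).real
    print(n, np.sqrt(var_x * var_p))         # 0.5, 1.5, 2.5 = n + 1/2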
Quantum harmonic oscillators with Gaussian initial condition
In a quantum harmonic oscillator of characteristic angular frequency ω, place a state that is offset from the bottom of the potential by some displacement x0 as
where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the -dependent solution. After many cancelations, the probability densities reduce to
where we have used the notation to denote a normal distribution of mean μ and variance σ2. Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as
From the relations
we can conclude the following (the rightmost equality holds only when Ω = ω):
Coherent states
A coherent state is a right eigenstate of the annihilation operator,
which may be represented in terms of Fock states as
In the picture where the coherent state is a massive particle in a quantum harmonic oscillator, the position and momentum operators may be expressed in terms of the annihilation operators in the same formulas above and used to calculate the variances,
Therefore, every coherent state saturates the Kennard bound
with position and momentum each contributing an amount in a "balanced" way. Moreover, every squeezed coherent state also saturates the Kennard bound although the individual contributions of position and momentum need not be balanced in general.
Particle in a box
Consider a particle in a one-dimensional box of length . The eigenfunctions in position and momentum space are
and
where and we have used the de Broglie relation . The variances of and can be calculated explicitly:
The product of the standard deviations is therefore
For all n, the quantity is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when n = 1, in which case
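A direct numerical check of this result (assuming Python with NumPy, with ħ = 1, box length L = 1, and mass m = 1):

import numpy as np

hbar, L = 1.0, 1.0
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]

for n in (1, 2, 3):
    psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)   # box eigenfunction
    prob = np.abs(psi)**2

    mean_x = np.sum(x * prob) * dx
    sigma_x = np.sqrt(np.sum((x - mean_x)**2 * prob) * dx)
    sigma_p = n * np.pi * hbar / L   # <p> = 0 and <p^2> = (n pi hbar / L)^2 for these states

    print(n, sigma_x * sigma_p / (hbar / 2))   # ratio to the Kennard bound: about 1.14, 3.34, 5.25

The ratio exceeds 1 for every n and is smallest for the ground state n = 1, as stated above.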
Constant momentum
Assume a particle initially has a momentum space wave function described by a normal distribution around some constant momentum p0 according to
where we have introduced a reference scale , with describing the width of the distribution—cf. nondimensionalization. If the state is allowed to evolve in free space, then the time-dependent momentum and position space wave functions are
Since and , this can be interpreted as a particle moving along with constant momentum at arbitrarily high precision. On the other hand, the standard deviation of the position is
such that the uncertainty product can only increase with time as
Energy–time uncertainty principle
Energy spectrum line-width vs lifetime
An energy–time uncertainty relation like
has a long, controversial history; the meaning of and varies and different formulations have different arenas of validity. However, one well-known application is both well established and experimentally verified: the connection between the life-time of a resonance state, and its energy width :
In particle-physics, widths from experimental fits to the Breit–Wigner energy distribution are used to characterize the lifetime of quasi-stable or decaying states.
An informal, heuristic meaning of the principle is the following: A state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy. For example, in spectroscopy, excited states have a finite lifetime. By the time–energy uncertainty principle, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow-decaying states have a narrow linewidth. The same linewidth effect also makes it difficult to specify the rest mass of unstable, fast-decaying particles in particle physics. The faster the particle decays (the shorter its lifetime), the less certain is its mass (the larger the particle's width).
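As a rough numerical sketch of this connection (plain Python; the 10⁻²³ s lifetime is the Delta-particle scale quoted further below, and ΔE ≈ ħ/τ is used as an order-of-magnitude heuristic rather than an exact relation):

# Order-of-magnitude estimate of a natural width from a lifetime, using Delta E ~ hbar / tau.
hbar_J_s = 1.054571817e-34      # reduced Planck constant in J s
eV = 1.602176634e-19            # joules per electronvolt

tau = 1e-23                     # lifetime in seconds
delta_E_MeV = hbar_J_s / tau / eV / 1e6

print(f"{delta_E_MeV:.0f} MeV")  # roughly 66 MeV: a very broad line, of order 100 MeV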
Time in quantum mechanics
The concept of "time" in quantum mechanics offers many challenges. There is no quantum theory of time measurement; relativity is both fundamental to time and difficult to include in quantum mechanics. While position and momentum are associated with a single particle, time is a system property: it has no operator needed for the Robertson–Schrödinger relation. The mathematical treatment of stable and unstable quantum systems differ. These factors combine to make energy–time uncertainty principles controversial.
Three notions of "time" can be distinguished: external, intrinsic, and observable. External or laboratory time is seen by the experimenter; intrinsic time is inferred by changes in dynamic variables, like the hands of a clock or the motion of a free particle; observable time concerns time as an observable, the measurement of time-separated events.
An external-time energy–time uncertainty principle might say that measuring the energy of a quantum system to an accuracy requires a time interval . However, Yakir Aharonov and David Bohm have shown that, in some quantum systems, energy can be measured accurately within an arbitrarily short time: external-time uncertainty principles are not universal.
Intrinsic time is the basis for several formulations of energy–time uncertainty relations, including the Mandelstam–Tamm relation discussed in the next section. A physical system with an intrinsic time closely matching the external laboratory time is called a "clock".
Observable time, measuring time between two events, remains a challenge for quantum theories; some progress has been made using positive operator-valued measure concepts.
Mandelstam–Tamm
In 1945, Leonid Mandelstam and Igor Tamm derived a non-relativistic time–energy uncertainty relation as follows. From Heisenberg mechanics, the generalized Ehrenfest theorem for an observable B without explicit time dependence, represented by a self-adjoint operator relates time dependence of the average value of to the average of its commutator with the Hamiltonian:
The value of is then substituted in the Robertson uncertainty relation for the energy operator and :
giving
(whenever the denominator is nonzero).
While this is a universal result, it depends upon the observable chosen and that the deviations and are computed for a particular state.
Identifying and the characteristic time
gives an energy–time relationship
Although has the dimension of time, it is different from the time parameter t that enters the Schrödinger equation. This can be interpreted as time for which the expectation value of the observable, changes by an amount equal to one standard deviation.
Examples:
The time a free quantum particle passes a point in space is more uncertain as the energy of the state is more precisely controlled: Since the time spread is related to the particle position spread and the energy spread is related to the momentum spread, this relation is directly related to position–momentum uncertainty.
A Delta particle, a quasistable composite of quarks related to protons and neutrons, has a lifetime of 10⁻²³ s, so its measured mass equivalent to energy, 1232 MeV/c², varies by ±120 MeV/c²; this variation is intrinsic and not caused by measurement errors.
Two energy states with energies superimposed to create a composite state
The probability amplitude of this state has a time-dependent interference term:
The oscillation period varies inversely with the energy difference: .
Each example has a different meaning for the time uncertainty, according to the observable and state used.
Quantum field theory
Some formulations of quantum field theory use temporary electron–positron pairs, called virtual particles, in their calculations. The mass-energy and lifetime of these particles are related by the energy–time uncertainty relation. The energy of a quantum system is not known with enough precision to limit its behavior to a single, simple history. Thus the influence of all histories must be incorporated into quantum calculations, including those with much greater or much less energy than the mean of the measured/calculated energy distribution.
The energy–time uncertainty principle does not temporarily violate conservation of energy; it does not imply that energy can be "borrowed" from the universe as long as it is "returned" within a short amount of time. The energy of the universe is not an exactly known parameter at all times. When events transpire at very short time intervals, there is uncertainty in the energy of these events.
Intrinsic quantum uncertainty
Historically, the uncertainty principle has been confused with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the system, that is, without changing something in a system. Heisenberg used such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty. It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems, and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology.
Mathematical formalism
Starting with Kennard's derivation of position-momentum uncertainty, Howard Percy Robertson developed a formulation for arbitrary Hermitian operators
expressed in terms of their standard deviation
where the brackets indicate an expectation value. For a pair of operators and , define their commutator as
and the Robertson uncertainty relation is given by
Erwin Schrödinger showed how to allow for correlation between the operators, giving a stronger inequality, known as the Robertson–Schrödinger uncertainty relation,
where the anticommutator of the two operators is used.
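A small numerical sketch (assuming Python with NumPy) that checks both bounds for the two Pauli matrices, conventionally written σx and σy, in a qubit state; the particular state is an illustrative choice:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# A generic normalized qubit state (illustrative choice).
psi = np.array([np.cos(0.3), np.exp(0.7j) * np.sin(0.3)], dtype=complex)

def expect(op):
    return (psi.conj() @ op @ psi).real

var_x = expect(sx @ sx) - expect(sx)**2
var_y = expect(sy @ sy) - expect(sy)**2

comm = sx @ sy - sy @ sx                         # commutator
anti = sx @ sy + sy @ sx                         # anticommutator

robertson_sq = (0.5 * abs(psi.conj() @ comm @ psi))**2
schrodinger_sq = (0.5 * (psi.conj() @ anti @ psi).real - expect(sx) * expect(sy))**2 \
                 + robertson_sq

print(var_x * var_y >= robertson_sq - 1e-12)     # True: Robertson bound holds
print(var_x * var_y >= schrodinger_sq - 1e-12)   # True: the stronger bound also holds

For a pure qubit state the Robertson–Schrödinger bound is in fact saturated, so the second comparison is an equality up to rounding.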
Mixed states
The Robertson–Schrödinger uncertainty relation may be generalized in a straightforward way to describe mixed states.
The Maccone–Pati uncertainty relations
The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be an eigenstate of one of the observables. The stronger uncertainty relations proved by Lorenzo Maccone and Arun K. Pati give non-trivial bounds on the sum of the variances for two incompatible observables. (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., Ref. due to Yichen Huang.) For two non-commuting observables and the first stronger uncertainty relation is given by
where , , is a normalized vector that is orthogonal to the state of the system and one should choose the sign of to make this real quantity a positive number.
The second stronger uncertainty relation is given by
where is a state orthogonal to .
The form of implies that the right-hand side of the new uncertainty relation is nonzero unless is an eigenstate of . One may note that can be an eigenstate of without being an eigenstate of either or . However, when is an eigenstate of one of the two observables the Heisenberg–Schrödinger uncertainty relation becomes trivial. But the lower bound in the new relation is nonzero unless is an eigenstate of both.
Improving the Robertson–Schrödinger uncertainty relation based on decompositions of the density matrix
The Robertson–Schrödinger uncertainty can be improved noting that it must hold for all components in any decomposition of the density matrix given as
Here, for the probabilities and hold. Then, using the relation
for ,
it follows that
where the function in the bound is defined
The above relation very often has a bound larger than that of the original Robertson–Schrödinger uncertainty relation. Thus, the Robertson–Schrödinger bound is evaluated for the components of a decomposition of the quantum state rather than for the state itself, and an average of the square roots of these bounds is taken. The following expression is stronger than the Robertson–Schrödinger uncertainty relation
where on the right-hand side there is a concave roof over the decompositions of the density matrix.
The improved relation above is saturated by all single-qubit quantum states.
With similar arguments, one can derive a relation with a convex roof on the right-hand side
where denotes the quantum Fisher information and the density matrix is decomposed to pure states as
The derivation takes advantage of the fact that the quantum Fisher information is the convex roof of the variance times four.
A simpler inequality follows without a convex roof
which is stronger than the Heisenberg uncertainty relation, since for the quantum Fisher information we have
while for pure states the equality holds.
Phase space
In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function with star product ★ and a function f, the following is generally true:
Choosing , we arrive at
Since this positivity condition is true for all a, b, and c, it follows that all the eigenvalues of the matrix are non-negative.
The non-negative eigenvalues then imply a corresponding non-negativity condition on the determinant,
or, explicitly, after algebraic manipulation,
Examples
Since the Robertson and Schrödinger relations are for general operators, the relations can be applied to any two observables to obtain specific uncertainty relations. A few of the most common relations found in the literature are given below.
Position–linear momentum uncertainty relation: for the position and linear momentum operators, the canonical commutation relation implies the Kennard inequality from above:
Angular momentum uncertainty relation: For two orthogonal components of the total angular momentum operator of an object: $\sigma_{J_i}\,\sigma_{J_j} \ge \tfrac{\hbar}{2}\,\big|\langle J_k\rangle\big|,$ where i, j, k are distinct, and Ji denotes angular momentum along the xi axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, for $\langle J_z\rangle = m\hbar$, a choice $\hat A = J_x$, $\hat B = J_y$, in angular momentum multiplets, ψ = |j, m⟩, bounds the Casimir invariant (angular momentum squared, $\langle J_x^2 + J_y^2 + J_z^2\rangle$) from below and thus yields useful constraints such as $j(j+1) \ge m(m+1)$, and hence j ≥ m, among others.
For the number of electrons in a superconductor and the phase of its Ginzburg–Landau order parameter: $\Delta N\,\Delta\varphi \ \ge\ 1.$
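As an illustration (not part of the original text), the following sketch checks the two relations above numerically: the Kennard bound for a discretized Gaussian wave packet, and the angular momentum bound for random spin-1/2 states built from the Pauli matrices. It assumes NumPy and natural units with ħ = 1; the grid size, packet width and random seed are arbitrary choices made for the example.

```python
import numpy as np

hbar = 1.0  # natural units

# --- Kennard bound: sigma_x * sigma_p >= hbar/2, checked on a Gaussian wave packet ---
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma0 = 1.3                                       # packet width (arbitrary)
psi = np.exp(-x**2 / (4 * sigma0**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)        # normalize

# Momentum-space amplitudes via FFT; only |phi|^2 matters, so overall phases are ignored.
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
dp = 2 * np.pi * hbar / (N * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi * hbar)

def std(grid, amp, d):
    prob = np.abs(amp)**2 * d
    mean = np.sum(grid * prob)
    return np.sqrt(np.sum((grid - mean)**2 * prob))

sx, sp = std(x, psi, dx), std(p, phi, dp)
print(f"sigma_x*sigma_p = {sx * sp:.4f}  >=  hbar/2 = {hbar / 2:.4f}")

# --- Angular momentum bound: sigma_Jx * sigma_Jy >= (hbar/2)|<Jz>|, spin 1/2 ---
Jx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Jy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Jz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

def sd(op, v):
    m = np.vdot(v, op @ v).real
    return np.sqrt(np.vdot(v, op @ op @ v).real - m**2)

rng = np.random.default_rng(0)
for _ in range(3):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    lhs = sd(Jx, v) * sd(Jy, v)
    rhs = 0.5 * hbar * abs(np.vdot(v, Jz @ v).real)
    print(f"sigma_Jx*sigma_Jy = {lhs:.4f}  >=  (hbar/2)|<Jz>| = {rhs:.4f}")
```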
Limitations
The derivation of the Robertson inequality for operators $\hat A$ and $\hat B$ requires $\hat A\hat B\psi$ and $\hat B\hat A\psi$ to be defined. There are quantum systems where these conditions are not valid.
One example is a quantum particle on a ring, where the wave function depends on an angular variable $\theta$ in the interval $[0, 2\pi]$. Define "position" and "momentum" operators $\hat A$ and $\hat B$ by
$$\hat A\,\psi(\theta) = \theta\,\psi(\theta)$$
and
$$\hat B\,\psi(\theta) = -i\hbar\,\frac{d\psi}{d\theta},$$
with periodic boundary conditions on $\hat B$. The definition of $\hat A$ depends on choosing the range of $\theta$ to run from 0 to $2\pi$. These operators satisfy the usual commutation relations for position and momentum operators, $[\hat A, \hat B] = i\hbar$. More precisely, $\hat A\hat B\psi - \hat B\hat A\psi = i\hbar\psi$ whenever both $\hat A\hat B\psi$ and $\hat B\hat A\psi$ are defined, and the space of such $\psi$ is a dense subspace of the quantum Hilbert space.
Now let $\psi$ be any of the eigenstates of $\hat B$, which are given by $\psi(\theta) = e^{in\theta}$. These states are normalizable, unlike the eigenstates of the momentum operator on the line. Also the operator $\hat A$ is bounded, since $\theta$ ranges over a bounded interval. Thus, in the state $\psi$, the uncertainty of $B$ is zero and the uncertainty of $A$ is finite, so that
$$\sigma_A\,\sigma_B = 0.$$
The Robertson uncertainty principle does not apply in this case: $\psi$ is not in the domain of the operator $\hat B\hat A$, since multiplication by $\theta$ disrupts the periodic boundary conditions imposed on $\hat B$.
For the usual position and momentum operators $\hat X$ and $\hat P$ on the real line, no such counterexamples can occur. As long as $\sigma_x$ and $\sigma_p$ are defined in the state $\psi$, the Heisenberg uncertainty principle holds, even if $\psi$ fails to be in the domain of $\hat X\hat P$ or of $\hat P\hat X$.
Additional uncertainty relations
Heisenberg limit
In quantum metrology, and especially interferometry, the Heisenberg limit is the optimal rate at which the accuracy of a measurement can scale with the energy used in the measurement. Typically, this is the measurement of a phase (applied to one arm of a beam-splitter) and the energy is given by the number of photons used in an interferometer. Although some claim to have broken the Heisenberg limit, this reflects disagreement on the definition of the scaling resource. Suitably defined, the Heisenberg limit is a consequence of the basic principles of quantum mechanics and cannot be beaten, although the weak Heisenberg limit can be beaten.
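As a purely illustrative sketch of the scaling claim (the proportionality constants are omitted, and the 1/√N and 1/N forms are simply assumed as stated above rather than derived), the snippet below contrasts the standard quantum limit with the Heisenberg limit as the photon number N grows.

```python
import math

# Phase-error scaling with the number of photons N used in an interferometer:
# independent-particle strategies scale as ~1/sqrt(N) (standard quantum limit),
# while optimally entangled strategies can reach ~1/N (Heisenberg limit).
for N in (10, 100, 1_000, 10_000):
    sql = 1 / math.sqrt(N)     # standard quantum limit scaling
    hl = 1 / N                 # Heisenberg limit scaling
    print(f"N = {N:6d}   SQL ~ {sql:.4f}   Heisenberg limit ~ {hl:.5f}")
```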
Systematic and statistical errors
The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation $\sigma$. Heisenberg's original version, however, was dealing with the systematic error, a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect.
If we let $\varepsilon_A$ represent the error (i.e., inaccuracy) of a measurement of an observable A and $\eta_B$ the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A, then the inequality proposed by Masanao Ozawa, encompassing both systematic and statistical errors, holds:
$$\varepsilon_A\,\eta_B + \varepsilon_A\,\sigma_B + \sigma_A\,\eta_B \ \ge\ \frac{\hbar}{2}.$$
Heisenberg's uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of the Ozawa inequality, regarding the systematic error. Using the notation above to describe the error/disturbance effect of sequential measurements (first A, then B), it could be written as
$$\varepsilon_A\,\eta_B \ \ge\ \frac{\hbar}{2}.$$
The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years.
Also, it must be stressed that the Heisenberg formulation is not taking into account the intrinsic statistical errors and . There is increasing experimental evidence that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all the three terms of the Ozawa inequality.
Using the same formalism, it is also possible to introduce the other kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements (A and B at the same time):
The two simultaneous measurements on A and B are necessarily unsharp or weak.
It is also possible to derive an uncertainty relation that, like Ozawa's, combines both the statistical and systematic error components, but keeps a form very close to the Heisenberg original inequality. By adding the Robertson
$$\sigma_A\,\sigma_B \ \ge\ \frac{\hbar}{2}$$
and Ozawa relations we obtain
$$\varepsilon_A\,\eta_B + \varepsilon_A\,\sigma_B + \sigma_A\,\eta_B + \sigma_A\,\sigma_B \ \ge\ \hbar.$$
The four terms can be written as:
$$(\varepsilon_A + \sigma_A)\,(\eta_B + \sigma_B) \ \ge\ \hbar.$$
Defining:
$$\bar\varepsilon_A \ =\ \varepsilon_A + \sigma_A$$
as the inaccuracy in the measured values of the variable A and
$$\bar\eta_B \ =\ \eta_B + \sigma_B$$
as the resulting fluctuation in the conjugate variable B, Kazuo Fujikawa established an uncertainty relation similar to the Heisenberg original one, but valid both for systematic and statistical errors:
$$\bar\varepsilon_A\,\bar\eta_B \ \ge\ \hbar.$$
Quantum entropic uncertainty principle
For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle have little physical meaning for fluctuations larger than one period. Other examples include highly bimodal distributions, or unimodal distributions with divergent variance.
A solution that overcomes these issues is an uncertainty based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic certainty. This conjecture, also studied by I. I. Hirschman and proven in 1975 by W. Beckner and by Iwo Bialynicki-Birula and Jerzy Mycielski, is that, for two normalized, dimensionless Fourier transform pairs $f(a)$ and $g(b)$ where
$$f(a) = \int_{-\infty}^{\infty} g(b)\, e^{2\pi i a b}\, db$$
and
$$g(b) = \int_{-\infty}^{\infty} f(a)\, e^{-2\pi i a b}\, da,$$
the Shannon information entropies
$$H_a = -\int_{-\infty}^{\infty} |f(a)|^2 \log |f(a)|^2 \, da$$
and
$$H_b = -\int_{-\infty}^{\infty} |g(b)|^2 \log |g(b)|^2 \, db$$
are subject to the following constraint,
$$H_a + H_b \ \ge\ \log\frac{e}{2},$$
where the logarithms may be in any base.
The probability distribution functions associated with the position wave function $\psi(x)$ and the momentum wave function $\varphi(p)$ have dimensions of inverse length and momentum respectively, but the entropies may be rendered dimensionless by
$$H_x = -\int |\psi(x)|^2 \log\!\left(x_0\,|\psi(x)|^2\right) dx, \qquad H_p = -\int |\varphi(p)|^2 \log\!\left(p_0\,|\varphi(p)|^2\right) dp,$$
where $x_0$ and $p_0$ are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function $\psi(x)$ and the momentum wavefunction $\varphi(p)$, the above constraint can be written for the corresponding entropies as
$$H_x + H_p \ \ge\ \log\!\left(\frac{e\,h}{2\,x_0\,p_0}\right),$$
where $h$ is the Planck constant.
Depending on one's choice of the $x_0\,p_0$ product, the expression may be written in many ways. If $x_0\,p_0$ is chosen to be $h$, then
$$H_x + H_p \ \ge\ \log\frac{e}{2}.$$
If, instead, $x_0\,p_0$ is chosen to be $\hbar$, then
$$H_x + H_p \ \ge\ \log(e\,\pi).$$
If $x_0$ and $p_0$ are chosen to be unity in whatever system of units are being used, then
$$H_x + H_p \ \ge\ \log\!\left(\frac{e\,h}{2}\right),$$
where $h$ is interpreted as a dimensionless number equal to the value of the Planck constant in the chosen system of units. Note that these inequalities can be extended to multimode quantum states, or wavefunctions in more than one spatial dimension.
The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse logarithmic Sobolev inequalities
$$H_x \le \tfrac{1}{2}\log\!\left(\frac{2e\pi\sigma_x^2}{x_0^2}\right), \qquad H_p \le \tfrac{1}{2}\log\!\left(\frac{2e\pi\sigma_p^2}{p_0^2}\right)$$
(equivalently, from the fact that normal distributions maximize the entropy of all such with a given variance), it readily follows that this entropic uncertainty principle is stronger than the one based on standard deviations, because
$$\sigma_x\,\sigma_p \ \ge\ \frac{x_0\,p_0}{2\pi e}\,\exp\!\left(H_x + H_p\right) \ \ge\ \frac{x_0\,p_0}{2\pi e}\cdot\frac{e\,h}{2\,x_0\,p_0} \ =\ \frac{\hbar}{2}.$$
In other words, the Heisenberg uncertainty principle is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities follow. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall the Shannon entropy has been used, not the quantum von Neumann entropy. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance.
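A quick numerical check of the statement that the Gaussian saturates the bound, using the convention x₀ = p₀ = 1 with ħ = 1 (so the bound is log(eπ), as above); the grid parameters and the packet width are arbitrary choices for the example.

```python
import numpy as np

hbar = 1.0
N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 0.7                                                             # packet width (arbitrary)
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))  # normalized Gaussian

# Momentum-space amplitudes via FFT; only |phi|^2 enters the entropy.
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi * hbar)
dp = 2 * np.pi * hbar / L                                               # momentum grid spacing

def shannon(prob_density, d):
    """Differential (Shannon) entropy of a sampled probability density."""
    pd = prob_density[prob_density > 1e-300]                            # avoid log(0)
    return -np.sum(pd * np.log(pd)) * d

Hx = shannon(np.abs(psi) ** 2, dx)
Hp = shannon(np.abs(phi) ** 2, dp)
bound = np.log(np.e * np.pi)
print(f"H_x + H_p = {Hx + Hp:.4f}  >=  log(e*pi) = {bound:.4f}")
```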
A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c. The probability of lying within the jth interval of width δx is
To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as
Under the above definition, the entropic uncertainty relation is
Here we note that is a typical infinitesimal phase space volume used in the calculation of a partition function. The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research.
Uncertainty relation with three angular momentum components
For a particle of total angular momentum $j$ the following uncertainty relation holds
$$\sigma_{J_x}^2 + \sigma_{J_y}^2 + \sigma_{J_z}^2 \ \ge\ j\,\hbar^2,$$
where $J_x, J_y, J_z$ are angular momentum components. The relation can be derived from
$$\langle J_x^2 + J_y^2 + J_z^2 \rangle = j(j+1)\,\hbar^2$$
and
$$\langle J_x\rangle^2 + \langle J_y\rangle^2 + \langle J_z\rangle^2 \ \le\ j^2\hbar^2.$$
The relation can be strengthened as
$$\sigma_{J_x}^2 + \sigma_{J_y}^2 + \tfrac{1}{4}F_Q[\varrho, J_z] \ \ge\ j\,\hbar^2,$$
where $F_Q[\varrho, J_z]$ is the quantum Fisher information.
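With ħ = 1 and the spin-1 matrices (so j = 1), the sum-of-variances relation above is easy to test on random pure states; the seed and the number of trials below are arbitrary choices for this sketch.

```python
import numpy as np

# Spin-1 angular momentum matrices in units of hbar = 1 (total angular momentum j = 1).
s = 1 / np.sqrt(2)
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
j = 1.0

def variance(op, v):
    m = np.vdot(v, op @ v).real
    return np.vdot(v, op @ op @ v).real - m**2

rng = np.random.default_rng(7)
for _ in range(4):
    v = rng.normal(size=3) + 1j * rng.normal(size=3)
    v /= np.linalg.norm(v)
    total = sum(variance(J, v) for J in (Jx, Jy, Jz))
    print(f"sum of the three variances = {total:.4f}  >=  j = {j}")
```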
Harmonic analysis
In the context of harmonic analysis, a branch of mathematics, the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, the following inequality holds,
$$\left(\int_{-\infty}^{\infty} x^2\,|f(x)|^2\,dx\right)\!\left(\int_{-\infty}^{\infty} \xi^2\,|\hat f(\xi)|^2\,d\xi\right) \ \ge\ \frac{\|f\|_2^4}{16\pi^2}.$$
Further mathematical uncertainty inequalities, including the above entropic uncertainty, hold between a function $f$ and its Fourier transform $\hat f$:
Signal processing
In the context of signal processing, and in particular time–frequency analysis, uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from "Benedicks's theorem", below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded domain)—see bandlimited versus timelimited. More accurately, the time-bandwidth or duration-bandwidth product satisfies
$$\sigma_t \cdot \sigma_f \ \ge\ \frac{1}{4\pi},$$
where $\sigma_t$ and $\sigma_f$ are the standard deviations of the time and frequency energy or power (i.e. squared) representations respectively. The minimum is attained for a Gaussian-shaped pulse (Gabor wavelet). [For the un-squared Gaussian (i.e. signal amplitude) and its un-squared Fourier transform magnitude the product is $1/(2\pi)$; squaring reduces each $\sigma$ by a factor of $\sqrt{2}$.] Another common measure is the product of the time and frequency full width at half maximum (of the power/energy), which for the Gaussian equals $2\ln 2/\pi \approx 0.44$ (see bandwidth-limited pulse).
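The following sketch computes the duration-bandwidth product of a sampled Gaussian pulse from the normalized power distributions |s(t)|² and |S(f)|² and compares it with 1/(4π); the sampling rate, pulse width and time window are arbitrary choices for the example.

```python
import numpy as np

fs = 1000.0                        # sampling rate in Hz (arbitrary)
t = np.arange(-2.0, 2.0, 1 / fs)   # time axis in seconds
tau = 0.05                         # pulse width parameter in seconds (arbitrary)
s = np.exp(-t**2 / (2 * tau**2))   # Gaussian pulse

def spread(axis, power, d):
    """Standard deviation of the normalized power distribution along `axis`."""
    w = power / (np.sum(power) * d)
    mean = np.sum(axis * w) * d
    return np.sqrt(np.sum((axis - mean) ** 2 * w) * d)

dt = 1 / fs
sigma_t = spread(t, np.abs(s) ** 2, dt)

S = np.fft.fft(s)
f = np.fft.fftfreq(len(s), d=dt)   # frequency axis in Hz
df = 1.0 / (len(s) * dt)
sigma_f = spread(f, np.abs(S) ** 2, df)

print(f"sigma_t * sigma_f = {sigma_t * sigma_f:.5f}   (Gabor limit 1/(4*pi) = {1 / (4 * np.pi):.5f})")
```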
Stated alternatively, "One cannot simultaneously sharply localize a signal (function ) in both the time domain and frequency domain (, its Fourier transform)".
When applied to filters, the result implies that one cannot achieve high temporal resolution and frequency resolution at the same time; a concrete example are the resolution issues of the short-time Fourier transform—if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off.
Alternate theorems give more precise quantitative results, and, in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice, the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other.
As a result, in order to analyze signals where the transients are important, the wavelet transform is often used instead of the Fourier.
Discrete Fourier transform
Let $x_0, x_1, \ldots, x_{N-1}$ be a sequence of N complex numbers and $X_0, X_1, \ldots, X_{N-1}$ be its discrete Fourier transform.
Denote by $\|x\|_0$ the number of non-zero elements in the time sequence $x_0, x_1, \ldots, x_{N-1}$ and by $\|X\|_0$ the number of non-zero elements in the frequency sequence $X_0, X_1, \ldots, X_{N-1}$. Then,
$$\|x\|_0 \cdot \|X\|_0 \ \ge\ N.$$
This inequality is sharp, with equality achieved when x or X is a Dirac mass, or more generally when x is a nonzero multiple of a Dirac comb supported on a subgroup of the integers modulo N (in which case X is also a Dirac comb supported on a complementary subgroup, and vice versa).
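A short numerical illustration (N = 12 and the particular comb are arbitrary choices): a Dirac comb supported on a subgroup of the integers modulo N attains equality in the bound, while a generic dense vector exceeds it comfortably.

```python
import numpy as np

def support_size(v, tol=1e-9):
    """Number of entries whose magnitude exceeds a small numerical tolerance."""
    return int(np.sum(np.abs(v) > tol))

N = 12
x = np.zeros(N)
x[::3] = 1.0                 # Dirac comb on the subgroup {0, 3, 6, 9} of Z_12
X = np.fft.fft(x)            # its DFT is a comb on the complementary subgroup {0, 4, 8}

nx, nX = support_size(x), support_size(X)
print(f"comb:   ||x||_0 = {nx}, ||X||_0 = {nX}, product = {nx * nX}  >=  N = {N}")

rng = np.random.default_rng(1)
y = rng.normal(size=N)       # a generic dense vector
Y = np.fft.fft(y)
ny, nY = support_size(y), support_size(Y)
print(f"random: ||y||_0 = {ny}, ||Y||_0 = {nY}, product = {ny * nY}  >=  N = {N}")
```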
More generally, if T and W are subsets of the integers modulo N, let denote the time-limiting operator and band-limiting operators, respectively. Then
where the norm is the operator norm of operators on the Hilbert space of functions on the integers modulo N. This inequality has implications for signal reconstruction.
When N is a prime number, a stronger inequality holds:
$$\|x\|_0 + \|X\|_0 \ \ge\ N + 1.$$
Discovered by Terence Tao, this inequality is also sharp.
Benedicks's theorem
Amrein–Berthier and Benedicks's theorem intuitively says that the set of points where is non-zero and the set of points where is non-zero cannot both be small.
Specifically, it is impossible for a function in and its Fourier transform to both be supported on sets of finite Lebesgue measure. A more quantitative version is
One expects that the factor may be replaced by , which is only known if either or is convex.
Hardy's uncertainty principle
The mathematician G. H. Hardy formulated the following uncertainty principle: it is not possible for and to both be "very rapidly decreasing". Specifically, if in is such that
and
( an integer),
then, if , while if , then there is a polynomial of degree such that
This was later improved as follows: if is such that
then
where is a polynomial of degree and is a real positive definite matrix.
This result was stated in Beurling's complete works without proof and proved in Hörmander (the case ) and Bonami, Demange, and Jaming for the general case. Note that Hörmander–Beurling's version implies the case in Hardy's Theorem while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref.
A full description of the case as well as the following extension to Schwartz class distributions appears in ref.
History
In 1925 Heisenberg published the Umdeutung (reinterpretation) paper where he showed that a central aspect of quantum theory was non-commutativity: the theory implied that the relative order of position and momentum measurement was significant. Working with Max Born and Pascual Jordan, he continued to develop matrix mechanics, which would become the first modern formulation of quantum mechanics.
In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. Writing to Wolfgang Pauli in February 1927, he worked out the basic concepts.
In his celebrated 1927 paper "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement, but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. His paper gave an analysis in terms of a microscope that Bohr showed was incorrect; Heisenberg included an addendum to the publication.
In his 1930 Chicago lecture he refined his principle:
Later work broadened the concept. Any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known. Heisenberg wrote:It can be expressed in its simplest form as follows: One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant.
Kennard in 1927 first proved the modern inequality:
$$\sigma_x\,\sigma_p \ \ge\ \frac{\hbar}{2},$$
where $\hbar = h/2\pi$, and $\sigma_x$, $\sigma_p$ are the standard deviations of position and momentum. (Heisenberg only proved the relation for the special case of Gaussian states.) In 1929 Robertson generalized the inequality to all observables and in 1930 Schrödinger extended the form to allow non-zero covariance of the operators; this result is referred to as the Robertson–Schrödinger inequality.
Terminology and translation
Throughout the main body of his original 1927 paper, written in German, Heisenberg used the word "Ungenauigkeit",
to describe the basic theoretical principle. Only in the endnote did he switch to the word "Unsicherheit". Later on, he always used "Unbestimmtheit". When the English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory, was published in 1930, however, only the English word "uncertainty" was used, and it became the term in the English language.
Heisenberg's microscope
The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements intended to violate it were always bound to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by using the observer effect of an imaginary microscope as a measuring device.
He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it.
Problem 1 – If the photon has a short wavelength, and therefore, a large momentum, the position can be measured accurately. But the photon scatters in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision does not disturb the electron's momentum very much, but the scattering will reveal its position only vaguely.
Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's beamline momentum and hence the new momentum of the electron resolves poorly. If a small aperture is used, the two resolutions trade off the other way around.
The combination of these trade-offs implies that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to the Planck constant. Heisenberg did not care to formulate the uncertainty principle as an exact limit, and preferred to use it instead, as a heuristic quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable.
Critical reactions
The Copenhagen interpretation of quantum mechanics and Heisenberg's uncertainty principle were, in fact, initially seen as twin targets by detractors. According to the Copenhagen interpretation of quantum mechanics, there is no fundamental reality that the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be.
Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years.
Ideal detached observer
Wolfgang Pauli called Einstein's fundamental objection to the uncertainty principle "the ideal of the detached observer" (phrase translated from the German):
Einstein's slit
The first of Einstein's thought experiments challenging the uncertainty principle went as follows:
Consider a particle passing through a slit of width $d$. The slit introduces an uncertainty in momentum of approximately $h/d$ because the particle passes through the wall. But let us determine the momentum of the particle by measuring the recoil of the wall. In doing so, we find the momentum of the particle to arbitrary accuracy by conservation of momentum.
Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy $\Delta p$, the momentum of the wall must be known to this accuracy before the particle passes through. This introduces an uncertainty in the position of the wall and therefore the position of the slit equal to $h/\Delta p$, and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement.
A similar analysis with particles diffracting through multiple slits is given by Richard Feynman.
Einstein's box
Bohr was present when Einstein proposed the thought experiment which has become known as Einstein's box. Einstein argued that "Heisenberg's uncertainty equation implied that the uncertainty in time was related to the uncertainty in energy, the product of the two being related to the Planck constant." Consider, he said, an ideal box, lined with mirrors so that it can contain light indefinitely. The box could be weighed before a clockwork mechanism opened an ideal shutter at a chosen instant to allow one single photon to escape. "We now know, explained Einstein, precisely the time at which the photon left the box." "Now, weigh the box again. The change of mass tells the energy of the emitted light. In this manner, said Einstein, one could measure the energy emitted and the time it was released with any desired precision, in contradiction to the uncertainty principle."
Bohr spent a sleepless night considering this argument, and eventually realized that it was flawed. He pointed out that if the box were to be weighed, say by a spring and a pointer on a scale, "since the box must move vertically with a change in its weight, there will be uncertainty in its vertical velocity and therefore an uncertainty in its height above the table. ... Furthermore, the uncertainty about the elevation above the Earth's surface will result in an uncertainty in the rate of the clock," because of Einstein's own theory of gravity's effect on time.
"Through this chain of uncertainties, Bohr showed that Einstein's light box experiment could not simultaneously measure exactly both the energy of the photon and the time of its escape."
EPR paradox for entangled particles
In 1935, Einstein, Boris Podolsky and Nathan Rosen published an analysis of spatially separated entangled particles (EPR paradox). According to EPR, one could measure the position of one of the entangled particles and the momentum of the second particle, and from those measurements deduce the position and momentum of both particles to any precision, violating the uncertainty principle. In order to avoid such possibility, the measurement of one particle must modify the probability distribution of the other particle instantaneously, possibly violating the principle of locality.
In 1964, John Stewart Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out EPR's basic assumption of local hidden variables.
Popper's criticism
Science philosopher Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist. He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations". In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory.
In 1934, Popper published Zur Kritik der Ungenauigkeitsrelationen (Critique of the Uncertainty Relations) in Naturwissenschaften, and in the same year Logik der Forschung (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum theory and the schism in Physics, writing:
[Heisenberg's] formulae are, beyond all doubt, derivable statistical formulae of the quantum theory. But they have been habitually misinterpreted by those quantum theorists who said that these formulae can be interpreted as determining some upper limit to the precision of our measurements. [original emphasis]
Popper proposed an experiment to falsify the uncertainty relations, although he later withdrew his initial version after discussions with Carl Friedrich von Weizsäcker, Heisenberg, and Einstein; Popper sent his paper to Einstein and it may have influenced the formulation of the EPR paradox.
Free will
Some scientists including Arthur Compton and Martin Heisenberg have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature. Proponents of this theory commonly say that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells.
Thermodynamics
There is reason to believe that violating the uncertainty principle also strongly implies the violation of the second law of thermodynamics. See Gibbs paradox.
Rejection of the principle
Uncertainty principles relate quantum particles (electrons, for example) to classical concepts (position and momentum). This presumes quantum particles have position and momentum. Edwin C. Kemble pointed out in 1937 that such properties cannot be experimentally verified and that assuming they exist gives rise to many contradictions; similarly Rudolf Haag notes that position in quantum mechanics is an attribute of an interaction, say between an electron and a detector, not an intrinsic property. From this point of view the uncertainty principle is not a fundamental quantum property but a concept "carried over from the language of our ancestors", as Kemble says.
Applications
Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. All forms of spectroscopy, including particle physics, use the relationship to relate measured energy line-width to the lifetime of quantum states. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting or quantum optics systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers.
See also
— when an attempt is made to use a statistical measure for purposes of control (directing), its statistical validity breaks down
(Heisenberg's recollections)
References
External links
Stanford Encyclopedia of Philosophy entry
Quantum mechanics
Principles
Mathematical physics
Inequalities
Werner Heisenberg
Scientific laws
1927 in science
1927 in Germany
Physical constant

A physical constant, sometimes fundamental physical constant or universal constant, is a physical quantity that cannot be explained by a theory and therefore must be measured experimentally. It is distinct from a mathematical constant, which has a fixed numerical value, but does not directly involve any physical measurement.
There are many physical constants in science, some of the most widely recognized being the speed of light in vacuum c, the gravitational constant G, the Planck constant h, the electric constant ε0, and the elementary charge e. Physical constants can take many dimensional forms: the speed of light signifies a maximum speed for any object and its dimension is length divided by time; while the proton-to-electron mass ratio is dimensionless.
The term "fundamental physical constant" is sometimes used to refer to universal-but-dimensioned physical constants such as those mentioned above. Increasingly, however, physicists reserve the expression for the narrower case of dimensionless universal physical constants, such as the fine-structure constant α, which characterizes the strength of the electromagnetic interaction.
Physical constants, as discussed here, should not be confused with empirical constants, which are coefficients or parameters assumed to be constant in a given context without being fundamental. Examples include the characteristic time, characteristic length, or characteristic number (dimensionless) of a given system, or material constants (e.g., Madelung constant, electrical resistivity, and heat capacity) of a particular material or substance.
Characteristics
Physical constants are parameters in a physical theory that cannot be explained by that theory. This may be due to the apparent fundamental nature of the constant or due to limitations in the theory. Consequently, physical constants must be measured experimentally.
The set of parameters considered physical constants changes as physical models change, and how fundamental they appear can change as well. For example, c, the speed of light, was originally considered a property of light, a specific system. The discovery and verification of Maxwell's equations connected the same quantity with an entire system, electromagnetism. When the theory of special relativity emerged, the quantity came to be understood as the basis of causality. The speed of light is so fundamental that it now defines the international unit of length.
Relationship to units
Numerical values
Whereas the physical quantity indicated by a physical constant does not depend on the unit system used to express the quantity, the numerical values of dimensional physical constants do depend on choice of unit system. The term "physical constant" refers to the physical quantity, and not to the numerical value within any given system of units. For example, the speed of light is defined as having the numerical value of when expressed in the SI unit metres per second, and as having the numerical value of 1 when expressed in the natural units Planck length per Planck time. While its numerical value can be defined at will by the choice of units, the speed of light itself is a single physical constant.
International System of Units
Since the 2019 revision, all of the units in the International System of Units have been defined in terms of fixed natural phenomena, including three fundamental constants: the speed of light in vacuum, c; the Planck constant, h; and the elementary charge, e.
As a result of the new definitions, an SI unit like the kilogram can be written in terms of fundamental constants and one experimentally measured constant, ΔνCs:
1 kg = (299792458)² / ((6.62607015×10⁻³⁴)(9192631770)) · h ΔνCs / c² ≈ 1.4755214×10⁴⁰ h ΔνCs / c².
Natural units
It is possible to combine dimensional universal physical constants to define fixed quantities of any desired dimension, and this property has been used to construct various systems of natural units of measurement. Depending on the choice and arrangement of constants used, the resulting natural units may be convenient to an area of study. For example, Planck units, constructed from c, G, ħ, and kB give conveniently sized measurement units for use in studies of quantum gravity, and atomic units, constructed from ħ, me, e and 4πε0 give convenient units in atomic physics. The choice of constants used leads to widely varying quantities.
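As an illustration of how such combinations work, the sketch below computes the basic Planck units from c, h, G and k_B. The speed of light, Planck constant and Boltzmann constant are exact in the SI since 2019; the value of G is a measured (CODATA-style) figure quoted here to a few digits, so the outputs should be read as approximate.

```python
import math

c    = 299_792_458.0          # m/s, exact
h    = 6.626_070_15e-34       # J*s, exact
hbar = h / (2 * math.pi)
G    = 6.674_30e-11           # m^3 kg^-1 s^-2, measured
k_B  = 1.380_649e-23          # J/K, exact

l_P = math.sqrt(hbar * G / c**3)             # Planck length
t_P = math.sqrt(hbar * G / c**5)             # Planck time
m_P = math.sqrt(hbar * c / G)                # Planck mass
T_P = math.sqrt(hbar * c**5 / (G * k_B**2))  # Planck temperature

print(f"Planck length      ~ {l_P:.3e} m")
print(f"Planck time        ~ {t_P:.3e} s")
print(f"Planck mass        ~ {m_P:.3e} kg")
print(f"Planck temperature ~ {T_P:.3e} K")
```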
Number of fundamental constants
The number of fundamental physical constants depends on the physical theory accepted as "fundamental". Currently, this is the theory of general relativity for gravitation and the Standard Model for electromagnetic, weak and strong nuclear interactions and the matter fields. Between them, these theories account for a total of 19 independent fundamental constants. There is, however, no single "correct" way of enumerating them, as it is a matter of arbitrary choice which quantities are considered "fundamental" and which as "derived". Uzan lists 22 "fundamental constants of our standard model" as follows:
the gravitational constant G,
the speed of light c,
the Planck constant h,
the 9 Yukawa couplings for the quarks and leptons (equivalent to specifying the rest mass of these elementary particles),
2 parameters of the Higgs field potential,
4 parameters for the quark mixing matrix,
3 coupling constants for the gauge groups SU(3) × SU(2) × U(1) (or equivalently, two coupling constants and the Weinberg angle),
a phase for the quantum chromodynamics vacuum.
The number of 19 independent fundamental physical constants is subject to change under possible extensions of the Standard Model, notably by the introduction of neutrino mass (equivalent to seven additional constants, i.e. 3 Yukawa couplings and 4 lepton mixing parameters).
The discovery of variability in any of these constants would be equivalent to the discovery of "new physics".
The question as to which constants are "fundamental" is neither straightforward nor meaningless, but a question of interpretation of the physical theory regarded as fundamental; as pointed out by Jean-Marc Lévy-Leblond, not all physical constants are of the same importance, with some having a deeper role than others. Lévy-Leblond proposed a classification scheme of three types of constants:
A: physical properties of particular objects
B: characteristic of a class of physical phenomena
C: universal constants
The same physical constant may move from one category to another as the understanding of its role deepens; this has notably happened to the speed of light, which was a class A constant (characteristic of light) when it was first measured, but became a class B constant (characteristic of electromagnetic phenomena) with the development of classical electromagnetism, and finally a class C constant with the discovery of special relativity.
Tests on time-independence
By definition, fundamental physical constants are subject to measurement, so that their being constant (independent on both the time and position of the performance of the measurement) is necessarily an experimental result and subject to verification.
Paul Dirac in 1937 speculated that physical constants such as the gravitational constant or the fine-structure constant might be subject to change over time in proportion of the age of the universe. Experiments can in principle only put an upper bound on the relative change per year. For the fine-structure constant, this upper bound is comparatively low, at roughly 10−17 per year (as of 2008).
The gravitational constant is much more difficult to measure with precision, and conflicting measurements in the 2000s have inspired the controversial suggestions of a periodic variation of its value in a 2015 paper. However, while its value is not known to great precision, the possibility of observing type Ia supernovae which happened in the universe's remote past, paired with the assumption that the physics involved in these events is universal, allows for an upper bound of less than 10−10 per year for the gravitational constant over the last nine billion years.
Similarly, an upper bound of the change in the proton-to-electron mass ratio has been placed at 10−7 over a period of 7 billion years (or 10−16 per year) in a 2012 study based on the observation of methanol in a distant galaxy.
It is problematic to discuss the proposed rate of change (or lack thereof) of a single dimensional physical constant in isolation. The reason for this is that the choice of units is arbitrary, making the question of whether a constant is undergoing change an artefact of the choice (and definition) of the units.
For example, in SI units, the speed of light was given a defined value in 1983. Thus, it was meaningful to experimentally measure the speed of light in SI units prior to 1983, but it is not so now. Similarly, with effect from May 2019, the Planck constant has a defined value, such that all SI base units are now defined in terms of fundamental physical constants. With this change, the international prototype of the kilogram is being retired as the last physical object used in the definition of any SI unit.
Tests on the immutability of physical constants look at dimensionless quantities, i.e. ratios between quantities of like dimensions, in order to escape this problem. Changes in physical constants are not meaningful if they result in an observationally indistinguishable universe. For example, a "change" in the speed of light c would be meaningless if accompanied by a corresponding change in the elementary charge e so that the expression $e^2/(4\pi\varepsilon_0\hbar c)$ (the fine-structure constant) remained unchanged.
Dimensionless physical constants
Any ratio between physical constants of the same dimensions results in a dimensionless physical constant, for example, the proton-to-electron mass ratio. The fine-structure constant α is the best known dimensionless fundamental physical constant. It is the value of the elementary charge squared expressed in Planck units. This value has become a standard example when discussing the derivability or non-derivability of physical constants. Introduced by Arnold Sommerfeld, its value and uncertainty as determined at the time was consistent with 1/137. This motivated Arthur Eddington (1929) to construct an argument why its value might be 1/137 precisely, which related to the Eddington number, his estimate of the number of protons in the Universe. By the 1940s, it became clear that the value of the fine-structure constant deviates significantly from the precise value of 1/137, refuting Eddington's argument.
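For concreteness, the fine-structure constant can be evaluated directly from its defining combination of constants. The sketch below uses the exact SI values of e, h and c together with a measured value of the vacuum permittivity (an external, limited-precision input), and shows that the result is close to, but measurably different from, 1/137.

```python
import math

e    = 1.602_176_634e-19      # C, exact
h    = 6.626_070_15e-34       # J*s, exact
hbar = h / (2 * math.pi)
c    = 299_792_458.0          # m/s, exact
eps0 = 8.854_187_8128e-12     # F/m, measured (CODATA-style value)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha   = {alpha:.9f}")
print(f"1/alpha = {1 / alpha:.4f}")   # about 137.036, not exactly 137
```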
Fine-tuned universe
Some physicists have explored the notion that if the dimensionless physical constants had sufficiently different values, our Universe would be so radically different that intelligent life would probably not have emerged, and that our Universe therefore seems to be fine-tuned for intelligent life. The anthropic principle states a logical truism: the fact of our existence as intelligent beings who can measure physical constants requires those constants to be such that beings like us can exist. There are a variety of interpretations of the constants' values, including that of a divine creator (the apparent fine-tuning is actual and intentional), or that the universe is one universe of many in a multiverse (e.g. the many-worlds interpretation of quantum mechanics), or even that, if information is an innate property of the universe and logically inseparable from consciousness, a universe without the capacity for conscious beings cannot exist.
Table of physical constants
The table below lists some frequently used constants and their CODATA recommended values. For a more extended list, refer to List of physical constants.
See also
List of common physics notations
List of mathematical constants
List of physical constants
Mathematical constant
References
External links
Sixty Symbols, University of Nottingham
IUPAC – Gold Book
Units of energy

Energy is defined via work, so the SI unit of energy is the same as the unit of work – the joule (J), named in honour of James Prescott Joule and his experiments on the mechanical equivalent of heat. In slightly more fundamental terms, 1 joule is equal to 1 newton metre and, in terms of SI base units, 1 J = 1 kg⋅m²⋅s⁻².
An energy unit that is used in atomic physics, particle physics and high energy physics is the electronvolt (eV). One eV is equivalent to 1.602176634×10⁻¹⁹ J.
In spectroscopy the unit cm⁻¹ ≈ 1.986×10⁻²³ J is used to represent energy, since energy is inversely proportional to wavelength from the equation E = hc/λ.
In discussions of energy production and consumption, the units barrel of oil equivalent and ton of oil equivalent are often used.
British imperial / US customary units
The British imperial units and U.S. customary units for both energy and work include the foot-pound force (1.3558 J), the British thermal unit (BTU) which has various values in the region of 1055 J, the horsepower-hour (2.6845 MJ), and the gasoline gallon equivalent (about 120 MJ).
The table illustrates the wide range of magnitudes among conventional units of energy. For example, 1 BTU is equivalent to about 1,000 joules, and there is a difference of about 25 orders of magnitude between a kilowatt-hour and an electronvolt.
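The spread of magnitudes can be made explicit with a small table of approximate conversion factors to joules; the values below are standard reference figures, rounded for illustration rather than taken from the article's own table.

```python
# Approximate conversion factors to joules for some common energy units.
TO_JOULE = {
    "electronvolt (eV)":        1.602176634e-19,
    "erg":                      1e-7,
    "foot-pound force":         1.3558,
    "calorie (thermochemical)": 4.184,
    "British thermal unit":     1.055e3,
    "watt-hour":                3.6e3,
    "kilocalorie":              4.184e3,
    "kilowatt-hour":            3.6e6,
    "therm":                    1.055e8,
    "barrel of oil equivalent": 6.1e9,
    "ton of oil equivalent":    4.19e10,
}

for name, joules in sorted(TO_JOULE.items(), key=lambda kv: kv[1]):
    print(f"{name:26s} {joules:>12.4e} J")
```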
Electricity
A unit of electrical energy, particularly for utility bills, is the kilowatt-hour (kWh); one kilowatt-hour is equivalent to 3.6 megajoules (3.6×10⁶ J). Electricity usage is often given in units of kilowatt-hours per year or other time period. This is actually a measurement of average power consumption, meaning the average rate at which energy is transferred. One kilowatt-hour per year is around 0.11 watts.
Natural gas
Natural gas is often sold in units of energy content or by volume. Common units for selling by energy content are joules or therms. One therm is equal to about 105.5 megajoules. Common units for selling by volume are cubic metre or cubic feet. Natural gas in the US is sold in therms or 100 cubic feet (100 ft3 = 1 Ccf). In Australia, natural gas is sold in cubic metres. One cubic metre contains about 38 megajoules. In most of the world, natural gas is sold in gigajoules.
Food industry
The calorie is defined as the amount of thermal energy necessary to raise the temperature of one gram of water by 1 Celsius degree, from a temperature of 14.5 °C, at a pressure of one standard atmosphere (101.325 kPa). For thermochemistry a calorie of 4.184 J is used, but other calories have also been defined, such as the International Steam Table calorie of 4.1868 J. In many regions, food energy is measured in large calories or kilocalories equalling 4,184 J, sometimes written capitalized as Calories. In the European Union, food energy labeling in joules is mandatory, often with calories as supplementary information.
Atom physics and chemistry
In physics and chemistry, it is common to measure energy on the atomic scale in the non-SI, but convenient, units electronvolts (eV). 1 eV is equivalent to the kinetic energy acquired by an electron in passing through a potential difference of 1 volt in a vacuum. It is common to use the SI magnitude prefixes (e.g. milli-, mega- etc) with electronvolts. Because of the relativistic equivalence between mass and energy, the eV is also sometimes used as a unit of mass. The Hartree (the atomic unit of energy) is commonly used in the field of computational chemistry since such units arise directly from the calculation algorithms without any need for conversion. Historically Rydberg units have been used.
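As a small worked illustration of these atomic-scale units (the hartree figure below is the CODATA-recommended value, rounded to a few digits):

```python
eV      = 1.602176634e-19   # J, exact in the SI
hartree = 4.3597447e-18     # J, measured (atomic unit of energy)
rydberg = hartree / 2       # Rydberg unit of energy

print(f"1 eV      = {eV:.6e} J")
print(f"1 hartree = {hartree:.6e} J = {hartree / eV:.3f} eV")
print(f"1 rydberg = {rydberg:.6e} J = {rydberg / eV:.3f} eV")
```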
Spectroscopy
In spectroscopy and related fields it is common to measure energy levels in units of reciprocal centimetres. These units (cm⁻¹) are strictly speaking not energy units but units proportional to energies, with hc being the proportionality constant.
Explosions
A gram of TNT releases about 4.1–4.6 kilojoules upon explosion. To define the tonne of TNT, this was standardized to 4,184 joules per gram, giving a value of 4.184 gigajoules for the tonne of TNT.
See also
Energy consumption
Conversion of units of temperature
Conversion of units of energy, work, or amount of heat
Kilokaiser
List of unusual units of measurement
Maximum demand indicator
Orders of magnitude (energy)
erg
Foe (unit)
References
Conversion of units of measurement
Ultrarelativistic limit

In physics, a particle is called ultrarelativistic when its speed is very close to the speed of light c. Notations commonly used are $v \approx c$, or $\beta \approx 1$, or $\gamma \gg 1$, where $\gamma$ is the Lorentz factor, $\beta = v/c$, and c is the speed of light.
The energy of an ultrarelativistic particle is almost completely due to its kinetic energy $E_k = (\gamma - 1)mc^2$. The total energy can also be approximated as $E = \gamma m c^2 \approx pc$ where $p = \gamma m v$ is the Lorentz invariant momentum.
This can result from holding the mass fixed and increasing the kinetic energy to very large values or by holding the energy fixed and shrinking the mass to very small values which also imply a very large $\gamma$. Particles with a very small mass do not need much energy to travel at a speed close to c. The latter is used to derive orbits of massless particles such as the photon from those of massive particles (cf. Kepler problem in general relativity).
Ultrarelativistic approximations
Below are a few ultrarelativistic approximations that hold when $\gamma \gg 1$. The rapidity is denoted $\varphi$:
Motion with constant proper acceleration: $d \approx \dfrac{c^2}{2a}\,e^{a\tau/c}$, where $d$ is the distance traveled, $a = d\varphi/d\tau$ is proper acceleration (with $a\tau \gg c$), $\tau$ is proper time, and travel starts at rest and without changing direction of acceleration (see proper acceleration for more details).
Fixed target collision with ultrarelativistic motion of the center of mass: $E_{CM} \approx \sqrt{2E_1E_2}$, where $E_1$ and $E_2$ are energies of the particle and the target respectively (so $E_1 \gg E_2$), and $E_{CM}$ is the energy in the center of mass frame.
Accuracy of the approximation
For calculations of the energy of a particle, the relative error of the ultrarelativistic approximation $E \approx pc$ is $1 - v/c$: about 5% for a speed $v = 0.95c$, and just 1% for $v = 0.99c$. For particles such as neutrinos, whose Lorentz factor $\gamma$ is extremely large ($v$ practically indistinguishable from $c$), the approximation is essentially exact.
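The error quoted above is easy to tabulate. The following sketch (speeds chosen arbitrarily) compares the exact energy with the approximation E ≈ pc in units where mc² = 1.

```python
import math

mc2 = 1.0  # rest energy set to 1 so the comparison is dimensionless
for beta in (0.5, 0.9, 0.95, 0.99, 0.999):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    E = gamma * mc2                # exact total energy
    pc = gamma * beta * mc2        # momentum times c
    rel_err = (E - pc) / E         # equals 1 - beta
    print(f"beta = {beta:5}   gamma = {gamma:8.3f}   relative error of E ~ pc: {rel_err:6.2%}")
```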
Other limits
The opposite case is a so-called classical particle, where its speed is much smaller than c. Its kinetic energy can be approximated by the first term of the binomial series: $E_k = (\gamma - 1)mc^2 \approx \tfrac{1}{2}mv^2$.
See also
Relativistic particle
Classical mechanics
Special relativity
Aichelburg–Sexl ultraboost
References
Special relativity
Approximations
Pleonasm

Pleonasm is redundancy in linguistic expression, such as in "black darkness," "burning fire," "the man he said," or "vibrating with motion." It is a manifestation of tautology by traditional rhetorical criteria. Pleonasm may also be used for emphasis, or because the phrase has become established in a certain form. Tautology and pleonasm are not consistently differentiated in literature.
Usage
Most often, pleonasm is understood to mean a word or phrase which is useless, clichéd, or repetitive, but a pleonasm can also be simply an unremarkable use of idiom. It can aid in achieving a specific linguistic effect, be it social, poetic or literary. Pleonasm sometimes serves the same function as rhetorical repetition—it can be used to reinforce an idea, contention or question, rendering writing clearer and easier to understand. Pleonasm can serve as a redundancy check; if a word is unknown, misunderstood, misheard, or if the medium of communication is poor—a static-filled radio transmission or sloppy handwriting—pleonastic phrases can help ensure that the meaning is communicated even if some of the words are lost.
Idiomatic expressions
Some pleonastic phrases are part of a language's idiom, like tuna fish, chain mail and safe haven in American English. They are so common that their use is unremarkable for native speakers, although in many cases the redundancy can be dropped with no loss of meaning.
When expressing possibility, English speakers often use potentially pleonastic expressions such as It might be possible or perhaps it's possible, where both terms (verb might or adverb perhaps along with the adjective possible) have the same meaning under certain constructions. Many speakers of English use such expressions for possibility in general, such that most instances of such expressions by those speakers are in fact pleonastic. Others, however, use this expression only to indicate a distinction between ontological possibility and epistemic possibility, as in "Both the ontological possibility of X under current conditions and the ontological impossibility of X under current conditions are epistemically possible" (in logical terms, "I am not aware of any facts inconsistent with the truth of proposition X, but I am likewise not aware of any facts inconsistent with the truth of the negation of X"). The habitual use of the double construction to indicate possibility per se is far less widespread among speakers of most other languages (except in Spanish; see examples); rather, almost all speakers of those languages use one term in a single expression:
French: or .
Portuguese: , lit. "What is it that", a more emphatic way of saying "what is"; usually suffices.
Romanian: or .
Typical Spanish pleonasms
Voy a subir arriba – I am going to go up upstairs, "arriba" not being necessary.
Entrar adentro – enter inside, "adentro" not being necessary.
Turkish has many pleonastic constructs because certain verbs necessitate objects:
yemek yemek – to eat food.
yazı yazmak – to write writing.
dışarı çıkmak – to exit outside.
içeri girmek – to enter inside.
oyun oynamak – to play a game.
In a satellite-framed language like English, verb phrases containing particles that denote direction of motion are so frequent that even when such a particle is pleonastic, it seems natural to include it (e.g. "enter into").
Professional and scholarly use
Some pleonastic phrases, when used in professional or scholarly writing, may reflect a standardized usage that has evolved or a meaning familiar to specialists but not necessarily to those outside that discipline. Such examples as "null and void", "terms and conditions", "each and every" are legal doublets that are part of legally operative language that is often drafted into legal documents. A classic example of such usage was that by the Lord Chancellor at the time (1864), Lord Westbury, in the English case of Gorely, when he described a phrase in an Act as "redundant and pleonastic". This type of usage may be favored in certain contexts. However, it may also be disfavored when used gratuitously to portray false erudition, obfuscate, or otherwise introduce verbiage, especially in disciplines where imprecision may introduce ambiguities (such as the natural sciences).
Of the aforementioned phrases, "terms and conditions" may not be pleonastic in some legal systems, as they refer not to a set of provisions forming part of a contract, but rather to the specific terms conditioning the effect of the contract or a contractual provision to a future event. In these cases, terms and conditions imply respectively the certainty or uncertainty of said event (e.g., in Brazilian law, a testament has the initial term for coming into force the death of the testator, while a health insurance has the condition of the insured suffering a, or one of a set of, certain injurie(s) from a or one of a set of certain causes).
Stylistic preference
In addition, pleonasms can serve purposes external to meaning. For example, a speaker who is too terse is often interpreted as lacking ease or grace, because, in oral and sign language, sentences are spontaneously created without the benefit of editing. The restriction on the ability to plan often creates many redundancies. In written language, removing words that are not strictly necessary sometimes makes writing seem stilted or awkward, especially if the words are cut from an idiomatic expression.
On the other hand, as is the case with any literary or rhetorical effect, excessive use of pleonasm weakens writing and speech; words distract from the content. Writers who want to obfuscate a certain thought may obscure their meaning with excess verbiage. William Strunk Jr. advocated concision in The Elements of Style (1918):
Literary uses
Examples from Baroque, Mannerist, and Victorian writing provide a counterpoint to Strunk's advocacy of concise writing:
"This was the most unkindest cut of all." — William Shakespeare, Julius Caesar (Act 3, Scene 2, 183)
"I will be brief: your noble son is mad:/Mad call I it; for, to define true madness,/What is't but to be nothing else but mad?" — Hamlet (Act 2, Scene 2)
"Let me tell you this, when social workers offer you, free, gratis and for nothing, something to hinder you from swooning, which with them is an obsession, it is useless to recoil ..." — Samuel Beckett, Molloy
Types
There are various kinds of pleonasm, including bilingual tautological expressions, syntactic pleonasm, semantic pleonasm and morphological pleonasm:
Bilingual tautological expressions
A bilingual tautological expression is a phrase that combines words that mean the same thing in two different languages. An example of a bilingual tautological expression is the Yiddish expression mayim akhroynem vaser. It literally means "water last water" and refers to "water for washing the hands after meal, grace water". Its first element, mayim, derives from the Hebrew ['majim] "water". Its second element, vaser, derives from the Middle High German word wazzer "water".
According to Ghil'ad Zuckermann, Yiddish abounds with both bilingual tautological compounds and bilingual tautological first names.
The following are examples of bilingual tautological compounds in Yiddish:
fíntster khóyshekh "very dark", literally "dark darkness", traceable back to the Middle High German word vinster "dark" and the Hebrew word חושך ħōshekh "darkness".
khameréyzļ "womanizer", literally "donkey-donkey", traceable back to the Hebrew word חמור [ħă'mōr] "donkey" and the Middle High German word esel "donkey".
The following are examples of bilingual tautological first names (anthroponyms) in Yiddish:
Dov-Ber, literally "bear-bear", traceable back to the Hebrew word dov "bear" and the Middle High German word bër "bear".
Tsvi-Hirsh, literally "deer-deer", traceable back to the Hebrew word tsvi "deer" and the Middle High German word hirz "deer".
Ze'ev-Volf, literally "wolf-wolf", traceable back to the Hebrew word ze'ev "wolf" and the Middle High German word wolf "wolf".
Arye-Leyb, literally "lion-lion", traceable back to the Hebrew word arye "lion" and the Middle High German word lēwe "lion".
Examples occurring in English-language contexts include:
River Avon, literally "River River", from Welsh.
the Sahara Desert, literally "the The Desert Desert", from Arabic.
the La Brea Tar Pits, literally "the The Tar Tar Pits", from Spanish.
the Los Angeles Angels, literally "the The Angels Angels", from Spanish.
the hoi polloi, literally "the the many", from Greek.
Carmarthen Castle, may actually have "castle" in it three times: In its Welsh form, Castell Caerfyrddin, "Caer" means fort, while "fyrddin" is thought to be derived from the Latin Moridunum ("sea fort") making Carmarthen Castle "fort sea-fort castle".
Mount Maunganui, Lake Rotoroa, and Motutapu Island in New Zealand are "Mount Mount Big", "Lake Lake Long", and "Island Sacred Island" respectively, from Māori.
Syntactic pleonasm
Syntactic pleonasm occurs when the grammar of a language makes certain function words optional. For example, consider the following English sentences:
"I know you're coming."
"I know that you're coming."
In this construction, the conjunction that is optional when joining a sentence to a verb phrase with know. Both sentences are grammatically correct, but the word that is pleonastic in this case. By contrast, when a sentence is in spoken form and the verb involved is one of assertion, the use of that makes clear that the present speaker is making an indirect rather than a direct quotation, such that he is not imputing particular words to the person he describes as having made an assertion; the demonstrative adjective that also does not fit such an example. Also, some writers may use "that" for technical clarity reasons. In some languages, such as French, the word is not optional and should therefore not be considered pleonastic.
The same phenomenon occurs in Spanish with subject pronouns. Since Spanish is a null-subject language, which allows subject pronouns to be deleted when understood, the following sentences mean the same:
""
""
In this case, the pronoun ('I') is grammatically optional; both sentences mean "I love you" (however, they may not have the same tone or intention—this depends on pragmatics rather than grammar). Such differing but syntactically equivalent constructions, in many languages, may also indicate a difference in register.
The process of deleting pronouns is called pro-dropping, and it also happens in many other languages, such as Korean, Japanese, Hungarian, Latin, Italian, Portuguese, Swahili, Slavic languages, and the Lao language.
In contrast, formal English requires an overt subject in each clause. A sentence may not need a subject to have valid meaning, but to satisfy the syntactic requirement for an explicit subject a pleonastic pronoun (or dummy pronoun) is used; only the first sentence in the following pair is acceptable English:
"It's raining."
"Is raining."
In this example the pleonastic "it" fills the subject function, but it contributes no meaning to the sentence. The second sentence, which omits the pleonastic it, is marked as ungrammatical although no meaning is lost by the omission. Elements such as "it" or "there", serving as empty subject markers, are also called (syntactic) expletives, or dummy pronouns. Compare:
"There is rain."
"Today is rain."
The pleonastic ne, expressing uncertainty in formal French, works as follows:
"Je crains qu'il ne pleuve." ('I fear it may rain.')
""('These ideas are harder to understand than I thought.')
Two more striking examples of French pleonastic construction are the word aujourd'hui and the phrase qu'est-ce que c'est.
The word aujourd'hui / au jour d'hui is translated as 'today', but originally means "on the day of today" since the now obsolete hui means "today". The expression au jour d'aujourd'hui (translated as "on the day of today") is common in spoken language and demonstrates that the original construction of aujourd'hui is lost. It is considered a pleonasm.
The phrase qu'est-ce que c'est, meaning 'What's that?' or 'What is it?', literally means "What is it that it is?".
There are examples of the pleonastic, or dummy, negative in English, such as the construction, heard in the New England region of the United States, in which the phrase "So don't I" is intended to have the same positive meaning as "So do I."
When Robert South said, "It is a pleonasm, a figure usual in Scripture, by a multiplicity of expressions to signify one notable thing", he was observing the Biblical Hebrew poetic propensity to repeat thoughts in different words, since written Biblical Hebrew was a comparatively early form of written language and was written using oral patterning, which has many pleonasms. In particular, very many verses of the Psalms are split into two halves, each of which says much the same thing in different words. The complex rules and forms of written language as distinct from spoken language were not as well-developed as they are today when the books making up the Old Testament were written. See also parallelism (rhetoric).
This same pleonastic style remains very common in modern poetry and songwriting (e.g., "Anne, with her father / is out in the boat / riding the water / riding the waves / on the sea", from Peter Gabriel's "Mercy Street").
Types of syntactic pleonasm
Overinflection: Many languages with inflection, as a result of convention, tend to inflect more words in a given phrase than actually needed in order to express a single grammatical property. Take for example the German Die alten Frauen sprechen ("The old women speak"). Even though the use of the plural form of the noun Frauen ("woman" Frau, plural Frauen) shows the grammatical number of the noun phrase, agreement in the German language still dictates that the definite article die, the attributive adjective alten, and the verb sprechen must all also be in the plural. Not all languages are quite as redundant, however, and some permit the omission of number inflection when there is an obvious numerical marker, as is the case with Hungarian, which does have a plural proper, but would express two flowers as two flower. (The same is the case in Celtic languages, where numerical markers precede singular nouns.) The main contrast between Hungarian and other tongues such as German or even English (to a lesser extent) is that in either of the latter, expressing plurality when already evident is not optional, but mandatory; neglecting these rules results in an ungrammatical sentence. As well as for number, our aforementioned German phrase also overinflects for grammatical case.
Multiple negation: In some languages, repeated negation may be used for emphasis, as in the English sentence, "There ain't nothing wrong with that". While a literal interpretation of this sentence would be "There is not nothing wrong with that", i.e. "There is something wrong with that", the intended meaning is, in fact, the opposite: "There is nothing wrong with that" or "There isn't anything wrong with that." The repeated negation is used pleonastically for emphasis. However, this is not always the case. In the sentence "I don't not like it", the repeated negative may be used to convey ambivalence ("I neither like nor dislike it") or even affirmation ("I do like it"). (Rhetorically, this becomes the device of litotes; it can be difficult to distinguish litotes from pleonastic double negation, a feature which may be used for ironic effect.) Although the use of "double negatives" for emphatic purposes is sometimes discouraged in standard English, it is mandatory in other languages like Spanish or French. For example, the Spanish phrase No es nada ('It is nothing') contains both a negated verb ("no es") and another negative, the word for nothing ("nada").
Multiple affirmations: In English, repeated affirmation can be used to add emphasis to an affirmative statement, just as repeated negation can add emphasis to a negative one. A sentence like I do love you, with a stronger intonation on the do, uses double affirmation. This is because English, by default, automatically expresses its sentences in the affirmative and must then alter the sentence in one way or another to express the opposite. Therefore, the sentence I love you is already affirmative, and adding the extra do only adds emphasis and does not change the meaning of the statement.
Double possession: The double genitive of English, as with a friend of mine, is seemingly pleonastic, and therefore has been stigmatized, but it has a long history of use by careful writers and has been analyzed as either a partitive genitive or an appositive genitive.
Multiple quality gradation: In English, different degrees of comparison (comparatives and superlatives) are created through a morphological change to an adjective (e.g., "prettier", "fastest") or a syntactic construction (e.g., "more complex", "most impressive"). It is thus possible to combine both forms for additional emphasis: "more bigger" or "bestest". This may be considered ungrammatical but is common in informal speech for some English speakers. "The most unkindest cut of all" is from Shakespeare's Julius Caesar. Musical notation has a repeated Italian superlative in fortississimo and pianississimo.
Not all uses of constructions such as "more bigger" are pleonastic, however. Some speakers who use such utterances do so in an attempt, albeit a grammatically unconventional one, to create a non-pleonastic construction: A person who says "X is more bigger than Y" may, in the context of a conversation featuring a previous comparison of some object Z with Y, mean "The degree by which X exceeds Y in size is greater than the degree by which Z exceeds Y in size". This usage amounts to the treatment of "bigger than Y" as a single grammatical unit, namely an adjective itself admitting of degrees, such that "X is more bigger than Y" is equivalent to "X is more bigger-than-Y than Z is."[alternatively, "X is bigger than Y more than Z is."] Another common way to express this is: "X is even bigger than Z."
Semantic pleonasm
Semantic pleonasm is a question more of style and usage than of grammar. Linguists usually call this redundancy to avoid confusion with syntactic pleonasm, a more important phenomenon for theoretical linguistics. It usually takes one of two forms: Overlap or prolixity.
Overlap: One word's semantic component is subsumed by the other:
"Receive a free gift with every purchase."; a gift is usually already free.
"A tuna fish sandwich."
"The plumber fixed our hot water heater." (This pleonasm was famously attacked by American comedian George Carlin, but is not truly redundant; a device that increases the temperature of cold water to room temperature would also be a water heater.)
The Big Friendly Giant (title of a children's book by Roald Dahl); giants are inherently already "big".
Prolixity: A phrase may have words which add nothing, or nothing logical or relevant, to the meaning.
"I'm going down south."(South is not really "down", it is just drawn that way on maps by convention.)
"You can't seem to face up to the facts."
"He entered into the room."
"Every mother's child" (as in 'The Christmas Song' by Nat King Cole', also known as 'Chestnuts roasting...'). (Being a child, or a human at all, generally implies being the child of/to a mother. So the redundancy here is used to broaden the context of the child's curiosity regarding the sleigh of Santa Claus, including the concept of maternity. The full line goes: "And every mother's child is gonna spy, to see if reindeer really know how to fly". One can furthermore argue that the word "mother" is included for the purpose of lyrical flow, adding two syllables, which make the line sound complete, as "every child" would be too short to fit the lyrical/rhyme scheme.)
"What therefore God hath joined together, let no man put asunder."
"He raised up his hands in a gesture of surrender."
"Where are you at?"
"Located" or similar before a preposition: "the store is located on Main St." The preposition contains the idea of locatedness and does not need a servant.
"The house itself" for "the house", and similar: unnecessary re-specifiers.
"Actual fact": fact.
"On a daily basis": daily.
"This particular item": this item.
"Different" or "separate" after numbers: for example:
"Four different species" are merely "four species", as two non-different species are together one same species. (However, in "a discount if you buy ten different items", "different" has meaning, because if the ten items include two packets of frozen peas of the same weight and brand, those ten items are not all different.)
"Nine separate cars": cars are always separate.
"Despite the fact that": although.
An expression like "tuna fish", however, might elicit one of many possible responses, such as:
It will simply be accepted as synonymous with "tuna".
It will be perceived as redundant (and thus perhaps silly, illogical, ignorant, inefficient, dialectal, odd, and/or intentionally humorous).
It will imply a distinction. A reader of "tuna fish" could properly wonder: "Is there a kind of tuna which is not a fish? There is, after all, a dolphin mammal and a dolphin fish." This assumption turns out to be correct, as a "tuna" can also mean a prickly pear. Further, "tuna fish" is sometimes used to refer to the flesh of the animal as opposed to the animal itself (similar to the distinction between beef and cattle). Similarly, while all sound-making horns use air, an "air horn" has a special meaning: one that uses compressed air specifically; while most clocks tell time, a "time clock" specifically means one that keeps track of workers' presence at the workplace.
It will be perceived as a verbal clarification, since the word "tuna" is quite short, and may, for example, be misheard as "tune" followed by an aspiration, or (in dialects that drop the final -r sound) as "tuner".
Careful speakers, and writers, too, are aware of pleonasms, especially with cases such as "tuna fish", which is normally used only in some dialects of American English, and would sound strange in other variants of the language, and even odder in translation into other languages.
Similar situations are:
"Ink pen" instead of merely "pen" in the southern United States, where "pen" and "pin" are pronounced similarly.
"Extra accessories" which must be ordered separately for a new camera, as distinct from the accessories provided with the camera as sold.
Not all constructions that are typically pleonasms are so in all cases, nor are all constructions derived from pleonasms themselves pleonastic:
"Put that glass over there on the table." This could, depending on room layout, mean "Put that glass on the table across the room, not the table right in front of you"; if the room were laid out like that, most English speakers would intuitively understand that the distant, not immediate table was the one being referred to; however, if there were only one table in the room, the phrase would indeed be pleonastic. Also, it could mean, "Put that glass on the spot (on the table) which I am gesturing to"; thus, in this case, it is not pleonastic.
"I'm going way down South." This may imply "I'm going much farther south than you might think if I didn't stress the southerliness of my destination"; but such phrasing is also sometimes—and sometimes jokingly—used pleonastically when simply "south" would do; it depends upon the context, the intent of the speaker/writer, and ultimately even on the expectations of the listener/reader.
Morphemic pleonasm
Morphemes, not just words, can enter the realm of pleonasm: Some word-parts are simply optional in various languages and dialects. A familiar example to American English speakers would be the allegedly optional "-al-", probably most commonly seen in "publically" vs. "publicly"—both spellings are considered correct/acceptable in American English, and both pronounced the same, in this dialect, rendering the "publically" spelling pleonastic in US English; in other dialects it is "required", while it is quite conceivable that in another generation or so of American English it will be "forbidden". This treatment of words ending in "-ic", "-ac", etc., is quite inconsistent in US English—compare "maniacally" or "forensically" with "stoicly" or "heroicly"; "forensicly" doesn't look "right" in any dialect, but "heroically" looks internally redundant to many Americans. (Likewise, there are thousands of mostly American Google search results for "eroticly", some in reputable publications, but it does not even appear in the 23-volume, 23,000-page, 500,000-definition Oxford English Dictionary (OED), the largest in the world; and even American dictionaries give the correct spelling as "erotically".) In a more modern pair of words, Institute of Electrical and Electronics Engineers dictionaries say that "electric" and "electrical" mean the same thing. However, the usual adverb form is "electrically". (For example, "The glass rod is electrically charged by rubbing it with silk".)
Some (mostly US-based) prescriptive grammar pundits would say that the "-ly" not "-ally" form is "correct" in any case in which there is no "-ical" variant of the basic word, and vice versa; i.e. "maniacally", not "maniacly", is correct because "maniacal" is a word, while "publicly", not "publically", must be correct because "publical" is (arguably) not a real word (it does not appear in the OED). This logic is in doubt, since most if not all "-ical" constructions arguably are "real" words and most have certainly occurred more than once in "reputable" publications and are also immediately understood by any educated reader of English even if they "look funny" to some, or do not appear in popular dictionaries. Additionally, there are numerous examples of words that have very widely accepted extended forms that have skipped one or more intermediary forms, e.g., "disestablishmentarian" in the absence of "disestablishmentary" (which does not appear in the OED). At any rate, while some US editors might consider "-ally" vs. "-ly" to be pleonastic in some cases, the majority of other English speakers would not, and many "-ally" words are not pleonastic to anyone, even in American English.
The most common definitely pleonastic morphological usage in English is "irregardless", which is very widely criticized as being a non-word. The standard usage is "regardless", which is already negative; adding the additional negative ir- is interpreted by some as logically reversing the meaning to "with regard to/for", which is certainly not what the speaker intended to convey. (According to most dictionaries that include it, "irregardless" appears to derive from confusion between "regardless" and "irrespective", which have overlapping meanings.)
Morphemic pleonasm in Modern Standard Chinese
There are several instances in Chinese vocabulary where pleonasms and cognate objects are present. Their presence usually indicates the plural form of the noun or the noun in a formal context.
('book(s)' – in general)
('paper, tissue, pieces of paper' – formal)
In some instances, the pleonastic form of the verb is used with the intention of emphasizing one meaning of the verb, isolating it from its idiomatic and figurative uses. But over time, the pseudo-object, which sometimes repeats the verb, becomes almost inherently coupled with it.
For example, the word 睡 (shuì, 'to sleep') is an intransitive verb, but may express a different meaning when coupled with objects of prepositions, as in "to sleep with". However, in Mandarin, 睡 is usually coupled with a pseudo-character 觉 (jiào), yet it is not entirely a cognate object, to express the act of resting.
('I want sleep'). Although such usage is not found among native speakers of Mandarin and may sound awkward, this expression is grammatically correct and it is clear that the verb means 'to sleep/to rest' in this context.
('I want to sleep') and ('I'm going to sleep'). In this context, the compound ('to sleep') is a complete verb and native speakers often express themselves this way. Adding this particle removes any suggestion that the verb is being used with a direct object, as shown in the next example:
('I want to have sex with her') and ('I want to sleep with her'). When the verb takes an animate direct object, the meaning changes dramatically. The first instance is mainly seen in colloquial speech. Note that the object of the preposition in "to have sex with" is the equivalent of the direct object of the verb in Mandarin.
One can also avoid this verb by using another one which is not used in idiomatic expressions and does not necessitate a pleonasm, because it has only one meaning:
('I want to "dorm"')
Nevertheless, this is a verb used in high-register diction, just like English verbs with Latin roots.
There is no relationship found between Chinese and English regarding verbs that can take pleonasms and cognate objects. Although the verb to sleep may take a cognate object as in "sleep a restful sleep", it is a pure coincidence, since verbs of this form are more common in Chinese than in English; and when the English verb is used without the cognate objects, its diction is natural and its meaning is clear in every level of diction, as in "I want to sleep" and "I want to have a rest".
Subtler redundancies
In some cases, the redundancy in meaning occurs at the syntactic level above the word, such as at the phrase level:
"It's déjà vu all over again."
"I never make predictions, especially about the future."
The redundancy of these two well-known statements is deliberate, for humorous effect. (See Yogi Berra#"Yogi-isms".) But one does hear educated people say "my predictions about the future of politics" for "my predictions about politics", which are equivalent in meaning. While predictions are necessarily about the future (at least in relation to the time the prediction was made), the nature of this future can be subtle (e.g., "I predict that he died a week ago"—the prediction is about future discovery or proof of the date of death, not about the death itself). Generally "the future" is assumed, making most constructions of this sort pleonastic. The latter humorous quote above about not making predictions—by Yogi Berra—is not really a pleonasm, but rather an ironic play on words.
Alternatively it could be an analogy between predict and guess.
However, "It's déjà vu all over again" could mean that there was earlier another déjà vu of the same event or idea, which has now arisen for a third time; or that the speaker had very recently experienced a déjà vu of a different idea.
Redundancy, and "useless" or "nonsensical" words (or phrases, or morphemes), can also be inherited by one language from the influence of another and are not pleonasms in the more critical sense but actual changes in grammatical construction considered to be required for "proper" usage in the language or dialect in question. Irish English, for example, is prone to a number of constructions that non-Irish speakers find strange and sometimes directly confusing or silly:
"I'm after putting it on the table."('I [have] put it on the table.') This example further shows that the effect, whether pleonastic or only pseudo-pleonastic, can apply to words and word-parts, and multi-word phrases, given that the fullest rendition would be "I am after putting it on the table".
"Have a look at your man there."('Have a look at that man there.') An example of word substitution, rather than addition, that seems illogical outside the dialect. This common possessive-seeming construction often confuses the non-Irish enough that they do not at first understand what is meant. Even "Have a look at that man there" is arguably further doubly redundant, in that a shorter "Look at that man" version would convey essentially the same meaning.
"She's my wife so she is."('She's my wife.') Duplicate subject and verb, post-complement, used to emphasize a simple factual statement or assertion.
All of these constructions originate from the application of Irish Gaelic grammatical rules to the English dialect spoken, in varying particular forms, throughout the island.
Seemingly "useless" additions and substitutions must be contrasted with similar constructions that are used for stress, humor, or other intentional purposes, such as:
"I abso-fuckin'-lutely agree!"(tmesis, for stress)
"Topless-shmopless—nudity doesn't distract me."(shm-reduplication, for humor)
The latter of these is a result of Yiddish influences on modern English, especially East Coast US English.
Sometimes editors and grammatical stylists will use "pleonasm" to describe simple wordiness. This phenomenon is also called prolixity or logorrhea. Compare:
"The sound of the loud music drowned out the sound of the burglary."
"The loud music drowned out the sound of the burglary."
or even:
"The music drowned out the burglary."
The reader or hearer does not have to be told that loud music has a sound, and in a newspaper headline or other abbreviated prose can even be counted upon to infer that "burglary" is a proxy for "sound of the burglary" and that the music necessarily must have been loud to drown it out, unless the burglary was relatively quiet (this is not a trivial issue, as it may affect the legal culpability of the person who played the music); the word "loud" may imply that the music should have been played quietly if at all. Many are critical of the excessively abbreviated constructions of "headline-itis" or "newsspeak", so "loud [music]" and "sound of the [burglary]" in the above example should probably not be properly regarded as pleonastic or otherwise genuinely redundant, but simply as informative and clarifying.
Prolixity is also used to obfuscate, confuse, or euphemize and is not necessarily redundant or pleonastic in such constructions, though it often is. "Post-traumatic stress disorder" (shell shock) and "pre-owned vehicle" (used car) are both tumid euphemisms but are not redundant. Redundant forms, however, are especially common in business, political, and academic language that is intended to sound impressive (or to be vague so as to make it hard to determine what is actually being promised, or otherwise misleading). For example: "This quarter, we are presently focusing with determination on an all-new, innovative integrated methodology and framework for rapid expansion of customer-oriented external programs designed and developed to bring the company's consumer-first paradigm into the marketplace as quickly as possible."
In contrast to redundancy, an oxymoron results when two seemingly contradictory words are adjoined.
Foreign words
Redundancies sometimes take the form of foreign words whose meaning is repeated in the context:
"We went to the El Restaurante restaurant."
"The La Brea tar pits are fascinating."
"Roast beef served with au jus sauce."
"Please R.S.V.P."
"The Schwarzwald Forest is deep and dark."
"The Drakensberg Mountains are in South Africa."
"We will vacation in Timor-Leste."
LibreOffice office suite.
The hoi polloi.
I'd like to have a chai tea.
"That delicious Queso cheese."
"Some salsa sauce on the side?."
These sentences use phrases which mean, respectively, "the restaurant restaurant", "the tar tar", "with juice sauce" and so on. However, many times these redundancies are necessary—especially when the foreign words make up a proper noun as opposed to a common one. For example, "We went to Il Ristorante" is acceptable provided the audience can infer that it is a restaurant. (If they understand Italian and English it might, if spoken, be misinterpreted as a generic reference and not a proper noun, leading the hearer to ask "Which ristorante do you mean?"—such confusions are common in richly bilingual areas like Montreal or the American Southwest when mixing phrases from two languages.) But avoiding the redundancy of the Spanish phrase in the second example would only leave an awkward alternative: "La Brea pits are fascinating".
Most find it best to not even drop articles when using proper nouns made from foreign languages:
"The movie is playing at the El Capitan theater."
However, there are some exceptions to this, for example:
"Jude Bellingham plays for Real Madrid in La Liga." ("La Liga" literally means "The League" in Spanish)
This is also similar to the treatment of definite and indefinite articles in titles of books, films, etc. where the article can—some would say must—be present where it would otherwise be "forbidden":
"Stephen King's The Shining is scary."(Normally, the article would be left off following a possessive.)
"I'm having an An American Werewolf in London movie night at my place."(Seemingly doubled article, which would be taken for a stutter or typographical error in other contexts.)
Some cross-linguistic redundancies, especially in placenames, occur because a word in one language became the title of a place in another (e.g., the Sahara Desert—"Sahara" is an English approximation of the word for "deserts" in Arabic). "The Los Angeles Angels" professional baseball team is literally "the The Angels Angels". A supposed extreme example is Torpenhow Hill in Cumbria, where some of the elements in the name likely mean "hill". See the List of tautological place names for many more examples.
The word tsetse means "fly" in the Tswana language, a Bantu language spoken in Botswana and South Africa. This word is the root of the English name for a biting fly found in Africa, the tsetse fly.
Acronyms and initialisms
Acronyms and initialisms can also form the basis for redundancies; this is known humorously as RAS syndrome (for Redundant Acronym Syndrome syndrome). In all the examples that follow, the word after the acronym repeats a word represented in the acronym. The full redundant phrase is stated in the parentheses that follow each example:
"I forgot my PIN number for the ATM machine." (Personal Identification Number number; Automated Teller Machine machine)
"I upgraded the RAM memory of my computer." (Random Access Memory memory)
"She is infected with the HIV virus." (Human Immunodeficiency Virus virus)
"I have installed a CMS system on my server." (Content Management System system)
"The SI system of units is the modern form of the metric system." (International System system)
(See RAS syndrome for many more examples.) The expansion of an acronym like PIN or HIV may be well known to English speakers, but the acronyms themselves have come to be treated as words, so little thought is given to what their expansion is (and "PIN" is also pronounced the same as the word "pin"; disambiguation is probably the source of "PIN number"; "SIN number" for "Social Insurance Number number" is a similar common phrase in Canada.) But redundant acronyms are more common with technical (e.g., computer) terms where well-informed speakers recognize the redundancy and consider it silly or ignorant, but mainstream users might not, since they may not be aware or certain of the full expansion of an acronym like "RAM".
Typographical
Some redundancies are simply typographical. For instance, when a short function word like "the" occurs at the end of a line, it is very common to accidentally repeat it at the beginning of the following line, and a large number of readers would not even notice it.
Apparent redundancies that actually are not redundant
Carefully constructed expressions, especially in poetry and political language, but also some general usages in everyday speech, may appear to be redundant but are not. This is most common with cognate objects (a verb's object that is cognate with the verb):
"She slept a deep sleep."
Or, a classic example from Latin:
mutatis mutandis = "with change made to what needs to be changed" (an ablative absolute construction)
The words need not be etymologically related, but simply conceptually, to be considered an example of cognate object:
"We wept tears of joy."
Such constructions are not actually redundant (unlike "She slept a sleep" or "We wept tears") because the object's modifiers provide additional information. A rarer, more constructed form is polyptoton, the stylistic repetition of the same word or words derived from the same root:
"...[T]he only thing we have to fear is fear itself." — Franklin D. Roosevelt, "First Inaugural Address", March 1933.
"With eager feeding[,] food doth choke the feeder." — William Shakespeare, Richard II, II, i, 37.
As with cognate objects, these constructions are not redundant because the repeated words or derivatives cannot be removed without removing meaning or even destroying the sentence, though in most cases they could be replaced with non-related synonyms at the cost of style (e.g., compare "The only thing we have to fear is terror".)
Semantic pleonasm and context
In many cases of semantic pleonasm, the status of a word as pleonastic depends on context. The relevant context can be as local as a neighboring word, or as global as the extent of a speaker's knowledge. In fact, many examples of redundant expressions are not inherently redundant, but can be redundant if used one way, and are not redundant if used another way. The "up" in "climb up" is not always redundant, as in the example "He climbed up and then fell down the mountain." Many other examples of pleonasm are redundant only if the speaker's knowledge is taken into account. For example, most English speakers would agree that "tuna fish" is redundant because tuna is a kind of fish. However, given the knowledge that "tuna" can also refer to a kind of edible prickly pear, the "fish" in "tuna fish" can be seen as non-pleonastic, but rather a disambiguator between the fish and the prickly pear.
Conversely, to English speakers who do not know Spanish, there is nothing redundant about "the La Brea tar pits" because the name "La Brea" is opaque: the speaker does not know that it is Spanish for "the tar" and thus "the La Brea Tar Pits" translates to "the the tar tar pits". Similarly, even though scuba stands for "self-contained underwater breathing apparatus", a phrase like "the scuba gear" would probably not be considered pleonastic because "scuba" has been reanalyzed into English as a simple word, and not an acronym suggesting the pleonastic word sequence "apparatus gear". (Most do not even know that it is an acronym and do not spell it SCUBA or S.C.U.B.A. Similar examples are radar and laser.)
Rote learning | Rote learning is a memorization technique based on repetition. The method rests on the premise that the recall of repeated material becomes faster the more one repeats it. Some of the alternatives to rote learning include meaningful learning, associative learning, spaced repetition and active learning.
Versus critical thinking
Rote learning is widely used in the mastery of foundational knowledge. Examples of school topics where rote learning is frequently used include phonics in reading, the periodic table in chemistry, multiplication tables in mathematics, anatomy in medicine, cases or statutes in law, basic formulae in any science, etc. By definition, rote learning eschews comprehension, so by itself it is an ineffective tool in mastering any complex subject at an advanced level. One common illustration of rote learning is preparing quickly for exams, a technique colloquially referred to as "cramming".
Rote learning is sometimes disparaged with the derogative terms parrot fashion, regurgitation, cramming, or mugging because one who engages in rote learning may give the wrong impression of having understood what they have written or said. It is strongly discouraged by many new curriculum standards. For example, science and mathematics standards in the United States specifically emphasize the importance of deep understanding over the mere recall of facts, which is seen to be less important. The National Council of Teachers of Mathematics stated: More than ever, mathematics must include the mastery of concepts instead of mere memorization and the following of procedures. More than ever, school mathematics must include an understanding of how to use technology to arrive meaningfully at solutions to problems instead of endless attention to increasingly outdated computational tedium.However, advocates of traditional education have criticized the new American standards as slighting learning basic facts and elementary arithmetic, and replacing content with process-based skills. In math and science, rote methods are often used, for example to memorize formulas. There is greater understanding if students commit a formula to memory through exercises that use the formula rather than through rote repetition of the formula. Newer standards often recommend that students derive formulas themselves to achieve the best understanding. Nothing is faster than rote learning if a formula must be learned quickly for an imminent test and rote methods can be helpful for committing an understood fact to memory. However, students who learn with understanding are able to transfer their knowledge to tasks requiring problem-solving with greater success than those who learn only by rote.
On the other side, those who disagree with the inquiry-based philosophy maintain that students must first develop computational skills before they can understand concepts of mathematics. These people would argue that time is better spent practicing skills rather than in investigations inventing alternatives, or justifying more than one correct answer or method. In this view, estimating answers is insufficient and, in fact, is considered to be dependent on strong foundational skills. Learning abstract concepts of mathematics is perceived to depend on a solid base of knowledge of the tools of the subject. Thus, these people believe that rote learning is an important part of the learning process.
In computer science
Rote learning is also used to describe a simple learning pattern used in machine learning, although it does not involve repetition, unlike the usual meaning of rote learning. The machine is programmed to keep a history of calculations and compare new input against its history of inputs and outputs, retrieving the stored output if present. This pattern requires that the machine can be modeled as a pure function (always producing the same output for the same input) and can be formally described as follows:
f(x) → y → store((x), (y))
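A minimal sketch of this pattern in Python; the dictionary history and the example function square are illustrative only, not part of any particular system:

```python
# Rote learning as memoization: keep a history of (input, output) pairs
# and return the stored output whenever the same input is seen again.
history = {}

def rote_learn(f, x):
    if x in history:              # input already seen: retrieve the stored output
        return history[x]
    y = f(x)                      # otherwise compute the output once...
    history[x] = y                # ...and store the (input, output) pair
    return y

def square(x):                    # a pure function: same input, same output
    return x * x

print(rote_learn(square, 4))      # computed and stored -> 16
print(rote_learn(square, 4))      # retrieved from history -> 16
```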
Rote learning was used by Arthur Samuel's checkers-playing program on an IBM 701, a milestone in the use of artificial intelligence.
Learning methods for school
The flashcard, outline, and mnemonic device are traditional tools for memorizing course material and are examples of rote learning.
Variable renewable energy | Variable renewable energy (VRE) or intermittent renewable energy sources (IRES) are renewable energy sources that are not dispatchable due to their fluctuating nature, such as wind power and solar power, as opposed to controllable renewable energy sources, such as dammed hydroelectricity or bioenergy, or relatively constant sources, such as geothermal power.
The use of small amounts of intermittent power has little effect on grid operations. Using larger amounts of intermittent power may require upgrades or even a redesign of the grid infrastructure.
Options to absorb large shares of variable energy into the grid include using storage, improved interconnection between different variable sources to smooth out supply, using dispatchable energy sources such as hydroelectricity and having overcapacity, so that sufficient energy is produced even when weather is less favourable. More connections between the energy sector and the building, transport and industrial sectors may also help.
Background and terminology
The penetration of intermittent renewables in most power grids is low: global electricity generation in 2021 was 7% wind and 4% solar. However, in 2021 Denmark, Luxembourg and Uruguay generated over 40% of their electricity from wind and solar. Characteristics of variable renewables include their unpredictability, variability, and low operating costs. These, along with renewables typically being asynchronous generators, provide a challenge to grid operators, who must make sure supply and demand are matched. Solutions include energy storage, demand response, availability of overcapacity and sector coupling. Smaller isolated grids may be less tolerant to high levels of penetration.
Matching power demand to supply is not a problem specific to intermittent power sources. Existing power grids already contain elements of uncertainty including sudden and large changes in demand and unforeseen power plant failures. Though power grids are already designed to have some capacity in excess of projected peak demand to deal with these problems, significant upgrades may be required to accommodate large amounts of intermittent power.
Several key terms are useful for understanding the issue of intermittent power sources. These terms are not standardized, and variations may be used. Most of these terms also apply to traditional power plants.
Intermittency or variability is the extent to which a power source fluctuates. This has two aspects: a predictable variability, such as the day-night cycle, and an unpredictable part (imperfect local weather forecasting). The term intermittent can be used to refer to the unpredictable part, with variable then referring to the predictable part.
Dispatchability is the ability of a given power source to increase and decrease output quickly on demand. The concept is distinct from intermittency; dispatchability is one of several ways system operators match supply (generator's output) to system demand (technical loads).
Penetration is the amount of electricity generated from a particular source as a percentage of annual consumption.
Nominal power or nameplate capacity is the theoretical output registered with authorities for classifying the unit. For intermittent power sources, such as wind and solar, nameplate power is the source's output under ideal conditions, such as maximum usable wind or high sun on a clear summer day.
Capacity factor, average capacity factor, or load factor is the ratio of actual electrical generation over a given period of time, usually a year, to the maximum possible generation in that time period. In other words, it is the ratio between how much electricity a plant produced and how much it would have produced had it been running at its nameplate capacity for the entire time period (a worked example follows this list of terms).
Firm capacity or firm power is "guaranteed by the supplier to be available at all times during a period covered by a commitment".
Capacity credit: the amount of conventional (dispatchable) generation power that can be potentially removed from the system while keeping the reliability, usually expressed as a percentage of the nominal power.
Foreseeability or predictability is how accurately the operator can anticipate the generation: for example tidal power varies with the tides but is completely foreseeable because the orbit of the moon can be predicted exactly, and improved weather forecasts can make wind power more predictable.
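As an illustration of capacity factor (using hypothetical round figures), a 100 MW wind farm that generates 262,800 MWh over a year of 8,760 hours has

\[
\text{capacity factor} = \frac{262{,}800\ \text{MWh}}{100\ \text{MW} \times 8760\ \text{h}} = 0.30 = 30\%.
\]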
Sources
Dammed hydroelectricity, biomass and geothermal are dispatchable as each has a store of potential energy; wind and solar without storage can be decreased (curtailed) but are not dispatchable.
Wind power
Grid operators use day ahead forecasting to determine which of the available power sources to use the next day, and weather forecasting is used to predict the likely wind power and solar power output available. Although wind power forecasts have been used operationally for decades, the IEA is organizing international collaboration to further improve their accuracy.
Wind-generated power is a variable resource, and the amount of electricity produced at any given point in time by a given plant will depend on wind speeds, air density, and turbine characteristics, among other factors. If wind speed is too low then the wind turbines will not be able to make electricity, and if it is too high the turbines will have to be shut down to avoid damage. While the output from a single turbine can vary greatly and rapidly as local wind speeds vary, as more turbines are connected over larger and larger areas the average power output becomes less variable.
Intermittence: Regions smaller than the synoptic scale (less than about 1000 km across, roughly the size of an average country) have mostly the same weather and thus around the same wind power, unless local conditions favor special winds. Some studies show that wind farms spread over a geographically diverse area will as a whole rarely stop producing power altogether. This is rarely the case for smaller areas with uniform geography, such as Ireland, Scotland and Denmark, which have several days per year with little wind power.
Capacity factor: Wind power typically has an annual capacity factor of 25–50%, with offshore wind outperforming onshore wind.
Dispatchability: Because wind power is not by itself dispatchable, wind farms are sometimes built with storage.
Capacity credit: At low levels of penetration, the capacity credit of wind is about the same as the capacity factor. As the concentration of wind power on the grid rises, the capacity credit percentage drops.
Variability: Site dependent. Sea breezes are much more constant than land breezes. Seasonal variability may reduce output by 50%.
Reliability: A wind farm has high technical reliability when the wind blows. That is, the output at any given time will only vary gradually due to falling wind speeds or storms, the latter necessitating shut downs. A typical wind farm is unlikely to have to shut down in less than half an hour at the extreme, whereas an equivalent-sized power station can fail totally instantaneously and without warning. The total shutdown of wind turbines is predictable via weather forecasting. The average availability of a wind turbine is 98%, and when a turbine fails or is shut down for maintenance it only affects a small percentage of the output of a large wind farm.
Predictability: Although wind is variable, it is also predictable in the short term. There is an 80% chance that wind output will change less than 10% in an hour and a 40% chance that it will change 10% or more in 5 hours.
Because wind power is generated by large numbers of small generators, individual failures do not have large impacts on power grids. This feature of wind has been referred to as resiliency.
Solar power
Intermittency inherently affects solar energy, as the production of renewable electricity from solar sources depends on the amount of sunlight at a given place and time. Solar output varies throughout the day and through the seasons, and is affected by dust, fog, cloud cover, frost or snow. Many of the seasonal factors are fairly predictable, and some solar thermal systems make use of heat storage to produce grid power for a full day.
Variability: In the absence of an energy storage system, solar does not produce power at night, little in bad weather and varies between seasons. In many countries, solar produces most energy in seasons with low wind availability and vice versa.
Capacity factor: Standard photovoltaic solar has an annual average capacity factor of 10-20%, but panels that move and track the sun have a capacity factor of up to 30%. Thermal solar parabolic trough with storage reaches 56%, and thermal solar power tower with storage 73%.
The impact of intermittency of solar-generated electricity will depend on the correlation of generation with demand. For example, solar thermal power plants such as Nevada Solar One are somewhat matched to summer peak loads in areas with significant cooling demands, such as the south-western United States. Thermal energy storage systems like the small Spanish Gemasolar Thermosolar Plant can improve the match between solar supply and local consumption. The improved capacity factor using thermal storage represents a decrease in maximum capacity, and extends the total time the system generates power.
Run-of-the-river hydroelectricity
In many countries new large dams are no longer being built, because of the environmental impact of reservoirs. Run of the river projects have continued to be built. The absence of a reservoir results in both seasonal and annual variations in electricity generated.
Tidal power
Tidal power is the most predictable of all the variable renewable energy sources. The tides reverse twice a day, but they are never interrupted; on the contrary, they are completely predictable and reliable.
Wave power
Waves are primarily created by wind, so the power available from waves tends to follow that available from wind, but, due to the mass of the water, it is less variable than wind power. Wind power is proportional to the cube of the wind speed, while wave power is proportional to the square of the wave height.
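Written as proportionalities (a simplified sketch that omits constants such as air and water density), with v the wind speed and H the wave height:

\[
P_{\text{wind}} \propto v^{3}, \qquad P_{\text{wave}} \propto H^{2}.
\]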
Solutions for their integration
The displaced dispatchable generation could be coal, natural gas, biomass, nuclear, geothermal or storage hydro. Rather than starting and stopping nuclear or geothermal, it is cheaper to use them as constant base load power. Any power generated in excess of demand can displace heating fuels, be converted to storage or sold to another grid. Biofuels and conventional hydro can be saved for later when intermittent sources are not generating power. Some forecast that "near-firm" renewable power (solar and/or wind combined with batteries) will be cheaper than existing nuclear by the late 2020s, and therefore argue that base load power will not be needed.
Alternatives to burning coal and natural gas which produce fewer greenhouse gases may eventually make fossil fuels a stranded asset that is left in the ground. Highly integrated grids favor flexibility and performance over cost, resulting in more plants that operate for fewer hours and lower capacity factors.
All sources of electrical power have some degree of variability, as do demand patterns which routinely drive large swings in the amount of electricity that suppliers feed into the grid. Wherever possible, grid operating procedures are designed to match supply with demand at high levels of reliability, and the tools to influence supply and demand are well-developed. The introduction of large amounts of highly variable power generation may require changes to existing procedures and additional investments.
A reliable renewable power supply can be achieved through the use of backup or extra infrastructure and technology, using mixed renewables to produce electricity above the intermittent average, which may be used to meet regular and unanticipated supply demands. Additionally, the storage of energy to fill the shortfall caused by intermittency, or for emergencies, can be part of a reliable power supply.
In practice, as the power output from wind varies, partially loaded conventional plants, which are already present to provide response and reserve, adjust their output to compensate. While low penetrations of intermittent power may use existing levels of response and spinning reserve, the larger overall variations at higher penetration levels will require additional reserves or other means of compensation.
Operational reserve
All managed grids already have existing operational and "spinning" reserve to compensate for existing uncertainties in the power grid. The addition of intermittent resources such as wind does not require 100% "back-up" because operating reserves and balancing requirements are calculated on a system-wide basis, and not dedicated to a specific generating plant.
Some gas or hydro power plants are partially loaded and then controlled to change output as demand changes or to replace rapidly lost generation. The ability to change as demand changes is termed "response". The ability to quickly replace lost generation, typically within timescales of 30 seconds to 30 minutes, is termed "spinning reserve".
Generally thermal plants running as peaking plants will be less efficient than if they were running as base load. Hydroelectric facilities with storage capacity, such as the traditional dam configuration, may be operated as base load or peaking plants.
Grids can contract for grid battery plants, which provide immediately available power for an hour or so, which gives time for other generators to be started up in the event of a failure, and greatly reduces the amount of spinning reserve required.
Demand response
Demand response is a change in the consumption of energy to better align with supply. It can take the form of switching off loads or absorbing additional energy to correct supply/demand imbalances. Incentives such as favorable rates or capital cost assistance have been widely created in the American, British and French systems, encouraging consumers with large loads to take them offline whenever there is a shortage of capacity, or conversely to increase load when there is a surplus.
Certain types of load control allow the power company to turn loads off remotely if insufficient power is available. In France, large users such as CERN cut power usage as required by the system operator (EDF), under the encouragement of the EJP tariff.
Energy demand management refers to incentives to adjust use of electricity, such as higher rates during peak hours. Real-time variable electricity pricing can encourage users to adjust usage to take advantage of periods when power is cheaply available and avoid periods when it is more scarce and expensive. Some loads such as desalination plants, electric boilers and industrial refrigeration units, are able to store their output (water and heat). Several papers also concluded that Bitcoin mining loads would reduce curtailment, hedge electricity price risk, stabilize the grid, increase the profitability of renewable energy power stations and therefore accelerate transition to sustainable energy. But others argue that Bitcoin mining can never be sustainable.
Instantaneous demand reduction. Most large systems also have a category of loads which instantly disconnect when there is a generation shortage, under some mutually beneficial contract. This can give instant load reductions or increases.
Storage
At times of low load where non-dispatchable output from wind and solar may be high, grid stability requires lowering the output of various dispatchable generating sources or even increasing controllable loads, possibly by using energy storage to time-shift output to times of higher demand. Such mechanisms can include:
Pumped storage hydropower is the most prevalent existing technology used, and can substantially improve the economics of wind power. The availability of hydropower sites suitable for storage will vary from grid to grid. Typical round trip efficiency is 80%.
Traditional lithium-ion is the most common type used for grid-scale battery storage. Rechargeable flow batteries can serve as a large-capacity, rapid-response storage medium. Hydrogen can be created through electrolysis and stored for later use.
Flywheel energy storage systems have some advantages over chemical batteries. Along with substantial durability, which allows them to be cycled frequently without noticeable life reduction, they also have very fast response and ramp rates. They can go from full discharge to full charge within a few seconds. They can be manufactured from non-toxic and environmentally friendly materials and are easily recyclable once their service life is over.
Thermal energy storage stores heat. Stored heat can be used directly for heating needs or converted into electricity. In the context of a CHP plant, heat storage can serve as functional electricity storage at comparably low cost.
Ice storage air conditioning: ice can be stored interseasonally and used as a source of air-conditioning during periods of high demand. Present systems only need to store ice for a few hours but are well developed.
Storage of electrical energy results in some lost energy because storage and retrieval are not perfectly efficient. Storage also requires capital investment and space for storage facilities.
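As a simple illustration of these losses (assuming the roughly 80% round-trip efficiency cited above for pumped storage; figures hypothetical), storing 10 MWh of surplus generation returns

\[
E_{\text{out}} = \eta\, E_{\text{in}} = 0.80 \times 10\ \text{MWh} = 8\ \text{MWh},
\]

so 2 MWh are lost for every 10 MWh cycled through storage.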
Geographic diversity and complementing technologies
The variability of production from a single wind turbine can be high. Combining any additional number of turbines, for example, in a wind farm, results in lower statistical variation, as long as the correlation between the output of each turbine is imperfect, and the correlations are always imperfect due to the distance between each turbine. Similarly, geographically distant wind turbines or wind farms have lower correlations, reducing overall variability. Since wind power is dependent on weather systems, there is a limit to the benefit of this geographic diversity for any power system.
Multiple wind farms spread over a wide geographic area and gridded together produce power more constantly and with less variability than smaller installations. Wind output can be predicted with some degree of confidence using weather forecasts, especially from large numbers of turbines/farms. The ability to predict wind output is expected to increase over time as data is collected, especially from newer facilities.
Electricity produced from solar energy tends to counterbalance the fluctuating supplies generated from wind. Normally it is windiest at night and during cloudy or stormy weather, and there is more sunshine on clear days with less wind. In addition, wind energy often peaks in the winter season, whereas solar energy peaks in the summer season; the combination of wind and solar reduces the need for dispatchable backup power.
In some locations, electricity demand may have a high correlation with wind output, particularly in locations where cold temperatures drive electric consumption, as cold air is denser and carries more energy.
The allowable penetration may be increased with further investment in standby generation. For instance, on some days intermittent wind could supply 80% of demand, while on the many windless days dispatchable power such as natural gas, biomass and hydro could substitute for it.
Areas with existing high levels of hydroelectric generation may ramp up or down to incorporate substantial amounts of wind. Norway, Brazil, and Manitoba all have high levels of hydroelectric generation, Quebec produces over 90% of its electricity from hydropower, and Hydro-Québec is the largest hydropower producer in the world. The U.S. Pacific Northwest has been identified as another region where wind energy is complemented well by existing hydropower. Storage capacity in hydropower facilities will be limited by size of reservoir, and environmental and other considerations.
Connecting grid internationally
It is often feasible to export energy to neighboring grids at times of surplus, and import energy when needed. This practice is common in Europe and between the US and Canada. Integration with other grids can lower the effective concentration of variable power: for instance, Denmark's high penetration of VRE, in the context of the German/Dutch/Scandinavian grids with which it has interconnections, is considerably lower as a proportion of the total system. Hydroelectricity that compensates for variability can be used across countries.
The capacity of power transmission infrastructure may have to be substantially upgraded to support export/import plans. Some energy is lost in transmission. The economic value of exporting variable power depends in part on the ability of the exporting grid to provide the importing grid with useful power at useful times for an attractive price.
Sector coupling
Demand and generation can be better matched when sectors such as mobility, heat and gas are coupled with the power system. The electric vehicle market is for instance expected to become the largest source of storage capacity. This may be a more expensive option appropriate for high penetration of variable renewables, compared to other sources of flexibility. The International Energy Agency says that sector coupling is needed to compensate for the mismatch between seasonal demand and supply.
Electric vehicles can be charged during periods of low demand and high production, and in some places send power back from the vehicle-to-grid.
Penetration
Penetration refers to the proportion of a primary energy (PE) source in an electric power system, expressed as a percentage. There are several methods of calculation, yielding different penetrations; a short numerical comparison follows the list below. The penetration can be calculated either as:
the nominal capacity (installed power) of a PE source divided by the peak load within an electric power system; or
the nominal capacity (installed power) of a PE source divided by the total capacity of the electric power system; or
the electrical energy generated by a PE source in a given period, divided by the demand of the electric power system in this period.
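As a rough illustration of how the three definitions above can diverge for the same system, the Python sketch below uses purely hypothetical capacity, load and energy figures; none of the numbers describe a real grid.

```python
# Hypothetical figures for a small grid, chosen only to show how the three measures
# can differ for the same system; none of these numbers describe a real grid.
wind_capacity_mw = 500.0     # nominal (installed) wind capacity
peak_load_mw = 2_000.0       # peak load of the power system
total_capacity_mw = 3_000.0  # total installed generating capacity
wind_energy_gwh = 1_300.0    # wind energy generated over a year
demand_gwh = 12_000.0        # electricity demand over the same year

penetrations = {
    "capacity / peak load": wind_capacity_mw / peak_load_mw,
    "capacity / total capacity": wind_capacity_mw / total_capacity_mw,
    "energy / demand": wind_energy_gwh / demand_gwh,
}
for name, value in penetrations.items():
    print(f"{name}: {value:.1%}")
```

With these assumed figures the same wind fleet registers as roughly 25%, 17% or 11% penetration depending on the definition chosen, which is why quoted penetration levels should always state the method of calculation.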
The level of penetration of intermittent variable sources is significant for the following reasons:
Power grids with significant amounts of dispatchable pumped storage, hydropower with reservoir or pondage or other peaking power plants such as natural gas-fired power plants are capable of accommodating fluctuations from intermittent power more easily.
Relatively small electric power systems without strong interconnection (such as remote islands) may retain some existing diesel generators, run on less fuel, to provide flexibility until cleaner energy sources or storage such as pumped hydro or batteries become cost-effective.
In the early 2020s wind and solar produced about 10% of the world's electricity, but supply in the 40–55% penetration range has already been achieved in several systems, with over 65% planned for the UK by 2030.
There is no generally accepted maximum level of penetration, as each system's capacity to compensate for intermittency differs, and the systems themselves will change over time. Discussion of acceptable or unacceptable penetration figures should be treated and used with caution, as the relevance or significance will be highly dependent on local factors, grid structure and management, and existing generation capacity.
For most systems worldwide, existing penetration levels are significantly lower than practical or theoretical maximums.
Maximum penetration limits
Maximum penetration of combined wind and solar is estimated at around 70% to 90% without regional aggregation, demand management or storage; and up to 94% with 12 hours of storage. Economic efficiency and cost considerations are more likely to dominate as critical factors; technical solutions may allow higher penetration levels to be considered in future, particularly if cost considerations are secondary.
Economic impacts of variability
Estimates of the cost of wind and solar energy may include estimates of the "external" costs of wind and solar variability, or be limited to the cost of production. All electrical plant has costs that are separate from the cost of production, including, for example, the cost of any necessary transmission capacity or reserve capacity in case of loss of generating capacity. Many types of generation, particularly fossil fuel derived, will have cost externalities such as pollution, greenhouse gas emission, and habitat destruction, which are generally not directly accounted for.
The magnitude of the economic impacts is debated and will vary by location, but is expected to rise with higher penetration levels. At low penetration levels, costs such as operating reserve and balancing costs are believed to be insignificant.
Intermittency may introduce additional costs that are distinct from or of a different magnitude than for traditional generation types. These may include:
Transmission capacity: transmission capacity may be more expensive than for nuclear and coal generating capacity due to lower load factors. Transmission capacity will generally be sized to projected peak output, but average capacity for wind will be significantly lower, raising cost per unit of energy actually transmitted. However transmission costs are a low fraction of total energy costs.
Additional operating reserve: if additional wind and solar do not correspond to demand patterns, additional operating reserve may be required compared to other generating types; however, this does not result in higher capital costs for additional plants, since it is merely existing plants running at low output (spinning reserve). Contrary to statements that all wind must be backed by an equal amount of "back-up capacity", intermittent generators contribute to base capacity "as long as there is some probability of output during peak periods". Back-up capacity is not attributed to individual generators, as back-up or operating reserve "only have meaning at the system level".
Balancing costs: to maintain grid stability, some additional costs may be incurred for balancing of load with demand. Although improvements to grid balancing can be costly, they can lead to long term savings.
In many countries for many types of variable renewable energy, from time to time the government invites companies to tender sealed bids to construct a certain capacity of solar power to connect to certain electricity substations. By accepting the lowest bid the government commits to buy at that price per kWh for a fixed number of years, or up to a certain total amount of power. This provides certainty for investors against highly volatile wholesale electricity prices. However they may still risk exchange rate volatility if they borrowed in foreign currency.
Examples by country
Great Britain
The operator of the British electricity system has said that it will be capable of operating the system with zero carbon by 2025, whenever there is enough renewable generation, and that it may be carbon negative by 2033. The company, National Grid Electricity System Operator, states that new products and services will help reduce the overall cost of operating the system.
Germany
In countries with a considerable amount of renewable energy, solar energy causes price drops around noon every day. PV production follows the higher demand during these hours. This pattern was visible, for example, over two weeks in 2022 in Germany, where renewable energy has a share of over 40%. Prices also drop every night and at weekends due to low demand. In hours without PV and wind power, electricity prices rise. This can lead to demand-side adjustments. While industry responds to the hourly prices, most private households still pay a fixed tariff. With smart meters, private consumers can also be encouraged, for example, to charge an electric car when enough renewable energy is available and prices are low.
Steerable flexibility in electricity production is essential to back up variable energy sources. The German example shows that pumped hydro storage, gas plants and hard coal can ramp up quickly, while lignite varies on a daily basis. Nuclear power and biomass can in theory adjust to a certain extent; in practice, however, the incentives to do so still seem not to be high enough.
See also
Combined cycle hydrogen power plant
Cost of electricity by source
Energy security and renewable technology
Ground source heat pump
List of energy storage power plants
Spark spread: calculating the cost of back up
References
Further reading
External links
Grid Integration of Wind Energy
Electric power distribution
Energy storage
Renewable energy | 0.788147 | 0.984548 | 0.775969 |
Astrophysical jet | An astrophysical jet is an astronomical phenomenon where outflows of ionised matter are emitted as extended beams along the axis of rotation. When this greatly accelerated matter in the beam approaches the speed of light, astrophysical jets become relativistic jets as they show effects from special relativity.
The formation and powering of astrophysical jets are highly complex phenomena that are associated with many types of high-energy astronomical sources. They likely arise from dynamic interactions within accretion disks, whose active processes are commonly connected with compact central objects such as black holes, neutron stars or pulsars. One explanation is that tangled magnetic fields are organised to aim two diametrically opposing beams away from the central source by angles only several degrees wide. Jets may also be influenced by a general relativity effect known as frame-dragging.
Most of the largest and most active jets are created by supermassive black holes (SMBH) in the centre of active galaxies such as quasars and radio galaxies or within galaxy clusters. Such jets can exceed millions of parsecs in length. Other astronomical objects that contain jets include cataclysmic variable stars, X-ray binaries and gamma-ray bursts (GRB). Jets on a much smaller scale (~parsecs) may be found in star forming regions, including T Tauri stars and Herbig–Haro objects; these objects are partially formed by the interaction of jets with the interstellar medium. Bipolar outflows may also be associated with protostars, or with evolved post-AGB stars, planetary nebulae and bipolar nebulae.
Relativistic jets
Relativistic jets are beams of ionised matter accelerated close to the speed of light. Most have been observationally associated with the central black holes of some active galaxies, radio galaxies or quasars, as well as with galactic stellar black holes, neutron stars or pulsars. Beam lengths may extend between several thousand, hundreds of thousands or millions of parsecs. Jet velocities when approaching the speed of light show significant effects of the special theory of relativity; for example, relativistic beaming that changes the apparent beam brightness.
Massive central black holes in galaxies have the most powerful jets, but their structure and behaviours are similar to those of smaller galactic neutron stars and black holes. These smaller galactic systems are often called microquasars and show a large range of velocities. The jet of SS 433, for example, has a mean velocity of 0.26c. Relativistic jet formation may also explain observed gamma-ray bursts, which have the most relativistic jets known, being ultrarelativistic.
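As a minimal numerical sketch of the relativistic effects mentioned above, the snippet below computes the Lorentz factor and the relativistic Doppler (beaming) factor δ = 1/(γ(1 − β cos θ)) for an SS 433-like speed of 0.26c and for a much faster jet; the viewing angles are assumed purely for illustration.

```python
import math

def lorentz_gamma(beta):
    """Lorentz factor for a speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

def doppler_factor(beta, theta_deg):
    """Relativistic Doppler factor delta = 1 / (gamma * (1 - beta*cos(theta)))
    for a source moving at speed beta*c at an angle theta to the line of sight."""
    theta = math.radians(theta_deg)
    return 1.0 / (lorentz_gamma(beta) * (1.0 - beta * math.cos(theta)))

# An SS 433-like speed (~0.26c) versus a highly relativistic jet; viewing angles assumed.
for beta, theta in [(0.26, 20.0), (0.99, 5.0)]:
    print(f"beta = {beta}: gamma = {lorentz_gamma(beta):.2f}, "
          f"Doppler factor = {doppler_factor(beta, theta):.2f}")
```

The Doppler factor grows rapidly as the speed approaches c and the jet points close to the line of sight, which is one reason approaching jets appear far brighter than their receding counterparts.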
Mechanisms behind the composition of jets remain uncertain, though some studies favour models where jets are composed of an electrically neutral mixture of nuclei, electrons, and positrons, while others are consistent with jets composed of positron–electron plasma. Trace nuclei swept up in a relativistic positron–electron jet would be expected to have extremely high energy, as these heavier nuclei should attain velocity equal to the positron and electron velocity.
Rotation as possible energy source
Because of the enormous amount of energy needed to launch a relativistic jet, some jets are possibly powered by spinning black holes. However, the frequency of high-energy astrophysical sources with jets suggests combinations of different mechanisms indirectly identified with the energy within the associated accretion disk and X-ray emissions from the generating source. Two early theories have been used to explain how energy can be transferred from a black hole into an astrophysical jet:
Blandford–Znajek process. This theory explains the extraction of energy from magnetic fields around an accretion disk, which are dragged and twisted by the spin of the black hole. Relativistic material is then feasibly launched by the tightening of the field lines.
Penrose mechanism. Here energy is extracted from a rotating black hole by frame dragging, which was later theoretically proven by Reva Kay Williams to be able to extract relativistic particle energy and momentum, and subsequently shown to be a possible mechanism for jet formation. This effect includes using general relativistic gravitomagnetism.
Relativistic jets from neutron stars
Jets may also be observed from spinning neutron stars. An example is pulsar IGR J11014-6103, which has the largest jet so far observed in the Milky Way, and whose velocity is estimated at 80% the speed of light (0.8c). X-ray observations have been obtained, but there is no detected radio signature nor accretion disk. Initially, this pulsar was presumed to be rapidly spinning, but later measurements indicate the spin rate is only 15.9 Hz. Such a slow spin rate and lack of accretion material suggest the jet is neither rotation nor accretion powered, though it appears aligned with the pulsar rotation axis and perpendicular to the pulsar's true motion.
Other images
See also
Accretion disk
Bipolar outflow
Blandford–Znajek process
Herbig–Haro object
Penrose process
CGCG 049-033, elliptical galaxy located 600 million light-years from Earth, known for having the longest galactic jet discovered
Gamma-ray burst
Solar jet
References
External links
NASA – Ask an Astrophysicist: Black Hole Bipolar Jets
SPACE.com – Twisted Physics: How Black Holes Spout Off
Hubble Video Shows Shock Collision inside Black Hole Jet (Article)
Space plasmas
Black holes
Jet, Astrophysical
Concepts in stellar astronomy | 0.779187 | 0.995826 | 0.775934 |
Climate model | Numerical climate models (or climate system models) are mathematical models that can simulate the interactions of important drivers of climate. These drivers are the atmosphere, oceans, land surface and ice. Scientists use climate models to study the dynamics of the climate system and to make projections of future climate and of climate change. Climate models can also be qualitative (i.e. not numerical) models and contain narratives, largely descriptive, of possible futures.
Climate models take account of incoming energy from the Sun as well as outgoing energy from Earth. An imbalance results in a change in temperature. The incoming energy from the Sun is in the form of short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared. The outgoing energy is in the form of long wave (far) infrared electromagnetic energy. These processes are part of the greenhouse effect.
Climate models vary in complexity. For example, a simple radiant heat transfer model treats the Earth as a single point and averages outgoing energy. This can be expanded vertically (radiative-convective models) and horizontally. More complex models are the coupled atmosphere–ocean–sea ice global climate models. These types of models solve the full equations for mass transfer, energy transfer and radiant exchange. In addition, other types of models can be interlinked. For example Earth System Models include also land use as well as land use changes. This allows researchers to predict the interactions between climate and ecosystems.
Climate models are systems of differential equations based on the basic laws of physics, fluid motion, and chemistry. Scientists divide the planet into a 3-dimensional grid and apply the basic equations to those grids. Atmospheric models calculate winds, heat transfer, radiation, relative humidity, and surface hydrology within each grid and evaluate interactions with neighboring points. These are coupled with oceanic models to simulate climate variability and change that occurs on different timescales due to shifting ocean currents and the much larger combined volume and heat capacity of the global ocean. External drivers of change may also be applied. Including an ice-sheet model better accounts for long term effects such as sea level rise.
Uses
There are three major types of institution where climate models are developed, implemented and used:
National meteorological services: Most national weather services have a climatology section.
Universities: Relevant departments include atmospheric sciences, meteorology, climatology, and geography.
National and international research laboratories: Examples include the National Center for Atmospheric Research (NCAR, in Boulder, Colorado, US), the Geophysical Fluid Dynamics Laboratory (GFDL, in Princeton, New Jersey, US), Los Alamos National Laboratory, the Hadley Centre for Climate Prediction and Research (in Exeter, UK), the Max Planck Institute for Meteorology in Hamburg, Germany, or the Laboratoire des Sciences du Climat et de l'Environnement (LSCE), France.
Big climate models are essential, but they are not perfect. Attention still needs to be given to the real world (what is happening and why). The global models are essential to assimilate all the observations, especially from space (satellites), and to produce comprehensive analyses of what is happening, which can then be used to make predictions and projections. Simple models also have a role to play, but that role is widely abused when users fail to recognize the simplifications involved, such as the omission of a water cycle.
General circulation models (GCMs)
Energy balance models (EBMs)
Simulation of the climate system in full 3-D space and time was impractical prior to the establishment of large computational facilities starting in the 1960s. In order to begin to understand which factors may have changed Earth's paleoclimate states, the constituent and dimensional complexities of the system needed to be reduced. A simple quantitative model that balanced incoming/outgoing energy was first developed for the atmosphere in the late 19th century. Other EBMs similarly seek an economical description of surface temperatures by applying the conservation of energy constraint to individual columns of the Earth-atmosphere system.
Essential features of EBMs include their relative conceptual simplicity and their ability to sometimes produce analytical solutions. Some models account for effects of ocean, land, or ice features on the surface budget. Others include interactions with parts of the water cycle or carbon cycle. A variety of these and other reduced system models can be useful for specialized tasks that supplement GCMs, particularly to bridge gaps between simulation and understanding.
Zero-dimensional models
Zero-dimensional models consider Earth as a point in space, analogous to the pale blue dot viewed by Voyager 1 or an astronomer's view of very distant objects. This dimensionless view while highly limited is still useful in that the laws of physics are applicable in a bulk fashion to unknown objects, or in an appropriate lumped manner if some major properties of the object are known. For example, astronomers know that most planets in our own solar system feature some kind of solid/liquid surface surrounded by a gaseous atmosphere.
Model with combined surface and atmosphere
A very simple model of the radiative equilibrium of the Earth is

(1 − a) S πr² = 4 πr² ε σ T⁴
where
the left hand side represents the total incoming shortwave power (in Watts) from the Sun
the right hand side represents the total outgoing longwave power (in Watts) from Earth, calculated from the Stefan–Boltzmann law.
The constant parameters include
S is the solar constant – the incoming solar radiation per unit area – about 1367 W·m⁻²
r is Earth's radius – approximately 6.371×10⁶ m
π is the mathematical constant (3.141...)
σ is the Stefan–Boltzmann constant – approximately 5.67×10⁻⁸ J·K⁻⁴·m⁻²·s⁻¹
The constant πr² can be factored out, giving the zero-dimensional equation for the equilibrium

(1 − a) S / 4 = ε σ T⁴
where
the left hand side represents the incoming shortwave energy flux from the Sun in W·m⁻²
the right hand side represents the outgoing longwave energy flux from Earth in W·m⁻².
The remaining variable parameters which are specific to the planet include
a is Earth's average albedo, measured to be 0.3.
T is Earth's average surface temperature, measured as about 288 K as of year 2020
ε is the effective emissivity of Earth's combined surface and atmosphere (including clouds). It is a quantity between 0 and 1 that is calculated from the equilibrium to be about 0.61. For the zero-dimensional treatment it is equivalent to an average value over all viewing angles.
This very simple model is quite instructive. For example, it shows the temperature sensitivity to changes in the solar constant, Earth albedo, or effective Earth emissivity. The effective emissivity also gauges the strength of the atmospheric greenhouse effect, since it is the ratio of the thermal emissions escaping to space versus those emanating from the surface.
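A minimal sketch of this model in Python, assuming the values quoted above (S = 1367 W·m⁻², albedo 0.3, effective emissivity 0.61), solves the equilibrium for temperature and illustrates the sensitivity mentioned above.

```python
# Solving (1 - a) * S / 4 = eps * sigma * T**4 for the equilibrium temperature T,
# using the values quoted in the text.
SIGMA = 5.67e-8   # Stefan–Boltzmann constant, W m^-2 K^-4
S = 1367.0        # solar constant, W m^-2

def equilibrium_temperature(albedo, emissivity):
    return ((1.0 - albedo) * S / (4.0 * emissivity * SIGMA)) ** 0.25

print(equilibrium_temperature(0.30, 0.61))  # ~288 K, close to the observed mean
print(equilibrium_temperature(0.30, 1.00))  # ~255 K, the no-greenhouse (emissivity 1) case
print(equilibrium_temperature(0.31, 0.61))  # a 0.01 albedo increase lowers T by about 1 K
```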
The calculated emissivity can be compared to available data. Terrestrial surface emissivities are all in the range of 0.96 to 0.99 (except for some small desert areas which may be as low as 0.7). Clouds, however, which cover about half of the planet's surface, have an average emissivity of about 0.5 (which must be reduced by the fourth power of the ratio of cloud absolute temperature to average surface absolute temperature) and an average cloud temperature of about . Taking all this properly into account results in an effective earth emissivity of about 0.64 (earth average temperature ).
Models with separated surface and atmospheric layers
Dimensionless models have also been constructed with atmospheric layers functionally separated from the surface. The simplest of these is the zero-dimensional, one-layer model, which may readily be extended to an arbitrary number of atmospheric layers. The surface and atmospheric layer(s) are each characterized by a corresponding temperature and emissivity value, but no thickness. Applying radiative equilibrium (i.e. conservation of energy) at the interfaces between layers produces a set of coupled equations which are solvable.
Layered models produce temperatures that better estimate those observed for Earth's surface and atmospheric levels. They likewise further illustrate the radiative heat transfer processes which underlie the greenhouse effect. Quantification of this phenomenon using a version of the one-layer model was first published by Svante Arrhenius in 1896.
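As a sketch of the simplest layered case, the snippet below assumes a single atmospheric layer that is transparent to sunlight and fully opaque to thermal radiation; this idealisation overstates the greenhouse effect (it yields roughly 303 K rather than the observed 288 K), but it shows how the coupled balance equations are solved.

```python
# Sketch of a zero-dimensional, one-layer model: the atmospheric layer is assumed
# transparent to solar radiation and opaque (emissivity 1) to thermal radiation, and
# the surface emissivity is taken as 1. Balancing energy at the top of the atmosphere
# gives T_atm = T_e; balancing it at the surface gives T_surf = 2**0.25 * T_e.
SIGMA = 5.67e-8   # W m^-2 K^-4
S = 1367.0        # W m^-2
albedo = 0.3

T_e = ((1 - albedo) * S / (4 * SIGMA)) ** 0.25   # effective emission temperature, ~255 K
T_atm = T_e                                      # temperature of the single layer
T_surf = 2 ** 0.25 * T_e                         # surface temperature, ~303 K

print(f"T_e = {T_e:.0f} K, T_atm = {T_atm:.0f} K, T_surf = {T_surf:.0f} K")
```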
Radiative-convective models
Water vapor is a main determinant of the emissivity of Earth's atmosphere. It both influences the flows of radiation and is influenced by convective flows of heat in a manner that is consistent with its equilibrium concentration and temperature as a function of elevation (i.e. relative humidity distribution). This has been shown by refining the zero dimension model in the vertical to a one-dimensional radiative-convective model which considers two processes of energy transport:
upwelling and downwelling radiative transfer through atmospheric layers that both absorb and emit infrared radiation
upward transport of heat by air and vapor convection, which is especially important in the lower troposphere.
Radiative-convective models have advantages over simpler models and also lay a foundation for more complex models. They can estimate both surface temperature and the temperature variation with elevation in a more realistic manner. They also simulate the observed decline in upper atmospheric temperature and rise in surface temperature when trace amounts of other non-condensible greenhouse gases such as carbon dioxide are included.
Other parameters are sometimes included to simulate localized effects in other dimensions and to address the factors that move energy about Earth. For example, the effect of ice-albedo feedback on global climate sensitivity has been investigated using a one-dimensional radiative-convective climate model.
Higher-dimension models
The zero-dimensional model may be expanded to consider the energy transported horizontally in the atmosphere. This kind of model may well be zonally averaged. This model has the advantage of allowing a rational dependence of local albedo and emissivity on temperature – the poles can be allowed to be icy and the equator warm – but the lack of true dynamics means that horizontal transports have to be specified.
Early examples include the research of Mikhail Budyko and William D. Sellers, who worked on the Budyko–Sellers model. This work also showed the role of positive feedback in the climate system and has been considered foundational for energy balance models since its publication in 1969.
Earth systems models of intermediate complexity (EMICs)
Depending on the nature of questions asked and the pertinent time scales, there are, on the one extreme, conceptual, more inductive models, and, on the other extreme, general circulation models operating at the highest spatial and temporal resolution currently feasible. Models of intermediate complexity bridge the gap. One example is the Climber-3 model. Its atmosphere is a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and time step of half a day; the ocean is MOM-3 (Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels.
Box models
Box models are simplified versions of complex systems, reducing them to boxes (or reservoirs) linked by fluxes. The boxes are assumed to be mixed homogeneously. Within a given box, the concentration of any chemical species is therefore uniform. However, the abundance of a species within a given box may vary as a function of time due to the input to (or loss from) the box or due to the production, consumption or decay of this species within the box.
Simple box models, i.e. box model with a small number of boxes whose properties (e.g. their volume) do not change with time, are often useful to derive analytical formulas describing the dynamics and steady-state abundance of a species. More complex box models are usually solved using numerical techniques.
Box models are used extensively to model environmental systems or ecosystems and in studies of ocean circulation and the carbon cycle.
They are instances of a multi-compartment model.
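A minimal one-box example, with purely illustrative parameter values, integrates a constant input and a first-order loss and compares the numerical result with the analytic steady state:

```python
# Minimal one-box model with a constant input flux and a first-order loss:
# dC/dt = inflow - k * C, whose analytic steady state is C_ss = inflow / k.
# The parameter values are purely illustrative.
inflow = 2.0   # concentration units per year entering the box
k = 0.5        # 1/year removal rate (residence time 1/k = 2 years)
dt = 0.01      # years, Euler time step
C = 0.0        # initial concentration

for _ in range(int(50 / dt)):      # integrate for 50 years
    C += dt * (inflow - k * C)

print(f"numerical: {C:.3f}   analytic steady state: {inflow / k:.3f}")
```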
History
Increase of forecasts confidence over time
The IPCC stated in 2010 that it had increased confidence in forecasts coming from climate models: "There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above. This confidence comes from the foundation of the models in accepted physical principles and from their ability to reproduce observed features of current climate and past climate changes. Confidence in model estimates is higher for some climate variables (e.g., temperature) than for others (e.g., precipitation). Over several decades of development, models have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases."
Coordination of research
The World Climate Research Programme (WCRP), hosted by the World Meteorological Organization (WMO), coordinates research activities on climate modelling worldwide.
A 2012 U.S. National Research Council report discussed how the large and diverse U.S. climate modeling enterprise could evolve to become more unified. Efficiencies could be gained by developing a common software infrastructure shared by all U.S. climate researchers, and holding an annual climate modeling forum, the report found.
Issues
Electricity consumption
Cloud-resolving climate models are nowadays run on high-intensity supercomputers, which have a high power consumption and thus cause CO2 emissions. They require exascale computing (a quintillion – a billion billion – calculations per second). For example, the Frontier exascale supercomputer consumes 29 MW. It can simulate a year's worth of climate at cloud-resolving scales in a day.
Techniques that could lead to energy savings, include for example: "reducing floating point precision computation; developing machine learning algorithms to avoid unnecessary computations; and creating a new generation of scalable numerical algorithms that would enable higher throughput in terms of simulated years per wall clock day."
Parametrization
See also
Atmospheric reanalysis
Chemical transport model
Atmospheric Radiation Measurement (ARM) (in the US)
Climate Data Exchange
Climateprediction.net
Numerical Weather Prediction
Static atmospheric model
Tropical cyclone prediction model
Verification and validation of computer simulation models
CICE sea ice model
References
External links
Why results from the next generation of climate models matter CarbonBrief, Guest post by Belcher, Boucher, Sutton, 21 March 2019
Climate models on the web:
NCAR/UCAR Community Climate System Model (CCSM)
Do it yourself climate prediction
Primary research GCM developed by NASA/GISS (Goddard Institute for Space Studies)
Original NASA/GISS global climate model (GCM) with a user-friendly interface for PCs and Macs
CCCma model info and interface to retrieve model data
Numerical climate and weather models | 0.785937 | 0.987128 | 0.77582 |
Theory of relativity | The theory of relativity usually encompasses two interrelated physics theories by Albert Einstein: special relativity and general relativity, proposed and published in 1905 and 1915, respectively. Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to the forces of nature. It applies to the cosmological and astrophysical realm, including astronomy.
The theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton. It introduced concepts including 4-dimensional spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves.
Development and acceptance
Albert Einstein published the theory of special relativity in 1905, building on many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. Max Planck, Hermann Minkowski and others did subsequent work.
Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916.
The term "theory of relativity" was based on the expression "relative theory" used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression "theory of relativity".
By the 1920s, the physics community understood and accepted special relativity. It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics, and quantum mechanics.
By comparison, general relativity did not appear to be as useful, beyond making minor corrections to predictions of Newtonian gravitation theory. It seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. Its mathematics seemed difficult and fully understandable only by a small number of people. Around 1960, general relativity became central to physics and astronomy. New mathematical techniques to apply to general relativity streamlined calculations and made its concepts more easily visualized. As astronomical phenomena were discovered, such as quasars (1963), the 3-kelvin microwave background radiation (1965), pulsars (1967), and the first black hole candidates (1981), the theory explained their attributes, and measurement of them further confirmed the theory.
Special relativity
Special relativity is a theory of the structure of spacetime. It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists and mathematicians, see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics:
The laws of physics are the same for all observers in any inertial frame of reference relative to one another (principle of relativity).
The speed of light in vacuum is the same for all observers, regardless of their relative motion or of the motion of the light source.
The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are:
Relativity of simultaneity: Two events, simultaneous for one observer, may not be simultaneous for another observer if the observers are in relative motion.
Time dilation: Moving clocks are measured to tick more slowly than an observer's "stationary" clock.
Length contraction: Objects are measured to be shortened in the direction that they are moving with respect to the observer.
Maximum speed is finite: No physical object, message or field line can travel faster than the speed of light in vacuum.
The effect of gravity can only travel through space at the speed of light, not faster or instantaneously.
Mass–energy equivalence: E = mc², energy and mass are equivalent and transmutable.
Relativistic mass, an idea used by some researchers.
The defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations. (See Maxwell's equations of electromagnetism.)
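A short numerical sketch of these consequences, assuming an illustrative relative speed of 0.8c, applies a Lorentz boost along the x-axis:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def lorentz_boost(t, x, v):
    """Coordinates (t', x') of the event (t, x) in a frame moving at +v along x."""
    g = gamma(v)
    return g * (t - v * x / C**2), g * (x - v * t)

v = 0.8 * C
print(f"gamma = {gamma(v):.3f}")                             # 1.667
print(f"time dilation: 1 s is measured as {gamma(v):.3f} s")
print(f"length contraction: 1 m is measured as {1 / gamma(v):.3f} m")
# Two events simultaneous in one frame (t = 0) but at different x acquire different
# times in the boosted frame (relativity of simultaneity):
print(lorentz_boost(0.0, 1.0, v)[0], lorentz_boost(0.0, 2.0, v)[0])
```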
General relativity
General relativity is a theory of gravitation developed by Einstein in the years 1907–1915. The development of general relativity began with the equivalence principle, under which the states of accelerated motion and being at rest in a gravitational field (for example, when standing on the surface of the Earth) are physically identical. The upshot of this is that free fall is inertial motion: an object in free fall is falling because that is how objects move when there is no force being exerted on them, instead of this being due to the force of gravity as is the case in classical mechanics. This is incompatible with classical mechanics and special relativity because in those theories inertially moving objects cannot accelerate with respect to each other, but objects in free fall do so. To resolve this difficulty Einstein first proposed that spacetime is curved. Einstein discussed his idea with mathematician Marcel Grossmann and they concluded that general relativity could be formulated in the context of Riemannian geometry which had been developed in the 1800s.
In 1915, he devised the Einstein field equations which relate the curvature of spacetime with the mass, energy, and any momentum within it.
Some of the consequences of general relativity are:
Gravitational time dilation: Clocks run slower in deeper gravitational wells.
Precession: Orbits precess in a way unexpected in Newton's theory of gravity. (This has been observed in the orbit of Mercury and in binary pulsars).
Light deflection: Rays of light bend in the presence of a gravitational field.
Frame-dragging: Rotating masses "drag along" the spacetime around them.
Expansion of the universe: The universe is expanding, and certain components within the universe can accelerate the expansion.
Technically, general relativity is a theory of gravitation whose defining feature is its use of the Einstein field equations. The solutions of the field equations are metric tensors which define the topology of the spacetime and how objects move inertially.
Experimental evidence
Einstein stated that the theory of relativity belongs to a class of "principle-theories". As such, it employs an analytic method, which means that the elements of this theory are not based on hypothesis but on empirical discovery. By observing natural processes, we understand their general characteristics, devise mathematical models to describe what we observed, and by analytical means we deduce the necessary conditions that have to be satisfied. Measurement of separate events must satisfy these conditions and match the theory's conclusions.
Tests of special relativity
Relativity is a falsifiable theory: It makes predictions that can be tested by experiment. In the case of special relativity, these include the principle of relativity, the constancy of the speed of light, and time dilation. The predictions of special relativity have been confirmed in numerous tests since Einstein published his paper in 1905, but three experiments conducted between 1881 and 1938 were critical to its validation. These are the Michelson–Morley experiment, the Kennedy–Thorndike experiment, and the Ives–Stilwell experiment. Einstein derived the Lorentz transformations from first principles in 1905, but these three experiments allow the transformations to be induced from experimental evidence.
Maxwell's equations—the foundation of classical electromagnetism—describe light as a wave that moves with a characteristic velocity. The modern view is that light needs no medium of transmission, but Maxwell and his contemporaries were convinced that light waves were propagated in a medium, analogous to sound propagating in air, and ripples propagating on the surface of a pond. This hypothetical medium was called the luminiferous aether, at rest relative to the "fixed stars" and through which the Earth moves. Fresnel's partial ether dragging hypothesis ruled out the measurement of first-order (v/c) effects, and although observations of second-order effects (v2/c2) were possible in principle, Maxwell thought they were too small to be detected with then-current technology.
The Michelson–Morley experiment was designed to detect second-order effects of the "aether wind"—the motion of the aether relative to the Earth. Michelson designed an instrument called the Michelson interferometer to accomplish this. The apparatus was sufficiently accurate to detect the expected effects, but he obtained a null result when the first experiment was conducted in 1881, and again in 1887. Although the failure to detect an aether wind was a disappointment, the results were accepted by the scientific community. In an attempt to salvage the aether paradigm, FitzGerald and Lorentz independently created an ad hoc hypothesis in which the length of material bodies changes according to their motion through the aether. This was the origin of FitzGerald–Lorentz contraction, and their hypothesis had no theoretical basis. The interpretation of the null result of the Michelson–Morley experiment is that the round-trip travel time for light is isotropic (independent of direction), but the result alone is not enough to discount the theory of the aether or validate the predictions of special relativity.
While the Michelson–Morley experiment showed that the velocity of light is isotropic, it said nothing about how the magnitude of the velocity changed (if at all) in different inertial frames. The Kennedy–Thorndike experiment was designed to do that, and was first performed in 1932 by Roy Kennedy and Edward Thorndike. They obtained a null result, and concluded that "there is no effect ... unless the velocity of the solar system in space is no more than about half that of the earth in its orbit". That possibility was thought to be too coincidental to provide an acceptable explanation, so from the null result of their experiment it was concluded that the round-trip time for light is the same in all inertial reference frames.
The Ives–Stilwell experiment was carried out by Herbert Ives and G.R. Stilwell first in 1938 and with better accuracy in 1941. It was designed to test the transverse Doppler effect – the redshift of light from a moving source in a direction perpendicular to its velocity – which had been predicted by Einstein in 1905. The strategy was to compare observed Doppler shifts with what was predicted by classical theory, and look for a Lorentz factor correction. Such a correction was observed, from which it was concluded that the frequency of a moving atomic clock is altered according to special relativity.
Those classic experiments have been repeated many times with increased precision. Other experiments include, for instance, relativistic energy and momentum increase at high velocities, experimental testing of time dilation, and modern searches for Lorentz violations.
Tests of general relativity
General relativity has also been confirmed many times, the classic experiments being the perihelion precession of Mercury's orbit, the deflection of light by the Sun, and the gravitational redshift of light. Other tests confirmed the equivalence principle and frame dragging.
Modern applications
Far from being simply of theoretical interest, relativistic effects are important practical engineering concerns. Satellite-based measurement needs to take into account relativistic effects, as each satellite is in motion relative to an Earth-bound user, and is thus in a different frame of reference under the theory of relativity. Global positioning systems such as GPS, GLONASS, and Galileo, must account for all of the relativistic effects in order to work with precision, such as the consequences of the Earth's gravitational field. This is also the case in the high-precision measurement of time. Instruments ranging from electron microscopes to particle accelerators would not work if relativistic considerations were omitted.
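As a rough back-of-envelope sketch (the orbital radius and constants below are approximate assumptions, not official system parameters), the two competing clock effects for a GPS-like satellite can be estimated as follows; the net result is of the order of tens of microseconds per day, which is why the corrections matter in practice:

```python
import math

# Rough estimate of the two relativistic clock effects for a GPS-like satellite.
# Orbital parameters below are approximate assumptions, not official GPS values.
c = 2.998e8            # m/s
G = 6.674e-11          # m^3 kg^-1 s^-2
M = 5.972e24           # kg, Earth mass
R_earth = 6.371e6      # m
r_orbit = 2.66e7       # m, roughly the GPS orbital radius
seconds_per_day = 86_400

v = math.sqrt(G * M / r_orbit)                        # circular orbital speed
special = -0.5 * (v / c) ** 2                         # kinematic effect (slows satellite clock)
general = G * M / c**2 * (1 / R_earth - 1 / r_orbit)  # gravitational effect (speeds it up)

print(f"kinematic:     {special * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"gravitational: {general * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"net:           {(special + general) * seconds_per_day * 1e6:+.1f} microseconds/day")
```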
See also
Doubly special relativity
Galilean invariance
General relativity references
Special relativity references
References
Further reading
The Meaning of Relativity Albert Einstein: Four lectures delivered at Princeton University, May 1921
How I created the theory of relativity Albert Einstein, 14 December 1922; Physics Today August 1982
Relativity Sidney Perkowitz Encyclopædia Britannica
External links
Albert Einstein
Articles containing video clips
Theoretical physics | 0.776053 | 0.999648 | 0.77578 |
Four-force | In the special theory of relativity, four-force is a four-vector that replaces the classical force.
In special relativity
The four-force is defined as the rate of change in the four-momentum of a particle with respect to the particle's proper time. Hence:

F^α = dP^α/dτ

For a particle of constant invariant mass m, the four-momentum is given by the relation P^α = m U^α, where U^α is the four-velocity. In analogy to Newton's second law, we can also relate the four-force to the four-acceleration, A^α, by the equation:

F^α = m A^α = (γ (1/c) dE/dt, γ dp/dt)

Here

f = dp/dt = d(γm u)/dt

and

dE/dt = d(γm c²)/dt = f · u

where u, p and f are 3-space vectors describing the velocity, the momentum of the particle and the force acting on it respectively; and E = γm c² is the total energy of the particle.
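A minimal numerical sketch of these components, assuming SI units and purely illustrative values for the 3-force and 3-velocity:

```python
import numpy as np

C = 299_792_458.0  # m/s

def four_force(f3, u3):
    """Four-force (gamma * f.u / c, gamma * f) for a 3-force f3 acting on a particle
    with 3-velocity u3, assuming constant rest mass."""
    f3, u3 = np.asarray(f3, float), np.asarray(u3, float)
    gamma = 1.0 / np.sqrt(1.0 - np.dot(u3, u3) / C**2)
    return np.concatenate(([gamma * np.dot(f3, u3) / C], gamma * f3))

# Illustrative values: a 1 N force along x on a particle moving at 0.6c along x.
print(four_force([1.0, 0.0, 0.0], [0.6 * C, 0.0, 0.0]))
# time component = gamma * (power) / c, spatial part = gamma * (3-force)
```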
Including thermodynamic interactions
From the formulae of the previous section it appears that the time component of the four-force is the power expended, f · u, apart from the relativistic factor γ/c. This is only true in purely mechanical situations, when heat exchanges vanish or can be neglected.
In the full thermo-mechanical case, not only work, but also heat contributes to the change in energy, which is the time component of the energy–momentum covector. The time component of the four-force includes in this case a heating rate, besides the mechanical power f · u. Note that work and heat cannot be meaningfully separated, though, as they both carry inertia. This fact extends also to contact forces, that is, to the stress–energy–momentum tensor.

Therefore, in thermo-mechanical situations the time component of the four-force is not proportional to the power f · u but has a more generic expression, to be given case by case, which represents the supply of internal energy from the combination of work and heat, and which in the Newtonian limit reduces to the sum of the mechanical power and the heating rate.
In general relativity
In general relativity the relation between four-force, and four-acceleration remains the same, but the elements of the four-force are related to the elements of the four-momentum through a covariant derivative with respect to proper time.
In addition, we can formulate force using the concept of coordinate transformations between different coordinate systems. Assume that we know the correct expression for force in a coordinate system at which the particle is momentarily at rest. Then we can perform a transformation to another system to get the corresponding expression of force. In special relativity the transformation will be a Lorentz transformation between coordinate systems moving with a relative constant velocity whereas in general relativity it will be a general coordinate transformation.
Consider the four-force F^μ = (F⁰, F) acting on a particle of mass m which is momentarily at rest in a coordinate system. The relativistic force f^μ = (f⁰, f) in another coordinate system moving with constant velocity v, relative to the other one, is obtained using a Lorentz transformation:

f = F + (γ − 1) (v · F / v²) v,    f⁰ = γ β · F = β · f

where β = v/c and γ = 1/√(1 − β²).
In general relativity, the expression for force becomes

F^μ = m DU^μ/dτ

with the covariant derivative D/dτ. The equation of motion becomes

m d²x^μ/dτ² = F^μ − m Γ^μ_{νλ} (dx^ν/dτ)(dx^λ/dτ)

where Γ^μ_{νλ} is the Christoffel symbol. If there is no external force, this becomes the equation for geodesics in the curved space-time. The second term in the above equation plays the role of a gravitational force. If f^α is the correct expression for force in a freely falling frame ξ^α, we can then use the equivalence principle to write the four-force in an arbitrary coordinate system x^μ:

F^μ = (∂x^μ/∂ξ^α) f^α
Examples
In special relativity, the Lorentz four-force (the four-force acting on a charged particle situated in an electromagnetic field) can be expressed as:

f_μ = q F_{μν} U^ν

where

F_{μν} is the electromagnetic tensor,
U^ν is the four-velocity, and
q is the electric charge.
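A small numerical check, using the equivalent contravariant form f^μ = q F^{μν} U_ν in natural units (c = 1), with illustrative field values and metric signature (+, −, −, −):

```python
import numpy as np

# Natural units (c = 1), metric signature (+, -, -, -); field values are illustrative.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
E_x, B_z, q = 2.0, 0.5, 1.0

# Contravariant electromagnetic tensor F^{mu nu} for E along x and B along z.
F = np.array([[0.0, -E_x,  0.0, 0.0],
              [E_x,  0.0, -B_z, 0.0],
              [0.0,  B_z,  0.0, 0.0],
              [0.0,  0.0,  0.0, 0.0]])

beta = 0.6                                   # particle speed along x (units of c)
gamma = 1.0 / np.sqrt(1.0 - beta**2)
U = gamma * np.array([1.0, beta, 0.0, 0.0])  # four-velocity U^mu

f = q * F @ eta @ U                          # f^mu = q F^{mu nu} U_nu
print(f)  # spatial part equals gamma * (qE + q v x B); y-component is -q*gamma*beta*B_z
```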
See also
four-vector
four-velocity
four-acceleration
four-momentum
four-gradient
References
Four-vectors
Force | 0.800775 | 0.968757 | 0.775756 |
Angular velocity | In physics, angular velocity (symbol or , the lowercase Greek letter omega), also known as angular frequency vector, is a pseudovector representation of how the angular position or orientation of an object changes with time, i.e. how quickly an object rotates (spins or revolves) around an axis of rotation and how fast the axis itself changes direction.
The magnitude of the pseudovector, , represents the angular speed (or angular frequency), the angular rate at which the object rotates (spins or revolves). The pseudovector direction is normal to the instantaneous plane of rotation or angular displacement.
There are two types of angular velocity:
Orbital angular velocity refers to how fast a point object revolves about a fixed origin, i.e. the time rate of change of its angular position relative to the origin.
Spin angular velocity refers to how fast a rigid body rotates with respect to its center of rotation and is independent of the choice of origin, in contrast to orbital angular velocity.
Angular velocity has dimension of angle per unit time; this is analogous to linear velocity, with angle replacing distance, with time in common. The SI unit of angular velocity is radians per second, although degrees per second (°/s) is also common. The radian is a dimensionless quantity, thus the SI units of angular velocity are dimensionally equivalent to reciprocal seconds, s−1, although rad/s is preferable to avoid confusion with rotation velocity in units of hertz (also equivalent to s−1).
The sense of angular velocity is conventionally specified by the right-hand rule, implying counter-clockwise rotation as viewed with the angular velocity vector pointing toward the observer; negation (multiplication by −1) leaves the magnitude unchanged but flips the axis in the opposite direction.
For example, a geostationary satellite, which completes one orbit per day above the equator (360 degrees per 24 hours), has angular velocity magnitude (angular speed) ω = 360°/24 h = 15°/h (or 2π rad/24 h ≈ 0.26 rad/h) and angular velocity direction (a unit vector) parallel to Earth's rotation axis (in the geocentric coordinate system). If angle is measured in radians, the linear velocity is the radius times the angular velocity, v = rω. With an orbital radius of 42,000 km from the Earth's center, the satellite's tangential speed through space is thus v = 42,000 km × 0.26/h ≈ 11,000 km/h. The angular velocity is positive since the satellite travels prograde with the Earth's rotation (the same direction as the rotation of Earth).
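The worked example above can be reproduced with a few lines of Python; using the sidereal rather than the solar day would change the result only slightly:

```python
import math

# Reproducing the worked example above (solar day of 24 h; the sidereal day of about
# 23.93 h would give nearly the same numbers).
hours_per_rev = 24.0
radius_km = 42_000.0

omega_deg_per_h = 360.0 / hours_per_rev          # 15 degrees per hour
omega_rad_per_h = 2.0 * math.pi / hours_per_rev  # ~0.26 rad per hour
speed_km_per_h = radius_km * omega_rad_per_h     # v = r * omega, with omega in radians

print(f"{omega_deg_per_h:.0f} deg/h, {omega_rad_per_h:.2f} rad/h, about {speed_km_per_h:.0f} km/h")
```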
Orbital angular velocity of a point particle
Particle in two dimensions
In the simplest case of circular motion at radius r, with position given by the angular displacement φ(t) from the x-axis, the orbital angular velocity is the rate of change of angle with respect to time: ω = dφ/dt. If φ is measured in radians, the arc-length from the positive x-axis around the circle to the particle is ℓ = rφ, and the linear velocity is v = dℓ/dt = rω, so that ω = v/r.
In the general case of a particle moving in the plane, the orbital angular velocity is the rate at which the position vector relative to a chosen origin "sweeps out" angle. Consider the position vector r from the origin O to a particle P, with its polar coordinates (r, φ). (All variables are functions of time t.) The particle has linear velocity splitting as v = v∥ + v⊥, with the radial component v∥ parallel to the radius, and the cross-radial (or tangential) component v⊥ perpendicular to the radius. When there is no radial component, the particle moves around the origin in a circle; but when there is no cross-radial component, it moves in a straight line from the origin. Since radial motion leaves the angle unchanged, only the cross-radial component of linear velocity contributes to angular velocity.
The angular velocity ω is the rate of change of angular position with respect to time, which can be computed from the cross-radial velocity as:

ω = dφ/dt = v⊥/r.

Here the cross-radial speed v⊥ is the signed magnitude of the cross-radial velocity, positive for counter-clockwise motion, negative for clockwise. Taking polar coordinates for the linear velocity gives magnitude v (linear speed) and angle θ relative to the radius vector; in these terms, v⊥ = v sin(θ), so that

ω = v sin(θ)/r.
These formulas may be derived by writing the position vector as (x, y) = (r cos(φ), r sin(φ)), with the distance r and the angle φ between the vector and the x axis both functions of time. Differentiating gives:

(dx/dt, dy/dt) = ((dr/dt) cos(φ) − r (dφ/dt) sin(φ), (dr/dt) sin(φ) + r (dφ/dt) cos(φ)),

which is equal to:

(dr/dt) (cos(φ), sin(φ)) + r (dφ/dt) (−sin(φ), cos(φ)) = (dr/dt) r̂ + r (dφ/dt) φ̂

(see Unit vector in cylindrical coordinates).

Knowing that (dx/dt, dy/dt) is the velocity v, we conclude that the radial component of the velocity is given by dr/dt, because r̂ is a radial unit vector; and the perpendicular component is given by r dφ/dt, because φ̂ is a perpendicular unit vector.
In two dimensions, angular velocity is a number with plus or minus sign indicating orientation, but not pointing in a direction. The sign is conventionally taken to be positive if the radius vector turns counter-clockwise, and negative if clockwise. Angular velocity then may be termed a pseudoscalar, a numerical quantity which changes sign under a parity inversion, such as inverting one axis or switching the two axes.
Particle in three dimensions
In three-dimensional space, we again have the position vector r of a moving particle. Here, orbital angular velocity is a pseudovector whose magnitude is the rate at which r sweeps out angle (in radians per unit of time), and whose direction is perpendicular to the instantaneous plane in which r sweeps out angle (i.e. the plane spanned by r and v). However, as there are two directions perpendicular to any plane, an additional condition is necessary to uniquely specify the direction of the angular velocity; conventionally, the right-hand rule is used.
Let the pseudovector u be the unit vector perpendicular to the plane spanned by r and v, so that the right-hand rule is satisfied (i.e. the instantaneous direction of angular displacement is counter-clockwise looking from the top of u). Taking polar coordinates in this plane, as in the two-dimensional case above, one may define the orbital angular velocity vector as:

ω = ω u = (v sin(θ)/r) u,

where θ is the angle between r and v. In terms of the cross product, this is:

ω = (r × v)/|r|².

From the above equation, one can recover the tangential velocity as:

v⊥ = ω × r.
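A minimal numerical sketch of the cross-product formula, with illustrative position and velocity vectors, also recovers the tangential velocity:

```python
import numpy as np

def orbital_angular_velocity(r, v):
    """Orbital angular velocity pseudovector omega = (r x v) / |r|^2."""
    r, v = np.asarray(r, float), np.asarray(v, float)
    return np.cross(r, v) / np.dot(r, r)

# Illustrative values: a point at r with both a radial and a cross-radial velocity component.
r = np.array([3.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 0.0])      # 1 unit/s radial, 2 units/s cross-radial

omega = orbital_angular_velocity(r, v)
print(omega)                        # (0, 0, 2/3): only the cross-radial part contributes
print(np.cross(omega, r))           # recovers the tangential velocity (0, 2, 0)
```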
Spin angular velocity of a rigid body or reference frame
Given a rotating frame of three unit coordinate vectors, all the three must have the same angular speed at each instant. In such a frame, each vector may be considered as a moving particle with constant scalar radius.
The rotating frame appears in the context of rigid bodies, and special tools have been developed for it: the spin angular velocity may be described as a vector or equivalently as a tensor.
Consistent with the general definition, the spin angular velocity of a frame is defined as the orbital angular velocity of any of the three vectors (same for all) with respect to its own center of rotation. The addition of angular velocity vectors for frames is also defined by the usual vector addition (composition of linear movements), and can be useful to decompose the rotation as in a gimbal. All components of the vector can be calculated as derivatives of the parameters defining the moving frames (Euler angles or rotation matrices). As in the general case, addition is commutative: ω₁ + ω₂ = ω₂ + ω₁.
By Euler's rotation theorem, any rotating frame possesses an instantaneous axis of rotation, which is the direction of the angular velocity vector, and the magnitude of the angular velocity is consistent with the two-dimensional case.
If we choose a reference point r₀ fixed in the rigid body, the velocity of any point in the body is given by

v = v₀ + ω × (r − r₀)

where v₀ is the velocity of the reference point.
Components from the basis vectors of a body-fixed frame
Consider a rigid body rotating about a fixed point O. Construct a reference frame in the body consisting of an orthonormal set of vectors e₁, e₂, e₃ fixed to the body and with their common origin at O. The spin angular velocity vector of both frame and body about O is then

ω = ((de₁/dt) · e₂) e₃ + ((de₂/dt) · e₃) e₁ + ((de₃/dt) · e₁) e₂,

where deᵢ/dt is the time rate of change of the frame vector eᵢ due to the rotation.
This formula is incompatible with the expression for orbital angular velocity

ω = (r × v)/|r|²,

as that formula defines angular velocity for a single point about O, while the formula in this section applies to a frame or rigid body. In the case of a rigid body a single ω has to account for the motion of all particles in the body.
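As a sketch, the basis-vector formula above can be checked numerically for a frame spinning about the z axis; the rotation rate, time and step size below are arbitrary:

```python
import numpy as np

# Numerical check of the basis-vector formula above for a frame spinning about the
# z axis at a constant rate omega0; the rate, time and step size are arbitrary.
omega0, t, dt = 0.7, 1.3, 1e-6

def frame(time):
    """Orthonormal frame vectors e1, e2, e3 rotated about z by omega0 * time."""
    c, s = np.cos(omega0 * time), np.sin(omega0 * time)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return R[:, 0], R[:, 1], R[:, 2]

e1, e2, e3 = frame(t)
d1, d2, d3 = [(a - b) / dt for a, b in zip(frame(t + dt), frame(t))]  # finite differences

omega = np.dot(d1, e2) * e3 + np.dot(d2, e3) * e1 + np.dot(d3, e1) * e2
print(omega)   # approximately (0, 0, 0.7), i.e. omega0 about the z axis
```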
Components from Euler angles
The components of the spin angular velocity pseudovector were first calculated by Leonhard Euler using his Euler angles and the use of an intermediate frame:
One axis of the reference frame (the precession axis)
The line of nodes of the moving frame with respect to the reference frame (nutation axis)
One axis of the moving frame (the intrinsic rotation axis)
Euler proved that the projections of the angular velocity pseudovector on each of these three axes is the derivative of its associated angle (which is equivalent to decomposing the instantaneous rotation into three instantaneous Euler rotations). Therefore:

ω = (dφ/dt) u₁ + (dθ/dt) u₂ + (dψ/dt) u₃,

where u₁, u₂, u₃ are unit vectors along the precession axis, the line of nodes and the intrinsic rotation axis, and φ, θ, ψ are the precession, nutation and intrinsic rotation angles.

This basis is not orthonormal and it is difficult to use, but now the velocity vector can be changed to the fixed frame or to the moving frame with just a change of bases. For example, changing to the mobile frame:

ω = ((dφ/dt) sin(θ) sin(ψ) + (dθ/dt) cos(ψ)) i + ((dφ/dt) sin(θ) cos(ψ) − (dθ/dt) sin(ψ)) j + ((dφ/dt) cos(θ) + dψ/dt) k,

where i, j, k are unit vectors for the frame fixed in the moving body. This example has been made using the Z-X-Z convention for Euler angles.
Tensor
See also
Angular acceleration
Angular frequency
Angular momentum
Areal velocity
Isometry
Orthogonal group
Rigid body dynamics
Vorticity
References
External links
A college text-book of physics By Arthur Lalanne Kimball (Angular Velocity of a particle)
Angle
Kinematic properties
Rotational symmetry
Temporal rates
Tensors
Velocity | 0.777153 | 0.998118 | 0.775691 |
Physics | Physics is the scientific study of matter, its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force. Physics is one of the most fundamental scientific disciplines. A scientist who specializes in the field of physics is called a physicist.
Physics is one of the oldest academic disciplines and, through its inclusion of astronomy, perhaps the oldest. Over much of the past two millennia, physics, chemistry, biology, and certain branches of mathematics were a part of natural philosophy, but during the Scientific Revolution in the 17th century, these natural sciences branched into separate research endeavors. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences and suggest new avenues of research in these and other academic disciplines such as mathematics and philosophy.
Advances in physics often enable new technologies. For example, advances in the understanding of electromagnetism, solid-state physics, and nuclear physics led directly to the development of new products that have dramatically transformed modern-day society, such as television, computers, domestic appliances, and nuclear weapons; advances in thermodynamics led to the development of industrialization; and advances in mechanics inspired the development of calculus.
History
The word physics comes from the Latin physica ('study of nature'), which itself is a borrowing of the Greek physikē ('natural science'), a term derived from physis ('origin, nature, property').
Ancient astronomy
Astronomy is one of the oldest natural sciences. Early civilizations dating before 3000 BCE, such as the Sumerians, ancient Egyptians, and the Indus Valley Civilisation, had a predictive knowledge and a basic awareness of the motions of the Sun, Moon, and stars. The stars and planets, believed to represent gods, were often worshipped. While the explanations for the observed positions of the stars were often unscientific and lacking in evidence, these early observations laid the foundation for later astronomy, as the stars were found to traverse great circles across the sky, which could not explain the positions of the planets.
According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. Egyptian astronomers left monuments showing knowledge of the constellations and the motions of the celestial bodies, while Greek poet Homer wrote of various celestial objects in his Iliad and Odyssey; later Greek astronomers provided names, which are still used today, for most constellations visible from the Northern Hemisphere.
Natural philosophy
Natural philosophy has its origins in Greece during the Archaic period (650 BCE – 480 BCE), when pre-Socratic philosophers like Thales rejected non-naturalistic explanations for natural phenomena and proclaimed that every event had a natural cause. They proposed ideas verified by reason and observation, and many of their hypotheses proved successful in experiment; for example, atomism was found to be correct approximately 2000 years after it was proposed by Leucippus and his pupil Democritus.
Aristotle and Hellenistic Physics
During the classical period in Greece (6th, 5th and 4th centuries BCE) and in Hellenistic times, natural philosophy developed along many lines of inquiry. Aristotle (Ἀριστοτέλης, Aristotélēs) (384–322 BCE), a student of Plato,
wrote on many subjects, including a substantial treatise on "Physics" in the 4th century BC. Aristotelian physics was influential for about two millennia. His approach mixed some limited observation with logical deductive arguments, but did not rely on experimental verification of deduced statements. Aristotle's foundational work in Physics, though very imperfect, formed a framework against which later thinkers further developed the field. His approach is entirely superseded today.
He explained ideas such as motion (and gravity) with the theory of four elements.
Aristotle believed that each of the four classical elements (air, fire, water, earth) had its own natural place. Because of their differing densities, each element will revert to its own specific place in the atmosphere. So, because of their weights, fire would be at the top, air underneath fire, then water, then lastly earth. He also stated that when a small amount of one element enters the natural place of another, the less abundant element will automatically go towards its own natural place. For example, if there is a fire on the ground, the flames go up into the air in an attempt to go back to their natural place where they belong. His laws of motion included 1) heavier objects will fall faster, the speed being proportional to the weight and 2) the speed of a falling object depends inversely on the density of the medium it is falling through (e.g. the density of air). He also stated that, when it comes to violent motion (motion of an object when a force is applied to it by a second object), the speed at which that object moves will only be as fast or strong as the measure of force applied to it. The problem of motion and its causes was studied carefully, leading to the philosophical notion of a "prime mover" as the ultimate source of all motion in the world (Book 8 of his treatise Physics).
Medieval European and Islamic
The Western Roman Empire fell to invaders and internal decay in the fifth century, resulting in a decline in intellectual pursuits in western Europe. By contrast, the Eastern Roman Empire (usually known as the Byzantine Empire) resisted the attacks from invaders and continued to advance various fields of learning, including physics.
In the sixth century, Isidore of Miletus created an important compilation of Archimedes' works that are copied in the Archimedes Palimpsest.
In sixth-century Europe John Philoponus, a Byzantine scholar, questioned Aristotle's teaching of physics and noted its flaws. He introduced the theory of impetus. Aristotle's physics was not scrutinized until Philoponus appeared; unlike Aristotle, who based his physics on verbal argument, Philoponus relied on observation. On Aristotle's physics Philoponus wrote: "But this is completely erroneous, and our view may be corroborated by actual observation more effectively than by any sort of verbal argument. For if you let fall from the same height two weights of which one is many times as heavy as the other, you will see that the ratio of the times required for the motion does not depend on the ratio of the weights, but that the difference in time is a very small one. And so, if the difference in the weights is not considerable, that is, if one is, let us say, double the other, there will be no difference, or else an imperceptible difference, in time, though the difference in weight is by no means negligible, with one body weighing twice as much as the other." Philoponus' criticism of Aristotelian principles of physics served as an inspiration for Galileo Galilei ten centuries later, during the Scientific Revolution. Galileo cited Philoponus substantially in his works when arguing that Aristotelian physics was flawed. In the 1300s Jean Buridan, a teacher in the faculty of arts at the University of Paris, developed the concept of impetus. It was a step toward the modern ideas of inertia and momentum.
Islamic scholarship inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, especially placing emphasis on observation and a priori reasoning, developing early forms of the scientific method.
The most notable innovations under Islamic scholarship were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics (also known as Kitāb al-Manāẓir), written by Ibn al-Haytham, in which he presented the alternative to the ancient Greek idea about vision. In his Treatise on Light as well as in his Kitāb al-Manāẓir, he presented a study of the phenomenon of the camera obscura (his thousand-year-old version of the pinhole camera) and delved further into the way the eye itself works. Using the knowledge of previous scholars, he began to explain how light enters the eye. He asserted that the light ray is focused, but the actual explanation of how light projected to the back of the eye had to wait until 1604. His Treatise on Light explained the camera obscura, hundreds of years before the modern development of photography.
The seven-volume Book of Optics (Kitab al-Manathir) influenced thinking across disciplines from the theory of visual perception to the nature of perspective in medieval art, in both the East and the West, for more than 600 years. This included later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to Johannes Kepler.
The translation of The Book of Optics had an impact on Europe. From it, later European scholars were able to build devices that replicated those Ibn al-Haytham had built and understand the way vision works.
Classical
Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics.
Major developments in this period include the replacement of the geocentric model of the Solar System with the heliocentric Copernican model, the laws governing the motion of planetary bodies (determined by Kepler between 1609 and 1619), Galileo's pioneering work on telescopes and observational astronomy in the 16th and 17th centuries, and Isaac Newton's discovery and unification of the laws of motion and universal gravitation (that would come to bear his name). Newton also developed calculus, the mathematical study of continuous change, which provided new mathematical methods for solving physical problems.
The discovery of laws in thermodynamics, chemistry, and electromagnetics resulted from research efforts during the Industrial Revolution as energy needs increased. The laws comprising classical physics remain widely used for objects on everyday scales travelling at non-relativistic speeds, since they provide a close approximation in such situations, and theories such as quantum mechanics and the theory of relativity simplify to their classical equivalents at such scales. Inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century.
Modern
Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity. Both of these theories came about due to inaccuracies in classical mechanics in certain situations. Classical mechanics predicted that the speed of light depends on the motion of the observer, which could not be resolved with the constant speed predicted by Maxwell's equations of electromagnetism. This discrepancy was corrected by Einstein's theory of special relativity, which replaced classical mechanics for fast-moving bodies and allowed for a constant speed of light. Black-body radiation provided another problem for classical physics, which was corrected when Planck proposed that the excitation of material oscillators is possible only in discrete steps proportional to their frequency. This, along with the photoelectric effect and a complete theory predicting discrete energy levels of electron orbitals, led to the theory of quantum mechanics improving on classical physics at very small scales.
Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac. From this early work, and work in related fields, the Standard Model of particle physics was derived. Following the discovery of a particle with properties consistent with the Higgs boson at CERN in 2012, all fundamental particles predicted by the standard model, and no others, appear to exist; however, physics beyond the Standard Model, with theories such as supersymmetry, is an active area of research. Areas of mathematics in general are important to this field, such as the study of probabilities and groups.
Core theories
Physics deals with a wide variety of systems, although certain theories are used by all physicists. Each of these theories was experimentally tested numerous times and found to be an adequate approximation of nature. For instance, the theory of classical mechanics accurately describes the motion of objects, provided they are much larger than atoms and moving at a speed much less than the speed of light. These theories continue to be areas of active research today. Chaos theory, an aspect of classical mechanics, was discovered in the 20th century, three centuries after the original formulation of classical mechanics by Newton (1642–1727).
These central theories are important tools for research into more specialized topics, and any physicist, regardless of their specialization, is expected to be literate in them. These include classical mechanics, quantum mechanics, thermodynamics and statistical mechanics, electromagnetism, and special relativity.
Classical theory
Classical physics includes the traditional branches and topics that were recognized and well-developed before the beginning of the 20th century—classical mechanics, acoustics, optics, thermodynamics, and electromagnetism. Classical mechanics is concerned with bodies acted on by forces and bodies in motion and may be divided into statics (study of the forces on a body or bodies not subject to an acceleration), kinematics (study of motion without regard to its causes), and dynamics (study of motion and the forces that affect it); mechanics may also be divided into solid mechanics and fluid mechanics (known together as continuum mechanics), the latter include such branches as hydrostatics, hydrodynamics and pneumatics. Acoustics is the study of how sound is produced, controlled, transmitted and received. Important modern branches of acoustics include ultrasonics, the study of sound waves of very high frequency beyond the range of human hearing; bioacoustics, the physics of animal calls and hearing, and electroacoustics, the manipulation of audible sound waves using electronics.
Optics, the study of light, is concerned not only with visible light but also with infrared and ultraviolet radiation, which exhibit all of the phenomena of visible light except visibility, e.g., reflection, refraction, interference, diffraction, dispersion, and polarization of light. Heat is a form of energy, the internal energy possessed by the particles of which a substance is composed; thermodynamics deals with the relationships between heat and other forms of energy. Electricity and magnetism have been studied as a single branch of physics since the intimate connection between them was discovered in the early 19th century; an electric current gives rise to a magnetic field, and a changing magnetic field induces an electric current. Electrostatics deals with electric charges at rest, electrodynamics with moving charges, and magnetostatics with magnetic poles at rest.
Modern theory
Classical physics is generally concerned with matter and energy on the normal scale of observation, while much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on a very large or very small scale. For example, atomic and nuclear physics study matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale since it is concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in particle accelerators. On this scale, ordinary, commonsensical notions of space, time, matter, and energy are no longer valid.
The two chief theories of modern physics present a different picture of the concepts of space, time, and matter from that presented by classical physics. Classical mechanics approximates nature as continuous, while quantum theory is concerned with the discrete nature of many phenomena at the atomic and subatomic level and with the complementary aspects of particles and waves in the description of such phenomena. The theory of relativity is concerned with the description of phenomena that take place in a frame of reference that is in motion with respect to an observer; the special theory of relativity is concerned with motion in the absence of gravitational fields and the general theory of relativity with motion and its connection with gravitation. Both quantum theory and the theory of relativity find applications in many areas of modern physics.
Fundamental concepts in modern physics
Causality
Covariance
Action
Physical field
Symmetry
Physical interaction
Statistical ensemble
Quantum
Wave
Particle
Distinction between classical and modern physics
While physics itself aims to discover universal laws, its theories lie in explicit domains of applicability.
Loosely speaking, the laws of classical physics accurately describe systems whose important length scales are greater than the atomic scale and whose motions are much slower than the speed of light. Outside of this domain, observations do not match predictions provided by classical mechanics. Einstein contributed the framework of special relativity, which replaced notions of absolute time and space with spacetime and allowed an accurate description of systems whose components have speeds approaching the speed of light. Planck, Schrödinger, and others introduced quantum mechanics, a probabilistic notion of particles and interactions that allowed an accurate description of atomic and subatomic scales. Later, quantum field theory unified quantum mechanics and special relativity. General relativity allowed for a dynamical, curved spacetime, with which highly massive systems and the large-scale structure of the universe can be well-described. General relativity has not yet been unified with the other fundamental descriptions; several candidate theories of quantum gravity are being developed.
Philosophy and relation to other fields
Physics, as with the rest of science, relies on the philosophy of science and its "scientific method" to advance knowledge of the physical world. The scientific method employs a priori and a posteriori reasoning as well as the use of Bayesian inference to measure the validity of a given theory.
Study of the philosophical issues surrounding physics, the philosophy of physics, involves issues such as the nature of space and time, determinism, and metaphysical outlooks such as empiricism, naturalism, and realism.
Many physicists have written about the philosophical implications of their work, for instance Laplace, who championed causal determinism, and Erwin Schrödinger, who wrote on quantum mechanics. The mathematical physicist Roger Penrose has been called a Platonist by Stephen Hawking, a view Penrose discusses in his book, The Road to Reality. Hawking referred to himself as an "unashamed reductionist" and took issue with Penrose's views.
Mathematics provides a compact and exact language used to describe the order in nature. This was noted and advocated by Pythagoras, Plato, Galileo, and Newton. Some theorists, like Hilary Putnam and Penelope Maddy, hold that logical truths, and therefore mathematical reasoning, depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world, which may explain the peculiar relation between these fields.
Physics uses mathematics to organise and formulate experimental results. From those results, precise or estimated solutions are obtained, or quantitative results, from which new predictions can be made and experimentally confirmed or negated. The results from physics experiments are numerical data, with their units of measure and estimates of the errors in the measurements. Technologies based on mathematics, like computation have made computational physics an active area of research.
Ontology is a prerequisite for physics, but not for mathematics. It means physics is ultimately concerned with descriptions of the real world, while mathematics is concerned with abstract patterns, even beyond the real world. Thus physics statements are synthetic, while mathematical statements are analytic. Mathematics contains hypotheses, while physics contains theories. Mathematics statements have to be only logically true, while predictions of physics statements must match observed and experimental data.
The distinction is clear-cut, but not always obvious. For example, mathematical physics is the application of mathematics in physics. Its methods are mathematical, but its subject is physical. The problems in this field start with a "mathematical model of a physical situation" (system) and a "mathematical description of a physical law" that will be applied to that system. Every mathematical statement used for solving has a hard-to-find physical meaning. The final mathematical solution has an easier-to-find meaning, because it is what the solver is looking for.
Distinction between fundamental vs. applied physics
Physics is a branch of fundamental science (also called basic science). Physics is also called "the fundamental science" because all branches of natural science, including chemistry, astronomy, geology, and biology, are constrained by laws of physics. Similarly, chemistry is often called the central science because of its role in linking the physical sciences. For example, chemistry studies properties, structures, and reactions of matter (chemistry's focus on the molecular and atomic scale distinguishes it from physics). Structures are formed because particles exert electrical forces on each other, properties include physical characteristics of given substances, and reactions are bound by laws of physics, like conservation of energy, mass, and charge. Fundamental physics seeks to better explain and understand phenomena in all spheres, without a specific practical application as a goal, other than the deeper insight into the phenomena themselves.
Applied physics is a general term for physics research and development that is intended for a particular use. An applied physics curriculum usually contains a few classes in an applied discipline, like geology or electrical engineering. It usually differs from engineering in that an applied physicist may not be designing something in particular, but rather is using physics or conducting physics research with the aim of developing new technologies or solving a problem.
The approach is similar to that of applied mathematics. Applied physicists use physics in scientific research. For instance, people working on accelerator physics might seek to build better particle detectors for research in theoretical physics.
Physics is used heavily in engineering. For example, statics, a subfield of mechanics, is used in the building of bridges and other static structures. The understanding and use of acoustics results in sound control and better concert halls; similarly, the use of optics creates better optical devices. An understanding of physics makes for more realistic flight simulators, video games, and movies, and is often critical in forensic investigations.
With the standard consensus that the laws of physics are universal and do not change with time, physics can be used to study things that would ordinarily be mired in uncertainty. For example, in the study of the origin of the Earth, a physicist can reasonably model Earth's mass, temperature, and rate of rotation, as a function of time allowing the extrapolation forward or backward in time and so predict future or prior events. It also allows for simulations in engineering that speed up the development of a new technology.
There is also considerable interdisciplinarity, so many other important fields are influenced by physics (e.g., the fields of econophysics and sociophysics).
Research
Scientific method
Physicists use the scientific method to test the validity of a physical theory. By using a methodical approach to compare the implications of a theory with the conclusions drawn from its related experiments and observations, physicists are better able to test the validity of a theory in a logical, unbiased, and repeatable way. To that end, experiments are performed and observations are made in order to determine the validity or invalidity of a theory.
A scientific law is a concise verbal or mathematical statement of a relation that expresses a fundamental principle of some theory, such as Newton's law of universal gravitation.
Theory and experiment
Theorists seek to develop mathematical models that both agree with existing experiments and successfully predict future experimental results, while experimentalists devise and perform experiments to test theoretical predictions and explore new phenomena. Although theory and experiment are developed separately, they strongly affect and depend upon each other. Progress in physics frequently comes about when experimental results defy explanation by existing theories, prompting intense focus on applicable modelling, and when new theories generate experimentally testable predictions, which inspire the development of new experiments (and often related equipment).
Physicists who work at the interplay of theory and experiment are called phenomenologists, who study complex phenomena observed in experiment and work to relate them to a fundamental theory.
Theoretical physics has historically taken inspiration from philosophy; electromagnetism was unified this way. Beyond the known universe, the field of theoretical physics also deals with hypothetical issues, such as parallel universes, a multiverse, and higher dimensions. Theorists invoke these ideas in hopes of solving particular problems with existing theories; they then explore the consequences of these ideas and work toward making testable predictions.
Experimental physics expands, and is expanded by, engineering and technology. Experimental physicists who are involved in basic research design and perform experiments with equipment such as particle accelerators and lasers, whereas those involved in applied research often work in industry, developing technologies such as magnetic resonance imaging (MRI) and transistors. Feynman has noted that experimentalists may seek areas that have not been explored well by theorists.
Scope and aims
Physics covers a wide range of phenomena, from elementary particles (such as quarks, neutrinos, and electrons) to the largest superclusters of galaxies. Included in these phenomena are the most basic objects composing all other things. Therefore, physics is sometimes called the "fundamental science". Physics aims to describe the various phenomena that occur in nature in terms of simpler phenomena. Thus, physics aims to both connect the things observable to humans to root causes, and then connect these causes together.
For example, the ancient Chinese observed that certain rocks (lodestone and magnetite) were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. But even before the Chinese discovered magnetism, the ancient Greeks knew of other objects such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also first studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force—electromagnetism. This process of "unifying" forces continues today, and electromagnetism and the weak nuclear force are now considered to be two aspects of the electroweak interaction. Physics hopes to find an ultimate reason (theory of everything) for why nature is as it is (see section Current research below for more information).
Research fields
Contemporary research in physics can be broadly divided into nuclear and particle physics; condensed matter physics; atomic, molecular, and optical physics; astrophysics; and applied physics. Some physics departments also support physics education research and physics outreach.
Since the 20th century, the individual fields of physics have become increasingly specialised, and today most physicists work in a single field for their entire careers. "Universalists" such as Einstein (1879–1955) and Lev Landau (1908–1968), who worked in multiple fields of physics, are now very rare.
The major fields of physics, along with their subfields and the theories and concepts they employ, are shown in the following table.
Nuclear and particle
Particle physics is the study of the elementary constituents of matter and energy and the interactions between them. In addition, particle physicists design and develop the high-energy accelerators, detectors, and computer programs necessary for this research. The field is also called "high-energy physics" because many elementary particles do not occur naturally but are created only during high-energy collisions of other particles.
Currently, the interactions of elementary particles and fields are described by the Standard Model. The model accounts for the 12 known particles of matter (quarks and leptons) that interact via the strong, weak, and electromagnetic fundamental forces. Dynamics are described in terms of matter particles exchanging gauge bosons (gluons, W and Z bosons, and photons, respectively). The Standard Model also predicts a particle known as the Higgs boson. In July 2012 CERN, the European laboratory for particle physics, announced the detection of a particle consistent with the Higgs boson, an integral part of the Higgs mechanism.
Nuclear physics is the field of physics that studies the constituents and interactions of atomic nuclei. The most commonly known applications of nuclear physics are nuclear power generation and nuclear weapons technology, but the research has provided application in many fields, including those in nuclear medicine and magnetic resonance imaging, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology.
Atomic, molecular, and optical
Atomic, molecular, and optical physics (AMO) is the study of matter-matter and light-matter interactions on the scale of single atoms and molecules. The three areas are grouped together because of their interrelationships, the similarity of methods used, and the commonality of their relevant energy scales. All three areas include both classical, semi-classical and quantum treatments; they can treat their subject from a microscopic view (in contrast to a macroscopic view).
Atomic physics studies the electron shells of atoms. Current research focuses on activities in quantum control, cooling and trapping of atoms and ions, low-temperature collision dynamics and the effects of electron correlation on structure and dynamics. Atomic physics is influenced by the nucleus (see hyperfine splitting), but intra-nuclear phenomena such as fission and fusion are considered part of nuclear physics.
Molecular physics focuses on multi-atomic structures and their internal and external interactions with matter and light. Optical physics is distinct from optics in that it tends to focus not on the control of classical light fields by macroscopic objects but on the fundamental properties of optical fields and their interactions with matter in the microscopic realm.
Condensed matter
Condensed matter physics is the field of physics that deals with the macroscopic physical properties of matter. In particular, it is concerned with the "condensed" phases that appear whenever the number of particles in a system is extremely large and the interactions between them are strong.
The most familiar examples of condensed phases are solids and liquids, which arise from the bonding by way of the electromagnetic force between atoms. More exotic condensed phases include the superfluid and the Bose–Einstein condensate found in certain atomic systems at very low temperature, the superconducting phase exhibited by conduction electrons in certain materials, and the ferromagnetic and antiferromagnetic phases of spins on atomic lattices.
Condensed matter physics is the largest field of contemporary physics. Historically, condensed matter physics grew out of solid-state physics, which is now considered one of its main subfields. The term condensed matter physics was apparently coined by Philip Anderson when he renamed his research group—previously solid-state theory—in 1967. In 1978, the Division of Solid State Physics of the American Physical Society was renamed as the Division of Condensed Matter Physics. Condensed matter physics has a large overlap with chemistry, materials science, nanotechnology and engineering.
Astrophysics
Astrophysics and astronomy are the application of the theories and methods of physics to the study of stellar structure, stellar evolution, the origin of the Solar System, and related problems of cosmology. Because astrophysics is a broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
The discovery by Karl Jansky in 1931 that radio signals were emitted by celestial bodies initiated the science of radio astronomy. Most recently, the frontiers of astronomy have been expanded by space exploration. Perturbations and interference from the Earth's atmosphere make space-based observations necessary for infrared, ultraviolet, gamma-ray, and X-ray astronomy.
Physical cosmology is the study of the formation and evolution of the universe on its largest scales. Albert Einstein's theory of relativity plays a central role in all modern cosmological theories. In the early 20th century, Hubble's discovery that the universe is expanding, as shown by the Hubble diagram, prompted rival explanations known as the steady state universe and the Big Bang.
The Big Bang was confirmed by the success of Big Bang nucleosynthesis and the discovery of the cosmic microwave background in 1964. The Big Bang model rests on two theoretical pillars: Albert Einstein's general relativity and the cosmological principle. Cosmologists have recently established the ΛCDM model of the evolution of the universe, which includes cosmic inflation, dark energy, and dark matter.
Numerous possibilities and discoveries are anticipated to emerge from new data from the Fermi Gamma-ray Space Telescope over the upcoming decade and vastly revise or clarify existing models of the universe. In particular, the potential for a tremendous discovery surrounding dark matter is possible over the next several years. Fermi will search for evidence that dark matter is composed of weakly interacting massive particles, complementing similar experiments with the Large Hadron Collider and other underground detectors.
IBEX is already yielding new astrophysical discoveries: "No one knows what is creating the ENA (energetic neutral atoms) ribbon" along the termination shock of the solar wind, "but everyone agrees that it means the textbook picture of the heliosphere—in which the Solar System's enveloping pocket filled with the solar wind's charged particles is plowing through the onrushing 'galactic wind' of the interstellar medium in the shape of a comet—is wrong."
Current research
Research in physics is continually progressing on a large number of fronts.
In condensed matter physics, an important unsolved theoretical problem is that of high-temperature superconductivity. Many condensed matter experiments are aiming to fabricate workable spintronics and quantum computers.
In particle physics, the first pieces of experimental evidence for physics beyond the Standard Model have begun to appear. Foremost among these are indications that neutrinos have non-zero mass. These experimental results appear to have solved the long-standing solar neutrino problem, and the physics of massive neutrinos remains an area of active theoretical and experimental research. The Large Hadron Collider has already found the Higgs boson, but future research aims to prove or disprove the supersymmetry, which extends the Standard Model of particle physics. Research on the nature of the major mysteries of dark matter and dark energy is also currently ongoing.
Although much progress has been made in high-energy, quantum, and astronomical physics, many everyday phenomena involving complexity, chaos, or turbulence are still poorly understood. Complex problems that seem like they could be solved by a clever application of dynamics and mechanics remain unsolved; examples include the formation of sandpiles, nodes in trickling water, the shape of water droplets, mechanisms of surface tension catastrophes, and self-sorting in shaken heterogeneous collections.
These complex phenomena have received growing attention since the 1970s for several reasons, including the availability of modern mathematical methods and computers, which enabled complex systems to be modeled in new ways. Complex physics has become part of increasingly interdisciplinary research, as exemplified by the study of turbulence in aerodynamics and the observation of pattern formation in biological systems. In the 1932 Annual Review of Fluid Mechanics, Horace Lamb said:
Physics education
Careers
See also
Lists
Notes
References
Sources
External links
Physics at Quanta Magazine
Usenet Physics FAQ – FAQ compiled by sci.physics and other physics newsgroups
Website of the Nobel Prize in physics – Award for outstanding contributions to the subject
World of Physics – Online encyclopedic dictionary of physics
Nature Physics – Academic journal
Physics – Online magazine by the American Physical Society
– Directory of physics related media
The Vega Science Trust – Science videos, including physics
HyperPhysics website – Physics and astronomy mind-map from Georgia State University
Physics at MIT OpenCourseWare – Online course material from Massachusetts Institute of Technology
The Feynman Lectures on Physics | 0.775791 | 0.999779 | 0.77562 |
Physics of firearms | From the viewpoint of physics (dynamics, to be exact), a firearm, as for most weapons, is a system for delivering maximum destructive energy to the target with minimum delivery of energy on the shooter. The momentum delivered to the target, however, cannot be any more than that (due to recoil) on the shooter. This is due to conservation of momentum, which dictates that the momentum imparted to the bullet is equal and opposite to that imparted to the gun-shooter system.
Firearm energy efficiency
From a thermodynamic point of view, a firearm is a special type of piston engine, or in general a heat engine, where the bullet functions as a piston. The energy conversion efficiency of a firearm strongly depends on its construction, especially on its caliber and barrel length.
However, for illustration, here is the energy balance of a typical small firearm for .300 Hawk ammunition:
Barrel friction 2%
Projectile motion 32%
Hot gases 34%
Barrel heat 30%
Unburned propellant 1%.
This energy distribution, with roughly a third of the propellant energy going into projectile motion, is comparable with that of a typical piston engine.
Higher efficiency can be achieved in longer-barrel firearms because they have a better volume ratio. However, the efficiency gain is less than would correspond to the volume ratio, because the expansion is not truly adiabatic and the burnt gas cools faster due to heat exchange with the barrel. Large firearms (such as cannons) achieve smaller barrel-heating losses because they have a better volume-to-surface ratio.
A larger barrel diameter is also helpful, because the friction induced by sealing is lower relative to the accelerating force: at the same pressure, the accelerating force is proportional to the square of the barrel diameter, while the sealing needs are proportional to the perimeter. A rough numerical sketch of this scaling is given below.
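The following sketch (added for illustration, not from the original text) makes the scaling argument concrete; the chamber pressure and the friction-per-length figure are arbitrary assumed values, and only the trend of the ratio matters:

```python
# Illustrative sketch: accelerating force scales with bore area (~ d^2),
# while sealing-related friction scales with bore circumference (~ d),
# so the force-to-friction ratio improves with barrel diameter.
import math

pressure = 200e6           # chamber pressure in Pa (assumed value)
friction_per_length = 2e4  # sealing friction per metre of circumference, N/m (assumed)

for d in (0.0056, 0.020, 0.120):  # bore diameters in metres (small arm .. cannon)
    area = math.pi * (d / 2) ** 2
    force = pressure * area                       # ~ d^2
    friction = friction_per_length * math.pi * d  # ~ d
    print(f"d = {d * 1000:5.1f} mm  force/friction = {force / friction:8.1f}")
```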
Force
According to Newtonian mechanics, if the gun and shooter are at rest initially, the force on the bullet will be equal to that on the gun-shooter. This is due to Newton's third law of motion (For every action, there is an equal and opposite reaction). Consider a system where the gun and shooter have a combined mass m_g and the bullet has a mass m_b. When the gun is fired, the two masses move away from one another with velocities V_g and v_b respectively. But the law of conservation of momentum states that the magnitudes of their momenta must be equal, and as momentum is a vector quantity and their directions are opposite:
m_g V_g = m_b v_b   (Eq. 1)
In technical mathematical terms, the derivative of momentum with respect to time is force, which implies the force on the bullet will equal the force on the gun, and the momentum of the bullet/shooter can be derived via integrating the force-time function of the bullet or shooter. This is mathematically written as follows:
F_g = −F_b, and integrating each force over the (equal) time during which it acts gives m_g V_g = m_b v_b in magnitude, recovering Eq. 1,
where the subscripts g and b refer to the gun(-shooter) and the bullet, and t, m, v and F denote time, mass, velocity and force respectively.
Hollywood depictions of firearm victims being thrown through plate-glass windows are inaccurate. Were this the case, the shooter would also be thrown backwards, experiencing an even greater change in momentum in the opposite direction. Gunshot victims frequently fall or collapse when shot; this is less a result of the momentum of the bullet pushing them over, but is primarily caused by physical damage or psychological effects, perhaps combined with being off balance. This is not the case if the victim is hit by heavier projectiles such as 20 mm cannon shell, where the momentum effects can be enormous; this is why very few such weapons can be fired without being mounted on a weapons platform or involve a recoilless system (e.g. a recoilless rifle).
Example:
A .44 Remington Magnum with a 16 g jacketed bullet is fired at 360 m/s at a 77 kg target. What velocity is imparted to the target (assume the bullet remains embedded in the target and thus practically loses all its velocity)?
Let m_b and v_b stand for the mass and velocity of the bullet, the latter just before hitting the target, and let m_t and v_t stand for the mass and velocity of the target after being hit.
Conservation of momentum requires
m_b v_b = m_t v_t.
Solving for the target's velocity gives
v_t = m_b v_b / m_t = 0.016 kg × 360 m/s / 77 kg ≈ 0.075 m/s ≈ 0.17 mph.
This shows the target, with its great mass, barely moves at all. This is despite ignoring drag forces, which would in reality cause the bullet to lose energy and momentum in flight.
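A short numerical sketch of this worked example (added for illustration, not part of the original text), reusing the same figures:

```python
# Momentum bookkeeping for the .44 Magnum example above.
m_bullet = 0.016   # kg
v_bullet = 360.0   # m/s
m_target = 77.0    # kg

p_bullet = m_bullet * v_bullet   # momentum carried by the bullet, kg*m/s
v_target = p_bullet / m_target   # target velocity if the bullet embeds

print(f"bullet momentum: {p_bullet:.2f} kg*m/s")
print(f"target velocity: {v_target:.3f} m/s ({v_target * 2.237:.2f} mph)")
# -> about 0.075 m/s, i.e. roughly 0.17 mph: the target barely moves.
```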
Velocity
From Eq. 1 we can write for the velocity of the gun/shooter: V_g = m_b v_b / m_g. This shows that despite the high velocity of the bullet, the small bullet-mass to shooter-mass ratio results in a low recoil velocity V_g, although the force and momentum are equal.
Kinetic energy
However, the smaller mass of the bullet, compared to that of the gun-shooter system, allows significantly more kinetic energy to be imparted to the bullet than to the shooter. The kinetic energies of the two systems are E_g = ½ m_g V_g² for the gun-shooter system and E_b = ½ m_b v_b² for the bullet. Using Eq. 1, the energy imparted to the shooter can then be written as:
E_g = ½ m_g (m_b v_b / m_g)² = (m_b / m_g) · ½ m_b v_b²
For the ratio of these energies we have:
E_g / E_b = m_b / m_g
The ratio of the kinetic energies is the same as the ratio of the masses (and is independent of velocity). Since the mass of the bullet is much less than that of the shooter there is more kinetic energy transferred to the bullet than to the shooter. Once discharged from the weapon, the bullet's energy decays throughout its flight, until the remainder is dissipated by colliding with a target (e.g. deforming the bullet and target).
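To make the mass ratio concrete, here is a brief sketch (an added illustration, not from the original text); the combined gun-and-shooter mass of 80 kg is an assumed value, and the bullet figures reuse the example above:

```python
# Energy split between bullet and gun-shooter system for equal (opposite) momenta.
m_bullet, v_bullet = 0.016, 360.0   # kg, m/s (from the example above)
m_shooter = 80.0                    # kg, combined gun + shooter (assumed)

E_bullet = 0.5 * m_bullet * v_bullet ** 2
v_recoil = m_bullet * v_bullet / m_shooter   # from momentum conservation
E_shooter = 0.5 * m_shooter * v_recoil ** 2

print(f"bullet energy : {E_bullet:7.1f} J")    # about 1037 J
print(f"shooter energy: {E_shooter:7.3f} J")   # about 0.2 J
print(f"E_shooter/E_bullet = {E_shooter / E_bullet:.5f} "
      f"(= m_bullet/m_shooter = {m_bullet / m_shooter:.5f})")
```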
Transfer of energy
When the bullet strikes, its high velocity and small frontal cross-section means that it will exert highly focused stresses in any object it hits. This usually results in it penetrating any softer material, such as flesh. The energy is then dissipated along the wound channel formed by the passage of the bullet. See terminal ballistics for a fuller discussion of these effects.
Bulletproof vests work by dissipating the bullet's energy in another way; the vest's material, usually Aramid (Kevlar or Twaron), works by presenting a series of material layers which catch the bullet and spread its imparted force over a larger area, hopefully bringing it to a stop before it can penetrate into the body behind the vest. While the vest can prevent a bullet from penetrating, the wearer will still be affected by the momentum of the bullet, which can produce contusions.
See also
Ballistics
Internal ballistics
External ballistics
Table of handgun and rifle cartridges
References
Firearms
Applied and interdisciplinary physics
Classical mechanics
de:Innenballistik | 0.792957 | 0.978088 | 0.775581 |
Hamilton–Jacobi equation | In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics.
The Hamilton–Jacobi equation is a formulation of mechanics in which the motion of a particle can be represented as a wave. In this sense, it fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the eighteenth century) of finding an analogy between the propagation of light and the motion of a particle. The wave equation followed by mechanical systems is similar to, but not identical with, Schrödinger's equation, as described below; for this reason, the Hamilton–Jacobi equation is considered the "closest approach" of classical mechanics to quantum mechanics. The qualitative form of this connection is called Hamilton's optico-mechanical analogy.
In mathematics, the Hamilton–Jacobi equation is a necessary condition describing extremal geometry in generalizations of problems from the calculus of variations. It can be understood as a special case of the Hamilton–Jacobi–Bellman equation from dynamic programming.
Overview
The Hamilton–Jacobi equation is a first-order, non-linear partial differential equation
$$-\frac{\partial S}{\partial t} = H\!\left(\mathbf{q}, \frac{\partial S}{\partial \mathbf{q}}, t\right)$$
for a system of particles at coordinates $\mathbf{q}$. The function $H$ is the system's Hamiltonian giving the system's energy. The solution of the equation is the action functional $S(\mathbf{q}, t)$, called Hamilton's principal function in older textbooks.
The solution can be related to the system Lagrangian $L$ by an indefinite integral of the form used in the principle of least action:
$$S = \int L\, dt + \text{constant}.$$
Geometrical surfaces of constant action are perpendicular to system trajectories, creating a wavefront-like view of the system dynamics. This property of the Hamilton–Jacobi equation connects classical mechanics to quantum mechanics.
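As a quick illustration of this wavefront picture (an example added here, not part of the original text), consider a single free particle of mass $m$:

```latex
% Free particle of mass m: H = p^2/(2m).  A complete integral of the HJE
%   -\partial S/\partial t = (1/2m) |\nabla S|^2
% is the plane-wave-like principal function
\[
  S(\mathbf{q}, t) = \mathbf{p}\cdot\mathbf{q} - \frac{|\mathbf{p}|^{2}}{2m}\, t ,
\]
% with the constant vector p playing the role of the separation constants.
% Surfaces of constant S at fixed t are planes with normal p, i.e. they are
% perpendicular to the straight-line trajectories q(t) = q_0 + (p/m) t,
% and they sweep through space like advancing wavefronts as t increases.
```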
Mathematical formulation
Notation
Boldface variables such as $\mathbf{q}$ represent a list of $N$ generalized coordinates,
$$\mathbf{q} = (q_1, q_2, \ldots, q_N).$$
A dot over a variable or list signifies the time derivative (see Newton's notation). For example,
$$\dot{\mathbf{q}} = \frac{d\mathbf{q}}{dt}.$$
The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, such as
$$\mathbf{p} \cdot \mathbf{q} = \sum_{k=1}^{N} p_k q_k.$$
The action functional (a.k.a. Hamilton's principal function)
Definition
Let the Hessian matrix $H_L(\mathbf{q}, \dot{\mathbf{q}}, t) = \left(\partial^2 L/\partial \dot{q}^i\,\partial \dot{q}^j\right)$ be invertible. The relation
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}^i} = \sum_{j=1}^{N}\left(\frac{\partial^2 L}{\partial \dot{q}^i\,\partial \dot{q}^j}\,\ddot{q}^j + \frac{\partial^2 L}{\partial \dot{q}^i\,\partial q^j}\,\dot{q}^j\right) + \frac{\partial^2 L}{\partial \dot{q}^i\,\partial t}$$
shows that the Euler–Lagrange equations form a system of $N$ second-order ordinary differential equations. Inverting the matrix $H_L$ transforms this system into
$$\ddot{q}^i = F_i(\mathbf{q}, \dot{\mathbf{q}}, t), \qquad i = 1, \ldots, N.$$
Let a time instant $t_0$ and a point $\mathbf{q}_0$ in the configuration space $M$ be fixed. The existence and uniqueness theorems guarantee that, for every initial velocity $\mathbf{v}_0$, the initial value problem with the conditions $\gamma|_{\tau = t_0} = \mathbf{q}_0$ and $\dot{\gamma}|_{\tau = t_0} = \mathbf{v}_0$ has a locally unique solution $\gamma = \gamma(\tau; t_0, \mathbf{q}_0, \mathbf{v}_0)$. Additionally, let there be a sufficiently small time interval $(t_0, t_1)$ such that extremals with different initial velocities $\mathbf{v}_0$ would not intersect in $M \times (t_0, t_1)$. The latter means that, for any $\mathbf{q} \in M$ and any $t \in (t_0, t_1]$, there can be at most one extremal $\gamma$ for which $\gamma|_{\tau = t_0} = \mathbf{q}_0$ and $\gamma|_{\tau = t} = \mathbf{q}$. Substituting $\gamma$ into the action functional results in the Hamilton's principal function (HPF)
$$S(\mathbf{q}, t; \mathbf{q}_0, t_0) = \int_{t_0}^{t} L\big(\gamma(\tau), \dot{\gamma}(\tau), \tau\big)\, d\tau,$$
where $\gamma = \gamma(\tau; \mathbf{q}, t; \mathbf{q}_0, t_0)$ is the extremal connecting $(\mathbf{q}_0, t_0)$ to $(\mathbf{q}, t)$.
Formula for the momenta
The momenta are defined as the quantities $p_i(\mathbf{q}, \dot{\mathbf{q}}, t) = \partial L/\partial \dot{q}^i$. This section shows that the dependency of $\mathbf{p}$ on $\dot{\mathbf{q}}$ disappears, once the HPF is known.
Indeed, let a time instant $t_0$ and a point $\mathbf{q}_0$ in the configuration space be fixed. For every time instant $t$ and a point $\mathbf{q}$, let $\gamma = \gamma(\tau; \mathbf{q}, t; \mathbf{q}_0, t_0)$ be the (unique) extremal from the definition of the Hamilton's principal function $S$. Call $\mathbf{v} = \dot{\gamma}(t)$ the velocity at $\tau = t$. Then
$$\frac{\partial S}{\partial \mathbf{q}} = \frac{\partial L}{\partial \dot{\mathbf{q}}}\bigg|_{\dot{\mathbf{q}} = \mathbf{v}} = \mathbf{p}.$$
Formula
Given the Hamiltonian $H(\mathbf{q}, \mathbf{p}, t)$ of a mechanical system, the Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for the Hamilton's principal function $S$,
$$-\frac{\partial S}{\partial t} = H\!\left(\mathbf{q}, \frac{\partial S}{\partial \mathbf{q}}, t\right).$$
Alternatively, as described below, the Hamilton–Jacobi equation may be derived from Hamiltonian mechanics by treating as the generating function for a canonical transformation of the classical Hamiltonian
The conjugate momenta correspond to the first derivatives of $S$ with respect to the generalized coordinates,
$$p_k = \frac{\partial S}{\partial q_k}.$$
As a solution to the Hamilton–Jacobi equation, the principal function contains $N + 1$ undetermined constants, the first $N$ of them denoted as $\alpha_1, \alpha_2, \ldots, \alpha_N$, and the last one coming from the integration of $\partial S/\partial t$.
The relationship between $\mathbf{p}$ and $\mathbf{q}$ then describes the orbit in phase space in terms of these constants of motion. Furthermore, the quantities
$$\beta_k = \frac{\partial S}{\partial \alpha_k}, \qquad k = 1, 2, \ldots, N$$
are also constants of motion, and these equations can be inverted to find $\mathbf{q}$ as a function of all the $\alpha$ and $\beta$ constants and time.
Comparison with other formulations of mechanics
The Hamilton–Jacobi equation is a single, first-order partial differential equation for the function $S$ of the $N$ generalized coordinates $q_1, \ldots, q_N$ and the time $t$. The generalized momenta do not appear, except as derivatives of $S$, the classical action.
For comparison, in the equivalent Euler–Lagrange equations of motion of Lagrangian mechanics, the conjugate momenta also do not appear; however, those equations are a system of $N$, generally second-order equations for the time evolution of the generalized coordinates. Similarly, Hamilton's equations of motion are another system of 2N first-order equations for the time evolution of the generalized coordinates and their conjugate momenta $p_1, \ldots, p_N$.
Since the HJE is an equivalent expression of an integral minimization problem such as Hamilton's principle, the HJE can be useful in other problems of the calculus of variations and, more generally, in other branches of mathematics and physics, such as dynamical systems, symplectic geometry and quantum chaos. For example, the Hamilton–Jacobi equations can be used to determine the geodesics on a Riemannian manifold, an important variational problem in Riemannian geometry. However, as a computational tool, the partial differential equations are notoriously complicated to solve except when it is possible to separate the independent variables; in this case the HJE becomes computationally useful.
Derivation using a canonical transformation
Any canonical transformation involving a type-2 generating function $G_2(\mathbf{q}, \mathbf{P}, t)$ leads to the relations
$$\mathbf{p} = \frac{\partial G_2}{\partial \mathbf{q}}, \qquad \mathbf{Q} = \frac{\partial G_2}{\partial \mathbf{P}}, \qquad K(\mathbf{Q}, \mathbf{P}, t) = H(\mathbf{q}, \mathbf{p}, t) + \frac{\partial G_2}{\partial t},$$
and Hamilton's equations in terms of the new variables $\mathbf{Q}, \mathbf{P}$ and new Hamiltonian $K$ have the same form:
$$\dot{\mathbf{P}} = -\frac{\partial K}{\partial \mathbf{Q}}, \qquad \dot{\mathbf{Q}} = +\frac{\partial K}{\partial \mathbf{P}}.$$
To derive the HJE, a generating function $G_2(\mathbf{q}, \mathbf{P}, t)$ is chosen in such a way that it will make the new Hamiltonian $K = 0$. Hence, all its derivatives are also zero, and the transformed Hamilton's equations become trivial
$$\dot{\mathbf{P}} = \dot{\mathbf{Q}} = 0,$$
so the new generalized coordinates and momenta are constants of motion. As they are constants, in this context the new generalized momenta $\mathbf{P}$ are usually denoted $\alpha_1, \alpha_2, \ldots, \alpha_N$, i.e. $P_m = \alpha_m$, and the new generalized coordinates $\mathbf{Q}$ are typically denoted as $\beta_1, \beta_2, \ldots, \beta_N$, so $Q_m = \beta_m$.
Setting the generating function equal to Hamilton's principal function, plus an arbitrary constant $A$:
$$G_2(\mathbf{q}, \boldsymbol{\alpha}, t) = S(\mathbf{q}, t) + A,$$
the HJE automatically arises from the momentum relation $\mathbf{p} = \partial S/\partial \mathbf{q}$:
$$H\!\left(\mathbf{q}, \frac{\partial S}{\partial \mathbf{q}}, t\right) + \frac{\partial S}{\partial t} = 0.$$
When solved for $S(\mathbf{q}, \boldsymbol{\alpha}, t)$, these also give us the useful equations
$$\mathbf{Q} = \boldsymbol{\beta} = \frac{\partial S}{\partial \boldsymbol{\alpha}},$$
or written in components for clarity
$$Q_m = \beta_m = \frac{\partial S(\mathbf{q}, \boldsymbol{\alpha}, t)}{\partial \alpha_m}.$$
Ideally, these N equations can be inverted to find the original generalized coordinates $\mathbf{q}$ as a function of the constants $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, and $t$, thus solving the original problem.
Separation of variables
When the problem allows additive separation of variables, the HJE leads directly to constants of motion. For example, the time t can be separated if the Hamiltonian does not depend on time explicitly. In that case, the time derivative $\partial S/\partial t$ in the HJE must be a constant, usually denoted $-E$, giving the separated solution
$$S = W(q_1, q_2, \ldots, q_N) - Et,$$
where the time-independent function $W(\mathbf{q})$ is sometimes called the abbreviated action or Hamilton's characteristic function (see action principle names). The reduced Hamilton–Jacobi equation can then be written
$$H\!\left(\mathbf{q}, \frac{\partial W}{\partial \mathbf{q}}\right) = E.$$
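As a worked illustration of this time separation (added here; it does not appear in the original text), take the one-dimensional harmonic oscillator:

```latex
% One-dimensional harmonic oscillator, H = p^2/(2m) + (1/2) m \omega^2 q^2.
% Writing S = W(q) - Et, the reduced HJE  H(q, dW/dq) = E  reads
\[
  \frac{1}{2m}\left(\frac{dW}{dq}\right)^{2} + \frac{1}{2} m \omega^{2} q^{2} = E ,
\]
% so the characteristic function follows from a single quadrature,
\[
  W(q) = \int \sqrt{\, 2mE - m^{2}\omega^{2} q^{2} \,}\; dq ,
\]
% and the constant \beta = \partial S/\partial E = \partial W/\partial E - t
% gives the familiar solution q(t) = \sqrt{2E/(m\omega^{2})}\,\sin\!\big(\omega (t + \beta)\big).
```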
To illustrate separability for other variables, a certain generalized coordinate $q_k$ and its derivative $\partial S/\partial q_k$ are assumed to appear together as a single function
$$\psi\!\left(q_k, \frac{\partial S}{\partial q_k}\right)$$
in the Hamiltonian
$$H = H\!\left(q_1, \ldots, q_{k-1}, q_{k+1}, \ldots, q_N;\; p_1, \ldots, p_{k-1}, p_{k+1}, \ldots, p_N;\; \psi;\; t\right).$$
In that case, the function S can be partitioned into two functions, one that depends only on qk and another that depends only on the remaining generalized coordinates:
$$S = S_k(q_k) + S_{\text{rem}}(q_1, \ldots, q_{k-1}, q_{k+1}, \ldots, q_N, t).$$
Substitution of these formulae into the Hamilton–Jacobi equation shows that the function ψ must be a constant (denoted here as $\Gamma_k$), yielding a first-order ordinary differential equation for $S_k(q_k)$,
$$\psi\!\left(q_k, \frac{dS_k}{dq_k}\right) = \Gamma_k.$$
In fortunate cases, the function $S$ can be separated completely into $N$ functions $S_m(q_m)$,
$$S = S_1(q_1) + S_2(q_2) + \cdots + S_N(q_N) - Et.$$
In such a case, the problem devolves to ordinary differential equations.
The separability of S depends both on the Hamiltonian and on the choice of generalized coordinates. For orthogonal coordinates and Hamiltonians that have no time dependence and are quadratic in the generalized momenta, $W$ will be completely separable if the potential energy is additively separable in each coordinate, where the potential energy term for each coordinate is multiplied by the coordinate-dependent factor in the corresponding momentum term of the Hamiltonian (the Staeckel conditions). For illustration, several examples in orthogonal coordinates are worked in the next sections.
Examples in various coordinate systems
Spherical coordinates
In spherical coordinates the Hamiltonian of a free particle moving in a conservative potential U can be written
$$H = \frac{1}{2m}\left[p_r^2 + \frac{p_\theta^2}{r^2} + \frac{p_\phi^2}{r^2\sin^2\theta}\right] + U(r, \theta, \phi).$$
The Hamilton–Jacobi equation is completely separable in these coordinates provided that there exist functions $U_r(r)$, $U_\theta(\theta)$, $U_\phi(\phi)$ such that $U$ can be written in the analogous form
$$U(r, \theta, \phi) = U_r(r) + \frac{U_\theta(\theta)}{r^2} + \frac{U_\phi(\phi)}{r^2\sin^2\theta}.$$
Substitution of the completely separated solution
$$S = S_r(r) + S_\theta(\theta) + S_\phi(\phi) - Et$$
into the HJE yields
$$\frac{1}{2m}\left(\frac{dS_r}{dr}\right)^2 + U_r(r) + \frac{1}{2mr^2}\left[\left(\frac{dS_\theta}{d\theta}\right)^2 + 2mU_\theta(\theta)\right] + \frac{1}{2mr^2\sin^2\theta}\left[\left(\frac{dS_\phi}{d\phi}\right)^2 + 2mU_\phi(\phi)\right] = E.$$
This equation may be solved by successive integrations of ordinary differential equations, beginning with the equation for $\phi$,
$$\left(\frac{dS_\phi}{d\phi}\right)^2 + 2mU_\phi(\phi) = \Gamma_\phi,$$
where $\Gamma_\phi$ is a constant of the motion that eliminates the $\phi$ dependence from the Hamilton–Jacobi equation
$$\frac{1}{2m}\left(\frac{dS_r}{dr}\right)^2 + U_r(r) + \frac{1}{2mr^2}\left[\left(\frac{dS_\theta}{d\theta}\right)^2 + 2mU_\theta(\theta) + \frac{\Gamma_\phi}{\sin^2\theta}\right] = E.$$
The next ordinary differential equation involves the generalized coordinate $\theta$,
$$\left(\frac{dS_\theta}{d\theta}\right)^2 + 2mU_\theta(\theta) + \frac{\Gamma_\phi}{\sin^2\theta} = \Gamma_\theta,$$
where $\Gamma_\theta$ is again a constant of the motion that eliminates the $\theta$ dependence and reduces the HJE to the final ordinary differential equation
$$\frac{1}{2m}\left(\frac{dS_r}{dr}\right)^2 + U_r(r) + \frac{\Gamma_\theta}{2mr^2} = E,$$
whose integration completes the solution for $S$.
Elliptic cylindrical coordinates
The Hamiltonian in elliptic cylindrical coordinates can be written
where the foci of the ellipses are located at on the -axis. The Hamilton–Jacobi equation is completely separable in these coordinates provided that has an analogous form
where , and are arbitrary functions. Substitution of the completely separated solution
into the HJE yields
Separating the first ordinary differential equation
yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator)
which itself may be separated into two independent ordinary differential equations
that, when solved, provide a complete solution for .
Parabolic cylindrical coordinates
The Hamiltonian in parabolic cylindrical coordinates can be written
The Hamilton–Jacobi equation is completely separable in these coordinates provided that has an analogous form
where , , and are arbitrary functions. Substitution of the completely separated solution
into the HJE yields
Separating the first ordinary differential equation
yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator)
which itself may be separated into two independent ordinary differential equations
that, when solved, provide a complete solution for .
Waves and particles
Optical wave fronts and trajectories
The HJE establishes a duality between trajectories and wavefronts. For example, in geometrical optics, light can be considered either as “rays” or waves. The wave front can be defined as the surface that the light emitted at time $t = 0$ has reached at time $t$. Light rays and wave fronts are dual: if one is known, the other can be deduced.
More precisely, geometrical optics is a variational problem where the “action” is the travel time $T$ along a path,
$$T = \frac{1}{c}\int_A^B n\, ds,$$
where $n$ is the medium's index of refraction and $ds$ is an infinitesimal arc length. From the above formulation, one can compute the ray paths using the Euler–Lagrange formulation; alternatively, one can compute the wave fronts by solving the Hamilton–Jacobi equation. Knowing one leads to knowing the other.
The above duality is very general and applies to all systems that derive from a variational principle: either compute the trajectories using Euler–Lagrange equations or the wave fronts by using Hamilton–Jacobi equation.
The wave front at time $t$, for a system initially at $\mathbf{q}_0$ at time $t_0$, is defined as the collection of points $\mathbf{q}$ such that $S(\mathbf{q}, t) = \text{const}$. If $S(\mathbf{q}, t)$ is known, the momentum is immediately deduced via $\mathbf{p} = \partial S/\partial \mathbf{q}$.
Once $\mathbf{p}$ is known, tangents to the trajectories $\dot{\mathbf{q}}$ are computed by solving the equation
$$\frac{\partial L}{\partial \dot{\mathbf{q}}} = \mathbf{p}$$
for $\dot{\mathbf{q}}$, where $L$ is the Lagrangian. The trajectories are then recovered from the knowledge of $\dot{\mathbf{q}}$.
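A compact way to state this duality for the optical case (an illustrative aside, not in the original text) is via the eikonal equation:

```latex
% For the optical "action" T = (1/c) \int n \, ds (the travel time), the
% Hamilton–Jacobi / eikonal equation for T(\mathbf{r}) in a medium of index n is
\[
  \left| \nabla T \right| = \frac{n(\mathbf{r})}{c} .
\]
% Its level sets T = const are the wave fronts, while the rays are the curves
% everywhere tangent to \nabla T, i.e. orthogonal to those fronts; knowing one
% family determines the other.
```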
Relationship to the Schrödinger equation
The isosurfaces of the function $S(\mathbf{q}, t)$ can be determined at any time t. The motion of an $S$-isosurface as a function of time is defined by the motions of the particles beginning at the points $\mathbf{q}$ on the isosurface. The motion of such an isosurface can be thought of as a wave moving through $\mathbf{q}$-space, although it does not obey the wave equation exactly. To show this, let S represent the phase of a wave
$$\psi = \psi_0 e^{iS/\hbar},$$
where $\hbar$ is a constant (the Planck constant) introduced to make the exponential argument dimensionless; changes in the amplitude of the wave can be represented by having $S$ be a complex number. The Hamilton–Jacobi equation is then rewritten as
$$i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + U\psi,$$
which is the Schrödinger equation.
Conversely, starting with the Schrödinger equation and our ansatz for $\psi$, it can be deduced that
$$\frac{\partial S}{\partial t} + \frac{1}{2m}\left(\nabla S\right)^2 + U = \frac{i\hbar}{2m}\nabla^2 S.$$
The classical limit ($\hbar \to 0$) of the Schrödinger equation above becomes identical to the following variant of the Hamilton–Jacobi equation,
$$\frac{\partial S}{\partial t} + \frac{1}{2m}\left(\nabla S\right)^2 + U = 0.$$
Applications
HJE in a gravitational field
Using the energy–momentum relation in the form
$$g^{\mu\nu}P_\mu P_\nu - (mc)^2 = 0$$
for a particle of rest mass $m$ travelling in curved space, where $g^{\mu\nu}$ are the contravariant components of the metric tensor (i.e., the inverse metric) solved from the Einstein field equations, and $c$ is the speed of light, and setting the four-momentum $P_\mu$ equal to the four-gradient of the action $S$,
$$P_\mu = -\frac{\partial S}{\partial x^\mu},$$
gives the Hamilton–Jacobi equation in the geometry determined by the metric $g$:
$$g^{\mu\nu}\frac{\partial S}{\partial x^\mu}\frac{\partial S}{\partial x^\nu} - (mc)^2 = 0,$$
in other words, in a gravitational field.
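As a consistency check (an added illustration, not in the original text), in flat Minkowski spacetime the relativistic HJE above reduces to a familiar form:

```latex
% With x^0 = ct and g^{\mu\nu} = \mathrm{diag}(1, -1, -1, -1), the equation
%   g^{\mu\nu} (\partial S/\partial x^\mu)(\partial S/\partial x^\nu) = (mc)^2
% becomes
\[
  \frac{1}{c^{2}}\left(\frac{\partial S}{\partial t}\right)^{2}
  - \left(\nabla S\right)^{2} = m^{2} c^{2} ,
\]
% which, with E = -\partial S/\partial t and \mathbf{p} = \nabla S, is just the
% special-relativistic energy-momentum relation E^2 = (pc)^2 + (mc^2)^2.
```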
HJE in electromagnetic fields
For a particle of rest mass $m$ and electric charge $e$ moving in an electromagnetic field with four-potential $A_\mu$ in vacuum, the Hamilton–Jacobi equation in the geometry determined by the metric tensor $g^{\mu\nu}$ has the form
$$g^{\mu\nu}\left(\frac{\partial S}{\partial x^\mu} + \frac{e}{c}A_\mu\right)\left(\frac{\partial S}{\partial x^\nu} + \frac{e}{c}A_\nu\right) = (mc)^2$$
and can be solved for the Hamilton principal action function to obtain further solution for the particle trajectory and momentum:
where and with the cycle average of the vector potential.
A circularly polarized wave
In the case of circular polarization,
Hence
implying that the particle moves along a circular trajectory with a constant radius and a constant magnitude of momentum directed along the magnetic field vector.
A monochromatic linearly polarized plane wave
For the flat, monochromatic, linearly polarized wave with a field directed along the axis
hence
implying that the particle moves along a figure-8 trajectory with its long axis oriented along the electric field vector.
An electromagnetic wave with a solenoidal magnetic field
For the electromagnetic wave with axial (solenoidal) magnetic field:
hence
where is the magnetic field magnitude in a solenoid with the effective radius , inductance , number of windings , and electric current magnitude through the solenoid windings. The particle motion occurs along a figure-8 trajectory in a plane perpendicular to the solenoid axis, with an arbitrary azimuth angle due to the axial symmetry of the solenoidal magnetic field.
See also
Canonical transformation
Constant of motion
Hamiltonian vector field
Hamilton–Jacobi–Einstein equation
WKB approximation
Action-angle coordinates
References
Further reading
Hamiltonian mechanics
Symplectic geometry
Partial differential equations
William Rowan Hamilton | 0.780095 | 0.994198 | 0.775569 |
Electric field | An electric field (sometimes called E-field) is the physical field that surrounds electrically charged particles. Charged particles exert attractive forces on each other when their charges are opposite, and repulse each other when their charges are the same. Because these forces are exerted mutually, two charges must be present for the forces to take place. The electric field of a single charge (or group of charges) describes their capacity to exert such forces on another charged object. These forces are described by Coulomb's law, which says that the greater the magnitude of the charges, the greater the force, and the greater the distance between them, the weaker the force. Thus, we may informally say that the greater the charge of an object, the stronger its electric field. Similarly, an electric field is stronger nearer charged objects and weaker further away. Electric fields originate from electric charges and time-varying electric currents. Electric fields and magnetic fields are both manifestations of the electromagnetic field, Electromagnetism is one of the four fundamental interactions of nature.
Electric fields are important in many areas of physics, and are exploited in electrical technology. For example, in atomic physics and chemistry, the interaction in the electric field between the atomic nucleus and electrons is the force that holds these particles together in atoms. Similarly, the interaction in the electric field between atoms is the force responsible for the chemical bonding that results in molecules.
The electric field is defined as a vector field that associates to each point in space the force per unit of charge exerted on an infinitesimal test charge at rest at that point. The SI unit for the electric field is the volt per meter (V/m), which is equal to the newton per coulomb (N/C).
Description
The electric field is defined at each point in space as the force that would be experienced by an infinitesimally small stationary test charge at that point divided by the charge. The electric field is defined in terms of force, and force is a vector (i.e. having both magnitude and direction), so it follows that an electric field may be described by a vector field. The electric field acts between two charges similarly to the way that the gravitational field acts between two masses, as they both obey an inverse-square law with distance. This is the basis for Coulomb's law, which states that, for stationary charges, the electric field varies with the source charge and varies inversely with the square of the distance from the source. This means that if the source charge were doubled, the electric field would double, and if you move twice as far away from the source, the field at that point would be only one-quarter its original strength.
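As an illustration of the scaling just described, here is a minimal sketch (the charge and distance values are made up for illustration) that evaluates the point-charge field magnitude E = q / (4π ε0 r²):

```python
# A minimal sketch of the inverse-square behaviour of a point-charge field.
from math import pi

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def e_field(q, r):
    """Electric field magnitude (V/m) of a point charge q (C) at distance r (m)."""
    return q / (4 * pi * EPS0 * r**2)

E = e_field(1e-9, 0.1)          # 1 nC charge, 10 cm away
print(e_field(2e-9, 0.1) / E)   # doubling the charge doubles the field  -> 2.0
print(e_field(1e-9, 0.2) / E)   # doubling the distance quarters it      -> 0.25
```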
The electric field can be visualized with a set of lines whose direction at each point is the same as those of the field, a concept introduced by Michael Faraday, whose term 'lines of force' is still sometimes used. This illustration has the useful property that, when drawn so that each line represents the same amount of flux, the strength of the field is proportional to the density of the lines. Field lines due to stationary charges have several important properties, including that they always originate from positive charges and terminate at negative charges, they enter all good conductors at right angles, and they never cross or close in on themselves. The field lines are a representative concept; the field actually permeates all the intervening space between the lines. More or fewer lines may be drawn depending on the precision to which it is desired to represent the field. The study of electric fields created by stationary charges is called electrostatics.
Faraday's law describes the relationship between a time-varying magnetic field and the electric field. One way of stating Faraday's law is that the curl of the electric field is equal to the negative time derivative of the magnetic field. In the absence of time-varying magnetic field, the electric field is therefore called conservative (i.e. curl-free). This implies there are two kinds of electric fields: electrostatic fields and fields arising from time-varying magnetic fields. While the curl-free nature of the static electric field allows for a simpler treatment using electrostatics, time-varying magnetic fields are generally treated as a component of a unified electromagnetic field. The study of magnetic and electric fields that change over time is called electrodynamics.
Mathematical formulation
Electric fields are caused by electric charges, described by Gauss's law, and time varying magnetic fields, described by Faraday's law of induction. Together, these laws are enough to define the behavior of the electric field. However, since the magnetic field is described as a function of electric field, the equations of both fields are coupled and together form Maxwell's equations that describe both fields as a function of charges and currents.
Electrostatics
In the special case of a steady state (stationary charges and currents), the Maxwell–Faraday inductive effect disappears. The resulting two equations (Gauss's law and Faraday's law with no induction term), taken together, are equivalent to Coulomb's law, which states that a particle with electric charge q1 at position r1 exerts a force on a particle with charge q0 at position r0 of:
where
F01 is the force on charged particle q0 caused by charged particle q1.
ε0 is the permittivity of free space.
r̂01 is a unit vector directed from r1 to r0.
r01 is the displacement vector from r1 to r0.
Note that ε0 must be replaced with ε, the permittivity of the medium, when the charges are in non-empty media.
When the charges and have the same sign this force is positive, directed away from the other charge, indicating the particles repel each other. When the charges have unlike signs the force is negative, indicating the particles attract.
To make it easy to calculate the Coulomb force on any charge at position r0, this expression can be divided by q0, leaving an expression that only depends on the other charge (the source charge):
where
E1(r0) is the component of the electric field at r0 due to q1.
This is the electric field at point r0 due to the point charge q1; it is a vector-valued function equal to the Coulomb force per unit charge that a positive point charge would experience at the position r0.
Since this formula gives the electric field magnitude and direction at any point r0 in space (except at the location of the charge itself, r1, where it becomes infinite) it defines a vector field.
From the above formula it can be seen that the electric field due to a point charge is everywhere directed away from the charge if it is positive, and toward the charge if it is negative, and its magnitude decreases with the inverse square of the distance from the charge.
The Coulomb force on a charge of magnitude at any point in space is equal to the product of the charge and the electric field at that point
The SI unit of the electric field is the newton per coulomb (N/C), or volt per meter (V/m); in terms of the SI base units it is kg⋅m⋅s⁻³⋅A⁻¹.
Superposition principle
Due to the linearity of Maxwell's equations, electric fields satisfy the superposition principle, which states that the total electric field, at a point, due to a collection of charges is equal to the vector sum of the electric fields at that point due to the individual charges. This principle is useful in calculating the field created by multiple point charges. If charges are stationary in space at points , in the absence of currents, the superposition principle says that the resulting field is the sum of fields generated by each particle as described by Coulomb's law:
where
is the unit vector in the direction from point to point
is the displacement vector from point to point .
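A minimal sketch of this vector summation follows (the charges and positions are invented for illustration and are not the article's): each point charge contributes a Coulomb field, and the total field is the component-wise sum.

```python
# Superposition of point-charge fields: E_total = sum of q_i r_i / (4 pi eps0 |r_i|^3).
from math import pi

EPS0 = 8.8541878128e-12

def field_at(point, charges):
    """charges: list of (q, (x, y, z)); returns E = (Ex, Ey, Ez) at `point` in V/m."""
    Ex = Ey = Ez = 0.0
    for q, (x, y, z) in charges:
        dx, dy, dz = point[0] - x, point[1] - y, point[2] - z
        r2 = dx * dx + dy * dy + dz * dz
        k = q / (4 * pi * EPS0 * r2**1.5)   # q / (4 pi eps0 |r|^3)
        Ex += k * dx
        Ey += k * dy
        Ez += k * dz
    return Ex, Ey, Ez

# a dipole: +1 nC and -1 nC separated by 2 mm, field sampled on the axis at 1 cm
print(field_at((0.0, 0.0, 0.01),
               [(+1e-9, (0.0, 0.0, +1e-3)), (-1e-9, (0.0, 0.0, -1e-3))]))
```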
Continuous charge distributions
The superposition principle allows for the calculation of the electric field due to a distribution of charge density . By considering the charge in each small volume of space at point as a point charge, the resulting electric field, , at point can be calculated as
where
is the unit vector pointing from to .
is the displacement vector from to .
The total field is found by summing the contributions from all the increments of volume by integrating the charge density over the volume :
Similar equations follow for a surface charge with surface charge density on surface
and for line charges with linear charge density on line
Electric potential
If a system is static, such that magnetic fields are not time-varying, then by Faraday's law, the electric field is curl-free. In this case, one can define an electric potential, that is, a function such that This is analogous to the gravitational potential. The difference between the electric potential at two points in space is called the potential difference (or voltage) between the two points.
In general, however, the electric field cannot be described independently of the magnetic field. Given the magnetic vector potential, , defined so that , one can still define an electric potential such that:
where is the gradient of the electric potential and is the partial derivative of with respect to time.
Faraday's law of induction can be recovered by taking the curl of that equation
which justifies, a posteriori, the previous form for .
Continuous vs. discrete charge representation
The equations of electromagnetism are best described in a continuous description. However, charges are sometimes best described as discrete points; for example, some models may describe electrons as point sources where charge density is infinite on an infinitesimal section of space.
A charge located at can be described mathematically as a charge density , where the Dirac delta function (in three dimensions) is used. Conversely, a charge distribution can be approximated by many small point charges.
Electrostatic fields
Electrostatic fields are electric fields that do not change with time. Such fields are present when systems of charged matter are stationary, or when electric currents are unchanging. In that case, Coulomb's law fully describes the field.
Parallels between electrostatic and gravitational fields
Coulomb's law, which describes the interaction of electric charges:
is similar to Newton's law of universal gravitation:
(where ).
This suggests similarities between the electric field and the gravitational field , or their associated potentials. Mass is sometimes called "gravitational charge".
Electrostatic and gravitational forces both are central, conservative and obey an inverse-square law.
Uniform fields
A uniform field is one in which the electric field is constant at every point. It can be approximated by placing two conducting plates parallel to each other and maintaining a voltage (potential difference) between them; it is only an approximation because of boundary effects (near the edge of the planes, the electric field is distorted because the plane does not continue). Assuming infinite planes, the magnitude of the electric field is:
where ΔV is the potential difference between the plates and d is the distance separating the plates. The negative sign arises as positive charges repel, so a positive charge will experience a force away from the positively charged plate, in the opposite direction to that in which the voltage increases. In micro- and nano-applications, for instance in relation to semiconductors, a typical magnitude of an electric field is on the order of 10⁶ V⋅m⁻¹, achieved by applying a voltage of the order of 1 volt between conductors spaced 1 μm apart.
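The quoted order of magnitude is just the ideal parallel-plate ratio E = V/d; a tiny sketch of that arithmetic (signs and edge effects ignored):

```python
# Ideal uniform-field estimate between parallel plates: E = V / d.
V = 1.0      # volts between the plates
d = 1e-6     # 1 micrometre plate separation
E = V / d
print(E)     # 1e6 V/m, the order of magnitude cited for semiconductor devices
```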
Electromagnetic fields
Electromagnetic fields are electric and magnetic fields, which may change with time, for instance when charges are in motion. Moving charges produce a magnetic field in accordance with Ampère's circuital law (with Maxwell's addition), which, along with Maxwell's other equations, defines the magnetic field, , in terms of its curl:
where is the current density, is the vacuum permeability, and is the vacuum permittivity.
Both the electric current density and the partial derivative of the electric field with respect to time, contribute to the curl of the magnetic field. In addition, the Maxwell–Faraday equation states
These represent two of Maxwell's four equations and they intricately link the electric and magnetic fields together, resulting in the electromagnetic field. The equations represent a set of four coupled multi-dimensional partial differential equations which, when solved for a system, describe the combined behavior of the electromagnetic fields. In general, the force experienced by a test charge in an electromagnetic field is given by the Lorentz force law:
Energy in the electric field
The total energy per unit volume stored by the electromagnetic field is
where is the permittivity of the medium in which the field exists, its magnetic permeability, and and are the electric and magnetic field vectors.
As and fields are coupled, it would be misleading to split this expression into "electric" and "magnetic" contributions. In particular, an electrostatic field in any given frame of reference in general transforms into a field with a magnetic component in a relatively moving frame. Accordingly, decomposing the electromagnetic field into an electric and magnetic component is frame-specific, and similarly for the associated energy.
The total energy stored in the electromagnetic field in a given volume is
Electric displacement field
Definitive equation of vector fields
In the presence of matter, it is helpful to extend the notion of the electric field into three vector fields:
where is the electric polarization – the volume density of electric dipole moments, and is the electric displacement field. Since and are defined separately, this equation can be used to define . The physical interpretation of is not as clear as (effectively the field applied to the material) or (induced field due to the dipoles in the material), but still serves as a convenient mathematical simplification, since Maxwell's equations can be simplified in terms of free charges and currents.
Constitutive relation
The and fields are related by the permittivity of the material, .
For linear, homogeneous, isotropic materials and are proportional and constant throughout the region, there is no position dependence:
For inhomogeneous materials, there is a position dependence throughout the material:
For anisotropic materials the and fields are not parallel, and so and are related by the permittivity tensor (a 2nd order tensor field), in component form:
For non-linear media, and are not proportional. Materials can have varying extents of linearity, homogeneity and isotropy.
Relativistic effects on electric field
Point charge in uniform motion
The invariance of the form of Maxwell's equations under Lorentz transformation can be used to derive the electric field of a uniformly moving point charge. The charge of a particle is considered frame invariant, as supported by experimental evidence. Alternatively, the electric field of uniformly moving point charges can be derived from the Lorentz transformation of the four-force experienced by test charges in the source's rest frame given by Coulomb's law, assigning the electric and magnetic fields by their definition given by the form of the Lorentz force. However, the following equation is only applicable when no acceleration is involved in the particle's history, where Coulomb's law can be considered or symmetry arguments can be used for solving Maxwell's equations in a simple manner. The electric field of such a uniformly moving point charge is hence given by:
where is the charge of the point source, is the position vector from the point source to the point in space, is the ratio of observed speed of the charge particle to the speed of light and is the angle between and the observed velocity of the charged particle.
The above equation reduces to that given by Coulomb's law for non-relativistic speeds of the point charge. Spherical symmetry is not satisfied due to breaking of symmetry in the problem by specification of direction of velocity for calculation of field. To illustrate this, field lines of moving charges are sometimes represented as unequally spaced radial lines which would appear equally spaced in a co-moving reference frame.
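The following sketch evaluates the standard closed form for the field of a uniformly moving charge, E = q(1 − β²) / (4π ε0 r² (1 − β² sin²θ)^{3/2}); the notation and numerical values are mine and stand in for the article's omitted displayed equation. It shows the compression of the field along the direction of motion and its enhancement transverse to it, with the Coulomb result recovered as β → 0.

```python
# Field magnitude of a point charge in uniform motion (standard textbook form).
from math import pi, sin

EPS0 = 8.8541878128e-12

def e_moving(q, r, beta, theta):
    coulomb = q / (4 * pi * EPS0 * r**2)
    return coulomb * (1 - beta**2) / (1 - (beta * sin(theta))**2) ** 1.5

q, r, beta = 1.602e-19, 1e-9, 0.9      # an electron-sized charge, illustrative r and speed
print(e_moving(q, r, beta, 0.0))       # along the velocity: reduced
print(e_moving(q, r, beta, pi / 2))    # transverse to the velocity: enhanced
print(e_moving(q, r, 0.0, pi / 2))     # beta -> 0 recovers Coulomb's law
```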
Propagation of disturbances in electric fields
Special theory of relativity imposes the principle of locality, which requires cause and effect to be time-like separated events such that the causal efficacy does not travel faster than the speed of light. Maxwell's laws are found to conform to this view, since the general solutions of the fields are given in terms of retarded time, which indicates that electromagnetic disturbances travel at the speed of light. Advanced time, which also provides a solution for Maxwell's laws, is ignored as an unphysical solution. For the motion of a charged particle, considering for example the case of a moving particle with the above described electric field coming to an abrupt stop, the electric fields at points far from it do not immediately revert to that classically given for a stationary charge. On stopping, the field around the stationary points begins to revert to the expected state, and this effect propagates outwards at the speed of light, while the electric field lines far away from this will continue to point radially towards an assumed moving charge. This virtual particle will never be outside the range of propagation of the disturbance in the electromagnetic field, since charged particles are restricted to have speeds slower than that of light, which makes it impossible to construct a Gaussian surface in this region that violates Gauss's law. Another technical difficulty that supports this is that charged particles travelling faster than or equal to the speed of light no longer have a unique retarded time. Since electric field lines are continuous, an electromagnetic pulse of radiation is generated that connects at the boundary of this disturbance, travelling outwards at the speed of light. In general, any accelerating point charge radiates electromagnetic waves; however, non-radiating acceleration is possible in systems of charges.
Arbitrarily moving point charge
For arbitrarily moving point charges, propagation of potential fields such as Lorenz gauge fields at the speed of light needs to be accounted for by using Liénard–Wiechert potential. Since the potentials satisfy Maxwell's equations, the fields derived for point charge also satisfy Maxwell's equations. The electric field is expressed as:
where is the charge of the point source, is retarded time or the time at which the source's contribution of the electric field originated, is the position vector of the particle, is a unit vector pointing from charged particle to the point in space, is the velocity of the particle divided by the speed of light, and is the corresponding Lorentz factor. The retarded time is given as solution of:
The uniqueness of the solution for the retarded time for a given time and field point is valid for charged particles moving slower than the speed of light. Electromagnetic radiation of accelerating charges is known to be caused by the acceleration-dependent term in the electric field, from which the relativistic correction to the Larmor formula is obtained.
There exist yet another set of solutions for Maxwell's equation of the same form but for advanced time instead of retarded time given as a solution of:
Since the physical interpretation of this indicates that the electric field at a point is governed by the particle's state at a point of time in the future, it is considered as an unphysical solution and hence neglected. However, there have been theories exploring the advanced time solutions of Maxwell's equations, such as Feynman Wheeler absorber theory.
The above equation, although consistent with that of uniformly moving point charges as well as its non-relativistic limit, is not corrected for quantum-mechanical effects.
Common formulæ
The electric field infinitely close to a conducting surface in electrostatic equilibrium having surface charge density σ at that point is σ/ε0 (directed perpendicular to the surface), since charge resides only on the surface and, at the infinitesimal scale, the surface resembles an infinite 2D plane. In the absence of external fields, spherical conductors exhibit a uniform charge distribution on the surface and hence have the same electric field as that of a uniform spherical surface distribution.
See also
Classical electromagnetism
Relativistic electromagnetism
Electricity
History of electromagnetic theory
Electromagnetic field
Magnetism
Teltron tube
Teledeltos, a conductive paper that may be used as a simple analog computer for modelling fields
References
External links
Electric field in "Electricity and Magnetism", R Nave – Hyperphysics, Georgia State University
Frank Wolfs's lectures at University of Rochester, chapters 23 and 24
Fields – a chapter from an online textbook
Electrostatics
Electromagnetic quantities
Electromagnetism | 0.776403 | 0.998925 | 0.775568 |
Inelastic collision | An inelastic collision, in contrast to an elastic collision, is a collision in which kinetic energy is not conserved due to the action of internal friction.
In collisions of macroscopic bodies, some kinetic energy is turned into vibrational energy of the atoms, causing a heating effect, and the bodies are deformed.
The molecules of a gas or liquid rarely experience perfectly elastic collisions because kinetic energy is exchanged between the molecules' translational motion and their internal degrees of freedom with each collision. At any one instant, half the collisions are – to a varying extent – inelastic (the pair possesses less kinetic energy after the collision than before), and half could be described as “super-elastic” (possessing more kinetic energy after the collision than before). Averaged across an entire sample, molecular collisions are elastic.
Although inelastic collisions do not conserve kinetic energy, they do obey conservation of momentum. Simple ballistic pendulum problems obey the conservation of kinetic energy only when the block swings to its largest angle.
In nuclear physics, an inelastic collision is one in which the incoming particle causes the nucleus it strikes to become excited or to break up. Deep inelastic scattering is a method of probing the structure of subatomic particles in much the same way as Rutherford probed the inside of the atom (see Rutherford scattering). Such experiments were performed on protons in the late 1960s using high-energy electrons at the Stanford Linear Accelerator (SLAC). As in Rutherford scattering, deep inelastic scattering of electrons by proton targets revealed that most of the incident electrons interact very little and pass straight through, with only a small number bouncing back. This indicates that the charge in the proton is concentrated in small lumps, reminiscent of Rutherford's discovery that the positive charge in an atom is concentrated at the nucleus. However, in the case of the proton, the evidence suggested three distinct concentrations of charge (quarks) and not one.
Formula
The formula for the velocities after a one-dimensional collision is:
where
va is the final velocity of the first object after impact
vb is the final velocity of the second object after impact
ua is the initial velocity of the first object before impact
ub is the initial velocity of the second object before impact
ma is the mass of the first object
mb is the mass of the second object
CR is the coefficient of restitution; if it is 1 we have an elastic collision; if it is 0 we have a perfectly inelastic collision, see below.
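A minimal sketch (my own function and variable names) of the standard restitution formulas that the symbol list above describes; the article's displayed equation is not reproduced here, but the usual result consistent with the coefficient-of-restitution definition is implemented below and conserves momentum for any CR between 0 and 1.

```python
# One-dimensional collision with coefficient of restitution cr.
def inelastic_1d(ma, mb, ua, ub, cr):
    va = (ma * ua + mb * ub + mb * cr * (ub - ua)) / (ma + mb)
    vb = (ma * ua + mb * ub + ma * cr * (ua - ub)) / (ma + mb)
    return va, vb

# 2 kg at +3 m/s hits 1 kg at rest
print(inelastic_1d(2.0, 1.0, 3.0, 0.0, cr=1.0))   # elastic limit: (1.0, 4.0)
print(inelastic_1d(2.0, 1.0, 3.0, 0.0, cr=0.5))   # partially inelastic
print(inelastic_1d(2.0, 1.0, 3.0, 0.0, cr=0.0))   # perfectly inelastic: both 2.0 m/s
```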
In a center of momentum frame the formulas reduce to:
For two- and three-dimensional collisions the velocities in these formulas are the components perpendicular to the tangent line/plane at the point of contact.
Assuming that the objects are not rotating before or after the collision, the normal impulse is:
where n is the normal vector.
Assuming no friction, this gives the velocity updates:
Perfectly inelastic collision
A perfectly inelastic collision occurs when the maximum amount of kinetic energy of a system is lost. In a perfectly inelastic collision, i.e., a zero coefficient of restitution, the colliding particles stick together. In such a collision, kinetic energy is lost by bonding the two bodies together. This bonding energy usually results in a maximum kinetic energy loss of the system. It is necessary to consider conservation of momentum: (Note: In the sliding block example above, momentum of the two body system is only conserved if the surface has zero friction. With friction, momentum of the two bodies is transferred to the surface that the two bodies are sliding upon. Similarly, if there is air resistance, the momentum of the bodies can be transferred to the air.) The equation below holds true for the two-body (Body A, Body B) system collision in the example above. In this example, momentum of the system is conserved because there is no friction between the sliding bodies and the surface.
where v is the final velocity, which is hence given by
The reduction of total kinetic energy is equal to the total kinetic energy before the collision in a center of momentum frame with respect to the system of two particles, because in such a frame the kinetic energy after the collision is zero. In this frame most of the kinetic energy before the collision is that of the particle with the smaller mass. In another frame, in addition to the reduction of kinetic energy there may be a transfer of kinetic energy from one particle to the other; the fact that this depends on the frame shows how relative this is. The change in kinetic energy is hence:
where μ is the reduced mass and urel is the relative velocity of the bodies before collision. With time reversed we have the situation of two objects pushed away from each other, e.g. shooting a projectile, or a rocket applying thrust (compare the derivation of the Tsiolkovsky rocket equation).
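A minimal sketch of the perfectly inelastic bookkeeping above (my own function; the masses and velocities are arbitrary): the common final velocity follows from momentum conservation, and the kinetic energy lost equals ½ μ u_rel², which the script cross-checks directly.

```python
# Perfectly inelastic collision: common final velocity and kinetic energy lost.
def perfectly_inelastic(ma, mb, ua, ub):
    v = (ma * ua + mb * ub) / (ma + mb)      # common final velocity (momentum conservation)
    mu = ma * mb / (ma + mb)                 # reduced mass
    delta_ke = 0.5 * mu * (ua - ub) ** 2     # kinetic energy converted to heat, deformation, ...
    return v, delta_ke

v, lost = perfectly_inelastic(2.0, 1.0, 3.0, 0.0)
ke_before = 0.5 * 2.0 * 3.0**2
ke_after = 0.5 * (2.0 + 1.0) * v**2
print(v, lost, ke_before - ke_after)         # the two energy-loss figures agree
```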
Partially inelastic collisions
Partially inelastic collisions are the most common form of collisions in the real world. In this type of collision, the objects involved in the collision do not stick, but some kinetic energy is still lost. Friction, sound and heat are some ways the kinetic energy can be lost through partially inelastic collisions.
See also
Collision
Elastic collision
Coefficient of restitution
References
Classical mechanics
Collision
Particle physics
Scattering
ru:Удар#Абсолютно неупругий удар | 0.782892 | 0.990645 | 0.775568 |
Einstein relation (kinetic theory) | In physics (specifically, the kinetic theory of gases), the Einstein relation is a previously unexpected connection revealed independently by William Sutherland in 1904, Albert Einstein in 1905, and by Marian Smoluchowski in 1906 in their works on Brownian motion. The more general form of the equation in the classical case is
where
D is the diffusion coefficient;
μ is the "mobility", or the ratio of the particle's terminal drift velocity to an applied force, μ = v_d/F;
k_B is the Boltzmann constant;
T is the absolute temperature.
This equation is an early example of a fluctuation-dissipation relation.
Note that the equation above describes the classical case and should be modified when quantum effects are relevant.
Two frequently used important special forms of the relation are:
Einstein–Smoluchowski equation, for diffusion of charged particles: D = μ_q k_B T / q;
Stokes–Einstein–Sutherland equation, for diffusion of spherical particles through a liquid with low Reynolds number: D = k_B T / (6πηr).
Here
q is the electrical charge of a particle;
μ_q is the electrical mobility of the charged particle;
η is the dynamic viscosity;
r is the radius of the spherical particle.
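As a quick numerical illustration of the Stokes–Einstein–Sutherland form above, D = k_B T / (6πηr), with illustrative values of my own choosing (water at roughly 298 K and a 1 μm-radius sphere):

```python
# Stokes-Einstein-Sutherland diffusion coefficient for a sphere in a viscous liquid.
from math import pi

KB = 1.380649e-23        # Boltzmann constant, J/K

def stokes_einstein(T, eta, r):
    return KB * T / (6 * pi * eta * r)

D = stokes_einstein(T=298.0, eta=8.9e-4, r=1e-6)   # eta in Pa*s, r in m
print(D)   # ~2.5e-13 m^2/s -- slow diffusion for a micron-sized particle
```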
Special cases
Electrical mobility equation (classical case)
For a particle with electrical charge q, its electrical mobility μ_q is related to its generalized mobility μ by the equation μ_q = qμ. The parameter μ_q is the ratio of the particle's terminal drift velocity to an applied electric field. Hence, the equation in the case of a charged particle is given as D = μ_q k_B T / q,
where
D is the diffusion coefficient.
μ_q is the electrical mobility.
q is the electric charge of the particle (C, coulombs)
T is the electron temperature or ion temperature in the plasma (K).
If the temperature is given in volts, which is more common for plasma:
where
z is the charge number of the particle (unitless)
T is the electron temperature or ion temperature in the plasma (V).
Electrical mobility equation (quantum case)
For the case of Fermi gas or a Fermi liquid, relevant for the electron mobility in normal metals like in the free electron model, Einstein relation should be modified:
where E_F is the Fermi energy.
Stokes–Einstein–Sutherland equation
In the limit of low Reynolds number, the mobility μ is the inverse of the drag coefficient ζ. A damping constant is frequently used for the inverse momentum relaxation time (the time needed for the inertia momentum to become negligible compared to the random momenta) of the diffusive object. For spherical particles of radius r, Stokes' law gives ζ = 6πηr,
where η is the viscosity of the medium. Thus the Einstein–Smoluchowski relation results in the Stokes–Einstein–Sutherland relation D = k_B T / (6πηr).
This has been applied for many years to estimating the self-diffusion coefficient in liquids, and a version consistent with isomorph theory has been confirmed by computer simulations of the Lennard-Jones system.
In the case of rotational diffusion, the friction is ζ_r = 8πηr³, and the rotational diffusion constant is D_r = k_B T / (8πηr³).
This is sometimes referred to as the Stokes–Einstein–Debye relation.
Semiconductor
In a semiconductor with an arbitrary density of states, i.e. a relation of the form between the density of holes or electrons and the corresponding quasi Fermi level (or electrochemical potential) , the Einstein relation is
where μ_q is the electrical mobility (see for a proof of this relation). As an example, assuming a parabolic dispersion relation for the density of states and Maxwell–Boltzmann statistics, which are often used to describe inorganic semiconductor materials, one can compute (see density of states):
where N_c is the total density of available energy states, which gives the simplified relation: D = μ_q k_B T / q.
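As a rough numerical illustration of this simplified, non-degenerate relation, D = μ_q k_B T / q, assuming an electron mobility of 1400 cm²/(V·s) — a value supplied here for illustration, not one given in the article:

```python
# Diffusivity from mobility via the (non-degenerate) Einstein relation D = mu * kB*T/q.
KB = 1.380649e-23     # J/K
Q = 1.602176634e-19   # C

def diffusivity(mu_cm2_per_Vs, T=300.0):
    thermal_voltage = KB * T / Q             # ~0.0259 V at 300 K
    return mu_cm2_per_Vs * thermal_voltage   # result in cm^2/s

print(diffusivity(1400.0))   # ~36 cm^2/s for a mobility of 1400 cm^2/(V*s)
```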
Nernst–Einstein equation
By replacing the diffusivities in the expressions of electric ionic mobilities of the cations and anions from the expressions of the equivalent conductivity of an electrolyte the Nernst–Einstein equation is derived:
where R is the gas constant.
Proof of the general case
The proof of the Einstein relation can be found in many references, for example see the work of Ryogo Kubo.
Suppose some fixed, external potential energy generates a conservative force (for example, an electric force) on a particle located at a given position . We assume that the particle would respond by moving with velocity (see Drag (physics)). Now assume that there are a large number of such particles, with local concentration as a function of the position. After some time, equilibrium will be established: particles will pile up around the areas with lowest potential energy , but still will be spread out to some extent because of diffusion. At equilibrium, there is no net flow of particles: the tendency of particles to get pulled towards lower , called the drift current, perfectly balances the tendency of particles to spread out due to diffusion, called the diffusion current (see drift-diffusion equation).
The net flux of particles due to the drift current is
i.e., the number of particles flowing past a given position equals the particle concentration times the average velocity.
The flow of particles due to the diffusion current is, by Fick's law,
where the minus sign means that particles flow from higher to lower concentration.
Now consider the equilibrium condition. First, there is no net flow, i.e. . Second, for non-interacting point particles, the equilibrium density is solely a function of the local potential energy , i.e. if two locations have the same then they will also have the same (e.g. see Maxwell-Boltzmann statistics as discussed below.) That means, applying the chain rule,
Therefore, at equilibrium:
As this expression holds at every position , it implies the general form of the Einstein relation:
The relation between and for classical particles can be modeled through Maxwell-Boltzmann statistics
where is a constant related to the total number of particles. Therefore
Under this assumption, plugging this equation into the general Einstein relation gives D = μ k_B T,
which corresponds to the classical Einstein relation.
See also
Smoluchowski factor
Conductivity (electrolytic)
Stokes radius
Ion transport number
References
External links
Einstein relation calculators
ion diffusivity
Statistical mechanics
Relation | 0.779504 | 0.994928 | 0.77555 |
Isotropy | In physics and geometry, isotropy is uniformity in all orientations. Precise definitions depend on the subject area. Exceptions, or inequalities, are frequently indicated by the prefix or , hence anisotropy. Anisotropy is also used to describe situations where properties vary systematically, dependent on direction. Isotropic radiation has the same intensity regardless of the direction of measurement, and an isotropic field exerts the same action regardless of how the test particle is oriented.
Mathematics
Within mathematics, isotropy has a few different meanings:
Isotropic manifolds A manifold is isotropic if the geometry on the manifold is the same regardless of direction. A similar concept is homogeneity.
Isotropic quadratic form A quadratic form q is said to be isotropic if there is a non-zero vector v such that ; such a v is an isotropic vector or null vector. In complex geometry, a line through the origin in the direction of an isotropic vector is an isotropic line.
Isotropic coordinates Isotropic coordinates are coordinates on an isotropic chart for Lorentzian manifolds.
Isotropy group An isotropy group is the group of isomorphisms from any object to itself in a groupoid. An isotropy representation is a representation of an isotropy group.
Isotropic position A probability distribution over a vector space is in isotropic position if its covariance matrix is the identity.
Isotropic vector field The vector field generated by a point source is said to be isotropic if, for any spherical neighborhood centered at the point source, the magnitude of the vector determined by any point on the sphere is invariant under a change in direction. For example, starlight appears to be isotropic.
Physics
Quantum mechanics or particle physics When a spinless particle (or even an unpolarized particle with spin) decays, the resulting decay distribution must be isotropic in the rest frame of the decaying particle - regardless of the detailed physics of the decay. This follows from rotational invariance of the Hamiltonian, which in turn is guaranteed for a spherically symmetric potential.
Gases The kinetic theory of gases also exemplifies isotropy. It is assumed that the molecules move in random directions and as a consequence, there is an equal probability of a molecule moving in any direction. Thus when there are many molecules in the gas, with high probability there will be very similar numbers moving in one direction as any other, demonstrating approximate isotropy.
Fluid dynamics Fluid flow is isotropic if there is no directional preference (e.g. in fully developed 3D turbulence). An example of anisotropy is in flows with a background density as gravity works in only one direction. The apparent surface separating two differing isotropic fluids would be referred to as an isotrope.
Thermal expansion A solid is said to be isotropic if the expansion of solid is equal in all directions when thermal energy is provided to the solid.
Electromagnetics An isotropic medium is one such that the permittivity, ε, and permeability, μ, of the medium are uniform in all directions of the medium, the simplest instance being free space.
Optics Optical isotropy means having the same optical properties in all directions. The individual reflectance or transmittance of the domains is averaged for micro-heterogeneous samples if the macroscopic reflectance or transmittance is to be calculated. This can be verified simply by investigating, for example, a polycrystalline material under a polarizing microscope having the polarizers crossed: If the crystallites are larger than the resolution limit, they will be visible.
Cosmology The cosmological principle, which underpins much of modern cosmology (including the Big Bang theory of the evolution of the observable universe), assumes that the universe is both isotropic and homogeneous, meaning that the universe has no preferred location (is the same everywhere) and has no preferred direction. Observations made in 2006 suggest that, on distance-scales much larger than galaxies, galaxy clusters are "Great" features, but small compared to so-called multiverse scenarios.
Materials science
In the study of mechanical properties of materials, "isotropic" means having identical values of a property in all directions. This definition is also used in geology and mineralogy. Glass and metals are examples of isotropic materials. Common anisotropic materials include wood (because its material properties are different parallel to and perpendicular to the grain) and layered rocks such as slate.
Isotropic materials are useful since they are easier to shape, and their behavior is easier to predict. Anisotropic materials can be tailored to the forces an object is expected to experience. For example, the fibers in carbon fiber materials and rebars in reinforced concrete are oriented to withstand tension.
Microfabrication
In industrial processes, such as etching steps, "isotropic" means that the process proceeds at the same rate, regardless of direction. Simple chemical reaction and removal of a substrate by an acid, a solvent or a reactive gas is often very close to isotropic. Conversely, "anisotropic" means that the attack rate of the substrate is higher in a certain direction. Anisotropic etch processes, where vertical etch-rate is high but lateral etch-rate is very small, are essential processes in microfabrication of integrated circuits and MEMS devices.
Antenna (radio)
An isotropic antenna is an idealized "radiating element" used as a reference; an antenna that broadcasts power equally (calculated by the Poynting vector) in all directions. The gain of an arbitrary antenna is usually reported in decibels relative to an isotropic antenna, and is expressed as dBi or dB(i).
In cells (a.k.a. muscle fibers), the term "isotropic" refers to the light bands (I bands) that contribute to the striated pattern of the cells.
Pharmacology
While it is well established that the skin provides an ideal site for the administration of local and systemic drugs, it presents a formidable barrier to the permeation of most substances. Recently, isotropic formulations have been used extensively in dermatology for drug delivery.
Computer science
Imaging A volume such as a computed tomography is said to have isotropic voxel spacing when the space between any two adjacent voxels is the same along each axis x, y, z. E.g., voxel spacing is isotropic if the center of voxel (i, j, k) is 1.38 mm from that of (i+1, j, k), 1.38 mm from that of (i, j+1, k) and 1.38 mm from that of (i, j, k+1) for all indices i, j, k.
Other sciences
Economics and geography An isotropic region is a region that has the same properties everywhere. Such a region is a construction needed in many types of models.
See also
Rotational invariance
Isotropic bands
Isotropic coordinates
Transverse isotropy
Anisotropic
Bi isotropic
Symmetry
References
Orientation (geometry) | 0.779746 | 0.994605 | 0.775539 |
Electromagnetic stress–energy tensor | In relativistic physics, the electromagnetic stress–energy tensor is the contribution to the stress–energy tensor due to the electromagnetic field. The stress–energy tensor describes the flow of energy and momentum in spacetime. The electromagnetic stress–energy tensor contains the negative of the classical Maxwell stress tensor that governs the electromagnetic interactions.
Definition
SI units
In free space and flat space–time, the electromagnetic stress–energy tensor in SI units is
where F^{μν} is the electromagnetic tensor and where η_{μν} is the Minkowski metric tensor of metric signature (− + + +), and Einstein's summation convention over repeated indices is used. When using the metric with signature (+ − − −), the second term of the expression on the right of the equals sign will have opposite sign.
Explicitly in matrix form:
where
S is the Poynting vector,
σ is the Maxwell stress tensor, and c is the speed of light. Thus, T^{μν} is expressed and measured in SI pressure units (pascals).
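As an illustration only (not part of the article, and assuming the common SI layout just described: energy density in the 00 slot, Poynting vector over c in the mixed slots, and the negative Maxwell stress in the spatial block), here is a small numerical sketch that builds the matrix from arbitrary E and B values; for the plane-wave-like field chosen, the metric trace comes out as approximately zero, matching the tracelessness noted in the Algebraic properties section below.

```python
# Numerical electromagnetic stress-energy tensor in SI units (illustrative layout).
import numpy as np

EPS0 = 8.8541878128e-12
MU0 = 1.25663706212e-6
C = 299_792_458.0

def em_stress_energy(E, B):
    E, B = np.asarray(E, float), np.asarray(B, float)
    u = 0.5 * (EPS0 * (E @ E) + (B @ B) / MU0)           # energy density
    S = np.cross(E, B) / MU0                             # Poynting vector
    sigma = (EPS0 * np.outer(E, E) + np.outer(B, B) / MU0
             - 0.5 * np.eye(3) * (EPS0 * (E @ E) + (B @ B) / MU0))  # Maxwell stress
    T = np.zeros((4, 4))
    T[0, 0] = u
    T[0, 1:] = T[1:, 0] = S / C
    T[1:, 1:] = -sigma
    return T

# a plane-wave-like field: E along x, B along y with |B| = |E| / c
T = em_stress_energy([1.0, 0.0, 0.0], [0.0, 1.0 / C, 0.0])
print(np.trace(np.diag([1, -1, -1, -1]) @ T))   # ~0: the tensor is traceless
```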
CGS unit conventions
The permittivity of free space and permeability of free space in cgs-Gaussian units are
then:
and in explicit matrix form:
where Poynting vector becomes:
The stress–energy tensor for an electromagnetic field in a dielectric medium is less well understood and is the subject of the unresolved Abraham–Minkowski controversy.
The element T^{μν} of the stress–energy tensor represents the flux of the μth component of the four-momentum of the electromagnetic field, p^μ, going through a hyperplane (x^ν is constant). It represents the contribution of electromagnetism to the source of the gravitational field (curvature of space–time) in general relativity.
Algebraic properties
The electromagnetic stress–energy tensor has several algebraic properties:
The symmetry of the tensor is as for a general stress–energy tensor in general relativity. The trace of the energy–momentum tensor is a Lorentz scalar; the electromagnetic field (and in particular electromagnetic waves) has no Lorentz-invariant energy scale, so its energy–momentum tensor must have a vanishing trace. This tracelessness eventually relates to the masslessness of the photon.
Conservation laws
The electromagnetic stress–energy tensor allows a compact way of writing the conservation laws of linear momentum and energy in electromagnetism. The divergence of the stress–energy tensor is:
where f^ν is the (4D) Lorentz force per unit volume on matter.
This equation is equivalent to the following 3D conservation laws
respectively describing the flux of electromagnetic energy density
and electromagnetic momentum density
where J is the electric current density, ρ the electric charge density, and f is the Lorentz force density.
See also
Ricci calculus
Covariant formulation of classical electromagnetism
Mathematical descriptions of the electromagnetic field
Maxwell's equations
Maxwell's equations in curved spacetime
General relativity
Einstein field equations
Magnetohydrodynamics
Vector calculus
References
Tensor physical quantities
Electromagnetism | 0.785439 | 0.987386 | 0.775532 |
Energy flow (ecology) | Energy flow is the flow of energy through living things within an ecosystem. All living organisms can be organized into producers and consumers, and those producers and consumers can further be organized into a food chain. Each of the levels within the food chain is a trophic level. In order to more efficiently show the quantity of organisms at each trophic level, these food chains are then organized into trophic pyramids. The arrows in the food chain show that the energy flow is unidirectional, with the head of an arrow indicating the direction of energy flow; energy is lost as heat at each step along the way.
The unidirectional flow of energy and the successive loss of energy as it travels up the food web are patterns in energy flow that are governed by thermodynamics, which is the theory of energy exchange between systems. Trophic dynamics relates to thermodynamics because it deals with the transfer and transformation of energy (originating externally from the sun via solar radiation) to and among organisms.
Energetics and the carbon cycle
The first step in energetics is photosynthesis, wherein water and carbon dioxide from the air are taken in with energy from the sun, and are converted into oxygen and glucose. Cellular respiration is the reverse reaction, wherein oxygen and sugar are taken in and release energy as they are converted back into carbon dioxide and water. The carbon dioxide and water produced by respiration can be recycled back into plants.
Energy loss can be measured either by efficiency (how much energy makes it to the next level), or by biomass (how much living material exists at those levels at one point in time, measured by standing crop). Of all the net primary productivity at the producer trophic level, in general only 10% goes to the next level, the primary consumers, then only 10% of that 10% goes on to the next trophic level, and so on up the food pyramid. Ecological efficiency may be anywhere from 5% to 20% depending on how efficient or inefficient that ecosystem is. This decrease in efficiency occurs because organisms need to perform cellular respiration to survive, and energy is lost as heat when cellular respiration is performed. That is also why there are fewer tertiary consumers than there are producers.
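A minimal sketch of the roughly-10%-per-level bookkeeping described above (the starting energy figure and the exact efficiency are illustrative assumptions, within the 5–20% range the text quotes):

```python
# Energy remaining at each trophic level under a fixed transfer efficiency.
def energy_per_level(net_primary_production, efficiency=0.10, levels=4):
    energy = net_primary_production
    out = [energy]
    for _ in range(levels - 1):
        energy *= efficiency          # the rest is lost, mostly as heat from respiration
        out.append(energy)
    return out

# producers -> primary -> secondary -> tertiary consumers
print(energy_per_level(10_000.0))     # e.g. kcal/m^2/yr: [10000.0, 1000.0, 100.0, 10.0]
```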
Primary production
A producer is any organism that performs photosynthesis. Producers are important because they convert energy from the sun into a storable and usable chemical form of energy, glucose, as well as oxygen. The producers themselves can use the energy stored in glucose to perform cellular respiration. Or, if the producer is consumed by herbivores in the next trophic level, some of the energy is passed on up the pyramid. The glucose stored within producers serves as food for consumers, and so it is only through producers that consumers are able to access the sun’s energy. Some examples of primary producers are algae, mosses, and other plants such as grasses, trees, and shrubs.
Chemosynthetic bacteria perform a process similar to photosynthesis, but instead of energy from the sun they use energy stored in chemicals like hydrogen sulfide. This process, referred to as chemosynthesis, usually occurs deep in the ocean at hydrothermal vents that produce heat and chemicals such as hydrogen, hydrogen sulfide and methane. Chemosynthetic bacteria can use the energy in the bonds of the hydrogen sulfide and oxygen to convert carbon dioxide to glucose, releasing water and sulfur in the process. Organisms that consume the chemosynthetic bacteria can take in the glucose and use oxygen to perform cellular respiration, similar to herbivores consuming producers.
One of the factors that controls primary production is the amount of energy that enters the producer(s), which can be measured using productivity. Only one percent of solar energy enters the producer; the rest bounces off or moves through. Gross primary productivity is the amount of energy the producer actually gets. Generally, 60% of the energy that enters the producer goes to the producer's own respiration. The net primary productivity is the amount that the plant retains after the amount that it used for cellular respiration is subtracted. Another factor controlling primary production is organic/inorganic nutrient levels in the water or soil that the producer is living in.
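The same bookkeeping in a tiny sketch (the percentages are the ones quoted in the text; the incoming solar figure is an arbitrary example):

```python
# Gross vs. net primary productivity from the percentages quoted above.
solar_input = 1_000_000.0                        # arbitrary energy units
gross_primary_production = 0.01 * solar_input    # ~1% of sunlight is captured
respiration = 0.60 * gross_primary_production    # ~60% used by the producer itself
net_primary_production = gross_primary_production - respiration
print(gross_primary_production, net_primary_production)   # 10000.0 4000.0
```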
Secondary production
Secondary production is the use of energy stored in plants converted by consumers to their own biomass. Different ecosystems have different levels of consumers, all ending with one top consumer. Most energy is stored in organic matter of plants, and as the consumers eat these plants they take up this energy. This energy in the herbivores and omnivores is then consumed by carnivores. There is also a large amount of energy that is in primary production and ends up being waste or litter, referred to as detritus. The detrital food chain includes a large amount of microbes, macroinvertebrates, meiofauna, fungi, and bacteria. These organisms are consumed by omnivores and carnivores and account for a large amount of secondary production. Secondary consumers can vary widely in how efficient they are in consuming. The efficiency of energy being passed on to consumers is estimated to be around 10%. Energy flow through consumers differs in aquatic and terrestrial environments.
In aquatic environments
Heterotrophs contribute to secondary production, which depends on primary productivity and the net primary products. Secondary production is the energy that herbivores and decomposers use and thus depends on primary productivity. Primarily herbivores and decomposers consume all the carbon from two main organic sources in aquatic ecosystems, autochthonous and allochthonous. Autochthonous carbon comes from within the ecosystem and includes aquatic plants, algae and phytoplankton. Allochthonous carbon from outside the ecosystem is mostly dead organic matter from the terrestrial ecosystem entering the water. In stream ecosystems, approximately 66% of annual energy input can be washed downstream. The remaining amount is consumed and lost as heat.
In terrestrial environments
Secondary production is often described in terms of trophic levels, and while this can be useful in explaining relationships it overemphasizes the rarer interactions. Consumers often feed at multiple trophic levels. Energy transferred above the third trophic level is relatively unimportant. The assimilation efficiency can be expressed by the amount of food the consumer has eaten, how much the consumer assimilates and what is expelled as feces or urine. While a portion of the energy is used for respiration, another portion of the energy goes towards biomass in the consumer. There are two major food chains: The primary food chain is the energy coming from autotrophs and passed on to the consumers; and the second major food chain is when carnivores eat the herbivores or decomposers that consume the autotrophic energy. Consumers are broken down into primary consumers, secondary consumers and tertiary consumers. Carnivores have a much higher assimilation of energy, about 80% and herbivores have a much lower efficiency of approximately 20 to 50%. Energy in a system can be affected by animal emigration/immigration. The movements of organisms are significant in terrestrial ecosystems. Energetic consumption by herbivores in terrestrial ecosystems has a low range of ~3-7%. The flow of energy is similar in many terrestrial environments. The fluctuation in the amount of net primary product consumed by herbivores is generally low. This is in large contrast to aquatic environments of lakes and ponds where grazers have a much higher consumption of around ~33%. Ectotherms and endotherms have very different assimilation efficiencies.
Detritivores
Detritivores consume organic material that is decomposing and are in turn consumed by carnivores. Predator productivity is correlated with prey productivity. This confirms that the primary productivity in ecosystems affects all subsequent productivity.
Detritus is a large portion of organic material in ecosystems. Organic material in temperate forests is mostly made up of dead plants, approximately 62%.
In an aquatic ecosystem, leaf matter that falls into streams gets wet and begins to leech organic material. This happens rather quickly and will attract microbes and invertebrates. The leaves can be broken down into large pieces called coarse particulate organic matter (CPOM). The CPOM is rapidly colonized by microbes. Meiofauna is extremely important to secondary production in stream ecosystems. Microbes breaking down and colonizing this leaf matter are very important to the detritivores. The detritivores make the leaf matter more edible by releasing compounds from the tissues; it ultimately helps soften them. As leaves decay nitrogen will decrease since cellulose and lignin in the leaves is difficult to break down. Thus the colonizing microbes bring in nitrogen in order to aid in the decomposition. Leaf breakdown can depend on initial nitrogen content, season, and species of trees. The species of trees can have variation when their leaves fall. Thus the breakdown of leaves is happening at different times, which is called a mosaic of microbial populations.
Species effect and diversity in an ecosystem can be analyzed through their performance and efficiency. In addition, secondary production in streams can be influenced heavily by detritus that falls into the streams; production of benthic fauna biomass and abundance decreased an additional 47–50% during a study of litter removal and exclusion.
Energy flow across ecosystems
Research has demonstrated that primary producers fix carbon at similar rates across ecosystems. Once carbon has been introduced into a system as a viable source of energy, the mechanisms that govern the flow of energy to higher trophic levels vary across ecosystems. Among aquatic and terrestrial ecosystems, patterns have been identified that can account for this variation and have been divided into two main pathways of control: top-down and bottom-up. The acting mechanisms within each pathway ultimately regulate community and trophic level structure within an ecosystem to varying degrees. Bottom-up controls involve mechanisms that are based on resource quality and availability, which control primary productivity and the subsequent flow of energy and biomass to higher trophic levels. Top-down controls involve mechanisms that are based on consumption by consumers. These mechanisms control the rate of energy transfer from one trophic level to another as herbivores or predators feed on lower trophic levels.
Aquatic vs terrestrial ecosystems
Much variation in the flow of energy is found within each type of ecosystem, creating a challenge in identifying variation between ecosystem types. In a general sense, the flow of energy is a function of primary productivity with temperature, water availability, and light availability. For example, among aquatic ecosystems, higher rates of production are usually found in large rivers and shallow lakes than in deep lakes and clear headwater streams. Among terrestrial ecosystems, marshes, swamps, and tropical rainforests have the highest primary production rates, whereas tundra and alpine ecosystems have the lowest. The relationships between primary production and environmental conditions have helped account for variation within ecosystem types, allowing ecologists to demonstrate that energy flows more efficiently through aquatic ecosystems than terrestrial ecosystems due to the various bottom-up and top-down controls in play.
Bottom-up
The strength of bottom-up controls on energy flow are determined by the nutritional quality, size, and growth rates of primary producers in an ecosystem. Photosynthetic material is typically rich in nitrogen (N) and phosphorus (P) and supplements the high herbivore demand for N and P across all ecosystems. Aquatic primary production is dominated by small, single-celled phytoplankton that are mostly composed of photosynthetic material, providing an efficient source of these nutrients for herbivores. In contrast, multi-cellular terrestrial plants contain many large supporting cellulose structures of high carbon but low nutrient value. Because of this structural difference, aquatic primary producers have less biomass per photosynthetic tissue stored within the aquatic ecosystem than in the forests and grasslands of terrestrial ecosystems. This low biomass relative to photosynthetic material in aquatic ecosystems allows for a more efficient turnover rate compared to terrestrial ecosystems. As phytoplankton are consumed by herbivores, their enhanced growth and reproduction rates sufficiently replace lost biomass and, in conjunction with their nutrient dense quality, support greater secondary production.
Additional factors impacting primary production includes inputs of N and P, which occurs at a greater magnitude in aquatic ecosystems. These nutrients are important in stimulating plant growth and, when passed to higher trophic levels, stimulate consumer biomass and growth rate. If either of these nutrients are in short supply, they can limit overall primary production. Within lakes, P tends to be the greater limiting nutrient while both N and P limit primary production in rivers. Due to these limiting effects, nutrient inputs can potentially alleviate the limitations on net primary production of an aquatic ecosystem. Allochthonous material washed into an aquatic ecosystem introduces N and P as well as energy in the form of carbon molecules that are readily taken up by primary producers. Greater inputs and increased nutrient concentrations support greater net primary production rates, which in turn supports greater secondary production.
Top-down
Top-down mechanisms exert greater control on aquatic primary producers due to the role of consumers within an aquatic food web. Among consumers, herbivores can mediate the impacts of trophic cascades by bridging the flow of energy from primary producers to predators in higher trophic levels. Across ecosystems, there is a consistent association between herbivore growth and producer nutritional quality. However, in aquatic ecosystems, primary producers are consumed by herbivores at a rate four times greater than in terrestrial ecosystems. Although this topic is highly debated, researchers have attributed the distinction in herbivore control to several theories, including producer to consumer size ratios and herbivore selectivity.
Modeling of top-down controls on primary producers suggests that the greatest control on the flow of energy occurs when the size ratio of consumer to primary producer is the highest. The size distribution of organisms found within a single trophic level in aquatic systems is much narrower than that of terrestrial systems. On land, the consumer size ranges from smaller than the plant it consumes, such as an insect, to significantly larger, such as an ungulate, while in aquatic systems, consumer body size within a trophic level varies much less and is strongly correlated with trophic position. As a result, the size difference between producers and consumers is consistently larger in aquatic environments than on land, resulting in stronger herbivore control over aquatic primary producers.
Herbivores can potentially control the fate of organic matter as it is cycled through the food web. Herbivores tend to select nutritious plants while avoiding plants with structural defense mechanisms. Like support structures, defense structures are composed of nutrient poor, high carbon cellulose. Access to nutritious food sources enhances herbivore metabolism and energy demands, leading to greater removal of primary producers. In aquatic ecosystems, phytoplankton are highly nutritious and generally lack defense mechanisms. This results in greater top-down control because consumed plant matter is quickly released back into the system as labile organic waste. In terrestrial ecosystems, primary producers are less nutritionally dense and are more likely to contain defense structures. Because herbivores prefer nutritionally dense plants and avoid plants or plant parts with defense structures, a greater amount of plant matter is left unconsumed within the ecosystem. Herbivore avoidance of low-quality plant matter may be why terrestrial systems exhibit weaker top-down control on the flow of energy.
See also
References
Further reading
Ecology terminology
Energy
Environmental science
Ecological economics
Classical physics | Classical physics is a group of physics theories that predate modern, more complete, or more widely applicable theories. If a currently accepted theory is considered to be modern, and its introduction represented a major paradigm shift, then the previous theories, or new theories based on the older paradigm, will often be referred to as belonging to the area of "classical physics".
As such, the definition of a classical theory depends on context. Classical physical concepts are often used when modern theories are unnecessarily complex for a particular situation. Most often, classical physics refers to pre-1900 physics, while modern physics refers to post-1900 physics, which incorporates elements of quantum mechanics and relativity.
Overview
Classical theory has at least two distinct meanings in physics. In the context of quantum mechanics, classical theory refers to theories of physics that do not use the quantisation paradigm, which includes classical mechanics and relativity. Likewise, classical field theories, such as general relativity and classical electromagnetism, are those that do not use quantum mechanics. In the context of general and special relativity, classical theories are those that obey Galilean relativity.
Depending on point of view, among the branches of theory sometimes included in classical physics are variably:
Classical mechanics
Newton's laws of motion
Classical Lagrangian and Hamiltonian formalisms
Classical electrodynamics (Maxwell's equations)
Classical thermodynamics
Classical chaos theory and nonlinear dynamics
Comparison with modern physics
In contrast to classical physics, "modern physics" is a slightly looser term that may refer to just quantum physics or to 20th- and 21st-century physics in general. Modern physics includes quantum theory and relativity, when applicable.
A physical system can be described by classical physics when it satisfies conditions such that the laws of classical physics are approximately valid.
In practice, physical objects ranging from those larger than atoms and molecules, to objects in the macroscopic and astronomical realm, can be well-described (understood) with classical mechanics. Beginning at the atomic level and lower, the laws of classical physics break down and generally do not provide a correct description of nature. Electromagnetic fields and forces can be described well by classical electrodynamics at length scales and field strengths large enough that quantum mechanical effects are negligible. Unlike quantum physics, classical physics is generally characterized by the principle of complete determinism, although deterministic interpretations of quantum mechanics do exist.
From the point of view of classical physics as being non-relativistic physics, the predictions of general and special relativity are significantly different from those of classical theories, particularly concerning the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light. Traditionally, light was reconciled with classical mechanics by assuming the existence of a stationary medium through which light propagated, the luminiferous aether, which was later shown not to exist.
Mathematically, classical physics equations are those in which the Planck constant does not appear. According to the correspondence principle and Ehrenfest's theorem, as a system becomes larger or more massive the classical dynamics tends to emerge, with some exceptions, such as superfluidity. This is why we can usually ignore quantum mechanics when dealing with everyday objects and the classical description will suffice. However, one of the most vigorous ongoing fields of research in physics is classical-quantum correspondence. This field of research is concerned with the discovery of how the laws of quantum physics give rise to classical physics found at the limit of the large scales of the classical level.
Computer modeling and manual calculation, modern and classic comparison
Today, a computer performs millions of arithmetic operations in seconds to solve a classical differential equation, while Newton (one of the fathers of the differential calculus) would take hours to solve the same equation by manual calculation, even if he were the discoverer of that particular equation.
Computer modeling is essential for quantum and relativistic physics. Classical physics is considered the limit of quantum mechanics for a large number of particles. On the other hand, classical mechanics is derived from relativistic mechanics. For example, in many formulations from special relativity, a correction factor (v/c)² appears, where v is the velocity of the object and c is the speed of light. For velocities much smaller than that of light, one can neglect the terms in (v/c)² and higher powers that appear. These formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities. Computer modeling has to be as faithful to reality as possible. Classical physics would introduce an error, as in the superfluidity case. In order to produce reliable models of the world, one cannot use classical physics. It is true that quantum theories consume time and computer resources, and the equations of classical physics could be resorted to in order to provide a quick solution, but such a solution would lack reliability.
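To make the low-velocity limit concrete, here is a minimal numeric sketch (not part of the original article) comparing relativistic and Newtonian kinetic energy; the test mass and velocity values are illustrative assumptions.

```python
import math

c = 299_792_458.0  # speed of light, m/s
m = 1.0            # test mass, kg (illustrative assumption)

def ke_newton(v):
    """Newtonian kinetic energy."""
    return 0.5 * m * v**2

def ke_relativistic(v):
    """Relativistic kinetic energy: (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return (gamma - 1.0) * m * c**2

for v in (30.0, 3.0e5, 3.0e7, 1.0e8):  # from car speed up to about c/3
    kn, kr = ke_newton(v), ke_relativistic(v)
    print(f"v = {v:.1e} m/s  (v/c)^2 = {(v/c)**2:.2e}  "
          f"relative error of Newtonian KE = {abs(kr - kn) / kr:.2e}")
```

The printout shows the Newtonian value agreeing with the relativistic one whenever (v/c)² is tiny, which is the sense in which classical mechanics emerges as the low-velocity limit.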
In computer modeling, the energy scale alone would determine which theory to use, relativity or quantum theory, when attempting to describe the behavior of an object. A physicist would use a classical model to provide an approximation before more exacting models are applied and those calculations proceed.
In a computer model, there is no need to use the speed of the object if classical physics is excluded. Low-energy objects would be handled by quantum theory and high-energy objects by relativity theory.
See also
Glossary of classical physics
Semiclassical physics
References
History of physics
Philosophy of physics
Turbulence kinetic energy | In fluid dynamics, turbulence kinetic energy (TKE) is the mean kinetic energy per unit mass associated with eddies in turbulent flow. Physically, the turbulence kinetic energy is characterized by measured root-mean-square (RMS) velocity fluctuations. In the Reynolds-averaged Navier Stokes equations, the turbulence kinetic energy can be calculated based on the closure method, i.e. a turbulence model.
The TKE can be defined to be half the sum of the variances σ² (square of standard deviations σ) of the fluctuating velocity components:
$$k = \frac{1}{2}\left(\sigma_u^2 + \sigma_v^2 + \sigma_w^2\right) = \frac{1}{2}\left(\overline{(u')^2} + \overline{(v')^2} + \overline{(w')^2}\right),$$
where each turbulent velocity component is the difference between the instantaneous and the average velocity: $u' = u - \bar{u}$ (Reynolds decomposition). The mean and variance are
$$\bar{u} = \frac{1}{T}\int_0^T u(t)\,\mathrm{d}t \quad\text{and}\quad \overline{(u')^2} = \frac{1}{T}\int_0^T \big(u(t) - \bar{u}\big)^2\,\mathrm{d}t,$$
respectively.
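As an illustration of this definition (not part of the original article), the following sketch estimates TKE from a synthetic velocity time series via Reynolds decomposition; the signal statistics are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic instantaneous velocity components: mean flow plus random fluctuations.
u = 10.0 + 0.8 * rng.standard_normal(n)  # streamwise
v = 0.0 + 0.5 * rng.standard_normal(n)   # spanwise
w = 0.0 + 0.3 * rng.standard_normal(n)   # vertical

# Reynolds decomposition: fluctuation = instantaneous - mean.
u_p, v_p, w_p = u - u.mean(), v - v.mean(), w - w.mean()

# TKE = half the sum of the variances of the fluctuating components.
k = 0.5 * (np.mean(u_p**2) + np.mean(v_p**2) + np.mean(w_p**2))
print(f"estimated TKE k = {k:.3f} m^2/s^2")  # roughly 0.5*(0.64 + 0.25 + 0.09)
```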
TKE can be produced by fluid shear, friction or buoyancy, or through external forcing at low-frequency eddy scales (integral scale). Turbulence kinetic energy is then transferred down the turbulence energy cascade, and is dissipated by viscous forces at the Kolmogorov scale. This process of production, transport and dissipation can be expressed as:
$$\frac{Dk}{Dt} + \nabla\cdot T' = P - \varepsilon,$$
where:
$\dfrac{Dk}{Dt}$ is the mean-flow material derivative of TKE;
$\nabla\cdot T'$ is the turbulence transport of TKE;
$P$ is the production of TKE, and
$\varepsilon$ is the TKE dissipation.
Assuming that molecular viscosity is constant, and making the Boussinesq approximation, the TKE equation is:
$$\frac{\partial k}{\partial t} + \bar{u}_j \frac{\partial k}{\partial x_j}
= -\frac{1}{\rho_0}\frac{\partial\, \overline{u_i' p'}}{\partial x_i}
- \frac{1}{2}\frac{\partial\, \overline{u_j' u_j' u_i'}}{\partial x_i}
+ \nu \frac{\partial^2 k}{\partial x_j^2}
- \overline{u_i' u_j'}\,\frac{\partial \bar{u}_i}{\partial x_j}
- \nu\,\overline{\frac{\partial u_i'}{\partial x_j}\frac{\partial u_i'}{\partial x_j}}
- \frac{g}{\rho_0}\,\overline{\rho' u_i'}\,\delta_{i3}.$$
By examining these phenomena, the turbulence kinetic energy budget for a particular flow can be found.
Computational fluid dynamics
In computational fluid dynamics (CFD), it is impossible to numerically simulate turbulence without discretizing the flow-field as far as the Kolmogorov microscales, which is called direct numerical simulation (DNS). Because DNS simulations are exorbitantly expensive due to memory, computational and storage overheads, turbulence models are used to simulate the effects of turbulence. A variety of models are used, but generally TKE is a fundamental flow property which must be calculated in order for fluid turbulence to be modelled.
Reynolds-averaged Navier–Stokes equations
Reynolds-averaged Navier–Stokes (RANS) simulations use the Boussinesq eddy viscosity hypothesis to calculate the Reynolds stress that results from the averaging procedure:
$$-\rho\,\overline{u_i' u_j'} = \mu_t \left( \frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i} \right) - \frac{2}{3}\rho k\,\delta_{ij},$$
where $\mu_t$ is the turbulent (eddy) viscosity and $\delta_{ij}$ is the Kronecker delta.
The exact method of resolving TKE depends upon the turbulence model used; $k$–$\varepsilon$ (k–epsilon) models assume isotropy of turbulence, whereby the normal stresses are equal:
$$\overline{u_1'^2} = \overline{u_2'^2} = \overline{u_3'^2} = \tfrac{2}{3}k.$$
This assumption makes modelling of turbulence quantities ($k$ and $\varepsilon$) simpler, but it will not be accurate in scenarios where anisotropic behaviour of turbulence stresses dominates, and the implications of this in the production of turbulence also lead to over-prediction, since the production depends on the mean rate of strain and not on the difference between the normal stresses (as they are, by assumption, equal).
Reynolds-stress models (RSM) use a different method to close the Reynolds stresses, whereby the normal stresses are not assumed isotropic, so the issue with TKE production is avoided.
Initial conditions
Accurate prescription of TKE as an initial condition in CFD simulations is important to accurately predict flows, especially in high Reynolds-number simulations. A smooth duct example is given below:
$$k = \frac{3}{2}\left(I\,|u_{\text{ref}}|\right)^2,$$
where $I$ is the initial turbulence intensity [%] given below, and $|u_{\text{ref}}|$ is the initial velocity magnitude. As an example for pipe flows, with the Reynolds number based on the pipe diameter:
$$I = 0.16\,\mathrm{Re}^{-1/8}.$$
The dissipation rate then follows as
$$\varepsilon = C_\mu^{3/4}\,\frac{k^{3/2}}{l}.$$
Here $l$ is the turbulence or eddy length scale, given below, and $C_\mu$ is a $k$–$\varepsilon$ model parameter whose value is typically given as 0.09. The turbulent length scale can be estimated as
$$l = 0.07 L,$$
with $L$ a characteristic length. For internal flows this may take the value of the inlet duct (or pipe) width (or diameter) or the hydraulic diameter.
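The sketch below (not from the original article) strings these commonly quoted rules of thumb together for a hypothetical pipe flow; the bulk velocity, diameter, and viscosity values are assumptions chosen only for illustration.

```python
# Rough initial-condition estimates for a k-epsilon RANS simulation of pipe flow.
U = 2.0          # bulk velocity, m/s (assumed)
D = 0.05         # pipe diameter, m (assumed)
nu = 1.0e-6      # kinematic viscosity of water, m^2/s (assumed)
C_mu = 0.09      # standard k-epsilon model constant

Re = U * D / nu                  # Reynolds number based on pipe diameter
I = 0.16 * Re**(-1.0 / 8.0)      # turbulence intensity estimate
k = 1.5 * (I * U)**2             # initial turbulence kinetic energy
l = 0.07 * D                     # turbulence length scale from the pipe diameter
eps = C_mu**0.75 * k**1.5 / l    # initial dissipation rate

print(f"Re = {Re:.3g}, I = {I*100:.2f} %, "
      f"k = {k:.4g} m^2/s^2, eps = {eps:.4g} m^2/s^3")
```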
References
Further reading
Turbulence kinetic energy at CFD Online.
Lacey, R. W. J.; Neary, V. S.; Liao, J. C.; Enders, E. C.; Tritico, H. M. (2012). "The IPOS framework: linking fish swimming performance in altered flows from laboratory experiments to rivers." River Res. Applic. 28 (4), pp. 429–443. doi:10.1002/rra.1584.
Wilcox, D. C. (2006). "Turbulence modeling for CFD". Third edition. DCW Industries, La Canada, USA. ISBN 978-1-928729-08-2.
Computational fluid dynamics
Turbulence
Energy (physics)
Reversible process (thermodynamics) | In thermodynamics, a reversible process is a process, involving a system and its surroundings, whose direction can be reversed by infinitesimal changes in some properties of the surroundings, such as pressure or temperature.
Throughout an entire reversible process, the system is in thermodynamic equilibrium, both physical and chemical, and nearly in pressure and temperature equilibrium with its surroundings. This prevents unbalanced forces and acceleration of moving system boundaries, which in turn avoids friction and other dissipation.
To maintain equilibrium, reversible processes are extremely slow (quasistatic). The process must occur slowly enough that after some small change in a thermodynamic parameter, the physical processes in the system have enough time for the other parameters to self-adjust to match the new, changed parameter value. For example, if a container of water has sat in a room long enough to match the steady temperature of the surrounding air, for a small change in the air temperature to be reversible, the whole system of air, water, and container must wait long enough for the container and water to settle into a new, matching temperature before the next small change can occur.
While processes in isolated systems are never reversible, cyclical processes can be reversible or irreversible. Reversible processes are hypothetical or idealized but central to the second law of thermodynamics. Melting or freezing of ice in water is an example of a realistic process that is nearly reversible.
Additionally, the system must be in (quasistatic) equilibrium with the surroundings at all times, and there must be no dissipative effects, such as friction, for a process to be considered reversible.
Reversible processes are useful in thermodynamics because they are so idealized that the equations for heat and expansion/compression work are simple. This enables the analysis of model processes, which usually define the maximum efficiency attainable in corresponding real processes. Other applications exploit that entropy and internal energy are state functions whose change depends only on the initial and final states of the system, not on how the process occurred. Therefore, the entropy and internal-energy change in a real process can be calculated quite easily by analyzing a reversible process connecting the real initial and final system states. In addition, reversibility defines the thermodynamic condition for chemical equilibrium.
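As a minimal illustration of this use of state functions (not from the original article), the entropy change of water heated irreversibly can be computed along a hypothetical reversible path; the mass, specific heat, and temperatures below are assumptions.

```python
import math

m = 1.0                    # kg of water (assumed)
c_p = 4186.0               # J/(kg K), treated as constant (assumed)
T1, T2 = 293.15, 353.15    # K: heated from 20 C to 80 C (assumed)

# Along a reversible path, dS = dQ_rev / T = m c_p dT / T, so integrating:
dS_water = m * c_p * math.log(T2 / T1)
print(f"Entropy change of the water: {dS_water:.1f} J/K")

# If the heat instead came irreversibly from a reservoir held at T2,
# the reservoir's entropy change is -Q/T2, and the total is positive:
Q = m * c_p * (T2 - T1)
dS_total = dS_water - Q / T2
print(f"Entropy produced by the irreversible transfer: {dS_total:.1f} J/K")
```

The water's entropy change is the same either way, because entropy is a state function; only the total entropy of system plus surroundings distinguishes the reversible from the irreversible route.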
Overview
Thermodynamic processes can be carried out in one of two ways: reversibly or irreversibly. An ideal thermodynamically reversible process is free of dissipative losses and therefore the magnitude of work performed by or on the system would be maximized. The incomplete conversion of heat to work in a cyclic process, however, applies to both reversible and irreversible cycles. The dependence of work on the path of the thermodynamic process is also unrelated to reversibility, since expansion work, which can be visualized on a pressure–volume diagram as the area beneath the equilibrium curve, is different for different reversible expansion processes (e.g. adiabatic, then isothermal; vs. isothermal, then adiabatic) connecting the same initial and final states.
Irreversibility
In an irreversible process, finite changes are made; therefore the system is not at equilibrium throughout the process. In a cyclic process, the difference between the reversible work and the actual work for a process is as shown in the following equation:
Boundaries and states
Simple reversible processes change the state of a system in such a way that the net change in the combined entropy of the system and its surroundings is zero. (The entropy of the system alone is conserved only in reversible adiabatic processes.) Nevertheless, the Carnot cycle demonstrates that the state of the surroundings may change in a reversible process as the system returns to its initial state. Reversible processes define the boundaries of how efficient heat engines can be in thermodynamics and engineering: a reversible process is one where the machine has maximum efficiency (see Carnot cycle).
In some cases, it may be important to distinguish between reversible and quasistatic processes. Reversible processes are always quasistatic, but the converse is not always true. For example, an infinitesimal compression of a gas in a cylinder where there is friction between the piston and the cylinder is a quasistatic, but not reversible process. Although the system has been driven from its equilibrium state by only an infinitesimal amount, energy has been irreversibly lost to waste heat, due to friction, and cannot be recovered by simply moving the piston in the opposite direction by the infinitesimally same amount.
Engineering archaisms
Historically, the term Tesla principle was used to describe (among other things) certain reversible processes invented by Nikola Tesla. However, this phrase is no longer in conventional use. The principle stated that some systems could be reversed and operated in a complementary manner. It was developed during Tesla's research in alternating currents where the current's magnitude and direction varied cyclically. During a demonstration of the Tesla turbine, the disks revolved and machinery fastened to the shaft was operated by the engine. If the turbine's operation was reversed, the disks acted as a pump.
Footnotes
See also
Time reversibility
Carnot cycle
Entropy production
Toffoli gate
Time evolution
Quantum circuit
Reversible computing
Maxwell's demon
Stirling engine
References
Thermodynamic processes
Aeolipile | An aeolipile, aeolipyle, or eolipile, from the Greek "Αἰόλου πύλη," , also known as a Hero's (or Heron's) engine, is a simple, bladeless radial steam turbine which spins when the central water container is heated. Torque is produced by steam jets exiting the turbine. The Greek-Egyptian mathematician and engineer Hero of Alexandria described the device in the 1st century AD, and many sources give him the credit for its invention. However, Vitruvius was the first to describe this appliance in his De architectura.
The aeolipile is considered to be the first recorded steam engine or reaction steam turbine, but it is neither a practical source of power nor a direct predecessor of the type of steam engine invented during the Industrial Revolution.
The name – derived from the Greek word Αἴολος and Latin word pila – translates to "the ball of Aeolus", Aeolus being the Greek god of the air and wind.
Due to its use of steam as the medium for performing work, the Aeolipile (in profile view) was adopted as the symbol for the U.S. Navy's Boiler Technician Rate - which had formed out of the Watertender, Boilermaker, and Boilerman ratings (that used the same symbol).
Physics
The aeolipile usually consists of a spherical or cylindrical vessel with oppositely bent or curved nozzles projecting outwards. It is designed to rotate on its axis. When the vessel is pressurised with steam, the gas is expelled out of the nozzles, which generates thrust due to the rocket principle as a consequence of the 2nd and 3rd of Newton's laws of motion. When the nozzles, pointing in different directions, produce forces along different lines of action perpendicular to the axis of the bearings, the thrusts combine to result in a rotational moment (mechanical couple), or torque, causing the vessel to spin about its axis. Aerodynamic drag and frictional forces in the bearings build up quickly with increasing rotational speed (rpm) and consume the accelerating torque, eventually cancelling it and achieving a steady state speed.
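A back-of-the-envelope sketch (not from the original article) of the starting torque from two opposed steam jets; all numbers are assumed for illustration, and pressure thrust and losses are ignored.

```python
# Idealized starting torque of an aeolipile with two opposed nozzles.
m_dot_total = 1.0e-3   # total steam mass flow, kg/s (assumed)
v_exit = 50.0          # steam exit speed from each nozzle, m/s (assumed)
r = 0.05               # lever arm of each nozzle about the spin axis, m (assumed)

thrust_per_nozzle = (m_dot_total / 2.0) * v_exit  # momentum thrust, N
torque = 2.0 * thrust_per_nozzle * r              # both jets act in the same rotational sense
print(f"thrust per nozzle ~ {thrust_per_nozzle*1e3:.1f} mN, "
      f"starting torque ~ {torque*1e3:.2f} mN*m")
# At speed, bearing friction and aerodynamic drag grow until they cancel this torque.
```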
Typically, and as Hero described the device, the water is heated in a simple boiler which forms part of a stand for the rotating vessel. Where this is the case, the boiler is connected to the rotating chamber by a pair of pipes that also serve as the pivots for the chamber. Alternatively the rotating chamber may itself serve as the boiler, and this arrangement greatly simplifies the pivot/bearing arrangements, as they then do not need to pass steam. This can be seen in the illustration of a classroom model shown here.
History
Both Hero and Vitruvius draw on the much earlier work by Ctesibius (285–222 BC), also known as Ktēsíbios or Tesibius, who was an inventor and mathematician in Alexandria, Ptolemaic Egypt. He wrote the first treatises on the science of compressed air and its uses in pumps.
Vitruvius's description
Vitruvius (c. 80 BC – c. 15 BC) mentions aeolipiles by name:
Hero's description
Hero (c. 10–70 AD) takes a more practical approach, in that he gives instructions how to make one:
Practical usage
It is not known whether the aeolipile was put to any practical use in ancient times, and if it was seen as a pragmatic device, a whimsical novelty, an object of reverence, or some other thing. A source described it as a mere curiosity for the ancient Greeks, or a "party trick". Hero's drawing shows a standalone device, and was presumably intended as a "temple wonder", like many of the other devices described in Pneumatica.
Vitruvius, on the other hand, mentions use of the aeolipile for demonstrating the physical properties of the weather. He describes them as:
After describing the device's construction (see above) he concludes:
In 1543, Blasco de Garay, a scientist and a captain in the Spanish navy, allegedly demonstrated before the Holy Roman Emperor, Charles V, and a committee of high officials an invention he claimed could propel large ships in the absence of wind, using an apparatus consisting of a copper boiler and moving wheels on either side of the ship. This account was preserved by the royal Spanish archives at Simancas. It is proposed that de Garay used Hero's aeolipile and combined it with the technology used in Roman boats and late medieval galleys. Here, de Garay's invention introduced an innovation where the aeolipile had practical usage, which was to drive the paddlewheels, demonstrating the feasibility of steam-driven boats. This claim was denied by Spanish authorities.
See also
Catherine wheel (firework)
Rocket engine
Segner wheel
Steam engine
Steam locomotive
Steam rocket
Tip jet
References
Further reading
History of thermodynamics
Steam engines
Rocket engines
Industrial design
Hellenistic engineering
Early rocketry
Ancient inventions
Ancient Egyptian technology
Egyptian inventions
History of technology
Cauchy momentum equation | The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum.
Main equation
In convective (or Lagrangian) form the Cauchy momentum equation is written as:
$$\frac{D\mathbf{u}}{Dt} = \frac{1}{\rho}\,\nabla\cdot\boldsymbol{\sigma} + \mathbf{f}$$
where
$\mathbf{u}$ is the flow velocity vector field, which depends on time and space, (unit: $\mathrm{m/s}$)
$t$ is time, (unit: $\mathrm{s}$)
$\dfrac{D\mathbf{u}}{Dt}$ is the material derivative of $\mathbf{u}$, equal to $\dfrac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}$, (unit: $\mathrm{m/s^2}$)
$\rho$ is the density at a given point of the continuum (for which the continuity equation holds), (unit: $\mathrm{kg/m^3}$)
$\boldsymbol{\sigma}$ is the stress tensor, (unit: $\mathrm{Pa} = \mathrm{N/m^2}$)
$\mathbf{f}$ is a vector containing all of the accelerations caused by body forces (sometimes simply gravitational acceleration), (unit: $\mathrm{m/s^2}$)
$\nabla\cdot\boldsymbol{\sigma}$ is the divergence of the stress tensor. (unit: $\mathrm{Pa/m}$)
Commonly used SI units are given in parentheses, although the equations are general in nature; other units can be entered into them, or the units can be removed entirely by nondimensionalization.
Note that we use column vectors (in the Cartesian coordinate system) above only for clarity, but the equation is written using physical components (which are neither covariants ("column") nor contravariants ("row")). However, if we chose a non-orthogonal curvilinear coordinate system, then we should calculate and write equations in covariant ("row vectors") or contravariant ("column vectors") form.
After an appropriate change of variables, it can also be written in conservation form:
$$\frac{\partial \mathbf{j}}{\partial t} + \nabla\cdot\mathbf{F} = \mathbf{s}$$
where $\mathbf{j}$ is the momentum density at a given space-time point, $\mathbf{F}$ is the flux associated to the momentum density, and $\mathbf{s}$ contains all of the body forces per unit volume.
Differential derivation
Let us start with the generalized momentum conservation principle which can be written as follows: "The change in system momentum is proportional to the resulting force acting on this system". It is expressed by the formula:
where is momentum at time , and is force averaged over . After dividing by and passing to the limit we get (derivative):
Let us analyse each side of the equation above.
Right side
We split the forces into body forces and surface forces
Surface forces act on walls of the cubic fluid element. For each wall, the X component of these forces was marked in the figure with a cubic element (in the form of a product of stress and surface area e.g. with units ).
Adding forces (their X components) acting on each of the cube walls, we get:
After ordering and performing similar reasoning for components (they have not been shown in the figure, but these would be vectors parallel to the Y and Z axes, respectively) we get:
We can then write it in the symbolic operational form:
There are mass forces acting on the inside of the control volume. We can write them using the acceleration field (e.g. gravitational acceleration):
Left side
Let us calculate momentum of the cube:
Because we assume that tested mass (cube) is constant in time, so
Left and Right side comparison
We have
then
then
Divide both sides by , and because we get:
which finishes the derivation.
Integral derivation
Applying Newton's second law (th component) to a control volume in the continuum being modeled gives:
Then, based on the Reynolds transport theorem and using material derivative notation, one can write
where represents the control volume. Since this equation must hold for any control volume, it must be true that the integrand is zero, from this the Cauchy momentum equation follows. The main step (not done above) in deriving this equation is establishing that the derivative of the stress tensor is one of the forces that constitutes .
Conservation form
The Cauchy momentum equation can also be put in the following form:
simply by defining:
where is the momentum density at the point considered in the continuum (for which the continuity equation holds), is the flux associated to the momentum density, and contains all of the body forces per unit volume. is the dyad of the velocity.
Here and have same number of dimensions as the flow speed and the body acceleration, while , being a tensor, has .
In the Eulerian forms it is apparent that the assumption of no deviatoric stress brings Cauchy equations to the Euler equations.
Convective acceleration
A significant feature of the Navier–Stokes equations is the presence of convective acceleration: the effect of time-independent acceleration of a flow with respect to space. While individual continuum particles indeed experience time dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle.
Regardless of what kind of continuum is being dealt with, convective acceleration is a nonlinear effect. Convective acceleration is present in most flows (exceptions include one-dimensional incompressible flow), but its dynamic effect is disregarded in creeping flow (also called Stokes flow). Convective acceleration is represented by the nonlinear quantity , which may be interpreted either as or as , with the tensor derivative of the velocity vector . Both interpretations give the same result.
Advection operator vs tensor derivative
The convective acceleration can be thought of as the advection operator $(\mathbf{u}\cdot\nabla)$ acting on the velocity field $\mathbf{u}$. This contrasts with the expression in terms of the tensor derivative $\nabla\mathbf{u}$, which is the component-wise derivative of the velocity vector defined by $(\nabla\mathbf{u})_{ij} = \partial_i u_j$, so that
$$\left[\mathbf{u}\cdot(\nabla\mathbf{u})\right]_j = u_i\,\partial_i u_j = \left[(\mathbf{u}\cdot\nabla)\mathbf{u}\right]_j.$$
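A small symbolic check (not part of the original article) that the two readings of the convective term agree for a sample 2-D velocity field; the field itself is an arbitrary assumption.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Arbitrary smooth 2-D velocity field (assumed for illustration).
u = sp.Matrix([x**2 * y, sp.sin(x) + y])

# Advection-operator reading: (u . grad) applied to each component of u.
adv = sp.Matrix([u[0] * sp.diff(u[i], x) + u[1] * sp.diff(u[i], y) for i in range(2)])

# Tensor-derivative reading: u . (grad u), with (grad u)_{ij} = d u_j / d x_i.
grad_u = sp.Matrix([[sp.diff(u[j], var) for j in range(2)] for var in (x, y)])
tens = (u.T * grad_u).T

print((adv - tens).applyfunc(sp.simplify))  # -> zero vector: both interpretations coincide
```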
Lamb form
The vector calculus identity of the cross product of a curl holds:
where the Feynman subscript notation is used, which means the subscripted gradient operates only on the factor .
Lamb in his famous classical book Hydrodynamics (1895), used this identity to change the convective term of the flow velocity in rotational form, i.e. without a tensor derivative:
where the vector is called the Lamb vector. The Cauchy momentum equation becomes:
Using the identity:
the Cauchy equation becomes:
In fact, in case of an external conservative field, by defining its potential :
In case of a steady flow the time derivative of the flow velocity disappears, so the momentum equation becomes:
And by projecting the momentum equation on the flow direction, i.e. along a streamline, the cross product disappears due to a vector calculus identity of the triple scalar product:
If the stress tensor is isotropic, then only the pressure enters: (where is the identity tensor), and the Euler momentum equation in the steady incompressible case becomes:
In the steady incompressible case the mass equation is simply:
that is, the mass conservation for a steady incompressible flow states that the density along a streamline is constant. This leads to a considerable simplification of the Euler momentum equation:
The convenience of defining the total head for an inviscid liquid flow is now apparent:
in fact, the above equation can be simply written as:
That is, the momentum balance for a steady inviscid and incompressible flow in an external conservative field states that the total head along a streamline is constant.
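A minimal numeric sketch (not from the original article) using the constancy of total head along a streamline to find the pressure at a second point; the fluid properties and flow values are assumptions.

```python
rho = 1000.0   # kg/m^3, water (assumed)
g = 9.81       # m/s^2

# Point 1 on a streamline (assumed): elevation (m), pressure (Pa), speed (m/s).
z1, p1, u1 = 0.0, 2.0e5, 1.0
# Point 2 on the same streamline: higher and faster (assumed); pressure is unknown.
z2, u2 = 3.0, 4.0

# Constant total head: z + p/(rho*g) + u^2/(2g) is the same at both points.
head = z1 + p1 / (rho * g) + u1**2 / (2 * g)
p2 = rho * g * (head - z2 - u2**2 / (2 * g))
print(f"total head = {head:.3f} m, p2 = {p2/1e3:.1f} kPa")
```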
Irrotational flows
The Lamb form is also useful in irrotational flow, where the curl of the velocity (called vorticity) is equal to zero. In that case, the convection term in reduces to
Stresses
The effect of stress in the continuum flow is represented by the and terms; these are gradients of surface forces, analogous to stresses in a solid. Here is the pressure gradient and arises from the isotropic part of the Cauchy stress tensor. This part is given by the normal stresses that occur in almost all situations. The anisotropic part of the stress tensor gives rise to , which usually describes viscous forces; for incompressible flow, this is only a shear effect. Thus, is the deviatoric stress tensor, and the stress tensor is equal to:
where is the identity matrix in the space considered and the shear tensor.
All non-relativistic momentum conservation equations, such as the Navier–Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. By expressing the shear tensor in terms of viscosity and fluid velocity, and assuming constant density and viscosity, the Cauchy momentum equation will lead to the Navier–Stokes equations. By assuming inviscid flow, the Navier–Stokes equations can further simplify to the Euler equations.
The divergence of the stress tensor can be written as
The effect of the pressure gradient on the flow is to accelerate the flow in the direction from high pressure to low pressure.
As written in the Cauchy momentum equation, the stress terms and are yet unknown, so this equation alone cannot be used to solve problems. Besides the equations of motion—Newton's second law—a force model is needed relating the stresses to the flow motion. For this reason, assumptions based on natural observations are often applied to specify the stresses in terms of the other flow variables, such as velocity and density.
External forces
The vector field represents body forces per unit mass. Typically, these consist of only gravity acceleration, but may include others, such as electromagnetic forces. In non-inertial coordinate frames, other "inertial accelerations" associated with rotating coordinates may arise.
Often, these forces may be represented as the gradient of some scalar quantity , with in which case they are called conservative forces. Gravity in the direction, for example, is the gradient of . Because pressure from such gravitation arises only as a gradient, we may include it in the pressure term as a body force . The pressure and force terms on the right-hand side of the Navier–Stokes equation become
It is also possible to include external influences into the stress term rather than the body force term. This may even include antisymmetric stresses (inputs of angular momentum), in contrast to the usually symmetrical internal contributions to the stress tensor.
Nondimensionalisation
In order to make the equations dimensionless, a characteristic length and a characteristic velocity need to be defined. These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained:
Substitution of these inverted relations in the Euler momentum equations yields:
and by dividing for the first coefficient:
Now defining the Froude number:
the Euler number:
and the coefficient of skin-friction or the one usually referred as 'drag coefficient' in the field of aerodynamics:
by passing respectively to the conservative variables, i.e. the momentum density and the force density:
the equations are finally expressed (now omitting the indexes):
Cauchy equations in the Froude limit (corresponding to negligible external field) are named free Cauchy equations:
and can be eventually conservation equations. The limit of high Froude numbers (low external field) is thus notable for such equations and is studied with perturbation theory.
Finally in convective form the equations are:
3D explicit convective forms
Cartesian 3D coordinates
For asymmetric stress tensors, equations in general take the following forms:
Cylindrical 3D coordinates
Below, we write the main equation in pressure-tau form assuming that the stress tensor is symmetrical:
See also
Euler equations (fluid dynamics)
Navier–Stokes equations
Burnett equations
Chapman–Enskog expansion
Notes
References
Continuum mechanics
Eponymous equations of physics
Momentum
Partial differential equations
Synergetics (Fuller) | Synergetics is the empirical study of systems in transformation, with an emphasis on whole system behaviors unpredicted by the behavior of any components in isolation. R. Buckminster Fuller (1895–1983) named and pioneered the field. His two-volume work Synergetics: Explorations in the Geometry of Thinking, in collaboration with E. J. Applewhite, distills a lifetime of research into book form.
Since systems are identifiable at every scale, synergetics is necessarily interdisciplinary, embracing a broad range of scientific and philosophical topics, especially in the area of geometry, wherein the tetrahedron features as Fuller's model of the simplest system.
Despite mainstream endorsements such as the prologue by Arthur Loeb, and positive dust cover blurbs by U Thant and Arthur C. Clarke, along with the posthumous naming of the carbon allotrope "buckminsterfullerene", synergetics remains an off-beat subject, ignored for decades by most traditional curricula and academic departments, a fact Fuller himself considered evidence of a dangerous level of overspecialization.
His oeuvre inspired many developers to further pioneer offshoots from synergetics, especially geodesic dome and dwelling designs. Among Fuller's contemporaries were Joe Clinton (NASA), Don Richter (Temcor), Kenneth Snelson (tensegrity), J. Baldwin (New Alchemy Institute), and Medard Gabel (World Game). His chief assistants Amy Edmondson and Ed Popko have published primers that help popularize synergetics, Stafford Beer extended synergetics to applications in social dynamics, and J.F. Nystrom proposed a theory of computational cosmography. Research continues.
Definition
Fuller defined synergetics as follows:
A system of mensuration employing 60-degree vectorial coordination comprehensive to both physics and chemistry, and to both arithmetic and geometry, in rational whole numbers ... Synergetics explains much that has not been previously illuminated ... Synergetics follows the cosmic logic of the structural mathematics strategies of nature, which employ the paired sets of the six angular degrees of freedom, frequencies, and vectorially economical actions and their multi-alternative, equi-economical action options ... Synergetics discloses the excruciating awkwardness characterizing present-day mathematical treatment of the interrelationships of the independent scientific disciplines as originally occasioned by their mutual and separate lacks of awareness of the existence of a comprehensive, rational, coordinating system inherent in nature.
Other passages in Synergetics that outline the subject are its introduction (The Wellspring of Reality) and the section on Nature's Coordination (410.01). The chapter on Operational Mathematics (801.00-842.07) provides an easy-to-follow, easy-to-build introduction to some of Fuller's geometrical modeling techniques. So this chapter can help a new reader become familiar with Fuller's approach, style and geometry. One of Fuller's clearest expositions on "the geometry of thinking" occurs in the two-part essay "Omnidirectional Halo" which appears in his book No More Secondhand God.
Amy Edmondson describes synergetics "in the broadest terms, as the study of spatial complexity, and as such is an inherently comprehensive discipline." In her PhD study, Cheryl Clark synthesizes the scope of synergetics as "the study of how nature works, of the patterns inherent in nature, the geometry of environmental forces that impact on humanity."
Here's an abridged list of some of the discoveries Fuller claims for Synergetics again quoting directly:
The rational volumetric quantation or constant proportionality of the octahedron, the cube, the rhombic triacontahedron, and the rhombic dodecahedron when referenced to the tetrahedron as volumetric unity.
The trigonometric identification of the great-circle trajectories of the seven axes of symmetry with the 120 basic disequilibrium LCD triangles of the spherical icosahedron. (See Sec. 1043.00.)
The A and B Quanta Modules.
Omnirationality: the identification of triangling and tetrahedroning with second- and third-powering factors.
Omni-60-degree coordination versus 90-degree coordination.
The integration of geometry and philosophy in a single conceptual system providing a common language and accounting for both the physical and metaphysical.
Significance
Several authors have tried to characterize the importance of synergetics. Amy Edmondson asserts that "Experience with synergetics encourages a new way of approaching and solving problems. Its emphasis on visual and spatial phenomena combined with Fuller's holistic approach fosters the kind of lateral thinking which so often leads to creative breakthroughs." Cheryl Clark points out that "In his thousands of lectures, Fuller urged his audiences to study synergetics, saying 'I am confident that humanity's survival depends on all of our willingness to comprehend feelingly the way nature works.'"
Tetrahedral accounting
A chief hallmark of this system of mensuration is its unit of volume: a tetrahedron defined by four closest-packed unit-radius spheres. This tetrahedron anchors a set of concentrically arranged polyhedra proportioned in a canonical manner and inter-connected by a twisting-contracting, inside-outing dynamic that Fuller named the jitterbug transformation.
Corresponding to Fuller's use of a regular tetrahedron as his unit of volume is his replacing the cube as his model of 3rd powering.(Fig. 990.01) The relative size of a shape is indexed by its "frequency," a term he deliberately chose for its resonance with scientific meanings. "Size and time are synonymous. Frequency and size are the same phenomenon." (528.00) Shapes not having any size, because purely conceptual in the Platonic sense, are "prefrequency" or "subfrequency" in contrast.
Prime means sizeless, timeless, subfrequency. Prime is prehierarchical. Prime is prefrequency. Prime is generalized, a metaphysical conceptualization experience, not a special case.... (1071.10)
Generalized principles (scientific laws), although communicated energetically, do not inhere in the "special case" episodes, are considered "metaphysical" in that sense.
An energy event is always special case. Whenever we have experienced energy, we have special case. The physicist's first definition of physical is that it is an experience that is extracorporeally, remotely, instrumentally apprehensible. Metaphysical includes all the experiences that are excluded by the definition of physical. Metaphysical is always generalized principle.(1075.11)
Tetrahedral mensuration also involves substituting what Fuller calls the "isotropic vector matrix" (IVM) for the standard XYZ coordinate system, as his principal conceptual backdrop for special case physicality:
The synergetics coordinate system -- in contradistinction to the XYZ coordinate system -- is linearly referenced to the unit-vector-length edges of the regular tetrahedron, each of whose six unit vector edges occur in the isotropic vector matrix as the diagonals of the cube's six faces. (986.203)
The IVM scaffolding or skeletal framework is defined by cubic closest packed spheres (CCP), alternatively known as the FCC or face-centered cubic lattice, or as the octet truss in architecture (on which Fuller held a patent). The space-filling complementary tetrahedra and octahedra characterizing this matrix have prefrequency volumes 1 and 4 respectively (see above).
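A short numeric check (not part of Fuller's text) of two of the tetravolume ratios cited above: a regular octahedron sharing the tetrahedron's edge length has 4 tetravolumes, and the cube whose face diagonals are the tetrahedron's edges has 3.

```python
import math

a = 1.0  # common edge length (tetrahedron edge = IVM unit vector)

v_tet = a**3 / (6 * math.sqrt(2))      # regular tetrahedron, edge a
v_oct = math.sqrt(2) / 3 * a**3        # regular octahedron, edge a
# Cube whose six face diagonals are the tetrahedron's edges (cube edge a/sqrt(2)):
v_cube = (a / math.sqrt(2))**3

print(f"octahedron / tetrahedron = {v_oct / v_tet:.6f}")   # -> 4.0
print(f"cube / tetrahedron       = {v_cube / v_tet:.6f}")  # -> 3.0
```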
A third consequence of switching to tetrahedral mensuration is Fuller's review of the standard "dimension" concept. Whereas "height, width and depth" have been promulgated as three distinct dimensions within the Euclidean context, each with its own independence, Fuller considered the tetrahedron a minimal starting point for spatial cognition. His use of "4D" is in many passages close to synonymous with the ordinary meaning of "3D," with the dimensions of physicality (time, mass) considered additional dimensions.
Geometers and "schooled" people speak of length, breadth, and height as constituting a hierarchy of three independent dimensional states -- "one-dimensional," "two-dimensional," and "three-dimensional" -- which can be conjoined like building blocks. But length, breadth, and height simply do not exist independently of one another nor independently of all the inherent characteristics of all systems and of all systems' inherent complex of interrelationships with Scenario Universe.... All conceptual consideration is inherently four-dimensional. Thus the primitive is a priori four-dimensional, always based on the four planes of reference of the tetrahedron. There can never be less than four primitive dimensions. Any one of the stars or point-to-able "points" is a system-ultratunable, tunable, or infratunable but inherently four-dimensional. (527.702, 527.712)
Synergetics does not aim to replace or invalidate pre-existing geometry or mathematics, as evidenced by the opening dedication to H.S.M. Coxeter, whom Fuller considered the greatest geometer of his era. Fuller acknowledges his vocabulary is "remote" even while defending his word choices. (250.30)
Starting with Universe
Fuller's geometric explorations provide an experiential basis for designing and refining a philosophical language. His overarching concern is the co-occurring relationship between tensile and compressive tendencies within an eternally regenerative Universe. "Universe" is a proper name he defines in terms of "partially overlapping scenarios" while avoiding any static picture or model of same. His Universe is "non-simultaneously conceptual":
Because of the fundamental nonsimultaneity of universal structuring, a single, simultaneous, static model of Universe is inherently both nonexistent and conceptually impossible as well as unnecessary. Ergo, Universe does not have a shape. Do not waste your time, as man has been doing for ages, trying to think of a unit shape "outside of which there must be something," or "within which, at center, there must be a smaller something." (307.04)
U = MP describes a first division of Universe into metaphysical and physical aspects, the former associated with invisibly cohesive tension, the latter with energy events, both associative as matter and disassociative as radiation. (162.00)
Synergetics also distinguishes between gravitational and precessional relationships among moving bodies, the latter referring to the vast majority of cosmic relationships, which are non-180-degree and do not involve bodies "falling in" to one another (130.00 533.01, 1009.21). "Precession" is a nuanced term in the synergetics vocabulary, relating to the behavior of gyroscopes, but also to side-effects. (326.13, 1009.92)
Intuitive geometry
Fuller took an intuitive approach to his studies, often going into exhaustive empirical detail while at the same time seeking to cast his findings in their most general philosophical context.
For example, his sphere packing studies led him to generalize a formula for polyhedral numbers: 2PF² + 2, where F stands for "frequency" (the number of intervals between balls along an edge) and P for a product of low order primes (some integer). He then related the "multiplicative 2" and "additive 2" in this formula to the convex versus concave aspects of shapes, and to their polar spinnability respectively.
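As a quick illustration (not part of Fuller's text) of the 2PF² + 2 formula, the sketch below tabulates the cuboctahedral shell counts obtained with P = 5, i.e. 10F² + 2, and compares them with a direct count of closest-packed spheres at graph distance F in the FCC lattice; equating shell F with graph distance F is the assumption being tested here.

```python
from itertools import product

# The 12 nearest-neighbour vectors of the FCC / IVM lattice (cuboctahedron vertices).
NEIGHBOURS = [v for v in product((-1, 0, 1), repeat=3)
              if sorted(map(abs, v)) == [0, 1, 1]]

def shell_sizes(max_f):
    """Count closest-packed spheres in each concentric shell by breadth-first search."""
    frontier, seen, sizes = {(0, 0, 0)}, {(0, 0, 0)}, []
    for _ in range(max_f):
        frontier = {(x + dx, y + dy, z + dz)
                    for (x, y, z) in frontier
                    for (dx, dy, dz) in NEIGHBOURS} - seen
        seen |= frontier
        sizes.append(len(frontier))
    return sizes

for f, count in enumerate(shell_sizes(5), start=1):
    print(f"F = {f}:  2*5*F^2 + 2 = {10*f*f + 2:4d}   direct count = {count:4d}")
```

Both columns give 12, 42, 92, 162, 252 for the first five shells.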
These same polyhedra, developed through sphere packing and related by tetrahedral mensuration, he then spun around their various poles to form great circle networks and corresponding triangular tiles on the surface of a sphere. He exhaustively catalogues the central and surface angles of these spherical triangles and their related chord factors.
Fuller was continually on the lookout for ways to connect the dots, often purely speculatively. As an example of "dot connecting" he sought to relate the 120 basic disequilibrium LCD triangles of the spherical icosahedron to the plane net of his A module. (915.11, Fig. 913.01, Table 905.65)
The Jitterbug Transformation provides a unifying dynamic in this work, with much significance attached to the doubling and quadrupling of edges that occur, when a cuboctahedron is collapsed through icosahedral, octahedral and tetrahedral stages, then inside-outed and re-expanded in a complementary fashion. The JT forms a bridge between 3,4-fold rotationally symmetric shapes, and the 5-fold family, such as a rhombic triacontahedron, which later he analyzes in terms of the T module, another tetrahedral wedge with the same volume as his A and B modules.
He models energy transfer between systems by means of the double-edged octahedron and its ability to turn into a spiral (tetrahelix). Energy lost to one system always reappeared somewhere else in his Universe. He modeled a threshold between associative and disassociative energy patterns with his T-to-E module transformation ("E" for "Einstein").(Fig 986.411A)
"Synergetics" is in some ways a library of potential "science cartoons" (scenarios) described in prose and not heavily dependent upon mathematical notations. His demystification of a gyroscope's behavior in terms of a hammer thrower, pea shooter, and garden hose, is a good example of his commitment to using accessible metaphors. (Fig. 826.02A)
His modular dissection of a space-filling tetrahedron or MITE (minimum tetrahedron) into 2 A and 1 B module serves as a basis for more speculations about energy, the former being more energy conservative, the latter more dissipative in his analysis. (986.422, 921.20, 921.30). His focus is reminiscent of later cellular automaton studies in that tessellating modules would affect their neighbors over successive time intervals.
Social commentary
Synergetics informed Fuller's social analysis of the human condition. He identified "ephemeralization" as the trend towards accomplishing more with fewer physical resources, as a result of increasing comprehension of such "generalized principles" as E = mc².
He remained concerned that humanity's conditioned reflexes were not keeping pace with its engineering potential, emphasizing the "touch and go" nature of our current predicament.
Fuller hoped the streamlining effects of a more 60-degree-based approach within natural philosophy would help bridge the gap between C.P. Snow's "two cultures" and result in a greater level of scientific literacy in the general population. (935.24)
Academic acceptance
Fuller hoped to gain traction for his nomenclature in part by dedicating Synergetics to H.S.M. Coxeter (with permission) and by citing page 71 of the latter's Regular Polytopes in order to suggest where his A & B modules (depicted above), and by extension, many of his other concepts, might enter the mathematical literature (see Fig. 950.12).
Dr. Arthur Loeb provided a prologue and an appendix to Synergetics discussing its overlap with crystallography, chemistry and virology.
Fuller originally achieved more acceptance in the humanities as a poet-philosopher and architect. For example, he features in The Pound Era by Hugh Kenner published in 1971, prior to the publication of Synergetics. The journal Nature circled Operating Manual for Spaceship Earth as one of the five most formative books on sustainability.
Errata
A major error, caught by Fuller himself, involved a misapplication of his Synergetics Constant in Synergetics 1, which led to the mistaken belief he had discovered a radius 1 sphere of 5 tetravolumes. He provided a correction in Synergetics 2 in the form of his T&E module thread. (986.206 - 986.212)
About synergy
Synergetics refers to synergy: either the concept of whole system behaviors not predicted by the behaviors of its parts, or as another term for negative entropy — negentropy.
See also
Cloud Nine
Dymaxion House
Geodesic dome
Quadray coordinates
Octet Truss
Tensegrity
Tetrahedron
Trilinear coordinates
Notes
References
R. Buckminster Fuller (in collaboration with E.J. Applewhite, Synergetics: Explorations in the Geometry of Thinking , online edition hosted by R. W. Gray with permission , originally published by Macmillan , Vol. 1 in 1975 (with a preface and contribution by Arthur L. Loeb; ), and Vol. 2 in 1979, as two hard-bound volumes, re-editions in paperback.
Amy Edmondson, A Fuller Explanation, EmergentWorld LLC, 2007.
External links
Complete On-Line Edition of Fuller's Synergetics
Synergetics on the Web by K. Urner
Synergetics at the Buckminster Fuller Institute
Holism
Cybernetics
Buckminster Fuller
Euler's three-body problem | In physics and astronomy, Euler's three-body problem is to solve for the motion of a particle that is acted upon by the gravitational field of two other point masses that are fixed in space. This problem is exactly solvable, and yields an approximate solution for particles moving in the gravitational fields of prolate and oblate spheroids. This problem is named after Leonhard Euler, who discussed it in memoirs published in 1760. Important extensions and analyses were contributed subsequently by Lagrange, Liouville, Laplace, Jacobi, Le Verrier, Hamilton, Poincaré, Birkhoff among others.
Euler's problem also covers the case when the particle is acted upon by other inverse-square central forces, such as the electrostatic interaction described by Coulomb's law. The classical solutions of the Euler problem have been used to study chemical bonding, using a semiclassical approximation of the energy levels of a single electron moving in the field of two atomic nuclei, such as the diatomic ion HeH2+. This was first done by Wolfgang Pauli in his doctoral dissertation under Arnold Sommerfeld, a study of the first ion of molecular hydrogen, namely the hydrogen molecule-ion H2+. These energy levels can be calculated with reasonable accuracy using the Einstein–Brillouin–Keller method, which is also the basis of the Bohr model of atomic hydrogen. More recently, as explained further in the quantum-mechanical version, analytical solutions to the eigenvalues (energies) have been obtained: these are a generalization of the Lambert W function.
The exact solution, in the full three dimensional case, can be expressed in terms of Weierstrass's elliptic functions. For convenience, the problem may also be solved by numerical methods, such as Runge–Kutta integration of the equations of motion. The total energy of the moving particle is conserved, but its linear and angular momentum are not, since the two fixed centers can apply a net force and torque. Nevertheless, the particle has a second conserved quantity that corresponds to the angular momentum or to the Laplace–Runge–Lenz vector as limiting cases.
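Since the text mentions Runge–Kutta integration as a practical route, here is a minimal planar sketch (not from the original article); the force strengths, center positions, initial conditions, and step size are arbitrary assumptions, and conservation of the total energy is used as a sanity check.

```python
import numpy as np

mu1, mu2 = 1.0, 0.5          # force-strength constants of the two fixed centers (assumed)
c1 = np.array([-0.5, 0.0])   # positions of the fixed centers (assumed)
c2 = np.array([+0.5, 0.0])

def accel(r):
    """Inverse-square attraction toward both fixed centers (unit particle mass)."""
    d1, d2 = r - c1, r - c2
    return -mu1 * d1 / np.linalg.norm(d1)**3 - mu2 * d2 / np.linalg.norm(d2)**3

def energy(r, v):
    """Conserved total energy: kinetic plus the two inverse-distance potentials."""
    return 0.5 * v @ v - mu1 / np.linalg.norm(r - c1) - mu2 / np.linalg.norm(r - c2)

def rk4_step(r, v, h):
    """One classical fourth-order Runge-Kutta step for dr/dt = v, dv/dt = accel(r)."""
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5*h*k1v, accel(r + 0.5*h*k1r)
    k3r, k3v = v + 0.5*h*k2v, accel(r + 0.5*h*k2r)
    k4r, k4v = v + h*k3v, accel(r + h*k3r)
    return (r + h/6 * (k1r + 2*k2r + 2*k3r + k4r),
            v + h/6 * (k1v + 2*k2v + 2*k3v + k4v))

r, v = np.array([0.0, 1.0]), np.array([1.0, 0.0])  # initial state (assumed)
E0, h = energy(r, v), 1e-3
for _ in range(20_000):
    r, v = rk4_step(r, v, h)
print(f"relative energy drift after 20000 steps: {abs(energy(r, v) - E0) / abs(E0):.2e}")
```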
The Euler three-body problem is known by a variety of names, such as the problem of two fixed centers, the Euler–Jacobi problem, and the two-center Kepler problem. Various generalizations of Euler's problem are known; these generalizations add linear and inverse cubic forces and up to five centers of force. Special cases of these generalized problems include Darboux's problem and Velde's problem.
Overview and history
Euler's three-body problem is to describe the motion of a particle under the influence of two centers that attract the particle with central forces that decrease with distance as an inverse-square law, such as Newtonian gravity or Coulomb's law. Examples of Euler's problem include an electron moving in the electric field of two nuclei, such as the hydrogen molecule-ion H2+. The strength of the two inverse-square forces need not be equal; for illustration, the two nuclei may have different charges, as in the molecular ion HeH2+.
In Euler's three-body problem we assume that the two centres of attraction are stationary. This is not strictly true in a case like H2+, but the protons experience much less acceleration than the electron. However, the Euler three-body problem does not apply to a planet moving in the gravitational field of two stars, because in that case at least one of the stars experiences acceleration similar to that experienced by the planet.
This problem was first considered by Leonhard Euler, who showed that it had an exact solution in 1760. Joseph Louis Lagrange solved a generalized problem in which the centers exert both linear and inverse-square forces. Carl Gustav Jacob Jacobi showed that the rotation of the particle about the axis of the two fixed centers could be separated out, reducing the general three-dimensional problem to the planar problem.
In 2008, Birkhauser published a book entitled "Integrable Systems in Celestial Mechanics". In this book an Irish mathematician, Diarmuid Ó Mathúna, gives closed form solutions for both the planar two fixed centers problem and the three dimensional problem.
Constants of motion
The problem of two fixed centers conserves energy; in other words, the total energy $E$ is a constant of motion. The potential energy is given by
$$V(\mathbf{r}) = -\frac{\mu_1}{r_1} - \frac{\mu_2}{r_2}$$
where $\mathbf{r}$ represents the particle's position, and $r_1$ and $r_2$ are the distances between the particle and the centers of force; $\mu_1$ and $\mu_2$ are constants that measure the strength of the first and second forces, respectively. The total energy equals the sum of this potential energy and the particle's kinetic energy
$$E = \frac{|\mathbf{p}|^2}{2m} + V(\mathbf{r})$$
where $m$ and $\mathbf{p}$ are the particle's mass and linear momentum, respectively.
The particle's linear and angular momentum are not conserved in Euler's problem, since the two centers of force act like external forces upon the particle, which may yield a net force and torque on the particle. Nevertheless, Euler's problem has a second constant of motion
where is the separation of the two centers of force, and are the angles of the lines connecting the particle to the centers of force, with respect to the line connecting the centers. This second constant of motion was identified by E. T. Whittaker in his work on analytical mechanics, and generalized to dimensions by Coulson and Joseph in 1967. In the Coulson–Joseph form, the constant of motion is written
where the momentum component entering the expression is the one along the axis on which the attracting centers are located. This constant of motion corresponds to the total angular momentum squared in the limit when the two centers of force converge to a single point, and is proportional to the Laplace–Runge–Lenz vector in the limit when one of the centers is removed to infinity while the particle's distance from the other remains finite.
Quantum mechanical version
A special case of the quantum mechanical three-body problem is the hydrogen molecule ion, H2+. Two of the three bodies are nuclei and the third is a fast moving electron. The two nuclei are roughly 1800 times heavier than the electron and are thus modeled as fixed centers. It is well known that the Schrödinger wave equation is separable in prolate spheroidal coordinates and can be decoupled into two ordinary differential equations coupled by the energy eigenvalue and a separation constant.
However, solutions required series expansions from basis sets. Nonetheless, through experimental mathematics, it was found that the energy eigenvalue was mathematically a generalization of the Lambert W function (see Lambert W function and references therein for more details). The hydrogen molecular ion in the case of clamped nuclei can be completely worked out within a computer algebra system. The fact that its solution is an implicit function is revealing in itself. One of the successes of theoretical physics is not simply that a problem is amenable to a mathematical treatment, but that the algebraic equations involved can be symbolically manipulated until an analytical solution, preferably a closed-form solution, is isolated. This type of solution for a special case of the three-body problem shows what may be possible as an analytical solution for the quantum three-body and many-body problems.
Generalizations
An exhaustive analysis of the soluble generalizations of Euler's three-body problem was carried out by Adam Hiltebeitel in 1911. The simplest generalization of Euler's three-body problem is to add a third center of force midway between the original two centers, that exerts only a linear Hooke force. The next generalization is to augment the inverse-square force laws with a force that increases linearly with distance. The final set of generalizations is to add two fixed centers of force at positions that are imaginary numbers, with forces that are both linear and inverse-square laws, together with a force parallel to the axis of imaginary centers and varying as the inverse cube of the distance to that axis.
The solution to the original Euler problem is an approximate solution for the motion of a particle in the gravitational field of a prolate body, i.e., a sphere that has been elongated in one direction, such as a cigar shape. The corresponding approximate solution for a particle moving in the field of an oblate spheroid (a sphere squashed in one direction) is obtained by making the positions of the two centers of force into imaginary numbers. The oblate spheroid solution is astronomically more important, since most planets, stars and galaxies are approximately oblate spheroids; prolate spheroids are very rare.
The analogue of the oblate case in general relativity is a Kerr black hole. The geodesics around this object are known to be integrable, owing to the existence of a fourth constant of motion (in addition to energy, angular momentum, and the magnitude of four-momentum), known as the Carter constant. Euler's oblate three body problem and a Kerr black hole share the same mass moments, and this is most apparent if the metric for the latter is written in Kerr–Schild coordinates.
The analogue of the oblate case augmented with a linear Hooke term is a Kerr–de Sitter black hole. As in Hooke's law, the cosmological constant term depends linearly on distance from the origin, and the Kerr–de Sitter spacetime also admits a Carter-type constant quadratic in the momenta.
Mathematical solutions
Original Euler problem
In the original Euler problem, the two centers of force acting on the particle are assumed to be fixed in space; let these centers be located along the x-axis at ±a. The particle is likewise assumed to be confined to a fixed plane containing the two centers of force. The potential energy of the particle in the field of these centers is given by
V(x, y) = -\frac{\mu_1}{\sqrt{\left(x + a\right)^2 + y^2}} - \frac{\mu_2}{\sqrt{\left(x - a\right)^2 + y^2}}
where the proportionality constants μ1 and μ2 may be positive or negative. The two centers of attraction can be considered as the foci of a set of ellipses. If either center were absent, the particle would move on one of these ellipses, as a solution of the Kepler problem. Therefore, according to Bonnet's theorem, the same ellipses are the solutions for the Euler problem.
Introducing elliptic coordinates,
x = a \cosh\xi \cos\eta,
y = a \sinh\xi \sin\eta,
the potential energy can be written as
and the kinetic energy as
This is a Liouville dynamical system if ξ and η are taken as φ1 and φ2, respectively; thus, the function Y equals
and the function W equals
Using the general solution for a Liouville dynamical system, one obtains
Introducing a parameter u by the formula
gives the parametric solution
Since these are elliptic integrals, the coordinates ξ and η can be expressed as elliptic functions of u.
See also
Carter constant
Hydrogen molecular ion
Jacobi integral
Lagrangian point
Liouville dynamical system
Three-body problem
Notes
References
Further reading
External links
The Euler Archive
Orbits | 0.783578 | 0.98935 | 0.775233 |
Displacement (geometry) | In geometry and mechanics, a displacement is a vector whose length is the shortest distance from the initial to the final position of a point P undergoing motion. It quantifies both the distance and direction of the net or total motion along a straight line from the initial position to the final position of the point trajectory. A displacement may be identified with the translation that maps the initial position to the final position. Displacement is the shift in location when an object in motion changes from one position to another.
For motion over a given interval of time, the displacement divided by the length of the time interval defines the average velocity (a vector), whose magnitude is the average speed (a scalar quantity).
Formulation
A displacement may be formulated as a relative position (resulting from the motion), that is, as the final position x_f of a point relative to its initial position x_i. The corresponding displacement vector can be defined as the difference between the final and initial positions:
\mathbf{s} = \mathbf{x}_f - \mathbf{x}_i
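A minimal numeric sketch of this definition (the coordinates are invented for illustration): the displacement is the vector difference of the final and initial positions, and its length is the straight-line distance between them.

```python
import numpy as np

x_initial = np.array([1.0, 2.0, 0.0])    # arbitrary starting position (m)
x_final   = np.array([4.0, 6.0, 0.0])    # arbitrary final position (m)

displacement = x_final - x_initial       # the displacement vector
distance = np.linalg.norm(displacement)  # shortest distance between the two points

print(displacement)  # [3. 4. 0.]
print(distance)      # 5.0
```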
Rigid body
In dealing with the motion of a rigid body, the term displacement may also include the rotations of the body. In this case, the displacement of a particle of the body is called linear displacement (displacement along a line), while the rotation of the body is called angular displacement.
Derivatives
For a position vector \mathbf{x} that is a function of time t, the derivatives can be computed with respect to t. The first two derivatives are frequently encountered in physics.
Velocity
\mathbf{v} = \frac{d\mathbf{x}}{dt}
Acceleration
\mathbf{a} = \frac{d\mathbf{v}}{dt} = \frac{d^2\mathbf{x}}{dt^2}
Jerk
\mathbf{j} = \frac{d\mathbf{a}}{dt} = \frac{d^3\mathbf{x}}{dt^3}
These common names correspond to terminology used in basic kinematics. By extension, the higher order derivatives can be computed in a similar fashion. Study of these higher order derivatives can improve approximations of the original displacement function. Such higher-order terms are required in order to accurately represent the displacement function as a sum of an infinite series, enabling several analytical techniques in engineering and physics. The fourth order derivative is called jounce.
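As a hedged illustration (the sampled trajectory x(t) = t^3 is an arbitrary choice, not from the text), the derivatives listed above can be approximated by finite differences of a sampled position function; the exact values 3t^2, 6t and 6 are recovered to good accuracy.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 2001)     # time samples (s)
x = t**3                            # arbitrary 1-D position function x(t) = t^3

velocity     = np.gradient(x, t)             # approximates 3 t^2
acceleration = np.gradient(velocity, t)      # approximates 6 t
jerk         = np.gradient(acceleration, t)  # approximates the constant 6

print(round(velocity[1000], 3), round(acceleration[1000], 3), round(jerk[1000], 3))
# approximately 3.0, 6.0, 6.0 at t = 1
```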
Discussion
In considering motions of objects over time, the instantaneous velocity of the object is the rate of change of the displacement as a function of time. The instantaneous speed, the time rate of change of the distance travelled along a specific path, is therefore distinct from the velocity. The velocity may be equivalently defined as the time rate of change of the position vector. If one considers a moving initial position, or equivalently a moving origin (e.g. an initial position or origin which is fixed to a train wagon, which in turn moves on its rail track), the velocity of P (e.g. a point representing the position of a passenger walking on the train) may be referred to as a relative velocity; this is opposed to an absolute velocity, which is computed with respect to a point and coordinate axes which are considered to be at rest (an inertial frame of reference such as, for instance, a point fixed on the floor of the train station and the usual vertical and horizontal directions).
See also
Affine space
Deformation (mechanics)
Displacement field (mechanics)
Equipollence (geometry)
Motion vector
Position vector
Radial velocity
Screw displacement
References
External links
Motion (physics)
Length
Vector physical quantities
Geometric measurement
Kinematic properties | 0.779635 | 0.994334 | 0.775217 |
Rotation | Rotation or rotational motion is the circular movement of an object around a central line, known as an axis of rotation. A plane figure can rotate in either a clockwise or counterclockwise sense around a perpendicular axis intersecting anywhere inside or outside the figure at a center of rotation. A solid figure has an infinite number of possible axes and angles of rotation, including chaotic rotation (between arbitrary orientations), in contrast to rotation around a axis.
The special case of a rotation with an internal axis passing through the body's own center of mass is known as a spin (or autorotation). In that case, the surface intersection of the internal spin axis can be called a pole; for example, Earth's rotation defines the geographical poles.
A rotation around an axis completely external to the moving body is called a revolution (or orbit), e.g. Earth's orbit around the Sun. The ends of the external axis of revolution can be called the orbital poles.
Either type of rotation is involved in a corresponding type of angular velocity (spin angular velocity and orbital angular velocity) and angular momentum (spin angular momentum and orbital angular momentum).
Mathematics
Mathematically, a rotation is a rigid body movement which, unlike a translation, keeps at least one point fixed. This definition applies to rotations in two dimensions (in a plane), in which exactly one point is kept fixed; and also in three dimensions (in space), in which additional points may be kept fixed (as in rotation around a fixed axis, an infinite line).
All rigid body movements are rotations, translations, or combinations of the two.
A rotation is simply a progressive radial orientation to a common point. That common point lies within the axis of that motion. The axis is perpendicular to the plane of the motion.
If a rotation around a point or axis is followed by a second rotation around the same point/axis, a third rotation results. The reverse (inverse) of a rotation is also a rotation. Thus, the rotations around a point/axis form a group. However, a rotation around a point or axis and a rotation around a different point/axis may result in something other than a rotation, e.g. a translation.
Rotations around the x, y and z axes are called principal rotations. Rotation around any axis can be performed by taking a rotation around the x axis, followed by a rotation around the y axis, and followed by a rotation around the z axis. That is to say, any spatial rotation can be decomposed into a combination of principal rotations.
Fixed axis vs. fixed point
The combination of any sequence of rotations of an object in three dimensions about a fixed point is always equivalent to a rotation about an axis (which may be considered to be a rotation in the plane that is perpendicular to that axis). Similarly, the rotation rate of an object in three dimensions at any instant is about some axis, although this axis may be changing over time.
In other than three dimensions, it does not make sense to describe a rotation as being around an axis, since more than one axis through the object may be kept fixed; instead, simple rotations are described as being in a plane. In four or more dimensions, a combination of two or more rotations, each in a plane, is not in general a rotation in a single plane.
Axis of 2-dimensional rotations
2-dimensional rotations, unlike the 3-dimensional ones, possess no axis of rotation, only a point about which the rotation occurs. This is equivalent, for linear transformations, with saying that there is no direction in the plane which is kept unchanged by a 2-dimensional rotation, except, of course, the identity.
The question of the existence of such a direction is the question of existence of an eigenvector for the matrix A representing the rotation. Every 2D rotation around the origin through an angle \theta in counterclockwise direction can be quite simply represented by the following matrix:
A = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
A standard eigenvalue determination leads to the characteristic equation
\lambda^2 - 2\lambda\cos\theta + 1 = 0,
which has
\lambda = \cos\theta \pm i\sin\theta = e^{\pm i\theta}
as its eigenvalues. Therefore, there is no real eigenvalue whenever \sin\theta \neq 0, meaning that no real vector in the plane is kept unchanged by A.
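The following short check (an illustrative aside using an arbitrary angle) builds the 2D rotation matrix numerically and confirms that its eigenvalues are the complex pair e^{±iθ}, so no real direction is left unchanged unless sin θ = 0.

```python
import numpy as np

theta = np.deg2rad(30.0)                       # arbitrary rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)                             # approximately cos(30°) ± i·sin(30°)
print(np.allclose(np.sort_complex(eigenvalues),
                  np.sort_complex([np.exp(1j * theta), np.exp(-1j * theta)])))  # True
```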
Rotation angle and axis in 3 dimensions
Knowing that the trace is an invariant, the rotation angle \theta for a proper orthogonal 3×3 rotation matrix A is found by
\theta = \arccos\left(\frac{\operatorname{tr}(A) - 1}{2}\right)
Using the principal arc-cosine, this formula gives a rotation angle satisfying 0^\circ \le \theta \le 180^\circ. The corresponding rotation axis must be defined to point in a direction that limits the rotation angle to not exceed 180 degrees. (This can always be done because any rotation of more than 180 degrees about an axis \mathbf{m} can always be written as a rotation having 0^\circ \le \theta \le 180^\circ if the axis is replaced with \mathbf{n} = -\mathbf{m}.)
Every proper rotation in 3D space has an axis of rotation, which is defined such that any vector \mathbf{v} that is aligned with the rotation axis will not be affected by rotation. Accordingly, A\mathbf{v} = \mathbf{v}, and the rotation axis therefore corresponds to an eigenvector of the rotation matrix associated with an eigenvalue of 1. As long as the rotation angle \theta is nonzero (i.e., the rotation is not the identity tensor), there is one and only one such direction. Because A has only real components, there is at least one real eigenvalue, and the remaining two eigenvalues must be complex conjugates of each other (see Eigenvalues and eigenvectors#Eigenvalues and the characteristic polynomial). Knowing that 1 is an eigenvalue, it follows that the remaining two eigenvalues are complex conjugates of each other, but this does not imply that they are complex; they could be real with double multiplicity. In the degenerate case of a rotation angle \theta = 180^\circ, the remaining two eigenvalues are both equal to −1. In the degenerate case of a zero rotation angle, the rotation matrix is the identity, and all three eigenvalues are 1 (which is the only case for which the rotation axis is arbitrary).
A spectral analysis is not required to find the rotation axis. If \mathbf{n} denotes the unit eigenvector aligned with the rotation axis, and if \theta denotes the rotation angle, then it can be shown that 2\sin(\theta)\,\mathbf{n} = \{A_{32} - A_{23},\; A_{13} - A_{31},\; A_{21} - A_{12}\}. Consequently, the expense of an eigenvalue analysis can be avoided by simply normalizing this vector if it has a nonzero magnitude. On the other hand, if this vector has a zero magnitude, it means that \sin(\theta) = 0. In other words, this vector will be zero if and only if the rotation angle is 0 or 180 degrees, and the rotation axis may be assigned in this case by normalizing any column of A + I that has a nonzero magnitude.
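A compact numerical sketch of this recipe (illustrative only; the axis and angle used to build the test matrix are arbitrary choices): the angle is recovered from the trace and the axis from the antisymmetric part of the matrix, which works whenever the rotation is neither the identity nor a 180 degree turn.

```python
import numpy as np

def axis_angle_from_rotation(A):
    """Recover the rotation angle from the trace and the axis from the
    antisymmetric part of a proper orthogonal 3x3 matrix (non-degenerate case)."""
    angle = np.arccos((np.trace(A) - 1.0) / 2.0)
    axis = np.array([A[2, 1] - A[1, 2],
                     A[0, 2] - A[2, 0],
                     A[1, 0] - A[0, 1]])
    return axis / np.linalg.norm(axis), angle

# Build a test rotation of 40 degrees about an arbitrary unit axis using Rodrigues' formula.
n = np.array([1.0, 2.0, 2.0]) / 3.0
theta = np.deg2rad(40.0)
K = np.array([[0.0, -n[2], n[1]],
              [n[2], 0.0, -n[0]],
              [-n[1], n[0], 0.0]])
A = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

axis, angle = axis_angle_from_rotation(A)
print(np.degrees(angle))   # ~ 40.0
print(axis)                # ~ [0.333, 0.667, 0.667]
```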
This discussion applies to a proper rotation, and hence \det A = 1. Any improper orthogonal 3x3 matrix B may be written as B = -A, in which A is proper orthogonal. That is, any improper orthogonal 3x3 matrix may be decomposed as a proper rotation (from which an axis of rotation can be found as described above) followed by an inversion (multiplication by −1). It follows that the rotation axis of A is also the eigenvector of B corresponding to an eigenvalue of −1.
Rotation plane
Just as every three-dimensional rotation has a rotation axis, every three-dimensional rotation also has a plane, which is perpendicular to the rotation axis, and which is left invariant by the rotation. The rotation, restricted to this plane, is an ordinary 2D rotation.
The proof proceeds similarly to the above discussion. First, suppose that all eigenvalues of the 3D rotation matrix A are real. This means that there is an orthogonal basis, made by the corresponding eigenvectors (which are necessarily orthogonal), over which the effect of the rotation matrix is just a stretching. If we write A in this basis, it is diagonal; but a diagonal orthogonal matrix is made of just +1s and −1s in the diagonal entries. Therefore, we do not have a proper rotation, but either the identity or the result of a sequence of reflections.
It follows, then, that a proper rotation has some complex eigenvalue. Let v be the corresponding eigenvector. Then, as we showed in the previous topic, is also an eigenvector, and and are such that their scalar product vanishes:
because, since is real, it equals its complex conjugate , and and are both representations of the same scalar product between and .
This means and are orthogonal vectors. Also, they are both real vectors by construction. These vectors span the same subspace as and , which is an invariant subspace under the application of A. Therefore, they span an invariant plane.
This plane is orthogonal to the invariant axis, which corresponds to the remaining eigenvector of A, with eigenvalue 1, because of the orthogonality of the eigenvectors of A.
Rotation of vectors
A vector is said to be rotating if it changes its orientation. This generally happens only when its rate-of-change vector has a non-zero component perpendicular to the original vector. This can be shown by considering a vector \mathbf{A} whose magnitude is constant with respect to a parameter t, for which
0 = \frac{d}{dt}\left(\mathbf{A}\cdot\mathbf{A}\right) = 2\,\mathbf{A}\cdot\frac{d\mathbf{A}}{dt},
showing that the rate-of-change vector is perpendicular to \mathbf{A}. This also gives the rate of change of a unit vector, by taking \hat{\mathbf{A}}, whose magnitude is fixed at 1, to be such a vector: the vector \frac{d\hat{\mathbf{A}}}{dt} is perpendicular to the vector \hat{\mathbf{A}}.
From
\mathbf{A} = |\mathbf{A}|\,\hat{\mathbf{A}}, \qquad \frac{d\mathbf{A}}{dt} = \frac{d|\mathbf{A}|}{dt}\,\hat{\mathbf{A}} + |\mathbf{A}|\,\frac{d\hat{\mathbf{A}}}{dt},
since the first term is parallel to \hat{\mathbf{A}} and the second perpendicular to it, we can conclude in general that the parallel and perpendicular components of the rate of change of a vector independently influence only the magnitude and the orientation of the vector, respectively. Hence, a rotating vector always has a non-zero perpendicular component of its rate of change vector against the vector itself.
In higher dimensions
As dimensions increase, the number of rotation vectors increases. Along a four-dimensional space (a hypervolume), rotations occur along the x, y, z, and w axes. An object rotated on a w axis intersects through various volumes, where each intersection is equal to a self-contained volume at an angle. This gives way to a new axis of rotation in a 4D hypervolume, where a 3D object can be rotated perpendicular to the z axis.
Physics
The speed of rotation is given by the angular frequency (rad/s) or frequency (turns per time), or period (seconds, days, etc.). The time-rate of change of angular frequency is angular acceleration (rad/s2), caused by torque. The ratio of torque to the angular acceleration is given by the moment of inertia.
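A minimal worked sketch with invented numbers: given a torque and a moment of inertia, the angular acceleration follows from their ratio.

```python
torque = 12.0             # N·m, arbitrary value
moment_of_inertia = 3.0   # kg·m^2, arbitrary value

angular_acceleration = torque / moment_of_inertia   # rad/s^2
print(angular_acceleration)   # 4.0
```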
The angular velocity vector (an axial vector) also describes the direction of the axis of rotation. Similarly, the torque is an axial vector.
The physics of the rotation around a fixed axis is mathematically described with the axis–angle representation of rotations. According to the right-hand rule, the direction away from the observer is associated with clockwise rotation and the direction towards the observer with counterclockwise rotation, like a screw.
Circular motion
It is possible for objects to have periodic circular trajectories without changing their orientation. These types of motion are treated under circular motion instead of rotation, more specifically as a curvilinear translation. Since translation involves displacement of rigid bodies while preserving the orientation of the body, in the case of curvilinear translation, all the points have the same instantaneous velocity whereas relative motion can only be observed in motions involving rotation.
In rotation, the orientation of the object changes and the change in orientation is independent of the observers whose frames of reference have constant relative orientation over time. By Euler's theorem, any change in orientation can be described by rotation about an axis through a chosen reference point. Hence, the distinction between rotation and circular motion can be made by requiring an instantaneous axis for rotation, a line passing through the instantaneous center of the circle and perpendicular to the plane of motion. In the example depicting curvilinear translation, the centers of the circles for the motion lie on a straight line, but that line is parallel to the plane of motion and hence does not resolve to an axis of rotation. In contrast, a rotating body will always have its instantaneous axis of zero velocity, perpendicular to the plane of motion.
More generally, due to Chasles' theorem, any motion of rigid bodies can be treated as a composition of rotation and translation, called general plane motion. A simple example of pure rotation is considered in rotation around a fixed axis.
Cosmological principle
The laws of physics are currently believed to be invariant under any fixed rotation. (Although they do appear to change when viewed from a rotating viewpoint: see rotating frame of reference.)
In modern physical cosmology, the cosmological principle is the notion that the distribution of matter in the universe is homogeneous and isotropic when viewed on a large enough scale, since the forces are expected to act uniformly throughout the universe and have no preferred direction, and should, therefore, produce no observable irregularities in the large scale structuring over the course of evolution of the matter field that was initially laid down by the Big Bang.
In particular, for a system which behaves the same regardless of how it is oriented in space, its Lagrangian is rotationally invariant. According to Noether's theorem, if the action (the integral over time of its Lagrangian) of a physical system is invariant under rotation, then angular momentum is conserved.
Euler rotations
Euler rotations provide an alternative description of a rotation. It is a composition of three rotations defined as the movement obtained by changing one of the Euler angles while leaving the other two constant. Euler rotations are never expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. They constitute a mixed axes of rotation system, where the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes and the third one is an intrinsic rotation around an axis fixed in the body that moves.
These rotations are called precession, nutation, and intrinsic rotation.
Astronomy
In astronomy, rotation is a commonly observed phenomenon; it includes both spin (auto-rotation) and orbital revolution.
Spin
Stars, planets and similar bodies may spin around on their axes. The rotation rate of planets in the solar system was first measured by tracking visual features. Stellar rotation is measured through Doppler shift or by tracking active surface features. An example is sunspots, which rotate about the Sun's axis at the same velocity as the outer gases that make up the Sun.
Under some circumstances orbiting bodies may lock their spin rotation to their orbital rotation around a larger body. This effect is called tidal locking; the Moon is tidally locked to the Earth.
The Earth's rotation induces a centrifugal acceleration in the reference frame of the Earth which slightly counteracts the effect of gravitation the closer one is to the equator. Earth's gravity combines both mass effects such that an object weighs slightly less at the equator than at the poles. Another consequence is that over time the Earth is slightly deformed into an oblate spheroid; a similar equatorial bulge develops for other planets.
Another consequence of the rotation of a planet are the phenomena of precession and nutation. Like a gyroscope, the overall effect is a slight "wobble" in the movement of the axis of a planet. Currently the tilt of the Earth's axis to its orbital plane (obliquity of the ecliptic) is 23.44 degrees, but this angle changes slowly (over thousands of years). (See also Precession of the equinoxes and Pole Star.)
Revolution
While revolution is often used as a synonym for rotation, in many fields, particularly astronomy and related fields, revolution, often referred to as orbital revolution for clarity, is used when one body moves around another while rotation is used to mean the movement around an axis. Moons revolve around their planets, planets revolve about their stars (such as the Earth around the Sun); and stars slowly revolve about their galactic centers. The motion of the components of galaxies is complex, but it usually includes a rotation component.
Retrograde rotation
Most planets in the Solar System, including Earth, spin in the same direction as they orbit the Sun. The exceptions are Venus and Uranus. Venus may be thought of as rotating slowly backward (or being "upside down"). Uranus rotates nearly on its side relative to its orbit. Current speculation is that Uranus started off with a typical prograde orientation and was knocked on its side by a large impact early in its history. The dwarf planet Pluto (formerly considered a planet) is anomalous in several ways, including that it also rotates on its side.
Flight dynamics
In flight dynamics, the principal rotations described with Euler angles above are known as pitch, roll and yaw. The term rotation is also used in aviation to refer to the upward pitch (nose moves up) of an aircraft, particularly when starting the climb after takeoff.
Principal rotations have the advantage of modelling a number of physical systems such as gimbals, and joysticks, so are easily visualised, and are a very compact way of storing a rotation. But they are difficult to use in calculations as even simple operations like combining rotations are expensive to do, and suffer from a form of gimbal lock where the angles cannot be uniquely calculated for certain rotations.
Amusement rides
Many amusement rides provide rotation. A Ferris wheel has a horizontal central axis, and parallel axes for each gondola, where the rotation of each gondola is opposite to that of the wheel, maintained by gravity or mechanically. As a result, at any time the orientation of the gondola is upright (not rotated), just translated. The tip of the translation vector describes a circle. A carousel provides rotation about a vertical axis. Many rides provide a combination of rotations about several axes. In Chair-O-Planes the rotation about the vertical axis is provided mechanically, while the rotation about the horizontal axis is due to the centripetal force. In roller coaster inversions the rotation about the horizontal axis is one or more full cycles, where inertia keeps people in their seats.
Sports
Rotation of a ball or other object, usually called spin, plays a role in many sports, including topspin and backspin in tennis, English, follow and draw in billiards and pool, curve balls in baseball, spin bowling in cricket, flying disc sports, etc. Table tennis paddles are manufactured with different surface characteristics to allow the player to impart a greater or lesser amount of spin to the ball.
Rotation of a player one or more times around a vertical axis may be called spin in figure skating, twirling (of the baton or the performer) in baton twirling, or 360, 540, 720, etc. in snowboarding, etc. Rotation of a player or performer one or more times around a horizontal axis may be called a flip, roll, somersault, heli, etc. in gymnastics, waterskiing, or many other sports, or a one-and-a-half, two-and-a-half, gainer (starting facing away from the water), etc. in diving, etc. A combination of vertical and horizontal rotation (back flip with 360°) is called a möbius in waterskiing freestyle jumping.
Rotation of a player around a vertical axis, generally between 180 and 360 degrees, may be called a spin move and is used as a deceptive or avoidance manoeuvre, or in an attempt to play, pass, or receive a ball or puck, etc., or to afford a player a view of the goal or other players. It is often seen in hockey, basketball, football of various codes, tennis, etc.
See also
Circular motion
Cyclone – large scale rotating air mass
Instant centre of rotation – instantaneously fixed point on an arbitrarily moving rigid body
Mach's principle – speculative hypothesis that a physical law relates the motion of the distant stars to the local inertial frame
Orientation (geometry)
Point reflection
Rolling – motion of two objects in contact with each-other without sliding
Rotation (quantity) – a unitless scalar representing the number of rotations
Rotation around a fixed axis
Rotation formalisms in three dimensions
Rotating locomotion in living systems
Top – spinning toy
References
External links
Product of Rotations at cut-the-knot. cut-the-knot.org
When a Triangle is Equilateral at cut-the-knot. cut-the-knot.org
Rotate Points Using Polar Coordinates, howtoproperly.com
Rotation in Two Dimensions by Sergio Hannibal Mejia after work by Roger Germundsson and Understanding 3D Rotation by Roger Germundsson, Wolfram Demonstrations Project. demonstrations.wolfram.com
Rotation, Reflection, and Frame Change: Orthogonal tensors in computational engineering mechanics, IOP Publishing
Euclidean geometry
Classical mechanics
Orientation (geometry)
Kinematics | 0.77939 | 0.994598 | 0.775179 |
Electromechanics | Electromechanics combines processes and procedures drawn from electrical engineering and mechanical engineering. Electromechanics focuses on the interaction of electrical and mechanical systems as a whole and how the two systems interact with each other. This process is especially prominent in systems such as those of DC or AC rotating electrical machines which can be designed and operated to generate power from a mechanical process (generator) or used to power a mechanical effect (motor). Electrical engineering in this context also encompasses electronics engineering.
Electromechanical devices are ones which have both electrical and mechanical processes. Strictly speaking, a manually operated switch is an electromechanical component due to the mechanical movement causing an electrical output. Though this is true, the term is usually understood to refer to devices which involve an electrical signal to create mechanical movement, or, vice versa, mechanical movement to create an electric signal. Such devices often involve electromagnetic principles, as in relays, which allow a voltage or current to control another, usually isolated, circuit voltage or current by mechanically switching sets of contacts, and solenoids, by which a voltage can actuate a moving linkage, as in solenoid valves.
Before the development of modern electronics, electromechanical devices were widely used in complicated subsystems of parts, including electric typewriters, teleprinters, clocks, initial television systems, and the very early electromechanical digital computers. Solid-state electronics have replaced electromechanics in many applications.
History
The first electric motor was invented in 1821 by Michael Faraday. The motor was developed only a year after Hans Christian Ørsted discovered that the flow of electric current creates a proportional magnetic field. This early motor was simply a wire partially submerged into a glass of mercury with a magnet at the bottom. When the wire was connected to a battery, a magnetic field was created around it, and the interaction of this field with the magnetic field given off by the magnet caused the wire to spin.
Ten years later the first electric generator was invented, again by Michael Faraday. This generator consisted of a magnet passing through a coil of wire and inducing current that was measured by a galvanometer. Faraday's research and experiments into electricity are the basis of most of modern electromechanical principles known today.
Interest in electromechanics surged with the research into long distance communication. The Industrial Revolution's rapid increase in production gave rise to a demand for intracontinental communication, allowing electromechanics to make its way into public service. Relays originated with telegraphy as electromechanical devices were used to regenerate telegraph signals. The Strowger switch, the Panel switch, and similar devices were widely used in early automated telephone exchanges. Crossbar switches were first widely installed in the middle 20th century in Sweden, the United States, Canada, and Great Britain, and these quickly spread to the rest of the world.
Electromechanical systems saw a massive leap in progress from 1910-1945 as the world was put into global war twice. World War I saw a burst of new electromechanics as spotlights and radios were used by all countries. By World War II, countries had developed and centralized their military around the versatility and power of electromechanics. One example of these still used today is the alternator, which was created to power military equipment in the 1950s and later repurposed for automobiles in the 1960s. Post-war America greatly benefited from the military's development of electromechanics as household work was quickly replaced by electromechanical systems such as microwaves, refrigerators, and washing machines. The electromechanical television systems of the late 19th century were less successful.
Electric typewriters developed, up to the 1980s, as "power-assisted typewriters". They contained a single electrical component, the motor. Where the keystroke had previously moved a typebar directly, now it engaged mechanical linkages that directed mechanical power from the motor into the typebar. This was also true of the later IBM Selectric. At Bell Labs, in 1946, the Bell Model V computer was developed. It was an electromechanical relay-based device; cycles took seconds. In 1968 electromechanical systems were still under serious consideration for an aircraft flight control computer, until a device based on large scale integration electronics was adopted in the Central Air Data Computer.
Microelectromechanical systems (MEMS)
Microelectromechanical systems (MEMS) have roots in the silicon revolution, which can be traced back to two important silicon semiconductor inventions from 1959: the monolithic integrated circuit (IC) chip by Robert Noyce at Fairchild Semiconductor, and the metal–oxide–semiconductor field-effect transistor (MOSFET) invented at Bell Labs between 1955 and 1960, after Frosch and Derick discovered and used surface passivation by silicon dioxide to create the first planar transistors, the first in which drain and source were adjacent at the same surface. MOSFET scaling, the miniaturisation of MOSFETs on IC chips, led to the miniaturisation of electronics (as predicted by Moore's law and Dennard scaling). This laid the foundations for the miniaturisation of mechanical systems, with the development of micromachining technology based on silicon semiconductor devices, as engineers began realizing that silicon chips and MOSFETs could interact and communicate with the surroundings and process things such as chemicals, motions and light. One of the first silicon pressure sensors was isotropically micromachined by Honeywell in 1962.
An early example of a MEMS device is the resonant-gate transistor, an adaptation of the MOSFET, developed by Harvey C. Nathanson in 1965. During the 1970s to early 1980s, a number of MOSFET microsensors were developed for measuring physical, chemical, biological and environmental parameters. In the early 21st century, there has been research on nanoelectromechanical systems (NEMS).
Modern practice
Today, electromechanical processes are mainly used by power companies. All fuel based generators convert mechanical movement to electrical power. Some renewable energies such as wind and hydroelectric are powered by mechanical systems that also convert movement to electricity.
In the last thirty years of the 20th century, equipment which would generally have used electromechanical devices became less expensive. This equipment became cheaper because it used more reliable integrated microcontroller circuits containing ultimately a few million transistors, and a program to carry out the same task through logic. With electromechanical components there were moving parts, such as mechanical electric actuators. This more reliable logic has replaced most electromechanical devices, because any point in a system which must rely on mechanical movement for proper operation will inevitably have mechanical wear and eventually fail. Properly designed electronic circuits without moving parts will continue to operate correctly almost indefinitely and are used in most simple feedback control systems. Circuits without moving parts appear in a large number of items from traffic lights to washing machines.
Another kind of electromechanical device is the piezoelectric device, which does not use electromagnetic principles. Piezoelectric devices can create sound or vibration from an electrical signal or create an electrical signal from sound or mechanical vibration.
To become an electromechanical engineer, typical college courses involve mathematics, engineering, computer science, machine design, and other automotive classes that help build skill in troubleshooting and analyzing issues with machines. To be an electromechanical engineer, a bachelor's degree is required, usually in electrical, mechanical, or electromechanical engineering. As of April 2018, only two universities, Michigan Technological University and Wentworth Institute of Technology, offer the major of electromechanical engineering. To enter the electromechanical field as an entry-level technician, an associate degree is all that is required.
As of 2016, approximately 13,800 people work as electro-mechanical technicians in the US. The projected job outlook for technicians from 2016 to 2026 is 4% growth, which corresponds to an employment change of about 500 positions. This outlook is slower than average.
See also
Electromechanical modeling
Adding machine
Automation
Automatic transmission system
Electric machine
Electric power conversion
Electricity meter
Enigma machine
Kerrison Predictor
Mechatronics
Power engineering
Relay
Robotics
SAW filter
Stepping switch
Solenoid valve
Thermostat
Torpedo Data Computer
Unit record equipment
References
Citations
Sources
Davim, J. Paulo, editor (2011) Mechatronics, John Wiley & Sons .
Szolc T., Konowrocki R., Michajlow M., Pregowska A., An Investigation of the Dynamic Electromechanical Coupling Effects in Machine Drive Systems Driven by Asynchronous Motors, Mechanical Systems and Signal Processing, , Vol.49, pp. 118–134, 2014
"WWI: Technology and the weapons of war | NCpedia". www.ncpedia.org. Retrieved 2018-04-22.
Further reading
A first course in electromechanics. By Hugh Hildreth Skilling. Wiley, 1960.
Electromechanics: a first course in electromechanical energy conversion, Volume 1. By Hugh Hildreth Skilling. R. E. Krieger Pub. Co., Jan 1, 1979.
Electromechanics and electrical machinery. By J. F. Lindsay, M. H. Rashid. Prentice-Hall, 1986.
Electromechanical motion devices. By Hi-Dong Chai. Prentice Hall PTR, 1998.
Mechatronics: Electromechanics and Contromechanics. By Denny K. Miu. Springer London, Limited, 2011. | 0.781406 | 0.991996 | 0.775152 |
Energy conversion efficiency | Energy conversion efficiency (η) is the ratio between the useful output of an energy conversion machine and the input, in energy terms. The input, as well as the useful output may be chemical, electric power, mechanical work, light (radiation), or heat. The resulting value, η (eta), ranges between 0 and 1.
Overview
Energy conversion efficiency depends on the usefulness of the output. All or part of the heat produced from burning a fuel may become rejected waste heat if, for example, work is the desired output from a thermodynamic cycle. An energy converter is a device that carries out such an energy transformation; a light bulb, for example, falls into the category of energy converters.
Even though the definition includes the notion of usefulness, efficiency is considered a technical or physical term. Goal or mission oriented terms include effectiveness and efficacy.
Generally, energy conversion efficiency is a dimensionless number between 0 and 1.0, or 0% to 100%. Efficiencies cannot exceed 100%, which would result in a perpetual motion machine, which is impossible.
However, other effectiveness measures that can exceed 1.0 are used for refrigerators, heat pumps and other devices that move heat rather than convert it. It is not called efficiency, but the coefficient of performance, or COP. It is a ratio of useful heating or cooling provided relative to the work (energy) required. Higher COPs equate to higher efficiency, lower energy (power) consumption and thus lower operating costs. The COP usually exceeds 1, especially in heat pumps, because instead of just converting work to heat (which, if 100% efficient, would be a COP of 1), it pumps additional heat from a heat source to where the heat is required. Most air conditioners have a COP of 2.3 to 3.5.
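A small worked sketch (the figures are invented for illustration): a heat pump that delivers 3.5 kWh of heat for every 1 kWh of electrical work has a COP of 3.5.

```python
# Illustrative COP calculation with made-up numbers.
heat_delivered_kwh = 3.5   # useful heating provided
work_input_kwh = 1.0       # electrical energy consumed

cop = heat_delivered_kwh / work_input_kwh
print(cop)  # 3.5 -- greater than 1 because heat is moved, not created from work alone
```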
When talking about the efficiency of heat engines and power stations the convention should be stated, i.e., HHV (a.k.a. gross heating value) or LHV (a.k.a. net heating value), and whether gross output (at the generator terminals) or net output (at the power station fence) is being considered. The two are separate but both must be stated. Failure to do so causes endless confusion.
Related, more specific terms include
Electrical efficiency, useful power output per electrical power consumed;
Mechanical efficiency, where one form of mechanical energy (e.g. potential energy of water) is converted to mechanical energy (work);
Thermal efficiency or Fuel efficiency, useful heat and/or work output per input energy such as the fuel consumed;
'Total efficiency', e.g., for cogeneration, useful electric power and heat output per fuel energy consumed. Same as the thermal efficiency.
Luminous efficiency, the portion of the emitted electromagnetic radiation that is usable for human vision.
Chemical conversion efficiency
The change of Gibbs energy of a defined chemical transformation at a particular temperature is the minimum theoretical quantity of energy required to make that change occur (if the change in Gibbs energy between reactants and products is positive) or the maximum theoretical energy that might be obtained from that change (if the change in Gibbs energy between reactants and products is negative). The energy efficiency of a process involving chemical change may be expressed relative to these theoretical minima or maxima. The difference between the change of enthalpy and the change of Gibbs energy of a chemical transformation at a particular temperature indicates the heat input required or the heat removal (cooling) required to maintain that temperature.
A fuel cell may be considered to be the reverse of electrolysis. For example, an ideal fuel cell operating at a temperature of 25 °C having gaseous hydrogen and gaseous oxygen as inputs and liquid water as the output could produce a theoretical maximum amount of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water produced and would require 48.701 kJ (0.01353 kWh) per gram mol of water produced of heat energy to be removed from the cell to maintain that temperature.
An ideal electrolysis unit operating at a temperature of 25 °C having liquid water as the input and gaseous hydrogen and gaseous oxygen as products would require a theoretical minimum input of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water consumed and would require 48.701 kJ (0.01353 kWh) per gram mol of water consumed of heat energy to be added to the unit to maintain that temperature. It would operate at a cell voltage of 1.24 V.
For a water electrolysis unit operating at a constant temperature of 25 °C without the input of any additional heat energy, electrical energy would have to be supplied at a rate equivalent of the enthalpy (heat) of reaction or 285.830 kJ (0.07940 kWh) per gram mol of water consumed. It would operate at a cell voltage of 1.48 V. The electrical energy input of this cell is 1.20 times greater than the theoretical minimum so the energy efficiency is 0.83 compared to the ideal cell.
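The 0.83 figure quoted above follows directly from the ratio of the Gibbs energy of the reaction to its enthalpy; a quick arithmetic check using the values already given in this section is sketched below.

```python
# Values per gram mol of water, taken from the paragraphs above.
gibbs_energy_kj = 237.129   # minimum electrical energy for ideal electrolysis at 25 °C
enthalpy_kj = 285.830       # electrical energy needed at constant 25 °C with no heat input

efficiency = gibbs_energy_kj / enthalpy_kj
print(round(efficiency, 2))                     # 0.83
print(round(enthalpy_kj / gibbs_energy_kj, 2))  # about 1.2, the factor quoted above
```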
A water electrolysis unit operating with a higher voltage than 1.48 V and at a temperature of 25 °C would have to have heat energy removed in order to maintain a constant temperature, and the energy efficiency would be less than 0.83.
The large entropy difference between liquid water and gaseous hydrogen plus gaseous oxygen accounts for the significant difference between the Gibbs energy of reaction and the enthalpy (heat) of reaction.
Fuel heating values and efficiency
In Europe the usable energy content of a fuel is typically calculated using the lower heating value (LHV) of that fuel, the definition of which assumes that the water vapor produced during fuel combustion (oxidation) remains gaseous, and is not condensed to liquid water so the latent heat of vaporization of that water is not usable. Using the LHV, a condensing boiler can achieve a "heating efficiency" in excess of 100% (this does not violate the first law of thermodynamics as long as the LHV convention is understood, but does cause confusion). This is because the apparatus recovers part of the heat of vaporization, which is not included in the definition of the lower heating value of a fuel. In the U.S. and elsewhere, the higher heating value (HHV) is used, which includes the latent heat for condensing the water vapor, and thus the thermodynamic maximum of 100% efficiency cannot be exceeded.
Wall-plug efficiency, luminous efficiency, and efficacy
In optical systems such as lighting and lasers, the energy conversion efficiency is often referred to as wall-plug efficiency. The wall-plug efficiency is the measure of output radiative-energy, in watts (joules per second), per total input electrical energy in watts. The output energy is usually measured in terms of absolute irradiance and the wall-plug efficiency is given as a percentage of the total input energy, with the inverse percentage representing the losses.
The wall-plug efficiency differs from the luminous efficiency in that wall-plug efficiency describes the direct output/input conversion of energy (the amount of work that can be performed) whereas luminous efficiency takes into account the human eye's varying sensitivity to different wavelengths (how well it can illuminate a space). Instead of using watts, the power of a light source to produce wavelengths proportional to human perception is measured in lumens. The human eye is most sensitive to wavelengths of 555 nanometers (greenish-yellow) but the sensitivity decreases dramatically to either side of this wavelength, following a Gaussian power-curve and dropping to zero sensitivity at the red and violet ends of the spectrum. Due to this the eye does not usually see all of the wavelengths emitted by a particular light-source, nor does it see all of the wavelengths within the visual spectrum equally. Yellow and green, for example, make up more than 50% of what the eye perceives as being white, even though in terms of radiant energy white-light is made from equal portions of all colors (i.e.: a 5 mW green laser appears brighter than a 5 mW red laser, yet the red laser stands-out better against a white background). Therefore, the radiant intensity of a light source may be much greater than its luminous intensity, meaning that the source emits more energy than the eye can use. Likewise, the lamp's wall-plug efficiency is usually greater than its luminous efficiency. The effectiveness of a light source to convert electrical energy into wavelengths of visible light, in proportion to the sensitivity of the human eye, is referred to as luminous efficacy, which is measured in units of lumens per watt (lm/w) of electrical input-energy.
Unlike efficacy (effectiveness), which is a unit of measurement, efficiency is a unitless number expressed as a percentage, requiring only that the input and output units be of the same type. The luminous efficiency of a light source is thus the percentage of luminous efficacy per theoretical maximum efficacy at a specific wavelength. The amount of energy carried by a photon of light is determined by its wavelength. In lumens, this energy is offset by the eye's sensitivity to the selected wavelengths. For example, a green laser pointer can have greater than 30 times the apparent brightness of a red pointer of the same power output. At 555 nm in wavelength, 1 watt of radiant energy is equivalent to 683 lumens, thus a monochromatic light source at this wavelength, with a luminous efficacy of 683 lm/w, would have a luminous efficiency of 100%. The theoretical-maximum efficacy lowers for wavelengths at either side of 555 nm. For example, low-pressure sodium lamps produce monochromatic light at 589 nm with a luminous efficacy of 200 lm/w, which is the highest of any lamp. The theoretical-maximum efficacy at that wavelength is 525 lm/w, so the lamp has a luminous efficiency of 38.1%. Because the lamp is monochromatic, the luminous efficiency nearly matches the wall-plug efficiency of < 40%.
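The sodium-lamp figure above can be reproduced with one line of arithmetic; the sketch below simply divides the lamp's efficacy by the theoretical maximum efficacy at its wavelength, using the values quoted in the text.

```python
lamp_efficacy_lm_per_w = 200.0   # low-pressure sodium lamp at 589 nm
max_efficacy_at_589nm = 525.0    # theoretical maximum efficacy at that wavelength

luminous_efficiency = lamp_efficacy_lm_per_w / max_efficacy_at_589nm
print(round(100 * luminous_efficiency, 1))   # 38.1 (percent)
```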
Calculations for luminous efficiency become more complex for lamps that produce white light or a mixture of spectral lines. Fluorescent lamps have higher wall-plug efficiencies than low-pressure sodium lamps, but only have half the luminous efficacy of ~ 100 lm/w, thus the luminous efficiency of fluorescents is lower than sodium lamps. A xenon flashtube has a typical wall-plug efficiency of 50–70%, exceeding that of most other forms of lighting. Because the flashtube emits large amounts of infrared and ultraviolet radiation, only a portion of the output energy is used by the eye. The luminous efficacy is therefore typically around 50 lm/w. However, not all applications for lighting involve the human eye nor are restricted to visible wavelengths. For laser pumping, the efficacy is not related to the human eye so it is not called "luminous" efficacy, but rather simply "efficacy" as it relates to the absorption lines of the laser medium. Krypton flashtubes are often chosen for pumping Nd:YAG lasers, even though their wall-plug efficiency is typically only ~ 40%. Krypton's spectral lines better match the absorption lines of the neodymium-doped crystal, thus the efficacy of krypton for this purpose is much higher than xenon; able to produce up to twice the laser output for the same electrical input. All of these terms refer to the amount of energy and lumens as they exit the light source, disregarding any losses that might occur within the lighting fixture or subsequent output optics. Luminaire efficiency refers to the total lumen-output from the fixture per the lamp output.
With the exception of a few light sources, such as incandescent light bulbs, most light sources have multiple stages of energy conversion between the "wall plug" (electrical input point, which may include batteries, direct wiring, or other sources) and the final light-output, with each stage producing a loss. Low-pressure sodium lamps initially convert the electrical energy using an electrical ballast, to maintain the proper current and voltage, but some energy is lost in the ballast. Similarly, fluorescent lamps also convert the electricity using a ballast (electronic efficiency). The electricity is then converted into light energy by the electrical arc (electrode efficiency and discharge efficiency). The light is then transferred to a fluorescent coating that only absorbs suitable wavelengths, with some losses of those wavelengths due to reflection off and transmission through the coating (transfer efficiency). The number of photons absorbed by the coating will not match the number then reemitted as fluorescence (quantum efficiency). Finally, due to the phenomenon of the Stokes shift, the re-emitted photons will have a longer wavelength (thus lower energy) than the absorbed photons (fluorescence efficiency). In very similar fashion, lasers also experience many stages of conversion between the wall plug and the output aperture. The terms "wall-plug efficiency" or "energy conversion efficiency" are therefore used to denote the overall efficiency of the energy-conversion device, deducting the losses from each stage, although this may exclude external components needed to operate some devices, such as coolant pumps.
Example of energy conversion efficiency
See also
Cost of electricity by source
Energy efficiency (disambiguation)
EROEI
Exergy efficiency
Figure of merit
Heat of combustion
International Electrotechnical Commission
Perpetual motion
Sensitivity (electronics)
Solar cell efficiency
Coefficient of performance
References
External links
Does it make sense to switch to LED?
Building engineering
Dimensionless numbers of thermodynamics
Energy conservation
Energy conversion
Energy efficiency | 0.783906 | 0.988623 | 0.774987 |
Acoustic wave | Acoustic waves are a type of energy propagation that travels through a medium, such as air, water, or solid objects, by means of adiabatic compression and expansion. Key quantities describing these waves include acoustic pressure, particle velocity, particle displacement, and acoustic intensity. The speed of acoustic waves depends on the medium's properties, such as density and elasticity, with sound traveling at approximately 343 meters per second in air, 1480 meters per second in water, and varying speeds in solids. Examples of acoustic waves include audible sound from speakers, seismic waves causing ground vibrations, and ultrasound used for medical imaging. Understanding acoustic waves is crucial in fields like acoustics, physics, engineering, and medicine, with applications in sound design, noise reduction, and diagnostic imaging.
Wave properties
An acoustic wave is a mechanical wave that transmits energy through the movements of atoms and molecules. Acoustic waves transmit through fluids in a longitudinal manner (the movement of particles is parallel to the direction of propagation of the wave); in contrast, electromagnetic waves transmit in a transverse manner (the movement of particles is at a right angle to the direction of propagation of the wave). However, in solids, acoustic waves transmit in both longitudinal and transverse manners due to the presence of shear moduli in such a state of matter.
Acoustic wave equation
The acoustic wave equation describes the propagation of sound waves. The acoustic wave equation for sound pressure in one dimension is given by
\frac{\partial^2 p}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2 p}{\partial t^2} = 0
where
p is the sound pressure in Pa
x is the position in the direction of propagation of the wave, in m
c is the speed of sound in m/s
t is the time in s
The wave equation for particle velocity has the same shape and is given by
\frac{\partial^2 u}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = 0
where
u is the particle velocity in m/s
For lossy media, more intricate models need to be applied in order to take into account frequency-dependent attenuation and phase speed. Such models include acoustic wave equations that incorporate fractional derivative terms, see also the acoustic attenuation article.
D'Alembert gave the general solution for the lossless wave equation. For sound pressure, a solution would be
p(x, t) = \hat{p}\left[C_1\cos(\omega t - kx) + C_2\cos(\omega t + kx)\right]
where
\hat{p} is the pressure amplitude in Pa
\omega is the angular frequency in rad/s
t is the time in s
k is the wave number in rad·m−1
C_1, C_2 are coefficients without unit
For C_2 = 0 the wave becomes a travelling wave moving rightwards, for C_1 = 0 the wave becomes a travelling wave moving leftwards. A standing wave can be obtained by C_1 = C_2.
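A small numerical sketch (with illustrative values for frequency and sound speed) evaluates a superposition of rightward- and leftward-travelling cosine waves of this kind and confirms that, with equal coefficients, the pressure field factorises into a fixed spatial pattern times an oscillation in time, i.e. a standing wave.

```python
import numpy as np

c = 343.0                      # speed of sound in air, m/s (approximate)
f = 440.0                      # arbitrary frequency, Hz
omega = 2 * np.pi * f
k = omega / c                  # wave number, rad/m

x = np.linspace(0.0, 2.0, 5)   # a few positions (m)
t = 1.0e-3                     # one instant in time (s)

p_right = np.cos(omega * t - k * x)   # rightward-travelling component
p_left  = np.cos(omega * t + k * x)   # leftward-travelling component
p_standing = p_right + p_left         # equal coefficients -> standing wave

# Trigonometric identity: cos(a - b) + cos(a + b) = 2 cos(a) cos(b)
print(np.allclose(p_standing, 2 * np.cos(omega * t) * np.cos(k * x)))   # True
```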
Phase
In a travelling wave pressure and particle velocity are in phase, which means the phase angle between the two quantities is zero.
This can be easily proven using the ideal gas law
pV = nRT
where
p is the pressure in Pa
V is the volume in m3
n is the amount of substance in mol
R is the universal gas constant, with value 8.314 J/(mol·K)
T is the temperature in K
Consider a volume V. As an acoustic wave propagates through the volume, adiabatic compression and decompression occurs. For adiabatic change the following relation between the volume of a parcel of fluid and its pressure holds
pV^{\gamma} = p_0 V_0^{\gamma}
where \gamma is the adiabatic index without unit and the subscript 0 denotes the mean value of the respective variable.
As a sound wave propagates through a volume, the horizontal displacement of a particle occurs along the wave propagation direction.
where
A is the cross-sectional area in m2
From this equation it can be seen that when pressure is at its maximum, the particle displacement from the average position reaches zero. As mentioned before, the oscillating pressure for a rightward travelling wave can be given by
p(x, t) = \hat{p}\cos(\omega t - kx)
Since displacement is maximum when pressure is zero, there is a 90 degree phase difference, so the displacement is given by
\delta(x, t) = \hat{\delta}\sin(\omega t - kx)
Particle velocity is the first derivative of particle displacement: u(x, t) = \frac{\partial \delta(x, t)}{\partial t}. Differentiation of a sine gives a cosine again:
u(x, t) = \omega\hat{\delta}\cos(\omega t - kx)
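The 90 degree relationship can also be checked symbolically; the short sketch below (an illustrative aside assuming the sinusoidal displacement above) differentiates the displacement and confirms that the resulting particle velocity is a cosine, in phase with the pressure.

```python
import sympy as sp

x, t, omega, k, delta_hat = sp.symbols('x t omega k delta_hat', positive=True)

displacement = delta_hat * sp.sin(omega * t - k * x)   # assumed displacement wave
velocity = sp.diff(displacement, t)                    # particle velocity = d(displacement)/dt

print(velocity)   # omega*delta_hat times a cosine of (omega*t - k*x), like the pressure wave
```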
During adiabatic change, temperature changes with pressure as well, following
\frac{T}{T_0} = \left(\frac{p}{p_0}\right)^{\frac{\gamma - 1}{\gamma}}
This fact is exploited within the field of thermoacoustics.
Propagation speed
The propagation speed, or acoustic velocity, of acoustic waves is a function of the medium of propagation. In general, the acoustic velocity c is given by the Newton-Laplace equation:
c = √(C/ρ)
where
C is a coefficient of stiffness, the bulk modulus (or the modulus of bulk elasticity for gas mediums),
ρ is the density in kg/m3
Thus the acoustic velocity increases with the stiffness (the resistance of an elastic body to deformation by an applied force) of the material, and decreases with the density.
For general equations of state, if classical mechanics is used, the acoustic velocity is given by
c = √(∂p/∂ρ)
with p as the pressure and ρ the density, where the differentiation is taken with respect to adiabatic change.
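As a rough numerical illustration of the Newton-Laplace relation, the sketch below computes c = √(C/ρ) for water and for air; the bulk moduli and densities used are typical textbook values and are assumptions here, not figures taken from the article.

```python
import math

def acoustic_velocity(bulk_modulus_pa: float, density_kg_m3: float) -> float:
    """Newton-Laplace equation: c = sqrt(C / rho)."""
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

# Representative property values (approximate textbook figures).
water_c = acoustic_velocity(2.2e9, 1000.0)          # bulk modulus of water ~ 2.2 GPa
air_c = acoustic_velocity(1.4 * 101325.0, 1.204)    # adiabatic bulk modulus of air = gamma * p
print(f"speed of sound in water ~ {water_c:.0f} m/s")  # ~1480 m/s
print(f"speed of sound in air   ~ {air_c:.0f} m/s")    # ~343 m/s
```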
Phenomena
Acoustic waves are elastic waves that exhibit phenomena like diffraction, reflection and interference. Note that sound waves in air are not polarized since they oscillate along the same direction as they move.
Interference
Interference is the addition of two or more waves that results in a new wave pattern. Interference of sound waves can be observed when two loudspeakers transmit the same signal. At certain locations constructive interference occurs, doubling the local sound pressure. And at other locations destructive interference occurs, causing a local sound pressure of zero pascals.
Standing wave
A standing wave is a special kind of wave that can occur in a resonator. In a resonator, superposition of the incident and reflected waves occurs, causing a standing wave. Pressure and particle velocity are 90 degrees out of phase in a standing wave.
Consider a tube with two closed ends acting as a resonator. The resonator has normal modes at frequencies given by
f = n c / (2L),   n = 1, 2, ...
where
c is the speed of sound in m/s
L is the length of the tube in m
At the ends particle velocity becomes zero since there can be no particle displacement. Pressure, however, doubles at the ends because of interference of the incident wave with the reflected wave. As pressure is maximum at the ends while velocity is zero, there is a 90 degrees phase difference between them.
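A minimal sketch of the normal-mode frequencies of such a closed-closed tube, assuming the relation f = n·c/(2L) quoted above and a speed of sound of 343 m/s:

```python
def tube_mode_frequencies(length_m: float, speed_m_s: float = 343.0, n_modes: int = 4):
    """Normal-mode frequencies f_n = n * c / (2 L) of a tube closed at both ends."""
    return [n * speed_m_s / (2.0 * length_m) for n in range(1, n_modes + 1)]

# Example: a 1 m tube in air.
print(tube_mode_frequencies(1.0))   # [171.5, 343.0, 514.5, 686.0] Hz
```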
Reflection
An acoustic travelling wave can be reflected by a solid surface. If a travelling wave is reflected, the reflected wave can interfere with the incident wave causing a standing wave in the near field. As a consequence, the local pressure in the near field is doubled, and the particle velocity becomes zero.
Attenuation causes the reflected wave to decrease in power as distance from the reflective material increases. As the power of the reflected wave decreases compared to the power of the incident wave, interference also decreases, and so does the phase difference between sound pressure and particle velocity. At a large enough distance from the reflective material there is no interference left; at this distance one can speak of the far field.
The amount of reflection is given by the reflection coefficient which is the ratio of the reflected intensity over the incident intensity
Absorption
Acoustic waves can be absorbed. The amount of absorption is given by the absorption coefficient which is given by
where
is the absorption coefficient without a unit
is the reflection coefficient without a unit
Often acoustic absorption of materials is given in decibels instead.
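As a hedged sketch, assuming the common convention α = 1 − |R|² for a pressure reflection coefficient R (other conventions exist, for example when R is defined as an intensity ratio), the absorption coefficient and the level of the reflected wave in decibels can be computed as follows.

```python
import math

def absorption_coefficient(pressure_reflection_coeff: float) -> float:
    """Assumed convention: alpha = 1 - |R|^2, with R the pressure reflection coefficient."""
    return 1.0 - abs(pressure_reflection_coeff) ** 2

def reflection_loss_db(pressure_reflection_coeff: float) -> float:
    """Level drop of the reflected wave relative to the incident wave, in dB."""
    return -20.0 * math.log10(abs(pressure_reflection_coeff))

R = 0.5   # hypothetical reflection coefficient
print(f"absorption coefficient: {absorption_coefficient(R):.2f}")   # 0.75
print(f"reflection loss: {reflection_loss_db(R):.1f} dB")           # 6.0 dB
```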
Layered media
When an acoustic wave propagates through a non-homogeneous medium, it will undergo diffraction at the impurities it encounters or at the interfaces between layers of different materials. This is a phenomenon very similar to that of the refraction, absorption and transmission of light in Bragg mirrors. The concept of acoustic wave propagation through periodic media is exploited with great success in acoustic metamaterial engineering.
The acoustic absorption, reflection and transmission in multilayer materials can be calculated with the transfer-matrix method.
See also
Acoustics
Acoustic attenuation
Acoustic metamaterial
Auditory imagery
Audio signal processing
Beat
Biot–Tolstoy–Medwin diffraction model
Diffraction
Doppler effect
Echo
Entropy-vorticity wave
Gravity wave
Music
Musical note
Musical tone
Phonon
Physics of music
Pitch
Psychoacoustics
Resonance
Refraction
Reflection
Reverberation
Signal tone
Sound
Sound localization
Soundproofing
Stereo imaging
Structural acoustics
Timbre
Ultrasound
Wave equation
One-way wave equation
List of unexplained sounds
References
Wave mechanics
Acoustics
Sound | 0.781252 | 0.991901 | 0.774924 |
Stasis (fiction) | A stasis or stasis field, in science fiction, is a confined area of space in which time has been stopped or the contents have been rendered motionless.
Overview
A stasis field is imagined to be a region in which a stasis process is in effect. Stasis fields in fictional settings often have several common characteristics. These include infinite or nearly infinite rigidity, making them "unbreakable objects" and a perfect or nearly-perfect reflective surface. Most science fiction plots rely on a physical device to establish this region. When the device is deactivated, the stasis field collapses, and the stasis effect ends.
Time is often suspended in stasis fields. Such fields thus have the additional property of protecting non-living materials from deterioration. This suspension of time can be, from an in-universe perspective, absolute; an object thrown into the field just before it is activated would, once the field is later deactivated, fly out as if nothing had happened. Storylines using such fields often have materials as well as living beings surviving thousands or millions of years beyond their normal lifetimes. The property also allows for such plot devices as booby traps containing, for instance, a nuclear bomb: once out of the stasis field, the trap is sprung. In such a situation, to keep the protagonist from learning what is inside, the storyline may simply not allow normal beings to see what a stasis field protects.
One use of stasis fields is as a form of suspended animation: to let passengers and cargoes (normally of spacecraft or sleeper ships) avoid having to experience extremely long periods of time by "skipping over" large sections of it. They may also be used, such as in The Night's Dawn Trilogy, as protection against the effects of extreme acceleration.
There are real phenomena that cause time dilation similar to that of a stasis field. Extremely high velocities approaching light speed or immensely powerful gravitational fields such as those existing near the event horizons of black holes will cause time to progress more slowly. However, there is no known theoretical way of causing such time dilation independently of such conditions.
Examples
The Dune series of novels features "nullentropy" containers, where food is preserved indefinitely, as well as entropy-free "no-chambers" or "no-ships" which are undetectable to prescience.
The noted science fiction author Larry Niven used the concept of stasis fields and stasis boxes throughout his many novels and short stories set in the Known Space series. Niven's stasis fields followed conductive surfaces when established, with the resulting frozen space being an invulnerable and reflective object. They were often used as emergency protective devices and to create a weapon called a variable sword: a length of extremely fine wire in a stasis field that enables it to cut through normal matter. For more information, see Slaver stasis field.
A more limited form of stasis field is the "bobble", found in Vernor Vinge's Peace Authority setting. A bobble is perfectly spherical and exists for a fixed period of time set when the bobble is first created. The duration of a bobble effect cannot be changed. Bobble generators were initially used as weapons, removing their targets from the field of combat.
Another example of a stasis field exists in Joe Haldeman's The Forever War, where stasis field generators are carried by troops to create conditions where melee weapons become the only viable means of combat. Inside the field, no object can travel faster than 16.3 m/s, which includes electrons, photons, and the field itself. Soldiers inside the field wear suits with a special coating to prevent electrical activity within their bodies from stopping, which would kill them. In the novel, the main character defeats an enemy army, which has besieged a contingent of human troops on a moon, by arming a nuclear bomb inside the field and then moving the field away from the bomb. Once the bomb is revealed, its electrical activity resumes and it detonates, vaporizing the surrounding army and a large chunk of the ground beneath the field.
In Peter F. Hamilton's The Night’s Dawn trilogy (1996-1999), “zero-tau pods” — powered containers inside which time halts — are an important narrative device.
In the computer strategy game StarCraft, the Arbiter unit can use Protoss technology and the Arbiter's psionic power to create a stasis field that traps units in the affected area in blue "crystals" of stopped time, taking them out of the fight and rendering them invulnerable for 30 seconds.
In the Dead Space series, the main character Isaac Clarke carries a wrist-mounted tachyon-based stasis module, which is used to slow enemy Necromorphs to a crawl for a duration. He adapted its use to fight Necromorphs; it was used previously by technicians to slow down malfunctioning equipment. Medical use of the technology is later seen in Dead Space 2, with stasis beds; the protagonist had also been kept in stasis between games.
The game Mass Effect has a biotic power called "Stasis" that can trap an enemy in a stasis field, rendering them immobile and invincible to damage. The duration of this effect is usually dependent on the user's skill level.
In the Star Wars RPG series Knights of the Old Republic, Jedi who follow the path of the light are able to use "Stasis" powers, using the force to alter time and freeze an enemy in place. Unlike true stasis, it allows external events to affect the victim. The original game also uses a similar effect, where Dark Jedi trap party members to engage the player in a duel.
In the Justice League Unlimited episode "The Cat and the Canary", Green Arrow uses a stunner to put himself into a form of stasis while fighting Wildcat in an attempt to end his cage fighting career by falsely convincing him he killed Green Arrow.
In the Invader Zim episode "Walk For Your Lives", Zim creates a time stasis field and uses it on Dib as an experiment to show to the Tallest, causing him to move slowly. It also produces an explosion which occurs slowly; Zim throws Dib into it to speed it up, causing it to move at normal speed.
In the animated series, Generator Rex, the main antagonist, Van Kleiss, is transported back in time to Ancient Egypt during an accident. While there, he creates a stasis chamber and is awoken at multiple points throughout history before returning to the present in the episode "A History of Time".
The Space themed MMO Eve Online features a weapon called a stasis webifier. When activated against an enemy ship, it reduces the target's speed. Multiple 'webs' can be used on a ship at once.
In Half-Life, the protagonist Gordon Freeman is put into a state of stasis after a brief discussion with the G-Man. A similar incident happens to Adrian Shephard at the end of Half-Life: Opposing Force, when the G-Man puts him into a state of stasis "for further evaluation".
At the end of Portal, Chell is put in stasis for many years until she is awakened at the beginning of Portal 2.
In Project Eden, one character is frozen in stasis for 15 years. Stasis can also be used offensively to slow down enemies.
In the first episode of Red Dwarf, "The End", Dave Lister, third-class technician of the mining ship "Red Dwarf", is put into a stasis booth as punishment for breaking rules on the ship. However, during his time in stasis, lethal radiation leaks into the ship as a result of a malfunction, killing the crew. Lister is revived three million years later by the ship's computer, Holly, once the high radiation levels have subsided.
In Catherine Asaro's Skolian Empire books, the Skolians use quasis to freeze time during interstellar travel.
The 2008 novel The Last Colony describes a "sapper field" technology which can be set to modify various energetic properties of objects, such as weapons.
The 2012 Expanse series of novels describes a "slow zone" of outer space where no kinetic technological process is allowed to operate above a set speed. This allows organic life to operate normally, but instantaneously slows any over-speeding artifact or kinetic component down to that speed.
In Amphibia, King Andrias puts Marcy Wu in a state of suspended animation after killing her in the season 2 finale "True Colors". She is then used as a host for Andrias's master, the Core, leading her to become an antagonist until she is freed from its control.
In The 100, during season 5, a group of prisoners awakes from cryopreservation after a little more than 100 years. They were on penal labor on a ship mining asteroids, but were put into cryopreservation for this period when the Earth had become temporarily uninhabitable.
In Xenoblade Chronicles 3, the world of Aionios is in a state of constant stasis, referred to as "the endless now", as a result of Z taking control of Origin.
In The Legend of Zelda: Breath of the Wild, Stasis is one of the Runes Link can use. It allows him to freeze objects in suspended time and launch them by building up kinetic energy.
See also
Force field (physics)
Force field (technology)
Tractor beam
References
Science fiction themes
Fictional technology
Fiction about time | 0.781006 | 0.992165 | 0.774887 |
Atmospheric thermodynamics | Atmospheric thermodynamics is the study of heat-to-work transformations (and their reverse) that take place in the Earth's atmosphere and manifest as weather or climate. Atmospheric thermodynamics uses the laws of classical thermodynamics to describe and explain such phenomena as the properties of moist air, the formation of clouds, atmospheric convection, boundary layer meteorology, and vertical instabilities in the atmosphere. Atmospheric thermodynamic diagrams are used as tools in the forecasting of storm development. Atmospheric thermodynamics forms a basis for cloud microphysics and convection parameterizations used in numerical weather models and is used in many climate considerations, including convective-equilibrium climate models.
Overview
The atmosphere is an example of a non-equilibrium system. Atmospheric thermodynamics describes the effect of buoyant forces that cause the rise of less dense (warmer) air, the descent of more dense air, and the transformation of water from liquid to vapor (evaporation) and its condensation. Those dynamics are modified by the force of the pressure gradient, and that motion is modified by the Coriolis force. The tools used include the law of energy conservation, the ideal gas law, specific heat capacities, the assumption of isentropic processes (in which entropy is a constant), and moist adiabatic processes (during which no energy is transferred as heat). Most tropospheric gases are treated as ideal gases, and water vapor, with its ability to change phase from vapor to liquid to solid and back, is considered one of the most important trace components of air.
Advanced topics include phase transitions of water, homogeneous and inhomogeneous nucleation, the effect of dissolved substances on cloud condensation, and the role of supersaturation in the formation of ice crystals and cloud droplets. Considerations of moist air and cloud theories typically involve various temperatures, such as equivalent potential temperature, wet-bulb and virtual temperatures. Connected areas are energy, momentum, and mass transfer, turbulence interaction between air particles in clouds, convection, dynamics of tropical cyclones, and large scale dynamics of the atmosphere.
The major role of atmospheric thermodynamics is expressed in terms of adiabatic and diabatic forces acting on air parcels included in primitive equations of air motion, either as grid resolved or subgrid parameterizations. These equations form a basis for numerical weather and climate prediction.
History
In the early 19th century thermodynamicists such as Sadi Carnot, Rudolf Clausius, and Émile Clapeyron developed mathematical models on the dynamics of fluid bodies and vapors related to the combustion and pressure cycles of atmospheric steam engines; one example is the Clausius–Clapeyron equation. In 1873, thermodynamicist Willard Gibbs published "Graphical Methods in the Thermodynamics of Fluids."
These sorts of foundations naturally began to be applied towards the development of theoretical models of atmospheric thermodynamics which drew the attention of the best minds. Papers on atmospheric thermodynamics appeared in the 1860s that treated such topics as dry and moist adiabatic processes. In 1884 Heinrich Hertz devised the first atmospheric thermodynamic diagram (emagram). The term pseudo-adiabatic process was coined by von Bezold to describe air that is lifted, expands, cools, and eventually precipitates its water vapor; in 1888 he published a voluminous work entitled "On the Thermodynamics of the Atmosphere".
In 1911 Alfred Wegener published the book Thermodynamik der Atmosphäre (Leipzig, J. A. Barth).
From here the development of atmospheric thermodynamics as a branch of science began to take root. The term "atmospheric thermodynamics", itself, can be traced to Frank W. Very's 1919 publication: "The radiant properties of the earth from the standpoint of atmospheric thermodynamics" (Occasional scientific papers of the Westwood Astrophysical Observatory). By the late 1970s various textbooks on the subject began to appear. Today, atmospheric thermodynamics is an integral part of weather forecasting.
Chronology
1751 Charles Le Roy recognized dew point temperature as point of saturation of air
1782 Jacques Charles made hydrogen balloon flight measuring temperature and pressure in Paris
1784 Concept of variation of temperature with height was suggested
1801–1803 John Dalton developed his laws of pressures of vapours
1804 Joseph Louis Gay-Lussac made balloon ascent to study weather
1805 Pierre Simon Laplace developed his law of pressure variation with height
1841 James Pollard Espy publishes paper on convection theory of cyclone energy
1856 William Ferrel presents dynamics causing westerlies
1889 Hermann von Helmholtz and Wilhelm von Bezold used the concept of potential temperature; von Bezold used the adiabatic lapse rate and pseudoadiabats
1893 Richard Assmann constructs the first aerological sonde (pressure-temperature-humidity)
1894 Wilhelm von Bezold used the concept of equivalent temperature
1926 Sir Napier Shaw introduced the tephigram
1933 Tor Bergeron published a paper on the "Physics of Clouds and Precipitation" describing precipitation from supercooled clouds (due to condensational growth of ice crystals in the presence of water drops)
1946 Vincent J. Schaefer and Irving Langmuir performed the first cloud seeding experiment
1986 K. Emanuel conceptualizes the tropical cyclone as a Carnot heat engine
Applications
Hadley Circulation
The Hadley circulation can be considered as a heat engine. The Hadley circulation is identified with the rising of warm and moist air in the equatorial region and the descent of colder air in the subtropics, corresponding to a thermally driven direct circulation, with consequent net production of kinetic energy. The thermodynamic efficiency of the Hadley system, considered as a heat engine, has been relatively constant over the 1979–2010 period, averaging 2.6%. Over the same interval, the power generated by the Hadley regime has risen at an average rate of about 0.54 TW per year; this reflects an increase in energy input to the system consistent with the observed trend in the tropical sea surface temperatures.
Tropical cyclone Carnot cycle
The thermodynamic behavior of a hurricane can be modelled as a heat engine that operates between the heat reservoir of the sea at a temperature of about 300 K (27 °C) and the heat sink of the tropopause at a temperature of about 200 K (−72 °C), and in the process converts heat energy into mechanical energy of winds. Parcels of air traveling close to the sea surface take up heat and water vapor; the warmed air rises, and as it expands and cools, the water vapor condenses and precipitates. The rising air and the condensation produce circulatory winds that are propelled by the Coriolis force, which whip up waves and increase the amount of warm moist air that powers the cyclone. Both a decreasing temperature in the upper troposphere and an increasing temperature of the atmosphere close to the surface will increase the maximum winds observed in hurricanes. When applied to hurricane dynamics, this framework defines a Carnot heat engine cycle and predicts maximum hurricane intensity.
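As a rough orientation, the Carnot efficiency implied by the two reservoir temperatures quoted above can be computed directly; this is only the idealized upper bound on the efficiency, not Emanuel's full potential-intensity theory.

```python
def carnot_efficiency(t_hot_kelvin: float, t_cold_kelvin: float) -> float:
    """Maximum (Carnot) efficiency of a heat engine running between two reservoirs."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin

# Reservoir temperatures quoted in the text: sea surface ~300 K, tropopause ~200 K.
eta = carnot_efficiency(300.0, 200.0)
print(f"Carnot efficiency ~ {eta:.0%}")   # ~33%
```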
Water vapor and global climate change
The Clausius–Clapeyron relation shows how the water-holding capacity of the atmosphere increases by about 8% per degree Celsius increase in temperature. (It does not directly depend on other parameters like the pressure or density.) This water-holding capacity, or "equilibrium vapor pressure", can be approximated using the August-Roche-Magnus formula
(where es is the equilibrium or saturation vapor pressure in hPa, and T is temperature in degrees Celsius). This shows that when atmospheric temperature increases (e.g., due to greenhouse gases) the absolute humidity should also increase exponentially (assuming a constant relative humidity). However, this purely thermodynamic argument is the subject of considerable debate, because convective processes might cause extensive drying due to increased areas of subsidence, the efficiency of precipitation could be influenced by the intensity of convection, and because cloud formation is related to relative humidity.
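A minimal sketch of the August-Roche-Magnus approximation follows; the coefficients used (6.1094 hPa, 17.625 and 243.04 °C) are one common parameterization and should be treated as an assumption rather than the exact form intended in the text.

```python
import math

def saturation_vapor_pressure_hpa(temp_celsius: float) -> float:
    """August-Roche-Magnus approximation to the saturation vapor pressure over water."""
    return 6.1094 * math.exp(17.625 * temp_celsius / (temp_celsius + 243.04))

for t in (0.0, 10.0, 20.0, 30.0):
    print(f"{t:4.1f} degC -> {saturation_vapor_pressure_hpa(t):6.2f} hPa")
# Each additional degree raises e_s by a few percent near typical surface temperatures,
# which is the exponential increase of water-holding capacity described above.
```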
See also
Atmospheric convection
Atmospheric temperature
Atmospheric wave
Chemical thermodynamics
Cloud physics
Equilibrium thermodynamics
Fluid dynamics
Non-equilibrium thermodynamics
Thermodynamics
Special topics
Lorenz, E. N., 1955, Available potential energy and the maintenance of the general circulation, Tellus, 7, 157–167.
Emanuel, K, 1986, Part I. An air-sea interaction theory for tropical cyclones, J. Atmos. Sci. 43, 585, (energy cycle of the mature hurricane has been idealized here as Carnot engine that converts heat energy extracted from the ocean to mechanical energy).
References
Further reading
Curry, J.A. and P.J. Webster, 1999, Thermodynamics of Atmospheres and Oceans. Academic Press, London, 467 pp (textbook for graduates)
Dufour, L. et Van Mieghem, J. – Thermodynamique de l'Atmosphère, Institut Royal Meteorologique de Belgique, 1975. 278 pp (theoretical approach). First edition of this book – 1947.
Emanuel, K.A.(1994): Atmospheric Convection, Oxford University Press. (thermodynamics of tropical cyclones).
Iribarne, J.V. and Godson, W.L., Atmospheric thermodynamics, Dordrecht, Boston, Reidel (basic textbook).
Petty, G.W., A First Course in Atmospheric Thermodynamics, Sundog Publishing, Madison, Wisconsin, (undergraduate textbook).
Wegener, Alfred, Thermodynamik der Atmosphäre, Leipzig, J. A. Barth, 1911, 331 pp.
Wilford Zdunkowski, Thermodynamics of the atmosphere: a course in theoretical meteorology, Cambridge, Cambridge University Press, 2004.
Gliding technology | 0.806736 | 0.960468 | 0.774844 |
Dirac delta function | In mathematical analysis, the Dirac delta function (or distribution), also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Thus it can be represented heuristically as
δ(x) = 0 for x ≠ 0, and δ(0) = +∞,
such that
∫_{−∞}^{∞} δ(x) dx = 1.
Since there is no function having this property, modelling the delta "function" rigorously involves the use of limits or, as is common in mathematics, measure theory and the theory of distributions.
The delta function was introduced by physicist Paul Dirac, and has since been applied routinely in physics and engineering to model point masses and instantaneous impulses. It is called the delta function because it is a continuous analogue of the Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1. The mathematical rigor of the delta function was disputed until Laurent Schwartz developed the theory of distributions, where it is defined as a linear form acting on functions.
Motivation and overview
The graph of the Dirac delta is usually thought of as following the whole x-axis and the positive y-axis. The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge, point mass or electron point. For example, to calculate the dynamics of a billiard ball being struck, one can approximate the force of the impact by a Dirac delta. In doing so, one not only simplifies the equations, but one also is able to calculate the motion of the ball, by only considering the total impulse of the collision, without a detailed model of all of the elastic energy transfer at subatomic levels (for instance).
To be specific, suppose that a billiard ball is at rest. At time it is struck by another ball, imparting it with a momentum , with units kg⋅m⋅s−1. The exchange of momentum is not actually instantaneous, being mediated by elastic processes at the molecular and subatomic level, but for practical purposes it is convenient to consider that energy transfer as effectively instantaneous. The force therefore is ; the units of are s−1.
To model this situation more rigorously, suppose that the force instead is uniformly distributed over a small time interval That is,
Then the momentum at any time is found by integration:
Now, the model situation of an instantaneous transfer of momentum requires taking the limit as , giving a result everywhere except at :
Here the functions are thought of as useful approximations to the idea of instantaneous transfer of momentum.
The delta function allows us to construct an idealized limit of these approximations. Unfortunately, the actual limit of the functions (in the sense of pointwise convergence) is zero everywhere but a single point, where it is infinite. To make proper sense of the Dirac delta, we should instead insist that the property
which holds for all should continue to hold in the limit. So, in the equation it is understood that the limit is always taken .
In applied mathematics, as we have done here, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero.
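The weak limit can be checked numerically: integrating a smooth test function against Gaussians of shrinking variance approaches the value of the test function at the origin. The following sketch uses an arbitrarily chosen test function and a simple Riemann sum; all parameters are illustrative assumptions.

```python
import numpy as np

def gaussian_nascent_delta(x, eps):
    """Normalized Gaussian of variance eps; it tends weakly to the Dirac delta as eps -> 0."""
    return np.exp(-x**2 / (2.0 * eps)) / np.sqrt(2.0 * np.pi * eps)

f = np.cos                            # an arbitrary smooth test function; f(0) = 1
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

for eps in (1.0, 0.1, 0.01, 0.001):
    integral = np.sum(f(x) * gaussian_nascent_delta(x, eps)) * dx
    print(f"eps = {eps:7.3f}   integral of f * eta_eps = {integral:.6f}")   # -> f(0) = 1
```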
The Dirac delta is not truly a function, at least not a usual one with domain and range in real numbers. For example, the objects and are equal everywhere except at yet have integrals that are different. According to Lebesgue integration theory, if and are functions such that almost everywhere, then is integrable if and only if is integrable and the integrals of and are identical. A rigorous approach to regarding the Dirac delta function as a mathematical object in its own right requires measure theory or the theory of distributions.
History
Joseph Fourier presented what is now called the Fourier integral theorem in his treatise Théorie analytique de la chaleur in the form:
which is tantamount to the introduction of the -function in the form:
Later, Augustin Cauchy expressed the theorem using exponentials:
Cauchy pointed out that in some circumstances the order of integration is significant in this result (contrast Fubini's theorem).
As justified using the theory of distributions, the Cauchy equation can be rearranged to resemble Fourier's original formulation and expose the δ-function as
where the δ-function is expressed as
A rigorous interpretation of the exponential form and the various limitations upon the function f necessary for its application extended over several centuries. The problems with a classical interpretation are explained as follows:
The greatest drawback of the classical Fourier transformation is a rather narrow class of functions (originals) for which it can be effectively computed. Namely, it is necessary that these functions decrease sufficiently rapidly to zero (in the neighborhood of infinity) to ensure the existence of the Fourier integral. For example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of functions that could be transformed and this removed many obstacles.
Further developments included generalization of the Fourier integral, "beginning with Plancherel's pathbreaking L2-theory (1910), continuing with Wiener's and Bochner's works (around 1930) and culminating with the amalgamation into L. Schwartz's theory of distributions (1945) ...", and leading to the formal development of the Dirac delta function.
An infinitesimal formula for an infinitely tall, unit impulse delta function (infinitesimal version of Cauchy distribution) explicitly appears in an 1827 text of Augustin-Louis Cauchy. Siméon Denis Poisson considered the issue in connection with the study of wave propagation as did Gustav Kirchhoff somewhat later. Kirchhoff and Hermann von Helmholtz also introduced the unit impulse as a limit of Gaussians, which also corresponded to Lord Kelvin's notion of a point heat source. At the end of the 19th century, Oliver Heaviside used formal Fourier series to manipulate the unit impulse. The Dirac delta function as such was introduced by Paul Dirac in his 1927 paper The Physical Interpretation of the Quantum Dynamics and used in his textbook The Principles of Quantum Mechanics. He called it the "delta function" since he used it as a continuous analogue of the discrete Kronecker delta.
Definitions
The Dirac delta function can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite,
δ(x) = 0 for x ≠ 0, and δ(0) = +∞,
and which is also constrained to satisfy the identity
∫_{−∞}^{∞} δ(x) dx = 1.
This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense as no extended real number valued function defined on the real numbers has these properties.
As a measure
One way to rigorously capture the notion of the Dirac delta function is to define a measure, called Dirac measure, which accepts a subset of the real line as an argument, and returns if , and otherwise. If the delta function is conceptualized as modeling an idealized point mass at 0, then represents the mass contained in the set . One may then define the integral against as the integral of a function against this mass distribution. Formally, the Lebesgue integral provides the necessary analytic device. The Lebesgue integral with respect to the measure satisfies
for all continuous compactly supported functions . The measure is not absolutely continuous with respect to the Lebesgue measure—in fact, it is a singular measure. Consequently, the delta measure has no Radon–Nikodym derivative (with respect to Lebesgue measure)—no true function for which the property
holds. As a result, the latter notation is a convenient abuse of notation, and not a standard (Riemann or Lebesgue) integral.
As a probability measure on , the delta measure is characterized by its cumulative distribution function, which is the unit step function.
This means that is the integral of the cumulative indicator function with respect to the measure ; to wit,
the latter being the measure of this interval. Thus in particular the integration of the delta function against a continuous function can be properly understood as a Riemann–Stieltjes integral:
All higher moments of are zero. In particular, characteristic function and moment generating function are both equal to one.
As a distribution
In the theory of distributions, a generalized function is considered not a function in itself but only through how it affects other functions when "integrated" against them. In keeping with this philosophy, to define the delta function properly, it is enough to say what the "integral" of the delta function is against a sufficiently "good" test function . Test functions are also known as bump functions. If the delta function is already understood as a measure, then the Lebesgue integral of a test function against that measure supplies the necessary integral.
A typical space of test functions consists of all smooth functions on with compact support that have as many derivatives as required. As a distribution, the Dirac delta is a linear functional on the space of test functions and is defined by
for every test function .
For to be properly a distribution, it must be continuous in a suitable topology on the space of test functions. In general, for a linear functional on the space of test functions to define a distribution, it is necessary and sufficient that, for every positive integer there is an integer and a constant such that for every test function , one has the inequality
where represents the supremum. With the delta distribution, such an inequality is satisfied trivially for every test function, since |δ[φ]| = |φ(0)| is bounded by the supremum of |φ|; thus δ is a distribution of order zero. It is, furthermore, a distribution with compact support (the support being the single point {0}).
The delta distribution can also be defined in several equivalent ways. For instance, it is the distributional derivative of the Heaviside step function. This means that for every test function , one has
Intuitively, if integration by parts were permitted, then the latter integral should simplify to
and indeed, a form of integration by parts is permitted for the Stieltjes integral, and in that case, one does have
In the context of measure theory, the Dirac measure gives rise to distribution by integration. Conversely, equation defines a Daniell integral on the space of all compactly supported continuous functions which, by the Riesz representation theorem, can be represented as the Lebesgue integral of with respect to some Radon measure.
Generally, when the term Dirac delta function is used, it is in the sense of distributions rather than measures, the Dirac measure being among several terms for the corresponding notion in measure theory. Some sources may also use the term Dirac delta distribution.
Generalizations
The delta function can be defined in -dimensional Euclidean space as the measure such that
for every compactly supported continuous function . As a measure, the -dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with , one has
The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case. However, despite widespread use in engineering contexts, should be manipulated with care, since the product of distributions can only be defined under quite narrow circumstances.
The notion of a Dirac measure makes sense on any set. Thus if is a set, is a marked point, and is any sigma algebra of subsets of , then the measure defined on sets by
is the delta measure or unit mass concentrated at .
Another common generalization of the delta function is to a differentiable manifold where most of its properties as a distribution can also be exploited because of the differentiable structure. The delta function on a manifold centered at the point is defined as the following distribution:
for all compactly supported smooth real-valued functions on . A common special case of this construction is a case in which is an open set in the Euclidean space .
On a locally compact Hausdorff space , the Dirac delta measure concentrated at a point is the Radon measure associated with the Daniell integral on compactly supported continuous functions . At this level of generality, calculus as such is no longer possible, however a variety of techniques from abstract analysis are available. For instance, the mapping is a continuous embedding of into the space of finite Radon measures on , equipped with its vague topology. Moreover, the convex hull of the image of under this embedding is dense in the space of probability measures on .
Properties
Scaling and symmetry
The delta function satisfies the following scaling property for a non-zero scalar α:
∫_{−∞}^{∞} δ(αx) dx = ∫_{−∞}^{∞} δ(u) du/|α| = 1/|α|
and so
δ(αx) = δ(x)/|α|.
Scaling property proof:
where a change of variable is used. If is negative, i.e., , then
Thus,
In particular, the delta function is an even distribution (symmetric), in the sense that
δ(−x) = δ(x),
which is homogeneous of degree −1.
Algebraic properties
The distributional product of δ(x) with x is equal to zero:
x δ(x) = 0.
More generally, x^n δ(x) = 0 for all positive integers n.
Conversely, if x f(x) = x g(x), where f and g are distributions, then
f(x) = g(x) + c δ(x)
for some constant c.
Translation
The integral of any function multiplied by the time-delayed Dirac delta δ(t − T) is
∫_{−∞}^{∞} f(t) δ(t − T) dt = f(T).
This is sometimes referred to as the sifting property or the sampling property. The delta function is said to "sift out" the value of f(t) at t = T.
It follows that the effect of convolving a function f(t) with the time-delayed Dirac delta is to time-delay f(t) by the same amount:
f(t) ∗ δ(t − T) = f(t − T).
The sifting property holds under the precise condition that f be a tempered distribution (see the discussion of the Fourier transform below). As a special case, for instance, we have the identity (understood in the distribution sense)
∫_{−∞}^{∞} δ(ξ − x) δ(x − η) dx = δ(ξ − η).
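In discrete-time signal processing the analogous statement is that convolving a sequence with a delayed unit impulse (a shifted Kronecker delta) simply delays the sequence; the short sketch below, with made-up sample values, illustrates this.

```python
import numpy as np

signal = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0])

# Discrete analogue of delta(t - T): a unit impulse delayed by two samples.
delayed_impulse = np.array([0.0, 0.0, 1.0])

shifted = np.convolve(signal, delayed_impulse)
print(shifted)   # [0. 0. 1. 2. 3. 4. 0. 0.] -- the original sequence delayed by two samples
```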
Composition with a function
More generally, the delta distribution may be composed with a smooth function g(x) in such a way that the familiar change of variables formula holds, that
∫_ℝ δ(g(x)) f(g(x)) |g′(x)| dx = ∫_{g(ℝ)} δ(u) f(u) du
provided that g is a continuously differentiable function with g′ nowhere zero. That is, there is a unique way to assign meaning to the distribution δ∘g so that this identity holds for all compactly supported test functions f. Therefore, the domain must be broken up to exclude any point where g′ vanishes. This distribution satisfies δ(g(x)) = 0 if g is nowhere zero, and otherwise, if g has a real root at x_0, then
δ(g(x)) = δ(x − x_0)/|g′(x_0)|.
It is natural therefore to define the composition δ(g(x)) for continuously differentiable functions g by
δ(g(x)) = Σ_i δ(x − x_i)/|g′(x_i)|
where the sum extends over all roots x_i of g(x), which are assumed to be simple. Thus, for example
δ(x² − α²) = (1/(2|α|)) [δ(x + α) + δ(x − α)].
In the integral form, the generalized scaling property may be written as
∫_{−∞}^{∞} f(x) δ(g(x)) dx = Σ_i f(x_i)/|g′(x_i)|.
Indefinite integral
For a constant and a "well-behaved" arbitrary real-valued function ,
where is the Heaviside step function and is an integration constant.
Properties in n dimensions
The delta distribution in an -dimensional space satisfies the following scaling property instead,
so that is a homogeneous distribution of degree .
Under any reflection or rotation , the delta function is invariant,
As in the one-variable case, it is possible to define the composition of with a bi-Lipschitz function uniquely so that the following holds
for all compactly supported functions .
Using the coarea formula from geometric measure theory, one can also define the composition of the delta function with a submersion from one Euclidean space to another one of different dimension; the result is a type of current. In the special case of a continuously differentiable function such that the gradient of is nowhere zero, the following identity holds
where the integral on the right is over , the -dimensional surface defined by with respect to the Minkowski content measure. This is known as a simple layer integral.
More generally, if is a smooth hypersurface of , then we can associate to the distribution that integrates any compactly supported smooth function over :
where is the hypersurface measure associated to . This generalization is associated with the potential theory of simple layer potentials on . If is a domain in with smooth boundary , then is equal to the normal derivative of the indicator function of in the distribution sense,
where is the outward normal. For a proof, see e.g. the article on the surface delta function.
In three dimensions, the delta function is represented in spherical coordinates by:
Fourier transform
The delta function is a tempered distribution, and therefore it has a well-defined Fourier transform. Formally, one finds
Properly speaking, the Fourier transform of a distribution is defined by imposing self-adjointness of the Fourier transform under the duality pairing of tempered distributions with Schwartz functions. Thus is defined as the unique tempered distribution satisfying
for all Schwartz functions . And indeed it follows from this that
As a result of this identity, the convolution of the delta function with any other tempered distribution is simply :
That is to say that is an identity element for the convolution on tempered distributions, and in fact, the space of compactly supported distributions under convolution is an associative algebra with identity the delta function. This property is fundamental in signal processing, as convolution with a tempered distribution is a linear time-invariant system, and applying the linear time-invariant system measures its impulse response. The impulse response can be computed to any desired degree of accuracy by choosing a suitable approximation for , and once it is known, it characterizes the system completely. See .
The inverse Fourier transform of the tempered distribution is the delta function. Formally, this is expressed as
and more rigorously, it follows since
for all Schwartz functions .
In these terms, the delta function provides a suggestive statement of the orthogonality property of the Fourier kernel on . Formally, one has
This is, of course, shorthand for the assertion that the Fourier transform of the tempered distribution
is
which again follows by imposing self-adjointness of the Fourier transform.
By analytic continuation of the Fourier transform, the Laplace transform of the delta function is found to be
Derivatives
The derivative of the Dirac delta distribution, denoted and also called the Dirac delta prime or Dirac delta derivative as described in Laplacian of the indicator, is defined on compactly supported smooth test functions by
The first equality here is a kind of integration by parts, for if were a true function then
By mathematical induction, the -th derivative of is defined similarly as the distribution given on test functions by
In particular, is an infinitely differentiable distribution.
The first derivative of the delta function is the distributional limit of the difference quotients:
More properly, one has
where is the translation operator, defined on functions by , and on a distribution by
In the theory of electromagnetism, the first derivative of the delta function represents a point magnetic dipole situated at the origin. Accordingly, it is referred to as a dipole or the doublet function.
The derivative of the delta function satisfies a number of basic properties, including:
which can be shown by applying a test function and integrating by parts.
The latter of these properties can also be demonstrated by applying the distributional derivative definition, Leibniz's theorem and the linearity of the inner product:
Furthermore, the convolution of with a compactly-supported, smooth function is
which follows from the properties of the distributional derivative of a convolution.
Higher dimensions
More generally, on an open set in the -dimensional Euclidean space , the Dirac delta distribution centered at a point is defined by
for all , the space of all smooth functions with compact support on . If is any multi-index with and denotes the associated mixed partial derivative operator, then the -th derivative of is given by
That is, the -th derivative of is the distribution whose value on any test function is the -th derivative of at (with the appropriate positive or negative sign).
The first partial derivatives of the delta function are thought of as double layers along the coordinate planes. More generally, the normal derivative of a simple layer supported on a surface is a double layer supported on that surface and represents a laminar magnetic monopole. Higher derivatives of the delta function are known in physics as multipoles.
Higher derivatives enter into mathematics naturally as the building blocks for the complete structure of distributions with point support. If is any distribution on supported on the set consisting of a single point, then there is an integer and coefficients such that
Representations of the delta function
The delta function can be viewed as the limit of a sequence of functions
where is sometimes called a nascent delta function. This limit is meant in a weak sense: either that
for all continuous functions having compact support, or that this limit holds for all smooth functions with compact support. The difference between these two slightly different modes of weak convergence is often subtle: the former is convergence in the vague topology of measures, and the latter is convergence in the sense of distributions.
Approximations to the identity
Typically a nascent delta function can be constructed in the following manner. Let be an absolutely integrable function on of total integral , and define
In dimensions, one uses instead the scaling
Then a simple change of variables shows that also has integral . One may show that holds for all continuous compactly supported functions , and so converges weakly to in the sense of measures.
The nascent delta functions constructed in this way are known as an approximation to the identity. This terminology is used because the space L1 of absolutely integrable functions is closed under the operation of convolution of functions: f ∗ g ∈ L1 whenever f and g are in L1. However, there is no identity in L1 for the convolution product: no element h such that f ∗ h = f for all f. Nevertheless, the sequence does approximate such an identity in the sense that
This limit holds in the sense of mean convergence (convergence in ). Further conditions on the , for instance that it be a mollifier associated to a compactly supported function, are needed to ensure pointwise convergence almost everywhere.
If the initial is itself smooth and compactly supported then the sequence is called a mollifier. The standard mollifier is obtained by choosing to be a suitably normalized bump function, for instance
In some situations such as numerical analysis, a piecewise linear approximation to the identity is desirable. This can be obtained by taking to be a hat function. With this choice of , one has
which are all continuous and compactly supported, although not smooth and so not a mollifier.
Probabilistic considerations
In the context of probability theory, it is natural to impose the additional condition that the initial in an approximation to the identity should be positive, as such a function then represents a probability distribution. Convolution with a probability distribution is sometimes favorable because it does not result in overshoot or undershoot, as the output is a convex combination of the input values, and thus falls between the maximum and minimum of the input function. Taking to be any probability distribution at all, and letting as above will give rise to an approximation to the identity. In general this converges more rapidly to a delta function if, in addition, has mean and has small higher moments. For instance, if is the uniform distribution on also known as the rectangular function, then:
Another example is with the Wigner semicircle distribution
This is continuous and compactly supported, but not a mollifier because it is not smooth.
Semigroups
Nascent delta functions often arise as convolution semigroups. This amounts to the further constraint that the convolution of with must satisfy
for all . Convolution semigroups in that form a nascent delta function are always an approximation to the identity in the above sense, however the semigroup condition is quite a strong restriction.
In practice, semigroups approximating the delta function arise as fundamental solutions or Green's functions to physically motivated elliptic or parabolic partial differential equations. In the context of applied mathematics, semigroups arise as the output of a linear time-invariant system. Abstractly, if A is a linear operator acting on functions of x, then a convolution semigroup arises by solving the initial value problem
in which the limit is as usual understood in the weak sense. Setting gives the associated nascent delta function.
Some examples of physically important convolution semigroups arising from such a fundamental solution include the following.
The heat kernel
The heat kernel, defined by
represents the temperature in an infinite wire at time , if a unit of heat energy is stored at the origin of the wire at time . This semigroup evolves according to the one-dimensional heat equation:
In probability theory, is a normal distribution of variance and mean . It represents the probability density at time of the position of a particle starting at the origin following a standard Brownian motion. In this context, the semigroup condition is then an expression of the Markov property of Brownian motion.
In higher-dimensional Euclidean space , the heat kernel is
and has the same physical interpretation, . It also represents a nascent delta function in the sense that in the distribution sense as .
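A numerical sketch of the heat kernel as a nascent delta function and of the semigroup (convolution) property follows; it assumes the parameterization of the kernel as a Gaussian of variance t, consistent with the probabilistic description above, and uses an illustrative grid.

```python
import numpy as np

def heat_kernel(x, t):
    """Gaussian of variance t (one standard parameterization, assumed here): the
    density at time t of a Brownian motion started at the origin."""
    return np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

x = np.linspace(-5.0, 5.0, 4001)
dx = x[1] - x[0]

# As t -> 0 the kernel narrows and its peak grows while the total mass stays 1:
# a nascent delta function.
for t in (1.0, 0.1, 0.01):
    k = heat_kernel(x, t)
    print(f"t = {t:5.2f}   mass = {np.sum(k) * dx:.4f}   peak = {k.max():.2f}")

# Semigroup property: the kernel at time 0.3 convolved with the kernel at time 0.2
# reproduces the kernel at time 0.5 (checked on the middle part of the grid).
conv = np.convolve(heat_kernel(x, 0.3), heat_kernel(x, 0.2)) * dx
middle = conv[len(conv) // 2 - 2000 : len(conv) // 2 + 2001]
print("semigroup holds:", np.allclose(middle, heat_kernel(x, 0.5), atol=1e-4))
```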
The Poisson kernel
The Poisson kernel
is the fundamental solution of the Laplace equation in the upper half-plane. It represents the electrostatic potential in a semi-infinite plate whose potential along the edge is held fixed at the delta function. The Poisson kernel is also closely related to the Cauchy distribution and to the Epanechnikov and Gaussian kernel functions. This semigroup evolves according to the equation
where the operator is rigorously defined as the Fourier multiplier
Oscillatory integrals
In areas of physics such as wave propagation and wave mechanics, the equations involved are hyperbolic and so may have more singular solutions. As a result, the nascent delta functions that arise as fundamental solutions of the associated Cauchy problems are generally oscillatory integrals. An example, which comes from a solution of the Euler–Tricomi equation of transonic gas dynamics, is the rescaled Airy function
Although, using the Fourier transform, it is easy to see that this generates a semigroup in some sense, it is not absolutely integrable and so cannot define a semigroup in the above strong sense. Many nascent delta functions constructed as oscillatory integrals only converge in the sense of distributions (an example is the Dirichlet kernel below), rather than in the sense of measures.
Another example is the Cauchy problem for the wave equation in :
The solution represents the displacement from equilibrium of an infinite elastic string, with an initial disturbance at the origin.
Other approximations to the identity of this kind include the sinc function (used widely in electronics and telecommunications)
and the Bessel function
Plane wave decomposition
One approach to the study of a linear partial differential equation
where is a differential operator on , is to seek first a fundamental solution, which is a solution of the equation
When is particularly simple, this problem can often be resolved using the Fourier transform directly (as in the case of the Poisson kernel and heat kernel already mentioned). For more complicated operators, it is sometimes easier first to consider an equation of the form
where is a plane wave function, meaning that it has the form
for some vector . Such an equation can be resolved (if the coefficients of are analytic functions) by the Cauchy–Kovalevskaya theorem or (if the coefficients of are constant) by quadrature. So, if the delta function can be decomposed into plane waves, then one can in principle solve linear partial differential equations.
Such a decomposition of the delta function into plane waves was part of a general technique first introduced essentially by Johann Radon, and then developed in this form by Fritz John (1955). Choose so that is an even integer, and for a real number , put
Then is obtained by applying a power of the Laplacian to the integral with respect to the unit sphere measure of for in the unit sphere :
The Laplacian here is interpreted as a weak derivative, so that this equation is taken to mean that, for any test function ,
The result follows from the formula for the Newtonian potential (the fundamental solution of Poisson's equation). This is essentially a form of the inversion formula for the Radon transform because it recovers the value of from its integrals over hyperplanes. For instance, if is odd and , then the integral on the right hand side is
where is the Radon transform of :
An alternative equivalent expression of the plane wave decomposition is:
Fourier kernels
In the study of Fourier series, a major question consists of determining whether and in what sense the Fourier series associated with a periodic function converges to the function. The -th partial sum of the Fourier series of a function of period is defined by convolution (on the interval ) with the Dirichlet kernel:
Thus,
where
A fundamental result of elementary Fourier series states that the Dirichlet kernel restricted to the interval tends to a multiple of the delta function as . This is interpreted in the distribution sense, that
for every compactly supported function . Thus, formally one has
on the interval .
Despite this, the result does not hold for all compactly supported functions: that is, the Dirichlet kernel does not converge weakly in the sense of measures. The lack of convergence of the Fourier series has led to the introduction of a variety of summability methods to produce convergence. The method of Cesàro summation leads to the Fejér kernel
The Fejér kernels tend to the delta function in the stronger sense that
for every compactly supported function . The implication is that the Fourier series of any continuous function is Cesàro summable to the value of the function at every point.
Hilbert space theory
The Dirac delta distribution is a densely defined unbounded linear functional on the Hilbert space L2 of square-integrable functions. Indeed, smooth compactly supported functions are dense in , and the action of the delta distribution on such functions is well-defined. In many applications, it is possible to identify subspaces of and to give a stronger topology on which the delta function defines a bounded linear functional.
Sobolev spaces
The Sobolev embedding theorem for Sobolev spaces on the real line implies that any square-integrable function such that
is automatically continuous, and satisfies in particular
Thus is a bounded linear functional on the Sobolev space . Equivalently is an element of the continuous dual space of . More generally, in dimensions, one has provided .
Spaces of holomorphic functions
In complex analysis, the delta function enters via Cauchy's integral formula, which asserts that if is a domain in the complex plane with smooth boundary, then
for all holomorphic functions in that are continuous on the closure of . As a result, the delta function is represented in this class of holomorphic functions by the Cauchy integral:
Moreover, let be the Hardy space consisting of the closure in of all holomorphic functions in continuous up to the boundary of . Then functions in uniquely extend to holomorphic functions in , and the Cauchy integral formula continues to hold. In particular for , the delta function is a continuous linear functional on . This is a special case of the situation in several complex variables in which, for smooth domains , the Szegő kernel plays the role of the Cauchy integral.
Another representation of the delta function in a space of holomorphic functions is on the space of square-integrable holomorphic functions in an open set . This is a closed subspace of , and therefore is a Hilbert space. On the other hand, the functional that evaluates a holomorphic function in at a point of is a continuous functional, and so by the Riesz representation theorem, is represented by integration against a kernel , the Bergman kernel. This kernel is the analog of the delta function in this Hilbert space. A Hilbert space having such a kernel is called a reproducing kernel Hilbert space. In the special case of the unit disc, one has
Resolutions of the identity
Given a complete orthonormal basis set of functions in a separable Hilbert space, for example, the normalized eigenvectors of a compact self-adjoint operator, any vector can be expressed as
The coefficients {αn} are found as
which may be represented by the notation:
a form of the bra–ket notation of Dirac. Adopting this notation, the expansion of takes the dyadic form:
Letting denote the identity operator on the Hilbert space, the expression
is called a resolution of the identity. When the Hilbert space is the space of square-integrable functions on a domain , the quantity:
is an integral operator, and the expression for can be rewritten
The right-hand side converges to the given function in the L2 sense. The convergence need not hold in a pointwise sense, even when the function is continuous. Nevertheless, it is common to abuse notation and write
resulting in the representation of the delta function:
With a suitable rigged Hilbert space where contains all compactly supported smooth functions, this summation may converge in , depending on the properties of the basis . In most cases of practical interest, the orthonormal basis comes from an integral or differential operator, in which case the series converges in the distribution sense.
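As a concrete finite truncation of such a resolution of the identity, the sketch below applies the partial sum of projectors built from the orthonormal sine basis on (0, π) to a smooth test function and evaluates the result at a point. The basis, test function and truncation order are illustrative assumptions, not part of the original discussion.

```python
import numpy as np

# Truncated resolution of the identity for the orthonormal basis
# phi_n(x) = sqrt(2/pi) * sin(n*x) on (0, pi): applying sum_n |phi_n><phi_n| to a
# smooth test function and evaluating at x0 recovers f(x0) as N grows.
f = lambda x: x * (np.pi - x)          # arbitrary smooth test function vanishing at 0, pi
x0, N = 1.2, 50
y = np.linspace(0.0, np.pi, 20001)

approx = 0.0
for n in range(1, N + 1):
    phi_x0 = np.sqrt(2 / np.pi) * np.sin(n * x0)
    coeff = np.trapz(np.sqrt(2 / np.pi) * np.sin(n * y) * f(y), y)   # <phi_n, f>
    approx += phi_x0 * coeff

print(approx, f(x0))   # the two numbers agree to several decimal places
```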
Infinitesimal delta functions
In a number of articles published in 1827, Cauchy used an infinitesimal to write down a unit impulse, an infinitely tall and narrow Dirac-type delta function satisfying the sifting property. Cauchy defined an infinitesimal in Cours d'Analyse (1827) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology.
Non-standard analysis allows one to rigorously treat infinitesimals. The article by contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals. Here the Dirac delta can be given by an actual function, having the property that for every real function one has as anticipated by Fourier and Cauchy.
Dirac comb
A so-called uniform "pulse train" of Dirac delta measures, which is known as a Dirac comb, or as the Sha distribution, creates a sampling function, often used in digital signal processing (DSP) and discrete time signal analysis. The Dirac comb is given as the infinite sum, whose limit is understood in the distribution sense, Ш(x) = Σ_n δ(x - n) (the sum taken over all integers n),
which is a sequence of point masses at each of the integers.
Up to an overall normalizing constant, the Dirac comb is equal to its own Fourier transform. This is significant because if is any Schwartz function, then the periodization of is given by the convolution
In particular,
is precisely the Poisson summation formula.
More generally, this formula remains true if is a tempered distribution of rapid descent or, equivalently, if is a slowly growing, ordinary function within the space of tempered distributions.
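The Poisson summation formula can be checked numerically with a Gaussian, whose Fourier transform is known in closed form; the width parameter below is an arbitrary choice for the example.

```python
import numpy as np

# Numerical check of Poisson summation: sum_n f(n) = sum_k f_hat(k).
# With the convention f_hat(xi) = integral of f(x) * exp(-2*pi*i*x*xi) dx, the Gaussian
# f(x) = exp(-pi*a*x**2) has f_hat(xi) = a**-0.5 * exp(-pi*xi**2 / a).
a = 0.7
n = np.arange(-50, 51)

lhs = np.sum(np.exp(-np.pi * a * n**2))              # samples of f at the integers
rhs = np.sum(a**-0.5 * np.exp(-np.pi * n**2 / a))    # samples of f_hat at the integers
print(lhs, rhs)                                      # equal to machine precision
```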
Sokhotski–Plemelj theorem
The Sokhotski–Plemelj theorem, important in quantum mechanics, relates the delta function to the distribution , the Cauchy principal value of the function , defined by
Sokhotsky's formula states that
Here the limit is understood in the distribution sense, that for all compactly supported smooth functions ,
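A minimal numerical check of Sokhotsky's formula, in the form lim ε→0+ 1/(x - iε) = P(1/x) + iπδ(x): pairing with an even Gaussian test function makes the principal-value contribution vanish by symmetry, so the imaginary part of the integral should approach π times the value of the test function at zero. The test function and the sequence of ε values are arbitrary choices.

```python
import numpy as np

# Pair 1/(x - i*eps) with an even test function f; the imaginary part of the integral
# tends to pi * f(0) as eps -> 0+, while the real (principal-value) part vanishes by symmetry.
f = lambda x: np.exp(-x**2)
x = np.linspace(-50, 50, 2_000_001)

for eps in (1e-1, 1e-2, 1e-3):
    val = np.trapz(f(x) / (x - 1j * eps), x)
    print(eps, val.imag, np.pi * f(0.0))   # imaginary part approaches pi
```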
Relationship to the Kronecker delta
The Kronecker delta δ_ij is the quantity defined by δ_ij = 1 if i = j and δ_ij = 0 otherwise,
for all integers i, j. This function then satisfies the following analog of the sifting property: if (a_n) (for n ranging over the set of all integers) is any doubly infinite sequence, then Σ_n a_n δ_nk = a_k.
Similarly, for any real or complex valued continuous function f on the real line, the Dirac delta satisfies the sifting property ∫ f(x) δ(x - x0) dx = f(x0), the integral being taken over the whole real line.
This exhibits the Kronecker delta function as a discrete analog of the Dirac delta function.
Applications
Probability theory
In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a partially discrete, partially continuous distribution, using a probability density function (which is normally used to represent absolutely continuous distributions). For example, the probability density function of a discrete distribution consisting of points , with corresponding probabilities , can be written as
As another example, consider a distribution in which 6/10 of the time returns a standard normal distribution, and 4/10 of the time returns exactly the value 3.5 (i.e. a partly continuous, partly discrete mixture distribution). The density function of this distribution can be written as
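The mixture just described can be sampled directly by treating the delta component as a point mass; a quick Monte Carlo check recovers its weight and the mixture mean. The sample size and random seed below are arbitrary.

```python
import numpy as np

# Sampling the partly continuous, partly discrete mixture described above:
# with probability 0.6 draw from a standard normal, with probability 0.4 return exactly 3.5.
rng = np.random.default_rng(0)
n = 1_000_000
is_atom = rng.random(n) < 0.4
samples = np.where(is_atom, 3.5, rng.standard_normal(n))

print(samples.mean())              # ~1.4 = 0.6*0 + 0.4*3.5
print((samples == 3.5).mean())     # ~0.4, the weight of the delta component
```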
The delta function is also used to represent the resulting probability density function of a random variable that is transformed by a continuously differentiable function. If is a continuously differentiable function, then the density of can be written as
The delta function is also used in a completely different way to represent the local time of a diffusion process (like Brownian motion). The local time of a stochastic process is given by
and represents the amount of time that the process spends at the point in the range of the process. More precisely, in one dimension this integral can be written
where is the indicator function of the interval
Quantum mechanics
The delta function is expedient in quantum mechanics. The wave function of a particle gives the probability amplitude of finding a particle within a given region of space. Wave functions are assumed to be elements of the Hilbert space of square-integrable functions, and the total probability of finding a particle within a given interval is the integral of the magnitude of the wave function squared over the interval. A set of wave functions is orthonormal if
where is the Kronecker delta. A set of orthonormal wave functions is complete in the space of square-integrable functions if any wave function can be expressed as a linear combination of the with complex coefficients:
where . Complete orthonormal systems of wave functions appear naturally in quantum mechanics as the eigenfunctions of the Hamiltonian of a bound system, the operator that measures the energy levels, which are called its eigenvalues. The set of eigenvalues, in this case, is known as the spectrum of the Hamiltonian. In bra–ket notation this equality implies the resolution of the identity:
Here the eigenvalues are assumed to be discrete, but the set of eigenvalues of an observable can also be continuous. An example is the position operator, . The spectrum of the position (in one dimension) is the entire real line and is called a continuous spectrum. However, unlike the Hamiltonian, the position operator lacks proper eigenfunctions. The conventional way to overcome this shortcoming is to widen the class of available functions by allowing distributions as well, i.e., to replace the Hilbert space with a rigged Hilbert space. In this context, the position operator has a complete set of "generalized eigenfunctions", labeled by the points of the real line, given by
The generalized eigenfunctions of the position operator are called the eigenkets and are denoted by .
Similar considerations apply to any other (unbounded) self-adjoint operator with continuous spectrum and no degenerate eigenvalues, such as the momentum operator . In that case, there is a set of real numbers (the spectrum) and a collection of distributions with such that
That is, are the generalized eigenvectors of . If they form an "orthonormal basis" in the distribution sense, that is:
then for any test function ,
where . That is, there is a resolution of the identity
where the operator-valued integral is again understood in the weak sense. If the spectrum of has both continuous and discrete parts, then the resolution of the identity involves a summation over the discrete spectrum and an integral over the continuous spectrum.
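In the purely discrete case, the resolution of the identity has an elementary finite-dimensional analogue: summing the projectors onto an orthonormal eigenbasis of a Hermitian matrix reproduces the identity matrix. The sketch below uses a randomly generated 4×4 Hermitian matrix purely as a stand-in for an observable with a discrete spectrum.

```python
import numpy as np

# Finite-dimensional analogue of the resolution of the identity: sum over the
# orthonormal eigenbasis of the projectors |n><n| and recover the identity operator.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2                       # Hermitian "Hamiltonian"

eigvals, eigvecs = np.linalg.eigh(H)           # orthonormal eigenvectors in the columns
I_rebuilt = sum(np.outer(eigvecs[:, n], eigvecs[:, n].conj()) for n in range(4))

print(np.allclose(I_rebuilt, np.eye(4)))       # True
```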
The delta function also has many more specialized applications in quantum mechanics, such as the delta potential models for a single and double potential well.
Structural mechanics
The delta function can be used in structural mechanics to describe transient loads or point loads acting on structures. The governing equation of a simple mass–spring system excited by a sudden force impulse at time can be written
where is the mass, is the deflection, and is the spring constant.
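The impulse response of this undamped mass–spring system can be checked numerically by replacing the delta function with a tall, narrow rectangular pulse; for a unit impulse the classical result is x(t) = sin(ωt)/(mω) with ω = √(k/m). The mass, stiffness and pulse width below are assumed values used only for illustration.

```python
import numpy as np

# m*x'' + k*x = I*delta(t), with delta(t) approximated by a pulse of width tau, height I/tau.
m, k, I = 2.0, 8.0, 1.0          # mass [kg], spring constant [N/m], impulse [N*s] (assumed)
omega = np.sqrt(k / m)
tau, dt, T = 1e-4, 1e-5, 3.0

x, v, t = 0.0, 0.0, 0.0
while t < T:
    F = I / tau if t < tau else 0.0          # narrow pulse standing in for I*delta(t)
    v += (F - k * x) / m * dt                # semi-implicit (symplectic) Euler step
    x += v * dt
    t += dt

# Classical impulse response of the undamped oscillator: x(t) = I/(m*omega) * sin(omega*t)
print(x, I / (m * omega) * np.sin(omega * t))   # the two values agree closely
```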
As another example, the equation governing the static deflection of a slender beam is, according to Euler–Bernoulli theory,
where is the bending stiffness of the beam, is the deflection, is the spatial coordinate, and is the load distribution. If a beam is loaded by a point force at , the load distribution is written
As the integration of the delta function results in the Heaviside step function, it follows that the static deflection of a slender beam subject to multiple point loads is described by a set of piecewise polynomials.
Also, a point moment acting on a beam can be described by delta functions. Consider two opposing point forces at a distance apart. They then produce a moment acting on the beam. Now, let the distance approach the limit zero, while is kept constant. The load distribution, assuming a clockwise moment acting at , is written
Point moments can thus be represented by the derivative of the delta function. Integration of the beam equation again results in piecewise polynomial deflection.
See also
Atom (measure theory)
Laplacian of the indicator
Notes
References
External links
KhanAcademy.org video lesson
The Dirac Delta function, a tutorial on the Dirac delta function.
Video Lectures – Lecture 23, a lecture by Arthur Mattuck.
The Dirac delta measure is a hyperfunction
We show the existence of a unique solution and analyze a finite element approximation when the source term is a Dirac delta measure
Non-Lebesgue measures on R. Lebesgue-Stieltjes measure, Dirac delta measure.
Fourier analysis
Generalized functions
Measure theory
Digital signal processing
Delta function
Schwartz distributions | 0.775261 | 0.999444 | 0.77483 |
Ionizing radiation | Ionizing radiation (US, ionising radiation in the UK), including nuclear radiation, consists of subatomic particles or electromagnetic waves that have sufficient energy to ionize atoms or molecules by detaching electrons from them. Some particles can travel up to 99% of the speed of light, and the electromagnetic waves are on the high-energy portion of the electromagnetic spectrum.
Gamma rays, X-rays, and the higher energy ultraviolet part of the electromagnetic spectrum are ionizing radiation, whereas the lower energy ultraviolet, visible light, nearly all types of laser light, infrared, microwaves, and radio waves are non-ionizing radiation. The boundary between ionizing and non-ionizing radiation in the ultraviolet area cannot be sharply defined, as different molecules and atoms ionize at different energies. The energy of ionizing radiation starts between 10 electronvolts (eV) and 33 eV.
Ionizing subatomic particles include alpha particles, beta particles, and neutrons. These particles are created by radioactive decay, and almost all are energetic enough to ionize. There are also secondary cosmic particles produced after cosmic rays interact with Earth's atmosphere, including muons, mesons, and positrons. Cosmic rays may also produce radioisotopes on Earth (for example, carbon-14), which in turn decay and emit ionizing radiation. Cosmic rays and the decay of radioactive isotopes are the primary sources of natural ionizing radiation on Earth, contributing to background radiation. Ionizing radiation is also generated artificially by X-ray tubes, particle accelerators, and nuclear fission.
Ionizing radiation is not immediately detectable by human senses, so instruments such as Geiger counters are used to detect and measure it. However, very high energy particles can produce visible effects on both organic and inorganic matter (e.g. the blue glow of Cherenkov radiation in water) or humans (e.g. acute radiation syndrome).
Ionizing radiation is used in a wide variety of fields such as medicine, nuclear power, research, and industrial manufacturing, but presents a health hazard if proper measures against excessive exposure are not taken. Exposure to ionizing radiation causes cell damage to living tissue and organ damage. In high acute doses, it will result in radiation burns and radiation sickness, and lower level doses over a protracted time can cause cancer. The International Commission on Radiological Protection (ICRP) issues guidance on ionizing radiation protection, and the effects of dose uptake on human health.
Directly ionizing radiation
Ionizing radiation may be grouped as directly or indirectly ionizing.
Any charged particle with mass can ionize atoms directly by fundamental interaction through the Coulomb force if it carries sufficient kinetic energy. Such particles include atomic nuclei, electrons, muons, charged pions, protons, and energetic charged nuclei stripped of their electrons. When moving at relativistic speeds (near the speed of light, c) these particles have enough kinetic energy to be ionizing, but there is considerable speed variation. For example, a typical alpha particle moves at about 5% of c, but an electron with 33 eV (just enough to ionize) moves at about 1% of c.
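The quoted speeds follow from the relativistic relation v/c = √(1 - 1/γ²) with γ = 1 + KE/(mc²). The short check below assumes a typical 5 MeV decay alpha particle and a 33 eV electron; these energies are illustrative inputs, not part of any standard.

```python
# Speed (as a fraction of c) of a particle of given kinetic energy and rest energy.
def beta(ke_ev, rest_energy_ev):
    gamma = 1.0 + ke_ev / rest_energy_ev
    return (1.0 - 1.0 / gamma**2) ** 0.5

print(beta(33, 511e3))        # 33 eV electron -> ~0.011 c (about 1% of c)
print(beta(5e6, 3.727e9))     # 5 MeV alpha    -> ~0.052 c (about 5% of c)
```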
Two of the first types of directly ionizing radiation to be discovered are alpha particles which are helium nuclei ejected from the nucleus of an atom during radioactive decay, and energetic electrons, which are called beta particles.
Natural cosmic rays are made up primarily of relativistic protons but also include heavier atomic nuclei like helium ions and HZE ions. In the atmosphere such particles are often stopped by air molecules, and this produces short-lived charged pions, which soon decay to muons, a primary type of cosmic ray radiation that reaches the surface of the earth. Pions can also be produced in large amounts in particle accelerators.
Alpha particles
Alpha particles consist of two protons and two neutrons bound together into a particle identical to a helium nucleus. Alpha particle emissions are generally produced in the process of alpha decay.
Alpha particles are a strongly ionizing form of radiation, but when emitted by radioactive decay they have low penetration power and can be absorbed by a few centimeters of air, or by the top layer of human skin. More powerful alpha particles from ternary fission are three times as energetic, and penetrate proportionately farther in air. The helium nuclei that form 10–12% of cosmic rays are also usually of much higher energy than those produced by radioactive decay and pose shielding problems in space. However, this type of radiation is significantly absorbed by the Earth's atmosphere, which is a radiation shield equivalent to about 10 meters of water.
The alpha particle was named by Ernest Rutherford after the first letter in the Greek alphabet, α, when he ranked the known radioactive emissions in descending order of ionising effect in 1899. The symbol is α or α2+. Because they are identical to helium nuclei, they are also sometimes written as or indicating a Helium ion with a +2 charge (missing its two electrons). If the ion gains electrons from its environment, the alpha particle can be written as a normal (electrically neutral) helium atom .
Beta particles
Beta particles are high-energy, high-speed electrons or positrons emitted by certain types of radioactive nuclei, such as potassium-40. The production of beta particles is termed beta decay. They are designated by the Greek letter beta (β). There are two forms of beta decay, β− and β+, which respectively give rise to the electron and the positron. Beta particles are much less penetrating than gamma radiation, but more penetrating than alpha particles.
High-energy beta particles may produce X-rays known as bremsstrahlung ("braking radiation") or secondary electrons (delta ray) as they pass through matter. Both of these can cause an indirect ionization effect. Bremsstrahlung is of concern when shielding beta emitters, as the interaction of beta particles with some shielding materials produces Bremsstrahlung. The effect is greater with material having high atomic numbers, so material with low atomic numbers is used for beta source shielding.
Positrons and other types of antimatter
The positron or antielectron is the antiparticle or the antimatter counterpart of the electron. When a low-energy positron collides with a low-energy electron, annihilation occurs, resulting in their conversion into the energy of two or more gamma ray photons (see electron–positron annihilation). As positrons are positively charged particles they can directly ionize an atom through Coulomb interactions.
Positrons can be generated by positron emission nuclear decay (through weak interactions), or by pair production from a sufficiently energetic photon. Positrons are common artificial sources of ionizing radiation used in medical positron emission tomography (PET) scans.
Charged nuclei
Charged nuclei are characteristic of galactic cosmic rays and solar particle events and except for alpha particles (charged helium nuclei) have no natural sources on earth. In space, however, very high energy protons, helium nuclei, and HZE ions can be initially stopped by relatively thin layers of shielding, clothes, or skin. However, the resulting interaction will generate secondary radiation and cause cascading biological effects. If just one atom of tissue is displaced by an energetic proton, for example, the collision will cause further interactions in the body. This is called "linear energy transfer" (LET), which utilizes elastic scattering.
LET can be visualized as a billiard ball hitting another in the manner of the conservation of momentum, sending both away with the energy of the first ball divided between the two unequally. When a charged nucleus strikes a relatively slow-moving nucleus of an object in space, LET occurs and neutrons, alpha particles, low-energy protons, and other nuclei will be released by the collisions and contribute to the total absorbed dose of tissue.
Indirectly ionizing radiation
Indirectly ionizing radiation is electrically neutral and does not interact strongly with matter, therefore the bulk of the ionization effects are due to secondary ionization.
Photon radiation
Even though photons are electrically neutral, they can ionize atoms indirectly through the photoelectric effect and the Compton effect. Either of those interactions will cause the ejection of an electron from an atom at relativistic speeds, turning that electron into a beta particle (secondary beta particle) that will ionize other atoms. Since most of the ionized atoms are due to the secondary beta particles, photons are indirectly ionizing radiation.
Radiated photons are called gamma rays if they are produced by a nuclear reaction, subatomic particle decay, or radioactive decay within the nucleus. They are called x-rays if produced outside the nucleus. The generic term "photon" is used to describe both.
X-rays normally have a lower energy than gamma rays, and an older convention was to define the boundary as a wavelength of 10^−11 m (or a photon energy of 100 keV). That threshold was driven by historic limitations of older X-ray tubes and low awareness of isomeric transitions. Modern technologies and discoveries have shown an overlap between X-ray and gamma energies. In many fields they are functionally identical, differing for terrestrial studies only in origin of the radiation. In astronomy, however, where radiation origin often cannot be reliably determined, the old energy division has been preserved, with X-rays defined as being between about 120 eV and 120 keV, and gamma rays as being of any energy above 100 to 120 keV, regardless of source. Most sources studied in gamma-ray astronomy are known not to originate in nuclear radioactive processes but, rather, result from processes like those that produce astronomical X-rays, except driven by much more energetic electrons.
Photoelectric absorption is the dominant mechanism in organic materials for photon energies below 100 keV, typical of classical X-ray tube originated X-rays. At energies beyond 100 keV, photons ionize matter increasingly through the Compton effect, and then indirectly through pair production at energies beyond 5 MeV. The accompanying interaction diagram shows two Compton scatterings happening sequentially. In every scattering event, the gamma ray transfers energy to an electron, and it continues on its path in a different direction and with reduced energy.
Definition boundary for lower-energy photons
The lowest ionization energy of any element is 3.89 eV, for caesium. However, US Federal Communications Commission material defines ionizing radiation as that with a photon energy greater than 10 eV (equivalent to a far ultraviolet wavelength of 124 nanometers). Roughly, this corresponds to both the first ionization energy of oxygen, and the ionization energy of hydrogen, both about 14 eV. In some Environmental Protection Agency references, the ionization of a typical water molecule at an energy of 33 eV is referenced as the appropriate biological threshold for ionizing radiation: this value represents the so-called W-value, the colloquial name for the ICRU's mean energy expended in a gas per ion pair formed, which combines ionization energy plus the energy lost to other processes such as excitation. At 38 nanometers wavelength for electromagnetic radiation, 33 eV is close to the energy at the conventional 10 nm wavelength transition between extreme ultraviolet and X-ray radiation, which occurs at about 125 eV. Thus, X-ray radiation is always ionizing, but only extreme-ultraviolet radiation can be considered ionizing under all definitions.
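The boundary energies quoted above can be converted to wavelengths with the relation E[eV] ≈ 1239.84 / λ[nm]; the short check below reproduces the 124 nm, ~38 nm and 10 nm figures.

```python
# Photon energy/wavelength conversion: E[eV] ~= 1239.84 / wavelength[nm].
hc_ev_nm = 1239.84

for e_ev in (10, 33, 124):
    print(e_ev, "eV ->", round(hc_ev_nm / e_ev, 1), "nm")
# 10 eV  -> ~124 nm (FCC boundary for ionizing radiation)
# 33 eV  -> ~38 nm  (W-value for water)
# 124 eV -> ~10 nm  (conventional extreme-ultraviolet / X-ray boundary)
```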
Neutrons
Neutrons have zero electrical charge and thus often do not directly cause ionization in a single step or interaction with matter. However, fast neutrons will interact with the protons in hydrogen via linear energy transfer, energy that a particle transfers to the material it is moving through. This mechanism scatters the nuclei of the materials in the target area, causing direct ionization of the hydrogen atoms. When neutrons strike the hydrogen nuclei, proton radiation (fast protons) results. These protons are themselves ionizing because they are of high energy, are charged, and interact with the electrons in matter.
Neutrons that strike other nuclei besides hydrogen will transfer less energy to the other particle if linear energy transfer does occur. But, for many nuclei struck by neutrons, inelastic scattering occurs. Whether elastic or inelastic scatter occurs is dependent on the speed of the neutron, whether fast or thermal or somewhere in between. It is also dependent on the nuclei it strikes and its neutron cross section.
In inelastic scattering, neutrons are readily absorbed in a type of nuclear reaction called neutron capture, which results in the neutron activation of the nucleus. Neutron interactions with most types of matter in this manner usually produce radioactive nuclei. The abundant oxygen-16 nucleus, for example, undergoes neutron activation by capturing a fast neutron and promptly emitting a proton, forming nitrogen-16, which decays back to oxygen-16. The short-lived nitrogen-16 decay emits a powerful beta ray. This process can be written as:
16O (n,p) 16N (fast neutron capture possible with >11 MeV neutron)
16N → 16O + β− (Decay t1/2 = 7.13 s)
This high-energy β− further interacts rapidly with other nuclei, emitting high-energy γ via Bremsstrahlung
While not a favorable reaction, the 16O (n,p) 16N reaction is a major source of X-rays emitted from the cooling water of a pressurized water reactor and contributes enormously to the radiation generated by a water-cooled nuclear reactor while operating.
For the best shielding of neutrons, hydrocarbons that have an abundance of hydrogen are used.
In fissile materials, secondary neutrons may produce nuclear chain reactions, causing a larger amount of ionization from the daughter products of fission.
Outside the nucleus, free neutrons are unstable and have a mean lifetime of 14 minutes, 42 seconds. Free neutrons decay by emission of an electron and an electron antineutrino to become a proton, a process known as beta decay:
n → p + e− + ν̄e
In the adjacent diagram, a neutron collides with a proton of the target material, and then becomes a fast recoil proton that ionizes in turn. At the end of its path, the neutron is captured by a nucleus in an (n,γ)-reaction that leads to the emission of a neutron capture photon. Such photons always have enough energy to qualify as ionizing radiation.
Physical effects
Nuclear effects
Neutron radiation, alpha radiation, and extremely energetic gamma (> ~20 MeV) can cause nuclear transmutation and induced radioactivity. The relevant mechanisms are neutron activation, alpha absorption, and photodisintegration. A large enough number of transmutations can change macroscopic properties and cause targets to become radioactive themselves, even after the original source is removed.
Chemical effects
Ionization of molecules can lead to radiolysis (breaking chemical bonds), and formation of highly reactive free radicals. These free radicals may then react chemically with neighbouring materials even after the original radiation has stopped. (e.g., ozone cracking of polymers by ozone formed by ionization of air). Ionizing radiation can also accelerate existing chemical reactions such as polymerization and corrosion, by contributing to the activation energy required for the reaction. Optical materials deteriorate under the effect of ionizing radiation.
High-intensity ionizing radiation in air can produce a visible ionized air glow of telltale bluish-purple color. The glow can be observed, e.g., during criticality accidents, around mushroom clouds shortly after a nuclear explosion, or the inside of a damaged nuclear reactor like during the Chernobyl disaster.
Monatomic fluids, e.g. molten sodium, have no chemical bonds to break and no crystal lattice to disturb, so they are immune to the chemical effects of ionizing radiation. Simple diatomic compounds with very negative enthalpy of formation, such as hydrogen fluoride will reform rapidly and spontaneously after ionization.
Electrical effects
The ionization of materials temporarily increases their conductivity, potentially permitting damaging current levels. This is a particular hazard in semiconductor microelectronics employed in electronic equipment, with subsequent currents introducing operation errors or even permanently damaging the devices. Devices intended for high radiation environments such as the nuclear industry and extra-atmospheric (space) applications may be made radiation hard to resist such effects through design, material selection, and fabrication methods.
Proton radiation found in space can also cause single-event upsets in digital circuits. The electrical effects of ionizing radiation are exploited in gas-filled radiation detectors, e.g. the Geiger-Muller counter or the ion chamber.
Health effects
Most adverse health effects of exposure to ionizing radiation may be grouped in two general categories:
deterministic effects (harmful tissue reactions) due in large part to killing or malfunction of cells following high doses from radiation burns.
stochastic effects, i.e., cancer and heritable effects involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells.
The most common impact is stochastic induction of cancer with a latent period of years or decades after exposure. For example, ionizing radiation is one cause of chronic myelogenous leukemia, although most people with CML have not been exposed to radiation. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial.
The most widely accepted model, the Linear no-threshold model (LNT), holds that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. If this is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second. Other stochastic effects of ionizing radiation are teratogenesis, cognitive decline, and heart disease.
Although DNA is always susceptible to damage by ionizing radiation, the DNA molecule may also be damaged by radiation with enough energy to excite certain molecular bonds to form pyrimidine dimers. This energy may be less than ionizing, but near to it. A good example is ultraviolet spectrum energy which begins at about 3.1 eV (400 nm) at close to the same energy level which can cause sunburn to unprotected skin, as a result of photoreactions in collagen and (in the UV-B range) also damage in DNA (for example, pyrimidine dimers). Thus, the mid and lower ultraviolet electromagnetic spectrum is damaging to biological tissues as a result of electronic excitation in molecules which falls short of ionization, but produces similar non-thermal effects. To some extent, visible light and also ultraviolet A (UVA) which is closest to visible energies, have been proven to result in formation of reactive oxygen species in skin, which cause indirect damage since these are electronically excited molecules which can inflict reactive damage, although they do not cause sunburn (erythema). Like ionization-damage, all these effects in skin are beyond those produced by simple thermal effects.
Measurement of radiation
The table below shows radiation and dose quantities in SI and non-SI units.
Uses of radiation
Ionizing radiation has many industrial, military, and medical uses. Its usefulness must be balanced with its hazards, a compromise that has shifted over time. For example, at one time, assistants in shoe shops in the US used X-rays to check a child's shoe size, but this practice was halted when the risks of ionizing radiation were better understood.
Neutron radiation is essential to the working of nuclear reactors and nuclear weapons. The penetrating power of x-ray, gamma, beta, and positron radiation is used for medical imaging, nondestructive testing, and a variety of industrial gauges. Radioactive tracers are used in medical and industrial applications, as well as biological and radiation chemistry. Alpha radiation is used in static eliminators and smoke detectors. The sterilizing effects of ionizing radiation are useful for cleaning medical instruments, food irradiation, and the sterile insect technique. Measurements of carbon-14 can be used to date the remains of long-dead organisms (such as wood that is thousands of years old).
Sources of radiation
Ionizing radiation is generated through nuclear reactions, nuclear decay, by very high temperature, or via acceleration of charged particles in electromagnetic fields. Natural sources include the sun, lightning and supernova explosions. Artificial sources include nuclear reactors, particle accelerators, and x-ray tubes.
The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) itemized types of human exposures.
The International Commission on Radiological Protection manages the International System of Radiological Protection, which sets recommended limits for dose uptake.
Background radiation
Background radiation comes from both natural and human-made sources.
The global average exposure of humans to ionizing radiation is about 3 mSv (0.3 rem) per year, 80% of which comes from nature. The remaining 20% results from exposure to human-made radiation sources, primarily from medical imaging. Average human-made exposure is much higher in developed countries, mostly due to CT scans and nuclear medicine.
Natural background radiation comes from five primary sources: cosmic radiation, solar radiation, external terrestrial sources, radiation in the human body, and radon.
The background rate for natural radiation varies considerably with location, being as low as 1.5 mSv/a (1.5 mSv per year) in some areas and over 100 mSv/a in others. The highest level of purely natural radiation recorded on the Earth's surface is 90 μGy/h (0.8 Gy/a) on a Brazilian black beach composed of monazite. The highest background radiation in an inhabited area is found in Ramsar, primarily due to naturally radioactive limestone used as a building material. Some 2000 of the most exposed residents receive an average radiation dose of 10 mGy per year (1 rad/yr), ten times more than the ICRP recommended limit for exposure to the public from artificial sources. Record levels were found in a house where the effective radiation dose due to external radiation was 135 mSv/a (13.5 rem/yr) and the committed dose from radon was 640 mSv/a (64.0 rem/yr). This unique case is over 200 times higher than the world average background radiation. Despite the high levels of background radiation that the residents of Ramsar receive, there is no compelling evidence that they experience a greater health risk. The ICRP recommendations are conservative limits and may overstate the actual health risk. Generally, radiation safety organizations recommend the most conservative limits, assuming it is best to err on the side of caution. This level of caution is appropriate but should not be used to create fear about background radiation danger. Background radiation may be a serious threat in some settings, but it is more likely a small overall risk compared to all other factors in the environment.
Cosmic radiation
The Earth, and all living things on it, are constantly bombarded by radiation from outside our solar system. This cosmic radiation consists of relativistic particles: positively charged nuclei (ions) from 1 amu protons (about 85% of it) to 26 amu iron nuclei and even beyond. (The high-atomic number particles are called HZE ions.) The energy of this radiation can far exceed that which humans can create, even in the largest particle accelerators (see ultra-high-energy cosmic ray). This radiation interacts in the atmosphere to create secondary radiation that rains down, including x-rays, muons, protons, antiprotons, alpha particles, pions, electrons, positrons, and neutrons.
The dose from cosmic radiation is largely from muons, neutrons, and electrons, with a dose rate that varies in different parts of the world and based largely on the geomagnetic field, altitude, and solar cycle. The cosmic-radiation dose rate on airplanes is so high that, according to the United Nations UNSCEAR 2000 Report (see links at bottom), airline flight crew workers receive more dose on average than any other worker, including those in nuclear power plants. Airline crews receive more cosmic rays if they routinely work flight routes that take them close to the North or South pole at high altitudes, where this type of radiation is maximal.
Cosmic rays also include high-energy gamma rays, which are far beyond the energies produced by solar or human sources.
External terrestrial sources
Most materials on Earth contain some radioactive atoms, even if in small quantities. Most of the dose received from these sources is from gamma-ray emitters in building materials, or rocks and soil when outside. The major radionuclides of concern for terrestrial radiation are isotopes of potassium, uranium, and thorium. Each of these sources has been decreasing in activity since the formation of the Earth.
Internal radiation sources
All earthly materials that are the building blocks of life contain a radioactive component. As humans, plants, and animals consume food, air, and water, an inventory of radioisotopes builds up within the organism (see banana equivalent dose). Some radionuclides, like potassium-40, emit a high-energy gamma ray that can be measured by sensitive electronic radiation measurement systems. These internal radiation sources contribute to an individual's total radiation dose from natural background radiation.
Radon
An important source of natural radiation is radon gas, which seeps continuously from bedrock but can, because of its high density, accumulate in poorly ventilated houses.
Radon-222 is a gas produced by the α-decay of radium-226. Both are a part of the natural uranium decay chain. Uranium is found in soil throughout the world in varying concentrations. Radon is the largest cause of lung cancer among non-smokers and the second-leading cause overall.
Radiation exposure
There are three standard ways to limit exposure:
Time: For people exposed to radiation in addition to natural background radiation, limiting or minimizing the exposure time will reduce the dose from the source of radiation.
Distance: Radiation intensity decreases sharply with distance, according to an inverse-square law (in an absolute vacuum).
Shielding: Air or skin can be sufficient to substantially attenuate alpha radiation, while sheet metal or plastic is often sufficient to stop beta radiation. Barriers of lead, concrete, or water are often used to give effective protection from more penetrating forms of ionizing radiation such as gamma rays and neutrons. Some radioactive materials are stored or handled underwater or by remote control in rooms constructed of thick concrete or lined with lead. There are special plastic shields that stop beta particles, and air will stop most alpha particles. The effectiveness of a material in shielding radiation is determined by its half-value thicknesses, the thickness of material that reduces the radiation by half. This value is a function of the material itself and of the type and energy of ionizing radiation. Some generally accepted thicknesses of attenuating material are 5 mm of aluminum for most beta particles, and 3 inches of lead for gamma radiation.
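The distance and shielding entries in the list above combine multiplicatively; the sketch below illustrates this with an assumed source strength and an assumed half-value layer, purely for illustration (it treats the source as a point source with negligible attenuation in air).

```python
# Illustrative only: inverse-square law plus exponential attenuation expressed in
# half-value layers. The source strength and HVL below are assumptions, not data.
dose_rate_1m = 400.0      # uSv/h at 1 m from a hypothetical gamma source
hvl_lead_cm = 1.2         # assumed half-value layer of lead for this gamma energy

def dose_rate(distance_m, lead_cm):
    geometric = dose_rate_1m / distance_m**2          # inverse-square law
    shielding = 0.5 ** (lead_cm / hvl_lead_cm)        # each HVL halves the intensity
    return geometric * shielding

print(dose_rate(1.0, 0.0))    # 400   uSv/h: unshielded at 1 m
print(dose_rate(4.0, 0.0))    # 25    uSv/h: distance alone gives a 16x reduction
print(dose_rate(4.0, 3.6))    # ~3.1  uSv/h: distance plus three half-value layers
```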
These can all be applied to natural and human-made sources. For human-made sources the use of Containment is a major tool in reducing dose uptake and is effectively a combination of shielding and isolation from the open environment. Radioactive materials are confined in the smallest possible space and kept out of the environment such as in a hot cell (for radiation) or glove box (for contamination). Radioactive isotopes for medical use, for example, are dispensed in closed handling facilities, usually gloveboxes, while nuclear reactors operate within closed systems with multiple barriers that keep the radioactive materials contained. Work rooms, hot cells and gloveboxes have slightly reduced air pressures to prevent escape of airborne material to the open environment.
In nuclear conflicts or civil nuclear releases civil defense measures can help reduce exposure of populations by reducing ingestion of isotopes and occupational exposure. One is the issue of potassium iodide (KI) tablets, which blocks the uptake of radioactive iodine (one of the major radioisotope products of nuclear fission) into the human thyroid gland.
Occupational exposure
Occupationally exposed individuals are controlled within the regulatory framework of the country they work in, and in accordance with any local nuclear licence constraints. These are usually based on the recommendations of the International Commission on Radiological Protection.
The ICRP recommends limiting artificial irradiation. For occupational exposure, the limit is 50 mSv in a single year with a maximum of 100 mSv in a consecutive five-year period.
The radiation exposure of these individuals is carefully monitored with the use of dosimeters and other radiological protection instruments which will measure radioactive particulate concentrations, area gamma dose readings and radioactive contamination. A legal record of dose is kept.
Examples of activities where occupational exposure is a concern include:
Airline crew (the most exposed population)
Industrial radiography
Medical radiology and nuclear medicine
Uranium mining
Nuclear power plant and nuclear fuel reprocessing plant workers
Research laboratories (government, university and private)
Some human-made radiation sources affect the body through direct radiation, known as effective dose, while others take the form of radioactive contamination and irradiate the body from within. The latter is known as committed dose.
Public exposure
Medical procedures, such as diagnostic X-rays, nuclear medicine, and radiation therapy are by far the most significant source of human-made radiation exposure to the general public. Some of the major radionuclides used are I-131, Tc-99m, Co-60, Ir-192, and Cs-137. The public is also exposed to radiation from consumer products, such as tobacco (polonium-210), combustible fuels (gas, coal, etc.), televisions, luminous watches and dials (tritium), airport X-ray systems, smoke detectors (americium), electron tubes, and gas lantern mantles (thorium).
Of lesser magnitude, members of the public are exposed to radiation from the nuclear fuel cycle, which includes the entire sequence from processing uranium to the disposal of the spent fuel. The effects of such exposure have not been reliably measured due to the extremely low doses involved. Opponents use a cancer per dose model to assert that such activities cause several hundred cases of cancer per year, an application of the widely accepted Linear no-threshold model (LNT).
The International Commission on Radiological Protection recommends limiting artificial irradiation to the public to an average of 1 mSv (0.001 Sv) of effective dose per year, not including medical and occupational exposures.
In a nuclear war, gamma rays from both the initial weapon explosion and fallout would be the sources of radiation exposure.
Spaceflight
Massive particles are a concern for astronauts outside the Earth's magnetic field who would receive solar particles from solar proton events (SPE) and galactic cosmic rays from cosmic sources. These high-energy charged nuclei are blocked by Earth's magnetic field but pose a major health concern for astronauts traveling to the Moon and to any distant location beyond the Earth orbit. Highly charged HZE ions in particular are known to be extremely damaging, although protons make up the vast majority of galactic cosmic rays. Evidence indicates past SPE radiation levels that would have been lethal for unprotected astronauts.
Air travel
Air travel exposes people on aircraft to increased radiation from space as compared to sea level, including cosmic rays and from solar flare events. Software programs such as Epcard, CARI, SIEVERT, PCAIRE are attempts to simulate exposure by aircrews and passengers. An example of a measured dose (not simulated dose) is 6 μSv per hour from London Heathrow to Tokyo Narita on a high-latitude polar route. However, dosages can vary, such as during periods of high solar activity. The United States FAA requires airlines to provide flight crew with information about cosmic radiation, and an International Commission on Radiological Protection recommendation for the general public is no more than 1 mSv per year. In addition, many airlines do not allow pregnant flight crew members to fly, in order to comply with a European Directive. The FAA has a recommended limit of 1 mSv total for a pregnancy, and no more than 0.5 mSv per month. Information originally based on Fundamentals of Aerospace Medicine published in 2008.
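Using the measured 6 μSv per hour figure quoted above, a rough calculation shows how many flight hours correspond to the ICRP 1 mSv general-public recommendation; this is simple arithmetic on the quoted numbers, not an operational dose assessment.

```python
# Hours of flight on a route measured at 6 uSv/h needed to accumulate 1 mSv.
rate_usv_per_h = 6.0
public_limit_usv = 1000.0              # ICRP general-public recommendation: 1 mSv/year

print(public_limit_usv / rate_usv_per_h)   # ~167 block hours per year at this rate
```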
Radiation hazard warning signs
Hazardous levels of ionizing radiation are signified by the trefoil sign on a yellow background. These are usually posted at the boundary of a radiation controlled area or in any place where radiation levels are significantly above background due to human intervention.
The red ionizing radiation warning symbol (ISO 21482) was launched in 2007, and is intended for IAEA Category 1, 2 and 3 sources defined as dangerous sources capable of death or serious injury, including food irradiators, teletherapy machines for cancer treatment and industrial radiography units. The symbol is to be placed on the device housing the source, as a warning not to dismantle the device or to get any closer. It will not be visible under normal use, only if someone attempts to disassemble the device. The symbol will not be located on building access doors, transportation packages or containers.
See also
European Committee on Radiation Risk
International Commission on Radiological Protection – manages the International System of Radiological Protection
Ionometer
Irradiated mail
National Council on Radiation Protection and Measurements – US national organisation
Nuclear safety
Nuclear semiotics
Radiant energy
Exposure (radiation)
Radiation hormesis
Radiation physics
Radiation protection
Radiation Protection Convention, 1960
Radiation protection of patients
Sievert
Treatment of infections after accidental or hostile exposure to ionizing radiation
References
Literature
External links
The Nuclear Regulatory Commission regulates most commercial radiation sources and non-medical exposures in the US:
NLM Hazardous Substances Databank – Ionizing Radiation
United Nations Scientific Committee on the Effects of Atomic Radiation 2000 Report Volume 1: Sources, Volume 2: Effects
Beginners Guide to Ionising Radiation Measurement
Free Radiation Safety Course
Health Physics Society Public Education Website
Oak Ridge Reservation Basic Radiation Facts
Carcinogens
Mutagens
Radioactivity
Radiobiology
Radiation health effects
Radiation protection | 0.775827 | 0.998708 | 0.774824 |
Evapotranspiration | Evapotranspiration (ET) refers to the combined processes which move water from the Earth's surface (open water and ice surfaces, bare soil and vegetation) into the atmosphere. It covers both water evaporation (movement of water to the air directly from soil, canopies, and water bodies) and transpiration (evaporation that occurs through the stomata, or openings, in plant leaves). Evapotranspiration is an important part of the local water cycle and climate, and measurement of it plays a key role in water resource management agricultural irrigation.
Definition
Evapotranspiration is defined as: "The combined processes through which water is transferred to the atmosphere from open water and ice surfaces, bare soil and vegetation that make up the Earth’s surface."
Evapotranspiration is a combination of evaporation and transpiration, measured in order to better understand crop water requirements, irrigation scheduling, and watershed management. The two key components of evapotranspiration are:
Evaporation: the movement of water directly to the air from sources such as the soil and water bodies. It can be affected by factors including heat, humidity, solar radiation and wind speed.
Transpiration: the movement of water from root systems, through a plant, and its exit into the air as water vapor. This exit occurs through stomata in the plant. The rate of transpiration can be influenced by factors including plant type, soil type, weather conditions and water content, and also cultivation practices.
Evapotranspiration is typically measured in millimeters of water (i.e. volume of water moved per unit area of the Earth's surface) in a set unit of time. Globally, it is estimated that on average between three-fifths and three-quarters of land precipitation is returned to the atmosphere via evapotranspiration.
Evapotranspiration does not, in general, account for other mechanisms which are involved in returning water to the atmosphere, though some of these, such as snow and ice sublimation in regions of high elevation or high latitude, can make a large contribution to atmospheric moisture even under standard conditions.
Influencing factors
Primary factors
Levels of evapotranspiration in a given area are primarily controlled by three factors: first, the amount of water present; second, the amount of energy present in the air and soil (e.g. heat, measured by the global surface temperature); and third, the ability of the atmosphere to take up water (humidity).
Regarding the second factor (energy and heat): climate change has increased global temperatures (see instrumental temperature record). This global warming has increased evapotranspiration over land. The increased evapotranspiration is one of the effects of climate change on the water cycle.
Secondary factors
Vegetation type
Vegetation type impacts levels of evapotranspiration. For example, herbaceous plants generally transpire less than woody plants, because they usually have less extensive foliage. Also, plants with deep reaching roots can transpire water more constantly, because those roots can pull more water into the plant and leaves. Another example is that conifer forests tend to have higher rates of evapotranspiration than deciduous broadleaf forests, particularly in the dormant winter and early spring seasons, because they are evergreen.
Vegetation coverage
Transpiration is a larger component of evapotranspiration (relative to evaporation) in vegetation-abundant areas. As a result, denser vegetation, like forests, may increase evapotranspiration and reduce water yield.
Two exceptions to this are cloud forests and rainforests. In cloud forests, trees collect the liquid water in fog or low clouds onto their surface, which eventually drips down to the ground. These trees still contribute to evapotranspiration, but often collect more water than they evaporate or transpire. In rainforests, water yield is increased (compared to cleared, unforested land in the same climatic zone) as evapotranspiration increases humidity within the forest (a portion of which condenses and returns quickly as precipitation experienced at ground level as rain). The density of the vegetation blocks sunlight and reduces temperatures at ground level (thereby reducing losses due to surface evaporation), and reduces wind speeds (thereby reducing the loss of airborne moisture). The combined effect results in increased surface stream flows and a higher ground water table whilst the rainforest is preserved. Clearing of rainforests frequently leads to desertification as ground level temperatures and wind speeds increase, vegetation cover is lost or intentionally destroyed by clearing and burning, soil moisture is reduced by wind, and soils are easily eroded by high wind and rainfall events.
Soil and irrigation
In areas that are not irrigated, actual evapotranspiration is usually no greater than precipitation, with some buffer and variations in time depending on the soil's ability to hold water. It will usually be less because some water will be lost due to percolation or surface runoff. An exception is areas with high water tables, where capillary action can cause water from the groundwater to rise through the soil matrix back to the surface. If potential evapotranspiration is greater than the actual precipitation, then soil will dry out until conditions stabilize, unless irrigation is used.
Measurements
Direct measurement
Evapotranspiration can be measured directly with a weighing or pan lysimeter. A lysimeter continuously measures the weight of a plant and associated soil, and any water added by precipitation or irrigation. The change in storage of water in the soil is then modeled by measuring the change in weight. When used properly, this allows for precise measurement of evapotranspiration over small areas.
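Because 1 kg of water spread over 1 m² corresponds to a depth of 1 mm, a weighing-lysimeter mass change converts directly to an evapotranspiration depth. The sketch below uses assumed readings and ignores drainage for simplicity.

```python
# Daily ET from a weighing lysimeter: mass lost plus water added, per unit area.
# All numbers are illustrative; drainage is assumed to be zero for simplicity.
area_m2       = 2.0
mass_start    = 3521.4    # kg, start of day
mass_end      = 3512.6    # kg, end of day
rain_kg       = 2.0       # water added by precipitation during the day
irrigation_kg = 0.0

et_mm = ((mass_start - mass_end) + rain_kg + irrigation_kg) / area_m2
print(et_mm)              # 5.4 mm of evapotranspiration for the day
```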
Indirect estimation
Because atmospheric vapor flux is difficult or time-consuming to measure directly, evapotranspiration is typically estimated by one of several different methods that do not rely on direct measurement.
Catchment water balance
Evapotranspiration may be estimated by evaluating the water balance equation for a given area. The water balance equation relates the change in water stored within the basin (S) to its inputs and outputs:
ΔS = P - ET - Q - D
In the equation, the change in water stored within the basin (ΔS) is related to precipitation (P) (water going into the basin), and evapotranspiration (ET), streamflow (Q), and groundwater recharge (D) (water leaving the basin). By rearranging the equation, ET can be estimated if values for the other variables are known:
ET = P - ΔS - Q - D
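A minimal numerical example of the rearranged water balance, with all terms in millimetres of water over a year; the values are illustrative, not measurements from any catchment.

```python
# Annual catchment water balance, all terms in mm of water (illustrative values).
P  = 900.0    # precipitation
Q  = 300.0    # streamflow leaving the basin
D  = 50.0     # groundwater recharge / deep drainage
dS = -20.0    # change in basin storage (a slight drawdown)

ET = P - dS - Q - D        # rearranged water balance
print(ET)                  # 570 mm of evapotranspiration for the year
```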
Energy balance
A second methodology for estimation is by calculating the energy balance:
λE = Rn - G - H
where λE is the energy needed to change the phase of water from liquid to gas, Rn is the net radiation, G is the soil heat flux and H is the sensible heat flux.
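Once the residual λE is known, it can be converted to an equivalent evaporation depth using the latent heat of vaporization (about 2.45 MJ per kg of water). The flux values below are assumed daytime averages, and sustaining them over a full day is unrealistic; the point is only the unit conversion.

```python
# Converting an energy-balance residual into an evapotranspiration depth.
Rn, G, H = 450.0, 50.0, 150.0         # assumed daytime-average fluxes in W/m^2
lambda_E = Rn - G - H                 # energy available for evaporation (latent heat flux)

LAMBDA = 2.45e6                       # latent heat of vaporization, J per kg of water
kg_per_m2_per_s = lambda_E / LAMBDA   # evaporation rate; 1 kg/m^2 of water = 1 mm depth
print(kg_per_m2_per_s * 86400)        # ~8.8 mm/day if this flux were sustained all day
```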
The SEBAL and METRIC algorithms solve for the energy balance at the Earth's surface using satellite imagery. This allows for both actual and potential evapotranspiration to be calculated on a pixel-by-pixel basis. Evapotranspiration is a key indicator for water management and irrigation performance. SEBAL and METRIC can map these key indicators in time and space, for days, weeks or years.
Estimation from meteorological data
Given meteorological data like wind, temperature, and humidity, reference ET can be calculated. The most general and widely used equation for calculating reference ET is the Penman equation. The Penman–Monteith variation is recommended by the Food and Agriculture Organization and the American Society of Civil Engineers. The simpler Blaney–Criddle equation was popular in the Western United States for many years but it is not as accurate in wet regions with higher humidity. Other equations for estimating evapotranspiration from meteorological data include the Makkink equation, which is simple but must be calibrated to a specific location, and the Hargreaves equations.
To convert the reference evapotranspiration to the actual crop evapotranspiration, a crop coefficient and a stress coefficient must be used. Crop coefficients, as used in many hydrological models, usually change over the year because crops are seasonal and, in general, plant behaviour varies over the year: perennial plants mature over multiple seasons, while annuals do not survive more than a few, so stress responses can significantly depend upon many aspects of plant type and condition.
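As a hedged sketch of this workflow, the code below estimates reference ET with the temperature-based Hargreaves–Samani (1985) form (one of the Hargreaves equations mentioned above) and then applies assumed crop and stress coefficients in the FAO-56 style. The extraterrestrial radiation Ra must already be expressed as an equivalent evaporation in mm/day, and every numeric value here is an assumption rather than site data.

```python
# Hargreaves-Samani (1985) reference ET, followed by a crop/stress adjustment.
# Ra is extraterrestrial radiation already expressed in mm/day of equivalent evaporation.
def et0_hargreaves(t_mean, t_max, t_min, ra_mm_day):
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * (t_max - t_min) ** 0.5

et0 = et0_hargreaves(t_mean=24.0, t_max=31.0, t_min=17.0, ra_mm_day=15.0)

Kc, Ks = 1.15, 0.9            # assumed crop coefficient (mid-season) and stress coefficient
etc_adjusted = Ks * Kc * et0  # estimate of actual crop evapotranspiration
print(round(et0, 2), round(etc_adjusted, 2))   # mm/day
```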
Potential evapotranspiration
List of remote sensing based evapotranspiration models
ALEXI
BAITSSS
METRIC
Abtew Method
SEBAL
SEBS
SSEBop
PT-JPL
ETMonitor
ETLook
ETWatch
See also
Eddy covariance flux (aka eddy correlation, eddy flux)
Effects of climate change on the water cycle
Hydrology (agriculture)
Hydrologic Evaluation of Landfill Performance (HELP)
Latent heat flux
Water Evaluation And Planning system (WEAP)
Soil plant atmosphere continuum
Deficit irrigation
Biotic pump
References
External links
New Mexico Eddy Covariance Flux Network (Rio-ET)
Texas Evapotranspiration Network
Use and Construction of a Lysimeter to Measure Evapotranspiration
Washoe County (NV) Et Project
US Geological Survey
Hydrology
Agrometeorology
Ecological processes
Irrigation
Meteorological quantities
Water conservation
Water and the environment
Meteorological phenomena | 0.779499 | 0.993944 | 0.774779 |
Newton's cradle | Newton's cradle is a device, usually made of metal, that demonstrates the principles of conservation of momentum and conservation of energy in physics with swinging spheres. When one sphere at the end is lifted and released, it strikes the stationary spheres, compressing them and thereby transmitting a pressure wave through the stationary spheres, which creates a force that pushes the last sphere upward. The last sphere swings back and strikes the stationary spheres, repeating the effect in the opposite direction. The device is named after 17th-century English scientist Sir Isaac Newton and was designed by French scientist Edme Mariotte. It is also known as Newton's pendulum, Newton's balls, Newton's rocker or executive ball clicker (since the device makes a click each time the balls collide, which they do repeatedly in a steady rhythm).
Operation
When one of the end balls ("the first") is pulled sideways, the attached string makes it follow an upward arc. When the ball is let go, it strikes the second ball and comes to nearly a dead stop. The ball on the opposite side acquires most of the velocity of the first ball and swings in an arc almost as high as the release height of the first ball. This shows that the last ball receives most of the energy and momentum of the first ball. The impact produces a sonic wave that propagates through the intermediate balls. Any efficiently elastic material such as steel does this, as long as the kinetic energy is temporarily stored as potential energy in the compression of the material rather than being lost as heat. This is similar to ejecting the last coin in a line of touching coins by striking the line with another coin, which happens even if the first struck coin is constrained by pressing on its center such that it cannot move.
There are slight movements in all the balls after the initial strike, but the last ball receives most of the initial energy from the impact of the first ball. When two (or three) balls are dropped, the two (or three) balls on the opposite side swing out. Some say that this behavior demonstrates the conservation of momentum and kinetic energy in elastic collisions. However, if the colliding balls behave as described above with the same mass possessing the same velocity before and after the collisions, then any function of mass and velocity is conserved in such an event. Thus, this first-level explanation is a true, but not a complete description of the motion.
Physics explanation
Newton's cradle can be modeled fairly accurately with simple mathematical equations with the assumption that the balls always collide in pairs. If one ball strikes four stationary balls that are already touching, these simple equations cannot explain the resulting movements in all five balls, which are not due to friction losses. For example, in a real Newton's cradle the fourth ball has some movement and the first ball has a slight reverse movement. All the animations in this article show idealized action (simple solution) that only occurs if the balls are not touching initially and only collide in pairs.
Simple solution
The conservation of momentum and kinetic energy can be used to find the resulting velocities for two colliding perfectly elastic objects. These two equations are used to determine the resulting velocities of the two objects. For the case of two balls constrained to a straight path by the strings in the cradle, the velocities are a single number instead of a 3D vector for 3D space, so the math requires only two equations to solve for two unknowns. When the two objects have the same mass, the solution is simple: the moving object stops relative to the stationary one and the stationary one picks up all the other's initial velocity. This assumes perfectly elastic objects, so there is no need to account for heat and sound energy losses.
Steel does not compress much, but its elasticity is very efficient, so it does not cause much waste heat. The simple effect from two same-mass efficiently elastic colliding objects constrained to a straight path is the basis of the effect seen in the cradle and gives an approximate solution to all its activities.
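The simple solution described above can be written down explicitly. The following minimal Python sketch (function and variable names are illustrative) computes the post-collision velocities of two perfectly elastic balls from conservation of momentum and kinetic energy, and confirms that for equal masses the striker stops and the struck ball carries away the full velocity:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities after a head-on, perfectly elastic collision,
    from conservation of momentum and kinetic energy."""
    v1_final = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    v2_final = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return v1_final, v2_final

# Equal masses: the moving ball stops, the target leaves with the full speed.
print(elastic_collision_1d(1.0, 1.0, 1.0, 0.0))   # (0.0, 1.0)
# Unequal masses: both balls move after the collision.
print(elastic_collision_1d(1.0, 1.0, 2.0, 0.0))   # (-0.333..., 0.666...)
```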
For a sequence of same-mass elastic objects constrained to a straight path, the effect continues to each successive object. For example, when two balls are dropped to strike three stationary balls in a cradle, there is an unnoticed but crucial small distance between the two dropped balls, and the action is as follows: the first moving ball that strikes the first stationary ball (the second ball striking the third ball) transfers all of its momentum to the third ball and stops. The third ball then transfers the momentum to the fourth ball and stops, and then the fourth to the fifth ball.
Right behind this sequence, the second moving ball is transferring its momentum to the first moving ball that just stopped, and the sequence repeats immediately and imperceptibly behind the first sequence, ejecting the fourth ball right behind the fifth ball with the same small separation that was between the two initial striking balls. If they are simply touching when they strike the third ball, precision requires the more complete solution below.
Other examples of this effect
The effect of the last ball ejecting with a velocity nearly equal to the first ball can be seen in sliding a coin on a table into a line of identical coins, as long as the striking coin and its twin targets are in a straight line. The effect can similarly be seen in billiard balls. The effect can also be seen when a sharp and strong pressure wave strikes a dense homogeneous material immersed in a less-dense medium. If the identical atoms, molecules, or larger-scale sub-volumes of the dense homogeneous material are at least partially elastically connected to each other by electrostatic forces, they can act as a sequence of colliding identical elastic balls.
The surrounding atoms, molecules, or sub-volumes experiencing the pressure wave act to constrain each other similarly to how the string constrains the cradle's balls to a straight line. As a medical example, lithotripsy shock waves can be sent through the skin and tissue without harm to burst kidney stones. The side of the stones opposite to the incoming pressure wave bursts, not the side receiving the initial strike. In the Indian game carrom, a striker stops after hitting a stationary playing piece, transferring all of its momentum into the piece that was hit.
When the simple solution applies
For the simple solution to precisely predict the action, no pair in the midst of colliding may touch the third ball, because the presence of the third ball effectively makes the struck ball appear more massive. Applying the two conservation equations to solve the final velocities of three or more balls in a single collision results in many possible solutions, so these two principles are not enough to determine resulting action.
Even when there is a small initial separation, a third ball may become involved in the collision if the initial separation is not large enough. When this occurs, the complete solution method described below must be used.
Small steel balls work well because they remain efficiently elastic with little heat loss under strong strikes and do not compress much (up to about 30 μm in a small Newton's cradle). The small, stiff compressions mean they occur rapidly, less than 200 microseconds, so steel balls are more likely to complete a collision before touching a nearby third ball. Softer elastic balls require a larger separation to maximize the effect from pair-wise collisions.
More complete solution
A cradle that best follows the simple solution needs to have an initial separation between the balls that measures at least twice the amount that any one ball compresses, but most do not. This section describes the action when the initial separation is not enough and in subsequent collisions that involve more than two balls even when there is an initial separation. This solution simplifies to the simple solution when only two balls touch during a collision. It applies to all perfectly elastic identical balls that have no energy losses due to friction and can be approximated by materials such as steel, glass, plastic, and rubber.
For two balls colliding, only the two equations for conservation of momentum and energy are needed to solve the two unknown resulting velocities. For three or more simultaneously colliding elastic balls, the relative compressibilities of the colliding surfaces are the additional variables that determine the outcome. For example, five balls have four colliding points and scaling (dividing) three of them by the fourth gives the three extra variables needed to solve for all five post-collision velocities.
The Newtonian, Lagrangian, Hamiltonian, and stationary-action formulations are different ways of mathematically expressing classical mechanics. They describe the same physics but must be solved by different methods, and all enforce the conservation of energy and momentum. Newton's law has been used in research papers: it is applied to each ball, with the sum of the contact forces and the inertial term set equal to zero, so there are five equations, one for each ball, and five unknowns, one for each velocity. If the balls are identical, the absolute compressibility of the surfaces becomes irrelevant, because it can be divided out of all five equations and has no effect on the solution.
The velocities for the case of one ball striking four initially touching balls are found by modeling the balls as weights with non-traditional springs on their colliding surfaces. Most efficiently elastic materials, like steel, approximately follow Hooke's force law for springs, F = k·x, but because the area of contact for a sphere increases as the force increases, colliding elastic balls follow Hertz's adjustment to Hooke's law, F = k·x^1.5. This and Newton's law for motion (F = ma) are applied to each ball, giving five simple but interdependent differential equations that can be solved numerically.
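A numerical sketch of this model is shown below. It integrates Newton's second law for five identical balls whose contacts obey the Hertzian force law F = k·x^1.5, using SciPy; the mass, stiffness, and speed values are illustrative rather than measured properties of a real steel cradle.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 5            # number of balls
m = 1.0          # ball mass (illustrative units)
k = 1.0e6        # Hertzian contact stiffness (illustrative)
v0 = 1.0         # speed of the first ball at impact

def rhs(t, state):
    x, v = state[:N], state[N:]
    # Compression at each of the N-1 contacts (zero displacement = just touching).
    overlap = np.maximum(x[:-1] - x[1:], 0.0)
    f = k * overlap**1.5          # Hertzian contact force at each contact
    a = np.zeros(N)
    a[:-1] -= f / m               # each contact pushes the left ball backward
    a[1:]  += f / m               # ...and the right ball forward
    return np.concatenate([v, a])

state0 = np.zeros(2 * N)
state0[N] = v0                    # only the first ball is moving initially
sol = solve_ivp(rhs, (0.0, 0.2), state0, max_step=1e-4, rtol=1e-8, atol=1e-10)

# Most of the initial speed ends up in the last ball, with small residual
# motion (including a slight reverse motion of the first ball) in the others.
print("final velocities:", np.round(sol.y[N:, -1], 3))
```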
When the fifth ball begins accelerating, it is receiving momentum and energy from the third and fourth balls through the spring action of their compressed surfaces. For identical elastic balls of any type with initially touching balls, the action is the same for the first strike, except the time to complete a collision increases in softer materials. Forty to fifty percent of the kinetic energy of the initial ball from a single-ball strike is stored in the ball surfaces as potential energy for most of the collision process. Of the initial velocity, 13% is imparted to the fourth ball (which can be seen as a 3.3-degree movement if the fifth ball moves out 25 degrees) and there is a slight reverse velocity in the first three balls, the first ball having the largest at −7% of the initial velocity. This separates the balls, but they come back together just before the fifth ball returns. This is due to the pendulum phenomenon of different small angle disturbances having approximately the same time to return to the center.
The Hertzian differential equations predict that if two balls strike three, the fifth and fourth balls will leave with velocities of 1.14 and 0.80 times the initial velocity. This is 2.03 times more kinetic energy in the fifth ball than the fourth ball, which means the fifth ball would swing twice as high in the vertical direction as the fourth ball. But in a real Newton's cradle, the fourth ball swings out as far as the fifth ball. To explain the difference between theory and experiment, the two striking balls must have at least ≈ 10 μm separation (given steel, 100 g, and 1 m/s). This shows that in the common case of steel balls, unnoticed separations can be important and must be included in the Hertzian differential equations, or the simple solution gives a more accurate result.
Effect of pressure waves
The forces in the Hertzian solution above were assumed to propagate in the balls immediately, which is not the case. Sudden changes in the force between the atoms of material build up to form a pressure wave. Pressure waves (sound) in steel travel about 5 cm in 10 microseconds, roughly one tenth of the time between the first ball striking and the last ball being ejected. The pressure waves reflect back and forth through all five balls about ten times, although dispersing to less of a wavefront with more reflections. This is fast enough for the Hertzian solution to not require a substantial modification to adjust for the delay in force propagation through the balls. In less-rigid but still very elastic balls such as rubber, the propagation speed is slower, but the duration of collisions is longer, so the Hertzian solution still applies. The error introduced by the limited speed of the force propagation biases the Hertzian solution towards the simple solution because the collisions are not affected as much by the inertia of the balls that are further away.
Identically shaped balls help the pressure waves converge on the contact point of the last ball: at the initial strike point one pressure wave goes forward to the other balls while another goes backward to reflect off the opposite side of the first ball, and then it follows the first wave, being exactly one ball's diameter behind. The two waves meet up at the last contact point because the first wave reflects off the opposite side of the last ball and it meets up at the last contact point with the second wave. Then they reverberate back and forth like this about 10 times until the first ball stops connecting with the second ball. Then the reverberations reflect off the contact point between the second and third balls, but still converge at the last contact point, until the last ball is ejected—but it is less of a wavefront with each reflection.
Effect of different types of balls
Using different types of material does not change the action as long as the material is efficiently elastic. The size of the spheres does not change the results unless the increased weight exceeds the elastic limit of the material. If the solid balls are too large, energy is being lost as heat, because the elastic limit increases with the radius raised to the power 1.5, but the energy which had to be absorbed and released increases as the cube of the radius. Making the contact surfaces flatter can overcome this to an extent by distributing the compression to a larger amount of material but it can introduce an alignment problem. Steel is better than most materials because it allows the simple solution to apply more often in collisions after the first strike, its elastic range for storing energy remains good despite the higher energy caused by its weight, and the higher weight decreases the effect of air resistance.
Uses
The most common application is that of a desktop executive toy. Another use is as an educational physics demonstration, as an example of conservation of momentum and conservation of energy.
History
The principle demonstrated by the device, the law of impacts between bodies, was first demonstrated by the French physicist Abbé Mariotte in the 17th century. His work on the topic was first presented to the French Academy of Sciences in 1671; it was published in 1673 as Traité de la percussion ou choc des corps ("Treatise on percussion or shock of bodies").
Newton acknowledged Mariotte's work, along with Wren, Wallis and Huygens as the pioneers of experiments on the collisions of pendulum balls, in his Principia.
Christiaan Huygens used pendulums to study collisions. His work, De Motu Corporum ex Percussione (On the Motion of Bodies by Collision) published posthumously in 1703, contains a version of Newton's first law and discusses the collision of suspended bodies including two bodies of equal mass with the motion of the moving body being transferred to the one at rest.
There is much confusion over the origins of the modern Newton's cradle. Marius J. Morin has been credited as being the first to name and make this popular executive toy. However, in early 1967, an English actor, Simon Prebble, coined the name "Newton's cradle" (now used generically) for the wooden version manufactured by his company, Scientific Demonstrations Ltd. After some initial resistance from retailers, they were first sold by Harrods of London, thus creating the start of an enduring market for executive toys. Later a very successful chrome design for the Carnaby Street store Gear was created by the sculptor and future film director Richard Loncraine.
The largest cradle device in the world was designed by MythBusters and consisted of five one-ton concrete and steel rebar-filled buoys suspended from a steel truss. The buoys also had a steel plate inserted in between their two-halves to act as a "contact point" for transferring the energy; this cradle device did not function well because concrete is not elastic so most of the energy was lost to a heat buildup in the concrete. A smaller-scale version constructed by them consists of five chrome steel ball bearings, each weighing , and is nearly as efficient as a desktop model.
The cradle device with the largest-diameter collision balls on public display was visible for more than a year in Milwaukee, Wisconsin, at the retail store American Science and Surplus (see photo). Each ball was an inflatable exercise ball in diameter (encased in steel rings), and was supported from the ceiling using extremely strong magnets. It was dismantled in early August 2010 due to maintenance concerns.
In popular culture
Newton's cradle appears in some films, often as a trope on the desk of a lead villain such as Paul Newman's role in The Hudsucker Proxy, Magneto in X-Men, and the Kryptonians in Superman II. It was used to represent the unyielding position of the NFL towards head injuries in Concussion. It has also been used as a relaxing diversion on the desk of lead intelligent/anxious/sensitive characters such as Henry Winkler's role in Night Shift, Dustin Hoffman's role in Straw Dogs, and Gwyneth Paltrow's role in Iron Man 2. It was featured more prominently as a series of clay pots in Rosencrantz and Guildenstern Are Dead, and as a row of 1968 Eero Aarnio bubble chairs with scantily clad women in them in Gamer. In Storks, Hunter, the CEO of Cornerstore, has one not with balls, but with little birds. Newton's cradle is an item in Nintendo's Animal Crossing where it is referred to as "executive toy". In 2017, an episode of the Omnibus podcast, featuring Jeopardy! champion Ken Jennings and musician John Roderick, focused on the history of Newton's cradle. Newton's cradle is also featured on the desk of Deputy White House Communications Director Sam Seaborn in The West Wing. In the Futurama episode "The Day the Earth Stood Stupid", professor Hubert Farnsworth is shown with his head in a Newton's cradle and saying he's a genius as Philip J. Fry walks by.
Progressive rock band Dream Theater uses the cradle as imagery in album art of their 2005 release Octavarium. Rock band Jefferson Airplane used the cradle on the 1968 album Crown of Creation as a rhythm device to create polyrhythms on an instrumental track.
See also
Galilean cannon
Pendulum wave – another demonstration with pendulums swinging in parallel without collision
References
Literature
B. Brogliato: Nonsmooth Mechanics. Models, Dynamics and Control, Springer, 2nd Edition, 1999.
External links
Educational toys
Office toys
Novelty items
Metal toys
Physics education
Science demonstrations
Science education materials
Office equipment | 0.775609 | 0.998889 | 0.774747 |
Impulse (physics) | In classical mechanics, impulse (symbolized by or Imp) is the change in momentum of an object. If the initial momentum of an object is , and a subsequent momentum is , the object has received an impulse :
Momentum is a vector quantity, so impulse is also a vector quantity.
Newton's second law of motion states that the rate of change of momentum of an object is equal to the resultant force acting on the object:
F = dp/dt,
so the impulse delivered by a steady force F acting for a time Δt is:
J = F Δt.
The impulse delivered by a varying force is the integral of the force F with respect to time:
J = ∫ F dt.
The SI unit of impulse is the newton second (N⋅s), and the dimensionally equivalent unit of momentum is the kilogram metre per second (kg⋅m/s). The corresponding English engineering unit is the pound-second (lbf⋅s), and in the British Gravitational System, the unit is the slug-foot per second (slug⋅ft/s).
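As a numerical illustration (the mass and force profile below are made-up values), the impulse of a time-varying force can be computed by integrating the force over time and compared with the resulting change in momentum:

```python
import numpy as np

m = 0.5                                   # mass of the object, kg (illustrative)
t = np.linspace(0.0, 0.02, 2001)          # a 20 ms impact
F = 300.0 * np.sin(np.pi * t / 0.02)      # half-sine force pulse, N (illustrative)

# Trapezoid-rule integration of force over time gives the impulse.
J = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(t))
# Integrating a = F/m gives the change in velocity, so m*dv is the momentum change.
dv = np.sum(0.5 * (F[1:] + F[:-1]) / m * np.diff(t))

print(f"impulse J = {J:.4f} N·s")
print(f"m * Δv    = {m * dv:.4f} kg·m/s")   # equal, by the impulse-momentum theorem
```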
Mathematical derivation in the case of an object of constant mass
Impulse J produced from time t1 to t2 is defined to be
J = ∫ F dt (integrated from t1 to t2),
where F is the resultant force applied from t1 to t2.
From Newton's second law, force is related to momentum p by
F = dp/dt.
Therefore,
J = ∫ (dp/dt) dt = p2 − p1 = Δp,
where Δp is the change in linear momentum from time t1 to t2. This is often called the impulse-momentum theorem (analogous to the work-energy theorem).
As a result, an impulse may also be regarded as the change in momentum of an object to which a resultant force is applied. The impulse may be expressed in a simpler form when the mass is constant:
J = ∫ F dt (from t1 to t2) = Δp = m·v2 − m·v1,
where
F is the resultant force applied,
t1 and t2 are times when the impulse begins and ends, respectively,
m is the mass of the object,
v2 is the final velocity of the object at the end of the time interval, and
v1 is the initial velocity of the object when the time interval begins.
Impulse has the same units and dimensions as momentum. In the International System of Units, these are N⋅s = kg⋅m/s. In English engineering units, they are lbf⋅s = slug⋅ft/s.
The term "impulse" is also used to refer to a fast-acting force or impact. This type of impulse is often idealized so that the change in momentum produced by the force happens with no change in time. This sort of change is a step change, and is not physically possible. However, this is a useful model for computing the effects of ideal collisions (such as in videogame physics engines). Additionally, in rocketry, the term "total impulse" is commonly used and is considered synonymous with the term "impulse".
Variable mass
The application of Newton's second law for variable mass allows impulse and momentum to be used as analysis tools for jet- or rocket-propelled vehicles. In the case of rockets, the impulse imparted can be normalized by unit of propellant expended, to create a performance parameter, specific impulse. This fact can be used to derive the Tsiolkovsky rocket equation, which relates the vehicle's propulsive change in velocity to the engine's specific impulse (or nozzle exhaust velocity) and the vehicle's propellant-mass ratio.
See also
Wave–particle duality defines the impulse of a wave collision. The preservation of momentum in the collision is then called phase matching. Applications include:
Compton effect
Nonlinear optics
Acousto-optic modulator
Electron phonon scattering
Dirac delta function, mathematical abstraction of a pure impulse
Notes
Bibliography
External links
Dynamics
Classical mechanics
Vector physical quantities
Mechanical quantities
de:Impuls#Kraftstoß | 0.777192 | 0.996853 | 0.774746 |
Acclimatization | Acclimatization or acclimatisation (also called acclimation or acclimatation) is the process in which an individual organism adjusts to a change in its environment (such as a change in altitude, temperature, humidity, photoperiod, or pH), allowing it to maintain fitness across a range of environmental conditions. Acclimatization occurs in a short period of time (hours to weeks), and within the organism's lifetime (compared to adaptation, which is evolution, taking place over many generations). This may be a discrete occurrence (for example, when mountaineers acclimate to high altitude over hours or days) or may instead represent part of a periodic cycle, such as a mammal shedding heavy winter fur in favor of a lighter summer coat. Organisms can adjust their morphological, behavioral, physical, and/or biochemical traits in response to changes in their environment. While the capacity to acclimate to novel environments has been well documented in thousands of species, researchers still know very little about how and why organisms acclimate the way that they do.
Names
The nouns acclimatization and acclimation (and the corresponding verbs acclimatize and acclimate) are widely regarded as synonymous, both in general vocabulary and in medical vocabulary. The synonym acclimation is less commonly encountered, and fewer dictionaries enter it.
Methods
Biochemical
In order to maintain performance across a range of environmental conditions, there are several strategies organisms use to acclimate. In response to changes in temperature, organisms can change the biochemistry of cell membranes making them more fluid in cold temperatures and less fluid in warm temperatures by increasing the number of membrane proteins. In response to certain stressors, some organisms express so-called heat shock proteins that act as molecular chaperones and reduce denaturation by guiding the folding and refolding of proteins. It has been shown that organisms which are acclimated to high or low temperatures display relatively high resting levels of heat shock proteins so that when they are exposed to even more extreme temperatures the proteins are readily available. Expression of heat shock proteins and regulation of membrane fluidity are just two of many biochemical methods organisms use to acclimate to novel environments.
Morphological
Organisms are able to change several characteristics relating to their morphology in order to maintain performance in novel environments. For example, birds often increase their organ size to increase their metabolism. This can take the form of an increase in the mass of nutritional organs or heat-producing organs, like the pectorals (with the latter being more consistent across species).
The theory
While the capacity for acclimatization has been documented in thousands of species, researchers still know very little about how and why organisms acclimate in the way that they do. Since researchers first began to study acclimation, the overwhelming hypothesis has been that all acclimation serves to enhance the performance of the organism. This idea has come to be known as the beneficial acclimation hypothesis. Despite such widespread support for the beneficial acclimation hypothesis, not all studies show that acclimation always serves to enhance performance (See beneficial acclimation hypothesis). One of the major objections to the beneficial acclimation hypothesis is that it assumes that there are no costs associated with acclimation. However, there are likely to be costs associated with acclimation. These include the cost of sensing the environmental conditions and regulating responses, producing structures required for plasticity (such as the energetic costs in expressing heat shock proteins), and genetic costs (such as linkage of plasticity-related genes with harmful genes).
Given the shortcomings of the beneficial acclimation hypothesis, researchers are continuing to search for a theory that will be supported by empirical data.
The degree to which organisms are able to acclimate is dictated by their phenotypic plasticity or the ability of an organism to change certain traits. Recent research in the study of acclimation capacity has focused more heavily on the evolution of phenotypic plasticity rather than acclimation responses. Scientists believe that when they understand more about how organisms evolved the capacity to acclimate, they will better understand acclimation.
Examples
Plants
Many plants, such as maple trees, irises, and tomatoes, can survive freezing temperatures if the temperature gradually drops lower and lower each night over a period of days or weeks. The same drop might kill them if it occurred suddenly. Studies have shown that tomato plants that were acclimated to higher temperature over several days were more efficient at photosynthesis at relatively high temperatures than were plants that were not allowed to acclimate.
In the orchid Phalaenopsis, phenylpropanoid enzymes are enhanced in the process of plant acclimatisation at different levels of photosynthetic photon flux.
Animals
Animals acclimatize in many ways. Sheep grow very thick wool in cold, damp climates. Fish are able to adjust only gradually to changes in water temperature and quality. Tropical fish sold at pet stores are often kept in acclimatization bags until this process is complete. Lowe & Vance (1995) were able to show that lizards acclimated to warm temperatures could maintain a higher running speed at warmer temperatures than lizards that were not acclimated to warm conditions. Fruit flies that develop at relatively cooler or warmer temperatures have increased cold or heat tolerance as adults, respectively (See Developmental plasticity).
Humans
The salt content of sweat and urine decreases as people acclimatize to hot conditions. Plasma volume, heart rate, and capillary activation are also affected.
Acclimatization to high altitude continues for months or even years after initial ascent, and ultimately enables humans to survive in an environment that, without acclimatization, would kill them. Humans who migrate permanently to a higher altitude naturally acclimatize to their new environment by developing an increase in the number of red blood cells to increase the oxygen carrying capacity of the blood, in order to compensate for lower levels of oxygen intake.
See also
Acclimatisation society
Beneficial acclimation hypothesis
Heat index
Introduced species
Phenotypic plasticity
Wind chill
References
Physiology
Ecological processes
Climate
Biology terminology | 0.779159 | 0.994293 | 0.774712 |
Dissipative system | A dissipative system is a thermodynamically open system which is operating out of, and often far from, thermodynamic equilibrium in an environment with which it exchanges energy and matter. A tornado may be thought of as a dissipative system. Dissipative systems stand in contrast to conservative systems.
A dissipative structure is a dissipative system that has a dynamical regime that is in some sense in a reproducible steady state. This reproducible steady state may be reached by natural evolution of the system, by artifice, or by a combination of these two.
Overview
A dissipative structure is characterized by the spontaneous appearance of symmetry breaking (anisotropy) and the formation of complex, sometimes chaotic, structures where interacting particles exhibit long range correlations. Examples in everyday life include convection, turbulent flow, cyclones, hurricanes and living organisms. Less common examples include lasers, Bénard cells, droplet cluster, and the Belousov–Zhabotinsky reaction.
One way of mathematically modeling a dissipative system is given in the article on wandering sets: it involves the action of a group on a measurable set.
Dissipative systems can also be used as a tool to study economic systems and complex systems. For example, a dissipative system involving self-assembly of nanowires has been used as a model to understand the relationship between entropy generation and the robustness of biological systems.
The Hopf decomposition states that dynamical systems can be decomposed into a conservative and a dissipative part; more precisely, it states that every measure space with a non-singular transformation can be decomposed into an invariant conservative set and an invariant dissipative set.
Dissipative structures in thermodynamics
Russian-Belgian physical chemist Ilya Prigogine, who coined the term dissipative structure, received the Nobel Prize in Chemistry in 1977 for his pioneering work on these structures, which have dynamical regimes that can be regarded as thermodynamic steady states, and sometimes at least can be described by suitable extremal principles in non-equilibrium thermodynamics.
In his Nobel lecture, Prigogine explains how thermodynamic systems far from equilibrium can have drastically different behavior from systems close to equilibrium. Near equilibrium, the local equilibrium hypothesis applies and typical thermodynamic quantities such as free energy and entropy can be defined locally. One can assume linear relations between the (generalized) flux and forces of the system. Two celebrated results from linear thermodynamics are the Onsager reciprocal relations and the principle of minimum entropy production. After efforts to extend such results to systems far from equilibrium, it was found that they do not hold in this regime and opposite results were obtained.
One way to rigorously analyze such systems is by studying the stability of the system far from equilibrium. Close to equilibrium, one can show the existence of a Lyapunov function which ensures that the entropy tends to a stable maximum. Fluctuations are damped in the neighborhood of the fixed point and a macroscopic description suffices. However, far from equilibrium stability is no longer a universal property and can be broken. In chemical systems, this occurs with the presence of autocatalytic reactions, such as in the example of the Brusselator. If the system is driven beyond a certain threshold, oscillations are no longer damped out, but may be amplified. Mathematically, this corresponds to a Hopf bifurcation where increasing one of the parameters beyond a certain value leads to limit cycle behavior. If spatial effects are taken into account through a reaction–diffusion equation, long-range correlations and spatially ordered patterns arise, such as in the case of the Belousov–Zhabotinsky reaction. Systems with such dynamic states of matter that arise as the result of irreversible processes are dissipative structures.
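The Brusselator mentioned above provides a concrete illustration. The sketch below (parameter values chosen only for illustration) integrates the standard well-stirred Brusselator rate equations, dX/dt = A + X²Y − (B+1)X and dY/dt = BX − X²Y; for B > 1 + A² the steady state (A, B/A) is unstable and the trajectory settles onto a limit cycle, i.e. sustained chemical oscillations:

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B = 1.0, 3.0        # B > 1 + A**2, so the fixed point (A, B/A) is unstable

def brusselator(t, z):
    x, y = z
    return [A + x * x * y - (B + 1.0) * x,   # dX/dt
            B * x - x * x * y]               # dY/dt

sol = solve_ivp(brusselator, (0.0, 50.0), [1.2, 3.1], dense_output=True, rtol=1e-8)
x_late = sol.sol(np.linspace(40.0, 50.0, 1000))[0]

# A wide range of X at late times indicates sustained oscillation (a limit cycle)
# rather than decay toward the fixed point X = A, Y = B/A.
print("late-time X range:", x_late.min(), "to", x_late.max())
```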
Recent research has seen reconsideration of Prigogine's ideas of dissipative structures in relation to biological systems.
Dissipative systems in control theory
Willems first introduced the concept of dissipativity in systems theory to describe dynamical systems by input-output properties. Considering a dynamical system described by its state x(t), its input u(t) and its output y(t), the input-output correlation is given a supply rate w(u(t), y(t)). A system is said to be dissipative with respect to a supply rate if there exists a continuously differentiable storage function V(x(t)) such that V(0) = 0, V(x(t)) ≥ 0, and
dV(x(t))/dt ≤ w(u(t), y(t)).
As a special case of dissipativity, a system is said to be passive if the above dissipativity inequality holds with respect to the passivity supply rate w(u(t), y(t)) = u(t)ᵀy(t).
The physical interpretation is that V(x) is the energy stored in the system, whereas w(u(t), y(t)) is the energy that is supplied to the system.
This notion has a strong connection with Lyapunov stability, where the storage functions may play, under certain conditions of controllability and observability of the dynamical system, the role of Lyapunov functions.
Roughly speaking, dissipativity theory is useful for the design of feedback control laws for linear and nonlinear systems. Dissipative systems theory has been discussed by V.M. Popov, J.C. Willems, D.J. Hill, and P. Moylan. In the case of linear time-invariant systems, this is known as positive real transfer functions, and a fundamental tool is the so-called Kalman–Yakubovich–Popov lemma, which relates the state space and the frequency domain properties of positive real systems. Dissipative systems are still an active field of research in systems and control, due to their important applications.
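As a concrete (purely illustrative) check of these definitions, the sketch below simulates a mass-spring-damper driven by a force input u, with the velocity taken as the output y. The stored mechanical energy V = ½mv² + ½kx² serves as a storage function, and along any trajectory V(T) − V(0) never exceeds the supplied energy ∫ u·y dt, so this system is passive (dissipative with respect to the supply rate u·y). The plant and its parameter values are assumptions chosen for the example, not part of the general theory.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.4, 2.0            # mass, damping, stiffness (illustrative)
u = lambda t: np.sin(1.3 * t)      # an arbitrary input force

def plant(t, s):
    x, v = s
    return [v, (u(t) - c * v - k * x) / m]

sol = solve_ivp(plant, (0.0, 20.0), [0.0, 0.0], max_step=1e-3)
x, v = sol.y
V = 0.5 * m * v**2 + 0.5 * k * x**2          # storage function (stored energy)
ut = u(sol.t)                                # input samples; output y is v

# Trapezoid rule for the supplied energy, the integral of u*y over time.
supplied = np.sum(0.5 * (ut[1:] * v[1:] + ut[:-1] * v[:-1]) * np.diff(sol.t))
print("V(T) - V(0)     =", V[-1] - V[0])
print("integral of u*y =", supplied)         # >= V(T) - V(0) for a passive system
```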
Quantum dissipative systems
As quantum mechanics, and any classical dynamical system, relies heavily on Hamiltonian mechanics for which time is reversible, these approximations are not intrinsically able to describe dissipative systems. It has been proposed that in principle, one can weakly couple the system – say, an oscillator – to a bath, i.e., an assembly of many oscillators in thermal equilibrium with a broad band spectrum, and trace (average) over the bath. This yields a master equation which is a special case of a more general setting called the Lindblad equation that is the quantum equivalent of the classical Liouville equation. The well-known form of this equation and its quantum counterpart takes time as a reversible variable over which to integrate, but the very foundations of dissipative structures impose an irreversible and constructive role for time.
Recent research has seen the quantum extension of Jeremy England's theory of dissipative adaptation (which generalizes Prigogine's ideas of dissipative structures to far-from-equilibrium statistical mechanics, as stated above).
Applications on dissipative systems of dissipative structure concept
The framework of dissipative structures, as a mechanism to understand the behavior of systems in constant exchange of energy with their environment, has been successfully applied in different fields of science and in applications such as optics, population dynamics and growth, and chemomechanical structures.
See also
Autocatalytic reactions and order creation
Autopoiesis
Autowave
Conservation equation
Complex system
Dynamical system
Extremal principles in non-equilibrium thermodynamics
Information metabolism
Loschmidt's paradox
Non-equilibrium thermodynamics
Relational order theories
Self-organization
Viable system theory
Vortex engine
Notes
References
B. Brogliato, R. Lozano, B. Maschke, O. Egeland, Dissipative Systems Analysis and Control. Theory and Applications. Springer Verlag, London, 2nd Ed., 2007.
Davies, Paul The Cosmic Blueprint Simon & Schuster, New York 1989 (abridged— 1500 words) (abstract— 170 words) — self-organized structures.
Philipson, Schuster, Modeling by Nonlinear Differential Equations: Dissipative and Conservative Processes, World Scientific Publishing Company 2009.
Prigogine, Ilya, Time, structure and fluctuations. Nobel Lecture, 8 December 1977.
J.C. Willems. Dissipative dynamical systems, part I: General theory; part II: Linear systems with quadratic supply rates. Archive for Rationale mechanics Analysis, vol.45, pp. 321–393, 1972.
External links
The dissipative systems model The Australian National University
Thermodynamic systems
Systems theory
Non-equilibrium thermodynamics | 0.787428 | 0.983777 | 0.774653 |
Relativistic mechanics | In physics, relativistic mechanics refers to mechanics compatible with special relativity (SR) and general relativity (GR). It provides a non-quantum mechanical description of a system of particles, or of a fluid, in cases where the velocities of moving objects are comparable to the speed of light c. As a result, classical mechanics is extended correctly to particles traveling at high velocities and energies, and provides a consistent inclusion of electromagnetism with the mechanics of particles. This was not possible in Galilean relativity, where it would be permitted for particles and light to travel at any speed, including faster than light. The foundations of relativistic mechanics are the postulates of special relativity and general relativity. The unification of SR with quantum mechanics is relativistic quantum mechanics, while attempts for that of GR is quantum gravity, an unsolved problem in physics.
As with classical mechanics, the subject can be divided into "kinematics", the description of motion by specifying positions, velocities and accelerations, and "dynamics", a full description by considering energies, momenta, and angular momenta and their conservation laws, and forces acting on particles or exerted by particles. There is, however, a subtlety: what appears to be "moving" and what is "at rest" (the latter being the subject of statics in classical mechanics) depends on the relative motion of observers who measure in frames of reference.
Some definitions and concepts from classical mechanics do carry over to SR, such as force as the time derivative of momentum (Newton's second law), the work done by a particle as the line integral of force exerted on the particle along a path, and power as the time derivative of work done. However, there are a number of significant modifications to the remaining definitions and formulae. SR states that motion is relative and the laws of physics are the same for all experimenters irrespective of their inertial reference frames. In addition to modifying notions of space and time, SR forces one to reconsider the concepts of mass, momentum, and energy all of which are important constructs in Newtonian mechanics. SR shows that these concepts are all different aspects of the same physical quantity in much the same way that it shows space and time to be interrelated. Consequently, another modification is the concept of the center of mass of a system, which is straightforward to define in classical mechanics but much less obvious in relativity – see relativistic center of mass for details.
The equations become more complicated in the more familiar three-dimensional vector calculus formalism, due to the nonlinearity in the Lorentz factor, which accurately accounts for relativistic velocity dependence and the speed limit of all particles and fields. However, they have a simpler and elegant form in four-dimensional spacetime, which includes flat Minkowski space (SR) and curved spacetime (GR), because three-dimensional vectors derived from space and scalars derived from time can be collected into four vectors, or four-dimensional tensors. The six-component angular momentum tensor is sometimes called a bivector because in the 3D viewpoint it is two vectors (one of these, the conventional angular momentum, being an axial vector).
Relativistic kinematics
The relativistic four-velocity, that is, the four-vector representing velocity in relativity, is defined as follows:
U = dX/dτ
In the above, τ is the proper time of the path through spacetime, called the world-line, followed by the object whose velocity the above represents, and
X = (ct, x, y, z)
is the four-position; the coordinates of an event. Due to time dilation, the proper time is the time between two events in a frame of reference where they take place at the same location. The proper time is related to coordinate time t by:
dt = γ(v) dτ
where γ(v) is the Lorentz factor:
γ(v) = 1/√(1 − v²/c²) = dt/dτ
(either version may be quoted) so it follows:
U = γ(v)(c, vx, vy, vz)
The spatial components, excepting the factor of γ(v), form the velocity v as measured by the observer in their own reference frame. The γ(v) is determined by the velocity between the observer's reference frame and the object's frame, which is the frame in which its proper time is measured. This quantity is invariant under Lorentz transformation, so to check to see what an observer in a different reference frame sees, one simply multiplies the velocity four-vector by the Lorentz transformation matrix between the two reference frames.
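The following minimal Python sketch (with c written out explicitly and an arbitrarily chosen 3-velocity) assembles the four-velocity from an ordinary velocity and checks its defining invariant, U·U = c², using the (+, −, −, −) metric sign convention assumed here for illustration:

```python
import numpy as np

c = 299_792_458.0                       # speed of light, m/s
v = np.array([0.6, 0.0, 0.3]) * c       # 3-velocity of the object (illustrative)

speed2 = np.dot(v, v)
gamma = 1.0 / np.sqrt(1.0 - speed2 / c**2)     # Lorentz factor
U = gamma * np.concatenate(([c], v))           # four-velocity, time component first

# Minkowski inner product with signature (+, -, -, -):
norm2 = U[0]**2 - np.dot(U[1:], U[1:])
print(gamma, norm2 / c**2)   # the second number is 1: U·U = c^2 in any frame
```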
Relativistic dynamics
Rest mass and relativistic mass
The mass of an object as measured in its own frame of reference is called its rest mass or invariant mass and is sometimes written m0. If an object moves with velocity v in some other reference frame, the quantity m = γ(v)m0 is often called the object's "relativistic mass" in that frame.
Some authors use m to denote rest mass, but for the sake of clarity this article will follow the convention of using m for relativistic mass and m0 for rest mass.
Lev Okun has suggested that the concept of relativistic mass "has no rational justification today" and should no longer be taught.
Other physicists, including Wolfgang Rindler and T. R. Sandin, contend that the concept is useful.
See mass in special relativity for more information on this debate.
A particle whose rest mass is zero is called massless. Photons and gravitons are thought to be massless, and neutrinos are nearly so.
Relativistic energy and momentum
There are a couple of (equivalent) ways to define momentum and energy in SR. One method uses conservation laws. If these laws are to remain valid in SR they must be true in every possible reference frame. However, if one does some simple thought experiments using the Newtonian definitions of momentum and energy, one sees that these quantities are not conserved in SR. One can rescue the idea of conservation by making some small modifications to the definitions to account for relativistic velocities. It is these new definitions which are taken as the correct ones for momentum and energy in SR.
The four-momentum of an object is straightforward, identical in form to the classical momentum, but replacing 3-vectors with 4-vectors:
P = m0 U = (E/c, p)
The energy and momentum of an object with invariant mass m0, moving with velocity v with respect to a given frame of reference, are respectively given by
E = γ(v)m0c²
p = γ(v)m0v
The factor γ(v) comes from the definition of the four-velocity described above. The appearance of γ(v) may be stated in an alternative way, which will be explained in the next section.
The kinetic energy, K, is defined as
K = E − m0c² = (γ(v) − 1)m0c²,
and the speed as a function of kinetic energy is given by
v = c √(1 − (m0c²/(K + m0c²))²).
The spatial momentum may be written as p = mv, preserving the form from Newtonian mechanics with relativistic mass substituted for Newtonian mass. However, this substitution fails for some quantities, including force and kinetic energy. Moreover, the relativistic mass is not invariant under Lorentz transformations, while the rest mass is. For this reason, many people prefer to use the rest mass and account for γ(v) explicitly through the 4-velocity or coordinate time.
A simple relation between energy, momentum, and velocity may be obtained from the definitions of energy and momentum by multiplying the energy by v, multiplying the momentum by c², and noting that the two expressions are equal. This yields
E v = p c².
The speed v may then be eliminated by dividing this equation by E c and squaring,
(v/c)² = (pc/E)²,
rewriting the definition of energy as m0c²/E = √(1 − v²/c²) and squaring,
(m0c²/E)² = 1 − (v/c)²,
and substituting:
(m0c²/E)² = 1 − (pc/E)², which rearranges to
E² = (pc)² + (m0c²)².
This is the relativistic energy–momentum relation.
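A quick numerical check of this relation is sketched below; the rest mass (roughly an electron's, quoted for illustration) and the chosen speed are arbitrary:

```python
import numpy as np

c = 299_792_458.0          # speed of light, m/s
m0 = 9.109e-31             # rest mass, kg (roughly an electron, for illustration)
v = 0.8 * c                # an arbitrarily chosen speed

gamma = 1.0 / np.sqrt(1.0 - (v / c)**2)
E = gamma * m0 * c**2      # total energy
p = gamma * m0 * v         # momentum magnitude

lhs = E**2
rhs = (p * c)**2 + (m0 * c**2)**2
print(lhs, rhs, np.isclose(lhs, rhs))   # E^2 = (pc)^2 + (m0 c^2)^2
```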
While the energy and the momentum depend on the frame of reference in which they are measured, the quantity E² − (pc)² is invariant. Its value is c² times the squared magnitude of the 4-momentum vector.
The invariant mass of a system may be written as
m0 = √(E² − (pc)²) / c².
Due to kinetic energy and binding energy, this quantity is different from the sum of the rest masses of the particles of which the system is composed. Rest mass is not a conserved quantity in special relativity, unlike the situation in Newtonian physics. However, even if an object is changing internally, so long as it does not exchange energy or momentum with its surroundings, its rest mass will not change and can be calculated with the same result in any reference frame.
Mass–energy equivalence
The relativistic energy–momentum equation holds for all particles, even for massless particles for which m0 = 0. In this case:
E = pc.
When substituted into Ev = c2p, this gives v = c: massless particles (such as photons) always travel at the speed of light.
Notice that the rest mass of a composite system will generally be slightly different from the sum of the rest masses of its parts since, in its rest frame, their kinetic energy will increase its mass and their (negative) binding energy will decrease its mass. In particular, a hypothetical "box of light" would have rest mass even though made of particles which do not since their momenta would cancel.
Looking at the above formula for invariant mass of a system, one sees that, when a single massive object is at rest (v = 0, p = 0), there is a non-zero mass remaining: m0 = E/c2.
The corresponding energy, which is also the total energy when a single particle is at rest, is referred to as "rest energy". In systems of particles which are seen from a moving inertial frame, total energy increases and so does momentum. However, for single particles the rest mass remains constant, and for systems of particles the invariant mass remain constant, because in both cases, the energy and momentum increases subtract from each other, and cancel. Thus, the invariant mass of systems of particles is a calculated constant for all observers, as is the rest mass of single particles.
The mass of systems and conservation of invariant mass
For systems of particles, the energy–momentum equation requires summing the momentum vectors of the particles:
(ΣE)² = (m0c²)² + |Σp|²c²,
where the sum of the energies E and the vector sum of the momenta p run over all particles of the system.
The inertial frame in which the momenta of all particles sums to zero is called the center of momentum frame. In this special frame, the relativistic energy–momentum equation has p = 0, and thus gives the invariant mass of the system as merely the total energy of all parts of the system, divided by c2
This is the invariant mass of any system which is measured in a frame where it has zero total momentum, such as a bottle of hot gas on a scale. In such a system, the mass which the scale weighs is the invariant mass, and it depends on the total energy of the system. It is thus more than the sum of the rest masses of the molecules, but also includes all the totaled energies in the system as well. Like energy and momentum, the invariant mass of isolated systems cannot be changed so long as the system remains totally closed (no mass or energy allowed in or out), because the total relativistic energy of the system remains constant so long as nothing can enter or leave it.
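The following sketch illustrates this for a simple two-photon system (a toy "box of light"): each photon is massless, yet in the frame where their momenta cancel the pair has a nonzero invariant mass equal to the total energy divided by c². The photon energy is an arbitrary illustrative value.

```python
import numpy as np

c = 299_792_458.0
E_photon = 1.0e-13                     # energy of each photon, J (illustrative)

# Four-momenta (E/c, px, py, pz) of two photons moving in opposite directions:
p1 = np.array([E_photon / c,  E_photon / c, 0.0, 0.0])
p2 = np.array([E_photon / c, -E_photon / c, 0.0, 0.0])
P = p1 + p2                            # total four-momentum of the system

E_total = P[0] * c
p_total = np.linalg.norm(P[1:])        # zero: this is the center of momentum frame
m_inv = np.sqrt(E_total**2 - (p_total * c)**2) / c**2
print(m_inv, 2 * E_photon / c**2)      # equal: two massless photons have system mass
```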
An increase in the energy of such a system which is caused by translating the system to an inertial frame which is not the center of momentum frame, causes an increase in energy and momentum without an increase in invariant mass. E = m0c2, however, applies only to isolated systems in their center-of-momentum frame where momentum sums to zero.
Taking this formula at face value, we see that in relativity, mass is simply energy by another name (and measured in different units). In 1927 Einstein remarked about special relativity, "Under this theory mass is not an unalterable magnitude, but a magnitude dependent on (and, indeed, identical with) the amount of energy."
Closed (isolated) systems
In a "totally-closed" system (i.e., isolated system) the total energy, the total momentum, and hence the total invariant mass are conserved. Einstein's formula for change in mass translates to its simplest ΔE = Δmc2 form, however, only in non-closed systems in which energy is allowed to escape (for example, as heat and light), and thus invariant mass is reduced. Einstein's equation shows that such systems must lose mass, in accordance with the above formula, in proportion to the energy they lose to the surroundings. Conversely, if one can measure the differences in mass between a system before it undergoes a reaction which releases heat and light, and the system after the reaction when heat and light have escaped, one can estimate the amount of energy which escapes the system.
Chemical and nuclear reactions
In both nuclear and chemical reactions, such energy represents the difference in binding energies of electrons in atoms (for chemistry) or between nucleons in nuclei (in atomic reactions). In both cases, the mass difference between reactants and (cooled) products measures the mass of heat and light which will escape the reaction, and thus (using the equation) give the equivalent energy of heat and light which may be emitted if the reaction proceeds.
In chemistry, the mass differences associated with the emitted energy are around 10−9 of the molecular mass. However, in nuclear reactions the energies are so large that they are associated with mass differences, which can be estimated in advance, if the products and reactants have been weighed (atoms can be weighed indirectly by using atomic masses, which are always the same for each nuclide). Thus, Einstein's formula becomes important when one has measured the masses of different atomic nuclei. By looking at the difference in masses, one can predict which nuclei have stored energy that can be released by certain nuclear reactions, providing important information which was useful in the development of nuclear energy and, consequently, the nuclear bomb. Historically, for example, Lise Meitner was able to use the mass differences in nuclei to estimate that there was enough energy available to make nuclear fission a favorable process. The implications of this special form of Einstein's formula have thus made it one of the most famous equations in all of science.
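A minimal sketch of this kind of bookkeeping is shown below, using the formation of a deuteron from a proton and a neutron; the atomic-mass-unit values are rounded reference figures quoted only for illustration, and the result reproduces the familiar deuteron binding energy of roughly 2.2 MeV:

```python
u_to_MeV = 931.494            # energy equivalent of one atomic mass unit, MeV

m_proton = 1.007276           # masses in atomic mass units (rounded values)
m_neutron = 1.008665
m_deuteron = 2.013553

delta_m = m_proton + m_neutron - m_deuteron   # mass defect
binding_energy = delta_m * u_to_MeV           # energy released on binding, in MeV
print(f"mass defect    = {delta_m:.6f} u")
print(f"binding energy = {binding_energy:.2f} MeV")   # about 2.22 MeV
```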
Center of momentum frame
The equation E = m0c2 applies only to isolated systems in their center of momentum frame. It has been popularly misunderstood to mean that mass may be converted to energy, after which the mass disappears. However, popular explanations of the equation as applied to systems include open (non-isolated) systems for which heat and light are allowed to escape, when they otherwise would have contributed to the mass (invariant mass) of the system.
Historically, confusion about mass being "converted" to energy has been aided by confusion between mass and "matter", where matter is defined as fermion particles. In such a definition, electromagnetic radiation and kinetic energy (or heat) are not considered "matter". In some situations, matter may indeed be converted to non-matter forms of energy (see above), but in all these situations, the matter and non-matter forms of energy still retain their original mass.
For isolated systems (closed to all mass and energy exchange), mass never disappears in the center of momentum frame, because energy cannot disappear. Instead, this equation, in context, means only that when any energy is added to, or escapes from, a system in the center-of-momentum frame, the system will be measured as having gained or lost mass, in proportion to energy added or removed. Thus, in theory, if an atomic bomb were placed in a box strong enough to hold its blast, and detonated upon a scale, the mass of this closed system would not change, and the scale would not move. Only when a transparent "window" was opened in the super-strong plasma-filled box, and light and heat were allowed to escape in a beam, and the bomb components to cool, would the system lose the mass associated with the energy of the blast. In a 21 kiloton bomb, for example, about a gram of light and heat is created. If this heat and light were allowed to escape, the remains of the bomb would lose a gram of mass, as it cooled. In this thought-experiment, the light and heat carry away the gram of mass, and would therefore deposit this gram of mass in the objects that absorb them.
Angular momentum
In relativistic mechanics, the time-varying mass moment
N = m x − t p = (E/c²) x − t p
and orbital 3-angular momentum
L = x × p
of a point-like particle are combined into a four-dimensional bivector in terms of the 4-position X and the 4-momentum P of the particle:
M = X ∧ P
where ∧ denotes the exterior product. This tensor is additive: the total angular momentum of a system is the sum of the angular momentum tensors for each constituent of the system. So, for an assembly of discrete particles one sums the angular momentum tensors over the particles, or integrates the density of angular momentum over the extent of a continuous mass distribution.
Each of the six components forms a conserved quantity when aggregated with the corresponding components for other objects and fields.
Force
In special relativity, Newton's second law does not hold in the form F = ma, but it does if it is expressed as
F = dp/dt,
where p = γ(v)m0v is the momentum as defined above and m0 is the invariant mass. Thus, the force is given by
F = d(γ(v)m0v)/dt.
Derivation
Starting from
F = d(γ(v)m0v)/dt
and carrying out the derivatives gives
F = γ(v)m0 a + γ(v)³m0 (v·a)/c² v,
using the identity dγ(v)/dt = γ(v)³ (v·a)/c². If the acceleration is separated into the part parallel to the velocity (a∥) and the part perpendicular to it (a⊥), so that a = a∥ + a⊥, one gets
F = γ(v)m0 (a∥ + a⊥) + γ(v)³m0 (v·a∥)/c² v.
By construction a∥ and v are parallel, so (v·a∥)v is a vector with magnitude v²a∥ in the direction of v (and hence a∥), which allows the replacement (v·a∥)v = v²a∥; then
F = γ(v)³m0 a∥ + γ(v)m0 a⊥.
Consequently, in some old texts, γ(v)3m0 is referred to as the longitudinal mass, and γ(v)m0 is referred to as the transverse mass, which is numerically the same as the relativistic mass. See mass in special relativity.
If one inverts this to calculate acceleration from force, one gets
a = (1/(γ(v)m0)) (F − (v·F) v / c²).
The force described in this section is the classical 3-D force which is not a four-vector. This 3-D force is the appropriate concept of force since it is the force which obeys Newton's third law of motion. It should not be confused with the so-called four-force which is merely the 3-D force in the comoving frame of the object transformed as if it were a four-vector. However, the density of 3-D force (linear momentum transferred per unit four-volume) is a four-vector (density of weight +1) when combined with the negative of the density of power transferred.
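The sketch below evaluates this inversion numerically for an arbitrary force and velocity (the mass, speed, and force values are illustrative), showing that the resulting acceleration is in general not parallel to the applied 3-force:

```python
import numpy as np

c = 299_792_458.0
m0 = 1.0                                   # rest mass, kg (illustrative)
v = np.array([0.6 * c, 0.0, 0.0])          # velocity of the particle
F = np.array([1.0, 1.0, 0.0])              # applied 3-force, N (illustrative)

gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)
a = (F - np.dot(v, F) * v / c**2) / (gamma * m0)   # acceleration from the force

cos_angle = np.dot(a, F) / (np.linalg.norm(a) * np.linalg.norm(F))
print("acceleration:", a)
print("angle between a and F (degrees):", np.degrees(np.arccos(cos_angle)))
```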
Torque
The torque acting on a point-like particle is defined as the derivative of the angular momentum tensor given above with respect to proper time:
Γ = dM/dτ = X ∧ F,
or in tensor components:
Γαβ = Xα Fβ − Xβ Fα,
where F is the 4d force acting on the particle at the event X. As with angular momentum, torque is additive, so for an extended object one sums or integrates over the distribution of mass.
Kinetic energy
The work-energy theorem says the change in kinetic energy is equal to the work done on the body. In special relativity:
ΔK = (γ1(v1) − γ0(v0)) m0c².
If in the initial state the body was at rest, so v0 = 0 and γ0(v0) = 1, and in the final state it has speed v1 = v, setting γ1(v1) = γ(v), the kinetic energy is then
K = (γ(v) − 1) m0c²,
a result that can be directly obtained by subtracting the rest energy m0c2 from the total relativistic energy γ(v)m0c2.
Newtonian limit
The Lorentz factor γ(v) can be expanded into a Taylor series or binomial series for (v/c)² < 1, obtaining:
γ(v) = 1 + (1/2)(v/c)² + (3/8)(v/c)⁴ + ...
and consequently
E = γ(v)m0c² = m0c² + (1/2)m0v² + (3/8)m0v⁴/c² + ...
p = γ(v)m0v = m0v + (1/2)m0v³/c² + ...
For velocities much smaller than that of light, one can neglect the terms with c2 and higher in the denominator. These formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities.
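This limit can be checked numerically, as in the sketch below, by comparing the exact relativistic kinetic energy and momentum with their Newtonian counterparts at a low speed (the chosen mass and speed are arbitrary illustrative values):

```python
import numpy as np

c = 299_792_458.0
m0 = 2.0                       # rest mass, kg (illustrative)
v = 3.0e4                      # 30 km/s, so v/c is about 1e-4

gamma = 1.0 / np.sqrt(1.0 - (v / c)**2)
K_rel = (gamma - 1.0) * m0 * c**2      # relativistic kinetic energy
K_newton = 0.5 * m0 * v**2             # Newtonian kinetic energy
p_rel = gamma * m0 * v                 # relativistic momentum
p_newton = m0 * v                      # Newtonian momentum

print("kinetic energy ratio:", K_rel / K_newton)   # 1 + (3/4)(v/c)^2 + ... ≈ 1
print("momentum ratio:      ", p_rel / p_newton)   # gamma ≈ 1
```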
See also
Twin paradox
Relativistic equations
Relativistic heat conduction
Classical electromagnetism and special relativity
Relativistic system (mathematics)
Relativistic Lagrangian mechanics
References
Notes
Further reading
General scope and special/general relativity
Concepts of Modern Physics (4th Edition), A. Beiser, Physics, McGraw-Hill (International), 1987,
Electromagnetism and special relativity
Classical mechanics and special relativity
General relativity
Theory of relativity | 0.785385 | 0.986266 | 0.774599 |
Tesla (unit) | The tesla (symbol: T) is the unit of magnetic flux density (also called magnetic B-field strength) in the International System of Units (SI).
One tesla is equal to one weber per square metre. The unit was announced during the General Conference on Weights and Measures in 1960 and is named in honour of Serbian-American electrical and mechanical engineer Nikola Tesla, upon the proposal of the Slovenian electrical engineer France Avčin.
Definition
A particle, carrying a charge of one coulomb (C), and moving perpendicularly through a magnetic field of one tesla, at a speed of one metre per second (m/s), experiences a force with magnitude one newton (N), according to the Lorentz force law. That is,
1 T = 1 N⋅s/(C⋅m).
As an SI derived unit, the tesla can also be expressed in terms of other units. For example, a magnetic flux of 1 weber (Wb) through a surface of one square metre is equal to a magnetic flux density of 1 tesla. That is,
1 T = 1 Wb/m².
Expressed only in SI base units, 1 tesla is:
1 T = 1 kg⋅s⁻²⋅A⁻¹ = 1 kg/(A⋅s²),
where A is ampere, kg is kilogram, and s is second.
Additional equivalences result from the derivation of coulombs from amperes (A), 1 C = 1 A⋅s:
1 T = 1 N/(A⋅m),
the relationship between newtons and joules (J), 1 J = 1 N⋅m:
1 T = 1 J/(A⋅m²),
and the derivation of the weber from volts (V), 1 Wb = 1 V⋅s:
1 T = 1 V⋅s/m².
Electric vs. magnetic field
In the production of the Lorentz force, the difference between electric fields and magnetic fields is that a force from a magnetic field on a charged particle is generally due to the charged particle's movement, while the force imparted by an electric field on a charged particle is not due to the charged particle's movement. This may be appreciated by looking at the units for each. The unit of electric field in the MKS system of units is newtons per coulomb, N/C, while the magnetic field (in teslas) can be written as N/(C⋅m/s). The dividing factor between the two types of field is metres per second (m/s), which is velocity. This relationship immediately highlights the fact that whether a static electromagnetic field is seen as purely magnetic, or purely electric, or some combination of these, is dependent upon one's reference frame (that is, one's velocity relative to the field).
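The velocity dependence of the magnetic contribution can be made concrete with a small numerical sketch of the Lorentz force F = q(E + v × B), using illustrative field values: with a charge of 1 C moving at 1 m/s perpendicular to a 1 T field, the magnetic part of the force has magnitude 1 N (matching the definition above), and doubling the speed doubles the magnetic part while leaving the electric part unchanged.

```python
import numpy as np

q = 1.0                                  # charge, C (illustrative)
E = np.array([0.0, 100.0, 0.0])          # electric field, V/m (illustrative)
B = np.array([0.0, 0.0, 1.0])            # magnetic flux density, T

def lorentz_force(v):
    """Lorentz force F = q(E + v x B) on a point charge with velocity v."""
    return q * (E + np.cross(v, B))

v1 = np.array([1.0, 0.0, 0.0])           # 1 m/s along x, perpendicular to B
v2 = 2.0 * v1                            # twice the speed
print(lorentz_force(v1))                 # magnetic part has magnitude 1 N
print(lorentz_force(v2))                 # magnetic part doubles; electric part does not
```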
In ferromagnets, the movement creating the magnetic field is the electron spin (and to a lesser extent electron orbital angular momentum). In a current-carrying wire (electromagnets) the movement is due to electrons moving through the wire (whether the wire is straight or circular).
Conversion to non-SI units
One tesla is equivalent to:

10,000 gauss (G), used in the CGS system;
1,000,000,000 gamma (γ), used in geophysics (1 γ = 1 nT).
For the relation to the units of the magnetising field (ampere per metre or oersted), see the article on permeability.
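For quick conversions, the following minimal helpers (names are illustrative) implement the two equivalences listed above: 1 T = 10,000 G and 1 T = 10⁹ γ, i.e. 1 γ = 1 nT.

```python
def tesla_to_gauss(b_tesla):
    """1 tesla = 10**4 gauss (CGS unit)."""
    return b_tesla * 1e4

def tesla_to_gamma(b_tesla):
    """1 tesla = 10**9 gamma (geophysics unit; 1 gamma = 1 nT)."""
    return b_tesla * 1e9

print(tesla_to_gauss(31.869e-6))  # Earth's field at the equator, ~0.319 G
print(tesla_to_gamma(5e-3))       # a refrigerator magnet, in gamma
```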
Examples
The following examples are listed in the ascending order of the magnetic-field strength.
31.869 μT – strength of Earth's magnetic field at 0° latitude, 0° longitude
40 μT – walking under a high-voltage power line
5 mT – the strength of a typical refrigerator magnet
0.3 T – the strength of solar sunspots
1 T to 2.4 T – coil gap of a typical loudspeaker magnet
1.5 T to 3 T – strength of medical magnetic resonance imaging systems in practice, experimentally up to 17 T
4 T – strength of the superconducting magnet built around the CMS detector at CERN
5.16 T – the strength of a specially designed room temperature Halbach array
8 T – the strength of LHC magnets
11.75 T – the strength of INUMAC magnets, largest MRI scanner
13 T – strength of the superconducting ITER magnet system
14.5 T – highest magnetic field strength ever recorded for an accelerator steering magnet at Fermilab
16 T – magnetic field strength required to levitate a frog (by diamagnetic levitation of the water in its body tissues) according to the 2000 Ig Nobel Prize in Physics
17.6 T – strongest field trapped in a superconductor in a lab as of July 2014
20 T – strength of the large scale high temperature superconducting magnet developed by MIT and Commonwealth Fusion Systems to be used in fusion reactors
27 T – maximal field strengths of superconducting electromagnets at cryogenic temperatures
35.4 T – the current (2009) world record for a superconducting electromagnet in a background magnetic field
45 T – the current (2015) world record for continuous field magnets
97.4 T – strongest magnetic field produced by a "non-destructive" magnet
100 T – approximate magnetic field strength of a typical white dwarf star
1200 T – the field, lasting for about 100 microseconds, formed using the electromagnetic flux-compression technique
10⁹ T – Schwinger limit above which the electromagnetic field itself is expected to become nonlinear
10⁸ – 10¹¹ T (100 MT – 100 GT) – magnetic strength range of magnetar neutron stars
Notes and references
External links
Gauss ↔ Tesla Conversion Tool
SI derived units
Units of magnetic flux density
1960 introductions
Unit | 0.776144 | 0.997957 | 0.774559 |
Thermodynamic free energy | In thermodynamics, the thermodynamic free energy is one of the state functions of a thermodynamic system (the others being internal energy, enthalpy, entropy, etc.). The change in the free energy is the maximum amount of work that the system can perform in a process at constant temperature, and its sign indicates whether the process is thermodynamically favorable or forbidden. Since free energy usually contains potential energy, it is not absolute but depends on the choice of a zero point. Therefore, only relative free energy values, or changes in free energy, are physically meaningful.
The free energy is the portion of any first-law energy that is available to perform thermodynamic work at constant temperature, i.e., work mediated by thermal energy. Free energy is subject to irreversible loss in the course of such work. Since first-law energy is always conserved, it is evident that free energy is an expendable, second-law kind of energy. Several free energy functions may be formulated based on system criteria. Free energy functions are Legendre transforms of the internal energy.
The Gibbs free energy is given by G = H − TS, where H is the enthalpy, T is the absolute temperature, and S is the entropy. H = U + pV, where U is the internal energy, p is the pressure, and V is the volume. G is the most useful for processes involving a system at constant pressure p and temperature T, because, in addition to subsuming any entropy change due merely to heat, a change in G also excludes the p dV work needed to "make space for additional molecules" produced by various processes. Gibbs free energy change therefore equals work not associated with system expansion or compression, at constant temperature and pressure, hence its utility to solution-phase chemists, including biochemists.
The historically earlier Helmholtz free energy is defined in contrast as A = U − TS. Its change is equal to the amount of reversible work done on, or obtainable from, a system at constant T. Thus its appellation "work content", and the designation A (from Arbeit, the German word for work). Since it makes no reference to any quantities involved in work (such as p and V), the Helmholtz function is completely general: its decrease is the maximum amount of work which can be done by a system at constant temperature, and it can increase at most by the amount of work done on a system isothermally. The Helmholtz free energy has a special theoretical importance since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. (Hence its utility to physicists; and to gas-phase chemists and engineers, who do not want to ignore p dV work.)
Historically, the term 'free energy' has been used for either quantity. In physics, free energy most often refers to the Helmholtz free energy, denoted by A (or F), while in chemistry, free energy most often refers to the Gibbs free energy G. The values of the two free energies are usually quite similar and the intended free energy function is often implicit in manuscripts and presentations.
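As a small worked illustration of the Gibbs function at constant temperature and pressure, the sketch below evaluates ΔG = ΔH − TΔS and reads off its sign; the numerical values are illustrative (roughly those for melting ice) and are not taken from the article.

```python
def gibbs_free_energy_change(delta_h_j, temperature_k, delta_s_j_per_k):
    """Delta G = Delta H - T * Delta S, in J, K and J/K respectively."""
    return delta_h_j - temperature_k * delta_s_j_per_k

# Illustrative numbers: an endothermic process driven by entropy
# (values are roughly those for melting one mole of ice).
dH = 6_010.0   # J/mol
dS = 22.0      # J/(mol K)
for T in (250.0, 273.15, 300.0):
    dG = gibbs_free_energy_change(dH, T, dS)
    print(f"T = {T:7.2f} K   dG = {dG:8.1f} J/mol   "
          f"{'favourable' if dG < 0 else 'unfavourable'}")
```

The sign changes near 273 K, where ΔG passes through zero and the process is at equilibrium.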
Meaning of "free"
The basic definition of "energy" is a measure of a body's (in thermodynamics, the system's) ability to cause change. For example, when a person pushes a heavy box a few metres forward, that person exerts mechanical energy, also known as work, on the box over that distance. The mathematical definition of this form of energy is the product of the force exerted on the object and the distance by which the box moved. Because the person changed the stationary position of the box, that person exerted energy on that box. The work exerted can also be called "useful energy", because energy was converted from one form into the intended purpose, i.e. mechanical use. For the case of the person pushing the box, the energy in the form of internal (or potential) energy obtained through metabolism was converted into work to push the box. This energy conversion, however, was not straightforward: while some internal energy went into pushing the box, some was diverted away (lost) in the form of heat (transferred thermal energy).
For a reversible process, heat is the product of the absolute temperature T and the change in entropy ΔS of a body (entropy is a measure of disorder in a system). The difference between the change in internal energy, which is ΔU, and the energy lost in the form of heat is what is called the "useful energy" of the body, or the work of the body performed on an object. In thermodynamics, this is what is known as "free energy". In other words, free energy is a measure of work (useful energy) a system can perform at constant temperature.
Mathematically, free energy is expressed as

A = U − TS.
This expression has commonly been interpreted to mean that work is extracted from the internal energy U while TS represents energy not available to perform work. However, this is incorrect. For instance, in an isothermal expansion of an ideal gas, the internal energy change is ΔU = 0 and the expansion work w = TΔS (the work done by the gas) is derived exclusively from the TS term supposedly not available to perform work. But it is noteworthy that the derivative form of the free energy, dA = −p dV − S dT (for Helmholtz free energy), does indeed indicate that a spontaneous change in a non-reactive system's free energy (NOT the internal energy) comprises the available energy to do work (compression in this case) −p dV and the unavailable energy −S dT. A similar expression can be written for the Gibbs free energy change.
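The isothermal ideal-gas example can be checked numerically. The sketch below (function and variable names are illustrative) computes the reversible work, ΔU, TΔS and ΔA for one mole expanding from 1 L to 2 L at 298.15 K, and confirms that ΔA = −w even though ΔU = 0.

```python
import math

R = 8.314462618  # J/(mol K), molar gas constant

def isothermal_expansion(n_mol, T, V1, V2):
    """Reversible isothermal expansion of an ideal gas from V1 to V2."""
    w_by_gas = n_mol * R * T * math.log(V2 / V1)  # work done by the gas
    dU = 0.0                                      # internal energy unchanged (ideal gas, constant T)
    dS = n_mol * R * math.log(V2 / V1)            # entropy change, q_rev / T
    dA = dU - T * dS                              # Helmholtz free energy change
    return w_by_gas, dU, dS, dA

w, dU, dS, dA = isothermal_expansion(1.0, 298.15, 1.0e-3, 2.0e-3)
print(f"work by gas = {w:.1f} J, dU = {dU} J, "
      f"T*dS = {298.15 * dS:.1f} J, dA = {dA:.1f} J")
# dA = -w: the whole reversible work is supplied by the T*dS term, not by dU.
```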
In the 18th and 19th centuries, the theory of heat, i.e., that heat is a form of energy having relation to vibratory motion, was beginning to supplant both the caloric theory, i.e., that heat is a fluid, and the four element theory, in which heat was the lightest of the four elements. In a similar manner, during these years, heat was beginning to be distinguished into different classification categories, such as "free heat", "combined heat", "radiant heat", specific heat, heat capacity, "absolute heat", "latent caloric", "free" or "perceptible" caloric (calorique sensible), among others.
In 1780, for example, Laplace and Lavoisier stated: “In general, one can change the first hypothesis into the second by changing the words ‘free heat, combined heat, and heat released’ into ‘vis viva, loss of vis viva, and increase of vis viva.’" In this manner, the total mass of caloric in a body, called absolute heat, was regarded as a mixture of two components; the free or perceptible caloric could affect a thermometer, whereas the other component, the latent caloric, could not. The use of the words "latent heat" implied a similarity to latent heat in the more usual sense; it was regarded as chemically bound to the molecules of the body. In the adiabatic compression of a gas, the absolute heat remained constant but the observed rise in temperature implied that some latent caloric had become "free" or perceptible.
During the early 19th century, the concept of perceptible or free caloric began to be referred to as "free heat" or "heat set free". In 1824, for example, the French physicist Sadi Carnot, in his famous "Reflections on the Motive Power of Fire", speaks of quantities of heat 'absorbed or set free' in different transformations. In 1882, the German physicist and physiologist Hermann von Helmholtz coined the phrase 'free energy' for the expression E − TS, in which the change in A (or G) determines the amount of energy 'free' for work under the given conditions, specifically constant temperature.
Thus, in traditional use, the term "free" was attached to Gibbs free energy for systems at constant pressure and temperature, or to Helmholtz free energy for systems at constant temperature, to mean ‘available in the form of useful work.’ With reference to the Gibbs free energy, we need to add the qualification that it is the energy free for non-volume work and compositional changes.
An increasing number of books and journal articles do not include the attachment "free", referring to G as simply Gibbs energy (and likewise for the Helmholtz energy). This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the adjective ‘free’ was supposedly banished. This standard, however, has not yet been universally adopted, and many published articles and books still include the descriptive ‘free’.
Application
Just like the general concept of energy, free energy has a few definitions suitable for different conditions. In physics, chemistry, and biology, these conditions are thermodynamic parameters (temperature T, volume V, pressure p, etc.). Scientists have come up with several ways to define free energy. The mathematical expression of Helmholtz free energy is:

A = U − TS
This definition of free energy is useful for gas-phase reactions or in physics when modeling the behavior of isolated systems kept at a constant volume. For example, if a researcher wanted to perform a combustion reaction in a bomb calorimeter, the volume is kept constant throughout the course of a reaction. Therefore, the heat of the reaction is a direct measure of the internal energy change, q = ΔU. In solution chemistry, on the other hand, most chemical reactions are kept at constant pressure. Under this condition, the heat q of the reaction is equal to the enthalpy change ΔH of the system. Under constant pressure and temperature, the free energy in a reaction is known as the Gibbs free energy G.
These functions have a minimum in chemical equilibrium, as long as certain variables (T, and V or p) are held constant. In addition, they also have theoretical importance in deriving Maxwell relations. Work other than p dV may be added, e.g., for electrochemical cells, or f dx work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress–strain, magnetic, as in adiabatic demagnetization used in the approach to absolute zero, and work due to electric polarization. These are described by tensors.
In most cases of interest there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy. Even for homogeneous "bulk" materials, the free energy functions depend on the (often suppressed) composition, as do all proper thermodynamic potentials (extensive functions), including the internal energy.
Ni is the number of molecules (alternatively, moles) of type i in the system. If these quantities do not appear, it is impossible to describe compositional changes. The differentials for processes at uniform pressure and temperature are (assuming only pV work):

dA = −p dV − S dT + Σi μi dNi
dG = V dp − S dT + Σi μi dNi

where μi is the chemical potential for the ith component in the system. The second relation is especially useful at constant T and p, conditions which are easy to achieve experimentally, and which approximately characterize living creatures. Under these conditions, it simplifies to

(dG)T,p = Σi μi dNi
Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and/or its surroundings.
An example is surface free energy, the amount of increase of free energy when the area of surface increases by every unit area.
The path integral Monte Carlo method is a numerical approach for determining the values of free energies, based on quantum dynamical principles.
Work and free energy change
For a reversible isothermal process, ΔS = qrev/T and therefore the definition of A results in

ΔA = ΔU − TΔS = ΔU − qrev = wrev (at constant temperature)
This tells us that the change in free energy equals the reversible or maximum work for a process performed at constant temperature. Under other conditions, free-energy change is not equal to work; for instance, for a reversible adiabatic expansion of an ideal gas, ΔA = wrev − SΔT. Importantly, for a heat engine, including the Carnot cycle, the free-energy change after a full cycle is zero, while the engine produces nonzero work. It is important to note that for heat engines and other thermal systems, the free energies do not offer convenient characterizations; internal energy and enthalpy are the preferred potentials for characterizing thermal systems.
Free energy change and spontaneous processes
According to the second law of thermodynamics, for any process that occurs in a closed system, the inequality of Clausius, ΔS > q/Tsurr, applies. For a process at constant temperature and pressure without non-PV work, this inequality transforms into ΔG < 0. Similarly, for a process at constant temperature and volume, ΔA < 0. Thus, a negative value of the change in free energy is a necessary condition for a process to be spontaneous; this is the most useful form of the second law of thermodynamics in chemistry. In chemical equilibrium at constant T and p without electrical work, dG = 0.
History
The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in previous years to describe the force that caused chemical reactions. The term affinity, as used in chemical relation, dates back to at least the time of Albertus Magnus.
From the 1998 textbook Modern Thermodynamics by Nobel Laureate and chemistry professor Ilya Prigogine we find: "As motion was explained by the Newtonian concept of force, chemists wanted a similar concept of ‘driving force’ for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the ‘force’ that caused chemical reactions affinity, but it lacked a clear definition."
During the entire 18th century, the dominant view with regard to heat and light was that put forth by Isaac Newton, called the Newtonian hypothesis, which states that light and heat are forms of matter attracted or repelled by other forms of matter, with forces analogous to gravitation or to chemical affinity.
In the 19th century, the French chemist Marcellin Berthelot and the Danish chemist Julius Thomsen had attempted to quantify affinity using heats of reaction. In 1875, after quantifying the heats of reaction for a large number of compounds, Berthelot proposed the principle of maximum work, in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies or of a system of bodies which liberate heat.
In addition to this, in 1780 Antoine Lavoisier and Pierre-Simon Laplace laid the foundations of thermochemistry by showing that the heat given out in a reaction is equal to the heat absorbed in the reverse reaction. They also investigated the specific heat and latent heat of a number of substances, and amounts of heat given out in combustion. In a similar manner, in 1840 Swiss chemist Germain Hess formulated the principle that the evolution of heat in a reaction is the same whether the process is accomplished in one-step process or in a number of stages. This is known as Hess' law. With the advent of the mechanical theory of heat in the early 19th century, Hess's law came to be viewed as a consequence of the law of conservation of energy.
Based on these and other ideas, Berthelot and Thomsen, as well as others, considered the heat given out in the formation of a compound as a measure of the affinity, or the work done by the chemical forces. This view, however, was not entirely correct. In 1847, the English physicist James Joule showed that he could raise the temperature of water by turning a paddle wheel in it, thus showing that heat and mechanical work were equivalent or proportional to each other, i.e., approximately, dW ∝ dQ. This statement came to be known as the mechanical equivalent of heat and was a precursory form of the first law of thermodynamics.
By 1865, the German physicist Rudolf Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from a combustion reaction in a coal furnace to boil water, and use this heat to vaporize steam, and then use the enhanced high-pressure energy of the vaporized steam to push a piston. Thus, we might naively reason that one can entirely convert the initial combustion heat of the chemical reaction into the work of pushing the piston. Clausius showed, however, that we must take into account the work that the molecules of the working body, i.e., the water molecules in the cylinder, do on each other as they pass or transform from one step or state of the engine cycle to the next. Clausius originally called this the "transformation content" of the body, and then later changed the name to entropy. Thus, the heat used to transform the working body of molecules from one state to the next cannot be used to do external work, e.g., to push the piston. Clausius defined this transformation heat as dQ = T dS.
In 1873, Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies, being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. In 1876, Gibbs built on this framework by introducing the concept of chemical potential so to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words, to summarize his results in 1873, Gibbs states:
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body.
Hence, in 1882, after the introduction of these arguments by Clausius and Gibbs, the German scientist Hermann von Helmholtz stated, in opposition to Berthelot and Thomsen's hypothesis that chemical affinity is a measure of the heat of reaction as based on the principle of maximum work, that affinity is not the heat given out in the formation of a compound but rather it is the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant or Helmholtz free energy A at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or A is the amount of energy "free" for work under the given conditions.
Up until this point, the general view had been such that: “all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish”. Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Reactions by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.
See also
Energy
Exergy
Merle Randall
Second law of thermodynamics
Superconductivity
References
Energy (physics)
State functions | 0.777964 | 0.995448 | 0.774423 |
Liénard–Wiechert potential | The Liénard–Wiechert potentials describe the classical electromagnetic effect of a moving electric point charge in terms of a vector potential and a scalar potential in the Lorenz gauge. Stemming directly from Maxwell's equations, these describe the complete, relativistically correct, time-varying electromagnetic field for a point charge in arbitrary motion, but are not corrected for quantum mechanical effects. Electromagnetic radiation in the form of waves can be obtained from these potentials. These expressions were developed in part by Alfred-Marie Liénard in 1898 and independently by Emil Wiechert in 1900.
Equations
Definition of Liénard–Wiechert potentials
The retarded time is defined, in the context of distributions of charges and currents, as

tr(r, t) = t − |r − r′|/c,

where r is the observation point, and r′ is the observed point subject to the variations of source charges and currents.
For a moving point charge whose given trajectory is rs(t), the source point r′ = rs(tr) is no longer fixed, but becomes a function of the retarded time itself. In other words, following the trajectory of the charge yields the implicit equation

tr = t − |r − rs(tr)|/c,

which provides the retarded time tr as a function of the current time t (and of the given trajectory):

tr = tr(r, t).
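Since the equation above only defines tr implicitly, it normally has to be solved numerically for a given trajectory. The sketch below (the circular trajectory, radius and frequency are illustrative assumptions, not from the article) finds tr by bisection, exploiting the fact that tr + |r − rs(tr)|/c is monotonically increasing whenever the source moves slower than c.

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Illustrative source trajectory: a point charge in uniform circular motion
# of radius a, angular frequency omega (chosen so that a*omega << c).
a, omega = 1.0, 1.0e6

def r_source(t):
    return (a * math.cos(omega * t), a * math.sin(omega * t), 0.0)

def dist(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def retarded_time(r_obs, t, tol=1e-15):
    """Solve t_r + |r_obs - r_s(t_r)|/c - t = 0 by bisection.

    The left side is monotonically increasing in t_r whenever the source
    moves slower than c, so the root is unique.
    """
    f = lambda tr: tr + dist(r_obs, r_source(tr)) / C - t
    lo = t - (dist(r_obs, (0.0, 0.0, 0.0)) + a) / C   # f(lo) <= 0 for a bounded orbit
    hi = t                                            # f(hi) >= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

r_obs, t = (10.0, 0.0, 0.0), 0.0
tr = retarded_time(r_obs, t)
print(tr, dist(r_obs, r_source(tr)) / C)  # tr ~ -|r - r_s(tr)|/c here, since t = 0
```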
The Liénard–Wiechert potentials φ (scalar potential field) and A (vector potential field) are, for a source point charge q at position rs traveling with velocity vs:

φ(r, t) = 1/(4πε0) ⋅ [ q / ((1 − n ⋅ βs)|r − rs|) ]tr

and

A(r, t) = μ0c/(4π) ⋅ [ q βs / ((1 − n ⋅ βs)|r − rs|) ]tr = (βs(tr)/c) φ(r, t),

where:

βs = vs/c is the velocity of the source expressed as a fraction of the speed of light;
|r − rs| is the distance from the source;
n = (r − rs)/|r − rs| is the unit vector pointing in the direction from the source and,
The symbol [ ... ]tr means that the quantities inside the brackets should be evaluated at the retarded time tr = tr(r, t).
This can also be written in a covariant way, where the electromagnetic four-potential at the field point x is:

Aμ(x) = (μ0 q c / 4π) ⋅ uμ / (uν (x − rs)ν),

where rs is the four-position of the source evaluated at the retarded time and uμ is its four-velocity.
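A minimal numerical sketch of these definitions, for the special case of a charge moving with constant velocity (trajectory rs(t) = vt, an assumption made here only to keep the retarded-time solution simple): it finds tr by fixed-point iteration and then evaluates φ and A with the retarded n, βs and |r − rs|. All names and numbers are illustrative.

```python
import math

C = 299_792_458.0          # speed of light, m/s
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
K = 1.0 / (4.0 * math.pi * EPS0)

def lw_potentials(q, r_obs, t, velocity, max_iter=200):
    """Lienard-Wiechert potentials of a charge with trajectory r_s(t) = velocity * t."""
    vx, vy, vz = velocity
    x, y, z = r_obs

    def source(tr):
        return (vx * tr, vy * tr, vz * tr)

    # Fixed-point iteration t_r <- t - |r - r_s(t_r)|/c; it contracts because |v| < c.
    tr = t
    for _ in range(max_iter):
        sx, sy, sz = source(tr)
        R = math.sqrt((x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2)
        tr = t - R / C

    sx, sy, sz = source(tr)
    R = math.sqrt((x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2)
    n = ((x - sx) / R, (y - sy) / R, (z - sz) / R)           # unit vector source -> field point
    beta = (vx / C, vy / C, vz / C)
    denom = 1.0 - sum(ni * bi for ni, bi in zip(n, beta))    # (1 - n . beta), retarded

    phi = K * q / (denom * R)                                # scalar potential
    A = tuple(K * q * bi / (C * denom * R) for bi in beta)   # vector potential, A = beta * phi / c
    return phi, A

print(lw_potentials(1.602e-19, (1.0, 0.0, 0.0), 0.0, (0.5 * C, 0.0, 0.0)))
```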
Field computation
We can calculate the electric and magnetic fields directly from the potentials using the definitions:

E = −∇φ − ∂A/∂t

and

B = ∇ × A.

The calculation is nontrivial and requires a number of steps. The electric and magnetic fields are (in non-covariant form):

E(r, t) = q/(4πε0) [ (n − βs)/(γ²(1 − n ⋅ βs)³|r − rs|²) + n × ((n − βs) × β̇s)/(c(1 − n ⋅ βs)³|r − rs|) ]tr

and

B(r, t) = (n(tr)/c) × E(r, t),

where βs = vs/c, n = (r − rs)/|r − rs|, β̇s is the time derivative of βs, and γ = 1/√(1 − βs²) (the Lorentz factor).
Note that the (n − βs) part of the first term of the electric field updates the direction of the field toward the instantaneous position of the charge, if it continues to move with constant velocity vs. This term is connected with the "static" part of the electromagnetic field of the charge.
The second term, which is connected with electromagnetic radiation by the moving charge, requires charge acceleration and if this is zero, the value of this term is zero, and the charge does not radiate (emit electromagnetic radiation). This term requires additionally that a component of the charge acceleration be in a direction transverse to the line which connects the charge and the observer of the field. The direction of the field associated with this radiative term is toward the fully time-retarded position of the charge (i.e. where the charge was when it was accelerated).
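The claim that the "static" term points toward the instantaneous position of a uniformly moving charge can be checked numerically. The sketch below evaluates only the velocity (first) term of the field above for a charge with constant velocity and verifies that the resulting vector is parallel to the line from the charge's present position to the observer; names, speeds and positions are illustrative assumptions.

```python
import math

C = 299_792_458.0
EPS0 = 8.8541878128e-12
K = 1.0 / (4.0 * math.pi * EPS0)

def velocity_field(q, r_obs, t, velocity):
    """Velocity ("static") term of the Lienard-Wiechert E field for a charge
    moving with constant velocity along r_s(t) = velocity * t."""
    beta = tuple(v / C for v in velocity)
    beta2 = sum(b * b for b in beta)
    gamma2 = 1.0 / (1.0 - beta2)

    # Retarded time by fixed-point iteration (contraction since |v| < c).
    tr = t
    for _ in range(200):
        R_vec = tuple(r_obs[i] - velocity[i] * tr for i in range(3))
        R = math.sqrt(sum(comp * comp for comp in R_vec))
        tr = t - R / C
    R_vec = tuple(r_obs[i] - velocity[i] * tr for i in range(3))
    R = math.sqrt(sum(comp * comp for comp in R_vec))
    n = tuple(comp / R for comp in R_vec)
    kappa = 1.0 - sum(n[i] * beta[i] for i in range(3))

    return tuple(K * q * (n[i] - beta[i]) / (gamma2 * kappa ** 3 * R ** 2)
                 for i in range(3))

q, t = 1.602e-19, 0.0
r_obs = (2.0, 1.0, 0.0)
v = (0.6 * C, 0.0, 0.0)
E = velocity_field(q, r_obs, t, v)
present = tuple(vi * t for vi in v)                  # instantaneous position of the charge
d = tuple(r_obs[i] - present[i] for i in range(3))
# E should be parallel to d (no aberration for uniform motion):
print(E[0] * d[1] - E[1] * d[0])                     # cross-product z-component ~ 0
```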
Derivation
The scalar and vector potentials satisfy the nonhomogeneous electromagnetic wave equation where the sources are expressed with the charge and current densities and
and the Ampère-Maxwell law is:
Since the potentials are not unique, but have gauge freedom, these equations can be simplified by gauge fixing. A common choice is the Lorenz gauge condition:

∇ ⋅ A + (1/c²) ∂φ/∂t = 0
Then the nonhomogeneous wave equations become uncoupled and symmetric in the potentials:

∇²φ − (1/c²) ∂²φ/∂t² = −ρ/ε0

∇²A − (1/c²) ∂²A/∂t² = −μ0 J
Generally, the retarded solutions for the scalar and vector potentials (SI units) are

φ(r, t) = 1/(4πε0) ∫ ρ(r′, tr′)/|r − r′| d³r′ + φ0(r, t)

and

A(r, t) = μ0/(4π) ∫ J(r′, tr′)/|r − r′| d³r′ + A0(r, t),

where tr′ = t − |r − r′|/c is the retarded time and φ0(r, t) and A0(r, t)
satisfy the homogeneous wave equation with no sources and boundary conditions. In the case that there are no boundaries surrounding the sources then
φ0(r, t) = 0 and A0(r, t) = 0.
For a moving point charge whose trajectory is given as a function of time by rs(t), the charge and current densities are as follows:

ρ(r, t) = q δ³(r − rs(t))

J(r, t) = q vs(t) δ³(r − rs(t)),

where δ³ is the three-dimensional Dirac delta function and vs(t) is the velocity of the point charge.
Substituting into the expressions for the potential gives
These integrals are difficult to evaluate in their present form, so we will rewrite them by replacing with and integrating over the delta distribution :
We exchange the order of integration:
The delta function picks out which allows us to perform the inner integration with ease. Note that is a function of , so this integration also fixes .
The retarded time is a function of the field point and the source trajectory , and hence depends on . To evaluate this integral, therefore, we need the identity
where each is a zero of . Because there is only one retarded time for any given space-time coordinates and source trajectory , this reduces to:
where and are evaluated at the retarded time , and we have used the identity with . Notice that the retarded time is the solution of the equation . Finally, the delta function picks out , and
which are the Liénard–Wiechert potentials.
Lorenz gauge, electric and magnetic fields
In order to calculate the derivatives of and it is convenient to first compute the derivatives of the retarded time. Taking the derivatives of both sides of its defining equation (remembering that ):
Differentiating with respect to t,
Similarly, taking the gradient with respect to and using the multivariable chain rule gives
It follows that
These can be used in calculating the derivatives of the vector potential and the resulting expressions are
These show that the Lorenz gauge is satisfied, namely that .
Similarly one calculates:
By noting that for any vectors , , :
The expression for the electric field mentioned above becomes
which is easily seen to be equal to
Similarly gives the expression of the magnetic field mentioned above:
The source terms , , and are to be evaluated at the retarded time.
Implications
The study of classical electrodynamics was instrumental in Albert Einstein's development of the theory of relativity. Analysis of the motion and propagation of electromagnetic waves led to the special relativity description of space and time. The Liénard–Wiechert formulation is an important launchpad into a deeper analysis of relativistic moving particles.
The Liénard–Wiechert description is accurate for a large, independently moving particle (i.e. the treatment is "classical" and the acceleration of the charge is due to a force independent of the electromagnetic field). The Liénard–Wiechert formulation always provides two sets of solutions: Advanced fields are absorbed by the charges and retarded fields are emitted. Schwarzschild and Fokker considered the advanced field of a system of moving charges, and the retarded field of a system of charges having the same geometry and opposite charges. Linearity of Maxwell's equations in vacuum allows one to add both systems, so that the charges disappear: This trick allows Maxwell's equations to become linear in matter.
Multiplying electric parameters of both problems by arbitrary real constants produces a coherent interaction of light with matter which generalizes Einstein's theory, which is now considered the founding theory of lasers: it is not necessary to study a large set of identical molecules to get coherent amplification in the mode obtained by arbitrary multiplications of advanced and retarded fields.
To compute energy, it is necessary to use the absolute fields which include the zero point field; otherwise, an error appears, for instance in photon counting.
It is important to take into account the zero point field discovered by Planck. It replaces Einstein's "A" coefficient and explains that the classical electron is stable on Rydberg's classical orbits. Moreover, introducing the fluctuations of the zero point field produces Willis E. Lamb's correction of levels of H atom.
Quantum electrodynamics helped bring together the radiative behavior with the quantum constraints. It introduces quantization of normal modes of the electromagnetic field in assumed perfect optical resonators.
Universal speed limit
The force on a particle at a given location and time depends in a complicated way on the position of the source particles at an earlier time due to the finite speed, c, at which electromagnetic information travels. A particle on Earth 'sees' a charged particle accelerate on the Moon as this acceleration happened 1.5 seconds ago, and a charged particle's acceleration on the Sun as happened 500 seconds ago. This earlier time in which an event happens such that a particle at location 'sees' this event at a later time is called the retarded time, . The retarded time varies with position; for example the retarded time at the Moon is 1.5 seconds before the current time and the retarded time on the Sun is 500 s before the current time on the Earth. The retarded time tr=tr(r,t) is defined implicitly by
tr = t − R(tr)/c,

where R(tr) is the distance of the particle from the source at the retarded time. Only electromagnetic wave effects depend fully on the retarded time.
A novel feature in the Liénard–Wiechert potential is seen in the breakup of its terms into two types of field terms (see below), only one of which depends fully on the retarded time. The first of these is the static electric (or magnetic) field term that depends only on the distance to the moving charge, and does not depend on the retarded time at all, if the velocity of the source is constant. The other term is dynamic, in that it requires that the moving charge be accelerating with a component perpendicular to the line connecting the charge and the observer and does not appear unless the source changes velocity. This second term is connected with electromagnetic radiation.
The first term describes near field effects from the charge, and its direction in space is updated with a term that corrects for any constant-velocity motion of the charge on its distant static field, so that the distant static field appears at distance from the charge, with no aberration of light or light-time correction. This term, which corrects for time-retardation delays in the direction of the static field, is required by Lorentz invariance. A charge moving with a constant velocity must appear to a distant observer in exactly the same way as a static charge appears to a moving observer, and in the latter case, the direction of the static field must change instantaneously, with no time-delay. Thus, static fields (the first term) point exactly at the true instantaneous (non-retarded) position of the charged object if its velocity has not changed over the retarded time delay. This is true over any distance separating objects.
The second term, however, which contains information about the acceleration and other unique behavior of the charge that cannot be removed by changing the Lorentz frame (inertial reference frame of the observer), is fully dependent for direction on the time-retarded position of the source. Thus, electromagnetic radiation (described by the second term) always appears to come from the direction of the position of the emitting charge at the retarded time. Only this second term describes information transfer about the behavior of the charge, which transfer occurs (radiates from the charge) at the speed of light. At "far" distances (longer than several wavelengths of radiation), the 1/R dependence of this term makes electromagnetic field effects (the value of this field term) more powerful than "static" field effects, which are described by the 1/R2 field of the first (static) term and thus decay more rapidly with distance from the charge.
Existence and uniqueness of the retarded time
Existence
The retarded time is not guaranteed to exist in general. For example, if, in a given frame of reference, an electron has just been created, then at this very moment another electron does not yet feel its electromagnetic force at all. However, under certain conditions, there always exists a retarded time. For example, if the source charge has existed for an unlimited amount of time, during which it has always travelled at a speed not exceeding , then there exists a valid retarded time . This can be seen by considering the function . At the present time ; . The derivative is given by
By the mean value theorem, . By making sufficiently large, this can become negative, i.e., at some point in the past, . By the intermediate value theorem, there exists an intermediate with , the defining equation of the retarded time. Intuitively, as the source charge moves back in time, the cross section of its light cone at present time expands faster than it can recede, so eventually it must reach the point . This is not necessarily true if the source charge's speed is allowed to be arbitrarily close to , i.e., if for any given speed there was some time in the past when the charge was moving at this speed. In this case the cross section of the light cone at present time approaches the point as the observer travels back in time but does not necessarily ever reach it.
Uniqueness
For a given point and trajectory of the point source , there is at most one value of the retarded time , i.e., one value such that . This can be realized by assuming that there are two retarded times and , with . Then, and . Subtracting gives by the triangle inequality. Unless , this then implies that the average velocity of the charge between and is , which is impossible. The intuitive interpretation is that one can only ever "see" the point source at one location/time at once unless it travels at least at the speed of light to another location. As the source moves forward in time, the cross section of its light cone at present time contracts faster than the source can approach, so it can never intersect the point again.
The conclusion is that, under certain conditions, the retarded time exists and is unique.
See also
Maxwell's equations which govern classical electromagnetism
Classical electromagnetism for the larger theory surrounding this analysis
Relativistic electromagnetism
Special relativity, which was a direct consequence of these analyses
Rydberg formula for quantum description of the EM radiation due to atomic orbital electrons
Jefimenko's equations
Larmor formula
Abraham–Lorentz force
Inhomogeneous electromagnetic wave equation
Wheeler–Feynman absorber theory also known as the Wheeler–Feynman time-symmetric theory
Paradox of a charge in a gravitational field
Whitehead's theory of gravitation
References
External links
The Feynman Lectures on Physics Vol. II Ch. 21: Solutions of Maxwell’s Equations with Currents and Charges
Electromagnetic radiation
Potentials | 0.784608 | 0.987011 | 0.774417 |
Special relativity | In physics, the special theory of relativity, or special relativity for short, is a scientific theory of the relationship between space and time. In Albert Einstein's 1905 paper, On the Electrodynamics of Moving Bodies, the theory is presented as being based on just two postulates:
The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration). This is known as the principle of relativity.
The speed of light in vacuum is the same for all observers, regardless of the motion of light source or observer. This is known as the principle of light constancy, or the principle of light speed invariance.
The first postulate was first formulated by Galileo Galilei (see Galilean invariance).
Origins and significance
Special relativity was described by Albert Einstein in a paper published on 26 September 1905 titled "On the Electrodynamics of Moving Bodies". Maxwell's equations of electromagnetism appeared to be incompatible with Newtonian mechanics, and the Michelson–Morley experiment failed to detect the Earth's motion against the hypothesized luminiferous aether. These led to the development of the Lorentz transformations, by Hendrik Lorentz, which adjust distances and times for moving objects. Special relativity corrects the previously accepted laws of mechanics to handle situations involving all motions and especially those at a speed close to that of light (known as relativistic velocities). Today, special relativity is proven to be the most accurate model of motion at any speed when gravitational and quantum effects are negligible. Even so, the Newtonian model is still valid as a simple and accurate approximation at low velocities (relative to the speed of light), for example, everyday motions on Earth.
Special relativity has a wide range of consequences that have been experimentally verified. They include the relativity of simultaneity, length contraction, time dilation, the relativistic velocity addition formula, the relativistic Doppler effect, relativistic mass, a universal speed limit, mass–energy equivalence, the speed of causality and the Thomas precession. It has, for example, replaced the conventional notion of an absolute universal time with the notion of a time that is dependent on reference frame and spatial position. Rather than an invariant time interval between two events, there is an invariant spacetime interval. Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula , where is the speed of light in a vacuum. It also explains how the phenomena of electricity and magnetism are related.
A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other (as was previously thought to be the case). Rather, space and time are interwoven into a single continuum known as "spacetime". Events that occur at the same time for one observer can occur at different times for another.
Until several years later when Einstein developed general relativity, which introduced a curved spacetime to incorporate gravity, the phrase "special relativity" was not used. A translation sometimes used is "restricted relativity"; "special" really means "special case". Some of the work of Albert Einstein in special relativity is built on the earlier work by Hendrik Lorentz and Henri Poincaré. The theory became essentially complete in 1907, with Hermann Minkowski's papers on spacetime.
The theory is "special" in that it only applies in the special case where the spacetime is "flat", that is, where the curvature of spacetime (a consequence of the energy–momentum tensor and representing gravity) is negligible. To correctly accommodate gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some historical descriptions, does accommodate accelerations as well as accelerating frames of reference.
Just as Galilean relativity is now accepted to be an approximation of special relativity that is valid for low speeds, special relativity is considered an approximation of general relativity that is valid for weak gravitational fields, that is, at a sufficiently small scale (e.g., when tidal forces are negligible) and in conditions of free fall. But general relativity incorporates non-Euclidean geometry to represent gravitational effects as the geometric curvature of spacetime. Special relativity is restricted to the flat spacetime known as Minkowski space. As long as the universe can be modeled as a pseudo-Riemannian manifold, a Lorentz-invariant frame that abides by special relativity can be defined for a sufficiently small neighborhood of each point in this curved spacetime.
Galileo Galilei had already postulated that there is no absolute and well-defined state of rest (no privileged reference frames), a principle now called Galileo's principle of relativity. Einstein extended this principle so that it accounted for the constant speed of light, a phenomenon that had been observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics, including both the laws of mechanics and of electrodynamics.
Traditional "two postulates" approach to special relativity
Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the (then) known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light in vacuum and the independence of physical laws (especially the constancy of the speed of light) from the choice of inertial system. In his initial presentation of special relativity in 1905 he expressed these postulates as:
The principle of relativity – the laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform translatory motion relative to each other.
The principle of invariant light speed – "... light is always propagated in empty space with a definite velocity [speed] c which is independent of the state of motion of the emitting body" (from the preface). That is, light in vacuum propagates with the speed c (a fixed constant, independent of direction) in at least one system of inertial coordinates (the "stationary system"), regardless of the state of motion of the light source.
The constancy of the speed of light was motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous ether. There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance.
The derivation of special relativity depends not only on these two explicit postulates, but also on several tacit assumptions (made in almost all theories of physics), including the isotropy and homogeneity of space and the independence of measuring rods and clocks from their past history.
Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations. But the most common set of postulates remains those employed by Einstein in his original paper. A more mathematical statement of the principle of relativity made later by Einstein, which introduces the concept of simplicity not mentioned above is:
Henri Poincaré provided the mathematical framework for relativity theory by proving that Lorentz transformations are a subset of his Poincaré group of symmetry transformations. Einstein later derived these transformations from his axioms.
Many of Einstein's papers present derivations of the Lorentz transformation based upon these two principles.
Principle of relativity
Reference frames and relative motion
Reference frames play a crucial role in relativity theory. The term reference frame as used here is an observational perspective in space that is not undergoing any change in motion (acceleration), from which a position can be measured along 3 spatial axes (so, at rest or constant velocity). In addition, a reference frame has the ability to determine measurements of the time of events using a "clock" (any reference device with uniform periodicity).
An event is an occurrence that can be assigned a single unique moment and location in space relative to a reference frame: it is a "point" in spacetime. Since the speed of light is constant in relativity irrespective of the reference frame, pulses of light can be used to unambiguously measure distances and refer back to the times that events occurred to the clock, even though light takes time to reach the clock after the event has transpired.
For example, the explosion of a firecracker may be considered to be an "event". We can completely specify an event by its four spacetime coordinates: The time of occurrence and its 3-dimensional spatial location define a reference point. Let's call this reference frame S.
In relativity theory, we often want to calculate the coordinates of an event from differing reference frames. The equations that relate measurements made in different frames are called transformation equations.
Standard configuration
To gain insight into how the spacetime coordinates measured by observers in different reference frames compare with each other, it is useful to work with a simplified setup with frames in a standard configuration. With care, this allows simplification of the math with no loss of generality in the conclusions that are reached. In Fig. 2-1, two Galilean reference frames (i.e., conventional 3-space frames) are displayed in relative motion. Frame S belongs to a first observer O, and frame S′ (pronounced "S prime" or "S dash") belongs to a second observer O′.
The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S′.
Frame S′ moves, for simplicity, in a single direction: the x-direction of frame S with a constant velocity v as measured in frame S.
The origins of frames S and S′ are coincident when time t = 0 for frame S and t′ = 0 for frame S′.
Since there is no absolute reference frame in relativity theory, a concept of "moving" does not strictly exist, as everything may be moving with respect to some other reference frame. Instead, any two frames that move at the same speed in the same direction are said to be comoving. Therefore, S and S′ are not comoving.
Lack of an absolute reference frame
The principle of relativity, which states that physical laws have the same form in each inertial reference frame, dates back to Galileo, and was incorporated into Newtonian physics. But in the late 19th century the existence of electromagnetic waves led some physicists to suggest that the universe was filled with a substance they called "aether", which, they postulated, would act as the medium through which these waves, or vibrations, propagated (in many respects similar to the way sound propagates through air). The aether was thought to be an absolute reference frame against which all speeds could be measured, and could be considered fixed and motionless relative to Earth or some other fixed reference point. The aether was supposed to be sufficiently elastic to support electromagnetic waves, while those waves could interact with matter, yet offering no resistance to bodies passing through it (its one property was that it allowed electromagnetic waves to propagate). The results of various experiments, including the Michelson–Morley experiment in 1887 (subsequently verified with more accurate and innovative experiments), led to the theory of special relativity, by showing that the aether did not exist. Einstein's solution was to discard the notion of an aether and the absolute state of rest. In relativity, any reference frame moving with uniform motion will observe the same laws of physics. In particular, the speed of light in vacuum is always measured to be c, even when measured by multiple systems that are moving at different (but constant) velocities.
Relativity without the second postulate
From the principle of relativity alone without assuming the constancy of the speed of light (i.e., using the isotropy of space and the symmetry implied by the principle of special relativity) it can be shown that the spacetime transformations between inertial frames are either Euclidean, Galilean, or Lorentzian. In the Lorentzian case, one can then obtain relativistic interval conservation and a certain finite limiting speed. Experiments suggest that this speed is the speed of light in a vacuum.
Lorentz invariance as the essential core of special relativity
Alternative approaches to special relativity
Einstein consistently based the derivation of Lorentz invariance (the essential core of special relativity) on just the two basic principles of relativity and light-speed invariance. He wrote:
Thus many modern treatments of special relativity base it on the single postulate of universal Lorentz covariance, or, equivalently, on the single postulate of Minkowski spacetime.
Rather than considering universal Lorentz covariance to be a derived principle, this article considers it to be the fundamental postulate of special relativity. The traditional two-postulate approach to special relativity is presented in innumerable college textbooks and popular presentations. Textbooks starting with the single postulate of Minkowski spacetime include those by Taylor and Wheeler and by Callahan. This is also the approach followed by the Wikipedia articles Spacetime and Minkowski diagram.
Lorentz transformation and its inverse
Define an event to have spacetime coordinates (t, x, y, z) in system S and (t′, x′, y′, z′) in a reference frame moving at a velocity v on the x-axis with respect to that frame, S′. Then the Lorentz transformation specifies that these coordinates are related in the following way:

t′ = γ(t − vx/c²)
x′ = γ(x − vt)
y′ = y
z′ = z,

where γ = 1/√(1 − v²/c²) is the Lorentz factor and c is the speed of light in vacuum, and the velocity v of S′, relative to S, is parallel to the x-axis. For simplicity, the y and z coordinates are unaffected; only the x and t coordinates are transformed. These Lorentz transformations form a one-parameter group of linear mappings, that parameter being called rapidity.
Solving the four transformation equations above for the unprimed coordinates yields the inverse Lorentz transformation:

t = γ(t′ + vx′/c²)
x = γ(x′ + vt′)
y = y′
z = z′
This shows that the unprimed frame is moving with the velocity −v, as measured in the primed frame.
There is nothing special about the x-axis. The transformation can apply to the y- or z-axis, or indeed in any direction parallel to the motion (which are warped by the γ factor) and perpendicular; see the article Lorentz transformation for details.
A quantity invariant under Lorentz transformations is known as a Lorentz scalar.
Writing the Lorentz transformation and its inverse in terms of coordinate differences, where one event has coordinates (x1, t1) and (x′1, t′1), another event has coordinates (x2, t2) and (x′2, t′2), and the differences are defined as

Δx = x2 − x1, Δt = t2 − t1,
Δx′ = x′2 − x′1, Δt′ = t′2 − t′1,

we get

Δx′ = γ(Δx − vΔt), Δt′ = γ(Δt − vΔx/c²),
Δx = γ(Δx′ + vΔt′), Δt = γ(Δt′ + vΔx′/c²).

If we take differentials instead of taking differences, we get

dx′ = γ(dx − v dt), dt′ = γ(dt − v dx/c²),
dx = γ(dx′ + v dt′), dt = γ(dt′ + v dx′/c²).
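A minimal sketch of the standard-configuration transformation and its inverse follows; applying the inverse after the forward boost recovers the original event, which is a convenient numerical check. The event coordinates and speed are illustrative.

```python
import math

C = 299_792_458.0

def lorentz_boost(t, x, y, z, v):
    """Standard-configuration boost: frame S' moves at velocity v along +x of S."""
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    t_p = g * (t - v * x / C ** 2)
    x_p = g * (x - v * t)
    return t_p, x_p, y, z

def inverse_boost(t_p, x_p, y_p, z_p, v):
    """Inverse transformation: the same boost with velocity -v."""
    return lorentz_boost(t_p, x_p, y_p, z_p, -v)

event = (1.0e-6, 100.0, 5.0, -2.0)   # (t, x, y, z) in S, SI units
v = 0.8 * C
primed = lorentz_boost(*event, v)
back = inverse_boost(*primed, v)
print(primed)
print(back)   # reproduces the original event up to rounding
```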
Graphical representation of the Lorentz transformation
Spacetime diagrams (Minkowski diagrams) are an extremely useful aid to visualizing how coordinates transform between different reference frames. Although it is not as easy to perform exact computations using them as directly invoking the Lorentz transformations, their main power is their ability to provide an intuitive grasp of the results of a relativistic scenario.
To draw a spacetime diagram, begin by considering two Galilean reference frames, S and S', in standard configuration, as shown in Fig. 2-1.
Fig. 3-1a. Draw the and axes of frame S. The axis is horizontal and the (actually ) axis is vertical, which is the opposite of the usual convention in kinematics. The axis is scaled by a factor of so that both axes have common units of length. In the diagram shown, the gridlines are spaced one unit distance apart. The 45° diagonal lines represent the worldlines of two photons passing through the origin at time The slope of these worldlines is 1 because the photons advance one unit in space per unit of time. Two events, and have been plotted on this graph so that their coordinates may be compared in the S and S' frames.
Fig. 3-1b. Draw the and axes of frame S'. The axis represents the worldline of the origin of the S' coordinate system as measured in frame S. In this figure, Both the and axes are tilted from the unprimed axes by an angle where The primed and unprimed axes share a common origin because frames S and S' had been set up in standard configuration, so that when
Fig. 3-1c. Units in the primed axes have a different scale from units in the unprimed axes. From the Lorentz transformations, we observe that coordinates of in the primed coordinate system transform to in the unprimed coordinate system. Likewise, coordinates of in the primed coordinate system transform to in the unprimed system. Draw gridlines parallel with the axis through points as measured in the unprimed frame, where is an integer. Likewise, draw gridlines parallel with the axis through as measured in the unprimed frame. Using the Pythagorean theorem, we observe that the spacing between units equals times the spacing between units, as measured in frame S. This ratio is always greater than 1, and ultimately it approaches infinity as
Fig. 3-1d. Since the speed of light is an invariant, the worldlines of two photons passing through the origin at time still plot as 45° diagonal lines. The primed coordinates of and are related to the unprimed coordinates through the Lorentz transformations and could be approximately measured from the graph (assuming that it has been plotted accurately enough), but the real merit of a Minkowski diagram is its granting us a geometric view of the scenario. For example, in this figure, we observe that the two timelike-separated events that had different x-coordinates in the unprimed frame are now at the same position in space.
While the unprimed frame is drawn with space and time axes that meet at right angles, the primed frame is drawn with axes that meet at acute or obtuse angles. This asymmetry is due to unavoidable distortions in how spacetime coordinates map onto a Cartesian plane, but the frames are actually equivalent.
Consequences derived from the Lorentz transformation
The consequences of special relativity can be derived from the Lorentz transformation equations. These transformations, and hence special relativity, lead to different physical predictions than those of Newtonian mechanics at all relative velocities, and most pronounced when relative velocities become comparable to the speed of light. The speed of light is so much larger than anything most humans encounter that some of the effects predicted by relativity are initially counterintuitive.
Invariant interval
In Galilean relativity, an object's length and the temporal separation between two events are independent invariants, the values of which do not change when observed from different frames of reference.
In special relativity, however, the interweaving of spatial and temporal coordinates generates the concept of an invariant interval, denoted as Δs²:

Δs² = c²Δt² − (Δx² + Δy² + Δz²)
The interweaving of space and time revokes the implicitly assumed concepts of absolute simultaneity and synchronization across non-comoving frames.
The form of Δs², being the difference of the squared time lapse and the squared spatial distance, demonstrates a fundamental discrepancy between Euclidean and spacetime distances. The invariance of this interval is a property of the general Lorentz transform (also called the Poincaré transformation), making it an isometry of spacetime. The general Lorentz transform extends the standard Lorentz transform (which deals with translations without rotation, that is, Lorentz boosts, in the x-direction) with all other translations, reflections, and rotations between any Cartesian inertial frame.
In the analysis of simplified scenarios, such as spacetime diagrams, a reduced-dimensionality form of the invariant interval is often employed:
Demonstrating that the interval is invariant is straightforward for the reduced-dimensionality case and with frames in standard configuration:
c²Δt′² − Δx′² = γ²(cΔt − βΔx)² − γ²(Δx − βcΔt)² = γ²(1 − β²)(c²Δt² − Δx²) = c²Δt² − Δx² = Δs²
The value of Δs² is hence independent of the frame in which it is measured.
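As a numerical illustration (not part of the original argument), the following Python sketch boosts an arbitrary pair of event separations and checks that the reduced-dimensionality interval c²Δt² − Δx² comes out the same in both frames; the separation values and the boost speed are arbitrary choices, and units with c = 1 are assumed.

import math

def boost(ct, x, beta):
    # Lorentz boost along x with speed beta = v/c, in units where c = 1.
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (ct - beta * x), gamma * (x - beta * ct)

def interval2(ct, x):
    # Reduced-dimensionality squared interval (c*dt)^2 - dx^2.
    return ct**2 - x**2

d_ct, d_x = 5.0, 3.0        # a timelike separation (interval > 0), arbitrary numbers
beta = 0.8                  # boost speed as a fraction of c, arbitrary

d_ct_p, d_x_p = boost(d_ct, d_x, beta)
print(interval2(d_ct, d_x))      # 16.0
print(interval2(d_ct_p, d_x_p))  # 16.0, up to floating-point rounding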
In considering the physical significance of Δs², there are three cases to note:
Δs² > 0: In this case, the two events are separated by more time than space, and they are hence said to be timelike separated. This implies that |Δx/Δt| < c, and given the Lorentz transformation it is evident that there exists a v less than c for which Δx′ = 0 (in particular, v = Δx/Δt). In other words, given two events that are timelike separated, it is possible to find a frame in which the two events happen at the same place. In this frame, the separation in time, Δs/c, is called the proper time.
Δs² < 0: In this case, the two events are separated by more space than time, and they are hence said to be spacelike separated. This implies that |Δx/Δt| > c, and given the Lorentz transformation there exists a v less than c for which Δt′ = 0 (in particular, v = c²Δt/Δx). In other words, given two events that are spacelike separated, it is possible to find a frame in which the two events happen at the same time. In this frame, the separation in space, √(−Δs²), is called the proper distance, or proper length. For values of v greater than c²Δt/Δx and less than c, the sign of Δt′ changes, meaning that the temporal order of spacelike-separated events changes depending on the frame in which the events are viewed. But the temporal order of timelike-separated events is absolute, since the only way that v could be greater than c²Δt/Δx would be if v > c.
Δs² = 0: In this case, the two events are said to be lightlike separated. This implies that |Δx/Δt| = c, and this relationship is frame independent due to the invariance of Δs². From this, we observe that the speed of light is c in every inertial frame. In other words, starting from the assumption of universal Lorentz covariance, the constant speed of light is a derived result, rather than a postulate as in the two-postulates formulation of the special theory.
Relativity of simultaneity
Consider two events happening in two different locations that occur simultaneously in the reference frame of one inertial observer. They may occur non-simultaneously in the reference frame of another inertial observer (lack of absolute simultaneity).
From the forward Lorentz transformation in terms of coordinate differences,
cΔt′ = γ(cΔt − βΔx),    Δx′ = γ(Δx − βcΔt),
it is clear that two events that are simultaneous in frame S (satisfying Δt = 0) are not necessarily simultaneous in another inertial frame S′ (satisfying Δt′ = 0). Only if these events are additionally co-local in frame S (satisfying Δx = 0) will they be simultaneous in another frame S′.
The Sagnac effect can be considered a manifestation of the relativity of simultaneity. Since relativity of simultaneity is a first-order effect in v, instruments that rely on the Sagnac effect for their operation, such as ring laser gyroscopes and fiber optic gyroscopes, are capable of extreme levels of sensitivity.
Time dilation
The time lapse between two events is not invariant from one observer to another, but is dependent on the relative speeds of the observers' reference frames.
Suppose a clock is at rest in the unprimed system S. The location of the clock on two different ticks is then characterized by Δx = 0. To find the relation between the times between these ticks as measured in both systems, the Lorentz transformation can be used to find:
Δt′ = γΔt    for events satisfying Δx = 0.
This shows that the time (Δt′) between the two ticks as seen in the frame S′ in which the clock is moving, is longer than the time (Δt) between these ticks as measured in the rest frame of the clock (S). Time dilation explains a number of physical phenomena; for example, the lifetime of high-speed muons created by the collision of cosmic rays with particles in the Earth's outer atmosphere and moving towards the surface is greater than the lifetime of slowly moving muons, created and decaying in a laboratory.
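As a rough numerical illustration of the muon example, the following Python sketch uses a muon mean lifetime of about 2.2 μs and an assumed speed of 0.98 c (both values are illustrative, not taken from the text) to compare the distance a muon could cover with and without time dilation.

import math

c = 299_792_458.0        # speed of light, m/s
tau = 2.2e-6             # muon mean lifetime at rest, s (approximate)
beta = 0.98              # assumed muon speed as a fraction of c

gamma = 1.0 / math.sqrt(1.0 - beta**2)
print(gamma)                       # ~5.0: each muon second spans ~5 s in the Earth frame
print(beta * c * tau)              # ~650 m travelled if there were no time dilation
print(beta * c * gamma * tau)      # ~3.2 km travelled with time dilation taken into account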
Whenever one hears a statement to the effect that "moving clocks run slow", one should envision an inertial reference frame thickly populated with identical, synchronized clocks. As a moving clock travels through this array, its reading at any particular point is compared with a stationary clock at the same point.
The measurements that we would get if we actually looked at a moving clock would, in general, not at all be the same thing, because the time that we would see would be delayed by the finite speed of light, i.e. the times that we see would be distorted by the Doppler effect. Measurements of relativistic effects must always be understood as having been made after finite speed-of-light effects have been factored out.
Langevin's light-clock
Paul Langevin, an early proponent of the theory of relativity, did much to popularize the theory in the face of resistance by many physicists to Einstein's revolutionary concepts. Among his numerous contributions to the foundations of special relativity were independent work on the mass-energy relationship, a thorough examination of the twin paradox, and investigations into rotating coordinate systems. His name is frequently attached to a hypothetical construct called a "light-clock" (originally developed by Lewis and Tolman in 1909) which he used to perform a novel derivation of the Lorentz transformation.
A light-clock is imagined to be a box of perfectly reflecting walls wherein a light signal reflects back and forth from opposite faces. The concept of time dilation is frequently taught using a light-clock that is traveling in uniform inertial motion perpendicular to a line connecting the two mirrors. (Langevin himself made use of a light-clock oriented parallel to its line of motion.)
Consider the scenario illustrated in Observer A holds a light-clock of length as well as an electronic timer with which she measures how long it takes a pulse to make a round trip up and down along the light-clock. Although observer A is traveling rapidly along a train, from her point of view the emission and receipt of the pulse occur at the same place, and she measures the interval using a single clock located at the precise position of these two events. For the interval between these two events, observer A finds A time interval measured using a single clock which is motionless in a particular reference frame is called a proper time interval.
Fig. 4-3B illustrates these same two events from the standpoint of observer B, who is parked by the tracks as the train goes by at a speed of Instead of making straight up-and-down motions, observer B sees the pulses moving along a zig-zag line. However, because of the postulate of the constancy of the speed of light, the speed of the pulses along these diagonal lines is the same that observer A saw for her up-and-down pulses. B measures the speed of the vertical component of these pulses as so that the total round-trip time of the pulses is Note that for observer B, the emission and receipt of the light pulse occurred at different places, and he measured the interval using two stationary and synchronized clocks located at two different positions in his reference frame. The interval that B measured was therefore not a proper time interval because he did not measure it with a single resting clock.
Reciprocal time dilation
In the above description of the Langevin light-clock, the labeling of one observer as stationary and the other as in motion was completely arbitrary. One could just as well have observer B carrying the light-clock and moving at a speed of to the left, in which case observer A would perceive B's clock as running slower than her local clock.
There is no paradox here, because there is no independent observer C who will agree with both A and B. Observer C necessarily makes his measurements from his own reference frame. If that reference frame coincides with A's reference frame, then C will agree with A's measurement of time. If C's reference frame coincides with B's reference frame, then C will agree with B's measurement of time. If C's reference frame coincides with neither A's frame nor B's frame, then C's measurement of time will disagree with both A's and B's measurement of time.
Twin paradox
The reciprocity of time dilation between two observers in separate inertial frames leads to the so-called twin paradox, articulated in its present form by Langevin in 1911. Langevin imagined an adventurer wishing to explore the future of the Earth. This traveler boards a projectile capable of traveling at 99.995% of the speed of light. After making a round-trip journey to and from a nearby star lasting only two years of his own life, he returns to an Earth that is two hundred years older.
This result appears puzzling because both the traveler and an Earthbound observer would see the other as moving, and so, because of the reciprocity of time dilation, one might initially expect that each should have found the other to have aged less. In reality, there is no paradox at all, because in order for the two observers to compare their proper times, the symmetry of the situation must be broken: At least one of the two observers must change their state of motion to match that of the other.
Knowing the general resolution of the paradox, however, does not immediately yield the ability to calculate correct quantitative results. Many solutions to this puzzle have been provided in the literature and have been reviewed in the Twin paradox article. We will examine in the following one such solution to the paradox.
Our basic aim will be to demonstrate that, after the trip, both twins are in perfect agreement about who aged by how much, regardless of their different experiences. Fig. 4-4 illustrates a scenario where the traveling twin flies at 0.6 c to and from a star 3 light-years distant. During the trip, each twin sends yearly time signals (measured in their own proper times) to the other. After the trip, the cumulative counts are compared. On the outward phase of the trip, each twin receives the other's signals at the lowered rate of 1/2. Initially, the situation is perfectly symmetric: note that each twin receives the other's one-year signal at two years measured on their own clock. The symmetry is broken when the traveling twin turns around at the four-year mark as measured by her clock. During the remaining four years of her trip, she receives signals at the enhanced rate of 2. The situation is quite different with the stationary twin. Because of light-speed delay, he does not see his sister turn around until eight years have passed on his own clock. Thus, he receives enhanced-rate signals from his sister for only a relatively brief period. Although the twins disagree in their respective measures of total time, we see in the following table, as well as by simple observation of the Minkowski diagram, that each twin is in total agreement with the other as to the total number of signals sent from one to the other. There is hence no paradox.
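The signal bookkeeping above can be checked with a short Python sketch. It assumes a cruise speed of 0.6 c and a star 3 light-years away, figures consistent with the turnaround times and signal rates quoted in this paragraph, and tallies the yearly signals each twin receives.

import math

beta = 0.6                                  # assumed cruise speed, fraction of c
D = 3.0                                     # assumed one-way distance, light-years
gamma = 1.0 / math.sqrt(1.0 - beta**2)      # 1.25
k_away = math.sqrt((1 - beta) / (1 + beta)) # Doppler rate while receding: 0.5
k_toward = 1.0 / k_away                     # Doppler rate while approaching: 2.0

earth_years = 2 * D / beta                  # 10 years elapse on Earth
traveler_years = earth_years / gamma        # 8 years elapse for the traveler

# Traveler: receives at the lowered rate until her turnaround (half her trip),
# then at the enhanced rate for the second half.
received_by_traveler = k_away * (traveler_years / 2) + k_toward * (traveler_years / 2)

# Earth twin: does not see the turnaround until D/beta + D = 8 years on his clock.
turnaround_seen = D / beta + D
received_by_earth = k_away * turnaround_seen + k_toward * (earth_years - turnaround_seen)

print(received_by_traveler)   # 10.0 signals, matching the 10 years the Earth twin aged
print(received_by_earth)      # 8.0 signals, matching the 8 years the traveler aged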
Length contraction
The dimensions (e.g., length) of an object as measured by one observer may be smaller than the results of measurements of the same object made by another observer (e.g., the ladder paradox involves a long ladder traveling near the speed of light and being contained within a smaller garage).
Similarly, suppose a measuring rod is at rest and aligned along the x-axis in the unprimed system S. In this system, the length of this rod is written as Δx. To measure the length of this rod in the system S′, in which the rod is moving, the distances x′ to the end points of the rod must be measured simultaneously in that system S′. In other words, the measurement is characterized by Δt′ = 0, which can be combined with the Lorentz transformation to find the relation between the lengths Δx and Δx′:
Δx′ = Δx/γ    for events satisfying Δt′ = 0.
This shows that the length (Δx′) of the rod as measured in the frame in which it is moving (S′), is shorter than its length (Δx) in its own rest frame (S).
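The following Python sketch (illustrative only, with c = 1 and an arbitrary frame speed of 0.8 c) carries out the measurement described above: it picks two events on the rod's endpoint worldlines that are simultaneous in S′ and confirms that their spatial separation in S′ equals Δx/γ.

import math

beta = 0.8                             # relative frame speed, fraction of c (arbitrary)
gamma = 1.0 / math.sqrt(1.0 - beta**2)
L = 1.0                                # rest length of the rod in S, units with c = 1

# Endpoint worldlines in S: left end at x = 0, right end at x = L.
# The events (t = 0, x = 0) and (t = beta*L, x = L) are simultaneous in S',
# since t' = gamma*(t - beta*x) vanishes for both of them.
t_left, x_left = 0.0, 0.0
t_right, x_right = beta * L, L
x_left_p = gamma * (x_left - beta * t_left)
x_right_p = gamma * (x_right - beta * t_right)

print(x_right_p - x_left_p)   # 0.6
print(L / gamma)              # 0.6: the contracted length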
Time dilation and length contraction are not merely appearances. Time dilation is explicitly related to our way of measuring time intervals between events that occur at the same place in a given coordinate system (called "co-local" events). These time intervals (which can be, and are, actually measured experimentally by relevant observers) are different in another coordinate system moving with respect to the first, unless the events, in addition to being co-local, are also simultaneous. Similarly, length contraction relates to our measured distances between separated but simultaneous events in a given coordinate system of choice. If these events are not co-local, but are separated by distance (space), they will not occur at the same spatial distance from each other when seen from another moving coordinate system.
Lorentz transformation of velocities
Consider two frames S and S′ in standard configuration. A particle in S moves in the x direction with velocity vector u. What is its velocity u′ in frame S′?
We can write
u = dx/dt,    u′ = dx′/dt′.
Substituting expressions for dx′ and dt′ from the Lorentz transformation into u′ = dx′/dt′, followed by straightforward mathematical manipulations, yields the Lorentz transformation of the speed u to u′:
u′ = (u − v)/(1 − uv/c²)
The inverse relation is obtained by interchanging the primed and unprimed symbols and replacing v with −v:
u = (u′ + v)/(1 + u′v/c²)
For u not aligned along the x-axis, we write u = (u₁, u₂, u₃) and u′ = (u′₁, u′₂, u′₃). The forward and inverse transformations for this case are:
u′₁ = (u₁ − v)/(1 − u₁v/c²),    u′₂ = u₂ / [γ(1 − u₁v/c²)],    u′₃ = u₃ / [γ(1 − u₁v/c²)]
u₁ = (u′₁ + v)/(1 + u′₁v/c²),    u₂ = u′₂ / [γ(1 + u′₁v/c²)],    u₃ = u′₃ / [γ(1 + u′₁v/c²)]
The forward and inverse transformations can be interpreted as giving the resultant u of the two velocities v and u′, and they replace the formula u = u′ + v, which is valid in Galilean relativity. Interpreted in such a fashion, they are commonly referred to as the relativistic velocity addition (or composition) formulas, valid for the three axes of S and S′ being aligned with each other (although not necessarily in standard configuration).
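A minimal Python sketch of the collinear composition formula (with c = 1 and arbitrary sample speeds) illustrates several of the points noted below: light speed is preserved, sub-light speeds compose to sub-light speeds, and the Galilean sum is recovered for small speeds.

def compose(u_prime, v, c=1.0):
    # Relativistic composition of collinear velocities: u' measured in S',
    # with S' moving at v relative to S; returns u measured in S.
    return (u_prime + v) / (1.0 + u_prime * v / c**2)

print(compose(0.8, 0.8))        # 0.9756..., still below c
print(compose(1.0, 0.8))        # 1.0: a photon moves at c in both frames
print(compose(0.001, 0.002))    # ~0.003: essentially the Galilean result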
We note the following points:
If an object (e.g., a photon) were moving at the speed of light in one frame (i.e., u = ±c or u′ = ±c), then it would also be moving at the speed of light in any other frame, moving at |v| < c.
The resultant speed of two velocities with magnitude less than c is always a velocity with magnitude less than c.
If both |u| and |v| (and then also |u′| and |v|) are small with respect to the speed of light (that is, |u/c| ≪ 1 and |v/c| ≪ 1), then the intuitive Galilean transformations are recovered from the transformation equations for special relativity.
Attaching a frame to a photon (riding a light beam, as Einstein once imagined) requires special treatment of the transformations.
There is nothing special about the x direction in the standard configuration. The above formalism applies to any direction; and three orthogonal directions allow dealing with all directions in space by decomposing the velocity vectors to their components in these directions. See Velocity-addition formula for details.
Thomas rotation
The composition of two non-collinear Lorentz boosts (i.e., two non-collinear Lorentz transformations, neither of which involve rotation) results in a Lorentz transformation that is not a pure boost but is the composition of a boost and a rotation.
Thomas rotation results from the relativity of simultaneity. In Fig. 4-5a, a rod of length L in its rest frame (i.e., having a proper length of L) rises vertically along the y-axis in the ground frame.
In Fig. 4-5b, the same rod is observed from the frame of a rocket moving at speed v to the right. If we imagine two clocks situated at the left and right ends of the rod that are synchronized in the frame of the rod, relativity of simultaneity causes the observer in the rocket frame to observe (not see) the clock at the right end of the rod as being advanced in time by Lv/c², and the rod is correspondingly observed as tilted.
Unlike second-order relativistic effects such as length contraction or time dilation, this effect becomes quite significant even at fairly low velocities. For example, this can be seen in the spin of moving particles, where Thomas precession is a relativistic correction that applies to the spin of an elementary particle or the rotation of a macroscopic gyroscope, relating the angular velocity of the spin of a particle following a curvilinear orbit to the angular velocity of the orbital motion.
Thomas rotation provides the resolution to the well-known "meter stick and hole paradox".
Causality and prohibition of motion faster than light
In Fig. 4-6, the time interval between the events A (the "cause") and B (the "effect") is 'time-like'; that is, there is a frame of reference in which events A and B occur at the same location in space, separated only by occurring at different times. If A precedes B in that frame, then A precedes B in all frames accessible by a Lorentz transformation. It is possible for matter (or information) to travel (below light speed) from the location of A, starting at the time of A, to the location of B, arriving at the time of B, so there can be a causal relationship (with A the cause and B the effect).
The interval AC in the diagram is 'space-like'; that is, there is a frame of reference in which events A and C occur simultaneously, separated only in space. There are also frames in which A precedes C (as shown) and frames in which C precedes A. But no frames are accessible by a Lorentz transformation, in which events A and C occur at the same location. If it were possible for a cause-and-effect relationship to exist between events A and C, paradoxes of causality would result.
For example, if signals could be sent faster than light, then signals could be sent into the sender's past (observer B in the diagrams). A variety of causal paradoxes could then be constructed.
Consider the spacetime diagrams in Fig. 4-7. A and B stand alongside a railroad track, when a high-speed train passes by, with C riding in the last car of the train and D riding in the leading car. The world lines of A and B are vertical (ct), distinguishing the stationary position of these observers on the ground, while the world lines of C and D are tilted forwards, reflecting the rapid motion of the observers C and D stationary in their train, as observed from the ground.
Fig. 4-7a. The event of "B passing a message to D", as the leading car passes by, is at the origin of D's frame. D sends the message along the train to C in the rear car, using a fictitious "instantaneous communicator". The worldline of this message is the fat red arrow along the −x′ axis, which is a line of simultaneity in the primed frames of C and D. In the (unprimed) ground frame the signal arrives earlier than it was sent.
Fig. 4-7b. The event of "C passing the message to A", who is standing by the railroad tracks, is at the origin of their frames. Now A sends the message along the tracks to B via an "instantaneous communicator". The worldline of this message is the blue fat arrow, along the +x axis, which is a line of simultaneity for the frames of A and B. As seen from the spacetime diagram, B will receive the message before having sent it out, a violation of causality.
It is not necessary for signals to be instantaneous to violate causality. Even if the signal from D to C were slightly shallower than the x′ axis (and the signal from A to B slightly steeper than the x axis), it would still be possible for B to receive his message before he had sent it. By increasing the speed of the train to near light speeds, the ct′ and x′ axes can be squeezed very close to the dashed line representing the speed of light. With this modified setup, it can be demonstrated that even signals only slightly faster than the speed of light will result in causality violation.
Therefore, if causality is to be preserved, one of the consequences of special relativity is that no information signal or material object can travel faster than light in vacuum.
This is not to say that all faster than light speeds are impossible. Various trivial situations can be described where some "things" (not actual matter or energy) move faster than light. For example, the location where the beam of a search light hits the bottom of a cloud can move faster than light when the search light is turned rapidly (although this does not violate causality or any other relativistic phenomenon).
Optical effects
Dragging effects
In 1850, Hippolyte Fizeau and Léon Foucault independently established that light travels more slowly in water than in air, thus validating a prediction of Fresnel's wave theory of light and invalidating the corresponding prediction of Newton's corpuscular theory. The speed of light was measured in still water. What would be the speed of light in flowing water?
In 1851, Fizeau conducted an experiment to answer this question, a simplified representation of which is illustrated in Fig. 5-1. A beam of light is divided by a beam splitter, and the split beams are passed in opposite directions through a tube of flowing water. They are recombined to form interference fringes, indicating a difference in optical path length, that an observer can view. The experiment demonstrated that dragging of the light by the flowing water caused a displacement of the fringes, showing that the motion of the water had affected the speed of the light.
According to the theories prevailing at the time, light traveling through a moving medium would be a simple sum of its speed through the medium plus the speed of the medium. Contrary to expectation, Fizeau found that although light appeared to be dragged by the water, the magnitude of the dragging was much lower than expected. If u = c/n is the speed of light in still water, v is the speed of the water, and u± is the water-borne speed of light in the lab frame with the flow of water adding to or subtracting from the speed of light, then
u± = c/n ± v(1 − 1/n²).
Fizeau's results, although consistent with Fresnel's earlier hypothesis of partial aether dragging, were extremely disconcerting to physicists of the time. Among other things, the presence of an index-of-refraction term meant that, since n depends on wavelength, the aether must be capable of sustaining different motions at the same time. A variety of theoretical explanations were proposed to explain Fresnel's dragging coefficient that were completely at odds with each other. Even before the Michelson–Morley experiment, Fizeau's experimental results were among a number of observations that created a critical situation in explaining the optics of moving bodies.
From the point of view of special relativity, Fizeau's result is nothing but an approximation to
u± = (c/n ± v)/(1 ± v/(cn)),
the relativistic formula for composition of velocities.
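The first-order agreement can be checked numerically. The following Python sketch compares the exact relativistic composition of c/n with the flow speed against Fresnel's drag expression; the refractive index and flow speed are illustrative values.

import math

c = 299_792_458.0      # m/s
n = 1.333              # refractive index of water (approximate)
v = 5.0                # flow speed of the water, m/s (illustrative)

u_still = c / n                                      # light speed in still water
u_exact = (u_still + v) / (1.0 + u_still * v / c**2) # relativistic composition
u_fresnel = u_still + v * (1.0 - 1.0 / n**2)         # Fresnel drag coefficient

print(u_exact - u_still)     # ~2.19 m/s shift
print(u_fresnel - u_still)   # ~2.19 m/s, agreeing to first order in v/c
print(v)                     # 5 m/s would be the naive Galilean prediction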
Relativistic aberration of light
Because of the finite speed of light, if the relative motions of a source and receiver include a transverse component, then the direction from which light arrives at the receiver will be displaced from the geometric position in space of the source relative to the receiver. The classical calculation of the displacement takes two forms and makes different predictions depending on whether the receiver, the source, or both are in motion with respect to the medium. (1) If the receiver is in motion, the displacement would be the consequence of the aberration of light. The incident angle of the beam relative to the receiver would be calculable from the vector sum of the receiver's motions and the velocity of the incident light. (2) If the source is in motion, the displacement would be the consequence of light-time correction. The displacement of the apparent position of the source from its geometric position would be the result of the source's motion during the time that its light takes to reach the receiver.
The classical explanation failed experimental test. Since the aberration angle depends on the relationship between the velocity of the receiver and the speed of the incident light, passage of the incident light through a refractive medium should change the aberration angle. In 1810, Arago used this expected phenomenon in a failed attempt to measure the speed of light, and in 1870, George Airy tested the hypothesis using a water-filled telescope, finding that, against expectation, the measured aberration was identical to the aberration measured with an air-filled telescope. A "cumbrous" attempt to explain these results used the hypothesis of partial aether-drag, but was incompatible with the results of the Michelson–Morley experiment, which apparently demanded complete aether-drag.
Assuming inertial frames, the relativistic expression for the aberration of light is applicable to both the receiver moving and source moving cases. A variety of trigonometrically equivalent formulas have been published. Expressed in terms of the variables in Fig. 5-2, these include
cos θ′ = (cos θ + β)/(1 + β cos θ)    or    sin θ′ = sin θ / [γ(1 + β cos θ)]    or    tan(θ′/2) = tan(θ/2) √((1 − β)/(1 + β)).
Relativistic Doppler effect
Relativistic longitudinal Doppler effect
The classical Doppler effect depends on whether the source, receiver, or both are in motion with respect to the medium. The relativistic Doppler effect is independent of any medium. Nevertheless, relativistic Doppler shift for the longitudinal case, with source and receiver moving directly towards or away from each other, can be derived as if it were the classical phenomenon, but modified by the addition of a time dilation term, and that is the treatment described here.
Assume the receiver and the source are moving away from each other with a relative speed v as measured by an observer on the receiver or the source (the sign convention adopted here is that v is negative if the receiver and the source are moving towards each other). Assume that the source is stationary in the medium. Then
f_r = (1 − v/c_s) f_s
where c_s is the speed of sound.
For light, and with the receiver moving at relativistic speeds, clocks on the receiver are time dilated relative to clocks at the source. The receiver will measure the received frequency to be
f_r = γ(1 − β) f_s = √((1 − β)/(1 + β)) f_s
where
β = v/c and
γ = 1/√(1 − β²)
is the Lorentz factor.
An identical expression for relativistic Doppler shift is obtained when performing the analysis in the reference frame of the receiver with a moving source.
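A short Python sketch (sample speed chosen arbitrarily) evaluates the longitudinal Doppler factor both as γ(1 − β) and in its square-root form, and shows the blueshift obtained when the sign of β is reversed.

import math

def doppler(beta):
    # Received/emitted frequency ratio for recession at beta = v/c;
    # use beta < 0 for approach.
    return math.sqrt((1.0 - beta) / (1.0 + beta))

beta = 0.6
gamma = 1.0 / math.sqrt(1.0 - beta**2)
print(doppler(beta))          # 0.5: redshift for a receding source
print(gamma * (1.0 - beta))   # 0.5: same value from the time-dilation form
print(doppler(-beta))         # 2.0: blueshift for an approaching source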
Transverse Doppler effect
The transverse Doppler effect is one of the main novel predictions of the special theory of relativity.
Classically, one might expect that if source and receiver are moving transversely with respect to each other with no longitudinal component to their relative motions, that there should be no Doppler shift in the light arriving at the receiver.
Special relativity predicts otherwise. Fig. 5-3 illustrates two common variants of this scenario. Both variants can be analyzed using simple time dilation arguments. In Fig. 5-3a, the receiver observes light from the source as being blueshifted by a factor of γ. In Fig. 5-3b, the light is redshifted by the same factor.
Measurement versus visual appearance
Time dilation and length contraction are not optical illusions, but genuine effects. Measurements of these effects are not an artifact of Doppler shift, nor are they the result of neglecting to take into account the time it takes light to travel from an event to an observer.
Scientists make a fundamental distinction between measurement or observation on the one hand, versus visual appearance, or what one sees. The measured shape of an object is a hypothetical snapshot of all of the object's points as they exist at a single moment in time. But the visual appearance of an object is affected by the varying lengths of time that light takes to travel from different points on the object to one's eye.
For many years, the distinction between the two had not been generally appreciated, and it had generally been thought that a length contracted object passing by an observer would in fact actually be seen as length contracted. In 1959, James Terrell and Roger Penrose independently pointed out that differential time lag effects in signals reaching the observer from the different parts of a moving object result in a fast moving object's visual appearance being quite different from its measured shape. For example, a receding object would appear contracted, an approaching object would appear elongated, and a passing object would have a skew appearance that has been likened to a rotation. A sphere in motion retains the circular outline for all speeds, for any distance, and for all view angles, although the surface of the sphere and the images on it will appear distorted.
Both Fig. 5-4 and Fig. 5-5 illustrate objects moving transversely to the line of sight. In Fig. 5-4, a cube is viewed from a distance of four times the length of its sides. At high speeds, the sides of the cube that are perpendicular to the direction of motion appear hyperbolic in shape. The cube is actually not rotated. Rather, light from the rear of the cube takes longer to reach one's eyes compared with light from the front, during which time the cube has moved to the right. At high speeds, the sphere in Fig. 5-5 takes on the appearance of a flattened disk tilted up to 45° from the line of sight. If the objects' motions are not strictly transverse but instead include a longitudinal component, exaggerated distortions in perspective may be seen. This illusion has come to be known as Terrell rotation or the Terrell–Penrose effect.
Another example where visual appearance is at odds with measurement comes from the observation of apparent superluminal motion in various radio galaxies, BL Lac objects, quasars, and other astronomical objects that eject relativistic-speed jets of matter at narrow angles with respect to the viewer. An apparent optical illusion results giving the appearance of faster than light travel. In Fig. 5-6, galaxy M87 streams out a high-speed jet of subatomic particles almost directly towards us, but Penrose–Terrell rotation causes the jet to appear to be moving laterally in the same manner that the appearance of the cube in Fig. 5-4 has been stretched out.
Dynamics
Section Consequences derived from the Lorentz transformation dealt strictly with kinematics, the study of the motion of points, bodies, and systems of bodies without considering the forces that caused the motion. This section discusses masses, forces, energy and so forth, and as such requires consideration of physical effects beyond those encompassed by the Lorentz transformation itself.
Equivalence of mass and energy
As an object's speed approaches the speed of light from an observer's point of view, its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference.
The energy content of an object at rest with mass m equals mc². Conservation of energy implies that, in any reaction, a decrease of the sum of the masses of particles must be accompanied by an increase in kinetic energies of the particles after the reaction. Similarly, the mass of an object can be increased by taking in kinetic energies.
In addition to the papers referenced above—which give derivations of the Lorentz transformation and describe the foundations of special relativity—Einstein also wrote at least four papers giving heuristic arguments for the equivalence (and transmutability) of mass and energy, for E = mc².
Mass–energy equivalence is a consequence of special relativity. The energy and momentum, which are separate in Newtonian mechanics, form a four-vector in relativity, and this relates the time component (the energy) to the space components (the momentum) in a non-trivial way. For an object at rest, the energy–momentum four-vector is (E, 0, 0, 0): it has a time component, which is the energy, and three space components, which are zero. By changing frames with a Lorentz transformation in the x direction with a small value of the velocity v, the energy–momentum four-vector becomes (E, Ev/c², 0, 0). The momentum is equal to the energy multiplied by the velocity divided by c². As such, the Newtonian mass of an object, which is the ratio of the momentum to the velocity for slow velocities, is equal to E/c².
The energy and momentum are properties of matter and radiation, and it is impossible to deduce that they form a four-vector just from the two basic postulates of special relativity by themselves, because these do not talk about matter or radiation, they only talk about space and time. The derivation therefore requires some additional physical reasoning. In his 1905 paper, Einstein used the additional principles that Newtonian mechanics should hold for slow velocities, so that there is one energy scalar and one three-vector momentum at slow velocities, and that the conservation law for energy and momentum is exactly true in relativity. Furthermore, he assumed that the energy of light is transformed by the same Doppler-shift factor as its frequency, which he had previously shown to be true based on Maxwell's equations. The first of Einstein's papers on this subject was "Does the Inertia of a Body Depend upon its Energy Content?" in 1905. Although Einstein's argument in this paper is nearly universally accepted by physicists as correct, even self-evident, many authors over the years have suggested that it is wrong. Other authors suggest that the argument was merely inconclusive because it relied on some implicit assumptions.
Einstein acknowledged the controversy over his derivation in his 1907 survey paper on special relativity. There he notes that it is problematic to rely on Maxwell's equations for the heuristic mass–energy argument. The argument in his 1905 paper can be carried out with the emission of any massless particles, but the Maxwell equations are implicitly used to make it obvious that the emission of light in particular can be achieved only by doing work. To emit electromagnetic waves, all you have to do is shake a charged particle, and this is clearly doing work, so that the emission is of energy.
Einstein's 1905 demonstration of E = mc2
In his fourth of his 1905 Annus mirabilis papers, Einstein presented a heuristic argument for the equivalence of mass and energy. Although, as discussed above, subsequent scholarship has established that his arguments fell short of a broadly definitive proof, the conclusions that he reached in this paper have stood the test of time.
Einstein took as starting assumptions his recently discovered formula for relativistic Doppler shift, the laws of conservation of energy and conservation of momentum, and the relationship between the frequency of light and its energy as implied by Maxwell's equations.
Fig. 6-1 (top). Consider a system of plane waves of light having frequency f traveling in a direction making an angle φ with the x-axis of reference frame S. The frequency (and hence energy) of the waves as measured in frame S′ that is moving along the x-axis at velocity v is given by the relativistic Doppler shift formula, which Einstein had developed in his 1905 paper on special relativity:
f′ = f γ(1 − β cos φ)
Fig. 6-1 (bottom). Consider an arbitrary body that is stationary in reference frame S. Let this body emit a pair of equal-energy light-pulses in opposite directions at angle φ with respect to the x-axis. Each pulse has energy L/2. Because of conservation of momentum, the body remains stationary in S after emission of the two pulses. Let E₀ be the energy of the body before emission of the two pulses and E₁ after their emission.
Next, consider the same system observed from frame S′ that is moving along the x-axis at speed v relative to frame S. In this frame, light from the forwards and reverse pulses will be relativistically Doppler-shifted. Let H₀ be the energy of the body measured in reference frame S′ before emission of the two pulses and H₁ after their emission. We obtain the following relationships:
E₀ = E₁ + L
H₀ = H₁ + ½L γ(1 − β cos φ) + ½L γ(1 + β cos φ) = H₁ + γL
From the above equations, we obtain the following:
(H₀ − E₀) − (H₁ − E₁) = L(γ − 1)
The two differences of form H − E seen in the above equation have a straightforward physical interpretation. Since H and E are the energies of the arbitrary body in the moving and stationary frames, H₀ − E₀ and H₁ − E₁ represent the kinetic energies of the body before and after the emission of light (except for an additive constant that fixes the zero point of energy and is conventionally set to zero). Hence,
K₀ − K₁ = L(γ − 1)
Taking a Taylor series expansion and neglecting higher order terms, he obtained
K₀ − K₁ ≈ (L/c²) v²/2
Comparing the above expression with the classical expression for kinetic energy, K.E. = ½mv², Einstein then noted: "If a body gives off the energy L in the form of radiation, its mass diminishes by L/c²."
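As a numerical aside (not part of Einstein's argument), the following Python sketch shows that the relativistic kinetic energy (γ − 1)mc² approaches the classical ½mv² as v/c becomes small, which is the limit invoked in the comparison above.

import math

c = 299_792_458.0   # m/s
m = 1.0             # kg, illustrative
for v in (3.0e5, 3.0e6, 3.0e7):               # sample speeds well below c
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    rel_ke = (gamma - 1.0) * m * c**2
    classical_ke = 0.5 * m * v**2
    print(v, rel_ke / classical_ke)           # ratio tends to 1 as v/c -> 0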
Rindler has observed that Einstein's heuristic argument suggested merely that energy contributes to mass. In 1905, Einstein's cautious expression of the mass–energy relationship allowed for the possibility that "dormant" mass might exist that would remain behind after all the energy of a body was removed. By 1907, however, Einstein was ready to assert that all inertial mass represented a reserve of energy. "To equate all mass with energy required an act of aesthetic faith, very characteristic of Einstein." Einstein's bold hypothesis has been amply confirmed in the years subsequent to his original proposal.
For a variety of reasons, Einstein's original derivation is currently seldom taught. Besides the vigorous debate that continues until this day as to the formal correctness of his original derivation, the recognition of special relativity as being what Einstein called a "principle theory" has led to a shift away from reliance on electromagnetic phenomena to purely dynamic methods of proof.
How far can you travel from the Earth?
Since nothing can travel faster than light, one might conclude that a human can never travel farther from Earth than about 100 light-years, and that a traveler could never reach more than the few star systems that lie within 100 light-years of Earth. However, because of time dilation, a hypothetical spaceship can travel thousands of light years during a passenger's lifetime. If a spaceship could be built that accelerates at a constant 1 g, it would, after one year, be travelling at almost the speed of light as seen from Earth. This is described by:
v(t) = at / √(1 + a²t²/c²)
where v(t) is the velocity at a time t, a is the acceleration of the spaceship and t is the coordinate time as measured by people on Earth. Therefore, after one year of accelerating at 9.81 m/s², the spaceship will be travelling at v = 0.712c relative to Earth, and at 0.946c after three years. After three years of this acceleration, with the spaceship achieving a velocity of 94.6% of the speed of light relative to Earth, time dilation will result in each second experienced on the spaceship corresponding to 3.1 seconds back on Earth. During their journey, people on Earth will experience more time than the travelers do, since Earth clocks (and all physical phenomena on Earth) would be ticking 3.1 times faster than those of the spaceship. A 5-year round trip for the traveller will take 6.5 Earth years and cover a distance of over 6 light-years. A 20-year round trip for them (5 years accelerating, 5 decelerating, twice each) will land them back on Earth having travelled for 335 Earth years and a distance of 331 light years. A full 40-year trip at 1 g will appear on Earth to last 58,000 years and cover a distance of 55,000 light years. A 40-year trip at 1.1 g will take 148,000 Earth years and cover about 140,000 light years. A one-way 28-year trip (14 years accelerating, 14 decelerating as measured with the astronaut's clock) at 1 g acceleration could reach 2,000,000 light-years to the Andromeda Galaxy. This same time dilation is why a muon travelling close to c is observed to travel much farther than c times its half-life (when at rest).
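The figures quoted above can be reproduced approximately with the standard constant-proper-acceleration (relativistic rocket) formulas; the Python sketch below is illustrative, and the exact outputs depend on the year length and the value of g assumed.

import math

c = 299_792_458.0            # m/s
g = 9.81                     # proper acceleration, m/s^2
year = 365.25 * 24 * 3600.0  # seconds per year
ly = c * year                # metres per light-year

def after(a, t):
    # Velocity, distance and proper time after coordinate time t of
    # constant proper acceleration a (standard relativistic-rocket formulas).
    at_c = a * t / c
    v = a * t / math.sqrt(1.0 + at_c**2)
    x = (c**2 / a) * (math.sqrt(1.0 + at_c**2) - 1.0)
    tau = (c / a) * math.asinh(at_c)
    return v, x, tau

for years in (1, 3):
    v, x, tau = after(g, years * year)
    print(years, round(v / c, 3), round(x / ly, 2), round(tau / year, 2))
# After ~1 year: v ~ 0.72 c; after ~3 years: v ~ 0.95 c, close to the 0.712 c
# and 0.946 c quoted above (small differences come from the constants chosen).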
Elastic collisions
Examination of the collision products generated by particle accelerators around the world provides scientists evidence of the structure of the subatomic world and the natural laws governing it. Analysis of the collision products, the sum of whose masses may vastly exceed the masses of the incident particles, requires special relativity.
In Newtonian mechanics, analysis of collisions involves use of the conservation laws for mass, momentum and energy. In relativistic mechanics, mass is not independently conserved, because it has been subsumed into the total relativistic energy. We illustrate the differences that arise between the Newtonian and relativistic treatments of particle collisions by examining the simple case of two perfectly elastic colliding particles of equal mass. (Inelastic collisions are discussed in Spacetime#Conservation laws. Radioactive decay may be considered a sort of time-reversed inelastic collision.)
Elastic scattering of charged elementary particles deviates from ideality due to the production of Bremsstrahlung radiation.
Newtonian analysis
Fig. 6-2 provides a demonstration of the result, familiar to billiard players, that if a stationary ball is struck elastically by another one of the same mass (assuming no sidespin, or "English"), then after collision, the diverging paths of the two balls will subtend a right angle. (a) In the stationary frame, an incident sphere traveling at 2v strikes a stationary sphere. (b) In the center of momentum frame, the two spheres approach each other symmetrically at ±v. After elastic collision, the two spheres rebound from each other with equal and opposite velocities ±u. Energy conservation requires that |u| = |v|. (c) Reverting to the stationary frame, the rebound velocities are v ± u. The dot product (v + u) · (v − u) = v² − u² = 0, indicating that the vectors are orthogonal.
Relativistic analysis
Consider the elastic collision scenario in Fig. 6-3 between a moving particle colliding with an equal mass stationary particle. Unlike the Newtonian case, the angle between the two particles after collision is less than 90°, is dependent on the angle of scattering, and becomes smaller and smaller as the velocity of the incident particle approaches the speed of light:
The relativistic momentum and total relativistic energy of a particle are given by
p = γmv    and    E = γmc².
Conservation of momentum dictates that the sum of the momenta of the incoming particle and the stationary particle (which initially has momentum = 0) equals the sum of the momenta of the emergent particles:
Likewise, the sum of the total relativistic energies of the incoming particle and the stationary particle (which initially has total energy mc²) equals the sum of the total energies of the emergent particles:
Breaking down into its components, replacing with the dimensionless , and factoring out common terms from and yields the following:
From these we obtain the following relationships:
For the symmetrical case in which and takes on the simpler form:
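A numerical cross-check of this section can be made by working in the center-of-momentum frame and boosting back to the lab. The Python sketch below (c = 1 units, arbitrary incident speed and scattering angle) shows that the opening angle between the equal-mass particles is less than 90° and verifies the standard equal-mass relation tan θ₁ tan θ₂ = 2/(γ + 1), which is quoted here as an assumption rather than taken from the text.

import math

def gamma_of(beta):
    return 1.0 / math.sqrt(1.0 - beta**2)

def to_lab(ux, uy, w):
    # Transform a velocity (ux, uy) from the CM frame to the lab frame,
    # the CM frame moving at +w along x (c = 1 units).
    d = 1.0 + ux * w
    return (ux + w) / d, uy / (gamma_of(w) * d)

beta_in = 0.9                          # incident particle speed in the lab (arbitrary)
g_in = gamma_of(beta_in)
w = g_in * beta_in / (g_in + 1.0)      # speed of the centre-of-momentum frame

phi = math.radians(70.0)               # CM scattering angle, arbitrary
# In the CM frame the equal-mass particles emerge back to back with speed w.
u1 = to_lab(w * math.cos(phi), w * math.sin(phi), w)
u2 = to_lab(-w * math.cos(phi), -w * math.sin(phi), w)

theta1 = math.atan2(u1[1], u1[0])
theta2 = math.atan2(-u2[1], u2[0])
print(math.degrees(theta1 + theta2))        # opening angle, noticeably less than 90 degrees
print(math.tan(theta1) * math.tan(theta2))  # ~0.607
print(2.0 / (g_in + 1.0))                   # ~0.607, the equal-mass relation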
Beyond the basics
Rapidity
Lorentz transformations relate coordinates of events in one reference frame to those of another frame. Relativistic composition of velocities is used to add two velocities together. The formulas to perform the latter computations are nonlinear, making them more complex than the corresponding Galilean formulas.
This nonlinearity is an artifact of our choice of parameters. We have previously noted that in an x–ct spacetime diagram, the points at some constant spacetime interval from the origin form an invariant hyperbola. We have also noted that the coordinate systems of two spacetime reference frames in standard configuration are hyperbolically rotated with respect to each other.
The natural functions for expressing these relationships are the hyperbolic analogs of the trigonometric functions. Fig. 7-1a shows a unit circle with sin(a) and cos(a), the only difference between this diagram and the familiar unit circle of elementary trigonometry being that a is interpreted, not as the angle between the ray and the x-axis, but as twice the area of the sector swept out by the ray from the x-axis. Numerically, the angle and twice-the-area measures for the unit circle are identical. Fig. 7-1b shows a unit hyperbola with sinh(a) and cosh(a), where a is likewise interpreted as twice the tinted area. Fig. 7-2 presents plots of the sinh, cosh, and tanh functions.
For the unit circle, the slope of the ray is given by
slope = tan a = sin a / cos a.
In the Cartesian plane, rotation of point (x, y) into point (x′, y′) by angle θ is given by
x′ = x cos θ − y sin θ,    y′ = x sin θ + y cos θ.
In a spacetime diagram, the velocity parameter β is the analog of slope. The rapidity, φ, is defined by
β ≡ tanh φ,
where
β = v/c.
The rapidity defined above is very useful in special relativity because many expressions take on a considerably simpler form when expressed in terms of it. For example, rapidity is simply additive in the collinear velocity-addition formula;
β = (β₁ + β₂)/(1 + β₁β₂) = (tanh φ₁ + tanh φ₂)/(1 + tanh φ₁ tanh φ₂) = tanh(φ₁ + φ₂),
or in other words,
φ = φ₁ + φ₂.
The Lorentz transformations take a simple form when expressed in terms of rapidity. The γ factor can be written as
γ = cosh φ,    γβ = sinh φ.
Transformations describing relative motion with uniform velocity and without rotation of the space coordinate axes are called boosts.
Substituting γ and γβ into the transformations as previously presented and rewriting in matrix form, the Lorentz boost in the x-direction may be written as
[ct′]   [ cosh φ  −sinh φ ] [ct]
[x′ ] = [ −sinh φ  cosh φ ] [x ]
and the inverse Lorentz boost in the x-direction may be written as
[ct]   [ cosh φ  sinh φ ] [ct′]
[x ] = [ sinh φ  cosh φ ] [x′ ]
In other words, Lorentz boosts represent hyperbolic rotations in Minkowski spacetime.
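A brief Python sketch (arbitrary sample speeds, c = 1) illustrates both statements: rapidities add where velocities do not, and cosh φ and sinh φ reproduce γ and γβ.

import math

def rapidity(beta):
    return math.atanh(beta)

beta1, beta2 = 0.6, 0.7
phi_sum = rapidity(beta1) + rapidity(beta2)
print(math.tanh(phi_sum))                          # 0.91549...
print((beta1 + beta2) / (1.0 + beta1 * beta2))     # same value from the velocity formula

beta = 0.6
phi = rapidity(beta)
gamma = 1.0 / math.sqrt(1.0 - beta**2)
print(math.cosh(phi), gamma)                       # 1.25, 1.25
print(math.sinh(phi), gamma * beta)                # 0.75, 0.75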
The advantages of using hyperbolic functions are such that some textbooks such as the classic ones by Taylor and Wheeler introduce their use at a very early stage.
4‑vectors
Four‑vectors have been mentioned above in context of the energy–momentum 4-vector, but without any great emphasis. Indeed, none of the elementary derivations of special relativity require them. But once understood, 4-vectors, and more generally tensors, greatly simplify the mathematics and conceptual understanding of special relativity. Working exclusively with such objects leads to formulas that are manifestly relativistically invariant, which is a considerable advantage in non-trivial contexts. For instance, demonstrating relativistic invariance of Maxwell's equations in their usual form is not trivial, while it is merely a routine calculation, really no more than an observation, using the field strength tensor formulation.
On the other hand, general relativity, from the outset, relies heavily on 4-vectors, and more generally tensors, representing physically relevant entities. Relating these via equations that do not rely on specific coordinates requires tensors, capable of connecting such 4-vectors even within a curved spacetime, and not just within a flat one as in special relativity. The study of tensors is outside the scope of this article, which provides only a basic discussion of spacetime.
Definition of 4-vectors
A 4-tuple, A = (A0, A1, A2, A3), is a "4-vector" if its components Ai transform between frames according to the Lorentz transformation.
If using (ct, x, y, z) coordinates, A is a 4-vector if it transforms (in the x-direction) according to
A0′ = γ(A0 − βA1),   A1′ = γ(A1 − βA0),   A2′ = A2,   A3′ = A3,
which comes from simply replacing ct with A0 and x with A1 in the earlier presentation of the Lorentz transformation.
As usual, when we write x, t, etc. we generally mean Δx, Δt etc.
The last three components of a 4-vector must be a standard vector in three-dimensional space. Therefore, a 4-vector must transform like (cΔt, Δx, Δy, Δz) under Lorentz transformations as well as rotations.
Properties of 4-vectors
Closure under linear combination: If A and B are 4-vectors, then C = aA + bB is also a 4-vector.
Inner-product invariance: If A and B are 4-vectors, then their inner product (scalar product) is invariant, i.e. their inner product is independent of the frame in which it is calculated. Note how the calculation of the inner product differs from the calculation of the inner product of a 3-vector. In the following, A and B are 4-vectors:
A · B ≡ A0B0 − A1B1 − A2B2 − A3B3
In addition to being invariant under Lorentz transformation, the above inner product is also invariant under rotation in 3-space.
Two vectors are said to be orthogonal if A · B = 0. Unlike the case with 3-vectors, orthogonal 4-vectors are not necessarily at right angles with each other. The rule is that two 4-vectors are orthogonal if they are offset by equal and opposite angles from the 45° line, which is the world line of a light ray. This implies that a lightlike 4-vector is orthogonal with itself.
Invariance of the magnitude of a vector: The magnitude of a 4-vector is the inner product of a 4-vector with itself, and is a frame-independent property. As with intervals, the magnitude may be positive, negative or zero, so that the vectors are referred to as timelike, spacelike or null (lightlike). Note that a null vector is not the same as a zero vector. A null vector is one for which A · A = 0, while a zero vector is one whose components are all zero. Special cases illustrating the invariance of the norm include the invariant interval c²t² − x² and the invariant length of the relativistic momentum vector, E²/c² − p² = m²c².
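The invariance properties listed above can be spot-checked numerically. The Python sketch below uses the (+, −, −, −) inner product and an x-direction boost (c = 1, with arbitrary components and boost speed).

import math

def dot(A, B):
    # Minkowski inner product with (+, -, -, -) signature.
    return A[0]*B[0] - A[1]*B[1] - A[2]*B[2] - A[3]*B[3]

def boost_x(A, beta):
    # Lorentz boost of a 4-vector along x, c = 1 units.
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return (g * (A[0] - beta * A[1]), g * (A[1] - beta * A[0]), A[2], A[3])

A = (5.0, 1.0, 2.0, 0.0)     # arbitrary 4-vectors
B = (3.0, -2.0, 0.5, 1.0)
beta = 0.6

print(dot(A, B), dot(boost_x(A, beta), boost_x(B, beta)))   # inner product unchanged
print(dot(A, A), dot(boost_x(A, beta), boost_x(A, beta)))   # magnitude unchanged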
Examples of 4-vectors
Displacement 4-vector: Otherwise known as the spacetime separation, this is (cΔt, Δx, Δy, Δz), or for infinitesimal separations, (c dt, dx, dy, dz).
Velocity 4-vector: This results when the displacement 4-vector is divided by dτ, where dτ is the proper time between the two events that yield dt, dx, dy, and dz:
U = dX/dτ = γ(c, dx/dt, dy/dt, dz/dt).
The 4-velocity is tangent to the world line of a particle, and has a length equal to one unit of time in the frame of the particle.
An accelerated particle does not have an inertial frame in which it is always at rest. However, an inertial frame can always be found which is momentarily comoving with the particle. This frame, the momentarily comoving reference frame (MCRF), enables application of special relativity to the analysis of accelerated particles.
Since photons move on null lines, dτ = 0 for a photon, and a 4-velocity cannot be defined. There is no frame in which a photon is at rest, and no MCRF can be established along a photon's path.
Energy–momentum 4-vector:
P = (E/c, px, py, pz).
As indicated before, there are varying treatments for the energy–momentum 4-vector, so that one may also see it expressed as (E, pc) or (E, p). The first component is the total energy (including mass) of the particle (or system of particles) in a given frame, while the remaining components are its spatial momentum. The energy–momentum 4-vector is a conserved quantity. (A brief numerical illustration is given after these examples.)
Acceleration 4-vector: This results from taking the derivative of the velocity 4-vector with respect to τ: A = dU/dτ.
Force 4-vector: This is the derivative of the momentum 4-vector with respect to τ: F = dP/dτ.
As expected, the final components of the above 4-vectors are all standard 3-vectors corresponding to spatial 3-momentum, 3-force, etc.
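As a small numerical illustration of the energy–momentum 4-vector described above (units with c = 1, electron-like numbers chosen for concreteness, not taken from the text), the invariant length E² − p² returns the same rest mass in any frame.

import math

# Units with c = 1: energies, momenta and masses all in MeV.
m = 0.511            # rest mass, roughly that of an electron, MeV (illustrative)
p = 1.0              # magnitude of the 3-momentum in some frame, MeV
E = math.sqrt(p**2 + m**2)          # total energy from the invariant E^2 - p^2 = m^2
print(E)                            # ~1.123 MeV
print(math.sqrt(E**2 - p**2))       # 0.511: the frame-independent rest mass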
4-vectors and physical law
The first postulate of special relativity declares the equivalency of all inertial frames. A physical law holding in one frame must apply in all frames, since otherwise it would be possible to differentiate between frames. Newtonian momenta fail to behave properly under Lorentzian transformation, and Einstein preferred to change the definition of momentum to one involving 4-vectors rather than give up on conservation of momentum.
Physical laws must be based on constructs that are frame independent. This means that physical laws may take the form of equations connecting scalars, which are always frame independent. However, equations involving 4-vectors require the use of tensors with appropriate rank, which themselves can be thought of as being built up from 4-vectors.
Acceleration
It is a common misconception that special relativity is applicable only to inertial frames, and that it is unable to handle accelerating objects or accelerating reference frames. Actually, accelerating objects can generally be analyzed without needing to deal with accelerating frames at all. It is only when gravitation is significant that general relativity is required.
Properly handling accelerating frames does require some care, however. The difference between special and general relativity is that (1) In special relativity, all velocities are relative, but acceleration is absolute. (2) In general relativity, all motion is relative, whether inertial, accelerating, or rotating. To accommodate this difference, general relativity uses curved spacetime.
In this section, we analyze several scenarios involving accelerated reference frames.
Dewan–Beran–Bell spaceship paradox
The Dewan–Beran–Bell spaceship paradox (Bell's spaceship paradox) is a good example of a problem where intuitive reasoning unassisted by the geometric insight of the spacetime approach can lead to issues.
In Fig. 7-4, two identical spaceships float in space and are at rest relative to each other. They are connected by a string which is capable of only a limited amount of stretching before breaking. At a given instant in our frame, the observer frame, both spaceships accelerate in the same direction along the line between them with the same constant proper acceleration. Will the string break?
When the paradox was new and relatively unknown, even professional physicists had difficulty working out the solution. Two lines of reasoning lead to opposite conclusions. Both arguments, which are presented below, are flawed even though one of them yields the correct answer.
To observers in the rest frame, the spaceships start a distance L apart and remain the same distance apart during acceleration. During acceleration, L is a length-contracted version of the distance L′ = γL in the frame of the accelerating spaceships. After a sufficiently long time, γ will increase to a sufficiently large factor that the string must break.
Let A and B be the rear and front spaceships. In the frame of the spaceships, each spaceship sees the other spaceship doing the same thing that it is doing. A says that B has the same acceleration that he has, and B sees that A matches her every move. So the spaceships stay the same distance apart, and the string does not break.
The problem with the first argument is that there is no "frame of the spaceships." There cannot be, because the two spaceships measure a growing distance between the two. Because there is no common frame of the spaceships, the length of the string is ill-defined. Nevertheless, the conclusion is correct, and the argument is mostly right. The second argument, however, completely ignores the relativity of simultaneity.
A spacetime diagram (Fig. 7-5) makes the correct solution to this paradox almost immediately evident. Two observers in Minkowski spacetime accelerate with constant magnitude acceleration for proper time (acceleration and elapsed time measured by the observers themselves, not some inertial observer). They are comoving and inertial before and after this phase. In Minkowski geometry, the length along the line of simultaneity turns out to be greater than the length along the line of simultaneity .
The length increase can be calculated with the help of the Lorentz transformation. If, as illustrated in Fig. 7-5, the acceleration is finished, the ships will remain at a constant offset in some frame If and are the ships' positions in the positions in frame are:
The "paradox", as it were, comes from the way that Bell constructed his example. In the usual discussion of Lorentz contraction, the rest length is fixed and the moving length shortens as measured in frame . As shown in Fig. 7-5, Bell's example asserts the moving lengths and measured in frame to be fixed, thereby forcing the rest frame length in frame to increase.
Accelerated observer with horizon
Certain special relativity problem setups can lead to insight about phenomena normally associated with general relativity, such as event horizons. In the text accompanying Section "Invariant hyperbola" of the article Spacetime, the magenta hyperbolae represented actual paths that are tracked by a constantly accelerating traveler in spacetime. During periods of positive acceleration, the traveler's velocity just approaches the speed of light, while, measured in our frame, the traveler's acceleration constantly decreases.
Fig. 7-6 details various features of the traveler's motions with more specificity. At any given moment, her space axis is formed by a line passing through the origin and her current position on the hyperbola, while her time axis is the tangent to the hyperbola at her position. The velocity parameter β approaches a limit of one as ct increases. Likewise, γ approaches infinity.
The shape of the invariant hyperbola corresponds to a path of constant proper acceleration. This is demonstrable as follows:
We remember that
Since we conclude that
From the relativistic force law,
Substituting from step 2 and the expression for from step 3 yields which is a constant expression.
Fig. 7-6 illustrates a specific calculated scenario. Terence (A) and Stella (B) initially stand together 100 light hours from the origin. Stella lifts off at time 0, her spacecraft accelerating at 0.01 c per hour. Every twenty hours, Terence radios updates to Stella about the situation at home (solid green lines). Stella receives these regular transmissions, but the increasing distance (offset in part by time dilation) causes her to receive Terence's communications later and later as measured on her clock, and she never receives any communications from Terence after 100 hours on his clock (dashed green lines).
After 100 hours according to Terence's clock, Stella enters a dark region. She has traveled outside Terence's timelike future. On the other hand, Terence can continue to receive Stella's messages to him indefinitely. He just has to wait long enough. Spacetime has been divided into distinct regions separated by an apparent event horizon. So long as Stella continues to accelerate, she can never know what takes place behind this horizon.
Relativity and unifying electromagnetism
Theoretical investigation in classical electromagnetism led to the discovery of wave propagation. Equations generalizing the electromagnetic effects found that the finite propagation speed of the E and B fields required certain behaviors of charged particles. The general study of moving charges forms the Liénard–Wiechert potential, which is a step towards special relativity.
The Lorentz transformation of the electric field of a moving charge into a non-moving observer's reference frame results in the appearance of a mathematical term commonly called the magnetic field. Conversely, the magnetic field generated by a moving charge disappears and becomes a purely electrostatic field in a comoving frame of reference. Maxwell's equations are thus simply an empirical fit to special relativistic effects in a classical model of the Universe. As electric and magnetic fields are reference frame dependent and thus intertwined, one speaks of electromagnetic fields. Special relativity provides the transformation rules for how an electromagnetic field in one inertial frame appears in another inertial frame.
Maxwell's equations in the 3D form are already consistent with the physical content of special relativity, although they are easier to manipulate in a manifestly covariant form, that is, in the language of tensor calculus.
Theories of relativity and quantum mechanics
Special relativity can be combined with quantum mechanics to form relativistic quantum mechanics and quantum electrodynamics. How general relativity and quantum mechanics can be unified is one of the unsolved problems in physics; quantum gravity and a "theory of everything", which require a unification including general relativity too, are active and ongoing areas in theoretical research.
The early Bohr–Sommerfeld atomic model explained the fine structure of alkali metal atoms using both special relativity and the preliminary knowledge of quantum mechanics available at the time.
In 1928, Paul Dirac constructed an influential relativistic wave equation, now known as the Dirac equation in his honour, that is fully compatible both with special relativity and with the final version of quantum theory existing after 1926. This equation not only described the intrinsic angular momentum of the electrons called spin, it also led to the prediction of the antiparticle of the electron (the positron), and fine structure could only be fully explained with special relativity. It was the first foundation of relativistic quantum mechanics.
On the other hand, the existence of antiparticles leads to the conclusion that relativistic quantum mechanics is not enough for a more accurate and complete theory of particle interactions. Instead, a theory of particles interpreted as quantized fields, called quantum field theory, becomes necessary; in which particles can be created and destroyed throughout space and time.
Status
Special relativity in its Minkowski spacetime is accurate only when the absolute value of the gravitational potential is much less than c² in the region of interest. In a strong gravitational field, one must use general relativity. General relativity becomes special relativity in the limit of a weak field. At very small scales, such as at the Planck length and below, quantum effects must be taken into consideration, resulting in quantum gravity. However, at macroscopic scales and in the absence of strong gravitational fields, special relativity is experimentally tested to an extremely high degree of accuracy (10⁻²⁰)
and thus accepted by the physics community. Experimental results which appear to contradict it are not reproducible and are thus widely believed to be due to experimental errors.
Special relativity is mathematically self-consistent, and it is an organic part of all modern physical theories, most notably quantum field theory, string theory, and general relativity (in the limiting case of negligible gravitational fields).
Newtonian mechanics mathematically follows from special relativity at small velocities (compared to the speed of light) – thus Newtonian mechanics can be considered as a special relativity of slow moving bodies. See classical mechanics for a more detailed discussion.
Several experiments predating Einstein's 1905 paper are now interpreted as evidence for relativity. Of these, it is known that Einstein was aware of the Fizeau experiment before 1905, and historians have concluded that he was at least aware of the Michelson–Morley experiment as early as 1899, despite claims he made in his later years that it played no role in his development of the theory.
The Fizeau experiment (1851, repeated by Michelson and Morley in 1886) measured the speed of light in moving media, with results that are consistent with relativistic addition of colinear velocities.
The famous Michelson–Morley experiment (1881, 1887) gave further support to the postulate that detecting an absolute reference velocity was not achievable. It should be stated here that, contrary to many alternative claims, it said little about the invariance of the speed of light with respect to the source and observer's velocity, as both source and observer were travelling together at the same velocity at all times.
The Trouton–Noble experiment (1903) showed that the torque on a capacitor is independent of position and inertial reference frame.
The Experiments of Rayleigh and Brace (1902, 1904) showed that length contraction does not lead to birefringence for a co-moving observer, in accordance with the relativity principle.
Particle accelerators accelerate and measure the properties of particles moving at near the speed of light, where their behavior is consistent with relativity theory and inconsistent with the earlier Newtonian mechanics. These machines would simply not work if they were not engineered according to relativistic principles. In addition, a considerable number of modern experiments have been conducted to test special relativity. Some examples:
Tests of relativistic energy and momentum – testing the limiting speed of particles
Ives–Stilwell experiment – testing relativistic Doppler effect and time dilation
Experimental testing of time dilation – relativistic effects on a fast-moving particle's half-life
Kennedy–Thorndike experiment – time dilation in accordance with Lorentz transformations
Hughes–Drever experiment – testing isotropy of space and mass
Modern searches for Lorentz violation – various modern tests
Experiments to test emission theory demonstrated that the speed of light is independent of the speed of the emitter.
Experiments to test the aether drag hypothesis – no "aether flow obstruction".
Technical discussion of spacetime
Geometry of spacetime
Comparison between flat Euclidean space and Minkowski space
Special relativity uses a "flat" 4-dimensional Minkowski space – an example of a spacetime. Minkowski spacetime appears to be very similar to the standard 3-dimensional Euclidean space, but there is a crucial difference with respect to time.
In 3D space, the differential of distance (line element) ds is defined by
where dx1, dx2, dx3 are the differentials of the three spatial dimensions. In Minkowski geometry, there is an extra dimension with coordinate X0 derived from time, such that the distance differential fulfills
where dx0, dx1, dx2, dx3 are the differentials of the four spacetime dimensions. This suggests a deep theoretical insight: special relativity is simply a rotational symmetry of our spacetime, analogous to the rotational symmetry of Euclidean space (see Fig. 10-1). Just as Euclidean space uses a Euclidean metric, so spacetime uses a Minkowski metric. Basically, special relativity can be stated as the invariance of any spacetime interval (that is, the 4D distance between any two events) when viewed from any inertial reference frame. All equations and effects of special relativity can be derived from this rotational symmetry (the Poincaré group) of Minkowski spacetime.
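In one common sign convention, the two line elements take the form (a sketch; other authors flip the overall sign of the Minkowski expression):

```latex
% Euclidean line element in three spatial dimensions:
ds^{2} = dx_{1}^{2} + dx_{2}^{2} + dx_{3}^{2}
% Minkowski line element, with dx_0 = c\,dt:
ds^{2} = dx_{1}^{2} + dx_{2}^{2} + dx_{3}^{2} - dx_{0}^{2}
       = dx_{1}^{2} + dx_{2}^{2} + dx_{3}^{2} - c^{2}\,dt^{2}
```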
The actual form of ds above depends on the metric and on the choices for the X0 coordinate.
To make the time coordinate look like the space coordinates, it can be treated as imaginary: X0 = ict (this is called a Wick rotation).
According to Misner, Thorne and Wheeler (1971, §2.3), ultimately the deeper understanding of both special and general relativity will come from the study of the Minkowski metric (described below), taking X0 = ct, rather than from a "disguised" Euclidean metric using ict as the time coordinate.
Some authors use X0 = t, with factors of c elsewhere to compensate; for instance, spatial coordinates are divided by c, or factors of c^(±2) are included in the metric tensor.
These numerous conventions can be superseded by using natural units where c = 1. Then space and time have equivalent units, and no factors of c appear anywhere.
3D spacetime
If we reduce the spatial dimensions to 2, so that we can represent the physics in a 3D space
we see that the null geodesics lie along a dual-cone (see Fig. 10-2) defined by the equation:
or simply
which is the equation of a circle of radius c dt.
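Explicitly, in the convention sketched above, setting the line element to zero for two spatial dimensions gives:

```latex
0 = dx_{1}^{2} + dx_{2}^{2} - c^{2}\,dt^{2}
\quad\Longrightarrow\quad
dx_{1}^{2} + dx_{2}^{2} = c^{2}\,dt^{2}
```

At any fixed time step dt this is a circle of radius c dt; sweeping dt through positive and negative values traces out the dual cone.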
4D spacetime
If we extend this to three spatial dimensions, the null geodesics form the 4-dimensional light cone, obtained by setting the four-dimensional line element to zero.
As illustrated in Fig. 10-3, the null geodesics can be visualized as a set of continuous concentric spheres with radii = c dt.
This null dual-cone represents the "line of sight" of a point in space. That is, when we look at the stars and say "The light from that star which I am receiving is X years old", we are looking down this line of sight: a null geodesic. We are looking at an event a distance d away and a time d/c in the past. For this reason the null dual cone is also known as the "light cone". (The point in the lower left of Fig. 10-2 represents the star, the origin represents the observer, and the line represents the null geodesic "line of sight".)
The cone in the −t region is the information that the point is "receiving", while the cone in the +t section is the information that the point is "sending".
The geometry of Minkowski space can be depicted using Minkowski diagrams, which are useful also in understanding many of the thought experiments in special relativity.
Physics in spacetime
Transformations of physical quantities between reference frames
Above, the Lorentz transformation for the time coordinate and three space coordinates illustrates that they are intertwined. This is true more generally: certain pairs of "timelike" and "spacelike" quantities naturally combine on equal footing under the same Lorentz transformation.
The Lorentz transformation in standard configuration above, that is, for a boost in the x-direction, can be recast into matrix form as follows:
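In the standard configuration (a boost with speed v along the x-axis) the matrix is conventionally written as (a sketch of the standard form):

```latex
\begin{pmatrix} ct' \\ x' \\ y' \\ z' \end{pmatrix}
=
\begin{pmatrix}
\gamma & -\beta\gamma & 0 & 0 \\
-\beta\gamma & \gamma & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} ct \\ x \\ y \\ z \end{pmatrix},
\qquad \beta = \frac{v}{c}, \quad \gamma = \frac{1}{\sqrt{1-\beta^{2}}}
```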
In Newtonian mechanics, quantities that have magnitude and direction are mathematically described as 3d vectors in Euclidean space, and in general they are parametrized by time. In special relativity, this notion is extended by adding the appropriate timelike quantity to a spacelike vector quantity, and we have 4d vectors, or "four-vectors", in Minkowski spacetime. The components of vectors are written using tensor index notation, as this has numerous advantages. The notation makes it clear the equations are manifestly covariant under the Poincaré group, thus bypassing the tedious calculations to check this fact. In constructing such equations, we often find that equations previously thought to be unrelated are, in fact, closely connected being part of the same tensor equation. Recognizing other physical quantities as tensors simplifies their transformation laws. Throughout, upper indices (superscripts) are contravariant indices rather than exponents except when they indicate a square (this should be clear from the context), and lower indices (subscripts) are covariant indices. For simplicity and consistency with the earlier equations, Cartesian coordinates will be used.
The simplest example of a four-vector is the position of an event in spacetime, which constitutes a timelike component ct and spacelike component , in a contravariant position four-vector with components:
where we define X0 = ct so that the time coordinate has the same dimension of distance as the other spatial dimensions, and space and time are treated equally. Now the transformation of the contravariant components of the position 4-vector can be compactly written as:
where there is an implied summation over the repeated index running from 0 to 3, and the Lorentz transformation is represented by a 4 × 4 matrix.
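As a minimal numerical sketch of this transformation rule (the function and variable names are illustrative only, and the overall sign of the metric used in the invariance check is a convention):

```python
import numpy as np

def boost_x(beta):
    """Return the 4x4 Lorentz boost matrix for speed v = beta*c along the x-axis."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([
        [ gamma,        -beta * gamma, 0.0, 0.0],
        [-beta * gamma,  gamma,        0.0, 0.0],
        [ 0.0,           0.0,          1.0, 0.0],
        [ 0.0,           0.0,          0.0, 1.0],
    ])

# Position four-vector (ct, x, y, z) of an event, in consistent units (here: light-seconds).
X = np.array([2.0, 1.0, 0.0, 0.0])

# Components of the same event as measured in a frame moving at 0.6 c along x.
X_prime = boost_x(0.6) @ X

# The spacetime interval (ct)^2 - x^2 - y^2 - z^2 is the same in both frames.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
assert np.isclose(X @ eta @ X, X_prime @ eta @ X_prime)
print(X_prime)  # approximately [1.75, -0.25, 0.0, 0.0]
```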
More generally, all contravariant components of a four-vector transform from one frame to another frame by a Lorentz transformation:
Examples of other 4-vectors include the four-velocity defined as the derivative of the position 4-vector with respect to proper time:
where the Lorentz factor is:
The relativistic energy and relativistic momentum of an object are respectively the timelike and spacelike components of a contravariant four-momentum vector:
where m is the invariant mass.
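In one common convention the components can be sketched as:

```latex
P^{\mu} = m\,U^{\mu} = \left(\frac{E}{c},\, p_{x},\, p_{y},\, p_{z}\right),
\qquad E = \gamma m c^{2}, \qquad \mathbf{p} = \gamma m \mathbf{v}
```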
The four-acceleration is the proper time derivative of 4-velocity:
The transformation rules for three-dimensional velocities and accelerations are very awkward; even above in standard configuration the velocity equations are quite complicated owing to their non-linearity. On the other hand, the transformation of four-velocity and four-acceleration are simpler by means of the Lorentz transformation matrix.
The four-gradient of a scalar field φ transforms covariantly rather than contravariantly:
which is the transpose of:
only in Cartesian coordinates. It is the covariant derivative that transforms with manifest covariance; in Cartesian coordinates this happens to reduce to the partial derivatives, but not in other coordinates.
More generally, the covariant components of a 4-vector transform according to the inverse Lorentz transformation:
where the matrix appearing here is the reciprocal (inverse) of the matrix that transforms the contravariant components.
The postulates of special relativity constrain the exact form the Lorentz transformation matrices take.
More generally, most physical quantities are best described as (components of) tensors. So to transform from one frame to another, we use the well-known tensor transformation law
where the matrices acting on covariant (lower) indices are the reciprocals of those acting on contravariant (upper) indices. All tensors transform by this rule.
An example of a four-dimensional second order antisymmetric tensor is the relativistic angular momentum, which has six components: three are the classical angular momentum, and the other three are related to the boost of the center of mass of the system. The derivative of the relativistic angular momentum with respect to proper time is the relativistic torque, also a second order antisymmetric tensor.
The electromagnetic field tensor is another second order antisymmetric tensor field, with six components: three for the electric field and another three for the magnetic field. There is also the stress–energy tensor for the electromagnetic field, namely the electromagnetic stress–energy tensor.
Metric
The metric tensor allows one to define the inner product of two vectors, which in turn allows one to assign a magnitude to the vector. Given the four-dimensional nature of spacetime, the Minkowski metric η has components (valid with suitably chosen coordinates) which can be arranged in a 4 × 4 matrix:
which is equal to its reciprocal in those frames. Throughout, we use the signs as above; different authors use different conventions – see Minkowski metric alternative signs.
The Poincaré group is the most general group of transformations which preserves the Minkowski metric:
and this is the physical symmetry underlying special relativity.
The metric can be used for raising and lowering indices on vectors and tensors. Invariants can be constructed using the metric; the inner product of a 4-vector T with another 4-vector S is:
Invariant means that it takes the same value in all inertial frames, because it is a scalar (rank 0 tensor), and so no transformation matrix appears in its trivial transformation. The magnitude of the 4-vector T is the positive square root of the inner product with itself:
One can extend this idea to tensors of higher order, for a second order tensor we can form the invariants:
similarly for higher order tensors. Invariant expressions, particularly inner products of 4-vectors with themselves, provide equations that are useful for calculations, because one does not need to perform Lorentz transformations to determine the invariants.
Relativistic kinematics and invariance
The coordinate differentials also transform contravariantly:
so the squared length of the differential of the position four-vector dXμ, constructed using the metric, is an invariant. Notice that when the line element dX² is negative, √(−dX²)/c is the differential of proper time, while when dX² is positive, √(dX²) is the differential of proper distance.
The 4-velocity Uμ has an invariant form:
which means all velocity four-vectors have a magnitude of c. This is an expression of the fact that there is no such thing as being at coordinate rest in relativity: at the least, you are always moving forward through time. Differentiating the above equation by τ produces:
So in special relativity, the acceleration four-vector and the velocity four-vector are orthogonal.
Relativistic dynamics and invariance
The invariant magnitude of the momentum 4-vector generates the energy–momentum relation:
We can work out what this invariant is by first arguing that, since it is a scalar, it does not matter in which reference frame we calculate it, and then by transforming to a frame where the total momentum is zero.
We see that the rest energy is an independent invariant. A rest energy can be calculated even for particles and systems in motion, by translating to a frame in which momentum is zero.
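Written out, the standard form of the relation is (a sketch; the overall sign of the invariant depends on the metric convention):

```latex
P^{\mu}P_{\mu} = \frac{E^{2}}{c^{2}} - |\mathbf{p}|^{2} = m^{2}c^{2}
\quad\Longleftrightarrow\quad
E^{2} = (pc)^{2} + \left(mc^{2}\right)^{2}
```

In the frame where the total momentum vanishes, this reduces to the rest energy discussed next.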
The rest energy is related to the mass according to the celebrated equation discussed above:
The mass of systems measured in their center of momentum frame (where total momentum is zero) is given by the total energy of the system in this frame. It may not be equal to the sum of individual system masses measured in other frames.
To use Newton's third law of motion, both forces must be defined as the rate of change of momentum with respect to the same time coordinate. That is, it requires the 3D force defined above. Unfortunately, there is no tensor in 4D which contains the components of the 3D force vector among its components.
If a particle is not traveling at c, one can transform the 3D force from the particle's co-moving reference frame into the observer's reference frame. This yields a 4-vector called the four-force. It is the rate of change of the above energy momentum four-vector with respect to proper time. The covariant version of the four-force is:
In the rest frame of the object, the time component of the four-force is zero unless the "invariant mass" of the object is changing (this requires a non-closed system in which energy/mass is being directly added or removed from the object) in which case it is the negative of that rate of change of mass, times c. In general, though, the components of the four-force are not equal to the components of the three-force, because the three force is defined by the rate of change of momentum with respect to coordinate time, that is, dp/dt while the four-force is defined by the rate of change of momentum with respect to proper time, that is, dp/dτ.
In a continuous medium, the 3D density of force combines with the density of power to form a covariant 4-vector. The spatial part is the result of dividing the force on a small cell (in 3-space) by the volume of that cell. The time component is −1/c times the power transferred to that cell divided by the volume of the cell. This will be used below in the section on electromagnetism.
See also
People
Max Planck
Hermann Minkowski
Max von Laue
Arnold Sommerfeld
Max Born
Mileva Marić
Relativity
History of special relativity
Doubly special relativity
Bondi k-calculus
Einstein synchronisation
Rietdijk–Putnam argument
Special relativity (alternative formulations)
Relativity priority dispute
Physics
Einstein's thought experiments
Physical cosmology
Relativistic Euler equations
Lorentz ether theory
Moving magnet and conductor problem
Shape waves
Relativistic heat conduction
Relativistic disk
Born rigidity
Born coordinates
Mathematics
Lorentz group
Relativity in the APS formalism
Philosophy
Actualism
Conventionalism
Paradoxes
Ehrenfest paradox
Bell's spaceship paradox
Velocity composition paradox
Lighthouse paradox
Notes
Primary sources
References
Further reading
Texts by Einstein and text about history of special relativity
Einstein, Albert (1920). Relativity: The Special and General Theory.
Einstein, Albert (1996). The Meaning of Relativity. Fine Communications.
Logunov, Anatoly A. (2005). Henri Poincaré and the Relativity Theory (transl. from Russian by G. Pontocorvo and V. O. Soloviev, edited by V. A. Petrov). Nauka, Moscow.
Textbooks
Charles Misner, Kip Thorne, and John Archibald Wheeler (1971) Gravitation. W. H. Freeman & Co.
Post, E.J., 1997 (1962) Formal Structure of Electromagnetics: General Covariance and Electromagnetics. Dover Publications.
Wolfgang Rindler (1991). Introduction to Special Relativity (2nd ed.). Oxford University Press.
Harvey R. Brown (2005). Physical Relativity: Space–time Structure from a Dynamical Perspective. Oxford University Press.
Silberstein, Ludwik (1914). The Theory of Relativity.
Taylor, Edwin, and John Archibald Wheeler (1992). Spacetime Physics (2nd ed.). W. H. Freeman & Co.
Tipler, Paul, and Llewellyn, Ralph (2002). Modern Physics (4th ed.). W. H. Freeman & Co.
Journal articles
Special Relativity Scholarpedia
External links
Original works
Zur Elektrodynamik bewegter Körper Einstein's original work in German, Annalen der Physik, Bern 1905
On the Electrodynamics of Moving Bodies English Translation as published in the 1923 book The Principle of Relativity.
Special relativity for a general audience (no mathematical knowledge required)
Einstein Light An award-winning, non-technical introduction (film clips and demonstrations) supported by dozens of pages of further explanations and animations, at levels with or without mathematics.
Einstein Online Introduction to relativity theory, from the Max Planck Institute for Gravitational Physics.
Audio: Cain/Gay (2006) – Astronomy Cast. Einstein's Theory of Special Relativity
Special relativity explained (using simple or more advanced mathematics)
Bondi K-Calculus – A simple introduction to the special theory of relativity.
Greg Egan's Foundations .
The Hogg Notes on Special Relativity A good introduction to special relativity at the undergraduate level, using calculus.
Relativity Calculator: Special Relativity – An algebraic and integral calculus derivation for .
MathPages – Reflections on Relativity A complete online book on relativity with an extensive bibliography.
Special Relativity An introduction to special relativity at the undergraduate level.
Special Relativity Lecture Notes is a standard introduction to special relativity containing illustrative explanations based on drawings and spacetime diagrams from Virginia Polytechnic Institute and State University.
Understanding Special Relativity The theory of special relativity in an easily understandable way.
An Introduction to the Special Theory of Relativity (1964) by Robert Katz, "an introduction ... that is accessible to any student who has had an introduction to general physics and some slight acquaintance with the calculus" (130 pp; pdf format).
Lecture Notes on Special Relativity by J D Cresser Department of Physics Macquarie University.
SpecialRelativity.net – An overview with visualizations and minimal mathematics.
Relativity 4-ever? The problem of superluminal motion is discussed in an entertaining manner.
Visualization
Raytracing Special Relativity Software visualizing several scenarios under the influence of special relativity.
Real Time Relativity The Australian National University. Relativistic visual effects experienced through an interactive program.
Spacetime travel A variety of visualizations of relativistic effects, from relativistic motion to black holes.
Through Einstein's Eyes The Australian National University. Relativistic visual effects explained with movies and images.
Warp Special Relativity Simulator A computer program to show the effects of traveling close to the speed of light.
visualizing the Lorentz transformation.
Original interactive FLASH Animations from John de Pillis illustrating Lorentz and Galilean frames, Train and Tunnel Paradox, the Twin Paradox, Wave Propagation, Clock Synchronization, etc.
lightspeed An OpenGL-based program developed to illustrate the effects of special relativity on the appearance of moving objects.
Animation showing the stars near Earth, as seen from a spacecraft accelerating rapidly to light speed.
Albert Einstein | 0.775063 | 0.999158 | 0.774411 |
Electric potential | Electric potential (also called the electric field potential, potential drop, the electrostatic potential) is defined as the amount of work/energy needed per unit of electric charge to move the charge from a reference point to a specific point in an electric field. More precisely, the electric potential is the energy per unit charge for a test charge that is so small that the disturbance of the field under consideration is negligible. The motion across the field is supposed to proceed with negligible acceleration, so as to avoid the test charge acquiring kinetic energy or producing radiation. By definition, the electric potential at the reference point is zero units. Typically, the reference point is earth or a point at infinity, although any point can be used.
In classical electrostatics, the electrostatic field is a vector quantity expressed as the gradient of the electrostatic potential, which is a scalar quantity denoted by V or occasionally φ, equal to the electric potential energy of any charged particle at any location (measured in joules) divided by the charge of that particle (measured in coulombs). By dividing out the charge on the particle a quotient is obtained that is a property of the electric field itself. In short, an electric potential is the electric potential energy per unit charge.
This value can be calculated in either a static (time-invariant) or a dynamic (time-varying) electric field at a specific time, with the unit joules per coulomb (J⋅C⁻¹) or volt (V). The electric potential at infinity is assumed to be zero.
In electrodynamics, when time-varying fields are present, the electric field cannot be expressed only as a scalar potential. Instead, the electric field can be expressed as both the scalar electric potential and the magnetic vector potential. The electric potential and the magnetic vector potential together form a four-vector, so that the two kinds of potential are mixed under Lorentz transformations.
Practically, the electric potential is a continuous function in all space, because a spatial derivative of a discontinuous electric potential yields an electric field of impossibly infinite magnitude. Notably, the electric potential due to an idealized point charge (proportional to 1/r, with r the distance from the point charge) is continuous in all space except at the location of the point charge. Though the electric field is not continuous across an idealized surface charge, it is not infinite at any point. Therefore, the electric potential is continuous across an idealized surface charge. Additionally, the electric potential of an idealized line of charge (proportional to ln(r), with r the radial distance from the line of charge) is continuous everywhere except on the line of charge.
Introduction
Classical mechanics explores concepts such as force, energy, and potential. Force and potential energy are directly related. A net force acting on any object will cause it to accelerate. As an object moves in the direction of a force acting on it, its potential energy decreases. For example, the gravitational potential energy of a cannonball at the top of a hill is greater than at the base of the hill. As it rolls downhill, its potential energy decreases, being converted into motion – kinetic energy.
It is possible to define the potential of certain force fields so that the potential energy of an object in that field depends only on the position of the object with respect to the field. Two such force fields are a gravitational field and an electric field (in the absence of time-varying magnetic fields). Such fields affect objects because of the intrinsic properties (e.g., mass or charge) and positions of the objects.
An object may possess a property known as electric charge. Since an electric field exerts force on a charged object, if the object has a positive charge, the force will be in the direction of the electric field vector at the location of the charge; if the charge is negative, the force will be in the opposite direction.
The magnitude of the force is given by the quantity of the charge multiplied by the magnitude of the electric field vector.
Electrostatics
An electric potential at a point in a static electric field is given by the line integral
where the integral is taken along an arbitrary path from some fixed reference point to the point of interest; the potential is uniquely determined up to a constant that is added to or subtracted from the integral. In electrostatics, the Maxwell–Faraday equation reveals that the curl of the electric field is zero, making the electric field conservative. Thus, the line integral above does not depend on the specific path chosen but only on its endpoints, making the potential well-defined everywhere. The gradient theorem then allows us to write:
This states that the electric field points "downhill" towards lower voltages. By Gauss's law, the potential can also be found to satisfy Poisson's equation:
where ρ is the total charge density and ∇· denotes the divergence.
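Collected together, the standard SI-unit forms of these electrostatic relations can be sketched as:

```latex
V(\mathbf{r}) = -\int_{\mathcal{C}} \mathbf{E}\cdot d\boldsymbol{\ell},
\qquad
\mathbf{E} = -\nabla V,
\qquad
\nabla^{2} V = -\frac{\rho}{\varepsilon_{0}}
```

Here C is any path from the reference point to the field point, ρ is the total charge density, and ε0 is the vacuum permittivity.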
The concept of electric potential is closely linked with potential energy. A test charge, q, has an electric potential energy, U, given by the product of the charge and the electric potential: U = qV.
The potential energy and hence, also the electric potential, is only defined up to an additive constant: one must arbitrarily choose a position where the potential energy and the electric potential are zero.
These equations cannot be used when the curl of the electric field is nonzero, i.e., in the case of a non-conservative electric field (caused by a changing magnetic field; see Maxwell's equations). The generalization of electric potential to this case is described in the section on the generalization to electrodynamics below.
Electric potential due to a point charge
The electric potential arising from a point charge, Q, at a distance, r, from the location of Q is observed to be
where ε0 is the permittivity of vacuum; this expression is known as the Coulomb potential. Note that, in contrast to the magnitude of an electric field due to a point charge, the electric potential scales with the reciprocal of the radius, rather than the radius squared.
The electric potential at any location, r, in a system of point charges is equal to the sum of the individual electric potentials due to every point charge in the system. This fact simplifies calculations significantly, because addition of potential (scalar) fields is much easier than addition of the electric (vector) fields. Specifically, the potential of a set of discrete point charges becomes
where
is a point at which the potential is evaluated;
is a point at which there is a nonzero charge; and
is the charge at the point .
And the potential of a continuous charge distribution becomes
where
is a point at which the potential is evaluated;
is a region containing all the points at which the charge density is nonzero;
is a point inside ; and
is the charge density at the point .
The equations given above for the electric potential (and all the equations used here) are in the forms required by SI units. In some other (less common) systems of units, such as CGS-Gaussian, many of these equations would be altered.
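As a small numerical sketch of the superposition rule for discrete point charges (SI units; the function name and the charge values are purely illustrative):

```python
import numpy as np

K = 8.9875517923e9  # Coulomb constant 1/(4*pi*epsilon_0), in N*m^2/C^2

def potential(r, charges):
    """Electric potential at point r (metres) due to a list of (q_i, r_i) point charges.

    Each entry is (charge in coulombs, position as a length-3 sequence in metres).
    The reference point is taken at infinity, where the potential is zero.
    """
    r = np.asarray(r, dtype=float)
    total = 0.0
    for q, r_i in charges:
        distance = np.linalg.norm(r - np.asarray(r_i, dtype=float))
        total += K * q / distance  # Coulomb potential of a single point charge
    return total                   # volts

# Example: a +1 nC and a -1 nC charge 10 cm apart, evaluated 5 cm above the midpoint.
charges = [(1e-9, [-0.05, 0.0, 0.0]), (-1e-9, [0.05, 0.0, 0.0])]
print(potential([0.0, 0.05, 0.0], charges))  # ~0 V by symmetry
```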
Generalization to electrodynamics
When time-varying magnetic fields are present (which is true whenever there are time-varying electric fields and vice versa), it is not possible to describe the electric field simply as a scalar potential because the electric field is no longer conservative: the line integral of the electric field is path-dependent, because its curl is nonzero (due to the Maxwell–Faraday equation).
Instead, one can still define a scalar potential by also including the magnetic vector potential A. In particular, A is defined to satisfy:
where B is the magnetic field. By the fundamental theorem of vector calculus, such an A can always be found, since the divergence of the magnetic field is always zero due to the absence of magnetic monopoles. Now, the quantity
is a conservative field, since the curl of the electric field is canceled by the curl of the time derivative of the vector potential, according to the Maxwell–Faraday equation. One can therefore write
where V is the scalar potential defined by this conservative field.
The electrostatic potential is simply the special case of this definition where A is time-invariant. On the other hand, for time-varying fields, the line integral of the electric field is not determined by the scalar potential alone, unlike in electrostatics.
Gauge freedom
The electrostatic potential could have any constant added to it without affecting the electric field. In electrodynamics, the electric potential has infinitely many degrees of freedom. For any (possibly time-varying or space-varying) scalar field ψ, we can perform the following gauge transformation to find a new set of potentials that produce exactly the same electric and magnetic fields:
Given different choices of gauge, the electric potential could have quite different properties. In the Coulomb gauge, the electric potential is given by Poisson's equation
just like in electrostatics. However, in the Lorenz gauge, the electric potential is a retarded potential that propagates at the speed of light and is the solution to an inhomogeneous wave equation:
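A sketch of the standard expressions for the gauge transformation and for the two gauge conditions just mentioned (SI units, with ψ an arbitrary scalar field):

```latex
% Gauge transformation leaving E and B unchanged:
V' = V - \frac{\partial \psi}{\partial t}, \qquad
\mathbf{A}' = \mathbf{A} + \nabla \psi
% Coulomb gauge:
\nabla^{2} V = -\frac{\rho}{\varepsilon_{0}}
% Lorenz gauge:
\nabla^{2} V - \frac{1}{c^{2}}\frac{\partial^{2} V}{\partial t^{2}} = -\frac{\rho}{\varepsilon_{0}}
```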
Units
The SI derived unit of electric potential is the volt (in honor of Alessandro Volta), denoted as V, which is why the electric potential difference between two points in space is known as a voltage. Older units are rarely used today. Variants of the centimetre–gram–second system of units included a number of different units for electric potential, including the abvolt and the statvolt.
Galvani potential versus electrochemical potential
Inside metals (and other solids and liquids), the energy of an electron is affected not only by the electric potential, but also by the specific atomic environment that it is in. When a voltmeter is connected between two different types of metal, it measures the potential difference corrected for the different atomic environments. The quantity measured by a voltmeter is called the electrochemical potential or Fermi level, while the pure unadjusted electric potential, V, is sometimes called the Galvani potential. The terms "voltage" and "electric potential" are a bit ambiguous; in practice, they may refer to either of these in different contexts.
Common formulas
See also
Absolute electrode potential
Electrochemical potential
Electrode potential
References
Further reading
Potentials
Electrostatics
Voltage
Electromagnetic quantities | 0.775437 | 0.998616 | 0.774364 |
Motility | Motility is the ability of an organism to move independently using metabolic energy. This biological concept encompasses movement at various levels, from whole organisms to cells and subcellular components.
Motility is observed in animals, microorganisms, and even some plant structures, playing crucial roles in activities such as foraging, reproduction, and cellular functions. It is genetically determined but can be influenced by environmental factors.
In multicellular organisms, motility is facilitated by systems like the nervous and musculoskeletal systems, while at the cellular level, it involves mechanisms such as amoeboid movement and flagellar propulsion. These cellular movements can be directed by external stimuli, a phenomenon known as taxis. Examples include chemotaxis (movement along chemical gradients) and phototaxis (movement in response to light).
Motility also includes physiological processes like gastrointestinal movements and peristalsis. Understanding motility is important in biology, medicine, and ecology, as it impacts processes ranging from bacterial behavior to ecosystem dynamics.
Definitions
Motility, the ability of an organism to move independently, using metabolic energy, can be contrasted with sessility, the state of organisms that do not possess a means of self-locomotion and are normally immobile.
Motility differs from mobility, the ability of an object to be moved.
The term vagility refers to organisms that can be moved, but only passively; sessile organisms, including plants and fungi, often have vagile parts such as fruits, seeds, or spores which may be dispersed by other agents such as wind, water, or other organisms.
Motility is genetically determined, but may be affected by environmental factors such as toxins. The nervous system and musculoskeletal system provide the majority of mammalian motility.
In addition to animal locomotion, most animals are motile, though some are vagile, described as having passive locomotion. Many bacteria and other microorganisms, including even some viruses, and multicellular organisms are motile; some mechanisms of fluid flow in multicellular organs and tissue are also considered instances of motility, as with gastrointestinal motility. Motile marine animals are commonly called free-swimming, and motile non-parasitic organisms are called free-living.
Motility includes an organism's ability to move food through its digestive tract. There are two types of intestinal motility – peristalsis and segmentation. This motility is brought about by the contraction of smooth muscles in the gastrointestinal tract which mix the luminal contents with various secretions (segmentation) and move contents through the digestive tract from the mouth to the anus (peristalsis).
Cellular level
At the cellular level, different modes of movement exist:
amoeboid movement, a crawling-like movement, which also makes swimming possible
filopodia, enabling movement of the axonal growth cone
flagellar motility, a swimming-like motion (observed for example in spermatozoa, propelled by the regular beat of their flagellum, or the E. coli bacterium, which swims by rotating a helical prokaryotic flagellum)
gliding motility
swarming motility
twitching motility, a form of motility used by bacteria to crawl over surfaces using grappling hook-like filaments called type IV pili.
Many cells are not motile, for example Klebsiella pneumoniae and Shigella, or under specific circumstances such as Yersinia pestis at 37 °C.
Movements
Events perceived as movements can be directed:
along a chemical gradient (see chemotaxis)
along a temperature gradient (see thermotaxis)
along a light gradient (see phototaxis)
along a magnetic field line (see magnetotaxis)
along an electric field (see galvanotaxis)
along the direction of the gravitational force (see gravitaxis)
along a rigidity gradient (see durotaxis)
along a gradient of cell adhesion sites (see haptotaxis)
along other cells or biopolymers
See also
Cell migration
References
Physiology
Cell movement
Articles containing video clips | 0.778603 | 0.994522 | 0.774338 |
Cosmology | Cosmology is a branch of physics and metaphysics dealing with the nature of the universe, the cosmos. The term cosmology was first used in English in 1656 in Thomas Blount's Glossographia, and in 1731 taken up in Latin by German philosopher Christian Wolff in Cosmologia Generalis. Religious or mythological cosmology is a body of beliefs based on mythological, religious, and esoteric literature and traditions of creation myths and eschatology. In the science of astronomy, cosmology is concerned with the study of the chronology of the universe.
Physical cosmology is the study of the observable universe's origin, its large-scale structures and dynamics, and the ultimate fate of the universe, including the laws of science that govern these areas. It is investigated by scientists, including astronomers and physicists, as well as philosophers, such as metaphysicians, philosophers of physics, and philosophers of space and time. Because of this shared scope with philosophy, theories in physical cosmology may include both scientific and non-scientific propositions and may depend upon assumptions that cannot be tested. Physical cosmology is a sub-branch of astronomy that is concerned with the universe as a whole. Modern physical cosmology is dominated by the Big Bang Theory which attempts to bring together observational astronomy and particle physics; more specifically, a standard parameterization of the Big Bang with dark matter and dark energy, known as the Lambda-CDM model.
Theoretical astrophysicist David N. Spergel has described cosmology as a "historical science" because "when we look out in space, we look back in time" due to the finite nature of the speed of light.
Disciplines
Physics and astrophysics have played central roles in shaping our understanding of the universe through scientific observation and experiment. Physical cosmology was shaped through both mathematics and observation in an analysis of the whole universe. The universe is generally understood to have begun with the Big Bang, followed almost instantaneously by cosmic inflation, an expansion of space from which the universe is thought to have emerged 13.799 ± 0.021 billion years ago. Cosmogony studies the origin of the universe, and cosmography maps the features of the universe.
In Diderot's Encyclopédie, cosmology is broken down into uranology (the science of the heavens), aerology (the science of the air), geology (the science of the continents), and hydrology (the science of waters).
Metaphysical cosmology has also been described as the placing of humans in the universe in relationship to all other entities. This is exemplified by Marcus Aurelius's observation that a man's place in that relationship: "He who does not know what the world is does not know where he is, and he who does not know for what purpose the world exists, does not know who he is, nor what the world is."
Discoveries
Physical cosmology
Physical cosmology is the branch of physics and astrophysics that deals with the study of the physical origins and evolution of the universe. It also includes the study of the nature of the universe on a large scale. In its earliest form, it was what is now known as "celestial mechanics," the study of the heavens. Greek philosophers Aristarchus of Samos, Aristotle, and Ptolemy proposed different cosmological theories. The geocentric Ptolemaic system was the prevailing theory until the 16th century when Nicolaus Copernicus, and subsequently Johannes Kepler and Galileo Galilei, proposed a heliocentric system. This is one of the most famous examples of epistemological rupture in physical cosmology.
Isaac Newton's Principia Mathematica, published in 1687, was the first description of the law of universal gravitation. It provided a physical mechanism for Kepler's laws and also allowed the anomalies in previous systems, caused by gravitational interaction between the planets, to be resolved. A fundamental difference between Newton's cosmology and those preceding it was the Copernican principle—that the bodies on Earth obey the same physical laws as all celestial bodies. This was a crucial philosophical advance in physical cosmology.
Modern scientific cosmology is widely considered to have begun in 1917 with Albert Einstein's publication of his final modification of general relativity in the paper "Cosmological Considerations of the General Theory of Relativity" (although this paper was not widely available outside of Germany until the end of World War I). General relativity prompted cosmogonists such as Willem de Sitter, Karl Schwarzschild, and Arthur Eddington to explore its astronomical ramifications, which enhanced the ability of astronomers to study very distant objects. Physicists began changing the assumption that the universe was static and unchanging. In 1922, Alexander Friedmann introduced the idea of an expanding universe that contained moving matter.
In parallel to this dynamic approach to cosmology, one long-standing debate about the structure of the cosmos was coming to a climax – the Great Debate (1917 to 1922) – with early cosmologists such as Heber Curtis and Ernst Öpik determining that some nebulae seen in telescopes were separate galaxies far distant from our own. While Heber Curtis argued for the idea that spiral nebulae were star systems in their own right as island universes, Mount Wilson astronomer Harlow Shapley championed the model of a cosmos made up of the Milky Way star system only. This difference of ideas came to a climax with the organization of the Great Debate on 26 April 1920 at the meeting of the U.S. National Academy of Sciences in Washington, D.C. The debate was resolved when Edwin Hubble detected Cepheid Variables in the Andromeda Galaxy in 1923 and 1924. Their distance established spiral nebulae well beyond the edge of the Milky Way.
Subsequent modelling of the universe explored the possibility that the cosmological constant, introduced by Einstein in his 1917 paper, may result in an expanding universe, depending on its value. Thus the Big Bang model was proposed by the Belgian priest Georges Lemaître in 1927 which was subsequently corroborated by Edwin Hubble's discovery of the redshift in 1929 and later by the discovery of the cosmic microwave background radiation by Arno Penzias and Robert Woodrow Wilson in 1964. These findings were a first step to rule out some of many alternative cosmologies.
Since around 1990, several dramatic advances in observational cosmology have transformed cosmology from a largely speculative science into a predictive science with precise agreement between theory and observation. These advances include observations of the microwave background from the COBE, WMAP and Planck satellites, large new galaxy redshift surveys including 2dFGRS and SDSS, and observations of distant supernovae and gravitational lensing. These observations matched the predictions of the cosmic inflation theory, a modified Big Bang theory, and the specific version known as the Lambda-CDM model. This has led many to refer to modern times as the "golden age of cosmology".
In 2014, the BICEP2 collaboration claimed that they had detected the imprint of gravitational waves in the cosmic microwave background. However, this result was later found to be spurious: the supposed evidence of gravitational waves was in fact due to interstellar dust.
On 1 December 2014, at the Planck 2014 meeting in Ferrara, Italy, astronomers reported that the universe is 13.8 billion years old and composed of 4.9% atomic matter, 26.6% dark matter and 68.5% dark energy.
Religious or mythological cosmology
Religious or mythological cosmology is a body of beliefs based on mythological, religious, and esoteric literature and traditions of creation and eschatology. Creation myths are found in most religions, and are typically split into five different classifications, based on a system created by Mircea Eliade and his colleague Charles Long.
Types of Creation Myths based on similar motifs:
Creation ex nihilo in which the creation is through the thought, word, dream or bodily secretions of a divine being.
Earth diver creation in which a diver, usually a bird or amphibian sent by a creator, plunges to the seabed through a primordial ocean to bring up sand or mud which develops into a terrestrial world.
Emergence myths in which progenitors pass through a series of worlds and metamorphoses until reaching the present world.
Creation by the dismemberment of a primordial being.
Creation by the splitting or ordering of a primordial unity such as the cracking of a cosmic egg or a bringing order from chaos.
Philosophy
Cosmology deals with the world as the totality of space, time and all phenomena. Historically, it has had quite a broad scope, and in many cases was found in religion. Some questions about the Universe are beyond the scope of scientific inquiry but may still be interrogated through appeals to other philosophical approaches like dialectics. (Charles Kahn, an important historian of philosophy, attributed the origins of ancient Greek cosmology to Anaximander.) Some questions included in such extra-scientific endeavors are:
What is the origin of the universe? What is its first cause (if any)? Is its existence necessary? (see monism, pantheism, emanationism and creationism)
What are the ultimate material components of the universe? (see mechanism, dynamism, hylomorphism, atomism)
What is the ultimate reason (if any) for the existence of the universe? Does the cosmos have a purpose? (see teleology)
Does the existence of consciousness have a role in the existence of reality? How do we know what we know about the totality of the cosmos? Does cosmological reasoning reveal metaphysical truths? (see epistemology)
Historical cosmologies
(A table of historical cosmologies appeared here. In the table's notes, the term "static" simply means not expanding and not contracting; the symbol G represents Newton's gravitational constant, and Λ (Lambda) is the cosmological constant.)
See also
Absolute time and space
Big History
Earth science
Galaxy formation and evolution
Illustris project
Jainism and non-creationism
Lambda-CDM model
List of astrophysicists
Non-standard cosmology
Taiji (philosophy)
Timeline of cosmological theories
Universal rotation curve
Warm inflation
Big Ring
References
Sources
Charles Kahn. 1994. Anaximander and the Origins of Greek Cosmology. Indianapolis: Hackett.
Lectures given at the Summer School in High Energy Physics and Cosmology, ICTP (Trieste), 1993. 60 pages, plus 5 figures.
Sophia Centre. The Sophia Centre for the Study of Cosmology in Culture, University of Wales Trinity Saint David. | 0.775476 | 0.998523 | 0.77433 |
Buoyancy | Buoyancy, or upthrust is a net upward force exerted by a fluid that opposes the weight of a partially or fully immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid. Thus, the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object. The pressure difference results in a net upward force on the object. The magnitude of the force is proportional to the pressure difference, and (as explained by Archimedes' principle) is equivalent to the weight of the fluid that would otherwise occupy the submerged volume of the object, i.e. the displaced fluid.
For this reason, an object whose average density is greater than that of the fluid in which it is submerged tends to sink. If the object is less dense than the liquid, the force can keep the object afloat. This can occur only in a non-inertial reference frame, which either has a gravitational field or is accelerating due to a force other than gravity defining a "downward" direction.
Buoyancy also applies to fluid mixtures, and is the most common driving force of convection currents. In these cases, the mathematical modelling is altered to apply to continua, but the principles remain the same. Examples of buoyancy driven flows include the spontaneous separation of air and water or oil and water.
Buoyancy is a function of the force of gravity or other source of acceleration on objects of different densities, and for that reason is considered an apparent force, in the same way that centrifugal force is an apparent force as a function of inertia. Buoyancy can exist without gravity in the presence of an inertial reference frame, but without an apparent "downward" direction of gravity or other source of acceleration, buoyancy does not exist.
The center of buoyancy of an object is the center of gravity of the displaced volume of fluid.
Archimedes' principle
Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 BC. For objects, floating and sunken, and in gases as well as liquids (i.e. a fluid), Archimedes' principle may be stated thus in terms of forces: any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object
—with the clarifications that for a sunken object the volume of displaced fluid is the volume of the object, and for a floating object on a liquid, the weight of the displaced liquid is the weight of the object.
More tersely: buoyant force = weight of displaced fluid.
Archimedes' principle does not consider the surface tension (capillarity) acting on the body, but this additional force modifies only the amount of fluid displaced and the spatial distribution of the displacement, so the principle that buoyancy = weight of displaced fluid remains valid.
The weight of the displaced fluid is directly proportional to the volume of the displaced fluid (if the surrounding fluid is of uniform density). In simple terms, the principle states that the buoyancy force on an object is equal to the weight of the fluid displaced by the object, or the density of the fluid multiplied by the submerged volume times the gravitational acceleration, g. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. This is also known as upthrust.
Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it. Suppose that when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyancy force: 10 − 3 = 7 newtons. Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor. It is generally easier to lift an object up through the water than it is to pull it out of the water.
Assuming Archimedes' principle to be reformulated as follows,
then inserted into the quotient of weights, which has been expanded by the mutual volume
yields the formula below. The density of the immersed object relative to the density of the fluid can easily be calculated without measuring any volumes:
(This formula is used for example in describing the measuring principle of a dasymeter and of hydrostatic weighing.)
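A sketch of the standard result, checked against the rock example given earlier (10 N weight in vacuum, 7 N apparent weight when immersed):

```latex
\frac{\rho_{\text{object}}}{\rho_{\text{fluid}}}
= \frac{\text{weight}}{\text{weight} - \text{apparent immersed weight}}
\qquad\text{e.g.}\qquad
\frac{\rho_{\text{rock}}}{\rho_{\text{water}}} = \frac{10\ \text{N}}{10\ \text{N} - 7\ \text{N}} \approx 3.3
```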
Example: If you drop wood into water, buoyancy will keep it afloat.
Example: A helium balloon in a moving car. During a period of increasing speed, the air mass inside the car moves in the direction opposite to the car's acceleration (i.e., towards the rear). The balloon is also pulled this way. However, because the balloon is buoyant relative to the air, it ends up being pushed "out of the way", and will actually drift in the same direction as the car's acceleration (i.e., forward). If the car slows down, the same balloon will begin to drift backward. For the same reason, as the car goes round a curve, the balloon will drift towards the inside of the curve.
Forces and equilibrium
The equation to calculate the pressure inside a fluid in equilibrium is:
where f is the force density exerted by some outer field on the fluid, and σ is the Cauchy stress tensor. In this case the stress tensor is proportional to the identity tensor:
Here δij is the Kronecker delta. Using this the above equation becomes:
Assuming the outer force field is conservative, that is, it can be written as the negative gradient of some scalar-valued function:
Then:
Therefore, the shape of the open surface of a fluid equals the equipotential plane of the applied outer conservative force field. Let the z-axis point downward. In this case the field is gravity, so Φ = −ρfgz, where g is the gravitational acceleration and ρf is the mass density of the fluid. Taking the pressure as zero at the surface, where z is zero, the constant will be zero, so the pressure inside the fluid, when it is subject to gravity, is p = ρfgz.
So pressure increases with depth below the surface of a liquid, as z denotes the distance from the surface of the liquid into it. Any object with a non-zero vertical depth will have different pressures on its top and bottom, with the pressure on the bottom being greater. This difference in pressure causes the upward buoyancy force.
The buoyancy force exerted on a body can now be calculated easily, since the internal pressure of the fluid is known. The force exerted on the body can be calculated by integrating the stress tensor over the surface of the body which is in contact with the fluid:
The surface integral can be transformed into a volume integral with the help of the Gauss theorem:
where V is the measure of the volume in contact with the fluid, that is the volume of the submerged part of the body, since the fluid does not exert force on the part of the body which is outside of it.
The magnitude of the buoyancy force may be appreciated a bit more from the following argument. Consider any object of arbitrary shape and volume V surrounded by a liquid. The force the liquid exerts on an object within the liquid is equal to the weight of the liquid with a volume equal to that of the object. This force is applied in a direction opposite to the gravitational force, that is, of magnitude:
where ρf is the density of the fluid, Vdisp is the volume of the displaced body of liquid, and g is the gravitational acceleration at the location in question.
If this volume of liquid is replaced by a solid body of exactly the same shape, the force the liquid exerts on it must be exactly the same as above. In other words, the "buoyancy force" on a submerged body is directed in the opposite direction to gravity and is equal in magnitude to
Though the above derivation of Archimedes principle is correct, a recent paper by the Brazilian physicist Fabio M. S. Lima brings a more general approach for the evaluation of the buoyant force exerted by any fluid (even non-homogeneous) on a body with arbitrary shape. Interestingly, this method leads to the prediction that the buoyant force exerted on a rectangular block touching the bottom of a container points downward! Indeed, this downward buoyant force has been confirmed experimentally.
The net force on the object must be zero if it is to be a situation of fluid statics such that Archimedes principle is applicable, and is thus the sum of the buoyancy force and the object's weight
If the buoyancy of an (unrestrained and unpowered) object exceeds its weight, it tends to rise. An object whose weight exceeds its buoyancy tends to sink. Calculation of the upwards force on a submerged object during its accelerating period cannot be done by the Archimedes principle alone; it is necessary to consider dynamics of an object involving buoyancy. Once it fully sinks to the floor of the fluid or rises to the surface and settles, Archimedes principle can be applied alone. For a floating object, only the submerged volume displaces water. For a sunken object, the entire volume displaces water, and there will be an additional force of reaction from the solid floor.
In order for Archimedes' principle to be used alone, the object in question must be in equilibrium (the sum of the forces on the object must be zero), therefore:
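$$m g = \rho_f V_\text{disp}\, g$$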
and therefore
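$$m = \rho_f V_\text{disp}$$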
showing that the depth to which a floating object will sink, and the volume of fluid it will displace, is independent of the gravitational field regardless of geographic location.
(Note: If the fluid in question is seawater, it will not have the same density (ρ) at every location, since the density depends on temperature and salinity. For this reason, a ship may display a Plimsoll line.)
It can be the case that forces other than just buoyancy and gravity come into play. This is the case if the object is restrained or if the object sinks to the solid floor. An object which tends to float requires a tension restraint force T in order to remain fully submerged. An object which tends to sink will eventually have a normal force of constraint N exerted upon it by the solid floor. The constraint force can be tension in a spring scale measuring its weight in the fluid, and is how apparent weight is defined.
If the object would otherwise float, the tension to restrain it fully submerged is:
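$$T = \rho_f V g - m g$$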
When a sinking object settles on the solid floor, it experiences a normal force of:
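$$N = m g - \rho_f V g$$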
Another way to calculate the buoyant force on an object is to find its apparent weight in air and its apparent weight when immersed in the fluid (both measured in newtons). The buoyant force is then the difference:
Buoyancy force = weight of object in empty space − weight of object immersed in fluid
The final result is measured in newtons.
Air's density is very small compared to most solids and liquids. For this reason, the weight of an object in air is approximately the same as its true weight in a vacuum. The buoyancy of air is neglected for most objects during a measurement in air because the error is usually insignificant (typically less than 0.1% except for objects of very low average density such as a balloon or light foam).
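As a minimal numerical sketch of this weighing method (a Python illustration with made-up readings; the fluid density and the value of g are assumed here, not taken from the text above):

# Buoyant force and displaced volume from two weighings (illustrative values).
# The weight measured in air stands in for the weight in empty space, as explained above.
RHO_FLUID = 1000.0  # assumed fluid density in kg/m^3 (fresh water)
G = 9.81            # standard gravitational acceleration in m/s^2

def buoyancy_from_weighings(weight_in_air, weight_in_fluid):
    """Return (buoyant force in N, displaced volume in m^3)."""
    buoyant_force = weight_in_air - weight_in_fluid     # difference of the two readings
    displaced_volume = buoyant_force / (RHO_FLUID * G)  # from B = rho_f * V_disp * g
    return buoyant_force, displaced_volume

# Example: a reading of 7.85 N in air drops to 6.85 N when the object is immersed.
b, v = buoyancy_from_weighings(7.85, 6.85)
print(f"buoyant force: {b:.2f} N, displaced volume: {v * 1e6:.0f} cm^3")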
Simplified model
A simplified explanation for the integration of the pressure over the contact area may be stated as follows:
Consider a cube immersed in a fluid with the upper surface horizontal.
The sides are identical in area, and have the same depth distribution, therefore they also have the same pressure distribution, and consequently the same total force resulting from hydrostatic pressure, exerted perpendicular to the plane of the surface of each side.
There are two pairs of opposing sides, therefore the resultant horizontal forces balance in both orthogonal directions, and the resultant force is zero.
The upward force on the cube is the pressure on the bottom surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal bottom surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the bottom surface.
Similarly, the downward force on the cube is the pressure on the top surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal top surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the top surface.
As this is a cube, the top and bottom surfaces are identical in shape and area, and the pressure difference between the top and bottom of the cube is directly proportional to the depth difference, and the resultant force difference is exactly equal to the weight of the fluid that would occupy the volume of the cube in its absence.
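In symbols, for a cube with horizontal faces of area A whose top and bottom differ in depth by h (both symbols introduced here for illustration), the pressure result p = ρf g z from above gives a net upward force of
$$\Delta F = (p_\text{bottom} - p_\text{top})\, A = \rho_f g\, h A = \rho_f g\, V,$$
which is the weight of the fluid that the cube's volume V would contain.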
This means that the resultant upward force on the cube is equal to the weight of the fluid that would fit into the volume of the cube, and the downward force on the cube is its weight, in the absence of external forces.
This analogy is valid for variations in the size of the cube.
If two cubes are placed alongside each other with a face of each in contact, the pressures and resultant forces on the sides or parts thereof in contact are balanced and may be disregarded, as the contact surfaces are equal in shape, size and pressure distribution, therefore the buoyancy of two cubes in contact is the sum of the buoyancies of each cube. This analogy can be extended to an arbitrary number of cubes.
An object of any shape can be approximated as a group of cubes in contact with each other, and as the size of the cube is decreased, the precision of the approximation increases. The limiting case for infinitely small cubes is the exact equivalence.
Angled surfaces do not nullify the analogy as the resultant force can be split into orthogonal components and each dealt with in the same way.
Static stability
A floating object is stable if it tends to restore itself to an equilibrium position after a small displacement. For example, floating objects will generally have vertical stability, as if the object is pushed down slightly, this will create a greater buoyancy force, which, unbalanced by the weight force, will push the object back up.
Rotational stability is of great importance to floating vessels. Given a small angular displacement, the vessel may return to its original position (stable), move away from its original position (unstable), or remain where it is (neutral).
Rotational stability depends on the relative lines of action of forces on an object. The upward buoyancy force on an object acts through the center of buoyancy, being the centroid of the displaced volume of fluid. The weight force on the object acts through its center of gravity. A buoyant object will be stable if the center of gravity is beneath the center of buoyancy because any angular displacement will then produce a 'righting moment'.
The stability of a buoyant object at the surface is more complex, and it may remain stable even if the center of gravity is above the center of buoyancy, provided that when disturbed from the equilibrium position, the center of buoyancy moves further to the same side that the center of gravity moves, thus providing a positive righting moment. If this occurs, the floating object is said to have a positive metacentric height. This situation is typically valid for a range of heel angles, beyond which the center of buoyancy does not move enough to provide a positive righting moment, and the object becomes unstable. It is possible to shift from positive to negative or vice versa more than once during a heeling disturbance, and many shapes are stable in more than one position.
Fluids and objects
As a submarine expels water from its buoyancy tanks, it rises because its volume is constant (the volume of water it displaces if it is fully submerged) while its mass is decreased.
Compressible objects
As a floating object rises or falls, the forces external to it change and, as all objects are compressible to some extent or another, so does the object's volume. Buoyancy depends on volume and so an object's buoyancy reduces if it is compressed and increases if it expands.
If an object at equilibrium has a compressibility less than that of the surrounding fluid, the object's equilibrium is stable and it remains at rest. If, however, its compressibility is greater, its equilibrium is then unstable, and it rises and expands on the slightest upward perturbation, or falls and compresses on the slightest downward perturbation.
Submarines
Submarines rise and dive by filling large ballast tanks with seawater. To dive, the tanks are opened to allow air to exhaust out the top of the tanks, while the water flows in from the bottom. Once the weight has been balanced so the overall density of the submarine is equal to the water around it, it has neutral buoyancy and will remain at that depth. Most military submarines operate with a slightly negative buoyancy and maintain depth by using the "lift" of the stabilizers with forward motion.
Balloons
The height to which a balloon rises tends to be stable. As a balloon rises it tends to increase in volume with reducing atmospheric pressure, but the balloon itself does not expand as much as the air on which it rides. The average density of the balloon decreases less than that of the surrounding air. The weight of the displaced air is reduced. A rising balloon stops rising when it and the displaced air are equal in weight. Similarly, a sinking balloon tends to stop sinking.
Divers
Underwater divers are a common example of the problem of unstable buoyancy due to compressibility. The diver typically wears an exposure suit which relies on gas-filled spaces for insulation, and may also wear a buoyancy compensator, which is a variable volume buoyancy bag which is inflated to increase buoyancy and deflated to decrease buoyancy. The desired condition is usually neutral buoyancy when the diver is swimming in mid-water, and this condition is unstable, so the diver is constantly making fine adjustments by control of lung volume, and has to adjust the contents of the buoyancy compensator if the depth varies.
Density
If the weight of an object is less than the weight of the displaced fluid when fully submerged, then the object has an average density that is less than the fluid and when fully submerged will experience a buoyancy force greater than its own weight. If the fluid has a surface, such as water in a lake or the sea, the object will float and settle at a level where it displaces the same weight of fluid as the weight of the object. If the object is immersed in the fluid, such as a submerged submarine or air in a balloon, it will tend to rise.
If the object has exactly the same density as the fluid, then its buoyancy equals its weight. It will remain submerged in the fluid, but it will neither sink nor float, although a disturbance in either direction will cause it to drift away from its position.
An object with a higher average density than the fluid will never experience more buoyancy than weight and it will sink.
A ship will float even though it may be made of steel (which is much denser than water), because it encloses a volume of air (which is much less dense than water), and the resulting shape has an average density less than that of the water.
See also
References
External links
Falling in Water
W. H. Besant (1889) Elementary Hydrostatics from Google Books.
NASA's definition of buoyancy
Fluid mechanics
Force
Arrow of time
The arrow of time, also called time's arrow, is the concept positing the "one-way direction" or "asymmetry" of time. It was developed in 1927 by the British astrophysicist Arthur Eddington, and is an unsolved general physics question. This direction, according to Eddington, could be determined by studying the organization of atoms, molecules, and bodies, and might be drawn upon a four-dimensional relativistic map of the world ("a solid block of paper").
The arrow of time paradox was originally recognized in the 1800s for gases (and other substances) as a discrepancy between the microscopic and macroscopic descriptions of thermodynamics / statistical physics: at the microscopic level physical processes are believed to be either entirely or mostly time-symmetric: if the direction of time were to reverse, the theoretical statements that describe them would remain true. Yet at the macroscopic level it often appears that this is not the case: there is an obvious direction (or flow) of time.
Overview
The symmetry of time (T-symmetry) can be understood simply as the following: if time were perfectly symmetrical, a video of real events would seem realistic whether played forwards or backwards. Gravity, for example, is a time-reversible force. A ball that is tossed up, slows to a stop, and falls is a case where recordings would look equally realistic forwards and backwards. The system is T-symmetrical. However, the process of the ball bouncing and eventually coming to a stop is not time-reversible. While going forward, kinetic energy is dissipated and entropy is increased. The increase of entropy may be one of the few processes that is not time-reversible. According to the statistical notion of increasing entropy, the "arrow" of time is identified with a decrease of free energy.
In his book The Big Picture, physicist Sean M. Carroll compares the asymmetry of time to the asymmetry of space: While physical laws are in general isotropic, near Earth there is an obvious distinction between "up" and "down", due to proximity to this huge body, which breaks the symmetry of space. Similarly, physical laws are in general symmetric to the flipping of time direction, but near the Big Bang (i.e., in the first many trillions of years following it), there is an obvious distinction between "forward" and "backward" in time, due to relative proximity to this special event, which breaks the symmetry of time. Under this view, all the arrows of time are a result of our relative proximity in time to the Big Bang and the special circumstances that existed then. (Strictly speaking, the weak interactions are asymmetric to both spatial reflection and to flipping of the time direction. However, they do obey a more complicated symmetry that includes both.)
Conception by Eddington
In the 1928 book The Nature of the Physical World, which helped to popularize the concept, Eddington stated:
Let us draw an arrow arbitrarily. If as we follow the arrow we find more and more of the random element in the state of the world, then the arrow is pointing towards the future; if the random element decreases the arrow points towards the past. That is the only distinction known to physics. This follows at once if our fundamental contention is admitted that the introduction of randomness is the only thing which cannot be undone. I shall use the phrase 'time's arrow' to express this one-way property of time which has no analogue in space.
Eddington then gives three points to note about this arrow:
It is vividly recognized by consciousness.
It is equally insisted on by our reasoning faculty, which tells us that a reversal of the arrow would render the external world nonsensical.
It makes no appearance in physical science except in the study of organization of a number of individuals. (In other words, it is only observed in entropy, a statistical mechanics phenomenon arising from a system.)
Arrows
Psychological/perceptual arrow of time
A related mental arrow arises because one has the sense that one's perception is a continuous movement from the known past to the unknown future. This phenomenon has two aspects: memory (we remember the past but not the future) and volition (we feel we can influence the future but not the past). The two aspects are a consequence of the causal arrow of time: past events (but not future events) are the cause of our present memories, as more and more correlations are formed between the outer world and our brain (see correlations and the arrow of time); and our present volitions and actions are causes of future events. This is because the increase of entropy is thought to be related to increase of both correlations between a system and its surroundings and of the overall complexity, under an appropriate definition; thus all increase together with time.
Past and future are also psychologically associated with additional notions. English, along with other languages, tends to associate the past with "behind" and the future with "ahead", with expressions such as "to look forward to welcoming you", "to look back to the good old times", or "to be years ahead". However, this association of "behind ⇔ past" and "ahead ⇔ future" is culturally determined. For example, the Aymara language associates "ahead ⇔ past" and "behind ⇔ future" both in terms of terminology and gestures, corresponding to the past being observed and the future being unobserved. Similarly, the Chinese term for "the day after tomorrow" 後天 ("hòutiān") literally means "after (or behind) day", whereas "the day before yesterday" 前天 ("qiántiān") is literally "preceding (or in front) day", and Chinese speakers spontaneously gesture in front for the past and behind for the future, although there are conflicting findings on whether they perceive the ego to be in front of or behind the past. There are no languages that place the past and future on a left–right axis (e.g., there is no expression in English such as *the meeting was moved to the left), although at least English speakers associate the past with the left and the future with the right, which seems to have its origin in the left-to-right writing system.
The words "yesterday" and "tomorrow" both translate to the same word in Hindi: कल ("kal"), meaning "[one] day remote from today." The ambiguity is resolved by verb tense. परसों ("parson") is used for both "day before yesterday" and "day after tomorrow", or "two days from today".
तरसों ("tarson") is used for "three days from today" and नरसों ("narson") is used for "four days from today".
The other side of the psychological passage of time is in the realm of volition and action. We plan and often execute actions intended to affect the course of events in the future. From the Rubaiyat:
The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.
— Omar Khayyam (translation by Edward Fitzgerald).
In June 2022, researchers reported in Physical Review Letters that salamanders demonstrated counter-intuitive responses to the arrow of time in how their eyes perceived different stimuli.
Thermodynamic arrow of time
The arrow of time is the "one-way direction" or "asymmetry" of time. The thermodynamic arrow of time is provided by the second law of thermodynamics, which says that in an isolated system, entropy tends to increase with time. Entropy can be thought of as a measure of microscopic disorder; thus the second law implies that time is asymmetrical with respect to the amount of order in an isolated system: as a system advances through time, it becomes more statistically disordered. This asymmetry can be used empirically to distinguish between future and past, though measuring entropy does not accurately measure time. Also, in an open system, entropy can decrease with time.
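Stated compactly for an isolated system, the second law reads
$$\Delta S \geq 0,$$
with equality holding only for idealized reversible processes.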
British physicist Sir Alfred Brian Pippard wrote: "There is thus no justification for the view, often glibly repeated, that the Second Law of Thermodynamics is only statistically true, in the sense that microscopic violations repeatedly occur, but never violations of any serious magnitude. On the contrary, no evidence has ever been presented that the Second Law breaks down under any circumstances." However, there are a number of paradoxes regarding violation of the second law of thermodynamics, one of them due to the Poincaré recurrence theorem.
This arrow of time seems to be related to all other arrows of time and arguably underlies some of them, with the exception of the weak arrow of time.
Harold Blum's 1951 book Time's Arrow and Evolution discusses "the relationship between time's arrow (the second law of thermodynamics) and organic evolution." This influential text explores "irreversibility and direction in evolution and order, negentropy, and evolution." Blum argues that evolution followed specific patterns predetermined by the inorganic nature of the earth and its thermodynamic processes.
Cosmological arrow of time
The cosmological arrow of time points in the direction of the universe's expansion. It may be linked to the thermodynamic arrow, with the universe heading towards a heat death (Big Chill) as the amount of thermodynamic free energy becomes negligible. Alternatively, it may be an artifact of our place in the universe's evolution (see the anthropic bias), with this arrow reversing as gravity pulls everything back into a Big Crunch.
If this arrow of time is related to the other arrows of time, then the future is by definition the direction towards which the universe becomes bigger. Thus, the universe expands—rather than shrinks—by definition.
The thermodynamic arrow of time and the second law of thermodynamics are thought to be a consequence of the initial conditions in the early universe. Therefore, they ultimately result from the cosmological set-up.
Radiative arrow of time
Waves, from radio waves to sound waves to those on a pond from throwing a stone, expand outward from their source, even though the wave equations accommodate solutions of convergent waves as well as radiative ones. This arrow has been reversed in carefully worked experiments that created convergent waves, so this arrow probably follows from the thermodynamic arrow in that meeting the conditions to produce a convergent wave requires more order than the conditions for a radiative wave. Put differently, the probability for initial conditions that produce a convergent wave is much lower than the probability for initial conditions that produce a radiative wave. In fact, normally a radiative wave increases entropy, while a convergent wave decreases it, making the latter contradictory to the second law of thermodynamics in usual circumstances.
Causal arrow of time
A cause precedes its effect: the causal event occurs before the event it causes or affects. Birth, for example, follows a successful conception and not vice versa. Thus causality is intimately bound up with time's arrow.
An epistemological problem with using causality as an arrow of time is that, as David Hume maintained, the causal relation per se cannot be perceived; one only perceives sequences of events. Furthermore, it is surprisingly difficult to provide a clear explanation of what the terms cause and effect really mean, or to define the events to which they refer. However, it does seem evident that dropping a cup of water is a cause while the cup subsequently shattering and spilling the water is the effect.
Physically speaking, correlations between a system and its surrounding are thought to increase with entropy, and have been shown to be equivalent to it in a simplified case of a finite system interacting with the environment. The assumption of low initial entropy is indeed equivalent to assuming no initial correlations in the system; thus correlations can only be created as we move forward in time, not backwards. Controlling the future, or causing something to happen, creates correlations between the doer and the effect, and therefore the relation between cause and effect is a result of the thermodynamic arrow of time, a consequence of the second law of thermodynamics. Indeed, in the above example of the cup dropping, the initial conditions have high order and low entropy, while the final state has high correlations between relatively distant parts of the system – the shattered pieces of the cup, as well as the spilled water, and the object that caused the cup to drop.
Quantum arrow of time
Quantum evolution is governed by equations of motions that are time-symmetric (such as the Schrödinger equation in the non-relativistic approximation), and by wave function collapse, which is a time-irreversible process, and is either real (by the Copenhagen interpretation of quantum mechanics) or apparent only (by the many-worlds interpretation and relational quantum mechanics interpretation).
The theory of quantum decoherence explains why wave function collapse happens in a time-asymmetric fashion due to the second law of thermodynamics, thus deriving the quantum arrow of time from the thermodynamic arrow of time. In essence, following any particle scattering or interaction between two larger systems, the relative phases of the two systems are at first orderly related, but subsequent interactions (with additional particles or systems) make them less so, so that the two systems become decoherent. Thus decoherence is a form of increase in microscopic disorder; in short, decoherence increases entropy. Two decoherent systems can no longer interact via quantum superposition, unless they become coherent again, which is normally impossible, by the second law of thermodynamics. In the language of relational quantum mechanics, the observer becomes entangled with the measured state, where this entanglement increases entropy. As stated by Seth Lloyd, "the arrow of time is an arrow of increasing correlations".
However, under special circumstances, one can prepare initial conditions that will cause a decrease in decoherence and in entropy. This has been shown experimentally in 2019, when a team of Russian scientists reported the reversal of the quantum arrow of time on an IBM quantum computer, in an experiment supporting the understanding of the quantum arrow of time as emerging from the thermodynamic one. By observing the state of the quantum computer made of two and later three superconducting qubits, they found that in 85% of the cases, the two-qubit computer returned to the initial state. The state's reversal was made by a special program, similarly to the random microwave background fluctuation in the case of the electron. However, according to the estimations, throughout the age of the universe (13.7 billion years) such a reversal of the electron's state would only happen once, for 0.06 nanoseconds. The scientists' experiment led to the possibility of a quantum algorithm that reverses a given quantum state through complex conjugation of the state.
Note that quantum decoherence merely allows the process of quantum wave collapse; it is a matter of dispute whether the collapse itself actually takes place or is redundant and apparent only. However, since the theory of quantum decoherence is now widely accepted and has been supported experimentally, this dispute can no longer be considered as related to the arrow of time question.
Particle physics (weak) arrow of time
Certain subatomic interactions involving the weak nuclear force violate the conservation of both parity and charge conjugation, but only very rarely. An example is the kaon decay. According to the CPT theorem, this means they should also be time-irreversible, and so establish an arrow of time. Such processes should be responsible for matter creation in the early universe.
That the combination of parity and charge conjugation is broken so rarely means that this arrow only "barely" points in one direction, setting it apart from the other arrows whose direction is much more obvious. This arrow had not been linked to any large-scale temporal behaviour until the work of Joan Vaccaro, who showed that T violation could be responsible for conservation laws and dynamics.
See also
A Brief History of Time
Anthropic principle
Ilya Prigogine
Loschmidt's paradox
Maxwell's demon
Quantum Zeno effect
Royal Institution Christmas Lectures 1999
Samayā
Time evolution
Time flies like an arrow
Time reversal signal processing
Wheeler–Feynman absorber theory
References
External links
The Ritz-Einstein Agreement to Disagree, a review of historical perspectives of the subject, prior to the evolvement of quantum field theory
The Thermodynamic Arrow: Puzzles and Pseudo-Puzzles Huw Price on Time's Arrow
Arrow of time in a discrete toy model
The Arrow of Time
Why Does Time Run Only Forwards, by Adam Becker, bbc.com
Asymmetry
Non-equilibrium thermodynamics
Philosophical analogies
Philosophy of thermal and statistical physics
Philosophy of time
Time in physics
Gravitational energy
Gravitational energy or gravitational potential energy is the potential energy a massive object has due to its position in a gravitational field. It is the mechanical work done by the gravitational force to bring the mass from a chosen reference point (often an "infinite distance" from the mass generating the field) to some other point in the field, which is equal to the change in the kinetic energies of the objects as they fall towards each other. Gravitational potential energy increases when two objects are brought further apart and is converted to kinetic energy as they are allowed to fall towards each other.
Formulation
For two pairwise interacting point particles, the gravitational potential energy U is the negative of the work done by the gravitational force in bringing the masses together from infinite separation:
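$$U = -\int_{\infty}^{r} \mathbf{F} \cdot d\mathbf{r}$$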
where r is the displacement vector between the two particles and the dot denotes the scalar product. Since the gravitational force is always parallel to the axis joining the particles, this simplifies to:
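$$U = -\frac{G M m}{r}$$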
where M and m are the masses of the two particles and G is the gravitational constant.
Close to the Earth's surface, the gravitational field is approximately constant, and the gravitational potential energy of an object reduces to
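$$U = m g h$$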
where m is the object's mass, g is the gravity of Earth, and h is the height of the object's center of mass above a chosen reference level.
Newtonian mechanics
In classical mechanics, two or more masses always have a gravitational potential. Conservation of energy requires that this gravitational field energy is always negative, so that it is zero when the objects are infinitely far apart. The gravitational potential energy is the potential energy an object has because it is within a gravitational field.
The magnitude of the force between a point mass, M, and another point mass, m, is given by Newton's law of gravitation:
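$$F = \frac{G M m}{r^{2}}$$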
To get the total work done by the gravitational force in bringing point mass m from infinity to a final distance R (for example, the radius of Earth) from point mass M, the force is integrated with respect to displacement:
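$$W = \int_{\infty}^{R} \left(-\frac{G M m}{r^{2}}\right) dr = -G M m \left[-\frac{1}{r}\right]_{\infty}^{R}$$
(The minus sign in the integrand appears because the attractive force points in the direction of decreasing r.)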
Because 1/r → 0 as r → ∞, the total work done on the object can be written as:
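$$W = \frac{G M m}{R}$$
Measured from infinity, the corresponding gravitational potential energy is U = −W = −G M m / R, consistent with the expression given above.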
In the common situation where a much smaller mass m is moving near the surface of a much larger object with mass M, the gravitational field is nearly constant and so the expression for gravitational energy can be considerably simplified. The change in potential energy moving from the surface (a distance R from the center) to a height h above the surface is
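$$\Delta U = \frac{G M m}{R} - \frac{G M m}{R + h} = G M m \left(\frac{1}{R} - \frac{1}{R + h}\right)$$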
If h is small, as it must be close to the surface where g is constant, then this expression can be simplified using the binomial approximation
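$$\frac{1}{R + h} \approx \frac{1}{R}\left(1 - \frac{h}{R}\right)$$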
to
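$$\Delta U \approx G M m \left(\frac{1}{R} - \frac{1}{R} + \frac{h}{R^{2}}\right) = \frac{G M m}{R^{2}}\, h$$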
As the gravitational field is g = G M / R², this reduces to
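$$\Delta U \approx m g h$$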
Taking U = 0 at the surface (instead of at infinity), the familiar expression for gravitational potential energy emerges:
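$$U = m g h$$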
General relativity
In general relativity gravitational energy is extremely complex, and there is no single agreed upon definition of the concept. It is sometimes modelled via the Landau–Lifshitz pseudotensor that allows retention for the energy–momentum conservation laws of classical mechanics. Addition of the matter stress–energy tensor to the Landau–Lifshitz pseudotensor results in a combined matter plus gravitational energy pseudotensor that has a vanishing 4-divergence in all frames—ensuring the conservation law. Some people object to this derivation on the grounds that pseudotensors are inappropriate in general relativity, but the divergence of the combined matter plus gravitational energy pseudotensor is a tensor.
See also
Gravitational binding energy
Gravitational potential
Gravitational potential energy storage
Positive energy theorem
References
Forms of energy
Gravity
Conservation laws
Tensors in general relativity
Potentials
Lenz's law
Lenz's law states that the direction of the electric current induced in a conductor by a changing magnetic field is such that the magnetic field created by the induced current opposes changes in the initial magnetic field. It is named after physicist Heinrich Lenz, who formulated it in 1834.
It is a qualitative law that specifies the direction of induced current, but states nothing about its magnitude. Lenz's law predicts the direction of many effects in electromagnetism, such as the direction of voltage induced in an inductor or wire loop by a changing current, or the drag force of eddy currents exerted on moving objects in a magnetic field.
Lenz's law may be seen as analogous to Newton's third law in classical mechanics and Le Chatelier's principle in chemistry.
Definition
Lenz's law states that:
The current induced in a circuit due to a change in a magnetic field is directed to oppose the change in flux and to exert a mechanical force which opposes the motion.
Lenz's law is contained in the rigorous treatment of Faraday's law of induction (the magnitude of EMF induced in a coil is proportional to the rate of change of the magnetic flux), where it finds expression by the negative sign:
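$$\mathcal{E} = -\frac{d\Phi_B}{dt}$$
where ℰ is the induced electromotive force and ΦB is the magnetic flux through the circuit.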
which indicates that the induced electromotive force and the rate of change in magnetic flux have opposite signs.
This means that the direction of the back EMF of an induced field opposes the changing current that is its cause. D.J. Griffiths summarized it as follows: Nature abhors a change in flux.
If a change in the magnetic field of current i1 induces another electric current, i2, the direction of i2 is opposite that of the change in i1. If these currents are in two coaxial circular conductors ℓ1 and ℓ2 respectively, and both are initially 0, then the currents i1 and i2 must counter-rotate. The opposing currents will repel each other as a result.
Example
Magnetic fields from strong magnets can create counter-rotating currents in a copper or aluminium pipe. This is shown by dropping the magnet through the pipe. The descent of the magnet inside the pipe is observably slower than when dropped outside the pipe.
When a voltage is generated by a change in magnetic flux according to Faraday's law, the polarity of the induced voltage is such that it produces a current whose magnetic field opposes the change which produces it. The induced magnetic field inside any loop of wire always acts to keep the magnetic flux in the loop constant. The direction of an induced current can be determined using the right-hand rule to show which direction of current flow would create a magnetic field that would oppose the direction of changing flux through the loop. In the examples above, if the flux is increasing, the induced field acts in opposition to it. If it is decreasing, the induced field acts in the direction of the applied field to oppose the change.
Detailed interaction of charges in these currents
In electromagnetism, when charges move along electric field lines work is done on them, whether it involves storing potential energy (negative work) or increasing kinetic energy (positive work).
When net positive work is applied to a charge q1, it gains speed and momentum. The net work on q1 thereby generates a magnetic field whose strength (in units of magnetic flux density (1 tesla = 1 volt-second per square meter)) is proportional to the speed increase of q1. This magnetic field can interact with a neighboring charge q2, passing on this momentum to it, and in return, q1 loses momentum.
The charge q2 can also act on q1 in a similar manner, by which it returns some of the momentum that it received from q1. This back-and-forth component of momentum contributes to magnetic inductance. The closer that q1 and q2 are, the greater the effect. When q2 is inside a conductive medium such as a thick slab made of copper or aluminum, it more readily responds to the force applied to it by q1. The energy of q1 is not instantly consumed as heat generated by the current of q2 but is also stored in two opposing magnetic fields. The energy density of magnetic fields tends to vary with the square of the magnetic field's intensity; however, in the case of magnetically non-linear materials such as ferromagnets and superconductors, this relationship breaks down.
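In vacuum, for example, this quadratic dependence takes the form
$$u = \frac{B^{2}}{2\mu_{0}},$$
where u is the energy density, B the magnitude of the magnetic flux density, and μ0 the vacuum permeability.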
Conservation of momentum
Momentum must be conserved in the process, so if q1 is pushed in one direction, then q2 ought to be pushed in the other direction by the same force at the same time. However, the situation becomes more complicated when the finite speed of electromagnetic wave propagation is introduced (see retarded potential). This means that for a brief period the total momentum of the two charges is not conserved, implying that the difference should be accounted for by momentum in the fields, as asserted by Richard P. Feynman. Famous 19th century electrodynamicist James Clerk Maxwell called this the "electromagnetic momentum". Yet, such a treatment of fields may be necessary when Lenz's law is applied to opposite charges. It is normally assumed that the charges in question have the same sign. If they do not, such as a proton and an electron, the interaction is different. An electron generating a magnetic field would generate an EMF that causes a proton to accelerate in the same direction as the electron. At first, this might seem to violate the law of conservation of momentum, but such an interaction is seen to conserve momentum if the momentum of electromagnetic fields is taken into account.
References
External links
with an aluminum block in an MRI
Magnetic levitation
Electrodynamics
Articles containing video clips
Wave–particle duality
Wave-particle duality is the concept in quantum mechanics that quantum entities exhibit particle or wave properties according to the experimental circumstances. It expresses the inability of the classical concepts such as particle or wave to fully describe the behavior of quantum objects. During the 19th and early 20th centuries, light was found to behave as a wave then later discovered to have a particulate behavior, whereas electrons behaved like particles in early experiments then later discovered to have wavelike behavior. The concept of duality arose to name these seeming contradictions.
History
Wave-particle duality of light
In the late 17th century, Sir Isaac Newton had advocated that light was particles, but Christiaan Huygens took an opposing wave approach. Thomas Young's interference experiments in 1801, and François Arago's detection of the Poisson spot in 1819, validated Huygens' wave models. However, the wave model was challenged in 1901 by Planck's law for black-body radiation. Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E, that was proportional to the frequency of its associated electromagnetic wave. In 1905 Einstein interpreted the photoelectric effect also with discrete energies for photons. These both indicate particle behavior. Despite confirmation by various experimental observations, the photon theory (as it came to be called) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light. The experimental evidence of particle-like momentum and energy seemingly contradicted the earlier work demonstrating wave-like interference of light.
Wave-particle duality of matter
The contradictory evidence from electrons arrived in the opposite order. Many experiments by J. J. Thomson, Robert Millikan, and Charles Wilson among others had shown that free electrons had particle properties, for instance, the measurement of their mass by Thomson in 1897. In 1924, Louis de Broglie introduced his theory of electron waves in his PhD thesis Recherches sur la théorie des quanta. He suggested that an electron around a nucleus could be thought of as being a standing wave and that electrons and all matter could be considered as waves. He merged the idea of thinking about them as particles, and of thinking of them as waves. He proposed that particles are bundles of waves (wave packets) that move with a group velocity and have an effective mass. Both of these depend upon the energy, which in turn connects to the wavevector and the relativistic formulation of Albert Einstein a few years before.
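The quantitative core of de Broglie's proposal is the relation between a particle's momentum p and its associated wavelength,
$$\lambda = \frac{h}{p},$$
where h is the Planck constant.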
Following de Broglie's proposal of wave–particle duality of electrons, in 1925 to 1926, Erwin Schrödinger developed the wave equation of motion for electrons. This rapidly became part of what was called by Schrödinger undulatory mechanics, now called the Schrödinger equation and also "wave mechanics".
In 1926, Max Born gave a talk in an Oxford meeting about using the electron diffraction experiments to confirm the wave–particle duality of electrons. In his talk, Born cited experimental data from Clinton Davisson in 1923. It happened that Davisson also attended that talk. Davisson returned to his lab in the US to switch his experimental focus to test the wave property of electrons.
In 1927, the wave nature of electrons was empirically confirmed by two experiments. The Davisson–Germer experiment at Bell Labs measured electrons scattered from Ni metal surfaces. George Paget Thomson and Alexander Reid at Cambridge University scattered electrons through thin metal films and observed concentric diffraction rings. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned. These experiments were rapidly followed by the first non-relativistic diffraction model for electrons by Hans Bethe based upon the Schrödinger equation, which is very close to how electron diffraction is now described. Significantly, Davisson and Germer noticed that their results could not be interpreted using a Bragg's law approach as the positions were systematically different; the approach of Bethe, which includes the refraction due to the average potential, yielded more accurate results. Davisson and Thomson were awarded the Nobel Prize in 1937 for experimental verification of wave property of electrons by diffraction experiments. Similar crystal diffraction experiments were carried out by Otto Stern in the 1930s using beams of helium atoms and hydrogen molecules. These experiments further verified that wave behavior is not limited to electrons and is a general property of matter on a microscopic scale.
Classical waves and particles
Before proceeding further, it is critical to introduce some definitions of waves and particles both in a classical sense and in quantum mechanics. Waves and particles are two very different models for physical systems, each with an exceptionally large range of application. Classical waves obey the wave equation; they have continuous values at many points in space that vary with time; their spatial extent can vary with time due to diffraction, and they display wave interference. Physical systems exhibiting wave behavior and described by the mathematics of wave equations include water waves, seismic waves, sound waves, radio waves, and more.
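In its simplest one-dimensional form, the classical wave equation for a disturbance u(x, t) propagating at speed c reads
$$\frac{\partial^{2} u}{\partial t^{2}} = c^{2} \frac{\partial^{2} u}{\partial x^{2}}.$$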
Classical particles obey classical mechanics; they have some center of mass and extent; they follow trajectories characterized by positions and velocities that vary over time; in the absence of forces their trajectories are straight lines. Stars, planets, spacecraft, tennis balls, bullets, sand grains: particle models work across a huge scale. Unlike waves, particles do not exhibit interference.
Some experiments on quantum systems show wave-like interference and diffraction; some experiments show particle-like collisions.
Quantum systems obey wave equations that predict particle probability distributions. These particles are associated with discrete values called quanta for properties such as spin, electric charge and magnetic moment. These particles arrive one at a time, randomly, but build up a pattern. The probability that experiments will measure particles at a point in space is the squared magnitude of a complex-number-valued wave. Experiments can be designed to exhibit diffraction and interference of the probability amplitude. Thus, statistically, large numbers of the random particle appearances can display wave-like properties. Similar equations govern collective excitations called quasiparticles.
Electrons behaving as waves and particles
The electron double slit experiment is a textbook demonstration of wave-particle duality. A modern version of the experiment is shown schematically in the figure below.
Electrons from the source hit a wall with two thin slits. A mask behind the slits can expose either one slit or be opened to expose both slits. The results for high electron intensity are shown on the right, first for each slit individually, then with both slits open. With either slit open there is a smooth intensity variation due to diffraction. When both slits are open the intensity oscillates, characteristic of wave interference.
Having observed wave behavior, now change the experiment, lowering the intensity of the electron source until only one or two are detected per second, appearing as individual particles, dots in the video. As shown in the movie clip below, the dots on the detector seem at first to be random. After some time a pattern emerges, eventually forming an alternating sequence of light and dark bands.
The experiment shows wave interference emerging one particle at a time: quantum mechanical electrons display both wave and particle behavior. Similar results have been shown for atoms and even large molecules.
Observing photons as particles
While electrons were thought to be particles until their wave properties were discovered, for photons it was the opposite. In 1887, Heinrich Hertz observed that when light with sufficient frequency hits a metallic surface, the surface emits cathode rays, which are now called electrons. In 1902, Philipp Lenard discovered that the maximum possible energy of an ejected electron is unrelated to the intensity of the light. This observation is at odds with classical electromagnetism, which predicts that the electron's energy should be proportional to the intensity of the incident radiation. In 1905, Albert Einstein suggested that the energy of the light must occur in a finite number of energy quanta. He postulated that electrons can receive energy from an electromagnetic field only in discrete units (quanta or photons): an amount of energy E that was related to the frequency f of the light by
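$$E = h f$$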
where h is the Planck constant (6.626×10⁻³⁴ J⋅s). Only photons of a high enough frequency (above a certain threshold value set by the work function) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal he used, but photons of red light did not. One photon of light above the threshold frequency could release only one electron; the higher the frequency of a photon, the higher the kinetic energy of the emitted electron, but no amount of light below the threshold frequency could release an electron. Despite confirmation by various experimental observations, the photon theory (as it came to be called later) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light.
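As a rough numerical sketch of this threshold behaviour (a Python illustration; the wavelengths and the work function value are assumed for illustration, not data from the experiments described above):

# Compare photon energies E = h*f = h*c/lambda with an assumed work function.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Photon energy in electronvolts for the given wavelength."""
    return H * C / wavelength_m / EV

WORK_FUNCTION_EV = 2.3  # assumed threshold energy, typical order for an alkali metal
for colour, wavelength in [("blue", 450e-9), ("red", 700e-9)]:
    energy = photon_energy_ev(wavelength)
    print(f"{colour}: {energy:.2f} eV -> electron ejected: {energy > WORK_FUNCTION_EV}")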
Both discrete (quantized) energies and also momentum are, classically, particle attributes. There are many other examples where photons display particle-type properties, for instance in solar sails, where sunlight could propel a space vehicle and laser cooling where the momentum is used to slow down (cool) atoms. These are a different aspect of wave-particle duality.
Which slit experiments
In a "which way" experiment, particle detectors are placed at the slits to determine which slit the electron traveled through. When these detectors are inserted, quantum mechanics predicts that the interference pattern disappears because the detected part of the electron wave has changed (loss of coherence). Many similar proposals have been made and many have been converted into experiments and tried out. Every single one shows the same result: as soon as electron trajectories are detected, interference disappears.
A simple example of these "which way" experiments uses a Mach–Zehnder interferometer, a device based on lasers and mirrors sketched below.
A laser beam along the input port splits at a half-silvered mirror. Part of the beam continues straight, passes through a glass phase shifter, then reflects downward. The other part of the beam reflects from the first mirror then turns at another mirror. The two beams meet at a second half-silvered beam splitter.
Each output port has a camera to record the results. The two beams show interference characteristic of wave propagation. If the laser intensity is turned sufficiently low, individual dots appear on the cameras, building up the pattern as in the electron example.
The first beam-splitter mirror acts like the double slits, but in the interferometer case we can remove the second beam splitter. Then the beam heading down ends up in output port 1: any photon particles on this path get counted in that port. The beam going across the top ends up at output port 2. In either case the counts will track the photon trajectories. However, as soon as the second beam splitter is removed the interference pattern disappears.
See also
Einstein's thought experiments
Interpretations of quantum mechanics
Uncertainty principle
Matter wave
Corpuscular theory of light
References
Articles containing video clips
Dichotomies
Foundational quantum physics
Waves
Particles
Applied science
Applied science is the application of the scientific method and scientific knowledge to attain practical goals. It includes a broad range of disciplines, such as engineering and medicine. Applied science is often contrasted with basic science, which is focused on advancing scientific theories and laws that explain and predict natural or other phenomena.
There are applied natural sciences, as well as applied formal and social sciences. Applied science examples include genetic epidemiology which applies statistics and probability theory, and applied psychology, including criminology.
Applied research
Applied research is the use of empirical methods to collect data for practical purposes. It accesses and uses accumulated theories, knowledge, methods, and techniques for a specific state, business, or client-driven purpose. In contrast to engineering, applied research does not include analyses or optimization of business, economics, and costs. Applied research can be better understood in any area when contrasting it with basic or pure research. Basic geographical research strives to create new theories and methods that aid in explaining the processes that shape the spatial structure of physical or human environments. Instead, applied research utilizes existing geographical theories and methods to comprehend and address particular empirical issues. Applied research usually has specific commercial objectives related to products, procedures, or services. The comparison of pure research and applied research provides a basic framework and direction for businesses to follow.
Applied research deals with solving practical problems and generally employs empirical methodologies. Because applied research resides in the messy real world, strict research protocols may need to be relaxed. For example, it may be impossible to use a random sample. Thus, transparency in the methodology is crucial. Implications for the interpretation of results brought about by relaxing an otherwise strict canon of methodology should also be considered.
Moreover, this type of research method applies natural sciences to human conditions:
Action research: aids firms in identifying workable solutions to issues influencing them.
Evaluation research: researchers examine available data to assist clients in making wise judgments.
Industrial research: create new goods/services that will satisfy the demands of a target market. (Industrial development would be scaling up production of the new goods/services for mass consumption to satisfy the economic demand of the customers while maximizing the ratio of the good/service output rate to resource input rate, the ratio of good/service revenue to material & energy costs, and the good/service quality. Industrial development would be considered engineering. Industrial development would fall outside the scope of applied research.)
Since applied research has a provisional close-to-the-problem and close-to-the-data orientation, it may also use a more provisional conceptual framework, such as working hypotheses or pillar questions. The OECD's Frascati Manual describes applied research as one of the three forms of research, along with basic research & experimental development.
Due to its practical focus, applied research information will be found in the literature associated with individual disciplines.
Branches
Applied research is a method of problem-solving and is also practical in areas of science, such as its presence in applied psychology. Applied psychology uses human behavior to grab information to locate a main focus in an area that can contribute to finding a resolution. More specifically, this study is applied in the area of criminal psychology. With the knowledge obtained from applied research, studies are conducted on criminals alongside their behavior to apprehend them. Moreover, the research extends to criminal investigations. Under this category, research methods demonstrate an understanding of the scientific method and social research designs used in criminological research. These reach more branches along the procedure towards the investigations, alongside laws, policy, and criminological theory.
Engineering is the practice of using natural science, mathematics, and the engineering design process to solve technical problems, increase efficiency and productivity, and improve systems. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. Engineering is often characterized as having four main branches: chemical engineering, civil engineering, electrical engineering, and mechanical engineering. Some scientific subfields used by engineers include thermodynamics, heat transfer, fluid mechanics, statics, dynamics, mechanics of materials, kinematics, electromagnetism, materials science, earth sciences, and engineering physics.
Medical sciences, such as medical microbiology, pharmaceutical research, and clinical virology, are applied sciences that apply biology and chemistry to medicine.
In education
In Canada, the Netherlands, and other places, the Bachelor of Applied Science (BASc) is sometimes equivalent to the Bachelor of Engineering and is classified as a professional degree. This is based on the age of the school where applied science used to include boiler making, surveying, and engineering. There are also Bachelor of Applied Science degrees in Child Studies. The BASc tends to focus more on the application of the engineering sciences. In Australia and New Zealand, this degree is awarded in various fields of study and is considered a highly specialized professional degree.
In the United Kingdom's educational system, Applied Science refers to a suite of "vocational" science qualifications that run alongside "traditional" General Certificate of Secondary Education or A-Level Sciences. Applied Science courses generally contain more coursework (also known as portfolio or internally assessed work) compared to their traditional counterparts. These are an evolution of the GNVQ qualifications offered up to 2005. These courses regularly come under scrutiny and are due for review following the Wolf Report 2011; however, their merits are argued elsewhere.
In the United States, The College of William & Mary offers an undergraduate minor as well as Master of Science and Doctor of Philosophy degrees in "applied science". Courses and research cover varied fields, including neuroscience, optics, materials science and engineering, nondestructive testing, and nuclear magnetic resonance. University of Nebraska–Lincoln offers a Bachelor of Science in applied science, an online completion Bachelor of Science in applied science, and a Master of Applied Science. Coursework is centered on science, agriculture, and natural resources with a wide range of options, including ecology, food genetics, entrepreneurship, economics, policy, animal science, and plant science. In New York City, the Bloomberg administration awarded the consortium of Cornell-Technion $100 million in City capital to construct the universities' proposed Applied Sciences campus on Roosevelt Island.
See also
Applied mathematics
Basic research
Exact sciences
Hard and soft science
Invention
Secondary research
References
Branches of science
Thermodynamic process
Classical thermodynamics considers three main kinds of thermodynamic processes: (1) changes in a system, (2) cycles in a system, and (3) flow processes.
(1) A thermodynamic process is a process in which the thermodynamic state of a system is changed. A change in a system is defined by a passage from an initial to a final state of thermodynamic equilibrium. In classical thermodynamics, the actual course of the process is not the primary concern, and often is ignored. A state of thermodynamic equilibrium endures unchangingly unless it is interrupted by a thermodynamic operation that initiates a thermodynamic process. The equilibrium states are each respectively fully specified by a suitable set of thermodynamic state variables, that depend only on the current state of the system, not on the path taken by the processes that produce the state. In general, during the actual course of a thermodynamic process, the system may pass through physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic equilibrium. Non-equilibrium thermodynamics, however, considers processes in which the states of the system are close to thermodynamic equilibrium, and aims to describe the continuous passage along the path, at definite rates of progress.
As a useful theoretical but not actually physically realizable limiting case, a process may be imagined to take place practically infinitely slowly or smoothly enough to allow it to be described by a continuous path of equilibrium thermodynamic states, when it is called a "quasi-static" process. This is a theoretical exercise in differential geometry, as opposed to a description of an actually possible physical process; in this idealized case, the calculation may be exact.
A really possible or actual thermodynamic process, considered closely, involves friction. This contrasts with theoretically idealized, imagined, or limiting, but not actually possible, quasi-static processes which may occur with a theoretical slowness that avoids friction. It also contrasts with idealized frictionless processes in the surroundings, which may be thought of as including 'purely mechanical systems'; this difference comes close to defining a thermodynamic process.
(2) A cyclic process carries the system through a cycle of stages, starting and being completed in some particular state. The descriptions of the staged states of the system are not the primary concern. The primary concern is the sums of matter and energy inputs and outputs to the cycle. Cyclic processes were important conceptual devices in the early days of thermodynamical investigation, while the concept of the thermodynamic state variable was being developed.
(3) Defined by flows through a system, a flow process is a steady state of flows into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. Flow processes are of interest in engineering.
Kinds of process
Cyclic process
Defined by a cycle of transfers into and out of a system, a cyclic process is described by the quantities transferred in the several stages of the cycle. The descriptions of the staged states of the system may be of little or even no interest. A cycle is a sequence of a small number of thermodynamic processes that, repeated indefinitely often, returns the system to its original state. For this, the staged states themselves are not necessarily described, because it is the transfers that are of interest. It is reasoned that if the cycle can be repeated indefinitely often, then it can be assumed that the states are recurrently unchanged. The condition of the system during the several staged processes may be of even less interest than is the precise nature of the recurrent states. If, however, the several staged processes are idealized and quasi-static, then the cycle is described by a path through a continuous progression of equilibrium states.
Flow process
Defined by flows through a system, a flow process is a steady state of flow into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. The states of the inflow and outflow materials consist of their internal states, and of their kinetic and potential energies as whole bodies. Very often, the quantities that describe the internal states of the input and output materials are estimated on the assumption that they are bodies in their own states of internal thermodynamic equilibrium. Because rapid reactions are permitted, the thermodynamic treatment may be approximate, not exact.
A cycle of quasi-static processes
A quasi-static thermodynamic process can be visualized by graphically plotting the path of idealized changes to the system's state variables. In the example, a cycle consisting of four quasi-static processes is shown. Each process has a well-defined start and end point in the pressure-volume state space. In this particular example, processes 1 and 3 are isothermal, whereas processes 2 and 4 are isochoric. The PV diagram is a particularly useful visualization of a quasi-static process, because the area under the curve of a process is the amount of work done by the system during that process. Thus work is considered to be a process variable, as its exact value depends on the particular path taken between the start and end points of the process. Similarly, heat may be transferred during a process, and it too is a process variable.
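As a rough numerical illustration of that path dependence (a sketch with assumed values, not part of the original article), the following Python snippet compares the work done by one mole of an ideal gas along two different quasi-static paths between the same end states:
import math
R = 8.314          # J/(mol K), gas constant
n = 1.0            # mol of ideal gas (assumed)
T = 300.0          # K, temperature of the common start and end states
V1, V2 = 0.010, 0.020                      # m^3, start and end volumes
P2 = n * R * T / V2                        # end pressure from the ideal gas law
W_isothermal = n * R * T * math.log(V2 / V1)    # path A: single isothermal expansion
W_two_step = 0.0 + P2 * (V2 - V1)               # path B: isochoric drop to P2, then isobaric expansion
print(f"isothermal path:          W = {W_isothermal:.0f} J")
print(f"isochoric+isobaric path:  W = {W_two_step:.0f} J")
# About 1729 J versus 1247 J: the same end states, different work, so work is a
# process variable that depends on the path taken.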
Conjugate variable processes
It is often useful to group processes into pairs, in which each variable held constant is one member of a conjugate pair.
Pressure – volume
The pressure–volume conjugate pair is concerned with the transfer of mechanical energy as the result of work.
An isobaric process occurs at constant pressure. An example would be to have a movable piston in a cylinder, so that the pressure inside the cylinder is always at atmospheric pressure, although it is separated from the atmosphere. In other words, the system is dynamically connected, by a movable boundary, to a constant-pressure reservoir.
An isochoric process is one in which the volume is held constant, with the result that the mechanical PV work done by the system will be zero. On the other hand, work can be done isochorically on the system, for example by a shaft that drives a rotary paddle located inside the system. It follows that, for the simple system of one deformation variable, any heat energy transferred to the system externally will be absorbed as internal energy. An isochoric process is also known as an isometric process or an isovolumetric process. An example would be to place a closed tin can of material into a fire. To a first approximation, the can will not expand, and the only change will be that the contents gain internal energy, evidenced by increase in temperature and pressure. Mathematically, δQ = dU. The system is dynamically insulated, by a rigid boundary, from the environment.
Temperature – entropy
The temperature–entropy conjugate pair is concerned with the transfer of energy as heat, especially for a closed system.
An isothermal process occurs at a constant temperature. An example would be a closed system immersed in and thermally connected with a large constant-temperature bath. Energy gained by the system, through work done on it, is lost to the bath, so that its temperature remains constant.
An adiabatic process is a process in which there is no matter or heat transfer, because a thermally insulating wall separates the system from its surroundings. For the process to be natural, either (a) work must be done on the system at a finite rate, so that the internal energy of the system increases; the entropy of the system increases even though it is thermally insulated; or (b) the system must do work on the surroundings, which then suffer increase of entropy, as well as gaining energy from the system.
An isentropic process is customarily defined as an idealized quasi-static reversible adiabatic process, of transfer of energy as work. Otherwise, for a constant-entropy process, if work is done irreversibly, heat transfer is necessary, so that the process is not adiabatic, and an accurate artificial control mechanism is necessary; such is therefore not an ordinary natural thermodynamic process.
Chemical potential - particle number
The processes just above have assumed that the boundaries are also impermeable to particles. Otherwise, we may assume boundaries that are rigid, but are permeable to one or more types of particle. Similar considerations then hold for the chemical potential–particle number conjugate pair, which is concerned with the transfer of energy via this transfer of particles.
In a constant chemical potential process the system is particle-transfer connected, by a particle-permeable boundary, to a constant-μ reservoir.
The conjugate here is a constant particle number process. These are the processes outlined just above. There is no energy added or subtracted from the system by particle transfer. The system is particle-transfer-insulated from its environment by a boundary that is impermeable to particles, but permissive of transfers of energy as work or heat. These processes are the ones by which thermodynamic work and heat are defined, and for them, the system is said to be closed.
Thermodynamic potentials
Any of the thermodynamic potentials may be held constant during a process. For example:
An isenthalpic process introduces no change in enthalpy in the system.
Polytropic processes
A polytropic process is a thermodynamic process that obeys the relation:
P Vⁿ = C
where P is the pressure, V is volume, n is any real number (the "polytropic index"), and C is a constant. This equation can be used to accurately characterize processes of certain systems, notably the compression or expansion of a gas, but in some cases, liquids and solids.
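A minimal Python sketch of the boundary work in a quasi-static polytropic process, assuming arbitrary illustrative values for the initial state and using the standard integral of P dV under P Vⁿ = C:
import math
def polytropic_work(P1, V1, V2, n):
    # Boundary work done by the gas in a quasi-static process with P V^n = C
    C = P1 * V1**n
    P2 = C / V2**n
    if abs(n - 1.0) < 1e-12:
        return P1 * V1 * math.log(V2 / V1)     # the n = 1 (isothermal-like) case
    return (P2 * V2 - P1 * V1) / (1.0 - n)
# Assumed example: gas compressed from 0.020 m^3 to 0.010 m^3, starting at 100 kPa
for n in (1.0, 1.4):     # n = 1.4 is typical of a reversible adiabatic of a diatomic gas
    W = polytropic_work(100e3, 0.020, 0.010, n)
    print(f"n = {n}: work done by the gas = {W:.0f} J (negative, so work is done on the gas)")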
Processes classified by the second law of thermodynamics
According to Planck, one may think of three main classes of thermodynamic process: natural, fictively reversible, and impossible or unnatural.
Natural process
Only natural processes occur in nature. For thermodynamics, a natural process is a transfer between systems that increases the sum of their entropies, and is irreversible. Natural processes may occur spontaneously upon the removal of a constraint, or upon some other thermodynamic operation, or may be triggered in a metastable or unstable system, as for example in the condensation of a supersaturated vapour. Planck emphasised the occurrence of friction as an important characteristic of natural thermodynamic processes that involve transfer of matter or energy between system and surroundings.
Fictively reversible process
To describe the geometry of graphical surfaces that illustrate equilibrium relations between thermodynamic functions of state, one can fictively think of so-called "reversible processes". They are convenient theoretical objects that trace paths across graphical surfaces. They are called "processes" but do not describe naturally occurring processes, which are always irreversible. Because the points on the paths are points of thermodynamic equilibrium, it is customary to think of the "processes" described by the paths as fictively "reversible". Reversible processes are always quasistatic processes, but the converse is not always true.
Unnatural process
Unnatural processes are logically conceivable but do not occur in nature. They would decrease the sum of the entropies if they occurred.
Quasistatic process
A quasistatic process is an idealized or fictive model of a thermodynamic "process" considered in theoretical studies. It does not occur in physical reality. It may be imagined as happening infinitely slowly so that the system passes through a continuum of states that are infinitesimally close to equilibrium.
See also
Flow process
Heat
Phase transition
Work (thermodynamics)
References
Further reading
Physics for Scientists and Engineers - with Modern Physics (6th Edition), P. A. Tipler, G. Mosca, Freeman, 2008,
Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, (Verlagsgesellschaft), (VHC Inc.)
McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994,
Physics with Modern Applications, L.H. Greenberg, Holt-Saunders International W.B. Saunders and Co, 1978,
Essential Principles of Physics, P.M. Whelan, M.J. Hodgeson, 2nd Edition, 1978, John Murray,
Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009,
Chemical Thermodynamics, D.J.G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971,
Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974,
Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008,
Equilibrium chemistry
Thermodynamic cycles
Thermodynamic systems
Thermodynamics | 0.783147 | 0.988498 | 0.774139 |
Solid mechanics | Solid mechanics (also known as mechanics of solids or mechanics of materials) is the branch of continuum mechanics that studies the behavior of solid materials, especially their motion and deformation under the action of forces, temperature changes, phase changes, and other external or internal agents.
Solid mechanics is fundamental for civil, aerospace, nuclear, biomedical and mechanical engineering, for geology, and for many branches of physics and chemistry such as materials science. It has specific applications in many other areas, such as understanding the anatomy of living beings, and the design of dental prostheses and surgical implants. One of the most common practical applications of solid mechanics is the Euler–Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them.
Solid mechanics is a vast subject because of the wide range of solid materials available, such as steel, wood, concrete, biological materials, textiles, geological materials, and plastics.
Fundamental aspects
A solid is a material that can support a substantial amount of shearing force over a given time scale during a natural or industrial process or action; this ability to sustain shear is what distinguishes solids from fluids. Fluids, like solids, support normal forces, which are forces directed perpendicular to the material plane across which they act; the normal force per unit area of that material plane is called the normal stress. Shearing forces, in contrast with normal forces, act parallel rather than perpendicular to the material plane, and the shearing force per unit area is called shear stress.
Therefore, solid mechanics examines the shear stress, deformation and the failure of solid materials and structures.
The most common topics covered in solid mechanics include:
stability of structures - examining whether structures can return to a given equilibrium after disturbance or partial/complete failure, see Structure mechanics
dynamical systems and chaos - dealing with mechanical systems highly sensitive to their given initial position
thermomechanics - analyzing materials with models derived from principles of thermodynamics
biomechanics - solid mechanics applied to biological materials e.g. bones, heart tissue
geomechanics - solid mechanics applied to geological materials e.g. ice, soil, rock
vibrations of solids and structures - examining vibration and wave propagation from vibrating particles and structures i.e. vital in mechanical, civil, mining, aeronautical, maritime/marine, aerospace engineering
fracture and damage mechanics - dealing with crack-growth mechanics in solid materials
composite materials - solid mechanics applied to materials made up of more than one compound e.g. reinforced plastics, reinforced concrete, fiber glass
variational formulations and computational mechanics - numerical solutions to mathematical equations arising from various branches of solid mechanics e.g. finite element method (FEM)
experimental mechanics - design and analysis of experimental methods to examine the behavior of solid materials and structures
Relationship to continuum mechanics
As shown in the following table, solid mechanics inhabits a central place within continuum mechanics. The field of rheology presents an overlap between solid and fluid mechanics.
Response models
A material has a rest shape and its shape departs away from the rest shape due to stress. The amount of departure from rest shape is called deformation, the proportion of deformation to original size is called strain. If the applied stress is sufficiently low (or the imposed strain is small enough), almost all solid materials behave in such a way that the strain is directly proportional to the stress; the coefficient of the proportion is called the modulus of elasticity. This region of deformation is known as the linearly elastic region.
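As a minimal sketch of the linearly elastic region (the material properties and load below are assumed, generic values), the one-dimensional form of Hooke's law, stress = modulus × strain, can be applied to a rod in tension:
import math
# One-dimensional Hooke's law for a rod in uniaxial tension (illustrative, assumed values)
E = 200e9            # Pa, Young's modulus typical of structural steel (assumed)
diameter = 0.010     # m, rod diameter
length = 2.0         # m, original rod length
force = 10e3         # N, applied axial load
area = math.pi * diameter**2 / 4     # cross-sectional area
stress = force / area                # normal stress, sigma = F / A
strain = stress / E                  # Hooke's law: sigma = E * epsilon
elongation = strain * length         # resulting change in length
print(f"stress = {stress/1e6:.1f} MPa, strain = {strain:.2e}, elongation = {elongation*1e3:.2f} mm")
# Valid only while the stress stays below the yield stress, i.e. within the
# linearly elastic region described above.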
It is most common for analysts in solid mechanics to use linear material models, due to ease of computation. However, real materials often exhibit non-linear behavior. As new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common.
These are basic models that describe how a solid responds to an applied stress:
Elasticity – When an applied stress is removed, the material returns to its undeformed state. Linearly elastic materials, those that deform proportionally to the applied load, can be described by the linear elasticity equations such as Hooke's law.
Viscoelasticity – These are materials that behave elastically, but also have damping: when the stress is applied and removed, work has to be done against the damping effects and is converted into heat within the material, resulting in a hysteresis loop in the stress–strain curve. This implies that the material response has time-dependence.
Plasticity – Materials that behave elastically generally do so when the applied stress is less than a yield value. When the stress is greater than the yield stress, the material behaves plastically and does not return to its previous state. That is, deformation that occurs after yield is permanent.
Viscoplasticity - Combines theories of viscoelasticity and plasticity and applies to materials like gels and mud.
Thermoelasticity - There is coupling of mechanical with thermal responses. In general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest theory involves the Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models.
Timeline
1452–1519 Leonardo da Vinci made many contributions
1638: Galileo Galilei published the book "Two New Sciences" in which he examined the failure of simple structures
1660: Hooke's law by Robert Hooke
1687: Isaac Newton published "Philosophiae Naturalis Principia Mathematica" which contains Newton's laws of motion
1750: Euler–Bernoulli beam equation
1700–1782: Daniel Bernoulli introduced the principle of virtual work
1707–1783: Leonhard Euler developed the theory of buckling of columns
1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures
1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as partial derivative of the strain energy. This theorem includes the method of least work as a special case
1874: Otto Mohr formalized the idea of a statically indeterminate structure.
1922: Timoshenko corrects the Euler–Bernoulli beam equation
1936: Hardy Cross' publication of the moment distribution method, an important innovation in the design of continuous frames.
1941: Alexander Hrennikoff solved the discretization of plane elasticity problems using a lattice framework
1942: R. Courant divided a domain into finite subregions
1956: J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp's paper on the "Stiffness and Deflection of Complex Structures" introduces the name "finite-element method" and is widely recognized as the first comprehensive treatment of the method as it is known today
See also
Strength of materials - Specific definitions and the relationships between stress and strain.
Applied mechanics
Materials science
Continuum mechanics
Fracture mechanics
Impact (mechanics)
References
Notes
Bibliography
L.D. Landau, E.M. Lifshitz, Course of Theoretical Physics: Theory of Elasticity Butterworth-Heinemann,
J.E. Marsden, T.J. Hughes, Mathematical Foundations of Elasticity, Dover,
P.C. Chou, N. J. Pagano, Elasticity: Tensor, Dyadic, and Engineering Approaches, Dover,
R.W. Ogden, Non-linear Elastic Deformation, Dover,
S. Timoshenko and J.N. Goodier," Theory of elasticity", 3d ed., New York, McGraw-Hill, 1970.
G.A. Holzapfel, Nonlinear Solid Mechanics: A Continuum Approach for Engineering, Wiley, 2000
A.I. Lurie, Theory of Elasticity, Springer, 1999.
L.B. Freund, Dynamic Fracture Mechanics, Cambridge University Press, 1990.
R. Hill, The Mathematical Theory of Plasticity, Oxford University, 1950.
J. Lubliner, Plasticity Theory, Macmillan Publishing Company, 1990.
J. Ignaczak, M. Ostoja-Starzewski, Thermoelasticity with Finite Wave Speeds, Oxford University Press, 2010.
D. Bigoni, Nonlinear Solid Mechanics: Bifurcation Theory and Material Instability, Cambridge University Press, 2012.
Y. C. Fung, Pin Tong and Xiaohong Chen, Classical and Computational Solid Mechanics, 2nd Edition, World Scientific Publishing, 2017, .
Mechanics
Continuum mechanics
Rigid bodies mechanics
km:មេកានិចសូលីដ
sv:Hållfasthetslära | 0.782824 | 0.988855 | 0.774099 |
Compressibility factor | In thermodynamics, the compressibility factor (Z), also known as the compression factor or the gas deviation factor, describes the deviation of a real gas from ideal gas behaviour. It is simply defined as the ratio of the molar volume of a gas to the molar volume of an ideal gas at the same temperature and pressure. It is a useful thermodynamic property for modifying the ideal gas law to account for the real gas behaviour. In general, deviation from ideal behaviour becomes more significant the closer a gas is to a phase change, the lower the temperature or the larger the pressure. Compressibility factor values are usually obtained by calculation from equations of state (EOS), such as the virial equation which take compound-specific empirical constants as input. For a gas that is a mixture of two or more pure gases (air or natural gas, for example), the gas composition must be known before compressibility can be calculated.
Alternatively, the compressibility factor for specific gases can be read from generalized compressibility charts that plot Z as a function of pressure at constant temperature.
The compressibility factor should not be confused with the compressibility (also known as coefficient of compressibility or isothermal compressibility) of a material, which is the measure of the relative volume change of a fluid or solid in response to a pressure change.
Definition and physical significance
The compressibility factor is defined in thermodynamics and engineering frequently as:
Z = p / (ρ R_specific T)
where p is the pressure, ρ is the density of the gas and R_specific = R/M is the specific gas constant, M being the molar mass, and T is the absolute temperature (kelvin or Rankine scale).
In statistical mechanics the description is:
Z = p V / (n R T)
where p is the pressure, n is the number of moles of gas, T is the absolute temperature, R is the gas constant, and V is the volume.
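A small numerical sketch (the state values are assumed, not from the article) of evaluating the compressibility factor directly from a measured state:
R = 8.314462         # J/(mol K), universal gas constant
def compressibility_factor(p, V_m, T):
    # Z = p V_m / (R T), with p in Pa, molar volume V_m in m^3/mol, T in K
    return p * V_m / (R * T)
# Assumed example state: 50 bar and 300 K, with a measured molar volume
# slightly below the ideal-gas value (attractive forces dominating).
p, T = 50e5, 300.0
V_m_ideal = R * T / p
V_m_measured = 0.95 * V_m_ideal      # hypothetical measurement
print(f"Z = {compressibility_factor(p, V_m_measured, T):.3f}")   # prints Z = 0.950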
For an ideal gas the compressibility factor is Z = 1 by definition. In many real world applications requirements for accuracy demand that deviations from ideal gas behaviour, i.e., real gas behaviour, be taken into account. The value of Z generally increases with pressure and decreases with temperature. At high pressures molecules are colliding more often. This allows repulsive forces between molecules to have a noticeable effect, making the molar volume of the real gas greater than the molar volume of the corresponding ideal gas, which causes Z to exceed one. When pressures are lower, the molecules are free to move. In this case attractive forces dominate, making Z less than one. The closer the gas is to its critical point or its boiling point, the more Z deviates from the ideal case.
Fugacity
The compressibility factor is linked to the fugacity by the relation:
ln(f/p) = ∫₀^p (Z − 1) dp/p
Generalized compressibility factor graphs for pure gases
The unique relationship between the compressibility factor and the reduced temperature, , and the reduced pressure, , was first recognized by Johannes Diderik van der Waals in 1873 and is known as the two-parameter principle of corresponding states. The principle of corresponding states expresses the generalization that the properties of a gas which are dependent on intermolecular forces are related to the critical properties of the gas in a universal way. That provides a most important basis for developing correlations of molecular properties.
As for the compressibility of gases, the principle of corresponding states indicates that any pure gas at the same reduced temperature, , and reduced pressure, , should have the same compressibility factor.
The reduced temperature and pressure are defined by
T_r = T / T_c
and
p_r = p / p_c.
Here T_c and p_c are known as the critical temperature and critical pressure of a gas. They are characteristics of each specific gas, with T_c being the temperature above which it is not possible to liquify a given gas and p_c the minimum pressure required to liquify a given gas at its critical temperature. Together they define the critical point of a fluid above which distinct liquid and gas phases of a given fluid do not exist.
The pressure-volume-temperature (PVT) data for real gases varies from one pure gas to another. However, when the compressibility factors of various single-component gases are graphed versus pressure along with temperature isotherms many of the graphs exhibit similar isotherm shapes.
In order to obtain a generalized graph that can be used for many different gases, the reduced pressure and temperature, and , are used to normalize the compressibility factor data. Figure 2 is an example of a generalized compressibility factor graph derived from hundreds of experimental PVT data points of 10 pure gases, namely methane, ethane, ethylene, propane, n-butane, i-pentane, n-hexane, nitrogen, carbon dioxide and steam.
There are more detailed generalized compressibility factor graphs based on as many as 25 or more different pure gases, such as the Nelson-Obert graphs. Such graphs are said to have an accuracy within 1–2 percent for Z values greater than 0.6 and within 4–6 percent for Z values of 0.3–0.6.
The generalized compressibility factor graphs may be considerably in error for strongly polar gases which are gases for which the centers of positive and negative charge do not coincide. In such cases the estimate for Z may be in error by as much as 15–20 percent.
The quantum gases hydrogen, helium, and neon do not conform to the corresponding-states behavior and the reduced pressure and temperature for those three gases should be redefined in the following manner to improve the accuracy of predicting their compressibility factors when using the generalized graphs:
T_r = T / (T_c + 8) and p_r = p / (p_c + 8)
where the temperatures are in kelvins and the pressures are in atmospheres.
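The following sketch applies this redefinition for hydrogen, using approximate textbook values for its critical constants (assumed here, not quoted from the article):
T_c = 33.2      # K,   approximate critical temperature of hydrogen (assumed textbook value)
p_c = 12.8      # atm, approximate critical pressure of hydrogen (assumed textbook value)
def reduced(T, p, quantum_correction=False):
    # reduced temperature and pressure; the '+8' offsets (T in K, p in atm)
    # are the redefinition described above for H2, He and Ne
    if quantum_correction:
        return T / (T_c + 8.0), p / (p_c + 8.0)
    return T / T_c, p / p_c
T, p = 300.0, 50.0      # K, atm: an assumed operating state
print("uncorrected T_r, p_r:", reduced(T, p))
print("corrected   T_r, p_r:", reduced(T, p, quantum_correction=True))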
Reading a generalized compressibility chart
In order to read a compressibility chart, the reduced pressure and temperature must be known. If either the reduced pressure or temperature is unknown, the reduced specific volume must be found. Unlike the reduced pressure and temperature, the reduced specific volume is not found by using the critical volume. The reduced specific volume is defined by
v_R = v p_c / (R T_c)
where v is the specific volume and R is the specific gas constant.
Once two of the three reduced properties are found, the compressibility chart can be used. In a compressibility chart, reduced pressure is on the x-axis and Z is on the y-axis. When given the reduced pressure and temperature, find the given pressure on the x-axis. From there, move up on the chart until the given reduced temperature is found. Z is found by looking where those two points intersect. The same process can be followed if reduced specific volume is given with either reduced pressure or temperature.
Observations made from a generalized compressibility chart
There are three observations that can be made when looking at a generalized compressibility chart. These observations are:
Gases behave as an ideal gas regardless of temperature when the reduced pressure is much less than one (PR ≪ 1).
When reduced temperature is greater than two (TR > 2), ideal-gas behavior can be assumed regardless of pressure, unless pressure is much greater than one (PR ≫ 1).
Gases deviate from ideal-gas behavior the most in the vicinity of the critical point.
Theoretical models
The virial equation is especially useful to describe the causes of non-ideality at a molecular level (very few gases are mono-atomic) as it is derived directly from statistical mechanics:
Z = p V_m / (R T) = 1 + B/V_m + C/V_m² + D/V_m³ + ...
where the coefficients B, C, D, ... in the numerators are known as virial coefficients and are functions of temperature.
The virial coefficients account for interactions between successively larger groups of molecules. For example, B accounts for interactions between pairs, C for interactions between three gas molecules, and so on. Because interactions between large numbers of molecules are rare, the virial equation is usually truncated after the third term.
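A minimal sketch of evaluating the virial series truncated after the third term; the coefficient values below are arbitrary placeholders rather than data for any particular gas:
def virial_Z(V_m, B, C):
    # Z = 1 + B/V_m + C/V_m**2  (virial series truncated after the third term)
    return 1.0 + B / V_m + C / V_m**2
# Placeholder coefficients and molar volume (assumptions, for illustration only)
B = -1.5e-4      # m^3/mol; a negative B means pair attractions dominate
C = 7.0e-9       # m^6/mol^2
V_m = 1.0e-3     # m^3/mol
print(f"Z = {virial_Z(V_m, B, C):.3f}")   # below 1: attractive interactions dominate here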
When this truncation is assumed, the compressibility factor is linked to the intermolecular-force potential φ by:
The Real gas article features more theoretical methods to compute compressibility factors.
Physical mechanism of temperature and pressure dependence
Deviations of the compressibility factor, Z, from unity are due to attractive and repulsive intermolecular forces. At a given temperature and pressure, repulsive forces tend to make the volume larger than for an ideal gas; when these forces dominate Z is greater than unity. When attractive forces dominate, Z is less than unity. The relative importance of attractive forces decreases as temperature increases (see effect on gases).
As seen above, the behavior of Z is qualitatively similar for all gases. Molecular nitrogen, N2, is used here to further describe and understand that behavior. All data used in this section were obtained from the NIST Chemistry WebBook. It is useful to note that for N2 the normal boiling point of the liquid is 77.4 K and the critical point is at 126.2 K and 34.0 bar.
The figure on the right shows an overview covering a wide temperature range. At low temperature (100 K), the curve has a characteristic check-mark shape, the rising portion of the curve is very nearly directly proportional to pressure. At intermediate temperature (160 K), there is a smooth curve with a broad minimum; although the high pressure portion is again nearly linear, it is no longer directly proportional to pressure. Finally, at high temperature (400 K), Z is above unity at all pressures. For all curves, Z approaches the ideal gas value of unity at low pressure and exceeds that value at very high pressure.
To better understand these curves, a closer look at the behavior for low temperature and pressure is given in the second figure. All of the curves start out with Z equal to unity at zero pressure and Z initially decreases as pressure increases. N2 is a gas under these conditions, so the distance between molecules is large, but becomes smaller as pressure increases. This increases the attractive interactions between molecules, pulling the molecules closer together and causing the volume to be less than for an ideal gas at the same temperature and pressure. Higher temperature reduces the effect of the attractive interactions and the gas behaves in a more nearly ideal manner.
As the pressure increases, the gas eventually reaches the gas-liquid coexistence curve, shown by the dashed line in the figure. When that happens, the attractive interactions have become strong enough to overcome the tendency of thermal motion to cause the molecules to spread out; so the gas condenses to form a liquid. Points on the vertical portions of the curves correspond to N2 being partly gas and partly liquid. On the coexistence curve, there are then two possible values for Z, a larger one corresponding to the gas and a smaller value corresponding to the liquid. Once all the gas has been converted to liquid, the volume decreases only slightly with further increases in pressure; then Z is very nearly proportional to pressure.
As temperature and pressure increase along the coexistence curve, the gas becomes more like a liquid and the liquid becomes more like a gas. At the critical point, the two are the same. So for temperatures above the critical temperature (126.2 K), there is no phase transition; as pressure increases the gas gradually transforms into something more like a liquid. Just above the critical point there is a range of pressure for which Z drops quite rapidly (see the 130 K curve), but at higher temperatures the process is entirely gradual.
The final figures shows the behavior at temperatures well above the critical temperatures. The repulsive interactions are essentially unaffected by temperature, but the attractive interaction have less and less influence. Thus, at sufficiently high temperature, the repulsive interactions dominate at all pressures.
This can be seen in the graph showing the high temperature behavior. As temperature increases, the initial slope becomes less negative, the pressure at which Z is a minimum gets smaller, and the pressure at which repulsive interactions start to dominate, i.e. where Z goes from less than unity to greater than unity, gets smaller. At the Boyle temperature (327 K for N2), the attractive and repulsive effects cancel each other at low pressure. Then Z remains at the ideal gas value of unity up to pressures of several tens of bar. Above the Boyle temperature, the compressibility factor is always greater than unity and increases slowly but steadily as pressure increases.
Experimental values
It is extremely difficult to generalize at what pressures or temperatures the deviation from the ideal gas becomes important. As a rule of thumb, the ideal gas law is reasonably accurate up to a pressure of about 2 atm, and even higher for small non-associating molecules. For example, methyl chloride, a highly polar molecule and therefore with significant intermolecular forces, the experimental value for the compressibility factor is at a pressure of 10 atm and temperature of 100 °C. For air (small non-polar molecules) at approximately the same conditions, the compressibility factor is only (see table below for 10 bars, 400 K).
Compressibility of air
Normal air comprises in crude numbers 80 percent nitrogen (N2) and 20 percent oxygen (O2). Both molecules are small and non-polar (and therefore non-associating). We can therefore expect that the behaviour of air within broad temperature and pressure ranges can be approximated as an ideal gas with reasonable accuracy. Experimental values for the compressibility factor confirm this.
Z values are calculated from values of pressure, volume (or density), and temperature in Vasserman, Kazavchinskii, and Rabinovich, "Thermophysical Properties of Air and Air Components," Moscow, Nauka, 1966, and NBS-NSF Trans. TT 70-50095, 1971; and Vasserman and Rabinovich, "Thermophysical Properties of Liquid Air and Its Components," Moscow, 1968, and NBS-NSF Trans. 69-55092, 1970.
See also
Fugacity
Real gas
Theorem of corresponding states
Van der Waals equation
References
External links
Compressibility factor (gases) A Citizendium article.
Real Gases includes a discussion of compressibility factors.
Chemical engineering thermodynamics
Gas laws | 0.778158 | 0.994718 | 0.774047 |
Grashof number | In fluid mechanics (especially fluid thermodynamics), the Grashof number (, after Franz Grashof) is a dimensionless number which approximates the ratio of the buoyancy to viscous forces acting on a fluid. It frequently arises in the study of situations involving natural convection and is analogous to the Reynolds number.
Definition
Heat transfer
Free convection is caused by a change in density of a fluid due to a temperature change or gradient. Usually the density decreases due to an increase in temperature and causes the fluid to rise. This motion is caused by the buoyancy force. The major force that resists the motion is the viscous force. The Grashof number is a way to quantify the opposing forces.
The Grashof number is:
Gr_L = g β (T_s − T_∞) L³ / ν² for vertical flat plates
Gr_D = g β (T_s − T_∞) D³ / ν² for pipes and bluff bodies
where:
g is gravitational acceleration due to Earth
β is the coefficient of volume expansion (equal to approximately 1/T for ideal gases)
T_s is the surface temperature
T_∞ is the bulk temperature
L is the vertical length
D is the diameter
ν is the kinematic viscosity.
The L and D subscripts indicate the length scale basis for the Grashof number.
The transition to turbulent flow occurs in the range 10⁸ < Gr_L < 10⁹ for natural convection from vertical flat plates. At higher Grashof numbers, the boundary layer is turbulent; at lower Grashof numbers, the boundary layer is laminar, that is, in the range 10³ < Gr_L < 10⁶.
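As a worked sketch (the air properties and plate conditions are rough assumed values, and the 10⁸–10⁹ thresholds are the commonly quoted transition range for vertical plates), the Grashof number for a heated vertical plate can be estimated as follows:
g = 9.81                 # m/s^2
beta = 1.0 / 300.0       # 1/K, volume expansion coefficient ~ 1/T for an ideal gas
nu = 1.6e-5              # m^2/s, kinematic viscosity of air near room temperature (assumed)
def grashof(T_s, T_inf, L):
    # Gr_L = g * beta * (T_s - T_inf) * L**3 / nu**2 for a vertical flat plate
    return g * beta * (T_s - T_inf) * L**3 / nu**2
Gr = grashof(T_s=320.0, T_inf=300.0, L=0.5)      # 0.5 m plate, 20 K hotter than the air
print(f"Gr_L = {Gr:.2e}")
if Gr < 1e8:
    print("below the transition range: laminar natural-convection boundary layer")
elif Gr > 1e9:
    print("above the transition range: turbulent boundary layer")
else:
    print("within the commonly quoted transition range to turbulence")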
Mass transfer
There is an analogous form of the Grashof number used in cases of natural convection mass transfer problems. In the case of mass transfer, natural convection is caused by concentration gradients rather than temperature gradients.
Gr_c = g β* (C_a,s − C_a,a) L³ / ν²
where
β* = −(1/ρ) (∂ρ/∂C_a) at constant T and p
and:
g is gravitational acceleration due to Earth
C_a,s is the concentration of species a at the surface
C_a,a is the concentration of species a in the ambient medium
L is the characteristic length
ν is the kinematic viscosity
ρ is the fluid density
C_a is the concentration of species a
T is the temperature (constant)
p is the pressure (constant).
Relationship to other dimensionless numbers
The Rayleigh number, shown below, is a dimensionless number that characterizes convection problems in heat transfer:
Ra = Gr · Pr
where Pr is the Prandtl number. A critical value exists for the Rayleigh number, above which fluid motion occurs.
The ratio of the Grashof number to the square of the Reynolds number may be used to determine if forced or free convection may be neglected for a system, or if there's a combination of the two. This characteristic ratio is known as the Richardson number. If the ratio is much less than one, then free convection may be ignored. If the ratio is much greater than one, forced convection may be ignored. Otherwise, the regime is combined forced and free convection.
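A short sketch of this decision rule; the factor-of-ten cutoffs and the Grashof and Reynolds values are illustrative assumptions:
def convection_regime(Gr, Re, tol=10.0):
    # Richardson number Ri = Gr / Re**2; tol is an assumed factor-of-ten cutoff
    Ri = Gr / Re**2
    if Ri < 1.0 / tol:
        return Ri, "forced convection dominates; free convection may be ignored"
    if Ri > tol:
        return Ri, "free convection dominates; forced convection may be ignored"
    return Ri, "combined forced and free convection"
for Gr, Re in [(1e9, 1e6), (1e9, 3e3), (1e9, 3e4)]:   # illustrative values
    Ri, regime = convection_regime(Gr, Re)
    print(f"Gr = {Gr:.0e}, Re = {Re:.0e}: Ri = {Ri:.3g} -> {regime}")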
Derivation
The first step to deriving the Grashof number is manipulating the volume expansion coefficient, as follows.
β = (1/v) (∂v/∂T)_p
The v in the equation above, which represents specific volume, is not the same as the v in the subsequent sections of this derivation, which will represent a velocity. This partial relation of the volume expansion coefficient, β, with respect to fluid density, ρ, given constant pressure, can be rewritten as
β = (ρ_o − ρ) / (ρ ΔT)
where:
ρ_o is the bulk fluid density
ρ is the boundary layer density
ΔT is the temperature difference between boundary layer and bulk fluid.
There are two different ways to find the Grashof number from this point. One involves the energy equation while the other incorporates the buoyant force due to the difference in density between the boundary layer and bulk fluid.
Energy equation
This discussion involving the energy equation is with respect to rotationally symmetric flow. This analysis will take into consideration the effect of gravitational acceleration on flow and heat transfer. The mathematical equations to follow apply both to rotational symmetric flow as well as two-dimensional planar flow.
where:
is the rotational direction, i.e. direction parallel to the surface
is the tangential velocity, i.e. velocity parallel to the surface
is the planar direction, i.e. direction normal to the surface
is the normal velocity, i.e. velocity normal to the surface
is the radius.
In this equation the superscript serves to differentiate rotationally symmetric flow from planar flow. The following characteristics of this equation hold true:
a superscript value of 1 corresponds to rotationally symmetric flow
a superscript value of 0 corresponds to planar, two-dimensional flow
g is gravitational acceleration
This equation expands to the following with the addition of physical fluid properties:
From here we can further simplify the momentum equation by setting the bulk fluid velocity to 0.
This relation shows that the pressure gradient is simply a product of the bulk fluid density and the gravitational acceleration. The next step is to plug in the pressure gradient into the momentum equation.
where the volume expansion coefficient to density relationship found above and the kinematic viscosity relationship were substituted into the momentum equation.
To find the Grashof number from this point, the preceding equation must be non-dimensionalized. This means that every variable in the equation should have no dimension and should instead be a ratio characteristic to the geometry and setup of the problem. This is done by dividing each variable by corresponding constant quantities. Lengths are divided by a characteristic length, . Velocities are divided by appropriate reference velocities, , which, considering the Reynolds number, gives . Temperatures are divided by the appropriate temperature difference, . These dimensionless parameters look like the following:
,
,
,
, and
.
The asterisks represent dimensionless parameter. Combining these dimensionless equations with the momentum equations gives the following simplified equation.
where:
T_s is the surface temperature
T_∞ is the bulk fluid temperature
L_c is the characteristic length.
The dimensionless parameter enclosed in the brackets in the preceding equation is known as the Grashof number:
Gr = g β (T_s − T_∞) L_c³ / ν²
Buckingham π theorem
Another form of dimensional analysis that will result in the Grashof number is known as the Buckingham π theorem. This method takes into account the buoyancy force per unit volume, due to the density difference in the boundary layer and the bulk fluid.
This equation can be manipulated to give,
The list of variables that are used in the Buckingham π method is listed below, along with their symbols and dimensions.
With reference to the Buckingham π theorem there are dimensionless groups. Choose , , and as the reference variables. Thus the groups are as follows:
,
,
,
.
Solving these groups gives:
,
,
,
From the two groups and the product forms the Grashof number:
Taking and the preceding equation can be rendered as the same result from deriving the Grashof number from the energy equation.
In forced convection the Reynolds number governs the fluid flow. But, in natural convection the Grashof number is the dimensionless parameter that governs the fluid flow. Using the energy equation and the buoyant force combined with dimensional analysis provides two different ways to derive the Grashof number.
Physical Reasoning
It is also possible to derive the Grashof number by physical definition of the number as follows:
However, the above expression, especially the final part on the right-hand side, is slightly different from the Grashof number appearing in the literature. The following dimensionally correct scale in terms of the dynamic viscosity can be used to arrive at the final form.
Writing the above scale in Gr gives:
Physical reasoning is helpful for grasping the meaning of the number. On the other hand, the following velocity definition can be used as a characteristic velocity value for making certain velocities nondimensional.
Effects of Grashof number on the flow of different fluids
In recent research carried out on the effects of the Grashof number on the flow of different fluids driven by convection over various surfaces, the slope of the linear regression line through the data points leads to the conclusion that an increase in the value of the Grashof number, or of any buoyancy-related parameter, implies an increase in the wall temperature; this weakens the bonds within the fluid, decreases the strength of the internal friction, and makes gravity strong enough to make the specific weight appreciably different between the fluid layers immediately adjacent to the wall. The effects of a buoyancy parameter are highly significant in the laminar flow within the boundary layer formed on a vertically moving cylinder. This is only achievable when the prescribed surface temperature (PST) and prescribed wall heat flux (WHF) are considered. It can be concluded that the buoyancy parameter has a negligible positive effect on the local Nusselt number. This is only true when the magnitude of the Prandtl number is small or a prescribed wall heat flux (WHF) is considered. The Sherwood number, Bejan number, entropy generation, Stanton number and pressure gradient are increasing properties of the buoyancy-related parameter, while the concentration profiles, frictional force, and motile microorganisms are decreasing properties.
Notes
References
Further reading
Buoyancy
Convection
Dimensionless numbers of fluid mechanics
Dimensionless numbers of thermodynamics
Fluid dynamics
Heat transfer | 0.783524 | 0.987903 | 0.774045 |
Descriptive research | Descriptive research is used to describe characteristics of a population or phenomenon being studied. It does not answer questions about how/when/why the characteristics occurred. Rather it addresses the "what" question (what are the characteristics of the population or situation being studied?). The characteristics used to describe the situation or population are usually some kind of categorical scheme also known as descriptive categories. For example, the periodic table categorizes the elements. Scientists use knowledge about the nature of electrons, protons and neutrons to devise this categorical scheme. We now take for granted the periodic table, yet it took descriptive research to devise it. Descriptive research generally precedes explanatory research. For example, over time the periodic table's description of the elements allowed scientists to explain chemical reactions and make sound predictions when elements were combined.
Hence, descriptive research cannot describe what caused a situation. Thus, descriptive research cannot be used as the basis of a causal relationship, where one variable affects another. In other words, descriptive research can be said to have a low requirement for internal validity.
The description is used for frequencies, averages, and other statistical calculations. Often the best approach, prior to writing descriptive research, is to conduct a survey investigation. Qualitative research often has the aim of description and researchers may follow up with examinations of why the observations exist and what the implications of the findings are.
Social science research
In addition, the conceptualizing of descriptive research (categorization or taxonomy) precedes the hypotheses of explanatory research. (For a discussion of how the underlying conceptualization of exploratory research, descriptive research and explanatory research fit together, see: Conceptual framework.)
Descriptive research can be statistical research. The main objective of this type of research is to describe the data and characteristics of what is being studied. The idea behind this type of research is to study frequencies, averages, and other statistical calculations. Although this research is highly accurate, it does not gather the causes behind a situation. Descriptive research is mainly done when a researcher wants to gain a better understanding of a topic; that is, it analyses the past rather than the future. Descriptive research is the exploration of existing phenomena whose detailed facts are not yet fully known to the researcher.
Descriptive science
Descriptive science is a category of science that involves descriptive research; that is, observing, recording, describing, and classifying phenomena. Descriptive research is sometimes contrasted with hypothesis-driven research, which is focused on testing a particular hypothesis by means of experimentation.
David A. Grimaldi and Michael S. Engel suggest that descriptive science in biology is currently undervalued and misunderstood:
"Descriptive" in science is a pejorative, almost always preceded by "merely," and typically applied to the array of classical -ologies and -omies: anatomy, archaeology, astronomy, embryology, morphology, paleontology, taxonomy, botany, cartography, stratigraphy, and the various disciplines of zoology, to name a few. [...] First, an organism, object, or substance is not described in a vacuum, but rather in comparison with other organisms, objects, and substances. [...] Second, descriptive science is not necessarily low-tech science, and high tech is not necessarily better. [...] Finally, a theory is only as good as what it explains and the evidence (i.e., descriptions) that supports it.
A negative attitude by scientists toward descriptive science is not limited to biological disciplines: Lord Rutherford's notorious quote, "All science is either physics or stamp collecting," displays a clear negative attitude about descriptive science, and it is known that he was dismissive of astronomy, which at the beginning of the 20th century was still gathering largely descriptive data about stars, nebulae, and galaxies, and was only beginning to develop a satisfactory integration of these observations within the framework of physical law, a cornerstone of the philosophy of physics.
Descriptive versus design sciences
Ilkka Niiniluoto has used the terms "descriptive sciences" and "design sciences" as an updated version of the distinction between basic and applied science. According to Niiniluoto, descriptive sciences are those that seek to describe reality, while design sciences seek useful knowledge for human activities.
See also
Methodology
Normative science
Procedural knowledge
Scientific method
References
External links
Descriptive Research from BYU linguistics department
Research
Descriptive statistics
Philosophy of science | 0.780645 | 0.991542 | 0.774042 |
Thomson scattering | Thomson scattering is the elastic scattering of electromagnetic radiation by a free charged particle, as described by classical electromagnetism. It is the low-energy limit of Compton scattering: the particle's kinetic energy and photon frequency do not change as a result of the scattering. This limit is valid as long as the photon energy is much smaller than the mass energy of the particle (hν ≪ mc²), or equivalently, if the wavelength of the light is much greater than the Compton wavelength of the particle (e.g., for electrons, longer wavelengths than hard x-rays).
Description of the phenomenon
Thomson scattering is a model for the effect of electromagnetic fields on electrons when the field energy is much less than the rest mass energy of the electron, mc². In the model the electric field of the incident wave accelerates the charged particle, causing it, in turn, to emit radiation at the same frequency as the incident wave, and thus the wave is scattered. Thomson scattering is an important phenomenon in plasma physics and was first explained by the physicist J. J. Thomson. As long as the motion of the particle is non-relativistic (i.e. its speed is much less than the speed of light), the main cause of the acceleration of the particle will be due to the electric field component of the incident wave. In a first approximation, the influence of the magnetic field can be neglected. The particle will move in the direction of the oscillating electric field, resulting in electromagnetic dipole radiation. The moving particle radiates most strongly in a direction perpendicular to its acceleration and that radiation will be polarized along the direction of its motion. Therefore, depending on where an observer is located, the light scattered from a small volume element may appear to be more or less polarized.
The electric fields of the incoming and observed wave (i.e. the outgoing wave) can be divided up into those components lying in the plane of observation (formed by the incoming and observed waves) and those components perpendicular to that plane. Those components lying in the plane are referred to as "radial" and those perpendicular to the plane are "tangential". (It is difficult to make these terms seem natural, but it is standard terminology.)
The diagram on the right depicts the plane of observation. It shows the radial component of the incident electric field, which causes the charged particles at the scattering point to exhibit a radial component of acceleration (i.e., a component tangent to the plane of observation). It can be shown that the amplitude of the observed wave will be proportional to the cosine of χ, the angle between the incident and observed waves. The intensity, which is the square of the amplitude, will then be diminished by a factor of cos2(χ). It can be seen that the tangential components (perpendicular to the plane of the diagram) will not be affected in this way.
The scattering is best described by an emission coefficient which is defined as ε where ε dt dV dΩ dλ is the energy scattered by a volume element in time dt into solid angle dΩ between wavelengths λ and λ+dλ. From the point of view of an observer, there are two emission coefficients, εr corresponding to radially polarized light and εt corresponding to tangentially polarized light. For unpolarized incident light, these are given by:
where n is the density of charged particles at the scattering point, I is the incident flux (i.e. energy/time/area/wavelength), χ is the angle between the incident and scattered photons (see figure above) and σ_t is the Thomson cross section for the charged particle, defined below.
The Thomson differential cross section, related to the sum of the emissivity coefficients, is given by
dσ_t/dΩ = (q² / (4π ε₀ m c²))² (1 + cos²χ) / 2
expressed in SI units; q is the charge per particle, m the mass of the particle, and ε₀ a constant, the permittivity of free space. (To obtain an expression in cgs units, drop the factor of 4πε₀.) Integrating over the solid angle, we obtain the Thomson cross section
σ_t = (8π/3) (q² / (4π ε₀ m c²))²
in SI units.
The important feature is that the cross section is independent of light frequency. The cross section is proportional by a simple numerical factor to the square of the classical radius of a point particle of mass m and charge q, namely
r = q² / (4π ε₀ m c²)
Alternatively, this can be expressed in terms of λ_c, the Compton wavelength, and the fine structure constant α:
σ_t = (8π/3) (α λ_c / (2π))²
For an electron, the Thomson cross-section is numerically given by:
σ_t ≈ 6.652 × 10⁻²⁹ m² = 0.6652 barn
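This value can be checked numerically from standard physical constants, as in the following sketch (constants quoted to limited precision):
import math
# Physical constants in SI units (quoted to limited precision)
e    = 1.602176634e-19     # C,  elementary charge
m_e  = 9.1093837015e-31    # kg, electron mass
c    = 2.99792458e8        # m/s, speed of light
eps0 = 8.8541878128e-12    # F/m, vacuum permittivity
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)    # classical electron radius
sigma_T = (8 * math.pi / 3) * r_e**2              # Thomson cross section
print(f"r_e     = {r_e:.4e} m")        # about 2.818e-15 m
print(f"sigma_T = {sigma_T:.4e} m^2")  # about 6.652e-29 m^2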
Examples of Thomson scattering
The cosmic microwave background contains a small linearly-polarized component attributed to Thomson scattering. That polarized component mapping out the so-called E-modes was first detected by DASI in 2002.
The solar K-corona is the result of the Thomson scattering of solar radiation from solar coronal electrons. The ESA and NASA SOHO mission and the NASA STEREO mission generate three-dimensional images of the electron density around the Sun by measuring this K-corona from three separate satellites.
In tokamaks, corona of ICF targets and other experimental fusion devices, the electron temperatures and densities in the plasma can be measured with high accuracy by detecting the effect of Thomson scattering of a high-intensity laser beam. An upgraded Thomson scattering system in the Wendelstein 7-X stellarator uses Nd:YAG lasers to emit multiple pulses in quick succession. The intervals within each burst can range from 2 ms to 33.3 ms, permitting up to twelve consecutive measurements. Synchronization with plasma events is made possible by a newly added trigger system that facilitates real-time analysis of transient plasma events.
In the Sunyaev–Zeldovich effect, where the photon energy is much less than the electron rest mass, the inverse-Compton scattering can be approximated as Thomson scattering in the rest frame of the electron.
Models for X-ray crystallography are based on Thomson scattering.
See also
Compton scattering
Kapitsa–Dirac effect
Klein–Nishina formula
References
Further reading
External links
Thomson scattering notes
Thomson scattering: principle and measurements
Atomic physics
Scattering
Plasma diagnostics | 0.78018 | 0.992031 | 0.773963 |
Chaos theory | Chaos theory is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. These were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause a tornado in Texas.
Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: "Chaos: When the present determines the future, but the approximate present does not approximately determine the future."
Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes.
Introduction
Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.
Chaotic dynamics
In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:
it must be sensitive to initial conditions,
it must be topologically transitive,
it must have dense periodic orbits.
In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.
If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.
Sensitivity to initial conditions
Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.
Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993, "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).
A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will stay within certain natural bounds (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year.
In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of the rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation δZ₀, the two trajectories end up diverging at a rate given by
|δZ(t)| ≈ e^(λt) |δZ₀|,
where t is the time and λ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic.
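As a concrete illustration (not part of the original text), the maximal Lyapunov exponent of a one-dimensional map can be estimated numerically by averaging log|f′(x)| along an orbit. For the logistic map x → 4x(1 − x) the exponent is known to be ln 2 ≈ 0.693, so the estimate below should come out close to that value; function and variable names are illustrative.

```python
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

def logistic_deriv(x):
    return 4.0 - 8.0 * x

def lyapunov_estimate(x0, n_transient=1000, n_iter=100_000):
    """Estimate the maximal Lyapunov exponent by averaging log|f'(x)| along an orbit."""
    x = x0
    for _ in range(n_transient):          # discard the transient part of the orbit
        x = logistic(x)
    total = 0.0
    for _ in range(n_iter):
        d = abs(logistic_deriv(x))
        total += math.log(max(d, 1e-300))  # guard against log(0) if x ever hits 0.5 exactly
        x = logistic(x)
    return total / n_iter

print(lyapunov_estimate(0.3))   # ~0.693
print(math.log(2))              # the analytic value ln 2
```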
In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.
Non-periodicity
A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.
Topological mixing
Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
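To make the doubling example concrete, here is a minimal numerical sketch (illustrative values, not from the original text): two nearby initial values separate exponentially fast, yet both orbits simply run off to infinity, so the map is sensitive to initial conditions without being chaotic.

```python
x, y = 1.0, 1.0 + 1e-9          # two nearby initial values
for _ in range(30):
    x, y = 2 * x, 2 * y         # repeatedly double each value

print(abs(y - x))   # ~1.07: the 1e-9 gap has grown by a factor of 2**30
print(x, y)         # but both orbits just head to +infinity; no mixing, no chaos
```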
Topological transitivity
A map f : X → X is said to be topologically transitive if for any pair of non-empty open sets U, V ⊂ X, there exists k > 0 such that f^k(U) ∩ V ≠ ∅. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two open sets.
An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.
Density of periodic orbits
For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits. For example, (5 − √5)/8 → (5 + √5)/8 → (5 − √5)/8 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
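The period-2 orbit quoted above can be verified directly in a few lines (an illustrative sketch; floating-point equality is only checked approximately):

```python
from math import sqrt, isclose

def f(x):
    return 4 * x * (1 - x)     # the logistic map at r = 4

a = (5 - sqrt(5)) / 8          # ~0.3454915
b = (5 + sqrt(5)) / 8          # ~0.9045085

print(isclose(f(a), b), isclose(f(b), a))   # True True: a and b map onto each other
print(isclose(f(f(a)), a))                  # True: a returns to itself after 2 steps
```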
Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.
Strange attractors
Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.
An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and orbits plotted this way give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.
Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.
Coexisting attractors
In contrast to single-type chaotic solutions, recent studies using Lorenz models have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic solutions may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view of "weather is chaotic".
Minimum complexity of a chaotic system
Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.
The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as:
dx/dt = σ(y − x)
dy/dt = x(ρ − z) − y
dz/dt = xy − βz
where x, y, and z make up the system state, t is time, and σ, ρ, and β are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms, that had only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.
While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis.
The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipations, on solution stability.
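A minimal sketch of sensitive dependence in the three-dimensional Lorenz model, using the classic parameter choices σ = 10, ρ = 28, β = 8/3 and a crude fixed-step Euler integrator (adequate for illustration, not for accurate trajectories; this is not code from the original text). Two initial states differing by 1e-8 in one coordinate end up far apart:

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations (coarse but fine for illustration)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)     # perturb one coordinate by 1e-8

for step in range(1, 8001):    # integrate out to t = 40
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 2000 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.005:5.1f}  separation = {sep:.3e}")
# The separation grows from ~1e-8 to the size of the attractor: the butterfly effect.
```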
Infinite dimensional maps
The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates the interaction between spatially distributed maps, where the kernel is a propagator derived as the Green function of a relevant physical system and the locally applied map might be logistic-map-like or a complex map (for examples of complex maps, the Julia set or the Ikeda map may serve). When wave propagation problems at a distance with a given wavelength are considered, the kernel may take the form of the Green function for the Schrödinger equation.
Jerk systems
In physics, jerk is the third derivative of position with respect to time. As such, differential equations of the form
J(d³x/dt³, d²x/dt², dx/dt, x) = 0
are sometimes called jerk equations. It has been shown that a jerk equation, which is equivalent to a system of three first order, ordinary, non-linear differential equations, is in a certain sense the minimal setting for solutions showing chaotic behavior. This motivates mathematical interest in jerk systems. Systems involving a fourth or higher derivative are called accordingly hyperjerk systems.
A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic circuits can model solutions. These circuits are known as jerk circuits.
One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler system, are conventionally described as a system of three first-order differential equations that can combine into a single (although rather complicated) jerk equation. Another example of a jerk equation with nonlinearity in the magnitude of x is:
Here, A is an adjustable parameter. This equation has a chaotic solution for A=3/5 and can be implemented with the following jerk circuit; the required nonlinearity is brought about by the two diodes:
In this circuit, all resistors are of equal value except one, and all capacitors are of equal size. The dominant frequency is 1/(2πRC). The output of op amp 0 corresponds to the x variable, the output of 1 to the first derivative of x, and the output of 2 to the second derivative.
Similar circuits only require one diode or no diodes at all.
See also the well-known Chua's circuit, one basis for chaotic true random number generators. The ease of construction of the circuit has made it a ubiquitous real-world example of a chaotic system.
Spontaneous order
Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system.
Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.
Moreover, from the theoretical physics standpoint, dynamical chaos itself, in its most general manifestation, is a spontaneous order. The essence here is that most orders in nature arise from the spontaneous breakdown of various symmetries. This large family of phenomena includes elasticity, superconductivity, ferromagnetism, and many others. According to the supersymmetric theory of stochastic dynamics, chaos, or more precisely, its stochastic generalization, is also part of this family. The corresponding symmetry being broken is the topological supersymmetry which is hidden in all stochastic (partial) differential equations, and the corresponding order parameter is a field-theoretic embodiment of the butterfly effect.
History
James Clerk Maxwell first emphasized the "butterfly effect", and is seen as being one of the earliest to discuss chaos theory, with work in the 1860s and 1870s. An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.
Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing.
Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measurement imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. In 1959 Boris Valerianovich Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems (the Chirikov criterion). He applied this criterion to explain some experimental results on plasma confinement in open mirror traps. This is regarded as the very first physical theory of chaos, which succeeded in explaining a concrete experiment. Boris Chirikov himself is considered a pioneer in classical and quantum chaos.
The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.
Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborators Ellen Fetter and Margaret Hamilton were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions.
In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Noting that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory.
In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered the universality in chaos, permitting the application of chaos theory to many different phenomena.
In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements.
In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking dysfunction among people with schizophrenia. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles.
In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature.
Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes, (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.
Also in 1987 James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick.
The availability of cheaper, more powerful computers has broadened the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, pandemic crisis management, etc.
Lorenz's pioneering contributions to chaotic modeling
Throughout his career, Professor Edward Lorenz authored a total of 61 research papers, out of which 58 were solely authored by him. Commencing with the 1960 conference in Japan, Lorenz embarked on a journey of developing diverse models aimed at uncovering the SDIC and chaotic features. A recent review of Lorenz's model progression spanning from 1960 to 2008 revealed his adeptness at employing varied physical systems to illustrate chaotic phenomena. These systems encompassed Quasi-geostrophic systems, the Conservative Vorticity Equation, the Rayleigh-Bénard Convection Equations, and the Shallow Water Equations. Moreover, Lorenz can be credited with the early application of the logistic map to explore chaotic solutions, a milestone he achieved ahead of his colleagues (e.g. Lorenz 1964).
In 1972, Lorenz coined the term "butterfly effect" as a metaphor to discuss whether a small perturbation could eventually create a tornado with a three-dimensional, organized, and coherent structure. While connected to the original butterfly effect based on sensitive dependence on initial conditions, its metaphorical variant carries distinct nuances. To commemorate this milestone, a reprint book containing invited papers that deepen our understanding of both butterfly effects was officially published to celebrate the 50th anniversary of the metaphorical butterfly effect.
A popular but inaccurate analogy for chaos
The sensitive dependence on initial conditions (i.e., butterfly effect) has been illustrated using the following folklore:
For want of a nail, the shoe was lost.
For want of a shoe, the horse was lost.
For want of a horse, the rider was lost.
For want of a rider, the battle was lost.
For want of a battle, the kingdom was lost.
And all for the want of a horseshoe nail.
Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation monotonically increases with time and that any tiny perturbation can eventually produce a large impact on numerical integrations. However, in 2008, Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability, and that the verse implicitly suggests that subsequent small events will not reverse the outcome. Based on the analysis, the verse only indicates divergence, not boundedness. Boundedness is important for the finite size of a butterfly pattern. In a recent study, the characteristic of the aforementioned verse was denoted as "finite-time sensitive dependence".
Applications
Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list as new applications are appearing.
Cryptography
Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives. These algorithms include image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking, and steganography. The majority of these algorithms are based on uni-modal chaotic maps, and a large portion of these algorithms use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, without loss of generality, the similarities between chaotic maps and cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. Many of the DNA–chaos cryptographic algorithms have, however, been shown to be insecure, or the technique applied has been suggested to be inefficient.
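As a toy illustration (not from the original text) of why chaotic maps attract cryptographers, the sketch below uses the logistic map's control parameter and initial condition as a "key" to derive a keystream. This construction is purely illustrative and is not a secure cipher; all names and values are assumptions made for the example.

```python
def logistic_keystream(x0, r, n, skip=1000):
    """Derive n pseudo-random bytes from a logistic-map orbit (toy example, NOT secure)."""
    x = x0
    for _ in range(skip):                 # discard the transient so output depends strongly on the key
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)    # quantize the chaotic state to a byte
    return bytes(out)

key = (0.3141592653, 3.999999)            # (initial condition, control parameter) act as the key
plaintext = b"attack at dawn"
stream = logistic_keystream(*key, len(plaintext))
ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))
recovered = bytes(c ^ s for c, s in zip(ciphertext, stream))
print(ciphertext.hex(), recovered)        # recovered == b"attack at dawn"
```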
Robotics
Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model.
Chaotic dynamics have been exhibited by passive walking biped robots.
Biology
For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in environmental systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling.
As Perry points out, modeling of chaotic time series in ecology is helped by constraint. There is always potential difficulty in distinguishing real chaos from chaos that is only in the model. Hence both constraint in the model and/or duplicate time series data for comparison will be helpful in constraining the model to something close to the reality (for example, Perry & Wall 1984). Gene-for-gene co-evolution sometimes shows chaotic dynamics in allele frequencies. Adding variables exaggerates this: chaos is more common in models incorporating additional variables to reflect additional facets of real populations. Robert M. May himself did some of these foundational crop co-evolution studies, and this in turn helped shape the entire field. Even for a steady environment, merely combining one crop and one pathogen may result in quasi-periodic or chaotic oscillations in the pathogen population.
Economics
It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships.
Chaos could be found in economics by means of recurrence quantification analysis. In fact, Orlando et al., by means of the so-called recurrence quantification correlation index, were able to detect hidden changes in time series. Then, the same technique was employed to detect transitions from laminar (regular) to turbulent (chaotic) phases as well as differences between macroeconomic variables and to highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates as well as in embedding shocks due to external events such as COVID-19.
Finite predictability in weather and climate
Due to the sensitive dependence of solutions on initial conditions (SDIC), also known as the butterfly effect, chaotic systems like the Lorenz 1963 model imply a finite predictability horizon. This means that while accurate predictions are possible over a finite time period, they are not feasible over an infinite time span. Considering the nature of Lorenz's chaotic solutions, the committee led by Charney et al. in 1966 extrapolated a doubling time of five days from a general circulation model, suggesting a predictability limit of two weeks. This connection between the five-day doubling time and the two-week predictability limit was also recorded in a 1969 report by the Global Atmospheric Research Program (GARP). To acknowledge the combined direct and indirect influences from the Mintz and Arakawa model and Lorenz's models, as well as the leadership of Charney et al., Shen et al. refer to the two-week predictability limit as the "Predictability Limit Hypothesis," drawing an analogy to Moore's Law.
AI-extended modeling framework
In AI-driven large language models, responses can exhibit sensitivities to factors like alterations in formatting and variations in prompts. These sensitivities are akin to butterfly effects. Although classifying AI-powered large language models as classical deterministic chaotic systems poses challenges, chaos-inspired approaches and techniques (such as ensemble modeling) may be employed to extract reliable information from these expansive language models (see also "Butterfly Effect in Popular Culture").
Other areas
In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefitted greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.
Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior.
Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.
In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r.
Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that the results fit a chaotic model.
By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable.
Traffic forecasting may benefit from applications of chaos theory. Better predictions of when congestion will occur would allow measures to be taken to disperse it before it forms. Combining chaos theory principles with a few other methods has led to more accurate short-term prediction models (see, for example, the BML traffic model).
Chaos theory has been applied to environmental water cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics.
See also
Examples of chaotic systems
Advected contours
Arnold's cat map
Bifurcation theory
Bouncing ball dynamics
Chua's circuit
Cliodynamics
Coupled map lattice
Double pendulum
Duffing equation
Dynamical billiards
Economic bubble
Gaspard-Rice system
Hénon map
Horseshoe map
List of chaotic maps
Rössler attractor
Standard map
Swinging Atwood's machine
Tilt A Whirl
Other related topics
Amplitude death
Anosov diffeomorphism
Catastrophe theory
Causality
Chaos as topological supersymmetry breaking
Chaos machine
Chaotic mixing
Chaotic scattering
Control of chaos
Determinism
Edge of chaos
Emergence
Mandelbrot set
Kolmogorov–Arnold–Moser theorem
Ill-conditioning
Ill-posedness
Nonlinear system
Patterns in nature
Predictability
Quantum chaos
Santa Fe Institute
Shadowing lemma
Synchronization of chaos
Unintended consequence
People
Ralph Abraham
Michael Berry
Leon O. Chua
Ivar Ekeland
Doyne Farmer
Martin Gutzwiller
Brosl Hasslacher
Michel Hénon
Aleksandr Lyapunov
Norman Packard
Otto Rössler
David Ruelle
Oleksandr Mikolaiovich Sharkovsky
Robert Shaw
Floris Takens
James A. Yorke
George M. Zaslavsky
References
Further reading
Articles
Textbooks
Semitechnical and popular works
Christophe Letellier, Chaos in Nature, World Scientific Publishing Company, 2012, .
John Briggs and David Peat, Turbulent Mirror: An Illustrated Guide to Chaos Theory and the Science of Wholeness, Harper Perennial 1990, 224 pp.
John Briggs and David Peat, Seven Life Lessons of Chaos: Spiritual Wisdom from the Science of Change, Harper Perennial 2000, 224 pp.
Predrag Cvitanović, Universality in Chaos, Adam Hilger 1989, 648 pp.
Leon Glass and Michael C. Mackey, From Clocks to Chaos: The Rhythms of Life, Princeton University Press 1988, 272 pp.
James Gleick, Chaos: Making a New Science, New York: Penguin, 1988. 368 pp.
L Douglas Kiel, Euel W Elliott (ed.), Chaos Theory in the Social Sciences: Foundations and Applications, University of Michigan Press, 1997, 360 pp.
Arvind Kumar, Chaos, Fractals and Self-Organisation; New Perspectives on Complexity in Nature, National Book Trust, 2003.
Hans Lauwerier, Fractals, Princeton University Press, 1991.
Edward Lorenz, The Essence of Chaos, University of Washington Press, 1996.
David Peak and Michael Frame, Chaos Under Control: The Art and Science of Complexity, Freeman, 1994.
Heinz-Otto Peitgen and Dietmar Saupe (Eds.), The Science of Fractal Images, Springer 1988, 312 pp.
Nuria Perpinya, Caos, virus, calma. La Teoría del Caos aplicada al desórden artístico, social y político, Páginas de Espuma, 2021.
Clifford A. Pickover, Computers, Pattern, Chaos, and Beauty: Graphics from an Unseen World, St Martins Pr 1991.
Clifford A. Pickover, Chaos in Wonderland: Visual Adventures in a Fractal World, St Martins Pr 1994.
Ilya Prigogine and Isabelle Stengers, Order Out of Chaos, Bantam 1984.
David Ruelle, Chance and Chaos, Princeton University Press 1993.
Ivars Peterson, Newton's Clock: Chaos in the Solar System, Freeman, 1993.
Manfred Schroeder, Fractals, Chaos, and Power Laws, Freeman, 1991.
Ian Stewart, Does God Play Dice?: The Mathematics of Chaos, Blackwell Publishers, 1990.
Steven Strogatz, Sync: The emerging science of spontaneous order, Hyperion, 2003.
Yoshisuke Ueda, The Road To Chaos, Aerial Pr, 1993.
M. Mitchell Waldrop, Complexity : The Emerging Science at the Edge of Order and Chaos, Simon & Schuster, 1992.
Antonio Sawaya, Financial Time Series Analysis : Chaos and Neurodynamics Approach, Lambert, 2012.
External links
Nonlinear Dynamics Research Group with Animations in Flash
The Chaos group at the University of Maryland
The Chaos Hypertextbook. An introductory primer on chaos and fractals
ChaosBook.org An advanced graduate textbook on chaos (no fractals)
Society for Chaos Theory in Psychology & Life Sciences
Nonlinear Dynamics Research Group at CSDC, Florence, Italy
Nonlinear dynamics: how science comprehends chaos, talk presented by Sunny Auyang, 1998.
Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens
Gleick's Chaos (excerpt)
Systems Analysis, Modelling and Prediction Group at the University of Oxford
A page about the Mackey-Glass equation
High Anxieties — The Mathematics of Chaos (2008) BBC documentary directed by David Malone
The chaos theory of evolution – article published in Newscientist featuring similarities of evolution and non-linear systems including fractal nature of life and chaos.
Jos Leys, Étienne Ghys et Aurélien Alvarez, Chaos, A Mathematical Adventure. Nine films about dynamical systems, the butterfly effect and chaos theory, intended for a wide audience.
"Chaos Theory", BBC Radio 4 discussion with Susan Greenfield, David Papineau & Neil Johnson (In Our Time, May 16, 2002)
Chaos: The Science of the Butterfly Effect (2019) an explanation presented by Derek Muller
Complex systems theory
Computational fields of study | 0.774231 | 0.999624 | 0.77394 |
Electrohydrodynamics | Electrohydrodynamics (EHD), also known as electro-fluid-dynamics (EFD) or electrokinetics, is the study of the dynamics of electrically charged fluids. Electrohydrodynamics (EHD) is a joint domain of electrodynamics and fluid dynamics mainly focused on the fluid motion induced by electric fields. EHD, in its simplest form, involves the application of an electric field to a fluid medium, resulting in fluid flow, form, or properties manipulation. These mechanisms arise from the interaction between the electric fields and charged particles or polarization effects within the fluid. The generation and movement of charge carriers (ions) in a fluid subjected to an electric field are the underlying physics of all EHD-based technologies.
The electric forces acting on particles consist of the electrostatic (Coulomb) and electrophoretic force (first term in the following equation), the dielectrophoretic force (second term in the following equation), and the electrostrictive force (third term in the following equation):
This electrical force is then inserted into the Navier–Stokes equation as a body (volumetric) force. EHD covers the following types of particle and fluid transport mechanisms: electrophoresis, electrokinesis, dielectrophoresis, electro-osmosis, and electrorotation. In general, the phenomena relate to the direct conversion of electrical energy into kinetic energy, and vice versa.
In the first instance, shaped electrostatic fields (ESF's) create hydrostatic pressure (HSP, or motion) in dielectric media. When such media are fluids, a flow is produced. If the dielectric is a vacuum or a solid, no flow is produced. Such flow can be directed against the electrodes, generally to move the electrodes. In such case, the moving structure acts as an electric motor. Practical fields of interest of EHD are the common air ioniser, electrohydrodynamic thrusters and EHD cooling systems.
In the second instance, the converse takes place. A powered flow of medium within a shaped electrostatic field adds energy to the system which is picked up as a potential difference by electrodes. In such case, the structure acts as an electrical generator.
Electrokinesis
Electrokinesis is the particle or fluid transport produced by an electric field acting on a fluid having a net mobile charge. (See -kinesis for explanation and further uses of the -kinesis suffix.) Electrokinesis was first observed by Ferdinand Frederic Reuss in 1808, in the electrophoresis of clay particles. The effect was also noticed and publicized in the 1920s by Thomas Townsend Brown, who called it the Biefeld–Brown effect, although he seems to have misidentified it as an electric field acting on gravity. The flow rate in such a mechanism is linear in the electric field. Electrokinesis is of considerable practical importance in microfluidics, because it offers a way to manipulate and convey fluids in microsystems using only electric fields, with no moving parts.
The force acting on the fluid is given by the equation
F = I d / k
where F is the resulting force, measured in newtons, I is the current, measured in amperes, d is the distance between electrodes, measured in metres, and k is the ion mobility coefficient of the dielectric fluid, measured in m2/(V·s).
If the electrodes are free to move within the fluid, while keeping their distance fixed from each other, then such a force will actually propel the electrodes with respect to the fluid.
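A back-of-the-envelope sketch of the force relation described above, F = I·d/k. The numbers below (corona current, electrode gap, ion mobility of air) are illustrative assumptions, not values from the text.

```python
# Illustrative numbers for a small corona-discharge device in air (assumed, not from the text)
I = 1.0e-3      # corona current, A
d = 0.03        # electrode gap, m
k = 2.0e-4      # approximate ion mobility of air at atmospheric pressure, m^2/(V*s)

F = I * d / k   # thrust from F = I*d/k
print(f"F = {F * 1000:.2f} mN")   # 150 mN = 0.15 N for these assumed values
```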
Electrokinesis has also been observed in biology, where it was found to cause physical damage to neurons by inciting movement in their membranes. It is discussed in R. J. Elul's "Fixed charge in the cell membrane" (1967).
Water electrokinetics
In October 2003, Dr. Daniel Kwok, Dr. Larry Kostiuk and two graduate students from the University of Alberta discussed a method to convert hydrodynamic to electrical energy by exploiting the natural electrokinetic properties of a liquid such as ordinary tap water, by pumping fluid through tiny micro-channels with a pressure difference. This technology could lead to a practical and clean energy storage device, replacing batteries for devices such as mobile phones or calculators which would be charged up by simply compressing water to high pressure. Pressure would then be released on demand, for the fluid to flow through micro-channels. When water travels, or streams over a surface, the ions in the water "rub" against the solid, leaving the surface slightly charged. Kinetic energy from the moving ions would thus be converted to electrical energy. Although the power generated from a single channel is extremely small, millions of parallel micro-channels can be used to increase the power output.
This streaming potential, water-flow phenomenon was discovered in 1859 by German physicist Georg Hermann Quincke.
Electrokinetic instabilities
The fluid flows in microfluidic and nanofluidic devices are often stable and strongly damped by viscous forces (with Reynolds numbers of order unity or smaller). However, heterogeneous ionic conductivity fields in the presence of applied electric fields can, under certain conditions, generate an unstable flow field owing to electrokinetic instabilities (EKI). Conductivity gradients are prevalent in on-chip electrokinetic processes such as preconcentration methods (e.g. field amplified sample stacking and isoelectric focusing), multidimensional assays, and systems with poorly specified sample chemistry. The dynamics and periodic morphology of electrokinetic instabilities are similar to other systems with Rayleigh–Taylor instabilities. The particular case of a flat plane geometry with homogeneous ion injection at the bottom side leads to a mathematical framework identical to that of Rayleigh–Bénard convection.
EKIs can be leveraged for rapid mixing or can cause undesirable dispersion in sample injection, separation and stacking. These instabilities are caused by a coupling of electric fields and ionic conductivity gradients that results in an electric body force in the bulk liquid, outside the electric double layer, which can generate temporal, convective, and absolute flow instabilities. Electrokinetic flows with conductivity gradients become unstable when the electroviscous stretching and folding of conductivity interfaces grows faster than the dissipative effect of molecular diffusion.
Since these flows are characterized by low velocities and small length scales, the Reynolds number is below 0.01 and the flow is laminar. The onset of instability in these flows is best described by an electric "Rayleigh number".
Misc
Liquids can be printed at nanoscale by pyro-EHD.
See also
Magnetohydrodynamic drive
Magnetohydrodynamics
Electrodynamic droplet deformation
Electrospray
Electrokinetic phenomena
Optoelectrofluidics
Electrostatic precipitator
List of textbooks in electromagnetism
References
External links
Dr. Larry Kostiuk's website.
Science-daily article about the discovery.
BBC article with graphics.
Electrodynamics
Energy conversion
Fluid dynamics | 0.79072 | 0.978719 | 0.773893 |
Conservation of energy | The law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. In the case of a closed system the principle says that the total amount of energy within the system can only be changed through energy entering or leaving the system. Energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another. For instance, chemical energy is converted to kinetic energy when a stick of dynamite explodes. If one adds up all forms of energy that were released in the explosion, such as the kinetic energy and potential energy of the pieces, as well as heat and sound, one will get the exact decrease of chemical energy in the combustion of the dynamite.
Classically, conservation of energy was distinct from conservation of mass. However, special relativity shows that mass is related to energy and vice versa by E = mc², the equation representing mass–energy equivalence, and science now takes the view that mass–energy as a whole is conserved. Theoretically, this implies that any object with mass can itself be converted to pure energy, and vice versa. However, this is believed to be possible only under the most extreme of physical conditions, such as likely existed in the universe very shortly after the Big Bang or when black holes emit Hawking radiation.
Given the stationary-action principle, conservation of energy can be rigorously proven by Noether's theorem as a consequence of continuous time translation symmetry; that is, from the fact that the laws of physics do not change over time.
A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Depending on the definition of energy, conservation of energy can arguably be violated by general relativity on the cosmological scale.
History
Ancient philosophers as far back as Thales of Miletus (c. 550 BCE) had inklings of the conservation of some underlying substance of which everything is made. However, there is no particular reason to identify their theories with what we know today as "mass-energy" (for example, Thales thought it was water). Empedocles (490–430 BCE) wrote that in his universal system, composed of four roots (earth, air, water, fire), "nothing comes to be or perishes"; instead, these elements suffer continual rearrangement. Epicurus (341–270 BCE) on the other hand believed everything in the universe to be composed of indivisible units of matter—the ancient precursor to 'atoms'—and he too had some idea of the necessity of conservation, stating that "the sum total of things was always such as it is now, and such it will ever remain."
In 1605, the Flemish scientist Simon Stevin was able to solve a number of problems in statics based on the principle that perpetual motion was impossible.
In 1639, Galileo published his analysis of several situations—including the celebrated "interrupted pendulum"—which can be described (in modern language) as conservatively converting potential energy to kinetic energy and back again. Essentially, he pointed out that the height a moving body rises is equal to the height from which it falls, and used this observation to infer the idea of inertia. The remarkable aspect of this observation is that the height to which a moving body ascends on a frictionless surface does not depend on the shape of the surface.
In 1669, Christiaan Huygens published his laws of collision. Among the quantities he listed as being invariant before and after the collision of bodies were both the sum of their linear momenta as well as the sum of their kinetic energies. However, the difference between elastic and inelastic collision was not understood at the time. This led to the dispute among later researchers as to which of these conserved quantities was the more fundamental. In his Horologium Oscillatorium, he gave a much clearer statement regarding the height of ascent of a moving body, and connected this idea with the impossibility of perpetual motion. Huygens's study of the dynamics of pendulum motion was based on a single principle: that the center of gravity of a heavy object cannot lift itself.
Between 1676 and 1689, Gottfried Leibniz first attempted a mathematical formulation of the kind of energy that is associated with motion (kinetic energy). Using Huygens's work on collision, Leibniz noticed that in many mechanical systems (of several masses mi, each with velocity vi),
∑ m_i v_i²
was conserved so long as the masses did not interact. He called this quantity the vis viva or living force of the system. The principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time, including Isaac Newton, held that the conservation of momentum, which holds even in systems with friction, as defined by the momentum:
∑ m_i v_i
was the conserved vis viva. It was later shown that both quantities are conserved simultaneously given the proper conditions, such as in an elastic collision.
In 1687, Isaac Newton published his Principia, which set out his laws of motion. It was organized around the concept of force and momentum. However, the researchers were quick to recognize that the principles set out in the book, while fine for point masses, were not sufficient to tackle the motions of rigid and fluid bodies. Some other principles were also required.
By the 1690s, Leibniz was arguing that conservation of vis viva and conservation of momentum undermined the then-popular philosophical doctrine of interactionist dualism. (During the 19th century, when conservation of energy was better understood, Leibniz's basic argument would gain widespread acceptance. Some modern scholars continue to champion specifically conservation-based attacks on dualism, while others subsume the argument into a more general argument about causal closure.)
The law of conservation of vis viva was championed by the father and son duo, Johann and Daniel Bernoulli. The former enunciated the principle of virtual work as used in statics in its full generality in 1715, while the latter based his Hydrodynamica, published in 1738, on this single vis viva conservation principle. Daniel's study of loss of vis viva of flowing water led him to formulate the Bernoulli's principle, which asserts the loss to be proportional to the change in hydrodynamic pressure. Daniel also formulated the notion of work and efficiency for hydraulic machines; and he gave a kinetic theory of gases, and linked the kinetic energy of gas molecules with the temperature of the gas.
This focus on the vis viva by the continental physicists eventually led to the discovery of stationarity principles governing mechanics, such as the D'Alembert's principle, Lagrangian, and Hamiltonian formulations of mechanics.
Émilie du Châtelet (1706–1749) proposed and tested the hypothesis of the conservation of total energy, as distinct from momentum. Inspired by the theories of Gottfried Leibniz, she repeated and publicized an experiment originally devised by Willem 's Gravesande in 1722 in which balls were dropped from different heights into a sheet of soft clay. Each ball's kinetic energy—as indicated by the quantity of material displaced—was shown to be proportional to the square of the velocity. The deformation of the clay was found to be directly proportional to the height from which the balls were dropped, equal to the initial potential energy. Some earlier workers, including Newton and Voltaire, had believed that "energy" was not distinct from momentum and therefore proportional to velocity. According to this understanding, the deformation of the clay should have been proportional to the square root of the height from which the balls were dropped. In classical physics, the correct formula is E_k = ½mv², where E_k is the kinetic energy of an object, m its mass and v its speed. On this basis, du Châtelet proposed that energy must always have the same dimensions in any form, which is necessary to be able to consider it in different forms (kinetic, potential, heat, ...).
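The proportionality argument can be checked with a short numerical sketch (the ball mass and drop heights below are arbitrary illustrative values, not from the historical experiment): for a ball falling freely through a height h, the impact speed is v = √(2gh), so a quantity proportional to v² grows linearly with h, while one proportional to v grows only with √h.

```python
import math

g = 9.81      # gravitational acceleration, m/s^2
m = 0.5       # ball mass in kg (illustrative value)

for h in (1.0, 2.0, 4.0, 8.0):          # drop heights in metres
    v = math.sqrt(2 * g * h)            # impact speed from free fall
    kinetic_energy = 0.5 * m * v**2     # scales linearly with h
    momentum_like = m * v               # scales with sqrt(h)
    print(f"h = {h:4.1f} m  v = {v:5.2f} m/s  "
          f"E_k = {kinetic_energy:6.2f} J  m*v = {momentum_like:5.2f} kg*m/s")
```

Doubling the drop height doubles the kinetic energy but increases m·v only by a factor of √2, which is the distinction the clay experiment made visible.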
Engineers such as John Smeaton, Peter Ewart, Gustave-Adolphe Hirn, and Marc Seguin recognized that conservation of momentum alone was not adequate for practical calculation and made use of Leibniz's principle. The principle was also championed by some chemists such as William Hyde Wollaston. Academics such as John Playfair were quick to point out that kinetic energy is clearly not conserved. This is obvious to a modern analysis based on the second law of thermodynamics, but in the 18th and 19th centuries, the fate of the lost energy was still unknown.
Gradually it came to be suspected that the heat inevitably generated by motion under friction was another form of vis viva. In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing theories of vis viva and caloric theory. Count Rumford's 1798 observations of heat generation during the boring of cannons added more weight to the view that mechanical motion could be converted into heat and, as importantly, that the conversion was quantitative and could be predicted (allowing for a universal conversion constant between kinetic energy and heat). Vis viva then started to be known as energy, after the term was first used in that sense by Thomas Young in 1807.
The recalibration of vis viva to
½ ∑ m_i v_i²,
which can be understood as converting kinetic energy to work, was largely the result of Gaspard-Gustave Coriolis and Jean-Victor Poncelet over the period 1819–1839. The former called the quantity quantité de travail (quantity of work) and the latter, travail mécanique (mechanical work), and both championed its use in engineering calculations.
In the paper Über die Natur der Wärme (German "On the Nature of Heat/Warmth"), published in 1837, Karl Friedrich Mohr gave one of the earliest general statements of the doctrine of the conservation of energy: "besides the 54 known chemical elements there is in the physical world one agent only, and this is called Kraft [energy or work]. It may appear, according to circumstances, as motion, chemical affinity, cohesion, electricity, light and magnetism; and from any one of these forms it can be transformed into any of the others."
Mechanical equivalent of heat
A key stage in the development of the modern conservation principle was the demonstration of the mechanical equivalent of heat. The caloric theory maintained that heat could neither be created nor destroyed, whereas conservation of energy entails the contrary principle that heat and mechanical work are interchangeable.
In the middle of the eighteenth century, Mikhail Lomonosov, a Russian scientist, postulated his corpusculo-kinetic theory of heat, which rejected the idea of a caloric. Through the results of empirical studies, Lomonosov came to the conclusion that heat was not transferred through the particles of the caloric fluid.
In 1798, Count Rumford (Benjamin Thompson) performed measurements of the frictional heat generated in boring cannons and developed the idea that heat is a form of kinetic energy; his measurements refuted caloric theory, but were imprecise enough to leave room for doubt.
The mechanical equivalence principle was first stated in its modern form by the German surgeon Julius Robert von Mayer in 1842. Mayer reached his conclusion on a voyage to the Dutch East Indies, where he found that his patients' blood was a deeper red because they were consuming less oxygen, and therefore less energy, to maintain their body temperature in the hotter climate. He discovered that heat and mechanical work were both forms of energy, and in 1845, after improving his knowledge of physics, he published a monograph that stated a quantitative relationship between them.
Meanwhile, in 1843, James Prescott Joule independently discovered the mechanical equivalent in a series of experiments. In one of them, now called the "Joule apparatus", a descending weight attached to a string caused a paddle immersed in water to rotate. He showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle.
Over the period 1840–1843, similar work was carried out by engineer Ludwig A. Colding, although it was little known outside his native Denmark.
Both Joule's and Mayer's work suffered from resistance and neglect but it was Joule's that eventually drew the wider recognition.
In 1844, the Welsh scientist William Robert Grove postulated a relationship between mechanics, heat, light, electricity, and magnetism by treating them all as manifestations of a single "force" (energy in modern terms). In 1846, Grove published his theories in his book The Correlation of Physical Forces. In 1847, drawing on the earlier work of Joule, Sadi Carnot, and Émile Clapeyron, Hermann von Helmholtz arrived at conclusions similar to Grove's and published his theories in his book Über die Erhaltung der Kraft (On the Conservation of Force, 1847). The general modern acceptance of the principle stems from this publication.
In 1850, the Scottish mathematician William Rankine first used the phrase the law of the conservation of energy for the principle.
In 1877, Peter Guthrie Tait claimed that the principle originated with Sir Isaac Newton, based on a creative reading of propositions 40 and 41 of the Philosophiae Naturalis Principia Mathematica. This is now regarded as an example of Whig history.
Mass–energy equivalence
Matter is composed of atoms and what makes up atoms. Matter has intrinsic or rest mass. In the limited range of recognized experience of the nineteenth century, it was found that such rest mass is conserved. Einstein's 1905 theory of special relativity showed that rest mass corresponds to an equivalent amount of rest energy. This means that rest mass can be converted to or from equivalent amounts of (non-material) forms of energy, for example, kinetic energy, potential energy, and electromagnetic radiant energy. When this happens, as recognized in twentieth-century experience, rest mass is not conserved, unlike the total mass or total energy. All forms of energy contribute to the total mass and total energy.
For example, an electron and a positron each have rest mass. They can perish together, converting their combined rest energy into photons which have electromagnetic radiant energy but no rest mass. If this occurs within an isolated system that does not release the photons or their energy into the external surroundings, then neither the total mass nor the total energy of the system will change. The produced electromagnetic radiant energy contributes just as much to the inertia (and to any weight) of the system as did the rest mass of the electron and positron before their demise. Likewise, non-material forms of energy can perish into matter, which has rest mass.
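As a simple numerical illustration (using standard physical constants, not figures from the text), the rest energy released when an electron and a positron annihilate at rest follows directly from E = mc²:

```python
# Rest energy released in electron-positron annihilation at rest (illustrative).
m_e = 9.109_383_7e-31      # electron (and positron) rest mass, kg
c   = 2.997_924_58e8       # speed of light, m/s
eV  = 1.602_176_634e-19    # joules per electronvolt

rest_energy_joules = 2 * m_e * c**2          # both particles contribute
rest_energy_MeV = rest_energy_joules / eV / 1e6

print(f"Total photon energy: {rest_energy_joules:.3e} J "
      f"= {rest_energy_MeV:.3f} MeV")        # about 1.022 MeV (two 511 keV photons)
```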
Thus, conservation of energy (total, including material or rest energy) and conservation of mass (total, not just rest) are one (equivalent) law. In the 19th century, these had appeared as two seemingly-distinct laws.
Conservation of energy in beta decay
The discovery in 1911 that electrons emitted in beta decay have a continuous rather than a discrete spectrum appeared to contradict conservation of energy, under the then-current assumption that beta decay is the simple emission of an electron from a nucleus. This problem was eventually resolved in 1933 by Enrico Fermi who proposed the correct description of beta-decay as the emission of both an electron and an antineutrino, which carries away the apparently missing energy.
First law of thermodynamics
For a closed thermodynamic system, the first law of thermodynamics may be stated as:
δQ = dU + δW, or equivalently, dU = δQ − δW,
where δQ is the quantity of energy added to the system by a heating process, δW is the quantity of energy lost by the system due to work done by the system on its surroundings, and dU is the change in the internal energy of the system.
The δ's before the heat and work terms are used to indicate that they describe an increment of energy which is to be interpreted somewhat differently than the increment of internal energy (see Inexact differential). Work and heat refer to kinds of process which add or subtract energy to or from a system, while the internal energy is a property of a particular state of the system when it is in unchanging thermodynamic equilibrium. Thus the term "heat energy" for δQ means "that amount of energy added as a result of heating" rather than referring to a particular form of energy. Likewise, the term "work energy" for δW means "that amount of energy lost as a result of work". Thus one can state the amount of internal energy possessed by a thermodynamic system that one knows is presently in a given state, but one cannot tell, just from knowledge of the given present state, how much energy has in the past flowed into or out of the system as a result of its being heated or cooled, nor as a result of work being performed on or by the system.
Entropy is a function of the state of a system which tells of limitations of the possibility of conversion of heat into work.
For a simple compressible system, the work performed by the system may be written:
δW = P dV,
where P is the pressure and dV is a small change in the volume of the system, each of which are system variables. In the fictive case in which the process is idealized and infinitely slow, so as to be called quasi-static, and regarded as reversible, the heat being transferred from a source with temperature infinitesimally above the system temperature, the heat energy may be written
δQ = T dS,
where T is the temperature and dS is a small change in the entropy of the system. Temperature and entropy are variables of the state of a system.
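As a numerical illustration of this bookkeeping (the gas amount, temperature, and volumes below are invented example values, not taken from the text), the following sketch integrates δW = P dV for a quasi-static isothermal expansion of an ideal gas and applies the first law with dU = 0:

```python
import math

# Quasi-static isothermal expansion of an ideal gas (illustrative numbers).
n, R, T = 1.0, 8.314, 300.0        # moles, gas constant (J/mol/K), temperature (K)
V1, V2 = 0.010, 0.020              # initial and final volume, m^3

# Numerical integration of delta_W = P dV with P = nRT/V.
steps = 100_000
dV = (V2 - V1) / steps
work = sum(n * R * T / (V1 + (i + 0.5) * dV) * dV for i in range(steps))

work_exact = n * R * T * math.log(V2 / V1)   # closed form for comparison
delta_U = 0.0                                 # ideal gas at constant temperature
heat = delta_U + work                         # first law: Q = dU + W

print(f"W = {work:8.2f} J (numerical) vs {work_exact:8.2f} J (nRT ln(V2/V1))")
print(f"Q = {heat:8.2f} J, dU = {delta_U} J")
```

The heat absorbed equals the work done on the surroundings, so the internal energy is unchanged, as the first law requires for this process.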
If an open system (in which mass may be exchanged with the environment) has several walls such that the mass transfer is through rigid walls separate from the heat and work transfers, then the first law may be written as
dU = δQ − δW + ∑ h_i dM_i,
where dM_i is the added mass of species i and h_i is the corresponding enthalpy per unit mass. Note that generally dS ≠ δQ/T in this case, as matter carries its own entropy. Instead, dS = δQ/T + ∑ s_i dM_i, where s_i is the entropy per unit mass of species i, from which we recover the fundamental thermodynamic relation
dU = T dS − P dV + ∑ μ_i dN_i
because the chemical potential μ_i is the partial molar Gibbs free energy of species i and the Gibbs free energy G = H − TS.
Noether's theorem
The conservation of energy is a common feature in many physical theories. From a mathematical point of view it is understood as a consequence of Noether's theorem, developed by Emmy Noether in 1915 and first published in 1918. In any physical theory that obeys the stationary-action principle, the theorem states that every continuous symmetry has an associated conserved quantity; if the theory's symmetry is time invariance, then the conserved quantity is called "energy". The energy conservation law is a consequence of the shift symmetry of time; energy conservation is implied by the empirical fact that the laws of physics do not change with time itself. Philosophically this can be stated as "nothing depends on time per se". In other words, if the physical system is invariant under the continuous symmetry of time translation, then its energy (which is the canonical conjugate quantity to time) is conserved. Conversely, systems that are not invariant under shifts in time (e.g. systems with time-dependent potential energy) do not exhibit conservation of energy – unless we consider them to exchange energy with another, external system so that the theory of the enlarged system becomes time-invariant again. Conservation of energy for finite systems is valid in physical theories such as special relativity and quantum theory (including QED) in the flat space-time.
Special relativity
With the discovery of special relativity by Henri Poincaré and Albert Einstein, the energy was proposed to be a component of an energy-momentum 4-vector. Each of the four components (one of energy and three of momentum) of this vector is separately conserved across time, in any closed system, as seen from any given inertial reference frame. Also conserved is the vector length (Minkowski norm), which is the rest mass for single particles, and the invariant mass for systems of particles (where momenta and energy are separately summed before the length is calculated).
The relativistic energy of a single massive particle contains a term related to its rest mass in addition to its kinetic energy of motion. In the limit of zero kinetic energy (or equivalently in the rest frame) of a massive particle, or else in the center of momentum frame for objects or systems which retain kinetic energy, the total energy of a particle or object (including internal kinetic energy in systems) is proportional to the rest mass or invariant mass, as described by the equation E = mc².
Thus, the rule of conservation of energy over time in special relativity continues to hold, so long as the reference frame of the observer is unchanged. This applies to the total energy of systems, although different observers disagree as to the energy value. Also conserved, and invariant to all observers, is the invariant mass, which is the minimal system mass and energy that can be seen by any observer, and which is defined by the energy–momentum relation.
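A brief sketch (with a proton chosen arbitrarily as the example particle) of how observers moving at different speeds assign different energies and momenta to the same particle while agreeing on the invariant mass computed from the energy–momentum relation E² − (pc)² = (mc²)²:

```python
import math

c = 2.997_924_58e8          # speed of light, m/s
m = 1.672_621_9e-27         # proton rest mass, kg (example particle)

def energy_momentum(v):
    """Relativistic energy (J) and momentum (kg*m/s) of the particle at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * m * c**2, gamma * m * v

for v in (0.0, 0.5 * c, 0.9 * c):       # the same particle seen from different frames
    E, p = energy_momentum(v)
    invariant_mass = math.sqrt(E**2 - (p * c) ** 2) / c**2
    print(f"v = {v/c:3.1f} c  E = {E:.3e} J  invariant mass = {invariant_mass:.4e} kg")
```

The printed energy changes from frame to frame, but the invariant mass recovered from E and p is the same in every case.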
General relativity
General relativity introduces new phenomena. In an expanding universe, photons spontaneously redshift and tethers spontaneously gain tension; if vacuum energy is positive, the total vacuum energy of the universe appears to spontaneously increase as the volume of space increases. Some scholars claim that energy is no longer meaningfully conserved in any identifiable form.
John Baez's view is that energy–momentum conservation is not well-defined except in certain special cases. Energy-momentum is typically expressed with the aid of a stress–energy–momentum pseudotensor. However, since pseudotensors are not tensors, they do not transform cleanly between reference frames. If the metric under consideration is static (that is, does not change with time) or asymptotically flat (that is, at an infinite distance away spacetime looks empty), then energy conservation holds without major pitfalls. In practice, some metrics, notably the Friedmann–Lemaître–Robertson–Walker metric that appears to govern the universe, do not satisfy these constraints and energy conservation is not well defined. Besides being dependent on the coordinate system, pseudotensor energy is dependent on the type of pseudotensor in use; for example, the energy exterior to a Kerr–Newman black hole is twice as large when calculated from Møller's pseudotensor as it is when calculated using the Einstein pseudotensor.
For asymptotically flat universes, Einstein and others salvage conservation of energy by introducing a specific global gravitational potential energy that cancels out mass-energy changes triggered by spacetime expansion or contraction. This global energy has no well-defined density and cannot technically be applied to a non-asymptotically flat universe; however, for practical purposes this can be finessed, and so by this view, energy is conserved in our universe. Alan Guth stated that the universe might be "the ultimate free lunch", and theorized that, when accounting for gravitational potential energy, the net energy of the Universe is zero.
Quantum theory
In quantum mechanics, the energy of a quantum system is described by a self-adjoint (or Hermitian) operator called the Hamiltonian, which acts on the Hilbert space (or a space of wave functions) of the system. If the Hamiltonian is a time-independent operator, the probability distribution of measurement outcomes does not change in time as the system evolves. Thus the expectation value of energy is also time independent. The local energy conservation in quantum field theory is ensured by the quantum Noether's theorem for the energy-momentum tensor operator. Thus energy is conserved by the normal unitary evolution of a quantum system.
However, when the non-unitary Born rule is applied, the system's energy is measured with an energy that can be below or above the expectation value, if the system was not in an energy eigenstate. (For macroscopic systems, this effect is usually too small to measure.) The disposition of this energy gap is not well-understood; most physicists believe that the energy is transferred to or from the macroscopic environment in the course of the measurement process, while others believe that the observable energy is only conserved "on average". No experiment has been confirmed as definitive evidence of violations of the conservation of energy principle in quantum mechanics, but that does not rule out that some newer experiments, as proposed, may find evidence of violations of the conservation of energy principle in quantum mechanics.
Status
In the context of perpetual motion machines such as the Orbo, Professor Eric Ash has argued at the BBC: "Denying [conservation of energy] would undermine not just little bits of science - the whole edifice would be no more. All of the technology on which we built the modern world would lie in ruins". It is because of conservation of energy that "we know - without having to examine details of a particular device - that Orbo cannot work."
Energy conservation has been a foundational physical principle for about two hundred years. From the point of view of modern general relativity, the lab environment can be well approximated by Minkowski spacetime, where energy is exactly conserved. The entire Earth can be well approximated by the Schwarzschild metric, where again energy is exactly conserved. Given all the experimental evidence, any new theory (such as quantum gravity), in order to be successful, will have to explain why energy has appeared to always be exactly conserved in terrestrial experiments. In some speculative theories, corrections to quantum mechanics are too small to be detected at anywhere near the current TeV level accessible through particle accelerators. Doubly special relativity models may argue for a breakdown in energy-momentum conservation for sufficiently energetic particles; such models are constrained by observations that cosmic rays appear to travel for billions of years without displaying anomalous non-conservation behavior. Some interpretations of quantum mechanics claim that observed energy tends to increase when the Born rule is applied due to localization of the wave function. If true, objects could be expected to spontaneously heat up; thus, such models are constrained by observations of large, cool astronomical objects as well as the observation of (often supercooled) laboratory experiments.
Milton A. Rothman wrote that the law of conservation of energy has been verified by nuclear physics experiments to an accuracy of one part in a thousand million million (10¹⁵). He then defines its precision as "perfect for all practical purposes".
See also
Energy quality
Energy transformation
Lagrangian mechanics
Laws of thermodynamics
Zero-energy universe
References
Bibliography
Modern accounts
Goldstein, Martin, and Inge F., (1993). The Refrigerator and the Universe. Harvard Univ. Press. A gentle introduction.
Stenger, Victor J. (2000). Timeless Reality. Prometheus Books. Especially chpt. 12. Nontechnical.
History of ideas
Kuhn, T.S. (1957) "Energy conservation as an example of simultaneous discovery", in M. Clagett (ed.) Critical Problems in the History of Science pp.321–56
External links
MISN-0-158 The First Law of Thermodynamics (PDF file) by Jerzy Borysowicz for Project PHYSNET.
Articles containing video clips
Conservation laws
Energy (physics)
Laws of thermodynamics | 0.774721 | 0.998926 | 0.77389 |
Force field (chemistry) | In the context of chemistry, molecular physics, physical chemistry, and molecular modelling, a force field is a computational model that is used to describe the forces between atoms (or collections of atoms) within molecules or between molecules as well as in crystals. Force fields are a variety of interatomic potentials. More precisely, the force field refers to the functional form and parameter sets used to calculate the potential energy of a system on the atomistic level. Force fields are usually used in molecular dynamics or Monte Carlo simulations. The parameters for a chosen energy function may be derived from classical laboratory experiment data, calculations in quantum mechanics, or both. Force fields utilize the same concept as force fields in classical physics, with the main difference being that the force field parameters in chemistry describe the energy landscape on the atomistic level. From a force field, the acting forces on every particle are derived as a gradient of the potential energy with respect to the particle coordinates.
A large number of different force field types exist today (e.g. for organic molecules, ions, polymers, minerals, and metals). Depending on the material, different functional forms are usually chosen for the force fields since different types of atomistic interactions dominate the material behavior.
There are various criteria that can be used for categorizing force field parametrization strategies. An important differentiation is 'component-specific' and 'transferable'. For a component-specific parametrization, the considered force field is developed solely for describing a single given substance (e.g. water). For a transferable force field, all or some parameters are designed as building blocks and become transferable/ applicable for different substances (e.g. methyl groups in alkane transferable force fields). A different important differentiation addresses the physical structure of the models: All-atom force fields provide parameters for every type of atom in a system, including hydrogen, while united-atom interatomic potentials treat the hydrogen and carbon atoms in methyl groups and methylene bridges as one interaction center. Coarse-grained potentials, which are often used in long-time simulations of macromolecules such as proteins, nucleic acids, and multi-component complexes, sacrifice chemical details for higher computing efficiency.
Force fields for molecular systems
The basic functional form of potential energy for modeling molecular systems includes intramolecular interaction terms for interactions of atoms that are linked by covalent bonds and intermolecular (i.e. nonbonded also termed noncovalent) terms that describe the long-range electrostatic and van der Waals forces. The specific decomposition of the terms depends on the force field, but a general form for the total energy in an additive force field can be written as
E_total = E_bonded + E_nonbonded,
where the components of the covalent and noncovalent contributions are given by the following summations:
E_bonded = E_bond + E_angle + E_dihedral
E_nonbonded = E_electrostatic + E_vdW
The bond and angle terms are usually modeled by quadratic energy functions that do not allow bond breaking. A more realistic description of a covalent bond at higher stretching is provided by the more expensive Morse potential. The functional form for dihedral energy is variable from one force field to another. Additional, "improper torsional" terms may be added to enforce the planarity of aromatic rings and other conjugated systems, and "cross-terms" that describe the coupling of different internal variables, such as angles and bond lengths. Some force fields also include explicit terms for hydrogen bonds.
The nonbonded terms are computationally most intensive. A popular choice is to limit interactions to pairwise energies. The van der Waals term is usually computed with a Lennard-Jones potential or the Mie potential and the electrostatic term with Coulomb's law. However, both can be buffered or scaled by a constant factor to account for electronic polarizability. A large number of force fields based on this or similar energy expressions have been proposed in the past decades for modeling different types of materials such as molecular substances, metals, glasses etc. - see below for a comprehensive list of force fields.
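As an illustration of such a pairwise nonbonded term, the sketch below evaluates a 12-6 Lennard-Jones plus Coulomb energy for a single atom pair; the σ, ε, and partial-charge values are placeholders chosen only to be of a plausible order of magnitude, not parameters of any published force field:

```python
import math

EPSILON_0 = 8.854_187_8128e-12      # vacuum permittivity, F/m
E_CHARGE  = 1.602_176_634e-19       # elementary charge, C

def pair_energy(r, sigma, epsilon, q1, q2):
    """12-6 Lennard-Jones plus Coulomb energy (J) for one atom pair at distance r (m)."""
    sr6 = (sigma / r) ** 6
    lennard_jones = 4.0 * epsilon * (sr6**2 - sr6)
    coulomb = (q1 * q2 * E_CHARGE**2) / (4.0 * math.pi * EPSILON_0 * r)
    return lennard_jones + coulomb

# Placeholder parameters for two partially charged atoms about 3 angstroms apart
# (illustrative only; partial charges q1, q2 are in units of the elementary charge).
print(pair_energy(r=3.0e-10, sigma=3.2e-10, epsilon=1.0e-21, q1=+0.4, q2=-0.4))
```

In an actual simulation this pair energy would be summed over all nonbonded pairs, typically with a cutoff or a long-range electrostatics scheme.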
Bond stretching
As it is rare for bonds to deviate significantly from their equilibrium values, the most simplistic approaches utilize a Hooke's law formula:
E_bond = (k_ij/2) (l_ij − l_0,ij)²,
where k_ij is the force constant, l_ij is the bond length, and l_0,ij is the value for the bond length between atoms i and j when all other terms in the force field are set to 0. The term l_0,ij is at times differently defined or taken at different thermodynamic conditions.
The bond stretching constant k_ij can be determined from the experimental infrared spectrum, Raman spectrum, or high-level quantum-mechanical calculations. The constant determines vibrational frequencies in molecular dynamics simulations. The stronger the bond is between atoms, the higher is the value of the force constant, and the higher the wavenumber (energy) in the IR/Raman spectrum.
Though the formula of Hooke's law provides a reasonable level of accuracy at bond lengths near the equilibrium distance, it is less accurate as one moves away. In order to model the Morse curve better one could employ cubic and higher powers. However, for most practical applications these differences are negligible, and inaccuracies in predictions of bond lengths are on the order of the thousandth of an angstrom, which is also the limit of reliability for common force fields. A Morse potential can be employed instead to enable bond breaking and higher accuracy, even though it is less efficient to compute. For reactive force fields, bond breaking and bond orders are additionally considered.
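The difference between the two descriptions can be seen in a small sketch; the force constant, well depth, width parameter, and equilibrium length below are arbitrary illustrative values in consistent but unspecified units:

```python
import math

def harmonic_bond(l, k=500.0, l0=1.0):
    """Hooke's-law bond energy, E = (k/2)(l - l0)^2; grows without bound."""
    return 0.5 * k * (l - l0) ** 2

def morse_bond(l, D=100.0, a=1.6, l0=1.0):
    """Morse bond energy, E = D(1 - exp(-a(l - l0)))^2; levels off toward D."""
    return D * (1.0 - math.exp(-a * (l - l0))) ** 2

for l in (0.9, 1.0, 1.1, 1.5, 3.0):
    print(f"l = {l:3.1f}  harmonic = {harmonic_bond(l):8.2f}  morse = {morse_bond(l):7.2f}")
```

Near the equilibrium length the two curves are similar, but at large stretch the harmonic energy keeps rising while the Morse energy saturates at the dissociation energy, which is what permits bond breaking.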
Electrostatic interactions
Electrostatic interactions are represented by a Coulomb energy, which utilizes atomic charges to represent chemical bonding ranging from covalent to polar covalent and ionic bonding. The typical formula is the Coulomb law:
E_Coulomb = q_i q_j / (4πε₀ r_ij),
where r_ij is the distance between two atoms i and j carrying charges q_i and q_j. The total Coulomb energy is a sum over all pairwise combinations of atoms and usually excludes pairs that are directly bonded (1–2) or separated by two bonds (1–3); interactions between atoms separated by three bonds (1–4) are often scaled.
Atomic charges can make dominant contributions to the potential energy, especially for polar molecules and ionic compounds, and are critical to simulate the geometry, interaction energy, and the reactivity. The assignment of charges usually uses some heuristic approach, with different possible solutions.
Force fields for crystal systems
Atomistic interactions in crystal systems deviate significantly from those in molecular systems, e.g. of organic molecules. In crystal systems, multi-body interactions in particular are important and cannot be neglected if high accuracy of the force field is the aim. For crystal systems with covalent bonding, bond order potentials are usually used, e.g. Tersoff potentials. For metal systems, embedded atom potentials are usually used. For metals, so-called Drude model potentials have also been developed, which describe a form of attachment of electrons to nuclei.
Parameterization
In addition to the functional form of the potentials, a force field consists of the parameters of these functions. Together, they specify the interactions on the atomistic level. The parametrization, i.e. determining the parameter values, is crucial for the accuracy and reliability of the force field. Different parametrization procedures have been developed for different substances, e.g. metals, ions, and molecules, and different material types usually call for different parametrization strategies. In general, two main routes can be distinguished: using data and information from the atomistic level, e.g. from quantum mechanical calculations or spectroscopic data, or using data on macroscopic properties, e.g. the hardness or compressibility of a given material. Often a combination of these routes is used. Hence, one way or the other, the force field parameters are always determined in an empirical way. Nevertheless, the term 'empirical' is often used in the context of force field parameters when macroscopic material property data was used for the fitting. Experimental data (microscopic and macroscopic) included in the fit comprise, for example, the enthalpy of vaporization, the enthalpy of sublimation, dipole moments, and various spectroscopic properties such as vibrational frequencies. Often, for molecular systems, intramolecular interactions are parametrized from quantum mechanical calculations in the gas phase, while intermolecular dispersive interactions are parametrized using macroscopic properties such as liquid densities. The assignment of atomic charges often follows quantum mechanical protocols with some heuristics, which can lead to significant deviation in representing specific properties.
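As a toy example of the bottom-up route (fitting parameters to atomistic-level reference data), the sketch below fits a harmonic force constant and equilibrium bond length to a short, invented bond-stretch energy scan by least squares; neither the numbers nor the workflow correspond to any specific published parametrization:

```python
import numpy as np

# Hypothetical QM bond-stretch scan: bond lengths (angstrom), relative energies (kcal/mol).
lengths  = np.array([0.95, 1.00, 1.05, 1.10, 1.15])
energies = np.array([1.30, 0.35, 0.00, 0.30, 1.20])

# Fit E(l) = 0.5 * k * (l - l0)^2 by expanding it as a quadratic a*l^2 + b*l + c.
a, b, c = np.polyfit(lengths, energies, 2)
k  = 2.0 * a          # force constant, kcal/mol/angstrom^2
l0 = -b / (2.0 * a)   # equilibrium bond length, angstrom

print(f"fitted k  = {k:.1f} kcal/mol/A^2")
print(f"fitted l0 = {l0:.3f} A")
```

Real parametrization workflows apply the same idea to many properties at once, typically weighting quantum mechanical and experimental reference data and validating against quantities not used in the fit.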
A large number of workflows and parametrization procedures have been employed in the past decades using different data and optimization strategies for determining the force field parameters. They differ significantly, which is also due to different focuses of different developments. The parameters for molecular simulations of biological macromolecules such as proteins, DNA, and RNA were often derived/ transferred from observations for small organic molecules, which are more accessible for experimental studies and quantum calculations.
Atom types are defined for different elements as well as for the same elements in sufficiently different chemical environments. For example, oxygen atoms in water and oxygen atoms in a carbonyl functional group are classified as different force field atom types. Typical molecular force field parameter sets include values for atomic mass, atomic charge, Lennard-Jones parameters for every atom type, as well as equilibrium values of bond lengths, bond angles, and dihedral angles. The bonded terms refer to pairs, triplets, and quadruplets of bonded atoms, and include values for the effective spring constant for each potential.
Heuristic force field parametrization procedures have been very successful for many years, but have recently been criticized, since they are usually not fully automated and therefore subject to some subjectivity of the developers, which also brings problems regarding the reproducibility of the parametrization procedure.
Efforts to provide open source codes and methods include openMM and openMD. The use of semi-automation or full automation, without input from chemical knowledge, is likely to increase inconsistencies at the level of atomic charges and in the assignment of the remaining parameters, and is likely to dilute the interpretability and performance of the parameters.
Force field databases
A large number of force fields have been published in the past decades, mostly in scientific publications. In recent years, some databases have attempted to collect, categorize, and make force fields digitally available. Different databases focus on different types of force fields. For example, the openKim database focuses on interatomic functions describing the individual interactions between specific elements. The TraPPE database focuses on transferable force fields of organic molecules (developed by the Siepmann group). The MolMod database focuses on molecular and ionic force fields (both component-specific and transferable).
Transferability and mixing function types
Functional forms and parameter sets have been defined by the developers of interatomic potentials and feature variable degrees of self-consistency and transferability. When functional forms of the potential terms vary or are mixed, the parameters from one interatomic potential function can typically not be used together with another interatomic potential function. In some cases, modifications can be made with minor effort, for example, from 9-6 Lennard-Jones potentials to 12-6 Lennard-Jones potentials. Transfers from Buckingham potentials to harmonic potentials, or from Embedded Atom Models to harmonic potentials, on the contrary, would require many additional assumptions and may not be possible.
In many cases, force fields can be combined straightforwardly. Often, however, additional specifications and assumptions are required.
Limitations
All interatomic potentials are based on approximations and experimental data and are therefore often termed empirical. Depending on the force field, performance ranges from higher accuracy than density functional theory (DFT) calculations, with access to systems and time scales millions of times larger, to little better than random guesses. The use of accurate representations of chemical bonding, combined with reproducible experimental data and validation, can lead to lasting interatomic potentials of high quality with far fewer parameters and assumptions in comparison to DFT-level quantum methods.
Possible limitations include atomic charges, also called point charges. Most force fields rely on point charges to reproduce the electrostatic potential around molecules, which works less well for anisotropic charge distributions. The remedy is that point charges have a clear interpretation and that virtual electrons can be added to capture essential features of the electronic structure, such as additional polarizability in metallic systems to describe the image potential, internal multipole moments in π-conjugated systems, and lone pairs in water. Electronic polarization of the environment may be better included by using polarizable force fields or using a macroscopic dielectric constant. However, application of one value of dielectric constant is a coarse approximation in the highly heterogeneous environments of proteins, biological membranes, minerals, or electrolytes.
All types of van der Waals forces are also strongly environment-dependent because these forces originate from interactions of induced and "instantaneous" dipoles (see Intermolecular force). The original Fritz London theory of these forces applies only in a vacuum. A more general theory of van der Waals forces in condensed media was developed by A. D. McLachlan in 1963 and included the original London's approach as a special case. The McLachlan theory predicts that van der Waals attractions in media are weaker than in vacuum and follow the like dissolves like rule, which means that different types of atoms interact more weakly than identical types of atoms. This is in contrast to combinatorial rules or Slater-Kirkwood equation applied for development of the classical force fields. The combinatorial rules state that the interaction energy of two dissimilar atoms (e.g., C...N) is an average of the interaction energies of corresponding identical atom pairs (i.e., C...C and N...N). According to McLachlan's theory, the interactions of particles in media can even be fully repulsive, as observed for liquid helium, however, the lack of vaporization and presence of a freezing point contradicts a theory of purely repulsive interactions. Measurements of attractive forces between different materials (Hamaker constant) have been explained by Jacob Israelachvili. For example, "the interaction between hydrocarbons across water is about 10% of that across vacuum". Such effects are represented in molecular dynamics through pairwise interactions that are spatially more dense in the condensed phase relative to the gas phase and reproduced once the parameters for all phases are validated to reproduce chemical bonding, density, and cohesive/surface energy.
Limitations have been strongly felt in protein structure refinement. The major underlying challenge is the huge conformation space of polymeric molecules, which grows beyond current computational feasibility when containing more than ~20 monomers. Participants in Critical Assessment of protein Structure Prediction (CASP) did not try to refine their models to avoid "a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure". Force fields have been applied successfully for protein structure refinement in different X-ray crystallography and NMR spectroscopy applications, especially using the program XPLOR. However, the refinement is driven mainly by a set of experimental constraints, and the interatomic potentials serve mainly to remove interatomic hindrances. The results of calculations were practically the same with rigid sphere potentials implemented in the program DYANA (calculations from NMR data), or with programs for crystallographic refinement that use no energy functions at all. These shortcomings are related to interatomic potentials and to the inability to sample the conformation space of large molecules effectively. The development of parameters to tackle such large-scale problems therefore requires new approaches. A specific problem area is homology modeling of proteins. Meanwhile, alternative empirical scoring functions have been developed for ligand docking, protein folding, homology model refinement, computational protein design, and modeling of proteins in membranes.
It was also argued that some protein force fields operate with energies that are irrelevant to protein folding or ligand binding. The parameters of protein force fields reproduce the enthalpy of sublimation, i.e., the energy of evaporation of molecular crystals. However, protein folding and ligand binding are thermodynamically closer to crystallization, or liquid-solid transitions, as these processes represent freezing of mobile molecules in condensed media. Thus, free energy changes during protein folding or ligand binding are expected to represent a combination of an energy similar to heat of fusion (energy absorbed during melting of molecular crystals), a conformational entropy contribution, and solvation free energy. The heat of fusion is significantly smaller than the enthalpy of sublimation. Hence, the potentials describing protein folding or ligand binding need more consistent parameterization protocols, e.g., as described for IFF. Indeed, the energies of H-bonds in proteins are ~ -1.5 kcal/mol when estimated from protein engineering or alpha helix to coil transition data, but the same energies estimated from sublimation enthalpy of molecular crystals were -4 to -6 kcal/mol, which is related to re-forming existing hydrogen bonds and not forming hydrogen bonds from scratch. The depths of modified Lennard-Jones potentials derived from protein engineering data were also smaller than in typical potential parameters and followed the like dissolves like rule, as predicted by McLachlan theory.
Force fields available in literature
Different force fields are designed for different purposes:
Classical
AMBER (Assisted Model Building and Energy Refinement) – widely used for proteins and DNA.
CFF (Consistent Force Field) – a family of force fields adapted to a broad variety of organic compounds, includes force fields for polymers, metals, etc. CFF was developed by Arieh Warshel, Lifson, and coworkers as a general method for unifying studies of energies, structures, and vibration of general molecules and molecular crystals. The CFF program, developed by Levitt and Warshel, is based on the Cartesian representation of all the atoms, and it served as the basis for many subsequent simulation programs.
CHARMM (Chemistry at HARvard Molecular Mechanics) – originally developed at Harvard, widely used for both small molecules and macromolecules
COSMOS-NMR – hybrid QM/MM force field adapted to various inorganic compounds, organic compounds, and biological macromolecules, including semi-empirical calculation of atomic charges NMR properties. COSMOS-NMR is optimized for NMR-based structure elucidation and implemented in COSMOS molecular modelling package.
CVFF – also used broadly for small molecules and macromolecules.
ECEPP – first force field for polypeptide molecules - developed by F.A. Momany, H.A. Scheraga and colleagues. ECEPP was developed specifically for the modeling of peptides and proteins. It uses fixed geometries of amino acid residues to simplify the potential energy surface. Thus, the energy minimization is conducted in the space of protein torsion angles. Both MM2 and ECEPP include potentials for H-bonds and torsion potentials for describing rotations around single bonds. ECEPP/3 was implemented (with some modifications) in Internal Coordinate Mechanics and FANTOM.
GROMOS (GROningen MOlecular Simulation) – a force field that comes as part of the GROMOS software, a general-purpose molecular dynamics computer simulation package for the study of biomolecular systems. GROMOS force field A-version has been developed for application to aqueous or apolar solutions of proteins, nucleotides, and sugars. A B-version to simulate gas phase isolated molecules is also available.
IFF (Interface Force Field) – covers metals, minerals, 2D materials, and polymers. IFF was developed for compounds across the periodic table and assumes a single energy expression for all of them, with 9-6 and 12-6 Lennard-Jones options. It assigns consistent charges, utilizes standard conditions as a reference state, reproduces structures, energies, and energy derivatives, and quantifies limitations for all included compounds. IFF is in most parts non-polarizable, but also comprises polarizable parts, e.g. for some metals (Au, W) and pi-conjugated molecules.
MMFF (Merck Molecular Force Field) – developed at Merck for a broad range of molecules.
MM2 was developed by Norman Allinger mainly for conformational analysis of hydrocarbons and other small organic molecules. It is designed to reproduce the equilibrium covalent geometry of molecules as precisely as possible. It implements a large set of parameters that is continuously refined and updated for many different classes of organic compounds (MM3 and MM4).
OPLS (Optimized Potential for Liquid Simulations) (variants include OPLS-AA, OPLS-UA, OPLS-2001, OPLS-2005, OPLS3e, OPLS4) – developed by William L. Jorgensen at the Yale University Department of Chemistry.
QCFF/PI – A general force fields for conjugated molecules.
UFF (Universal Force Field) – A general force field with parameters for the full periodic table up to and including the actinoids, developed at Colorado State University. The reliability is known to be poor due to lack of validation and interpretation of the parameters for nearly all claimed compounds, especially metals and inorganic compounds.
Polarizable
Several force fields explicitly capture polarizability, where a particle's effective charge can be influenced by electrostatic interactions with its neighbors. Core-shell models are common, which consist of a positively charged core particle, representing the polarizable atom, and a negatively charged particle attached to the core atom through a spring-like harmonic oscillator potential. Recent examples include polarizable models with virtual electrons that reproduce image charges in metals and polarizable biomolecular force fields.
AMBER – polarizable force field developed by Jim Caldwell and coworkers.
AMOEBA (Atomic Multipole Optimized Energetics for Biomolecular Applications) – force field developed by Pengyu Ren (University of Texas at Austin) and Jay W. Ponder (Washington University). AMOEBA force field is gradually moving to more physics-rich AMOEBA+.
CHARMM – polarizable force field developed by S. Patel (University of Delaware) and C. L. Brooks III (University of Michigan). Based on the classical Drude oscillator developed by Alexander MacKerell (University of Maryland, Baltimore) and Benoit Roux (University of Chicago).
CFF/ind and ENZYMIX – The first polarizable force field which has subsequently been used in many applications to biological systems.
COSMOS-NMR (Computer Simulation of Molecular Structure) – developed by Ulrich Sternberg and coworkers. Hybrid QM/MM force field enables explicit quantum-mechanical calculation of electrostatic properties using localized bond orbitals with fast BPT formalism. Atomic charge fluctuation is possible in each molecular dynamics step.
DRF90 – developed by P. Th. van Duijnen and coworkers.
NEMO (Non-Empirical Molecular Orbital) – procedure developed by Gunnar Karlström and coworkers at Lund University (Sweden)
PIPF – The polarizable intermolecular potential for fluids is an induced point-dipole force field for organic liquids and biopolymers. The molecular polarization is based on Thole's interacting dipole (TID) model and was developed by the research group of Jiali Gao at the University of Minnesota.
Polarizable Force Field (PFF) – developed by Richard A. Friesner and coworkers.
SP-basis Chemical Potential Equalization (CPE) – approach developed by R. Chelli and P. Procacci.
PHAST – polarizable potential developed by Chris Cioce and coworkers.
ORIENT – procedure developed by Anthony J. Stone (Cambridge University) and coworkers.
Gaussian Electrostatic Model (GEM) – a polarizable force field based on Density Fitting developed by Thomas A. Darden and G. Andrés Cisneros at NIEHS; and Jean-Philip Piquemal at Paris VI University.
Atomistic Polarizable Potential for Liquids, Electrolytes, and Polymers (APPLE&P), developed by Oleg Borodin, Dmitry Bedrov and coworkers, which is distributed by Wasatch Molecular Incorporated.
Polarizable procedure based on the Kim-Gordon approach developed by Jürg Hutter and coworkers (University of Zürich)
GFN-FF (Geometry, Frequency, and Noncovalent Interaction Force-Field) – a completely automated partially polarizable generic force-field for the accurate description of structures and dynamics of large molecules across the periodic table developed by Stefan Grimme and Sebastian Spicher at the University of Bonn.
WASABe v1.0 PFF (for Water, orgAnic Solvents, And Battery electrolytes) – an isotropic atomic dipole polarizable force field by Oleg Starovoytov for the accurate description of battery electrolytes in terms of thermodynamic and dynamic properties at high lithium salt concentrations in sulfonate solvents.
XED (eXtended Electron Distribution) - a polarizable force-field created as a modification of an atom-centered charge model, developed by Andy Vinter. Partially charged monopoles are placed surrounding atoms to simulate more geometrically accurate electrostatic potentials at a fraction of the expense of using quantum mechanical methods. Primarily used by software packages supplied by Cresset Biomolecular Discovery.
Reactive
EVB (Empirical valence bond) – reactive force field introduced by Warshel and coworkers for use in modeling chemical reactions in different environments. The EVB facilitates calculating activation free energies in condensed phases and in enzymes.
ReaxFF – reactive force field (interatomic potential) developed by Adri van Duin, William Goddard and coworkers. It is slower than classical MD (50x), needs parameter sets with specific validation, and has no validation for surface and interfacial energies. Parameters are non-interpretable. It can be used for atomistic-scale dynamical simulations of chemical reactions. Parallelized ReaxFF allows reactive simulations on >>1,000,000 atoms on large supercomputers.
Coarse-grained
DPD (Dissipative particle dynamics) – This is a method commonly applied in chemical engineering. It is typically used for studying the hydrodynamics of various simple and complex fluids which require consideration of time and length scales larger than those accessible to classical Molecular dynamics. The potential was originally proposed by Hoogerbrugge and Koelman, with later modifications by Español and Warren. The current state of the art was well documented in a CECAM workshop in 2008. Recently, work has been undertaken to capture some of the chemical subtleties relevant to solutions. This has led to work considering automated parameterisation of the DPD interaction potentials against experimental observables.
MARTINI – a coarse-grained potential developed by Marrink and coworkers at the University of Groningen, initially developed for molecular dynamics simulations of lipids, later extended to various other molecules. The force field applies a mapping of four heavy atoms to one CG interaction site and is parameterized with the aim of reproducing thermodynamic properties.
SAFT – A top-down coarse-grained model developed in the Molecular Systems Engineering group at Imperial College London fitted to liquid phase densities and vapor pressures of pure compounds by using the SAFT equation of state.
SIRAH – a coarse-grained force field developed by Pantano and coworkers of the Biomolecular Simulations Group, Institut Pasteur of Montevideo, Uruguay; developed for molecular dynamics of water, DNA, and proteins. Freely available for the AMBER and GROMACS packages.
VAMM (Virtual atom molecular mechanics) – a coarse-grained force field developed by Korkut and Hendrickson for molecular mechanics calculations such as large scale conformational transitions based on the virtual interactions of C-alpha atoms. It is a knowledge based force field and formulated to capture features dependent on secondary structure and on residue-specific contact information in proteins.
Machine learning
MACE (Multi Atomic Cluster Expansion) is a highly accurate machine learning force field architecture that combines the rigorous many-body expansion of the total potential energy with rotationally equivariant representations of the system.
ANI – a transferable neural network potential, built from atomic environment vectors, and able to provide DFT accuracy in terms of energies.
FFLUX (originally QCTFF) – a set of trained Kriging models which operate together to provide a molecular force field trained on Atoms in molecules or Quantum chemical topology energy terms, including electrostatic, exchange, and electron correlation.
TensorMol – a mixed model in which a neural network provides a short-range potential, whilst more traditional potentials add screened long-range terms.
Δ-ML – not a force field method but a model that adds learnt correctional energy terms to approximate and relatively computationally cheap quantum chemical methods in order to provide the accuracy of a higher-order, more computationally expensive quantum chemical model.
SchNet – a neural network utilising continuous-filter convolutional layers to predict chemical properties and potential energy surfaces.
PhysNet – a neural-network-based energy function to predict energies, forces, and (fluctuating) partial charges.
Water
The set of parameters used to model water or aqueous solutions (basically a force field for water) is called a water model. Many water models have been proposed; some examples are TIP3P, TIP4P, SPC, flexible simple point charge water model (flexible SPC), ST2, and mW. Other solvents and methods of solvent representation are also applied within computational chemistry and physics; these are termed solvent models.
Modified amino acids
Forcefield_PTM – An AMBER-based forcefield and webtool for modeling common post-translational modifications of amino acids in proteins developed by Chris Floudas and coworkers. It uses the ff03 charge model and has several side-chain torsion corrections parameterized to match the quantum chemical rotational surface.
Forcefield_NCAA - An AMBER-based forcefield and webtool for modeling common non-natural amino acids in proteins in condensed-phase simulations using the ff03 charge model. The charges have been reported to be correlated with hydration free energies of corresponding side-chain analogs.
Other
LFMM (Ligand Field Molecular Mechanics) - functions for the coordination sphere around transition metals based on the angular overlap model (AOM). Implemented in the Molecular Operating Environment (MOE) as DommiMOE and in Tinker.
VALBOND - a function for angle bending that is based on valence bond theory and works for large angular distortions, hypervalent molecules, and transition metal complexes. It can be incorporated into other force fields such as CHARMM and UFF.
Space travel under constant acceleration | Space travel under constant acceleration is a hypothetical method of space travel that involves the use of a propulsion system that generates a constant acceleration rather than the short, impulsive thrusts produced by traditional chemical rockets. For the first half of the journey the propulsion system would constantly accelerate the spacecraft toward its destination, and for the second half of the journey it would constantly decelerate the spaceship. Constant acceleration could be used to achieve relativistic speeds, making it a potential means of achieving human interstellar travel. This mode of travel has yet to be used in practice.
Constant-acceleration drives
Constant acceleration has two main advantages:
It is the fastest form of interplanetary and interstellar travel.
It creates its own artificial gravity, potentially sparing passengers from the effects of microgravity.
Constant thrust versus constant acceleration
Constant-thrust and constant-acceleration trajectories both involve a spacecraft firing its engine continuously. In a constant-thrust trajectory, the vehicle's acceleration increases during the thrusting period, since the use of fuel decreases the vehicle's mass. If, instead of constant thrust, the vehicle has constant acceleration, the engine thrust must decrease during the journey.
The spacecraft must flip its orientation halfway through the journey and decelerate the rest of the way, if it is required to rendezvous with its destination (as opposed to a flyby).
Interstellar travel
A spaceship using significant constant acceleration will approach the speed of light over interstellar distances, so special relativity effects including time dilation (the difference in time flow between ship time and local time) become important.
Expressions for covered distance and elapsed time
The distance traveled under constant proper acceleration, from the point of view of Earth as a function of the traveler's time, is expressed by the coordinate distance x as a function of the proper time τ at constant proper acceleration a. It is given by:

x(τ) = (c²/a) [cosh(aτ/c) − 1],

where c is the speed of light.
Under the same circumstances, the time elapsed on Earth (the coordinate time) as a function of the traveler's time is given by:

t(τ) = (c/a) sinh(aτ/c).
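As an illustration, the two expressions above can be evaluated numerically. The sketch below assumes a proper acceleration of 9.81 m/s² (roughly 1 g):

```python
import math

c = 299_792_458.0      # speed of light, m/s
a = 9.81               # assumed proper acceleration, m/s^2 (roughly 1 g)

def coordinate_distance(tau):
    """Distance covered in the planetary (Earth) frame after proper time tau, in metres."""
    return (c**2 / a) * (math.cosh(a * tau / c) - 1.0)

def coordinate_time(tau):
    """Time elapsed in the planetary (Earth) frame after proper time tau, in seconds."""
    return (c / a) * math.sinh(a * tau / c)

year = 365.25 * 24 * 3600.0
light_year = c * year
tau = 1.0 * year                                    # one year of ship (proper) time
print(coordinate_distance(tau) / light_year)        # about 0.57 light-years covered
print(coordinate_time(tau) / year)                  # about 1.19 Earth years elapsed
```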
Feasibility
A major limitation of constant acceleration is the fuel it requires. Constant acceleration is only feasible with the development of fuels with a much higher specific impulse than presently available.
There are two broad approaches to higher specific impulse propulsion:
Higher efficiency fuel (the motor ship approach). Two possibilities for the motor ship approach are nuclear and matter–antimatter based fuels.
Drawing propulsion energy from the environment as the ship passes through it (the sailing ship approach). One hypothetical sailing ship approach is discovering something equivalent to the parallelogram of force between wind and water which allows sails to propel a sailing ship.
Picking up fuel along the way (the ramjet approach) will lose efficiency as the spacecraft's speed increases relative to the planetary reference frame. This happens because the fuel must be accelerated to the spacecraft's velocity before its energy can be extracted, and that will cut the fuel efficiency dramatically.
A related issue is drag. If the near-light-speed space craft is interacting with matter that is moving slowly in the planetary reference frame, this will cause drag which will bleed off a portion of the engine's acceleration.
A second big issue facing ships using constant acceleration for interstellar travel is colliding with matter and radiation while en route. In mid-journey any such impact will be at near light speed, so the result will be dramatic.
Interstellar traveling speeds
If a space ship is using constant acceleration over interstellar distances, it will approach the speed of light for the middle part of its journey when viewed from the planetary frame of reference. This means that the effects of relativity will become important. The most important effect is that time will appear to pass at different rates in the ship frame and the planetary frame, and this means that the ship's speed and journey time will appear different in the two frames.
Planetary reference frame
From the planetary frame of reference, the ship's speed will appear to be limited by the speed of light — it can approach the speed of light, but never reach it. If a ship is using 1 g constant acceleration, it will appear to get near the speed of light in about a year, and have traveled about half a light year in distance. For the middle of the journey the ship's speed will be roughly the speed of light, and it will slow down again to zero over a year at the end of the journey.
As a rule of thumb, for a constant acceleration at 1 g (Earth gravity), the journey time, as measured on Earth, will be the distance in light years to the destination, plus 1 year. This rule of thumb will give answers that are slightly shorter than the exact calculated answer, but reasonably accurate.
Ship reference frame
From the frame of reference of those on the ship the acceleration will not change as the journey goes on. Instead the planetary reference frame will look more and more relativistic. This means that for voyagers on the ship the journey will appear to be much shorter than what planetary observers see.
At a constant acceleration of 1 g, a rocket could travel the diameter of our galaxy in about 12 years ship time, and about 113,000 years planetary time. If the last half of the trip involves deceleration at 1 g, the trip would take about 24 years. If the trip is merely to the nearest star, with deceleration the last half of the way, it would take 3.6 years.
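The figures quoted above can be checked with a rough calculation. The sketch below assumes acceleration at 1 g to the midpoint followed by deceleration at 1 g, and uses the approximate conversion 1 g ≈ 1.03 ly/yr²; the exact results depend on that conversion and on the distance assumed for the galaxy:

```python
import math

c = 1.0        # speed of light in light-years per year
a = 1.03       # 1 g expressed in ly/yr^2 (approximate conversion)

def ship_time(distance_ly):
    """Proper (ship) time in years for a trip with a mid-way flip and deceleration."""
    half = distance_ly / 2.0
    return 2.0 * (c / a) * math.acosh(1.0 + a * half / c**2)

def planet_time(distance_ly):
    """Coordinate (planetary-frame) time in years for the same trip."""
    half = distance_ly / 2.0
    return 2.0 * (c / a) * math.sqrt((1.0 + a * half / c**2) ** 2 - 1.0)

print(ship_time(4.3), planet_time(4.3))          # nearest stars: about 3.6 and 5.9 years
print(ship_time(100_000), planet_time(100_000))  # galaxy: about 22 and 100,000 years
```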
In fiction
The spacecraft of George O. Smith's Venus Equilateral stories are all constant acceleration ships. Normal acceleration is 1 g, but in "The External Triangle" it is mentioned that accelerations of up to 5 g are possible if the crew is drugged with gravanol to counteract the effects of the g-load.
"Sky Lift" is a science fiction short story by Robert A. Heinlein, first published 1953. In the story, a torchship pilot lights out from Earth orbit to Pluto on a mission to deliver a cure to a plague ravaging a research station.
Tau Zero, a hard science fiction novel by Poul Anderson, has a spaceship using a constant acceleration drive.
Spacecraft in Joe Haldeman's 1974 novel The Forever War make extensive use of constant acceleration; they require elaborate safety equipment to keep their occupants alive at high acceleration (up to 25 g), and accelerate at 1 g even when "at rest" to provide humans with a comfortable level of gravity.
In the Known Space universe, constructed by Larry Niven, Earth uses constant acceleration drives in the form of Bussard ramjets to help colonize the nearest planetary systems. In the non-known space novel A World Out of Time, Jerome Branch Corbell (for himself), "takes" a ramjet to the Galactic Center and back in 150 years ships time (most of it in cold sleep), but 3 million years passes on Earth.
In The Sparrow, by Mary Doria Russell, interstellar travel is achieved by converting a small asteroid into a constant acceleration spacecraft. Force is applied by ion engines fed with material mined from the asteroid itself.
In the Revelation Space series by Alastair Reynolds, interstellar commerce depends upon "lighthugger" starships which can accelerate indefinitely at 1 g, with superseded antimatter powered constant acceleration drives. The effects of relativistic travel are an important plot point in several stories, informing the psychologies and politics of the lighthuggers' "ultranaut" crews for example.
In the novel 2061: Odyssey Three by Arthur C. Clarke, the spaceship Universe, using a muon-catalyzed fusion rocket, is capable of constant acceleration at 0.2 g under full thrust. Clarke's novel "Imperial Earth" features an "asymptotic drive", which utilises a microscopic black hole and hydrogen propellant, to achieve a similar acceleration travelling from Titan to Earth.
The UET and Hidden Worlds spaceships of F.M. Busby's Rissa Kerguelen saga utilize a constant acceleration drive that can accelerate at 1 g or even a little more.
Ships in the Expanse series by James S. A. Corey make use of constant acceleration drives, which also provide artificial gravity for the occupants.
In The Martian, by Andy Weir, the spaceship Hermes uses a constant thrust ion engine to transport astronauts between Earth and Mars. In Project Hail Mary, also by Weir, the protagonist's spaceship uses a constant 1.5 g acceleration spin drive to travel between the Solar System, Tau Ceti and 40 Eridani.
Explorers on the Moon, one of the Adventures of Tintin series of comic albums by Hergé, features a crewed Moon rocket with an unspecified 'atomic rocket motor'. The ship constantly accelerates from takeoff to provide occupants with consistent gravity, until a mid-way point is reached where the ship is turned around to constantly decelerate towards the Moon.
The Lost Fleet, written by John G. Hemry under the pen name Jack Campbell, is a military science fiction series in which various ships of all sizes utilize constant acceleration propulsion to travel distances within star systems. Taking into account relativistic effects on space combat, communication, and timing, the ships work in various formations to maximize firepower while minimizing damage taken. The series also features the use of Jump Drives for travel between stars using gravitational jump points, as well as the use of Hypernets, which utilize quantum entanglement and probability wave principles for long-distance travel between massively constructed gates.
Couple (mechanics) | In physics, a couple is a system of forces with a resultant (a.k.a. net or sum) moment of force but no resultant force.
A more descriptive term is force couple or pure moment. Its effect is to impart angular momentum but no linear momentum. In rigid body dynamics, force couples are free vectors, meaning their effects on a body are independent of the point of application.
The resultant moment of a couple is a special case of moment. A couple has the property that it is independent of reference point.
Simple couple
Definition
A couple is a pair of forces that are equal in magnitude and oppositely directed, whose lines of action are separated by a perpendicular distance (the moment arm).
The simplest kind of couple consists of two equal and opposite forces whose lines of action do not coincide. This is called a "simple couple". The forces have a turning effect or moment called a torque about an axis which is normal (perpendicular) to the plane of the forces. The SI unit for the torque of the couple is newton metre.
If the two forces are F and −F, then the magnitude of the torque is given by the following formula:

τ = F d

where
τ is the moment of the couple,
F is the magnitude of one of the forces,
d is the perpendicular distance (moment arm) between the two parallel forces.
The magnitude of the torque is equal to F·d, with the direction of the torque given by the unit vector ê, which is perpendicular to the plane containing the two forces, a counter-clockwise couple being taken as positive. When d is taken as a vector between the points of action of the forces, then the torque is the cross product of d and F, i.e.

τ = d × F.
Independence of reference point
The moment of a force is only defined with respect to a certain point P (it is said to be the "moment about P") and, in general, when P is changed, the moment changes. However, the moment (torque) of a couple is independent of the reference point P: any point gives the same moment. In other words, a couple, unlike any more general moments, is a "free vector". (This fact is called Varignon's Second Moment Theorem.)
The proof of this claim is as follows: Suppose there are a set of force vectors F1, F2, etc. that form a couple, with position vectors (about some origin P) r1, r2, etc., respectively. The moment about P is

M = r1 × F1 + r2 × F2 + ...

Now we pick a new reference point P′ that differs from P by the vector r. The new moment is

M′ = (r1 + r) × F1 + (r2 + r) × F2 + ...

Now the distributive property of the cross product implies

M′ = (r1 × F1 + r2 × F2 + ...) + r × (F1 + F2 + ...).

However, the definition of a force couple means that

F1 + F2 + ... = 0.

Therefore,

M′ = r1 × F1 + r2 × F2 + ... = M.
This proves that the moment is independent of reference point, which is proof that a couple is a free vector.
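The result just proved can be checked numerically; the forces and positions below are arbitrary example values:

```python
import numpy as np

F1 = np.array([0.0, 10.0, 0.0])    # N
F2 = -F1                           # equal and opposite force
r1 = np.array([1.0, 0.0, 0.0])     # m, point of application of F1
r2 = np.array([3.0, 2.0, 0.0])     # m, point of application of F2

def total_moment(about):
    """Sum of the moments of both forces about the given reference point."""
    return np.cross(r1 - about, F1) + np.cross(r2 - about, F2)

O = np.array([0.0, 0.0, 0.0])
P = np.array([-5.0, 7.0, 2.0])     # any other reference point
print(total_moment(O))             # [0, 0, -20] N·m
print(total_moment(P))             # identical: the couple is a free vector
print(np.cross(r1 - r2, F1))       # same result from the d × F formula above
```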
Forces and couples
A force F applied to a rigid body at a distance d from the center of mass has the same effect as the same force applied directly to the center of mass and a couple Cℓ = Fd. The couple produces an angular acceleration of the rigid body at right angles to the plane of the couple. The force at the center of mass accelerates the body in the direction of the force without change in orientation. The general theorems are:
A single force acting at any point O′ of a rigid body can be replaced by an equal and parallel force F acting at any given point O and a couple with forces parallel to F whose moment is M = Fd, d being the separation of O and O′. Conversely, a couple and a force in the plane of the couple can be replaced by a single force, appropriately located.
Any couple can be replaced by another in the same plane of the same direction and moment, having any desired force or any desired arm.
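The first of these theorems can likewise be illustrated numerically; the force, points of application, and reference point below are arbitrary example values:

```python
import numpy as np

F = np.array([0.0, 100.0, 0.0])        # N, force applied at O'
O_prime = np.array([0.5, 0.0, 0.0])    # m
O = np.array([0.0, 0.0, 0.0])          # m, e.g. the centre of mass

M_couple = np.cross(O_prime - O, F)    # couple that must accompany the moved force

# Both systems give the same moment about an arbitrary point Q (and the same net force).
Q = np.array([2.0, -1.0, 3.0])
moment_original = np.cross(O_prime - Q, F)
moment_replaced = np.cross(O - Q, F) + M_couple
print(moment_original, moment_replaced)   # identical
```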
Applications
Couples are very important in engineering and the physical sciences. A few examples are:
The forces exerted by one's hand on a screw-driver
The forces exerted by the tip of a screwdriver on the head of a screw
Drag forces acting on a spinning propeller
Forces on an electric dipole in a uniform electric field
The reaction control system on a spacecraft
Force exerted by hands on steering wheel
'Rocking couples' are a regular imbalance giving rise to vibration
See also
Traction (engineering)
Torque
Moment (physics)
Force
Energy cascade | In continuum mechanics, an energy cascade involves the transfer of energy from large scales of motion to the small scales (called a direct energy cascade) or a transfer of energy from the small scales to the large scales (called an inverse energy cascade). This transfer of energy between different scales requires that the dynamics of the system is nonlinear. Strictly speaking, a cascade requires the energy transfer to be local in scale (only between fluctuations of nearly the same size), evoking a cascading waterfall from pool to pool without long-range transfers across the scale domain.
This concept plays an important role in the study of well-developed turbulence. It was memorably expressed in a poem by Lewis F. Richardson in the 1920s. Energy cascades are also important for wind waves in the theory of wave turbulence.
Consider for instance turbulence generated by the air flow around a tall building: the energy-containing eddies generated by flow separation have sizes of the order of tens of meters. Somewhere downstream, dissipation by viscosity takes place, for the most part, in eddies at the Kolmogorov microscales: of the order of a millimetre for the present case. At these intermediate scales, there is neither a direct forcing of the flow nor a significant amount of viscous dissipation, but there is a net nonlinear transfer of energy from the large scales to the small scales.
This intermediate range of scales, if present, is called the inertial subrange. The dynamics at these scales is described by use of self-similarity, or by assumptions – for turbulence closure – on the statistical properties of the flow in the inertial subrange. A pioneering work was the deduction by Andrey Kolmogorov in the 1940s of the expected wavenumber spectrum in the turbulence inertial subrange.
Spectra in the inertial subrange of turbulent flow
The largest motions, or eddies, of turbulence contain most of the kinetic energy, whereas the smallest eddies are responsible for the viscous dissipation of turbulence kinetic energy. Kolmogorov hypothesized that when these scales are well separated, the intermediate range of length scales would be statistically isotropic, and that its characteristics in equilibrium would depend only on the rate at which kinetic energy is dissipated at the small scales. Dissipation is the frictional conversion of mechanical energy to thermal energy. The dissipation rate, ε, may be written in terms of the fluctuating rates of strain in the turbulent flow and the fluid's kinematic viscosity, ν. It has dimensions of energy per unit mass per second. In equilibrium, the production of turbulence kinetic energy at the large scales of motion is equal to the dissipation of this energy at the small scales.
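For a rough, illustrative estimate of the dissipation scale mentioned in the building example above, the standard Kolmogorov length-scale estimate η = (ν³/ε)^(1/4) (not derived here) can be evaluated with the kinematic viscosity of air and an assumed dissipation rate:

```python
nu = 1.5e-5        # kinematic viscosity of air, m^2/s
epsilon = 0.1      # assumed dissipation rate, m^2/s^3 (illustrative value only)

eta = (nu**3 / epsilon) ** 0.25                         # Kolmogorov length scale
print(f"Kolmogorov microscale ~ {eta * 1000:.2f} mm")   # a fraction of a millimetre
```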
Energy spectrum of turbulence
The energy spectrum of turbulence, E(k), is related to the mean turbulence kinetic energy per unit mass as

(1/2) ⟨u_i u_i⟩ = ∫₀^∞ E(k) dk,

where u_i are the components of the fluctuating velocity, the angle brackets denote an ensemble average, summation over i is implied, and k is the wavenumber. The energy spectrum, E(k), thus represents the contribution to turbulence kinetic energy by wavenumbers from k to k + dk. The largest eddies have low wavenumber, and the small eddies have high wavenumbers.
Since diffusion goes as the Laplacian of velocity, the dissipation rate may be written in terms of the energy spectrum as

ε = 2ν ∫₀^∞ k² E(k) dk,
with ν the kinematic viscosity of the fluid. From this equation, it may again be observed that dissipation is mainly associated with high wavenumbers (small eddies) even though kinetic energy is associated mainly with lower wavenumbers (large eddies).
Energy spectrum in the inertial subrange
The transfer of energy from the low wavenumbers to the high wavenumbers is the energy cascade. This transfer brings turbulence kinetic energy from the large scales to the small scales, at which viscous friction dissipates it. In the intermediate range of scales, the so-called inertial subrange, Kolmogorov's hypotheses lead to the following universal form for the energy spectrum:

E(k) = C ε^(2/3) k^(−5/3),

where C is a universal constant.
An extensive body of experimental evidence supports this result, over a vast range of conditions. Experimentally, the value C ≈ 1.5 is observed.
The result was first stated independently by Alexander Obukhov in 1941. Obukhov's result is equivalent to a Fourier transform of Kolmogorov's 1941 result for the turbulent structure function.
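A minimal sketch evaluating the inertial-subrange spectrum above, using the experimentally observed constant and an assumed, illustrative dissipation rate:

```python
import numpy as np

C = 1.5            # Kolmogorov constant (experimental value quoted above)
epsilon = 0.1      # assumed dissipation rate, m^2/s^3 (illustrative value only)

k = np.logspace(0, 4, 50)                    # wavenumbers, 1/m
E = C * epsilon ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

# On a log-log plot E(k) is a straight line of slope -5/3 throughout the inertial subrange.
slope = np.diff(np.log(E)) / np.diff(np.log(k))
print(slope[0])                              # -5/3 ≈ -1.667
```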
Spectrum of pressure fluctuations
The pressure fluctuations in a turbulent flow may be similarly characterized. The mean-square pressure fluctuation in a turbulent flow may be represented by a pressure spectrum, π(k):

⟨p′²⟩ = ∫₀^∞ π(k) dk.
For the case of turbulence with no mean velocity gradient (isotropic turbulence), the spectrum in the inertial subrange is given by

π(k) = α ρ² ε^(4/3) k^(−7/3),

where ρ is the fluid density, and α = 1.32 C² = 2.97. A mean-flow velocity gradient (shear flow) creates an additional, additive contribution to the inertial subrange pressure spectrum which varies as k^(−11/3); but the k^(−7/3) behavior is dominant at higher wavenumbers.
Spectrum of turbulence-driven disturbances at a free liquid surface
Pressure fluctuations below the free surface of a liquid can drive fluctuating displacements of the liquid surface, which at small wavelengths are modulated by surface tension. This free-surface–turbulence interaction may also be characterized by a wavenumber spectrum. If δ is the instantaneous displacement of the surface from its average position, the mean squared displacement may be represented with a displacement spectrum G(k) as

⟨δ²⟩ = ∫₀^∞ G(k) dk.
A three-dimensional form of the pressure spectrum may be combined with the Young–Laplace equation to show that

G(k) ∝ k^(−19/3).

Experimental observation of this k^(−19/3) law has been obtained by optical measurements of the surface of turbulent free liquid jets.
Inverse problem | An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field. It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects.
Inverse problems are some of the most important mathematical problems in science and mathematics because they tell us about parameters that we cannot directly observe. They have wide application in system identification, optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, nondestructive testing, slope stability analysis and many other fields.
History
Starting with the effects to discover the causes has concerned physicists for centuries. A historical example is the calculations of Adams and Le Verrier which led to the discovery of Neptune from the perturbed trajectory of Uranus. However, a formal study of inverse problems was not initiated until the 20th century.
One of the earliest examples of a solution to an inverse problem was discovered by Hermann Weyl and published in 1911, describing the asymptotic behavior of eigenvalues of the Laplace–Beltrami operator. Today known as Weyl's law, it is perhaps most easily understood as an answer to the question of whether it is possible to hear the shape of a drum. Weyl conjectured that the eigenfrequencies of a drum would be related to the area and perimeter of the drum by a particular equation, a result improved upon by later mathematicians.
The field of inverse problems was later touched on by Soviet-Armenian physicist, Viktor Ambartsumian.
While still a student, Ambartsumian thoroughly studied the theory of atomic structure, the formation of energy levels, and the Schrödinger equation and its properties, and when he mastered the theory of eigenvalues of differential equations, he pointed out the apparent analogy between discrete energy levels and the eigenvalues of differential equations. He then asked: given a family of eigenvalues, is it possible to find the form of the equations whose eigenvalues they are? Essentially Ambartsumian was examining the inverse Sturm–Liouville problem, which dealt with determining the equations of a vibrating string. This paper was published in 1929 in the German physics journal Zeitschrift für Physik and remained in obscurity for a rather long time. Describing this situation after many decades, Ambartsumian said, "If an astronomer publishes an article with a mathematical content in a physics journal, then the most likely thing that will happen to it is oblivion."
Nonetheless, toward the end of the Second World War, this article, written by the 20-year-old Ambartsumian, was found by Swedish mathematicians and formed the starting point for a whole area of research on inverse problems, becoming the foundation of an entire discipline.
Important efforts were then devoted to a "direct solution" of the inverse scattering problem, especially by Gelfand and Levitan in the Soviet Union. They proposed an analytic constructive method for determining the solution. When computers became available, some authors investigated the possibility of applying their approach to similar problems such as the inverse problem in the 1D wave equation. But it rapidly turned out that the inversion is an unstable process: noise and errors can be tremendously amplified, making a direct solution hardly practicable.
Then, around the seventies, the least-squares and probabilistic approaches came in and turned out to be very helpful for the determination of parameters involved in various physical systems. This approach met a lot of success. Nowadays inverse problems are also investigated in fields outside physics, such as chemistry, economics, and computer science. Eventually, as numerical models become prevalent in many parts of society, we may expect an inverse problem associated with each of these numerical models.
Conceptual understanding
Since Newton, scientists have extensively attempted to model the world. In particular, when a mathematical model is available (for instance, Newton's gravitational law or Coulomb's equation for electrostatics), we can foresee, given some parameters that describe a physical system (such as a distribution of mass or a distribution of electric charges), the behavior of the system. This approach is known as mathematical modeling and the above-mentioned physical parameters are called the model parameters or simply the model. To be precise, we introduce the notion of state of the physical system: it is the solution of the mathematical model's equation. In optimal control theory, these equations are referred to as the state equations. In many situations we are not truly interested in knowing the physical state but just its effects on some objects (for instance, the effects the gravitational field has on a specific planet). Hence we have to introduce another operator, called the observation operator, which converts the state of the physical system (here the predicted gravitational field) into what we want to observe (here the movements of the considered planet). We can now introduce the so-called forward problem, which consists of two steps:
determination of the state of the system from the physical parameters that describe it
application of the observation operator to the estimated state of the system so as to predict the behavior of what we want to observe.
This leads to the introduction of another operator F (F stands for "forward"), which maps model parameters m into F(m), the data that the model predicts as the result of this two-step procedure. Operator F is called the forward operator or forward map.
In this approach we basically attempt to predict the effects knowing the causes.
Considering the Earth as the physical system, each physical phenomenon of interest (gravity, seismic waves, electromagnetic fields, and so on) comes with its own model parameters that describe the system, a physical quantity that describes the state of the system, and observations commonly made on that state.
In the inverse problem approach we, roughly speaking, try to know the causes given the effects.
General statement of the inverse problem
The inverse problem is the "inverse" of the forward problem: instead of determining the data produced by particular model parameters, we want to determine the model parameters that produce the data d_obs that is the observation we have recorded (the subscript obs stands for observed).
Our goal, in other words, is to determine the model parameters m such that (at least approximately)

F(m) = d_obs,

where F is the forward map. We denote by M the (possibly infinite) number of model parameters, and by N the number of recorded data.
We introduce some useful concepts and the associated notations that will be used below:
The space of models, denoted by P: the vector space spanned by the model parameters; it has M dimensions;
The space of data, denoted by D: if we organize the measured samples in a vector, it has N components (if our measurements consist of functions, D is a vector space with infinite dimensions);
F(m): the response of model m; it consists of the data predicted by model m;
F(P): the image of P by the forward map; it is a subset of D (but not a subspace unless F is linear) made of the responses of all models;
d_obs − F(m): the data misfits (or residuals) associated with model m; they can be arranged as a vector, an element of D.
The concept of residuals is very important: when searching for a model that matches the data, analysis of the residuals reveals whether the considered model can be regarded as realistic or not. Systematic, unrealistic discrepancies between the data and the model responses also reveal that the forward map is inadequate and may give insights about an improved forward map.
When the operator F is linear, the inverse problem is linear. Otherwise, as is most often the case, the inverse problem is nonlinear.
Also, models cannot always be described by a finite number of parameters. It is the case when we look for distributed parameters (a distribution of wave-speeds for instance): in such cases the goal of the inverse problem is to retrieve one or several functions. Such inverse problems are inverse problems with infinite dimension.
Linear inverse problems
In the case of a linear forward map and when we deal with a finite number of model parameters, the forward map can be written as a linear system

d = F m,

where F is the matrix that characterizes the forward map.
An elementary example: Earth's gravitational field
Only a few physical systems are actually linear with respect to the model parameters. One such system from geophysics is that of the Earth's gravitational field. The Earth's gravitational field is determined by the density distribution of the Earth in the subsurface. Because the lithology of the Earth changes quite significantly, we are able to observe minute differences in the Earth's gravitational field on the surface of the Earth. From our understanding of gravity (Newton's Law of Gravitation), we know that the mathematical expression for gravity is:

g = G m / r²,

where g is a measure of the local gravitational acceleration, G is the universal gravitational constant, m is the local mass (which is related to density) of the rock in the subsurface and r is the distance from the mass to the observation point.
By discretizing the above expression, we are able to relate the discrete data observations on the surface of the Earth to the discrete model parameters (density) in the subsurface that we wish to know more about. For example, consider the case where we have measurements carried out at 5 locations on the surface of the Earth. In this case, our data vector d is a column vector of dimension (5×1): its i-th component d_i is associated with the i-th observation location. We also know that we only have five unknown masses m_j in the subsurface (unrealistic but used to demonstrate the concept) with known locations: we denote by r_ij the distance between the i-th observation location and the j-th mass. Thus, we can construct the linear system relating the five unknown masses to the five data points as follows:

d = F m, with F_ij = G / r_ij².

To solve for the model parameters that fit our data, we might be able to invert the matrix F to directly convert the measurements into our model parameters. For example:

m = F⁻¹ d.
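A numerical sketch of this five-mass example follows; the observation points, burial depth, and masses are made-up illustrative values:

```python
import numpy as np

G = 6.674e-11                                                  # m^3 kg^-1 s^-2
obs = np.array([[x, 0.0, 0.0] for x in (0, 25, 50, 75, 100)])  # surface points, m
mass_pos = np.array([[x, 0.0, -50.0] for x in (10, 30, 50, 70, 90)])  # buried points, m

# r_ij = distance between observation i and mass j; F_ij = G / r_ij^2
r = np.linalg.norm(obs[:, None, :] - mass_pos[None, :, :], axis=2)
F = G / r**2

m_true = np.array([1e6, 2e6, 1.5e6, 3e6, 2.5e6])   # kg (the unknowns in the real problem)
d = F @ m_true                                     # forward problem: predicted data

m_recovered = np.linalg.solve(F, d)                # inverse problem: m = F^-1 d
print(np.allclose(m_recovered, m_true))            # True for these noise-free data
```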
A system with five equations and five unknowns is a very specific situation: our example was designed to end up with this specificity. In general, the numbers of data and unknowns are different, so that matrix F is not square.
However, even a square matrix can have no inverse: matrix F can be rank deficient (i.e. it has zero eigenvalues) and the solution of the system is not unique. Then the solution of the inverse problem will be undetermined. This is a first difficulty. Over-determined systems (more equations than unknowns) have other issues.
Also, noise may corrupt our observations, possibly making d_obs lie outside the set F(P) of possible responses to model parameters, so that a solution of the system may not exist. This is another difficulty.
Tools to overcome the first difficulty
The first difficulty reflects a crucial problem: Our observations do not contain enough information and additional data are required. Additional data can come from physical prior information on the parameter values, on their spatial distribution or, more generally, on their mutual dependence. It can also come from other experiments: For instance, we may think of integrating data recorded by gravimeters and seismographs for a better estimation of densities.
The integration of this additional information is basically a problem of statistics. This discipline is the one that can answer the question: How to mix quantities of different nature? We will be more precise in the section "Bayesian approach" below.
Concerning distributed parameters, prior information about their spatial distribution often consists of information about some derivatives of these distributed parameters. Also, it is common practice, although somewhat artificial, to look for the "simplest" model that reasonably matches the data. This is usually achieved by penalizing the norm of the gradient (or the total variation) of the parameters (this approach is also referred to as the maximization of the entropy). One can also make the model simple through a parametrization that introduces degrees of freedom only when necessary.
Additional information may also be integrated through inequality constraints on the model parameters or some functions of them. Such constraints are important to avoid unrealistic values for the parameters (negative values for instance). In this case, the space spanned by model parameters will no longer be a vector space but a subset of admissible models denoted by P_adm in the sequel.
Tools to overcome the second difficulty
As mentioned above, noise may be such that our measurements are not the image of any model, so that we cannot look for a model that exactly produces the data but rather look for the best (or optimal) model: that is, the one that best matches the data. This leads us to minimize an objective function, namely a functional that quantifies how big the residuals are or how far the predicted data are from the observed data. Of course, when we have perfect data (i.e. no noise) then the recovered model should fit the observed data perfectly. A standard objective function, J, is of the form

J(m) = ‖F(m) − d_obs‖²,

where ‖·‖ is the Euclidean norm (it will be the L² norm when the measurements are functions instead of samples) of the residuals. This approach amounts to making use of ordinary least squares, an approach widely used in statistics. However, the Euclidean norm is known to be very sensitive to outliers: to avoid this difficulty we may think of using other distances, for instance the L¹ norm, in replacement of the L² norm.
Bayesian approach
Very similar to the least-squares approach is the probabilistic approach: If we know the statistics of the noise that contaminates the data, we can think of seeking the most likely model m, which is the model that matches the maximum likelihood criterion. If the noise is Gaussian, the maximum likelihood criterion appears as a least-squares criterion, the Euclidean scalar product in data space being replaced by a scalar product involving the co-variance of the noise. Also, should prior information on model parameters be available, we could think of using Bayesian inference to formulate the solution of the inverse problem. This approach is described in detail in Tarantola's book.
Numerical solution of our elementary example
Here we make use of the Euclidean norm to quantify the data misfits. As we deal with a linear inverse problem, the objective function is quadratic. For its minimization, it is classical to compute its gradient using the same rationale as we would to minimize a function of only one variable. At the optimal model m, this gradient vanishes, which can be written as:

2 F^T (F m − d_obs) = 0,

where F^T denotes the matrix transpose of F. This equation simplifies to:

F^T F m = F^T d_obs.

This expression is known as the normal equation and gives us a possible solution to the inverse problem.
In our example, matrix F turns out to be generally full rank, so that the equation above makes sense and determines the model parameters uniquely: we do not need to integrate additional information to end up with a unique solution.
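A minimal sketch of the normal-equation solution for a generic overdetermined linear inverse problem; the forward matrix and data below are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(20, 5))                        # forward matrix: 20 data, 5 parameters
m_true = rng.normal(size=5)
d_obs = F @ m_true + 0.01 * rng.normal(size=20)     # noisy observations

# Normal equations: F^T F m = F^T d_obs
m_normal = np.linalg.solve(F.T @ F, F.T @ d_obs)

# In practice a least-squares solver is usually preferred for numerical stability.
m_lstsq, *_ = np.linalg.lstsq(F, d_obs, rcond=None)
print(np.allclose(m_normal, m_lstsq))               # True: both give the same estimate
```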
Mathematical and computational aspects
Inverse problems are typically ill-posed, as opposed to the well-posed problems usually met in mathematical modeling. Of the three conditions for a well-posed problem suggested by Jacques Hadamard (existence, uniqueness, and stability of the solution or solutions) the condition of stability is most often violated. In the sense of functional analysis, the inverse problem is represented by a mapping between metric spaces. While inverse problems are often formulated in infinite dimensional spaces, limitations to a finite number of measurements, and the practical consideration of recovering only a finite number of unknown parameters, may lead to the problems being recast in discrete form. In this case the inverse problem will typically be ill-conditioned. In these cases, regularization may be used to introduce mild assumptions on the solution and prevent overfitting. Many instances of regularized inverse problems can be interpreted as special cases of Bayesian inference.
Numerical solution of the optimization problem
Some inverse problems have a very simple solution, for instance, when one has a set of unisolvent functions, meaning a set of functions such that evaluating them at distinct points yields a set of linearly independent vectors. This means that given a linear combination of these functions, the coefficients can be computed by arranging the vectors as the columns of a matrix and then inverting this matrix. The simplest example of unisolvent functions is polynomials constructed, using the unisolvence theorem, so as to be unisolvent. Concretely, this is done by inverting the Vandermonde matrix. But this is a very specific situation.
In general, the solution of an inverse problem requires sophisticated optimization algorithms. When the model is described by a large number of parameters (the number of unknowns involved in some diffraction tomography applications can reach one billion), solving the linear system associated with the normal equations can be cumbersome. The numerical method to be used for solving the optimization problem depends in particular on the cost required for computing the solution of the forward problem. Once the appropriate algorithm for solving the forward problem has been chosen (a straightforward matrix-vector multiplication may not be adequate when matrix F is huge), the appropriate algorithm for carrying out the minimization can be found in textbooks dealing with numerical methods for the solution of linear systems and for the minimization of quadratic functions (see for instance Ciarlet or Nocedal).
Also, the user may wish to add physical constraints to the models: In this case, they have to be familiar with constrained optimization methods, a subject in itself. In all cases, computing the gradient of the objective function often is a key element for the solution of the optimization problem. As mentioned above, information about the spatial distribution of a distributed parameter can be introduced through the parametrization. One can also think of adapting this parametrization during the optimization.
Should the objective function be based on a norm other than the Euclidean norm, we have to leave the area of quadratic optimization. As a result, the optimization problem becomes more difficult. In particular, when the L¹ norm is used for quantifying the data misfit, the objective function is no longer differentiable: its gradient no longer makes sense. Dedicated methods (see for instance Lemaréchal) from non-differentiable optimization come in.
Once the optimal model is computed, we have to address the question: "Can we trust this model?" The question can be formulated as follows: How large is the set of models that match the data "nearly as well" as this model? In the case of quadratic objective functions, this set is contained in a hyper-ellipsoid, a subset of R^M (M is the number of unknowns), whose size depends on what we mean by "nearly as well", that is, on the noise level. The direction of the largest axis of this ellipsoid (eigenvector associated with the smallest eigenvalue of matrix F^T F) is the direction of poorly determined components: if we follow this direction, we can bring a strong perturbation to the model without changing significantly the value of the objective function and thus end up with a significantly different quasi-optimal model. We clearly see that the answer to the question "can we trust this model" is governed by the noise level and by the eigenvalues of the Hessian of the objective function or, equivalently, in the case where no regularization has been integrated, by the singular values of matrix F. Of course, the use of regularization (or other kinds of prior information) reduces the size of the set of almost optimal solutions and, in turn, increases the confidence we can put in the computed solution.
Stability, regularization and model discretization in infinite dimension
We focus here on the recovery of a distributed parameter.
When looking for distributed parameters we have to discretize these unknown functions. Doing so, we reduce the dimension of the problem to something finite. But now, the question is: is there any link between the solution we compute and the one of the initial problem? Then another question: what do we mean with the solution of the initial problem? Since a finite number of data does not allow the determination of an infinity of unknowns, the original data misfit functional has to be regularized to ensure the uniqueness of the solution. Many times, reducing the unknowns to a finite-dimensional space will provide an adequate regularization: the computed solution will look like a discrete version of the solution we were looking for. For example, a naive discretization will often work for solving the deconvolution problem: it will work as long as we do not allow missing frequencies to show up in the numerical solution. But many times, regularization has to be integrated explicitly in the objective function.
In order to understand what may happen, we have to keep in mind that solving such a linear inverse problem amounts to solving a Fredholm integral equation of the first kind:

d(x) = ∫_Ω K(x, y) m(y) dy,

where K is the kernel, x and y are vectors of R², and Ω is a domain in R². This holds for a 2D application. For a 3D application, we consider x, y ∈ R³. Note that here the model parameters m consist of a function and that the response of a model also consists of a function denoted by d(x). This equation is an extension to infinite dimension of the matrix equation d = F m given in the case of discrete problems.
For sufficiently smooth K, the operator defined above is compact on reasonable Banach spaces such as L² spaces. F. Riesz theory states that the set of singular values of such an operator contains zero (hence the existence of a null-space), is finite or at most countable, and, in the latter case, constitutes a sequence that goes to zero. In the case of a symmetric kernel, we have an infinity of eigenvalues and the associated eigenvectors constitute a Hilbert basis of L². Thus any solution of this equation is determined up to an additive function in the null-space and, in the case of an infinity of singular values, the solution (which involves the reciprocal of arbitrarily small eigenvalues) is unstable: two ingredients that make the solution of this integral equation a typical ill-posed problem! However, we can define a solution through the pseudo-inverse of the forward map (again up to an arbitrary additive function). When the forward map is compact, the classical Tikhonov regularization will work if we use it for integrating prior information stating that the norm of the solution should be as small as possible: this will make the inverse problem well-posed. Yet, as in the finite-dimension case, we have to question the confidence we can put in the computed solution. Again, basically, the information lies in the eigenvalues of the Hessian operator. Should subspaces containing eigenvectors associated with small eigenvalues be explored for computing the solution, then the solution can hardly be trusted: some of its components will be poorly determined. The smallest eigenvalue is equal to the weight introduced in Tikhonov regularization.
Irregular kernels may yield a forward map which is not compact and even unbounded if we naively equip the space of models with the L² norm. In such cases, the Hessian is not a bounded operator and the notion of eigenvalue no longer makes sense. A mathematical analysis is required to make it a bounded operator and design a well-posed problem: an illustration can be found in the literature. Again, we have to question the confidence we can put in the computed solution, and we have to generalize the notion of eigenvalue to get the answer.
Analysis of the spectrum of the Hessian operator is thus a key element to determine how reliable the computed solution is. However, such an analysis is usually a very heavy task. This has led several authors to investigate alternative approaches in the case where we are not interested in all the components of the unknown function but only in sub-unknowns that are the images of the unknown function by a linear operator. These approaches are referred to as the Backus and Gilbert method, Lions's sentinels approach, and the SOLA method: these approaches turned out to be strongly related to one another, as explained in Chavent. Finally, the concept of limited resolution, often invoked by physicists, is nothing but a specific view of the fact that some poorly determined components may corrupt the solution. But, generally speaking, these poorly determined components of the model are not necessarily associated with high frequencies.
Some classical linear inverse problems for the recovery of distributed parameters
The problems mentioned below correspond to different versions of the Fredholm integral: each of these is associated with a specific kernel K.
Deconvolution
The goal of deconvolution is to reconstruct the original image or signal m(y), which appears noisy and blurred in the data d(x).
From a mathematical point of view, the kernel K(x, y) here depends only on the difference between x and y.
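A hedged sketch of deconvolution as a discrete linear inverse problem follows. The Gaussian blurring kernel, noise level, and regularization weight are illustrative choices; Tikhonov regularization (discussed above) is used to stabilize the unstable naive inversion:

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
signal = np.exp(-((x - 0.3) / 0.05) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.08) ** 2)

# Convolution matrix built from a Gaussian blurring kernel K(x - y), rows normalized.
K = np.exp(-((x[:, None] - x[None, :]) / 0.03) ** 2)
K /= K.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
d = K @ signal + 1e-3 * rng.normal(size=n)          # blurred, slightly noisy data

# Unregularized least-squares inversion: the noise is hugely amplified.
naive, *_ = np.linalg.lstsq(K, d, rcond=None)

# Tikhonov regularization with a hand-tuned weight stabilizes the reconstruction.
alpha = 1e-3
tikhonov = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ d)

print(np.linalg.norm(naive - signal))       # enormous reconstruction error
print(np.linalg.norm(tikhonov - signal))    # far smaller error
```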
Tomographic methods
In these methods we attempt to recover a distributed parameter, the observation consisting of the measurement of the integrals of this parameter carried out along a family of lines. We denote by L_x the line in this family associated with measurement point x. The observation at x can thus be written as:

d(x) = ∫_{L_x} w(x, y) m(y) dΣ,

where Σ is the arc-length along L_x and w(x, y) a known weighting function. Comparing this equation with the Fredholm integral above, we notice that the kernel is a kind of delta function that peaks on line L_x. With such a kernel, the forward map is not compact.
Computed tomography
In X-ray computed tomography the lines on which the parameter is integrated are straight lines: the tomographic reconstruction of the parameter distribution is based on the inversion of the Radon transform. Although from a theoretical point of view many linear inverse problems are well understood, problems involving the Radon transform and its generalisations still present many theoretical challenges with questions of sufficiency of data still unresolved. Such problems include incomplete data for the x-ray transform in three dimensions and problems involving the generalisation of the x-ray transform to tensor fields. Solutions explored include Algebraic Reconstruction Technique, filtered backprojection, and as computing power has increased, iterative reconstruction methods such as iterative Sparse Asymptotic Minimum Variance.
Diffraction tomography
Diffraction tomography is a classical linear inverse problem in exploration seismology: the amplitude recorded at one time for a given source-receiver pair is the sum of contributions arising from points such that the sum of the distances, measured in traveltimes, from the source and the receiver, respectively, is equal to the corresponding recording time. In 3D the parameter is not integrated along lines but over surfaces. Should the propagation velocity be constant, such points are distributed on an ellipsoid. The inverse problem consists in retrieving the distribution of diffracting points from the seismograms recorded along the survey, the velocity distribution being known. A direct solution was originally proposed by Beylkin and by Lambaré et al.: these works were the starting points of approaches known as amplitude-preserved migration (see Beylkin and Bleistein). Should geometrical optics techniques (i.e. rays) be used for solving the wave equation, these methods turn out to be closely related to the so-called least-squares migration methods derived from the least-squares approach (see Lailly, Tarantola).
Doppler tomography (astrophysics)
If we consider a rotating stellar object, the spectral lines we can observe on a spectral profile will be shifted due to Doppler effect. Doppler tomography aims at converting the information contained in spectral monitoring of the object into a 2D image of the emission (as a function of the radial velocity and of the phase in the periodic rotation movement) of the stellar atmosphere. As explained by Tom Marsh this linear inverse problem is tomography like: we have to recover a distributed parameter which has been integrated along lines to produce its effects in the recordings.
Inverse heat conduction
Early publications on inverse heat conduction arose from determining surface heat flux during atmospheric re-entry from buried temperature sensors.
Other applications where surface heat flux is needed but surface sensors are not practical include: inside reciprocating engines, inside rocket engines; and, testing of nuclear reactor components. A variety of numerical techniques have been developed to address the ill-posedness and sensitivity to measurement error caused by damping and lagging in the temperature signal.
Non-linear inverse problems
Non-linear inverse problems constitute an inherently more difficult family of inverse problems. Here the forward map is a non-linear operator. Modeling of physical phenomena often relies on the solution of a partial differential equation (the gravity law above being a notable exception): although these partial differential equations are often linear, the physical parameters that appear in these equations depend in a non-linear way on the state of the system and therefore on the observations we make on it.
Some classical non-linear inverse problems
Inverse scattering problems
Whereas linear inverse problems were completely solved from the theoretical point of view at the end of the nineteenth century, only one class of nonlinear inverse problems was so before 1970, that of inverse spectral and (one space dimension) inverse scattering problems, after the seminal work of the Russian mathematical school (Krein, Gelfand, Levitan, Marchenko). A large review of the results has been given by Chadan and Sabatier in their book "Inverse Problems of Quantum Scattering Theory" (two editions in English, one in Russian).
In this kind of problem, data are properties of the spectrum of a linear operator which describe the scattering. The spectrum is made of eigenvalues and eigenfunctions, forming together the "discrete spectrum", and generalizations, called the continuous spectrum. The very remarkable physical point is that scattering experiments give information only on the continuous spectrum, and that knowing its full spectrum is both necessary and sufficient in recovering the scattering operator. Hence we have invisible parameters, much more interesting than the null space which has a similar property in linear inverse problems. In addition, there are physical motions in which the spectrum of such an operator is conserved as a consequence of such motion. This phenomenon is governed by special nonlinear partial differential evolution equations, for example the Korteweg–de Vries equation. If the spectrum of the operator is reduced to one single eigenvalue, its corresponding motion is that of a single bump that propagates at constant velocity and without deformation, a solitary wave called a "soliton".
A perfect signal and its generalizations for the Korteweg–de Vries equation or other integrable nonlinear partial differential equations are of great interest, with many possible applications. This area has been studied as a branch of mathematical physics since the 1970s. Nonlinear inverse problems are also currently studied in many fields of applied science (acoustics, mechanics, quantum mechanics, electromagnetic scattering - in particular radar soundings, seismic soundings, and nearly all imaging modalities).
A final example related to the Riemann hypothesis was given by Wu and Sprung; the idea is that in the semiclassical old quantum theory the inverse of the potential inside the Hamiltonian is proportional to the half-derivative of the eigenvalues (energies) counting function n(x).
Permeability matching in oil and gas reservoirs
The goal is to recover the diffusion coefficient in the parabolic partial differential equation that models single phase fluid flows in porous media. This problem has been the object of many studies since a pioneering work carried out in the early seventies. Concerning two-phase flows an important problem is to estimate the relative permeabilities and the capillary pressures.
Inverse problems in the wave equations
The goal is to recover the wave-speeds (P and S waves) and the density distributions from seismograms. Such inverse problems are of prime interest in seismology and exploration geophysics.
We can basically consider two mathematical models:
The acoustic wave equation (in which S waves are ignored when the space dimensions are 2 or 3)
The elastodynamics equation in which the P and S wave velocities can be derived from the Lamé parameters and the density.
These basic hyperbolic equations can be upgraded by incorporating attenuation, anisotropy, ...
The solution of the inverse problem in the 1D wave equation has been the object of many studies. It is one of the very few non-linear inverse problems for which we can prove the uniqueness of the solution. The analysis of the stability of the solution was another challenge. Practical applications, using the least-squares approach, were developed.
Extension to 2D or 3D problems and to the elastodynamics equations has been attempted since the 1980s but turned out to be very difficult! This problem, often referred to as Full Waveform Inversion (FWI), is not yet completely solved: among the main difficulties are the existence of non-Gaussian noise in the seismograms, cycle-skipping issues (also known as phase ambiguity), and the chaotic behavior of the data misfit function. Some authors have investigated the possibility of reformulating the inverse problem so as to make the objective function less chaotic than the data misfit function.
Travel-time tomography
Realizing how difficult is the inverse problem in the wave equation, seismologists investigated a simplified approach making use of geometrical optics. In particular they aimed at inverting for the propagation velocity distribution, knowing the arrival times of wave-fronts observed on seismograms. These wave-fronts can be associated with direct arrivals or with reflections associated with reflectors whose geometry is to be determined, jointly with the velocity distribution.
The arrival time distribution τ(x) (x being a point in physical space) of a wave-front issued from a point source satisfies the Eikonal equation:

‖∇τ(x)‖ = s(x),

where s(x) denotes the slowness (reciprocal of the velocity) distribution. The presence of the gradient norm makes this equation nonlinear. It is classically solved by shooting rays (trajectories about which the arrival time is stationary) from the point source.
This problem is tomography like: the measured arrival times are the integral along the ray-path of the slowness. But this tomography like problem is nonlinear, mainly because the unknown ray-path geometry depends upon the velocity (or slowness) distribution. In spite of its nonlinear character, travel-time tomography turned out to be very effective for determining the propagation velocity in the Earth or in the subsurface, the latter aspect being a key element for seismic imaging, in particular using methods mentioned in Section "Diffraction tomography".
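The linearized building block of this approach, integrating a slowness model along a fixed straight ray, can be sketched as follows; the slowness model and ray geometry are made up, and in the full nonlinear problem the ray path itself depends on the unknown slowness:

```python
import numpy as np

# Slowness model s(x, z) on a small grid (s = 1/velocity, in s/m); values are made up.
nx, nz = 50, 50
dx = 10.0                                     # cell size, m
slowness = np.full((nz, nx), 1.0 / 2000.0)    # background velocity 2000 m/s
slowness[20:30, 20:30] = 1.0 / 2500.0         # a faster anomaly

def travel_time(src, rec, n_samples=1000):
    """Integrate the slowness along the straight ray from src to rec (metres)."""
    pts = np.linspace(src, rec, n_samples)
    ix = np.clip((pts[:, 0] / dx).astype(int), 0, nx - 1)
    iz = np.clip((pts[:, 1] / dx).astype(int), 0, nz - 1)
    segment = np.linalg.norm(rec - src) / (n_samples - 1)
    return np.sum(slowness[iz, ix]) * segment

src = np.array([0.0, 0.0])
rec = np.array([490.0, 490.0])
print(travel_time(src, rec))   # about 0.33 s; many such rays together constrain the model
```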
Mathematical aspects: Hadamard's questions
The questions concern well-posedness: Does the least-squares problem have a unique solution which depends continuously on the data (stability problem)? It is the first question, but it is also a difficult one because of the non-linearity of the forward map.
In order to see where the difficulties arise from, Chavent proposed to conceptually split the minimization of the data misfit function into two consecutive steps (the search being restricted to the set of admissible models):
projection step: given the observed data, find its projection onto the image of the admissible set under the forward map (the nearest point according to the distance involved in the definition of the objective function),
given this projection, find one pre-image, that is, a model whose image under the forward map is this projection.
Difficulties can - and usually will - arise in both steps:
the forward map is not likely to be one-to-one, therefore there can be more than one pre-image,
even when the forward map is one-to-one, its inverse may not be continuous over its image,
the projection may not exist, should the image of the admissible set not be closed,
the projection can be non-unique and not continuous, as the image of the admissible set can be non-convex due to the non-linearity of the forward map.
We refer to Chavent for a mathematical analysis of these points.
Computational aspects
A non-convex data misfit function
The forward map being nonlinear, the data misfit function is likely to be non-convex, making local minimization techniques inefficient. Several approaches have been investigated to overcome this difficulty:
use of global optimization techniques such as sampling of the posterior density function and the Metropolis algorithm in the probabilistic framework for inverse problems, genetic algorithms (alone or in combination with the Metropolis algorithm; such a combination has been applied to the determination of permeabilities that match existing permeability data), neural networks, and regularization techniques including multi-scale analysis;
reformulation of the least-squares objective function so as to make it smoother (this has been proposed, for instance, for the inverse problem in the wave equations).
Computation of the gradient of the objective function
Inverse problems, especially in infinite dimension, may be of large size, thus requiring important computing time. When the forward map is nonlinear, the computational difficulties increase and minimizing the objective function can be difficult. Contrary to the linear situation, an explicit use of the Hessian matrix for solving the normal equations does not make sense here: the Hessian matrix varies with models. Much more effective is the evaluation of the gradient of the objective function for some models. Important computational effort can be saved when we can avoid the very heavy computation of the Jacobian (often called "Fréchet derivatives"): the adjoint state method, proposed by Chavent and Lions, is aimed at avoiding this very heavy computation. It is now very widely used.
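The following sketch illustrates the adjoint-state idea on a small discretized problem: a 1D steady diffusion equation whose coefficient field plays the role of the model. The discretization, the observation operator and all names are assumptions made for this illustration; it is not the formulation of Chavent and Lions, only the same mechanism of replacing one solve per parameter by a single adjoint solve.

```python
# Adjoint-state sketch: -(d/dx)(m(x) du/dx) = f with u(0)=u(1)=0, discretized
# by finite differences; the "data" are the nodal values of u.  The gradient
# of J(m) = 0.5*||u(m) - d||^2 with respect to the cell coefficients m is
# obtained with ONE extra (adjoint) solve and checked against finite differences.
import numpy as np

N = 20                       # number of cells; N-1 interior nodes
h = 1.0 / N
f = np.ones(N - 1)           # source term at interior nodes

def assemble(m):
    """Stiffness matrix A(m); each cell k contributes (m_k/h^2)*[[1,-1],[-1,1]]
    to the rows/columns of its two end nodes, restricted to interior nodes."""
    A = np.zeros((N - 1, N - 1))
    for k in range(N):       # cell k joins nodes k and k+1 (0 and N are boundary)
        for a in (k, k + 1):
            for b in (k, k + 1):
                if 0 < a < N and 0 < b < N:
                    A[a - 1, b - 1] += m[k] / h**2 * (1.0 if a == b else -1.0)
    return A

def solve_state(m):
    return np.linalg.solve(assemble(m), f)

def misfit_and_grad(m, d):
    A = assemble(m)
    u = np.linalg.solve(A, f)
    r = u - d
    lam = np.linalg.solve(A.T, r)          # single adjoint solve
    grad = np.zeros(N)
    for k in range(N):                     # grad_k = -lam^T (dA/dm_k) u
        dAu = np.zeros(N - 1)
        for a in (k, k + 1):
            for b in (k, k + 1):
                if 0 < a < N and 0 < b < N:
                    dAu[a - 1] += (1.0 if a == b else -1.0) / h**2 * u[b - 1]
        grad[k] = -lam @ dAu
    return 0.5 * r @ r, grad

m_true = 1.0 + 0.5 * np.sin(np.linspace(0, np.pi, N))
d = solve_state(m_true)                    # synthetic data
m0 = np.ones(N)                            # trial model
J, g = misfit_and_grad(m0, d)

# Finite-difference check of one component of the gradient.
eps, k = 1e-6, 7
m_pert = m0.copy(); m_pert[k] += eps
J_pert, _ = misfit_and_grad(m_pert, d)
print("adjoint grad:", g[k], " finite-diff grad:", (J_pert - J) / eps)
```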
Applications
Inverse problem theory is used extensively in weather predictions, oceanography, hydrology, and petroleum engineering. Another application is inversion of elastic waves for non-destructive characterization of engineering structures.
Inverse problems are also found in the field of heat transfer, where a surface heat flux is estimated from temperature data measured inside a rigid body, and in understanding the controls on plant-matter decay. The linear inverse problem is also fundamental to spectral estimation and direction-of-arrival (DOA) estimation in signal processing.
Inverse lithography is used in photomask design for semiconductor device fabrication.
See also
Academic journals
Four main academic journals cover inverse problems in general:
Inverse Problems
Journal of Inverse and Ill-posed Problems
Inverse Problems in Science and Engineering
Inverse Problems and Imaging
Many journals on medical imaging, geophysics, non-destructive testing, etc. are dominated by inverse problems in those areas.
References
Chadan, Khosrow & Sabatier, Pierre Célestin (1977). Inverse Problems in Quantum Scattering Theory. Springer-Verlag.
Aster, Richard; Borchers, Brian & Thurber, Clifford (2018). Parameter Estimation and Inverse Problems, Third Edition. Elsevier.
Further reading
External links
Inverse Problems International Association
Eurasian Association on Inverse Problems
Finnish Inverse Problems Society
Inverse Problems Network
Albert Tarantola's website, includes a free PDF version of his Inverse Problem Theory book, and some online articles on Inverse Problems
Inverse Problems page at the University of Alabama
Inverse Problems and Geostatistics Project, Niels Bohr Institute, University of Copenhagen
Andy Ganse's Geophysical Inverse Theory Resources Page
Finnish Centre of Excellence in Inverse Problems Research
Divergence

In vector calculus, divergence is a vector operator that operates on a vector field, producing a scalar field giving the quantity of the vector field's source at each point. More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point.
As an example, consider air as it is heated or cooled. The velocity of the air at each point defines a vector field. While air is heated in a region, it expands in all directions, and thus the velocity field points outward from that region. The divergence of the velocity field in that region would thus have a positive value. While the air is cooled and thus contracting, the divergence of the velocity has a negative value.
Physical interpretation of divergence
In physical terms, the divergence of a vector field is the extent to which the vector field flux behaves like a source at a given point. It is a local measure of its "outgoingness" – the extent to which there are more of the field vectors exiting from an infinitesimal region of space than entering it. A point at which the flux is outgoing has positive divergence, and is often called a "source" of the field. A point at which the flux is directed inward has negative divergence, and is often called a "sink" of the field. The greater the flux of field through a small surface enclosing a given point, the greater the value of divergence at that point. A point at which there is zero flux through an enclosing surface has zero divergence.
The divergence of a vector field is often illustrated using the simple example of the velocity field of a fluid, a liquid or gas. A moving gas has a velocity, a speed and direction at each point, which can be represented by a vector, so the velocity of the gas forms a vector field. If a gas is heated, it will expand. This will cause a net motion of gas particles outward in all directions. Any closed surface in the gas will enclose gas which is expanding, so there will be an outward flux of gas through the surface. So the velocity field will have positive divergence everywhere. Similarly, if the gas is cooled, it will contract. There will be more room for gas particles in any volume, so the external pressure of the fluid will cause a net flow of gas volume inward through any closed surface. Therefore, the velocity field has negative divergence everywhere. In contrast, in a gas at a constant temperature and pressure, the net flux of gas out of any closed surface is zero. The gas may be moving, but the volume rate of gas flowing into any closed surface must equal the volume rate flowing out, so the net flux is zero. Thus the gas velocity has zero divergence everywhere. A field which has zero divergence everywhere is called solenoidal.
If the gas is heated only at one point or small region, or a small tube is introduced which supplies a source of additional gas at one point, the gas there will expand, pushing fluid particles around it outward in all directions. This will cause an outward velocity field throughout the gas, centered on the heated point. Any closed surface enclosing the heated point will have a flux of gas particles passing out of it, so there is positive divergence at that point. However any closed surface not enclosing the point will have a constant density of gas inside, so just as many fluid particles are entering as leaving the volume, thus the net flux out of the volume is zero. Therefore, the divergence at any other point is zero.
Definition
The divergence of a vector field F(x) at a point x0 is defined as the limit of the ratio of the surface integral of F out of the closed surface of a volume V enclosing x0 to the volume of V, as V shrinks to zero:
\[ \operatorname{div}\mathbf{F}\big|_{\mathbf{x}_0} = \lim_{V \to 0} \frac{1}{|V|} \oint_{S(V)} \mathbf{F}\cdot\hat{\mathbf{n}}\,dS, \]
where |V| is the volume of V, S(V) is the boundary of V, and n̂ is the outward unit normal to that surface. It can be shown that the above limit always converges to the same value for any sequence of volumes that contain x0 and approach zero volume. The result, div F, is a scalar function of x.
Since this definition is coordinate-free, it shows that the divergence is the same in any coordinate system. However it is not often used practically to calculate divergence; when the vector field is given in a coordinate system the coordinate definitions below are much simpler to use.
A vector field with zero divergence everywhere is called solenoidal – in which case any closed surface has no net flux across it.
Definition in coordinates
Cartesian coordinates
In three-dimensional Cartesian coordinates, the divergence of a continuously differentiable vector field F = F_x i + F_y j + F_z k is defined as the scalar-valued function:
\[ \operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}. \]
Although expressed in terms of coordinates, the result is invariant under rotations, as the physical interpretation suggests. This is because the trace of the Jacobian matrix of an -dimensional vector field in -dimensional space is invariant under any invertible linear transformation.
The common notation for the divergence ∇ · F is a convenient mnemonic, where the dot denotes an operation reminiscent of the dot product: take the components of the ∇ operator (see del), apply them to the corresponding components of F, and sum the results. Because applying an operator is different from multiplying the components, this is considered an abuse of notation.
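As a small added illustration (not part of the original article), the Cartesian formula can be checked symbolically with SymPy's vector module; the field chosen below is arbitrary.

```python
# Symbolic check of the Cartesian divergence formula using SymPy.
from sympy import sin, exp, simplify
from sympy.vector import CoordSys3D, divergence

R = CoordSys3D('R')                       # Cartesian frame with base vectors i, j, k
F = R.x * R.y * R.i + sin(R.y) * R.z * R.j + exp(R.z) * R.k

# divergence() applies dFx/dx + dFy/dy + dFz/dz to the components.
div_F = simplify(divergence(F))
print(div_F)                              # prints something like R.y + R.z*cos(R.y) + exp(R.z)
```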
Cylindrical coordinates
For a vector expressed in local unit cylindrical coordinates as
\[ \mathbf{F} = F_r \mathbf{e}_r + F_\theta \mathbf{e}_\theta + F_z \mathbf{e}_z, \]
where e_a is the unit vector in direction a, the divergence is
\[ \operatorname{div}\mathbf{F} = \frac{1}{r}\frac{\partial (r F_r)}{\partial r} + \frac{1}{r}\frac{\partial F_\theta}{\partial \theta} + \frac{\partial F_z}{\partial z}. \]
The use of local coordinates is vital for the validity of the expression. If we consider x the position vector and the functions r(x), θ(x), and z(x), which assign the corresponding global cylindrical coordinate to a vector, in general r(F(x)) ≠ F_r(x), θ(F(x)) ≠ F_θ(x), and z(F(x)) ≠ F_z(x). In particular, if we consider the identity function F(x) = x, we find that:
θ(F(x)) = θ ≠ F_θ(x) = 0.
Spherical coordinates
In spherical coordinates, with θ the angle with the z axis and φ the rotation around the z axis, and F again written in local unit coordinates, the divergence is
\[ \operatorname{div}\mathbf{F} = \frac{1}{r^2}\frac{\partial (r^2 F_r)}{\partial r} + \frac{1}{r\sin\theta}\frac{\partial (F_\theta \sin\theta)}{\partial \theta} + \frac{1}{r\sin\theta}\frac{\partial F_\varphi}{\partial \varphi}. \]
Tensor field
Let be continuously differentiable second-order tensor field defined as follows:
the divergence in cartesian coordinate system is a first-order tensor field and can be defined in two ways:
and
We have
If tensor is symmetric then . Because of this, often in the literature the two definitions (and symbols and ) are used interchangeably (especially in mechanics equations where tensor symmetry is assumed).
Expressions of in cylindrical and spherical coordinates are given in the article del in cylindrical and spherical coordinates.
General coordinates
Using Einstein notation we can consider the divergence in general coordinates, which we write as x¹, …, xⁱ, …, xⁿ, where n is the number of dimensions of the domain. Here, the upper index refers to the number of the coordinate or component, so x² refers to the second component, and not the quantity x squared. The index variable i is used to refer to an arbitrary component, such as xⁱ. The divergence can then be written via the Voss–Weyl formula, as:
\[ \operatorname{div}(\mathbf{F}) = \frac{1}{\rho}\,\frac{\partial\left(\rho\,F^i\right)}{\partial x^i}, \]
where ρ is the local coefficient of the volume element and Fⁱ are the components of F with respect to the local unnormalized covariant basis (sometimes written as e_i = ∂x/∂xⁱ). The Einstein notation implies summation over i, since it appears as both an upper and lower index.
The volume coefficient ρ is a function of position which depends on the coordinate system. In Cartesian, cylindrical and spherical coordinates, using the same conventions as before, we have ρ = 1, ρ = r and ρ = r² sin θ, respectively. The volume can also be expressed as ρ = √|det g|, where g is the metric tensor. The determinant appears because it provides the appropriate invariant definition of the volume, given a set of vectors. Since the determinant is a scalar quantity which doesn't depend on the indices, these can be suppressed, writing ρ = √|det g|. The absolute value is taken in order to handle the general case where the determinant might be negative, such as in pseudo-Riemannian spaces. The reason for the square-root is a bit subtle: it effectively avoids double-counting as one goes from curved to Cartesian coordinates, and back. The volume (the determinant) can also be understood as the Jacobian of the transformation from Cartesian to curvilinear coordinates, which for n = 3 gives
\[ \rho = \left|\frac{\partial(x, y, z)}{\partial(x^1, x^2, x^3)}\right|. \]
Some conventions expect all local basis elements to be normalized to unit length, as was done in the previous sections. If we write ê_i for the normalized basis, and F̂ⁱ for the components of F with respect to it, we have that
\[ \mathbf{F} = F^i \mathbf{e}_i = F^i \|\mathbf{e}_i\|\,\hat{\mathbf{e}}_i = F^i \sqrt{g_{ii}}\,\hat{\mathbf{e}}_i = \hat{F}^i \hat{\mathbf{e}}_i, \]
using one of the properties of the metric tensor. By dotting both sides of the last equality with the contravariant element ê^i, we can conclude that Fⁱ = F̂ⁱ / √g_ii. After substituting, the formula becomes:
\[ \operatorname{div}(\mathbf{F}) = \frac{1}{\rho}\,\frac{\partial\left(\frac{\rho}{\sqrt{g_{ii}}}\,\hat{F}^i\right)}{\partial x^i}. \]
See for further discussion.
Properties
The following properties can all be derived from the ordinary differentiation rules of calculus. Most importantly, the divergence is a linear operator, i.e.,
\[ \operatorname{div}(a\mathbf{F} + b\mathbf{G}) = a\,\operatorname{div}\mathbf{F} + b\,\operatorname{div}\mathbf{G} \]
for all vector fields F and G and all real numbers a and b.
There is a product rule of the following type: if φ is a scalar-valued function and F is a vector field, then
\[ \operatorname{div}(\varphi\mathbf{F}) = \operatorname{grad}\varphi \cdot \mathbf{F} + \varphi\,\operatorname{div}\mathbf{F}, \]
or in more suggestive notation
\[ \nabla\cdot(\varphi\mathbf{F}) = (\nabla\varphi)\cdot\mathbf{F} + \varphi\,(\nabla\cdot\mathbf{F}). \]
Another product rule for the cross product of two vector fields F and G in three dimensions involves the curl and reads as follows:
\[ \operatorname{div}(\mathbf{F}\times\mathbf{G}) = \operatorname{curl}\mathbf{F}\cdot\mathbf{G} - \mathbf{F}\cdot\operatorname{curl}\mathbf{G}, \]
or
\[ \nabla\cdot(\mathbf{F}\times\mathbf{G}) = (\nabla\times\mathbf{F})\cdot\mathbf{G} - \mathbf{F}\cdot(\nabla\times\mathbf{G}). \]
The Laplacian of a scalar field is the divergence of the field's gradient:
\[ \operatorname{div}(\operatorname{grad}\varphi) = \Delta\varphi. \]
The divergence of the curl of any vector field (in three dimensions) is equal to zero:
\[ \nabla\cdot(\nabla\times\mathbf{F}) = 0. \]
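The last two identities can be verified symbolically; the short check below is an added illustration using SymPy and generic (undefined) component functions.

```python
# Symbolic verification of div(curl F) = 0 and of the Laplacian as div(grad phi).
from sympy import Function, diff, simplify
from sympy.vector import CoordSys3D, divergence, curl, gradient

R = CoordSys3D('R')
Fx = Function('Fx')(R.x, R.y, R.z)
Fy = Function('Fy')(R.x, R.y, R.z)
Fz = Function('Fz')(R.x, R.y, R.z)
F = Fx * R.i + Fy * R.j + Fz * R.k

print(simplify(divergence(curl(F))))            # 0 for any smooth F

phi = Function('phi')(R.x, R.y, R.z)
lap_via_div_grad = divergence(gradient(phi))    # div(grad phi)
lap_direct = diff(phi, R.x, 2) + diff(phi, R.y, 2) + diff(phi, R.z, 2)
print(simplify(lap_via_div_grad - lap_direct))  # 0
```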
If a vector field F with zero divergence is defined on a ball in R³, then there exists some vector field G on the ball with F = curl G. For regions in R³ more topologically complicated than this, the latter statement might be false (see Poincaré lemma). The degree of failure of the truth of the statement, measured by the homology of the chain complex
serves as a nice quantification of the complicatedness of the underlying region . These are the beginnings and main motivations of de Rham cohomology.
Decomposition theorem
It can be shown that any stationary flux that is twice continuously differentiable in and vanishes sufficiently fast for can be decomposed uniquely into an irrotational part and a source-free part . Moreover, these parts are explicitly determined by the respective source densities (see above) and circulation densities (see the article Curl):
For the irrotational part one has
with
The source-free part, , can be similarly written: one only has to replace the scalar potential by a vector potential and the terms by , and the source density
by the circulation density .
This "decomposition theorem" is a by-product of the stationary case of electrodynamics. It is a special case of the more general Helmholtz decomposition, which works in dimensions greater than three as well.
In arbitrary finite dimensions
The divergence of a vector field can be defined in any finite number n of dimensions. If
\[ \mathbf{F} = (F_1, F_2, \ldots, F_n) \]
in a Euclidean coordinate system with coordinates x₁, x₂, …, xₙ, define
\[ \operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F} = \frac{\partial F_1}{\partial x_1} + \frac{\partial F_2}{\partial x_2} + \cdots + \frac{\partial F_n}{\partial x_n}. \]
In the 1D case, F reduces to a regular function, and the divergence reduces to the derivative.
For any n, the divergence is a linear operator, and it satisfies the "product rule"
\[ \nabla\cdot(\varphi\mathbf{F}) = (\nabla\varphi)\cdot\mathbf{F} + \varphi\,(\nabla\cdot\mathbf{F}) \]
for any scalar-valued function φ.
Relation to the exterior derivative
One can express the divergence as a particular case of the exterior derivative, which takes a 2-form to a 3-form in . Define the current two-form as
It measures the amount of "stuff" flowing through a surface per unit time in a "stuff fluid" of density moving with local velocity . Its exterior derivative is then given by
where is the wedge product.
Thus, the divergence of the vector field can be expressed as:
Here the superscript is one of the two musical isomorphisms, and is the Hodge star operator. When the divergence is written in this way, the operator is referred to as the codifferential. Working with the current two-form and the exterior derivative is usually easier than working with the vector field and divergence, because unlike the divergence, the exterior derivative commutes with a change of (curvilinear) coordinate system.
In curvilinear coordinates
The appropriate expression is more complicated in curvilinear coordinates. The divergence of a vector field extends naturally to any differentiable manifold of dimension that has a volume form (or density) , e.g. a Riemannian or Lorentzian manifold. Generalising the construction of a two-form for a vector field on , on such a manifold a vector field defines an -form obtained by contracting with . The divergence is then the function defined by
The divergence can be defined in terms of the Lie derivative as
This means that the divergence measures the rate of expansion of a unit of volume (a volume element) as it flows with the vector field.
On a pseudo-Riemannian manifold, the divergence with respect to the volume can be expressed in terms of the Levi-Civita connection :
where the second expression is the contraction of the vector field valued 1-form with itself and the last expression is the traditional coordinate expression from Ricci calculus.
An equivalent expression without using a connection is
where is the metric and denotes the partial derivative with respect to coordinate . The square-root of the (absolute value of the determinant of the) metric appears because the divergence must be written with the correct conception of the volume. In curvilinear coordinates, the basis vectors are no longer orthonormal; the determinant encodes the correct idea of volume in this case. It appears twice, here, once, so that the can be transformed into "flat space" (where coordinates are actually orthonormal), and once again so that is also transformed into "flat space", so that finally, the "ordinary" divergence can be written with the "ordinary" concept of volume in flat space (i.e. unit volume, i.e. one, i.e. not written down). The square-root appears in the denominator, because the derivative transforms in the opposite way (contravariantly) to the vector (which is covariant). This idea of getting to a "flat coordinate system" where local computations can be done in a conventional way is called a vielbein. A different way to see this is to note that the divergence is the codifferential in disguise. That is, the divergence corresponds to the expression with the differential and the Hodge star. The Hodge star, by its construction, causes the volume form to appear in all of the right places.
The divergence of tensors
Divergence can also be generalised to tensors. In Einstein notation, the divergence of a contravariant vector is given by
where denotes the covariant derivative. In this general setting, the correct formulation of the divergence is to recognize that it is a codifferential; the appropriate properties follow from there.
Equivalently, some authors define the divergence of a mixed tensor by using the musical isomorphism : if is a -tensor ( for the contravariant vector and for the covariant one), then we define the divergence of to be the -tensor
that is, we take the trace over the first two covariant indices of the covariant derivative.
The symbol refers to the musical isomorphism.
See also
Curl
Del in cylindrical and spherical coordinates
Divergence theorem
Gradient
Notes
Citations
References
External links
The idea of divergence of a vector field
Khan Academy: Divergence video lesson
Differential operators
Linear operators in calculus
Vector calculus | 0.774795 | 0.998638 | 0.77374 |
Faraday's law of induction

Faraday's law of induction (or simply Faraday's law) is a law of electromagnetism predicting how a magnetic field will interact with an electric circuit to produce an electromotive force (emf). This phenomenon, known as electromagnetic induction, is the fundamental operating principle of transformers, inductors, and many types of electric motors, generators and solenoids.
The Maxwell–Faraday equation (listed as one of Maxwell's equations) describes the fact that a spatially varying (and also possibly time-varying, depending on how a magnetic field varies in time) electric field always accompanies a time-varying magnetic field, while Faraday's law states that an emf (the electromagnetic work done on a unit charge when it has traveled one round of a conductive loop) appears on a conductive loop when the magnetic flux through the surface enclosed by the loop varies in time.
Once Faraday's law had been discovered, one aspect of it (transformer emf) was formulated as the Maxwell–Faraday equation later. The equation of Faraday's law can be derived by the Maxwell–Faraday equation (describing transformer emf) and the Lorentz force (describing motional emf). The integral form of the Maxwell–Faraday equation describes only the transformer emf, while the equation of Faraday's law describes both the transformer emf and the motional emf.
History
Electromagnetic induction was discovered independently by Michael Faraday in 1831 and Joseph Henry in 1832. Faraday was the first to publish the results of his experiments.
Faraday's notebook on August 29, 1831 describes an experimental demonstration of electromagnetic induction (see figure), in which he wrapped two wires around opposite sides of an iron ring (like a modern toroidal transformer). His assessment of newly-discovered properties of electromagnets suggested that when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. Indeed, a galvanometer's needle measured a transient current (which he called a "wave of electricity") on the right side's wire when he connected or disconnected the left side's wire to a battery. This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected. His notebook entry also noted that fewer wraps for the battery side resulted in a greater disturbance of the galvanometer's needle.
Within two months, Faraday had found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk").
Michael Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. An exception was James Clerk Maxwell, who in 1861–62 used Faraday's ideas as the basis of his quantitative electromagnetic theory. In Maxwell's papers, the time-varying aspect of electromagnetic induction is expressed as a differential equation which Oliver Heaviside referred to as Faraday's law even though it is different from the original version of Faraday's law, and does not describe motional emf. Heaviside's version (see Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations.
Lenz's law, formulated by Emil Lenz in 1834, describes "flux through the circuit", and gives the direction of the induced emf and current resulting from electromagnetic induction (elaborated upon in the examples below).
According to Albert Einstein, much of the groundwork and discovery of his special relativity theory was presented by this law of induction by Faraday in 1834.
Faraday's law
The most widespread version of Faraday's law states: the electromotive force around a closed path is equal to the negative of the time rate of change of the magnetic flux enclosed by the path.
Mathematical statement
For a loop of wire in a magnetic field, the magnetic flux Φ_B is defined for any surface Σ whose boundary is the given loop. Since the wire loop may be moving, we write Σ(t) for the surface. The magnetic flux is the surface integral:
\[ \Phi_B = \iint_{\Sigma(t)} \mathbf{B}(t)\cdot d\mathbf{A}, \]
where dA is an element of area vector of the moving surface Σ(t), B is the magnetic field, and B · dA is a vector dot product representing the element of flux through dA. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic field lines that pass through the loop.
When the flux changes—because changes, or because the wire loop is moved or deformed, or both—Faraday's law of induction says that the wire loop acquires an emf, defined as the energy available from a unit charge that has traveled once around the wire loop. (Although some sources state the definition differently, this expression was chosen for compatibility with the equations of special relativity.) Equivalently, it is the voltage that would be measured by cutting the wire to create an open circuit, and attaching a voltmeter to the leads.
Faraday's law states that the emf is also given by the rate of change of the magnetic flux:
\[ \mathcal{E} = -\frac{d\Phi_B}{dt}, \]
where \(\mathcal{E}\) is the electromotive force (emf) and Φ_B is the magnetic flux.
The direction of the electromotive force is given by Lenz's law.
The laws of induction of electric currents in mathematical form was established by Franz Ernst Neumann in 1845.
Faraday's law contains the information about the relationships between both the magnitudes and the directions of its variables. However, the relationships between the directions are not explicit; they are hidden in the mathematical formula.
It is possible to find out the direction of the electromotive force (emf) directly from Faraday’s law, without invoking Lenz's law. A left hand rule helps doing that, as follows:
Align the curved fingers of the left hand with the loop (yellow line).
Stretch your thumb. The stretched thumb indicates the direction of n (brown), the normal to the area enclosed by the loop.
Find the sign of ΔΦ_B, the change in flux. Determine the initial and final fluxes (whose difference is ΔΦ_B) with respect to the normal n, as indicated by the stretched thumb.
If the change in flux, ΔΦ_B, is positive, the curved fingers show the direction of the electromotive force (yellow arrowheads).
If ΔΦ_B is negative, the direction of the electromotive force is opposite to the direction of the curved fingers (opposite to the yellow arrowheads).
For a tightly wound coil of wire, composed of N identical turns, each with the same Φ_B, Faraday's law of induction states that
\[ \mathcal{E} = -N\frac{d\Phi_B}{dt}, \]
where N is the number of turns of wire and Φ_B is the magnetic flux through a single loop.
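A small worked example (with made-up values) of the N-turn formula above:

```python
# Coil of N turns in a spatially uniform magnetic field B(t) that ramps
# linearly, with the field normal to the coil plane.  emf = -N * dPhi/dt.
N_turns = 500
area = 0.01            # coil area in m^2 (circle of radius ~5.6 cm)
dB_dt = 0.2            # field ramp rate in tesla per second
dPhi_dt = area * dB_dt # flux through ONE turn changes at this rate (Wb/s)
emf = -N_turns * dPhi_dt
print(f"induced emf = {emf:.2f} V")   # -1.00 V; the sign encodes Lenz's law
```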
Maxwell–Faraday equation
The Maxwell–Faraday equation states that a time-varying magnetic field always accompanies a spatially varying (also possibly time-varying), non-conservative electric field, and vice versa. The Maxwell–Faraday equation is
\[ \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t} \]
(in SI units) where ∇× is the curl operator and again E(r, t) is the electric field and B(r, t) is the magnetic field. These fields can generally be functions of position r and time t.
The Maxwell–Faraday equation is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism. It can also be written in an integral form by the Kelvin–Stokes theorem, thereby reproducing Faraday's law:
\[ \oint_{\partial\Sigma} \mathbf{E}\cdot d\mathbf{l} = -\iint_{\Sigma} \frac{\partial\mathbf{B}}{\partial t}\cdot d\mathbf{A}, \]
where, as indicated in the figure, Σ is a surface bounded by the closed contour ∂Σ, dl is an infinitesimal vector element of the contour ∂Σ, and dA is an infinitesimal vector element of surface Σ. Its direction is orthogonal to that surface patch, the magnitude is the area of an infinitesimal patch of surface.
Both dl and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin–Stokes theorem. For a planar surface Σ, a positive path element dl of curve ∂Σ is defined by the right-hand rule as one that points with the fingers of the right hand when the thumb points in the direction of the normal n to the surface Σ.
The line integral around ∂Σ is called circulation. A nonzero circulation of E is different from the behavior of the electric field generated by static charges. A charge-generated E-field can be expressed as the gradient of a scalar field that is a solution to Poisson's equation, and has a zero path integral. See gradient theorem.
The integral equation is true for any path through space, and any surface for which that path is a boundary.
If the surface Σ is not changing in time, the equation can be rewritten:
\[ \oint_{\partial\Sigma} \mathbf{E}\cdot d\mathbf{l} = -\frac{d}{dt}\iint_{\Sigma} \mathbf{B}\cdot d\mathbf{A}. \]
The surface integral at the right-hand side is the explicit expression for the magnetic flux Φ_B through Σ.
The electric vector field induced by a changing magnetic flux, the solenoidal component of the overall electric field, can be approximated in the non-relativistic limit by the volume integral equation
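As an added illustration, the differential form of the Maxwell–Faraday equation can be checked symbolically for a concrete pair of fields: a spatially uniform magnetic field growing linearly in time, and an induced electric field that circulates around the z axis. The specific field choice is an assumption made for the example.

```python
# Check that curl E = -dB/dt for B = B0*t along z and E = (B0/2)*(y, -x, 0).
from sympy import symbols
from sympy.vector import CoordSys3D, curl

R = CoordSys3D('R')
t, B0 = symbols('t B0')

Bz = B0 * t                               # uniform field along z, ramping in time
E = (B0 / 2) * (R.y * R.i - R.x * R.j)    # candidate induced field

lhs = curl(E)                             # evaluates to -B0 * R.k
rhs = -Bz.diff(t) * R.k                   # -dB/dt as a vector
print(lhs - rhs)                          # prints the zero vector: the pair is consistent
```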
Proof
The four Maxwell's equations (including the Maxwell–Faraday equation), along with Lorentz force law, are a sufficient foundation to derive everything in classical electromagnetism. Therefore, it is possible to "prove" Faraday's law starting with these equations.
The starting point is the time-derivative of flux through an arbitrary surface (that can be moved or deformed) in space:
(by definition). This total time derivative can be evaluated and simplified with the help of the Maxwell–Faraday equation and some vector identities; the details are in the box below:
The result is:
where is the boundary (loop) of the surface , and is the velocity of a part of the boundary.
In the case of a conductive loop, emf (Electromotive Force) is the electromagnetic work done on a unit charge when it has traveled around the loop once, and this work is done by the Lorentz force. Therefore, emf is expressed as
where is emf and is the unit charge velocity.
In a macroscopic view, for charges on a segment of the loop, consists of two components in average; one is the velocity of the charge along the segment , and the other is the velocity of the segment (the loop is deformed or moved). does not contribute to the work done on the charge since the direction of is same to the direction of . Mathematically,
since is perpendicular to as and are along the same direction. Now we can see that, for the conductive loop, emf is same to the time-derivative of the magnetic flux through the loop except for the sign on it. Therefore, we now reach the equation of Faraday's law (for the conductive loop) as
where . With breaking this integral, is for the transformer emf (due to a time-varying magnetic field) and is for the motional emf (due to the magnetic Lorentz force on charges by the motion or deformation of the loop in the magnetic field).
Exceptions
It is tempting to generalize Faraday's law to state: If is any arbitrary closed loop in space whatsoever, then the total time derivative of magnetic flux through equals the emf around . This statement, however, is not always true and the reason is not just from the obvious reason that emf is undefined in empty space when no conductor is present. As noted in the previous section, Faraday's law is not guaranteed to work unless the velocity of the abstract curve matches the actual velocity of the material conducting the electricity. The two examples illustrated below show that one often obtains incorrect results when the motion of is divorced from the motion of the material.
One can analyze examples like these by taking care that the path moves with the same velocity as the material. Alternatively, one can always correctly calculate the emf by combining Lorentz force law with the Maxwell–Faraday equation:
where "it is very important to notice that (1) is the velocity of the conductor ... not the velocity of the path element and (2) in general, the partial derivative with respect to time cannot be moved outside the integral since the area is a function of time."
Faraday's law and relativity
Two phenomena
Faraday's law is a single equation describing two different phenomena: the motional emf generated by a magnetic force on a moving wire (see the Lorentz force), and the transformer emf generated by an electric force due to a changing magnetic field (described by the Maxwell–Faraday equation).
James Clerk Maxwell drew attention to this fact in his 1861 paper On Physical Lines of Force. In the latter half of Part II of that paper, Maxwell gives a separate physical explanation for each of the two phenomena.
A reference to these two aspects of electromagnetic induction is made in some modern textbooks. As Richard Feynman states:
Explanation based on four-dimensional formalism
In the general case, explanation of the motional emf appearance by action of the magnetic force on the charges in the moving wire or in the circuit changing its area is unsatisfactory. As a matter of fact, the charges in the wire or in the circuit could be completely absent, will then the electromagnetic induction effect disappear in this case? This situation is analyzed in the article, in which, when writing the integral equations of the electromagnetic field in a four-dimensional covariant form, in the Faraday’s law the total time derivative of the magnetic flux through the circuit appears instead of the partial time derivative. Thus, electromagnetic induction appears either when the magnetic field changes over time or when the area of the circuit changes. From the physical point of view, it is better to speak not about the induction emf, but about the induced electric field strength , that occurs in the circuit when the magnetic flux changes. In this case, the contribution to from the change in the magnetic field is made through the term , where is the vector potential. If the circuit area is changing in case of the constant magnetic field, then some part of the circuit is inevitably moving, and the electric field emerges in this part of the circuit in the comoving reference frame K’ as a result of the Lorentz transformation of the magnetic field , present in the stationary reference frame K, which passes through the circuit. The presence of the field in K’ is considered as a result of the induction effect in the moving circuit, regardless of whether the charges are present in the circuit or not. In the conducting circuit, the field causes motion of the charges. In the reference frame K, it looks like appearance of emf of the induction , the gradient of which in the form of , taken along the circuit, seems to generate the field .
Einstein's view
Reflection on this apparent dichotomy was one of the principal paths that led Albert Einstein to develop special relativity:
See also
References
Further reading
External links
A simple interactive tutorial on electromagnetic induction (click and drag magnet back and forth) National High Magnetic Field Laboratory
Roberto Vega. Induction: Faraday's law and Lenz's law – Highly animated lecture, with sound effects, Electricity and Magnetism course page
Notes from Physics and Astronomy HyperPhysics at Georgia State University
Tankersley and Mosca: Introducing Faraday's law
A free simulation on motional emf
Faraday's law of electromagnetic induction
Michael Faraday
Maxwell's equations | 0.774592 | 0.99888 | 0.773725 |
Dialectics of Nature

Dialectics of Nature is an unfinished 1883 work by Friedrich Engels that applies Marxist ideas – particularly those of dialectical materialism – to nature.
History and contents
Engels wrote most of the manuscript between 1872 and 1882; it was a melange of German, French and English notes on the contemporary development of science and technology, and it was not published within his lifetime. In later times, Eduard Bernstein passed the manuscripts to Albert Einstein, who thought the science confused (particularly the mathematics and physics) but the overall work worthy of a broader readership. After that, in 1925, the Marx–Engels–Lenin Institute in Moscow published the manuscripts (a bilingual German/Russian edition).
The biologist J. B. S. Haldane wrote a preface for the work in 1939, "Hence it is often hard to follow if one does not know the history of the scientific practice of that time. The idea of what is now called the conservation of energy was beginning to permeate physics, astronomy, chemistry, geoscience, and biology, but it was still very incompletely realised, and still more incompletely applied. Words such as 'force', 'motion', and 'vis viva' were used where we should now speak of energy".
Some then controversial topics of Engels' day, pertaining to incomplete or faulty theories, are now settled, making some of Engels' essays dated. "Their interest lies not so much in their detailed criticism of theories, but in showing how Engels grappled with intellectual problems".
One "law" proposed in the Dialectics of Nature is the "law of the transformation of quantity into quality and vice versa". Probably the most commonly cited example of this is the change of water from a liquid to a gas, by increasing its temperature (although Engels also describes other examples from chemistry). In contemporary science, this process is known as a phase transition. There has also been an effort to apply this mechanism to social phenomena, whereby increases in population result in changes in social structure.
Dialectics and its study was derived from the philosopher and author of Science of Logic, G. W. F. Hegel, who, in turn, had studied the Greek philosopher Heraclitus. Heraclitus taught that everything was constantly changing and that all things consisted of two opposite elements which changed into each other as night changes into day, light into darkness, life into death etc.
Engels's work develops from the comments he had made about science in Anti-Dühring. It includes the famous "The Part Played by Labour in the Transition from Ape to Man", which has also been published separately as a pamphlet. Engels argues that the hand and brain grew together, an idea supported by later fossil discoveries (see Australopithecus afarensis).
Most of the work is fragmentary and in the form of rough notes, as shown in this quotation from the section entitled "Biology":
See also
Natural philosophy
Naturphilosophie
Dialectical materialism
Notes and references
External links
Full text on-line.
Dialectics of Nature, PDF of edition published by Progress Publishers.
1883 non-fiction books
Marxist books
Books by Friedrich Engels
Dialectical materialism | 0.789533 | 0.979927 | 0.773685 |
Darcy's law

Darcy's law is an equation that describes the flow of a fluid through a porous medium and through a Hele-Shaw cell. The law was formulated by Henry Darcy based on results of experiments on the flow of water through beds of sand, forming the basis of hydrogeology, a branch of earth sciences. It is analogous to Ohm's law in electrostatics, linearly relating the volume flow rate of the fluid to the hydraulic head difference (which is often just proportional to the pressure difference) via the hydraulic conductivity. In fact, Darcy's law is a special case of the Stokes equation for the momentum flux, in turn deriving from the momentum Navier–Stokes equation.
Background
Darcy's law was first determined experimentally by Darcy, but has since been derived from the Navier–Stokes equations via homogenization methods. It is analogous to Fourier's law in the field of heat conduction, Ohm's law in the field of electrical networks, and Fick's law in diffusion theory.
One application of Darcy's law is in the analysis of water flow through an aquifer; Darcy's law along with the equation of conservation of mass simplifies to the groundwater flow equation, one of the basic relationships of hydrogeology.
Morris Muskat first refined Darcy's equation for a single-phase flow by including viscosity in the single (fluid) phase equation of Darcy. It can be understood that viscous fluids have more difficulty permeating through a porous medium than less viscous fluids. This change made it suitable for researchers in the petroleum industry. Based on experimental results by his colleagues Wyckoff and Botset, Muskat and Meres also generalized Darcy's law to cover a multiphase flow of water, oil and gas in the porous medium of a petroleum reservoir. The generalized multiphase flow equations by Muskat and others provide the analytical foundation for reservoir engineering that exists to this day.
Description
In the integral form, Darcy's law, as refined by Morris Muskat, in the absence of gravitational forces and in a homogeneously permeable medium, is given by a simple proportionality relationship between the volumetric flow rate Q and the pressure drop Δp through a porous medium. The proportionality constant is linked to the permeability k of the medium, the dynamic viscosity of the fluid μ, the given distance L over which the pressure drop is computed, and the cross-sectional area A, in the form:
\[ Q = \frac{k A}{\mu L}\,\Delta p. \]
Note that the ratio:
\[ R_D = \frac{\mu L}{k A} \]
can be defined as the Darcy's law hydraulic resistance, so that Q = Δp / R_D, in analogy with Ohm's law.
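A small worked example of the integral form (all numbers are made up for illustration):

```python
# Single-phase water flowing through a sand column under a fixed pressure drop.
k = 1e-12          # permeability, m^2 (~1 darcy)
mu = 1e-3          # dynamic viscosity of water, Pa*s
A = 0.01           # cross-sectional area, m^2
L = 0.5            # sample length, m
dp = 2.0e4         # pressure drop along the sample, Pa

R_hyd = mu * L / (k * A)        # hydraulic resistance, Pa*s/m^3
Q = dp / R_hyd                  # volumetric flow rate, m^3/s
print(f"Q = {Q:.3e} m^3/s  ({Q * 1000 * 3600:.2f} litres per hour)")
```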
Darcy's law can be generalised to a local form:
\[ \mathbf{q} = -\frac{k}{\mu}\,\nabla p, \]
where ∇p is the pressure (hydraulic) gradient and q is the volumetric flux, which here is also called the superficial velocity.
Note that the ratio:
\[ \frac{k}{\mu} \]
can be thought of as the Darcy's law hydraulic conductivity.
In the (less general) integral form, the volumetric flux and the pressure gradient correspond to the ratios:
\[ q = \frac{Q}{A}, \qquad \nabla p \approx \frac{\Delta p}{L}. \]
In case of an anisotropic porous medium, the permeability is a second order tensor, and in tensor notation one can write the more general law:
\[ q_i = -\frac{1}{\mu}\sum_j k_{ij}\,\frac{\partial p}{\partial x_j}. \]
Notice that the quantity q, often referred to as the Darcy flux or Darcy velocity, is not the velocity at which the fluid is travelling through the pores. The flow velocity u is related to the flux q by the porosity φ with the following equation:
\[ \mathbf{u} = \frac{\mathbf{q}}{\varphi}. \]
The Darcy's constitutive equation, for single phase (fluid) flow, is the defining equation for absolute permeability (single phase permeability).
With reference to the diagram to the right, the flow velocity u is in SI units m/s, and since the porosity φ is a nondimensional number, the Darcy flux q, or discharge per unit area, is also defined in units m/s; the permeability k is in units m², the dynamic viscosity μ in units Pa·s, and the hydraulic gradient is in units Pa/m.
In the integral form, the total pressure drop Δp is in units Pa, L is the length of the sample in units m, the Darcy volumetric flow rate Q, or discharge, is defined in units m³/s, and the cross-sectional area A in units m². A number of these parameters are used in alternative definitions below. A negative sign is used in the definition of the flux following the standard physics convention that fluids flow from regions of high pressure to regions of low pressure. Note that the elevation head must be taken into account if the inlet and outlet are at different elevations. If the change in pressure is negative, then the flow will be in the positive direction. There have been several proposals for a constitutive equation for absolute permeability, and the most famous one is probably the Kozeny equation (also called Kozeny–Carman equation).
By considering the relation for static fluid pressure (Stevin's law):
\[ p = \rho g h, \]
one can decline the integral form also into the equation:
\[ Q = \frac{k \rho g A}{\mu L}\,\Delta h = \frac{k g A}{\nu L}\,\Delta h, \]
where ν is the kinematic viscosity.
The corresponding hydraulic conductivity is therefore:
\[ K = \frac{k \rho g}{\mu} = \frac{k g}{\nu}. \]
Darcy's law is a simple mathematical statement which neatly summarizes several familiar properties that groundwater flowing in aquifers exhibits, including:
if there is no pressure gradient over a distance, no flow occurs (these are hydrostatic conditions),
if there is a pressure gradient, flow will occur from high pressure towards low pressure (opposite the direction of increasing gradient — hence the negative sign in Darcy's law),
the greater the pressure gradient (through the same formation material), the greater the discharge rate, and
the discharge rate of fluid will often be different — through different formation materials (or even through the same material, in a different direction) — even if the same pressure gradient exists in both cases.
A graphical illustration of the use of the steady-state groundwater flow equation (based on Darcy's law and the conservation of mass) is in the construction of flownets, to quantify the amount of groundwater flowing under a dam.
Darcy's law is only valid for slow, viscous flow; however, most groundwater flow cases fall in this category. Typically any flow with a Reynolds number less than one is clearly laminar, and it would be valid to apply Darcy's law. Experimental tests have shown that flow regimes with Reynolds numbers up to 10 may still be Darcian, as in the case of groundwater flow. The Reynolds number (a dimensionless parameter) for porous media flow is typically expressed as
\[ \mathrm{Re} = \frac{q\,d_{30}}{\nu}, \]
where ν is the kinematic viscosity of water, q is the specific discharge (not the pore velocity, with units of length per time), and d₃₀ is a representative grain diameter for the porous media (the standard choice is d₃₀, which is the 30% passing size from a grain size analysis using sieves, with units of length).
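A quick illustrative check of this validity criterion (values are made up):

```python
# Porous-media Reynolds number Re = q*d30/nu, used to decide whether the flow
# is still in the Darcian (laminar) regime.
q = 1e-5        # specific discharge, m/s
d30 = 5e-4      # 30% passing grain size, m (medium sand)
nu = 1e-6       # kinematic viscosity of water, m^2/s

Re = q * d30 / nu
print(f"Re = {Re:.3f} -> {'Darcian (laminar)' if Re < 10 else 'possibly non-Darcian'}")
```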
Derivation
For stationary, creeping, incompressible flow, i.e. , the Navier–Stokes equation simplifies to the Stokes equation, which by neglecting the bulk term is:
where is the viscosity, is the velocity in the direction, and is the pressure. Assuming the viscous resisting force is linear with the velocity we may write:
where is the porosity, and is the second order permeability tensor. This gives the velocity in the direction,
which gives Darcy's law for the volumetric flux density in the direction,
In isotropic porous media the off-diagonal elements in the permeability tensor are zero, for and the diagonal elements are identical, , and the common form is obtained as below, which enables the determination of the liquid flow velocity by solving a set of equations in a given region.
The above equation is a governing equation for single-phase fluid flow in a porous medium.
Use in petroleum engineering
Another derivation of Darcy's law is used extensively in petroleum engineering to determine the flow through permeable media — the most simple of which is for a one-dimensional, homogeneous rock formation with a single fluid phase and constant fluid viscosity.
Almost all oil reservoirs have a water zone below the oil leg, and some have also a gas cap above the oil leg. When the reservoir pressure drops due to oil production, water flows into the oil zone from below, and gas flows into the oil zone from above (if the gas cap exists), and we get a simultaneous flow and immiscible mixing of all fluid phases in the oil zone. The operator of the oil field may also inject water (and/or gas) in order to improve oil production. The petroleum industry is therefore using a generalized Darcy equation for multiphase flow that was developed by Muskat et alios. Because Darcy's name is so widespread and strongly associated with flow in porous media, the multiphase equation is denoted Darcy's law for multiphase flow or generalized Darcy equation (or law) or simply Darcy's equation (or law) or simply flow equation if the context says that the text is discussing the multiphase equation of Muskat et alios. Multiphase flow in oil and gas reservoirs is a comprehensive topic, and one of many articles about this topic is Darcy's law for multiphase flow.
Use in coffee brewing
A number of papers have utilized Darcy's law to model the physics of brewing in a moka pot, specifically how the hot water percolates through the coffee grinds under pressure, starting with a 2001 paper by Varlamov and Balestrino, and continuing with a 2007 paper by Gianino, a 2008 paper by Navarini et al., and a 2008 paper by W. King. The papers will either take the coffee permeability to be constant as a simplification or will measure change through the brewing process.
Additional forms
Differential expression
Darcy's law can be expressed very generally as:
\[ \mathbf{q} = -\mathbf{K}\,\nabla h, \]
where q is the volume flux vector of the fluid at a particular point in the medium, h is the total hydraulic head, and K is the hydraulic conductivity tensor, at that point. The hydraulic conductivity can often be approximated as a scalar. (Note the analogy to Ohm's law in electrostatics. The flux vector is analogous to the current density, head is analogous to voltage, and hydraulic conductivity is analogous to electrical conductivity.)
Quadratic law
For flows in porous media with Reynolds numbers greater than about 1 to 10, inertial effects can also become significant. Sometimes an inertial term is added to the Darcy's equation, known as Forchheimer term. This term is able to account for the non-linear behavior of the pressure difference vs flow data:
\[ -\frac{\partial p}{\partial x} = \frac{\mu}{k}\,q + \frac{\rho}{k_1}\,q^2, \]
where the additional term k₁ is known as inertial permeability, in units of length (m).
The flow in the middle of a sandstone reservoir is so slow that Forchheimer's equation is usually not needed, but the gas flow into a gas production well may be high enough to justify using it. In this case, the inflow performance calculations for the well, not the grid cell of the 3D model, are based on the Forchheimer equation. The effect of this is that an additional rate-dependent skin appears in the inflow performance formula.
Some carbonate reservoirs have many fractures, and Darcy's equation for multiphase flow is generalized in order to govern both flow in fractures and flow in the matrix (i.e. the traditional porous rock). The irregular surface of the fracture walls and high flow rate in the fractures may justify the use of Forchheimer's equation.
Correction for gases in fine media (Knudsen diffusion or Klinkenberg effect)
For gas flow in small characteristic dimensions (e.g., very fine sand, nanoporous structures etc.), the particle-wall interactions become more frequent, giving rise to additional wall friction (Knudsen friction). For a flow in this region, where both viscous and Knudsen friction are present, a new formulation needs to be used. Knudsen presented a semi-empirical model for flow in transition regime based on his experiments on small capillaries. For a porous medium, the Knudsen equation can be given as
where is the molar flux, is the gas constant, is the temperature, is the effective Knudsen diffusivity of the porous media. The model can also be derived from the first-principle-based binary friction model (BFM). The differential equation of transition flow in porous media based on BFM is given as
This equation is valid for capillaries as well as porous media. The terminology of the Knudsen effect and Knudsen diffusivity is more common in mechanical and chemical engineering. In geological and petrochemical engineering, this effect is known as the Klinkenberg effect. Using the definition of molar flux, the above equation can be rewritten as
This equation can be rearranged into the following equation
Comparing this equation with conventional Darcy's law, a new formulation can be given as
where
This is equivalent to the effective permeability formulation proposed by Klinkenberg:
\[ k_{\mathrm{eff}} = k\left(1 + \frac{b}{p}\right), \]
where b is known as the Klinkenberg parameter, which depends on the gas and the porous medium structure. This is quite evident if we compare the above formulations. The Klinkenberg parameter b is dependent on permeability, Knudsen diffusivity and viscosity (i.e., both gas and porous medium properties).
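A small illustration of the Klinkenberg correction (the permeability and the Klinkenberg parameter below are made-up values; in practice b is fitted to core measurements):

```python
# Apparent (effective) gas permeability grows as the mean pore pressure drops.
k_inf = 1e-15          # liquid (absolute) permeability, m^2
b = 2.0e5              # Klinkenberg parameter, Pa

for p in (1e5, 5e5, 2e6):                  # mean gas pressures, Pa
    k_eff = k_inf * (1.0 + b / p)
    print(f"p = {p:8.0f} Pa  ->  k_eff/k_inf = {k_eff / k_inf:.2f}")
```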
Darcy's law for short time scales
For very short time scales, a time derivative of flux may be added to Darcy's law, which results in valid solutions at very small times (in heat transfer, this is called the modified form of Fourier's law),
where is a very small time constant which causes this equation to reduce to the normal form of Darcy's law at "normal" times (> nanoseconds). The main reason for doing this is that the regular groundwater flow equation (diffusion equation) leads to singularities at constant head boundaries at very small times. This form is more mathematically rigorous but leads to a hyperbolic groundwater flow equation, which is more difficult to solve and is only useful at very small times, typically out of the realm of practical use.
Brinkman form of Darcy's law
Another extension to the traditional form of Darcy's law is the Brinkman term, which is used to account for transitional flow between boundaries (introduced by Brinkman in 1949):
\[ -\beta\,\nabla^2 \mathbf{q} + \mathbf{q} = -\frac{k}{\mu}\,\nabla p, \]
where β is an effective viscosity term. This correction term accounts for flow through a medium where the grains of the media are porous themselves, but is difficult to use, and is typically neglected.
Validity of Darcy's law
Darcy's law is valid for laminar flow through sediments. In fine-grained sediments, the dimensions of interstices are small; thus, the flow is laminar. Coarse-grained sediments also behave similarly, but in very coarse-grained sediments, the flow may be turbulent. Hence Darcy's law is not always valid in such sediments.
For flow through commercial circular pipes, the flow is laminar when the Reynolds number is less than 2000 and turbulent when it is more than 4000, but in some sediments, it has been found that flow is laminar when the value of the Reynolds number is less than 1.
See also
The darcy, a unit of fluid permeability
Hydrogeology
Groundwater flow equation
Mathematical model
Black-oil equations
Fick's law
Ergun equation
References
Water
Civil engineering
Soil mechanics
Soil physics
Hydrology
Transport phenomena | 0.776409 | 0.996472 | 0.77367 |
Thermodynamic equilibrium

Thermodynamic equilibrium is an axiomatic concept of thermodynamics. It is an internal state of a single thermodynamic system, or a relation between several thermodynamic systems connected by more or less permeable or impermeable walls. In thermodynamic equilibrium, there are no net macroscopic flows of matter nor of energy within a system or between systems. In a system that is in its own state of internal thermodynamic equilibrium, not only is there an absence of macroscopic change, but there is an “absence of any tendency toward change on a macroscopic scale.”
Systems in mutual thermodynamic equilibrium are simultaneously in mutual thermal, mechanical, chemical, and radiative equilibria. Systems can be in one kind of mutual equilibrium, while not in others. In thermodynamic equilibrium, all kinds of equilibrium hold at once and indefinitely, until disturbed by a thermodynamic operation. In a macroscopic equilibrium, perfectly or almost perfectly balanced microscopic exchanges occur; this is the physical explanation of the notion of macroscopic equilibrium.
A thermodynamic system in a state of internal thermodynamic equilibrium has a spatially uniform temperature. Its intensive properties, other than temperature, may be driven to spatial inhomogeneity by an unchanging long-range force field imposed on it by its surroundings.
In systems that are at a state of non-equilibrium there are, by contrast, net flows of matter or energy. If such changes can be triggered to occur in a system in which they are not already occurring, the system is said to be in a "meta-stable equilibrium".
Though not a widely named "law," it is an axiom of thermodynamics that there exist states of thermodynamic equilibrium. The second law of thermodynamics states that when an isolated body of material starts from an equilibrium state, in which portions of it are held at different states by more or less permeable or impermeable partitions, and a thermodynamic operation removes or makes the partitions more permeable, then it spontaneously reaches its own new state of internal thermodynamic equilibrium and this is accompanied by an increase in the sum of the entropies of the portions.
Overview
Classical thermodynamics deals with states of dynamic equilibrium. The state of a system at thermodynamic equilibrium is the one for which some thermodynamic potential is minimized (in the absence of an applied voltage), or for which the entropy (S) is maximized, for specified conditions. One such potential is the Helmholtz free energy (A), for a closed system at constant volume and temperature (controlled by a heat bath):
\[ A = U - TS. \]
Another potential, the Gibbs free energy (G), is minimized at thermodynamic equilibrium in a closed system at constant temperature and pressure, both controlled by the surroundings:
\[ G = U - TS + PV, \]
where T denotes the absolute thermodynamic temperature, P the pressure, S the entropy, V the volume, and U the internal energy of the system. In other words, ΔG = 0 is a necessary condition for chemical equilibrium under these conditions (in the absence of an applied voltage).
Thermodynamic equilibrium is the unique stable stationary state that is approached or eventually reached as the system interacts with its surroundings over a long time. The above-mentioned potentials are mathematically constructed to be the thermodynamic quantities that are minimized under the particular conditions in the specified surroundings.
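A tiny numerical illustration of entropy maximization (an added sketch, assuming two ideal blocks with constant heat capacities forming an isolated system; all values are made up):

```python
# Two blocks with heat capacities C1, C2 form an isolated system with fixed
# total internal energy U.  Scanning how the energy is split shows that the
# total entropy is maximal exactly where the two temperatures are equal.
import numpy as np

C1, C2 = 100.0, 300.0          # heat capacities, J/K
T1_0, T2_0 = 350.0, 290.0      # initial temperatures, K
U = C1 * T1_0 + C2 * T2_0      # total internal energy is conserved

U1 = np.linspace(0.01 * U, 0.99 * U, 200_000)             # candidate energy splits
S = C1 * np.log(U1 / C1) + C2 * np.log((U - U1) / C2)     # total entropy up to constants

i = np.argmax(S)
T1_eq, T2_eq = U1[i] / C1, (U - U1[i]) / C2
print(f"entropy is maximal at T1 = {T1_eq:.2f} K, T2 = {T2_eq:.2f} K")
# Both come out near 305 K = (C1*T1_0 + C2*T2_0)/(C1 + C2), the common
# equilibrium temperature, as expected for an isolated system.
```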
Conditions
For a completely isolated system, S is maximum at thermodynamic equilibrium.
For a closed system at controlled constant temperature and volume, A is minimum at thermodynamic equilibrium.
For a closed system at controlled constant temperature and pressure without an applied voltage, G is minimum at thermodynamic equilibrium.
The various types of equilibriums are achieved as follows:
Two systems are in thermal equilibrium when their temperatures are the same.
Two systems are in mechanical equilibrium when their pressures are the same.
Two systems are in diffusive equilibrium when their chemical potentials are the same.
All forces are balanced and there is no significant external driving force.
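As a small numerical check of these conditions (the state variables, units, and tolerance below are assumptions made only for this sketch), two simple systems can be tested for mutual thermal, mechanical, and diffusive equilibrium in Python:

from dataclasses import dataclass

@dataclass
class State:
    T: float    # temperature (K)
    P: float    # pressure (Pa)
    mu: float   # chemical potential (J/mol)

def mutual_equilibria(a: State, b: State, tol: float = 1e-9) -> dict:
    """Report which kinds of contact equilibrium hold between systems a and b."""
    return {
        "thermal":    abs(a.T - b.T)   < tol,  # equal temperatures
        "mechanical": abs(a.P - b.P)   < tol,  # equal pressures
        "diffusive":  abs(a.mu - b.mu) < tol,  # equal chemical potentials
    }

# Two systems at the same temperature but different pressures:
# thermal equilibrium holds, mechanical equilibrium does not.
print(mutual_equilibria(State(300.0, 1.0e5, -10.0), State(300.0, 2.0e5, -10.0)))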
Relation of exchange equilibrium between systems
Often the surroundings of a thermodynamic system may also be regarded as another thermodynamic system. In this view, one may consider the system and its surroundings as two systems in mutual contact, with long-range forces also linking them. The enclosure of the system is the surface of contiguity or boundary between the two systems. In the thermodynamic formalism, that surface is regarded as having specific properties of permeability. For example, the surface of contiguity may be supposed to be permeable only to heat, allowing energy to transfer only as heat. Then the two systems are said to be in thermal equilibrium when the long-range forces are unchanging in time and the transfer of energy as heat between them has slowed and eventually stopped permanently; this is an example of a contact equilibrium. Other kinds of contact equilibrium are defined by other kinds of specific permeability. When two systems are in contact equilibrium with respect to a particular kind of permeability, they have common values of the intensive variable that belongs to that particular kind of permeability. Examples of such intensive variables are temperature, pressure, chemical potential.
A contact equilibrium may be regarded also as an exchange equilibrium. There is a zero balance of rate of transfer of some quantity between the two systems in contact equilibrium. For example, for a wall permeable only to heat, the rates of diffusion of internal energy as heat between the two systems are equal and opposite. An adiabatic wall between the two systems is 'permeable' only to energy transferred as work; at mechanical equilibrium the rates of transfer of energy as work between them are equal and opposite. If the wall is a simple wall, then the rates of transfer of volume across it are also equal and opposite; and the pressures on either side of it are equal. If the adiabatic wall is more complicated, with a sort of leverage, having an area-ratio, then the pressures of the two systems in exchange equilibrium are in the inverse ratio of the volume exchange ratio; this keeps the zero balance of rates of transfer as work.
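A minimal worked version of the lever argument (the area-ratio symbol a is notation introduced only for this sketch): if the wall exchanges volumes in the ratio dV_1 = a |dV_2|, then a zero balance of energy transferred as work requires

P_1\,\mathrm{d}V_1 = P_2\,\lvert \mathrm{d}V_2 \rvert \quad\Longrightarrow\quad \frac{P_1}{P_2} = \frac{1}{a},

so the pressures stand in the inverse ratio of the volume exchange ratio, as stated above.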
A radiative exchange can occur between two otherwise separate systems. Radiative exchange equilibrium prevails when the two systems have the same temperature.
Thermodynamic state of internal equilibrium of a system
A collection of matter may be entirely isolated from its surroundings. If it has been left undisturbed for an indefinitely long time, classical thermodynamics postulates that it is in a state in which no changes occur within it, and there are no flows within it. This is a thermodynamic state of internal equilibrium. (This postulate is sometimes, but not often, called the "minus first" law of thermodynamics. One textbook calls it the "zeroth law", remarking that the authors think this more befitting that title than its more customary definition, which apparently was suggested by Fowler.)
Such states are a principal concern in what is known as classical or equilibrium thermodynamics, for they are the only states of the system that are regarded as well defined in that subject. A system in contact equilibrium with another system can by a thermodynamic operation be isolated, and upon the event of isolation, no change occurs in it. A system in a relation of contact equilibrium with another system may thus also be regarded as being in its own state of internal thermodynamic equilibrium.
Multiple contact equilibrium
The thermodynamic formalism allows that a system may have contact with several other systems at once, which may or may not also have mutual contact, the contacts having respectively different permeabilities. If these systems are all jointly isolated from the rest of the world, those of them that are in contact then reach respective contact equilibria with one another.
If several systems are free of adiabatic walls between each other, but are jointly isolated from the rest of the world, then they reach a state of multiple contact equilibrium, and they have a common temperature, a total internal energy, and a total entropy. Amongst intensive variables, this is a unique property of temperature. It holds even in the presence of long-range forces. (That is, there is no "force" that can maintain temperature discrepancies.) For example, in a system in thermodynamic equilibrium in a vertical gravitational field, the pressure on the top wall is less than that on the bottom wall, but the temperature is the same everywhere.
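A brief numerical sketch of this point, for an isothermal ideal-gas column in a uniform gravitational field (the molecular mass, column heights, and base pressure below are assumed, illustrative values): the equilibrium pressure falls with height according to the barometric formula, while the temperature entering it is the same at every height.

import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
m   = 4.8e-26           # molecular mass (roughly N2), kg -- illustrative value
g   = 9.81              # gravitational acceleration, m/s^2
T   = 300.0             # uniform equilibrium temperature, K
p0  = 1.0e5             # pressure at the bottom of the column, Pa

def pressure(z):
    """Equilibrium pressure of an isothermal ideal-gas column at height z (m)."""
    return p0 * math.exp(-m * g * z / (k_B * T))

for z in (0.0, 1000.0, 5000.0):
    # The pressure varies with height; the temperature T does not.
    print(f"z = {z:6.0f} m   p = {pressure(z):9.1f} Pa   T = {T} K")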
A thermodynamic operation may occur as an event restricted to the walls that are within the surroundings, directly affecting neither the walls of contact of the system of interest with its surroundings, nor its interior, and occurring within a definitely limited time. For example, an immovable adiabatic wall may be placed or removed within the surroundings. Consequent upon such an operation restricted to the surroundings, the system may be for a time driven away from its own initial internal state of thermodynamic equilibrium. Then, according to the second law of thermodynamics, the whole undergoes changes and eventually reaches a new and final equilibrium with the surroundings. Following Planck, this consequent train of events is called a natural thermodynamic process. It is allowed in equilibrium thermodynamics just because the initial and final states are of thermodynamic equilibrium, even though during the process there is transient departure from thermodynamic equilibrium, when neither the system nor its surroundings are in well defined states of internal equilibrium. A natural process proceeds at a finite rate for the main part of its course. It is thereby radically different from a fictive quasi-static 'process' that proceeds infinitely slowly throughout its course, and is fictively 'reversible'. Classical thermodynamics allows that even though a process may take a very long time to settle to thermodynamic equilibrium, if the main part of its course is at a finite rate, then it is considered to be natural, and to be subject to the second law of thermodynamics, and thereby irreversible. Engineered machines and artificial devices and manipulations are permitted within the surroundings. The allowance of such operations and devices in the surroundings but not in the system is the reason why Kelvin in one of his statements of the second law of thermodynamics spoke of "inanimate" agency; a system in thermodynamic equilibrium is inanimate.
Otherwise, a thermodynamic operation may directly affect a wall of the system.
It is often convenient to suppose that some of the surrounding subsystems are so much larger than the system that the process can affect the intensive variables only of the surrounding subsystems, and they are then called reservoirs for relevant intensive variables.
Local and global equilibrium
It can be useful to distinguish between global and local thermodynamic equilibrium. In thermodynamics, exchanges within a system and between the system and the outside are controlled by intensive parameters. As an example, temperature controls heat exchanges. Global thermodynamic equilibrium (GTE) means that those intensive parameters are homogeneous throughout the whole system, while local thermodynamic equilibrium (LTE) means that those intensive parameters are varying in space and time, but are varying so slowly that, for any point, one can assume thermodynamic equilibrium in some neighborhood about that point.
If the description of the system requires variations in the intensive parameters that are too large, the very assumptions upon which the definitions of these intensive parameters are based will break down, and the system will be in neither global nor local equilibrium. For example, it takes a certain number of collisions for a particle to equilibrate to its surroundings. If the average distance it has moved during these collisions removes it from the neighborhood it is equilibrating to, it will never equilibrate, and there will be no LTE. Temperature is, by definition, proportional to the average internal energy of an equilibrated neighborhood. Since there is no equilibrated neighborhood, the concept of temperature doesn't hold, and the temperature becomes undefined.
This local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas do not need to be in a thermodynamic equilibrium with each other or with the massive particles of the gas for LTE to exist. In some cases, it is not considered necessary for free electrons to be in equilibrium with the much more massive atoms or molecules for LTE to exist.
As an example, LTE will exist in a glass of water that contains a melting ice cube. The temperature inside the glass can be defined at any point, but it is colder near the ice cube than far away from it. If energies of the molecules located near a given point are observed, they will be distributed according to the Maxwell–Boltzmann distribution for a certain temperature. If the energies of the molecules located near another point are observed, they will be distributed according to the Maxwell–Boltzmann distribution for another temperature.
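A short sketch of this picture (the molecular mass and the two local temperatures below are assumed, illustrative values): the local speed distributions near the ice cube and far from it have the same Maxwell–Boltzmann form but different temperature parameters.

import math

def maxwell_boltzmann_speed_pdf(v, T, m=2.99e-26, k_B=1.380649e-23):
    """Maxwell-Boltzmann probability density for speed v at temperature T (water-like mass m)."""
    a = m / (2.0 * k_B * T)
    return 4.0 * math.pi * (a / math.pi) ** 1.5 * v ** 2 * math.exp(-a * v ** 2)

v = 500.0  # m/s, an arbitrary speed at which to compare the two local distributions
print("near the ice cube (275 K):", maxwell_boltzmann_speed_pdf(v, 275.0))
print("far from the ice  (295 K):", maxwell_boltzmann_speed_pdf(v, 295.0))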
Local thermodynamic equilibrium does not require either local or global stationarity. In other words, each small locality need not have a constant temperature. However, it does require that each small locality change slowly enough to practically sustain its local Maxwell–Boltzmann distribution of molecular velocities. A global non-equilibrium state can be stably stationary only if it is maintained by exchanges between the system and the outside. For example, a globally-stable stationary state could be maintained inside the glass of water by continuously adding finely powdered ice into it to compensate for the melting, and continuously draining off the meltwater. Natural transport phenomena may lead a system from local to global thermodynamic equilibrium. Going back to our example, the diffusion of heat will lead our glass of water toward global thermodynamic equilibrium, a state in which the temperature of the glass is completely homogeneous.
Reservations
Careful and well informed writers about thermodynamics, in their accounts of thermodynamic equilibrium, often enough make provisos or reservations to their statements. Some writers leave such reservations merely implied or more or less unstated.
For example, one widely cited writer, H. B. Callen writes in this context: "In actuality, few systems are in absolute and true equilibrium." He refers to radioactive processes and remarks that they may take "cosmic times to complete, [and] generally can be ignored". He adds "In practice, the criterion for equilibrium is circular. Operationally, a system is in an equilibrium state if its properties are consistently described by thermodynamic theory!"
J.A. Beattie and I. Oppenheim write: "Insistence on a strict interpretation of the definition of equilibrium would rule out the application of thermodynamics to practically all states of real systems."
Another author, cited by Callen as giving a "scholarly and rigorous treatment", and cited by Adkins as having written a "classic text", A.B. Pippard writes in that text: "Given long enough a supercooled vapour will eventually condense, ... . The time involved may be so enormous, however, perhaps 10100 years or more, ... . For most purposes, provided the rapid change is not artificially stimulated, the systems may be regarded as being in equilibrium."
Another author, A. Münster, writes in this context. He observes that thermonuclear processes often occur so slowly that they can be ignored in thermodynamics. He comments: "The concept 'absolute equilibrium' or 'equilibrium with respect to all imaginable processes', has therefore, no physical significance." He therefore states that: "... we can consider an equilibrium only with respect to specified processes and defined experimental conditions."
According to L. Tisza: "... in the discussion of phenomena near absolute zero. The absolute predictions of the classical theory become particularly vague because the occurrence of frozen-in nonequilibrium states is very common."
Definitions
The most general kind of thermodynamic equilibrium of a system is through contact with the surroundings that allows simultaneous passages of all chemical substances and all kinds of energy. A system in thermodynamic equilibrium may move with uniform acceleration through space but must not change its shape or size while doing so; thus it is defined by a rigid volume in space. It may lie within external fields of force, determined by external factors of far greater extent than the system itself, so that events within the system cannot in an appreciable amount affect the external fields of force. The system can be in thermodynamic equilibrium only if the external force fields are uniform, and are determining its uniform acceleration, or if it lies in a non-uniform force field but is held stationary there by local forces, such as mechanical pressures, on its surface.
Thermodynamic equilibrium is a primitive notion of the theory of thermodynamics. According to P.M. Morse: "It should be emphasized that the fact that there are thermodynamic states, ..., and the fact that there are thermodynamic variables which are uniquely specified by the equilibrium state ... are not conclusions deduced logically from some philosophical first principles. They are conclusions ineluctably drawn from more than two centuries of experiments." This means that thermodynamic equilibrium is not to be defined solely in terms of other theoretical concepts of thermodynamics. M. Bailyn proposes a fundamental law of thermodynamics that defines and postulates the existence of states of thermodynamic equilibrium.
Textbook definitions of thermodynamic equilibrium are often stated carefully, with some reservation or other.
For example, A. Münster writes: "An isolated system is in thermodynamic equilibrium when, in the system, no changes of state are occurring at a measurable rate." There are two reservations stated here; the system is isolated; any changes of state are immeasurably slow. He discusses the second proviso by giving an account of a mixture of oxygen and hydrogen at room temperature in the absence of a catalyst. Münster points out that a thermodynamic equilibrium state is described by fewer macroscopic variables than is any other state of a given system. This is partly, but not entirely, because all flows within and through the system are zero.
R. Haase's presentation of thermodynamics does not start with a restriction to thermodynamic equilibrium because he intends to allow for non-equilibrium thermodynamics. He considers an arbitrary system with time invariant properties. He tests it for thermodynamic equilibrium by cutting it off from all external influences, except external force fields. If after insulation, nothing changes, he says that the system was in equilibrium.
In a section headed "Thermodynamic equilibrium", H.B. Callen defines equilibrium states in a paragraph. He points out that they "are determined by intrinsic factors" within the system. They are "terminal states", towards which the systems evolve, over time, which may occur with "glacial slowness". This statement does not explicitly say that for thermodynamic equilibrium, the system must be isolated; Callen does not spell out what he means by the words "intrinsic factors".
Another textbook writer, C.J. Adkins, explicitly allows thermodynamic equilibrium to occur in a system which is not isolated. His system is, however, closed with respect to transfer of matter. He writes: "In general, the approach to thermodynamic equilibrium will involve both thermal and work-like interactions with the surroundings." He distinguishes such thermodynamic equilibrium from thermal equilibrium, in which only thermal contact is mediating transfer of energy.
Another textbook author, J.R. Partington, writes: "(i) An equilibrium state is one which is independent of time." But, referring to systems "which are only apparently in equilibrium", he adds : "Such systems are in states of ″false equilibrium.″" Partington's statement does not explicitly state that the equilibrium refers to an isolated system. Like Münster, Partington also refers to the mixture of oxygen and hydrogen. He adds a proviso that "In a true equilibrium state, the smallest change of any external condition which influences the state will produce a small change of state ..." This proviso means that thermodynamic equilibrium must be stable against small perturbations; this requirement is essential for the strict meaning of thermodynamic equilibrium.
A student textbook by F.H. Crawford has a section headed "Thermodynamic Equilibrium". It distinguishes several drivers of flows, and then says: "These are examples of the apparently universal tendency of isolated systems toward a state of complete mechanical, thermal, chemical, and electrical—or, in a single word, thermodynamic—equilibrium."
A monograph on classical thermodynamics by H.A. Buchdahl considers the "equilibrium of a thermodynamic system", without actually writing the phrase "thermodynamic equilibrium". Referring to systems closed to exchange of matter, Buchdahl writes: "If a system is in a terminal condition which is properly static, it will be said to be in equilibrium." Buchdahl's monograph also discusses amorphous glass, for the purposes of thermodynamic description. It states: "More precisely, the glass may be regarded as being in equilibrium so long as experimental tests show that 'slow' transitions are in effect reversible." It is not customary to make this proviso part of the definition of thermodynamic equilibrium, but the converse is usually assumed: that if a body in thermodynamic equilibrium is subject to a sufficiently slow process, that process may be considered to be sufficiently nearly reversible, and the body remains sufficiently nearly in thermodynamic equilibrium during the process.
A. Münster carefully extends his definition of thermodynamic equilibrium for isolated systems by introducing a concept of contact equilibrium. This specifies particular processes that are allowed when considering thermodynamic equilibrium for non-isolated systems, with special concern for open systems, which may gain or lose matter from or to their surroundings. A contact equilibrium is between the system of interest and a system in the surroundings, brought into contact with the system of interest, the contact being through a special kind of wall; for the rest, the whole joint system is isolated. Walls of this special kind were also considered by C. Carathéodory, and are mentioned by other writers also. They are selectively permeable. They may be permeable only to mechanical work, or only to heat, or only to some particular chemical substance. Each contact equilibrium defines an intensive parameter; for example, a wall permeable only to heat defines an empirical temperature. A contact equilibrium can exist for each chemical constituent of the system of interest. In a contact equilibrium, despite the possible exchange through the selectively permeable wall, the system of interest is changeless, as if it were in isolated thermodynamic equilibrium. This scheme follows the general rule that "... we can consider an equilibrium only with respect to specified processes and defined experimental conditions." Thermodynamic equilibrium for an open system means that, with respect to every relevant kind of selectively permeable wall, contact equilibrium exists when the respective intensive parameters of the system and surroundings are equal. This definition does not consider the most general kind of thermodynamic equilibrium, which is through unselective contacts. This definition does not simply state that no current of matter or energy exists in the interior or at the boundaries; but it is compatible with the following definition, which does so state.
M. Zemansky also distinguishes mechanical, chemical, and thermal equilibrium. He then writes: "When the conditions for all three types of equilibrium are satisfied, the system is said to be in a state of thermodynamic equilibrium".
P.M. Morse writes that thermodynamics is concerned with "states of thermodynamic equilibrium". He also uses the phrase "thermal equilibrium" while discussing transfer of energy as heat between a body and a heat reservoir in its surroundings, though not explicitly defining a special term 'thermal equilibrium'.
J.R. Waldram writes of "a definite thermodynamic state". He defines the term "thermal equilibrium" for a system "when its observables have ceased to change over time". But shortly below that definition he writes of a piece of glass that has not yet reached its "full thermodynamic equilibrium state".
Considering equilibrium states, M. Bailyn writes: "Each intensive variable has its own type of equilibrium." He then defines thermal equilibrium, mechanical equilibrium, and material equilibrium. Accordingly, he writes: "If all the intensive variables become uniform, thermodynamic equilibrium is said to exist." He is not here considering the presence of an external force field.
J.G. Kirkwood and I. Oppenheim define thermodynamic equilibrium as follows: "A system is in a state of thermodynamic equilibrium if, during the time period allotted for experimentation, (a) its intensive properties are independent of time and (b) no current of matter or energy exists in its interior or at its boundaries with the surroundings." It is evident that they are not restricting the definition to isolated or to closed systems. They do not discuss the possibility of changes that occur with "glacial slowness", and proceed beyond the time period allotted for experimentation. They note that for two systems in contact, there exists a small subclass of intensive properties such that if all those of that small subclass are respectively equal, then all respective intensive properties are equal. States of thermodynamic equilibrium may be defined by this subclass, provided some other conditions are satisfied.
Characteristics of a state of internal thermodynamic equilibrium
Homogeneity in the absence of external forces
A thermodynamic system consisting of a single phase in the absence of external forces, in its own internal thermodynamic equilibrium, is homogeneous. This means that the material in any small volume element of the system can be interchanged with the material of any other geometrically congruent volume element of the system, and the effect is to leave the system thermodynamically unchanged. In general, a strong external force field makes a system of a single phase in its own internal thermodynamic equilibrium inhomogeneous with respect to some intensive variables. For example, a relatively dense component of a mixture can be concentrated by centrifugation.
Uniform temperature
Such equilibrium inhomogeneity, induced by external forces, does not occur for the intensive variable temperature. According to E.A. Guggenheim, "The most important conception of thermodynamics is temperature." Planck introduces his treatise with a brief account of heat and temperature and thermal equilibrium, and then announces: "In the following we shall deal chiefly with homogeneous, isotropic bodies of any form, possessing throughout their substance the same temperature and density, and subject to a uniform pressure acting everywhere perpendicular to the surface." As did Carathéodory, Planck was setting aside surface effects and external fields and anisotropic crystals. Though referring to temperature, Planck did not there explicitly refer to the concept of thermodynamic equilibrium. In contrast, Carathéodory's scheme of presentation of classical thermodynamics for closed systems postulates the concept of an "equilibrium state" following Gibbs (Gibbs speaks routinely of a "thermodynamic state"), though not explicitly using the phrase 'thermodynamic equilibrium', nor explicitly postulating the existence of a temperature to define it.
Although the laws of thermodynamics are immutable, systems can be devised that delay the approach to thermodynamic equilibrium. In an analysis of such a device, Reed A. Howald considered the "Fizz Keeper", a cap with a pump nozzle that re-pressurizes a standard bottle of carbonated beverage. Pumping in air, which consists mostly of nitrogen and oxygen, raises the pressure in the headspace and can slow the rate at which carbon dioxide escapes from the liquid, but it leaves the equilibrium between dissolved and gaseous carbon dioxide unchanged. The analysis appeals to Henry's law, which states that a gas dissolves in direct proportion to its own partial pressure. The equilibrium constant for carbon dioxide is completely independent of the nitrogen and oxygen pumped into the system; the added gases can at most slow the kinetics of escape, extending the time the beverage stays carbonated, without altering the thermodynamic equilibrium of the system as a whole.
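A minimal sketch of the Henry's-law point (the Henry constant and pressures below are illustrative numbers, not measured values): the equilibrium concentration of dissolved carbon dioxide depends only on the partial pressure of carbon dioxide itself, so pumping in nitrogen and oxygen raises the total pressure without changing that equilibrium concentration.

def dissolved_concentration(k_H, p_partial):
    """Henry's law: equilibrium dissolved concentration is k_H times the gas's own partial pressure."""
    return k_H * p_partial

k_H_co2 = 3.3e-4      # mol/(L*kPa), illustrative Henry constant for CO2 in water
p_co2   = 300.0       # kPa, partial pressure of CO2 in the headspace
p_air   = 200.0       # kPa of pumped-in N2/O2; raises total pressure only

c_without_air = dissolved_concentration(k_H_co2, p_co2)
c_with_air    = dissolved_concentration(k_H_co2, p_co2)  # p_co2 unchanged by the added air
print(c_without_air == c_with_air)   # True: the CO2 equilibrium is unaffected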
The temperature within a system in thermodynamic equilibrium is uniform in space as well as in time. In a system in its own state of internal thermodynamic equilibrium, there are no net internal macroscopic flows. In particular, this means that all local parts of the system are in mutual radiative exchange equilibrium. This means that the temperature of the system is spatially uniform. This is so in all cases, including those of non-uniform external force fields. For an externally imposed gravitational field, this may be proved in macroscopic thermodynamic terms, by the calculus of variations, using the method of Lagrangian multipliers. Considerations of kinetic theory or statistical mechanics also support this statement.
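A sketch of the variational argument (the element labels i, the multiplier λ, and the fixed total energy E are notation introduced only for this sketch): maximizing the total entropy of a column of material in a gravitational field at fixed total energy gives

\delta\left[\sum_i S_i(U_i) - \lambda\left(\sum_i \left(U_i + m_i g z_i\right) - E\right)\right] = 0 \quad\Longrightarrow\quad \frac{\partial S_i}{\partial U_i} = \lambda \quad \text{for every element } i,

and since ∂S_i/∂U_i = 1/T_i, every element has the same temperature T = 1/λ regardless of its height, while the pressure still varies with height.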
In order that a system may be in its own internal state of thermodynamic equilibrium, it is of course necessary, but not sufficient, that it be in its own internal state of thermal equilibrium; it is possible for a system to reach internal mechanical equilibrium before it reaches internal thermal equilibrium.
Number of real variables needed for specification
In his exposition of his scheme of closed system equilibrium thermodynamics, C. Carathéodory initially postulates that experiment reveals that a definite number of real variables define the states that are the points of the manifold of equilibria. In the words of Prigogine and Defay (1945): "It is a matter of experience that when we have specified a certain number of macroscopic properties of a system, then all the other properties are fixed." As noted above, according to A. Münster, the number of variables needed to define a thermodynamic equilibrium is the least for any state of a given isolated system. As noted above, J.G. Kirkwood and I. Oppenheim point out that a state of thermodynamic equilibrium may be defined by a special subclass of intensive variables, with a definite number of members in that subclass.
If the thermodynamic equilibrium lies in an external force field, it is only the temperature that can in general be expected to be spatially uniform. Intensive variables other than temperature will in general be non-uniform if the external force field is non-zero. In such a case, in general, additional variables are needed to describe the spatial non-uniformity.
Stability against small perturbations
As noted above, J.R. Partington points out that a state of thermodynamic equilibrium is stable against small transient perturbations. Without this condition, in general, experiments intended to study systems in thermodynamic equilibrium are in severe difficulties.
Approach to thermodynamic equilibrium within an isolated system
When a body of material starts from a non-equilibrium state of inhomogeneity or chemical non-equilibrium, and is then isolated, it spontaneously evolves towards its own internal state of thermodynamic equilibrium. It is not necessary that all aspects of internal thermodynamic equilibrium be reached simultaneously; some can be established before others. For example, in many cases of such evolution, internal mechanical equilibrium is established much more rapidly than the other aspects of the eventual thermodynamic equilibrium. Another example is that, in many cases of such evolution, thermal equilibrium is reached much more rapidly than chemical equilibrium.
Fluctuations within an isolated system in its own internal thermodynamic equilibrium
In an isolated system, thermodynamic equilibrium by definition persists over an indefinitely long time. In classical physics it is often convenient to ignore the effects of measurement and this is assumed in the present account.
To consider the notion of fluctuations in an isolated thermodynamic system, a convenient example is a system specified by its extensive state variables, internal energy, volume, and mass composition. By definition they are time-invariant. By definition, they combine with time-invariant nominal values of their conjugate intensive functions of state, inverse temperature, pressure divided by temperature, and the chemical potentials divided by temperature, so as to exactly obey the laws of thermodynamics. But the laws of thermodynamics, combined with the values of the specifying extensive variables of state, are not sufficient to provide knowledge of those nominal values. Further information is needed, namely, of the constitutive properties of the system.
It may be admitted that on repeated measurement of those conjugate intensive functions of state, they are found to have slightly different values from time to time. Such variability is regarded as due to internal fluctuations. The different measured values average to their nominal values.
If the system is truly macroscopic as postulated by classical thermodynamics, then the fluctuations are too small to detect macroscopically. This is called the thermodynamic limit. In effect, the molecular nature of matter and the quantal nature of momentum transfer have vanished from sight, too small to see. According to Buchdahl: "... there is no place within the strictly phenomenological theory for the idea of fluctuations about equilibrium (see, however, Section 76)."
If the system is repeatedly subdivided, eventually a system is produced that is small enough to exhibit obvious fluctuations. This is a mesoscopic level of investigation. The fluctuations are then directly dependent on the natures of the various walls of the system. The precise choice of independent state variables is then important. At this stage, statistical features of the laws of thermodynamics become apparent.
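A rough numerical illustration of this progression (the subsystem sizes, the exponential energy model, and the sample count are assumptions made only for this sketch): the relative fluctuation of an extensive quantity built from N independent molecular contributions falls off as 1/√N, so it is invisible for macroscopic N and becomes obvious at mesoscopic N.

import numpy as np

def relative_fluctuation(n_particles, samples=100, seed=0):
    """Relative fluctuation (std/mean) of a sum of n_particles independent unit-mean energies."""
    rng = np.random.default_rng(seed)
    totals = rng.exponential(1.0, size=(samples, n_particles)).sum(axis=1)
    return totals.std() / totals.mean()

for n in (10, 1_000, 100_000):
    # Observed scaling ~ 1/sqrt(n): fluctuations vanish in the thermodynamic limit.
    print(n, round(relative_fluctuation(n), 4), "~", round(n ** -0.5, 4))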
If the mesoscopic system is further repeatedly divided, eventually a microscopic system is produced. Then the molecular character of matter and the quantal nature of momentum transfer become important in the processes of fluctuation. One has left the realm of classical or macroscopic thermodynamics, and one needs quantum statistical mechanics. The fluctuations can become relatively dominant, and questions of measurement become important.
The statement that 'the system is in its own internal thermodynamic equilibrium' may be taken to mean that 'indefinitely many such measurements have been taken from time to time, with no trend in time in the various measured values'. Thus the statement, that 'a system is in its own internal thermodynamic equilibrium, with stated nominal values of its functions of state conjugate to its specifying state variables', is far more informative than a statement that 'a set of single simultaneous measurements of those functions of state have those same values'. This is because the single measurements might have been made during a slight fluctuation, away from another set of nominal values of those conjugate intensive functions of state, that is due to unknown and different constitutive properties. A single measurement cannot tell whether that might be so, unless there is also knowledge of the nominal values that belong to the equilibrium state.
Thermal equilibrium
An explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by B. C. Eu. He considers two systems in thermal contact, one a thermometer, the other a system in which there are several occurring irreversible processes, entailing non-zero fluxes; the two systems are separated by a wall permeable only to heat. He considers the case in which, over the time scale of interest, it happens that both the thermometer reading and the irreversible processes are steady. Then there is thermal equilibrium without thermodynamic equilibrium. Eu proposes consequently that the zeroth law of thermodynamics can be considered to apply even when thermodynamic equilibrium is not present; also he proposes that if changes are occurring so fast that a steady temperature cannot be defined, then "it is no longer possible to describe the process by means of a thermodynamic formalism. In other words, thermodynamics has no meaning for such a process." This illustrates the importance for thermodynamics of the concept of temperature.
Thermal equilibrium is achieved when two systems in thermal contact with each other cease to have a net exchange of energy. It follows that if two systems are in thermal equilibrium, then their temperatures are the same.
Thermal equilibrium occurs when a system's macroscopic thermal observables have ceased to change with time. For example, an ideal gas whose distribution function has stabilised to a specific Maxwell–Boltzmann distribution would be in thermal equilibrium. This outcome allows a single temperature and pressure to be attributed to the whole system. For an isolated body, it is quite possible for mechanical equilibrium to be reached before thermal equilibrium is reached, but eventually, all aspects of equilibrium, including thermal equilibrium, are necessary for thermodynamic equilibrium.
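A minimal sketch of the approach to a common temperature for two bodies exchanging energy only as heat (the heat capacities, initial temperatures, conductance, and time step are assumed values): the net exchange dies away as the temperatures converge on the capacity-weighted mean.

C1, C2 = 500.0, 1500.0        # heat capacities, J/K
T1, T2 = 350.0, 290.0         # initial temperatures, K
k, dt  = 2.0, 0.1             # thermal conductance (W/K) and time step (s)

for _ in range(20000):
    q = k * (T1 - T2) * dt    # heat flowing from body 1 to body 2 in one step
    T1 -= q / C1
    T2 += q / C2

T_expected = (C1 * 350.0 + C2 * 290.0) / (C1 + C2)
print(T1, T2, "expected common temperature:", T_expected)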
Non-equilibrium
A system's internal state of thermodynamic equilibrium should be distinguished from a "stationary state" in which thermodynamic parameters are unchanging in time but the system is not isolated, so that there are, into and out of the system, non-zero macroscopic fluxes which are constant in time.
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium because they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods.
Laws governing systems far from equilibrium are still debated. One of the guiding principles proposed for such systems is the maximum entropy production principle. It states that a non-equilibrium system evolves so as to maximize its entropy production.
See also
Thermodynamic models
Non-random two-liquid model (NRTL model) - Phase equilibrium calculations
UNIQUAC model - Phase equilibrium calculations
Time crystal
References
Cited bibliography
Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, third edition, McGraw-Hill, London, .
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, .
Beattie, J.A., Oppenheim, I. (1979). Principles of Thermodynamics, Elsevier Scientific Publishing, Amsterdam, .
Boltzmann, L. (1896/1964). Lectures on Gas Theory, translated by S.G. Brush, University of California Press, Berkeley.
Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, Cambridge UK.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, .
Carathéodory, C. (1909). Untersuchungen über die Grundlagen der Thermodynamik, Mathematische Annalen, 67: 355–386. A translation may be found here. Also a mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
Chapman, S., Cowling, T.G. (1939/1970). The Mathematical Theory of Non-uniform gases. An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, third edition 1970, Cambridge University Press, London.
Crawford, F.H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
de Groot, S.R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc., New York, .
Denbigh, K.G. (1951). Thermodynamics of the Steady State, Methuen, London.
Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, .
Fitts, D.D. (1962). Nonequilibrium thermodynamics. A Phenomenological Theory of Irreversible Processes in Fluid Systems, McGraw-Hill, New York.
Gibbs, J.W. (1876/1878). On the equilibrium of heterogeneous substances, Trans. Conn. Acad., 3: 108–248, 343–524, reprinted in The Collected Works of J. Willard Gibbs, PhD, LL. D., edited by W.R. Longley, R.G. Van Name, Longmans, Green & Co., New York, 1928, volume 1, pp. 55–353.
Griem, H.R. (2005). Principles of Plasma Spectroscopy (Cambridge Monographs on Plasma Physics), Cambridge University Press, New York .
Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, fifth revised edition, North-Holland, Amsterdam.
Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081.
Kirkwood, J.G., Oppenheim, I. (1961). Chemical Thermodynamics, McGraw-Hill Book Company, New York.
Landsberg, P.T. (1961). Thermodynamics with Quantum Statistical Illustrations, Interscience, New York.
Levine, I.N. (1983), Physical Chemistry, second edition, McGraw-Hill, New York, .
Morse, P.M. (1969). Thermal Physics, second edition, W.A. Benjamin, Inc, New York.
Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London.
Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London.
Pippard, A.B. (1957/1966). The Elements of Classical Thermodynamics, reprinted with corrections 1966, Cambridge University Press, London.
Planck. M. (1914). The Theory of Heat Radiation, a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia.
Prigogine, I. (1947). Étude Thermodynamique des Phénomènes irréversibles, Dunod, Paris, and Desoers, Liège.
Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co, London.
Silbey, R.J., Alberty, R.A., Bawendi, M.G. (1955/2005). Physical Chemistry, fourth edition, Wiley, Hoboken NJ.
ter Haar, D., Wergeland, H. (1966). Elements of Thermodynamics, Addison-Wesley Publishing, Reading MA.
Tisza, L. (1966). Generalized Thermodynamics, M.I.T Press, Cambridge MA.
Uhlenbeck, G.E., Ford, G.W. (1963). Lectures in Statistical Mechanics, American Mathematical Society, Providence RI.
Waldram, J.R. (1985). The Theory of Thermodynamics, Cambridge University Press, Cambridge UK, .
Zemansky, M. (1937/1968). Heat and Thermodynamics. An Intermediate Textbook, fifth edition 1967, McGraw–Hill Book Company, New York.
External links
Breakdown of Local Thermodynamic Equilibrium George W. Collins, The Fundamentals of Stellar Astrophysics, Chapter 15
Local Thermodynamic Equilibrium
Non-Local Thermodynamic Equilibrium in Cloudy Planetary Atmospheres Paper by R. E. Samuelson quantifying the effects due to non-LTE in an atmosphere
Thermodynamic Equilibrium, Local and otherwise lecture by Michael Richmond
Equilibrium chemistry
Thermodynamic cycles
Thermodynamic processes
Thermodynamic systems
Thermodynamics | 0.776759 | 0.995984 | 0.773639 |
Symmetry in quantum mechanics | Symmetries in quantum mechanics describe features of spacetime and particles which are unchanged under some transformation, in the context of quantum mechanics, relativistic quantum mechanics and quantum field theory, and with applications in the mathematical formulation of the standard model and condensed matter physics. In general, symmetry in physics, invariance, and conservation laws, are fundamentally important constraints for formulating physical theories and models. In practice, they are powerful methods for solving problems and predicting what can happen. While conservation laws do not always give the answer to the problem directly, they form the correct constraints and the first steps to solving a multitude of problems. In application, understanding symmetries can also provide insights on the eigenstates that can be expected. For example, the existence of degenerate states can be inferred from the presence of non-commuting symmetry operators, and non-degenerate states are also eigenvectors of symmetry operators.
This article outlines the connection between the classical form of continuous symmetries as well as their quantum operators, and relates them to the Lie groups, and relativistic transformations in the Lorentz group and Poincaré group.
Notation
The notational conventions used in this article are as follows. Boldface indicates vectors, four vectors, matrices, and vectorial operators, while quantum states use bra–ket notation. Wide hats are for operators, narrow hats are for unit vectors (including their components in tensor index notation). The summation convention on the repeated tensor indices is used, unless stated otherwise. The Minkowski metric signature is (+−−−).
Symmetry transformations on the wavefunction in non-relativistic quantum mechanics
Continuous symmetries
Generally, the correspondence between continuous symmetries and conservation laws is given by Noether's theorem.
The form of the fundamental quantum operators, for example energy as a partial time derivative and momentum as a spatial gradient, becomes clear when one considers the initial state, then changes one parameter of it slightly. This can be done for displacements (lengths), durations (time), and angles (rotations). Additionally, the invariance of certain quantities can be seen by making such changes in lengths and angles, illustrating conservation of these quantities.
In what follows, transformations on only one-particle wavefunctions in the form:
are considered, where denotes a unitary operator. Unitarity is generally required for operators representing transformations of space, time, and spin, since the norm of a state (representing the total probability of finding the particle somewhere with some spin) must be invariant under these transformations. The inverse is the Hermitian conjugate . The results can be extended to many-particle wavefunctions. Written in Dirac notation as standard, the transformations on quantum state vectors are:
Now, the action of changes to , so the inverse changes back to , so an operator invariant under satisfies:
and thus:
for any state ψ. Quantum operators representing observables are also required to be Hermitian so that their eigenvalues are real numbers, i.e. the operator equals its Hermitian conjugate, .
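A small numerical sketch of this invariance condition (the matrices below are arbitrary examples, not tied to any particular physical system): a Hermitian observable that commutes with the Hermitian generator of a unitary transformation is left unchanged by that transformation.

import numpy as np
from scipy.linalg import expm

# A Hermitian observable with a degenerate eigenvalue, and a Hermitian generator
# that acts only within the degenerate subspace, so that [A, G] = 0.
A = np.diag([1.0, 2.0, 2.0])
G = np.array([[0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
U = expm(-1j * 0.7 * G)               # unitary transformation U = exp(-i*theta*G)

A_transformed = U.conj().T @ A @ U    # U^dagger A U
print(np.allclose(A_transformed, A))  # True: A is invariant under this transformation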
Overview of Lie group theory
Following are the key points of group theory relevant to quantum theory; examples are given throughout the article. For an alternative approach using matrix groups, see the books of Hall.
Let be a Lie group, which is a group that locally is parameterized by a finite number of real continuously varying parameters . In more mathematical language, this means that is a smooth manifold that is also a group, for which the group operations are smooth.
the dimension of the group, , is the number of parameters it has.
the group elements, , in are functions of the parameters: and all parameters set to zero returns the identity element of the group: Group elements are often matrices which act on vectors, or transformations acting on functions.
The generators of the group are the partial derivatives of the group elements with respect to the group parameters with the result evaluated when the parameter is set to zero: In the language of manifolds, the generators are the elements of the tangent space to G at the identity. The generators are also known as infinitesimal group elements or as the elements of the Lie algebra of G. (See the discussion below of the commutator.) One aspect of generators in theoretical physics is they can be constructed themselves as operators corresponding to symmetries, which may be written as matrices, or as differential operators. In quantum theory, for unitary representations of the group, the generators require a factor of : The generators of the group form a vector space, which means linear combinations of generators also form a generator.
The generators (whether matrices or differential operators) satisfy the commutation relations [X_a, X_b] = f_{ab}{}^{c} X_c, where the coefficients f_{ab}{}^{c} are the (basis dependent) structure constants of the group. Together with the vector space property, this makes the set of all generators of a group a Lie algebra. Due to the antisymmetry of the bracket, the structure constants of the group are antisymmetric in the first two indices. (A numerical check of these relations for the Lie algebra su(2) is sketched at the end of this overview.)
The representations of the group then describe the ways that the group (or its Lie algebra) can act on a vector space. (The vector space might be, for example, the space of eigenvectors for a Hamiltonian having as its symmetry group.) We denote the representations using a capital . One can then differentiate to obtain a representation of the Lie algebra, often also denoted by . These two representations are related as follows: without summation on the repeated index . Representations are linear operators that take in group elements and preserve the composition rule:
A representation which cannot be decomposed into a direct sum of other representations, is called irreducible. It is conventional to label irreducible representations by a superscripted number in brackets, as in , or if there is more than one number, we write .
There is an additional subtlety that arises in quantum theory, where two vectors that differ by multiplication by a scalar represent the same physical state. Here, the pertinent notion of representation is a projective representation, one that only satisfies the composition law up to a scalar. In the context of quantum mechanical spin, such representations are called spinorial.
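As a concrete numerical check of the commutation relations and structure constants described above (Pauli-matrix conventions with ħ = 1 are assumed), the generators J_k = σ_k/2 of the spin-1/2 representation of SU(2) satisfy [J_a, J_b] = i ε_abc J_c, with the Levi-Civita symbol playing the role of the structure constants:

import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
J = [s / 2.0 for s in sigma]                     # generators of the spin-1/2 representation

eps = np.zeros((3, 3, 3))                        # Levi-Civita symbol = structure constants of su(2)
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0

for a in range(3):
    for b in range(3):
        commutator = J[a] @ J[b] - J[b] @ J[a]
        expected = 1j * sum(eps[a, b, c] * J[c] for c in range(3))
        assert np.allclose(commutator, expected)
print("[J_a, J_b] = i * eps_abc * J_c verified for all a, b")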
Momentum and energy as generators of translation and time evolution, and rotation
The space translation operator acts on a wavefunction to shift the space coordinates by an infinitesimal displacement . The explicit expression can be quickly determined by a Taylor expansion of about , then (keeping the first order term and neglecting second and higher order terms), replace the space derivatives by the momentum operator . Similarly for the time translation operator acting on the time parameter, the Taylor expansion of is about , and the time derivative replaced by the energy operator .
The exponential functions arise by definition as those limits, due to Euler, and can be understood physically and mathematically as follows. A net translation can be composed of many small translations, so to obtain the translation operator for a finite increment, replace by and by , where is a positive non-zero integer. Then as increases, the magnitude of and become even smaller, while leaving the directions unchanged. Acting the infinitesimal operators on the wavefunction times and taking the limit as tends to infinity gives the finite operators.
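A quick numerical sketch of this limiting construction (the 2×2 generator below is an arbitrary stand-in, not a physical momentum operator): applying the infinitesimal step N times approaches the exponential of the generator as N grows.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # stand-in generator of a one-parameter group
exact = expm(A)

for N in (10, 100, 10000):
    approx = np.linalg.matrix_power(np.eye(2) + A / N, N)   # N small steps of size 1/N
    print(N, np.max(np.abs(approx - exact)))                # error shrinks as N grows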
Space and time translations commute, which means the operators and generators commute.
For a time-independent Hamiltonian, energy is conserved in time and quantum states are stationary states: the eigenstates of the Hamiltonian are the energy eigenvalues :
and all stationary states have the form
where is the initial time, usually set to zero, since there is no loss of generality in that choice.
An alternative notation is .
Angular momentum as the generator of rotations
Orbital angular momentum
The rotation operator acts on a wavefunction to rotate the spatial coordinates of a particle by a constant angle :
where are the rotated coordinates about an axis defined by a unit vector through an angular increment , given by:
where is a rotation matrix dependent on the axis and angle. In group theoretic language, the rotation matrices are group elements, and the angles and axis are the parameters, of the three-dimensional special orthogonal group, SO(3). The rotation matrices about the standard Cartesian basis vector through angle , and the corresponding generators of rotations , are:
More generally for rotations about an axis defined by , the rotation matrix elements are:
where is the Kronecker delta, and is the Levi-Civita symbol.
It is not as obvious how to determine the rotational operator compared to space and time translations. We may consider a special case (rotations about the , , or -axis) then infer the general result, or use the general rotation matrix directly and tensor index notation with and . To derive the infinitesimal rotation operator, which corresponds to small , we use the small angle approximations and , then Taylor expand about or , keep the first order term, and substitute the angular momentum operator components.
The -component of angular momentum can be replaced by the component along the axis defined by , using the dot product .
Again, a finite rotation can be made from many small rotations, replacing by and taking the limit as tends to infinity gives the rotation operator for a finite rotation.
Rotations about the same axis do commute, for example a rotation through angles and about axis can be written
However, rotations about different axes do not commute. The general commutation rules are summarized by
In this sense, orbital angular momentum has the common sense properties of rotations. Each of the above commutators can be easily demonstrated by holding an everyday object and rotating it through the same angle about any two different axes in both possible orderings; the final configurations are different.
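A numerical sketch of both statements (the rotation angles are arbitrary): finite rotation matrices are exponentials of the antisymmetric generators, rotations about a single axis commute, and rotations about different axes do not.

import numpy as np
from scipy.linalg import expm

def generator(axis):
    """3x3 generator G_k of rotations about Cartesian axis k, with (G_k)_ij = -eps_kij."""
    G = np.zeros((3, 3))
    i, j = (axis + 1) % 3, (axis + 2) % 3
    G[i, j], G[j, i] = -1.0, 1.0
    return G

Gx, Gy, Gz = (generator(k) for k in range(3))

def R(axis_gen, theta):
    return expm(theta * axis_gen)       # finite rotation from the generator

# Rotations about the same axis commute ...
print(np.allclose(R(Gz, 0.3) @ R(Gz, 0.5), R(Gz, 0.5) @ R(Gz, 0.3)))   # True
# ... rotations about different axes do not.
print(np.allclose(R(Gx, 0.3) @ R(Gy, 0.5), R(Gy, 0.5) @ R(Gx, 0.3)))   # False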
In quantum mechanics, there is another form of rotation which mathematically appears similar to the orbital case, but has different properties, described next.
Spin angular momentum
All previous quantities have classical definitions. Spin is a quantity possessed by particles in quantum mechanics without any classical analogue, having the units of angular momentum. The spin vector operator is denoted . The eigenvalues of its components are the possible outcomes (in units of ) of a measurement of the spin projected onto one of the basis directions.
Rotations (of ordinary space) about an axis through angle about the unit vector in space acting on a multicomponent wave function (spinor) at a point in space is represented by:
However, unlike orbital angular momentum in which the z-projection quantum number can only take positive or negative integer values (including zero), the z-projection spin quantum number s can take all positive and negative half-integer values. There are rotational matrices for each spin quantum number.
Evaluating the exponential for a given z-projection spin quantum number s gives a (2s + 1)-dimensional spin matrix. This can be used to define a spinor as a column vector of 2s + 1 components which transforms to a rotated coordinate system according to the spin matrix at a fixed point in space.
For the simplest non-trivial case of s = 1/2, the spin operator is given by
where the Pauli matrices in the standard representation are:
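A small numerical sketch of these matrices and of spin-1/2 rotations (conventions with ħ = 1 and the rotation operator exp(−iθ n·σ/2) are assumed): a rotation by 2π multiplies a spinor by −1, and only a rotation by 4π restores it exactly.

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_half_rotation(n, theta):
    """Rotation operator exp(-i * theta * (n . sigma) / 2) for a unit axis n."""
    n = np.asarray(n, dtype=float) / np.linalg.norm(n)
    return expm(-1j * theta * (n[0] * sx + n[1] * sy + n[2] * sz) / 2.0)

print(np.allclose(spin_half_rotation([0, 0, 1], 2 * np.pi), -np.eye(2)))  # 2*pi rotation -> -1
print(np.allclose(spin_half_rotation([0, 0, 1], 4 * np.pi),  np.eye(2)))  # 4*pi rotation -> +1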
Total angular momentum
The total angular momentum operator is the sum of the orbital and spin
and is an important quantity for multi-particle systems, especially in nuclear physics and the quantum chemistry of multi-electron atoms and molecules.
We have a similar rotation matrix:
Conserved quantities in the quantum harmonic oscillator
The dynamical symmetry group of the n dimensional quantum harmonic oscillator is the special unitary group SU(n). As an example, the number of infinitesimal generators of the corresponding Lie algebras of SU(2) and SU(3) are three and eight respectively. This leads to exactly three and eight independent conserved quantities (other than the Hamiltonian) in these systems.
The two dimensional quantum harmonic oscillator has the expected conserved quantities of the Hamiltonian and the angular momentum, but has additional hidden conserved quantities of energy level difference and another form of angular momentum.
Lorentz group in relativistic quantum mechanics
Following is an overview of the Lorentz group; a treatment of boosts and rotations in spacetime. Throughout this section, see (for example) T. Ohlsson (2011) and E. Abers (2004).
Lorentz transformations can be parametrized by rapidity for a boost in the direction of a three-dimensional unit vector , and a rotation angle about a three-dimensional unit vector defining an axis, so and are together six parameters of the Lorentz group (three for rotations and three for boosts). The Lorentz group is 6-dimensional.
Pure rotations in spacetime
The rotation matrices and rotation generators considered above form the spacelike part of a four-dimensional matrix, representing pure-rotation Lorentz transformations. Three of the Lorentz group elements and generators for pure rotations are:
The rotation matrices act on any four vector and rotate the space-like components according to
leaving the time-like coordinate unchanged. In matrix expressions, is treated as a column vector.
Pure boosts in spacetime
A boost with velocity in the x, y, or z directions given by the standard Cartesian basis vector , are the boost transformation matrices. These matrices and the corresponding generators are the remaining three group elements and generators of the Lorentz group:
The boost matrices act on any four vector A = (A0, A1, A2, A3) and mix the time-like and the space-like components, according to:
The term "boost" refers to the relative velocity between two frames, and is not to be conflated with momentum as the generator of translations, as explained below.
Combining boosts and rotations
Products of rotations give another rotation (a frequent exemplification of a subgroup), while products of two boosts, or of a rotation and a boost, cannot in general be expressed as pure boosts or pure rotations. In general, any Lorentz transformation can be expressed as a product of a pure rotation and a pure boost. For more background see (for example) B.R. Durney (2011) and H.L. Berk et al. and references therein.
The boost and rotation generators have representations denoted and respectively, the capital in this context indicates a group representation.
For the Lorentz group, the representations and of the generators and fulfill the following commutation rules.
In all of these commutators, the boost generators mix with those for rotations, although rotations alone simply give another rotation. Exponentiating the generators gives the boost and rotation operators which combine into the general Lorentz transformation, under which the spacetime coordinates transform from one rest frame to another boosted and/or rotating frame. Likewise, exponentiating the representations of the generators gives the representations of the boost and rotation operators, under which a particle's spinor field transforms.
In the literature, the boost generators and rotation generators are sometimes combined into one generator for Lorentz transformations , an antisymmetric four-dimensional matrix with entries:
and correspondingly, the boost and rotation parameters are collected into another antisymmetric four-dimensional matrix , with entries:
The general Lorentz transformation is then:
with summation over repeated matrix indices α and β. The Λ matrices act on any four vector A = (A0, A1, A2, A3) and mix the time-like and the space-like components, according to:
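In one common convention (signs and factors of i vary between authors), these relations read
\[
\Lambda = \exp\!\left(-\tfrac{1}{2}\,\omega_{\alpha\beta}\,M^{\alpha\beta}\right), \qquad
A'^{\mu} = \Lambda^{\mu}{}_{\nu}\,A^{\nu}.
\]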
Transformations of spinor wavefunctions in relativistic quantum mechanics
In relativistic quantum mechanics, wavefunctions are no longer single-component scalar fields, but now 2(2s + 1) component spinor fields, where s is the spin of the particle. The transformations of these functions in spacetime are given below.
Under a proper orthochronous Lorentz transformation in Minkowski space, all one-particle quantum states locally transform under some representation of the Lorentz group:
where is a finite-dimensional representation, in other words a dimensional square matrix, and is thought of as a column vector containing components with the allowed values of :
Real irreducible representations and spin
The irreducible representations of and , in short "irreps", can be used to build to spin representations of the Lorentz group. Defining new operators:
so that the two sets of operators are simply complex conjugates of each other, and it follows that they satisfy the symmetrically formed commutators:
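In one common convention, these operators and their algebra are
\[
\mathbf{A} = \tfrac{1}{2}\left(\mathbf{J} + i\mathbf{K}\right), \qquad
\mathbf{B} = \tfrac{1}{2}\left(\mathbf{J} - i\mathbf{K}\right),
\]
\[
[A_i, A_j] = i\,\varepsilon_{ijk}A_k, \qquad
[B_i, B_j] = i\,\varepsilon_{ijk}B_k, \qquad
[A_i, B_j] = 0 ,
\]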
and these are essentially the commutators that the orbital and spin angular momentum operators satisfy. Therefore, A and B form operator algebras analogous to angular momentum; the same ladder operators, z-projections, etc., apply independently of each other, as their components mutually commute. By analogy with the spin quantum number, we can introduce positive integers or half-integers, a and b, with corresponding sets of z-projection values running from −a to a and from −b to b. The matrices satisfying the above commutation relations are the same as for spins a and b; they have components given by multiplying Kronecker delta values with angular momentum matrix elements:
where in each case the row number m′n′ and column number mn are separated by a comma, and in turn:
and similarly for J(n). The three J(m) matrices are each square matrices, and the three J(n) are each square matrices. The integers or half-integers m and n numerate all the irreducible representations by, in equivalent notations used by authors: , which are each square matrices.
Applying this to particles with spin ;
left-handed (2s + 1)-component spinors transform under the real irreps (s, 0),
right-handed (2s + 1)-component spinors transform under the real irreps (0, s),
taking direct sums symbolized by ⊕ (see direct sum of matrices for the simpler matrix concept), one obtains the representations under which 2(2s + 1)-component spinors transform: (s, 0) ⊕ (0, s). These are also real irreps, but as shown above, they split into complex conjugates.
In these cases D refers to any of (s, 0), (0, s), or the direct sum (s, 0) ⊕ (0, s) for a full Lorentz transformation.
Relativistic wave equations
In the context of the Dirac equation and Weyl equation, the Weyl spinors satisfying the Weyl equation transform under the simplest irreducible spin representations of the Lorentz group, since the spin quantum number in this case is the smallest non-zero number allowed: 1/2. The 2-component left-handed Weyl spinor transforms under (1/2, 0) and the 2-component right-handed Weyl spinor transforms under (0, 1/2). Dirac spinors satisfying the Dirac equation transform under the representation (1/2, 0) ⊕ (0, 1/2), the direct sum of the irreps for the Weyl spinors.
The Poincaré group in relativistic quantum mechanics and field theory
Space translations, time translations, rotations, and boosts, all taken together, constitute the Poincaré group. The group elements are the three rotation matrices and three boost matrices (as in the Lorentz group), and one for time translations and three for space translations in spacetime. There is a generator for each. Therefore, the Poincaré group is 10-dimensional.
In special relativity, space and time can be collected into a four-position vector , and in parallel so can energy and momentum which combine into a four-momentum vector . With relativistic quantum mechanics in mind, the time duration and spatial displacement parameters (four in total, one for time and three for space) combine into a spacetime displacement , and the energy and momentum operators are inserted in the four-momentum to obtain a four-momentum operator,
which are the generators of spacetime translations (four in total, one time and three space):
There are commutation relations between the components four-momentum P (generators of spacetime translations), and angular momentum M (generators of Lorentz transformations), that define the Poincaré algebra:
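In one frequently used convention (overall signs and factors of i depend on the metric signature chosen), these read
\[
[P_{\mu}, P_{\nu}] = 0, \qquad
[M_{\mu\nu}, P_{\rho}] = i\left(\eta_{\mu\rho}P_{\nu} - \eta_{\nu\rho}P_{\mu}\right),
\]
\[
[M_{\mu\nu}, M_{\rho\sigma}] = i\left(\eta_{\mu\rho}M_{\nu\sigma} - \eta_{\mu\sigma}M_{\nu\rho} - \eta_{\nu\rho}M_{\mu\sigma} + \eta_{\nu\sigma}M_{\mu\rho}\right),
\]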
where η is the Minkowski metric tensor. (It is common to drop any hats for the four-momentum operators in the commutation relations). These equations are an expression of the fundamental properties of space and time as far as they are known today. They have a classical counterpart where the commutators are replaced by Poisson brackets.
To describe spin in relativistic quantum mechanics, the Pauli–Lubanski pseudovector
a Casimir operator, is the constant spin contribution to the total angular momentum, and there are commutation relations between P and W and between M and W:
Invariants constructed from W, instances of Casimir invariants, can be used to classify irreducible representations of the Poincaré group.
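One frequently used definition and the associated Casimir invariants are (the overall sign of W depends on conventions):
\[
W^{\mu} = \tfrac{1}{2}\,\varepsilon^{\mu\nu\rho\sigma} J_{\nu\rho} P_{\sigma}, \qquad
C_1 = P_{\mu}P^{\mu}, \qquad C_2 = W_{\mu}W^{\mu} ,
\]
which, for a massive particle of mass m and spin s (in natural units with the (+, −, −, −) signature), take the values C1 = m² and C2 = −m² s(s + 1).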
Symmetries in quantum field theory and particle physics
Unitary groups in quantum field theory
Group theory is an abstract way of mathematically analyzing symmetries. Unitary operators are paramount to quantum theory, so unitary groups are important in particle physics. The group of N dimensional unitary square matrices is denoted U(N). Unitary operators preserve inner products which means probabilities are also preserved, so the quantum mechanics of the system is invariant under unitary transformations. Let be a unitary operator, so the inverse is the Hermitian adjoint , which commutes with the Hamiltonian:
then the observable corresponding to the operator is conserved, and the Hamiltonian is invariant under the transformation .
Since the predictions of quantum mechanics should be invariant under the action of a group, physicists look for unitary transformations to represent the group.
Important subgroups of each U(N) are those unitary matrices which have unit determinant (or are "unimodular"): these are called the special unitary groups and are denoted SU(N).
U(1)
The simplest unitary group is U(1), which is just the complex numbers of modulus 1. This one-dimensional matrix entry is of the form:
in which θ is the parameter of the group, and the group is Abelian since one-dimensional matrices always commute under matrix multiplication. Lagrangians in quantum field theory for complex scalar fields are often invariant under U(1) transformations. If there is a quantum number a associated with the U(1) symmetry, for example baryon and the three lepton numbers in electromagnetic interactions, we have:
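a transformation law of the schematic form (the field symbol ψ below is only an illustration):
\[
U = e^{i\theta}, \qquad \psi \to e^{ia\theta}\,\psi, \qquad \psi^{*} \to e^{-ia\theta}\,\psi^{*},
\]
so that products such as ψ*ψ, and hence the Lagrangian, are unchanged and the quantum number a is conserved.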
U(2) and SU(2)
The general form of an element of U(2) is parametrized by two complex numbers a and b:
and for SU(2), the determinant is restricted to 1:
In group theoretic language, the Pauli matrices are the generators of the special unitary group in two dimensions, denoted SU(2). Their commutation relation is the same as for orbital angular momentum, aside from a factor of 2:
A group element of SU(2) can be written:
where σj is a Pauli matrix, and the group parameters are the angles turned through about an axis.
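Written out, with θ = θ n̂ and the overall sign in the exponent a matter of convention, this is
\[
U = \exp\!\left(-\tfrac{i}{2}\,\boldsymbol{\theta}\cdot\boldsymbol{\sigma}\right)
= \cos\tfrac{\theta}{2}\,\mathbf{1} - i\left(\hat{\mathbf{n}}\cdot\boldsymbol{\sigma}\right)\sin\tfrac{\theta}{2}.
\]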
The two-dimensional isotropic quantum harmonic oscillator has symmetry group SU(2), while the symmetry algebra of the rational anisotropic oscillator is a nonlinear extension of u(2).
U(3) and SU(3)
The eight Gell-Mann matrices (see article for them and the structure constants) are important for quantum chromodynamics. They originally arose in the theory SU(3) of flavor which is still of practical importance in nuclear physics. They are the generators for the SU(3) group, so an element of SU(3) can be written analogously to an element of SU(2):
where are eight independent parameters. The matrices satisfy the commutator:
where the indices , , take the values 1, 2, 3, ..., 8. The structure constants fabc are totally antisymmetric in all indices analogous to those of SU(2). In the standard colour charge basis (r for red, g for green, b for blue):
the colour states are eigenstates of the diagonal λ3 and λ8 matrices, while the other matrices mix colour states together.
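For reference, the two diagonal Gell-Mann matrices in the standard normalization are
\[
\lambda_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad
\lambda_8 = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix},
\]
so that, for example, λ3 r = +r, λ3 g = −g and λ3 b = 0 for the colour column vectors r = (1, 0, 0)ᵀ, g = (0, 1, 0)ᵀ, b = (0, 0, 1)ᵀ.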
The eight gluons states (8-dimensional column vectors) are simultaneous eigenstates of the adjoint representation of , the 8-dimensional representation acting on its own Lie algebra , for the and matrices. By forming tensor products of representations (the standard representation and its dual) and taking appropriate quotients, protons and neutrons, and other hadrons are eigenstates of various representations of of color. The representations of SU(3) can be described by a "theorem of the highest weight".
Matter and antimatter
In relativistic quantum mechanics, relativistic wave equations predict a remarkable symmetry of nature: that every particle has a corresponding antiparticle. This is mathematically contained in the spinor fields which are the solutions of the relativistic wave equations.
Charge conjugation switches particles and antiparticles. Physical laws and interactions unchanged by this operation have C symmetry.
Discrete spacetime symmetries
Parity mirrors the orientation of the spatial coordinates from left-handed to right-handed. Informally, space is "reflected" into its mirror image. Physical laws and interactions unchanged by this operation have P symmetry.
Time reversal flips the time coordinate, which amounts to time running from future to past. A curious property of time, which space does not have, is that it is unidirectional: particles traveling forwards in time are equivalent to antiparticles traveling back in time. Physical laws and interactions unchanged by this operation have T symmetry.
C, P, T symmetries
CPT theorem
CP violation
PT symmetry
Lorentz violation
Gauge theory
In quantum electrodynamics, the local symmetry group is U(1) and is abelian. In quantum chromodynamics, the local symmetry group is SU(3) and is non-abelian.
The electromagnetic interaction is mediated by photons, which have no electric charge. The electromagnetic tensor has an electromagnetic four-potential field possessing gauge symmetry.
The strong (color) interaction is mediated by gluons, which can have eight color charges. There are eight gluon field strength tensors with corresponding gluon four potentials field, each possessing gauge symmetry.
The strong (color) interaction
Color charge
Analogous to the spin operator, there are color charge operators in terms of the Gell-Mann matrices :
and since color charge is a conserved charge, all color charge operators must commute with the Hamiltonian:
Isospin
Isospin is conserved in strong interactions.
The weak and electromagnetic interactions
Duality transformation
Magnetic monopoles can be theoretically realized, although current observations and theory are consistent with them existing or not existing. Electric and magnetic charges can effectively be "rotated into one another" by a duality transformation.
Electroweak symmetry
Electroweak symmetry
Electroweak symmetry breaking
Supersymmetry
A Lie superalgebra is an algebra in which (suitable) basis elements either have a commutation relation or have an anticommutation relation. Symmetries have been proposed to the effect that all fermionic particles have bosonic analogues, and vice versa. These symmetries have theoretical appeal in that no extra assumptions (such as the existence of strings) barring symmetries are made. In addition, by assuming supersymmetry, a number of puzzling issues can be resolved. These symmetries, which are represented by Lie superalgebras, have not been confirmed experimentally. It is now believed that they are broken symmetries, if they exist. It has been speculated that dark matter consists of gravitinos, spin-3/2 particles with mass, whose supersymmetric partner would be the graviton.
Exchange symmetry
The concept of exchange symmetry is derived from a fundamental postulate of quantum statistics, which states that no observable physical quantity should change after exchanging two identical particles. It states that because all observables are proportional to |ψ|² for a system of identical particles, the wave function ψ must either remain the same or change sign upon such an exchange. More generally, for a system of n identical particles the wave function must transform as an irreducible representation of the finite symmetric group Sn. It turns out that, according to the spin-statistics theorem, fermion states transform as the antisymmetric irreducible representation of Sn and boson states as the symmetric irreducible representation.
Because the exchange of two identical particles is mathematically equivalent to the rotation of each particle by 180 degrees (and so to the rotation of one particle's frame by 360 degrees), the symmetric nature of the wave function depends on the particle's spin after the rotation operator is applied to it. Integer spin particles do not change the sign of their wave function upon a 360 degree rotation—therefore the sign of the wave function of the entire system does not change. Semi-integer spin particles change the sign of their wave function upon a 360 degree rotation (see more in spin–statistics theorem).
Particles for which the wave function does not change sign upon exchange are called bosons, or particles with a symmetric wave function. The particles for which the wave function of the system changes sign are called fermions, or particles with an antisymmetric wave function.
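For two identical particles occupying single-particle states ψa and ψb, the standard (anti)symmetrized combinations make this explicit:
\[
\Psi_{\pm}(x_1, x_2) = \frac{1}{\sqrt{2}}\left[\psi_a(x_1)\,\psi_b(x_2) \pm \psi_b(x_1)\,\psi_a(x_2)\right],
\]
with the plus sign for bosons and the minus sign for fermions; setting a = b makes Ψ− vanish identically, which is the content of the exclusion principle discussed next.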
Fermions therefore obey different statistics (called Fermi–Dirac statistics) than bosons (which obey Bose–Einstein statistics). One of the consequences of Fermi–Dirac statistics is the exclusion principle for fermions—no two identical fermions can share the same quantum state (in other words, the wave function of two identical fermions in the same state is zero). This in turn results in degeneracy pressure for fermions—the strong resistance of fermions to compression into smaller volume. This resistance gives rise to the “stiffness” or “rigidity” of ordinary atomic matter (as atoms contain electrons which are fermions).
See also
Symmetric group
Spin-statistics theorem
Projective representation
Casimir operator
Pauli–Lubanski pseudovector
Symmetries in general relativity
Renormalization group
Representation of a Lie group
Representation theory of the Poincaré group
Representation theory of the Lorentz group
Footnotes
References
Further reading
External links
(2010) Irreducible Tensor Operators and the Wigner-Eckart Theorem
Lie groups
Continuous Groups, Lie Groups, and Lie Algebras
Pauli exclusion principle
Special relativity
Quantum field theory
Group theory
Theoretical physics
Quantization of the electromagnetic field
The quantization of the electromagnetic field is a procedure in physics turning Maxwell's classical electromagnetic waves into particles called photons. Photons are massless particles of definite energy, definite momentum, and definite spin.
To explain the photoelectric effect, Albert Einstein assumed heuristically in 1905 that an electromagnetic field consists of particles of energy of amount hν, where h is the Planck constant and ν is the wave frequency. In 1927 Paul A. M. Dirac was able to weave the photon concept into the fabric of the new quantum mechanics and to describe the interaction of photons with matter. He applied a technique which is now generally called second quantization, although this term is somewhat of a misnomer for electromagnetic fields, because they are solutions of the classical Maxwell equations. In Dirac's theory the fields are quantized for the first time and it is also the first time that the Planck constant enters the expressions. In his original work, Dirac took the phases of the different electromagnetic modes (Fourier components of the field) and the mode energies as dynamic variables to quantize (i.e., he reinterpreted them as operators and postulated commutation relations between them). At present it is more common to quantize the Fourier components of the vector potential. This is what is done below.
A quantum mechanical photon state belonging to mode is introduced below, and it is shown that it has the following properties:
These equations say respectively: a photon has zero rest mass; the photon energy is hν = hc|k| (k is the wave vector, c is speed of light); its electromagnetic momentum is ħk [ħ = h/(2π)]; the polarization μ = ±1 is the eigenvalue of the z-component of the photon spin.
Second quantization
Second quantization starts with an expansion of a scalar or vector field (or wave functions) in a basis consisting of a complete set of functions. These expansion functions depend on the coordinates of a single particle. The coefficients multiplying the basis functions are interpreted as operators and (anti)commutation relations between these new operators are imposed, commutation relations for bosons and anticommutation relations for fermions (nothing happens to the basis functions themselves). By doing this, the expanded field is converted into a fermion or boson operator field. The expansion coefficients have been promoted from ordinary numbers to operators, creation and annihilation operators. A creation operator creates a particle in the corresponding basis function and an annihilation operator annihilates a particle in this function.
In the case of EM fields the required expansion of the field is the Fourier expansion.
Electromagnetic field and vector potential
As the term suggests, an EM field consists of two vector fields, an electric field E(r, t) and a magnetic field B(r, t). Both are time-dependent vector fields that in vacuum depend on a third vector field A(r, t) (the vector potential), as well as a scalar field φ(r, t).
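In SI units these are the standard relations
\[
\mathbf{B}(\mathbf{r}, t) = \nabla \times \mathbf{A}(\mathbf{r}, t), \qquad
\mathbf{E}(\mathbf{r}, t) = -\nabla\phi(\mathbf{r}, t) - \frac{\partial \mathbf{A}(\mathbf{r}, t)}{\partial t},
\]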
where ∇ × A is the curl of A.
Choosing the Coulomb gauge, for which ∇⋅A = 0, makes A into a transverse field. The Fourier expansion of the vector potential enclosed in a finite cubic box of volume V = L3 is then
where denotes the complex conjugate of . The wave vector k gives the propagation direction of the corresponding Fourier component (a polarized monochromatic wave) of A(r,t); the length of the wave vector is
with ν the frequency of the mode. In this summation the components of k run over all integers, both positive and negative. (The Fourier coefficient belonging to −k is the complex conjugate of the coefficient belonging to k, because A is real.) The components of the vector k have discrete values (a consequence of the boundary condition that A has the same value on opposite walls of the box):
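With this periodicity on a cube of side L, the allowed wave vectors are (the standard box-quantization condition)
\[
\mathbf{k} = \frac{2\pi}{L}\left(n_x,\; n_y,\; n_z\right), \qquad n_x, n_y, n_z \in \mathbb{Z}.
\]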
Two e(μ) ("polarization vectors") are conventional unit vectors for left and right hand circularly polarized (LCP and RCP) EM waves (see Jones calculus or Jones vector) and are perpendicular to k. They are related to the orthonormal Cartesian vectors ex and ey through a unitary transformation.
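One common choice (phase conventions differ between authors) is
\[
\mathbf{e}^{(1)} = -\frac{1}{\sqrt{2}}\left(\mathbf{e}_x + i\,\mathbf{e}_y\right), \qquad
\mathbf{e}^{(-1)} = \frac{1}{\sqrt{2}}\left(\mathbf{e}_x - i\,\mathbf{e}_y\right),
\]
which are orthonormal with respect to the Hermitian inner product.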
The kth Fourier component of A is a vector perpendicular to k and hence is a linear combination of e(1) and e(−1). The superscript μ indicates a component along e(μ).
Clearly, the (discrete, infinite) set of Fourier coefficients a^(μ)(k) and their complex conjugates are the variables defining the vector potential. In the following they will be promoted to operators.
By using field equations of and in terms of above, electric and magnetic fields are
These expressions follow by using a standard vector-calculus identity for the curl and the fact that each mode has a single frequency dependence.
Quantization of EM field
The best known example of quantization is the replacement of the time-dependent linear momentum of a particle by the rule p → −iħ∇.
Note that the Planck constant is introduced here and that the time-dependence of the classical expression is not taken over in the quantum mechanical operator (this is true in the so-called Schrödinger picture).
For the EM field we do something similar. The quantity ε0 is the electric constant, which appears here because of the use of electromagnetic SI units. The quantization rules are:
subject to the boson commutation relations
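Writing a^(μ)(k) for the operator that replaces the Fourier coefficient of mode (k, μ) (this notation is one common choice), the relations take the standard bosonic form
\[
\left[a^{(\mu)}(\mathbf{k}),\, a^{(\mu')}(\mathbf{k}')\right] = 0, \qquad
\left[a^{(\mu)\dagger}(\mathbf{k}),\, a^{(\mu')\dagger}(\mathbf{k}')\right] = 0, \qquad
\left[a^{(\mu)}(\mathbf{k}),\, a^{(\mu')\dagger}(\mathbf{k}')\right] = \delta_{\mathbf{k}\mathbf{k}'}\,\delta_{\mu\mu'} .
\]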
The square brackets indicate a commutator, defined by [A, B] ≡ AB − BA for any two quantum mechanical operators A and B. The introduction of the Planck constant is essential in the transition from a classical to a quantum theory. The factor
is introduced to give the Hamiltonian (energy operator) a simple form, see below.
The quantized fields (operator fields) are the following
where ω = c|k| = ck.
Hamiltonian of the field
The classical Hamiltonian has the form
The right-hand-side is easily obtained by first using
(can be derived from Euler's formula and trigonometric orthogonality) where k is the wavenumber of a wave confined within the box of V = L × L × L as described above and, second, using ω = kc.
Substitution of the field operators into the classical Hamiltonian gives the Hamilton operator of the EM field,
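which, in one standard normalization (using the a^(μ)(k) notation introduced above), reads
\[
H = \tfrac{1}{2}\sum_{\mathbf{k},\mu}\hbar\omega\left(a^{(\mu)\dagger}(\mathbf{k})\,a^{(\mu)}(\mathbf{k}) + a^{(\mu)}(\mathbf{k})\,a^{(\mu)\dagger}(\mathbf{k})\right)
= \sum_{\mathbf{k},\mu}\hbar\omega\left(a^{(\mu)\dagger}(\mathbf{k})\,a^{(\mu)}(\mathbf{k}) + \tfrac{1}{2}\right).
\]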
The second equality follows by use of the third of the boson commutation relations from above with k′ = k and μ′ = μ. Note again that ħω = hν = ħc|k| and remember that ω depends on k, even though it is not explicit in the notation. The notation ω(k) could have been introduced, but is not common as it clutters the equations.
Digression: harmonic oscillator
The second quantized treatment of the one-dimensional quantum harmonic oscillator is a well-known topic in quantum mechanical courses. We digress and say a few words about it. The harmonic oscillator Hamiltonian has the form
where ω ≡ 2πν is the fundamental frequency of the oscillator. The ground state of the oscillator is designated |0⟩ and is referred to as the "vacuum state". It can be shown that a† is an excitation operator: it excites an n-fold excited state to an (n + 1)-fold excited state:
In particular: and
Since harmonic oscillator energies are equidistant, the n-fold excited state |n⟩ can be looked upon as a single state containing n particles (sometimes called vibrons) all of energy hν. These particles are bosons. For obvious reasons the excitation operator a† is called a creation operator.
From the commutation relation it follows that the Hermitian adjoint a de-excites; in particular a|0⟩ = 0, so that the vacuum cannot be de-excited further. For obvious reasons the de-excitation operator a is called an annihilation operator.
By mathematical induction the following "differentiation rule", that will be needed later, is easily proved,
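Collected together, and with the usual phase conventions for the states, the ladder relations and this differentiation rule read:
\[
a^{\dagger}|n\rangle = \sqrt{n+1}\,|n+1\rangle, \qquad
a|n\rangle = \sqrt{n}\,|n-1\rangle, \qquad
a|0\rangle = 0, \qquad
\left[a,\,(a^{\dagger})^{n}\right] = n\,(a^{\dagger})^{n-1}.
\]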
Suppose now we have a number of non-interacting (independent) one-dimensional harmonic oscillators, each with its own fundamental frequency ωi . Because the oscillators are independent, the Hamiltonian is a simple sum:
By comparison we see that the Hamiltonian of the EM field can be considered a Hamiltonian of independent oscillators of energy ħω = ħc|k|, each oscillating along direction e(μ) with μ = ±1.
Photon number states (Fock states)
The quantized EM field has a vacuum (no photons) state |0⟩. Applying creation operators to it, say m times for mode (k, μ) and n times for mode (k′, μ′),
gives a quantum state of m photons in mode (k, μ) and n photons in mode (k′, μ′). The proportionality symbol is used because the state on the left-hand is not normalized to unity, whereas the state on the right-hand may be normalized.
The operator
is the number operator. When acting on a quantum mechanical photon number state, it returns the number of photons in mode (k, μ). This also holds when the number of photons in this mode is zero, then the number operator returns zero. To show the action of the number operator on a one-photon ket, we consider
i.e., a number operator of mode (k, μ) returns zero if the mode is unoccupied and returns unity if the mode is singly occupied. To consider the action of the number operator of mode (k, μ) on a n-photon ket of the same mode, we drop the indices k and μ and consider
Use the "differentiation rule" introduced earlier and it follows that
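In sketch form, with |n⟩ ∝ (a†)ⁿ|0⟩ and a|0⟩ = 0:
\[
a^{\dagger}a\,(a^{\dagger})^{n}|0\rangle
= a^{\dagger}\left[(a^{\dagger})^{n}a + n\,(a^{\dagger})^{n-1}\right]|0\rangle
= n\,(a^{\dagger})^{n}|0\rangle,
\]
so an n-photon state of a single mode is returned with eigenvalue n.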
A photon number state (or a Fock state) is an eigenstate of the number operator. This is why the formalism described here is often referred to as the occupation number representation.
Photon energy
Earlier the Hamiltonian,
was introduced. The zero of energy can be shifted, which leads to an expression in terms of the number operator,
The effect of H on a single-photon state is
Thus the single-photon state is an eigenstate of H and ħω = hν is the corresponding energy. In the same way
Photon momentum
Introducing the Fourier expansion of the electromagnetic field into the classical form
yields
Quantization gives
The term 1/2 could be dropped, because when one sums over the allowed k, k cancels with −k. The effect of PEM on a single-photon state is
Apparently, the single-photon state is an eigenstate of the momentum operator, and ħk is the eigenvalue (the momentum of a single photon).
Photon mass
The photon having non-zero linear momentum, one could imagine that it has a non-vanishing rest mass m0, which is its mass at zero speed. However, we will now show that this is not the case: m0 = 0.
Since the photon propagates with the speed of light, special relativity is called for. The relativistic expressions for energy and momentum squared are,
From p2/E2,
Use
and it follows that
so that m0 = 0.
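The argument can be summarized with the standard relativistic relations (a compact restatement of the steps above):
\[
E^{2} = m_0^{2}c^{4} + p^{2}c^{2}, \qquad E = \hbar\omega = \hbar c\,|\mathbf{k}|, \qquad \mathbf{p} = \hbar\mathbf{k}
\quad\Longrightarrow\quad
m_0^{2}c^{4} = E^{2} - p^{2}c^{2} = \hbar^{2}c^{2}|\mathbf{k}|^{2} - \hbar^{2}|\mathbf{k}|^{2}c^{2} = 0 .
\]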
Photon spin
The photon can be assigned a triplet spin with spin quantum number S = 1. This is similar to, say, the nuclear spin of the 14N isotope, but with the important difference that the state with MS = 0 is zero, only the states with MS = ±1 are non-zero.
Define spin operators:
The two operators between the two orthogonal unit vectors are dyadic products. The unit vectors are perpendicular to the propagation direction k (the direction of the z axis, which is the spin quantization axis).
The spin operators satisfy the usual angular momentum commutation relations
Indeed, use the dyadic product property
because ez is of unit length. In this manner,
By inspection it follows that
and therefore μ labels the photon spin,
Because the vector potential A is a transverse field, the photon has no forward (μ = 0) spin component.
Classical approximation
The classical approximation to EM radiation is good when the number of photons is much larger than unity in the volume (λ/(2π))³, where λ is the wavelength of the radiation. In that case quantum fluctuations are negligible.
For example, the photons emitted by a radio station broadcasting at the frequency ν = 100 MHz have an energy content of νh = (1 × 10⁸) × (6.6 × 10⁻³⁴) = 6.6 × 10⁻²⁶ J, where h is the Planck constant. The wavelength of the station is λ = c/ν = 3 m, so that λ/(2π) = 48 cm and the volume (λ/(2π))³ is 0.109 m³. The energy content of this volume element at 5 km from the station is 2.1 × 10⁻¹⁰ J/m³ × 0.109 m³ = 2.3 × 10⁻¹¹ J, which amounts to 3.4 × 10¹⁴ photons per such volume. Since 3.4 × 10¹⁴ ≫ 1, quantum effects do not play a role. The waves emitted by this station are well-described by the classical limit and quantum mechanics is not needed.
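A quick numerical check of these figures (a rough sketch; the energy density of 2.1 × 10⁻¹⁰ J/m³ at 5 km is taken from the text as given):

import math

h = 6.626e-34          # Planck constant, J s
c = 3.0e8              # speed of light, m/s
nu = 100e6             # broadcast frequency, Hz

E_photon = h * nu                      # energy of one photon, about 6.6e-26 J
lam = c / nu                           # wavelength, 3 m
volume = (lam / (2 * math.pi)) ** 3    # about 0.109 m^3

u = 2.1e-10                            # energy density at 5 km, J/m^3 (from the text)
E_in_volume = u * volume               # about 2.3e-11 J
n_photons = E_in_volume / E_photon     # about 3.4e14 photons

print(E_photon, volume, n_photons)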
See also
QED vacuum
Generalized polarization vector of arbitrary spin fields.
References
Gauge theories
Mathematical quantization
42 (number)
42 (forty-two) is the natural number that follows 41 and precedes 43.
Mathematics
42 is a pronic number, an abundant number, and a Catalan number.
Where the plane-vertex tiling 3.10.15 is constructible through elementary methods, the largest such tiling, 3.7.42, is not. This means that the 42-sided tetracontadigon is the largest such regular polygon that can only tile a vertex alongside other regular polygons, without tiling the plane.
42 is the only known number n that is equal to the number of sets of four distinct positive integers a, b, c, d, each less than n, such that ab − cd, ac − bd, and ad − bc are all multiples of n. Whether there are other such values remains an open question.
42 is the magic constant of the smallest non-trivial magic cube, a cube with entries of 1 through 27, where every row, column, corridor, and diagonal passing through the center sums to forty-two.
42 can be expressed as the following sum of three cubes:
Science
42 is the atomic number of molybdenum.
42 is the atomic mass of one of the naturally occurring stable isotopes of calcium.
The angle rounded to whole degrees for which a rainbow appears (the critical angle).
In 1966, mathematician Paul Cooper theorized that the fastest, most efficient way to travel across continents would be to bore a straight hollow tube directly through the Earth, connecting a set of antipodes, remove the air from the tube and fall through. The first half of the journey consists of free-fall acceleration, while the second half consists of an exactly equal deceleration. The time for such a journey works out to be 42 minutes. Even if the tube does not pass through the exact center of the Earth, the time for a journey powered entirely by gravity (known as a gravity train) always works out to be 42 minutes, so long as the tube remains friction-free, as while the force of gravity would be lessened, the distance traveled is reduced at an equal rate. (The same idea was proposed, without calculation, by Lewis Carroll in 1893 in Sylvie and Bruno Concluded.) These figures assume a uniform density for the Earth; with more realistic density models the journey takes about 38 minutes.
As determined by the Babylonians, in 79 years, Mars orbits the Sun almost exactly 42 times.
The hypothetical efficiency of converting mass to energy, as predicted by mass–energy equivalence, by having a given mass orbit a rotating black hole, is 42%, the highest efficiency yet known to modern physics.
In Powers of Ten by Ray and Charles Eames, the known universe from large-scale to small-scale is represented by 42 different powers of ten. These powers range from 10²⁵ meters to 10⁻¹⁷ meters.
Technology
Magic numbers used by programmers:
In TIFF (Tag Image File Format), the second 16-bit word of every file is 42, "an arbitrary but carefully chosen number that further identifies the file as a TIFF file".
In the reiser4 file system, 42 is the inode number of the root directory.
In the military IRIG 106 Chapter 10 data recording standard, the hex value 0x464F52545974776F (ASCII "FORTYtwo") is used as a magic number to identify directory blocks.
The GNU C Library, a set of standard routines available for use in computer programming, contains a function—memfrob()—which performs an XOR combination of a given variable and the binary pattern 00101010 (42) as an XOR cipher.
Tiling a plane using regular hexagons, which is honeycomb in appearance, is approximated in a topological sense to an accuracy of better than 1% using a stretcher bond brick pattern with bricks of 42 squares (6 by 7).
The password expiration policy for a Microsoft Windows domain defaults to 42 days.
The ASCII code 42 is for the asterisk symbol, being a wildcard for everything.
The nonce of the first block of the Ethereum blockchain is the hex value 0x42.
Astronomy
Messier object M42, a magnitude 5.0 diffuse nebula in the constellation Orion, also known as the Orion Nebula.
The New General Catalogue object NGC 42, a spiral galaxy in the constellation Pegasus.
In January 2004, an asteroid was given the permanent name 25924 Douglasadams, for the author Douglas Adams who popularized the number 42. Adams died in 2001. Brian G. Marsden, the director of the Minor Planet Center and the secretary for the naming committee, remarked that, with even his initials in the provisional designation, "This was sort of made for him, wasn't it?".
Kepler-42, a red dwarf in the constellation Cygnus that hosts the three smallest exoplanets found to date.
42 Isis, a large main-belt asteroid measuring about 100 km in diameter.
Wisdom Literature, Religion and Philosophy
Ancient Egyptian Wisdom Literature : Over most of pharaonic Egyptian history, the empire was divided into 42 nomes. Ancient Egyptian religion and mythological structure frequently model this terrestrial structure.
42 body parts of Osiris: In some traditions of the Osiris myth, Seth slays Osiris and distributes his 42 body parts all over Egypt. (In others, the number is fourteen and sixteen).
42 negative confessions: In Ancient Egyptian religion, the 42 negative confessions were a list of questions asked of deceased persons making their journey through the underworld after death. Ma'at was an abstract concept representing moral law, order, and truth in both the physical and moral spheres, as well as being an important goddess in the religion. In the judgment scene described in the Egyptian Book of the Dead, which evolved from the Coffin Texts and the Pyramid Texts, 42 questions were asked of the deceased person as part of the assessment of Ma'at. If the deceased person could reasonably give answers to the 42 questions, they would be permitted to enter the afterlife. These 42 questions are known as the "42 Negative Confessions" and can be found in funerary texts such as the Papyrus of Ani.
42 books in the core library: Clement of Alexandria states that the Egyptian temple library was divided into 42 "absolutely necessary" books that formed the stock of a core library. Thirty-six of these contain the entire philosophy of the Egyptians and are memorized by the priests, while the remaining six are learned by the Pastophoroi (image-bearers). (36 is likewise a sacred number in Egyptian thought, related to time, in particular the thirty-six Decan stars and the thirty-six 10-day "weeks" in the Egyptian year.) The 42 books were not canonized like the Hebrew Bible; they only supported and never replaced temple ritual. Hence, the destruction of the Egyptian temples and the cessation of the rituals ended Egyptian cultural continuity.
Abrahamic religions
In Ezra 2:24, 42 men of Beth-azmaveth were counted in the census of men of Israel upon return from exile; in 2 Kings 2:24, God, because of a "curse" Elisha put on them, sent/allowed bears to maul 42 teenage boys who mocked Elisha for his baldness.
In Judaism, by some traditions the Torah scroll is written with no fewer than 42 lines per column, based on the journeys of Israel. In the present day, 42 lines is the most common standard, but various traditions remain in use (see Sefer Torah).
42 is the number with which God creates the Universe in Kabbalistic tradition. In Kabbalah, the most significant name is that of the En Sof (also known as "Ein Sof", "Infinite" or "Endless"), who is above the Sefirot (sometimes spelled "Sephirot"). The Forty-Two-Lettered Name contains four combined names which are spelled in Hebrew letters (spelled in letters = 42 letters), which is the name of Azilut (or "Atziluth" "Emanation"). While there are obvious links between the Forty-Two Lettered Name of the Babylonian Talmud and the Kabbalah's Forty-Two Lettered Name, they are probably not identical because of the Kabbalah's emphasis on numbers. The Kabbalah also contains a Forty-Five Lettered Name and a Seventy-Two Lettered Name. 42 letters make the Ana beKo'ach prayer.
The number 42 appears in various contexts in Christianity. There are 42 generations (names) in the Gospel of Matthew's version of the Genealogy of Jesus.
Hebrew Bible & Book of Revelation: “For a thousand years in your sight are like a day that has just gone by, or like a watch in the night.” (Psalm 90:4)→ “Fear God and give Him glory, because the hour of His judgment has come. Worship the One who made the heavens and the earth and the sea and the springs of waters.” (Revelation 14:7)→ 1,000 years per day/24 hours per day ≈ 42 years/hour; it is prophesied that for 42 months the Beast will hold dominion over the Earth (Revelation 13:5); etc.
The Gutenberg Bible is also known as the "42-line Bible", as the book contained 42 lines per page.
The Forty-Two Articles (1552), largely the work of Thomas Cranmer, were intended to summarize Anglican doctrine, as it now existed under the reign of Edward VI.
East Asian religions
The Sutra of Forty-two Sections is a Buddhist scripture.
In Japanese culture, the number 42 is considered unlucky because the numerals when pronounced separately—shi ni (four two)—sound like the word "dying", like the Latin word "mori".
Popular culture
The Hitchhiker's Guide to the Galaxy
The number 42 is, in The Hitchhiker's Guide to the Galaxy by Douglas Adams, the "Answer to the Ultimate Question of Life, the Universe, and Everything", calculated by an enormous supercomputer named Deep Thought over a period of 7.5 million years. Unfortunately, no one knows what the question is. Thus, to calculate the Ultimate Question, a special computer the size of a small planet was built from organic components and named "Earth". The Ultimate Question "What do you get when you multiply six by nine" is found by Arthur Dent and Ford Prefect in the second book of the series, The Restaurant at the End of the Universe. This appeared first in the radio play and later in the novelization of The Hitchhiker's Guide to the Galaxy.
The fourth book in the series contains 42 chapters.
Within the series, 42 is the street address of Stavromula Beta.
In 1994, Adams created the 42 Puzzle, a game based on the number 42. Adams says he picked the number simply as a joke, with no deeper meaning.
Google also has a calculator easter egg when one searches "the answer to the ultimate question of life, the universe, and everything." Once typed (all in lowercase), the calculator answers with the number 42.
Works of Lewis Carroll
Lewis Carroll, who was a mathematician, made repeated use of this number in his writings.
Examples of Carroll's use of 42:
Alice's Adventures in Wonderland has 42 illustrations.
Alice's attempts at multiplication (chapter two of Alice in Wonderland) work if one uses base 18 to write the first answer, and increases the base by threes to 21, 24, etc. (the answers working up to 4 × 12 = "19" in base 39), but "breaks" precisely when one attempts the answer to 4 × 13 in base 42, leading Alice to declare "oh dear! I shall never get to twenty at that rate!"
Rule Forty-two in Alice's Adventures in Wonderland ("All persons more than a mile high to leave the court").
Rule 42 of the Code in the preface to The Hunting of the Snark ("No one shall speak to the Man at the Helm").
In "fit the first" of The Hunting of the Snark the Baker had "forty-two boxes, all carefully packed, With his name painted clearly on each."
The White Queen announces her age as "one hundred and one, five months and a day", which—if the best possible date is assumed for the action of Through the Looking-Glass (e.g., a date is chosen such that the rollover from February to March is excluded from what would otherwise be an imprecise measurement of "five months and a day")—gives a total of 37,044 days. If the Red Queen, as part of the same chess set, is regarded as the same age, their combined age is 74,088 days, or 42 × 42 × 42.
La Vita Nuova, Dante (1294)
Dante modeled the 42 chapters of his Vita Nuova on the 42 Stations of the Exodus.
Music
42 Dugg is an American rapper.
"Forty-two" ("42") is a work (dedicated to Elvis Presley, Joe Dassin and Vladimir Vysotsky) for oboe and symphony orchestra by Estonian composer Peeter Vähi.
Level 42 is an English pop/rock/funk music band.
"42" is one of the tracks on Coldplay′s 2008 album Viva la Vida or Death and All His Friends.
"Channel 42" is an electronic music song by deadmau5 featuring Wolfgang Gartner; it appears on the 2012 deadmau5 album Album Title Goes Here.
"42" is a song from Mumford and Sons′ 2018 album Delta.
"42" is a song by rapper and hip-hop producer Pi'erre Bourne, released on his 2021 studio album: The Life Of Pi'erre 5
"42" is a song written and produced by hip-hop and record production trio 3Racha, which consists of members Bang Chan, Han Jisung, and Seo Changbin of popular k-pop group Stray Kids. A lyric in this song states, "Why do we live? What's the purpose? Is it 42? Stop speaking nonsense," which directly references The Hitchhiker's Guide to the Galaxys definition of 42.
""42"" is a song from the 2018 album SR3MM by American rap duo Rae Sremmurd.
"42" is a song from the 2019 album Don′t Panic by the progressive rock band IZZ. The album is a partial concept album based on The Hitchhiker's Guide to the Galaxy.
"42" is a song by The Disco Biscuits.
Television and film
The Kumars at No. 42 is a British comedy television series.
"42" is an episode of Doctor Who, set in real time lasting approximately 42 minutes.
On the game show Jeopardy!, "Watson" the IBM supercomputer has 42 "threads" in its avatar.
42 is a film on the life of American baseball player Jackie Robinson.
Captain Harlock is sometimes seen wearing clothing with the number 42 on it.
In the Stargate Atlantis season 4 episode "Quarantine", Colonel Sheppard states that Dr. McKay's password ends in 42 because "It's the ultimate answer to the great question of life, the universe and everything."
In Star Wars: The Rise of Skywalker, the Festival of the Ancestors on Planet Pasaana is held every 42 years. The film itself was released in 2019, 42 years after the 1977 original Star Wars film. By a "whole string of pretty meaningless coincidences", 2019 is the same year that 42 was found to be the largest possible natural number less than 100 to be expressed as a sum of three cubes.
In the TV show Lost, 42 is one of the numbers used throughout the show for some of its mysteries.
There is a Belgian TV drama called Unit 42 about a special police unit that uses high-tech tools to go after criminals. One of the characters in the pilot episode explains that the unit was named based on the Hitchhiker's Guide To The Galaxy.
Video games
42 Entertainment is the company responsible for several alternate reality games, including I Love Bees, Year Zero, and Why So Serious.
Tokyo 42 is a video game released in 2017.
Squadron 42 is a video game set in the Star Citizen Universe with an unspecified release date.
Sports
The jersey number of Jackie Robinson, which is the only number retired by all Major League Baseball teams. Although the number was retired in 1997, Mariano Rivera of the New York Yankees, the last professional baseball player to wear number 42, continued to wear it until he retired at the end of the 2013 season. As of the 2014 season, no player ever again wore the number 42 in Major League Baseball except on Jackie Robinson Day (April 15), when all uniformed personnel (players, managers, coaches, and umpires) wear the number.
The number of the laws of cricket.
Rule 42 is the historic name of a Gaelic Athletic Association rule (now codified in Rule 5.1 and Rule 44) that in practice prohibits the playing of "foreign sports" (generally association football and the rugby codes) at GAA grounds.
Architecture
The architects of the Rockefeller Center in New York City worked daily in the Graybar Building where on "the twenty-fifth floor, one enormous drafting room contained forty-two identical drawing boards, each the size of a six-seat dining room table; another room harboured twelve more, and an additional fourteen stood just outside the principals' offices at the top of the circular iron staircase connecting 25 to 26".
In the Rockefeller Center (New York City) there are a total of "forty-two elevators in five separate banks" which carry tenants and visitors to the sixty-six floors.
Comics
Miles Morales was bitten by a spider bearing the number 42, causing him to become a Spider-Man. The number was later heavily referenced in the film Spider-Man: Into the Spider-Verse. The use of 42 within the franchise references Jackie Robinson's use of the number, though many fans incorrectly believed it to be a Hitchhiker's Guide to the Galaxy reference.
Other fields
+42 was the historical country calling code of the former country of Czechoslovakia.
There are 42 US gallons in a barrel of oil.
42 is the number of the French department of Loire. The number is also reflected in the postal code for that area.
Tower 42 is a skyscraper in the City of London, formerly known as the NatWest Tower.
In New York City, 42nd Street is a main and very popular two-way thoroughfare. Landmarks on it include the Chrysler Building, Grand Central Terminal, the main branch of the New York Public Library, and Times Square. The Headquarters of the United Nations is at the east end of the street. The New York City street is also the setting for a movie by the same name (which also gave fame to its eponymous title song), and which later inspired a musical adaptation, 42nd Street.
42 is the inspiration for the name of the 42 Center of Excellence for Artificial Intelligence, based in Vienna, Austria.
42 is the name of the private computer science school with campuses located in Paris, France, and Fremont, California.
42 is the sum of the numbers on a pair of dice.
42 (dominoes) is a trick-taking game played with dominoes, rather than cards. Originated and predominantly found in Texas.
42 is the number of times a standard sheet of paper would need to be folded over onto itself for the thickness of the folded piece of paper to be thick enough to reach the Moon from the surface of the Earth.
42 is the maximum amount of marks in the International Mathematical Olympiad.
Other languages
Notes
References
External links
My latest favorite Number: 42, John C. Baez
The number Forty-two in real life
Integers
In-jokes
The Beast (Revelation)
Ergodicity
In mathematics, ergodicity expresses the idea that a point of a moving system, either a dynamical system or a stochastic process, will eventually visit all parts of the space that the system moves in, in a uniform and random sense. This implies that the average behavior of the system can be deduced from the trajectory of a "typical" point. Equivalently, a sufficiently large collection of random samples from a process can represent the average statistical properties of the entire process. Ergodicity is a property of the system; it is a statement that the system cannot be reduced or factored into smaller components. Ergodic theory is the study of systems possessing ergodicity.
Ergodic systems occur in a broad range of systems in physics and in geometry. This can be roughly understood to be due to a common phenomenon: the motion of particles, that is, geodesics on a hyperbolic manifold are divergent; when that manifold is compact, that is, of finite size, those orbits return to the same general area, eventually filling the entire space.
Ergodic systems capture the common-sense, every-day notions of randomness, such that smoke might come to fill all of a smoke-filled room, or that a block of metal might eventually come to have the same temperature throughout, or that flips of a fair coin may come up heads and tails half the time. A stronger concept than ergodicity is that of mixing, which aims to mathematically describe the common-sense notions of mixing, such as mixing drinks or mixing cooking ingredients.
The proper mathematical formulation of ergodicity is founded on the formal definitions of measure theory and dynamical systems, and rather specifically on the notion of a measure-preserving dynamical system. The origins of ergodicity lie in statistical physics, where Ludwig Boltzmann formulated the ergodic hypothesis.
Informal explanation
Ergodicity occurs in broad settings in physics and mathematics. All of these settings are unified by a common mathematical description, that of the measure-preserving dynamical system. Equivalently, ergodicity can be understood in terms of stochastic processes. They are one and the same, despite using dramatically different notation and language.
Measure-preserving dynamical systems
The mathematical definition of ergodicity aims to capture ordinary every-day ideas about randomness. This includes ideas about systems that move in such a way as to (eventually) fill up all of space, such as diffusion and Brownian motion, as well as common-sense notions of mixing, such as mixing paints, drinks, cooking ingredients, industrial process mixing, smoke in a smoke-filled room, the dust in Saturn's rings and so on. To provide a solid mathematical footing, descriptions of ergodic systems begin with the definition of a measure-preserving dynamical system. This is written as a quadruple (X, 𝒜, μ, T).
The set X is understood to be the total space to be filled: the mixing bowl, the smoke-filled room, etc. The measure μ is understood to define the natural volume of the space X and of its subspaces. The collection of subspaces is denoted by 𝒜, and the size of any given subset A ⊆ X is μ(A); the size is its volume. Naively, one could imagine 𝒜 to be the power set of X; this doesn't quite work, as not all subsets of a space have a volume (famously, the Banach–Tarski paradox). Thus, conventionally, 𝒜 consists of the measurable subsets—the subsets that do have a volume. It is always taken to be a Borel set—the collection of subsets that can be constructed by taking intersections, unions and set complements of open sets; these can always be taken to be measurable.
The time evolution of the system is described by a map T : X → X. Given some subset A ⊆ X, its image T(A) will in general be a deformed version of A – it is squashed or stretched, folded or cut into pieces. Mathematical examples include the baker's map and the horseshoe map, both inspired by bread-making. The set T(A) must have the same volume as A; the squashing/stretching does not alter the volume of the space, only its distribution. Such a system is "measure-preserving" (area-preserving, volume-preserving).
A formal difficulty arises when one tries to reconcile the volume of sets with the need to preserve their size under a map. The problem arises because, in general, several different points in the domain of a function can map to the same point in its range; that is, there may be x ≠ y with T(x) = T(y). Worse, a single point has no size. These difficulties can be avoided by working with the inverse map T⁻¹; it will map any given subset A to the parts that were assembled to make it: these parts are T⁻¹(A). It has the important property of not losing track of where things came from. More strongly, it has the important property that any such (measure-preserving) map is the inverse of some map X → X. The proper definition of a volume-preserving map is one for which μ(T⁻¹(A)) = μ(A), because T⁻¹(A) describes all the pieces-parts that A came from.
One is now interested in studying the time evolution of the system. If a set A ∈ 𝒜 eventually comes to fill all of X over a long period of time (that is, if Tⁿ(A) approaches all of X for large n), the system is said to be ergodic. If every set A behaves in this way, the system is a conservative system, placed in contrast to a dissipative system, where some subsets A wander away, never to be returned to. An example would be water running downhill: once it's run down, it will never come back up again. The lake that forms at the bottom of this river can, however, become well-mixed. The ergodic decomposition theorem states that every ergodic system can be split into two parts: the conservative part, and the dissipative part.
Mixing is a stronger statement than ergodicity. Mixing asks for this ergodic property to hold between any two sets A, B, and not just between some set A and X. That is, given any two sets A, B ∈ 𝒜, a system is said to be (topologically) mixing if there is an integer N such that, for all A, B and all n > N, one has that Tⁿ(A) ∩ B ≠ ∅. Here, ∩ denotes set intersection and ∅ is the empty set. Other notions of mixing include strong and weak mixing, which describe the notion that the mixed substances intermingle everywhere, in equal proportion. This can be non-trivial, as practical experience of trying to mix sticky, gooey substances shows.
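For comparison, the measure-theoretic (strong) mixing condition is usually stated as
\[
\lim_{n \to \infty}\mu\!\left(T^{-n}(A)\cap B\right) = \mu(A)\,\mu(B) \qquad \text{for all measurable } A, B,
\]
which says that, after long times, the fraction of B occupied by the evolved pieces of A is just the relative size of A.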
Ergodic processes
The above discussion appeals to a physical sense of a volume. The volume does not have to literally be some portion of 3D space; it can be some abstract volume. This is generally the case in statistical systems, where the volume (the measure) is given by the probability. The total volume corresponds to probability one. This correspondence works because the axioms of probability theory are identical to those of measure theory; these are the Kolmogorov axioms.
The idea of a volume can be very abstract. Consider, for example, the set of all possible coin-flips: the set of infinite sequences of heads and tails. Assigning the volume of 1 to this space, it is clear that half of all such sequences start with heads, and half start with tails. One can slice up this volume in other ways: one can say "I don't care about the first n − 1 coin-flips; but I want the n'th of them to be heads, and then I don't care about what comes after that". This can be written as the set of all sequences with h ("heads") in the n'th position and "don't care" values everywhere else. The volume of this space is again one-half.
The above is enough to build up a measure-preserving dynamical system, in its entirety. The sets of h or t occurring in the n'th place are called cylinder sets. The set of all possible intersections, unions and complements of the cylinder sets then form the Borel set defined above. In formal terms, the cylinder sets form the base for a topology on the space X of all possible infinite-length coin-flips. The measure μ has all of the common-sense properties one might hope for: the measure of a cylinder set with h in the m'th position and t in the k'th position is obviously 1/4, and so on. These common-sense properties persist for set-complement and set-union: everything except for h and t in locations m and k obviously has the volume of 3/4. All together, these form the axioms of a sigma-additive measure; measure-preserving dynamical systems always use sigma-additive measures. For coin flips, this measure is called the Bernoulli measure.
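In general, fixing the outcomes of a fair coin at any n distinct positions (and leaving the rest as "don't care") gives a cylinder set of measure
\[
\mu = \frac{1}{2^{\,n}} ,
\]
which is exactly the probability assigned by n independent fair coin flips.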
For the coin-flip process, the time-evolution operator T is the shift operator that says "throw away the first coin-flip, and keep the rest". Formally, if (x1, x2, x3, …) is a sequence of coin-flips, then T(x1, x2, x3, …) = (x2, x3, …). The measure is obviously shift-invariant: as long as we are talking about some set A where the first coin-flip is the "don't care" value, then the volume μ(A) does not change: μ(T(A)) = μ(A). In order to avoid talking about the first coin-flip, it is easier to define T⁻¹ as inserting a "don't care" value into the first position: T⁻¹(x1, x2, …) = (*, x1, x2, …). With this definition, one obviously has that μ(T⁻¹(A)) = μ(A) with no constraints on A. This is again an example of why T⁻¹ is used in the formal definitions.
The above development takes a random process, the Bernoulli process, and converts it to a measure-preserving dynamical system. The same conversion (equivalence, isomorphism) can be applied to any stochastic process. Thus, an informal definition of ergodicity is that a sequence is ergodic if it visits all of X; such sequences are "typical" for the process. Another is that its statistical properties can be deduced from a single, sufficiently long, random sample of the process (thus uniformly sampling all of X), or that any collection of random samples from a process must represent the average statistical properties of the entire process (that is, samples drawn uniformly from X are representative of X as a whole). In the present example, a sequence of coin flips, where half are heads, and half are tails, is a "typical" sequence.
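A small simulation illustrates the point for the fair-coin Bernoulli process: the time average along a single long sample converges to the ensemble average of 1/2 (a sketch; the seed and the sample length of one million flips are arbitrary choices):

import random

random.seed(0)
n = 10**6
flips = [random.randint(0, 1) for _ in range(n)]   # one long "typical" sequence of coin flips

time_average = sum(flips) / n     # average of heads along a single trajectory
ensemble_average = 0.5            # probability of heads for a fair coin

print(time_average, ensemble_average)   # the two agree to roughly 1/sqrt(n)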
There are several important points to be made about the Bernoulli process. If one writes 0 for tails and 1 for heads, one gets the set of all infinite strings of binary digits. These correspond to the base-two expansion of real numbers. Explicitly, given a sequence of bits (b1, b2, b3, …), the corresponding real number is
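With the bits indexed from n = 1 as above (one common convention), this is
\[
y = \sum_{n=1}^{\infty} \frac{b_n}{2^{\,n}} ,
\]
so that every infinite sequence of coin flips corresponds to a point of the unit interval.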
The statement that the Bernoulli process is ergodic is equivalent to the statement that the real numbers are uniformly distributed. The set of all such strings can be written in a variety of ways: This set is the Cantor set, sometimes called the Cantor space to avoid confusion with the Cantor function
In the end, these are all "the same thing".
The Cantor set plays key roles in many branches of mathematics. In recreational mathematics, it underpins the period-doubling fractals; in analysis, it appears in a vast variety of theorems. A key one for stochastic processes is the Wold decomposition, which states that any stationary process can be decomposed into a pair of uncorrelated processes, one deterministic, and the other being a moving average process.
The Ornstein isomorphism theorem states that every stationary stochastic process is equivalent to a Bernoulli scheme (a Bernoulli process with an N-sided (and possibly unfair) gaming die). Other results include that every non-dissipative ergodic system is equivalent to the Markov odometer, sometimes called an "adding machine" because it looks like elementary-school addition, that is, taking a base-N digit sequence, adding one, and propagating the carry bits. The proof of equivalence is very abstract; understanding the result is not: by adding one at each time step, every possible state of the odometer is visited, until it rolls over, and starts again. Likewise, ergodic systems visit each state, uniformly, moving on to the next, until they have all been visited.
Systems that generate (infinite) sequences of N letters are studied by means of symbolic dynamics. Important special cases include subshifts of finite type and sofic systems.
History and etymology
The term ergodic is commonly thought to derive from the Greek words ἔργον (ergon: "work") and ὁδός (hodos: "path", "way"), as chosen by Ludwig Boltzmann while he was working on a problem in statistical mechanics. At the same time it is also claimed to be a derivation of ergomonode, coined by Boltzmann in a relatively obscure paper from 1884. The etymology appears to be contested in other ways as well.
The idea of ergodicity was born in the field of thermodynamics, where it was necessary to relate the individual states of gas molecules to the temperature of a gas as a whole and its time evolution thereof. In order to do this, it was necessary to state what exactly it means for gases to mix well together, so that thermodynamic equilibrium could be defined with mathematical rigor. Once the theory was well developed in physics, it was rapidly formalized and extended, so that ergodic theory has long been an independent area of mathematics in itself. As part of that progression, more than one slightly different definition of ergodicity and multitudes of interpretations of the concept in different fields coexist.
For example, in classical physics the term implies that a system satisfies the ergodic hypothesis of thermodynamics, the relevant state space being position and momentum space.
In dynamical systems theory the state space is usually taken to be a more general phase space. On the other hand, in coding theory the state space is often discrete in both time and state, with less concomitant structure. In all those fields the ideas of time average and ensemble average can also carry extra baggage, as is the case with the many possible thermodynamically relevant partition functions used to define ensemble averages in physics. As such, the measure-theoretic formalization of the concept also serves as a unifying discipline. In 1913 Michel Plancherel proved the strict impossibility of ergodicity for a purely mechanical system.
Ergodicity in physics and geometry
A review of ergodicity in physics, and in geometry follows. In all cases, the notion of ergodicity is exactly the same as that for dynamical systems; there is no difference, except for outlook, notation, style of thinking and the journals where results are published.
Physical systems can be split into three categories: classical mechanics, which describes machines with a finite number of moving parts, quantum mechanics, which describes the structure of atoms, and statistical mechanics, which describes gases, liquids, solids; this includes condensed matter physics. These are presented below.
In statistical mechanics
This section reviews ergodicity in statistical mechanics. The above abstract definition of a volume is required as the appropriate setting for definitions of ergodicity in physics. Consider a container of liquid, or gas, or plasma, or other collection of atoms or particles. Each and every particle has a 3D position, and a 3D velocity, and is thus described by six numbers: a point in six-dimensional space. If there are N of these particles in the system, a complete description requires 6N numbers. Any one system is just a single point in the resulting 6N-dimensional space. The physical system is not all of that space, of course; if it's a box of given width, height and length, then the positions are confined to the box. Nor can velocities be infinite: they are scaled by some probability measure, for example the Boltzmann–Gibbs measure for a gas. Nonetheless, for N close to the Avogadro number, this is obviously a very large space. This space is called the canonical ensemble.
A physical system is said to be ergodic if any representative point of the system eventually comes to visit the entire volume of the system. For the above example, this implies that any given atom not only visits every part of the box with uniform probability, but it does so with every possible velocity, with probability given by the Boltzmann distribution for that velocity (so, uniform with respect to that measure). The ergodic hypothesis states that physical systems actually are ergodic. Multiple time scales are at work: gases and liquids appear to be ergodic over short time scales. Ergodicity in a solid can be viewed in terms of the vibrational modes or phonons, as obviously the atoms in a solid do not exchange locations. Glasses present a challenge to the ergodic hypothesis; time scales are assumed to be in the millions of years, but results are contentious. Spin glasses present particular difficulties.
Formal mathematical proofs of ergodicity in statistical physics are hard to come by; most high-dimensional many-body systems are assumed to be ergodic, without mathematical proof. Exceptions include the dynamical billiards, which model billiard ball-type collisions of atoms in an ideal gas or plasma. The first hard-sphere ergodicity theorem was for Sinai's billiards, which considers two balls, one of them taken as being stationary, at the origin. As the second ball collides, it moves away; applying periodic boundary conditions, it then returns to collide again. By appeal to homogeneity, this return of the "second" ball can instead be taken to be "just some other atom" that has come into range, and is moving to collide with the atom at the origin (which can be taken to be just "any other atom".) This is one of the few formal proofs that exist; there are no equivalent statements e.g. for atoms in a liquid, interacting via van der Waals forces, even if it would be common sense to believe that such systems are ergodic (and mixing). More precise physical arguments can be made, though.
Simple dynamical systems
The formal study of ergodicity can be approached by examining fairly simple dynamical systems. Some of the primary ones are listed here.
The irrational rotation of a circle is ergodic: the orbit of a point eventually comes arbitrarily close to every other point of the circle (the orbit is dense). Such rotations are a special case of the interval exchange map. The beta expansions of a number are ergodic: beta expansions of a real number are done not in base-N, but in base-β for some β > 1. The reflected version of the beta expansion is the tent map; there are a variety of other ergodic maps of the unit interval. Moving to two dimensions, the arithmetic billiards with irrational angles are ergodic. One can also take a flat rectangle, squash it, cut it and reassemble it; this is the previously-mentioned baker's map. Its points can be described by the set of bi-infinite strings in two letters, that is, extending to both the left and right; as such, it looks like two copies of the Bernoulli process. If one deforms sideways during the squashing, one obtains Arnold's cat map. In most ways, the cat map is prototypical of any other similar transformation.
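As a quick numerical illustration of the first claim in this list (a sketch only, with √2 standing in for an arbitrary irrational angle and arbitrary step and bin counts), iterating the rotation and binning the orbit shows its visits spreading evenly over the circle:

```python
import math

def rotation_orbit_histogram(theta=math.sqrt(2), num_steps=1_000_000, bins=10):
    """Iterate the circle rotation x -> x + theta (mod 1) and count visits per bin.

    For irrational theta the orbit equidistributes, so every bin receives
    roughly the same share of visits (Weyl equidistribution).
    """
    x = 0.0
    counts = [0] * bins
    for _ in range(num_steps):
        counts[int(x * bins)] += 1
        x = (x + theta) % 1.0
    return [c / num_steps for c in counts]

if __name__ == "__main__":
    print(rotation_orbit_histogram())  # each entry close to 0.1
```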
In classical mechanics and geometry
Ergodicity is a widespread phenomenon in the study of symplectic manifolds and Riemannian manifolds. Symplectic manifolds provide the generalized setting for classical mechanics, where the motion of a mechanical system is described by a geodesic. Riemannian manifolds are a special case: the cotangent bundle of a Riemannian manifold is always a symplectic manifold. In particular, the geodesics on a Riemannian manifold are given by the solution of the Hamilton–Jacobi equations.
The geodesic flow of a flat torus following any irrational direction is ergodic; informally this means that when drawing a straight line in a square starting at any point, and with an irrational angle with respect to the sides, if every time one meets a side one starts over on the opposite side with the same angle, the line will eventually meet every subset of positive measure. More generally on any flat surface there are many ergodic directions for the geodesic flow.
For non-flat surfaces, one has that the geodesic flow of any negatively curved compact Riemann surface is ergodic. A surface is "compact" in the sense that it has finite surface area. The geodesic flow is a generalization of the idea of moving in a "straight line" on a curved surface: such straight lines are geodesics. One of the earliest cases studied is Hadamard's billiards, which describes geodesics on the Bolza surface, topologically equivalent to a donut with two holes. Ergodicity can be demonstrated informally, if one has a sharpie and some reasonable example of a two-holed donut: starting anywhere, in any direction, one attempts to draw a straight line; rulers are useful for this. It doesn't take all that long to discover that one is not coming back to the starting point. (Of course, crooked drawing can also account for this; that's why we have proofs.)
These results extend to higher dimensions. The geodesic flow for negatively curved compact Riemannian manifolds is ergodic. A classic example for this is the Anosov flow, which is the horocycle flow on a hyperbolic manifold. This can be seen to be a kind of Hopf fibration. Such flows commonly occur in classical mechanics, which is the study in physics of finite-dimensional moving machinery, e.g. the double pendulum and so-forth. Classical mechanics is constructed on symplectic manifolds. The flows on such systems can be deconstructed into stable and unstable manifolds; as a general rule, when this is possible, chaotic motion results. That this is generic can be seen by noting that the cotangent bundle of a Riemannian manifold is (always) a symplectic manifold; the geodesic flow is given by a solution to the Hamilton–Jacobi equations for this manifold. In terms of the canonical coordinates on the cotangent manifold, the Hamiltonian or energy is given by
H(q, p) = ½ g^{ij}(q) p_i p_j, with g^{ij}(q) the (inverse of the) metric tensor and p_i the momentum. The resemblance to the kinetic energy of a point particle is hardly accidental; this is the whole point of calling such things "energy". In this sense, chaotic behavior with ergodic orbits is a more-or-less generic phenomenon in large tracts of geometry.
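For concreteness, the flow generated by this Hamiltonian is governed by Hamilton's equations, which in these coordinates read (a standard statement, not tied to any particular manifold discussed here):

\[
\dot q^i = \frac{\partial H}{\partial p_i} = g^{ij}(q)\,p_j,
\qquad
\dot p_i = -\frac{\partial H}{\partial q^i} = -\tfrac{1}{2}\,\partial_i g^{jk}(q)\,p_j p_k .
\]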
Ergodicity results have been provided in translation surfaces, hyperbolic groups and systolic geometry. Techniques include the study of ergodic flows, the Hopf decomposition, and the Ambrose–Kakutani–Krengel–Kubo theorem. An important class of systems are the Axiom A systems.
A number of both classification and "anti-classification" results have been obtained. The Ornstein isomorphism theorem applies here as well; again, it states that most of these systems are isomorphic to some Bernoulli scheme. This rather neatly ties these systems back into the definition of ergodicity given for a stochastic process, in the previous section. The anti-classification results state that there are more than a countably infinite number of inequivalent ergodic measure-preserving dynamical systems. This is perhaps not entirely a surprise, as one can use points in the Cantor set to construct similar-but-different systems. See measure-preserving dynamical system for a brief survey of some of the anti-classification results.
In wave mechanics
All of the previous sections considered ergodicity either from the point of view of a measurable dynamical system, or from the dual notion of tracking the motion of individual particle trajectories. A closely related concept occurs in (non-linear) wave mechanics. There, the resonant interaction allows for the mixing of normal modes, often (but not always) leading to the eventual thermalization of the system. One of the earliest systems to be rigorously studied in this context is the Fermi–Pasta–Ulam–Tsingou problem, a string of weakly coupled oscillators.
A resonant interaction is possible whenever the dispersion relations for the wave media allow three or more normal modes to sum in such a way as to conserve both the total momentum and the total energy. This allows energy concentrated in one mode to bleed into other modes, eventually distributing that energy uniformly across all interacting modes.
Resonant interactions between waves help provide insight into the distinction between high-dimensional chaos (that is, turbulence) and thermalization. When normal modes can be combined so that energy and momentum are exactly conserved, then the theory of resonant interactions applies, and energy spreads into all of the interacting modes. When the dispersion relations only allow an approximate balance, turbulence or chaotic motion results. The turbulent modes can then transfer energy into modes that do mix, eventually leading to thermalization, but not before a preceding interval of chaotic motion.
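Written out for the simplest (three-wave) case, and assuming whatever dispersion relation ω(k) the medium provides, the exact balance just described reads:

\[
\mathbf{k}_1 + \mathbf{k}_2 = \mathbf{k}_3,
\qquad
\omega(\mathbf{k}_1) + \omega(\mathbf{k}_2) = \omega(\mathbf{k}_3).
\]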
In quantum mechanics
As to quantum mechanics, there is no universal quantum definition of ergodicity or even chaos (see quantum chaos). However, there is a quantum ergodicity theorem stating that the expectation value of an operator converges to the corresponding microcanonical classical average in the semiclassical limit ℏ → 0. Nevertheless, the theorem does not imply that all eigenstates of the Hamiltonian whose classical counterpart is chaotic are featureless and random. For example, the quantum ergodicity theorem does not exclude the existence of non-ergodic states such as quantum scars. In addition to the conventional scarring, there are two other types of quantum scarring, which further illustrate the weak-ergodicity breaking in quantum chaotic systems: perturbation-induced and many-body quantum scars.
Definition for discrete-time systems
Ergodic measures provide one of the cornerstones with which ergodicity is generally discussed. A formal definition follows.
Invariant measure
Let a measurable space be given. If T is a measurable function from the space to itself and μ a probability measure on it, then a measure-preserving dynamical system is defined as a dynamical system for which μ(T^{-1}(A)) = μ(A) for every measurable set A. Such a T is said to preserve μ; equivalently, μ is T-invariant.
Ergodic measure
A measurable function T is said to be μ-ergodic, or μ is said to be an ergodic measure for T, if T preserves μ and the following condition holds:
For any measurable set A such that T^{-1}(A) = A, either μ(A) = 0 or μ(A) = 1.
In other words, there are no nontrivial T-invariant subsets up to measure 0 (with respect to μ).
Some authors relax the requirement that T preserves μ to the requirement that T is a non-singular transformation with respect to μ, meaning that if A is a measurable subset of zero measure, then so is T^{-1}(A).
Examples
The simplest example is when the underlying space is a finite set and the measure is the counting measure. Then a self-map of the set preserves the counting measure if and only if it is a bijection, and it is ergodic if and only if it has only one orbit (that is, for every pair of points x, y there exists an integer k such that the k-th iterate of the map sends x to y). For example, if the set is {1, 2, 3, 4}, then the cycle sending 1 → 2 → 3 → 4 → 1 is ergodic, but the permutation exchanging 1 with 2 and 3 with 4 is not (it has the two invariant subsets {1, 2} and {3, 4}).
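A short sketch of this finite example (function and variable names are illustrative choices): ergodicity of a bijection of a finite set for the counting measure reduces to checking that a single orbit covers the whole set.

```python
def is_ergodic_for_counting_measure(perm):
    """A bijection of a finite set preserves the counting measure; it is ergodic
    exactly when it has a single orbit.  `perm` maps index i to perm[i]."""
    n = len(perm)
    seen = {0}
    i = 0
    for _ in range(n - 1):
        i = perm[i]
        seen.add(i)
    return len(seen) == n

if __name__ == "__main__":
    print(is_ergodic_for_counting_measure([1, 2, 3, 0]))  # 4-cycle: True
    print(is_ergodic_for_counting_measure([1, 0, 3, 2]))  # two 2-cycles: False
```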
Equivalent formulations
The definition given above admits the following immediate reformulations:
for every with we have or (where denotes the symmetric difference);
for every with positive measure we have ;
for every two sets of positive measure, there exists such that ;
Every measurable function with is constant on a subset of full measure.
Importantly for applications, the condition in the last characterisation can be restricted to square-integrable functions only:
If and then is constant almost everywhere.
Further examples
Bernoulli shifts and subshifts
Let S be a finite set and consider the space of infinite sequences of elements of S with the product measure (each factor being endowed with its counting measure). Then the shift operator, defined by deleting the first term of a sequence, is ergodic for this measure.
There are many more ergodic measures for the shift map on . Periodic sequences give finitely supported measures. More interestingly, there are infinitely-supported ones which are subshifts of finite type.
Irrational rotations
Let the underlying space be the unit circle, with its Lebesgue measure. For any angle θ, the corresponding rotation advances every point of the circle by θ. If θ is a rational multiple of 2π, then the rotation is not ergodic for the Lebesgue measure, as it has infinitely many finite orbits. On the other hand, if θ is an irrational multiple of 2π, then the rotation is ergodic.
Arnold's cat map
Let the space be the 2-torus. Then any 2×2 integer matrix with determinant ±1 defines a self-map of the torus, since multiplication by the matrix sends integer vectors to integer vectors and so descends to the quotient. When the matrix has rows (2, 1) and (1, 1), one obtains the so-called Arnold's cat map, which is ergodic for the Lebesgue measure on the torus.
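A minimal sketch of the cat map (the initial cluster of points and iteration count are arbitrary choices): iterating a tiny cluster of nearby points scatters it over the torus, in line with ergodicity for Lebesgue measure.

```python
def cat_map(x: float, y: float):
    """One step of Arnold's cat map on the unit torus: (x, y) -> (2x + y, x + y) mod 1.
    The underlying integer matrix has determinant 1, so Lebesgue measure is preserved."""
    return (2 * x + y) % 1.0, (x + y) % 1.0

if __name__ == "__main__":
    # Iterate a tiny cluster of points; after a few dozen steps they are
    # spread over the whole square rather than staying clustered.
    points = [(0.30 + 1e-4 * i, 0.40 + 1e-4 * j) for i in range(5) for j in range(5)]
    for _ in range(30):
        points = [cat_map(x, y) for x, y in points]
    print(points[:3])
```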
Ergodic theorems
If μ is a probability measure on a space which is ergodic for a transformation T, the pointwise ergodic theorem of G. Birkhoff states that for every μ-integrable function f and for μ-almost every point x, the time average on the orbit of x converges to the space average of f. Formally this means that
\lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f(T^k x) = \int f \, d\mu .
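A hedged numerical check of this statement for the irrational circle rotation (the observable, starting point and step count are arbitrary choices): the time average of cos(2πx) along one orbit settles near its space average, which is 0.

```python
import math

def birkhoff_time_average(f, theta=math.sqrt(2), x0=0.123, num_steps=1_000_000):
    """Time average of f along the orbit of the circle rotation x -> x + theta (mod 1).

    By Birkhoff's pointwise ergodic theorem (the rotation being ergodic for
    irrational theta), this converges to the space average of f over [0, 1)."""
    x, total = x0, 0.0
    for _ in range(num_steps):
        total += f(x)
        x = (x + theta) % 1.0
    return total / num_steps

if __name__ == "__main__":
    f = lambda x: math.cos(2 * math.pi * x)
    print(birkhoff_time_average(f))  # close to the space average, which is 0
```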
The mean ergodic theorem of J. von Neumann is a similar, weaker statement about averaged translates of square-integrable functions.
Related properties
Dense orbits
An immediate consequence of the definition of ergodicity is that, on a topological space equipped with the σ-algebra of Borel sets, if a transformation is ergodic for a measure then almost every orbit is dense in the support of that measure.
This is not an equivalence, since for a transformation which is not uniquely ergodic, but which has an ergodic measure with full support, one can take any other ergodic measure and average the two: the averaged measure is not ergodic for the transformation, but its orbits are dense in the support. Explicit examples can be constructed with shift-invariant measures.
Mixing
A transformation of a probability measure space is said to be mixing for the measure if for any measurable sets A and B the following holds:
\lim_{n \to \infty} \mu(T^{-n}A \cap B) = \mu(A)\,\mu(B).
It is immediate that a mixing transformation is also ergodic (taking A to be an invariant subset and B its complement). The converse is not true, for example a rotation with irrational angle on the circle (which is ergodic per the examples above) is not mixing (for a sufficiently small interval its successive images will, most of the time, not intersect the original interval). Bernoulli shifts are mixing, and so is Arnold's cat map.
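A rough Monte-Carlo sketch of the mixing property for the cat map (the sample size, iteration count and the particular sets A and B are arbitrary choices, and floating-point round-off makes this only a statistical illustration):

```python
import random

def cat_map(x, y):
    """One step of Arnold's cat map on the unit torus."""
    return (2 * x + y) % 1.0, (x + y) % 1.0

def mixing_estimate(n_iter=20, samples=200_000, seed=2):
    """Monte-Carlo estimate of mu(A intersect T^-n B) for Arnold's cat map,
    with A = {x < 1/2} and B = {y < 1/2}.  For a mixing map this should
    approach mu(A) * mu(B) = 1/4 as n grows."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        in_a = x < 0.5
        for _ in range(n_iter):
            x, y = cat_map(x, y)
        if in_a and y < 0.5:
            hits += 1
    return hits / samples

if __name__ == "__main__":
    print(mixing_estimate())  # close to 0.25
```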
This notion of mixing is sometimes called strong mixing, as opposed to weak mixing, which means that
\lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} \left| \mu(T^{-n}A \cap B) - \mu(A)\,\mu(B) \right| = 0
for all measurable sets A and B.
Proper ergodicity
The transformation is said to be properly ergodic if it does not have an orbit of full measure. In the discrete case this means that the measure is not supported on a finite orbit of .
Definition for continuous-time dynamical systems
The definition is essentially the same for continuous-time dynamical systems as for a single transformation. Such a system is given by a family of measurable functions T_t from the measurable space to itself, indexed by the time t ≥ 0, so that for any s, t the relation T_{s+t} = T_s ∘ T_t holds (usually it is also asked that the orbit map (t, x) ↦ T_t(x) is jointly measurable). If μ is a probability measure on the space then we say that the family is μ-ergodic, or that μ is an ergodic measure for it, if each T_t preserves μ and the following condition holds:
For any measurable set A, if T_t^{-1}(A) = A for all t, then either μ(A) = 0 or μ(A) = 1.
Examples
As in the discrete case the simplest example is that of a transitive action, for instance the action of the real line on the circle by rotations, which is ergodic for Lebesgue measure.
An example with infinitely many orbits is given by the flow along an irrational slope on the torus: take the 2-torus and a real number α, and let the flow at time t move a point (x, y) to (x + t, y + αt) (mod 1). If α is irrational this is ergodic for the Lebesgue measure.
Ergodic flows
Further examples of ergodic flows are:
Billiards in convex Euclidean domains;
the geodesic flow of a negatively curved Riemannian manifold of finite volume is ergodic (for the normalised volume measure);
the horocycle flow on a hyperbolic manifold of finite volume is ergodic (for the normalised volume measure)
Ergodicity in compact metric spaces
If is a compact metric space it is naturally endowed with the σ-algebra of Borel sets. The additional structure coming from the topology then allows a much more detailed theory for ergodic transformations and measures on .
Functional analysis interpretation
A very powerful alternate definition of ergodic measures can be given using the theory of Banach spaces. Radon measures on the space form a Banach space of which the set of probability measures is a convex subset. Given a continuous transformation of the space, the subset of invariant probability measures is a closed convex subset, and a measure is ergodic for the transformation if and only if it is an extreme point of this convex set.
Existence of ergodic measures
In the setting above it follows from the Banach–Alaoglu theorem that there always exist extreme points in the set of invariant probability measures. Hence a transformation of a compact metric space always admits ergodic measures.
Ergodic decomposition
In general an invariant measure need not be ergodic, but as a consequence of Choquet theory it can always be expressed as the barycenter of a probability measure on the set of ergodic measures. This is referred to as the ergodic decomposition of the measure.
Example
In the case of the four-point set above with the permutation exchanging 1 with 2 and 3 with 4, the (normalised) counting measure is not ergodic. The ergodic measures are the uniform measures supported on the subsets {1, 2} and {3, 4}, and every invariant probability measure can be written as a convex combination of these two. In particular, the equal-weight combination of the two is the ergodic decomposition of the normalised counting measure.
Continuous systems
Everything in this section transfers verbatim to continuous actions of or on compact metric spaces.
Unique ergodicity
The transformation is said to be uniquely ergodic if there is a unique Borel probability measure on which is ergodic for .
In the examples considered above, irrational rotations of the circle are uniquely ergodic; shift maps are not.
Probabilistic interpretation: ergodic processes
If is a discrete-time stochastic process on a space , it is said to be ergodic if the joint distribution of the variables on is invariant under the shift map . This is a particular case of the notions discussed above.
The simplest case is that of an independent and identically distributed process which corresponds to the shift map described above. Another important case is that of a Markov chain which is discussed in detail below.
A similar interpretation holds for continuous-time stochastic processes though the construction of the measurable structure of the action is more complicated.
Ergodicity of Markov chains
The dynamical system associated with a Markov chain
Let S be a finite set. A Markov chain on S is defined by a matrix P, where P(s_1, s_2) is the transition probability from s_1 to s_2, so that for every s the row sums satisfy Σ_{s'} P(s, s') = 1. A stationary measure for P is a probability measure ν on S such that νP = ν; that is, Σ_s ν(s) P(s, s') = ν(s') for all s'.
Using this data we can define a probability measure μ on the set of infinite sequences of states, with its product σ-algebra, by giving the measures of the cylinders as follows:
\mu(\{x : x_0 = s_0, \ldots, x_n = s_n\}) = \nu(s_0)\, P(s_0, s_1) \cdots P(s_{n-1}, s_n).
Stationarity of ν then means that the measure μ is invariant under the shift map.
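A small sketch of the first ingredient above (the transition matrix here is an arbitrary illustrative choice): the stationary measure of a finite chain can be approximated by power iteration, repeatedly applying ν ← νP.

```python
def stationary_distribution(P, iterations=10_000):
    """Approximate the stationary measure nu of a finite Markov chain with
    transition matrix P (rows sum to 1) by repeatedly applying nu <- nu P."""
    n = len(P)
    nu = [1.0 / n] * n
    for _ in range(iterations):
        nu = [sum(nu[i] * P[i][j] for i in range(n)) for j in range(n)]
    return nu

if __name__ == "__main__":
    # A small irreducible chain on three states (an illustrative choice).
    P = [[0.5, 0.5, 0.0],
         [0.2, 0.3, 0.5],
         [0.3, 0.0, 0.7]]
    nu = stationary_distribution(P)
    print(nu)
    # Check stationarity: nu P should reproduce nu.
    print([sum(nu[i] * P[i][j] for i in range(3)) for j in range(3)])
```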
Criterion for ergodicity
The measure is always ergodic for the shift map if the associated Markov chain is irreducible (any state can be reached with positive probability from any other state in a finite number of steps).
The hypotheses above imply that there is a unique stationary measure for the Markov chain. In terms of the matrix a sufficient condition for this is that 1 be a simple eigenvalue of the matrix and all other eigenvalues of (in ) are of modulus <1.
Note that in probability theory the Markov chain is called ergodic if in addition each state is aperiodic (the times where the return probability is positive are not multiples of a single integer >1). This is not necessary for the invariant measure to be ergodic; hence the notions of "ergodicity" for a Markov chain and the associated shift-invariant measure are different (the one for the chain is strictly stronger).
Moreover the criterion is an "if and only if" if all communicating classes in the chain are recurrent and we consider all stationary measures.
Examples
Counting measure
If all transition probabilities are equal (each entry of the matrix is 1/|S|), then the stationary measure is the uniform (normalised counting) measure, and the measure on sequences is the corresponding product measure. The Markov chain is ergodic, so the shift example from above is a special case of the criterion.
Non-ergodic Markov chains
Markov chains with recurring communicating classes which are not irreducible are not ergodic, and this can be seen immediately as follows. If there are two distinct recurrent communicating classes, there are nonzero stationary measures supported on each of them, and the sets of sequences staying in one class or the other are both shift-invariant and of measure 1/2 for the invariant probability measure obtained by averaging the two. A very simple example of that is the chain on two states given by the identity matrix (both states are absorbing).
A periodic chain
The Markov chain on two states given by the matrix that exchanges the two states (zeros on the diagonal, ones off the diagonal) is irreducible but periodic. Thus it is not ergodic in the sense of Markov chains, though the associated measure on the sequence space is ergodic for the shift map. However the shift is not mixing for this measure: for cylinder sets A and B fixing the state seen at time 0, the quantity μ(A ∩ T^{-n}B) oscillates with period two as n grows, instead of converging to the product μ(A)μ(B).
Generalisations
The definition of ergodicity also makes sense for group actions. The classical theory (for invertible transformations) corresponds to actions of or .
For non-abelian groups there might not be invariant measures even on compact metric spaces. However the definition of ergodicity carries over unchanged if one replaces invariant measures by quasi-invariant measures.
Important examples are the action of a semisimple Lie group (or a lattice therein) on its Furstenberg boundary.
A measurable equivalence relation is said to be ergodic if all saturated subsets are either null or conull.
Notes
References
External links
Karma Dajani and Sjoerd Dirksin, "A Simple Introduction to Ergodic Theory"
Ergodic theory | 0.776381 | 0.996166 | 0.773404 |
Mu (letter) | Mu (; uppercase Μ, lowercase μ; Ancient Greek , or μυ—both ) is the twelfth letter of the Greek alphabet, representing the voiced bilabial nasal . In the system of Greek numerals it has a value of 40. Mu was derived from the Egyptian hieroglyphic symbol for water, which had been simplified by the Phoenicians and named after their word for water, to become 𐤌 (mem). Letters that derive from mu include the Roman M and the Cyrillic М, though the lowercase resembles a small Latin U (u).
Names
Ancient Greek
In Greek, the name of the letter was written and pronounced .
Modern Greek
In Modern Greek, the letter is spelled and pronounced . In polytonic orthography, it is written with an acute accent: .
Use as symbol
The lowercase letter mu (μ) is used as a special symbol in many academic fields. Uppercase mu is not used, because it appears identical to Latin M.
Prefix for units of measurement
"μ" is used as a unit prefix denoting a factor of 10−6 (one millionth), in this context, the symbol's name is "micro".
Metric prefix
International System of Units prefix, also known as "SI prefix"
The micrometre with a symbol of "μm" can also be referred to as the non-SI term "micron".
Mathematics
"μ" is conventionally used to denote certain things; however, any Greek letter or other symbol may be used freely as a variable name.
a measure in measure theory
minimalization in computability theory and Recursion theory
the integrating factor in ordinary differential equations
the degree of membership in a fuzzy set
the Möbius function in number theory
the population mean or expected value in probability and statistics
the Ramanujan–Soldner constant
Physics and engineering
In classical physics and engineering:
the coefficient of friction (also used in aviation as braking coefficient (see Braking action))
reduced mass in the two-body problem
Standard gravitational parameter in celestial mechanics
linear density, or mass per unit length, in strings and other one-dimensional objects
permeability in electromagnetism
the magnetic dipole moment of a current-carrying coil
dynamic viscosity in fluid mechanics
the amplification factor or voltage gain of a triode vacuum tube
the electrical mobility of a charged particle
the rotor advance ratio, the ratio of aircraft airspeed to rotor-tip speed in rotorcraft
the pore water pressure in saturated soil
In particle physics:
the elementary particles called the muon and antimuon
the proton-to-electron mass ratio
In thermodynamics:
the chemical potential of a system or component of a system
Computer science
In evolutionary algorithms:
μ, population size from which in each generation λ offspring will generate (the terms μ and λ originate from evolution strategy notation)
In type theory:
Used to introduce a recursive data type. For example, is the type of lists with elements of type (a type variable): a sum of unit, representing , with a pair of a and another (represented by ). In this notation, is a binding form, where the variable introduced by is bound within the following term to the term itself. Via substitution and arithmetic, the type expands to , an infinite sum of ever-increasing products of (that is, a is any -tuple of values of type for any ). Another way to express the same type is .
Chemistry
In chemistry:
the prefix given in IUPAC nomenclature for a bridging ligand
Biology
In biology:
the mutation rate in population genetics
A class of Immunoglobulin heavy chain that defines IgM type Antibodies
Pharmacology
In pharmacology:
an important opiate receptor
Orbital mechanics
In orbital mechanics:
Standard gravitational parameter of a celestial body, the product of the gravitational constant G and the mass M
planetary discriminant, represents an experimental measure of the actual degree of cleanliness of the orbital zone, a criterion for defining a planet. The value of μ is calculated by dividing the mass of the candidate body by the total mass of the other objects that share its orbital zone.
Music
Mu chord
Electronic musician Mike Paradinas runs the label Planet Mu which utilizes the letter as its logo, and releases music under the pseudonym μ-Ziq, pronounced "music"
Used as the name of the school idol group μ's, pronounced "muse", consisting of nine singing idols in the anime Love Live! School Idol Project
Official fandom name of Kpop group f(x), appearing as either MeU or 'μ'
Hip-hop artist Muonboy has taken inspiration from the particle for his stage name and his first EP named Mu uses the letter as its title.
Cameras
The Olympus Corporation manufactures a series of digital cameras called Olympus μ (known as Olympus Stylus in North America).
Linguistics
In phonology:
mora
In syntax:
μP (mu phrase) can be used as the name for a functional projection.
In Celtic linguistics:
/μ/ can represent an Old Irish nasalized labial fricative of uncertain articulation, the ancestor of the sound represented by Modern Irish mh.
Unicode
The lower-case mu (as "micro sign") appeared at in the 8-bit ISO-8859-1 encoding, from which Unicode and many other encodings inherited it. It was also at in the popular CP437 on the IBM PC. Unicode has declared that a "real" mu is different than the micro sign.
These are only to be used for mathematical text, not for text styling:
Image list for readers with font problems
See also
Greek letters used in mathematics, science, and engineering
Fraser alphabet#Consonants
References
Greek letters | 0.774347 | 0.998758 | 0.773385 |
Scalar (physics) | Scalar quantities or simply scalars are physical quantities that can be described by a single pure number (a scalar, typically a real number), accompanied by a unit of measurement, as in "10cm" (ten centimeters).
Examples of scalar quantities are length, mass, charge, volume, and time.
Scalars may represent the magnitude of physical quantities, such as speed, which is the magnitude of velocity.
Scalars are unaffected by changes to a vector space basis (i.e., a coordinate rotation) but may be affected by translations (as in relative speed).
A change of a vector space basis changes the description of a vector in terms of the basis used but does not change the vector itself, while a scalar has nothing to do with this change. In classical physics, like Newtonian mechanics, rotations and reflections preserve scalars, while in relativity, Lorentz transformations or space-time translations preserve scalars. The term "scalar" has origin in the multiplication of vectors by a unitless scalar, which is a uniform scaling transformation.
Relationship with the mathematical concept
A scalar in physics and other areas of science is also a scalar in mathematics, as an element of a mathematical field used to define a vector space. For example, the magnitude (or length) of an electric field vector is calculated as the square root of its absolute square (the inner product of the electric field with itself); so, the inner product's result is an element of the mathematical field for the vector space in which the electric field is described. As the vector space in this example and usual cases in physics is defined over the mathematical field of real numbers or complex numbers, the magnitude is also an element of the field, so it is mathematically a scalar. Since the inner product is independent of any vector space basis, the electric field magnitude is also physically a scalar.
The mass of an object is unaffected by a change of vector space basis so it is also a physical scalar, described by a real number as an element of the real number field. Since a field is a vector space with addition defined based on vector addition and multiplication defined as scalar multiplication, the mass is also a mathematical scalar.
Scalar field
Since scalars mostly may be treated as special cases of multi-dimensional quantities such as vectors and tensors, physical scalar fields might be regarded as a special case of more general fields, like vector fields, spinor fields, and tensor fields.
Units
Like other physical quantities, a physical quantity of scalar is also typically expressed by a numerical value and a physical unit, not merely a number, to provide its physical meaning. It may be regarded as the product of the number and the unit (e.g., 1 km as a physical distance is the same as 1,000 m). A physical distance does not depend on the length of each base vector of the coordinate system where the base vector length corresponds to the physical distance unit in use. (E.g., 1 m base vector length means the meter unit is used.) A physical distance differs from a metric in the sense that it is not just a real number while the metric is calculated to a real number, but the metric can be converted to the physical distance by converting each base vector length to the corresponding physical unit.
Any change of a coordinate system may affect the formula for computing scalars (for example, the Euclidean formula for distance in terms of coordinates relies on the basis being orthonormal), but not the scalars themselves. Vectors themselves also do not change by a change of a coordinate system, but their descriptions change (e.g., a change of numbers representing a position vector by rotating a coordinate system in use).
Classical scalars
An example of a scalar quantity is temperature: the temperature at a given point is a single number. Velocity, on the other hand, is a vector quantity.
Other examples of scalar quantities are mass, charge, volume, time, speed, pressure, and electric potential at a point inside a medium. The distance between two points in three-dimensional space is a scalar, but the direction from one of those points to the other is not, since describing a direction requires two physical quantities such as the angle on the horizontal plane and the angle away from that plane. Force cannot be described using a scalar, since force has both direction and magnitude; however, the magnitude of a force alone can be described with a scalar, for instance the gravitational force acting on a particle is not a scalar, but its magnitude is. The speed of an object is a scalar (e.g., 180 km/h), while its velocity is not (e.g. a velocity of 180 km/h in a roughly northwest direction might consist of 108 km/h northward and 144 km/h westward).
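A trivial sketch of the speed example just given (the function name and units are illustrative choices): the scalar speed is obtained as the magnitude of the velocity vector.

```python
import math

def speed_from_velocity(vx_kmh: float, vy_kmh: float) -> float:
    """Return the speed (a scalar) as the magnitude of a 2-D velocity vector."""
    return math.hypot(vx_kmh, vy_kmh)

if __name__ == "__main__":
    # 108 km/h northward and 144 km/h westward combine to a 180 km/h speed.
    print(speed_from_velocity(108.0, 144.0))  # 180.0
```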
Some other examples of scalar quantities in Newtonian mechanics are electric charge and charge density.
Relativistic scalars
In the theory of relativity, one considers changes of coordinate systems that trade space for time. As a consequence, several physical quantities that are scalars in "classical" (non-relativistic) physics need to be combined with other quantities and treated as four-vectors or tensors. For example, the charge density at a point in a medium, which is a scalar in classical physics, must be combined with the local current density (a 3-vector) to comprise a relativistic 4-vector. Similarly, energy density must be combined with momentum density and pressure into the stress–energy tensor.
Examples of scalar quantities in relativity include electric charge, spacetime interval (e.g., proper time and proper length), and invariant mass.
Pseudoscalar
See also
Invariant (physics)
Relative scalar
Scalar (mathematics)
Notes
References
External links | 0.777372 | 0.994871 | 0.773385 |
Thermal equilibrium | Two physical systems are in thermal equilibrium if there is no net flow of thermal energy between them when they are connected by a path permeable to heat. Thermal equilibrium obeys the zeroth law of thermodynamics. A system is said to be in thermal equilibrium with itself if the temperature within the system is spatially uniform and temporally constant.
Systems in thermodynamic equilibrium are always in thermal equilibrium, but the converse is not always true. If the connection between the systems allows transfer of energy as 'change in internal energy' but does not allow transfer of matter or transfer of energy as work, the two systems may reach thermal equilibrium without reaching thermodynamic equilibrium.
Two varieties of thermal equilibrium
Relation of thermal equilibrium between two thermally connected bodies
The relation of thermal equilibrium is an instance of equilibrium between two bodies, which means that it refers to transfer of energy through a selectively permeable partition, one permeable to heat but not to matter or to work; such a connection is called a diathermal connection. According to Lieb and Yngvason, the essential meaning of the relation of thermal equilibrium includes that it is reflexive and symmetric. It is not included in the essential meaning whether it is or is not transitive. After discussing the semantics of the definition, they postulate a substantial physical axiom, that they call the "zeroth law of thermodynamics", that thermal equilibrium is a transitive relation. They comment that the equivalence classes of systems so established are called isotherms.
Internal thermal equilibrium of an isolated body
Thermal equilibrium of a body in itself refers to the body when it is isolated. The background is that no heat enters or leaves it, and that it is allowed unlimited time to settle under its own intrinsic characteristics. When it is completely settled, so that macroscopic change is no longer detectable, it is in its own thermal equilibrium. It is not implied that it is necessarily in other kinds of internal equilibrium. For example, it is possible that a body might reach internal thermal equilibrium but not be in internal chemical equilibrium; glass is an example.
One may imagine an isolated system, initially not in its own state of internal thermal equilibrium. It could be subjected to a fictive thermodynamic operation of partition into two subsystems separated by nothing, no wall. One could then consider the possibility of transfers of energy as heat between the two subsystems. A long time after the fictive partition operation, the two subsystems will reach a practically stationary state, and so be in the relation of thermal equilibrium with each other. Such an adventure could be conducted in indefinitely many ways, with different fictive partitions. All of them will result in subsystems that could be shown to be in thermal equilibrium with each other, testing subsystems from different partitions. For this reason, an isolated system, initially not in its own state of internal thermal equilibrium, but left for a long time, practically always will reach a final state which may be regarded as one of internal thermal equilibrium. Such a final state is one of spatial uniformity or homogeneity of temperature. The existence of such states is a basic postulate of classical thermodynamics. This postulate is sometimes, but not often, called the minus first law of thermodynamics. A notable exception exists for isolated quantum systems which are many-body localized and which never reach internal thermal equilibrium.
Thermal contact
Heat can flow into or out of a closed system by way of thermal conduction or of thermal radiation to or from a thermal reservoir, and when this process is effecting net transfer of heat, the system is not in thermal equilibrium. While the transfer of energy as heat continues, the system's temperature can be changing.
Bodies prepared with separately uniform temperatures, then put into purely thermal communication with each other
If bodies are prepared with separately microscopically stationary states, and are then put into purely thermal connection with each other, by conductive or radiative pathways, they will be in thermal equilibrium with each other just when the connection is followed by no change in either body. But if initially they are not in a relation of thermal equilibrium, heat will flow from the hotter to the colder, by whatever pathway, conductive or radiative, is available, and this flow will continue until thermal equilibrium is reached and then they will have the same temperature.
One form of thermal equilibrium is radiative exchange equilibrium. Two bodies, each with its own uniform temperature, in solely radiative connection, no matter how far apart, or what partially obstructive, reflective, or refractive, obstacles lie in their path of radiative exchange, not moving relative to one another, will exchange thermal radiation, in net the hotter transferring energy to the cooler, and will exchange equal and opposite amounts just when they are at the same temperature. In this situation, Kirchhoff's law of equality of radiative emissivity and absorptivity and the Helmholtz reciprocity principle are in play.
Change of internal state of an isolated system
If an initially isolated physical system, without internal walls that establish adiabatically isolated subsystems, is left long enough, it will usually reach a state of thermal equilibrium in itself, in which its temperature will be uniform throughout, but not necessarily a state of thermodynamic equilibrium, if there is some structural barrier that can prevent some possible processes in the system from reaching equilibrium; glass is an example. Classical thermodynamics in general considers idealized systems that have reached internal equilibrium, and idealized transfers of matter and energy between them.
An isolated physical system may be inhomogeneous, or may be composed of several subsystems separated from each other by walls. If an initially inhomogeneous physical system, without internal walls, is isolated by a thermodynamic operation, it will in general over time change its internal state. Or if it is composed of several subsystems separated from each other by walls, it may change its state after a thermodynamic operation that changes its walls. Such changes may include change of temperature or spatial distribution of temperature, by changing the state of constituent materials. A rod of iron, initially prepared to be hot at one end and cold at the other, when isolated, will change so that its temperature becomes uniform all along its length; during the process, the rod is not in thermal equilibrium until its temperature is uniform. In a system prepared as a block of ice floating in a bath of hot water, and then isolated, the ice can melt; during the melting, the system is not in thermal equilibrium; but eventually, its temperature will become uniform; the block of ice will not re-form. A system prepared as a mixture of petrol vapour and air can be ignited by a spark and produce carbon dioxide and water; if this happens in an isolated system, it will increase the temperature of the system, and during the increase, the system is not in thermal equilibrium; but eventually, the system will settle to a uniform temperature.
Such changes in isolated systems are irreversible in the sense that while such a change will occur spontaneously whenever the system is prepared in the same way, the reverse change will practically never occur spontaneously within the isolated system; this is a large part of the content of the second law of thermodynamics. Truly perfectly isolated systems do not occur in nature, and always are artificially prepared.
In a gravitational field
One may consider a system contained in a very tall adiabatically isolating vessel with rigid walls initially containing a thermally heterogeneous distribution of material, left for a long time under the influence of a steady gravitational field, along its tall dimension, due to an outside body such as the earth. It will settle to a state of uniform temperature throughout, though not of uniform pressure or density, and perhaps containing several phases. It is then in internal thermal equilibrium and even in thermodynamic equilibrium. This means that all local parts of the system are in mutual radiative exchange equilibrium. This means that the temperature of the system is spatially uniform. This is so in all cases, including those of non-uniform external force fields. For an externally imposed gravitational field, this may be proved in macroscopic thermodynamic terms, by the calculus of variations, using the method of Lagrange multipliers. Considerations of kinetic theory or statistical mechanics also support this statement.
Distinctions between thermal and thermodynamic equilibria
There is an important distinction between thermal and thermodynamic equilibrium. According to Münster (1970), in states of thermodynamic equilibrium, the state variables of a system do not change at a measurable rate. Moreover, "The proviso 'at a measurable rate' implies that we can consider an equilibrium only with respect to specified processes and defined experimental conditions." Also, a state of thermodynamic equilibrium can be described by fewer macroscopic variables than any other state of a given body of matter. A single isolated body can start in a state which is not one of thermodynamic equilibrium, and can change till thermodynamic equilibrium is reached. Thermal equilibrium is a relation between two bodies or closed systems, in which transfers are allowed only of energy and take place through a partition permeable to heat, and in which the transfers have proceeded till the states of the bodies cease to change.
An explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by C.J. Adkins. He allows that two systems might be allowed to exchange heat but be constrained from exchanging work; they will naturally exchange heat till they have equal temperatures, and reach thermal equilibrium, but in general, will not be in thermodynamic equilibrium. They can reach thermodynamic equilibrium when they are allowed also to exchange work.
Another explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by B. C. Eu. He considers two systems in thermal contact, one a thermometer, the other a system in which several irreversible processes are occurring. He considers the case in which, over the time scale of interest, it happens that both the thermometer reading and the irreversible processes are steady. Then there is thermal equilibrium without thermodynamic equilibrium. Eu proposes consequently that the zeroth law of thermodynamics can be considered to apply even when thermodynamic equilibrium is not present; also he proposes that if changes are occurring so fast that a steady temperature cannot be defined, then "it is no longer possible to describe the process by means of a thermodynamic formalism. In other words, thermodynamics has no meaning for such a process."
Thermal equilibrium of planets
A planet is in thermal equilibrium when the incident energy reaching it (typically the solar irradiance from its parent star) is equal to the infrared energy radiated away to space.
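A back-of-the-envelope sketch of this balance (the solar constant and albedo are typical illustrative values, and spreading the absorbed flux over the whole sphere assumes rapid rotation): equating absorbed stellar power with blackbody emission gives the familiar equilibrium-temperature estimate of roughly 255 K for Earth-like numbers.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(solar_constant: float, albedo: float) -> float:
    """Equilibrium temperature of a rapidly rotating planet, balancing absorbed
    stellar flux against blackbody emission:  S (1 - a) / 4 = sigma T^4."""
    return ((solar_constant * (1.0 - albedo)) / (4.0 * SIGMA)) ** 0.25

if __name__ == "__main__":
    # Illustrative Earth-like numbers: S ~ 1361 W/m^2, Bond albedo ~ 0.30.
    print(equilibrium_temperature(1361.0, 0.30))  # roughly 255 K
```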
See also
Thermal center
Thermodynamic equilibrium
Radiative equilibrium
Thermal oscillator
Citations
Citation references
Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, third edition, McGraw-Hill, London, .
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, .
Boltzmann, L. (1896/1964). Lectures on Gas Theory, translated by S.G. Brush, University of California Press, Berkeley.
Chapman, S., Cowling, T.G. (1939/1970). The Mathematical Theory of Non-uniform gases. An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, third edition 1970, Cambridge University Press, London.
Gibbs, J.W. (1876/1878). On the equilibrium of heterogeneous substances, Trans. Conn. Acad., 3: 108-248, 343-524, reprinted in The Collected Works of J. Willard Gibbs, Ph.D, LL. D., edited by W.R. Longley, R.G. Van Name, Longmans, Green & Co., New York, 1928, volume 1, pp. 55–353.
Maxwell, J.C. (1867). On the dynamical theory of gases, Phil. Trans. Roy. Soc. London, 157: 49–88.
Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London.
Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London.
Planck, M., (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, first English edition, Longmans, Green and Co., London.
Planck, M. (1914). The Theory of Heat Radiation, second edition translated by M. Masius, P. Blakiston's Son and Co., Philadelphia.
ter Haar, D., Wergeland, H. (1966). Elements of Thermodynamics, Addison-Wesley Publishing, Reading MA.
Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA.
Temperature
Physical quantities
Heat transfer
Thermodynamics | 0.778362 | 0.993414 | 0.773235 |
Radiant energy | In physics, and in particular as measured by radiometry, radiant energy is the energy of electromagnetic and gravitational radiation. As energy, its SI unit is the joule (J). The quantity of radiant energy may be calculated by integrating radiant flux (or power) with respect to time. The symbol Qe is often used throughout literature to denote radiant energy ("e" for "energetic", to avoid confusion with photometric quantities). In branches of physics other than radiometry, electromagnetic energy is referred to using E or W. The term is used particularly when electromagnetic radiation is emitted by a source into the surrounding environment. This radiation may be visible or invisible to the human eye.
Terminology use and history
The term "radiant energy" is most commonly used in the fields of radiometry, solar energy, heating and lighting, but is also sometimes used in other fields (such as telecommunications). In modern applications involving transmission of power from one location to another, "radiant energy" is sometimes used to refer to the electromagnetic waves themselves, rather than their energy (a property of the waves). In the past, the term "electro-radiant energy" has also been used.
The term "radiant energy" also applies to gravitational radiation. For example, the first gravitational waves ever observed were produced by a black hole collision that emitted about 5.3 joules of gravitational-wave energy.
Analysis
Because electromagnetic (EM) radiation can be conceptualized as a stream of photons, radiant energy can be viewed as photon energy – the energy carried by these photons. Alternatively, EM radiation can be viewed as an electromagnetic wave, which carries energy in its oscillating electric and magnetic fields. These two views are completely equivalent and are reconciled to one another in quantum field theory (see wave-particle duality).
EM radiation can have various frequencies. The bands of frequency present in a given EM signal may be sharply defined, as is seen in atomic spectra, or may be broad, as in blackbody radiation. In the particle picture, the energy carried by each photon is proportional to its frequency. In the wave picture, the energy of a monochromatic wave is proportional to its intensity. This implies that if two EM waves have the same intensity, but different frequencies, the one with the higher frequency "contains" fewer photons, since each photon is more energetic.
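A small sketch of the particle-picture bookkeeping (the wavelengths are arbitrary illustrative choices): the energy of one photon follows from E = hf = hc/λ, so at equal intensity the longer-wavelength beam carries more, lower-energy photons.

```python
PLANCK_H = 6.62607015e-34      # Planck constant, J s
SPEED_OF_LIGHT = 2.99792458e8  # speed of light, m / s

def photon_energy_from_wavelength(wavelength_m: float) -> float:
    """Energy of a single photon, E = h * f = h * c / wavelength (in joules)."""
    return PLANCK_H * SPEED_OF_LIGHT / wavelength_m

if __name__ == "__main__":
    # Green light (~500 nm) versus a mid-infrared photon (~10 micrometres):
    # equal-intensity beams of the longer wavelength carry more photons,
    # each of lower energy.
    print(photon_energy_from_wavelength(500e-9))   # ~4.0e-19 J
    print(photon_energy_from_wavelength(10e-6))    # ~2.0e-20 J
```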
When EM waves are absorbed by an object, the energy of the waves is converted to heat (or converted to electricity in case of a photoelectric material). This is a very familiar effect, since sunlight warms surfaces that it irradiates. Often this phenomenon is associated particularly with infrared radiation, but any kind of electromagnetic radiation will warm an object that absorbs it. EM waves can also be reflected or scattered, in which case their energy is redirected or redistributed as well.
Open systems
Radiant energy is one of the mechanisms by which energy can enter or leave an open system. Such a system can be man-made, such as a solar energy collector, or natural, such as the Earth's atmosphere. In geophysics, most atmospheric gases, including the greenhouse gases, allow the Sun's short-wavelength radiant energy to pass through to the Earth's surface, heating the ground and oceans. The absorbed solar energy is partly re-emitted as longer wavelength radiation (chiefly infrared radiation), some of which is absorbed by the atmospheric greenhouse gases. Radiant energy is produced in the sun as a result of nuclear fusion.
Applications
Radiant energy is used for radiant heating. It can be generated electrically by infrared lamps, or can be absorbed from sunlight and used to heat water. The heat energy is emitted from a warm element (floor, wall, overhead panel) and warms people and other objects in rooms rather than directly heating the air. Because of this, the air temperature may be lower than in a conventionally heated building, even though the room appears just as comfortable.
Various other applications of radiant energy have been devised. These include treatment and inspection, separating and sorting, medium of control, and medium of communication. Many of these applications involve a source of radiant energy and a detector that responds to that radiation and provides a signal representing some characteristic of the radiation. Radiant energy detectors produce responses to incident radiant energy either as an increase or decrease in electric potential or current flow or some other perceivable change, such as exposure of photographic film.
SI radiometry units
See also
Luminous energy
Luminescence
Power
Radiometry
Federal Standard 1037C
Transmission
Open system
Photoelectric effect
Photodetector
Photocell
Photoelectric cell
Notes and references
Further reading
Caverly, Donald Philip, Primer of Electronics and Radiant Energy. New York, McGraw-Hill, 1952.
Electromagnetic radiation
Radiometry
Forms of energy | 0.777281 | 0.994691 | 0.773154 |
Partition function (statistical mechanics) | In physics, a partition function describes the statistical properties of a system in thermodynamic equilibrium. Partition functions are functions of the thermodynamic state variables, such as the temperature and volume. Most of the aggregate thermodynamic variables of the system, such as the total energy, free energy, entropy, and pressure, can be expressed in terms of the partition function or its derivatives. The partition function is dimensionless.
Each partition function is constructed to represent a particular statistical ensemble (which, in turn, corresponds to a particular free energy). The most common statistical ensembles have named partition functions. The canonical partition function applies to a canonical ensemble, in which the system is allowed to exchange heat with the environment at fixed temperature, volume, and number of particles. The grand canonical partition function applies to a grand canonical ensemble, in which the system can exchange both heat and particles with the environment, at fixed temperature, volume, and chemical potential. Other types of partition functions can be defined for different circumstances; see partition function (mathematics) for generalizations. The partition function has many physical meanings, as discussed in Meaning and significance.
Canonical partition function
Definition
Initially, let us assume that a thermodynamically large system is in thermal contact with the environment, with a temperature T, and both the volume of the system and the number of constituent particles are fixed. A collection of this kind of system comprises an ensemble called a canonical ensemble. The appropriate mathematical expression for the canonical partition function depends on the degrees of freedom of the system, whether the context is classical mechanics or quantum mechanics, and whether the spectrum of states is discrete or continuous.
Classical discrete system
For a canonical ensemble that is classical and discrete, the canonical partition function is defined as
Z = \sum_{s} e^{-\beta E_s}
where
is the index for the microstates of the system;
is Euler's number;
is the thermodynamic beta, defined as where is the Boltzmann constant;
is the total energy of the system in the respective microstate.
The exponential factor is otherwise known as the Boltzmann factor.
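As a hedged illustration of these definitions (the two-level energies and the temperature are arbitrary choices), the following sketch evaluates the discrete partition function and the resulting Boltzmann probabilities:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J / K

def canonical_partition_function(energies_j, temperature_k):
    """Z = sum over microstates of exp(-E_s / (k_B T)) for discrete energies."""
    beta = 1.0 / (K_B * temperature_k)
    return sum(math.exp(-beta * e) for e in energies_j)

def boltzmann_probabilities(energies_j, temperature_k):
    """Probability of each microstate, exp(-beta E_s) / Z."""
    beta = 1.0 / (K_B * temperature_k)
    z = canonical_partition_function(energies_j, temperature_k)
    return [math.exp(-beta * e) / z for e in energies_j]

if __name__ == "__main__":
    # A hypothetical two-level system with a 0.02 eV gap at room temperature.
    ev = 1.602176634e-19
    energies = [0.0, 0.02 * ev]
    print(boltzmann_probabilities(energies, 300.0))
```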
Classical continuous system
In classical mechanics, the position and momentum variables of a particle can vary continuously, so the set of microstates is actually uncountable. In classical statistical mechanics, it is rather inaccurate to express the partition function as a sum of discrete terms. In this case we must describe the partition function using an integral rather than a sum. For a canonical ensemble that is classical and continuous, the canonical partition function is defined as
Z = \frac{1}{h} \int e^{-\beta H(q, p)} \, \mathrm{d}q \, \mathrm{d}p
where
is the Planck constant;
is the thermodynamic beta, defined as ;
is the Hamiltonian of the system;
is the canonical position;
is the canonical momentum.
To make it into a dimensionless quantity, we must divide it by h, which is some quantity with units of action (usually taken to be the Planck constant).
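As a concrete check of this formula (a standard textbook evaluation, not stated in the text above), consider a single free particle of mass m with Hamiltonian H = p^2/2m confined to a box of volume V. Carrying out the Gaussian momentum integral gives
Z_1 = \frac{1}{h^3} \int_V \mathrm{d}^3 q \int \mathrm{d}^3 p \; e^{-\beta p^2 / 2m} = V \left( \frac{2\pi m}{\beta h^2} \right)^{3/2} = \frac{V}{\lambda^3}, \qquad \lambda = \frac{h}{\sqrt{2\pi m k_B T}},
where \lambda is the thermal de Broglie wavelength.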
Classical continuous system (multiple identical particles)
For a gas of N identical classical noninteracting particles in three dimensions, the partition function is
Z(N, V, T) = \frac{1}{N! \, h^{3N}} \int \exp\!\Big( -\beta \sum_{i=1}^{N} H(\mathbf{q}_i, \mathbf{p}_i) \Big) \, \mathrm{d}^3 q_1 \cdots \mathrm{d}^3 q_N \, \mathrm{d}^3 p_1 \cdots \mathrm{d}^3 p_N = \frac{Z_{\text{single}}^{\,N}}{N!},
where
h is the Planck constant;
\beta is the thermodynamic beta, defined as \beta = 1/(k_B T);
i is the index for the particles of the system;
H(\mathbf{q}_i, \mathbf{p}_i) is the Hamiltonian of a respective particle;
\mathbf{q}_i is the canonical position of the respective particle;
\mathbf{p}_i is the canonical momentum of the respective particle;
the boldface notation \mathbf{q}_i, \mathbf{p}_i indicates that these are vectors in three-dimensional space;
Z_{\text{single}} is the classical continuous partition function of a single particle as given in the previous section.
The reason for the factorial factor N! is discussed below. The extra constant factor in the denominator was introduced because, unlike the discrete form, the continuous form shown above is not dimensionless. As stated in the previous section, to make it into a dimensionless quantity, we must divide it by h^{3N} (where h is usually taken to be the Planck constant).
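Combining the single-particle result sketched above with this N!-corrected product form yields the partition function of a monatomic ideal gas of N noninteracting particles (again a standard textbook result, included only as an illustration):
Z(N, V, T) = \frac{1}{N!} \left( \frac{V}{\lambda^3} \right)^{N}, \qquad \lambda = \frac{h}{\sqrt{2\pi m k_B T}}.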
Quantum mechanical discrete system
For a canonical ensemble that is quantum mechanical and discrete, the canonical partition function is defined as the trace of the Boltzmann factor:
Z = \operatorname{tr}\!\big( e^{-\beta \hat{H}} \big),
where:
\operatorname{tr} denotes the trace of a matrix (or operator);
\beta is the thermodynamic beta, defined as \beta = 1/(k_B T);
\hat{H} is the Hamiltonian operator.
The dimension of e^{-\beta \hat{H}} is the number of energy eigenstates of the system.
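The trace definition can be evaluated directly for any finite-dimensional Hamiltonian. The following Python sketch uses a made-up 2×2 Hamiltonian with \beta = 1 in natural units; summing e^{-\beta \lambda_k} over the eigenvalues \lambda_k is equivalent to taking the trace of the matrix exponential.

import numpy as np

# Hypothetical 2x2 Hamiltonian (energies in units where k_B*T = 1, so beta = 1)
H = np.array([[0.0, 0.5],
              [0.5, 1.0]])
beta = 1.0

# Z = tr exp(-beta H), computed from the energy eigenvalues of H
eigenvalues = np.linalg.eigvalsh(H)
Z = np.sum(np.exp(-beta * eigenvalues))
print(Z)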
Quantum mechanical continuous system
For a canonical ensemble that is quantum mechanical and continuous, the canonical partition function is defined (for a single degree of freedom) as
Z = \frac{1}{h} \int \langle q, p \,|\, e^{-\beta \hat{H}} \,|\, q, p \rangle \, \mathrm{d}q \, \mathrm{d}p,
where:
h is the Planck constant;
\beta is the thermodynamic beta, defined as \beta = 1/(k_B T);
\hat{H} is the Hamiltonian operator;
q is the canonical position;
p is the canonical momentum.
In systems with multiple quantum states s sharing the same energy E_s, it is said that the energy levels of the system are degenerate. In the case of degenerate energy levels, we can write the partition function in terms of the contribution from energy levels (indexed by j) as follows:
Z = \sum_j g_j \, e^{-\beta E_j},
where g_j is the degeneracy factor, or number of quantum states s that have the same energy level defined by E_j = E_s.
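For instance, a hypothetical system with a non-degenerate ground level at energy 0 (g_0 = 1) and a threefold-degenerate excited level at energy \varepsilon (g_1 = 3) would have
Z = \sum_j g_j e^{-\beta E_j} = 1 + 3 e^{-\beta \varepsilon}.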
The above treatment applies to quantum statistical mechanics, where a physical system inside a finite-sized box will typically have a discrete set of energy eigenstates, which we can use as the states s above. In quantum mechanics, the partition function can be more formally written as a trace over the state space (which is independent of the choice of basis):
Z = \operatorname{tr}\!\big( e^{-\beta \hat{H}} \big),
where \hat{H} is the quantum Hamiltonian operator. The exponential of an operator can be defined using the exponential power series.
The classical form of Z is recovered when the trace is expressed in terms of coherent states and when quantum-mechanical uncertainties in the position and momentum of a particle are regarded as negligible. Formally, using bra–ket notation, one inserts under the trace for each degree of freedom the identity:
\mathbf{1} = \int \frac{\mathrm{d}x \, \mathrm{d}p}{h} \, | x, p \rangle \langle x, p |,
where | x, p \rangle is a normalised Gaussian wavepacket centered at position x and momentum p. Thus
Z = \int \frac{\mathrm{d}x \, \mathrm{d}p}{h} \, \langle x, p \,|\, e^{-\beta \hat{H}} \,|\, x, p \rangle.
A coherent state | x, p \rangle is an approximate eigenstate of both operators \hat{x} and \hat{p}, hence also of the Hamiltonian \hat{H}, with errors of the size of the uncertainties. If the uncertainties \Delta x and \Delta p can be regarded as zero, the action of \hat{H} reduces to multiplication by the classical Hamiltonian, and Z reduces to the classical configuration integral.
Connection to probability theory
For simplicity, we will use the discrete form of the partition function in this section. Our results will apply equally well to the continuous form.
Consider a system S embedded into a heat bath B. Let the total energy of both systems be E. Let p_i denote the probability that the system S is in a particular microstate, i, with energy E_i. According to the fundamental postulate of statistical mechanics (which states that all attainable microstates of a system are equally probable), the probability p_i will be proportional to the number of microstates of the total closed system (S, B) in which S is in microstate i with energy E_i. Equivalently, p_i will be proportional to the number of microstates \Omega_B of the heat bath B with energy E - E_i:
p_i \propto \Omega_B(E - E_i).
Assuming that the heat bath's internal energy is much larger than the energy of S, we can Taylor-expand \ln \Omega_B to first order in E_i and use the thermodynamic relation \partial S_B / \partial E = 1/T, where here S_B = k_B \ln \Omega_B and T are the entropy and temperature of the bath respectively:
\ln \Omega_B(E - E_i) \approx \ln \Omega_B(E) - \frac{\partial \ln \Omega_B}{\partial E} E_i = \ln \Omega_B(E) - \frac{E_i}{k_B T}.
Thus
p_i \propto e^{-E_i / (k_B T)} = e^{-\beta E_i}.
Since the total probability to find the system in some microstate (the sum of all p_i) must be equal to 1, we know that the constant of proportionality must be the normalization constant, and so, we can define the partition function to be this constant:
Z = \sum_i e^{-\beta E_i} \qquad \text{so that} \qquad p_i = \frac{e^{-\beta E_i}}{Z}.
Calculating the thermodynamic total energy
In order to demonstrate the usefulness of the partition function, let us calculate the thermodynamic value of the total energy. This is simply the expected value, or ensemble average for the energy, which is the sum of the microstate energies weighted by their probabilities:
\langle E \rangle = \sum_i E_i p_i = \frac{1}{Z} \sum_i E_i e^{-\beta E_i},
or, equivalently,
\langle E \rangle = -\frac{\partial \ln Z}{\partial \beta}.
Incidentally, one should note that if the microstate energies depend on a parameter \lambda in the manner
E_i = E_i^{(0)} + \lambda A_i \qquad \text{for all } i,
then the expected value of A is
\langle A \rangle = \sum_i A_i p_i = -\frac{1}{\beta} \frac{\partial}{\partial \lambda} \ln Z(\beta, \lambda).
This provides us with a method for calculating the expected values of many microscopic quantities. We add the quantity artificially to the microstate energies (or, in the language of quantum mechanics, to the Hamiltonian), calculate the new partition function and expected value, and then set λ to zero in the final expression. This is analogous to the source field method used in the path integral formulation of quantum field theory.
Relation to thermodynamic variables
In this section, we will state the relationships between the partition function and the various thermodynamic parameters of the system. These results can be derived using the method of the previous section and the various thermodynamic relations.
As we have already seen, the thermodynamic energy is
\langle E \rangle = -\frac{\partial \ln Z}{\partial \beta}.
The variance in the energy (or "energy fluctuation") is
\langle (\Delta E)^2 \rangle \equiv \langle (E - \langle E \rangle)^2 \rangle = \frac{\partial^2 \ln Z}{\partial \beta^2}.
The heat capacity is
C_v = \frac{\partial \langle E \rangle}{\partial T} = \frac{1}{k_B T^2} \langle (\Delta E)^2 \rangle.
In general, consider the extensive variable X and intensive variable Y where X and Y form a pair of conjugate variables. In ensembles where Y is fixed (and X is allowed to fluctuate), then the average value of X will be:
\langle X \rangle = \pm \frac{\partial \ln Z}{\partial (\beta Y)}.
The sign will depend on the specific definitions of the variables X and Y. An example would be X = volume and Y = pressure. Additionally, the variance in X will be
\langle (\Delta X)^2 \rangle \equiv \langle (X - \langle X \rangle)^2 \rangle = \frac{\partial^2 \ln Z}{\partial (\beta Y)^2}.
In the special case of entropy, entropy is given by
S = k_B \left( \ln Z + \beta \langle E \rangle \right) = \frac{\partial}{\partial T} \left( k_B T \ln Z \right) = -\frac{\partial A}{\partial T},
where A is the Helmholtz free energy defined as A = U - TS, where U = \langle E \rangle is the total energy and S is the entropy, so that
A = -k_B T \ln Z.
Furthermore, the heat capacity can be expressed as
C_v = T \frac{\partial S}{\partial T} = -T \frac{\partial^2 A}{\partial T^2}.
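These relations can be checked numerically by differentiating \ln Z. The following Python sketch, for a two-level toy system in dimensionless units (k_B = 1, level spacing 1), estimates the mean energy, the energy fluctuation, and the heat capacity by finite differences in \beta.

import numpy as np

# Two-level toy system in dimensionless units (k_B = 1, level spacing = 1)
energies = np.array([0.0, 1.0])

def log_Z(beta):
    return np.log(np.sum(np.exp(-beta * energies)))

beta = 1.0
h = 1e-4  # finite-difference step in beta

# <E> = -d(ln Z)/d(beta)
U = -(log_Z(beta + h) - log_Z(beta - h)) / (2 * h)

# Var(E) = d^2(ln Z)/d(beta)^2, and C = k_B * beta^2 * Var(E)
var_E = (log_Z(beta + h) - 2 * log_Z(beta) + log_Z(beta - h)) / h**2
C = beta**2 * var_E  # k_B = 1 in these units

print(U, var_E, C)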
Partition functions of subsystems
Suppose a system is subdivided into N sub-systems with negligible interaction energy, that is, we can assume the particles are essentially non-interacting. If the partition functions of the sub-systems are \zeta_1, \zeta_2, ..., \zeta_N, then the partition function of the entire system is the product of the individual partition functions:
Z = \prod_{j=1}^{N} \zeta_j.
If the sub-systems have the same physical properties, then their partition functions are equal, \zeta_1 = \zeta_2 = ... = \zeta, in which case
Z = \zeta^N.
However, there is a well-known exception to this rule. If the sub-systems are actually identical particles, in the quantum mechanical sense that they are impossible to distinguish even in principle, the total partition function must be divided by N! (N factorial):
Z = \frac{\zeta^N}{N!}.
This is to ensure that we do not "over-count" the number of microstates. While this may seem like a strange requirement, it is actually necessary to preserve the existence of a thermodynamic limit for such systems. This is known as the Gibbs paradox.
Meaning and significance
It may not be obvious why the partition function, as we have defined it above, is an important quantity. First, consider what goes into it. The partition function is a function of the temperature T and the microstate energies E1, E2, E3, etc. The microstate energies are determined by other thermodynamic variables, such as the number of particles and the volume, as well as microscopic quantities like the mass of the constituent particles. This dependence on microscopic variables is the central point of statistical mechanics. With a model of the microscopic constituents of a system, one can calculate the microstate energies, and thus the partition function, which will then allow us to calculate all the other thermodynamic properties of the system.
The partition function can be related to thermodynamic properties because it has a very important statistical meaning. The probability P_s that the system occupies microstate s is
P_s = \frac{1}{Z} e^{-\beta E_s}.
Thus, as shown above, the partition function plays the role of a normalizing constant (note that it does not depend on s), ensuring that the probabilities sum up to one:
\sum_s P_s = \frac{1}{Z} \sum_s e^{-\beta E_s} = 1.
This is the reason for calling Z the "partition function": it encodes how the probabilities are partitioned among the different microstates, based on their individual energies. Other partition functions for different ensembles divide up the probabilities based on other macrostate variables. As an example: the partition function for the isothermal-isobaric ensemble, the generalized Boltzmann distribution, divides up probabilities based on particle number, pressure, and temperature. The energy is replaced by the characteristic potential of that ensemble, the Gibbs Free Energy. The letter Z stands for the German word Zustandssumme, "sum over states". The usefulness of the partition function stems from the fact that the macroscopic thermodynamic quantities of a system can be related to its microscopic details through the derivatives of its partition function. Finding the partition function is also equivalent to performing a Laplace transform of the density of states function from the energy domain to the β domain, and the inverse Laplace transform of the partition function reclaims the state density function of energies.
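Written out explicitly, the Laplace-transform relationship mentioned above reads
Z(\beta) = \int_0^{\infty} g(E) \, e^{-\beta E} \, \mathrm{d}E,
where g(E) denotes the density of states.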
Grand canonical partition function
We can define a grand canonical partition function for a grand canonical ensemble, which describes the statistics of a constant-volume system that can exchange both heat and particles with a reservoir. The reservoir has a constant temperature T, and a chemical potential μ.
The grand canonical partition function, denoted by \mathcal{Z}, is the following sum over microstates:
\mathcal{Z}(\mu, V, T) = \sum_i e^{\beta(\mu N_i - E_i)}.
Here, each microstate is labelled by i, and has total particle number N_i and total energy E_i. This partition function is closely related to the grand potential, \Phi_{\rm G}, by the relation
-k_B T \ln \mathcal{Z} = \Phi_{\rm G} \equiv \langle E \rangle - TS - \mu \langle N \rangle.
This can be contrasted to the canonical partition function above, which is related instead to the Helmholtz free energy.
It is important to note that the number of microstates in the grand canonical ensemble may be much larger than in the canonical ensemble, since here we consider not only variations in energy but also in particle number. Again, the utility of the grand canonical partition function is that it is related to the probability that the system is in state i:
p_i = \frac{1}{\mathcal{Z}} e^{\beta(\mu N_i - E_i)}.
An important application of the grand canonical ensemble is in deriving exactly the statistics of a non-interacting many-body quantum gas (Fermi–Dirac statistics for fermions, Bose–Einstein statistics for bosons), however it is much more generally applicable than that. The grand canonical ensemble may also be used to describe classical systems, or even interacting quantum gases.
The grand partition function is sometimes written (equivalently) in terms of alternate variables as
\mathcal{Z}(z, V, T) = \sum_{N_i} z^{N_i} Z(N_i, V, T),
where z \equiv e^{\beta \mu} is known as the absolute activity (or fugacity) and Z(N_i, V, T) is the canonical partition function.
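As a minimal worked case (a standard result, included here only as an illustration), a single fermionic orbital of energy \varepsilon that can be occupied by n = 0 or n = 1 particles has
\mathcal{Z} = \sum_{n=0}^{1} e^{\beta(\mu - \varepsilon) n} = 1 + e^{\beta(\mu - \varepsilon)}, \qquad \langle n \rangle = \frac{1}{e^{\beta(\varepsilon - \mu)} + 1},
which is the Fermi–Dirac occupation mentioned above.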
See also
Partition function (mathematics)
Partition function (quantum field theory)
Virial theorem
Widom insertion method
References
Equations of physics
Mathematical model | A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in applied mathematics and in the natural sciences (such as physics, biology, earth science, chemistry) and engineering disciplines (such as computer science, electrical engineering), as well as in non-physical systems such as the social sciences (such as economics, psychology, sociology, political science). It can also be taught as a subject in its own right.
The use of mathematical models to solve problems in business or military operations is a large part of the field of operations research. Mathematical models are also used in music, linguistics, and philosophy (for example, intensively in analytic philosophy). A model may help to explain a system and to study the effects of different components, and to make predictions about behavior.
Elements of a mathematical model
Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, a traditional mathematical model contains most of the following elements:
Governing equations
Supplementary sub-models
Defining equations
Constitutive equations
Assumptions and constraints
Initial and boundary conditions
Classical constraints and kinematic equations
Classifications
Mathematical models are of different types:
Linear vs. nonlinear. If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear. A model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Linear structure implies that a problem can be decomposed into simpler parts that can be treated independently and/or analyzed at a different scale and the results obtained will remain valid for the initial problem when recomposed and rescaled. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
Static vs. dynamic. A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium, and thus is time-invariant. Dynamic models typically are represented by differential equations or difference equations.
Explicit vs. implicit. If all of the input parameters of the overall model are known, and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters which are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method. In such a case the model is said to be implicit. For example, a jet engine's physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties. A minimal sketch contrasting the two cases appears after this list.
Discrete vs. continuous. A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model; while a continuous model represents the objects in a continuous manner, such as the velocity field of fluid in pipe flows, temperatures and stresses in a solid, and electric field that applies continuously over the entire model due to a point charge.
Deterministic vs. probabilistic (stochastic). A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model—usually called a "statistical model"—randomness is present, and variable states are not described by unique values, but rather by probability distributions.
Deductive, inductive, or floating. A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. Application of catastrophe theory in science has been characterized as a floating model.
Strategic vs. non-strategic. Models used in game theory are different in a sense that they model agents with incompatible incentives, such as competing species or bidders in an auction. Strategic models assume that players are autonomous decision makers who rationally choose actions that maximize their objective function. A key challenge of using strategic models is defining and computing solution concepts such as Nash equilibrium. An interesting property of strategic models is that they separate reasoning about rules of the game from reasoning about behavior of the players.
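To make the explicit/implicit distinction concrete, the following Python sketch uses a made-up cubic input–output relationship: evaluating the model is explicit, while recovering the input that produces a given output requires an iterative solve (here Newton's method).

# Explicit model: output y computed directly from input x (hypothetical relationship)
def explicit_model(x):
    return x**3 + 2.0 * x

# Implicit use of the same model: the output is known and the input must be
# found iteratively with Newton's method.
def solve_implicit(y_target, x0=1.0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        f = explicit_model(x) - y_target
        df = 3.0 * x**2 + 2.0        # derivative of the model
        x_new = x - f / df
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(explicit_model(2.0))    # explicit: 12.0
print(solve_implicit(12.0))   # implicit: recovers x = 2.0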
Construction
In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables. Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables).
Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases. For example, economists often apply linear algebra when using input–output models. Complicated mathematical models that have many variables may be consolidated by use of vectors where one symbol represents several variables.
A priori information
Mathematical modeling problems are often classified into black box or white box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take.
Usually, it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, the white-box models are usually considered easier, because if you have used the information correctly, then the model will behave correctly. Often the a priori information comes in forms of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function, but we are still left with several unknown parameters; how rapidly does the medicine amount decay, and what is the initial amount of medicine in blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model.
In black-box models, one tries to estimate both the functional form of relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information we would try to use functions as general as possible to cover all different models. An often used approach for black-box models are neural networks which usually do not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms which were developed as part of nonlinear system identification can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise. The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque.
Subjective information
Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data.
An example of when such approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown; so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability.
Complexity
In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification.
For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximated model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only. Note that better accuracy does not necessarily mean a better model. Statistical models are prone to overfitting which means that a model is fitted to data too much and it has lost its ability to generalize to new events that were not observed before.
Training, tuning, and fitting
Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation. In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting.
Evaluation and assessment
A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation.
Prediction of empirical data
Usually, the easiest part of model evaluation is checking whether a model predicts experimental measurements or other empirical data not used in the model development. In models with parameters, a common approach is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics.
Defining a metric to measure distances between observed and predicted data is a useful tool for assessing model fit. In statistics, decision theory, and some economic models, a loss function plays a similar role. While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit of statistical models than models involving differential equations. Tools from nonparametric statistics can sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form.
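A minimal sketch of this training/verification split, using synthetic data generated from a made-up quadratic process and an ordinary polynomial curve fit in Python:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a hypothetical quadratic process with noise
x = np.linspace(0.0, 10.0, 40)
y = 1.5 * x**2 - 2.0 * x + 3.0 + rng.normal(0.0, 4.0, x.size)

# Split into disjoint training and verification subsets
train = np.arange(x.size) % 2 == 0
verify = ~train

# Estimate the model parameters on the training data only
coeffs = np.polyfit(x[train], y[train], deg=2)

# Assess the fit on verification data the model has never seen
residuals = y[verify] - np.polyval(coeffs, x[verify])
rmse = np.sqrt(np.mean(residuals**2))
print(coeffs, rmse)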
Scope of the model
Assessing the scope of a model, that is, determining what situations the model is applicable to, can be less straightforward. If the model was constructed based on a set of data, one must determine for which systems or situations the known data is a "typical" set of data. The question of whether the model describes well the properties of the system between data points is called interpolation, and the same question for events or data points outside the observed data is called extrapolation.
As an example of the typical limitations of the scope of a model, in evaluating Newtonian classical mechanics, we can note that Newton made his measurements without advanced equipment, so he could not measure properties of particles traveling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macro particles only. It is then not surprising that his model does not extrapolate well into these domains, even though his model is quite sufficient for ordinary life physics.
Philosophical considerations
Many types of modeling implicitly involve claims about causality. This is usually (but not always) true of models involving differential equations. As the purpose of modeling is to increase our understanding of the world, the validity of a model rests not only on its fit to empirical observations, but also on its ability to extrapolate to situations or data beyond those originally described in the model. One can think of this as the differentiation between qualitative and quantitative predictions. One can also argue that a model is worthless unless it provides some insight which goes beyond what is already known from direct investigation of the phenomenon being studied.
An example of such criticism is the argument that the mathematical models of optimal foraging theory do not offer insight that goes beyond the common-sense conclusions of evolution and other basic principles of ecology. It should also be noted that while mathematical modeling uses mathematical concepts and language, it is not itself a branch of mathematics and does not necessarily conform to any mathematical logic, but is typically a branch of some science or other technical subject, with corresponding concepts and standards of argumentation.
Significance in the natural sciences
Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models. Throughout history, more and more accurate mathematical models have been developed. Newton's laws accurately describe many everyday phenomena, but at certain limits theory of relativity and quantum mechanics must be used.
It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws, Maxwell's equations and the Schrödinger equation. These laws are a basis for making mathematical models of real situations. Many real situations are very complex and thus modeled approximately on a computer, a model that is computationally feasible to compute is made from the basic laws or from approximate models made from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis.
Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean.
Some applications
Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations.
A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types; real or integer numbers, Boolean values or strings, for example. The variables represent some properties of the system, for example, the measured system outputs often in the form of signals, timing data, counters, and event occurrence. The actual model is the set of functions that describe the relations between the different variables.
Examples
One of the popular examples in computer science is the mathematical modeling of various machines; an example is the deterministic finite automaton (DFA), which is defined as an abstract mathematical concept but, due to the deterministic nature of a DFA, is implementable in hardware and software for solving various specific problems. For example, the following is a DFA M with a binary alphabet, which requires that the input contains an even number of 0s:
M = (Q, Σ, δ, q0, F),
where
Q = {S1, S2}, Σ = {0, 1}, q0 = S1, F = {S1},
and the transition function δ : Q × Σ → Q
is defined by the following state-transition table:
Current state | Next state on input 0 | Next state on input 1
S1 | S2 | S1
S2 | S1 | S2
The state S1 represents that there has been an even number of 0s in the input so far, while S2 signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state S1, an accepting state, so the input string will be accepted.
The language recognized by M is the regular language given by the regular expression 1*( 0 (1*) 0 (1*) )*, where "*" is the Kleene star, e.g., 1* denotes any non-negative number (possibly zero) of symbols "1".
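The automaton described above translates directly into code. The following Python sketch mirrors the state-transition table and reports whether an input string is accepted.

# DFA M: states S1 (even number of 0s, accepting) and S2 (odd number of 0s);
# reading a 0 toggles the state, reading a 1 leaves it unchanged.
TRANSITIONS = {
    ("S1", "0"): "S2",
    ("S1", "1"): "S1",
    ("S2", "0"): "S1",
    ("S2", "1"): "S2",
}

def accepts(input_string: str) -> bool:
    state = "S1"                      # start state q0
    for symbol in input_string:
        state = TRANSITIONS[(state, symbol)]
    return state == "S1"              # S1 is the only accepting state

print(accepts("1010"))   # True: two 0s
print(accepts("10"))     # False: one 0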
Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel.
Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance traveled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modeling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning.
Population growth. A simple (though approximate) model of population growth is the Malthusian growth model. A slightly more realistic and widely used population growth model is the logistic function, and its extensions; a minimal numerical sketch appears after this list of examples.
Model of a particle in a potential-field. In this model we consider a particle as being a point of mass m which describes a trajectory in space which is modeled by a function giving its coordinates in space as a function of time. The potential field is given by a function V : R^3 → R and the trajectory, that is a function x : R → R^3, is the solution of the differential equation:
m \frac{\mathrm{d}^2 \mathbf{x}}{\mathrm{d}t^2}(t) = -\nabla V(\mathbf{x}(t)),
that can be written also as
m \ddot{\mathbf{x}} = \mathbf{F}(\mathbf{x}), \qquad \mathbf{F} = -\nabla V.
Note this model assumes the particle is a point mass, which is certainly known to be false in many cases in which we use this model; for example, as a model of planetary motion.
Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of n commodities labeled 1, 2, ..., n, each with a market price p_1, p_2, ..., p_n. The consumer is assumed to have an ordinal utility function U (ordinal in the sense that only the sign of the differences between two utilities, and not the level of each utility, is meaningful), depending on the amounts of commodities x_1, x_2, ..., x_n consumed. The model further assumes that the consumer has a budget M which is used to purchase a vector x_1, x_2, ..., x_n in such a way as to maximize U(x_1, x_2, ..., x_n). The problem of rational behavior in this model then becomes a mathematical optimization problem, that is:
\max U(x_1, x_2, \ldots, x_n)
subject to:
\sum_{i=1}^{n} p_i x_i \leq M, \qquad x_i \geq 0 \text{ for all } i.
This model has been used in a wide variety of economic contexts, such as in general equilibrium theory to show existence and Pareto efficiency of economic equilibria.
Neighbour-sensing model is a model that explains the mushroom formation from the initially chaotic fungal network.
In computer science, mathematical models may be used to simulate computer networks.
In mechanics, mathematical models may be used to analyze the movement of a rocket model.
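As a minimal numerical sketch of the population-growth example above (the growth rate, carrying capacity, and initial population are arbitrary illustrative values), the logistic model dP/dt = r P (1 - P/K) can be integrated with a simple Euler step in Python:

# Logistic growth dP/dt = r * P * (1 - P/K), integrated with an Euler step
r, K = 0.5, 1000.0        # hypothetical growth rate and carrying capacity
P, dt = 10.0, 0.1         # initial population and time step

for _ in range(200):      # integrate for 20 time units
    P += dt * r * P * (1.0 - P / K)

print(P)                  # approaches the carrying capacity K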
See also
Agent-based model
All models are wrong
Cliodynamics
Computer simulation
Conceptual model
Decision engineering
Grey box model
International Mathematical Modeling Challenge
Mathematical biology
Mathematical diagram
Mathematical economics
Mathematical modelling of infectious disease
Mathematical finance
Mathematical psychology
Mathematical sociology
Microscale and macroscale models
Model inversion
Resilience (mathematics)
Scientific model
Sensitivity analysis
Statistical model
Surrogate model
System identification
References
Further reading
Books
Aris, Rutherford [1978] (1994). Mathematical Modelling Techniques, New York: Dover.
Bender, E.A. [1978] (2000). An Introduction to Mathematical Modeling, New York: Dover.
Gary Chartrand (1977) Graphs as Mathematical Models, Prindle, Webber & Schmidt
Dubois, G. (2018) "Modeling and Simulation", Taylor & Francis, CRC Press.
Gershenfeld, N. (1998) The Nature of Mathematical Modeling, Cambridge University Press.
Lin, C.C. & Segel, L.A. (1988). Mathematics Applied to Deterministic Problems in the Natural Sciences, Philadelphia: SIAM.
Specific applications
Papadimitriou, Fivos. (2010). Mathematical Modelling of Spatial-Ecological Complex Systems: an Evaluation. Geography, Environment, Sustainability 1(3), 67-80.
An Introduction to Infectious Disease Modelling by Emilia Vynnycky and Richard G White.
External links
General reference
Patrone, F. Introduction to modeling via differential equations, with critical remarks.
Plus teacher and student package: Mathematical Modelling. Brings together all articles on mathematical modeling from Plus Magazine, the online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge.
Philosophical
Frigg, R. and S. Hartmann, Models in Science, in: The Stanford Encyclopedia of Philosophy, (Spring 2006 Edition)
Griffiths, E. C. (2010) What is a model?
Applied mathematics
Conceptual modelling
Knowledge representation
Mathematical terminology
Mathematical and quantitative methods (economics)
Causality (physics) | Causality is the relationship between causes and effects. While causality is also a topic studied from the perspectives of philosophy and physics, it is operationalized so that causes of an event must be in the past light cone of the event and ultimately reducible to fundamental interactions. Similarly, a cause cannot have an effect outside its future light cone.
Macroscopic vs microscopic causality
Causality can be defined macroscopically, at the level of human observers, or microscopically, for fundamental events at the atomic level. The strong causality principle forbids information transfer faster than the speed of light; the weak causality principle operates at the microscopic level and need not lead to information transfer. Physical models can obey the weak principle without obeying the strong version.
Macroscopic causality
In classical physics, an effect cannot occur before its cause which is why solutions such as the advanced time solutions of the Liénard–Wiechert potential are discarded as physically meaningless. In both Einstein's theory of special and general relativity, causality means that an effect cannot occur from a cause that is not in the back (past) light cone of that event. Similarly, a cause cannot have an effect outside its front (future) light cone. These restrictions are consistent with the constraint that mass and energy that act as causal influences cannot travel faster than the speed of light and/or backwards in time. In quantum field theory, observables of events with a spacelike relationship, "elsewhere", have to commute, so the order of observations or measurements of such observables do not impact each other.
Another requirement of causality is that cause and effect be mediated across space and time (requirement of contiguity). This requirement has been very influential in the past, in the first place as a result of direct observation of causal processes (like pushing a cart), in the second place as a problematic aspect of Newton's theory of gravitation (attraction of the earth by the sun by means of action at a distance) replacing mechanistic proposals like Descartes' vortex theory; in the third place as an incentive to develop dynamic field theories (e.g., Maxwell's electrodynamics and Einstein's general theory of relativity) restoring contiguity in the transmission of influences in a more successful way than in Descartes' theory.
Simultaneity
In modern physics, the notion of causality had to be clarified. The word simultaneous is observer-dependent in special relativity. The principle is relativity of simultaneity. Consequently, the relativistic principle of causality says that the cause must precede its effect according to all inertial observers. This is equivalent to the statement that the cause and its effect are separated by a timelike interval, and the effect belongs to the future of its cause. If a timelike interval separates the two events, this means that a signal could be sent between them at less than the speed of light. On the other hand, if signals could move faster than the speed of light, this would violate causality because it would allow a signal to be sent across spacelike intervals, which means that at least to some inertial observers the signal would travel backward in time. For this reason, special relativity does not allow communication faster than the speed of light.
In the theory of general relativity, the concept of causality is generalized in the most straightforward way: the effect must belong to the future light cone of its cause, even if the spacetime is curved. New subtleties must be taken into account when we investigate causality in quantum mechanics and relativistic quantum field theory in particular. In those two theories, causality is closely related to the principle of locality.
Bell's theorem shows that the conditions of "local causality" constrain the correlations that can arise in experiments involving quantum entanglement, and that these constraints are violated by the non-classical correlations predicted by quantum mechanics.
Despite these subtleties, causality remains an important and valid concept in physical theories. For example, the notion that events can be ordered into causes and effects is necessary to prevent (or at least outline) causality paradoxes such as the grandfather paradox, which asks what happens if a time-traveler kills his own grandfather before he ever meets the time-traveler's grandmother. See also Chronology protection conjecture.
Determinism (or, what causality is not)
The word causality in this context means that all effects must have specific physical causes due to fundamental interactions. Causality in this context is not associated with definitional principles such as Newton's second law. As such, in the context of causality, a force does not cause a mass to accelerate nor vice versa. Rather, Newton's Second Law can be derived from the conservation of momentum, which itself is a consequence of the spatial homogeneity of physical laws.
The empiricists' aversion to metaphysical explanations (like Descartes' vortex theory) meant that scholastic arguments about what caused phenomena were either rejected for being untestable or were just ignored. The complaint that physics does not explain the cause of phenomena has accordingly been dismissed as a problem that is philosophical or metaphysical rather than empirical (e.g., Newton's "Hypotheses non fingo"). According to Ernst Mach the notion of force in Newton's second law was pleonastic, tautological and superfluous and, as indicated above, is not considered a consequence of any principle of causality. Indeed, it is possible to consider the Newtonian equations of motion of the gravitational interaction of two bodies,
m_1 \ddot{\mathbf{r}}_1(t) = -\frac{G m_1 m_2}{|\mathbf{r}_1 - \mathbf{r}_2|^3} \,(\mathbf{r}_1 - \mathbf{r}_2), \qquad m_2 \ddot{\mathbf{r}}_2(t) = -\frac{G m_1 m_2}{|\mathbf{r}_2 - \mathbf{r}_1|^3} \,(\mathbf{r}_2 - \mathbf{r}_1),
as two coupled equations describing the positions \mathbf{r}_1(t) and \mathbf{r}_2(t) of the two bodies, without interpreting the right hand sides of these equations as forces; the equations just describe a process of interaction, without any necessity to interpret one body as the cause of the motion of the other, and allow one to predict the states of the system at later (as well as earlier) times.
The ordinary situations in which humans singled out some factors in a physical interaction as being prior and therefore supplying the "because" of the interaction were often ones in which humans decided to bring about some state of affairs and directed their energies to producing that state of affairs—a process that took time to establish and left a new state of affairs that persisted beyond the time of activity of the actor. It would be difficult and pointless, however, to explain the motions of binary stars with respect to each other in that way: those motions are time-reversible and agnostic to the arrow of time, yet once a direction of time is established, the entire evolution of the system can be completely determined from its state at any one moment.
The possibility of such a time-independent view is at the basis of the deductive-nomological (D-N) view of scientific explanation, considering an event to be explained if it can be subsumed under a scientific law. In the D-N view, a physical state is considered to be explained if, applying the (deterministic) law, it can be derived from given initial conditions. (Such initial conditions could include the momenta and distance from each other of binary stars at any given moment.) Such 'explanation by determinism' is sometimes referred to as causal determinism. A disadvantage of the D-N view is that causality and determinism are more or less identified. Thus, in classical physics, it was assumed that all events are caused by earlier ones according to the known laws of nature, culminating in Pierre-Simon Laplace's claim that if the current state of the world were known with precision, it could be computed for any time in the future or the past (see Laplace's demon). However, this is usually referred to as Laplace determinism (rather than 'Laplace causality') because it hinges on determinism in mathematical models as dealt with in the mathematical Cauchy problem.
Confusion between causality and determinism is particularly acute in quantum mechanics, this theory being acausal in the sense that it is unable in many cases to identify the causes of actually observed effects or to predict the effects of identical causes, but arguably deterministic in some interpretations (e.g. if the wave function is presumed not to actually collapse as in the many-worlds interpretation, or if its collapse is due to hidden variables, or simply redefining determinism as meaning that probabilities rather than specific effects are determined).
Distributed causality
Theories in physics like the butterfly effect from chaos theory open up the possibility of a type of distributed parameter systems in causality. The butterfly effect theory proposes:
"Small variations of the initial condition of a nonlinear dynamical system may produce large variations in the long term behavior of the system." This opens up the opportunity to understand a distributed causality.
A related way to interpret the butterfly effect is to see it as highlighting the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions. In classical (Newtonian) physics, in general, only those conditions are (explicitly) taken into account, that are both necessary and sufficient. For instance, when a massive sphere is caused to roll down a slope starting from a point of unstable equilibrium, then its velocity is assumed to be caused by the force of gravity accelerating it; the small push that was needed to set it into motion is not explicitly dealt with as a cause. In order to be a physical cause there must be a certain proportionality with the ensuing effect. A distinction is drawn between triggering and causation of the ball's motion. By the same token the butterfly can be seen as triggering a tornado, its cause being assumed to be seated in the atmospherical energies already present beforehand, rather than in the movements of a butterfly.
Causal sets
In causal set theory, causality takes an even more prominent place. The basis for this approach to quantum gravity is in a theorem by David Malament. This theorem states that the causal structure of a spacetime suffices to reconstruct its conformal class, so knowing the conformal factor and the causal structure is enough to know the spacetime. Based on this, Rafael Sorkin proposed the idea of Causal Set Theory, which is a fundamentally discrete approach to quantum gravity. The causal structure of the spacetime is represented as a poset, while the conformal factor can be reconstructed by identifying each poset element with a unit volume.
See also
(general)
References
Further reading
Bohm, David. (2005). Causality and Chance in Modern Physics. London: Taylor and Francis.
Espinoza, Miguel (2006). Théorie du déterminisme causal. Paris: L'Harmattan. .
External links
Causal Processes, Stanford Encyclopedia of Philosophy
Caltech Tutorial on Relativity — A nice discussion of how observers moving relatively to each other see different slices of time.
Faster-than-c signals, special relativity, and causality. This article explains that faster than light signals do not necessarily lead to a violation of causality.
Causality
Concepts in physics
Time
Philosophy of physics
Time travel
ja:因果律
Mechanical engineering | Mechanical engineering is the study of physical machines that may involve force and movement. It is an engineering branch that combines engineering physics and mathematics principles with materials science, to design, analyze, manufacture, and maintain mechanical systems. It is one of the oldest and broadest of the engineering branches.
Mechanical engineering requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, design, structural analysis, and electricity. In addition to these core principles, mechanical engineers use tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided engineering (CAE), and product lifecycle management to design and analyze manufacturing plants, industrial equipment and machinery, heating and cooling systems, transport systems, motor vehicles, aircraft, watercraft, robotics, medical devices, weapons, and others.
Mechanical engineering emerged as a field during the Industrial Revolution in Europe in the 18th century; however, its development can be traced back several thousand years around the world. In the 19th century, developments in physics led to the development of mechanical engineering science. The field has continually evolved to incorporate advancements; today mechanical engineers are pursuing developments in such areas as composites, mechatronics, and nanotechnology. It also overlaps with aerospace engineering, metallurgical engineering, civil engineering, structural engineering, electrical engineering, manufacturing engineering, chemical engineering, industrial engineering, and other engineering disciplines to varying amounts. Mechanical engineers may also work in the field of biomedical engineering, specifically with biomechanics, transport phenomena, biomechatronics, bionanotechnology, and modelling of biological systems.
History
The application of mechanical engineering can be seen in the archives of various ancient and medieval societies. The six classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) have been known since prehistoric times. Mesopotamian civilization is credited with the invention of the wheel by several, mainly older, sources; however, some recent sources either suggest that it was invented independently in both Mesopotamia and Eastern Europe or credit prehistoric Eastern Europeans with the invention of the wheel. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC.
The Sakia was developed in the Kingdom of Kush during the 4th century BC. It relied on animal power, reducing the demand for human energy. Reservoirs in the form of Hafirs were developed in Kush to store water and boost irrigation. Bloomeries and blast furnaces were developed during the seventh century BC in Meroe. Kushite sundials applied mathematics in the form of advanced trigonometry.
The earliest practical water-powered machines, the water wheel and watermill, first appeared in the Persian Empire, in what are now Iraq and Iran, by the early 4th century BC. In ancient Greece, the works of Archimedes (287–212 BC) influenced mechanics in the Western tradition. The geared Antikythera mechanism was an analog computer invented around the 2nd century BC.
In Roman Egypt, Heron of Alexandria (c. 10–70 AD) created the first steam-powered device (Aeolipile). In China, Zhang Heng (78–139 AD) improved a water clock and invented a seismometer, and Ma Jun (200–265 AD) invented a chariot with differential gears. The medieval Chinese horologist and engineer Su Song (1020–1101 AD) incorporated an escapement mechanism into his astronomical clock tower two centuries before escapement devices were found in medieval European clocks. He also invented the world's first known endless power-transmitting chain drive.
The cotton gin was invented in India by the 6th century AD, and the spinning wheel was invented in the Islamic world by the early 11th century. Dual-roller gins appeared in India and China between the 12th and 14th centuries. The worm gear roller gin appeared in the Indian subcontinent during the early Delhi Sultanate era of the 13th to 14th centuries.
During the Islamic Golden Age (7th to 15th century), Muslim inventors made remarkable contributions in the field of mechanical technology. Al-Jazari, who was one of them, wrote his famous Book of Knowledge of Ingenious Mechanical Devices in 1206 and presented many mechanical designs.
In the 17th century, important breakthroughs in the foundations of mechanical engineering occurred in England and the Continent. The Dutch mathematician and physicist Christiaan Huygens invented the pendulum clock in 1657, which was the first reliable timekeeper for almost 300 years, and published a work dedicated to clock designs and the theory behind them. In England, Isaac Newton formulated Newton's Laws of Motion and developed the calculus, which would become the mathematical basis of physics. Newton was reluctant to publish his works for years, but he was finally persuaded to do so by his colleagues, such as Edmond Halley. Gottfried Wilhelm Leibniz, who earlier designed a mechanical calculator, is also credited with developing the calculus during the same time period.
During the early 19th century Industrial Revolution, machine tools were developed in England, Germany, and Scotland. This allowed mechanical engineering to develop as a separate field within engineering, bringing with it manufacturing machines and the engines to power them. The first British professional society of mechanical engineers, the Institution of Mechanical Engineers, was formed in 1847, thirty years after the civil engineers formed the first such professional society, the Institution of Civil Engineers. On the European continent, Johann von Zimmermann (1820–1901) founded the first factory for grinding machines in Chemnitz, Germany in 1848.
In the United States, the American Society of Mechanical Engineers (ASME) was formed in 1880, becoming the third such professional engineering society, after the American Society of Civil Engineers (1852) and the American Institute of Mining Engineers (1871). The first schools in the United States to offer an engineering education were the United States Military Academy in 1817, an institution now known as Norwich University in 1819, and Rensselaer Polytechnic Institute in 1825. Education in mechanical engineering has historically been based on a strong foundation in mathematics and science.
Education
Degrees in mechanical engineering are offered at various universities worldwide. Mechanical engineering programs typically take four to five years of study depending on the place and university and result in a Bachelor of Engineering (B.Eng. or B.E.), Bachelor of Science (B.Sc. or B.S.), Bachelor of Science Engineering (B.Sc.Eng.), Bachelor of Technology (B.Tech.), Bachelor of Mechanical Engineering (B.M.E.), or Bachelor of Applied Science (B.A.Sc.) degree, in or with emphasis in mechanical engineering. In Spain, Portugal and most of South America, where neither B.S. nor B.Tech. programs have been adopted, the formal name for the degree is "Mechanical Engineer", and the course work is based on five or six years of training. In Italy the course work is based on five years of education, and training, but in order to qualify as an Engineer one has to pass a state exam at the end of the course. In Greece, the coursework is based on a five-year curriculum.
In the United States, most undergraduate mechanical engineering programs are accredited by the Accreditation Board for Engineering and Technology (ABET) to ensure similar course requirements and standards among universities. The ABET web site lists 302 accredited mechanical engineering programs as of 11 March 2014. Mechanical engineering programs in Canada are accredited by the Canadian Engineering Accreditation Board (CEAB), and most other countries offering engineering degrees have similar accreditation societies.
In Australia, mechanical engineering degrees are awarded as Bachelor of Engineering (Mechanical) or similar nomenclature, although there are an increasing number of specialisations. The degree takes four years of full-time study to achieve. To ensure quality in engineering degrees, Engineers Australia accredits engineering degrees awarded by Australian universities in accordance with the global Washington Accord. Before the degree can be awarded, the student must complete at least 3 months of on the job work experience in an engineering firm. Similar systems are also present in South Africa and are overseen by the Engineering Council of South Africa (ECSA).
In India, to become an engineer, one needs to have an engineering degree like a B.Tech. or B.E., have a diploma in engineering, or complete a course in an engineering trade like fitter from the Industrial Training Institutes (ITIs) to receive an "ITI Trade Certificate" and also pass the All India Trade Test (AITT) with an engineering trade conducted by the National Council of Vocational Training (NCVT), by which one is awarded a "National Trade Certificate". A similar system is used in Nepal.
Some mechanical engineers go on to pursue a postgraduate degree such as a Master of Engineering, Master of Technology, Master of Science, Master of Engineering Management (M.Eng.Mgt. or M.E.M.), a Doctor of Philosophy in engineering (Eng.D. or Ph.D.) or an engineer's degree. The master's and engineer's degrees may or may not include research. The Doctor of Philosophy includes a significant research component and is often viewed as the entry point to academia. The Engineer's degree exists at a few institutions at an intermediate level between the master's degree and the doctorate.
Coursework
Standards set by each country's accreditation society are intended to provide uniformity in fundamental subject material, promote competence among graduating engineers, and to maintain confidence in the engineering profession as a whole. Engineering programs in the U.S., for example, are required by ABET to show that their students can "work professionally in both thermal and mechanical systems areas." The specific courses required to graduate, however, may differ from program to program. Universities and institutes of technology will often combine multiple subjects into a single class or split a subject into multiple classes, depending on the faculty available and the university's major area(s) of research.
The fundamental subjects required for mechanical engineering usually include:
Mathematics (in particular, calculus, differential equations, and linear algebra)
Basic physical sciences (including physics and chemistry)
Statics and dynamics
Strength of materials and solid mechanics
Materials engineering, composites
Thermodynamics, heat transfer, energy conversion, and HVAC
Fuels, combustion, internal combustion engine
Fluid mechanics (including fluid statics and fluid dynamics)
Mechanism and Machine design (including kinematics and dynamics)
Instrumentation and measurement
Manufacturing engineering, technology, or processes
Vibration, control theory and control engineering
Hydraulics and Pneumatics
Mechatronics and robotics
Engineering design and product design
Drafting, computer-aided design (CAD) and computer-aided manufacturing (CAM)
Mechanical engineers are also expected to understand and be able to apply basic concepts from chemistry, physics, tribology, chemical engineering, civil engineering, and electrical engineering. All mechanical engineering programs include multiple semesters of mathematical classes including calculus, and advanced mathematical concepts including differential equations, partial differential equations, linear algebra, differential geometry, and statistics, among others.
In addition to the core mechanical engineering curriculum, many mechanical engineering programs offer more specialized programs and classes, such as control systems, robotics, transport and logistics, cryogenics, fuel technology, automotive engineering, biomechanics, vibration, optics and others, if a separate department does not exist for these subjects.
Most mechanical engineering programs also require varying amounts of research or community projects to gain practical problem-solving experience. In the United States it is common for mechanical engineering students to complete one or more internships while studying, though this is not typically mandated by the university. Cooperative education is another option. Research on future work skills places demand on study components that foster students' creativity and innovation.
Job duties
Mechanical engineers research, design, develop, build, and test mechanical and thermal devices, including tools, engines, and machines.
Mechanical engineers typically do the following:
Analyze problems to see how mechanical and thermal devices might help solve the problem.
Design or redesign mechanical and thermal devices using analysis and computer-aided design.
Develop and test prototypes of devices they design.
Analyze the test results and change the design as needed.
Oversee the manufacturing process for the device.
Manage a team of professionals in specialized fields like mechanical drafting and design, prototyping, 3D printing, and/or CNC machining.
Mechanical engineers design and oversee the manufacturing of many products ranging from medical devices to new batteries. They also design power-producing machines such as electric generators, internal combustion engines, and steam and gas turbines as well as power-using machines, such as refrigeration and air-conditioning systems.
Like other engineers, mechanical engineers use computers to help create and analyze designs, run simulations and test how a machine is likely to work.
License and regulation
Engineers may seek licensure from a state, provincial, or national government. The purpose of this process is to ensure that engineers possess the necessary technical knowledge, real-world experience, and knowledge of the local legal system to practice engineering at a professional level. Once certified, the engineer is given the title of Professional Engineer (United States, Canada, Japan, South Korea, Bangladesh and South Africa), Chartered Engineer (in the United Kingdom, Ireland, India and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (much of the European Union).
In the U.S., to become a licensed Professional Engineer (PE), an engineer must pass the comprehensive FE (Fundamentals of Engineering) exam, work a minimum of 4 years as an Engineering Intern (EI) or Engineer-in-Training (EIT), and pass the "Principles and Practice" or PE (Practicing Engineer or Professional Engineer) exams. The requirements and steps of this process are set forth by the National Council of Examiners for Engineering and Surveying (NCEES), composed of engineering and land surveying licensing boards representing all U.S. states and territories.
In the UK, current graduates require a BEng plus an appropriate master's degree or an integrated MEng degree, a minimum of 4 years post graduate on the job competency development and a peer-reviewed project report to become a Chartered Mechanical Engineer (CEng, MIMechE) through the Institution of Mechanical Engineers. CEng MIMechE can also be obtained via an examination route administered by the City and Guilds of London Institute.
In most developed countries, certain engineering tasks, such as the design of bridges, electric power plants, and chemical plants, must be approved by a professional engineer or a chartered engineer. "Only a licensed engineer, for instance, may prepare, sign, seal and submit engineering plans and drawings to a public authority for approval, or to seal engineering work for public and private clients." This requirement can be written into state and provincial legislation, such as in the Canadian provinces, for example the Ontario or Quebec's Engineer Act.
In other countries, such as Australia and the UK, no such legislation exists; however, practically all certifying bodies maintain a code of ethics independent of legislation that they expect all members to abide by or risk expulsion.
Salaries and workforce statistics
The total number of engineers employed in the U.S. in 2015 was roughly 1.6 million. Of these, 278,340 were mechanical engineers (17.28%), the largest discipline by size. In 2012, the median annual income of mechanical engineers in the U.S. workforce was $80,580. The median income was highest when working for the government ($92,030), and lowest in education ($57,090). In 2014, the total number of mechanical engineering jobs was projected to grow 5% over the next decade. As of 2009, the average starting salary was $58,800 with a bachelor's degree.
Subdisciplines
The field of mechanical engineering can be thought of as a collection of many mechanical engineering science disciplines. Several of these subdisciplines which are typically taught at the undergraduate level are listed below, with a brief explanation and the most common application of each. Some of these subdisciplines are unique to mechanical engineering, while others are a combination of mechanical engineering and one or more other disciplines. Most work that a mechanical engineer does uses skills and techniques from several of these subdisciplines, as well as specialized subdisciplines. Specialized subdisciplines, as used in this article, are more likely to be the subject of graduate studies or on-the-job training than undergraduate research. Several specialized subdisciplines are discussed in this section.
Mechanics
Mechanics is, in the most general sense, the study of forces and their effect upon matter. Typically, engineering mechanics is used to analyze and predict the acceleration and deformation (both elastic and plastic) of objects under known forces (also called loads) or stresses. Subdisciplines of mechanics include
Statics, the study of how forces affect non-moving bodies under known loads
Dynamics, the study of how forces affect moving bodies. Dynamics includes kinematics (about movement, velocity, and acceleration) and kinetics (about forces and resulting accelerations).
Mechanics of materials, the study of how different materials deform under various types of stress
Fluid mechanics, the study of how fluids react to forces
Kinematics, the study of the motion of bodies (objects) and systems (groups of objects), while ignoring the forces that cause the motion. Kinematics is often used in the design and analysis of mechanisms.
Continuum mechanics, a method of applying mechanics that assumes that objects are continuous (rather than discrete)
Mechanical engineers typically use mechanics in the design or analysis phases of engineering. If the engineering project were the design of a vehicle, statics might be employed to design the frame of the vehicle, in order to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine, to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle (see HVAC), or to design the intake system for the engine.
Mechatronics and robotics
Mechatronics is a combination of mechanics and electronics. It is an interdisciplinary branch of mechanical engineering, electrical engineering and software engineering that is concerned with integrating electrical and mechanical engineering to create hybrid automation systems. In this way, machines can be automated through the use of electric motors, servo-mechanisms, and other electrical systems in conjunction with special software. A common example of a mechatronics system is a CD-ROM drive. Mechanical systems open and close the drive, spin the CD and move the laser, while an optical system reads the data on the CD and converts it to bits. Integrated software controls the process and communicates the contents of the CD to the computer.
Robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot).
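As a minimal sketch of that kinematic step, the following code computes the forward kinematics and approximate reachable workspace of a hypothetical two-link planar arm; the link lengths and joint limits are illustrative assumptions, not parameters of any particular robot.

```python
import numpy as np

def forward_kinematics(theta1, theta2, l1=0.4, l2=0.3):
    """End-effector position of a 2-link planar arm (lengths in metres, angles in radians)."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

# Sample the joint space (illustrative joint limits) to estimate the reachable workspace.
t1 = np.linspace(-np.pi / 2, np.pi / 2, 181)
t2 = np.linspace(-2.0, 2.0, 181)
T1, T2 = np.meshgrid(t1, t2)
X, Y = forward_kinematics(T1, T2)
reach = np.hypot(X, Y)
print(f"reach from base: {reach.min():.2f} m to {reach.max():.2f} m")
```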
Robots are used extensively in industrial automation engineering. They allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and ensure better quality. Many companies employ assembly lines of robots, especially in the automotive industry, and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications, from recreation to domestic applications.
Structural analysis
Structural analysis is the branch of mechanical engineering (and also civil engineering) devoted to examining why and how objects fail, and to fixing the objects and improving their performance. Structural failures occur in two general modes: static failure and fatigue failure. Static structural failure occurs when, upon being loaded (having a force applied), the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. Fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. Fatigue failure occurs because of imperfections in the object: a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle (propagation) until the crack is large enough to cause ultimate failure.
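As a minimal sketch of how such criteria are applied in practice, the code below checks a static safety factor against the von Mises criterion and estimates a high-cycle fatigue life from Basquin's relation; the stress state and material constants are illustrative assumptions, not data for a real component.

```python
import math

def von_mises(sx, sy, txy):
    """Von Mises equivalent stress for a plane stress state (MPa)."""
    return math.sqrt(sx**2 - sx * sy + sy**2 + 3 * txy**2)

def static_safety_factor(sx, sy, txy, yield_strength):
    """Static failure check: ratio of yield strength to equivalent stress."""
    return yield_strength / von_mises(sx, sy, txy)

def basquin_life(stress_amplitude, sigma_f=900.0, b=-0.09):
    """Cycles to failure from Basquin's relation sigma_a = sigma_f * (2N)^b.
    sigma_f and b are illustrative constants, not properties of a real alloy."""
    return 0.5 * (stress_amplitude / sigma_f) ** (1.0 / b)

print(f"static safety factor: {static_safety_factor(180.0, 40.0, 60.0, yield_strength=350.0):.2f}")
print(f"estimated fatigue life: {basquin_life(200.0):.2e} cycles")
```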
Failure is not simply defined as when a part breaks, however; it is defined as when a part does not operate as intended. Some systems, such as the perforated top sections of some plastic bags, are designed to break. If these systems do not break, failure analysis might be employed to determine the cause.
Structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure. Engineers often use online documents and books such as those published by ASM to aid them in determining the type of failure and possible causes.
Once theory is applied to a mechanical design, physical testing is often performed to verify calculated results. Structural analysis may be used in an office when designing parts, in the field to analyze failed parts, or in laboratories where parts might undergo controlled failure tests.
Thermodynamics and thermo-science
Thermodynamics is an applied science used in several branches of engineering, including mechanical and chemical engineering. At its simplest, thermodynamics is the study of energy, its use and transformation through a system. Typically, engineering thermodynamics is concerned with changing energy from one form to another. As an example, automotive engines convert chemical energy (enthalpy) from the fuel into heat, and then into mechanical work that eventually turns the wheels.
Thermodynamics principles are used by mechanical engineers in the fields of heat transfer, thermofluids, and energy conversion. Mechanical engineers use thermo-science to design engines and power plants, heating, ventilation, and air-conditioning (HVAC) systems, heat exchangers, heat sinks, radiators, refrigeration, insulation, and others.
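As a minimal sketch of this kind of analysis, the code below compares the Carnot limit between two reservoirs with a first-law thermal efficiency computed from heat supplied and work produced; the temperatures and energy figures are illustrative assumptions, not data for a specific engine.

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on thermal efficiency for any heat engine between two reservoirs (kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

def thermal_efficiency(work_out_kj, heat_in_kj):
    """First-law efficiency: net work out divided by heat supplied."""
    return work_out_kj / heat_in_kj

# Illustrative numbers only.
print(f"Carnot limit: {carnot_efficiency(1800.0, 300.0):.2f}")
print(f"Actual:       {thermal_efficiency(work_out_kj=350.0, heat_in_kj=1000.0):.2f}")
```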
Design and drafting
Drafting or technical drawing is the means by which mechanical engineers design products and create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A U.S. mechanical engineer or skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions.
Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also manually manufacture a part using the technical drawings. However, with the advent of computer numerically controlled (CNC) manufacturing, parts can now be fabricated without the need for constant technician input. Manual manufacturing is generally limited to spray coatings, surface finishes, and other processes that cannot economically or practically be done by a machine.
Drafting is used in nearly every subdiscipline of mechanical engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD).
Modern tools
Many mechanical engineering companies, especially those in industrialized nations, have incorporated computer-aided engineering (CAE) programs into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and the ease of use in designing mating interfaces and tolerances.
Other CAE programs commonly used by mechanical engineers include product lifecycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM).
Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. No physical prototype need be created until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of a relative few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows.
As mechanical engineering begins to merge with other disciplines, as seen in mechatronics, multidisciplinary design optimization (MDO) is being used with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes, allowing product evaluation to continue even after the analyst goes home for the day. They also use sophisticated optimization algorithms to more intelligently explore possible designs, often finding better, innovative solutions to difficult multidisciplinary design problems.
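As a minimal sketch of the kind of constrained design optimization such tools automate, the code below minimizes the mass of a hypothetical cantilever beam subject to bending-stress and tip-deflection constraints; the loads, dimensions, and material properties are illustrative assumptions, and the generic SciPy optimizer stands in for a dedicated MDO package.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative cantilever: rectangular section b x h, length L, tip load P, steel-like properties.
L, P, E, rho = 1.0, 2000.0, 200e9, 7850.0
sigma_allow, delta_allow = 150e6, 5e-3

def mass(x):
    b, h = x
    return rho * b * h * L

def stress_margin(x):          # sigma_allow - max bending stress >= 0
    b, h = x
    return sigma_allow - 6 * P * L / (b * h**2)

def deflection_margin(x):      # delta_allow - tip deflection >= 0
    b, h = x
    I = b * h**3 / 12
    return delta_allow - P * L**3 / (3 * E * I)

res = minimize(mass, x0=[0.05, 0.05],
               bounds=[(0.01, 0.2), (0.01, 0.2)],
               constraints=[{"type": "ineq", "fun": stress_margin},
                            {"type": "ineq", "fun": deflection_margin}])
print("optimal b, h:", res.x, " mass (kg):", mass(res.x))
```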
Areas of research
Mechanical engineers are constantly pushing the boundaries of what is physically possible in order to produce safer, cheaper, and more efficient machines and mechanical systems. Some technologies at the cutting edge of mechanical engineering are listed below (see also exploratory engineering).
Micro electro-mechanical systems (MEMS)
Micron-scale mechanical components such as springs, gears, and fluidic and heat transfer devices are fabricated from a variety of substrate materials such as silicon, glass and polymers like SU8. Examples of MEMS components are the accelerometers used as car airbag sensors and in modern cell phones, gyroscopes for precise positioning, and microfluidic devices used in biomedical applications.
Friction stir welding (FSW)
Friction stir welding, a relatively new welding technique, was developed in 1991 by The Welding Institute (TWI). This innovative steady-state (non-fusion) welding technique joins materials previously un-weldable, including several aluminum alloys. It plays an important role in the future construction of airplanes, potentially replacing rivets. Current uses of this technology include welding the seams of the aluminum main Space Shuttle external tank, the Orion Crew Vehicle, the Boeing Delta II and Delta IV Expendable Launch Vehicles and the SpaceX Falcon 1 rocket, armor plating for amphibious assault ships, and welding the wings and fuselage panels of the Eclipse 500 aircraft from Eclipse Aviation, among a growing pool of uses.
Composites
Composites or composite materials are a combination of materials which provide different physical characteristics than either material separately. Composite material research within mechanical engineering typically focuses on designing (and, subsequently, finding applications for) stronger or more rigid materials while attempting to reduce weight, susceptibility to corrosion, and other undesirable factors. Carbon fiber reinforced composites, for instance, have been used in such diverse applications as spacecraft and fishing rods.
Mechatronics
Mechatronics is the synergistic combination of mechanical engineering, electronic engineering, and software engineering. The discipline of mechatronics began as a way to combine mechanical principles with electrical engineering. Mechatronic concepts are used in the majority of electro-mechanical systems. Typical electro-mechanical sensors used in mechatronics are strain gauges, thermocouples, and pressure transducers.
Nanotechnology
At the smallest scales, mechanical engineering becomes nanotechnology—one speculative goal of which is to create a molecular assembler to build molecules and materials via mechanosynthesis. For now that goal remains within exploratory engineering. Areas of current mechanical engineering research in nanotechnology include nanofilters, nanofilms, and nanostructures, among others.
Finite element analysis
Finite element analysis is a computational tool used to estimate stress, strain, and deflection of solid bodies. It uses a mesh with user-defined element sizes to compute physical quantities at each node; the more nodes there are, the higher the precision. The field is not new, as the basis of Finite Element Analysis (FEA) or Finite Element Method (FEM) dates back to 1941, but the evolution of computers has made FEA/FEM a viable option for analysis of structural problems. Many commercial software applications such as NASTRAN, ANSYS, and ABAQUS are widely used in industry for research and the design of components. Some 3D modeling and CAD software packages have added FEA modules. In recent times, cloud simulation platforms like SimScale have become more common.
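As a minimal illustration of the method itself (not of any of the packages named above), the sketch below assembles and solves a one-dimensional finite element model of an axial bar with illustrative values for stiffness, area, and load, and compares the tip displacement with the closed-form result.

```python
import numpy as np

# 1D axial bar: n_el two-node elements, fixed at the left end, point load at the right tip.
E, A, L_total, n_el, P = 200e9, 1e-4, 1.0, 10, 1000.0   # illustrative values
Le = L_total / n_el
n_nodes = n_el + 1

K = np.zeros((n_nodes, n_nodes))
ke = (E * A / Le) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness matrix
for e in range(n_el):                                      # assemble the global stiffness matrix
    K[e:e + 2, e:e + 2] += ke

F = np.zeros(n_nodes)
F[-1] = P                                                  # tip load

# Apply the fixed boundary condition at node 0 and solve K u = F for the free nodes.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

print(f"FE tip displacement:    {u[-1]:.3e} m")
print(f"Analytical PL/(EA):     {P * L_total / (E * A):.3e} m")
```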
Other techniques such as the finite difference method (FDM) and the finite volume method (FVM) are employed to solve problems relating to heat and mass transfer, fluid flows, fluid-surface interaction, etc.
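A correspondingly minimal finite-difference sketch, with illustrative material and grid values, marches the one-dimensional transient heat conduction equation forward in time with an explicit scheme:

```python
import numpy as np

# Explicit FDM for 1D transient conduction: dT/dt = alpha * d2T/dx2 (illustrative values).
alpha, L, nx = 1e-5, 0.1, 51                 # thermal diffusivity (m^2/s), rod length (m), grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                     # respects the explicit stability limit dt <= dx^2 / (2*alpha)

T = np.full(nx, 20.0)                        # initial temperature (deg C)
T[0], T[-1] = 100.0, 20.0                    # fixed boundary temperatures

for _ in range(2000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"mid-rod temperature after {2000 * dt:.0f} s: {T[nx // 2]:.1f} deg C")
```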
Biomechanics
Biomechanics is the application of mechanical principles to biological systems, such as humans, animals, plants, organs, and cells. Biomechanics also aids in creating prosthetic limbs and artificial organs for humans. Biomechanics is closely related to engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems.
In the past decade, reverse engineering of materials found in nature such as bone matter has gained funding in academia. The structure of bone matter is optimized for its purpose of bearing a large amount of compressive stress per unit weight. The goal is to replace crude steel with bio-material for structural design.
Over the past decade the Finite element method (FEM) has also entered the Biomedical sector highlighting further engineering aspects of Biomechanics. FEM has since then established itself as an alternative to in vivo surgical assessment and gained the wide acceptance of academia. The main advantage of Computational Biomechanics lies in its ability to determine the endo-anatomical response of an anatomy, without being subject to ethical restrictions. This has led FE modelling to the point of becoming ubiquitous in several fields of Biomechanics while several projects have even adopted an open source philosophy (e.g. BioSpine).
Computational fluid dynamics
Computational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as turbulent flows. Initial validation of such software is performed using a wind tunnel with the final validation coming in full-scale testing, e.g. flight tests.
Acoustical engineering
Acoustical engineering is another sub-discipline of mechanical engineering and is the application of acoustics, the study of sound and vibration. These engineers work to reduce noise pollution in mechanical devices and in buildings by soundproofing or by removing sources of unwanted noise. The study of acoustics can range from designing a more efficient hearing aid, microphone, headphone, or recording studio to enhancing the sound quality of an orchestra hall. Acoustical engineering also deals with the vibration of different mechanical systems.
Related fields
Manufacturing engineering, aerospace engineering, automotive engineering and marine engineering are grouped with mechanical engineering at times. A bachelor's degree in these areas will typically have a difference of a few specialized classes.
See also
Automobile engineering
Index of mechanical engineering articles
Lists
Glossary of mechanical engineering
List of historic mechanical engineering landmarks
List of inventors
List of mechanical engineering topics
List of mechanical engineers
List of related journals
List of mechanical, electrical and electronic equipment manufacturing companies by revenue
Associations
American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)
American Society of Mechanical Engineers (ASME)
Pi Tau Sigma (Mechanical Engineering honor society)
Society of Automotive Engineers (SAE)
Society of Women Engineers (SWE)
Institution of Mechanical Engineers (IMechE) (British)
Chartered Institution of Building Services Engineers (CIBSE) (British)
Verein Deutscher Ingenieure (VDI) (Germany)
Wikibooks
Engineering Mechanics
Engineering Thermodynamics
Engineering Acoustics
Fluid Mechanics
Heat Transfer
Microtechnology
Nanotechnology
Pro/Engineer (ProE CAD)
Strength of Materials/Solid Mechanics
References
Further reading
External links
Mechanical engineering at MTU.edu
Engineering disciplines
Mechanical designers
Systems thinking
Systems thinking is a way of making sense of the complexity of the world by looking at it in terms of wholes and relationships rather than by splitting it down into its parts. It has been used as a way of exploring and developing effective action in complex contexts, enabling systems change. Systems thinking draws on and contributes to systems theory and the system sciences.
History
Ptolemaic system versus the Copernican system
The term system is polysemic: Robert Hooke (1674) used it in multiple senses, in his System of the World, but also in the sense of the Ptolemaic system versus the Copernican system of the relation of the planets to the fixed stars which are cataloged in Hipparchus' and Ptolemy's Star catalog. Hooke's claim was answered in magisterial detail by Newton's (1687) Philosophiæ Naturalis Principia Mathematica, Book three, The System of the World (that is, the system of the world is a physical system).
Newton's approach, using dynamical systems, continues to this day. In brief, Newton's equations (a system of equations) have methods for their solution.
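As a minimal modern illustration of "methods for their solution", the sketch below writes Newton's equation of motion for a body in an inverse-square gravitational field as a first-order system and integrates it numerically; the normalized units and initial conditions are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Newton's equations for a body in an inverse-square gravitational field,
# written as a first-order system and solved numerically (units chosen so GM = 1).
def two_body(t, s, GM=1.0):
    x, y, vx, vy = s
    r3 = (x**2 + y**2) ** 1.5
    return [vx, vy, -GM * x / r3, -GM * y / r3]

s0 = [1.0, 0.0, 0.0, 0.8]                       # illustrative initial position and velocity
sol = solve_ivp(two_body, (0.0, 20.0), s0, rtol=1e-9, atol=1e-9)

x, y, vx, vy = sol.y
energy = 0.5 * (vx**2 + vy**2) - 1.0 / np.hypot(x, y)   # conserved orbital energy
print(f"energy drift over the run: {energy.max() - energy.min():.2e}")
```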
Feedback control systems
By 1824 the Carnot cycle presented an engineering challenge, which was how to maintain the operating temperatures of the hot and cold working fluids of the physical plant. In 1868 James Clerk Maxwell presented a framework for, and a limited solution to the problem of controlling the rotational speed of a physical plant. Maxwell's solution echoed James Watt's (1784) centrifugal moderator (denoted as element Q) for maintaining (but not enforcing) the constant speed of a physical plant (that is, Q represents a moderator, but not a governor, by Maxwell's definition).
Maxwell's approach, which linearized the equations of motion of the system, produced a tractable method of solution. Norbert Wiener identified this approach as an influence on his studies of cybernetics during World War II and Wiener even proposed treating some subsystems under investigation as black boxes. Methods for solutions of the systems of equations then become the subject of study, as in feedback control systems, in stability theory, in constraint satisfaction problems, the unification algorithm, type inference, and so forth.
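As a minimal sketch of this linearized feedback picture, the code below simulates proportional speed control of a first-order rotational plant and compares the result with the predicted closed-loop steady state; the inertia, damping, gain, and setpoint are illustrative assumptions.

```python
# Linearized speed regulation: J * dw/dt = -b*w + u, with proportional feedback u = Kp*(w_ref - w).
J, b, Kp, w_ref = 2.0, 0.5, 4.0, 10.0      # illustrative inertia, damping, gain, setpoint
dt, steps = 0.01, 2000

w = 0.0
for _ in range(steps):
    u = Kp * (w_ref - w)                   # feedback law
    w += dt * (-b * w + u) / J             # explicit Euler step of the linearized plant

# Closed-loop steady state: w_ss = Kp*w_ref / (b + Kp); proportional control leaves a steady-state error.
print(f"simulated: {w:.3f}, predicted: {Kp * w_ref / (b + Kp):.3f}")
```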
Applications
"So, how do we change the structure of systems to produce more of what we want and less of that which is undesirable? ... MIT’s Jay Forrester likes to say that the average manager can ... guess with great accuracy where to look for leverage points—places in the system where a small change could lead to a large shift in behavior".— Donella Meadows, (2008) Thinking In Systems: A Primer p.145
Characteristics
Subsystems serve as part of a larger system, but each comprises a system in its own right. Each frequently can be described reductively, with properties obeying its own laws, such as Newton's System of the World, in which entire planets, stars, and their satellites can be treated, sometimes in a scientific way as dynamical systems, entirely mathematically, as demonstrated by Johannes Kepler's equation (1619) for the orbit of Mars before Newton's Principia appeared in 1687.
Black boxes are subsystems whose operation can be characterized by their inputs and outputs, without regard to further detail.
Particular systems
Political systems were recognized as early as the millennia before the common era.
Biological systems were recognized in Aristotle's lagoon ca. 350 BCE.
Economic systems were recognized by 1776.
Social systems were recognized by the 19th and 20th centuries of the common era.
Radar systems were developed in World War II in subsystem fashion; they were made up of transmitter, receiver, power supply, and signal processing subsystems, to defend against airborne attacks.
Dynamical systems of ordinary differential equations were shown to exhibit stable behavior given a suitable Lyapunov function by Aleksandr Lyapunov in 1892 (see the sketch after this list).
Thermodynamic systems were treated as early as the eighteenth century, in which it was discovered that heat could be created without limit, but that for closed systems, laws of thermodynamics could be formulated. Ilya Prigogine (1980) has identified situations in which systems far from equilibrium can exhibit stable behavior; once a Lyapunov function has been identified, future and past can be distinguished, and scientific activity can begin.
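As a minimal numerical illustration of the Lyapunov idea referenced in the list above, the sketch below checks a quadratic Lyapunov candidate for a damped oscillator on a grid of states; the damping and stiffness values are illustrative assumptions.

```python
import numpy as np

# Damped oscillator x'' + c*x' + k*x = 0 with Lyapunov candidate V = 0.5*k*x^2 + 0.5*v^2.
# Along trajectories dV/dt = -c*v^2 <= 0, so V certifies stability of the origin.
c, k = 0.4, 2.0                                    # illustrative damping and stiffness

def V(x, v):
    return 0.5 * k * x**2 + 0.5 * v**2

def Vdot(x, v):
    return k * x * v + v * (-c * v - k * x)        # simplifies to -c*v^2

xs, vs = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(-2, 2, 41))
print("V >= 0 everywhere sampled:", bool((V(xs, vs) >= 0).all()))
print("dV/dt <= 0 everywhere sampled:", bool((Vdot(xs, vs) <= 1e-12).all()))
```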
Systems far from equilibrium
Living systems are resilient, and are far from equilibrium. Homeostasis is the analog to equilibrium, for a living system; the concept was described in 1849, and the term was coined in 1926.
Resilient systems are self-organizing.
The scope of functional controls is hierarchical in a resilient system.
Frameworks and methodologies
Frameworks and methodologies for systems thinking include:
Critical systems heuristics: in particular, there can be twelve boundary categories for the systems when organizing one's thinking and actions.
Critical systems thinking, including the E P I C approach.
Ontology engineering of representation, formal naming and definition of categories, and the properties and the relations between concepts, data, and entities.
Soft systems methodology, including the CATWOE approach and rich pictures.
Systemic design, for example using the double diamond approach.
System dynamics of stocks, flows, and internal feedback loops.
Viable system model: uses 5 subsystems.
See also
Notes
References
Sources
Russell L. Ackoff (1968) "General Systems Theory and Systems Research Contrasting Conceptions of Systems Science." in: Views on a General Systems Theory: Proceedings from the Second System Symposium, Mihajlo D. Mesarovic (ed.).
A.C. Ehresmann, J.-P. Vanbremeersch (1987) Hierarchical evolutive systems: A mathematical model for complex systems" Bulletin of Mathematical Biology Volume 49, Issue 1, Pages 13–50
NJTA Kramer & J de Smit (1977) Systems thinking: Concepts and Notions, Springer. 148 pages
A. H. Louie (November 1983) "Categorical system theory" Bulletin of Mathematical Biology volume 45, pages 1047–1072
DonellaMeadows.org Systems Thinking Resources
Gerald Midgley (ed.) (2002) Systems Thinking, SAGE Publications. 4 volume set: 1,492 pages List of chapter titles
Robert Rosen. (1958) “The Representation of Biological Systems from the Standpoint of the Theory of Categories". Bull. math. Biophys. 20, 317–342.
Peter Senge, (1990) The Fifth Discipline
Cybernetics
Systems science
Systems theory
Biomimetics
Biomimetics or biomimicry is the emulation of the models, systems, and elements of nature for the purpose of solving complex human problems. The terms "biomimetics" and "biomimicry" are derived from the Ancient Greek βίος (bios), life, and μίμησις (mīmēsis), imitation, from μιμεῖσθαι (mīmeisthai), to imitate, from μῖμος (mimos), actor. A closely related field is bionics.
Nature has gone through evolution over the 3.8 billion years since life is estimated to have appeared on the Earth. It has evolved species with high performance using commonly found materials. Surfaces of solids interact with other surfaces and the environment and derive the properties of materials. Biological materials are highly organized from the molecular to the nano-, micro-, and macroscales, often in a hierarchical manner with intricate nanoarchitecture that ultimately makes up a myriad of different functional elements. Properties of materials and surfaces result from a complex interplay between surface structure and morphology and physical and chemical properties. Many materials, surfaces, and objects in general provide multifunctionality.
Various materials, structures, and devices have been fabricated for commercial interest by engineers, material scientists, chemists, and biologists, and for beauty, structure, and design by artists and architects. Nature has solved engineering problems such as self-healing abilities, environmental exposure tolerance and resistance, hydrophobicity, self-assembly, and harnessing solar energy. Economic impact of bioinspired materials and surfaces is significant, on the order of several hundred billion dollars per year worldwide.
History
One of the early examples of biomimicry was the study of birds to enable human flight. Although never successful in creating a "flying machine", Leonardo da Vinci (1452–1519) was a keen observer of the anatomy and flight of birds, and made numerous notes and sketches on his observations as well as sketches of "flying machines". The Wright Brothers, who succeeded in flying the first heavier-than-air aircraft in 1903, allegedly derived inspiration from observations of pigeons in flight.
During the 1950s the American biophysicist and polymath Otto Schmitt developed the concept of "biomimetics". During his doctoral research he developed the Schmitt trigger by studying the nerves in squid, attempting to engineer a device that replicated the biological system of nerve propagation. He continued to focus on devices that mimic natural systems and by 1957 he had perceived a converse to the standard view of biophysics at that time, a view he would come to call biomimetics.
In 1960 Jack E. Steele coined a similar term, bionics, at Wright-Patterson Air Force Base in Dayton, Ohio, where Otto Schmitt also worked. Steele defined bionics as "the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues". During a later meeting in 1963 Schmitt stated,
In 1969, Schmitt used the term "biomimetic" in the title of one of his papers, and by 1974 it had found its way into Webster's Dictionary. Bionics had entered the same dictionary earlier, in 1960, as "a science concerned with the application of data about the functioning of biological systems to the solution of engineering problems". Bionic took on a different connotation when Martin Caidin referenced Jack Steele and his work in the novel Cyborg, which later resulted in the 1974 television series The Six Million Dollar Man and its spin-offs. The term bionic then became associated with "the use of electronically operated artificial body parts" and "having ordinary human powers increased by or as if by the aid of such devices". Because the term bionic took on the implication of supernatural strength, the scientific community in English-speaking countries largely abandoned it.
The term biomimicry appeared as early as 1982. Biomimicry was popularized by scientist and author Janine Benyus in her 1997 book Biomimicry: Innovation Inspired by Nature. Biomimicry is defined in the book as a "new science that studies nature's models and then imitates or takes inspiration from these designs and processes to solve human problems". Benyus suggests looking to Nature as a "Model, Measure, and Mentor" and emphasizes sustainability as an objective of biomimicry.
A more recent example of biomimicry is the "managemANT" approach described by Johannes-Paul Fladerer and Ernst Kurzmann. The term, a combination of the words "management" and "ant", describes the application of the behavioural strategies of ants to economic and management strategies. The potential long-term impacts of biomimicry were quantified in a 2013 Fermanian Business & Economic Institute report commissioned by the San Diego Zoo, which demonstrated the potential economic and environmental benefits of biomimicry.
Bio-inspired technologies
Biomimetics could in principle be applied in many fields. Because of the diversity and complexity of biological systems, the number of features that might be imitated is large. Biomimetic applications are at various stages of development from technologies that might become commercially usable to prototypes. Murray's law, which in conventional form determined the optimum diameter of blood vessels, has been re-derived to provide simple equations for the pipe or tube diameter which gives a minimum mass engineering system.
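In its conventional form, Murray's law states that at a junction the cube of the parent vessel diameter equals the sum of the cubes of the daughter diameters. The sketch below applies that relation; the diameters are illustrative, and the exponent is exposed as a parameter because re-derived engineering forms need not use the biological value of 3.

```python
def murray_parent_diameter(daughter_diameters, exponent=3.0):
    """Parent diameter from Murray's law: d_parent^n = sum(d_i^n), with n = 3 in the conventional form."""
    return sum(d**exponent for d in daughter_diameters) ** (1.0 / exponent)

def symmetric_daughter_diameter(parent_diameter, n_branches=2, exponent=3.0):
    """Diameter of each daughter branch when a parent splits into n equal branches."""
    return parent_diameter / n_branches ** (1.0 / exponent)

print(f"parent feeding 4 mm and 3 mm branches: {murray_parent_diameter([4.0, 3.0]):.2f} mm")
print(f"each branch of a symmetric split of a 10 mm pipe: {symmetric_daughter_diameter(10.0):.2f} mm")
```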
Locomotion
Aircraft wing design and flight techniques are being inspired by birds and bats. The streamlined nose of the improved Japanese 500 Series Shinkansen high-speed train was modelled after the beak of the kingfisher.
Biorobots based on the physiology and methods of locomotion of animals include BionicKangaroo which moves like a kangaroo, saving energy from one jump and transferring it to its next jump; Kamigami Robots, a children's toy, mimic cockroach locomotion to run quickly and efficiently over indoor and outdoor surfaces, and Pleobot, a shrimp-inspired robot to study metachronal swimming and the ecological impacts of this propulsive gait on the environment.
Biomimetic flying robots (BFRs)
BFRs take inspiration from flying mammals, birds, or insects. BFRs can have flapping wings, which generate the lift and thrust, or they can be propeller actuated. BFRs with flapping wings have increased stroke efficiencies, increased maneuverability, and reduced energy consumption in comparison to propeller actuated BFRs. Mammal and bird inspired BFRs share similar flight characteristics and design considerations. For instance, both mammal and bird inspired BFRs minimize edge fluttering and pressure-induced wingtip curl by increasing the rigidity of the wing edge and wingtips. Mammal and insect inspired BFRs can be impact resistant, making them useful in cluttered environments.
Mammal inspired BFRs typically take inspiration from bats, but the flying squirrel has also inspired a prototype. Examples of bat inspired BFRs include Bat Bot and the DALER. Mammal inspired BFRs can be designed to be multi-modal; therefore, they are capable of both flight and terrestrial movement. To reduce the impact of landing, shock absorbers can be implemented along the wings. Alternatively, the BFR can pitch up and increase the amount of drag it experiences. By increasing the drag force, the BFR decelerates and minimizes the impact upon grounding. Different land gait patterns can also be implemented.
Bird inspired BFRs can take inspiration from raptors, gulls, and everything in-between. Bird inspired BFRs can be feathered to increase the angle of attack range over which the prototype can operate before stalling. The wings of bird inspired BFRs allow for in-plane deformation, and the in-plane wing deformation can be adjusted to maximize flight efficiency depending on the flight gait. An example of a raptor inspired BFR is the prototype by Savastano et al. The prototype has fully deformable flapping wings and is capable of carrying a payload of up to 0.8 kg while performing a parabolic climb, steep descent, and rapid recovery. The gull inspired prototype by Grant et al. accurately mimics the elbow and wrist rotation of gulls, and they find that lift generation is maximized when the elbow and wrist deformations are opposite but equal.
Insect inspired BFRs typically take inspiration from beetles or dragonflies. An example of a beetle inspired BFR is the prototype by Phan and Park, and a dragonfly inspired BFR is the prototype by Hu et al. The flapping frequency of insect inspired BFRs is much higher than that of other BFRs; this is because of the aerodynamics of insect flight. Insect inspired BFRs are much smaller than those inspired by mammals or birds, so they are more suitable for dense environments. The prototype by Phan and Park took inspiration from the rhinoceros beetle, so it can successfully continue flight even after a collision by deforming its hindwings.
Biomimetic architecture
Living beings have adapted to a constantly changing environment during evolution through mutation, recombination, and selection. The core idea of the biomimetic philosophy is that nature's inhabitants including animals, plants, and microbes have the most experience in solving problems and have already found the most appropriate ways to last on planet Earth. Similarly, biomimetic architecture seeks solutions for building sustainability present in nature. While nature serves as a model, there are few examples of biomimetic architecture that aim to be nature positive.
The 21st century has seen a ubiquitous waste of energy due to inefficient building designs, in addition to the over-utilization of energy during the operational phase of its life cycle. In parallel, recent advancements in fabrication techniques, computational imaging, and simulation tools have opened up new possibilities to mimic nature across different architectural scales. As a result, there has been a rapid growth in devising innovative design approaches and solutions to counter energy problems. Biomimetic architecture is one of these multi-disciplinary approaches to sustainable design that follows a set of principles rather than stylistic codes, going beyond using nature as inspiration for the aesthetic components of built form but instead seeking to use nature to solve problems of the building's functioning and saving energy.
Characteristics
The term biomimetic architecture refers to the study and application of construction principles which are found in natural environments and species, and are translated into the design of sustainable solutions for architecture. Biomimetic architecture uses nature as a model, measure and mentor for providing architectural solutions across scales, which are inspired by natural organisms that have solved similar problems in nature. Using nature as a measure refers to using an ecological standard of measuring sustainability, and efficiency of man-made innovations, while the term mentor refers to learning from natural principles and using biology as an inspirational source.
Biomorphic architecture, also referred to as bio-decoration, on the other hand, refers to the use of formal and geometric elements found in nature, as a source of inspiration for aesthetic properties in designed architecture, and may not necessarily have non-physical, or economic functions. A historic example of biomorphic architecture dates back to Egyptian, Greek and Roman cultures, using tree and plant forms in the ornamentation of structural columns.
Procedures
Within biomimetic architecture, two basic procedures can be identified, namely, the bottom-up approach (biology push) and top-down approach (technology pull). The boundary between the two approaches is blurry with the possibility of transition between the two, depending on each individual case. Biomimetic architecture is typically carried out in interdisciplinary teams in which biologists and other natural scientists work in collaboration with engineers, material scientists, architects, designers, mathematicians and computer scientists.
In the bottom-up approach, the starting point is a new result from basic biological research promising for biomimetic implementation. For example, developing a biomimetic material system after the quantitative analysis of the mechanical, physical, and chemical properties of a biological system.
In the top-down approach, biomimetic innovations are sought for already existing developments that have been successfully established on the market. The cooperation focuses on the improvement or further development of an existing product.
Examples
Researchers studied the termite's ability to maintain virtually constant temperature and humidity in their termite mounds in Africa despite widely varying outside temperatures. Researchers initially scanned a termite mound and created 3-D images of the mound structure, which revealed construction that could influence human building design. The Eastgate Centre, a mid-rise office complex in Harare, Zimbabwe, stays cool via a passive cooling architecture that uses only 10% of the energy of a conventional building of the same size.
Researchers at the Sapienza University of Rome were inspired by the natural ventilation in termite mounds and designed a double façade that significantly cuts down on over-lit areas in a building. Scientists have imitated the porous nature of mound walls by designing a façade with double panels that was able to reduce heat gained by radiation and increase heat loss by convection in the cavity between the two panels. The overall cooling load on the building's energy consumption was reduced by 15%.
A similar inspiration was drawn from the porous walls of termite mounds to design a naturally ventilated façade with a small ventilation gap. This façade design is able to induce air flow due to the Venturi effect and continuously circulates rising air in the ventilation slot. Significant transfer of heat between the building's external wall surface and the air flowing over it was observed. The design is coupled with greening of the façade. The green wall facilitates additional natural cooling via evaporation, respiration and transpiration in plants. The damp plant substrate further supports the cooling effect.
Scientists in Shanghai University were able to replicate the complex microstructure of clay-made conduit network in the mound to mimic the excellent humidity control in mounds. They proposed a porous humidity control material (HCM) using sepiolite and calcium chloride with water vapor adsorption-desorption content at 550 grams per meter squared. Calcium chloride is a desiccant and improves the water vapor adsorption-desorption property of the Bio-HCM. The proposed bio-HCM has a regime of interfiber mesopores which acts as a mini reservoir. The flexural strength of the proposed material was estimated to be 10.3 MPa using computational simulations.
In structural engineering, the Swiss Federal Institute of Technology (EPFL) has incorporated biomimetic characteristics in an adaptive deployable "tensegrity" bridge. The bridge can carry out self-diagnosis and self-repair. The arrangement of leaves on a plant has been adapted for better solar power collection.
Analysis of the elastic deformation happening when a pollinator lands on the sheath-like perch part of the flower Strelitzia reginae (known as bird-of-paradise flower) has inspired architects and scientists from the University of Freiburg and University of Stuttgart to create hingeless shading systems that can react to their environment. These bio-inspired products are sold under the name Flectofin.
Other hingeless bioinspired systems include Flectofold. Flectofold was inspired by the trapping system developed by the carnivorous plant Aldrovanda vesiculosa.
Structural materials
There is a great need for new structural materials that are lightweight but offer exceptional combinations of stiffness, strength, and toughness.
Such materials would need to be manufactured into bulk materials with complex shapes at high volume and low cost and would serve a variety of fields such as construction, transportation, energy storage and conversion. In a classic design problem, strength and toughness tend to be mutually exclusive: strong materials are brittle and tough materials are weak. However, natural materials with complex and hierarchical material gradients that span from nano- to macro-scales are both strong and tough. Generally, most natural materials utilize limited chemical components but complex material architectures that give rise to exceptional mechanical properties. Understanding these highly diverse and multifunctional biological materials and discovering approaches to replicate such structures will lead to advanced and more efficient technologies.
Bone, nacre (abalone shell), teeth, the dactyl clubs of stomatopod shrimps and bamboo are great examples of damage-tolerant materials. The exceptional resistance to fracture of bone is due to complex deformation and toughening mechanisms that operate at different size scales, from the nanoscale structure of protein molecules to the macroscopic physiological scale. Nacre exhibits similar mechanical properties, however with a rather simpler structure: a brick-and-mortar-like arrangement of thick mineral layers (0.2–0.9 μm) of closely packed aragonite structures and a thin organic matrix (~20 nm). While thin films and micrometer-sized samples that mimic these structures have already been produced, successful production of bulk biomimetic structural materials is yet to be realized. However, numerous processing techniques have been proposed for producing nacre-like materials.
Pavement cells, epidermal cells on the surface of plant leaves and petals, often form wavy interlocking patterns resembling jigsaw puzzle pieces and have been shown to enhance the fracture toughness of leaves, which is key to plant survival. Their pattern, replicated in laser-engraved poly(methyl methacrylate) samples, was also demonstrated to lead to increased fracture toughness. It is suggested that the arrangement and patterning of cells play a role in managing crack propagation in tissues.
Biomorphic mineralization is a technique that produces materials with morphologies and structures resembling those of natural living organisms by using bio-structures as templates for mineralization. Compared to other methods of material production, biomorphic mineralization is facile, environmentally benign and economic.
Freeze casting (ice templating), an inexpensive method to mimic natural layered structures, was employed by researchers at Lawrence Berkeley National Laboratory to create alumina-Al-Si and IT HAP-epoxy layered composites that match the mechanical properties of bone with an equivalent mineral/organic content. Various further studies also employed similar methods to produce high strength and high toughness composites involving a variety of constituent phases.
Recent studies demonstrated production of cohesive and self supporting macroscopic tissue constructs that mimic living tissues by printing tens of thousands of heterologous picoliter droplets in software-defined, 3D millimeter-scale geometries. Efforts are also taken up to mimic the design of nacre in artificial composite materials using fused deposition modelling and the helicoidal structures of stomatopod clubs in the fabrication of high performance carbon fiber-epoxy composites.
Various established and novel additive manufacturing technologies like PolyJet printing, direct ink writing, 3D magnetic printing, multi-material magnetically assisted 3D printing and magnetically assisted slip casting have also been utilized to mimic the complex micro-scale architectures of natural materials and provide huge scope for future research.
Spider silk is tougher than Kevlar used in bulletproof vests. Engineers could in principle use such a material, if it could be reengineered to have a long enough life, for parachute lines, suspension bridge cables, artificial ligaments for medicine, and other purposes. The self-sharpening teeth of many animals have been copied to make better cutting tools.
New ceramics that exhibit giant electret hysteresis have also been realized.
Neuronal computers
Neuromorphic computers and sensors are electrical devices that copy the structure and function of biological neurons in order to compute. One example of this is the event camera, in which only the pixels that receive a new signal update to a new state. All other pixels do not update until a signal is received.
Self-healing materials
In some biological systems, self-healing occurs via chemical releases at the site of fracture, which initiate a systemic response to transport repairing agents to the fracture site. This promotes autonomic healing. To demonstrate the use of micro-vascular networks for autonomic healing, researchers developed a microvascular coating–substrate architecture that mimics human skin. Bio-inspired self-healing structural color hydrogels that maintain the stability of an inverse opal structure and its resultant structural colors were developed. A self-repairing membrane inspired by rapid self-sealing processes in plants was developed for inflatable lightweight structures such as rubber boats or Tensairity constructions. The researchers applied a thin soft cellular polyurethane foam coating on the inside of a fabric substrate, which closes the crack if the membrane is punctured with a spike. Self-healing materials, polymers and composite materials capable of mending cracks have been produced based on biological materials.
The self-healing properties may also be achieved by the breaking and reforming of hydrogen bonds upon cyclical stress of the material.
Surfaces
Surfaces that recreate the properties of shark skin are intended to enable more efficient movement through water. Efforts have been made to produce fabric that emulates shark skin.
Surface tension biomimetics are being researched for technologies such as hydrophobic or hydrophilic coatings and microactuators.
Adhesion
Wet adhesion
Some amphibians, such as tree and torrent frogs and arboreal salamanders, are able to attach to and move over wet or even flooded environments without falling. Such organisms have toe pads which are permanently wetted by mucus secreted from glands that open into the channels between epidermal cells. They attach to mating surfaces by wet adhesion and are capable of climbing on wet rocks even when water is flowing over the surface. Tire treads have also been inspired by the toe pads of tree frogs. 3D printed hierarchical surface models, inspired by tree and torrent frog toe pad design, have been observed to produce better wet traction than conventional tire designs.
Marine mussels can stick easily and efficiently to surfaces underwater under the harsh conditions of the ocean. Mussels use strong filaments to adhere to rocks in the inter-tidal zones of wave-swept beaches, preventing them from being swept away in strong sea currents. Mussel foot proteins attach the filaments to rocks, boats and practically any surface in nature, including other mussels. These proteins contain a mix of amino acid residues that has been adapted specifically for adhesive purposes. Researchers from the University of California, Santa Barbara borrowed and simplified the chemistries that the mussel foot uses to overcome the engineering challenge of wet adhesion, creating copolyampholytes and one-component adhesive systems with potential for use in nanofabrication protocols. Other research has proposed adhesive glues inspired by mussels.
Dry adhesion
Leg attachment pads of several animals, including many insects (e.g., beetles and flies), spiders and lizards (e.g., geckos), are capable of attaching to a variety of surfaces and are used for locomotion, even on vertical walls or across ceilings. Attachment systems in these organisms have similar structures at their terminal elements of contact, known as setae. Such biological examples have offered inspiration in order to produce climbing robots, boots and tape. Synthetic setae have also been developed for the production of dry adhesives.
Liquid repellency
Superliquiphobicity refers to a remarkable surface property where a solid surface exhibits an extreme aversion to liquids, causing droplets to bead up and roll off almost instantaneously upon contact. This behavior arises from intricate surface textures and interactions at the nanoscale, effectively preventing liquids from wetting or adhering to the surface. The term "superliquiphobic" is derived from "superhydrophobic," which describes surfaces highly resistant to water. Superliquiphobic surfaces go beyond water repellency and display repellent characteristics towards a wide range of liquids, including those with very low surface tension or containing surfactants.
Superliquiphobicity emerges when a solid surface possesses minute roughness, forming composite interfaces with droplets and altering their contact angles. This behavior hinges on the roughness factor (Rf), defined as the ratio of the actual solid-liquid contact area to its flat projection, which influences the contact angle. On rough surfaces, non-wetting liquids give rise to composite solid-liquid-air interfaces, with contact angles determined by the distribution of wetted areas and air pockets. Superliquiphobicity is achieved by increasing Rf and the fraction of the interface occupied by trapped air (the fractional liquid-air contact area, fLA), leading to surfaces that actively repel liquids.
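The governing equation is not stated here, but one common form of the Cassie-Baxter relation for such composite interfaces, cos θ* = Rf · fSL · cos θ − (1 − fSL), captures the described dependence on roughness and on the fraction of the interface that remains wetted. The sketch below uses that relation with assumed, purely illustrative contact angles and parameter values:

import math

def cassie_baxter_angle(flat_angle_deg, roughness_factor, wetted_fraction):
    """Apparent contact angle on a composite solid-liquid-air interface:
    cos(theta*) = Rf * f_SL * cos(theta0) - (1 - f_SL)."""
    cos_apparent = (roughness_factor * wetted_fraction * math.cos(math.radians(flat_angle_deg))
                    - (1.0 - wetted_fraction))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_apparent))))

# A liquid with a modest 110-degree angle on the flat material (assumed value) is strongly
# repelled once 90% of the interface consists of trapped air pockets
print(cassie_baxter_angle(110.0, 1.0, 1.0))  # ~110 degrees on the smooth surface
print(cassie_baxter_angle(110.0, 1.2, 0.1))  # ~160 degrees on the rough, air-trapping surface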
The inspiration for crafting such surfaces draws from nature's ingenuity, prominently illustrated by the renowned "lotus effect". Leaves of water-repellent plants, like the lotus, exhibit inherent hierarchical structures featuring nanoscale wax-coated formations. These structures lead to superhydrophobicity, where water droplets perch on trapped air bubbles, resulting in high contact angles and minimal contact angle hysteresis. This natural example guides the development of superliquiphobic surfaces, capitalizing on re-entrant geometries that can repel low surface tension liquids and achieve near-zero contact angles.
Creating superliquiphobic surfaces involves pairing re-entrant geometries with low surface energy materials, such as fluorinated substances. These geometries include overhangs that widen beneath the surface, enabling repellency even for minimal contact angles. Researchers have successfully fabricated various re-entrant geometries, offering a pathway for practical applications in diverse fields. These surfaces find utility in self-cleaning, anti-icing, anti-fogging, antifouling, and more, presenting innovative solutions to challenges in biomedicine, desalination, and energy conversion.
In essence, superliquiphobicity, inspired by natural models like the lotus leaf, capitalizes on re-entrant geometries and surface properties to create interfaces that actively repel liquids. Such surfaces hold promise across a range of applications, offering enhanced functionality and performance in various technological and industrial contexts.
Optics
Biomimetic materials are gaining increasing attention in the field of optics and photonics. There are still few known bioinspired or biomimetic products involving the photonic properties of plants or animals. However, understanding how nature designed such optical materials from biological resources is a current field of research.
Inspiration from fruits and plants
One source of biomimetic inspiration is plants. Plants have proven to be concept generators for the following functions: (re)action-coupling, self-adaptability, self-repair, and energy autonomy. As plants do not have a centralized decision-making unit (i.e. a brain), most plants have a decentralized autonomous system in their various organs and tissues. Therefore, they react to multiple stimuli such as light, heat, and humidity.
One example is the carnivorous plant species Dionaea muscipula (Venus flytrap). For the last 25 years, research has focused on the motion principles of the plant in order to develop artificial Venus flytrap (AVFT) robots. Its movement during prey capture has inspired soft robotic motion systems. The fast snap-buckling of the trap closure movement (within 100–300 ms) is initiated when prey triggers the hairs of the plant within a certain time (twice within 20 s). AVFT systems exist in which the trap closure movements are actuated via magnetism, electricity, pressurized air, and temperature changes.
Another example of mimicking plants is Pollia condensata, also known as the marble berry. The chiral self-assembly of cellulose inspired by the Pollia condensata berry has been exploited to make optically active films. Such films are made from cellulose, a biodegradable and biobased resource obtained from wood or cotton. The structural colours can potentially be everlasting and more vibrant than those obtained from chemical absorption of light. Pollia condensata is not the only fruit showing a structurally coloured skin; iridescence is also found in berries of other species such as Margaritaria nobilis. These fruits show iridescent colors in the blue-green region of the visible spectrum, which gives the fruit a strong metallic and shiny visual appearance. The structural colours come from the organisation of cellulose chains in the fruit's epicarp, a part of the fruit skin. Each cell of the epicarp is made of a multilayered envelope that behaves like a Bragg reflector. However, the light reflected from the skin of these fruits is not polarised, unlike that arising from man-made replicas obtained from the self-assembly of cellulose nanocrystals into helicoids, which reflect only left-handed circularly polarised light.
The fruit of Elaeocarpus angustifolius also shows structural colour that arises from the presence of specialised cells called iridosomes, which have layered structures. Similar iridosomes have also been found in Delarbrea michieana fruits.
In plants, multilayer structures can be found either at the surface of the leaves (on top of the epidermis), such as in Selaginella willdenowii, or within specialized intra-cellular organelles, the so-called iridoplasts, which are located inside the cells of the upper epidermis. For instance, the rainforest plant Begonia pavonina has iridoplasts located inside its epidermal cells.
Structural colours have also been found in several algae, such as in the red alga Chondrus crispus (Irish Moss).
Inspiration from animals
Structural coloration produces the rainbow colours of soap bubbles, butterfly wings and many beetle scales. Phase-separation has been used to fabricate ultra-white scattering membranes from polymethylmethacrylate, mimicking the beetle Cyphochilus. LED lights can be designed to mimic the patterns of scales on fireflies' abdomens, improving their efficiency.
Morpho butterfly wings are structurally coloured to produce a vibrant blue that does not vary with angle. This effect can be mimicked by a variety of technologies. Lotus Cars claim to have developed a paint that mimics the Morpho butterfly's structural blue colour. In 2007, Qualcomm commercialised an interferometric modulator display technology, "Mirasol", using Morpho-like optical interference. In 2010, the dressmaker Donna Sgro made a dress from Teijin Fibers' Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure of Morpho butterfly wing scales.
Canon Inc.'s SubWavelength structure Coating uses wedge-shaped structures the size of the wavelength of visible light. The wedge-shaped structures cause a continuously changing refractive index as light travels through the coating, significantly reducing lens flare. This imitates the structure of a moth's eye. Notable figures such as the Wright brothers and Leonardo da Vinci attempted to replicate the flight observed in birds. In an effort to reduce aircraft noise, researchers have looked to the leading edge of owl feathers, which have an array of small finlets or rachis adapted to disperse aerodynamic pressure and provide nearly silent flight to the bird.
Agricultural systems
Holistic planned grazing, using fencing and/or herders, seeks to restore grasslands by carefully planning movements of large herds of livestock to mimic the vast herds found in nature. The natural system being mimicked and used as a template is grazing animals concentrated by pack predators that must move on after eating, trampling, and manuring an area, and returning only after it has fully recovered. Its founder Allan Savory and some others have claimed potential in building soil, increasing biodiversity, and reversing desertification. However, many researchers have disputed Savory's claim. Studies have often found that the method increases desertification instead of reducing it.
Other uses
Some air conditioning systems use biomimicry in their fans to increase airflow while reducing power consumption.
Technologists like Jas Johl have speculated that the functionality of vacuole cells could be used to design highly adaptable security systems. "The functionality of a vacuole, a biological structure that guards and promotes growth, illuminates the value of adaptability as a guiding principle for security." The functions and significance of vacuoles are fractal in nature: the organelle has no basic shape or size; its structure varies according to the requirements of the cell. Vacuoles not only isolate threats, contain what's necessary, export waste, and maintain pressure; they also help the cell scale and grow. Johl argues these functions are necessary for any security system design. The 500 Series Shinkansen used biomimicry to reduce energy consumption and noise levels while increasing passenger comfort. With reference to space travel, NASA and other firms have sought to develop swarm-type space drones inspired by bee behavioural patterns, and octopod terrestrial drones designed with reference to desert spiders.
Other technologies
Protein folding has been used to control material formation for self-assembled functional nanostructures. Polar bear fur has inspired the design of thermal collectors and clothing. The light-refractive properties of the moth's eye have been studied to reduce the reflectivity of solar panels.
The Bombardier beetle's powerful repellent spray inspired a Swedish company to develop a "micro mist" spray technology, which is claimed to have a low carbon impact (compared to aerosol sprays). The beetle mixes chemicals and releases its spray via a steerable nozzle at the end of its abdomen, stinging and confusing the victim.
Most viruses have an outer capsule 20 to 300 nm in diameter. Virus capsules are remarkably robust and capable of withstanding temperatures as high as 60 °C; they are stable across the pH range 2–10. Viral capsules can be used to create nano-device components such as nanowires, nanotubes, and quantum dots. Tubular virus particles such as the tobacco mosaic virus (TMV) can be used as templates to create nanofibers and nanotubes, since both the inner and outer layers of the virus are charged surfaces which can induce nucleation of crystal growth. This was demonstrated through the production of platinum and gold nanotubes using TMV as a template. Mineralized virus particles have been shown to withstand various pH values by mineralizing the viruses with different materials such as silicon, PbS, and CdS, and could therefore serve as useful carriers of material. A spherical plant virus called cowpea chlorotic mottle virus (CCMV) has interesting expanding properties when exposed to environments with a pH higher than 6.5. Above this pH, 60 independent pores with diameters of about 2 nm begin to exchange substances with the environment. The structural transition of the viral capsid can be utilized in biomorphic mineralization for selective uptake and deposition of minerals by controlling the solution pH. Possible applications include using the viral cage to produce uniformly shaped and sized quantum dot semiconductor nanoparticles through a series of pH washes. This is an alternative to the apoferritin cage technique currently used to synthesize uniform CdSe nanoparticles. Such materials could also be used for targeted drug delivery, since the particles release their contents upon exposure to specific pH levels.
See also
Artificial photosynthesis
Artificial enzyme
Bio-inspired computing
Bioinspiration & Biomimetics
Biomimetic synthesis
Carbon sequestration
Reverse engineering
Synthetic biology
References
Further reading
Benyus, J. M. (2001). Along Came a Spider. Sierra, 86(4), 46–47.
Hargroves, K. D. & Smith, M. H. (2006). Innovation inspired by nature Biomimicry. Ecos, (129), 27–28.
Marshall, A. (2009). Wild Design: The Ecomimicry Project, North Atlantic Books: Berkeley.
Passino, Kevin M. (2004). Biomimicry for Optimization, Control, and Automation. Springer.
Pyper, W. (2006). Emulating nature: The rise of industrial ecology. Ecos, (129), 22–26.
Smith, J. (2007). It's only natural. The Ecologist, 37(8), 52–55.
Thompson, D'Arcy W., On Growth and Form. Dover 1992 reprint of 1942 2nd ed. (1st ed., 1917).
Vogel, S. (2000). Cats' Paws and Catapults: Mechanical Worlds of Nature and People. Norton.
External links
Biomimetics MIT
Sex, Velcro and Biomimicry with Janine Benyus
Janine Benyus: Biomimicry in Action from TED 2009
Design by Nature - National Geographic
Michael Pawlyn: Using nature's genius in architecture from TED 2010
Robert Full shows how human engineers can learn from animals' tricks from TED 2002
The Fast Draw: Biomimicry from CBS News
Evolutionary biology
Biotechnology
Bioinformatics
Biological engineering
Biophysics
Industrial ecology
Bionics
Water conservation
Renewable energy
Sustainable transport | 0.776637 | 0.995135 | 0.772859 |
Thrust | Thrust is a reaction force described quantitatively by Newton's third law. When a system expels or accelerates mass in one direction, the accelerated mass will cause a force of equal magnitude but opposite direction to be applied to that system.
The force applied on a surface in a direction perpendicular or normal to the surface is also called thrust. Force, and thus thrust, is measured using the International System of Units (SI) in newtons (symbol: N), and represents the amount needed to accelerate 1 kilogram of mass at the rate of 1 meter per second per second. In mechanical engineering, force orthogonal to the main load (such as in parallel helical gears) is referred to as static thrust.
Examples
A fixed-wing aircraft propulsion system generates forward thrust when air is pushed in the direction opposite to flight. This can be done by different means such as the spinning blades of a propeller, the propelling jet of a jet engine, or by ejecting hot gases from a rocket engine. Reverse thrust can be generated to aid braking after landing by reversing the pitch of variable-pitch propeller blades, or using a thrust reverser on a jet engine. Rotary wing aircraft use rotors and thrust vectoring V/STOL aircraft use propellers or engine thrust to support the weight of the aircraft and to provide forward propulsion.
A motorboat propeller generates thrust when it rotates and forces water backwards.
A rocket is propelled forward by a thrust equal in magnitude, but opposite in direction, to the time-rate of momentum change of the exhaust gas accelerated from the combustion chamber through the rocket engine nozzle. This is the exhaust velocity with respect to the rocket, times the time-rate at which the mass is expelled, or in mathematical terms:

T = v × dm/dt

where T is the thrust generated (force), dm/dt is the rate of change of mass with respect to time (mass flow rate of exhaust), and v is the velocity of the exhaust gases measured relative to the rocket.
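For illustration, the relation can be evaluated numerically; the mass flow rate and exhaust velocity below are arbitrary example values, not data for any particular engine:

def rocket_thrust(mass_flow_rate_kg_s, exhaust_velocity_m_s):
    """Thrust in newtons from the momentum flux of the exhaust: T = v * dm/dt."""
    return exhaust_velocity_m_s * mass_flow_rate_kg_s

# Example: 500 kg of propellant expelled per second at 3,000 m/s relative to the rocket
print(rocket_thrust(500.0, 3000.0))  # 1,500,000 N, i.e. 1.5 MN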
For vertical launch of a rocket the initial thrust at liftoff must be more than the weight.
Each of the three Space Shuttle Main Engines could produce a thrust of 1.8 meganewtons, and each of the Space Shuttle's two Solid Rocket Boosters 14.7 MN, together 29.4 MN.
By contrast, the Simplified Aid for EVA Rescue (SAFER) has 24 small thrusters.
In the air-breathing category, the AMT-USA AT-180 jet engine developed for radio-controlled aircraft produces 90 N (20 lbf) of thrust. The GE90-115B engine fitted on the Boeing 777-300ER, recognized by the Guinness Book of World Records as the "World's Most Powerful Commercial Jet Engine", had a thrust of 569 kN (127,900 lbf) until it was surpassed by the GE9X, fitted on the upcoming Boeing 777X, at 609 kN (134,300 lbf).
Concepts
Thrust to power
The power needed to generate thrust and the force of the thrust can be related in a non-linear way. In general, P² ∝ T³. The proportionality constant varies, and can be solved for a uniform flow, where v∞ is the incoming air velocity, vd is the velocity at the actuator disc, and vf is the final exit velocity:

T = (dm/dt) × (vf − v∞)
P = (1/2) × (dm/dt) × (vf² − v∞²)

Solving for the velocity at the disc, vd, we then have:

vd = (vf + v∞) / 2

When incoming air is accelerated from a standstill – for example when hovering – then v∞ = 0 and vd = vf / 2; with the mass flow rate through a disc of area A in a fluid of density ρ given by dm/dt = ρ A vd, we can find:

T = (1/2) ρ A vf²
P = (1/4) ρ A vf³

From here we can see the relationship, finding:

P = T^(3/2) / √(2 ρ A)
The inverse of the proportionality constant, the "efficiency" of an otherwise-perfect thruster, is proportional to the area of the cross section of the propelled volume of fluid and the density of the fluid. This helps to explain why moving through water is easier and why aircraft have much larger propellers than watercraft.
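A short numerical sketch of this relationship for the hovering case, using the momentum-theory result P = T^(3/2) / √(2 ρ A) above; the thrust and disc areas are assumed example values:

import math

def ideal_hover_power(thrust_N, disc_area_m2, air_density_kg_m3=1.225):
    """Ideal (induced) power required to hover: P = sqrt(T^3 / (2 * rho * A))."""
    return math.sqrt(thrust_N ** 3 / (2.0 * air_density_kg_m3 * disc_area_m2))

# The same 10 kN of thrust needs far less power with a large rotor than with a small fan
print(ideal_hover_power(10_000.0, 50.0))  # ~90 kW for a helicopter-sized rotor disc
print(ideal_hover_power(10_000.0, 2.0))   # ~452 kW for a small ducted fan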
Thrust to propulsive power
A very common question is how to compare the thrust rating of a jet engine with the power rating of a piston engine. Such a comparison is difficult, as these quantities are not equivalent. A piston engine does not move the aircraft by itself (the propeller does that), so piston engines are usually rated by how much power they deliver to the propeller. Except for changes in temperature and air pressure, this quantity depends basically on the throttle setting.
A jet engine has no propeller, so the propulsive power of a jet engine is determined from its thrust as follows. Power is the force (F) it takes to move something over some distance (d) divided by the time (t) it takes to move that distance:

P = F × d / t
In the case of a rocket or a jet aircraft, the force is exactly the thrust (T) produced by the engine. If the rocket or aircraft is moving at about a constant speed, then distance divided by time is just speed, so power is thrust times speed:

P = T × v
This formula looks very surprising, but it is correct: the propulsive power (or power available) of a jet engine increases with its speed. If the speed is zero, then the propulsive power is zero. If a jet aircraft is at full throttle but attached to a static test stand, then the jet engine produces no propulsive power, though thrust is still produced. The combination piston engine–propeller also has a propulsive power with exactly the same formula, and it will also be zero at zero speed – but that is for the engine–propeller set. The engine alone will continue to produce its rated power at a constant rate, whether the aircraft is moving or not.
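The speed dependence can be made concrete with a short sketch; the 100 kN thrust figure is an assumed example, roughly representative of a large turbofan:

def propulsive_power(thrust_N, speed_m_s):
    """Propulsive power of a jet engine: P = T * v (zero when the aircraft is stationary)."""
    return thrust_N * speed_m_s

# An assumed 100 kN of thrust at rest, at 100 m/s, and at a 250 m/s cruise
for v in (0.0, 100.0, 250.0):
    print(v, propulsive_power(100_000.0, v))  # 0 W, 10 MW, 25 MW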
Now, imagine the restraint holding the aircraft on the test stand is released, and the jet and the piston aircraft start to move. At low speeds:
The piston engine will have constant 100% power, and the propeller's thrust will vary with speed
The jet engine will have constant 100% thrust, and the engine's power will vary with speed
Excess thrust
If a powered aircraft is generating thrust T and experiencing drag D, the difference between the two, T − D, is termed the excess thrust. The instantaneous performance of the aircraft is mostly dependent on the excess thrust.
Excess thrust is a vector and is determined as the vector difference between the thrust vector and the drag vector.
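A minimal sketch of this bookkeeping along the flight path; the thrust, drag, and aircraft mass are assumed example values:

def excess_thrust(thrust_N, drag_N):
    """Excess thrust T - D along the flight path."""
    return thrust_N - drag_N

# Assumed values: 50 kN of thrust, 42 kN of drag, a 60-tonne aircraft
t_excess = excess_thrust(50_000.0, 42_000.0)
print(t_excess)              # 8,000 N available to accelerate or climb
print(t_excess / 60_000.0)   # roughly 0.13 m/s^2 of acceleration in level flight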
Thrust axis
The thrust axis for an airplane is the line of action of the total thrust at any instant. It depends on the location, number, and characteristics of the jet engines or propellers. It usually differs from the drag axis. If so, the distance between the thrust axis and the drag axis will cause a moment that must be resisted by a change in the aerodynamic force on the horizontal stabiliser. Notably, the Boeing 737 MAX, with larger, lower-slung engines than previous 737 models, had a greater distance between the thrust axis and the drag axis, causing the nose to rise up in some flight regimes, necessitating a pitch-control system, MCAS. Early versions of MCAS malfunctioned in flight with catastrophic consequences, leading to the deaths of over 300 people in 2018 and 2019.
See also
(most common in modern rockets)
"Pound of thrust": thrust (force) required to accelerate one pound at one g
References
Aircraft aerodynamics
Force
Temporal rates | 0.776767 | 0.994947 | 0.772842 |
Atmospheric escape | Atmospheric escape is the loss of planetary atmospheric gases to outer space. A number of different mechanisms can be responsible for atmospheric escape; these processes can be divided into thermal escape, non-thermal (or suprathermal) escape, and impact erosion. The relative importance of each loss process depends on the planet's escape velocity, its atmospheric composition, and its distance from its star. Escape occurs when molecular kinetic energy overcomes gravitational energy; in other words, a molecule can escape when it is moving faster than the escape velocity of its planet. Categorizing the rate of atmospheric escape in exoplanets is necessary for determining whether an atmosphere persists, and thus the exoplanet's habitability and likelihood of hosting life.
Thermal escape mechanisms
Thermal escape occurs if the molecular velocity due to thermal energy is sufficiently high. Thermal escape happens at all scales, from the molecular level (Jeans escape) to bulk atmospheric outflow (hydrodynamic escape).
Jeans escape
One classical thermal escape mechanism is Jeans escape, named after British astronomer Sir James Jeans, who first described this process of atmospheric loss. In a quantity of gas, the average velocity of any one molecule is measured by the gas's temperature, but the velocities of individual molecules change as they collide with one another, gaining and losing kinetic energy. The variation in kinetic energy among the molecules is described by the Maxwell distribution. The kinetic energy (E_kin), mass (m), and velocity (v) of a molecule are related by E_kin = (1/2) m v². Individual molecules in the high tail of the distribution (where a few particles have much higher speeds than the average) may reach escape velocity and leave the atmosphere, provided they can escape before undergoing another collision; this happens predominantly in the exosphere, where the mean free path is comparable in length to the pressure scale height. The number of particles able to escape depends on the molecular concentration at the exobase, which is limited by diffusion through the thermosphere.
Three factors strongly contribute to the relative importance of Jeans escape: mass of the molecule, escape velocity of the planet, and heating of the upper atmosphere by radiation from the parent star. Heavier molecules are less likely to escape because they move slower than lighter molecules at the same temperature. This is why hydrogen escapes from an atmosphere more easily than carbon dioxide. Second, a planet with a larger mass tends to have more gravity, so the escape velocity tends to be greater, and fewer particles will gain the energy required to escape. This is why the gas giant planets still retain significant amounts of hydrogen, which escapes more readily from Earth's atmosphere. Finally, the distance a planet orbits from a star also plays a part; a close planet has a hotter atmosphere, with higher velocities and hence, a greater likelihood of escape. A distant body has a cooler atmosphere, with lower velocities, and less chance of escape.
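A rough back-of-the-envelope comparison illustrates the mass dependence; the 1000 K exobase temperature is an assumed round number, and Earth's escape velocity of about 11.2 km/s is used:

import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg

def most_probable_speed(temperature_K, molar_mass_amu):
    """Most probable speed of the Maxwell distribution: v = sqrt(2*k*T/m)."""
    return math.sqrt(2.0 * K_B * temperature_K / (molar_mass_amu * AMU))

V_ESCAPE = 11_200.0  # m/s, Earth
for name, mass_amu in (("H", 1.0), ("He", 4.0), ("O", 16.0), ("CO2", 44.0)):
    v = most_probable_speed(1000.0, mass_amu)
    # Lighter species sit much closer to escape velocity, so more of the Maxwell tail escapes
    print(f"{name}: {v:.0f} m/s, {v / V_ESCAPE:.2f} of escape velocity")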
Hydrodynamic escape
An atmosphere with high pressure and temperature can also undergo hydrodynamic escape. In this case, a large amount of thermal energy, usually through extreme ultraviolet radiation, is absorbed by the atmosphere. As molecules are heated, they expand upwards and are further accelerated until they reach escape velocity. In this process, lighter molecules can drag heavier molecules with them through collisions as a larger quantity of gas escapes. Hydrodynamic escape has been observed for exoplanets close to their host star, including the hot Jupiter HD 209458b.
Non-thermal (suprathermal) escape
Escape can also occur due to non-thermal interactions. Most of these processes occur due to photochemistry or charged particle (ion) interactions.
Photochemical escape
In the upper atmosphere, high energy ultraviolet photons can react more readily with molecules. Photodissociation can break a molecule into smaller components and provide enough energy for those components to escape. Photoionization produces ions, which can get trapped in the planet's magnetosphere or undergo dissociative recombination. In the first case, these ions may undergo escape mechanisms described below. In the second case, the ion recombines with an electron, releases energy, and can escape.
Sputtering escape
Excess kinetic energy from the solar wind can impart sufficient energy to eject atmospheric particles, similar to sputtering from a solid surface. This type of interaction is more pronounced in the absence of a planetary magnetosphere, as the electrically charged solar wind is deflected by magnetic fields, which mitigates the loss of atmosphere.
Charge exchange escape
Ions in the solar wind or magnetosphere can charge exchange with molecules in the upper atmosphere. A fast-moving ion can capture the electron from a slow atmospheric neutral, creating a fast neutral and a slow ion. The slow ion is trapped on the magnetic field lines, but the fast neutral can escape.
Polar wind escape
Atmospheric molecules can also escape from the polar regions on a planet with a magnetosphere, due to the polar wind. Near the poles of a magnetosphere, the magnetic field lines are open, allowing a pathway for ions in the atmosphere to exhaust into space. The ambipolar electric field accelerates ions in the ionosphere, launching them along these open field lines.
Impact erosion
The impact of a large meteoroid can lead to the loss of atmosphere. If a collision is sufficiently energetic, it is possible for ejecta, including atmospheric molecules, to reach escape velocity.
In order to have a significant effect on atmospheric escape, the radius of the impacting body must be larger than the scale height. The projectile can impart momentum, and thereby facilitate escape of the atmosphere, in three main ways: (a) the meteoroid heats and accelerates the gas it encounters as it travels through the atmosphere, (b) solid ejecta from the impact crater heat atmospheric particles through drag as they are ejected, and (c) the impact creates vapor which expands away from the surface. In the first case, the heated gas can escape in a manner similar to hydrodynamic escape, albeit on a more localized scale. Most of the escape from impact erosion occurs due to the third case. The maximum atmosphere that can be ejected is above a plane tangent to the impact site.
Dominant atmospheric escape and loss processes in the Solar System
Earth
Atmospheric escape of hydrogen on Earth is due to charge exchange escape (~60–90%), Jeans escape (~10–40%), and polar wind escape (~10–15%), currently losing about 3 kg/s of hydrogen. The Earth additionally loses approximately 50 g/s of helium primarily through polar wind escape. Escape of other atmospheric constituents is much smaller. A Japanese research team in 2017 found evidence of a small number of oxygen ions on the moon that came from the Earth.
In 1 billion years, the Sun will be 10% brighter than it is now, making Earth hot enough to dramatically increase the amount of water vapor in the atmosphere, where solar ultraviolet light will dissociate H2O, allowing hydrogen to escape gradually into space until the oceans dry up.
Venus
Recent models indicate that hydrogen escape on Venus is almost entirely due to suprathermal mechanisms, primarily photochemical reactions and charge exchange with the solar wind. Oxygen escape is dominated by charge exchange and sputtering escape. Venus Express measured the effect of coronal mass ejections on the rate of atmospheric escape of Venus, and researchers found a factor of 1.9 increase in escape rate during periods of increased coronal mass ejections compared with calmer space weather.
Mars
Primordial Mars also suffered from the cumulative effects of multiple small impact erosion events, and recent observations with MAVEN suggest that 66% of the 36Ar in the Martian atmosphere has been lost over the last 4 billion years due to suprathermal escape, and the amount of CO2 lost over the same time period is around 0.5 bar or more.
The MAVEN mission has also explored the current rate of atmospheric escape of Mars. Jeans escape plays an important role in the continued escape of hydrogen on Mars, contributing to a loss rate that varies between 160 and 1800 g/s. Jeans escape of hydrogen can be significantly modulated by lower atmospheric processes, such as gravity waves, convection, and dust storms. Oxygen loss is dominated by suprathermal methods: photochemical (~1300 g/s), charge exchange (~130 g/s), and sputtering (~80 g/s) escape combine for a total loss rate of ~1500 g/s. Other heavy atoms, such as carbon and nitrogen, are primarily lost due to photochemical reactions and interactions with the solar wind.
Titan and Io
Saturn's moon Titan and Jupiter's moon Io have atmospheres and are subject to atmospheric loss processes. They have no magnetic fields of their own, but orbit planets with powerful magnetic fields, which protect a given moon from the solar wind when its orbit is within the bow shock. However, Titan spends roughly half of its orbital period outside the bow shock, subjected to the unimpeded solar wind. The kinetic energy gained from pick-up and sputtering associated with the solar wind increases thermal escape throughout the orbit of Titan, causing neutral hydrogen to escape. The escaped hydrogen maintains an orbit following in the wake of Titan, creating a neutral hydrogen torus around Saturn. Io, in its orbit around Jupiter, encounters a plasma cloud. Interaction with the plasma cloud induces sputtering, kicking off sodium particles. The interaction produces a stationary banana-shaped charged sodium cloud along a part of the orbit of Io.
Observations of exoplanet atmospheric escape
Studies of exoplanets have measured atmospheric escape as a means of determining atmospheric composition and habitability. The most common method is Lyman-alpha line absorption. Much as exoplanets are discovered using the dimming of a distant star's brightness (transit), looking specifically at wavelengths corresponding to hydrogen absorption describes the amount of hydrogen present in a sphere around the exoplanet. This method indicates that the hot Jupiters HD209458b and HD189733b and Hot Neptune GJ436b are experiencing significant atmospheric escape.
In 2018 it was discovered with the Hubble Space Telescope that atmospheric escape can also be measured with the 1083 nm helium triplet. This wavelength is much more accessible from ground-based high-resolution spectrographs than the ultraviolet Lyman-alpha lines. The wavelength around the helium triplet also has the advantage that it is not severely affected by interstellar absorption, which is an issue for Lyman-alpha. Helium, on the other hand, has the disadvantage that it requires knowledge of the hydrogen-helium ratio to model the mass loss of the atmosphere. Helium escape has been measured around many giant exoplanets, including WASP-107b, WASP-69b and HD 189733b. It has also been detected around some mini-Neptunes, such as TOI-560 b and HD 63433 c.
Other atmospheric loss mechanisms
Sequestration is not a form of escape from the planet, but a loss of molecules from the atmosphere into the planet. It occurs on Earth when water vapor condenses to form rain or glacial ice, when carbon dioxide is sequestered in sediments or cycled through the oceans, or when rocks are oxidized (for example, by increasing the oxidation state of iron in rocks from Fe2+ to Fe3+). Gases can also be sequestered by adsorption, where fine particles in the regolith capture gas which adheres to the surface particles.
References
Further reading
Ingersoll, Andrew P. (2013). Planetary climates. Princeton, N.J.: Princeton University Press. . .
Concepts in astrophysics
Atmosphere | 0.781957 | 0.988323 | 0.772826 |
Astrophysics | Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space–what they are, rather than where they are", which is studied in celestial mechanics.
Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, quantum and physical cosmology (the physical study of the largest-scale structures of the universe), including string cosmology and astroparticle physics.
History
Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthly world was the realm which underwent growth and decay and in which natural motion was in a straight line and ended when the moving object reached its goal. Consequently, it was held that the celestial region was made of a fundamentally different kind of matter from that found in the terrestrial sphere; either Fire as maintained by Plato, or Aether as maintained by Aristotle.
During the 17th century, natural philosophers such as Galileo, Descartes, and Newton began to maintain that the celestial and terrestrial regions were made of similar kinds of material and were subject to the same natural laws. Their challenge was that the tools had not yet been invented with which to prove these assertions.
For much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects. A new astronomy, soon to be called astrophysics, began to emerge when William Hyde Wollaston and Joseph von Fraunhofer independently discovered that, when decomposing the light from the Sun, a multitude of dark lines (regions where there was less or no light) were observed in the spectrum. By 1860 the physicist, Gustav Kirchhoff, and the chemist, Robert Bunsen, had demonstrated that the dark lines in the solar spectrum corresponded to bright lines in the spectra of known gases, specific lines corresponding to unique chemical elements. Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the Solar atmosphere. In this way it was proved that the chemical elements found in the Sun and stars were also found on Earth.
Among those who extended the study of solar and stellar spectra was Norman Lockyer, who in 1868 detected radiant as well as dark lines in solar spectra. Working with the chemist Edward Frankland to investigate the spectra of elements at various temperatures and pressures, he could not associate a yellow line in the solar spectrum with any known element. He thus claimed the line represented a new element, which was called helium, after the Greek Helios, the Sun personified.
In 1885, Edward C. Pickering undertook an ambitious program of stellar spectral classification at Harvard College Observatory, in which a team of women computers, notably Williamina Fleming, Antonia Maury, and Annie Jump Cannon, classified the spectra recorded on photographic plates. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Following Pickering's vision, by 1924 Cannon expanded the catalog to nine volumes and over a quarter of a million stars, developing the Harvard Classification Scheme, which was accepted for worldwide use in 1922.
In 1895, George Ellery Hale and James E. Keeler, along with a group of ten associate editors from Europe and the United States, established The Astrophysical Journal: An International Review of Spectroscopy and Astronomical Physics. It was intended that the journal would fill the gap between journals in astronomy and physics, providing a venue for publication of articles on astronomical applications of the spectroscope; on laboratory research closely allied to astronomical physics, including wavelength determinations of metallic and gaseous spectra and experiments on radiation and absorption; on theories of the Sun, Moon, planets, comets, meteors, and nebulae; and on instrumentation for telescopes and laboratories.
Around 1920, following the discovery of the Hertzsprung–Russell diagram still used as the basis for classifying stars and their evolution, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even that stars are largely composed of hydrogen (see metallicity), had not yet been discovered.
In 1925 Cecilia Helena Payne (later Cecilia Payne-Gaposchkin) wrote an influential doctoral dissertation at Radcliffe College, in which she applied Saha's ionization theory to stellar atmospheres to relate the spectral classes to the temperature of stars. Most significantly, she discovered that hydrogen and helium were the principal components of stars, not the heavier elements that dominate the composition of Earth. Despite Eddington's suggestion, the discovery was so unexpected that her dissertation readers (including Russell) convinced her to modify the conclusion before publication. However, later research confirmed her discovery.
By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical, x-ray, and gamma wavelengths. In the 21st century, it further expanded to include observations based on gravitational waves.
Observational astrophysics
Observational astronomy is a division of the astronomical science that is concerned with recording and interpreting data, in contrast with theoretical astrophysics, which is mainly concerned with finding out the measurable implications of physical models. It is the practice of observing celestial objects by using telescopes and other astronomical apparatus.
Most astrophysical observations are made using the electromagnetic spectrum.
Radio astronomy studies radiation with a wavelength greater than a few millimeters. Example areas of study are radio waves, usually emitted by cold objects such as interstellar gas and dust clouds; the cosmic microwave background radiation, which is the redshifted light from the Big Bang; and pulsars, which were first detected at radio frequencies. The study of these waves requires very large radio telescopes.
Infrared astronomy studies radiation with a wavelength that is too long to be visible to the naked eye but is shorter than radio waves. Infrared observations are usually made with telescopes similar to the familiar optical telescopes. Objects colder than stars (such as planets) are normally studied at infrared frequencies.
Optical astronomy was the earliest kind of astronomy. Telescopes paired with a charge-coupled device or spectroscopes are the most common instruments used. The Earth's atmosphere interferes somewhat with optical observations, so adaptive optics and space telescopes are used to obtain the highest possible image quality. In this wavelength range, stars are highly visible, and many chemical spectra can be observed to study the chemical composition of stars, galaxies, and nebulae.
Ultraviolet, X-ray and gamma ray astronomy study very energetic processes such as binary pulsars, black holes, magnetars, and many others. These kinds of radiation do not penetrate the Earth's atmosphere well. There are two methods in use to observe this part of the electromagnetic spectrum: space-based telescopes and ground-based imaging air Cherenkov telescopes (IACT). Examples of observatories of the first type are RXTE, the Chandra X-ray Observatory and the Compton Gamma Ray Observatory. Examples of IACTs are the High Energy Stereoscopic System (H.E.S.S.) and the MAGIC telescope.
Other than electromagnetic radiation, few things may be observed from the Earth that originate from great distances. A few gravitational wave observatories have been constructed, but gravitational waves are extremely difficult to detect. Neutrino observatories have also been built, primarily to study the Sun. Cosmic rays consisting of very high-energy particles can be observed hitting the Earth's atmosphere.
Observations can also vary in their time scale. Most optical observations take minutes to hours, so phenomena that change faster than this cannot readily be observed. However, historical data on some objects is available, spanning centuries or millennia. On the other hand, radio observations may look at events on a millisecond timescale (millisecond pulsars) or combine years of data (pulsar deceleration studies). The information obtained from these different timescales is very different.
The study of the Sun has a special place in observational astrophysics. Due to the tremendous distance of all other stars, the Sun can be observed in a kind of detail unparalleled by any other star. Understanding the Sun serves as a guide to understanding other stars.
The topic of how stars change, or stellar evolution, is often modeled by placing the varieties of star types in their respective positions on the Hertzsprung–Russell diagram, which can be viewed as representing the state of a stellar object, from birth to destruction.
Theoretical astrophysics
Theoretical astrophysicists use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen.
Theorists in astrophysics endeavor to create theoretical models and figure out the observational consequences of those models. This allows observers to look for data that can refute a model or help in choosing between several alternative or conflicting models.
Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Topics studied by theoretical astrophysicists include stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Relativistic astrophysics serves as a tool to gauge the properties of large-scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole (astro)physics and the study of gravitational waves.
Some widely accepted and studied theories and models in astrophysics, now included in the Lambda-CDM model, are the Big Bang, cosmic inflation, dark matter, dark energy and fundamental theories of physics.
Popularization
The roots of astrophysics can be found in the seventeenth-century emergence of a unified physics, in which the same laws applied to the celestial and terrestrial realms. There were scientists qualified in both physics and astronomy who laid the firm foundation for the current science of astrophysics. In modern times, students continue to be drawn to astrophysics due to its popularization by the Royal Astronomical Society and notable educators such as prominent professors Lawrence Krauss, Subrahmanyan Chandrasekhar, Stephen Hawking, Hubert Reeves, Carl Sagan and Patrick Moore. The efforts of past and present scientists continue to attract young people to study the history and science of astrophysics.
The television sitcom The Big Bang Theory popularized the field of astrophysics with the general public, and featured some well-known scientists such as Stephen Hawking and Neil deGrasse Tyson.
See also
References
Further reading
Astrophysics, Scholarpedia Expert articles
External links
Astronomy and Astrophysics, a European Journal
Astrophysical Journal
Cosmic Journey: A History of Scientific Cosmology from the American Institute of Physics
International Journal of Modern Physics D from World Scientific
List and directory of peer-reviewed Astronomy / Astrophysics Journals
Ned Wright's Cosmology Tutorial, UCLA
Astronomical sub-disciplines | 0.774355 | 0.998016 | 0.772819 |