University Physics
University Physics, informally known as the Sears & Zemansky, is the name of a two-volume physics textbook written by Hugh Young and Roger Freedman. The first edition of University Physics was published by Mark Zemansky and Francis Sears in 1949. Hugh Young became a coauthor with Sears and Zemansky in 1973. Now in its 15th edition, University Physics is among the most widely used introductory textbooks in the world.
University Physics by Pearson is not to be confused with a free textbook by the same name, available from OpenStax.
Contents
Volume 1. Classical mechanics, Waves/acoustics, and Thermodynamics
Mechanics
Units, Physical Quantities, and Vectors
Motion Along a Straight Line
Motion in Two or Three Dimensions
Newton's Laws of Motion
Applying Newton’s Laws
Work and Kinetic Energy
Potential Energy and Energy Conservation
Momentum, Impulse, and Collisions
Rotation of Rigid Bodies
Dynamics of Rotational Motion
Equilibrium and Elasticity
Fluid Mechanics
Gravitation
Periodic Motion
Waves/Acoustics
Mechanical Waves
Sound and Hearing
Thermodynamics
Temperature and Heat
Thermal Properties of Matter
The First Law of Thermodynamics
The Second Law of Thermodynamics
Volume 2. Electromagnetism, optics, and modern physics
Electromagnetism
Electric Charge and Electric Field
Gauss’s Law
Electric Potential
Capacitance and Dielectrics
Current, Resistance, and Electromotive Force
Direct-Current Circuits
Magnetic Field and Magnetic Forces
Sources of Magnetic Field
Electromagnetic Induction
Inductance
Alternating Current
Electromagnetic Waves
Optics
The Nature and Propagation of Light
Geometric Optics
Interference
Diffraction
Modern Physics
Relativity
Photons: Light Waves Behaving as Particles
Particles Behaving as Waves
Quantum Mechanics
Atomic Structure
Molecules and Condensed Matter
Nuclear Physics
Particle Physics and Cosmology
References
Physics textbooks
Energy transformation
Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
Thermal energy is unique because in most cases it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because thermal energy represents a particularly disordered form of energy; it is spread out randomly among many available states of a collection of microscopic particles constituting the system (these combinations of position and momentum for each of the particles are said to form a phase space). The measure of this disorder or randomness is entropy, and its defining feature is that the entropy of an isolated system never decreases. One cannot take a high-entropy system (like a hot substance, with a certain amount of thermal energy) and convert it into a low entropy state (like a low-temperature substance, with correspondingly lower energy), without that entropy going somewhere else (like the surrounding air). In other words, there is no way to concentrate energy without spreading out energy somewhere else.
Thermal energy in equilibrium at a given temperature already represents the maximal evening-out of energy between all possible states because it is not entirely convertible to a "useful" form, i.e. one that can do more than just affect temperature. The second law of thermodynamics states that the entropy of a closed system can never decrease. For this reason, thermal energy in a system may be converted to other kinds of energy with efficiencies approaching 100% only if the entropy of the universe is increased by other means, to compensate for the decrease in entropy associated with the disappearance of the thermal energy and its entropy content. Otherwise, only a part of that thermal energy may be converted to other kinds of energy (and thus useful work). This is because the remainder of the heat must be reserved to be transferred to a thermal reservoir at a lower temperature. The increase in entropy for this process is greater than the decrease in entropy associated with the transformation of the rest of the heat into other types of energy.
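To make this constraint concrete, the short Python sketch below (an illustration added here, not part of the original article) computes the Carnot limit, the maximum fraction of heat any engine can convert into work between a hot and a cold reservoir; the temperatures used are arbitrary example values.

```python
# Minimal sketch (illustrative values, not from the article): the Carnot limit
# on converting thermal energy to work between a hot and a cold reservoir.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of heat convertible to work; temperatures in kelvin."""
    if t_hot_k <= t_cold_k or t_cold_k <= 0:
        raise ValueError("need t_hot_k > t_cold_k > 0")
    return 1.0 - t_cold_k / t_hot_k

# Example: a hot reservoir at 823 K rejecting heat to surroundings at 300 K
print(f"Carnot limit: {carnot_efficiency(823, 300):.1%}")  # ~63.5%
```

Any real conversion of heat to work falls below this bound, which is why the remainder of the heat must be rejected to the colder reservoir, as described above.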
In order to make energy transformation more efficient, it is desirable to avoid thermal conversion. For example, the efficiency of nuclear reactors, where the kinetic energy of the nuclei is first converted to thermal energy and then to electrical energy, lies at around 35%. By direct conversion of kinetic energy to electric energy, effected by eliminating the intermediate thermal energy transformation, the efficiency of the energy transformation process can be dramatically improved.
History of energy transformation
Energy transformations in the universe over time are usually characterized by various kinds of energy, which have been available since the Big Bang, later being "released" (that is, transformed to more active types of energy such as kinetic or radiant energy) by a triggering mechanism.
Release of energy from gravitational potential
A direct transformation of energy occurs when hydrogen produced in the Big Bang collects into structures such as planets, in a process during which part of the gravitational potential energy is converted directly into heat. In Jupiter, Saturn, and Neptune, for example, such heat from the continued collapse of the planets' large gas atmospheres continues to drive most of the planets' weather systems. These systems, consisting of atmospheric bands, winds, and powerful storms, are only partly powered by sunlight. However, on Uranus, little of this process occurs.
On Earth, a significant portion of the heat output from the interior of the planet, estimated at a third to half of the total, is caused by the slow collapse of planetary materials to a smaller size, generating heat.
Release of energy from radioactive potential
Familiar examples of other such processes transforming energy from the Big Bang include nuclear decay, which releases energy that was originally "stored" in heavy isotopes, such as uranium and thorium. This energy was stored at the time of the nucleosynthesis of these elements. This process uses the gravitational potential energy released from the collapse of Type II supernovae to create these heavy elements before they are incorporated into star systems such as the Solar System and the Earth. The energy locked into uranium is released spontaneously during most types of radioactive decay, and can be suddenly released in nuclear fission bombs. In both cases, a portion of the energy binding the atomic nuclei together is released as heat.
Release of energy from hydrogen fusion potential
In a similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of the Big Bang. At that time, according to one theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This resulted in hydrogen representing a store of potential energy which can be released by nuclear fusion. Such a fusion process is triggered by heat and pressure generated from the gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into starlight. Considering the solar system, starlight, overwhelmingly from the Sun, may again be stored as gravitational potential energy after it strikes the Earth. This occurs in the case of avalanches, or when water evaporates from oceans and is deposited as precipitation high above sea level (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity).
Sunlight also drives many weather phenomena on Earth. One example is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as a chemical potential energy via photosynthesis, when carbon dioxide and water are converted into a combustible combination of carbohydrates, lipids, and oxygen. The release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be available more slowly for animal or human metabolism when these molecules are ingested, and catabolism is triggered by enzyme action.
Through all of these transformation chains, the potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in several different ways for long periods between releases, as more active energy. All of these events involve the conversion of one kind of energy into others, including heat.
Examples
Examples of sets of energy conversions in machines
A coal-fired power plant involves these energy transformations:
Chemical energy in the coal is converted into thermal energy in the exhaust gases of combustion
Thermal energy of the exhaust gases converted into thermal energy of steam through heat exchange
Kinetic energy of steam converted to mechanical energy in the turbine
Mechanical energy of the turbine is converted to electrical energy by the generator, which is the ultimate output
In such a system, the first and fourth steps are highly efficient, but the second and third steps are less efficient. The most efficient gas-fired electrical power stations can achieve 50% conversion efficiency. Oil- and coal-fired stations are less efficient.
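As a rough illustration of why the chain as a whole is much less efficient than its best steps, the following Python sketch multiplies per-step efficiencies; the numbers are made-up placeholders, not data for any actual plant.

```python
# Illustrative sketch (placeholder efficiencies, not plant data): the overall
# efficiency of a multi-step conversion chain is the product of the per-step
# efficiencies, so a single poor step dominates the result.
from math import prod

steps = {
    "chemical -> thermal (combustion)":      0.90,
    "thermal -> steam (heat exchange)":      0.85,
    "steam -> mechanical (turbine)":         0.45,
    "mechanical -> electrical (generator)":  0.98,
}

overall = prod(steps.values())
for name, eta in steps.items():
    print(f"{name}: {eta:.0%}")
print(f"overall: {overall:.1%}")  # roughly a third with these placeholder numbers
```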
In a conventional automobile, the following energy transformations occur:
Chemical energy in the fuel is converted into kinetic energy of expanding gas via combustion
Kinetic energy of expanding gas converted to the linear piston movement
Linear piston movement converted to rotary crankshaft movement
Rotary crankshaft movement passed into transmission assembly
Rotary movement passed out of transmission assembly
Rotary movement passed through a differential
Rotary movement passed out of differential to drive wheels
Rotary movement of drive wheels converted to linear motion of the vehicle
Other energy conversions
There are many different machines and transducers that convert one energy form into another. A short list of examples follows:
ATP hydrolysis (chemical energy in adenosine triphosphate → mechanical energy)
Battery (electricity) (chemical energy → electrical energy)
Electric generator (kinetic energy or mechanical work → electrical energy)
Electric heater (electric energy → heat)
Fire (chemical energy → heat and light)
Friction (kinetic energy → heat)
Fuel cell (chemical energy → electrical energy)
Geothermal power (heat→ electrical energy)
Heat engines, such as the internal combustion engine used in cars, or the steam engine (heat → mechanical energy)
Hydroelectric dam (gravitational potential energy → electrical energy)
Electric lamp (electrical energy → heat and light)
Microphone (sound → electrical energy)
Ocean thermal power (heat → electrical energy)
Photosynthesis (electromagnetic radiation → chemical energy)
Piezoelectrics (strain → electrical energy)
Thermoelectric (heat → electrical energy)
Wave power (mechanical energy → electrical energy)
Windmill (wind energy → electrical energy or mechanical energy)
See also
Chaos theory
Conservation law
Conservation of energy
Conservation of mass
Energy accounting
Energy quality
Groundwater energy balance
Laws of thermodynamics
Noether's theorem
Ocean thermal energy conversion
Thermodynamic equilibrium
Thermoeconomics
Uncertainty principle
References
Further reading
Energy Transfer and Transformation | Core knowledge science
Energy (physics)
Bernoulli's principle
Bernoulli's principle is a key concept in fluid dynamics that relates pressure, speed and height. Bernoulli's principle states that an increase in the speed of a parcel of fluid occurs simultaneously with a decrease in either the pressure or the height above a datum. The principle is named after the Swiss mathematician and physicist Daniel Bernoulli, who published it in his book Hydrodynamica in 1738. Although Bernoulli deduced that pressure decreases when the flow speed increases, it was Leonhard Euler in 1752 who derived Bernoulli's equation in its usual form.
Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of energy in a fluid is the same at all points that are free of viscous forces. This requires that the sum of kinetic energy, potential energy and internal energy remains constant. Thus an increase in the speed of the fluid—implying an increase in its kinetic energy—occurs with a simultaneous decrease in (the sum of) its potential energy (including the static pressure) and internal energy. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same because in a reservoir the energy per unit volume (the sum of pressure and gravitational potential $\rho g h$) is the same everywhere.
Bernoulli's principle can also be derived directly from Isaac Newton's second Law of Motion. If a small volume of fluid is flowing horizontally from a region of high pressure to a region of low pressure, then there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline.
Fluid particles are subject only to pressure and their own weight. If a fluid is flowing horizontally and along a section of a streamline, where the speed increases it can only be because the fluid on that section has moved from a region of higher pressure to a region of lower pressure; and if its speed decreases, it can only be because it has moved from a region of lower pressure to a region of higher pressure. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest.
Bernoulli's principle is only applicable for isentropic flows: when the effects of irreversible processes (like turbulence) and non-adiabatic processes (e.g. thermal radiation) are small and can be neglected. However, the principle can be applied to various types of flow within these bounds, resulting in various forms of Bernoulli's equation. The simple form of Bernoulli's equation is valid for incompressible flows (e.g. most liquid flows and gases moving at low Mach number). More advanced forms may be applied to compressible flows at higher Mach numbers.
Incompressible flow equation
In most flows of liquids, and of gases at low Mach number, the density of a fluid parcel can be considered to be constant, regardless of pressure variations in the flow. Therefore, the fluid can be considered to be incompressible, and these flows are called incompressible flows. Bernoulli performed his experiments on liquids, so his equation in its original form is valid only for incompressible flow.
A common form of Bernoulli's equation is:

$$\frac{v^2}{2} + gz + \frac{p}{\rho} = \text{constant}$$

where:
$v$ is the fluid flow speed at a point,
$g$ is the acceleration due to gravity,
$z$ is the elevation of the point above a reference plane, with the positive $z$-direction pointing upward—so in the direction opposite to the gravitational acceleration,
$p$ is the pressure at the chosen point, and
$\rho$ is the density of the fluid at all points in the fluid.
Bernoulli's equation and the Bernoulli constant are applicable throughout any region of flow where the energy per unit mass is uniform. Because the energy per unit mass of liquid in a well-mixed reservoir is uniform throughout, Bernoulli's equation can be used to analyze the fluid flow everywhere in that reservoir (including pipes or flow fields that the reservoir feeds) except where viscous forces dominate and erode the energy per unit mass.
The following assumptions must be met for this Bernoulli equation to apply:
the flow must be steady, that is, the flow parameters (velocity, density, etc.) at any point cannot change with time,
the flow must be incompressible—even though pressure varies, the density must remain constant along a streamline;
friction by viscous forces must be negligible.
For conservative force fields (not limited to the gravitational field), Bernoulli's equation can be generalized as:

$$\frac{v^2}{2} + \Psi + \frac{p}{\rho} = \text{constant}$$

where $\Psi$ is the force potential at the point considered. For example, for the Earth's gravity $\Psi = gz$.
By multiplying with the fluid density $\rho$, the equation can be rewritten as:

$$\tfrac{1}{2}\rho v^2 + \rho g z + p = \text{constant}$$

or:

$$q + \rho g h = p_0 + \rho g z = \text{constant}$$

where
$q = \tfrac{1}{2}\rho v^2$ is dynamic pressure,
$h = z + \frac{p}{\rho g}$ is the piezometric head or hydraulic head (the sum of the elevation $z$ and the pressure head) and
$p_0 = p + q$ is the stagnation pressure (the sum of the static pressure $p$ and dynamic pressure $q$).
The constant in the Bernoulli equation can be normalized. A common approach is in terms of total head or energy head $H$:

$$H = z + \frac{p}{\rho g} + \frac{v^2}{2g} = h + \frac{v^2}{2g}$$
The above equations suggest there is a flow speed at which pressure is zero, and at even higher speeds the pressure is negative. Most often, gases and liquids are not capable of negative absolute pressure, or even zero pressure, so clearly Bernoulli's equation ceases to be valid before zero pressure is reached. In liquids—when the pressure becomes too low—cavitation occurs. The above equations use a linear relationship between flow speed squared and pressure. At higher flow speeds in gases, or for sound waves in liquid, the changes in mass density become significant so that the assumption of constant density is invalid.
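The following short Python sketch (an illustration added here, assuming water at sea-level atmospheric pressure and the incompressible form above) estimates the speed at which the equation would predict zero absolute pressure.

```python
# Sketch: speed at which the incompressible Bernoulli equation predicts zero
# absolute pressure for horizontal flow starting from stagnation pressure p0.
# The fluid and pressure values are illustrative assumptions, not from the text.
from math import sqrt

p0 = 101_325.0   # stagnation (total) pressure, Pa (standard atmosphere)
rho = 1000.0     # water density, kg/m^3

# p = p0 - 0.5 * rho * v**2 = 0  ->  v = sqrt(2 * p0 / rho)
v_zero_pressure = sqrt(2 * p0 / rho)
print(f"predicted zero-pressure speed: {v_zero_pressure:.1f} m/s")  # ~14 m/s
# In practice cavitation sets in before this, so the equation stops applying earlier.
```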
Simplified form
In many applications of Bernoulli's equation, the change in the $\rho g z$ term is so small compared with the other terms that it can be ignored. For example, in the case of aircraft in flight, the change in height $z$ is so small the $\rho g z$ term can be omitted. This allows the above equation to be presented in the following simplified form:

$$p + q = p_0$$

where $p_0$ is called total pressure, and $q$ is dynamic pressure. Many authors refer to the pressure $p$ as static pressure to distinguish it from total pressure $p_0$ and dynamic pressure $q$. In Aerodynamics, L.J. Clancy writes: "To distinguish it from the total and dynamic pressures, the actual pressure of the fluid, which is associated not with its motion but with its state, is often referred to as the static pressure, but where the term pressure alone is used it refers to this static pressure."
The simplified form of Bernoulli's equation can be summarized in the following memorable word equation:

static pressure + dynamic pressure = total pressure

Every point in a steadily flowing fluid, regardless of the fluid speed at that point, has its own unique static pressure $p$ and dynamic pressure $q$. Their sum $p + q$ is defined to be the total pressure $p_0$. The significance of Bernoulli's principle can now be summarized as "total pressure is constant in any region free of viscous forces". If the fluid flow is brought to rest at some point, this point is called a stagnation point, and at this point the static pressure is equal to the stagnation pressure.
If the fluid flow is irrotational, the total pressure is uniform and Bernoulli's principle can be summarized as "total pressure is constant everywhere in the fluid flow". It is reasonable to assume that irrotational flow exists in any situation where a large body of fluid is flowing past a solid body. Examples are aircraft in flight and ships moving in open bodies of water. However, Bernoulli's principle importantly does not apply in the boundary layer such as in flow through long pipes.
Unsteady potential flow
The Bernoulli equation for unsteady potential flow is used in the theory of ocean surface waves and acoustics. For an irrotational flow, the flow velocity can be described as the gradient $\nabla\varphi$ of a velocity potential $\varphi$. In that case, and for a constant density $\rho$, the momentum equations of the Euler equations can be integrated to:

$$\frac{\partial \varphi}{\partial t} + \frac{1}{2}v^2 + \frac{p}{\rho} + gz = f(t)$$

which is a Bernoulli equation valid also for unsteady—or time dependent—flows. Here $\partial\varphi/\partial t$ denotes the partial derivative of the velocity potential $\varphi$ with respect to time $t$, and $v = |\nabla\varphi|$ is the flow speed. The function $f(t)$ depends only on time and not on position in the fluid. As a result, the Bernoulli equation at some moment applies in the whole fluid domain. This is also true for the special case of a steady irrotational flow, in which case $f$ and $\partial\varphi/\partial t$ are constants so the equation can be applied in every point of the fluid domain. Further $f(t)$ can be made equal to zero by incorporating it into the velocity potential using the transformation:

$$\Phi = \varphi - \int_{t_0}^{t} f(\tau)\,d\tau$$

resulting in:

$$\frac{\partial \Phi}{\partial t} + \frac{1}{2}v^2 + \frac{p}{\rho} + gz = 0$$

Note that the relation of the potential to the flow velocity is unaffected by this transformation: $\nabla\Phi = \nabla\varphi$.
The Bernoulli equation for unsteady potential flow also appears to play a central role in Luke's variational principle, a variational description of free-surface flows using the Lagrangian mechanics.
Compressible flow equation
Bernoulli developed his principle from observations on liquids, and Bernoulli's equation is valid for ideal fluids: those that are incompressible, irrotational, inviscid, and subjected to conservative forces. It is sometimes valid for the flow of gases: provided that there is no transfer of kinetic or potential energy from the gas flow to the compression or expansion of the gas. If both the gas pressure and volume change simultaneously, then work will be done on or by the gas. In this case, Bernoulli's equation—in its incompressible flow form—cannot be assumed to be valid. However, if the gas process is entirely isobaric, or isochoric, then no work is done on or by the gas (so the simple energy balance is not upset). According to the gas law, an isobaric or isochoric process is ordinarily the only way to ensure constant density in a gas. Also the gas density will be proportional to the ratio of pressure and absolute temperature; however, this ratio will vary upon compression or expansion, no matter what non-zero quantity of heat is added or removed. The only exception is if the net heat transfer is zero, as in a complete thermodynamic cycle or in an individual isentropic (frictionless adiabatic) process, and even then this reversible process must be reversed, to restore the gas to the original pressure and specific volume, and thus density. Only then is the original, unmodified Bernoulli equation applicable. In this case the equation can be used if the flow speed of the gas is sufficiently below the speed of sound, such that the variation in density of the gas (due to this effect) along each streamline can be ignored. Adiabatic flow at less than Mach 0.3 is generally considered to be slow enough.
It is possible to use the fundamental principles of physics to develop similar equations applicable to compressible fluids. There are numerous equations, each tailored for a particular application, but all are analogous to Bernoulli's equation and all rely on nothing more than the fundamental principles of physics such as Newton's laws of motion or the first law of thermodynamics.
Compressible flow in fluid dynamics
For a compressible fluid, with a barotropic equation of state, and under the action of conservative forces,

$$\frac{v^2}{2} + \int_{p_1}^{p} \frac{d\tilde{p}}{\rho(\tilde{p})} + \Psi = \text{constant (along a streamline)}$$

where:
$p$ is the pressure
$\rho(p)$ is the density and indicates that it is a function of pressure
$v$ is the flow speed
$\Psi$ is the potential associated with the conservative force field, often the gravitational potential
In engineering situations, elevations are generally small compared to the size of the Earth, and the time scales of fluid flow are small enough to consider the equation of state as adiabatic. In this case, the above equation for an ideal gas becomes:

$$\frac{v^2}{2} + gz + \left(\frac{\gamma}{\gamma - 1}\right)\frac{p}{\rho} = \text{constant (along a streamline)}$$

where, in addition to the terms listed above:
$\gamma$ is the ratio of the specific heats of the fluid
$g$ is the acceleration due to gravity
$z$ is the elevation of the point above a reference plane
In many applications of compressible flow, changes in elevation are negligible compared to the other terms, so the $gz$ term can be omitted. A very useful form of the equation is then:

$$\frac{v^2}{2} + \left(\frac{\gamma}{\gamma - 1}\right)\frac{p}{\rho} = \left(\frac{\gamma}{\gamma - 1}\right)\frac{p_0}{\rho_0}$$

where:
$p_0$ is the total pressure
$\rho_0$ is the total density
Compressible flow in thermodynamics
The most general form of the equation, suitable for use in thermodynamics in case of (quasi) steady flow, is:

$$\frac{v^2}{2} + \Psi + w = \text{constant}$$

Here $w$ is the enthalpy per unit mass (also known as specific enthalpy), which is also often written as $h$ (not to be confused with "head" or "height").
Note that

$$w = \epsilon + \frac{p}{\rho}$$

where $\epsilon$ is the thermodynamic energy per unit mass, also known as the specific internal energy. So, for constant internal energy the equation reduces to the incompressible-flow form.
The constant on the right-hand side is often called the Bernoulli constant and denoted $b$. For steady inviscid adiabatic flow with no additional sources or sinks of energy, $b$ is constant along any given streamline. More generally, when $b$ may vary along streamlines, it still proves a useful parameter, related to the "head" of the fluid (see below).
When the change in $\Psi$ can be ignored, a very useful form of this equation is:

$$\frac{v^2}{2} + w = w_0$$

where $w_0$ is total enthalpy. For a calorically perfect gas such as an ideal gas, the enthalpy is directly proportional to the temperature, and this leads to the concept of the total (or stagnation) temperature.
When shock waves are present, in a reference frame in which the shock is stationary and the flow is steady, many of the parameters in the Bernoulli equation suffer abrupt changes in passing through the shock. The Bernoulli parameter remains unaffected. An exception to this rule is radiative shocks, which violate the assumptions leading to the Bernoulli equation, namely the lack of additional sinks or sources of energy.
Unsteady potential flow
For a compressible fluid, with a barotropic equation of state, the unsteady momentum conservation equation reads

$$\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v} = -\nabla\Psi - \nabla\!\int_{p_1}^{p}\frac{d\tilde{p}}{\rho(\tilde{p})}$$

With the irrotational assumption, namely, the flow velocity can be described as the gradient $\nabla\varphi$ of a velocity potential $\varphi$. The unsteady momentum conservation equation becomes

$$\nabla\!\left(\frac{\partial \varphi}{\partial t} + \frac{1}{2}|\nabla\varphi|^2 + \Psi + \int_{p_1}^{p}\frac{d\tilde{p}}{\rho(\tilde{p})}\right) = 0$$

which leads to

$$\frac{\partial \varphi}{\partial t} + \frac{1}{2}|\nabla\varphi|^2 + \Psi + \int_{p_1}^{p}\frac{d\tilde{p}}{\rho(\tilde{p})} = f(t)$$

In this case, the above equation for isentropic flow becomes:

$$\frac{\partial \varphi}{\partial t} + \frac{1}{2}|\nabla\varphi|^2 + \Psi + \frac{\gamma}{\gamma - 1}\frac{p}{\rho} = f(t)$$
Derivations
Applications
In modern everyday life there are many observations that can be successfully explained by application of Bernoulli's principle, even though no real fluid is entirely inviscid, and a small viscosity often has a large effect on the flow.
Bernoulli's principle can be used to calculate the lift force on an airfoil, if the behaviour of the fluid flow in the vicinity of the foil is known. For example, if the air flowing past the top surface of an aircraft wing is moving faster than the air flowing past the bottom surface, then Bernoulli's principle implies that the pressure on the surfaces of the wing will be lower above than below. This pressure difference results in an upwards lifting force. Whenever the distribution of speed past the top and bottom surfaces of a wing is known, the lift forces can be calculated (to a good approximation) using Bernoulli's equations, which were established by Bernoulli over a century before the first man-made wings were used for the purpose of flight.
The carburetor used in many reciprocating engines contains a venturi to create a region of low pressure to draw fuel into the carburetor and mix it thoroughly with the incoming air. The low pressure in the throat of a venturi can be explained by Bernoulli's principle; in the narrow throat, the air is moving at its fastest speed and therefore it is at its lowest pressure.
An injector on a steam locomotive or a static boiler.
The pitot tube and static port on an aircraft are used to determine the airspeed of the aircraft. These two devices are connected to the airspeed indicator, which determines the dynamic pressure of the airflow past the aircraft. Bernoulli's principle is used to calibrate the airspeed indicator so that it displays the indicated airspeed appropriate to the dynamic pressure.
A De Laval nozzle utilizes Bernoulli's principle to create a force by turning pressure energy generated by the combustion of propellants into velocity. This then generates thrust by way of Newton's third law of motion.
The flow speed of a fluid can be measured using a device such as a Venturi meter or an orifice plate, which can be placed into a pipeline to reduce the diameter of the flow. For a horizontal device, the continuity equation shows that for an incompressible fluid, the reduction in diameter will cause an increase in the fluid flow speed. Subsequently, Bernoulli's principle then shows that there must be a decrease in the pressure in the reduced diameter region. This phenomenon is known as the Venturi effect.
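A hedged sketch of the Venturi-meter calculation implied above, combining the continuity equation with the incompressible Bernoulli equation; the fluid, pipe dimensions, and pressure drop are assumed example values and losses are neglected.

```python
# Sketch of the Venturi relation: continuity (A1*v1 = A2*v2) combined with
# Bernoulli, delta_p = 0.5 * rho * (v2**2 - v1**2). Horizontal, lossless,
# incompressible flow assumed; numbers are illustrative, not from the text.
from math import sqrt

def venturi_throat_speed(delta_p: float, rho: float,
                         d_pipe: float, d_throat: float) -> float:
    """Throat speed (m/s) from the measured pressure drop delta_p (Pa)."""
    area_ratio = (d_throat / d_pipe) ** 2      # A2 / A1 for circular sections
    return sqrt(2 * delta_p / (rho * (1 - area_ratio ** 2)))

# Example: water, 5 kPa drop, 50 mm pipe necking to a 25 mm throat
v_throat = venturi_throat_speed(5_000, 1000.0, 0.050, 0.025)
print(f"throat speed ~ {v_throat:.2f} m/s")
```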
The maximum possible drain rate for a tank with a hole or tap at the base can be calculated directly from Bernoulli's equation and is found to be proportional to the square root of the height of the fluid in the tank. This is Torricelli's law, which is compatible with Bernoulli's principle. Increased viscosity lowers this drain rate; this is reflected in the discharge coefficient, which is a function of the Reynolds number and the shape of the orifice.
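A minimal sketch of Torricelli's law as just stated, showing the square-root dependence of the ideal efflux speed on fluid height; the heights are arbitrary example values and the discharge coefficient is ignored.

```python
# Sketch of Torricelli's law: ideal efflux speed v = sqrt(2 * g * h).
# Heights are illustrative; viscous losses (discharge coefficient) are ignored.
from math import sqrt

def efflux_speed(height_m: float, g: float = 9.81) -> float:
    return sqrt(2 * g * height_m)

for h in (0.25, 1.0, 4.0):
    print(f"h = {h:4.2f} m -> v = {efflux_speed(h):.2f} m/s")
# Quadrupling the fluid height only doubles the ideal efflux speed.
```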
The Bernoulli grip relies on this principle to create a non-contact adhesive force between a surface and the gripper.
During a cricket match, bowlers continually polish one side of the ball. After some time, one side is quite rough and the other is still smooth. Hence, when the ball is bowled and passes through air, the speed on one side of the ball is faster than on the other, and this results in a pressure difference between the sides; this leads to the ball rotating ("swinging") while travelling through the air, giving advantage to the bowlers.
Misconceptions
Airfoil lift
One of the most common erroneous explanations of aerodynamic lift asserts that the air must traverse the upper and lower surfaces of a wing in the same amount of time, implying that since the upper surface presents a longer path the air must be moving over the top of the wing faster than over the bottom. Bernoulli's principle is then cited to conclude that the pressure on top of the wing must be lower than on the bottom.
Equal transit time applies to the flow around a body generating no lift, but there is no physical principle that requires equal transit time in cases of bodies generating lift. In fact, theory predicts – and experiments confirm – that the air traverses the top surface of a body experiencing lift in a shorter time than it traverses the bottom surface; the explanation based on equal transit time is false. While the equal-time explanation is false, it is not the Bernoulli principle that is false, because this principle is well established; Bernoulli's equation is used correctly in common mathematical treatments of aerodynamic lift.
Common classroom demonstrations
There are several common classroom demonstrations that are sometimes incorrectly explained using Bernoulli's principle. One involves holding a piece of paper horizontally so that it droops downward and then blowing over the top of it. As the demonstrator blows over the paper, the paper rises. It is then asserted that this is because "faster moving air has lower pressure".
One problem with this explanation can be seen by blowing along the bottom of the paper: if the deflection was caused by faster moving air, then the paper should deflect downward; but the paper deflects upward regardless of whether the faster moving air is on the top or the bottom. Another problem is that when the air leaves the demonstrator's mouth it has the same pressure as the surrounding air; the air does not have lower pressure just because it is moving; in the demonstration, the static pressure of the air leaving the demonstrator's mouth is equal to the pressure of the surrounding air. A third problem is that it is false to make a connection between the flow on the two sides of the paper using Bernoulli's equation since the air above and below are different flow fields and Bernoulli's principle only applies within a flow field.
As the wording of the principle can change its implications, stating the principle correctly is important. What Bernoulli's principle actually says is that within a flow of constant energy, when fluid flows through a region of lower pressure it speeds up and vice versa. Thus, Bernoulli's principle concerns itself with changes in speed and changes in pressure within a flow field. It cannot be used to compare different flow fields.
A correct explanation of why the paper rises would observe that the plume follows the curve of the paper and that a curved streamline will develop a pressure gradient perpendicular to the direction of flow, with the lower pressure on the inside of the curve. Bernoulli's principle predicts that the decrease in pressure is associated with an increase in speed; in other words, as the air passes over the paper, it speeds up and moves faster than it was moving when it left the demonstrator's mouth. But this is not apparent from the demonstration.
Other common classroom demonstrations, such as blowing between two suspended spheres, inflating a large bag, or suspending a ball in an airstream are sometimes explained in a similarly misleading manner by saying "faster moving air has lower pressure".
See also
Torricelli's law
Coandă effect
Euler equations – for the flow of an inviscid fluid
Hydraulics – applied fluid mechanics for liquids
Navier–Stokes equations – for the flow of a viscous fluid
Teapot effect
Terminology in fluid dynamics
Notes
References
External links
The Flow of Dry Water - The Feynman Lectures on Physics
Science 101 Q: Is It Really Caused by the Bernoulli Effect?
Bernoulli equation calculator
Millersville University – Applications of Euler's equation
NASA – Beginner's guide to aerodynamics
Misinterpretations of Bernoulli's equation – Weltner and Ingelman-Sundberg
Fluid dynamics
Eponymous laws of physics
Equations of fluid dynamics
1738 in science
Kinetic energy
In physics, the kinetic energy of an object is the form of energy that it possesses due to its motion.
In classical mechanics, the kinetic energy of a non-rotating object of mass m traveling at a speed v is $\frac{1}{2}mv^2$.
The kinetic energy of an object is equal to the work, force (F) times displacement (s), needed to achieve its stated velocity. Having gained this energy during its acceleration, the mass maintains this kinetic energy unless its speed changes. The same amount of work is done by the object when decelerating from its current speed to a state of rest.
The SI unit of kinetic energy is the joule, while the English unit of kinetic energy is the foot-pound.
In relativistic mechanics, $\frac{1}{2}mv^2$ is a good approximation of kinetic energy only when v is much less than the speed of light.
History and etymology
The adjective kinetic has its roots in the Greek word κίνησις kinesis, meaning "motion". The dichotomy between kinetic energy and potential energy can be traced back to Aristotle's concepts of actuality and potentiality.
The principle in classical mechanics that E ∝ mv2 was first developed by Gottfried Leibniz and Johann Bernoulli, who described kinetic energy as the living force, vis viva. Willem 's Gravesande of the Netherlands provided experimental evidence of this relationship in 1722. By dropping weights from different heights into a block of clay, Willem 's Gravesande determined that their penetration depth was proportional to the square of their impact speed. Émilie du Châtelet recognized the implications of the experiment and published an explanation.
The terms kinetic energy and work in their present scientific meanings date back to the mid-19th century. Early understandings of these ideas can be attributed to Gaspard-Gustave Coriolis, who in 1829 published the paper titled Du Calcul de l'Effet des Machines outlining the mathematics of kinetic energy. William Thomson, later Lord Kelvin, is given the credit for coining the term "kinetic energy" c. 1849–1851. Rankine, who had introduced the term "potential energy" in 1853, and the phrase "actual energy" to complement it, later cites William Thomson and Peter Tait as substituting the word "kinetic" for "actual".
Overview
Energy occurs in many forms, including chemical energy, thermal energy, electromagnetic radiation, gravitational energy, electric energy, elastic energy, nuclear energy, and rest energy. These can be categorized in two main classes: potential energy and kinetic energy. Kinetic energy is the movement energy of an object. Kinetic energy can be transferred between objects and transformed into other kinds of energy.
Kinetic energy may be best understood by examples that demonstrate how it is transformed to and from other forms of energy. For example, a cyclist uses chemical energy provided by food to accelerate a bicycle to a chosen speed. On a level surface, this speed can be maintained without further work, except to overcome air resistance and friction. The chemical energy has been converted into kinetic energy, the energy of motion, but the process is not completely efficient and produces heat within the cyclist.
The kinetic energy in the moving cyclist and the bicycle can be converted to other forms. For example, the cyclist could encounter a hill just high enough to coast up, so that the bicycle comes to a complete halt at the top. The kinetic energy has now largely been converted to gravitational potential energy that can be released by freewheeling down the other side of the hill. Since the bicycle lost some of its energy to friction, it never regains all of its speed without additional pedaling. The energy is not destroyed; it has only been converted to another form by friction. Alternatively, the cyclist could connect a dynamo to one of the wheels and generate some electrical energy on the descent. The bicycle would be traveling slower at the bottom of the hill than without the generator because some of the energy has been diverted into electrical energy. Another possibility would be for the cyclist to apply the brakes, in which case the kinetic energy would be dissipated through friction as heat.
Like any physical quantity that is a function of velocity, the kinetic energy of an object depends on the relationship between the object and the observer's frame of reference. Thus, the kinetic energy of an object is not invariant.
Spacecraft use chemical energy to launch and gain considerable kinetic energy to reach orbital velocity. In an entirely circular orbit, this kinetic energy remains constant because there is almost no friction in near-earth space. However, it becomes apparent at re-entry when some of the kinetic energy is converted to heat. If the orbit is elliptical or hyperbolic, then throughout the orbit kinetic and potential energy are exchanged; kinetic energy is greatest and potential energy lowest at closest approach to the earth or other massive body, while potential energy is greatest and kinetic energy the lowest at maximum distance. Disregarding loss or gain however, the sum of the kinetic and potential energy remains constant.
Kinetic energy can be passed from one object to another. In the game of billiards, the player imposes kinetic energy on the cue ball by striking it with the cue stick. If the cue ball collides with another ball, it slows down dramatically, and the ball it hit accelerates as the kinetic energy is passed on to it. Collisions in billiards are effectively elastic collisions, in which kinetic energy is preserved. In inelastic collisions, kinetic energy is dissipated in various forms of energy, such as heat, sound and binding energy (breaking bound structures).
Flywheels have been developed as a method of energy storage. This illustrates that kinetic energy is also stored in rotational motion.
Several mathematical descriptions of kinetic energy exist that describe it in the appropriate physical situation. For objects and processes in common human experience, the formula $\frac{1}{2}mv^2$ given by classical mechanics is suitable. However, if the speed of the object is comparable to the speed of light, relativistic effects become significant and the relativistic formula is used. If the object is on the atomic or sub-atomic scale, quantum mechanical effects are significant, and a quantum mechanical model must be employed.
Kinetic energy for non-relativistic velocity
Treatments of kinetic energy depend upon the relative velocity of objects compared to the fixed speed of light. Speeds experienced directly by humans are non-relativistic; higher speeds require the theory of relativity.
Kinetic energy of rigid bodies
In classical mechanics, the kinetic energy of a point object (an object so small that its mass can be assumed to exist at one point), or a non-rotating rigid body depends on the mass of the body as well as its speed. The kinetic energy is equal to 1/2 the product of the mass and the square of the speed. In formula form:

$$E_k = \frac{1}{2}mv^2$$

where $m$ is the mass and $v$ is the speed (magnitude of the velocity) of the body. In SI units, mass is measured in kilograms, speed in metres per second, and the resulting kinetic energy is in joules.
For example, one would calculate the kinetic energy of an 80 kg mass (about 180 lbs) traveling at 18 metres per second (about 40 mph, or 65 km/h) as

$$E_k = \frac{1}{2}\cdot 80\,\text{kg}\cdot(18\,\text{m/s})^2 = 12{,}960\,\text{J} \approx 13\,\text{kJ}$$

When a person throws a ball, the person does work on it to give it speed as it leaves the hand. The moving ball can then hit something and push it, doing work on what it hits. The kinetic energy of a moving object is equal to the work required to bring it from rest to that speed, or the work the object can do while being brought to rest: net force × displacement = kinetic energy, i.e.,

$$Fs = \frac{1}{2}mv^2$$
Since the kinetic energy increases with the square of the speed, an object doubling its speed has four times as much kinetic energy. For example, a car traveling twice as fast as another requires four times as much distance to stop, assuming a constant braking force. As a consequence of this quadrupling, it takes four times the work to double the speed.
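A small Python sketch of this scaling (illustrative only; the mass and braking force are assumed values, not from the text):

```python
# Sketch illustrating the scaling described above: doubling the speed quadruples
# the kinetic energy and, for a constant braking force, the stopping distance.
# The mass and braking force are assumed example values.
def kinetic_energy(m_kg: float, v_ms: float) -> float:
    return 0.5 * m_kg * v_ms ** 2

mass = 1500.0            # assumed car mass, kg
braking_force = 7000.0   # assumed constant braking force, N

for v in (15.0, 30.0):   # 15 m/s vs 30 m/s
    ek = kinetic_energy(mass, v)
    stopping_distance = ek / braking_force  # work-energy theorem: F * d = Ek
    print(f"v = {v:4.1f} m/s  Ek = {ek / 1000:6.1f} kJ  stop ~ {stopping_distance:5.1f} m")
```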
The kinetic energy of an object is related to its momentum by the equation:

$$E_k = \frac{p^2}{2m}$$

where:
$p$ is momentum
$m$ is mass of the body
For the translational kinetic energy, that is the kinetic energy associated with rectilinear motion, of a rigid body with constant mass $m$, whose center of mass is moving in a straight line with speed $v$, as seen above is equal to

$$E_t = \frac{1}{2}mv^2$$

where:
$m$ is the mass of the body
$v$ is the speed of the center of mass of the body.
The kinetic energy of any entity depends on the reference frame in which it is measured. However, the total energy of an isolated system, i.e. one in which energy can neither enter nor leave, does not change over time in the reference frame in which it is measured. Thus, the chemical energy converted to kinetic energy by a rocket engine is divided differently between the rocket ship and its exhaust stream depending upon the chosen reference frame. This is called the Oberth effect. But the total energy of the system, including kinetic energy, fuel chemical energy, heat, etc., is conserved over time, regardless of the choice of reference frame. Different observers moving with different reference frames would however disagree on the value of this conserved energy.
The kinetic energy of such systems depends on the choice of reference frame: the reference frame that gives the minimum value of that energy is the center of momentum frame, i.e. the reference frame in which the total momentum of the system is zero. This minimum kinetic energy contributes to the invariant mass of the system as a whole.
Derivation
Without vector calculus
The work W done by a force F on an object over a distance s parallel to F equals

$$W = Fs.$$

Using Newton's Second Law

$$F = ma$$

with m the mass and a the acceleration of the object and

$$s = \frac{at^2}{2}$$

the distance traveled by the accelerated object in time t, we find with $v = at$ for the velocity v of the object

$$W = Fs = ma\,\frac{at^2}{2} = \frac{m(at)^2}{2} = \frac{mv^2}{2}.$$
With vector calculus
The work done in accelerating a particle with mass m during the infinitesimal time interval dt is given by the dot product of force F and the infinitesimal displacement dx

$$\mathbf{F}\cdot d\mathbf{x} = \mathbf{F}\cdot\mathbf{v}\,dt = \frac{d\mathbf{p}}{dt}\cdot\mathbf{v}\,dt = \mathbf{v}\cdot d\mathbf{p} = \mathbf{v}\cdot d(m\mathbf{v}),$$

where we have assumed the relationship p = m v and the validity of Newton's Second Law. (However, also see the special relativistic derivation below.)
Applying the product rule we see that:

$$\frac{d}{dt}(\mathbf{v}\cdot\mathbf{v}) = \frac{d\mathbf{v}}{dt}\cdot\mathbf{v} + \mathbf{v}\cdot\frac{d\mathbf{v}}{dt} = 2\,\mathbf{v}\cdot\frac{d\mathbf{v}}{dt}.$$

Therefore, (assuming constant mass so that dm = 0), we have,

$$\mathbf{v}\cdot d(m\mathbf{v}) = \frac{m}{2}\,d(\mathbf{v}\cdot\mathbf{v}) = \frac{m}{2}\,d(v^2) = d\!\left(\frac{mv^2}{2}\right).$$

Since this is a total differential (that is, it only depends on the final state, not how the particle got there), we can integrate it and call the result kinetic energy:

$$E_k = \int \mathbf{F}\cdot d\mathbf{x} = \int \mathbf{v}\cdot d(m\mathbf{v}) = \frac{mv^2}{2}.$$

This equation states that the kinetic energy (Ek) is equal to the integral of the dot product of the momentum (p) of a body and the infinitesimal change of the velocity (v) of the body. It is assumed that the body starts with no kinetic energy when it is at rest (motionless).
Rotating bodies
If a rigid body Q is rotating about any line through the center of mass then it has rotational kinetic energy which is simply the sum of the kinetic energies of its moving parts, and is thus given by:

$$E_r = \int_Q \frac{v^2\,dm}{2} = \int_Q \frac{(r\omega)^2\,dm}{2} = \frac{\omega^2}{2}\int_Q r^2\,dm = \frac{1}{2}I\omega^2$$

where:
ω is the body's angular velocity
r is the distance of any mass dm from that line
$I$ is the body's moment of inertia, equal to $\int_Q r^2\,dm$.
(In this equation the moment of inertia must be taken about an axis through the center of mass and the rotation measured by ω must be around that axis; more general equations exist for systems where the object is subject to wobble due to its eccentric shape).
Kinetic energy of systems
A system of bodies may have internal kinetic energy due to the relative motion of the bodies in the system. For example, in the Solar System the planets and planetoids are orbiting the Sun. In a tank of gas, the molecules are moving in all directions. The kinetic energy of the system is the sum of the kinetic energies of the bodies it contains.
A macroscopic body that is stationary (i.e. a reference frame has been chosen to correspond to the body's center of momentum) may have various kinds of internal energy at the molecular or atomic level, which may be regarded as kinetic energy, due to molecular translation, rotation, and vibration, electron translation and spin, and nuclear spin. These all contribute to the body's mass, as provided by the special theory of relativity. When discussing movements of a macroscopic body, the kinetic energy referred to is usually that of the macroscopic movement only. However, all internal energies of all types contribute to a body's mass, inertia, and total energy.
Fluid dynamics
In fluid dynamics, the kinetic energy per unit volume at each point in an incompressible fluid flow field is called the dynamic pressure at that point.

$$E_k = \frac{1}{2}mv^2$$

Dividing by V, the unit of volume:

$$\frac{E_k}{V} = \frac{1}{2}\frac{m}{V}v^2, \qquad q = \frac{1}{2}\rho v^2$$

where $q$ is the dynamic pressure, and ρ is the density of the incompressible fluid.
Frame of reference
The speed, and thus the kinetic energy of a single object is frame-dependent (relative): it can take any non-negative value, by choosing a suitable inertial frame of reference. For example, a bullet passing an observer has kinetic energy in the reference frame of this observer. The same bullet is stationary to an observer moving with the same velocity as the bullet, and so has zero kinetic energy. By contrast, the total kinetic energy of a system of objects cannot be reduced to zero by a suitable choice of the inertial reference frame, unless all the objects have the same velocity. In any other case, the total kinetic energy has a non-zero minimum, as no inertial reference frame can be chosen in which all the objects are stationary. This minimum kinetic energy contributes to the system's invariant mass, which is independent of the reference frame.
The total kinetic energy of a system depends on the inertial frame of reference: it is the sum of the total kinetic energy in a center of momentum frame and the kinetic energy the total mass would have if it were concentrated in the center of mass.
This may be simply shown: let $\mathbf{V}$ be the relative velocity of the center of mass frame i in the frame k. Since

$$v^2 = (\mathbf{v}_i + \mathbf{V})^2 = v_i^2 + 2\,\mathbf{v}_i\cdot\mathbf{V} + V^2,$$

Then,

$$E_k = \int \frac{v^2}{2}\,dm = \int \frac{v_i^2}{2}\,dm + \mathbf{V}\cdot\int \mathbf{v}_i\,dm + \frac{V^2}{2}\int dm.$$

However, let $\int \frac{v_i^2}{2}\,dm = E_i$ the kinetic energy in the center of mass frame, $\int \mathbf{v}_i\,dm$ would be simply the total momentum that is by definition zero in the center of mass frame, and let the total mass: $\int dm = M$. Substituting, we get:

$$E_k = E_i + \frac{MV^2}{2}.$$
Thus the kinetic energy of a system is lowest with respect to center of momentum reference frames, i.e., frames of reference in which the center of mass is stationary (either the center of mass frame or any other center of momentum frame). In any different frame of reference, there is additional kinetic energy corresponding to the total mass moving at the speed of the center of mass. The kinetic energy of the system in the center of momentum frame is a quantity that is invariant (all observers see it to be the same).
Rotation in systems
It sometimes is convenient to split the total kinetic energy of a body into the sum of the body's center-of-mass translational kinetic energy and the energy of rotation around the center of mass (rotational energy):

$$E_k = E_t + E_r$$
where:
Ek is the total kinetic energy
Et is the translational kinetic energy
Er is the rotational energy or angular kinetic energy in the rest frame
Thus the kinetic energy of a tennis ball in flight is the kinetic energy due to its rotation, plus the kinetic energy due to its translation.
Relativistic kinetic energy
If a body's speed is a significant fraction of the speed of light, it is necessary to use relativistic mechanics to calculate its kinetic energy. In relativity, the total energy is given by the energy-momentum relation:

$$E^2 = (pc)^2 + \left(mc^2\right)^2.$$

Here we use the relativistic expression for linear momentum: $p = m\gamma v$, where $\gamma = 1/\sqrt{1 - v^2/c^2}$,
with $m$ being an object's (rest) mass, $v$ its speed, and c the speed of light in vacuum.
Then kinetic energy is the total relativistic energy minus the rest energy:

$$E_k = E - mc^2 = \sqrt{(pc)^2 + (mc^2)^2} - mc^2 = (\gamma - 1)mc^2.$$
At low speeds, the square root can be expanded and the rest energy drops out, giving the Newtonian kinetic energy.
Derivation
Start with the expression for linear momentum $\mathbf{p} = m\gamma\mathbf{v}$, where $\gamma = 1/\sqrt{1 - v^2/c^2}$.
Integrating by parts yields

$$E_k = \int \mathbf{v}\cdot d\mathbf{p} = \mathbf{v}\cdot\mathbf{p} - \int \mathbf{p}\cdot d\mathbf{v} = m\gamma v^2 - \int m\gamma\,\mathbf{v}\cdot d\mathbf{v}.$$

Since $\gamma = (1 - v^2/c^2)^{-1/2}$,

$$E_k = m\gamma v^2 + \frac{mc^2}{2}\int \gamma\,d\!\left(1 - \frac{v^2}{c^2}\right) = m\gamma v^2 + mc^2\sqrt{1 - \frac{v^2}{c^2}} - E_0,$$

where $E_0$ is a constant of integration for the indefinite integral.
Simplifying the expression we obtain

$$E_k = m\gamma\left(v^2 + c^2\left(1 - \frac{v^2}{c^2}\right)\right) - E_0 = m\gamma c^2 - E_0.$$

$E_0$ is found by observing that when $v = 0$, $\gamma = 1$ and $E_k = 0$, giving

$$E_0 = mc^2,$$

resulting in the formula

$$E_k = m\gamma c^2 - mc^2 = (\gamma - 1)mc^2.$$
This formula shows that the work expended accelerating an object from rest approaches infinity as the velocity approaches the speed of light. Thus it is impossible to accelerate an object across this boundary.
The mathematical by-product of this calculation is the mass–energy equivalence formula—the body at rest must have energy content
At a low speed (v ≪ c), the relativistic kinetic energy is approximated well by the classical kinetic energy. This is done by binomial approximation or by taking the first two terms of the Taylor expansion for the reciprocal square root:

$$E_k \approx mc^2\left(1 + \frac{1}{2}\frac{v^2}{c^2}\right) - mc^2 = \frac{1}{2}mv^2.$$
So, the total energy can be partitioned into the rest mass energy plus the non-relativistic kinetic energy at low speeds.
When objects move at a speed much slower than light (e.g. in everyday phenomena on Earth), the first two terms of the series predominate. The next term in the Taylor series approximation

$$\frac{3}{8}m\frac{v^4}{c^2}$$

is small for low speeds. For example, for a speed of 10 km/s the correction to the non-relativistic kinetic energy is 0.0417 J/kg (on a non-relativistic kinetic energy of 50 MJ/kg) and for a speed of 100 km/s it is 417 J/kg (on a non-relativistic kinetic energy of 5 GJ/kg).
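The quoted corrections can be checked with a short sketch using the next Taylor term given above; values are per kilogram and the script is an illustration added here, not part of the original article.

```python
# Sketch checking the numbers quoted above: the next Taylor term (3/8) m v^4 / c^2
# gives the leading correction to the classical kinetic energy (per kilogram).
C = 299_792_458.0  # speed of light, m/s

def classical_ke_per_kg(v: float) -> float:
    return 0.5 * v ** 2

def leading_correction_per_kg(v: float) -> float:
    return (3.0 / 8.0) * v ** 4 / C ** 2

for v in (10e3, 100e3):  # 10 km/s and 100 km/s
    print(f"v = {v / 1e3:5.0f} km/s  classical = {classical_ke_per_kg(v):.2e} J/kg  "
          f"correction ~ {leading_correction_per_kg(v):.3f} J/kg")
```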
The relativistic relation between kinetic energy and momentum is given by

$$E_k = \sqrt{p^2c^2 + m^2c^4} - mc^2.$$

This can also be expanded as a Taylor series, the first term of which is the simple expression from Newtonian mechanics:

$$E_k \approx \frac{p^2}{2m} - \frac{p^4}{8m^3c^2} + \cdots$$
This suggests that the formulae for energy and momentum are not special and axiomatic, but concepts emerging from the equivalence of mass and energy and the principles of relativity.
General relativity
Using the convention that
where the four-velocity of a particle is
and is the proper time of the particle, there is also an expression for the kinetic energy of the particle in general relativity.
If the particle has momentum
as it passes by an observer with four-velocity uobs, then the expression for total energy of the particle as observed (measured in a local inertial frame) is
and the kinetic energy can be expressed as the total energy minus the rest energy:
Consider the case of a metric that is diagonal and spatially isotropic (gtt, gss, gss, gss). Since
where vα is the ordinary velocity measured w.r.t. the coordinate system, we get
Solving for ut gives
Thus for a stationary observer (v = 0)
and thus the kinetic energy takes the form
Factoring out the rest energy gives:
This expression reduces to the special relativistic case for the flat-space metric where
In the Newtonian approximation to general relativity
where Φ is the Newtonian gravitational potential. This means clocks run slower and measuring rods are shorter near massive bodies.
Kinetic energy in quantum mechanics
In quantum mechanics, observables like kinetic energy are represented as operators. For one particle of mass m, the kinetic energy operator appears as a term in the Hamiltonian and is defined in terms of the more fundamental momentum operator $\hat{p}$. The kinetic energy operator in the non-relativistic case can be written as

$$\hat{T} = \frac{\hat{p}^2}{2m}.$$

Notice that this can be obtained by replacing $p$ by $\hat{p}$ in the classical expression for kinetic energy in terms of momentum,

$$E_k = \frac{p^2}{2m}.$$

In the Schrödinger picture, $\hat{p}$ takes the form $-i\hbar\nabla$, where the derivative is taken with respect to position coordinates, and hence

$$\hat{T} = -\frac{\hbar^2}{2m}\nabla^2.$$
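As an illustration of this operator (a sketch added here, not from the article), the 1-D kinetic energy operator can be represented as a matrix on a finite-difference grid; the grid size, spacing, and wave packet below are arbitrary choices.

```python
# Sketch: matrix representation of the 1-D kinetic energy operator
# T = -(hbar^2 / 2m) d^2/dx^2 using a central finite-difference Laplacian.
# Grid size, spacing, and the Gaussian wave packet are illustrative choices.
import numpy as np

hbar = 1.054_571_817e-34   # J*s
m_e = 9.109_383_7015e-31   # electron mass, kg

n, dx = 200, 1e-11                       # grid points and spacing (m)
lap = (np.diag(np.full(n - 1, 1.0), -1)
       - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
T = -(hbar**2 / (2.0 * m_e)) * lap       # kinetic energy operator matrix (J)

# Expectation value <psi|T|psi> for a normalized Gaussian wave packet
x = (np.arange(n) - n / 2) * dx
psi = np.exp(-(x / (20 * dx)) ** 2)
psi /= np.linalg.norm(psi)
print(f"<T> ~ {psi @ T @ psi:.3e} J")
```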
The expectation value of the electron kinetic energy, $\langle\hat{T}\rangle$, for a system of N electrons described by the wavefunction $|\psi\rangle$ is a sum of 1-electron operator expectation values:

$$\langle\hat{T}\rangle = \left\langle\psi\,\middle|\,\sum_{i=1}^{N}\frac{-\hbar^2}{2m_e}\nabla_i^2\,\middle|\,\psi\right\rangle = -\frac{\hbar^2}{2m_e}\sum_{i=1}^{N}\left\langle\psi\,\middle|\,\nabla_i^2\,\middle|\,\psi\right\rangle$$

where $m_e$ is the mass of the electron and $\nabla_i^2$ is the Laplacian operator acting upon the coordinates of the ith electron and the summation runs over all electrons.
The density functional formalism of quantum mechanics requires knowledge of the electron density only, i.e., it formally does not require knowledge of the wavefunction. Given an electron density $\rho(\mathbf{r})$, the exact N-electron kinetic energy functional is unknown; however, for the specific case of a 1-electron system, the kinetic energy can be written as

$$T[\rho] = \frac{\hbar^2}{8m_e}\int \frac{\nabla\rho(\mathbf{r})\cdot\nabla\rho(\mathbf{r})}{\rho(\mathbf{r})}\,d^3r$$

where $T[\rho]$ is known as the von Weizsäcker kinetic energy functional.
See also
Escape velocity
Foot-pound
Joule
Kinetic energy penetrator
Kinetic energy per unit mass of projectiles
Kinetic projectile
Parallel axis theorem
Potential energy
Recoil
Notes
References
External links
Dynamics (mechanics)
Forms of energy
Motion
In physics, motion is the phenomenon by which an object changes its position with respect to a reference point over time. Motion is mathematically described in terms of displacement, distance, velocity, acceleration, speed, and frame of reference to an observer, measuring the change in position of the body relative to that frame with a change in time. The branch of physics describing the motion of objects without reference to its cause is called kinematics, while the branch studying forces and their effect on motion is called dynamics.
If an object is not in motion relative to a given frame of reference, it is said to be at rest, motionless, immobile, stationary, or to have a constant or time-invariant position with reference to its surroundings. Modern physics holds that, as there is no absolute frame of reference, Newton's concept of absolute motion cannot be determined. Everything in the universe can be considered to be in motion.
Motion applies to various physical systems: objects, bodies, matter particles, matter fields, radiation, radiation fields, radiation particles, curvature, and space-time. One can also speak of the motion of images, shapes, and boundaries. In general, the term motion signifies a continuous change in the position or configuration of a physical system in space. For example, one can talk about the motion of a wave or the motion of a quantum particle, where the configuration consists of the probabilities of the wave or particle occupying specific positions.
Equations of motion
Laws of motion
In physics, the motion of bodies is described through two related sets of laws of mechanics: classical mechanics for superatomic (larger than an atom) objects, such as cars, projectiles, planets, cells, and humans, and quantum mechanics for atomic and subatomic objects, such as helium atoms, protons, and electrons. Historically, Newton and Euler formulated three laws of classical mechanics, given below.
Classical mechanics
Classical mechanics is used for describing the motion of macroscopic objects moving at speeds significantly slower than the speed of light, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies. It produces very accurate results within these domains and is one of the oldest and largest scientific descriptions in science, engineering, and technology.
Classical mechanics is fundamentally based on Newton's laws of motion. These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled by Sir Isaac Newton in his work Philosophiæ Naturalis Principia Mathematica, which was first published on July 5, 1687. Newton's three laws are:
A body at rest will remain at rest, and a body in motion will remain in motion unless it is acted upon by an external force. (This is known as the law of inertia.)
Force is equal to the rate of change of momentum with respect to time. For a constant mass, force equals mass times acceleration.
For every action, there is an equal and opposite reaction. (In other words, whenever one body exerts a force on a second body, the second body exerts a force back on the first body; the two forces are equal in magnitude and opposite in direction, so the body exerting the force is pushed backward.)
Newton's three laws of motion were the first to accurately provide a mathematical model for understanding orbiting bodies in outer space. This explanation unified the motion of celestial bodies and the motion of objects on Earth.
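A minimal numerical sketch of the second law (the mass, speeds, and time step below are illustrative assumptions, not from the text) integrates F = ma for a projectile acted on by gravity alone:

```python
import numpy as np

# Sketch: Newton's second law integrated for a projectile with no air resistance
m = 2.0                       # kg (illustrative value)
g = np.array([0.0, -9.81])    # gravitational acceleration, m/s^2
pos = np.array([0.0, 0.0])    # initial position, m
vel = np.array([10.0, 10.0])  # initial velocity, m/s
dt = 1e-3                     # time step, s

while pos[1] >= 0.0:
    F = m * g                 # the only force acting is gravity
    a = F / m                 # second law: acceleration = force / mass
    vel = vel + a * dt        # simple Euler step
    pos = pos + vel * dt

print(pos[0])   # horizontal range; the analytic value 2*vx*vy/g is about 20.4 m
```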
Relativistic mechanics
Modern kinematics developed with the study of electromagnetism and refers all velocities v to their ratio to the speed of light c. Velocity is then interpreted as rapidity, the hyperbolic angle φ for which the hyperbolic tangent function gives tanh φ = v/c. Acceleration, the change of velocity over time, then changes rapidity according to Lorentz transformations. This part of mechanics is special relativity. Efforts to incorporate gravity into relativistic mechanics were made by W. K. Clifford and Albert Einstein. The development used differential geometry to describe a curved universe with gravity; the study is called general relativity.
Quantum mechanics
Quantum mechanics is a set of principles describing physical reality at the atomic level of matter (molecules and atoms) and the subatomic particles (electrons, protons, neutrons, and even smaller elementary particles such as quarks). These descriptions include the simultaneous wave-like and particle-like behavior of both matter and radiation energy as described in the wave–particle duality.
In classical mechanics, accurate measurements and predictions of the state of objects can be calculated, such as location and velocity. In quantum mechanics, due to the Heisenberg uncertainty principle, the complete state of a subatomic particle, such as its location and velocity, cannot be simultaneously determined.
In addition to describing the motion of atomic level phenomena, quantum mechanics is useful in understanding some large-scale phenomena such as superfluidity, superconductivity, and biological systems, including the function of smell receptors and the structures of protein.
Orders of magnitude
Humans, like all known things in the universe, are in constant motion; however, aside from obvious movements of the various external body parts and locomotion, humans are in motion in a variety of ways that are more difficult to perceive. Many of these "imperceptible motions" are only perceivable with the help of special tools and careful observation. The larger scales of imperceptible motions are difficult for humans to perceive for two reasons: Newton's laws of motion (particularly the third), which prevents the feeling of motion on a mass to which the observer is connected, and the lack of an obvious frame of reference that would allow individuals to easily see that they are moving. The smaller scales of these motions are too small to be detected conventionally with human senses.
Universe
Spacetime (the fabric of the universe) is expanding, meaning everything in the universe is stretching, like a rubber band. This motion is the most obscure as it is not physical motion, but rather a change in the very nature of the universe. The primary source of verification of this expansion was provided by Edwin Hubble who demonstrated that all galaxies and distant astronomical objects were moving away from Earth, known as Hubble's law, predicted by a universal expansion.
Galaxy
The Milky Way Galaxy is moving through space and many astronomers believe the velocity of this motion to be approximately relative to the observed locations of other nearby galaxies. Another reference frame is provided by the Cosmic microwave background. This frame of reference indicates that the Milky Way is moving at around .
Sun and Solar System
The Milky Way is rotating around its dense Galactic Center, thus the Sun is moving in a circle within the galaxy's gravity. Away from the central bulge, or outer rim, the typical stellar velocity is between . All planets and their moons move with the Sun. Thus, the Solar System is in motion.
Earth
The Earth is rotating or spinning around its axis. This is evidenced by day and night; at the equator the Earth has an eastward velocity of . The Earth is also orbiting around the Sun in an orbital revolution. A complete orbit around the Sun takes one year, or about 365 days; it averages a speed of about .
Continents
The Theory of Plate tectonics tells us that the continents are drifting on convection currents within the mantle, causing them to move across the surface of the planet at the slow speed of approximately per year. However, the velocities of plates range widely. The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of per year and the Pacific Plate moving per year. At the other extreme, the slowest-moving plate is the Eurasian Plate, progressing at a typical rate of about per year.
Internal body
The human heart is regularly contracting to move blood throughout the body. Through larger veins and arteries in the body, blood has been found to travel at approximately 0.33 m/s. Considerable variation exists, and peak flows in the venae cavae have been found between . Additionally, the smooth muscles of hollow internal organs are moving. The most familiar is the occurrence of peristalsis, in which digested food is forced throughout the digestive tract. Though different foods travel through the body at different rates, an average speed through the human small intestine is . The human lymphatic system is also constantly causing movements of excess fluids, lipids, and immune-system-related products around the body. The lymph fluid has been found to move through a lymph capillary of the skin at approximately 0.0000097 m/s.
Cells
The cells of the human body have many structures and organelles that move throughout them. Cytoplasmic streaming is a way in which cells move molecular substances throughout the cytoplasm. Various motor proteins work as molecular motors within a cell and move along the surface of various cellular substrates such as microtubules; these motor proteins are typically powered by the hydrolysis of adenosine triphosphate (ATP) and convert chemical energy into mechanical work. Vesicles propelled by motor proteins have been found to have a velocity of approximately 0.00000152 m/s.
Particles
According to the laws of thermodynamics, all particles of matter are in constant random motion as long as the temperature is above absolute zero. Thus the molecules and atoms that make up the human body are vibrating, colliding, and moving. This motion can be detected as temperature; higher temperatures, which represent greater kinetic energy in the particles, feel warm to humans who sense the thermal energy transferring from the object being touched to their nerves. Similarly, when lower temperature objects are touched, the senses perceive the transfer of heat away from the body as a feeling of cold.
Subatomic particles
Within the standard atomic orbital model, electrons exist in a region around the nucleus of each atom. This region is called the electron cloud. According to Bohr's model of the atom, electrons have a high velocity, and the larger the nucleus they are orbiting the faster they would need to move. If electrons were to move about the electron cloud in strict paths the same way planets orbit the Sun, then electrons would be required to do so at speeds that would far exceed the speed of light. However, there is no reason that one must confine oneself to this strict conceptualization (that electrons move in paths the same way macroscopic objects do), rather one can conceptualize electrons to be 'particles' that capriciously exist within the bounds of the electron cloud. Inside the atomic nucleus, the protons and neutrons are also probably moving around due to the electrical repulsion of the protons and the presence of angular momentum of both particles.
Light
Light moves at a speed of 299,792,458 m/s, or , in a vacuum. The speed of light in vacuum (or ) is also the speed of all massless particles and associated fields in a vacuum, and it is the upper limit on the speed at which energy, matter, information or causation can travel. The speed of light in vacuum is thus the upper limit for speed for all physical systems.
In addition, the speed of light is an invariant quantity: it has the same value, irrespective of the position or speed of the observer. This property makes the speed of light c a natural measurement unit for speed and a fundamental constant of nature.
In 2019, the speed of light was redefined alongside all seven SI base units using what is called "the explicit-constant formulation", where each "unit is defined indirectly by specifying explicitly an exact value for a well-recognized fundamental constant", as was done for the speed of light. A new, but completely equivalent, wording of the metre's definition was proposed: "The metre, symbol m, is the unit of length; its magnitude is set by fixing the numerical value of the speed of light in vacuum to be equal to exactly 299792458 when it is expressed in the SI unit m s⁻¹." This implicit change to the speed of light was one of the changes incorporated in the 2019 revision of the SI, also termed the New SI.
Superluminal motion
Some motion appears to an observer to exceed the speed of light. Bursts of energy moving out along the relativistic jets emitted from compact astronomical sources such as quasars and radio galaxies can have a proper motion that appears greater than the speed of light. All of these sources are thought to contain a black hole, responsible for the ejection of mass at high velocities. Light echoes can also produce apparent superluminal motion. This occurs owing to how motion is often calculated at long distances; oftentimes calculations fail to account for the fact that the speed of light is finite. When measuring the movement of distant objects across the sky, there is a large time delay between what has been observed and what has occurred, due to the large distance the light from the distant object has to travel to reach us. The error in the above naive calculation comes from the fact that when an object has a component of velocity directed towards the Earth, as the object moves closer to the Earth that time delay becomes smaller. This means that the apparent speed as calculated above is greater than the actual speed. Correspondingly, if the object is moving away from the Earth, the above calculation underestimates the actual speed.
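The apparent-speed effect can be made concrete with the standard expression for the apparent transverse speed of a jet component, β_apparent = β sin θ / (1 − β cos θ), where θ is the angle between the motion and the line of sight. The numbers below are assumed for illustration only:

```python
import numpy as np

# Apparent transverse speed of a jet component moving at speed v = beta*c at angle theta
# to the line of sight (standard kinematic result; illustrative numbers assumed here)
beta = 0.99                  # true speed, as a fraction of the speed of light
theta = np.radians(10.0)     # angle between the motion and the line of sight

beta_apparent = beta * np.sin(theta) / (1.0 - beta * np.cos(theta))
print(beta_apparent)         # ~6.9 > 1: the motion *appears* faster than light
```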
Types of motion
Simple harmonic motion – motion in which the body oscillates in such a way that the restoring force acting on it is directly proportional to the body's displacement; mathematically, the force is proportional to the negative of the displacement, with the negative sign signifying the restoring nature of the force (e.g., that of a pendulum). A numerical sketch of this motion follows this list.
Linear motion – motion that follows a straight path and whose displacement is exactly the same as its trajectory (also known as rectilinear motion).
Reciprocal motion
Brownian motion – the random movement of very small particles
Circular motion
Rotatory motion – a motion about a fixed point. (e.g. Ferris wheel).
Curvilinear motion – motion along a curved path that may be planar or in three dimensions.
Rolling motion – (as of the wheel of a bicycle)
Oscillatory – (swinging from side to side)
Vibratory motion
Combination (or simultaneous) motions – Combination of two or more above listed motions
Projectile motion – uniform horizontal motion + vertical accelerated motion
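As noted in the simple harmonic motion entry above, here is a minimal numerical sketch of that motion (the mass and spring constant are assumed illustrative values), showing the restoring force F = −kx producing oscillation:

```python
import numpy as np

# Simple harmonic motion from the restoring force F = -k*x (illustrative values assumed)
m, k = 1.0, 4.0            # mass (kg) and spring constant (N/m); omega = sqrt(k/m) = 2 rad/s
x, v = 1.0, 0.0            # start displaced by 1 m, at rest
dt = 1e-4
xs = []
for _ in np.arange(0.0, 3.14159, dt):   # roughly one period, 2*pi/omega ~ 3.14 s
    a = -k * x / m         # Newton's second law with the restoring force
    v += a * dt            # semi-implicit Euler step
    x += v * dt
    xs.append(x)

print(min(xs), max(xs))    # oscillates between about -1 and +1
```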
Fundamental motions
Linear motion
Circular motion
Oscillation
Wave
Relative motion
Rotary motion
See also
References
External links
Feynman's lecture on motion
Mechanics
Physical phenomena | 0.804408 | 0.996903 | 0.801917 |
Mass–energy equivalence | In physics, mass–energy equivalence is the relationship between mass and energy in a system's rest frame, where the two quantities differ only by a multiplicative constant and the units of measurement. The principle is described by the physicist Albert Einstein's formula: . In a reference frame where the system is moving, its relativistic energy and relativistic mass (instead of rest mass) obey the same formula.
The formula defines the energy of a particle in its rest frame as the product of mass with the speed of light squared. Because the speed of light is a large number in everyday units (approximately 300,000 km/s or 186,000 mi/s), the formula implies that a small amount of "rest mass", measured when the system is at rest, corresponds to an enormous amount of energy, which is independent of the composition of the matter.
Rest mass, also called invariant mass, is a fundamental physical property that is independent of momentum, even at extreme speeds approaching the speed of light. Its value is the same in all inertial frames of reference. Massless particles such as photons have zero invariant mass, but massless free particles have both momentum and energy.
The equivalence principle implies that when mass is lost in chemical reactions or nuclear reactions, a corresponding amount of energy will be released. The energy can be released to the environment (outside of the system being considered) as radiant energy, such as light, or as thermal energy. The principle is fundamental to many fields of physics, including nuclear and particle physics.
Mass–energy equivalence arose from special relativity as a paradox described by the French polymath Henri Poincaré (1854–1912). Einstein was the first to propose the equivalence of mass and energy as a general principle and a consequence of the symmetries of space and time. The principle first appeared in "Does the inertia of a body depend upon its energy-content?", one of his annus mirabilis papers, published on 21 November 1905. The formula and its relationship to momentum, as described by the energy–momentum relation, were later developed by other physicists.
Description
Mass–energy equivalence states that all objects having mass, or massive objects, have a corresponding intrinsic energy, even when they are stationary. In the rest frame of an object, where by definition it is motionless and so has no momentum, the mass and energy are equal or they differ only by a constant factor, the speed of light squared. In Newtonian mechanics, a motionless body has no kinetic energy, and it may or may not have other amounts of internal stored energy, like chemical energy or thermal energy, in addition to any potential energy it may have from its position in a field of force. These energies tend to be much smaller than the mass of the object multiplied by c², which is on the order of 10^17 joules for a mass of one kilogram. Due to this principle, the mass of the atoms that come out of a nuclear reaction is less than the mass of the atoms that go in, and the difference in mass shows up as heat and light with the same equivalent energy as the difference. In analyzing these extreme events, Einstein's formula can be used with E as the energy released (removed), and m as the change in mass.
In relativity, all the energy that moves with an object (i.e., the energy as measured in the object's rest frame) contributes to the total mass of the body, which measures how much it resists acceleration. If an isolated box of ideal mirrors could contain light, the individually massless photons would contribute to the total mass of the box by the amount equal to their energy divided by c². For an observer in the rest frame, removing energy is the same as removing mass and the formula indicates how much mass is lost when energy is removed. In the same way, when any energy is added to an isolated system, the increase in the mass is equal to the added energy divided by c².
Mass in special relativity
An object moves at different speeds in different frames of reference, depending on the motion of the observer. This implies the kinetic energy, in both Newtonian mechanics and relativity, is 'frame dependent', so that the amount of relativistic energy that an object is measured to have depends on the observer. The relativistic mass of an object is given by the relativistic energy divided by . Because the relativistic mass is exactly proportional to the relativistic energy, relativistic mass and relativistic energy are nearly synonymous; the only difference between them is the units. The rest mass or invariant mass of an object is defined as the mass an object has in its rest frame, when it is not moving with respect to the observer. Physicists typically use the term mass, though experiments have shown an object's gravitational mass depends on its total energy and not just its rest mass. The rest mass is the same for all inertial frames, as it is independent of the motion of the observer, it is the smallest possible value of the relativistic mass of the object. Because of the attraction between components of a system, which results in potential energy, the rest mass is almost never additive; in general, the mass of an object is not the sum of the masses of its parts. The rest mass of an object is the total energy of all the parts, including kinetic energy, as observed from the center of momentum frame, and potential energy. The masses add up only if the constituents are at rest (as observed from the center of momentum frame) and do not attract or repel, so that they do not have any extra kinetic or potential energy. Massless particles are particles with no rest mass, and therefore have no intrinsic energy; their energy is due only to their momentum.
Relativistic mass
Relativistic mass depends on the motion of the object, so that different observers in relative motion see different values for it. The relativistic mass of a moving object is larger than the relativistic mass of an object at rest, because a moving object has kinetic energy. If the object moves slowly, the relativistic mass is nearly equal to the rest mass and both are nearly equal to the classical inertial mass (as it appears in Newton's laws of motion). If the object moves quickly, the relativistic mass is greater than the rest mass by an amount equal to the mass associated with the kinetic energy of the object. Massless particles also have relativistic mass derived from their kinetic energy, equal to their relativistic energy divided by c², or m = E/c². The speed of light is one in a system where length and time are measured in natural units and the relativistic mass and energy would be equal in value and dimension. As it is just another name for the energy, the use of the term relativistic mass is redundant and physicists generally reserve mass to refer to rest mass, or invariant mass, as opposed to relativistic mass. A consequence of this terminology is that the mass is not conserved in special relativity, whereas the conservation of momentum and conservation of energy are both fundamental laws.
Conservation of mass and energy
Conservation of energy is a universal principle in physics and holds for any interaction, along with the conservation of momentum. The classical conservation of mass, in contrast, is violated in certain relativistic settings. This concept has been experimentally proven in a number of ways, including the conversion of mass into kinetic energy in nuclear reactions and other interactions between elementary particles. While modern physics has discarded the expression 'conservation of mass', in older terminology a relativistic mass can also be defined to be equivalent to the energy of a moving system, allowing for a conservation of relativistic mass. Mass conservation breaks down when the energy associated with the mass of a particle is converted into other forms of energy, such as kinetic energy, thermal energy, or radiant energy.
Massless particles
Massless particles have zero rest mass. The Planck–Einstein relation for the energy of photons is given by the equation E = hf, where h is the Planck constant and f is the photon frequency. This frequency and thus the relativistic energy are frame-dependent. If an observer runs away from a photon in the direction the photon travels from a source, and it catches up with the observer, the observer sees it as having less energy than it had at the source. The faster the observer is traveling with regard to the source when the photon catches up, the less energy the photon would be seen to have. As an observer approaches the speed of light with regard to the source, the redshift of the photon increases, according to the relativistic Doppler effect. The energy of the photon is reduced and as the wavelength becomes arbitrarily large, the photon's energy approaches zero, because of the massless nature of photons, which does not permit any intrinsic energy.
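A small worked example of the Planck–Einstein relation (the wavelength below is an assumed illustrative value, not from the text):

```python
# Photon energy from the Planck-Einstein relation E = h*f, using a green laser line as an example
h = 6.62607015e-34       # Planck constant, J s (exact by definition)
c = 299_792_458.0        # speed of light, m/s
wavelength = 532e-9      # m, an illustrative green wavelength (assumed)

f = c / wavelength                       # photon frequency
E = h * f                                # photon energy in joules
print(E, E / 1.602176634e-19)            # ~3.7e-19 J, or about 2.3 eV
```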
Composite systems
For closed systems made up of many parts, like an atomic nucleus, planet, or star, the relativistic energy is given by the sum of the relativistic energies of each of the parts, because energies are additive in these systems. If a system is bound by attractive forces, and the energy gained in excess of the work done is removed from the system, then mass is lost with this removed energy. The mass of an atomic nucleus is less than the total mass of the protons and neutrons that make it up. This mass decrease is also equivalent to the energy required to break up the nucleus into individual protons and neutrons. This effect can be understood by looking at the potential energy of the individual components. The individual particles have a force attracting them together, and forcing them apart increases the potential energy of the particles in the same way that lifting an object up on earth does. This energy is equal to the work required to split the particles apart. The mass of the Solar System is slightly less than the sum of its individual masses.
For an isolated system of particles moving in different directions, the invariant mass of the system is the analog of the rest mass, and is the same for all observers, even those in relative motion. It is defined as the total energy (divided by ) in the center of momentum frame. The center of momentum frame is defined so that the system has zero total momentum; the term center of mass frame is also sometimes used, where the center of mass frame is a special case of the center of momentum frame where the center of mass is put at the origin. A simple example of an object with moving parts but zero total momentum is a container of gas. In this case, the mass of the container is given by its total energy (including the kinetic energy of the gas molecules), since the system's total energy and invariant mass are the same in any reference frame where the momentum is zero, and such a reference frame is also the only frame in which the object can be weighed. In a similar way, the theory of special relativity posits that the thermal energy in all objects, including solids, contributes to their total masses, even though this energy is present as the kinetic and potential energies of the atoms in the object, and it (in a similar way to the gas) is not seen in the rest masses of the atoms that make up the object. Similarly, even photons, if trapped in an isolated container, would contribute their energy to the mass of the container. Such extra mass, in theory, could be weighed in the same way as any other type of rest mass, even though individually photons have no rest mass. The property that trapped energy in any form adds weighable mass to systems that have no net momentum is one of the consequences of relativity. It has no counterpart in classical Newtonian physics, where energy never exhibits weighable mass.
Relation to gravity
Physics has two concepts of mass, the gravitational mass and the inertial mass. The gravitational mass is the quantity that determines the strength of the gravitational field generated by an object, as well as the gravitational force acting on the object when it is immersed in a gravitational field produced by other bodies. The inertial mass, on the other hand, quantifies how much an object accelerates if a given force is applied to it. The mass–energy equivalence in special relativity refers to the inertial mass. However, already in the context of Newtonian gravity, the weak equivalence principle is postulated: the gravitational and the inertial mass of every object are the same. Thus, the mass–energy equivalence, combined with the weak equivalence principle, results in the prediction that all forms of energy contribute to the gravitational field generated by an object. This observation is one of the pillars of the general theory of relativity.
The prediction that all forms of energy interact gravitationally has been subject to experimental tests. One of the first observations testing this prediction, called the Eddington experiment, was made during the Solar eclipse of May 29, 1919. During the solar eclipse, the English astronomer and physicist Arthur Eddington observed that the light from stars passing close to the Sun was bent. The effect is due to the gravitational attraction of light by the Sun. The observation confirmed that the energy carried by light indeed is equivalent to a gravitational mass. Another seminal experiment, the Pound–Rebka experiment, was performed in 1960. In this test a beam of light was emitted from the top of a tower and detected at the bottom. The frequency of the light detected was higher than that of the light emitted. This result confirms that the energy of photons increases when they fall in the gravitational field of the Earth. The energy, and therefore the gravitational mass, of photons is proportional to their frequency as stated by Planck's relation.
Efficiency
In some reactions, matter particles can be destroyed and their associated energy released to the environment as other forms of energy, such as light and heat. One example of such a conversion takes place in elementary particle interactions, where the rest energy is transformed into kinetic energy. Such conversions between types of energy happen in nuclear weapons, in which the protons and neutrons in atomic nuclei lose a small fraction of their original mass, though the mass lost is not due to the destruction of any smaller constituents. Nuclear fission allows a tiny fraction of the energy associated with the mass to be converted into usable energy such as radiation; in the fission of uranium, for instance, about 0.1% of the mass of the original atom is lost. In theory, it should be possible to destroy matter and convert all of the rest-energy associated with matter into heat and light, but none of the theoretically known methods are practical. One way to harness all the energy associated with mass is to annihilate matter with antimatter. Antimatter is rare in the universe, however, and the known mechanisms of production require more usable energy than would be released in annihilation. CERN estimated in 2011 that over a billion times more energy is required to make and store antimatter than could be released in its annihilation.
As most of the mass which comprises ordinary objects resides in protons and neutrons, converting all the energy of ordinary matter into more useful forms requires that the protons and neutrons be converted to lighter particles, or particles with no mass at all. In the Standard Model of particle physics, the number of protons plus neutrons is nearly exactly conserved. Despite this, Gerard 't Hooft showed that there is a process that converts protons and neutrons to antielectrons and neutrinos. This is the weak SU(2) instanton proposed by the physicists Alexander Belavin, Alexander Markovich Polyakov, Albert Schwarz, and Yu. S. Tyupkin. This process can in principle destroy matter and convert all the energy of matter into neutrinos and usable energy, but it is normally extraordinarily slow. It was later shown that the process occurs rapidly at extremely high temperatures that would only have been reached shortly after the Big Bang.
Many extensions of the standard model contain magnetic monopoles, and in some models of grand unification, these monopoles catalyze proton decay, a process known as the Callan–Rubakov effect. This process would be an efficient mass–energy conversion at ordinary temperatures, but it requires making monopoles and anti-monopoles, whose production is expected to be inefficient. Another method of completely annihilating matter uses the gravitational field of black holes. The British theoretical physicist Stephen Hawking theorized it is possible to throw matter into a black hole and use the emitted heat to generate power. According to the theory of Hawking radiation, however, larger black holes radiate less than smaller ones, so that usable power can only be produced by small black holes.
Extension for systems in motion
Unlike a system's energy in an inertial frame, the relativistic energy of a system depends on both the rest mass and the total momentum of the system. The extension of Einstein's equation to these systems is given by:
E² = (m₀c²)² + (pc)²
or
E = √((m₀c²)² + (pc)²),
where the (pc)² term represents the square of the Euclidean norm (total vector length) of the various momentum vectors in the system, which reduces to the square of the simple momentum magnitude, if only a single particle is considered. This equation is called the energy–momentum relation and reduces to E = mc² when the momentum term is zero. For photons, where m₀ = 0, the equation reduces to E = pc.
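A short numerical sketch of the energy–momentum relation for a single particle (the momentum value is an assumed illustration; the proton rest mass is the standard value):

```python
# Total energy of a proton with a given momentum, from the energy-momentum relation
c = 299_792_458.0           # m/s
m0 = 1.67262192e-27         # proton rest mass, kg
p = 1.0e-18                 # kg m/s, an assumed illustrative momentum

E = ((m0 * c**2) ** 2 + (p * c) ** 2) ** 0.5
print(E, m0 * c**2)         # total energy vs rest energy, in joules
```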
Low-speed approximation
Using the Lorentz factor γ, the energy–momentum relation can be rewritten as E = γmc² and expanded as a power series:
E = mc² + (1/2)mv² + (3/8)m(v⁴/c²) + …
For speeds much smaller than the speed of light, higher-order terms in this expression get smaller and smaller because v/c is small. For low speeds, all but the first two terms can be ignored:
E ≈ mc² + (1/2)mv²
In classical mechanics, both the mc² term and the high-speed corrections are ignored. The initial value of the energy is arbitrary, as only the change in energy can be measured, and so the mc² term is ignored in classical physics. While the higher-order terms become important at higher speeds, the Newtonian equation is a highly accurate low-speed approximation; adding in the third term yields:
E ≈ mc² + (1/2)mv² + (3/8)m(v⁴/c²).
The difference between the two approximations is given by 3v²/(4c²), a number very small for everyday objects. In 2018 NASA announced the Parker Solar Probe was the fastest spacecraft ever, with a speed of . The difference between the approximations for the Parker Solar Probe in 2018 is , which accounts for an energy correction of four parts per hundred million. The gravitational constant, in contrast, has a standard relative uncertainty of about .
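A hedged numerical comparison of the exact relativistic kinetic energy with the Newtonian approximation (the speed below is an assumed illustrative value of roughly the right order for a fast spacecraft, since the exact figure is not given above):

```python
# Relativistic vs Newtonian kinetic energy at a spacecraft-like speed (illustrative numbers)
c = 299_792_458.0          # m/s
v = 70_000.0               # m/s, an assumed illustrative speed
m = 1.0                    # per kilogram of spacecraft

gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
ke_rel = (gamma - 1.0) * m * c**2      # exact relativistic kinetic energy
ke_newton = 0.5 * m * v**2             # Newtonian approximation

print((ke_rel - ke_newton) / ke_rel)   # fractional correction, a few parts in 10^8 at this speed
```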
Applications
Application to nuclear physics
The nuclear binding energy is the minimum energy that is required to disassemble the nucleus of an atom into its component parts. The mass of an atom is less than the sum of the masses of its constituents due to the attraction of the strong nuclear force. The difference between the two masses is called the mass defect and is related to the binding energy through Einstein's formula. The principle is used in modeling nuclear fission reactions, and it implies that a great amount of energy can be released by the nuclear fission chain reactions used in both nuclear weapons and nuclear power.
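As an illustration of the mass defect, the binding energy of helium-4 can be estimated from standard (approximate) particle masses:

```python
# Mass defect and binding energy of helium-4 from its constituents (approximate standard masses)
m_p = 1.007276      # proton mass, u
m_n = 1.008665      # neutron mass, u
m_He4 = 4.001506    # helium-4 nuclear mass, u
u_to_MeV = 931.494  # energy equivalent of 1 u in MeV, from E = mc^2

mass_defect = 2 * m_p + 2 * m_n - m_He4
binding_energy = mass_defect * u_to_MeV
print(mass_defect, binding_energy)   # ~0.0304 u, ~28.3 MeV
```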
A water molecule weighs a little less than two free hydrogen atoms and an oxygen atom. The minuscule mass difference is the energy needed to split the molecule into three individual atoms (divided by ), which was given off as heat when the molecule formed (this heat had mass). Similarly, a stick of dynamite in theory weighs a little bit more than the fragments after the explosion; in this case the mass difference is the energy and heat that is released when the dynamite explodes. Such a change in mass may only happen when the system is open, and the energy and mass are allowed to escape. Thus, if a stick of dynamite is blown up in a hermetically sealed chamber, the mass of the chamber and fragments, the heat, sound, and light would still be equal to the original mass of the chamber and dynamite. If sitting on a scale, the weight and mass would not change. This would in theory also happen even with a nuclear bomb, if it could be kept in an ideal box of infinite strength, which did not rupture or pass radiation. Thus, a 21.5 kiloton nuclear bomb produces about one gram of heat and electromagnetic radiation, but the mass of this energy would not be detectable in an exploded bomb in an ideal box sitting on a scale; instead, the contents of the box would be heated to millions of degrees without changing total mass and weight. If a transparent window passing only electromagnetic radiation were opened in such an ideal box after the explosion, and a beam of X-rays and other lower-energy light allowed to escape the box, it would eventually be found to weigh one gram less than it had before the explosion. This weight loss and mass loss would happen as the box was cooled by this process, to room temperature. However, any surrounding mass that absorbed the X-rays (and other "heat") would gain this gram of mass from the resulting heating, thus, in this case, the mass "loss" would represent merely its relocation.
Practical examples
Einstein used the centimetre–gram–second system of units (cgs), but the formula is independent of the system of units. In natural units, the numerical value of the speed of light is set to equal 1, and the formula expresses an equality of numerical values: E = m. In the SI system (expressing the ratio E/m in joules per kilogram using the value of c in metres per second):
E/m = c² = (299,792,458 m/s)² = 89,875,517,873,681,764 J/kg (≈ 9.0 × 10^16 joules per kilogram). These figures are checked with a short calculation after the list below.
So the energy equivalent of one kilogram of mass is
89.9 petajoules
25.0 billion kilowatt-hours (≈ 25,000 GW·h)
21.5 trillion kilocalories (≈ 21 Pcal)
85.2 trillion BTUs
0.0852 quads
or the energy released by combustion of the following:
21 500 kilotons of TNT-equivalent energy (≈ 21 Mt)
litres or US gallons of automotive gasoline
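As promised above, the one-kilogram figures can be checked with a few lines of arithmetic (conversion factors: 1 kWh = 3.6 × 10^6 J, 1 kt TNT = 4.184 × 10^12 J):

```python
# Rough numerical check of the one-kilogram figures listed above
c = 299_792_458.0                     # m/s
E = 1.0 * c**2                        # joules in one kilogram of mass

print(E)                              # ~8.99e16 J, i.e. ~90 petajoules
print(E / 3.6e6 / 1e9)                # ~25.0 billion kilowatt-hours
print(E / 4.184e12)                   # ~21,500 kilotons of TNT
```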
Any time energy is released, the process can be evaluated from an E = mc² perspective. For instance, the "Gadget"-style bomb used in the Trinity test and the bombing of Nagasaki had an explosive yield equivalent to 21 kt of TNT. About 1 kg of the approximately 6.15 kg of plutonium in each of these bombs fissioned into lighter elements totaling almost exactly one gram less, after cooling. The electromagnetic radiation and kinetic energy (thermal and blast energy) released in this explosion carried the missing gram of mass.
Whenever energy is added to a system, the system gains mass, as shown when the equation is rearranged to m = E/c²:
A spring's mass increases whenever it is put into compression or tension. Its mass increase arises from the increased potential energy stored within it, which is bound in the stretched chemical (electron) bonds linking the atoms within the spring.
Raising the temperature of an object (increasing its thermal energy) increases its mass. For example, consider the world's primary mass standard for the kilogram, made of platinum and iridium. If its temperature is allowed to change by 1 °C, its mass changes by 1.5 picograms (1 pg = 10^-12 g); see the estimate after this list.
A spinning ball has greater mass than when it is not spinning. Its increase of mass is exactly the equivalent of the mass of energy of rotation, which is itself the sum of the kinetic energies of all the moving parts of the ball. For example, the Earth itself is more massive due to its rotation than it would be with no rotation. The rotational energy of the Earth is greater than 10^24 joules, which is equivalent to over 10^7 kg.
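The 1.5-picogram figure quoted above can be estimated as follows (the specific heat used is an assumed approximate value for a platinum–iridium alloy):

```python
# Estimate of the mass gained by the platinum-iridium kilogram prototype when warmed by 1 degree C
c = 299_792_458.0       # m/s
m = 1.0                 # kg
specific_heat = 133.0   # J/(kg K), approximate value assumed for a Pt-Ir alloy

delta_E = m * specific_heat * 1.0       # heat added for a 1 degree C rise
delta_m = delta_E / c**2                # E = m c^2 rearranged
print(delta_m * 1e15)                   # ~1.5 picograms, close to the figure quoted above
```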
History
While Einstein was the first to have correctly deduced the mass–energy equivalence formula, he was not the first to have related energy with mass, though nearly all previous authors thought that the energy that contributes to mass comes only from electromagnetic fields. Once discovered, Einstein's formula was initially written in many different notations, and its interpretation and justification was further developed in several steps.
Developments prior to Einstein
Eighteenth century theories on the correlation of mass and energy included that devised by the English scientist Isaac Newton in 1717, who speculated that light particles and matter particles were interconvertible in "Query 30" of the Opticks, where he asks: "Are not the gross bodies and light convertible into one another, and may not bodies receive much of their activity from the particles of light which enter their composition?" Swedish scientist and theologian Emanuel Swedenborg, in his Principia of 1734 theorized that all matter is ultimately composed of dimensionless points of "pure and total motion". He described this motion as being without force, direction or speed, but having the potential for force, direction and speed everywhere within it.
During the nineteenth century there were several speculative attempts to show that mass and energy were proportional in various ether theories. In 1873 the Russian physicist and mathematician Nikolay Umov pointed out a relation between mass and energy for ether in the form of , where . The writings of the English engineer Samuel Tolver Preston, and a 1903 paper by the Italian industrialist and geologist Olinto De Pretto, presented a mass–energy relation. Italian mathematician and math historian Umberto Bartocci observed that there were only three degrees of separation linking De Pretto to Einstein, concluding that Einstein was probably aware of De Pretto's work. Preston and De Pretto, following physicist Georges-Louis Le Sage, imagined that the universe was filled with an ether of tiny particles that always move at speed . Each of these particles has a kinetic energy of up to a small numerical factor. The nonrelativistic kinetic energy formula did not always include the traditional factor of , since German polymath Gottfried Leibniz introduced kinetic energy without it, and the is largely conventional in prerelativistic physics. By assuming that every particle has a mass that is the sum of the masses of the ether particles, the authors concluded that all matter contains an amount of kinetic energy either given by or depending on the convention. A particle ether was usually considered unacceptably speculative science at the time, and since these authors did not formulate relativity, their reasoning is completely different from that of Einstein, who used relativity to change frames.
In 1905, independently of Einstein, French polymath Gustave Le Bon speculated that atoms could release large amounts of latent energy, reasoning from an all-encompassing qualitative philosophy of physics.
Electromagnetic mass
There were many attempts in the 19th and the beginning of the 20th century—like those of British physicists J. J. Thomson in 1881 and Oliver Heaviside in 1889, and George Frederick Charles Searle in 1897, German physicists Wilhelm Wien in 1900 and Max Abraham in 1902, and the Dutch physicist Hendrik Antoon Lorentz in 1904—to understand how the mass of a charged object depends on the electrostatic field. This concept was called electromagnetic mass, and was considered as being dependent on velocity and direction as well. Lorentz in 1904 gave the following expressions for longitudinal and transverse electromagnetic mass:
m_L = m_em/(1 − v²/c²)^(3/2) (longitudinal) and m_T = m_em/(1 − v²/c²)^(1/2) (transverse),
where
m_em = (4/3)E_em/c² is the electromagnetic mass at rest.
Another way of deriving a type of electromagnetic mass was based on the concept of radiation pressure. In 1900, French polymath Henri Poincaré associated electromagnetic radiation energy with a "fictitious fluid" having momentum and mass m = E/c².
By that, Poincaré tried to save the center of mass theorem in Lorentz's theory, though his treatment led to radiation paradoxes.
Austrian physicist Friedrich Hasenöhrl showed in 1904 that electromagnetic cavity radiation contributes the "apparent mass"
to the cavity's mass. He argued that this implies mass dependence on temperature as well.
Einstein: mass–energy equivalence
Einstein did not write the exact formula in his 1905 Annus Mirabilis paper "Does the Inertia of an object Depend Upon Its Energy Content?"; rather, the paper states that if a body gives off the energy E by emitting light, its mass diminishes by E/c². This formulation relates only a change in mass to a change in energy without requiring the absolute relationship. The relationship convinced him that mass and energy can be seen as two names for the same underlying, conserved physical quantity. He has stated that the laws of conservation of energy and conservation of mass are "one and the same". Einstein elaborated in a 1946 essay that "the principle of the conservation of mass… proved inadequate in the face of the special theory of relativity. It was therefore merged with the energy conservation principle—just as, about 60 years before, the principle of the conservation of mechanical energy had been combined with the principle of the conservation of heat [thermal energy]. We might say that the principle of the conservation of energy, having previously swallowed up that of the conservation of heat, now proceeded to swallow that of the conservation of mass—and holds the field alone."
Mass–velocity relationship
In developing special relativity, Einstein found that the kinetic energy of a moving body is
E_k = mc²(γ − 1) = mc²/√(1 − v²/c²) − mc²,
with v the velocity, m the rest mass, and γ the Lorentz factor.
He included the second term on the right to make sure that for small velocities the energy would be the same as in classical mechanics, thus satisfying the correspondence principle: in the limit of small velocities, the expression reduces to the classical kinetic energy (1/2)mv².
Without this second term, there would be an additional contribution in the energy when the particle is not moving.
Einstein's view on mass
Einstein, following Lorentz and Abraham, used velocity- and direction-dependent mass concepts in his 1905 electrodynamics paper and in another paper in 1906. In Einstein's first 1905 paper on E = mc², he treated m as what would now be called the rest mass, and it has been noted that in his later years he did not like the idea of "relativistic mass".
In older physics terminology, relativistic energy is used in lieu of relativistic mass and the term "mass" is reserved for the rest mass. Historically, there has been considerable debate over the use of the concept of "relativistic mass" and the connection of "mass" in relativity to "mass" in Newtonian dynamics. One view is that only rest mass is a viable concept and is a property of the particle; while relativistic mass is a conglomeration of particle properties and properties of spacetime. Another view, attributed to Norwegian physicist Kjell Vøyenli, is that the Newtonian concept of mass as a particle property and the relativistic concept of mass have to be viewed as embedded in their own theories and as having no precise connection.
Einstein's 1905 derivation
Already in his relativity paper "On the electrodynamics of moving bodies", Einstein derived the correct expression for the kinetic energy of particles:
E_k = mc²(1/√(1 − v²/c²) − 1).
Now the question remained open as to which formulation applies to bodies at rest. This was tackled by Einstein in his paper "Does the inertia of a body depend upon its energy content?", one of his Annus Mirabilis papers. Here, Einstein used to represent the speed of light in vacuum and to represent the energy lost by a body in the form of radiation. Consequently, the equation was not originally written as a formula but as a sentence in German saying that "if a body gives off the energy in the form of radiation, its mass diminishes by ." A remark placed above it informed that the equation was approximated by neglecting "magnitudes of fourth and higher orders" of a series expansion. Einstein used a body emitting two light pulses in opposite directions, having energies of before and after the emission as seen in its rest frame. As seen from a moving frame, becomes and becomes . Einstein obtained, in modern notation:
.
He then argued that can only differ from the kinetic energy by an additive constant, which gives
.
Neglecting effects higher than third order in after a Taylor series expansion of the right side of this yields:
Einstein concluded that the emission reduces the body's mass by E/c², and that the mass of a body is a measure of its energy content.
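The leading behaviour of this argument can be checked symbolically. The sketch below (the symbol E_rad for the radiated energy is an assumption for illustration) expands the kinetic-energy difference between the frames in powers of v and recovers a leading term of the form (1/2)(E_rad/c²)v², i.e. the body behaves as if it lost a mass E_rad/c²:

```python
import sympy as sp

# Symbolic check of the leading term in the 1905-style argument sketched above
v, c, E_rad = sp.symbols('v c E_rad', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# Kinetic energy lost by the emitting body, as seen from the moving frame
delta_K = E_rad * (gamma - 1)

# Expand in powers of v: the leading term is E_rad*v**2/(2*c**2),
# the kinetic energy of a mass E_rad/c**2 moving at speed v
print(sp.series(delta_K, v, 0, 4))
```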
The correctness of Einstein's 1905 derivation of was criticized by German theoretical physicist Max Planck in 1907, who argued that it is only valid to first approximation. Another criticism was formulated by American physicist Herbert Ives in 1952 and the Israeli physicist Max Jammer in 1961, asserting that Einstein's derivation is based on begging the question. Other scholars, such as American and Chilean philosophers John Stachel and Roberto Torretti, have argued that Ives' criticism was wrong, and that Einstein's derivation was correct. American physics writer Hans Ohanian, in 2008, agreed with Stachel/Torretti's criticism of Ives, though he argued that Einstein's derivation was wrong for other reasons.
Relativistic center-of-mass theorem of 1906
Like Poincaré, Einstein concluded in 1906 that the inertia of electromagnetic energy is a necessary condition for the center-of-mass theorem to hold. On this occasion, Einstein referred to Poincaré's 1900 paper and wrote: "Although the merely formal considerations, which we will need for the proof, are already mostly contained in a work by H. Poincaré, for the sake of clarity I will not rely on that work." In Einstein's more physical, as opposed to formal or mathematical, point of view, there was no need for fictitious masses. He could avoid the perpetual motion problem because, on the basis of the mass–energy equivalence, he could show that the transport of inertia that accompanies the emission and absorption of radiation solves the problem. Poincaré's rejection of the principle of action–reaction can be avoided through Einstein's E = mc², because mass conservation appears as a special case of the energy conservation law.
Further developments
There were several further developments in the first decade of the twentieth century. In May 1907, Einstein explained that the expression for energy of a moving mass point assumes the simplest form when its expression for the state of rest is chosen to be (where is the mass), which is in agreement with the "principle of the equivalence of mass and energy". In addition, Einstein used the formula , with being the energy of a system of mass points, to describe the energy and mass increase of that system when the velocity of the differently moving mass points is increased. Max Planck rewrote Einstein's mass–energy relationship as in June 1907, where is the pressure and the volume to express the relation between mass, its latent energy, and thermodynamic energy within the body. Subsequently, in October 1907, this was rewritten as and given a quantum interpretation by German physicist Johannes Stark, who assumed its validity and correctness. In December 1907, Einstein expressed the equivalence in the form and concluded: "A mass is equivalent, as regards inertia, to a quantity of energy . […] It appears far more natural to consider every inertial mass as a store of energy." American physical chemists Gilbert N. Lewis and Richard C. Tolman used two variations of the formula in 1909: and , with being the relativistic energy (the energy of an object when the object is moving), is the rest energy (the energy when not moving), is the relativistic mass (the rest mass and the extra mass gained when moving), and is the rest mass. The same relations in different notation were used by Lorentz in 1913 and 1914, though he placed the energy on the left-hand side: and , with being the total energy (rest energy plus kinetic energy) of a moving material point, its rest energy, the relativistic mass, and the invariant mass.
In 1911, German physicist Max von Laue gave a more comprehensive proof of from the stress–energy tensor, which was later generalized by German mathematician Felix Klein in 1918.
Einstein returned to the topic once again after World War II, and this time he wrote E = mc² in the title of his article intended as an explanation for a general reader by analogy.
Alternative version
An alternative version of Einstein's thought experiment was proposed by American theoretical physicist Fritz Rohrlich in 1990, who based his reasoning on the Doppler effect. Like Einstein, he considered a body at rest with mass . If the body is examined in a frame moving with nonrelativistic velocity , it is no longer at rest and in the moving frame it has momentum . Then he supposed the body emits two pulses of light to the left and to the right, each carrying an equal amount of energy . In its rest frame, the object remains at rest after the emission since the two beams are equal in strength and carry opposite momentum. However, if the same process is considered in a frame that moves with velocity to the left, the pulse moving to the left is redshifted, while the pulse moving to the right is blue shifted. The blue light carries more momentum than the red light, so that the momentum of the light in the moving frame is not balanced: the light is carrying some net momentum to the right. The object has not changed its velocity before or after the emission. Yet in this frame it has lost some right-momentum to the light. The only way it could have lost momentum is by losing mass. This also solves Poincaré's radiation paradox. The velocity is small, so the right-moving light is blueshifted by an amount equal to the nonrelativistic Doppler shift factor . The momentum of the light is its energy divided by , and it is increased by a factor of . So the right-moving light is carrying an extra momentum given by:
The left-moving light carries a little less momentum, by the same amount . So the total right-momentum in both light pulses is twice . This is the right-momentum that the object lost.
The momentum of the object in the moving frame after the emission is reduced to this amount:
So the change in the object's mass is equal to the total energy lost divided by c². Since any emission of energy can be carried out by a two-step process, where first the energy is emitted as light and then the light is converted to some other form of energy, any emission of energy is accompanied by a loss of mass. Similarly, by considering absorption, a gain in energy is accompanied by a gain in mass.
Radioactivity and nuclear energy
It was quickly noted after the discovery of radioactivity in 1897 that the total energy due to radioactive processes is about one million times greater than that involved in any known molecular change, raising the question of where the energy comes from. After eliminating the idea of absorption and emission of some sort of Lesagian ether particles, the existence of a huge amount of latent energy, stored within matter, was proposed by New Zealand physicist Ernest Rutherford and British radiochemist Frederick Soddy in 1903. Rutherford also suggested that this internal energy is stored within normal matter as well. He went on to speculate in 1904: "If it were ever found possible to control at will the rate of disintegration of the radio-elements, an enormous amount of energy could be obtained from a small quantity of matter."
Einstein's equation does not explain the large energies released in radioactive decay, but can be used to quantify them. The theoretical explanation for radioactive decay is given by the nuclear forces responsible for holding atoms together, though these forces were still unknown in 1905. The enormous energy released from radioactive decay had previously been measured by Rutherford and was much more easily measured than the small change in the gross mass of materials as a result. Einstein's equation, by theory, can give these energies by measuring mass differences before and after reactions, but in practice, these mass differences in 1905 were still too small to be measured in bulk. Prior to this, the ease of measuring radioactive decay energies with a calorimeter was thought likely to allow measurement of changes in mass difference, as a check on Einstein's equation itself. Einstein mentions in his 1905 paper that mass–energy equivalence might perhaps be tested with radioactive decay, which was known by then to release enough energy to possibly be "weighed," when missing from the system. However, radioactivity seemed to proceed at its own unalterable pace, and even when simple nuclear reactions became possible using proton bombardment, the idea that these great amounts of usable energy could be liberated at will with any practicality proved difficult to substantiate. Rutherford was reported in 1933 to have declared that this energy could not be exploited efficiently: "Anyone who expects a source of power from the transformation of the atom is talking moonshine."
This outlook changed dramatically in 1932 with the discovery of the neutron and its mass, allowing mass differences for single nuclides and their reactions to be calculated directly, and compared with the sum of masses for the particles that made up their composition. In 1933, the energy released from the reaction of lithium-7 plus protons giving rise to two alpha particles, allowed Einstein's equation to be tested to an error of ±0.5%. However, scientists still did not see such reactions as a practical source of power, due to the energy cost of accelerating reaction particles. After the very public demonstration of huge energies released from nuclear fission after the atomic bombings of Hiroshima and Nagasaki in 1945, the equation became directly linked in the public eye with the power and peril of nuclear weapons. The equation was featured on page 2 of the Smyth Report, the official 1945 release by the US government on the development of the atomic bomb, and by 1946 the equation was linked closely enough with Einstein's work that the cover of Time magazine prominently featured a picture of Einstein next to an image of a mushroom cloud emblazoned with the equation. Einstein himself had only a minor role in the Manhattan Project: he had cosigned a letter to the U.S. president in 1939 urging funding for research into atomic energy, warning that an atomic bomb was theoretically possible. The letter persuaded Roosevelt to devote a significant portion of the wartime budget to atomic research. Without a security clearance, Einstein's only scientific contribution was an analysis of an isotope separation method in theoretical terms. It was inconsequential, on account of Einstein not being given sufficient information to fully work on the problem.
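The 1933 test mentioned above can be illustrated with modern (approximate) atomic masses: the Q-value of lithium-7 plus a proton giving two alpha particles comes out near 17 MeV:

```python
# Q-value of the reaction Li-7 + p -> 2 alpha, using approximate modern atomic masses
u_to_MeV = 931.494
m_Li7 = 7.016004    # u
m_H1  = 1.007825    # u (atomic masses; the electron masses cancel in this reaction)
m_He4 = 4.002602    # u

Q = (m_Li7 + m_H1 - 2 * m_He4) * u_to_MeV
print(Q)            # ~17.3 MeV released per reaction
```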
While E = mc² is useful for understanding the amount of energy potentially released in a fission reaction, it was not strictly necessary to develop the weapon, once the fission process was known, and its energy measured at 200 MeV (which was directly possible, using a quantitative Geiger counter, at that time). The physicist and Manhattan Project participant Robert Serber noted that somehow "the popular notion took hold long ago that Einstein's theory of relativity, in particular his equation E = mc², plays some essential role in the theory of fission. Einstein had a part in alerting the United States government to the possibility of building an atomic bomb, but his theory of relativity is not required in discussing fission. The theory of fission is what physicists call a non-relativistic theory, meaning that relativistic effects are too small to affect the dynamics of the fission process significantly." There are other views on the equation's importance to nuclear reactions. In late 1938, the Austrian-Swedish and British physicists Lise Meitner and Otto Robert Frisch—while on a winter walk during which they solved the meaning of Hahn's experimental results and introduced the idea that would be called atomic fission—directly used Einstein's equation to help them understand the quantitative energetics of the reaction that overcame the "surface tension-like" forces that hold the nucleus together, and allowed the fission fragments to separate to a configuration from which their charges could force them into an energetic fission. To do this, they used packing fraction, or nuclear binding energy values for elements. These, together with use of E = mc², allowed them to realize on the spot that the basic fission process was energetically possible.
Einstein's equation as written by Einstein
According to the Einstein Papers Project at the California Institute of Technology and Hebrew University of Jerusalem, there remain only four known copies of this equation as written by Einstein. One of these is a letter written in German to Ludwik Silberstein, which was in Silberstein's archives and was sold at auction for $1.2 million, RR Auction of Boston, Massachusetts, said on May 21, 2021.
See also
Notes
References
External links
Einstein on the Inertia of Energy – MathPages
Einstein on film, explaining mass–energy equivalence
Mass and Energy – Conversations About Science with Theoretical Physicist Matt Strassler
The Equivalence of Mass and Energy – Entry in the Stanford Encyclopedia of Philosophy
1905 introductions
1905 in science
1905 in Germany
Albert Einstein
Energy (physics)
Equations
Mass
Special relativity
Force
A force is an influence that can cause an object to change its velocity unless counterbalanced by other forces. The concept of force makes the everyday notion of pushing or pulling mathematically precise. Because the magnitude and direction of a force are both important, force is a vector quantity. The SI unit of force is the newton (N), and force is often represented by the symbol $\vec{F}$.
Force plays an important role in classical mechanics. The concept of force is central to all three of Newton's laws of motion. Types of forces often encountered in classical mechanics include elastic, frictional, contact or "normal" forces, and gravitational. The rotational version of force is torque, which produces changes in the rotational speed of an object. In an extended body, each part often applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. In equilibrium these stresses cause no acceleration of the body as the forces balance one another. If these are not in equilibrium they can cause deformation of solid materials, or flow in fluids.
In modern physics, which includes relativity and quantum mechanics, the laws governing motion are revised to rely on fundamental interactions as the ultimate origin of force. However, the understanding of force provided by classical mechanics is useful for practical purposes.
Development of the concept
Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part, this was due to an incomplete understanding of the sometimes non-obvious force of friction and a consequently inadequate view of the nature of natural motion. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Newton formulated laws of motion that were not improved for over two hundred years.
By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known: in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction.
Pre-Newtonian concepts
Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes who was especially famous for formulating a treatment of buoyant forces inherent in fluids.
Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place when on the ground, and that they stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force. This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. An archer causes the arrow to move at the start of the flight, and it then sails through the air even though no discernible efficient cause acts upon it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation requires a continuous medium such as air to sustain the motion.
Though Aristotelian physics was criticized as early as the 6th century, its shortcomings would not be corrected until the 17th century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction. Galileo's idea that force is needed to change motion rather than to sustain it, further improved upon by Isaac Beeckman, René Descartes, and Pierre Gassendi, became a key principle of Newtonian physics.
In the early 17th century, before Newton's Principia, the term "force" was applied to many physical and non-physical phenomena, e.g., for an acceleration of a point. The product of a point mass and the square of its velocity was named vis viva (live force) by Leibniz. The modern concept of force corresponds to Newton's vis motrix (accelerating force).
Newtonian mechanics
Sir Isaac Newton described the motion of all objects using the concepts of inertia and force. In 1687, Newton published his magnum opus, Philosophiæ Naturalis Principia Mathematica. In this work Newton set out three laws of motion that have dominated the way forces are described in physics to this day. The precise ways in which Newton's laws are expressed have evolved in step with new mathematical approaches.
First law
Newton's first law of motion states that the natural behavior of an object at rest is to continue being at rest, and the natural behavior of an object moving at constant speed in a straight line is to continue moving at that constant speed along that straight line. The latter follows from the former because of the principle that the laws of physics are the same for all inertial observers, i.e., all observers who do not feel themselves to be in motion. An observer moving in tandem with an object will see it as being at rest. So, its natural behavior will be to remain at rest with respect to that observer, which means that an observer who sees it moving at constant speed in a straight line will see it continuing to do so.
Second law
According to the first law, motion at constant speed in a straight line does not need a cause. It is change in motion that requires a cause, and Newton's second law gives the quantitative relationship between force and change of motion.
Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object.
A modern statement of Newton's second law is a vector equation:
$$\vec{F} = \frac{d\vec{p}}{dt},$$
where $\vec{p}$ is the momentum of the system, and $\vec{F}$ is the net (vector sum) force. If a body is in equilibrium, there is zero net force by definition (balanced forces may be present nevertheless). In contrast, the second law states that if there is an unbalanced force acting on an object it will result in the object's momentum changing over time.
In common engineering applications the mass in a system remains constant, allowing a simple algebraic form for the second law. By the definition of momentum,
$$\vec{p} = m\vec{v},$$
where $m$ is the mass and $\vec{v}$ is the velocity. If Newton's second law is applied to a system of constant mass, $m$ may be moved outside the derivative operator. The equation then becomes
$$\vec{F} = m\frac{d\vec{v}}{dt}.$$
By substituting the definition of acceleration, the algebraic version of Newton's second law is derived:
$$\vec{F} = m\vec{a}.$$
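As an illustration of the constant-mass form above, the short Python sketch below uses made-up values for mass, acceleration, and velocity (none taken from the source) to compute the net force and to check it against the momentum-derivative form with a small finite difference.

```python
# Minimal sketch: Newton's second law for constant mass, F = m*a,
# checked against the momentum form F = dp/dt with a finite difference.
m = 2.0          # mass in kilograms (assumed value)
a = 3.0          # acceleration in m/s^2 (assumed value)

F_algebraic = m * a                      # F = m a

dt = 1e-6                                # small time step in seconds
v0 = 5.0                                 # velocity at time t (assumed value)
v1 = v0 + a * dt                         # velocity at time t + dt
F_momentum = (m * v1 - m * v0) / dt      # F ≈ dp/dt

print(F_algebraic)   # 6.0 N
print(F_momentum)    # ≈ 6.0 N
```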
Third law
Whenever one body exerts a force on another, the latter simultaneously exerts an equal and opposite force on the first. In vector form, if $\vec{F}_{1,2}$ is the force of body 1 on body 2 and $\vec{F}_{2,1}$ that of body 2 on body 1, then
$$\vec{F}_{1,2} = -\vec{F}_{2,1}.$$
This law is sometimes referred to as the action-reaction law, with $\vec{F}_{1,2}$ called the action and $\vec{F}_{2,1}$ the reaction.
Newton's Third Law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are interactions between different bodies, and thus that there is no such thing as a unidirectional force or a force that acts on only one body.
In a system composed of object 1 and object 2, the net force on the system due to their mutual interactions is zero:
$$\vec{F}_{1,2} + \vec{F}_{2,1} = 0.$$
More generally, in a closed system of particles, all internal forces are balanced. The particles may accelerate with respect to each other but the center of mass of the system will not accelerate. If an external force acts on the system, it will make the center of mass accelerate in proportion to the magnitude of the external force divided by the mass of the system.
Combining Newton's Second and Third Laws, it is possible to show that the linear momentum of a system is conserved in any closed system. In a system of two particles, if $\vec{p}_1$ is the momentum of object 1 and $\vec{p}_2$ the momentum of object 2, then
$$\frac{d\vec{p}_1}{dt} + \frac{d\vec{p}_2}{dt} = \vec{F}_{2,1} + \vec{F}_{1,2} = 0.$$
Using similar arguments, this can be generalized to a system with an arbitrary number of particles. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained.
Defining "force"
Some textbooks use Newton's second law as a definition of force. However, for the equation $\vec{F} = m\vec{a}$ for a constant mass to then have any predictive content, it must be combined with further information. Moreover, inferring that a force is present because a body is accelerating is only valid in an inertial frame of reference. The question of which aspects of Newton's laws to take as definitions and which to regard as holding physical content has been answered in various ways, which ultimately do not affect how the theory is used in practice. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach and Walter Noll.
Combining forces
Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If both of these pieces of information are not known for each force, the situation is ambiguous.
Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction. When two forces act on a point particle, the resulting force, the resultant (also called the net force), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram gives an equivalent resultant vector that is equal in magnitude and direction to the diagonal of the parallelogram. The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action.
Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the net force.
As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions. This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right angles to the other two.
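As a concrete illustration of the parallelogram rule and of resolving forces into orthogonal components, the following Python sketch (using numpy, with invented force values) adds two forces and then reads the resultant back off as east and north components.

```python
import numpy as np

# Two hypothetical forces acting on the same point, in newtons,
# expressed in (east, north) components.
F1 = np.array([3.0, 0.0])    # 3 N pointing east (assumed)
F2 = np.array([0.0, 4.0])    # 4 N pointing north (assumed)

# Parallelogram rule: the resultant is the vector sum.
resultant = F1 + F2
magnitude = np.linalg.norm(resultant)                            # 5.0 N
direction = np.degrees(np.arctan2(resultant[1], resultant[0]))   # ~53.13 degrees north of east

# Because the basis vectors are orthogonal, resolving the resultant
# into components simply reads off its entries.
east_component, north_component = resultant

print(magnitude, direction, east_component, north_component)
```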
Equilibrium
When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium. Hence, equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque be zero. A body is in static equilibrium with respect to a frame of reference if it is at rest and not accelerating, whereas a body in dynamic equilibrium is moving at a constant speed in a straight line, i.e., moving but not accelerating. What one observer sees as static equilibrium, another can see as dynamic equilibrium and vice versa.
Static
Static equilibrium was understood well before the invention of classical mechanics. Objects that are not accelerating have zero net force acting on them.
The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, a force is applied by the surface that resists the downward force with equal upward force (called a normal force). The situation produces zero net force and hence no acceleration.
Pushing against an object that rests on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the table surface. For a situation with no movement, the static friction force exactly balances the applied force resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object.
A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by a force applied by the "spring reaction force", which equals the object's weight. Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his Three Laws of Motion.
Dynamic
Dynamic equilibrium was first described by Galileo who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" did not exist. Galileo concluded that motion in a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest were correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. When this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity.
Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. When kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.
Examples of forces in classical mechanics
Some forces are consequences of the fundamental ones. In such situations, idealized models can be used to gain physical insight. For example, each solid object is considered a rigid body.
Gravitational force or Gravity
What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. Today, this acceleration due to gravity towards the surface of the Earth is usually designated as $\vec{g}$ and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of $m$ will experience a force:
$$\vec{F} = m\vec{g}.$$
For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances his weight that is directed downward.
Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's laws of planetary motion.
Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body. Combining these ideas gives a formula that relates the mass $m_\oplus$ and the radius $R_\oplus$ of the Earth to the gravitational acceleration:
$$\vec{g} = -\frac{G m_\oplus}{R_\oplus^2}\,\hat{r},$$
where the vector direction is given by $\hat{r}$, the unit vector directed outward from the center of the Earth.
In this equation, a dimensional constant $G$ is used to describe the relative strength of gravity. This constant has come to be known as the Newtonian constant of gravitation, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of $G$ using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth since knowing $G$ could allow one to solve for the Earth's mass given the above equation. Newton realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's law of gravitation states that the force on a spherical object of mass $m_1$ due to the gravitational pull of mass $m_2$ is
$$\vec{F} = -\frac{G m_1 m_2}{r^2}\,\hat{r},$$
where $r$ is the distance between the two objects' centers of mass and $\hat{r}$ is the unit vector pointed in the direction away from the center of the first object toward the center of the second object.
This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the solar system until the 20th century. During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed.
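As a rough numerical illustration of the inverse-square law above, the Python sketch below evaluates the gravitational attraction between the Earth and the Moon using commonly quoted approximate values for the masses and separation; these numbers are illustrative assumptions, not taken from the source.

```python
# Minimal sketch: magnitude of Newton's gravitational force, F = G*m1*m2 / r**2,
# evaluated for approximate Earth-Moon values.
G = 6.674e-11          # gravitational constant, N*m^2/kg^2 (approximate)
m_earth = 5.97e24      # kg (approximate)
m_moon = 7.35e22       # kg (approximate)
r = 3.84e8             # mean Earth-Moon distance in meters (approximate)

F = G * m_earth * m_moon / r**2
print(f"{F:.3e} N")    # on the order of 2e20 N
```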
Electromagnetic
The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges. The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement.
Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force. Thus the electric field anywhere in space is defined as
$$\vec{E} = \frac{\vec{F}}{q},$$
where $q$ is the magnitude of the hypothetical test charge. Similarly, the idea of the magnetic field was introduced to express how magnets can influence one another at a distance. The Lorentz force law gives the force upon a body with charge $q$ due to electric and magnetic fields:
$$\vec{F} = q\left(\vec{E} + \vec{v} \times \vec{B}\right),$$
where $\vec{F}$ is the electromagnetic force, $\vec{E}$ is the electric field at the body's location, $\vec{B}$ is the magnetic field, and $\vec{v}$ is the velocity of the particle. The magnetic contribution to the Lorentz force is the cross product of the velocity vector with the magnetic field.
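A brief numerical illustration of the Lorentz force law stated above; the charge, field, and velocity values in this Python sketch are made up for the example.

```python
import numpy as np

# Minimal sketch of the Lorentz force F = q*(E + v x B) with assumed values.
q = 1.6e-19                      # charge in coulombs (e.g. a proton)
E = np.array([0.0, 0.0, 1.0e3])  # electric field in V/m (assumed)
B = np.array([0.0, 0.5, 0.0])    # magnetic field in tesla (assumed)
v = np.array([2.0e5, 0.0, 0.0])  # velocity in m/s (assumed)

F = q * (E + np.cross(v, B))
print(F)   # force vector in newtons; the magnetic part is q*(v x B)
```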
The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Josiah Willard Gibbs. These "Maxwell's equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum.
Normal
When objects are in contact, the force directly between them is called the normal force, the component of the total force in the system exerted normal to the interface between the objects. The normal force is closely related to Newton's third law. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. An example of the normal force in action is the impact force on an object crashing into an immobile surface.
Friction
Friction is a force that opposes relative motion of two bodies. At the macroscopic scale, the frictional force is directly related to the normal force at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction.
The static friction force will exactly oppose forces applied to an object parallel to a surface up to the limit specified by the coefficient of static friction multiplied by the normal force. In other words, the magnitude of the static friction force satisfies the inequality:
$$0 \le F_{\mathrm{sf}} \le \mu_{\mathrm{sf}} F_{\mathrm{N}}.$$
The kinetic friction force is typically independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals:
$$F_{\mathrm{kf}} = \mu_{\mathrm{kf}} F_{\mathrm{N}},$$
where $\mu_{\mathrm{kf}}$ is the coefficient of kinetic friction. The coefficient of kinetic friction is normally less than the coefficient of static friction.
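The short Python function below sketches this two-regime friction model; the coefficients, normal force, and applied forces are illustrative assumptions, not values from the source.

```python
def friction_force(applied_force, normal_force, mu_static, mu_kinetic):
    """Return the friction force opposing a force applied parallel to a surface.

    If the applied force stays within the static limit mu_static * N, static
    friction cancels it exactly; otherwise the object slides and kinetic
    friction of magnitude mu_kinetic * N opposes the motion.
    """
    static_limit = mu_static * normal_force
    if abs(applied_force) <= static_limit:
        return -applied_force                      # object stays put
    return -mu_kinetic * normal_force * (1 if applied_force > 0 else -1)

# Example with assumed values: a 10 kg block (N ≈ 98.1 N), mu_s = 0.6, mu_k = 0.4.
print(friction_force(30.0, 98.1, 0.6, 0.4))   # -30.0 N: static friction balances the push
print(friction_force(80.0, 98.1, 0.6, 0.4))   # -39.24 N: block slides, kinetic friction acts
```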
Tension
Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and do not stretch. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action–reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a configuration that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. Such machines allow a mechanical advantage for a corresponding increase in the length of displaced string needed to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine.
Spring
A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If $\Delta x$ is the displacement, the force exerted by an ideal spring equals:
$$\vec{F} = -k\,\Delta\vec{x},$$
where $k$ is the spring constant (or force constant), which is particular to the spring. The minus sign accounts for the tendency of the force to act in opposition to the applied load.
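A one-line numerical check of Hooke's law as stated above, with an assumed spring constant and extension:

```python
# Minimal sketch of Hooke's law, F = -k * dx, with assumed values.
k = 250.0        # spring constant in N/m (assumed)
dx = 0.04        # extension in meters (assumed)
F = -k * dx
print(F)         # -10.0 N: the force pulls back against the 4 cm extension
```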
Centripetal
For an object in uniform circular motion, the net force acting on the object equals:
$$\vec{F} = -\frac{m v^2}{r}\,\hat{r},$$
where $m$ is the mass of the object, $v$ is the velocity of the object and $r$ is the distance to the center of the circular path and $\hat{r}$ is the unit vector pointing in the radial direction outwards from the center. This means that the net force felt by the object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. More generally, the net force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction.
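As a numerical illustration of the centripetal force expression above, the sketch below uses assumed values for a car rounding a curve.

```python
# Minimal sketch: magnitude of the centripetal force, F = m * v**2 / r,
# for a 1200 kg car taking a 50 m radius curve at 15 m/s (assumed values).
m = 1200.0   # kg
v = 15.0     # m/s
r = 50.0     # m

F = m * v**2 / r
print(F)     # 5400.0 N, directed toward the center of the curve
```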
Continuum mechanics
Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. In real life, matter has extended structure and forces that act on one part of an object might affect other parts of an object. For situations where the lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows:
$$\frac{\vec{F}}{V} = -\vec{\nabla} P,$$
where $V$ is the volume of the object in the fluid and $P$ is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.
A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction:
$$\vec{F}_{\mathrm{d}} = -b\vec{v},$$
where:
$b$ is a constant that depends on the properties of the fluid and the dimensions of the object (usually the cross-sectional area), and
$\vec{v}$ is the velocity of the object.
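Following the Stokes' drag relation above, the Python sketch below evaluates the drag force and the terminal speed at which drag balances gravity; all the values are assumptions chosen for illustration.

```python
# Minimal sketch: Stokes' drag F = -b*v, and the terminal speed at which
# the drag magnitude balances the weight (m*g = b*v_t).
b = 1.6e-4      # drag constant in kg/s (assumed, e.g. a small droplet in air)
m = 5.0e-6      # mass in kg (assumed)
g = 9.81        # m/s^2

v = 0.2                      # current speed in m/s (assumed)
drag = -b * v                # drag force opposing the motion, in newtons

v_terminal = m * g / b       # speed where drag magnitude equals weight
print(drag, v_terminal)      # -3.2e-05 N, ~0.31 m/s
```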
More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as
$$\sigma = \frac{F}{A},$$
where $A$ is the relevant cross-sectional area for the volume for which the stress tensor is being calculated. This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations) including also tensile stresses and compressions.
Fictitious
There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. These forces are considered fictitious because they do not exist in frames of reference that are not accelerating. Because these forces are not genuine they are also referred to as "pseudo forces".
In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry.
Concepts derived from force
Rotation and torque
Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force is defined relative to an arbitrary reference point as the cross product:
$$\vec{\tau} = \vec{r} \times \vec{F},$$
where $\vec{r}$ is the position vector of the force application point relative to the reference point.
Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's first law of motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's second law of motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body:
$$\vec{\tau} = I\vec{\alpha},$$
where
$I$ is the moment of inertia of the body
$\vec{\alpha}$ is the angular acceleration of the body.
This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation.
Equivalently, the differential form of Newton's Second Law provides an alternative definition of torque:
$$\vec{\tau} = \frac{d\vec{L}}{dt},$$
where $\vec{L}$ is the angular momentum of the particle.
Newton's Third Law of Motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques.
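To make the cross-product definition of torque concrete, the following numpy sketch computes a torque about the origin; the lever-arm and force vectors are assumed values.

```python
import numpy as np

# Minimal sketch: torque about the origin, tau = r x F, with assumed vectors.
r = np.array([0.5, 0.0, 0.0])    # lever arm in meters (assumed)
F = np.array([0.0, 20.0, 0.0])   # applied force in newtons (assumed)

tau = np.cross(r, F)
print(tau)    # [ 0.  0. 10.] N*m, pointing along +z (out of the r-F plane)
```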
Yank
The yank is defined as the rate of change of force:
$$\vec{Y} = \frac{d\vec{F}}{dt}.$$
The term is used in biomechanical analysis, athletic assessment and robotic control. The second ("tug"), third ("snatch"), fourth ("shake"), and higher derivatives are rarely used.
Kinematic integrals
Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse:
$$\vec{J} = \int_{t_1}^{t_2} \vec{F}\,dt,$$
which by Newton's Second Law must be equivalent to the change in momentum (yielding the impulse-momentum theorem).
Similarly, integrating with respect to position gives a definition for the work done by a force:
$$W = \int_{\vec{x}_1}^{\vec{x}_2} \vec{F} \cdot d\vec{x},$$
which is equivalent to changes in kinetic energy (yielding the work-energy theorem).
Power P is the rate of change dW/dt of the work W, as the trajectory is extended by a position change $d\vec{x}$ in a time interval dt:
$$dW = \vec{F} \cdot d\vec{x},$$
so
$$P = \frac{dW}{dt} = \vec{F} \cdot \vec{v},$$
with $\vec{v} = \frac{d\vec{x}}{dt}$ the velocity.
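A small numerical illustration of these kinematic integrals; the constant force, mass, and time span below are invented for the example, and the integrals are approximated with simple discrete sums.

```python
# Minimal sketch: impulse and work as discrete sums for an assumed constant
# force accelerating a body from rest over two seconds.
F = 10.0          # force in newtons along the direction of motion (assumed)
dt = 0.01         # time step in seconds
steps = 200       # 2 seconds total

m = 2.0           # mass in kg (assumed)
v = 0.0           # starting from rest
impulse = 0.0
work = 0.0
for _ in range(steps):
    impulse += F * dt                       # J = integral of F dt
    dx = v * dt + 0.5 * (F / m) * dt**2     # displacement during this step
    work += F * dx                          # W = integral of F . dx
    v += (F / m) * dt

print(impulse, m * v)          # impulse ≈ change in momentum (both ≈ 20.0)
print(work, 0.5 * m * v**2)    # work ≈ change in kinetic energy (both ≈ 100.0)
```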
Potential energy
Instead of a force, often the mathematically related concept of a potential energy field is used. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field $U(\vec{r})$ is defined as that field whose gradient is equal and opposite to the force produced at every point:
$$\vec{F} = -\vec{\nabla} U.$$
Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.
Conservation
A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.
Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that are dependent on a position often given as a radial vector emanating from spherically symmetric potentials. Examples of this follow:
For gravity:
$$\vec{F}_{\mathrm{g}} = -\frac{G m_1 m_2}{r^2}\,\hat{r},$$
where $G$ is the gravitational constant, and $m_n$ is the mass of object n.
For electrostatic forces:
$$\vec{F}_{\mathrm{e}} = \frac{q_1 q_2}{4\pi\varepsilon_0 r^2}\,\hat{r},$$
where $\varepsilon_0$ is the electric permittivity of free space, and $q_n$ is the electric charge of object n.
For spring forces:
$$\vec{F} = -k\,\Delta\vec{x},$$
where $k$ is the spring constant.
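As a sanity check on the relation between a conservative force and its potential, the Python sketch below differentiates the gravitational potential energy $U(r) = -G m_1 m_2 / r$ numerically and compares the result with the gravitational force law listed above; the masses and distance are assumed values for illustration.

```python
# Minimal sketch: verify F = -dU/dr for gravity, with U(r) = -G*m1*m2/r.
G = 6.674e-11
m1, m2 = 5.97e24, 1000.0      # Earth and a 1000 kg satellite (assumed)
r = 7.0e6                     # orbital radius in meters (assumed)

def U(r):
    return -G * m1 * m2 / r

h = 1.0                                                # small step in meters
F_from_potential = -(U(r + h) - U(r - h)) / (2 * h)    # central difference of -dU/dr
F_from_law = -G * m1 * m2 / r**2                       # radial force component

print(F_from_potential, F_from_law)    # both ≈ -8.1e3 N (attractive, pointing inward)
```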
For certain physical scenarios, it is impossible to model forces as being due to a simple gradient of potentials. This is often due to a macroscopic statistical average of microstates. For example, static friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. For any sufficiently detailed description, all these forces are the results of conservative ones since each of these macroscopic forces are the net results of the gradients of microscopic potentials.
The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.
Units
The SI unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg·m·s−2. The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g·cm·s−2. A newton is thus equal to 100,000 dynes.
The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s−2. The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force. An alternative unit of force in a different foot–pound–second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared.
The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond) is the force exerted by standard gravity on one kilogram of mass. The kilogram-force leads to an alternate, but rarely used unit of mass: the metric slug (sometimes mug or hyl) is that mass that accelerates at 1 m·s−2 when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system and is generally deprecated, though it is sometimes used for expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings and engine output torque.
See also Ton-force.
Revisions of the force concept
At the beginning of the 20th century, new physical ideas emerged to explain experimental results in astronomical and submicroscopic realms. As discussed below, relativity alters the definition of momentum and quantum mechanics reuses the concept of "force" in microscopic contexts where Newton's laws do not apply directly.
Special theory of relativity
In the special theory of relativity, mass and energy are equivalent (as can be seen by calculating the work required to accelerate an object). When an object's velocity increases, so does its energy and hence its mass equivalent (inertia). It thus requires more force to accelerate it the same amount than it did at a lower velocity. Newton's Second Law,
$$\vec{F} = \frac{d\vec{p}}{dt},$$
remains valid because it is a mathematical definition. But for momentum to be conserved at relativistic relative velocity $v$, momentum must be redefined as:
$$\vec{p} = \gamma m_0 \vec{v},$$
where $m_0$ is the rest mass and $c$ the speed of light.
The expression relating force and acceleration for a particle with constant non-zero rest mass $m$ moving in the $x$ direction at velocity $v$ is:
$$\vec{F} = \left(\gamma^3 m a_x,\ \gamma m a_y,\ \gamma m a_z\right),$$
where
$$\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$$
is called the Lorentz factor. The Lorentz factor increases steeply as the relative velocity approaches the speed of light. Consequently, greater and greater force must be applied to produce the same acceleration at extreme velocity. The relative velocity cannot reach $c$.
If $v$ is very small compared to $c$, then $\gamma$ is very close to 1 and
$$\vec{F} = m\vec{a}$$
is a close approximation. Even for use in relativity, one can restore the form of
$$F^\mu = m A^\mu$$
through the use of four-vectors. This relation is correct in relativity when $F^\mu$ is the four-force, $m$ is the invariant mass, and $A^\mu$ is the four-acceleration.
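To illustrate how the Lorentz factor grows as the speed of light is approached, the sketch below evaluates $\gamma$ and the relativistic momentum for an assumed particle (an electron) at several speeds.

```python
import math

# Minimal sketch: Lorentz factor and relativistic momentum p = gamma * m0 * v,
# evaluated for an assumed particle (electron rest mass) at several speeds.
c = 2.998e8          # speed of light in m/s
m0 = 9.11e-31        # electron rest mass in kg (assumed particle)

for fraction in (0.1, 0.9, 0.99):
    v = fraction * c
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    p = gamma * m0 * v                     # relativistic momentum
    print(f"v = {fraction:.2f}c  gamma = {gamma:6.3f}  p = {p:.3e} kg*m/s")
```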
The general theory of relativity incorporates a more radical departure from the Newtonian way of thinking about force, specifically gravitational force. This reimagining of the nature of gravity is described more fully below.
Quantum mechanics
Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence.
In quantum mechanics, interactions are typically described in terms of energy rather than force. The Ehrenfest theorem provides a connection between quantum expectation values and the classical concept of force, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law, with a force defined as the negative derivative of the potential energy. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance.
Quantum mechanics also introduces two new constraints that interact with forces at the submicroscopic scale and which are especially important for atoms. Despite the strong attraction of the nucleus, the uncertainty principle limits the minimum extent of an electron probability distribution and the Pauli exclusion principle prevents electrons from sharing the same probability distribution. This gives rise to an emergent pressure known as degeneracy pressure. The dynamic equilibrium between the degeneracy pressure and the attractive electromagnetic force gives atoms, molecules, liquids, and solids stability.
Quantum field theory
In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be "fundamental interactions".
While sophisticated mathematical descriptions are needed to predict, in full detail, the result of such interactions, there is a conceptually simple way to describe them through the use of Feynman diagrams. In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time, which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex. The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules. For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and antineutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force.
Fundamental interactions
All of the known forces of the universe are classified into four fundamental interactions. The strong and the weak forces act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions operating within quantum mechanics, including the constraints introduced by the Schrödinger equation and the Pauli exclusion principle. For example, friction is a manifestation of the electromagnetic force acting between atoms of two surfaces. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference.
The fundamental theories for forces developed from the unification of different ideas. For example, Newton's universal theory of gravitation showed that the force responsible for objects falling near the surface of the Earth is also the force responsible for the falling of celestial bodies about the Earth (the Moon) and around the Sun (the planets). Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through a theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons. This Standard Model of particle physics assumes a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory, which was subsequently confirmed by observation.
Gravitational
Newton's law of gravitation is an example of action at a distance: one body, like the Sun, exerts an influence upon any other body, like the Earth, no matter how far apart they are. Moreover, this action at a distance is instantaneous. According to Newton's theory, the one body shifting position changes the gravitational pulls felt by all other bodies, all at the same instant of time. Albert Einstein recognized that this was inconsistent with special relativity and its prediction that influences cannot travel faster than the speed of light. So, he sought a new theory of gravitation that would be relativistically consistent. Mercury's orbit did not match that predicted by Newton's law of gravitation. Some astrophysicists predicted the existence of an undiscovered planet (Vulcan) that could explain the discrepancies. When Einstein formulated his theory of general relativity (GR) he focused on Mercury's problematic orbit and found that his theory added a correction, which could account for the discrepancy. This was the first time that Newton's theory of gravity had been shown to be inexact.
Since then, general relativity has been acknowledged as the theory that best explains gravity. In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved spacetime – defined as the shortest spacetime path between two spacetime events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of spacetime can be observed and the force is inferred from the object's curved path. Thus, the straight line path in spacetime is seen as a curved line in space, and it is called the ballistic trajectory of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its spacetime trajectory is almost a straight line, slightly curved (with the radius of curvature of the order of few light-years). The time derivative of the changing momentum of the object is what we label as "gravitational force".
Electromagnetic
Maxwell's equations and the set of techniques built around them adequately describe a wide range of physics involving force in electricity and magnetism. This classical theory already includes relativity effects. Understanding quantized electromagnetic interactions between elementary particles requires quantum electrodynamics (or QED). In QED, photons are fundamental exchange particles, describing all interactions relating to electromagnetism including the electromagnetic force.
Strong nuclear
There are two "nuclear forces", which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force is the force responsible for the structural integrity of atomic nuclei, and gains its name from its ability to overpower the electromagnetic repulsion between protons.
The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The strong force only acts directly upon elementary particles. A residual of the strong force, known as the nuclear force, is observed between hadrons (notably, the nucleons in atomic nuclei). Here the strong force acts indirectly, transmitted as gluons that form part of the virtual pi and rho mesons, the classical transmitters of the nuclear force. The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.
Weak nuclear
Unique among the fundamental interactions, the weak nuclear force creates no bound states. The weak force is due to the exchange of the heavy W and Z bosons. Since the weak force is mediated by two types of bosons, it can be divided into two types of interaction or "vertices": charged current, involving the electrically charged W+ and W− bosons, and neutral current, involving electrically neutral Z0 bosons. The most familiar effect of weak interaction is beta decay (of neutrons in atomic nuclei) and the associated radioactivity. This is a type of charged-current interaction. The word "weak" derives from the fact that the field strength is some $10^{13}$ times less than that of the strong force. Still, it is stronger than gravity over short distances. A consistent electroweak theory has also been developed, which shows that electromagnetic forces and the weak force are indistinguishable at temperatures in excess of approximately $10^{15}$ kelvins. Such temperatures occurred in the plasma collisions in the early moments of the Big Bang.
See also
References
External links
Natural philosophy
Classical mechanics
Vector physical quantities
Temporal rates
Electromagnetism
In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, which are distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles. Electric forces cause an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs between charged particles in relative motion. These two forces are described in terms of electromagnetic fields. Macroscopic charged objects are described in terms of Coulomb's law for electricity and Ampère's force law for magnetism; the Lorentz force describes microscopic charged particles.
The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays several crucial roles in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators.
Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans, created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it was not until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Maxwell's equations provided a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, and predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies.
In the modern era, scientists continue to refine the theory of electromagnetism to account for the effects of modern physics, including quantum mechanics and relativity. The theoretical implications of electromagnetism, particularly the requirement that observations remain consistent when viewed from various moving frames of reference (relativistic electromagnetism) and the establishment of the speed of light based on properties of the medium of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Quantum electrodynamics (QED) modifies Maxwell's equations to be consistent with the quantized nature of matter. In QED, changes in the electromagnetic field are expressed in terms of discrete excitations, particles known as photons, the quanta of light.
History
Ancient world
Investigation into electromagnetic phenomena began about 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one another, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures).
19th century
Electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:
Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: opposite charges attract, like charges repel.
Magnetic poles (or states of polarization at individual points) attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole.
An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire. Its direction (clockwise or counter-clockwise) depends on the direction of the current in the wire.
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it; the direction of current depends on that of the movement.
In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic field strength, the oersted, is named in honor of his contributions to the field of electromagnetism.
His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's development of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy.
This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.
Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, nor if current flowed across the needle or not. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community.
An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated: "A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ..." E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning to be "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars."
A fundamental force
The electromagnetic force is the second strongest of the four known fundamental forces and has unlimited range.
All other forces (e.g., friction, contact forces), known as non-fundamental forces, are derived from the four fundamental forces. At high energy, the weak force and electromagnetic force are unified as a single interaction called the electroweak interaction.
Most of the forces involved in interactions between atoms are explained by electromagnetic forces between electrically charged atomic nuclei and electrons. The electromagnetic force is also involved in all forms of chemical phenomena.
Electromagnetism explains how materials carry momentum despite being composed of individual particles and empty space. The forces we experience when "pushing" or "pulling" ordinary material objects result from intermolecular forces between individual molecules in our bodies and in the objects.
The effective forces generated by the momentum of the electrons' movement are a necessary part of understanding atomic and intermolecular interactions. As electrons move between interacting atoms, they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behavior of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves.
Classical electrodynamics
In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments were carried out on 10 May 1752 by Thomas-François Dalibard of France, who used an iron rod instead of a kite and successfully extracted electrical sparks from a cloud.
One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation.
A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, when James Clerk Maxwell's treatise was published, which unified previous developments into a single theory, proposing that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.
One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.)
In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.)
Today, a few problems in electromagnetism remain unsolved. These include: the lack of magnetic monopoles, the Abraham–Minkowski controversy, the location in space of the electromagnetic field energy, and the mechanism by which some organisms can sense electric and magnetic fields.
Extension to nonlinear phenomena
The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Another branch of electromagnetism dealing with nonlinearity is nonlinear optics.
Quantities and units
Here is a list of common units related to electromagnetism:
ampere (electric current, SI unit)
coulomb (electric charge)
farad (capacitance)
henry (inductance)
ohm (resistance)
siemens (conductance)
tesla (magnetic flux density)
volt (electric potential)
watt (power)
weber (magnetic flux)
In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system.
Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units.
Applications
The study of electromagnetism informs the design and construction of electric circuits, magnetic circuits, and semiconductor devices.
See also
Abraham–Lorentz force
Aeromagnetic surveys
Computational electromagnetics
Double-slit experiment
Electrodynamic droplet deformation
Electromagnet
Electromagnetic induction
Electromagnetic wave equation
Electromagnetic scattering
Electromechanics
Geophysics
Introduction to electromagnetism
Magnetostatics
Magnetoquasistatic field
Optics
Relativistic electromagnetism
Wheeler–Feynman absorber theory
References
Further reading
Web sources
Textbooks
General coverage
External links
Magnetic Field Strength Converter
Electromagnetic Force – from Eric Weisstein's World of Physics
Fundamental interactions | 0.800679 | 0.998973 | 0.799856 |
Rotational energy | Rotational energy or angular kinetic energy is kinetic energy due to the rotation of an object and is part of its total kinetic energy. Looking at rotational energy separately around an object's axis of rotation, the following dependence on the object's moment of inertia is observed:
E_rotational = ½ I ω²
where ω is the angular velocity and I is the moment of inertia around the axis of rotation.
The mechanical work required for or applied during rotation is the torque times the rotation angle. The instantaneous power of an angularly accelerating body is the torque times the angular velocity. For free-floating (unattached) objects, the axis of rotation is commonly around its center of mass.
Note the close relationship between the result for rotational energy and the energy held by linear (or translational) motion:
E_translational = ½ m v²
In the rotating system, the moment of inertia, I, takes the role of the mass, m, and the angular velocity, ω, takes the role of the linear velocity, v. The rotational energy of a rolling cylinder varies from one half of the translational energy (if the cylinder is solid) to the same as the translational energy (if it is hollow).
An example is the calculation of the rotational kinetic energy of the Earth. As the Earth has a sidereal rotation period of 23.93 hours, it has an angular velocity of . The Earth has a moment of inertia, I = . Therefore, it has a rotational kinetic energy of .
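Since the numerical values in the Earth example above were lost in extraction, here is a minimal Python sketch that reproduces the order of magnitude using commonly quoted approximate figures; the period and moment of inertia below are assumptions, not values quoted from the article.

```python
import math

# Approximate figures (assumptions):
SIDEREAL_DAY_S = 23.93 * 3600           # sidereal rotation period, seconds
I_EARTH = 8.0e37                        # Earth's moment of inertia, kg m^2 (approx.)

omega = 2.0 * math.pi / SIDEREAL_DAY_S  # angular velocity, rad/s (~7.3e-5)
E_rot = 0.5 * I_EARTH * omega**2        # rotational kinetic energy, joules

print(f"omega = {omega:.3e} rad/s")
print(f"E_rot = {E_rot:.3e} J")          # on the order of 1e29 J
```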
Part of the Earth's rotational energy can also be tapped using tidal power. The friction associated with the two global tidal bulges dissipates rotational energy as heat, infinitesimally slowing down Earth's angular velocity ω. Due to the conservation of angular momentum, this process transfers angular momentum to the Moon's orbital motion, increasing its distance from Earth and its orbital period (see tidal locking for a more detailed explanation of this process).
See also
Flywheel
List of energy storage projects
Rigid rotor
Rotational spectroscopy
Notes
References
Resnick, R. and Halliday, D. (1966) PHYSICS, Section 12-5, John Wiley & Sons Inc.
Forms of energy
Rotation | 0.806606 | 0.990035 | 0.798569 |
Energy–momentum relation | In physics, the energy–momentum relation, or relativistic dispersion relation, is the relativistic equation relating total energy (which is also called relativistic energy) to invariant mass (which is also called rest mass) and momentum. It is the extension of mass–energy equivalence for bodies or systems with non-zero momentum.
It can be formulated as:
E² = (pc)² + (m₀c²)²
This equation holds for a body or system, such as one or more particles, with total energy E, invariant mass m₀, and momentum of magnitude p; the constant c is the speed of light. It assumes the special relativity case of flat spacetime and that the particles are free. Total energy is the sum of rest energy and relativistic kinetic energy:
E = m₀c² + E_kinetic
Invariant mass is mass measured in a center-of-momentum frame.
For bodies or systems with zero momentum, it simplifies to the mass–energy equation E₀ = m₀c², where total energy in this case is equal to rest energy.
The Dirac sea model, which was used to predict the existence of antimatter, is closely related to the energy–momentum relation.
Connection to E = mc2
The energy–momentum relation is consistent with the familiar mass–energy relation in both its interpretations: E = mc² relates total energy E to the (total) relativistic mass m, while E₀ = m₀c² relates rest energy E₀ to the (invariant) rest mass m₀.
Unlike either of those equations, the energy–momentum equation relates the total energy E to the rest mass m₀. All three equations hold true simultaneously.
Special cases
If the body is a massless particle, then the relation reduces to E = pc. For photons, this is the relation, discovered in 19th century classical electromagnetism, between radiant momentum (causing radiation pressure) and radiant energy.
If the body's speed v is much less than c, then the relation reduces to E ≈ m₀c² + ½m₀v²; that is, the body's total energy is simply its classical kinetic energy plus its rest energy.
If the body is at rest, i.e. in its center-of-momentum frame, we have p = 0 and E = m₀c²; thus the energy–momentum relation and both forms of the mass–energy relation (mentioned above) all become the same.
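The special cases above can be checked numerically. The following Python sketch evaluates the general relation together with its massless and low-speed limits; the particle masses and momenta are illustrative assumptions.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def total_energy(m0, p):
    """Relativistic total energy E = sqrt((p c)^2 + (m0 c^2)^2)."""
    return math.hypot(p * C, m0 * C**2)

# Massless particle: the relation reduces to E = p c.
p = 1e-27  # kg m/s, illustrative
assert math.isclose(total_energy(0.0, p), p * C)

# Slow massive body: E ~ m0 c^2 + p^2 / (2 m0), the classical limit.
m0, v = 1.0, 1000.0              # 1 kg at 1 km/s (illustrative)
E = total_energy(m0, m0 * v)
print(E - m0 * C**2)             # close to 0.5 * m0 * v^2 = 5e5 J
```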
A more general form of relation holds for general relativity.
The invariant mass (or rest mass) is an invariant for all frames of reference (hence the name), not just in inertial frames in flat spacetime, but also accelerated frames traveling through curved spacetime (see below). However the total energy of the particle and its relativistic momentum are frame-dependent; relative motion between two frames causes the observers in those frames to measure different values of the particle's energy and momentum; one frame measures and , while the other frame measures and , where and , unless there is no relative motion between observers, in which case each observer measures the same energy and momenta. Although we still have, in flat spacetime:
The quantities , , , are all related by a Lorentz transformation. The relation allows one to sidestep Lorentz transformations when determining only the magnitudes of the energy and momenta by equating the relations in the different frames. Again in flat spacetime, this translates to;
Since does not change from frame to frame, the energy–momentum relation is used in relativistic mechanics and particle physics calculations, as energy and momentum are given in a particle's rest frame (that is, and as an observer moving with the particle would conclude to be) and measured in the lab frame (i.e. and as determined by particle physicists in a lab, and not moving with the particles).
In relativistic quantum mechanics, it is the basis for constructing relativistic wave equations, since if the relativistic wave equation describing the particle is consistent with this equation – it is consistent with relativistic mechanics, and is Lorentz invariant. In relativistic quantum field theory, it is applicable to all particles and fields.
Origins and derivation of the equation
The energy–momentum relation goes back to Max Planck's article
published in 1906.
It was used by Walter Gordon in 1926 and then by Paul Dirac in 1928 under the form , where V is the amount of potential energy.
The equation can be derived in a number of ways, two of the simplest include:
From the relativistic dynamics of a massive particle,
By evaluating the norm of the four-momentum of the system. This method applies to both massive and massless particles, and can be extended to multi-particle systems with relatively little effort (see below).
Heuristic approach for massive particles
For a massive object moving at three-velocity with magnitude in the lab frame:
is the total energy of the moving object in the lab frame,
is the three dimensional relativistic momentum of the object in the lab frame with magnitude . The relativistic energy and momentum include the Lorentz factor defined by:
Some authors use relativistic mass defined by:
although rest mass has a more fundamental significance, and will be used primarily over relativistic mass in this article.
Squaring the 3-momentum gives:
then solving for and substituting into the Lorentz factor one obtains its alternative form in terms of 3-momentum and mass, rather than 3-velocity:
Inserting this form of the Lorentz factor into the energy equation gives:
followed by more rearrangement it yields. The elimination of the Lorentz factor also eliminates implicit velocity dependence of the particle in, as well as any inferences to the "relativistic mass" of a massive particle. This approach is not general as massless particles are not considered. Naively setting would mean that and and no energy–momentum relation could be derived, which is not correct.
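The displayed formulas in this derivation did not survive extraction; the following LaTeX reconstruction of the standard argument, using the symbols defined above, shows how eliminating the Lorentz factor yields the relation.

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
\mathbf{p} = \gamma m_0 \mathbf{v}, \qquad
E = \gamma m_0 c^2 ,
\\[6pt]
E^2 - (pc)^2
  = \gamma^2 m_0^2 c^4 - \gamma^2 m_0^2 v^2 c^2
  = \gamma^2 m_0^2 c^4 \left(1 - \frac{v^2}{c^2}\right)
  = \left(m_0 c^2\right)^2
\;\;\Longrightarrow\;\;
E^2 = (pc)^2 + \left(m_0 c^2\right)^2 .
```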
Norm of the four-momentum
Special relativity
In Minkowski space, energy (divided by c) and momentum are two components of a Minkowski four-vector, namely the four-momentum;
(these are the contravariant components).
The Minkowski inner product of this vector with itself gives the square of the norm of this vector, it is proportional to the square of the rest mass of the body:
a Lorentz invariant quantity, and therefore independent of the frame of reference. Using the Minkowski metric with metric signature , the inner product is
and
so
or, in natural units where c = 1,
E² − p² = m₀².
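For reference, the computation sketched in this subsection reads as follows in LaTeX (a reconstruction from the definitions above, with metric signature (+, −, −, −)):

```latex
P^\mu = \left(\frac{E}{c},\, p_x,\, p_y,\, p_z\right), \qquad
P^\mu P_\mu = \frac{E^2}{c^2} - \mathbf{p}\cdot\mathbf{p} = m_0^2 c^2
\;\;\Longrightarrow\;\;
E^2 = (pc)^2 + \left(m_0 c^2\right)^2 .
```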
General relativity
In general relativity, the 4-momentum is a four-vector defined in a local coordinate frame, although by definition the inner product is similar to that of special relativity,
in which the Minkowski metric is replaced by the metric tensor field :
solved from the Einstein field equations. Then:
Performing the summations over indices followed by collecting "time-like", "spacetime-like", and "space-like" terms gives:
where the factor of 2 arises because the metric is a symmetric tensor, and the convention of Latin indices , taking space-like values 1, 2, 3 is used. As each component of the metric has space and time dependence in general; this is significantly more complicated than the formula quoted at the beginning, see metric tensor (general relativity) for more information.
Units of energy, mass and momentum
In natural units where c = 1, the energy–momentum equation reduces to
E² = p² + m₀²
In particle physics, energy is typically given in units of electron volts (eV), momentum in units of eV·c−1, and mass in units of eV·c−2. In electromagnetism, and because of relativistic invariance, it is useful to have the electric field and the magnetic field in the same unit (gauss), using the cgs (Gaussian) system of units, where energy is given in units of erg, mass in grams (g), and momentum in g·cm·s−1.
Energy may also in theory be expressed in units of grams, though in practice it requires a large amount of energy to be equivalent to masses in this range. For example, the first atomic bomb liberated about 1 gram of heat, and the largest thermonuclear bombs have generated a kilogram or more of heat. Energies of thermonuclear bombs are usually given in tens of kilotons and megatons referring to the energy liberated by exploding that amount of trinitrotoluene (TNT).
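As a rough numerical illustration of expressing energy in units of mass, the Python sketch below converts one gram of mass into joules and into kilotons of TNT, using the conventional value of 4.184e9 J per ton of TNT; the one-gram figure is an illustrative assumption.

```python
C = 299_792_458.0          # speed of light, m/s
TNT_TON_J = 4.184e9        # energy of one ton of TNT, joules (conventional value)

m = 1e-3                   # one gram, in kilograms
E = m * C**2               # ~9.0e13 J
print(f"{E:.3e} J  =  {E / (1e3 * TNT_TON_J):.1f} kt TNT")   # roughly 21 kilotons
```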
Special cases
Centre-of-momentum frame (one particle)
For a body in its rest frame, the momentum is zero, so the equation simplifies to
E₀ = m₀c²
where m₀ is the rest mass of the body.
Massless particles
If the object is massless, as is the case for a photon, then the equation reduces to
E = pc
This is a useful simplification. It can be rewritten in other ways using the de Broglie relations:
E = hc/λ = ħck
if the wavelength λ or wavenumber k is given.
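A short Python sketch of the massless-particle relation combined with the de Broglie relations, computing a photon's energy from its wavelength; the 532 nm wavelength is an illustrative assumption.

```python
H = 6.62607015e-34       # Planck constant, J s (exact SI value)
C = 299_792_458.0        # speed of light, m/s

def photon_energy(wavelength_m):
    """E = p c = h c / wavelength for a massless particle."""
    return H * C / wavelength_m

# Green light at 532 nm (illustrative):
E = photon_energy(532e-9)
print(E, "J  ~", E / 1.602176634e-19, "eV")   # ~2.33 eV
```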
Correspondence principle
Rewriting the relation for massive particles as:
and expanding into power series by the binomial theorem (or a Taylor series):
in the limit that , we have so the momentum has the classical form , then to first order in (i.e. retain the term for and neglect all terms for ) we have
or
where the second term is the classical kinetic energy, and the first is the rest energy of the particle. This approximation is not valid for massless particles, since the expansion required the division of momentum by mass. Incidentally, there are no massless particles in classical mechanics.
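The expansion referred to above, reconstructed in LaTeX from the standard treatment (the article's own displayed formulas were lost):

```latex
E = m_0 c^2 \sqrt{1 + \left(\frac{p}{m_0 c}\right)^2}
  = m_0 c^2 + \frac{p^2}{2 m_0} - \frac{p^4}{8 m_0^3 c^2} + \cdots ,
\qquad
E \approx m_0 c^2 + \tfrac{1}{2} m_0 v^2 \quad (v \ll c,\; p \approx m_0 v).
```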
Many-particle systems
Addition of four momenta
In the case of many particles with relativistic momenta and energy , where (up to the total number of particles) simply labels the particles, as measured in a particular frame, the four-momenta in this frame can be added;
and then take the norm; to obtain the relation for a many particle system:
where is the invariant mass of the whole system, and is not equal to the sum of the rest masses of the particles unless all particles are at rest (see mass in special relativity for more detail). Substituting and rearranging gives the generalization of;
The energies and momenta in the equation are all frame-dependent, while is frame-independent.
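In practice this is how invariant masses are reconstructed in particle physics. The Python sketch below sums the four-momenta of a two-particle system and takes the norm, working in units where c = 1; the photon energies and directions are illustrative assumptions.

```python
import numpy as np

def invariant_mass(energies, momenta):
    """Invariant mass of a system, in units where c = 1.
    energies: iterable of particle energies; momenta: iterable of 3-vectors."""
    E = float(np.sum(energies))
    p = np.sum(np.asarray(momenta, float), axis=0)
    return np.sqrt(E**2 - p @ p)

# Illustrative two-photon system (energies and momenta in GeV, assumptions):
# each photon is massless, so |p| = E for both.
e1, e2 = 0.10, 0.08
p1 = np.array([0.0, 0.0, e1])                         # along +z
p2 = e2 * np.array([0.0, np.sin(0.5), np.cos(0.5)])   # opened by 0.5 rad
print(invariant_mass([e1, e2], [p1, p2]))  # nonzero, although each photon is massless
```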
Center-of-momentum frame
In the center-of-momentum frame (COM frame), by definition we have:
with the implication from that the invariant mass is also the centre of momentum (COM) mass–energy, aside from the factor:
and this is true for all frames since the invariant mass is frame-independent. The energies are those in the COM frame, not the lab frame. However, many familiar bound systems have the lab frame as COM frame, since the system itself is not in motion and so the momenta all cancel to zero. An example would be a simple object (where vibrational momenta of atoms cancel) or a container of gas where the container is at rest. In such systems, all the energies of the system are measured as mass. For example, the heat in an object on a scale, or the total kinetic energy of the molecules in a container of gas on the scale, is measured by the scale as part of the mass of the system.
Rest masses and the invariant mass
Either the energies or momenta of the particles, as measured in some frame, can be eliminated using the energy momentum relation for each particle:
allowing to be expressed in terms of the energies and rest masses, or momenta and rest masses. In a particular frame, the squares of sums can be rewritten as sums of squares (and products):
so substituting the sums, we can introduce their rest masses in:
The energies can be eliminated by:
similarly the momenta can be eliminated by:
where is the angle between the momentum vectors and .
Rearranging:
Since the invariant mass of the system and the rest masses of each particle are frame-independent, the right hand side is also an invariant (even though the energies and momenta are all measured in a particular frame).
Matter waves
Using the de Broglie relations for energy and momentum for matter waves,
where is the angular frequency and is the wavevector with magnitude , equal to the wave number, the energy–momentum relation can be expressed in terms of wave quantities:
and tidying up by dividing by throughout:
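Substituting the de Broglie relations E = ħω and p = ħk into the energy–momentum relation gives the dispersion relation alluded to here; a LaTeX reconstruction of the standard form:

```latex
(\hbar\omega)^2 = (\hbar k c)^2 + \left(m_0 c^2\right)^2
\quad\Longrightarrow\quad
\omega^2 = k^2 c^2 + \left(\frac{m_0 c^2}{\hbar}\right)^2 .
```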
This can also be derived from the magnitude of the four-wavevector
in a similar way to the four-momentum above.
Since the reduced Planck constant ħ and the speed of light c both appear and clutter this equation, this is where natural units are especially helpful. Normalizing them so that ħ = c = 1, we have:
ω² = k² + m₀²
Tachyon and exotic matter
The velocity of a bradyon, which obeys the relativistic energy–momentum relation
E² = (pc)² + (m₀c²)²
can never exceed c. On the contrary, the velocity is always greater than c for a tachyon, whose energy–momentum equation is
E² = (pc)² − (m₀c²)²
By contrast, the hypothetical exotic matter has a negative mass and the energy–momentum equation is
See also
Mass–energy equivalence
Four-momentum
Mass in special relativity
References
Momentum
Special relativity | 0.801465 | 0.995787 | 0.798089 |
Boltzmann equation | The Boltzmann equation or Boltzmann transport equation (BTE) describes the statistical behaviour of a thermodynamic system not in a state of equilibrium; it was devised by Ludwig Boltzmann in 1872.
The classic example of such a system is a fluid with temperature gradients in space causing heat to flow from hotter regions to colder ones, by the random but biased transport of the particles making up that fluid. In the modern literature the term Boltzmann equation is often used in a more general sense, referring to any kinetic equation that describes the change of a macroscopic quantity in a thermodynamic system, such as energy, charge or particle number.
The equation arises not by analyzing the individual positions and momenta of each particle in the fluid but rather by considering a probability distribution for the position and momentum of a typical particle—that is, the probability that the particle occupies a given very small region of space (mathematically the volume element ) centered at the position , and has momentum nearly equal to a given momentum vector (thus occupying a very small region of momentum space ), at an instant of time.
The Boltzmann equation can be used to determine how physical quantities such as heat energy and momentum change when a fluid is in transport. One may also derive other properties characteristic to fluids such as viscosity, thermal conductivity, and electrical conductivity (by treating the charge carriers in a material as a gas). See also convection–diffusion equation.
The equation is a nonlinear integro-differential equation, and the unknown function in the equation is a probability density function in six-dimensional space of a particle position and momentum. The problem of existence and uniqueness of solutions is still not fully resolved, but some recent results are quite promising.
Overview
The phase space and density function
The set of all possible positions r and momenta p is called the phase space of the system; in other words a set of three coordinates for each position coordinate x, y, z, and three more for each momentum component , , . The entire space is 6-dimensional: a point in this space is , and each coordinate is parameterized by time t. The small volume ("differential volume element") is written
Since the probability of molecules, which all have and within , is in question, at the heart of the equation is a quantity which gives this probability per unit phase-space volume, or probability per unit length cubed per unit momentum cubed, at an instant of time . This is a probability density function: , defined so that,
is the number of molecules which all have positions lying within a volume element about and momenta lying within a momentum space element about , at time . Integrating over a region of position space and momentum space gives the total number of particles which have positions and momenta in that region:
which is a 6-fold integral. While f is associated with a number of particles, the phase space is that of a single particle (not that of all particles, which is usually the case with deterministic many-body systems), since only one position r and one momentum p are in question. It is not part of the analysis to use r1, p1 for particle 1, r2, p2 for particle 2, etc. up to rN, pN for particle N.
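To make the counting interpretation of f concrete, the Python sketch below integrates an equilibrium (Maxwell–Boltzmann) momentum distribution over a momentum-space region and multiplies by a spatial volume to estimate a particle number; the gas parameters are illustrative assumptions, not part of the article.

```python
import numpy as np

# Assumptions: for an ideal gas in equilibrium, the one-particle distribution
# factorizes into a uniform spatial density n and a Maxwell-Boltzmann momentum
# distribution; integrating over a phase-space region then counts particles.
kB, T, m = 1.380649e-23, 300.0, 4.65e-26   # J/K, K, kg (roughly an N2 molecule)
n = 2.5e25                                  # number density, 1/m^3 (about air at STP)

def f_momentum(p):
    """Maxwell-Boltzmann momentum-magnitude density, normalized to 1."""
    a = 1.0 / (2.0 * m * kB * T)
    norm = (a / np.pi)**1.5 * 4.0 * np.pi
    return norm * p**2 * np.exp(-a * p**2)

p = np.linspace(0.0, 5e-22, 20_000)          # momentum grid, kg m/s
V = 1e-6                                     # spatial volume: 1 cm^3
N = n * V * np.trapz(f_momentum(p), p)       # particles in V with |p| below the cutoff
print(f"{N:.3e} particles")                  # close to n*V ~ 2.5e19, since the cutoff is generous
```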
It is assumed the particles in the system are identical (so each has an identical mass ). For a mixture of more than one chemical species, one distribution is needed for each, see below.
Principal statement
The general equation can then be written as
where the "force" term corresponds to the forces exerted on the particles by an external influence (not by the particles themselves), the "diff" term represents the diffusion of particles, and "coll" is the collision term – accounting for the forces acting between particles in collisions. Expressions for each term on the right side are provided below.
Note that some authors use the particle velocity instead of momentum ; they are related in the definition of momentum by .
The force and diffusion terms
Consider particles described by , each experiencing an external force not due to other particles (see the collision term for the latter treatment).
Suppose at time some number of particles all have position within element and momentum within . If a force instantly acts on each particle, then at time their position will be and momentum . Then, in the absence of collisions, must satisfy
Note that we have used the fact that the phase space volume element is constant, which can be shown using Hamilton's equations (see the discussion under Liouville's theorem). However, since collisions do occur, the particle density in the phase-space volume changes, so
where is the total change in . Dividing by and taking the limits and , we have
The total differential of is:
where is the gradient operator, is the dot product,
is a shorthand for the momentum analogue of , and , , are Cartesian unit vectors.
Final statement
Dividing by and substituting into gives:
In this context, is the force field acting on the particles in the fluid, and is the mass of the particles. The term on the right hand side is added to describe the effect of collisions between particles; if it is zero then the particles do not collide. The collisionless Boltzmann equation, where individual collisions are replaced with long-range aggregated interactions, e.g. Coulomb interactions, is often called the Vlasov equation.
This equation is more useful than the principal one above, yet still incomplete, since cannot be solved unless the collision term in is known. This term cannot be found as easily or generally as the others – it is a statistical term representing the particle collisions, and requires knowledge of the statistics the particles obey, like the Maxwell–Boltzmann, Fermi–Dirac or Bose–Einstein distributions.
The collision term (Stosszahlansatz) and molecular chaos
Two-body collision term
A key insight applied by Boltzmann was to determine the collision term resulting solely from two-body collisions between particles that are assumed to be uncorrelated prior to the collision. This assumption was referred to by Boltzmann as the "" and is also known as the "molecular chaos assumption". Under this assumption the collision term can be written as a momentum-space integral over the product of one-particle distribution functions:
where and are the momenta of any two particles (labeled as A and B for convenience) before a collision, and are the momenta after the collision,
is the magnitude of the relative momenta (see relative velocity for more on this concept), and is the differential cross section of the collision, in which the relative momenta of the colliding particles turns through an angle into the element of the solid angle , due to the collision.
Simplifications to the collision term
Since much of the challenge in solving the Boltzmann equation originates with the complex collision term, attempts have been made to "model" and simplify the collision term. The best known model equation is due to Bhatnagar, Gross and Krook. The assumption in the BGK approximation is that the effect of molecular collisions is to force a non-equilibrium distribution function at a point in physical space back to a Maxwellian equilibrium distribution function and that the rate at which this occurs is proportional to the molecular collision frequency. The Boltzmann equation is therefore modified to the BGK form:
where is the molecular collision frequency, and is the local Maxwellian distribution function given the gas temperature at this point in space. This is also called "relaxation time approximation".
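A minimal numerical sketch of the relaxation-time idea, in Python: the collision term alone drives an arbitrary distribution toward the local Maxwellian at a rate ν (the velocity grid, time step, and initial perturbation are assumptions chosen for illustration).

```python
import numpy as np

def bgk_step(f, f_eq, nu, dt):
    """One explicit Euler step of the relaxation-time (BGK) collision term:
    df/dt|_coll = nu * (f_eq - f), ignoring the streaming and force terms."""
    return f + dt * nu * (f_eq - f)

# Relax a perturbed 1-D velocity distribution toward a unit-width Maxwellian.
v = np.linspace(-5.0, 5.0, 201)
f_eq = np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)
f = f_eq * (1 + 0.2 * np.sin(v))             # non-equilibrium initial state
for _ in range(100):
    f = bgk_step(f, f_eq, nu=1.0, dt=0.05)
print(np.max(np.abs(f - f_eq)))              # decays roughly like exp(-nu * t)
```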
General equation (for a mixture)
For a mixture of chemical species labelled by indices the equation for species is
where , and the collision term is
where , the magnitude of the relative momenta is
and is the differential cross-section, as before, between particles i and j. The integration is over the momentum components in the integrand (which are labelled i and j). The sum of integrals describes the entry and exit of particles of species i in or out of the phase-space element.
Applications and extensions
Conservation equations
The Boltzmann equation can be used to derive the fluid dynamic conservation laws for mass, charge, momentum, and energy. For a fluid consisting of only one kind of particle, the number density is given by
The average value of any function is
Since the conservation equations involve tensors, the Einstein summation convention will be used where repeated indices in a product indicate summation over those indices. Thus and , where is the particle velocity vector. Define A(p) as some function of momentum only, whose total value is conserved in a collision. Assume also that the force is a function of position only, and that f is zero for p → ±∞. Multiplying the Boltzmann equation by A and integrating over momentum yields four terms, which, using integration by parts, can be expressed as
where the last term is zero, since A is conserved in a collision. The values of A correspond to moments of velocity (and momentum p, as they are linearly dependent).
Zeroth moment
Letting A = m, the mass of the particle, the integrated Boltzmann equation becomes the conservation of mass equation:
∂ρ/∂t + ∇·(ρu) = 0
where ρ is the mass density, and u is the average fluid velocity.
First moment
Letting , the momentum of the particle, the integrated Boltzmann equation becomes the conservation of momentum equation:
where is the pressure tensor (the viscous stress tensor plus the hydrostatic pressure).
Second moment
Letting , the kinetic energy of the particle, the integrated Boltzmann equation becomes the conservation of energy equation:
where is the kinetic thermal energy density, and is the heat flux vector.
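The moments used in these conservation laws can be illustrated numerically. The Python sketch below computes the zeroth, first, and second velocity moments of a discretized one-dimensional distribution; the drifting Maxwellian and its parameters are assumptions chosen for illustration.

```python
import numpy as np

# Velocity moments of a 1-D distribution f(v): the zeroth, first and second
# moments give number density, bulk velocity, and kinetic energy density.
m = 1.0                                     # particle mass (arbitrary units)
v = np.linspace(-10.0, 10.0, 2001)
f = np.exp(-(v - 1.5)**2 / 2)               # drifting Maxwellian, unnormalized

n   = np.trapz(f, v)                        # number density        (zeroth moment)
u   = np.trapz(v * f, v) / n                # bulk velocity         (first moment)
e_k = np.trapz(0.5 * m * v**2 * f, v)       # kinetic energy density (second moment)
print(n, u, e_k)                            # u comes out ~1.5, as built in
```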
Hamiltonian mechanics
In Hamiltonian mechanics, the Boltzmann equation is often written more generally as
L[f] = C[f]
where L is the Liouville operator (there is an inconsistent definition between the Liouville operator as defined here and the one in the article linked) describing the evolution of a phase space volume and C is the collision operator. The non-relativistic form of L is
L = ∂/∂t + (p/m)·∇ + F·∇_p
Quantum theory and violation of particle number conservation
It is possible to write down relativistic quantum Boltzmann equations for relativistic quantum systems in which the number of particles is not conserved in collisions. This has several applications in physical cosmology, including the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis. It is not a priori clear that the state of a quantum system can be characterized by a classical phase space density f. However, for a wide class of applications a well-defined generalization of f exists which is the solution of an effective Boltzmann equation that can be derived from first principles of quantum field theory.
General relativity and astronomy
The Boltzmann equation is of use in galactic dynamics. A galaxy, under certain assumptions, may be approximated as a continuous fluid; its mass distribution is then represented by f; in galaxies, physical collisions between the stars are very rare, and the effect of gravitational collisions can be neglected for times far longer than the age of the universe.
Its generalization in general relativity is
where is the Christoffel symbol of the second kind (this assumes there are no external forces, so that particles move along geodesics in the absence of collisions), with the important subtlety that the density is a function in mixed contravariant-covariant phase space as opposed to fully contravariant phase space.
In physical cosmology the fully covariant approach has been used to study the cosmic microwave background radiation. More generically the study of processes in the early universe often attempt to take into account the effects of quantum mechanics and general relativity. In the very dense medium formed by the primordial plasma after the Big Bang, particles are continuously created and annihilated. In such an environment quantum coherence and the spatial extension of the wavefunction can affect the dynamics, making it questionable whether the classical phase space distribution f that appears in the Boltzmann equation is suitable to describe the system. In many cases it is, however, possible to derive an effective Boltzmann equation for a generalized distribution function from first principles of quantum field theory. This includes the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis.
Solving the equation
Exact solutions to the Boltzmann equations have been proven to exist in some cases; this analytical approach provides insight, but is not generally usable in practical problems.
Instead, numerical methods (including finite elements and lattice Boltzmann methods) are generally used to find approximate solutions to the various forms of the Boltzmann equation. Example applications range from hypersonic aerodynamics in rarefied gas flows to plasma flows. An application of the Boltzmann equation in electrodynamics is the calculation of the electrical conductivity - the result is in leading order identical with the semiclassical result.
Close to local equilibrium, solution of the Boltzmann equation can be represented by an asymptotic expansion in powers of Knudsen number (the Chapman–Enskog expansion). The first two terms of this expansion give the Euler equations and the Navier–Stokes equations. The higher terms have singularities. The problem of developing mathematically the limiting processes, which lead from the atomistic view (represented by Boltzmann's equation) to the laws of motion of continua, is an important part of Hilbert's sixth problem.
Limitations and further uses of the Boltzmann equation
The Boltzmann equation is valid only under several assumptions. For instance, the particles are assumed to be pointlike, i.e. without having a finite size. There exists a generalization of the Boltzmann equation that is called the Enskog equation. The collision term is modified in Enskog equations such that particles have a finite size, for example they can be modelled as spheres having a fixed radius.
No further degrees of freedom besides translational motion are assumed for the particles. If there are internal degrees of freedom, the Boltzmann equation has to be generalized and might possess inelastic collisions.
Many real fluids like liquids or dense gases have besides the features mentioned above more complex forms of collisions, there will be not only binary, but also ternary and higher order collisions. These must be derived by using the BBGKY hierarchy.
Boltzmann-like equations are also used for the movement of cells. Since cells are composite particles that carry internal degrees of freedom, the corresponding generalized Boltzmann equations must have inelastic collision integrals. Such equations can describe invasions of cancer cells in tissue, morphogenesis, and chemotaxis-related effects.
See also
Vlasov equation
The Vlasov–Poisson equation
Fokker–Planck equation
Williams–Boltzmann equation
Derivation of Navier–Stokes equation from LBE
Derivation of Jeans equation from BE
Jeans's theorem
H-theorem
Notes
References
. Very inexpensive introduction to the modern framework (starting from a formal deduction from Liouville and the Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy (BBGKY) in which the Boltzmann equation is placed). Most statistical mechanics textbooks like Huang still treat the topic using Boltzmann's original arguments. To derive the equation, these books use a heuristic explanation that does not bring out the range of validity and the characteristic assumptions that distinguish Boltzmann's from other transport equations like Fokker–Planck or Landau equations.
External links
The Boltzmann Transport Equation by Franz Vesely
Boltzmann gaseous behaviors solved
Eponymous equations of physics
Partial differential equations
Statistical mechanics
Transport phenomena
Equation
1872 in science
1872 in Germany
Thermodynamic equations | 0.801548 | 0.995461 | 0.79791 |
Maxwell's equations | Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, electric and magnetic circuits.
The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside.
Maxwell's equations may be combined to demonstrate how fluctuations in electromagnetic fields (waves) propagate at a constant speed in vacuum, c. Known as electromagnetic radiation, these waves occur at various wavelengths to produce a spectrum of radiation from radio waves to gamma rays.
In partial differential equation form and a coherent system of units, Maxwell's microscopic equations can be written as
∇ · E = ρ/ε₀
∇ · B = 0
∇ × E = −∂B/∂t
∇ × B = μ₀(J + ε₀ ∂E/∂t)
with E the electric field, B the magnetic field, ρ the electric charge density and J the current density; ε₀ is the vacuum permittivity and μ₀ the vacuum permeability.
The equations have two major variants:
The microscopic equations have universal applicability but are unwieldy for common calculations. They relate the electric and magnetic fields to total charge and total current, including the complicated charges and currents in materials at the atomic scale.
The macroscopic equations define two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic-scale charges and quantum phenomena like spins. However, their use requires experimentally determined parameters for a phenomenological description of the electromagnetic response of materials.
The term "Maxwell's equations" is often also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic scalar potentials are preferred for explicitly solving the equations as a boundary value problem, analytical mechanics, or for use in quantum mechanics. The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest. Maxwell's equations in curved spacetime, commonly used in high-energy and gravitational physics, are compatible with general relativity. In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences.
The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation.
Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics.
History of the equations
Conceptual descriptions
Gauss's law
Gauss's law describes the relationship between an electric field and electric charges: an electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through a closed surface is proportional to the enclosed charge, including bound charge due to polarization of material. The coefficient of the proportion is the permittivity of free space.
Gauss's law for magnetism
Gauss's law for magnetism states that electric charges have no magnetic analogues, called magnetic monopoles; no north or south magnetic poles exist in isolation. Instead, the magnetic field of a material is attributed to a dipole, and the net outflow of the magnetic field through a closed surface is zero. Magnetic dipoles may be represented as loops of current or inseparable pairs of equal and opposite "magnetic charges". Precisely, the total magnetic flux through a Gaussian surface is zero, and the magnetic field is a solenoidal vector field.
Faraday's law
The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field corresponds to curl of an electric field. In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of change of the magnetic flux through the enclosed surface.
The electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field and generates an electric field in a nearby wire.
Ampère–Maxwell law
The original law of Ampère states that magnetic fields relate to electric current. Maxwell's addition states that magnetic fields also relate to changing electric fields, which Maxwell called displacement current. The integral form states that electric and displacement currents are associated with a proportional magnetic field along any enclosing curve.
Maxwell's modification of Ampère's circuital law is important because the laws of Ampère and Gauss must otherwise be adjusted for static fields. As a consequence, it predicts that a rotating magnetic field occurs with a changing electric field. A further consequence is the existence of self-sustaining electromagnetic waves which travel through empty space.
The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics.
Formulation in terms of electric and magnetic fields (microscopic or in vacuum version)
In the electric and magnetic field formulation there are four equations that determine the fields for given charge and current distribution. A separate law of nature, the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. By convention, a version of this law in the original equations by Maxwell is no longer included. The vector calculus formalism below, the work of Oliver Heaviside, has become standard. It is rotationally invariant, and therefore mathematically more transparent than Maxwell's original 20 equations in x, y and z components. The relativistic formulations are more symmetric and Lorentz invariant. For the same equations expressed using tensor calculus or differential forms (see ).
The differential and integral formulations are mathematically equivalent; both are useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis.
Key to the notation
Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated.
The equations introduce the electric field, , a vector field, and the magnetic field, , a pseudovector field, each generally having a time and location dependence.
The sources are
the total electric charge density (total charge per unit volume), , and
the total electric current density (total current per unit area), .
The universal constants appearing in the equations (the first two ones explicitly only in the SI formulation) are:
the permittivity of free space, , and
the permeability of free space, , and
the speed of light, c = 1/√(ε₀μ₀).
Differential equations
In the differential equations,
the nabla symbol, , denotes the three-dimensional gradient operator, del,
the symbol (pronounced "del dot") denotes the divergence operator,
the symbol (pronounced "del cross") denotes the curl operator.
Integral equations
In the integral equations,
is any volume with closed boundary surface , and
is any surface with closed boundary curve ,
The equations are a little easier to interpret with time-independent surfaces and volumes. Time-independent surfaces and volumes are "fixed" and do not change over a given time interval. For example, since the surface is time-independent, we can bring the differentiation under the integral sign in Faraday's law:
Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by using the differential version and using Gauss and Stokes formula appropriately.
is a surface integral over the boundary surface , with the loop indicating the surface is closed
is a volume integral over the volume ,
is a line integral around the boundary curve , with the loop indicating the curve is closed.
is a surface integral over the surface ,
The total electric charge enclosed in is the volume integral over of the charge density (see the "macroscopic formulation" section below): where is the volume element.
The net magnetic flux is the surface integral of the magnetic field passing through a fixed surface, :
The net electric flux is the surface integral of the electric field passing through :
The net electric current is the surface integral of the electric current density passing through : where denotes the differential vector element of surface area , normal to surface . (Vector area is sometimes denoted by rather than , but this conflicts with the notation for magnetic vector potential).
Formulation with SI quantities
Formulation with Gaussian quantities
The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of and into the units (and thus redefining these). With a corresponding change in the values of the quantities for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor: the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension. Such modified definitions are conventionally used with the Gaussian (CGS) units. Using these definitions, colloquially "in Gaussian units",
the Maxwell equations become:
The equations simplify slightly when a system of quantities is chosen in which the speed of light, c, is used for nondimensionalization, so that, for example, seconds and lightseconds are interchangeable, and c = 1.
Further changes are possible by absorbing factors of . This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units, used mainly in particle physics).
Relationship between differential and integral formulations
The equivalence of the differential and integral formulations are a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem.
Flux and divergence
According to the (purely mathematical) Gauss divergence theorem, the electric flux through the
boundary surface can be rewritten as
The integral version of Gauss's equation can thus be rewritten as
Since is arbitrary (e.g. an arbitrary small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere. This is
the differential equations formulation of Gauss equation up to a trivial rearrangement.
Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives
which is satisfied for all if and only if everywhere.
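Written out in LaTeX (a reconstruction of the standard argument, since the displayed equations were lost), the divergence-theorem step for the two Gauss laws is:

```latex
\oint_{\partial\Omega} \mathbf{E} \cdot \mathrm{d}\mathbf{S}
  = \int_{\Omega} (\nabla\cdot\mathbf{E})\,\mathrm{d}V
  = \frac{1}{\varepsilon_0}\int_{\Omega} \rho\,\mathrm{d}V
\;\;\Longrightarrow\;\;
\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0},
\qquad
\oint_{\partial\Omega} \mathbf{B} \cdot \mathrm{d}\mathbf{S} = 0
\;\;\Longrightarrow\;\;
\nabla\cdot\mathbf{B} = 0 .
```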
Circulation and curl
By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve to an integral of the "circulation of the fields" (i.e. their curls) over a surface it bounds, i.e.
Hence the Ampère–Maxwell law, the modified version of Ampère's circuital law, in integral form can be rewritten as
Since can be chosen arbitrarily, e.g. as an arbitrary small, arbitrary oriented, and arbitrary centered disk, we conclude that the integrand is zero if and only if the Ampère–Maxwell law in differential equations form is satisfied.
The equivalence of Faraday's law in differential and integral form follows likewise.
The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field.
Charge conservation
The conservation of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the Ampère–Maxwell law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives:
0 = ∇ · (∇ × B) = μ₀ (∇ · J + ε₀ ∂(∇ · E)/∂t) = μ₀ (∇ · J + ∂ρ/∂t),
i.e., the continuity equation
∂ρ/∂t + ∇ · J = 0.
By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary:
d/dt ∭_Ω ρ dV = −∯_{∂Ω} J · dS.
In particular, in an isolated system the total charge is conserved.
Vacuum equations, electromagnetic waves and speed of light
In a region with no charges (ρ = 0) and no currents (J = 0), such as in vacuum, Maxwell's equations reduce to:
∇ · E = 0,  ∇ · B = 0,  ∇ × E = −∂B/∂t,  ∇ × B = μ₀ε₀ ∂E/∂t.
Taking the curl of the curl equations, and using the curl-of-the-curl identity ∇ × (∇ × F) = ∇(∇ · F) − ∇²F, we obtain
μ₀ε₀ ∂²E/∂t² − ∇²E = 0,  μ₀ε₀ ∂²B/∂t² − ∇²B = 0.
The quantity μ₀ε₀ has the dimension (T/L)². Defining c = (μ₀ε₀)^(−1/2), the equations above have the form of the standard wave equations
(1/c²) ∂²E/∂t² − ∇²E = 0,  (1/c²) ∂²B/∂t² − ∇²B = 0.
Already during Maxwell's lifetime, it was found that the known values for ε₀ and μ₀ give c ≈ 2.998 × 10⁸ m/s, then already known to be the speed of light in free space. This led him to propose that light and radio waves were propagating electromagnetic waves, since amply confirmed. In the old SI system of units, the values of μ₀ and c are defined constants (which means that by definition ε₀ = 1/(μ₀c²)) that define the ampere and the metre. In the new SI system, only c keeps its defined value, and the electron charge gets a defined value.
In materials with relative permittivity, ε_r, and relative permeability, μ_r, the phase velocity of light becomes
v_p = 1/√(μ₀μ_r ε₀ε_r) = c/√(μ_r ε_r),
which is usually less than c.
In addition, E and B are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's modification of Ampère's circuital law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at speed c.
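As a quick numerical illustration (a sketch, not part of the original text), the following Python snippet evaluates c from the vacuum constants and the phase velocity in a simple dielectric; the material value used is an illustrative assumption.

```python
import math

# Vacuum constants (SI); values quoted for illustration.
mu_0 = 4 * math.pi * 1e-7       # vacuum permeability, H/m (exact in the old SI)
eps_0 = 8.8541878128e-12        # vacuum permittivity, F/m

c = 1.0 / math.sqrt(mu_0 * eps_0)
print(f"c = {c:.6e} m/s")       # ~2.998e8 m/s, the speed of light

def phase_velocity(eps_r, mu_r=1.0):
    """Phase velocity in a linear medium with relative permittivity and permeability."""
    return c / math.sqrt(eps_r * mu_r)

# e.g. a glass-like dielectric with eps_r ~ 2.25 (refractive index ~1.5), an assumed value
print(f"v_p = {phase_velocity(2.25):.3e} m/s")
```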
Macroscopic formulation
The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping.
The microscopic version is sometimes called "Maxwell's equations in vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents.
"Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself.
In the macroscopic equations, the influence of bound charge and bound current is incorporated into the displacement field D and the magnetizing field H, while the equations depend only on the free charges and free currents. This reflects a splitting of the total electric charge Q and current I (and their densities ρ and J) into free and bound parts:
Q = Q_f + Q_b,  I = I_f + I_b.
The cost of this splitting is that the additional fields D and H need to be determined through phenomenological constitutive equations relating these fields to the electric field E and the magnetic field B, together with the bound charge and current.
See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum;
and the macroscopic equations, dealing with free charge and current, practical to use within materials.
Bound charge and current
When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds in the same way, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization P of the material, its dipole moment per unit volume. If P is uniform, a macroscopic separation of charge is produced only at the surfaces where P enters and leaves the material. For non-uniform P, a charge is also produced in the bulk.
Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization .
The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of and , which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume.
Auxiliary fields, polarization and magnetization
The definitions of the auxiliary fields are:
D = ε₀E + P,  H = B/μ₀ − M,
where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density ρ_b and bound current density J_b in terms of polarization and magnetization are then defined as
ρ_b = −∇ · P,  J_b = ∇ × M + ∂P/∂t.
If we define the total, bound, and free charge and current density by
ρ = ρ_b + ρ_f,  J = J_b + J_f,
and use the defining relations above to eliminate D and H, the "macroscopic" Maxwell's equations reproduce the "microscopic" equations.
Constitutive relations
In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between the displacement field D and the electric field E, as well as the magnetizing field H and the magnetic field B. Equivalently, we have to specify the dependence of the polarization P (hence the bound charge) and the magnetization M (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations. For real-world materials, the constitutive relations are rarely simple, except approximately, and are usually determined by experiment. See the main article on constitutive relations for a fuller description.
For materials without polarization and magnetization, the constitutive relations are (by definition)
D = ε₀E,  H = B/μ₀,
where ε₀ is the permittivity of free space and μ₀ the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal.
An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarization and magnetization.
More generally, for linear materials the constitutive relations are
D = εE,  H = B/μ,
where ε is the permittivity and μ the permeability of the material. For the displacement field D the linear approximation is usually excellent, because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high-power pulsed lasers) the interatomic electric fields of materials, of the order of 10¹¹ V/m, are much higher than the external field. For the magnetizing field H, however, the linear approximation can break down in common materials like iron, leading to phenomena like hysteresis. Even the linear case can have various complications, however.
For homogeneous materials, and are constant throughout the material, while for inhomogeneous materials they depend on location within the material (and perhaps time).
For isotropic materials, and are scalars, while for anisotropic materials (e.g. due to crystal structure) they are tensors.
Materials are generally dispersive, so and depend on the frequency of any incident EM waves.
Even more generally, in the case of non-linear materials (see for example nonlinear optics), D and P are not necessarily proportional to E, and similarly H or M is not necessarily proportional to B. In general D and H depend on both E and B, on location and time, and possibly on other physical quantities.
In applications one also has to describe how the free currents and charge density behave in terms of E and B, possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. E.g., the original equations given by Maxwell (see History of Maxwell's equations) included Ohm's law in the form
J = σE.
Alternative formulations
Following are some of the several other mathematical formalisms of Maxwell's equations, with the columns separating the two homogeneous Maxwell equations from the two inhomogeneous ones. Each formulation has versions directly in terms of the electric and magnetic fields, and indirectly in terms of the electric potential φ and the vector potential A. Potentials were introduced as a convenient way to solve the homogeneous equations, but it was thought that all observable physics was contained in the electric and magnetic fields (or relativistically, the Faraday tensor). The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the electric and magnetic fields vanish (Aharonov–Bohm effect).
Each table describes one formalism. See the main article for details of each formulation.
The direct spacetime formulations make manifest that the Maxwell equations are relativistically invariant, where space and time are treated on equal footing. Because of this symmetry, the electric and magnetic fields are treated on equal footing and are recognized as components of the Faraday tensor. This reduces the four Maxwell equations to two, which simplifies the equations, although we can no longer use the familiar vector formulation. Maxwell equations in formulation that do not treat space and time manifestly on the same footing have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables. For this reason the relativistic invariant equations are usually called the Maxwell equations as well.
In the tensor calculus formulation, the electromagnetic tensor F_{αβ} is an antisymmetric covariant order 2 tensor; the four-potential, A_α, is a covariant vector; the current, J^α, is a vector; the square brackets, [ ], denote antisymmetrization of indices; ∂_α is the partial derivative with respect to the coordinate x^α. In Minkowski space coordinates are chosen with respect to an inertial frame, (x^α) = (ct, x, y, z), so that the metric tensor used to raise and lower indices is η_{αβ} = diag(1, −1, −1, −1). The d'Alembert operator on Minkowski space is ◻ = ∂_α∂^α, as in the vector formulation. In general spacetimes, the coordinate system x^α is arbitrary, the covariant derivative ∇_α, the Ricci tensor, and raising and lowering of indices are defined by the Lorentzian metric g_{αβ}, and the d'Alembert operator is defined as ◻ = ∇_α∇^α. The topological restriction is that the second real cohomology group of the space vanishes (see the differential form formulation for an explanation). This is violated for Minkowski space with a line removed, which can model a (flat) spacetime with a point-like monopole on the complement of the line.
In the differential form formulation on arbitrary spacetimes, F is the electromagnetic tensor considered as a 2-form, A is the potential 1-form, J is the current 3-form, d is the exterior derivative, and ⋆ is the Hodge star on forms defined (up to its orientation, i.e. its sign) by the Lorentzian metric of spacetime. In the special case of 2-forms such as F, the Hodge star depends on the metric tensor only for its local scale. This means that, as formulated, the differential form field equations are conformally invariant, but the Lorenz gauge condition breaks conformal invariance. The operator ◻ is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian spacetime. The topological condition is again that the second real cohomology group is trivial (meaning that it vanishes). By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact.
Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. Historically, a quaternionic formulation was used.
Solutions
Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations. These all form a set of coupled partial differential equations which are often very difficult to solve: the solutions encompass all the diverse phenomena of classical electromagnetism. Some general remarks follow.
As for any differential equation, boundary conditions and initial conditions are necessary for a unique solution. For example, even with no charges and no currents anywhere in spacetime, there are the obvious solutions for which E and B are zero or constant, but there are also non-trivial solutions corresponding to electromagnetic waves. In some cases, Maxwell's equations are solved over the whole of space, and boundary conditions are given as asymptotic limits at infinity. In other cases, Maxwell's equations are solved in a finite region of space, with appropriate conditions on the boundary of that region, for example an artificial absorbing boundary representing the rest of the universe, or periodic boundary conditions, or walls that isolate a small region from the outside world (as with a waveguide or cavity resonator).
Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. It assumes specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. However, Jefimenko's equations are unhelpful in situations when the charges and currents are themselves affected by the fields they create.
Numerical methods for differential equations can be used to compute approximate solutions of Maxwell's equations when exact solutions are impossible. These include the finite element method and finite-difference time-domain method. For more details, see Computational electromagnetics.
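As a minimal illustration of the finite-difference time-domain idea (a sketch, not taken from the text; the grid size, Courant number, and Gaussian source are assumed for demonstration), a one-dimensional vacuum update in normalized units can be written as follows:

```python
import numpy as np

# 1D FDTD (Yee) sketch in vacuum with normalized units (c = 1, dx = 1), stepping
# Faraday's and Ampere-Maxwell's laws in leapfrog fashion on a staggered grid.
N, steps = 200, 300
Ez = np.zeros(N)      # electric field samples
Hy = np.zeros(N - 1)  # magnetic field samples, offset by half a cell

courant = 0.5         # dt/dx in normalized units; must be <= 1 for stability
for t in range(steps):
    # Faraday's law (1D reduction): dH/dt ~ dE/dx
    Hy += courant * (Ez[1:] - Ez[:-1])
    # Ampere-Maxwell law with no current: dE/dt ~ dH/dx
    Ez[1:-1] += courant * (Hy[1:] - Hy[:-1])
    # Soft Gaussian source injected at the grid centre (illustrative choice)
    Ez[N // 2] += np.exp(-((t - 40) / 12) ** 2)

print("peak |Ez| after propagation:", np.abs(Ez).max())
```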
Overdetermination of Maxwell's equations
Maxwell's equations seem overdetermined, in that they involve six unknowns (the three components of and ) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampère's circuital laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation.) This is related to a certain limited kind of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law and Ampère's circuital law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does, and assuming conservation of charge and the nonexistence of magnetic monopoles.
This explanation was first introduced by Julius Adams Stratton in 1941.
Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations become not overdetermined after all. The resulting formulation can lead to more accurate algorithms that take all four laws into account.
Both identities ∇ ⋅ (∇ × E) ≡ 0 and ∇ ⋅ (∇ × B) ≡ 0, which reduce eight equations to six independent ones, are the true reason for the overdetermination.
Equivalently, the overdetermination can be viewed as implying conservation of electric and magnetic charge, as they are required in the derivation described above but implied by the two Gauss's laws.
For linear algebraic equations, one can make 'nice' rules to rewrite the equations and unknowns. The equations can be linearly dependent. But in differential equations, and especially partial differential equations (PDEs), one needs appropriate boundary conditions, which depend in not so obvious ways on the equations. Even more, if one rewrites them in terms of vector and scalar potential, then the equations are underdetermined because of gauge fixing.
Maxwell's equations as the classical limit of QED
Maxwell's equations and the Lorentz force law (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena. However they do not account for quantum effects and so their domain of applicability is limited. Maxwell's equations are thought of as the classical limit of quantum electrodynamics (QED).
Some observed electromagnetic phenomena are incompatible with Maxwell's equations. These include photon–photon scattering and many other phenomena related to photons or virtual photons, "nonclassical light" and quantum entanglement of electromagnetic fields (see Quantum optics). E.g. quantum cryptography cannot be described by Maxwell theory, not even approximately. The approximate nature of Maxwell's equations becomes more and more apparent when going into the extremely strong field regime (see Euler–Heisenberg Lagrangian) or to extremely small distances.
Finally, Maxwell's equations cannot explain any phenomenon involving individual photons interacting with quantum matter, such as the photoelectric effect, Planck's law, the Duane–Hunt law, and single-photon light detectors. However, many such phenomena may be approximated using a halfway theory of quantum matter coupled to a classical electromagnetic field, either as external field or with the expected value of the charge current and density on the right hand side of Maxwell's equations.
Variations
Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well.
Magnetic monopoles
Maxwell's equations posit that there is electric charge, but no magnetic charge (also called magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed, despite extensive searches, and may not exist. If magnetic monopoles did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields.
See also
Explanatory notes
References
Further reading
Historical publications
On Faraday's Lines of Force – 1855/56. Maxwell's first paper (Part 1 & 2) – Compiled by Blaze Labs Research (PDF).
On Physical Lines of Force – 1861. Maxwell's 1861 paper describing magnetic lines of force – Predecessor to 1873 Treatise.
James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459–512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.)
A Dynamical Theory Of The Electromagnetic Field – 1865. Maxwell's 1865 paper describing his 20 equations, link from Google Books.
J. Clerk Maxwell (1873), "A Treatise on Electricity and Magnetism":
Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 1 – 1873 – Posner Memorial Collection – Carnegie Mellon University.
Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 2 – 1873 – Posner Memorial Collection – Carnegie Mellon University.
Developments before the theory of relativity
Henri Poincaré (1900) "La théorie de Lorentz et le Principe de Réaction" , Archives Néerlandaises, V, 253–278.
Henri Poincaré (1902) "La Science et l'Hypothèse" .
Henri Poincaré (1905) "Sur la dynamique de l'électron" , Comptes Rendus de l'Académie des Sciences, 140, 1504–1508.
Catt, Walton and Davidson. "The History of Displacement Current" . Wireless World, March 1979.
External links
maxwells-equations.com — An intuitive tutorial of Maxwell's equations.
The Feynman Lectures on Physics Vol. II Ch. 18: The Maxwell Equations
Wikiversity Page on Maxwell's Equations
Modern treatments
Electromagnetism (ch. 11), B. Crowell, Fullerton College
Lecture series: Relativity and electromagnetism, R. Fitzpatrick, University of Texas at Austin
Electromagnetic waves from Maxwell's equations on Project PHYSNET.
MIT Video Lecture Series (36 × 50 minute lectures) (in .mp4 format) – Electricity and Magnetism Taught by Professor Walter Lewin.
Other
Nature Milestones: Photons – Milestone 2 (1861) Maxwell's equations
Transport phenomena | In engineering, physics, and chemistry, the study of transport phenomena concerns the exchange of mass, energy, charge, momentum and angular momentum between observed and studied systems. While it draws from fields as diverse as continuum mechanics and thermodynamics, it places a heavy emphasis on the commonalities between the topics covered. Mass, momentum, and heat transport all share a very similar mathematical framework, and the parallels between them are exploited in the study of transport phenomena to draw deep mathematical connections that often provide very useful tools in the analysis of one field that are directly derived from the others.
The fundamental analysis in all three subfields of mass, heat, and momentum transfer are often grounded in the simple principle that the total sum of the quantities being studied must be conserved by the system and its environment. Thus, the different phenomena that lead to transport are each considered individually with the knowledge that the sum of their contributions must equal zero. This principle is useful for calculating many relevant quantities. For example, in fluid mechanics, a common use of transport analysis is to determine the velocity profile of a fluid flowing through a rigid volume.
Transport phenomena are ubiquitous throughout the engineering disciplines. Some of the most common examples of transport analysis in engineering are seen in the fields of process, chemical, biological, and mechanical engineering, but the subject is a fundamental component of the curriculum in all disciplines involved in any way with fluid mechanics, heat transfer, and mass transfer. It is now considered to be a part of the engineering discipline as much as thermodynamics, mechanics, and electromagnetism.
Transport phenomena encompass all agents of physical change in the universe. Moreover, they are considered to be fundamental building blocks which developed the universe, and which are responsible for the success of all life on Earth. However, the scope here is limited to the relationship of transport phenomena to artificial engineered systems.
Overview
In physics, transport phenomena are all irreversible processes of statistical nature stemming from the random continuous motion of molecules, mostly observed in fluids. Every aspect of transport phenomena is grounded in two primary concepts: the conservation laws, and the constitutive equations. The conservation laws, which in the context of transport phenomena are formulated as continuity equations, describe how the quantity being studied must be conserved. The constitutive equations describe how the quantity in question responds to various stimuli via transport. Prominent examples include Fourier's law of heat conduction and the Navier–Stokes equations, which describe, respectively, the response of heat flux to temperature gradients and the relationship between fluid flux and the forces applied to the fluid. These equations also demonstrate the deep connection between transport phenomena and thermodynamics, a connection that explains why transport phenomena are irreversible. Almost all of these physical phenomena ultimately involve systems seeking their lowest energy state in keeping with the principle of minimum energy. As they approach this state, they tend to achieve true thermodynamic equilibrium, at which point there are no longer any driving forces in the system and transport ceases. The various aspects of such equilibrium are directly connected to a specific transport: heat transfer is the system's attempt to achieve thermal equilibrium with its environment, just as mass and momentum transport move the system towards chemical and mechanical equilibrium.
Examples of transport processes include heat conduction (energy transfer), fluid flow (momentum transfer), molecular diffusion (mass transfer), radiation and electric charge transfer in semiconductors.
Transport phenomena have wide application. For example, in solid state physics, the motion and interaction of electrons, holes and phonons are studied under "transport phenomena". Another example is in biomedical engineering, where some transport phenomena of interest are thermoregulation, perfusion, and microfluidics. In chemical engineering, transport phenomena are studied in reactor design, analysis of molecular or diffusive transport mechanisms, and metallurgy.
The transport of mass, energy, and momentum can be affected by the presence of external sources:
An odor dissipates more slowly (and may intensify) when the source of the odor remains present.
The rate of cooling of a solid that is conducting heat depends on whether a heat source is applied.
The gravitational force acting on a rain drop counteracts the resistance or drag imparted by the surrounding air.
Commonalities among phenomena
An important principle in the study of transport phenomena is analogy between phenomena.
Diffusion
There are some notable similarities in equations for momentum, energy, and mass transfer which can all be transported by diffusion, as illustrated by the following examples:
Mass: the spreading and dissipation of odors in air is an example of mass diffusion.
Energy: the conduction of heat in a solid material is an example of heat diffusion.
Momentum: the drag experienced by a rain drop as it falls in the atmosphere is an example of momentum diffusion (the rain drop loses momentum to the surrounding air through viscous stresses and decelerates).
The molecular transfer equations of Newton's law for fluid momentum, Fourier's law for heat, and Fick's law for mass are very similar. One can convert from one transport coefficient to another in order to compare all three different transport phenomena.
A great deal of effort has been devoted in the literature to developing analogies among these three transport processes for turbulent transfer so as to allow prediction of one from any of the others. The Reynolds analogy assumes that the turbulent diffusivities are all equal and that the molecular diffusivities of momentum (μ/ρ) and mass (DAB) are negligible compared to the turbulent diffusivities. When liquids are present and/or drag is present, the analogy is not valid. Other analogies, such as von Karman's and Prandtl's, usually result in poor relations.
The most successful and most widely used analogy is the Chilton and Colburn J-factor analogy. This analogy is based on experimental data for gases and liquids in both the laminar and turbulent regimes. Although it is based on experimental data, it can be shown to satisfy the exact solution derived from laminar flow over a flat plate. All of this information is used to predict transfer of mass.
Onsager reciprocal relations
In fluid systems described in terms of temperature, matter density, and pressure, it is known that temperature differences lead to heat flows from the warmer to the colder parts of the system; similarly, pressure differences will lead to matter flow from high-pressure to low-pressure regions. What is remarkable is the observation that, when both pressure and temperature vary, temperature differences at constant pressure can cause matter flow (as in convection) and pressure differences at constant temperature can cause heat flow. The heat flow per unit of pressure difference and the density (matter) flow per unit of temperature difference are equal; this equality is the reciprocal relation.
This equality was shown to be necessary by Lars Onsager using statistical mechanics as a consequence of the time reversibility of microscopic dynamics. The theory developed by Onsager is much more general than this example and capable of treating more than two thermodynamic forces at once.
Momentum transfer
In momentum transfer, the fluid is treated as a continuous distribution of matter. The study of momentum transfer, or fluid mechanics can be divided into two branches: fluid statics (fluids at rest), and fluid dynamics (fluids in motion).
When a fluid is flowing in the x-direction parallel to a solid surface, the fluid has x-directed momentum, and its concentration is υxρ. By random diffusion of molecules there is an exchange of molecules in the z-direction. Hence the x-directed momentum has been transferred in the z-direction from the faster- to the slower-moving layer.
The equation for momentum transfer is Newton's law of viscosity written as follows:
τ_zx = −ν d(ρv_x)/dz
where τ_zx is the flux of x-directed momentum in the z-direction, ν is μ/ρ, the momentum diffusivity, z is the distance of transport or diffusion, ρ is the density, and μ is the dynamic viscosity. Newton's law of viscosity is the simplest relationship between the flux of momentum and the velocity gradient. It may be useful to note that this is an unconventional use of the symbol τ_zx; the indices are reversed as compared with standard usage in solid mechanics, and the sign is reversed.
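A small numerical sketch (not from the text; the viscosity, plate speed, and gap are illustrative values) of the magnitude of the momentum flux for a linear velocity profile between two parallel plates:

```python
# Magnitude of the shear stress / momentum flux, |tau_zx| = mu * |dv_x/dz|,
# for a linear velocity profile between two plates. Numbers are illustrative.
mu = 1.0e-3        # dynamic viscosity of water near 20 C, Pa*s
v_top = 0.5        # speed of the moving plate, m/s
gap = 2.0e-3       # spacing between the plates, m

dvx_dz = v_top / gap           # velocity gradient, 1/s
tau_zx = mu * dvx_dz           # momentum flux magnitude, Pa
print(f"shear stress ~ {tau_zx:.3f} Pa")
```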
Mass transfer
When a system contains two or more components whose concentration vary from point to point, there is a natural tendency for mass to be transferred, minimizing any concentration difference within the system. Mass transfer in a system is governed by Fick's first law: 'Diffusion flux from higher concentration to lower concentration is proportional to the gradient of the concentration of the substance and the diffusivity of the substance in the medium.' Mass transfer can take place due to different driving forces. Some of them are:
Mass can be transferred by the action of a pressure gradient (pressure diffusion)
Forced diffusion occurs because of the action of some external force
Diffusion can be caused by temperature gradients (thermal diffusion)
Diffusion can be caused by differences in chemical potential
This can be compared to Fick's law of diffusion, for a species A in a binary mixture consisting of A and B:
J_A = −D dc_A/dz
where D is the diffusivity constant and c_A is the concentration of species A.
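A short numerical sketch (the diffusivity, concentrations, and film thickness below are illustrative assumptions) of the diffusive flux across a thin film:

```python
# Fick's first law in one dimension, J_A = -D * dc_A/dz, with illustrative numbers.
D = 2.0e-9                 # diffusivity of species A in B (typical liquid-phase value), m^2/s
c_high, c_low = 1.0, 0.2   # concentrations at the two faces of the film, mol/m^3
L = 1.0e-3                 # film thickness, m

dc_dz = (c_low - c_high) / L   # concentration gradient, mol/m^4
J_A = -D * dc_dz               # diffusive flux of A, mol/(m^2*s)
print(f"J_A ~ {J_A:.3e} mol/(m^2 s)")
```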
Heat transfer
Many important engineered systems involve heat transfer. Some examples are the heating and cooling of process streams, phase changes, distillation, etc. The basic principle is Fourier's law, which is expressed as follows for a static system:
q = −k dT/dx
The net flux of heat through a system equals the conductivity times the rate of change of temperature with respect to position.
For convective transport involving turbulent flow, complex geometries, or difficult boundary conditions, the heat transfer may be represented by a heat transfer coefficient:
Q = h A ΔT
where A is the surface area, ΔT is the temperature driving force, Q is the heat flow per unit time, and h is the heat transfer coefficient.
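A minimal numerical sketch of this relation (the coefficient, area, and temperature difference are illustrative assumptions, roughly typical of natural convection in air):

```python
# Convective heat transfer, Q = h * A * dT, with illustrative numbers.
h = 25.0       # heat transfer coefficient, W/(m^2*K) (assumed, typical for air)
A = 1.5        # surface area, m^2
dT = 40.0      # temperature driving force, K

Q = h * A * dT
print(f"heat flow ~ {Q:.0f} W")
```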
Within heat transfer, two principal types of convection can occur:
Forced convection can occur in both laminar and turbulent flow. In the situation of laminar flow in circular tubes, several dimensionless numbers are used such as the Nusselt number, Reynolds number, and Prandtl number. The commonly used correlations express the Nusselt number as a function of these groups, Nu = f(Re, Pr).
Natural or free convection is a function of Grashof and Prandtl numbers. The complexities of free convection heat transfer make it necessary to mainly use empirical relations from experimental data.
Heat transfer is analyzed in packed beds, nuclear reactors and heat exchangers.
Heat and mass transfer analogy
The heat and mass analogy allows solutions for mass transfer problems to be obtained from known solutions to heat transfer problems. It arises from the similarity of the non-dimensional governing equations for heat and mass transfer.
Derivation
The non-dimensional energy equation for fluid flow in a boundary layer can simplify to the following, when heating from viscous dissipation and heat generation can be neglected:
where u* and v* are the velocities in the x and y directions respectively, normalized by the free-stream velocity, x* and y* are the x and y coordinates non-dimensionalized by a relevant length scale, Re is the Reynolds number, Pr is the Prandtl number, and T* is the non-dimensional temperature, which is defined by the local, minimum, and maximum temperatures:
T* = (T − T_min)/(T_max − T_min)
The non-dimensional species transport equation for fluid flow in a boundary layer can be given as the following, assuming no bulk species generation:
where C* is the non-dimensional concentration and Sc is the Schmidt number.
Transport of heat is driven by temperature differences, while transport of species is due to concentration differences. They differ in the diffusivity of the transported quantity relative to the diffusivity of momentum. For heat, the comparison is between momentum (viscous) diffusivity and thermal diffusivity, given by the Prandtl number. Meanwhile, for mass transfer, the comparison is between momentum diffusivity and mass diffusivity, given by the Schmidt number.
In some cases direct analytic solutions can be found from these equations for the Nusselt and Sherwood numbers. In cases where experimental results are used, one can assume these equations underlie the observed transport.
At an interface, the boundary conditions for both equations are also similar. For heat transfer at an interface, the no-slip condition allows us to equate conduction with convection, thus equating Fourier's law and Newton's law of cooling:
where q″ is the heat flux, k is the thermal conductivity, h is the heat transfer coefficient, and the subscripts s and ∞ denote the surface and bulk (free-stream) values respectively.
For mass transfer at an interface, we can equate Fick's law with Newton's law for convection, yielding:
where m″ is the mass flux [kg/(m²·s)], D_ab is the diffusivity of species a in fluid b, and h_m is the mass transfer coefficient. As we can see, T and C are analogous, k and D_ab are analogous, while h and h_m are analogous.
Implementing the Analogy
Heat-Mass Analogy:
Because the Nu and Sh equations are derived from these analogous governing equations, one can directly swap the Nu and Sh and the Pr and Sc numbers to convert these equations between mass and heat.
In many situations, such as flow over a flat plate, the Nu and Sh numbers are functions of the Pr and Sc numbers raised to some exponent n. Therefore, one can directly calculate these numbers from one another using:
Nu/Sh = (Pr/Sc)^n
where n = 1/3 can be used in most cases, which comes from the analytical solution for the Nusselt number for laminar flow over a flat plate. For best accuracy, n should be adjusted where correlations have a different exponent.
We can take this further by substituting into this equation the definitions of the heat transfer coefficient, mass transfer coefficient, and Lewis number, yielding:
For fully developed turbulent flow, with n=1/3, this becomes the Chilton–Colburn J-factor analogy. Said analogy also relates viscous forces and heat transfer, like the Reynolds analogy.
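As a worked sketch of this conversion (not from the text; the air and water-vapour property values and the known heat transfer coefficient are illustrative assumptions), a mass transfer coefficient can be estimated from a heat transfer coefficient through the Lewis number:

```python
# Heat-mass analogy sketch: h_m = h / (rho * cp * Le**(1 - n)), with n = 1/3,
# where Le = alpha / D_ab is the Lewis number. Property values are assumed.
rho = 1.2          # air density, kg/m^3
cp = 1007.0        # air specific heat, J/(kg*K)
alpha = 2.2e-5     # thermal diffusivity of air, m^2/s
D_ab = 2.5e-5      # diffusivity of water vapour in air, m^2/s
h = 25.0           # known convective heat transfer coefficient, W/(m^2*K)

Le = alpha / D_ab
n = 1.0 / 3.0
h_m = h / (rho * cp * Le ** (1 - n))   # mass transfer coefficient, m/s
print(f"Le = {Le:.2f}, h_m ~ {h_m:.4f} m/s")
```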
Limitations
The analogy between heat transfer and mass transfer is strictly limited to binary diffusion in dilute (ideal) solutions for which the mass transfer rates are low enough that mass transfer has no effect on the velocity field. The concentration of the diffusing species must be low enough that the chemical potential gradient is accurately represented by the concentration gradient (thus, the analogy has limited application to concentrated liquid solutions). When the rate of mass transfer is high or the concentration of the diffusing species is not low, corrections to the low-rate heat transfer coefficient can sometimes help. Further, in multicomponent mixtures, the transport of one species is affected by the chemical potential gradients of other species.
The heat and mass analogy may also break down in cases where the governing equations differ substantially. For instance, situations with substantial contributions from generation terms in the flow, such as bulk heat generation or bulk chemical reactions, may cause solutions to diverge.
Applications of the Heat-Mass Analogy
The analogy is useful both for using heat and mass transport to predict one another and for understanding systems which experience simultaneous heat and mass transfer. For example, predicting heat transfer coefficients around turbine blades is challenging and is often done by measuring the evaporation of a volatile compound and using the analogy. Many systems also experience simultaneous mass and heat transfer, and particularly common examples occur in processes with phase change, as the enthalpy of phase change often substantially influences heat transfer. Such examples include: evaporation at a water surface, transport of vapor in the air gap above a membrane distillation desalination membrane, and HVAC dehumidification equipment that combines heat transfer and selective membranes.
Applications
Pollution
The study of transport processes is relevant for understanding the release and distribution of pollutants into the environment. In particular, accurate modeling can inform mitigation strategies. Examples include the control of surface water pollution from urban runoff, and policies intended to reduce the copper content of vehicle brake pads in the U.S.
See also
Constitutive equation
Continuity equation
Wave propagation
Pulse
Action potential
Bioheat transfer
References
External links
Transport Phenomena Archive in the Teaching Archives of the Materials Digital Library Pathway
Kinematics | Kinematics is a subfield of physics and mathematics, developed in classical mechanics, that describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that cause them to move. Kinematics, as a field of study, is often referred to as the "geometry of motion" and is occasionally seen as a branch of both applied and pure mathematics since it can be studied without considering the mass of a body or the forces acting upon it. A kinematics problem begins by describing the geometry of the system and declaring the initial conditions of any known values of position, velocity and/or acceleration of points within the system. Then, using arguments from geometry, the position, velocity and acceleration of any unknown parts of the system can be determined. The study of how forces act on bodies falls within kinetics, not kinematics. For further details, see analytical dynamics.
Kinematics is used in astrophysics to describe the motion of celestial bodies and collections of such bodies. In mechanical engineering, robotics, and biomechanics, kinematics is used to describe the motion of systems composed of joined parts (multi-link systems) such as an engine, a robotic arm or the human skeleton.
Geometric transformations, also called rigid transformations, are used to describe the movement of components in a mechanical system, simplifying the derivation of the equations of motion. They are also central to dynamic analysis.
Kinematic analysis is the process of measuring the kinematic quantities used to describe motion. In engineering, for instance, kinematic analysis may be used to find the range of movement for a given mechanism and, working in reverse, using kinematic synthesis to design a mechanism for a desired range of motion. In addition, kinematics applies algebraic geometry to the study of the mechanical advantage of a mechanical system or mechanism.
Etymology
The term kinematic is the English version of A.M. Ampère's cinématique, which he constructed from the Greek kinema ("movement, motion"), itself derived from kinein ("to move").
Kinematic and cinématique are related to the French word cinéma, but neither are directly derived from it. However, they do share a root word in common, as cinéma came from the shortened form of cinématographe, "motion picture projector and camera", once again from the Greek word for movement and from the Greek grapho ("to write").
Kinematics of a particle trajectory in a non-rotating frame of reference
Particle kinematics is the study of the trajectory of particles. The position of a particle is defined as the coordinate vector from the origin of a coordinate frame to the particle. For example, consider a tower 50 m south from your home, where the coordinate frame is centered at your home, such that east is in the direction of the x-axis and north is in the direction of the y-axis, then the coordinate vector to the base of the tower is r = (0 m, −50 m, 0 m). If the tower is 50 m high, and this height is measured along the z-axis, then the coordinate vector to the top of the tower is r = (0 m, −50 m, 50 m).
In the most general case, a three-dimensional coordinate system is used to define the position of a particle. However, if the particle is constrained to move within a plane, a two-dimensional coordinate system is sufficient. All observations in physics are incomplete without being described with respect to a reference frame.
The position vector of a particle is a vector drawn from the origin of the reference frame to the particle. It expresses both the distance of the point from the origin and its direction from the origin. In three dimensions, the position vector can be expressed as
where , , and are the Cartesian coordinates and , and are the unit vectors along the , , and coordinate axes, respectively. The magnitude of the position vector gives the distance between the point and the origin.
The direction cosines of the position vector provide a quantitative measure of direction. In general, an object's position vector will depend on the frame of reference; different frames will lead to different values for the position vector.
The trajectory of a particle is a vector function of time, , which defines the curve traced by the moving particle, given by
where , , and describe each coordinate of the particle's position as a function of time.
Velocity and speed
The velocity of a particle is a vector quantity that describes the direction as well as the magnitude of motion of the particle. More mathematically, the rate of change of the position vector of a point with respect to time is the velocity of the point. Consider the ratio formed by dividing the difference of two positions of a particle (displacement) by the time interval. This ratio is called the average velocity over that time interval and is defined as
v̄ = Δr/Δt
where Δr is the displacement vector during the time interval Δt. In the limit that the time interval approaches zero, the average velocity approaches the instantaneous velocity, defined as the time derivative of the position vector,
v = dr/dt.
Thus, a particle's velocity is the time rate of change of its position. Furthermore, this velocity is tangent to the particle's trajectory at every position along its path. In a non-rotating frame of reference, the derivatives of the coordinate directions are not considered as their directions and magnitudes are constants.
The speed of an object is the magnitude of its velocity. It is a scalar quantity:
v = |v| = ds/dt
where s is the arc length measured along the trajectory of the particle. This arc length must always increase as the particle moves. Hence, ds/dt is non-negative, which implies that speed is also non-negative.
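A short numerical sketch (the sampled trajectory, a unit circle traversed at unit angular rate, is an illustrative choice, not from the text) of estimating velocity and speed from sampled positions by finite differences:

```python
import numpy as np

# Estimate velocity and speed from a sampled trajectory r(t) by finite differences.
t = np.linspace(0.0, 2.0 * np.pi, 2001)
r = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)  # position samples

v = np.gradient(r, t, axis=0)        # velocity = dr/dt, component-wise
speed = np.linalg.norm(v, axis=1)    # speed = |v|, which equals ds/dt

print("mean speed ~", speed.mean())  # ~1.0 for the unit circle at unit angular rate
```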
Acceleration
The velocity vector can change in magnitude and in direction or both at once. Hence, the acceleration accounts for both the rate of change of the magnitude of the velocity vector and the rate of change of direction of that vector. The same reasoning used with respect to the position of a particle to define velocity can be applied to the velocity to define acceleration. The acceleration of a particle is the vector defined by the rate of change of the velocity vector. The average acceleration of a particle over a time interval is defined as the ratio
ā = Δv/Δt
where Δv is the change in velocity and Δt is the time interval.
The acceleration of the particle is the limit of the average acceleration as the time interval approaches zero, which is the time derivative,
Alternatively,
Thus, acceleration is the first derivative of the velocity vector and the second derivative of the position vector of that particle. In a non-rotating frame of reference, the derivatives of the coordinate directions are not considered as their directions and magnitudes are constants.
The magnitude of the acceleration of an object is the magnitude |a| of its acceleration vector. It is a scalar quantity:
Relative position vector
A relative position vector is a vector that defines the position of one point relative to another. It is the difference in position of the two points.
The position of one point A relative to another point B is simply the difference between their positions
which is the difference between the components of their position vectors.
If point A has position components
and point B has position components
then the position of point A relative to point B is the difference between their components:
Relative velocity
The velocity of one point relative to another is simply the difference between their velocities
which is the difference between the components of their velocities.
If point A has velocity components and point B has velocity components then the velocity of point A relative to point B is the difference between their components:
Alternatively, this same result could be obtained by computing the time derivative of the relative position vector rB/A.
Relative acceleration
The acceleration of one point C relative to another point B is simply the difference between their accelerations.
which is the difference between the components of their accelerations.
If point C has acceleration components
and point B has acceleration components
then the acceleration of point C relative to point B is the difference between their components:
Alternatively, this same result could be obtained by computing the second time derivative of the relative position vector rB/A.
Assuming that the acceleration a is constant and that the initial conditions of the position, r₀, and velocity, v₀, at time t = 0 are known, the first integration yields the velocity of the particle as a function of time,
v(t) = v₀ + a t.
A second integration yields its path (trajectory),
r(t) = r₀ + v₀ t + ½ a t².
Additional relations between displacement, velocity, acceleration, and time can be derived. Since the acceleration is constant,
can be substituted into the above equation to give:
A relationship between velocity, position and acceleration without explicit time dependence can be had by solving the average acceleration for time and substituting and simplifying
|v(t)|² = |v₀|² + 2 a · (r(t) − r₀)
where · denotes the dot product, which is appropriate as the products are scalars rather than vectors.
The dot product can be replaced by the cosine of the angle between the vectors (see Geometric interpretation of the dot product for more details) and the vectors by their magnitudes, in which case:
In the case of acceleration always in the direction of the motion, with the direction of motion taken as positive or negative, the angle between the vectors is 0, so cos 0 = 1, and
|v(t)|² = |v₀|² + 2 |a| |r(t) − r₀|.
This can be simplified using the notation for the magnitudes of the vectors, with Δs = |r(t) − r₀|, where Δs can be any curved path along which the constant tangential acceleration is applied, so
v² = v₀² + 2aΔs.
This reduces the parametric equations of motion of the particle to a Cartesian relationship of speed versus position. This relation is useful when time is unknown. We also know that Δx = ∫ v dt, i.e. the displacement Δx is the area under a velocity–time graph. We can find Δx by adding the top area and the bottom area of that graph. The bottom area is a rectangle, whose area is width times height; here the width is t and the height is v₀, so the bottom area is v₀t. The top area is a triangle, whose area is half the base times the height; here the base is t and the height is at, the velocity gained over the interval, so the top area is ½at². Adding v₀t and ½at² results in the equation Δx = v₀t + ½at². This equation is applicable when the final velocity v is unknown.
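A quick numerical check of these constant-acceleration relations (the initial values below are illustrative choices, not from the text):

```python
# Constant-acceleration relations for straight-line motion:
#   x = x0 + v0*t + a*t**2/2,  v = v0 + a*t,  v**2 = v0**2 + 2*a*(x - x0).
x0, v0, a, t = 0.0, 3.0, 2.0, 4.0   # illustrative values

x = x0 + v0 * t + 0.5 * a * t**2
v = v0 + a * t
print(x, v)                                     # 28.0, 11.0
print(abs(v**2 - (v0**2 + 2 * a * (x - x0))))   # 0.0: the time-free relation holds
```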
Particle trajectories in cylindrical-polar coordinates
It is often convenient to formulate the trajectory of a particle r(t) = (x(t), y(t), z(t)) using polar coordinates in the X–Y plane. In this case, its velocity and acceleration take a convenient form.
Recall that the trajectory of a particle P is defined by its coordinate vector r measured in a fixed reference frame F. As the particle moves, its coordinate vector r(t) traces its trajectory, which is a curve in space, given by:
where x̂, ŷ, and ẑ are the unit vectors along the x, y and z axes of the reference frame F, respectively.
Consider a particle P that moves only on the surface of a circular cylinder, so that r(t) = constant; it is possible to align the z axis of the fixed frame F with the axis of the cylinder. Then, the angle θ around this axis in the x–y plane can be used to define the trajectory as,
where the constant distance from the center is denoted as r, and θ(t) is a function of time.
The cylindrical coordinates for r(t) can be simplified by introducing the radial and tangential unit vectors,
and their time derivatives from elementary calculus:
Using this notation, r(t) takes the form,
In general, the trajectory r(t) is not constrained to lie on a circular cylinder, so the radius R varies with time and the trajectory of the particle in cylindrical-polar coordinates becomes:
where r, θ, and z might be continuously differentiable functions of time and the function notation is dropped for simplicity. The velocity vector vP is the time derivative of the trajectory r(t), which yields:
Similarly, the acceleration aP, which is the time derivative of the velocity vP, is given by:
The term −r(dθ/dt)², directed along the inward radial direction, acts toward the center of curvature of the path at that point on the path and is commonly called the centripetal acceleration. The term 2(dr/dt)(dθ/dt), along the tangential direction, is called the Coriolis acceleration.
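These terms can be exhibited symbolically; the following sketch (not from the text, using SymPy as an assumed tool) differentiates x = r cos θ, y = r sin θ twice and projects onto the radial and tangential directions:

```python
import sympy as sp

# Derive the polar-coordinate acceleration components to exhibit the
# centripetal (-r*theta'**2) and Coriolis (2*r'*theta') terms.
t = sp.symbols('t')
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)

x = r * sp.cos(theta)
y = r * sp.sin(theta)
a = sp.Matrix([sp.diff(x, t, 2), sp.diff(y, t, 2)])   # Cartesian acceleration

e_r = sp.Matrix([sp.cos(theta), sp.sin(theta)])       # radial unit vector
e_t = sp.Matrix([-sp.sin(theta), sp.cos(theta)])      # tangential unit vector

print(sp.simplify(a.dot(e_r)))   # r'' - r*theta'**2        (radial component)
print(sp.simplify(a.dot(e_t)))   # r*theta'' + 2*r'*theta'  (tangential component)
```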
Constant radius
If the trajectory of the particle is constrained to lie on a cylinder, then the radius r is constant and the velocity and acceleration vectors simplify. The velocity of vP is the time derivative of the trajectory r(t),
Planar circular trajectories
A special case of a particle trajectory on a circular cylinder occurs when there is no movement along the z axis:
where r and z0 are constants. In this case, the velocity vP is given by:
where is the angular velocity of the unit vector around the z axis of the cylinder.
The acceleration aP of the particle P is now given by:
The components
are called, respectively, the radial and tangential components of acceleration.
The notation for angular velocity and angular acceleration is often defined as
so the radial and tangential acceleration components for circular trajectories are also written as
Point trajectories in a body moving in the plane
The movement of components of a mechanical system are analyzed by attaching a reference frame to each part and determining how the various reference frames move relative to each other. If the structural stiffness of the parts are sufficient, then their deformation can be neglected and rigid transformations can be used to define this relative movement. This reduces the description of the motion of the various parts of a complicated mechanical system to a problem of describing the geometry of each part and geometric association of each part relative to other parts.
Geometry is the study of the properties of figures that remain the same while the space is transformed in various ways—more technically, it is the study of invariants under a set of transformations. These transformations can cause the displacement of a triangle in the plane, while leaving the vertex angles and the distances between vertices unchanged. Kinematics is often described as applied geometry, where the movement of a mechanical system is described using the rigid transformations of Euclidean geometry.
The coordinates of points in a plane are two-dimensional vectors in R2 (two dimensional space). Rigid transformations are those that preserve the distance between any two points. The set of rigid transformations in an n-dimensional space is called the special Euclidean group on Rn, and denoted SE(n).
Displacements and motion
The position of one component of a mechanical system relative to another is defined by introducing a reference frame, say M, on one that moves relative to a fixed frame, F, on the other. The rigid transformation, or displacement, of M relative to F defines the relative position of the two components. A displacement consists of the combination of a rotation and a translation.
The set of all displacements of M relative to F is called the configuration space of M. A smooth curve from one position to another in this configuration space is a continuous set of displacements, called the motion of M relative to F. The motion of a body consists of a continuous set of rotations and translations.
Matrix representation
The combination of a rotation and translation in the plane R2 can be represented by a certain type of 3×3 matrix known as a homogeneous transform. The 3×3 homogeneous transform is constructed from a 2×2 rotation matrix A(φ) and the 2×1 translation vector d = (dx, dy), as:
These homogeneous transforms perform rigid transformations on the points in the plane z = 1, that is, on points with coordinates r = (x, y, 1).
In particular, let r define the coordinates of points in a reference frame M coincident with a fixed frame F. Then, when the origin of M is displaced by the translation vector d relative to the origin of F and rotated by the angle φ relative to the x-axis of F, the new coordinates in F of points in M are given by:
Homogeneous transforms represent affine transformations. This formulation is necessary because a translation is not a linear transformation of R2. However, using projective geometry, so that R2 is considered a subset of R3, translations become affine linear transformations.
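A small numerical sketch of such a planar homogeneous transform (the angle and translation below are illustrative choices, not from the text):

```python
import numpy as np

# Planar homogeneous transform [A(phi) | d; 0 0 1] acting on a point (x, y, 1).
def homogeneous_transform(phi, dx, dy):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

T = homogeneous_transform(np.pi / 2, 1.0, 2.0)   # rotate 90 degrees, translate by (1, 2)
p = np.array([1.0, 0.0, 1.0])                    # a point expressed in the moving frame M
print(T @ p)                                     # -> [1., 3., 1.]: coordinates in the fixed frame F
```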
Pure translation
If a rigid body moves so that its reference frame M does not rotate (θ = 0) relative to the fixed frame F, the motion is called pure translation. In this case, the trajectory of every point in the body is an offset of the trajectory d(t) of the origin of M, that is:
Thus, for bodies in pure translation, the velocity and acceleration of every point P in the body are given by:
where the dot denotes the derivative with respect to time and vO and aO are the velocity and acceleration, respectively, of the origin of the moving frame M. Recall the coordinate vector p in M is constant, so its derivative is zero.
Rotation of a body around a fixed axis
Rotational or angular kinematics is the description of the rotation of an object. In what follows, attention is restricted to simple rotation about an axis of fixed orientation. The z-axis has been chosen for convenience.
Position
This allows the description of a rotation as the angular position of a planar reference frame M relative to a fixed F about this shared z-axis. Coordinates p = (x, y) in M are related to coordinates P = (X, Y) in F by the matrix equation:
where
is the rotation matrix that defines the angular position of M relative to F as a function of time.
Velocity
If the point p does not move in M, its velocity in F is given by
It is convenient to eliminate the coordinates p and write this as an operation on the trajectory P(t),
where the matrix
is known as the angular velocity matrix of M relative to F. The parameter ω is the time derivative of the angle θ, that is:
Acceleration
The acceleration of P(t) in F is obtained as the time derivative of the velocity,
which becomes
where
is the angular acceleration matrix of M on F, and
The description of rotation then involves these three quantities:
Angular position: the oriented distance from a selected origin on the rotational axis to a point of an object is a vector r(t) locating the point. The vector r(t) has some projection (or, equivalently, some component) r⊥(t) on a plane perpendicular to the axis of rotation. Then the angular position of that point is the angle θ from a reference axis (typically the positive x-axis) to the vector r⊥(t) in a known rotation sense (typically given by the right-hand rule).
Angular velocity: the angular velocity ω is the rate at which the angular position θ changes with respect to time t: The angular velocity is represented in Figure 1 by a vector Ω pointing along the axis of rotation with magnitude ω and sense determined by the direction of rotation as given by the right-hand rule.
Angular acceleration: the magnitude of the angular acceleration α is the rate at which the angular velocity ω changes with respect to time t:
The equations of translational kinematics can easily be extended to planar rotational kinematics for constant angular acceleration with simple variable exchanges:
$$\omega_f = \omega_i + \alpha t, \qquad \theta_f = \theta_i + \omega_i t + \tfrac{1}{2}\alpha t^2, \qquad \omega_f^2 = \omega_i^2 + 2\alpha(\theta_f - \theta_i).$$
Here θi and θf are, respectively, the initial and final angular positions, ωi and ωf are, respectively, the initial and final angular velocities, and α is the constant angular acceleration. Although position in space and velocity in space are both true vectors (in terms of their properties under rotation), as is angular velocity, angle itself is not a true vector.
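These relations are easy to evaluate directly; the short Python sketch below (illustrative only, with arbitrary sample values for the initial angular velocity, angular acceleration and elapsed time) also checks the third relation for consistency:

def rotate_const_alpha(theta_i, omega_i, alpha, t):
    """Planar rotational kinematics for constant angular acceleration alpha."""
    omega_f = omega_i + alpha * t
    theta_f = theta_i + omega_i * t + 0.5 * alpha * t**2
    return theta_f, omega_f

theta_f, omega_f = rotate_const_alpha(theta_i=0.0, omega_i=2.0, alpha=0.5, t=4.0)
print(theta_f, omega_f)                              # -> 12.0 rad, 4.0 rad/s
# Consistency check: omega_f**2 == omega_i**2 + 2*alpha*(theta_f - theta_i)
print(abs(omega_f**2 - (2.0**2 + 2 * 0.5 * theta_f)) < 1e-12)   # -> True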
Point trajectories in body moving in three dimensions
Important formulas in kinematics define the velocity and acceleration of points in a moving body as they trace trajectories in three-dimensional space. This is particularly important for the center of mass of a body, which is used to derive equations of motion using either Newton's second law or Lagrange's equations.
Position
In order to define these formulas, the movement of a component B of a mechanical system is defined by the set of rotations [A(t)] and translations d(t) assembled into the homogeneous transformation [T(t)]=[A(t), d(t)]. If p is the coordinates of a point P in B measured in the moving reference frame M, then the trajectory of this point traced in F is given by:
This notation does not distinguish between P = (X, Y, Z, 1) and P = (X, Y, Z); the intended meaning should be clear from context.
This equation for the trajectory of P can be inverted to compute the coordinate vector p in M as:
$$\mathbf{p} = [A(t)]^{\mathsf T}\left(\mathbf{P}(t) - \mathbf{d}(t)\right).$$
This expression uses the fact that the transpose of a rotation matrix is also its inverse, that is:
$$[A(t)]^{\mathsf T}[A(t)] = I.$$
Velocity
The velocity of the point P along its trajectory P(t) is obtained as the time derivative of this position vector,
The dot denotes the derivative with respect to time; because p is constant, its derivative is zero.
This formula can be modified to obtain the velocity of P by operating on its trajectory P(t) measured in the fixed frame F. Substituting the inverse transform for p into the velocity equation yields:
The matrix [S] is given by:
where
is the angular velocity matrix.
Multiplying by the operator [S], the formula for the velocity vP takes the form:
$$\mathbf{v}_P = \boldsymbol{\omega} \times \mathbf{R}_{P/O} + \mathbf{v}_O,$$
where the vector ω is the angular velocity vector obtained from the components of the matrix [Ω]; the vector
$$\mathbf{R}_{P/O} = \mathbf{P} - \mathbf{d}$$
is the position of P relative to the origin O of the moving frame M; and
$$\mathbf{v}_O = \dot{\mathbf{d}}$$
is the velocity of the origin O.
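A minimal numerical sketch of this velocity formula (illustrative Python only; the angular velocity, origin velocity and relative position below are arbitrary sample data expressed in fixed-frame components):

import numpy as np

omega = np.array([0.0, 0.0, 3.0])   # angular velocity vector of the body (rad/s)
v_O   = np.array([1.0, 0.0, 0.0])   # velocity of the moving-frame origin O (m/s)
R_PO  = np.array([0.0, 2.0, 0.0])   # position of P relative to O (m)

v_P = np.cross(omega, R_PO) + v_O   # v_P = omega x R_{P/O} + v_O
print(v_P)                          # -> [-5.  0.  0.]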
Acceleration
The acceleration of a point P in a moving body B is obtained as the time derivative of its velocity vector:
This equation can be expanded firstly by computing
and
The formula for the acceleration AP can now be obtained as:
or
$$\mathbf{A}_P = \boldsymbol{\alpha} \times \mathbf{R}_{P/O} + \boldsymbol{\omega} \times (\boldsymbol{\omega} \times \mathbf{R}_{P/O}) + \mathbf{A}_O,$$
where α is the angular acceleration vector obtained from the derivative of the angular velocity matrix;
$$\mathbf{R}_{P/O} = \mathbf{P} - \mathbf{d}$$
is the relative position vector (the position of P relative to the origin O of the moving frame M); and
$$\mathbf{A}_O = \ddot{\mathbf{d}}$$
is the acceleration of the origin of the moving frame M.
Kinematic constraints
Kinematic constraints are constraints on the movement of components of a mechanical system. Kinematic constraints can be considered to have two basic forms, (i) constraints that arise from hinges, sliders and cam joints that define the construction of the system, called holonomic constraints, and (ii) constraints imposed on the velocity of the system such as the knife-edge constraint of ice-skates on a flat plane, or rolling without slipping of a disc or sphere in contact with a plane, which are called non-holonomic constraints. The following are some common examples.
Kinematic coupling
A kinematic coupling exactly constrains all 6 degrees of freedom.
Rolling without slipping
An object that rolls against a surface without slipping obeys the condition that the velocity of its center of mass is equal to the cross product of its angular velocity with a vector from the point of contact to the center of mass:
$$\mathbf{v}_G = \boldsymbol{\omega} \times \mathbf{r}_{G/O}.$$
For the case of an object that does not tip or turn, this reduces to $v = r\omega$.
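As a small worked illustration (a Python sketch with assumed values: a wheel of radius 0.3 m spinning at 10 rad/s about the +y axis, rolling on the plane z = 0, so the vector from the contact point to the center points along +z):

import numpy as np

r = 0.3                                        # wheel radius (m)
omega = np.array([0.0, 10.0, 0.0])             # angular velocity (rad/s)
r_contact_to_center = np.array([0.0, 0.0, r])  # from the contact point to the center

v_center = np.cross(omega, r_contact_to_center)
print(v_center)                                # -> [3. 0. 0.]
print(np.linalg.norm(v_center), r * 10.0)      # both 3.0, i.e. |v| = r*omega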
Inextensible cord
This is the case where bodies are connected by an idealized cord that remains in tension and cannot change length. The constraint is that the sum of lengths of all segments of the cord is the total length, and accordingly the time derivative of this sum is zero. A dynamic problem of this type is the pendulum. Another example is a drum turned by the pull of gravity upon a falling weight attached to the rim by the inextensible cord. An equilibrium problem (i.e. not kinematic) of this type is the catenary.
Kinematic pairs
Reuleaux called the ideal connections between components that form a machine kinematic pairs. He distinguished between higher pairs which were said to have line contact between the two links and lower pairs that have area contact between the links. J. Phillips shows that there are many ways to construct pairs that do not fit this simple classification.
Lower pair
A lower pair is an ideal joint, or holonomic constraint, that maintains contact between a point, line or plane in a moving solid (three-dimensional) body to a corresponding point line or plane in the fixed solid body. There are the following cases:
A revolute pair, or hinged joint, requires a line, or axis, in the moving body to remain co-linear with a line in the fixed body, and a plane perpendicular to this line in the moving body maintain contact with a similar perpendicular plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom, which is pure rotation about the axis of the hinge.
A prismatic joint, or slider, requires that a line, or axis, in the moving body remain co-linear with a line in the fixed body, and a plane parallel to this line in the moving body maintain contact with a similar parallel plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom. This degree of freedom is the distance of the slide along the line.
A cylindrical joint requires that a line, or axis, in the moving body remain co-linear with a line in the fixed body. It is a combination of a revolute joint and a sliding joint. This joint has two degrees of freedom. The position of the moving body is defined by both the rotation about and slide along the axis.
A spherical joint, or ball joint, requires that a point in the moving body maintain contact with a point in the fixed body. This joint has three degrees of freedom.
A planar joint requires that a plane in the moving body maintain contact with a plane in the fixed body. This joint has three degrees of freedom.
Higher pairs
Generally speaking, a higher pair is a constraint that requires a curve or surface in the moving body to maintain contact with a curve or surface in the fixed body. For example, the contact between a cam and its follower is a higher pair called a cam joint. Similarly, the contact between the involute curves that form the meshing teeth of two gears are cam joints.
Kinematic chains
Rigid bodies ("links") connected by kinematic pairs ("joints") are known as kinematic chains. Mechanisms and robots are examples of kinematic chains. The degree of freedom of a kinematic chain is computed from the number of links and the number and type of joints using the mobility formula. This formula can also be used to enumerate the topologies of kinematic chains that have a given degree of freedom, which is known as type synthesis in machine design.
Examples
The planar one degree-of-freedom linkages assembled from N links and j hinges or sliding joints are:
N = 2, j = 1 : a two-bar linkage that is the lever;
N = 4, j = 4 : the four-bar linkage;
N = 6, j = 7 : a six-bar linkage. This must have two links ("ternary links") that support three joints. There are two distinct topologies that depend on how the two ternary links are connected. In the Watt topology, the two ternary links have a common joint; in the Stephenson topology, the two ternary links do not have a common joint and are connected by binary links.
N = 8, j = 10 : eight-bar linkage with 16 different topologies;
N = 10, j = 13 : ten-bar linkage with 230 different topologies;
N = 12, j = 16 : twelve-bar linkage with 6,856 topologies.
For larger chains and their linkage topologies, see R. P. Sunkari and L. C. Schmidt, "Structural synthesis of planar kinematic chains by adapting a Mckay-type algorithm", Mechanism and Machine Theory #41, pp. 1021–1030 (2006).
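The mobility formula mentioned above can be checked against the linkages just listed. The Python sketch below is only an illustration and assumes the planar Chebychev–Grübler–Kutzbach form M = 3(N − 1) − 2j with every joint a one-degree-of-freedom hinge or slider (an assumption consistent with, but not stated in, the list):

def planar_mobility(N, j):
    """Chebychev-Grubler-Kutzbach mobility of a planar chain with N links and j one-DOF joints."""
    return 3 * (N - 1) - 2 * j

for N, j in [(2, 1), (4, 4), (6, 7), (8, 10), (10, 13), (12, 16)]:
    print(N, j, planar_mobility(N, j))   # every listed case prints mobility 1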
See also
Absement
Acceleration
Analytical mechanics
Applied mechanics
Celestial mechanics
Centripetal force
Classical mechanics
Distance
Dynamics (physics)
Fictitious force
Forward kinematics
Four-bar linkage
Inverse kinematics
Jerk (physics)
Kepler's laws
Kinematic coupling
Kinematic diagram
Kinematic synthesis
Kinetics (physics)
Motion (physics)
Orbital mechanics
Statics
Velocity
Integral kinematics
Chebychev–Grübler–Kutzbach criterion
References
Further reading
Eduard Study (1913) D.H. Delphenich translator, "Foundations and goals of analytical kinematics".
External links
Java applet of 1D kinematics
Physclips: Mechanics with animations and video clips from the University of New South Wales.
Kinematic Models for Design Digital Library (KMODDL), featuring movies and photos of hundreds of working models of mechanical systems at Cornell University and an e-book library of classic texts on mechanical design and engineering.
Micro-Inch Positioning with Kinematic Components
Classical mechanics
Mechanisms (engineering) | 0.799292 | 0.99747 | 0.79727 |
Poynting vector | In physics, the Poynting vector (or Umov–Poynting vector) represents the directional energy flux (the energy transfer per unit area, per unit time) or power flow of an electromagnetic field. The SI unit of the Poynting vector is the watt per square metre (W/m2); kg/s3 in base SI units. It is named after its discoverer John Henry Poynting who first derived it in 1884. Nikolay Umov is also credited with formulating the concept. Oliver Heaviside also discovered it independently in the more general form that recognises the freedom of adding the curl of an arbitrary vector field to the definition. The Poynting vector is used throughout electromagnetics in conjunction with Poynting's theorem, the continuity equation expressing conservation of electromagnetic energy, to calculate the power flow in electromagnetic fields.
Definition
In Poynting's original paper and in most textbooks, the Poynting vector is defined as the cross product
$$\mathbf{S} = \mathbf{E} \times \mathbf{H},$$
where bold letters represent vectors and
E is the electric field vector;
H is the magnetic field's auxiliary field vector or magnetizing field.
This expression is often called the Abraham form and is the most widely used. The Poynting vector is usually denoted by S or N.
In simple terms, the Poynting vector S depicts the direction and rate of transfer of energy, that is power, due to electromagnetic fields in a region of space that may or may not be empty. More rigorously, it is the quantity that must be used to make Poynting's theorem valid. Poynting's theorem essentially says that the difference between the electromagnetic energy entering a region and the electromagnetic energy leaving a region must equal the energy converted or dissipated in that region, that is, turned into a different form of energy (often heat). So if one accepts the validity of the Poynting vector description of electromagnetic energy transfer, then Poynting's theorem is simply a statement of the conservation of energy.
If electromagnetic energy is not gained from or lost to other forms of energy within some region (e.g., mechanical energy, or heat), then electromagnetic energy is locally conserved within that region, yielding a continuity equation as a special case of Poynting's theorem:
where u is the energy density of the electromagnetic field. This frequent condition holds in the following simple example in which the Poynting vector is calculated and seen to be consistent with the usual computation of power in an electric circuit.
Example: Power flow in a coaxial cable
Although problems in electromagnetics with arbitrary geometries are notoriously difficult to solve, we can find a relatively simple solution in the case of power transmission through a section of coaxial cable analyzed in cylindrical coordinates as depicted in the accompanying diagram. We can take advantage of the model's symmetry: no dependence on θ (circular symmetry) nor on Z (position along the cable). The model (and solution) can be considered simply as a DC circuit with no time dependence, but the following solution applies equally well to the transmission of radio frequency power, as long as we are considering an instant of time (during which the voltage and current don't change), and over a sufficiently short segment of cable (much smaller than a wavelength, so that these quantities are not dependent on Z).
The coaxial cable is specified as having an inner conductor of radius R1 and an outer conductor whose inner radius is R2 (its thickness beyond R2 doesn't affect the following analysis). In between R1 and R2 the cable contains an ideal dielectric material of relative permittivity εr and we assume conductors that are non-magnetic (so μ = μ0) and lossless (perfect conductors), all of which are good approximations to real-world coaxial cable in typical situations.
The center conductor is held at voltage V and draws a current I toward the right, so we expect a total power flow of P = V · I according to basic laws of electricity. By evaluating the Poynting vector, however, we are able to identify the profile of power flow in terms of the electric and magnetic fields inside the coaxial cable. The electric fields are of course zero inside of each conductor, but in between the conductors symmetry dictates that they are strictly in the radial direction and it can be shown (using Gauss's law) that they must obey the following form:
$$E(r) = \frac{W}{r}.$$
W can be evaluated by integrating the electric field from r = R1 to r = R2, which must be the negative of the voltage V:
so that:
The magnetic field, again by symmetry, can only be non-zero in the θ direction, that is, a vector field looping around the center conductor at every radius between R1 and R2. Inside the conductors themselves the magnetic field may or may not be zero, but this is of no concern since the Poynting vector in these regions is zero due to the electric field's being zero. Outside the entire coaxial cable, the magnetic field is identically zero since paths in this region enclose a net current of zero (+I in the center conductor and −I in the outer conductor), and again the electric field is zero there anyway. Using Ampère's law in the region from R1 to R2, which encloses the current +I in the center conductor but with no contribution from the current in the outer conductor, we find at radius r:
$$H(r) = \frac{I}{2\pi r}.$$
Now, from an electric field in the radial direction, and a tangential magnetic field, the Poynting vector, given by the cross-product of these, is only non-zero in the Z direction, along the direction of the coaxial cable itself, as we would expect. Again only a function of r, we can evaluate S(r):
where W is given above in terms of the center conductor voltage V. The total power flowing down the coaxial cable can be computed by integrating over the entire cross section A of the cable in between the conductors:
Substituting the earlier solution for the constant W we find:
that is, the power given by integrating the Poynting vector over a cross section of the coaxial cable is exactly equal to the product of voltage and current as one would have computed for the power delivered using basic laws of electricity.
Other similar examples in which the P = V · I result can be analytically calculated are: the parallel-plate transmission line, using Cartesian coordinates, and the two-wire transmission line, using bipolar cylindrical coordinates.
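A short numerical check of the coaxial-cable result (an illustrative Python sketch; the values of V, I, R1 and R2 are arbitrary, and the field profiles E = V/(r ln(R2/R1)) and H = I/(2πr) are the standard forms consistent with the analysis above):

import numpy as np
from scipy.integrate import quad

V, I = 100.0, 2.0            # volts, amperes (arbitrary example values)
R1, R2 = 1e-3, 3e-3          # inner and outer radii in metres

def S(r):
    """Axial Poynting vector magnitude between the conductors."""
    E = V / (r * np.log(R2 / R1))    # radial electric field
    H = I / (2 * np.pi * r)          # circumferential magnetic field
    return E * H

P, _ = quad(lambda r: S(r) * 2 * np.pi * r, R1, R2)   # integrate over the annular cross section
print(P, V * I)              # both print 200.0 watts, i.e. P = V*I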
Other forms
In the "microscopic" version of Maxwell's equations, this definition must be replaced by a definition in terms of the electric field E and the magnetic flux density B (described later in the article).
It is also possible to combine the electric displacement field D with the magnetic flux density B to get the Minkowski form of the Poynting vector, or use D and H to construct yet another version. The choice has been controversial: Pfeifer et al. summarize and to a certain extent resolve the century-long dispute between proponents of the Abraham and Minkowski forms (see Abraham–Minkowski controversy).
The Poynting vector represents the particular case of an energy flux vector for electromagnetic energy. However, any type of energy has its direction of movement in space, as well as its density, so energy flux vectors can be defined for other types of energy as well, e.g., for mechanical energy. The Umov–Poynting vector discovered by Nikolay Umov in 1874 describes energy flux in liquid and elastic media in a completely generalized view.
Interpretation
The Poynting vector appears in Poynting's theorem (see that article for the derivation), an energy-conservation law:
$$\frac{\partial u}{\partial t} = -\nabla \cdot \mathbf{S} - \mathbf{J}_f \cdot \mathbf{E},$$
where Jf is the current density of free charges and u is the electromagnetic energy density for linear, nondispersive materials, given by
$$u = \frac{1}{2}\left(\mathbf{E} \cdot \mathbf{D} + \mathbf{B} \cdot \mathbf{H}\right),$$
where
E is the electric field;
D is the electric displacement field;
B is the magnetic flux density;
H is the magnetizing field.
The first term in the right-hand side represents the electromagnetic energy flow into a small volume, while the second term subtracts the work done by the field on free electrical currents, which thereby exits from electromagnetic energy as dissipation, heat, etc. In this definition, bound electrical currents are not included in this term and instead contribute to S and u.
For light in free space, the linear momentum density is
$$\mathbf{g} = \frac{\mathbf{S}}{c^2}.$$
For linear, nondispersive and isotropic (for simplicity) materials, the constitutive relations can be written as
$$\mathbf{D} = \varepsilon \mathbf{E}, \qquad \mathbf{B} = \mu \mathbf{H},$$
where
ε is the permittivity of the material;
μ is the permeability of the material.
Here ε and μ are scalar, real-valued constants independent of position, direction, and frequency.
In principle, this limits Poynting's theorem in this form to fields in vacuum and nondispersive linear materials. A generalization to dispersive materials is possible under certain circumstances at the cost of additional terms.
One consequence of the Poynting formula is that for the electromagnetic field to do work, both magnetic and electric fields must be present. The magnetic field alone or the electric field alone cannot do any work.
Plane waves
In a propagating electromagnetic plane wave in an isotropic lossless medium, the instantaneous Poynting vector always points in the direction of propagation while rapidly oscillating in magnitude. This can be simply seen given that in a plane wave, the magnitude of the magnetic field H(r,t) is given by the magnitude of the electric field vector E(r,t) divided by η, the intrinsic impedance of the transmission medium:
where |A| represents the vector norm of A. Since E and H are at right angles to each other, the magnitude of their cross product is the product of their magnitudes. Without loss of generality let us take X to be the direction of the electric field and Y to be the direction of the magnetic field. The instantaneous Poynting vector, given by the cross product of E and H will then be in the positive Z direction:
Finding the time-averaged power in the plane wave then requires averaging over the wave period (the inverse of the wave's frequency):
$$\langle S \rangle = \frac{1}{T}\int_0^T \frac{E^2(t)}{\eta}\, dt = \frac{E_\text{rms}^2}{\eta},$$
where Erms is the root mean square (RMS) electric field amplitude. In the important case that E(t) is sinusoidally varying at some frequency with peak amplitude Epeak, Erms is $E_\text{peak}/\sqrt{2}$, with the average Poynting vector then given by:
$$\langle S \rangle = \frac{E_\text{peak}^2}{2\eta}.$$
This is the most common form for the energy flux of a plane wave, since sinusoidal field amplitudes are most often expressed in terms of their peak values, and complicated problems are typically solved considering only one frequency at a time. However, the expression using Erms is totally general, applying, for instance, in the case of noise whose RMS amplitude can be measured but where the "peak" amplitude is meaningless. In free space the intrinsic impedance η is simply given by the impedance of free space η0 ≈ 377 Ω. In non-magnetic dielectrics (such as all transparent materials at optical frequencies) with a specified dielectric constant εr, or in optics with a material whose refractive index is n = √εr, the intrinsic impedance is found as:
$$\eta = \frac{\eta_0}{\sqrt{\varepsilon_r}} = \frac{\eta_0}{n}.$$
In optics, the value of radiated flux crossing a surface, thus the average Poynting vector component in the direction normal to that surface, is technically known as the irradiance, more often simply referred to as the intensity (a somewhat ambiguous term).
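A minimal numerical sketch of the time-averaged plane-wave flux (illustrative Python; the peak field of 100 V/m and the glass index n = 1.5 are arbitrary example values):

import numpy as np

eta_0 = 376.73               # impedance of free space, ohms
E_peak = 100.0               # peak electric field of the wave, V/m

E_rms = E_peak / np.sqrt(2)
S_avg = E_rms**2 / eta_0     # time-averaged Poynting vector magnitude, W/m^2
print(S_avg)                 # -> about 13.3 W/m^2

# The same peak field inside a non-magnetic dielectric with n = 1.5: eta = eta_0 / n
n = 1.5
print(E_rms**2 / (eta_0 / n))   # -> about 19.9 W/m^2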
Formulation in terms of microscopic fields
The "microscopic" (differential) version of Maxwell's equations admits only the fundamental fields E and B, without a built-in model of material media. Only the vacuum permittivity and permeability are used, and there is no D or H. When this model is used, the Poynting vector is defined as
where
μ0 is the vacuum permeability;
E is the electric field vector;
B is the magnetic flux density.
This is actually the general expression of the Poynting vector. The corresponding form of Poynting's theorem is
$$\frac{\partial u}{\partial t} = -\nabla \cdot \mathbf{S} - \mathbf{J} \cdot \mathbf{E},$$
where J is the total current density and the energy density u is given by
$$u = \frac{1}{2}\left(\varepsilon_0 E^2 + \frac{B^2}{\mu_0}\right),$$
where ε0 is the vacuum permittivity. It can be derived directly from Maxwell's equations in terms of total charge and current and the Lorentz force law only.
The two alternative definitions of the Poynting vector are equal in vacuum or in non-magnetic materials, where B = μ0H. In all other cases, they differ in that the E × B/μ0 form and its corresponding u are purely radiative, since the dissipation term −J ⋅ E covers the total current, while the E × H definition has contributions from bound currents which are then excluded from the dissipation term.
Since only the microscopic fields E and B occur in the derivation of E × B/μ0 and the energy density, assumptions about any material present are avoided. The Poynting vector and theorem and expression for energy density are universally valid in vacuum and all materials.
Time-averaged Poynting vector
The above form for the Poynting vector represents the instantaneous power flow due to instantaneous electric and magnetic fields. More commonly, problems in electromagnetics are solved in terms of sinusoidally varying fields at a specified frequency. The results can then be applied more generally, for instance, by representing incoherent radiation as a superposition of such waves at different frequencies and with fluctuating amplitudes.
We would thus not be considering the instantaneous E(t) and H(t) used above, but rather a complex (vector) amplitude for each which describes a coherent wave's phase (as well as amplitude) using phasor notation. These complex amplitude vectors are not functions of time, as they are understood to refer to oscillations over all time. A phasor such as Em is understood to signify a sinusoidally varying field whose instantaneous amplitude follows the real part of Em e^{jωt}, where ω is the (radian) frequency of the sinusoidal wave being considered.
In the time domain, it will be seen that the instantaneous power flow will be fluctuating at a frequency of 2ω. But what is normally of interest is the average power flow in which those fluctuations are not considered. In the math below, this is accomplished by integrating over a full cycle T = 2π/ω. The following quantity, still referred to as a "Poynting vector", is expressed directly in terms of the phasors as:
$$\mathbf{S}_m = \tfrac{1}{2}\,\mathbf{E}_m \times \mathbf{H}_m^{*},$$
where ∗ denotes the complex conjugate. The time-averaged power flow (according to the instantaneous Poynting vector averaged over a full cycle, for instance) is then given by the real part of Sm. The imaginary part is usually ignored, however, it signifies "reactive power" such as the interference due to a standing wave or the near field of an antenna. In a single electromagnetic plane wave (rather than a standing wave which can be described as two such waves travelling in opposite directions), Em and Hm are exactly in phase, so Sm is simply a real number according to the above definition.
The equivalence of the real part of Sm to the time-average of the instantaneous Poynting vector can be shown as follows.
The average of the instantaneous Poynting vector S over time is given by:
The second term is the double-frequency component having an average value of zero, so we find:
According to some conventions, the factor of 1/2 in the above definition may be left out. Multiplication by 1/2 is required to properly describe the power flow since the magnitudes of Em and Hm refer to the peak fields of the oscillating quantities. If rather the fields are described in terms of their root mean square (RMS) values (which are each smaller by a factor of √2), then the correct average power flow is obtained without multiplication by 1/2.
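A minimal sketch of the phasor form in Python (illustrative only; the assumed example is a single plane wave with E along x, H along y, in phase, using the peak-value convention):

import numpy as np

E_m = np.array([10.0 + 0j, 0, 0])          # electric-field phasor, V/m
H_m = np.array([0, 10.0 / 377 + 0j, 0])    # magnetic-field phasor, A/m, |H| = |E|/eta_0

S_m = 0.5 * np.cross(E_m, np.conj(H_m))    # complex Poynting vector
print(S_m.real)                            # time-averaged power flow, ~0.133 W/m^2 along z
print(S_m.imag)                            # zero here: no reactive power for a single plane wave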
Resistive dissipation
If a conductor has significant resistance, then, near the surface of that conductor, the Poynting vector would be tilted toward and impinge upon the conductor. Once the Poynting vector enters the conductor, it is bent to a direction that is almost perpendicular to the surface. This is a consequence of Snell's law and the very slow speed of light inside a conductor. The definition and computation of the speed of light in a conductor can be given. Inside the conductor, the Poynting vector represents energy flow from the electromagnetic field into the wire, producing resistive Joule heating in the wire. For a derivation that starts with Snell's law see Reitz page 454.
Radiation pressure
The density of the linear momentum of the electromagnetic field is S/c2 where S is the magnitude of the Poynting vector and c is the speed of light in free space. The radiation pressure exerted by an electromagnetic wave on the surface of a target (assuming the wave is completely absorbed) is given by
$$P_\text{rad} = \frac{\langle S \rangle}{c}.$$
Uniqueness of the Poynting vector
The Poynting vector occurs in Poynting's theorem only through its divergence ∇ ⋅ S, that is, it is only required that the surface integral of the Poynting vector around a closed surface describe the net flow of electromagnetic energy into or out of the enclosed volume. This means that adding a solenoidal vector field (one with zero divergence) to S will result in another field that satisfies this required property of a Poynting vector field according to Poynting's theorem. Since the divergence of any curl is zero, one can add the curl of any vector field to the Poynting vector and the resulting vector field S′ will still satisfy Poynting's theorem.
However even though the Poynting vector was originally formulated only for the sake of Poynting's theorem in which only its divergence appears, it turns out that the above choice of its form is unique. The following section gives an example which illustrates why it is not acceptable to add an arbitrary solenoidal field to E × H.
Static fields
The consideration of the Poynting vector in static fields shows the relativistic nature of the Maxwell equations and allows a better understanding of the magnetic component of the Lorentz force, qv × B. To illustrate, the accompanying picture is considered, which describes the Poynting vector in a cylindrical capacitor, which is located in an H field (pointing into the page) generated by a permanent magnet. Although there are only static electric and magnetic fields, the calculation of the Poynting vector produces a clockwise circular flow of electromagnetic energy, with no beginning or end.
While the circulating energy flow may seem unphysical, its existence is necessary to maintain conservation of angular momentum. The momentum of an electromagnetic wave in free space is equal to its power divided by c, the speed of light. Therefore, the circular flow of electromagnetic energy implies an angular momentum. If one were to connect a wire between the two plates of the charged capacitor, then there would be a Lorentz force on that wire while the capacitor is discharging due to the discharge current and the crossed magnetic field; that force would be tangential to the central axis and thus add angular momentum to the system. That angular momentum would match the "hidden" angular momentum, revealed by the Poynting vector, circulating before the capacitor was discharged.
See also
Wave vector
References
Further reading
Electromagnetic radiation
Optical quantities
Vectors (mathematics and physics) | 0.799799 | 0.996427 | 0.796941 |
Analytical mechanics | In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics is a collection of closely related formulations of classical mechanics. Analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation.
Analytical mechanics was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Newtonian mechanics considers vector quantities of motion, particularly accelerations, momenta, forces, of the constituents of the system; it can also be called vectorial mechanics. A scalar is a quantity having magnitude only, whereas a vector is a quantity having both magnitude and direction. The results of these two different approaches are equivalent, but the analytical mechanics approach has many advantages for complex problems.
Analytical mechanics takes advantage of a system's constraints to solve problems. The constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in the context as generalized coordinates. The kinetic and potential energies of the system are expressed using these generalized coordinates or momenta, and the equations of motion can be readily set up, thus analytical mechanics allows numerous mechanical problems to be solved with greater efficiency than fully vectorial methods. It does not always work for non-conservative forces or dissipative forces like friction, in which case one may revert to Newtonian mechanics.
Two dominant branches of analytical mechanics are Lagrangian mechanics (using generalized coordinates and corresponding generalized velocities in configuration space) and Hamiltonian mechanics (using coordinates and corresponding momenta in phase space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries.
Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics. Rather it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity, and with some modifications, quantum mechanics and quantum field theory.
Analytical mechanics is used widely, from fundamental physics to applied mathematics, particularly chaos theory.
The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinite degrees of freedom. The definitions and equations have a close analogy with those of mechanics.
Motivation for analytical mechanics
The goal of mechanical theory is to solve mechanical problems, such as arise in physics and engineering. Starting from a physical system—such as a mechanism or a star system—a mathematical model is developed in the form of a differential equation. The model can be solved numerically or analytically to determine the motion of the system.
Newton's vectorial approach to mechanics describes motion with the help of vector quantities such as force, velocity, acceleration. These quantities characterise the motion of a body idealised as a "mass point" or a "particle" understood as a single point to which a mass is attached. Newton's method has been successfully applied to a wide range of physical problems, including the motion of a particle in Earth's gravitational field and the motion of planets around the Sun. In this approach, Newton's laws describe the motion by a differential equation and then the problem is reduced to the solving of that equation.
When a mechanical system contains many particles, however (such as a complex mechanism or a fluid), Newton's approach is difficult to apply. Using a Newtonian approach is possible, under proper precautions, namely isolating each single particle from the others, and determining all the forces acting on it. Such analysis is cumbersome even in relatively simple systems. Newton thought that his third law "action equals reaction" would take care of all complications. This is false even for such a simple system as the rotations of a solid body. In more complicated systems, the vectorial approach cannot give an adequate description.
The analytical approach simplifies problems by treating mechanical systems as ensembles of particles that interact with each other, rather than considering each particle as an isolated unit. In the vectorial approach, forces must be determined individually for each particle, whereas in the analytical approach it is enough to know one single function which contains implicitly all the forces acting on and in the system.
Such simplification is often done using certain kinematic conditions which are stated a priori. However, the analytical treatment does not require the knowledge of these forces and takes these kinematic conditions for granted.
Still, deriving the equations of motion of a complicated mechanical system requires a unifying basis from which they follow. This is provided by various variational principles: behind each set of equations there is a principle that expresses the meaning of the entire set. Given a fundamental and universal quantity called action, the principle that this action be stationary under small variation of some other mechanical quantity generates the required set of differential equations. The statement of the principle does not require any special coordinate system, and all results are expressed in generalized coordinates. This means that the analytical equations of motion do not change upon a coordinate transformation, an invariance property that is lacking in the vectorial equations of motion.
It is not altogether clear what is meant by 'solving' a set of differential equations. A problem is regarded as solved when the particles' coordinates at time t are expressed as simple functions of t and of parameters defining the initial positions and velocities. However, 'simple function' is not a well-defined concept: nowadays, a function f(t) is not regarded as a formal expression in t (elementary function) as in the time of Newton but most generally as a quantity determined by t, and it is not possible to draw a sharp line between 'simple' and 'not simple' functions. If one speaks merely of 'functions', then every mechanical problem is solved as soon as it has been well stated in differential equations, because the initial conditions and t determine the coordinates at t. This is especially true at present, with modern methods of computer modelling providing arithmetical solutions to mechanical problems to any desired degree of accuracy, the differential equations being replaced by difference equations.
Still, though lacking precise definitions, it is obvious that the two-body problem has a simple solution, whereas the three-body problem has not. The two-body problem is solved by formulas involving parameters; their values can be changed to study the class of all solutions, that is, the mathematical structure of the problem. Moreover, an accurate mental or drawn picture can be made for the motion of two bodies, and it can be as real and accurate as the real bodies moving and interacting. In the three-body problem, parameters can also be assigned specific values; however, the solution at these assigned values or a collection of such solutions does not reveal the mathematical structure of the problem. As in many other problems, the mathematical structure can be elucidated only by examining the differential equations themselves.
Analytical mechanics aims at even more: not at understanding the mathematical structure of a single mechanical problem, but that of a class of problems so wide that they encompass most of mechanics. It concentrates on systems to which Lagrangian or Hamiltonian equations of motion are applicable and that include a very wide range of problems indeed.
Development of analytical mechanics has two objectives: (i) increase the range of solvable problems by developing standard techniques with a wide range of applicability, and (ii) understand the mathematical structure of mechanics. In the long run, however, (ii) can help (i) more than a concentration on specific problems for which methods have already been designed.
Intrinsic motion
Generalized coordinates and constraints
In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or other 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the body's motion from taking certain directions and pathways. So a full set of Cartesian coordinates is often unneeded, as the constraints determine the evolving relations among the coordinates, which relations can be modeled by equations corresponding to the constraints. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry, reducing the number of coordinates to the minimum needed to model the motion. These are known as generalized coordinates, denoted qi (i = 1, 2, 3...).
Difference between curvilinear and generalized coordinates
Generalized coordinates incorporate constraints on the system. There is one generalized coordinate qi for each degree of freedom (for convenience labelled by an index i = 1, 2...N), i.e. each way the system can change its configuration; as curvilinear lengths or angles of rotation. Generalized coordinates are not the same as curvilinear coordinates. The number of curvilinear coordinates equals the dimension of the position space in question (usually 3 for 3d space), while the number of generalized coordinates is not necessarily equal to this dimension; constraints can reduce the number of degrees of freedom (hence the number of generalized coordinates required to define the configuration of the system), following the general rule:
[number of generalized coordinates] = [number of position coordinates of all particles] − [number of constraint equations].
For a system with N degrees of freedom, the generalized coordinates can be collected into an N-tuple:
$$\mathbf{q} = (q_1, q_2, \ldots, q_N),$$
and the time derivative (here denoted by an overdot) of this tuple gives the generalized velocities:
$$\dot{\mathbf{q}} = (\dot{q}_1, \dot{q}_2, \ldots, \dot{q}_N).$$
D'Alembert's principle of virtual work
D'Alembert's principle states that infinitesimal virtual work done by a force across reversible displacements is zero, which is the work done by a force consistent with ideal constraints of the system. The idea of a constraint is useful – since this limits what the system can do, and can provide steps to solving for the motion of the system. The equation for D'Alembert's principle is:
$$\delta W = \boldsymbol{\mathcal{Q}} \cdot \delta\mathbf{q} = 0,$$
where
$$\boldsymbol{\mathcal{Q}} = (\mathcal{Q}_1, \mathcal{Q}_2, \ldots, \mathcal{Q}_N)$$
are the generalized forces (script Q instead of ordinary Q is used here to prevent conflict with canonical transformations below) and q = (q1, q2, ..., qN) are the generalized coordinates. This leads to the generalized form of Newton's laws in the language of analytical mechanics:
$$\boldsymbol{\mathcal{Q}} = \frac{d}{dt}\left(\frac{\partial T}{\partial \dot{\mathbf{q}}}\right) - \frac{\partial T}{\partial \mathbf{q}},$$
where T is the total kinetic energy of the system, and the notation
is a useful shorthand (see matrix calculus for this notation).
Constraints
If the curvilinear coordinate system is defined by the standard position vector r, and if the position vector can be written in terms of the generalized coordinates q and time t in the form r = r(q(t), t), and this relation holds for all times t, then such constraints are called holonomic constraints. The position vector r is explicitly dependent on t in cases when the constraints vary with time, not just because of q(t). For time-independent situations, the constraints are also called scleronomic, for time-dependent cases they are called rheonomic.
Lagrangian mechanics
The introduction of generalized coordinates and the fundamental Lagrangian function:
$$L(\mathbf{q}, \dot{\mathbf{q}}, t) = T - V,$$
where T is the total kinetic energy and V is the total potential energy of the entire system, then either following the calculus of variations or using the above formula – lead to the Euler–Lagrange equations;
$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) = \frac{\partial L}{\partial q_i},$$
which are a set of N second-order ordinary differential equations, one for each qi(t).
This formulation identifies the actual path followed by the motion as a selection of the path over which the time integral of kinetic energy is least, assuming the total energy to be fixed, and imposing no conditions on the time of transit.
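As an illustration of setting up the Euler–Lagrange equations, the following Python sketch uses the symbolic library SymPy and assumes a one-dimensional harmonic oscillator as the example system (an assumption for illustration, not drawn from the text):

import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)

L = sp.Rational(1, 2) * m * sp.diff(q, t)**2 - sp.Rational(1, 2) * k * q**2   # L = T - V
eom = sp.diff(sp.diff(L, sp.diff(q, t)), t) - sp.diff(L, q)                   # d/dt(dL/dq') - dL/dq
print(sp.simplify(eom))   # -> k*q(t) + m*Derivative(q(t), (t, 2)), i.e. m q'' + k q = 0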
The Lagrangian formulation uses the configuration space of the system, the set of all possible generalized coordinates:
where ℝN is N-dimensional real space (see also set-builder notation). The particular solution to the Euler–Lagrange equations is called a (configuration) path or trajectory, i.e. one particular q(t) subject to the required initial conditions. The general solutions form a set of possible configurations as functions of time:
The configuration space can be defined more generally, and indeed more deeply, in terms of topological manifolds and the tangent bundle.
Hamiltonian mechanics
The Legendre transformation of the Lagrangian replaces the generalized coordinates and velocities (q, q̇) with (q, p); the generalized coordinates and the generalized momenta conjugate to the generalized coordinates:
$$p_i = \frac{\partial L}{\partial \dot{q}_i},$$
and introduces the Hamiltonian (which is in terms of generalized coordinates and momenta):
$$H(\mathbf{q}, \mathbf{p}, t) = \mathbf{p} \cdot \dot{\mathbf{q}} - L(\mathbf{q}, \dot{\mathbf{q}}, t),$$
where ⋅ denotes the dot product, also leading to Hamilton's equations:
$$\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i},$$
which are now a set of 2N first-order ordinary differential equations, one for each qi(t) and pi(t). Another result from the Legendre transformation relates the time derivatives of the Lagrangian and Hamiltonian:
$$\frac{dH}{dt} = -\frac{\partial L}{\partial t},$$
which is often considered one of Hamilton's equations of motion additionally to the others. The generalized momenta can be written in terms of the generalized forces in the same way as Newton's second law:
$$\dot{p}_i = \mathcal{Q}_i.$$
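Hamilton's equations lend themselves to direct numerical integration. The Python sketch below (illustrative only; a one-dimensional harmonic oscillator with assumed parameters m = 1, k = 4 and initial state q = 1, p = 0) traces a phase path and checks that the Hamiltonian stays constant along it:

import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 4.0

def hamilton_rhs(t, y):
    q, p = y
    dq = p / m          # dq/dt =  dH/dp
    dp = -k * q         # dp/dt = -dH/dq, with H = p^2/(2m) + k q^2/2
    return [dq, dp]

sol = solve_ivp(hamilton_rhs, (0.0, 10.0), [1.0, 0.0], max_step=0.01)
q, p = sol.y
H = p**2 / (2 * m) + 0.5 * k * q**2
print(H.max() - H.min())   # small number: energy is (numerically) conserved along the phase path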
Analogous to the configuration space, the set of all momenta is the generalized momentum space:
("Momentum space" also refers to "k-space"; the set of all wave vectors (given by De Broglie relations) as used in quantum mechanics and theory of waves)
The set of all positions and momenta form the phase space:
that is, the Cartesian product of the configuration space and generalized momentum space.
A particular solution to Hamilton's equations is called a phase path, a particular curve (q(t),p(t)) subject to the required initial conditions. The set of all phase paths, the general solution to the differential equations, is the phase portrait:
The Poisson bracket
All dynamical variables can be derived from position q, momentum p, and time t, and written as a function of these: A = A(q, p, t). If A(q, p, t) and B(q, p, t) are two scalar valued dynamical variables, the Poisson bracket is defined by the generalized coordinates and momenta:
$$\{A, B\} \equiv \sum_{k=1}^{N} \left(\frac{\partial A}{\partial q_k}\frac{\partial B}{\partial p_k} - \frac{\partial A}{\partial p_k}\frac{\partial B}{\partial q_k}\right).$$
Calculating the total derivative of one of these, say A, and substituting Hamilton's equations into the result leads to the time evolution of A:
$$\frac{dA}{dt} = \{A, H\} + \frac{\partial A}{\partial t}.$$
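A small symbolic sketch of the Poisson bracket (Python with SymPy; the single-degree-of-freedom oscillator Hamiltonian below is an assumed example, not from the text):

import sympy as sp

q, p, m, k = sp.symbols('q p m k', positive=True)

def poisson_bracket(A, B):
    """Poisson bracket for a single degree of freedom (q, p)."""
    return sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)

H = p**2 / (2 * m) + k * q**2 / 2
print(poisson_bracket(q, p))   # -> 1, the canonical bracket {q, p} = 1
print(poisson_bracket(q, H))   # -> p/m, reproducing dq/dt = {q, H}
print(poisson_bracket(p, H))   # -> -k*q, reproducing dp/dt = {p, H}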
This equation in A is closely related to the equation of motion in the Heisenberg picture of quantum mechanics, in which classical dynamical variables become quantum operators (indicated by hats (^)), and the Poisson bracket is replaced by the commutator of operators via Dirac's canonical quantization:
$$\{A, B\} \to \frac{1}{i\hbar}[\hat{A}, \hat{B}].$$
Properties of the Lagrangian and the Hamiltonian
Following are overlapping properties between the Lagrangian and Hamiltonian functions.
All the individual generalized coordinates qi(t), velocities q̇i(t) and momenta pi(t) for every degree of freedom are mutually independent. Explicit time-dependence of a function means the function actually includes time t as a variable in addition to the q(t), p(t), not simply as a parameter through q(t) and p(t), which would mean explicit time-independence.
The Lagrangian is invariant under addition of the total time derivative of any function of q and t, that is:
$$L' = L + \frac{d}{dt}F(\mathbf{q}, t),$$
so each Lagrangian L and L′ describes exactly the same motion. In other words, the Lagrangian of a system is not unique.
Analogously, the Hamiltonian is invariant under addition of the partial time derivative of any function of q, p and t, that is:
$$K = H + \frac{\partial}{\partial t}F(\mathbf{q}, \mathbf{p}, t)$$
(K is a frequently used letter in this case). This property is used in canonical transformations (see below).
If the Lagrangian is independent of some generalized coordinates, then the generalized momenta conjugate to those coordinates are constants of the motion, i.e. are conserved; this immediately follows from Lagrange's equations:
$$\frac{\partial L}{\partial q_i} = 0 \ \Rightarrow\ \dot{p}_i = \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} = 0.$$
Such coordinates are "cyclic" or "ignorable". It can be shown that the Hamiltonian is also cyclic in exactly the same generalized coordinates.
If the Lagrangian is time-independent the Hamiltonian is also time-independent (i.e. both are constant in time).
If the kinetic energy is a homogeneous function of degree 2 of the generalized velocities, so that
$$T(\lambda\dot{q}_1, \lambda\dot{q}_2, \ldots, \lambda\dot{q}_N) = \lambda^2\, T(\dot{q}_1, \dot{q}_2, \ldots, \dot{q}_N),$$
where λ is a constant, and the Lagrangian is explicitly time-independent, then the Hamiltonian will be the total conserved energy, equal to the total kinetic and potential energies of the system:
$$H = T + V = E.$$
This is the basis for the Schrödinger equation; inserting quantum operators directly yields it.
Principle of least action
Action is another quantity in analytical mechanics defined as a functional of the Lagrangian:
$$\mathcal{S}[\mathbf{q}] = \int_{t_1}^{t_2} L(\mathbf{q}, \dot{\mathbf{q}}, t)\, dt.$$
A general way to find the equations of motion from the action is the principle of least action:
$$\delta \mathcal{S} = 0,$$
where the departure t1 and arrival t2 times are fixed. The term "path" or "trajectory" refers to the time evolution of the system as a path through configuration space , in other words q(t) tracing out a path in . The path for which action is least is the path taken by the system.
From this principle, all equations of motion in classical mechanics can be derived. This approach can be extended to fields rather than a system of particles (see below), underlies the path integral formulation of quantum mechanics (D. McMahon, Quantum Field Theory, McGraw-Hill (US), 2008), and is used for calculating geodesic motion in general relativity.
Hamilton–Jacobi mechanics
Canonical transformations
The invariance of the Hamiltonian (under addition of the partial time derivative of an arbitrary function of p, q, and t) allows the Hamiltonian in one set of coordinates q and momenta p to be transformed into a new set Q = Q(q, p, t) and P = P(q, p, t), in four possible ways:
With the restriction on P and Q such that the transformed Hamiltonian system is:
the above transformations are called canonical transformations, each function Gn is called a generating function of the "nth kind" or "type-n". The transformation of coordinates and momenta can allow simplification for solving Hamilton's equations for a given problem.
The choice of Q and P is completely arbitrary, but not every choice leads to a canonical transformation. One simple criterion for a transformation q → Q and p → P to be canonical is that the Poisson bracket be unity,
$$\{Q_i, P_i\}_{q,p} = 1$$
for all i = 1, 2,...N. If this does not hold then the transformation is not canonical.
The Hamilton–Jacobi equation
By setting the canonically transformed Hamiltonian K = 0, and the type-2 generating function equal to Hamilton's principal function S (also called the classical action) plus an arbitrary constant C:
$$G_2(\mathbf{q}, t) = \mathcal{S}(\mathbf{q}, t) + C,$$
the generalized momenta become:
$$p_i = \frac{\partial \mathcal{S}}{\partial q_i},$$
and P is constant, then the Hamilton–Jacobi equation (HJE) can be derived from the type-2 canonical transformation:
$$-\frac{\partial \mathcal{S}}{\partial t} = H\!\left(\mathbf{q}, \frac{\partial \mathcal{S}}{\partial \mathbf{q}}, t\right),$$
where H is the Hamiltonian as before:
Another related function is Hamilton's characteristic function W(q), used to solve the HJE by additive separation of variables for a time-independent Hamiltonian H.
The study of the solutions of the Hamilton–Jacobi equations leads naturally to the study of symplectic manifolds and symplectic topology. In this formulation, the solutions of the Hamilton–Jacobi equations are the integral curves of Hamiltonian vector fields.
Routhian mechanics
Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, not often used but especially useful for removing cyclic coordinates. If the Lagrangian of a system has s cyclic coordinates q = q1, q2, ... qs with conjugate momenta p = p1, p2, ... ps, with the rest of the coordinates non-cyclic and denoted ζ = ζ1, ζ2, ..., ζN − s, they can be removed by introducing the Routhian:
which leads to a set of 2s Hamiltonian equations for the cyclic coordinates q,
and N − s Lagrangian equations in the non cyclic coordinates ζ.
Set up in this way, although the Routhian has the form of the Hamiltonian, it can be thought of as a Lagrangian with N − s degrees of freedom.
The coordinates q do not have to be cyclic, the partition between which coordinates enter the Hamiltonian equations and those which enter the Lagrangian equations is arbitrary. It is simply convenient to let the Hamiltonian equations remove the cyclic coordinates, leaving the non cyclic coordinates to the Lagrangian equations of motion.
Appellian mechanics
Appell's equations of motion involve generalized accelerations, the second time derivatives of the generalized coordinates:
$$\alpha_r = \ddot{q}_r,$$
as well as generalized forces mentioned above in D'Alembert's principle. The equations are
where
is the acceleration of the k-th particle, the second time derivative of its position vector. Each acceleration ak is expressed in terms of the generalized accelerations αr; likewise each rk is expressed in terms of the generalized coordinates qr.
Classical field theory
Lagrangian field theory
Generalized coordinates apply to discrete particles. For N scalar fields φi(r, t) where i = 1, 2, ... N, the Lagrangian density is a function of these fields and their space and time derivatives, and possibly the space and time coordinates themselves:
and the Euler–Lagrange equations have an analogue for fields:
$$\partial_\mu \left(\frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi_i)}\right) = \frac{\partial \mathcal{L}}{\partial \phi_i},$$
where ∂μ denotes the 4-gradient and the summation convention has been used. For N scalar fields, these Lagrangian field equations are a set of N second order partial differential equations in the fields, which in general will be coupled and nonlinear.
This scalar field formulation can be extended to vector fields, tensor fields, and spinor fields.
The Lagrangian is the volume integral of the Lagrangian density:
$$L = \int \mathcal{L}\, \mathrm{d}^3 r$$
(see, e.g., C. Misner, K.S. Thorne, J.A. Wheeler, Gravitation, W.H. Freeman & Co, 1973).
Originally developed for classical fields, the above formulation is applicable to all physical fields in classical, quantum, and relativistic situations: such as Newtonian gravity, classical electromagnetism, general relativity, and quantum field theory. It is a question of determining the correct Lagrangian density to generate the correct field equation.
Hamiltonian field theory
The corresponding "momentum" field densities conjugate to the N scalar fields φi(r, t) are:
where in this context the overdot denotes a partial time derivative, not a total time derivative. The Hamiltonian density is defined by analogy with mechanics:
The equations of motion are:
$$\dot{\phi}_i = +\frac{\delta \mathcal{H}}{\delta \pi_i}, \qquad \dot{\pi}_i = -\frac{\delta \mathcal{H}}{\delta \phi_i},$$
where the variational derivative
must be used instead of merely partial derivatives. For N fields, these Hamiltonian field equations are a set of 2N first order partial differential equations, which in general will be coupled and nonlinear.
Again, the volume integral of the Hamiltonian density is the Hamiltonian:
$$H = \int \mathcal{H}\, \mathrm{d}^3 r.$$
Symmetry, conservation, and Noether's theorem
Symmetry transformations in classical space and time
Each transformation can be described by an operator (i.e. function acting on the position r or momentum p variables to change them). The following are the cases when the operator does not change r or p, i.e. symmetries.
where R(n̂, θ) is the rotation matrix about an axis defined by the unit vector n̂ and angle θ.
Noether's theorem
Noether's theorem states that a continuous symmetry transformation of the action corresponds to a conservation law, i.e. the action (and hence the Lagrangian) does not change under a transformation parameterized by a parameter s:
the Lagrangian describes the same motion independent of s, which can be length, angle of rotation, or time. The momenta conjugate to q will be conserved.
See also
Lagrangian mechanics
Hamiltonian mechanics
Theoretical mechanics
Classical mechanics
Hamilton–Jacobi equation
Hamilton's principle
Kinematics
Kinetics (physics)
Non-autonomous mechanics
Udwadia–Kalaba equation
References and notes
Mathematical physics
Dynamical systems | 0.80641 | 0.987635 | 0.796439 |
Classical mechanics | Classical mechanics is a physical theory describing the motion of objects such as projectiles, parts of machinery, spacecraft, planets, stars, and galaxies. The development of classical mechanics involved substantial change in the methods and philosophy of physics. The qualifier classical distinguishes this type of mechanics from physics developed after the revolutions in physics of the early 20th century, all of which revealed limitations in classical mechanics.
The earliest formulation of classical mechanics is often referred to as Newtonian mechanics. It consists of the physical concepts based on the 17th century foundational works of Sir Isaac Newton, and the mathematical methods invented by Gottfried Wilhelm Leibniz, Leonhard Euler and others to describe the motion of bodies under the influence of forces. Later, methods based on energy were developed by Euler, Joseph-Louis Lagrange, William Rowan Hamilton and others, leading to the development of analytical mechanics (which includes Lagrangian mechanics and Hamiltonian mechanics). These advances, made predominantly in the 18th and 19th centuries, extended beyond earlier works; they are, with some modification, used in all areas of modern physics.
If the present state of an object that obeys the laws of classical mechanics is known, it is possible to determine how it will move in the future, and how it has moved in the past. Chaos theory shows that the long term predictions of classical mechanics are not reliable. Classical mechanics provides accurate results when studying objects that are not extremely massive and have speeds not approaching the speed of light. With objects about the size of an atom's diameter, it becomes necessary to use quantum mechanics. To describe velocities approaching the speed of light, special relativity is needed. In cases where objects become extremely massive, general relativity becomes applicable. Some modern sources include relativistic mechanics in classical physics, as representing the field in its most developed and accurate form.
Branches
Traditional division
Classical mechanics was traditionally divided into three main branches.
Statics is the branch of classical mechanics that is concerned with the analysis of force and torque acting on a physical system that does not experience an acceleration, but rather is in equilibrium with its environment. Kinematics describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that cause them to move. Kinematics, as a field of study, is often referred to as the "geometry of motion" and is occasionally seen as a branch of mathematics. Dynamics goes beyond merely describing objects' behavior and also considers the forces which explain it.
Some authors (for example, Taylor (2005) and Greenwood (1997)) include special relativity within classical dynamics.
Forces vs. energy
Another division is based on the choice of mathematical formalism. Classical mechanics can be mathematically presented in multiple different ways. The physical content of these different formulations is the same, but they provide different insights and facilitate different types of calculations. While the term "Newtonian mechanics" is sometimes used as a synonym for non-relativistic classical physics, it can also refer to a particular formalism based on Newton's laws of motion. Newtonian mechanics in this sense emphasizes force as a vector quantity.
In contrast, analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Two dominant branches of analytical mechanics are Lagrangian mechanics, which uses generalized coordinates and corresponding generalized velocities in configuration space, and Hamiltonian mechanics, which uses coordinates and corresponding momenta in phase space. Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries.
By region of application
Alternatively, a division can be made by region of application:
Celestial mechanics, relating to stars, planets and other celestial bodies
Continuum mechanics, for materials modelled as a continuum, e.g., solids and fluids (i.e., liquids and gases).
Relativistic mechanics (i.e. including the special and general theories of relativity), for bodies whose speed is close to the speed of light.
Statistical mechanics, which provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk thermodynamic properties of materials.
Description of objects and their motion
For simplicity, classical mechanics often models real-world objects as point particles, that is, objects with negligible size. The motion of a point particle is determined by a small number of parameters: its position, mass, and the forces applied to it. Classical mechanics also describes the more complex motions of extended non-pointlike objects. Euler's laws provide extensions to Newton's laws in this area. The concepts of angular momentum rely on the same calculus used to describe one-dimensional motion. The rocket equation extends the notion of rate of change of an object's momentum to include the effects of an object "losing mass". (These generalizations/extensions are derived from Newton's laws, say, by decomposing a solid body into a collection of points.)
In reality, the kind of objects that classical mechanics can describe always have a non-zero size. (The behavior of very small particles, such as the electron, is more accurately described by quantum mechanics.) Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom, e.g., a baseball can spin while it is moving. However, the results for point particles can be used to study such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle.
Classical mechanics assumes that matter and energy have definite, knowable attributes such as location in space and speed. Non-relativistic mechanics also assumes that forces act instantaneously (see also Action at a distance).
Kinematics
The position of a point particle is defined in relation to a coordinate system centered on an arbitrary fixed reference point in space called the origin O. A simple coordinate system might describe the position of a particle P with a vector notated by an arrow labeled r that points from the origin O to point P. In general, the point particle does not need to be stationary relative to O. In cases where P is moving relative to O, r is defined as a function of t, time. In pre-Einstein relativity (known as Galilean relativity), time is considered an absolute, i.e., the time interval that is observed to elapse between any given pair of events is the same for all observers. In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for the structure of space.
Velocity and speed
The velocity, or the rate of change of displacement with time, is defined as the derivative of the position with respect to time:
v = dr/dt.
In classical mechanics, velocities are directly additive and subtractive. For example, if one car travels east at 60 km/h and passes another car traveling in the same direction at 50 km/h, the slower car perceives the faster car as traveling east at 60 − 50 = 10 km/h. However, from the perspective of the faster car, the slower car is moving 10 km/h to the west, often denoted as −10 km/h where the sign implies opposite direction. Velocities are directly additive as vector quantities; they must be dealt with using vector analysis.
Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector u = ud and the velocity of the second object by the vector v = ve, where u is the speed of the first object, v is the speed of the second object, and d and e are unit vectors in the directions of motion of each object respectively, then the velocity of the first object as seen by the second object is
u′ = u − v.
Similarly, the first object sees the velocity of the second object as
v′ = v − u.
When both objects are moving in the same direction, this equation can be simplified to
u′ = (u − v)d.
Or, by ignoring direction, the difference can be given in terms of speed only:
u′ = u − v.
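The velocity-addition relations above can be checked with a short numerical sketch; the car speeds from the earlier example and the use of numpy are illustrative choices.

```python
import numpy as np

# Velocities of the two cars in the ground frame (km/h), taken from the example above.
u = np.array([60.0, 0.0])   # first (faster) car, heading east
v = np.array([50.0, 0.0])   # second (slower) car, heading east

u_rel = u - v               # faster car as seen from the slower car
v_rel = v - u               # slower car as seen from the faster car

print(u_rel)                # [10.  0.] -> 10 km/h east
print(v_rel)                # [-10.  0.] -> 10 km/h west
```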
Acceleration
The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time (the second derivative of the position with respect to time):
a = dv/dt = d2r/dt2.
Acceleration represents the velocity's change over time. Velocity can change in magnitude, direction, or both. Occasionally, a decrease in the magnitude of velocity "v" is referred to as deceleration, but generally any change in the velocity over time, including deceleration, is referred to as acceleration.
Frames of reference
While the position, velocity and acceleration of a particle can be described with respect to any observer in any state of motion, classical mechanics assumes the existence of a special family of reference frames in which the mechanical laws of nature take a comparatively simple form. These special reference frames are called inertial frames. An inertial frame is an idealized frame of reference within which an object with zero net force acting upon it moves with a constant velocity; that is, it is either at rest or moving uniformly in a straight line. In an inertial frame Newton's law of motion, F = ma, is valid.
Non-inertial reference frames accelerate in relation to another inertial frame. A body rotating with respect to an inertial frame is not an inertial frame. When viewed from an inertial frame, particles in the non-inertial frame appear to move in ways not explained by forces from existing fields in the reference frame. Hence, it appears that there are other forces that enter the equations of motion solely as a result of the relative acceleration. These forces are referred to as fictitious forces, inertia forces, or pseudo-forces.
Consider two reference frames S and S'. For observers in each of the reference frames an event has space-time coordinates of (x,y,z,t) in frame S and (x',y',z',t') in frame S'. Assuming time is measured the same in all reference frames, if we require x' = x when t = 0, then the relation between the space-time coordinates of the same event observed from the reference frames S' and S, which are moving at a relative velocity u in the x direction, is:
x' = x − ut
y' = y
z' = z
t' = t.
This set of formulas defines a group transformation known as the Galilean transformation (informally, the Galilean transform). This group is a limiting case of the Poincaré group used in special relativity. The limiting case applies when the velocity u is very small compared to c, the speed of light.
The transformations have the following consequences:
v′ = v − u (the velocity v′ of a particle from the perspective of S′ is slower by u than its velocity v from the perspective of S)
a′ = a (the acceleration of a particle is the same in any inertial reference frame)
F′ = F (the force on a particle is the same in any inertial reference frame)
the speed of light is not a constant in classical mechanics, nor does the special position given to the speed of light in relativistic mechanics have a counterpart in classical mechanics.
For some problems, it is convenient to use rotating coordinates (reference frames). Thereby one can either keep a mapping to a convenient inertial frame, or introduce additionally a fictitious centrifugal force and Coriolis force.
Newtonian mechanics
A force in physics is any action that causes an object's velocity to change; that is, to accelerate. A force originates from within a field, such as an electrostatic field (caused by static electrical charges), an electromagnetic field (caused by moving charges), or a gravitational field (caused by mass), among others.
Newton was the first to mathematically express the relationship between force and momentum. Some physicists interpret Newton's second law of motion as a definition of force and mass, while others consider it a fundamental postulate, a law of nature. Either interpretation has the same mathematical consequences, historically known as "Newton's Second Law":
F = dp/dt = d(mv)/dt.
The quantity mv is called the (canonical) momentum. The net force on a particle is thus equal to the rate of change of the momentum of the particle with time. Since the definition of acceleration is a = dv/dt, the second law can be written in the simplified and more familiar form:
F = ma.
So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion.
As an example, assume that friction is the only force acting on the particle, and that it may be modeled as a function of the velocity of the particle, for example:
FR = −λv,
where λ is a positive constant and the negative sign states that the force is opposite the sense of the velocity. Then the equation of motion is
−λv = m dv/dt.
This can be integrated to obtain
v = v0 e−λt/m,
where v0 is the initial velocity. This means that the velocity of this particle decays exponentially to zero as time progresses. In this case, an equivalent viewpoint is that the kinetic energy of the particle is absorbed by friction (which converts it to heat energy in accordance with the conservation of energy), and the particle is slowing down. This expression can be further integrated to obtain the position r of the particle as a function of time.
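A minimal numerical sketch of this friction example, assuming illustrative values for m, λ and v0: an explicit Euler integration of m dv/dt = −λv is compared with the exponential solution given above.

```python
import numpy as np

# Particle slowing under linear drag, m dv/dt = -lambda*v; parameter values are illustrative.
m, lam, v0 = 1.0, 0.5, 10.0      # kg, kg/s, m/s
dt, t_end = 1e-3, 10.0

t = np.arange(0.0, t_end, dt)
v = np.empty_like(t)
v[0] = v0
for i in range(1, len(t)):
    v[i] = v[i-1] - dt * (lam / m) * v[i-1]   # explicit Euler step

v_exact = v0 * np.exp(-lam * t / m)           # analytic solution quoted in the text
print(np.max(np.abs(v - v_exact)))            # small discretization error
```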
Important forces include the gravitational force and the Lorentz force for electromagnetism. In addition, Newton's third law can sometimes be used to deduce the forces acting on a particle: if it is known that particle A exerts a force F on another particle B, it follows that B must exert an equal and opposite reaction force, −F, on A. The strong form of Newton's third law requires that F and −F act along the line connecting A and B, while the weak form does not. Illustrations of the weak form of Newton's third law are often found for magnetic forces.
Work and energy
If a constant force F is applied to a particle that makes a displacement Δr, the work done by the force is defined as the scalar product of the force and displacement vectors:
W = F ⋅ Δr.
More generally, if the force varies as a function of position as the particle moves from r1 to r2 along a path C, the work done on the particle is given by the line integral
W = ∫C F(r) ⋅ dr.
If the work done in moving the particle from r1 to r2 is the same no matter what path is taken, the force is said to be conservative. Gravity is a conservative force, as is the force due to an idealized spring, as given by Hooke's law. The force due to friction is non-conservative.
The kinetic energy Ek of a particle of mass m travelling at speed v is given by
Ek = (1/2)mv2.
For extended objects composed of many particles, the kinetic energy of the composite body is the sum of the kinetic energies of the particles.
The work–energy theorem states that for a particle of constant mass m, the total work W done on the particle as it moves from position r1 to r2 is equal to the change in kinetic energy Ek of the particle:
W = ΔEk = (1/2)mv22 − (1/2)mv12,
where v1 and v2 are the speeds at r1 and r2 respectively.
Conservative forces can be expressed as the gradient of a scalar function, known as the potential energy and denoted Ep:
F = −∇Ep.
If all the forces acting on a particle are conservative, and Ep is the total potential energy (defined as the work of the involved forces required to rearrange the mutual positions of bodies), obtained by summing the potential energies corresponding to each force, then
F ⋅ Δr = −∇Ep ⋅ Δr = −ΔEp.
The decrease in the potential energy is equal to the increase in the kinetic energy:
−ΔEp = ΔEk.
This result is known as conservation of energy and states that the total energy,
ΣE = Ek + Ep,
is constant in time. It is often useful, because many commonly encountered forces are conservative.
Lagrangian mechanics
Lagrangian mechanics is a formulation of classical mechanics founded on the stationary-action principle (also known as the principle of least action). It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760, culminating in his 1788 grand opus, Mécanique analytique. Lagrangian mechanics describes a mechanical system as a pair (M, L) consisting of a configuration space M and a smooth function L on that space called the Lagrangian. For many systems, L = T − V, where T and V are the kinetic and potential energy of the system, respectively. The stationary-action principle requires that the action functional of the system derived from L must remain at a stationary point (a maximum, minimum, or saddle) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations.
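As an illustration of how equations of motion follow from a Lagrangian, the sketch below applies the Euler–Lagrange equation to a one-dimensional harmonic oscillator with sympy; the oscillator and the symbol names are illustrative assumptions, not taken from the text.

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')

# Lagrangian L = T - V for a one-dimensional harmonic oscillator.
L = sp.Rational(1, 2) * m * sp.diff(x(t), t)**2 - sp.Rational(1, 2) * k * x(t)**2

# Euler-Lagrange equation: d/dt(dL/dxdot) - dL/dx = 0.
eom = sp.diff(L, sp.diff(x(t), t)).diff(t) - sp.diff(L, x(t))
print(sp.simplify(eom))   # m*Derivative(x(t), (t, 2)) + k*x(t), i.e. m*x'' = -k*x
```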
Hamiltonian mechanics
Hamiltonian mechanics emerged in 1833 as a reformulation of Lagrangian mechanics. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces (generalized) velocities used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena. Hamiltonian mechanics has a close relationship with geometry (notably, symplectic geometry and Poisson structures) and serves as a link between classical and quantum mechanics.
In this formalism, the dynamics of a system are governed by Hamilton's equations, which express the time derivatives of position and momentum variables in terms of partial derivatives of a function called the Hamiltonian:
dq/dt = ∂H/∂p,  dp/dt = −∂H/∂q.
The Hamiltonian is the Legendre transform of the Lagrangian, and in many situations of physical interest it is equal to the total energy of the system.
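A minimal numerical sketch of Hamilton's equations, assuming a one-dimensional harmonic-oscillator Hamiltonian H = p²/2m + kq²/2 and a symplectic (semi-implicit) Euler update; the parameter values are illustrative.

```python
# Hamilton's equations for H = p^2/(2m) + k*q^2/2:
#   dq/dt = dH/dp = p/m,   dp/dt = -dH/dq = -k*q.
m, k = 1.0, 1.0
q, p = 1.0, 0.0
dt, steps = 0.01, 10000

for _ in range(steps):
    p -= dt * k * q        # momentum update from -dH/dq
    q += dt * p / m        # position update from dH/dp (uses the new p: symplectic Euler)

H = p**2 / (2 * m) + k * q**2 / 2
print(H)                   # remains close to the initial energy 0.5
```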
Limits of validity
Many branches of classical mechanics are simplifications or approximations of more accurate forms; two of the most accurate being general relativity and relativistic statistical mechanics. Geometric optics is an approximation to the quantum theory of light, and does not have a superior "classical" form.
When neither quantum mechanics nor classical mechanics apply, such as at the quantum level with many degrees of freedom, quantum field theory (QFT) is of use. QFT deals with small distances and large speeds with many degrees of freedom, as well as the possibility of any change in the number of particles throughout the interaction. When treating large degrees of freedom at the macroscopic level, statistical mechanics becomes useful. Statistical mechanics describes the behavior of large (but countable) numbers of particles and their interactions as a whole at the macroscopic level. Statistical mechanics is mainly used in thermodynamics for systems that lie outside the bounds of the assumptions of classical thermodynamics. In the case of high velocity objects approaching the speed of light, classical mechanics is enhanced by special relativity. In cases where objects become extremely heavy (i.e., their Schwarzschild radius is not negligibly small for a given application), deviations from Newtonian mechanics become apparent and can be quantified by using the parameterized post-Newtonian formalism. In that case, general relativity (GR) becomes applicable. However, until now there is no theory of quantum gravity unifying GR and QFT in the sense that it could be used when objects become extremely small and heavy.[4][5]
Newtonian approximation to special relativity
In special relativity, the momentum of a particle is given by
p = mv / (1 − v2/c2)1/2,
where m is the particle's rest mass, v its velocity, v is the modulus of v, and c is the speed of light.
If v is very small compared to c, v2/c2 is approximately zero, and so
p ≈ mv.
Thus the Newtonian equation is an approximation of the relativistic equation for bodies moving with low speeds compared to the speed of light.
For example, the relativistic cyclotron frequency of a cyclotron, gyrotron, or high voltage magnetron is given by
f = fc / (1 + T/m0c2),
where fc is the classical frequency of an electron (or other charged particle) with kinetic energy T and (rest) mass m0 circling in a magnetic field. The (rest) mass of an electron is 511 keV. So the frequency correction is 1% for a magnetic vacuum tube with a 5.11 kV direct current accelerating voltage.
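The 1% figure quoted above can be checked in a couple of lines, assuming the fractional correction is T/(m0c²) as implied by the frequency expression; the constants are rounded.

```python
# Fractional relativistic correction to the cyclotron frequency for an electron
# accelerated through 5.11 kV (so T = 5.11 keV).
T_keV = 5.11
m0c2_keV = 511.0          # electron rest energy
print(T_keV / m0c2_keV)   # 0.01 -> about a 1% shift, as stated above
```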
Classical approximation to quantum mechanics
The ray approximation of classical mechanics breaks down when the de Broglie wavelength is not much smaller than other dimensions of the system. For non-relativistic particles, this wavelength is
λ = h/p,
where h is the Planck constant and p is the momentum.
Again, this happens with electrons before it happens with heavier particles. For example, the electrons used by Clinton Davisson and Lester Germer in 1927, accelerated by 54 V, had a wavelength of 0.167 nm, which was long enough to exhibit a single diffraction side lobe when reflecting from the face of a nickel crystal with atomic spacing of 0.215 nm. With a larger vacuum chamber, it would seem relatively easy to increase the angular resolution from around a radian to a milliradian and see quantum diffraction from the periodic patterns of integrated circuit computer memory.
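The quoted Davisson–Germer wavelength can be reproduced from λ = h/p with a few lines of Python; the physical constants below are rounded values.

```python
import math

# de Broglie wavelength of electrons accelerated through 54 V.
h = 6.626e-34        # Planck constant, J s
m_e = 9.109e-31      # electron mass, kg
e = 1.602e-19        # elementary charge, C
V = 54.0             # accelerating voltage, V

p = math.sqrt(2 * m_e * e * V)   # non-relativistic momentum
print(h / p * 1e9)               # ~0.167 nm, matching the value quoted above
```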
More practical examples of the failure of classical mechanics on an engineering scale are conduction by quantum tunneling in tunnel diodes and very narrow transistor gates in integrated circuits.
Classical mechanics is the same extreme high frequency approximation as geometric optics. It is more often accurate because it describes particles and bodies with rest mass. These have more momentum and therefore shorter De Broglie wavelengths than massless particles, such as light, with the same kinetic energies.
History
The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering, and technology. The development of classical mechanics led to the development of many areas of mathematics.
Some Greek philosophers of antiquity, among them Aristotle, founder of Aristotelian physics, may have been the first to maintain the idea that "everything happens for a reason" and that theoretical principles can assist in the understanding of nature. While to a modern reader, many of these preserved ideas come forth as eminently reasonable, there is a conspicuous lack of both mathematical theory and controlled experiment, as we know it. These later became decisive factors in forming modern science, and their early application came to be known as classical mechanics. In his Elementa super demonstrationem ponderum, medieval mathematician Jordanus de Nemore introduced the concept of "positional gravity" and the use of component forces.
The first published causal explanation of the motions of planets was Johannes Kepler's Astronomia nova, published in 1609. He concluded, based on Tycho Brahe's observations on the orbit of Mars, that the planet's orbits were ellipses. This break with ancient thought was happening around the same time that Galileo was proposing abstract mathematical laws for the motion of objects. He may (or may not) have performed the famous experiment of dropping two cannonballs of different weights from the tower of Pisa, showing that they both hit the ground at the same time. The reality of that particular experiment is disputed, but he did carry out quantitative experiments by rolling balls on an inclined plane. His theory of accelerated motion was derived from the results of such experiments and forms a cornerstone of classical mechanics. In 1673 Christiaan Huygens described in his Horologium Oscillatorium the first two laws of motion. The work is also the first modern treatise in which a physical problem (the accelerated motion of a falling body) is idealized by a set of parameters then analyzed mathematically and constitutes one of the seminal works of applied mathematics.
Newton founded his principles of natural philosophy on three proposed laws of motion: the law of inertia, his second law of acceleration (mentioned above), and the law of action and reaction; and hence laid the foundations for classical mechanics. Both Newton's second and third laws were given the proper scientific and mathematical treatment in Newton's Philosophiæ Naturalis Principia Mathematica. Here they are distinguished from earlier attempts at explaining similar phenomena, which were either incomplete, incorrect, or given little accurate mathematical expression. Newton also enunciated the principles of conservation of momentum and angular momentum. In mechanics, Newton was also the first to provide a correct scientific and mathematical formulation of gravity in Newton's law of universal gravitation. The combination of Newton's laws of motion and gravitation provides the fullest and most accurate description of classical mechanics. He demonstrated that these laws apply to everyday objects as well as to celestial objects. In particular, he obtained a theoretical explanation of Kepler's laws of motion of the planets.
Newton had previously invented the calculus; however, the Principia was formulated entirely in terms of long-established geometric methods in emulation of Euclid. Newton, and most of his contemporaries, with the notable exception of Huygens, worked on the assumption that classical mechanics would be able to explain all phenomena, including light, in the form of geometric optics. Even when discovering the so-called Newton's rings (a wave interference phenomenon) he maintained his own corpuscular theory of light.
After Newton, classical mechanics became a principal field of study in mathematics as well as physics. Mathematical formulations progressively allowed finding solutions to a far greater number of problems. The first notable mathematical treatment was in 1788 by Joseph Louis Lagrange. Lagrangian mechanics was in turn re-formulated in 1833 by William Rowan Hamilton.
Some difficulties were discovered in the late 19th century that could only be resolved by more modern physics. Some of these difficulties related to compatibility with electromagnetic theory, and the famous Michelson–Morley experiment. The resolution of these problems led to the special theory of relativity, often still considered a part of classical mechanics.
A second set of difficulties were related to thermodynamics. When combined with thermodynamics, classical mechanics leads to the Gibbs paradox of classical statistical mechanics, in which entropy is not a well-defined quantity. Black-body radiation was not explained without the introduction of quanta. As experiments reached the atomic level, classical mechanics failed to explain, even approximately, such basic things as the energy levels and sizes of atoms and the photo-electric effect. The effort at resolving these problems led to the development of quantum mechanics.
Since the end of the 20th century, classical mechanics in physics has no longer been an independent theory. Instead, classical mechanics is now considered an approximate theory to the more general quantum mechanics. Emphasis has shifted to understanding the fundamental forces of nature as in the Standard Model and its more modern extensions into a unified theory of everything. Classical mechanics is a theory useful for the study of the motion of non-quantum mechanical, low-energy particles in weak gravitational fields.
See also
Dynamical system
List of equations in classical mechanics
List of publications in classical mechanics
List of textbooks on classical mechanics and quantum mechanics
Molecular dynamics
Newton's laws of motion
Special relativity
Quantum mechanics
Quantum field theory
Notes
References
Further reading
External links
Crowell, Benjamin. Light and Matter (an introductory text, uses algebra with optional sections involving calculus)
Fitzpatrick, Richard. Classical Mechanics (uses calculus)
Hoiland, Paul (2004). Preferred Frames of Reference & Relativity
Horbatsch, Marko, "Classical Mechanics Course Notes".
Rosu, Haret C., "Classical Mechanics". Physics Education. 1999. [arxiv.org : physics/9909035]
Shapiro, Joel A. (2003). Classical Mechanics
Sussman, Gerald Jay & Wisdom, Jack & Mayer, Meinhard E. (2001). Structure and Interpretation of Classical Mechanics
Tong, David. Classical Dynamics (Cambridge lecture notes on Lagrangian and Hamiltonian formalism)
Kinematic Models for Design Digital Library (KMODDL) Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering.
MIT OpenCourseWare 8.01: Classical Mechanics Free videos of actual course lectures with links to lecture notes, assignments and exams.
Alejandro A. Torassa, On Classical Mechanics
Heat transfer physics
Heat transfer physics describes the kinetics of energy storage, transport, and energy transformation by principal energy carriers: phonons (lattice vibration waves), electrons, fluid particles, and photons. Heat is thermal energy stored in temperature-dependent motion of particles including electrons, atomic nuclei, individual atoms, and molecules. Heat is transferred to and from matter by the principal energy carriers. The state of energy stored within matter, or transported by the carriers, is described by a combination of classical and quantum statistical mechanics. The energy is also transformed (converted) among the various carriers.
The heat transfer processes (or kinetics) are governed by the rates at which various related physical phenomena occur, such as (for example) the rate of particle collisions in classical mechanics. These various states and kinetics determine the heat transfer, i.e., the net rate of energy storage or transport. Governing these processes from the atomic level (atom or molecule length scale) to macroscale are the laws of thermodynamics, including conservation of energy.
Introduction
Heat is thermal energy associated with temperature-dependent motion of particles. The macroscopic energy equation for an infinitesimal volume used in heat transfer analysis is
∇ ⋅ q = −ρcp ∂T/∂t + Σi,j ṡi-j,
where q is the heat flux vector, −ρcp ∂T/∂t is the temporal change of internal energy (ρ is density, cp is specific heat capacity at constant pressure, T is temperature and t is time), and ṡi-j is the energy conversion to and from thermal energy (i and j are for principal energy carriers). So, the terms represent energy transport, storage and transformation. The heat flux vector q is composed of three macroscopic fundamental modes, which are conduction (qk = −k∇T, k: thermal conductivity), convection (qu = ρcpuT, u: velocity), and radiation (qr = ∫ s Iph,ω sinθ dθ dω, ω: angular frequency, θ: polar angle, Iph,ω: spectral, directional radiation intensity, s: unit vector), i.e., q = qk + qu + qr.
Once states and kinetics of the energy conversion and thermophysical properties are known, the fate of heat transfer is described by the above equation. These atomic-level mechanisms and kinetics are addressed in heat transfer physics. The microscopic thermal energy is stored, transported, and transformed by the principal energy carriers: phonons (p), electrons (e), fluid particles (f), and photons (ph).
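As a macroscale illustration of the energy equation above, the sketch below integrates its conduction-only form, ρcp ∂T/∂t = k ∂²T/∂x², in one dimension with an explicit finite-difference scheme; the material properties, grid and boundary temperatures are illustrative assumptions.

```python
import numpy as np

# 1D transient conduction: rho*cp*dT/dt = k*d2T/dx2 (conduction-only energy equation).
k, rho, cp = 1.0, 1000.0, 1000.0        # W/m-K, kg/m^3, J/kg-K (illustrative)
alpha = k / (rho * cp)                  # thermal diffusivity, m^2/s
nx, length = 51, 0.1
dx = length / (nx - 1)
dt = 0.4 * dx**2 / alpha                # satisfies the explicit stability limit

T = np.full(nx, 300.0)                  # initial temperature, K
T[0], T[-1] = 400.0, 300.0              # fixed boundary temperatures

for _ in range(20000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(T[::10])                          # approaches the linear steady-state profile
```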
Length and time scales
Thermophysical properties of matter and the kinetics of interaction and energy exchange among the principal carriers are based on the atomic-level configuration and interaction. Transport properties such as thermal conductivity are calculated from these atomic-level properties using classical and quantum physics. Quantum states of principal carriers (e.g., momentum, energy) are derived from the Schrödinger equation (called first principle or ab initio) and the interaction rates (for kinetics) are calculated using the quantum states and the quantum perturbation theory (formulated as the Fermi golden rule). A variety of ab initio (Latin for "from the beginning") solvers (software) exists (e.g., ABINIT, CASTEP, Gaussian, Q-Chem, Quantum ESPRESSO, SIESTA, VASP, WIEN2k). Electrons in the inner shells (core) are not involved in heat transfer, and calculations are greatly reduced by proper approximations about the inner-shell electrons.
The quantum treatments, including equilibrium and nonequilibrium ab initio molecular dynamics (MD), involving larger lengths and times are limited by the computation resources, so various alternative treatments with simplifying assumptions have been used to obtain the states and kinetics. In classical (Newtonian) MD, the motion of atoms or molecules (particles) is based on empirical or effective interaction potentials, which in turn can be based on curve-fits of ab initio calculations or curve-fits to thermophysical properties. From the ensembles of simulated particles, static or dynamic thermal properties or scattering rates are derived.
At yet larger length scales (mesoscale, involving many mean free paths), the Boltzmann transport equation (BTE), which is based on classical Hamiltonian statistical mechanics, is applied. The BTE considers particle states in terms of position and momentum vectors (x, p), and this is represented as the state occupation probability. The occupation has equilibrium distributions (the known boson, fermion, and Maxwell–Boltzmann particles), and transport of energy (heat) is due to nonequilibrium (caused by a driving force or potential). Central to the transport is the role of scattering, which turns the distribution toward equilibrium. The scattering is represented by the relaxation time or the mean free path. The relaxation time (or its inverse, the interaction rate) is found from other calculations (ab initio or MD) or empirically. The BTE can be numerically solved with the Monte Carlo method, etc.
Depending on the length and time scale, the proper level of treatment (ab initio, MD, or BTE) is selected. Heat transfer physics analyses may involve multiple scales (e.g., BTE using interaction rate from ab initio or classical MD) with states and kinetic related to thermal energy storage, transport and transformation.
So, heat transfer physics covers the four principal energy carriers and their kinetics from classical and quantum mechanical perspectives. This enables multiscale (ab initio, MD, BTE and macroscale) analyses, including low-dimensionality and size effects.
Phonon
Phonon (quantized lattice vibration wave) is a central thermal energy carrier contributing to heat capacity (sensible heat storage) and conductive heat transfer in condensed phase, and plays a very important role in thermal energy conversion. Its transport properties are represented by the phonon conductivity tensor Kp (W/m-K, from the Fourier law qk,p = -Kp⋅∇ T) for bulk materials, and the phonon boundary resistance ARp,b [K/(W/m2)] for solid interfaces, where A is the interface area. The phonon specific heat capacity cv,p (J/kg-K) includes the quantum effect. The thermal energy conversion rate involving phonon is included in . Heat transfer physics describes and predicts, cv,p, Kp, Rp,b (or conductance Gp,b) and , based on atomic-level properties.
For an equilibrium potential ⟨φ⟩o of a system with N atoms, the total potential ⟨φ⟩ is found by a Taylor series expansion about the equilibrium, and this can be approximated by the second derivatives (the harmonic approximation) as
⟨φ⟩ ≈ ⟨φ⟩o + (1/2) Σi,j di ⋅ Γij ⋅ dj,
where di is the displacement vector of atom i, and Γ is the spring (or force) constant as the second-order derivatives of the potential.
where di is the displacement vector of atom i, and Γ is the spring (or force) constant as the second-order derivatives of the potential. The equation of motion for the lattice vibration in terms of the displacement of atoms [d(jl,t): displacement vector of the j-th atom in the l-th unit cell at time t] is
where m is the atomic mass and Γ is the force constant tensor. The atomic displacement is the summation over the normal modes [sα: unit vector of mode α, ωp: angular frequency of wave, and κp: wave vector]. Using this plane-wave displacement, the equation of motion becomes the eigenvalue equation
where M is the diagonal mass matrix and D is the harmonic dynamical matrix. Solving this eigenvalue equation gives the relation between the angular frequency ωp and the wave vector κp, and this relation is called the phonon dispersion relation. Thus, the phonon dispersion relation is determined by matrices M and D, which depend on the atomic structure and the strength of interaction among constituent atoms (the stronger the interaction and the lighter the atoms, the higher is the phonon frequency and the larger is the slope dωp/dκp). The Hamiltonian of phonon system with the harmonic approximation is
where Dij is the dynamical matrix element between atoms i and j, and di (dj) is the displacement of i (j) atom, and p is momentum. From this and the solution to dispersion relation, the phonon annihilation operator for the quantum treatment is defined as
where N is the number of normal modes divided by α and ħ is the reduced Planck constant. The creation operator is the adjoint of the annihilation operator,
The Hamiltonian in terms of bκ,α† and bκ,α is Hp = Σκ,αħωp,α[bκ,α†bκ,α + 1/2] and bκ,α†bκ,α is the phonon number operator. The energy of quantum-harmonic oscillator is Ep = Σκ,α [fp(κ,α) + 1/2]ħωp,α(κp), and thus the quantum of phonon energy ħωp.
The phonon dispersion relation gives all possible phonon modes within the Brillouin zone (zone within the primitive cell in reciprocal space), and the phonon density of states Dp (the number density of possible phonon modes). The phonon group velocity up,g is the slope of the dispersion curve, dωp/dκp. Since the phonon is a boson particle, its occupancy follows the Bose–Einstein distribution {fpo = [exp(ħωp/kBT)-1]−1, kB: Boltzmann constant}. Using the phonon density of states and this occupancy distribution, the phonon energy is Ep(T) = ∫Dp(ωp)fp(ωp,T)ħωpdωp, and the phonon density is np(T) = ∫Dp(ωp)fp(ωp,T)dωp. The phonon heat capacity cv,p (in a solid cv,p = cp,p, cv,p: constant-volume heat capacity, cp,p: constant-pressure heat capacity) is the temperature derivative of the phonon energy; for the Debye model (linear dispersion model), it is
cv,p = 9(kB/m)(T/TD)3 ∫0TD/T x4ex(ex − 1)−2 dx,
where TD is the Debye temperature, m is atomic mass, and n is the atomic number density (number density of phonon modes for the crystal 3n). This gives the Debye T3 law at low temperature and Dulong-Petit law at high temperatures.
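The low- and high-temperature limits mentioned above can be seen by evaluating the standard Debye integral numerically; the sketch below reports the heat capacity per atom in units of kB, and the Debye temperature is an illustrative, aluminum-like value.

```python
import numpy as np
from scipy.integrate import quad

def debye_cv(T, T_D):
    """Debye heat capacity per atom, in units of k_B (standard Debye model)."""
    integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
    val, _ = quad(integrand, 1e-9, T_D / T)
    return 9.0 * (T / T_D)**3 * val

T_D = 428.0                        # Debye temperature, K (roughly that of aluminum)
for T in (10.0, 50.0, 300.0, 1000.0):
    print(T, debye_cv(T, T_D))     # ~T^3 growth at low T, approaching 3 k_B per atom at high T
```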
From the kinetic theory of gases, the thermal conductivity of principal carrier i (p, e, f and ph) is
ki = (1/3) ni cv,i ui λi,
where ni is the carrier density and the heat capacity is per carrier, ui is the carrier speed and λi is the mean free path (distance traveled by a carrier before a scattering event). Thus, the larger the carrier density, heat capacity and speed, and the less significant the scattering, the higher is the conductivity. For phonons, λp represents the interaction (scattering) kinetics of phonons and is related to the scattering relaxation time τp or rate (= 1/τp) through λp = upτp. Phonons interact with other phonons, and with electrons, boundaries, impurities, etc., and λp combines these interaction mechanisms through the Matthiessen rule. At low temperatures, scattering by boundaries is dominant, and with increase in temperature the interaction rates with impurities, electrons and other phonons become important; finally, phonon-phonon scattering dominates for T > 0.2TD. The interaction rates are reviewed in the literature and include results from quantum perturbation theory and MD.
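A minimal sketch combining scattering mechanisms with the Matthiessen rule and forming the kinetic-theory conductivity estimate ki = (1/3) ni cv,i ui λi; all numerical inputs are illustrative placeholders rather than data for a specific material.

```python
# Combined phonon mean free path from Matthiessen's rule, 1/lambda = sum_i 1/lambda_i,
# and the resulting conductivity estimate k_p = (1/3) * (n*c_v) * u_p * lambda_p.
n_cv = 1.7e6             # volumetric heat capacity n*c_v, J/m^3-K (illustrative)
u_p = 6000.0             # average phonon group speed, m/s (illustrative)

lambda_boundary = 1e-6   # boundary scattering, m
lambda_impurity = 2e-7   # impurity scattering, m
lambda_phonon = 5e-8     # phonon-phonon scattering, m

lam = 1.0 / (1.0 / lambda_boundary + 1.0 / lambda_impurity + 1.0 / lambda_phonon)
k_p = n_cv * u_p * lam / 3.0
print(lam, k_p)          # combined mean free path (m) and conductivity (W/m-K)
```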
A number of conductivity models are available with approximations regarding the dispersion and λp. Using the single-mode relaxation time approximation (∂fp′/∂t|s = −fp′/τp) and the gas kinetic theory, Callaway phonon (lattice) conductivity model as
With the Debye model (a single group velocity up,g, and a specific heat capacity calculated above), this becomes
where a is the lattice constant a = n−1/3 for a cubic lattice, and n is the atomic number density. Slack phonon conductivity model mainly considering acoustic phonon scattering (three-phonon interaction) is given as
where is the mean atomic weight of the atoms in the primitive cell, Va=1/n is the average volume per atom, TD,∞ is the high-temperature Debye temperature, T is the temperature, No is the number of atoms in the primitive cell, and ⟨γ2G⟩ is the mode-averaged square of the Grüneisen constant or parameter at high temperatures. This model is widely tested with pure nonmetallic crystals, and the overall agreement is good, even for complex crystals.
Based on the kinetics and atomic structure considerations, a material with high crystallinity and strong interactions, composed of light atoms (such as diamond and graphene), is expected to have large phonon conductivity. Solids with more than one atom in the smallest unit cell representing the lattice have two types of phonons, i.e., acoustic and optical. (Acoustic phonons are in-phase movements of atoms about their equilibrium positions, while optical phonons are out-of-phase movements of adjacent atoms in the lattice.) Optical phonons have higher energies (frequencies), but make a smaller contribution to conduction heat transfer, because of their smaller group velocity and occupancy.
Phonon transport across hetero-structure boundaries (represented with Rp,b, the phonon boundary resistance) is modeled with the boundary scattering approximations as the acoustic and diffuse mismatch models. Larger phonon transmission (small Rp,b) occurs at boundaries where the material pairs have similar phonon properties (up, Dp, etc.), and in contrast, large Rp,b occurs when one material is softer (lower cut-off phonon frequency) than the other.
Electron
Quantum electron energy states for electron are found using the electron quantum Hamiltonian, which is generally composed of kinetic (-ħ2∇2/2me) and potential energy terms (φe). Atomic orbital, a mathematical function describing the wave-like behavior of either an electron or a pair of electrons in an atom, can be found from the Schrödinger equation with this electron Hamiltonian. Hydrogen-like atoms (a nucleus and an electron) allow for closed-form solution to Schrödinger equation with the electrostatic potential (the Coulomb law). The Schrödinger equation of atoms or atomic ions with more than one electron has not been solved analytically, because of the Coulomb interactions among electrons. Thus, numerical techniques are used, and an electron configuration is approximated as product of simpler hydrogen-like atomic orbitals (isolate electron orbitals). Molecules with multiple atoms (nuclei and their electrons) have molecular orbital (MO, a mathematical function for the wave-like behavior of an electron in a molecule), and are obtained from simplified solution techniques such as linear combination of atomic orbitals (LCAO). The molecular orbital is used to predict chemical and physical properties, and the difference between highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is a measure of excitability of the molecules.
In a crystal structure of metallic solids, the free electron model (zero potential, φe = 0) is used for the behavior of valence electrons. However, in a periodic lattice (crystal), there is a periodic crystal potential, so the electron Hamiltonian becomes
He = −(ħ2/2me)∇2 + φc(x),
where me is the electron mass, and the periodic potential is expressed as φc(x) = Σg φgexp[i(g∙x)] (g: reciprocal lattice vector). The time-independent Schrödinger equation with this Hamiltonian is given as (the eigenvalue equation)
Heψe,κ(x) = Ee(κe)ψe,κ(x),
where the eigenfunction ψe,κ is the electron wave function, and the eigenvalue Ee(κe) is the electron energy (κe: electron wavevector). The relation between the wavevector κe and the energy Ee provides the electronic band structure. In practice, a lattice as a many-body system includes interactions between electrons and nuclei in the potential, but this calculation can be too intricate. Thus, many approximate techniques have been suggested, and one of them is density functional theory (DFT), which uses functionals of the spatially dependent electron density instead of the full interactions. DFT is widely used in ab initio software (ABINIT, CASTEP, Quantum ESPRESSO, SIESTA, VASP, WIEN2k, etc.). The electron specific heat is based on the energy states and occupancy distribution (the Fermi–Dirac statistics). In general, the heat capacity of electrons is small except at very high temperature when they are in thermal equilibrium with phonons (lattice). Electrons contribute to heat conduction (in addition to charge carrying) in solids, especially in metals. The thermal conductivity tensor in a solid is the sum of the electronic and phonon thermal conductivity tensors, K = Ke + Kp.
Electrons are affected by two thermodynamic forces [from the charge, ∇(EF/ec) where EF is the Fermi level and ec is the electron charge and temperature gradient, ∇(1/T)] because they carry both charge and thermal energy, and thus electric current je and heat flow q are described with the thermoelectric tensors (Aee, Aet, Ate, and Att) from the Onsager reciprocal relations as
Converting these equations to have je equation in terms of electric field ee and ∇T and q equation with je and ∇T, (using scalar coefficients for isotropic transport, αee, αet, αte, and αtt instead of Aee, Aet, Ate, and Att)
Electrical conductivity/resistivity σe (Ω−1m−1)/ ρe (Ω-m), electric thermal conductivity ke (W/m-K) and the Seebeck/Peltier coefficients αS (V/K)/αP (V) are defined as,
Various carriers (electrons, magnons, phonons, and polarons) and their interactions substantially affect the Seebeck coefficient. The Seebeck coefficient can be decomposed into two contributions, αS = αS,pres + αS,trans, where αS,pres is the sum of contributions to the carrier-induced entropy change, i.e., αS,pres = αS,mix + αS,spin + αS,vib (αS,mix: entropy-of-mixing, αS,spin: spin entropy, and αS,vib: vibrational entropy). The other contribution αS,trans is the net energy transferred in moving a carrier divided by qT (q: carrier charge). The electron's contributions to the Seebeck coefficient are mostly in αS,pres. The αS,mix is usually dominant in lightly doped semiconductors. The change of the entropy-of-mixing upon adding an electron to a system is given by the so-called Heikes formula
αS,mix = (kB/q) ln[(1 − feo)/feo],
where feo = N/Na is the ratio of electrons to sites (carrier concentration). Using the chemical potential (μ), the thermal energy (kBT) and the Fermi function, above equation can be expressed in an alternative form, αS,mix = (kB/q)[(Ee − μ)/(kBT)].
Extending the Seebeck effect to spins, a ferromagnetic alloy can be a good example. The contribution to the Seebeck coefficient that results from electrons' presence altering the system's spin entropy is given by αS,spin = ΔSspin/q = (kB/q)ln[(2s + 1)/(2s0 + 1)], where s0 and s are net spins of the magnetic site in the absence and presence of the carrier, respectively. Many vibrational effects with electrons also contribute to the Seebeck coefficient. The softening of the vibrational frequencies, which produces a change in the vibrational entropy, is one example. The vibrational entropy is the negative derivative of the free energy, i.e.,
where Dp(ω) is the phonon density-of-states for the structure. For the high-temperature limit and series expansions of the hyperbolic functions, the above is simplified as αS,vib = (ΔSvib/q) = (kB/q)Σi(-Δωi/ωi).
The Seebeck coefficient derived in the above Onsager formulation is the mixing component αS,mix, which dominates in most semiconductors. The vibrational component in high-band gap materials such as B13C2 is very important.
Considering the microscopic transport (transport is a result of nonequilibrium),
where ue is the electron velocity vector, fe (feo) is the electron nonequilibrium (equilibrium) distribution, τe is the electron scattering time, Ee is the electron energy, and Fte is the electric and thermal forces from ∇(EF/ec) and ∇(1/T).
Relating the thermoelectric coefficients to the microscopic transport equations for je and q, the thermal, electric, and thermoelectric properties are calculated. Thus, ke increases with the electrical conductivity σe and temperature T, as the Wiedemann–Franz law presents [ke/(σeTe) = (1/3)(πkB/ec)2 = 2.44×10−8 W-Ω/K2]. Electron transport (represented as σe) is a function of the carrier density ne,c and electron mobility μe (σe = ecne,cμe). μe is determined by the electron scattering rates (or relaxation time τe) in various interaction mechanisms including interaction with other electrons, phonons, impurities and boundaries.
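The Lorenz number and an electronic thermal conductivity estimate follow directly from the Wiedemann–Franz relation above; the copper conductivity used below is an approximate literature value, included only as an illustration.

```python
import math

# Wiedemann-Franz law: k_e = L * sigma_e * T, with Lorenz number L = (pi^2/3)(k_B/e_c)^2.
k_B = 1.381e-23      # J/K
e_c = 1.602e-19      # C
L = (math.pi**2 / 3.0) * (k_B / e_c)**2
print(L)             # ~2.44e-8 W-Ohm/K^2

sigma_e = 5.96e7     # electrical conductivity of copper, 1/Ohm-m (approximate)
T = 300.0
print(L * sigma_e * T)   # ~440 W/m-K, of the order of copper's measured conductivity
```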
Electrons interact with other principal energy carriers. Electrons accelerated by an electric field are relaxed through the energy conversion to phonon (in semiconductors, mostly optical phonon), which is called Joule heating. Energy conversion between electric potential and phonon energy is considered in thermoelectrics such as Peltier cooling and thermoelectric generator. Also, study of interaction with photons is central in optoelectronic applications (i.e. light-emitting diode, solar photovoltaic cells, etc.). Interaction rates or energy conversion rates can be evaluated using the Fermi golden rule (from the perturbation theory) with ab initio approach.
Fluid particle
Fluid particle is the smallest unit (atoms or molecules) in the fluid phase (gas, liquid or plasma) without breaking any chemical bond. Energy of fluid particle is divided into potential, electronic, translational, vibrational, and rotational energies. The heat (thermal) energy storage in fluid particle is through the temperature-dependent particle motion (translational, vibrational, and rotational energies). The electronic energy is included only if temperature is high enough to ionize or dissociate the fluid particles or to include other electronic transitions. These quantum energy states of the fluid particles are found using their respective quantum Hamiltonian. These are Hf,t = −(ħ2/2m)∇2, Hf,v = −(ħ2/2m)∇2 + Γx2/2 and Hf,r = −(ħ2/2If)∇2 for translational, vibrational and rotational modes. (Γ: spring constant, If: the moment of inertia for the molecule). From the Hamiltonian, the quantized fluid particle energy state Ef and partition functions Zf [with the Maxwell–Boltzmann (MB) occupancy distribution] are found as
translational
vibrational
rotational
total
Here, gf is the degeneracy, n, l, and j are the translational, vibrational and rotational quantum numbers, Tf,v is the characteristic temperature for vibration (= ħωf,v/kB, ωf,v: vibration frequency), and Tf,r is the rotational temperature [= ħ2/(2IfkB)]. The average specific internal energy is related to the partition function through Zf,
⟨ef⟩ = kBT2 ∂(ln Zf)/∂T.
With the energy states and the partition function, the fluid particle specific heat capacity cv,f is the summation of contribution from various kinetic energies (for non-ideal gas the potential energy is also added). Because the total degrees of freedom in molecules is determined by the atomic configuration, cv,f has different formulas depending on the configuration,
monatomic ideal gas: cv,f = 3Rg/2M
diatomic ideal gas: cv,f = (5/2)(Rg/M) + (Rg/M)(Tf,v/T)2 exp(Tf,v/T)/[exp(Tf,v/T) − 1]2
nonlinear, polyatomic ideal gas: cv,f = 3(Rg/M) + (Rg/M) Σj=13No−6 (Tf,v,j/T)2 exp(Tf,v,j/T)/[exp(Tf,v,j/T) − 1]2
where Rg is the gas constant (= NAkB, NA: the Avogadro constant) and M is the molecular mass (kg/kmol). (For the polyatomic ideal gas, No is the number of atoms in a molecule.) In gas, the constant-pressure specific heat capacity cp,f has a larger value, and the difference depends on the temperature T, the volumetric thermal expansion coefficient β and the isothermal compressibility κ [cp,f – cv,f = Tβ2/(ρfκ), ρf: the fluid density]. For dense fluids, the interactions between the particles (the van der Waals interaction) should be included, and cv,f and cp,f change accordingly.
The net motion of particles (under gravity or external pressure) gives rise to the convection heat flux qu = ρfcp,fufT. Conduction heat flux qk for ideal gas is derived with the gas kinetic theory or the Boltzmann transport equations, and the thermal conductivity is
where ⟨uf2⟩1/2 is the RMS (root mean square) thermal velocity (3kBT/m from the MB distribution function, m: atomic mass) and τf-f is the relaxation time (or intercollision time period) [(21/2π d2nf ⟨uf⟩)−1 from the gas kinetic theory, ⟨uf⟩: average thermal speed (8kBT/πm)1/2, d: the collision diameter of fluid particle (atom or molecule), nf: fluid number density].
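A rough kinetic-theory estimate of a dilute-gas thermal conductivity, assuming the elementary form kf = (1/3) nf cv ⟨uf⟩ λf with the hard-sphere mean free path quoted above; the argon-like inputs are approximate, and this simple estimate is known to underpredict measured values by a factor of a few.

```python
import math

# Elementary kinetic-theory estimate of the thermal conductivity of a dilute monatomic gas.
k_B = 1.381e-23
T, P = 300.0, 1.013e5
m = 39.95 * 1.661e-27      # atomic mass of argon, kg (approximate)
d = 3.4e-10                # collision diameter, m (approximate)

n_f = P / (k_B * T)                                  # number density, 1/m^3
u_avg = math.sqrt(8.0 * k_B * T / (math.pi * m))     # mean thermal speed, m/s
lam = 1.0 / (math.sqrt(2.0) * math.pi * d**2 * n_f)  # mean free path, m
cv = 1.5 * k_B                                       # heat capacity per atom (monatomic)

print(n_f * cv * u_avg * lam / 3.0)   # ~5e-3 W/m-K; measured argon value is ~0.018 W/m-K
```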
kf is also calculated using molecular dynamics (MD), which simulates physical movements of the fluid particles with the Newton equations of motion (classical) and force field (from ab initio or empirical properties). For calculation of kf, the equilibrium MD with Green–Kubo relations, which express the transport coefficients in terms of integrals of time correlation functions (considering fluctuation), or nonequilibrium MD (prescribing heat flux or temperature difference in simulated system) are generally employed.
Fluid particles can interact with other principal carriers. Vibrational or rotational modes, which have relatively high energy, are excited or decay through the interaction with photons. Gas lasers employ the interaction kinetics between fluid particles and photons, and laser cooling has also been considered in the CO2 gas laser. Also, fluid particles can be adsorbed on solid surfaces (physisorption and chemisorption), and the frustrated vibrational modes in adsorbates (fluid particles) decay by creating e−-h+ pairs or phonons. These interaction rates are also calculated through ab initio calculations on the fluid particle and the Fermi golden rule.
Photon
Photon is the quanta of electromagnetic (EM) radiation and energy carrier for radiation heat transfer. The EM wave is governed by the classical Maxwell equations, and the quantization of EM wave is used for phenomena such as the blackbody radiation (in particular to explain the ultraviolet catastrophe). The quanta EM wave (photon) energy of angular frequency ωph is Eph = ħωph, and follows the Bose–Einstein distribution function (fph). The photon Hamiltonian for the quantized radiation field (second quantization) is
where ee and be are the electric and magnetic fields of the EM radiation, εo and μo are the free-space permittivity and permeability, V is the interaction volume, ωph,α is the photon angular frequency for the α mode and cα† and cα are the photon creation and annihilation operators. The vector potential ae of EM fields (ee = −∂ae/∂t and be = ∇×ae) is
where sph,α is the unit polarization vector, κα is the wave vector.
Blackbody radiation among various types of photon emission employs the photon gas model with thermalized energy distribution without interphoton interaction. From the linear dispersion relation (i.e., dispersionless), phase and group speeds are equal (uph = d ωph/dκ = ωph/κ, uph: photon speed) and the Debye (used for dispersionless photon) density of states is Dph,b,ωdω = ωph2dωph/π2uph3. With Dph,b,ω and equilibrium distribution fph, photon energy spectral distribution dIb,ω or dIb,λ (λph: wavelength) and total emissive power Eb are derived as
Ib,ω = [ħωph3/(4π3uph2)] [exp(ħωph/kBT) − 1]−1 (Planck law),
Eb = σSBT4 (Stefan–Boltzmann law).
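The Stefan–Boltzmann constant can be recovered by integrating the Planck distribution numerically; the sketch below uses the standard substitution x = ħω/kBT, for which the dimensionless integral approaches π⁴/15.

```python
import numpy as np
from scipy.integrate import quad

# sigma_SB = [2*pi*k_B^4/(h^3*c^2)] * integral_0^inf x^3/(e^x - 1) dx, with the integral = pi^4/15.
h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23

integral, _ = quad(lambda x: x**3 / np.expm1(x), 1e-12, 50.0)
print(integral, np.pi**4 / 15.0)                        # both ~6.4939
print(2.0 * np.pi * k_B**4 / (h**3 * c**2) * integral)  # ~5.67e-8 W/m^2-K^4
```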
Compared to blackbody radiation, laser emission has high directionality (small solid angle ΔΩ) and spectral purity (narrow bands Δω). Lasers range from the far-infrared to the X-ray/γ-ray regimes, based on the resonant transition (stimulated emission) between electronic energy states.
Near-field radiation from thermally excited dipoles and other electric/magnetic transitions is very effective within a short distance (order of wavelength) from emission sites.
The BTE for photon particle momentum pph = ħωphs/uph along direction s experiencing absorption/emission (= uphσph,ω[fph(ωph,T) - fph(s)], σph,ω: spectral absorption coefficient), and generation/removal , is
In terms of radiation intensity (Iph,ω = uphfphħωphDph,ω/4π, Dph,ω: photon density of states), this is called the equation of radiative transfer (ERT)
The net radiative heat flux vector is
From the Einstein population rate equation, spectral absorption coefficient σph,ω in ERT is,
where is the interaction probability (absorption) rate or the Einstein coefficient B12 (J−1 m3 s−1), which gives the probability per unit time per unit spectral energy density of the radiation field (1: ground state, 2: excited state), and ne is electron density (in ground state). This can be obtained using the transition dipole moment μe with the FGR and relationship between Einstein coefficients. Averaging σph,ω over ω gives the average photon absorption coefficient σph.
For the case of optically thick medium of length L, i.e., σphL >> 1, and using the gas kinetic theory, the photon conductivity kph is 16σSBT3/3σph (σSB: Stefan–Boltzmann constant, σph: average photon absorption), and photon heat capacity nphcv,ph is 16σSBT3/uph.
Photons have the largest range of energy and are central to a variety of energy conversions. Photons interact with electric and magnetic entities, for example electric dipoles, which in turn are excited by optical phonons or fluid particle vibration, or the transition dipole moments of electronic transitions. In heat transfer physics, the interaction kinetics of photons are treated using the perturbation theory (the Fermi golden rule) and the interaction Hamiltonian. The photon-electron interaction is
where pe is the dipole moment vector and a† and a are the creation and annihilation of internal motion of electron. Photons also participate in ternary interactions, e.g., phonon-assisted photon absorption/emission (transition of electron energy level). The vibrational mode in fluid particles can decay or become excited by emitting or absorbing photons. Examples are solid and molecular gas laser cooling.
Using ab initio calculations based on the first principles along with EM theory, various radiative properties such as dielectric function (electrical permittivity, εe,ω), spectral absorption coefficient (σph,ω), and the complex refraction index (mω), are calculated for various interactions between photons and electric/magnetic entities in matter. For example, the imaginary part (εe,c,ω) of complex dielectric function (εe,ω = εe,r,ω + i εe,c,ω) for electronic transition across a bandgap is
where V is the unit-cell volume, VB and CB denote the valence and conduction bands, wκ is the weight associated with a κ-point, and pij is the transition momentum matrix element.
The real part εe,r,ω is obtained from εe,c,ω using the Kramers–Kronig relation
εe,r,ω = 1 + (2/π) P ∫0∞ [ω′εe,c,ω′/(ω′2 − ω2)] dω′.
Here, P denotes the principal value of the integral.
In another example, for the far-IR regions where the optical phonons are involved, the dielectric function (εe,ω) is calculated as
where LO and TO denote the longitudinal and transverse optical phonon modes, j runs over all the IR-active modes, and γ is the temperature-dependent damping term in the oscillator model. εe,∞ is the high frequency dielectric permittivity, which can be calculated with a DFT calculation when the ions are treated as an external potential.
From these dielectric function (εe,ω) calculations (e.g., ABINIT, VASP, etc.), the complex refractive index mω (= nω + iκω, nω: refraction index and κω: extinction index) is found, i.e., mω2 = εe,ω = εe,r,ω + iεe,c,ω. The surface reflectance R of an ideal surface with normal incidence from vacuum or air is given as R = [(nω − 1)2 + κω2]/[(nω + 1)2 + κω2]. The spectral absorption coefficient is then found from σph,ω = 2ωκω/uph. The spectral absorption coefficients for various electric entities are listed in the table below.
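A small sketch of the normal-incidence reflectance and spectral absorption coefficient obtained from a complex refractive index, using the expressions above; the n, κ and wavelength values are illustrative assumptions, not measured data.

```python
import math

# Reflectance R and spectral absorption coefficient sigma_ph from m = n + i*kappa.
n, kappa = 3.5, 0.01          # illustrative, weakly absorbing semiconductor-like values
lam = 1.0e-6                  # vacuum wavelength, m
u_ph = 2.998e8                # photon (light) speed in vacuum, m/s
omega = 2.0 * math.pi * u_ph / lam

R = ((n - 1.0)**2 + kappa**2) / ((n + 1.0)**2 + kappa**2)
sigma_ph = 2.0 * omega * kappa / u_ph   # equals 4*pi*kappa/lambda, 1/m
print(R, sigma_ph)                      # R ~ 0.31, sigma_ph ~ 1.3e5 1/m
```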
See also
Energy transfer
Mass transfer
Energy transformation (Energy conversion)
Thermal physics
Thermal science
Thermal engineering
References
Heat transfer
Thermodynamics
Condensed matter physics
Action principles
Action principles lie at the heart of fundamental physics, from classical mechanics through quantum mechanics, particle physics, and general relativity. Action principles start with an energy function called a Lagrangian describing the physical system. The accumulated value of this energy function between two states of the system is called the action. Action principles apply the calculus of variations to the action. The action depends on the energy function, and the energy function depends on the position, motion, and interactions in the system: variation of the action allows the derivation of the equations of motion without vectors or forces.
Several distinct action principles differ in the constraints on their initial and final conditions.
The names of action principles have evolved over time and differ in details of the endpoints of the paths and the nature of the variation. Quantum action principles generalize and justify the older classical principles. Action principles are the basis for Feynman's version of quantum mechanics, general relativity and quantum field theory.
The action principles have applications as broad as physics, including many problems in classical mechanics but especially in modern problems of quantum mechanics and general relativity. These applications built up over two centuries as the power of the method and its further mathematical development rose.
This article introduces the action principle concepts and summarizes other articles with more details on concepts and specific principles.
Common concepts
Action principles are "integral" approaches rather than the "differential" approach of Newtonian mechanics. The core ideas are based on energy, paths, an energy function called the Lagrangian along paths, and selection of a path according to the "action", a continuous sum or integral of the Lagrangian along the path.
Energy, not force
Introductory study of mechanics, the science of interacting objects, typically begins with Newton's laws based on the concept of force, defined by the acceleration it causes when applied to mass: F = ma. This approach to mechanics focuses on a single point in space and time, attempting to answer the question: "What happens next?". Mechanics based on action principles begins with the concept of action, an energy tradeoff between kinetic energy and potential energy, defined by the physics of the problem. These approaches answer questions relating starting and ending points: Which trajectory will place a basketball in the hoop? If we launch a rocket to the Moon today, how can it land there in 5 days? The Newtonian and action-principle forms are equivalent, and either one can solve the same problems, but selecting the appropriate form will make solutions much easier.
The energy function in the action principles is not the total energy (conserved in an isolated system), but the Lagrangian, the difference between kinetic and potential energy. The kinetic energy combines the energy of motion for all the objects in the system; the potential energy depends upon the instantaneous position of the objects and drives the motion of the objects. The motion of the objects places them in new positions with new potential energy values, giving a new value for the Lagrangian.
Using energy rather than force gives immediate advantages as a basis for mechanics. Force mechanics involves 3-dimensional vector calculus, with 3 space and 3 momentum coordinates for each object in the scenario; energy is a scalar magnitude combining information from all objects, giving an immediate simplification in many cases. The components of force vary with coordinate systems; the energy value is the same in all coordinate systems. Force requires an inertial frame of reference; once velocities approach the speed of light, special relativity profoundly affects mechanics based on forces. In action principles, relativity merely requires a different Lagrangian: the principle itself is independent of coordinate systems.
Paths, not points
The explanatory diagrams in force-based mechanics usually focus on a single point, like the center of momentum, and show vectors of forces and velocities. The explanatory diagrams of action-based mechanics have two points with actual and possible paths connecting them. These diagrammatic conventions reiterate the different strong points of each method.
Depending on the action principle, the two points connected by paths in a diagram may represent two particle positions at different times, or the two points may represent values in a configuration space or in a phase space. The mathematical technology and terminology of action principles can be learned by thinking in terms of physical space, then applied in the more powerful and general abstract spaces.
Action along a path
Action principles assign a number—the action—to each possible path between two points. This number is computed by adding an energy value for each small section of the path multiplied by the time spent in that section:
action S = ∫ (kinetic energy − potential energy) dt, integrated from the start time to the end time,
where the form of the kinetic and potential energy expressions depends upon the physics problem, and their value at each point on the path depends upon relative coordinates corresponding to that point. The energy function is called a Lagrangian; in simple problems it is the kinetic energy minus the potential energy of the system.
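To make the idea concrete, here is a minimal numerical sketch (not from the article) that evaluates this discretized action for a one-dimensional free-fall path and for a slightly perturbed path between the same two events; for this Lagrangian the true path has the smaller (stationary) action.

```python
import numpy as np

g, m = 9.81, 1.0                    # gravitational acceleration (m/s^2), mass (kg)
t = np.linspace(0.0, 1.0, 2001)     # path sampled between t = 0 and t = 1 s

def action(x, t):
    """Discretized action: sum of (KE - PE) * dt along a sampled path x(t)."""
    dt = t[1] - t[0]
    v = np.gradient(x, dt)                      # velocity along the path
    lagrangian = 0.5 * m * v**2 - m * g * x     # KE - PE
    return np.sum(lagrangian) * dt

# True free-fall path dropped from rest at x = 0
x_true = -0.5 * g * t**2
# A nearby path with the same endpoints, perturbed by a sine bump
x_perturbed = x_true + 0.05 * np.sin(np.pi * t)

print(action(x_true, t))        # stationary (here, minimal) action
print(action(x_perturbed, t))   # larger action for the nearby path
```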
Path variation
A system moving between two points takes one particular path; other similar paths are not taken. Each path corresponds to a value of the action.
An action principle predicts or explains that the particular path taken has a stationary value for the system's action: similar paths near the one taken have very similar action value. This variation in the action value is key to the action principles.
The symbol δ is used to indicate the path variations, so an action principle appears mathematically as
δS = 0,
meaning that at the stationary point, the variation of the action with some fixed constraints is zero.
For action principles, the stationary point may be a minimum or a saddle point, but not a maximum. Elliptical planetary orbits provide a simple example of two paths with equal action, one in each direction around the orbit; neither can be the minimum or "least action". The path variation implied by δ is not the same as a differential like dt. The action integral depends on the coordinates of the objects, and these coordinates depend upon the path taken. Thus the action integral is a functional, a function of a function.
Conservation principles
An important result from geometry known as Noether's theorem states that any conserved quantities in a Lagrangian imply a continuous symmetry and conversely. For example, a Lagrangian independent of time corresponds to a system with conserved energy; spatial translation independence implies momentum conservation; angular rotation invariance implies angular momentum conservation.
These examples are global symmetries, where the independence is itself independent of space or time; more general local symmetries having a functional dependence on space or time lead to gauge theory. The observed conservation of isospin was used by Chen Ning Yang and Robert Mills in 1953 to construct a gauge theory for mesons, leading some decades later to modern particle physics theory.
Distinct principles
Action principles apply to a wide variety of physical problems, including all of fundamental physics. The only major exceptions are cases involving friction or when only the initial position and velocities are given. Different action principles have different meaning for the variations; each specific application of an action principle requires a specific Lagrangian describing the physics. A common name for any or all of these principles is "the principle of least action". For a discussion of the names and historical origin of these principles see action principle names.
Fixed endpoints with conserved energy
When total energy and the endpoints are fixed, Maupertuis's least action principle applies. For example, to score points in basketball the ball must leave the shooter's hand and go through the hoop, but the time of the flight is not constrained. Maupertuis's least action principle is written mathematically as the stationary condition
δW = 0
on the abbreviated action
W = ∫ p · dq
(sometimes written W₀), where p = (p₁, p₂, ..., p_N) are the particle momenta or the conjugate momenta of generalized coordinates, defined by the equation
pₖ = ∂L/∂q̇ₖ,
where L is the Lagrangian. Some textbooks write the variation as Δ rather than δ, to emphasize that the variation used in this form of the action principle differs from Hamilton's variation. Here the total energy is fixed during the variation, but not the time, the reverse of the constraints on Hamilton's principle. Consequently, the same path and end points take different times and energies in the two forms. The solutions in the case of this form of Maupertuis's principle are orbits: functions relating coordinates to each other in which time is simply an index or a parameter.
Time-independent potentials; no forces
For a time-invariant system, the action relates simply to the abbreviated action on the stationary path as
S = W − E(t₂ − t₁)
for energy E and time difference t₂ − t₁. For a rigid body with no net force, the actions are identical, and the variational principles become equivalent to Fermat's principle of least time:
δ(t₂ − t₁) = 0.
Fixed events
When the physics problem gives the two endpoints as a position and a time, that is as events, Hamilton's action principle applies. For example, imagine planning a trip to the Moon. During your voyage the Moon will continue its orbit around the Earth: it's a moving target. Hamilton's principle for objects at positions q(t) is written mathematically as
δS = 0, with the action S = ∫ L(q(t), q̇(t), t) dt taken between fixed times t₁ and t₂.
The constraint t₂ − t₁ = Δt means that we only consider paths taking the same time, as well as connecting the same two points q(t₁) and q(t₂). The Lagrangian L = T − V is the difference between kinetic energy and potential energy at each point on the path. Solution of the resulting equations gives the world line q(t). Starting with Hamilton's principle, the local differential Euler–Lagrange equation can be derived for systems of fixed energy. The action S in Hamilton's principle is the Legendre transformation of the action in Maupertuis' principle.
Classical field theory
The concepts and many of the methods useful for particle mechanics also apply to continuous fields. The action integral runs over a Lagrangian density, but the concepts are so close that the density is often simply called the Lagrangian.
Quantum action principles
For quantum mechanics, the action principles have significant advantages: only one mechanical postulate is needed, if a covariant Lagrangian is used in the action, the result is relativistically correct, and they transition clearly to classical equivalents.
Both Richard Feynman and Julian Schwinger developed quantum action principles based on early work by Paul Dirac. Feynman's integral method was not a variational principle but reduces to the classical least action principle; it led to his Feynman diagrams. Schwinger's differential approach relates infinitesimal amplitude changes to infinitesimal action changes.
Feynman's action principle
When quantum effects are important, new action principles are needed. Instead of a particle following a path, quantum mechanics defines a probability amplitude ψ(xₐ, tₐ) at one point xₐ and time tₐ related to a probability amplitude ψ(x_b, t_b) at a different point x_b later in time t_b:
ψ(x_b, t_b) = ∫ K(x_b, t_b; xₐ, tₐ) ψ(xₐ, tₐ) dxₐ, with the kernel K(b, a) = ∫ e^(iS[x(t)]/ℏ) 𝒟x(t),
where S is the classical action.
Instead of a single path with stationary action, all possible paths add (the integral over 𝒟x(t)), weighted by a complex probability amplitude e^(iS/ℏ). The phase of the amplitude is given by the action divided by the Planck constant or quantum of action: S/ℏ. When the action of a particle is much larger than ℏ, S ≫ ℏ, the phase changes rapidly along the path: the amplitude averages to a small number.
Thus the Planck constant sets the boundary between classical and quantum mechanics.
All of the paths contribute in the quantum action principle. At the end point, where the paths meet, the paths with similar phases add, and those with phases differing by π subtract. Close to the path expected from classical physics, phases tend to align; the tendency is stronger for more massive objects that have larger values of action. In the classical limit, one path dominates: the path of stationary action.
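A toy numerical sketch of this cancellation (not from the article; units are arbitrary): label a family of paths by a single parameter a with a quadratic "action" S(a), and sum the phase factors e^(iS/ℏ). When ℏ is small compared with the action, paths far from the stationary one cancel and only the neighborhood of the stationary path matters.

```python
import numpy as np

# One-parameter family of paths labeled by a; the stationary-action path is a = 1.
def path_sum(hbar, a_min, a_max, n=1_000_001):
    a = np.linspace(a_min, a_max, n)
    S = (a - 1.0)**2                       # toy action, arbitrary units
    return np.trapz(np.exp(1j * S / hbar), a)   # sum of amplitudes e^{iS/hbar}

for hbar in (5.0, 0.01):
    full   = path_sum(hbar, -6.0, 8.0)     # "all" paths
    nearby = path_sum(hbar,  0.0, 2.0)     # only paths near the stationary one
    # For small hbar the distant paths cancel, so |full| is close to |nearby|;
    # for large hbar, dissimilar paths still contribute and the two differ.
    print(hbar, abs(full), abs(nearby))
```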
Schwinger's action principle
Schwinger's approach relates variations in the transition amplitudes to variations in an action matrix element:
δ⟨q₂, t₂ | q₁, t₁⟩ = (i/ℏ) ⟨q₂, t₂ | δS | q₁, t₁⟩,
where the action operator is
S = ∫ L dt.
The Schwinger form makes analysis of variation of the Lagrangian itself, for example, variation in potential source strength, especially transparent.
The optico-mechanical analogy
For every path, the action integral builds in value from zero at the starting point to its final value at the end. Any nearby path has similar values at similar distances from the starting point. Lines or surfaces of constant partial action value can be drawn across the paths, creating a wave-like view of the action. Analysis like this connects particle-like rays of geometrical optics with the wavefronts of Huygens–Fresnel principle.
Applications
Action principles are applied to derive differential equations like the Euler–Lagrange equations or as direct applications to physical problems.
Classical mechanics
Action principles can be directly applied to many problems in classical mechanics, e.g. the shape of elastic rods under load,
the shape of a liquid between two vertical plates (a capillary),
or the motion of a pendulum when its support is in motion.
Chemistry
Quantum action principles are used in the quantum theory of atoms in molecules (QTAIM), a way of decomposing the computed electron density of molecules in to atoms as a way of gaining insight into chemical bonding.
General relativity
Inspired by Einstein's work on general relativity, the renowned mathematician David Hilbert applied the principle of least action to derive the field equations of general relativity. His action, now known as the Einstein–Hilbert action,
S = (1/2κ) ∫ R √(−g) d⁴x,
contained a relativistically invariant volume element √(−g) d⁴x and the Ricci scalar curvature R. The scale factor κ is the Einstein gravitational constant.
Other applications
The action principle is so central in modern physics and mathematics that it is widely applied including in thermodynamics, fluid mechanics, the theory of relativity, quantum mechanics, particle physics, and string theory.
History
The action principle is preceded by earlier ideas in optics. In ancient Greece, Euclid wrote in his Catoptrica that, for the path of light reflecting from a mirror, the angle of incidence equals the angle of reflection. Hero of Alexandria later showed that this path has the shortest length and least time.
Building on the early work of Pierre Louis Maupertuis, Leonhard Euler, and Joseph Louis Lagrange defining versions of principle of least action,
William Rowan Hamilton and in tandem Carl Gustav Jacobi developed a variational form for classical mechanics known as the Hamilton–Jacobi equation.
In 1915, David Hilbert applied the variational principle to derive Albert Einstein's equations of general relativity.
In 1933, the physicist Paul Dirac demonstrated how this principle can be used in quantum calculations by discerning the quantum mechanical underpinning of the principle in the quantum interference of amplitudes. Subsequently Julian Schwinger and Richard Feynman independently applied this principle in quantum electrodynamics.
References
Dynamics (mechanics)
Classical mechanics | 0.80096 | 0.992644 | 0.795068 |
Momentum | In Newtonian mechanics, momentum (: momenta or momentums; more specifically linear momentum or translational momentum) is the product of the mass and velocity of an object. It is a vector quantity, possessing a magnitude and a direction. If is an object's mass and is its velocity (also a vector quantity), then the object's momentum (from Latin pellere "push, drive") is:
In the International System of Units (SI), the unit of measurement of momentum is the kilogram metre per second (kg⋅m/s), which is dimensionally equivalent to the newton-second.
Newton's second law of motion states that the rate of change of a body's momentum is equal to the net force acting on it. Momentum depends on the frame of reference, but in any inertial frame it is a conserved quantity, meaning that if a closed system is not affected by external forces, its total momentum does not change. Momentum is also conserved in special relativity (with a modified formula) and, in a modified form, in electrodynamics, quantum mechanics, quantum field theory, and general relativity. It is an expression of one of the fundamental symmetries of space and time: translational symmetry.
Advanced formulations of classical mechanics, Lagrangian and Hamiltonian mechanics, allow one to choose coordinate systems that incorporate symmetries and constraints. In these systems the conserved quantity is generalized momentum, and in general this is different from the kinetic momentum defined above. The concept of generalized momentum is carried over into quantum mechanics, where it becomes an operator on a wave function. The momentum and position operators are related by the Heisenberg uncertainty principle.
In continuous systems such as electromagnetic fields, fluid dynamics and deformable bodies, a momentum density can be defined as momentum per volume (a volume-specific quantity). A continuum version of the conservation of momentum leads to equations such as the Navier–Stokes equations for fluids or the Cauchy momentum equation for deformable solids or fluids.
Classical
Momentum is a vector quantity: it has both magnitude and direction. Since momentum has a direction, it can be used to predict the resulting direction and speed of motion of objects after they collide. Below, the basic properties of momentum are described in one dimension. The vector equations are almost identical to the scalar equations (see multiple dimensions).
Single particle
The momentum of a particle is conventionally represented by the letter p. It is the product of two quantities, the particle's mass (represented by the letter m) and its velocity v:
p = m v.
The unit of momentum is the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity is in meters per second then the momentum is in kilogram meters per second (kg⋅m/s). In cgs units, if the mass is in grams and the velocity in centimeters per second, then the momentum is in gram centimeters per second (g⋅cm/s).
Being a vector, momentum has magnitude and direction. For example, a 1 kg model airplane, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg⋅m/s due north measured with reference to the ground.
Many particles
The momentum of a system of particles is the vector sum of their momenta. If two particles have respective masses m₁ and m₂, and velocities v₁ and v₂, the total momentum is
p = p₁ + p₂ = m₁v₁ + m₂v₂.
The momenta of more than two particles can be added more generally with the following:
p = Σᵢ pᵢ = Σᵢ mᵢvᵢ.
A system of particles has a center of mass, a point determined by the weighted sum of their positions:
r_cm = (m₁r₁ + m₂r₂ + ⋯)/(m₁ + m₂ + ⋯) = Σᵢ mᵢrᵢ / Σᵢ mᵢ.
If one or more of the particles is moving, the center of mass of the system will generally be moving as well (unless the system is in pure rotation around it). If the total mass of the particles is m, and the center of mass is moving at velocity v_cm, the momentum of the system is:
p = m v_cm.
This is known as Euler's first law.
Relation to force
If the net force F applied to a particle is constant, and is applied for a time interval Δt, the momentum of the particle changes by an amount
Δp = F Δt.
In differential form, this is Newton's second law; the rate of change of the momentum of a particle is equal to the instantaneous force F acting on it,
F = dp/dt.
If the net force experienced by a particle changes as a function of time, F(t), the change in momentum (or impulse J) between times t₁ and t₂ is
J = Δp = ∫ F(t) dt, with the integral taken from t₁ to t₂.
Impulse is measured in the derived units of the newton second (1 N⋅s = 1 kg⋅m/s) or dyne second (1 dyne⋅s = 1 g⋅cm/s)
Under the assumption of constant mass m, it is equivalent to write
F = d(mv)/dt = m dv/dt = ma,
hence the net force is equal to the mass of the particle times its acceleration.
Example: A model airplane of mass 1 kg accelerates from rest to a velocity of 6 m/s due north in 2 s. The net force required to produce this acceleration is 3 newtons due north. The change in momentum is 6 kg⋅m/s due north. The rate of change of momentum is 3 (kg⋅m/s)/s due north which is numerically equivalent to 3 newtons.
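The same bookkeeping in a short Python sketch, using the numbers from the example above:

```python
m = 1.0                          # mass of the model airplane (kg)
v_initial, v_final = 0.0, 6.0    # speeds due north (m/s)
dt = 2.0                         # time interval (s)

dp = m * (v_final - v_initial)   # change in momentum: 6 kg·m/s due north
F = dp / dt                      # average net force: 3 N due north
print(dp, F)
```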
Conservation
In a closed system (one that does not exchange any matter with its surroundings and is not acted on by external forces) the total momentum remains constant. This fact, known as the law of conservation of momentum, is implied by Newton's laws of motion. Suppose, for example, that two particles interact. As explained by the third law, the forces between them are equal in magnitude but opposite in direction. If the particles are numbered 1 and 2, the second law states that F₁ = dp₁/dt and F₂ = dp₂/dt. Therefore,
dp₁/dt = −dp₂/dt,
with the negative sign indicating that the forces oppose. Equivalently,
d(p₁ + p₂)/dt = 0.
If the velocities of the particles are u₁ and u₂ before the interaction, and afterwards they are v₁ and v₂, then
m₁u₁ + m₂u₂ = m₁v₁ + m₂v₂.
This law holds no matter how complicated the force is between particles. Similarly, if there are several particles, the momentum exchanged between each pair of particles adds to zero, so the total change in momentum is zero. The conservation of the total momentum of a number of interacting particles can be expressed as
m₁u₁ + m₂u₂ + m₃u₃ + ⋯ = m₁v₁ + m₂v₂ + m₃v₃ + ⋯
This conservation law applies to all interactions, including collisions (both elastic and inelastic) and separations caused by explosive forces. It can also be generalized to situations where Newton's laws do not hold, for example in the theory of relativity and in electrodynamics.
Dependence on reference frame
Momentum is a measurable quantity, and the measurement depends on the frame of reference. For example: if an aircraft of mass 1000 kg is flying through the air at a speed of 50 m/s its momentum can be calculated to be 50,000 kg.m/s. If the aircraft is flying into a headwind of 5 m/s its speed relative to the surface of the Earth is only 45 m/s and its momentum can be calculated to be 45,000 kg.m/s. Both calculations are equally correct. In both frames of reference, any change in momentum will be found to be consistent with the relevant laws of physics.
Suppose x is a position in an inertial frame of reference. From the point of view of another frame of reference, moving at a constant speed u relative to the other, the position (represented by a primed coordinate) changes with time as
x′ = x − ut.
This is called a Galilean transformation.
If a particle is moving at speed dx/dt = v in the first frame of reference, in the second, it is moving at speed
v′ = dx′/dt = v − u.
Since u does not change, the second reference frame is also an inertial frame and the accelerations are the same:
a′ = dv′/dt = a.
Thus, momentum is conserved in both reference frames. Moreover, as long as the force has the same form, F′ = F, in both frames, Newton's second law is unchanged. Forces such as Newtonian gravity, which depend only on the scalar distance between objects, satisfy this criterion. This independence of reference frame is called Newtonian relativity or Galilean invariance.
A change of reference frame can often simplify calculations of motion. For example, in a collision of two particles, a reference frame can be chosen where one particle begins at rest. Another commonly used reference frame is the center of mass frame, one that is moving with the center of mass. In this frame, the total momentum is zero.
Application to collisions
If two particles, each of known momentum, collide and coalesce, the law of conservation of momentum can be used to determine the momentum of the coalesced body. If the outcome of the collision is that the two particles separate, the law is not sufficient to determine the momentum of each particle. If the momentum of one particle after the collision is known, the law can be used to determine the momentum of the other particle. Alternatively if the combined kinetic energy after the collision is known, the law can be used to determine the momentum of each particle after the collision. Kinetic energy is usually not conserved. If it is conserved, the collision is called an elastic collision; if not, it is an inelastic collision.
Elastic collisions
An elastic collision is one in which no kinetic energy is transformed into heat or some other form of energy. Perfectly elastic collisions can occur when the objects do not touch each other, as for example in atomic or nuclear scattering where electric repulsion keeps the objects apart. A slingshot maneuver of a satellite around a planet can also be viewed as a perfectly elastic collision. A collision between two pool balls is a good example of an almost totally elastic collision, due to their high rigidity, but when bodies come in contact there is always some dissipation.
A head-on elastic collision between two bodies can be represented by velocities in one dimension, along a line passing through the bodies. If the velocities are u₁ and u₂ before the collision and v₁ and v₂ after, the equations expressing conservation of momentum and kinetic energy are:
m₁u₁ + m₂u₂ = m₁v₁ + m₂v₂
½m₁u₁² + ½m₂u₂² = ½m₁v₁² + ½m₂v₂².
A change of reference frame can simplify analysis of a collision. For example, suppose there are two bodies of equal mass m, one stationary and one approaching the other at a speed v (as in the figure). The center of mass is moving at speed v/2 and both bodies are moving towards it at speed v/2. Because of the symmetry, after the collision both must be moving away from the center of mass at the same speed. Adding the speed of the center of mass to both, we find that the body that was moving is now stopped and the other is moving away at speed v. The bodies have exchanged their velocities. Regardless of the velocities of the bodies, a switch to the center of mass frame leads us to the same conclusion. Therefore, the final velocities are given by
v₁ = u₂, v₂ = u₁.
In general, when the initial velocities are known, the final velocities are given by
v₁ = ((m₁ − m₂)/(m₁ + m₂)) u₁ + (2m₂/(m₁ + m₂)) u₂
v₂ = ((m₂ − m₁)/(m₁ + m₂)) u₂ + (2m₁/(m₁ + m₂)) u₁.
If one body has much greater mass than the other, its velocity will be little affected by a collision while the other body will experience a large change.
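A short Python check of these formulas with illustrative values:

```python
def elastic_collision(m1, u1, m2, u2):
    """1-D elastic collision: return the final velocities (v1, v2)."""
    v1 = (m1 - m2) / (m1 + m2) * u1 + 2 * m2 / (m1 + m2) * u2
    v2 = (m2 - m1) / (m1 + m2) * u2 + 2 * m1 / (m1 + m2) * u1
    return v1, v2

# Equal masses exchange velocities:
print(elastic_collision(1.0, 5.0, 1.0, 0.0))       # (0.0, 5.0)

# A heavy body is barely affected; the light one rebounds fast:
print(elastic_collision(1000.0, 5.0, 1.0, 0.0))    # (~4.99, ~9.99)

# Momentum and kinetic energy are both conserved:
m1, u1, m2, u2 = 2.0, 3.0, 5.0, -1.0
v1, v2 = elastic_collision(m1, u1, m2, u2)
print(m1*u1 + m2*u2, m1*v1 + m2*v2)                              # equal
print(0.5*m1*u1**2 + 0.5*m2*u2**2, 0.5*m1*v1**2 + 0.5*m2*v2**2)  # equal
```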
Inelastic collisions
In an inelastic collision, some of the kinetic energy of the colliding bodies is converted into other forms of energy (such as heat or sound). Examples include traffic collisions, in which the effect of loss of kinetic energy can be seen in the damage to the vehicles; electrons losing some of their energy to atoms (as in the Franck–Hertz experiment); and particle accelerators in which the kinetic energy is converted into mass in the form of new particles.
In a perfectly inelastic collision (such as a bug hitting a windshield), both bodies have the same motion afterwards. A head-on inelastic collision between two bodies can be represented by velocities in one dimension, along a line passing through the bodies. If the velocities are u₁ and u₂ before the collision then in a perfectly inelastic collision both bodies will be travelling with velocity v after the collision. The equation expressing conservation of momentum is:
m₁u₁ + m₂u₂ = (m₁ + m₂) v.
If one body is motionless to begin with (e.g. u₂ = 0), the equation for conservation of momentum is
m₁u₁ = (m₁ + m₂) v,
so
v = m₁u₁/(m₁ + m₂).
In a different situation, if the frame of reference is moving at the final velocity such that v = 0, the objects would be brought to rest by a perfectly inelastic collision and 100% of the kinetic energy is converted to other forms of energy. In this instance the initial velocities of the bodies would be non-zero, or the bodies would have to be massless.
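A brief Python sketch of a perfectly inelastic collision (illustrative numbers), showing the momentum balance and the kinetic energy converted to other forms:

```python
m1, u1 = 2.0, 3.0    # kg, m/s
m2, u2 = 1.0, 0.0    # second body initially at rest

v = (m1 * u1 + m2 * u2) / (m1 + m2)   # common final velocity

ke_before = 0.5 * m1 * u1**2 + 0.5 * m2 * u2**2
ke_after  = 0.5 * (m1 + m2) * v**2

print(v)                      # 2.0 m/s
print(ke_before - ke_after)   # 3.0 J converted to heat, sound, deformation
```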
One measure of the inelasticity of the collision is the coefficient of restitution C_R, defined as the ratio of relative velocity of separation to relative velocity of approach. In applying this measure to a ball bouncing from a solid surface, this can be easily measured using the following formula:
C_R = √(bounce height / drop height).
The momentum and energy equations also apply to the motions of objects that begin together and then move apart. For example, an explosion is the result of a chain reaction that transforms potential energy stored in chemical, mechanical, or nuclear form into kinetic energy, acoustic energy, and electromagnetic radiation. Rockets also make use of conservation of momentum: propellant is thrust outward, gaining momentum, and an equal and opposite momentum is imparted to the rocket.
Multiple dimensions
Real motion has both direction and velocity and must be represented by a vector. In a coordinate system with x, y, z axes, velocity has components vₓ in the x-direction, v_y in the y-direction, v_z in the z-direction. The vector is represented by a boldface symbol:
v = (vₓ, v_y, v_z).
Similarly, the momentum is a vector quantity and is represented by a boldface symbol:
p = (pₓ, p_y, p_z).
The equations in the previous sections work in vector form if the scalars p and v are replaced by the vectors p and v. Each vector equation represents three scalar equations. For example,
p = mv
represents three equations:
pₓ = mvₓ, p_y = mv_y, p_z = mv_z.
The kinetic energy equations are exceptions to the above replacement rule. The equations are still one-dimensional, but each scalar represents the magnitude of the vector, for example,
v² = vₓ² + v_y² + v_z².
Each vector equation represents three scalar equations. Often coordinates can be chosen so that only two components are needed, as in the figure. Each component can be obtained separately and the results combined to produce a vector result.
A simple construction involving the center of mass frame can be used to show that if a stationary elastic sphere is struck by a moving sphere, the two will head off at right angles after the collision (as in the figure).
Objects of variable mass
The concept of momentum plays a fundamental role in explaining the behavior of variable-mass objects such as a rocket ejecting fuel or a star accreting gas. In analyzing such an object, one treats the object's mass as a function that varies with time: m(t). The momentum of the object at time t is therefore p(t) = m(t)v(t). One might then try to invoke Newton's second law of motion by saying that the external force F on the object is related to its momentum by F = dp/dt, but this is incorrect, as is the related expression found by applying the product rule to d(mv)/dt:
F = m(t) dv/dt + v(t) dm/dt.
This equation does not correctly describe the motion of variable-mass objects. The correct equation is
F + u dm/dt = m(t) dv/dt,
where u is the velocity of the ejected/accreted mass as seen in the object's rest frame. This is distinct from v, which is the velocity of the object itself as seen in an inertial frame.
This equation is derived by keeping track of both the momentum of the object as well as the momentum of the ejected/accreted mass. When considered together, the object and the mass constitute a closed system in which total momentum is conserved.
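As an illustration (not from the article), a crude Python integration of this variable-mass equation for a rocket coasting in free space, compared with the closed-form Tsiolkovsky result Δv = |u| ln(m₀/m_f); all numbers are made up:

```python
import numpy as np

m0, mf = 100.0, 60.0     # initial and final mass (kg)
u_ex = -2000.0           # exhaust velocity relative to the rocket (m/s), rearward
mdot = -0.5              # mass flow rate dm/dt (kg/s), mass decreases
dt = 1e-3                # integration time step (s)

m, v = m0, 0.0
while m > mf:
    # m dv/dt = F_ext + u dm/dt, with no external force here (F_ext = 0)
    dv = (u_ex * mdot / m) * dt
    v += dv
    m += mdot * dt

print(v)                          # numerically integrated speed gain
print(-u_ex * np.log(m0 / mf))    # Tsiolkovsky: |u| ln(m0/mf) ≈ 1021.7 m/s
```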
Generalized
Newton's laws can be difficult to apply to many kinds of motion because the motion is limited by constraints. For example, a bead on an abacus is constrained to move along its wire and a pendulum bob is constrained to swing at a fixed distance from the pivot. Many such constraints can be incorporated by changing the normal Cartesian coordinates to a set of generalized coordinates that may be fewer in number. Refined mathematical methods have been developed for solving mechanics problems in generalized coordinates. They introduce a generalized momentum, also known as the canonical momentum or conjugate momentum, that extends the concepts of both linear momentum and angular momentum. To distinguish it from generalized momentum, the product of mass and velocity is also referred to as mechanical momentum, kinetic momentum or kinematic momentum. The two main methods are described below.
Lagrangian mechanics
In Lagrangian mechanics, a Lagrangian is defined as the difference between the kinetic energy T and the potential energy V:
L = T − V.
If the generalized coordinates are represented as a vector q = (q₁, q₂, ..., q_N) and time differentiation is represented by a dot over the variable, then the equations of motion (known as the Lagrange or Euler–Lagrange equations) are a set of N equations:
d/dt (∂L/∂q̇ⱼ) − ∂L/∂qⱼ = 0.
If a coordinate qᵢ is not a Cartesian coordinate, the associated generalized momentum component pᵢ does not necessarily have the dimensions of linear momentum. Even if qᵢ is a Cartesian coordinate, pᵢ will not be the same as the mechanical momentum if the potential depends on velocity. Some sources represent the kinematic momentum by the symbol Π.
In this mathematical framework, a generalized momentum is associated with the generalized coordinates. Its components are defined as
pⱼ = ∂L/∂q̇ⱼ.
Each component pⱼ is said to be the conjugate momentum for the coordinate qⱼ.
Now if a given coordinate qᵢ does not appear in the Lagrangian (although its time derivative might appear), then pᵢ is constant. This is the generalization of the conservation of momentum.
Even if the generalized coordinates are just the ordinary spatial coordinates, the conjugate momenta are not necessarily the ordinary momentum coordinates. An example is found in the section on electromagnetism.
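A sketch using SymPy (an assumption about available tooling, not part of the article): for a particle in a central potential written in polar coordinates, the angle θ does not appear in the Lagrangian, so its conjugate momentum, the angular momentum m r² θ̇, is conserved.

```python
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
r, theta = sp.Function('r')(t), sp.Function('theta')(t)
V = sp.Function('V')          # central potential V(r), independent of theta

# Lagrangian in polar coordinates: L = T - V
L = sp.Rational(1, 2) * m * (r.diff(t)**2 + r**2 * theta.diff(t)**2) - V(r)

# Conjugate (generalized) momentum for theta: p_theta = dL/d(theta_dot)
p_theta = sp.diff(L, theta.diff(t))
print(p_theta)                 # m*r(t)**2*Derivative(theta(t), t)

# theta itself does not appear in L ...
print(sp.diff(L, theta))       # 0
# ... so the Euler-Lagrange equation d/dt(p_theta) - dL/dtheta = 0
# reduces to dp_theta/dt = 0: the angular momentum m r^2 theta_dot is conserved.
```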
Hamiltonian mechanics
In Hamiltonian mechanics, the Lagrangian (a function of generalized coordinates and their derivatives) is replaced by a Hamiltonian that is a function of generalized coordinates and momentum. The Hamiltonian is defined as
H(q, p, t) = p · q̇ − L(q, q̇, t),
where the momentum p is obtained by differentiating the Lagrangian as above. The Hamiltonian equations of motion are
dqⱼ/dt = ∂H/∂pⱼ,  dpⱼ/dt = −∂H/∂qⱼ.
As in Lagrangian mechanics, if a generalized coordinate does not appear in the Hamiltonian, its conjugate momentum component is conserved.
Symmetry and conservation
Conservation of momentum is a mathematical consequence of the homogeneity (shift symmetry) of space (position in space is the canonical conjugate quantity to momentum). That is, conservation of momentum is a consequence of the fact that the laws of physics do not depend on position; this is a special case of Noether's theorem. For systems that do not have this symmetry, it may not be possible to define conservation of momentum. Examples where conservation of momentum does not apply include curved spacetimes in general relativity or time crystals in condensed matter physics.
Momentum density
In deformable bodies and fluids
Conservation in a continuum
In fields such as fluid dynamics and solid mechanics, it is not feasible to follow the motion of individual atoms or molecules. Instead, the materials must be approximated by a continuum in which, at each point, there is a particle or fluid parcel that is assigned the average of the properties of atoms in a small region nearby. In particular, it has a density ρ and velocity v that depend on time t and position r. The momentum per unit volume is ρv.
Consider a column of water in hydrostatic equilibrium. All the forces on the water are in balance and the water is motionless. On any given drop of water, two forces are balanced. The first is gravity, which acts directly on each atom and molecule inside. The gravitational force per unit volume is ρg, where g is the gravitational acceleration. The second force is the sum of all the forces exerted on its surface by the surrounding water. The force from below is greater than the force from above by just the amount needed to balance gravity. The normal force per unit area is the pressure p. The average force per unit volume inside the droplet is the gradient of the pressure, so the force balance equation is
−∇p + ρg = 0.
If the forces are not balanced, the droplet accelerates. This acceleration is not simply the partial derivative ∂v/∂t because the fluid in a given volume changes with time. Instead, the material derivative is needed:
D/Dt ≡ ∂/∂t + v · ∇.
Applied to any physical quantity, the material derivative includes the rate of change at a point and the changes due to advection as fluid is carried past the point. Per unit volume, the rate of change in momentum is equal to ρ Dv/Dt. This is equal to the net force on the droplet.
Forces that can change the momentum of a droplet include the gradient of the pressure and gravity, as above. In addition, surface forces can deform the droplet. In the simplest case, a shear stress τ, exerted by a force parallel to the surface of the droplet, is proportional to the rate of deformation or strain rate. Such a shear stress occurs if the fluid has a velocity gradient because the fluid is moving faster on one side than another. If the speed in the x direction varies with z, the tangential force in direction x per unit area normal to the z direction is
τ = μ ∂vₓ/∂z,
where μ is the viscosity. This is also a flux, or flow per unit area, of x-momentum through the surface.
Including the effect of viscosity, the momentum balance equations for the incompressible flow of a Newtonian fluid are
ρ Dv/Dt = −∇p + μ∇²v + ρg.
These are known as the Navier–Stokes equations.
The momentum balance equations can be extended to more general materials, including solids. For each surface with normal in direction i and force in direction j, there is a stress component σᵢⱼ. The nine components make up the Cauchy stress tensor σ, which includes both pressure and shear. The local conservation of momentum is expressed by the Cauchy momentum equation:
ρ Dv/Dt = ∇ · σ + f,
where f is the body force.
The Cauchy momentum equation is broadly applicable to deformations of solids and liquids. The relationship between the stresses and the strain rate depends on the properties of the material (see Types of viscosity).
Acoustic waves
A disturbance in a medium gives rise to oscillations, or waves, that propagate away from their source. In a fluid, small changes in pressure p can often be described by the acoustic wave equation:
∂²p/∂t² = c² ∇²p,
where c is the speed of sound. In a solid, similar equations can be obtained for propagation of pressure (P-waves) and shear (S-waves).
The flux, or transport per unit area, of a momentum component ρvⱼ by a velocity vᵢ is equal to ρvⱼvᵢ. In the linear approximation that leads to the above acoustic equation, the time average of this flux is zero. However, nonlinear effects can give rise to a nonzero average. It is possible for momentum flux to occur even though the wave itself does not have a mean momentum.
In electromagnetics
Particle in a field
In Maxwell's equations, the forces between particles are mediated by electric and magnetic fields. The electromagnetic force (Lorentz force) on a particle with charge q due to a combination of electric field E and magnetic field B is
F = q(E + v × B)
(in SI units).
It has an electric potential φ(r, t) and magnetic vector potential A(r, t).
In the non-relativistic regime, its generalized momentum is
p = mv + qA,
while in relativistic mechanics this becomes
p = γmv + qA.
The quantity qA is sometimes called the potential momentum. It is the momentum due to the interaction of the particle with the electromagnetic fields. The name is an analogy with the potential energy qφ, which is the energy due to the interaction of the particle with the electromagnetic fields. These quantities form a four-vector, so the analogy is consistent; besides, the concept of potential momentum is important in explaining the so-called hidden momentum of the electromagnetic fields.
Conservation
In Newtonian mechanics, the law of conservation of momentum can be derived from the law of action and reaction, which states that every force has a reciprocating equal and opposite force. Under some circumstances, moving charged particles can exert forces on each other in non-opposite directions. Nevertheless, the combined momentum of the particles and the electromagnetic field is conserved.
Vacuum
The Lorentz force imparts a momentum to the particle, so by Newton's second law the particle must impart a momentum to the electromagnetic fields.
In a vacuum, the momentum per unit volume is
g = (1/(μ₀c²)) E × B,
where μ₀ is the vacuum permeability and c is the speed of light. The momentum density is proportional to the Poynting vector S which gives the directional rate of energy transfer per unit area:
g = S/c².
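A quick order-of-magnitude sketch in Python (illustrative): the momentum density carried by sunlight at Earth, using the time-averaged Poynting flux of about 1361 W/m², and the corresponding pressure on a perfectly absorbing surface.

```python
c = 2.998e8          # speed of light (m/s)
S = 1361.0           # solar irradiance at Earth, time-averaged Poynting flux (W/m^2)

g = S / c**2         # electromagnetic momentum density (kg·m/s per m^3)
P_absorb = S / c     # radiation pressure on a perfectly absorbing surface (Pa)

print(g, P_absorb)   # ~1.5e-14 and ~4.5e-6 Pa
```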
If momentum is to be conserved over a volume V, changes in the momentum of matter through the Lorentz force must be balanced by changes in the momentum of the electromagnetic field and outflow of momentum. If P_mech is the momentum of all the particles in V, and the particles are treated as a continuum, then Newton's second law gives
dP_mech/dt = ∫_V (ρE + J × B) dV.
The electromagnetic momentum is
P_EM = (1/(μ₀c²)) ∫_V E × B dV,
and the equation for conservation of each component i of the momentum is
d/dt (P_mech + P_EM)ᵢ = ∮_σ Tᵢⱼ nⱼ dΣ.
The term on the right is an integral over the surface area Σ of the surface σ representing momentum flow into and out of the volume, and nⱼ is a component of the surface normal of σ. The quantity Tᵢⱼ is called the Maxwell stress tensor, defined as
Tᵢⱼ = ε₀(EᵢEⱼ − ½δᵢⱼE²) + (1/μ₀)(BᵢBⱼ − ½δᵢⱼB²).
Media
The above results are for the microscopic Maxwell equations, applicable to electromagnetic forces in a vacuum (or on a very small scale in media). It is more difficult to define momentum density in media because the division into electromagnetic and mechanical is arbitrary. The definition of electromagnetic momentum density is modified to
g = (1/c²) E × H = S/c²,
where the H-field H is related to the B-field and the magnetization M by
B = μ₀(H + M).
The electromagnetic stress tensor depends on the properties of the media.
Non-classical
Quantum mechanical
In quantum mechanics, momentum is defined as a self-adjoint operator on the wave function. The Heisenberg uncertainty principle defines limits on how accurately the momentum and position of a single observable system can be known at once. In quantum mechanics, position and momentum are conjugate variables.
For a single particle described in the position basis the momentum operator can be written as
p̂ = −iℏ∇,
where ∇ is the gradient operator, ℏ is the reduced Planck constant, and i is the imaginary unit. This is a commonly encountered form of the momentum operator, though the momentum operator in other bases can take other forms. For example, in momentum space the momentum operator is represented by the eigenvalue equation
p̂ ψ(p) = p ψ(p),
where the operator p̂ acting on a wave eigenfunction ψ(p) yields that wave function multiplied by the eigenvalue p, in an analogous fashion to the way that the position operator acting on a wave function ψ(x) yields that wave function multiplied by the eigenvalue x.
For both massive and massless objects, relativistic momentum is related to the phase constant β by
p = ℏβ.
Electromagnetic radiation (including visible light, ultraviolet light, and radio waves) is carried by photons. Even though photons (the particle aspect of light) have no mass, they still carry momentum. This leads to applications such as the solar sail. The calculation of the momentum of light within dielectric media is somewhat controversial (see Abraham–Minkowski controversy).
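For instance, a rough Python estimate (not from the article) of the momentum of a single visible-light photon and of the force of sunlight on a perfectly reflecting solar sail; the sail area and wavelength are made-up illustrative values.

```python
h = 6.626e-34        # Planck constant (J·s)
c = 2.998e8          # speed of light (m/s)

# Momentum of one photon of wavelength 550 nm: p = h / wavelength
p_photon = h / 550e-9
print(p_photon)      # ~1.2e-27 kg·m/s

# Force on a 100 m^2 perfectly reflecting sail near Earth,
# using the solar irradiance ~1361 W/m^2: F = 2 S A / c
S, A = 1361.0, 100.0
F = 2 * S * A / c
print(F)             # ~9e-4 N, tiny but acting continuously
```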
Relativistic
Lorentz invariance
Newtonian physics assumes that absolute time and space exist outside of any observer; this gives rise to Galilean invariance. It also results in a prediction that the speed of light can vary from one reference frame to another. This is contrary to what has been observed. In the special theory of relativity, Einstein keeps the postulate that the equations of motion do not depend on the reference frame, but assumes that the speed of light is invariant. As a result, position and time in two reference frames are related by the Lorentz transformation instead of the Galilean transformation.
Consider, for example, one reference frame moving relative to another at velocity v in the x direction. The Galilean transformation gives the coordinates of the moving frame as
t′ = t, x′ = x − vt,
while the Lorentz transformation gives
t′ = γ(t − vx/c²), x′ = γ(x − vt),
where γ is the Lorentz factor:
γ = 1/√(1 − v²/c²).
Newton's second law, with mass fixed, is not invariant under a Lorentz transformation. However, it can be made invariant by making the inertial mass m of an object a function of velocity:
m = γm₀;
m₀ is the object's invariant mass.
The modified momentum,
p = γm₀v,
obeys Newton's second law:
F = dp/dt.
Within the domain of classical mechanics, relativistic momentum closely approximates Newtonian momentum: at low velocity, γm₀v is approximately equal to m₀v, the Newtonian expression for momentum.
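A small numeric comparison in Python (illustrative):

```python
import numpy as np

c = 2.998e8                   # speed of light (m/s)
m0 = 1.0                      # invariant mass (kg)

for v in (30.0, 3.0e7, 0.9 * c, 0.99 * c):
    gamma = 1.0 / np.sqrt(1.0 - (v / c)**2)
    p_newton = m0 * v
    p_rel = gamma * m0 * v
    # At everyday speeds the two agree to many digits;
    # near c the relativistic momentum grows without bound.
    print(v, p_newton, p_rel, p_rel / p_newton)
```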
Four-vector formulation
In the theory of special relativity, physical quantities are expressed in terms of four-vectors that include time as a fourth coordinate along with the three space coordinates. These vectors are generally represented by capital letters, for example R for position. The expression for the four-momentum depends on how the coordinates are expressed. Time may be given in its normal units or multiplied by the speed of light so that all the components of the four-vector have dimensions of length. If the latter scaling is used, an interval of proper time, dτ, defined by
c²dτ² = c²dt² − dx² − dy² − dz²,
is invariant under Lorentz transformations (in this expression and in what follows the (+ − − −) metric signature has been used; different authors use different conventions). Mathematically this invariance can be ensured in one of two ways: by treating the four-vectors as Euclidean vectors and multiplying time by √(−1); or by keeping time a real quantity and embedding the vectors in a Minkowski space. In a Minkowski space, the scalar product of two four-vectors U = (U₀, U₁, U₂, U₃) and V = (V₀, V₁, V₂, V₃) is defined as
U · V = U₀V₀ − U₁V₁ − U₂V₂ − U₃V₃.
In all the coordinate systems, the (contravariant) relativistic four-velocity is defined by
U ≡ dR/dτ = γ(c, v),
and the (contravariant) four-momentum is
P = m₀U,
where m₀ is the invariant mass. If R = (ct, x, y, z) (in Minkowski space), then
P = γm₀(c, v) = (γm₀c, p).
Using Einstein's mass–energy equivalence, E = γm₀c², this can be rewritten as
P = (E/c, p).
Thus, conservation of four-momentum is Lorentz-invariant and implies conservation of both mass and energy.
The magnitude of the momentum four-vector is equal to m₀c:
|P|² = P · P = γ²m₀²(c² − v²) = (m₀c)²,
and is invariant across all reference frames.
The relativistic energy–momentum relationship holds even for massless particles such as photons; by setting m₀ = 0 it follows that
E = pc.
In a game of relativistic "billiards", if a stationary particle is hit by a moving particle in an elastic collision, the paths formed by the two afterwards will form an acute angle. This is unlike the non-relativistic case where they travel at right angles.
The four-momentum of a planar wave can be related to a wave four-vector K = (ω/c, k):
P = (E/c, p) = ℏK.
For a particle, the relationship between temporal components, E = ℏω, is the Planck–Einstein relation, and the relation between spatial components, p = ℏk, describes a de Broglie matter wave.
History of the concept
Impetus
John Philoponus
In about 530 AD, John Philoponus developed a concept of momentum in On Physics, a commentary to Aristotle's Physics. Aristotle claimed that everything that is moving must be kept moving by something. For example, a thrown ball must be kept moving by motions of the air. Philoponus pointed out the absurdity in Aristotle's claim that motion of an object is promoted by the same air that is resisting its passage. He proposed instead that an impetus was imparted to the object in the act of throwing it.
Ibn Sīnā
In 1020, Ibn Sīnā (also known by his Latinized name Avicenna) read Philoponus and published his own theory of motion in The Book of Healing. He agreed that an impetus is imparted to a projectile by the thrower; but unlike Philoponus, who believed that it was a temporary virtue that would decline even in a vacuum, he viewed it as persistent, requiring external forces such as air resistance to dissipate it.
Peter Olivi, Jean Buridan
In the 13th and 14th century, Peter Olivi and Jean Buridan read and refined the work of Philoponus, and possibly that of Ibn Sīnā. Buridan, who in about 1350 was made rector of the University of Paris, referred to impetus being proportional to the weight times the speed. Moreover, Buridan's theory was different from his predecessor's in that he did not consider impetus to be self-dissipating, asserting that a body would be arrested by the forces of air resistance and gravity which might be opposing its impetus.
Quantity of motion
René Descartes
In Principles of Philosophy (Principia Philosophiae) from 1644, the French philosopher René Descartes defined "quantity of motion" (Latin: quantitas motus) as the product of size and speed, and claimed that the total quantity of motion in the universe is conserved.
This should not be read as a statement of the modern law of conservation of momentum, since Descartes had no concept of mass as distinct from weight and size. (The concept of mass, as distinct from weight, was introduced by Newton in 1686.) More important, he believed that it is speed rather than velocity that is conserved. So for Descartes, if a moving object were to bounce off a surface, changing its direction but not its speed, there would be no change in its quantity of motion. Galileo, in his Two New Sciences (published in 1638), used the Italian word impeto to similarly describe Descartes's quantity of motion.
Christiaan Huygens
In the 1600s, Christiaan Huygens concluded quite early that Descartes's laws for the elastic collision of two bodies must be wrong, and he formulated the correct laws. An important step was his recognition of the Galilean invariance of the problems. His views then took many years to be circulated. He passed them on in person to William Brouncker and Christopher Wren in London, in 1661. What Spinoza wrote to Henry Oldenburg about them, in 1666 during the Second Anglo-Dutch War, was guarded. Huygens had actually worked them out in a manuscript in the period 1652–1656. The war ended in 1667, and Huygens announced his results to the Royal Society in 1668. He published them in the Journal des sçavans in 1669.
Momentum
John Wallis
In 1670, John Wallis, in Mechanica sive De Motu, Tractatus Geometricus, stated the law of conservation of momentum: "the initial state of the body, either of rest or of motion, will persist" and "If the force is greater than the resistance, motion will result". Wallis used momentum for quantity of motion, and vis for force.
Gottfried Leibniz
In 1686, Gottfried Wilhelm Leibniz, in Discourse on Metaphysics, gave an argument against Descartes' construction of the conservation of the "quantity of motion" using an example of dropping blocks of different sizes different distances. He points out that force is conserved but quantity of motion, construed as the product of size and speed of an object, is not conserved.
Isaac Newton
In 1687, Isaac Newton, in Philosophiæ Naturalis Principia Mathematica, just like Wallis, showed a similar casting around for words to use for the mathematical momentum. His Definition II defines quantitas motus, "quantity of motion", as "arising from the velocity and quantity of matter conjointly", which identifies it as momentum. Thus when in Law II he refers to mutatio motus, "change of motion", being proportional to the force impressed, he is generally taken to mean momentum and not motion.
John Jennings
In 1721, John Jennings published Miscellanea, where the momentum in its current mathematical sense is attested, five years before the final edition of Newton's Principia Mathematica. Momentum or "quantity of motion" was being defined for students as "a rectangle", the product of Q and V, where Q is "quantity of material" and V is "velocity".
In 1728, the Cyclopedia states:
See also
Angular momentum
Crystal momentum
Galilean cannon
Momentum compaction
Momentum transfer
Newton's cradle
Position and momentum space
References
Bibliography
External links
Conservation of momentum – A chapter from an online textbook
Conservation laws
Mechanical quantities
Moment (physics)
Motion (physics)
Vector physical quantities | 0.795248 | 0.999053 | 0.794495 |
Stress–energy tensor | The stress–energy tensor, sometimes called the stress–energy–momentum tensor or the energy–momentum tensor, is a tensor physical quantity that describes the density and flux of energy and momentum in spacetime, generalizing the stress tensor of Newtonian physics. It is an attribute of matter, radiation, and non-gravitational force fields. This density and flux of energy and momentum are the sources of the gravitational field in the Einstein field equations of general relativity, just as mass density is the source of such a field in Newtonian gravity.
Definition
The stress–energy tensor involves the use of superscripted variables (not exponents; see tensor index notation and Einstein summation notation). If Cartesian coordinates in SI units are used, then the components of the position four-vector x are given by: (x⁰, x¹, x², x³) = (t, x, y, z), where t is time in seconds, and x, y, and z are distances in meters.
The stress–energy tensor is defined as the tensor T^αβ of order two that gives the flux of the α-th component of the momentum vector across a surface with constant x^β coordinate. In the theory of relativity, this momentum vector is taken as the four-momentum. In general relativity, the stress–energy tensor is symmetric,
T^αβ = T^βα.
In some alternative theories like Einstein–Cartan theory, the stress–energy tensor may not be perfectly symmetric because of a nonzero spin tensor, which geometrically corresponds to a nonzero torsion tensor.
Components
Because the stress–energy tensor is of order 2, its components can be displayed in 4 × 4 matrix form:
T^00 T^01 T^02 T^03
T^10 T^11 T^12 T^13
T^20 T^21 T^22 T^23
T^30 T^31 T^32 T^33
where the indices α and β take on the values 0, 1, 2, 3.
In the following, i and k range from 1 through 3: the time–time component T^00 is the density of relativistic mass, i.e., the energy density divided by c²; the components T^0i = T^i0 give the density of the i-th component of linear momentum (equivalently, the flux of relativistic mass across the x^i surface); and the components T^ik give the flux of the i-th component of linear momentum across the x^k surface, with the diagonal entries representing normal stress (pressure) and the off-diagonal entries representing shear stress.
In solid state physics and fluid mechanics, the stress tensor is defined to be the spatial components of the stress–energy tensor in the proper frame of reference. In other words, the stress–energy tensor in engineering differs from the relativistic stress–energy tensor by a momentum-convective term.
Covariant and mixed forms
Most of this article works with the contravariant form, T^μν, of the stress–energy tensor. However, it is often necessary to work with the covariant form,
T_μν = T^αβ g_αμ g_βν,
or the mixed form,
T^μ_ν = T^μα g_αν,
or as a mixed tensor density
𝔗^μ_ν = T^μ_ν √(−g).
This article uses the spacelike sign convention (−+++) for the metric signature.
Conservation law
In special relativity
The stress–energy tensor is the conserved Noether current associated with spacetime translations.
The divergence of the non-gravitational stress–energy is zero. In other words, non-gravitational energy and momentum are conserved,
∇_ν T^μν = T^μν_;ν = 0.
When gravity is negligible and using a Cartesian coordinate system for spacetime, this may be expressed in terms of partial derivatives as
∂_ν T^μν = T^μν_,ν = 0.
The integral form of the non-covariant formulation is
0 = ∮_∂N T^μν d³s_ν,
where N is any compact four-dimensional region of spacetime; ∂N is its boundary, a three-dimensional hypersurface; and d³s_ν is an element of the boundary regarded as the outward pointing normal.
In flat spacetime and using Cartesian coordinates, if one combines this with the symmetry of the stress–energy tensor, one can show that angular momentum is also conserved:
∂_α (x^μ T^να − x^ν T^μα) = 0.
In general relativity
When gravity is non-negligible or when using arbitrary coordinate systems, the divergence of the stress–energy still vanishes. But in this case, a coordinate-free definition of the divergence is used which incorporates the covariant derivative
∇_ν T^μν = ∂_ν T^μν + Γ^μ_σν T^σν + Γ^ν_σν T^μσ = 0,
where Γ^μ_σν is the Christoffel symbol which is the gravitational force field.
Consequently, if ξ^μ is any Killing vector field, then the conservation law associated with the symmetry generated by the Killing vector field may be expressed as
∇_ν (ξ_μ T^μν) = 0.
The integral form of this is
0 = ∫_∂N √(−g) ξ_μ T^μν d³s_ν.
In special relativity
In special relativity, the stress–energy tensor contains information about the energy and momentum densities of a given system, in addition to the momentum and energy flux densities.
Given a Lagrangian density that is a function of a set of fields and their derivatives, but explicitly not of any of the spacetime coordinates, we can construct the canonical stress–energy tensor by looking at the total derivative with respect to one of the generalized coordinates of the system. So, with our condition
By using the chain rule, we then have
Written in useful shorthand,
Then, we can use the Euler–Lagrange equation:
∂_μ (∂𝓛/∂(∂_μφ)) = ∂𝓛/∂φ.
And then use the fact that partial derivatives commute so that we now have
We can recognize the right hand side as a product rule. Writing it as the derivative of a product of functions tells us that
Now, in flat space, one can write . Doing this and moving it to the other side of the equation tells us that
And upon regrouping terms,
This is to say that the divergence of the tensor in the brackets is 0. Indeed, with this, we define the stress–energy tensor:
T^μ_ν ≡ (∂𝓛/∂(∂_μφ)) ∂_νφ − δ^μ_ν 𝓛.
By construction it has the property that
∂_μ T^μ_ν = 0.
Note that this divergenceless property of this tensor is equivalent to four continuity equations. That is, fields have at least four sets of quantities that obey the continuity equation. As an example, it can be seen that T^0_0 is the energy density of the system and that it is thus possible to obtain the Hamiltonian density from the stress–energy tensor.
Indeed, since this is the case, observing that , we then have
We can then conclude that the terms of represent the energy flux density of the system.
Trace
Note that the trace of the stress–energy tensor is defined to be T^μ_μ, so
T^μ_μ = g_μν T^μν.
Since ,
In general relativity
In general relativity, the symmetric stress–energy tensor acts as the source of spacetime curvature, and is the current density associated with gauge transformations of gravity which are general curvilinear coordinate transformations. (If there is torsion, then the tensor is no longer symmetric. This corresponds to the case with a nonzero spin tensor in Einstein–Cartan gravity theory.)
In general relativity, the partial derivatives used in special relativity are replaced by covariant derivatives. What this means is that the continuity equation no longer implies that the non-gravitational energy and momentum expressed by the tensor are absolutely conserved, i.e. the gravitational field can do work on matter and vice versa. In the classical limit of Newtonian gravity, this has a simple interpretation: kinetic energy is being exchanged with gravitational potential energy, which is not included in the tensor, and momentum is being transferred through the field to other bodies. In general relativity the Landau–Lifshitz pseudotensor is a unique way to define the gravitational field energy and momentum densities. Any such stress–energy pseudotensor can be made to vanish locally by a coordinate transformation.
In curved spacetime, the spacelike integral now depends on the spacelike slice, in general. There is in fact no way to define a global energy–momentum vector in a general curved spacetime.
Einstein field equations
In general relativity, the stress–energy tensor is studied in the context of the Einstein field equations which are often written as
R_μν − ½ R g_μν + Λ g_μν = κ T_μν,
where R_μν is the Ricci tensor, R is the Ricci scalar (the tensor contraction of the Ricci tensor), g_μν is the metric tensor, Λ is the cosmological constant (negligible at the scale of a galaxy or smaller), and κ = 8πG/c⁴ is the Einstein gravitational constant.
Stress–energy in special situations
Isolated particle
In special relativity, the stress–energy of a non-interacting particle with rest mass m and trajectory x_p(t) is:
T^αβ(x, t) = (m v^α(t) v^β(t))/√(1 − v²/c²) δ(x − x_p(t)),
where v^α = (1, v(t)) is the velocity vector (which should not be confused with four-velocity, since it is missing a γ),
δ is the Dirac delta function and E(t) = γmc² is the energy of the particle.
Written in the language of classical physics, the stress–energy tensor would be built from the relativistic mass, the momentum, and the dyadic product of momentum and velocity.
Stress–energy of a fluid in equilibrium
For a perfect fluid in thermodynamic equilibrium, the stress–energy tensor takes on a particularly simple form
T^αβ = (ρ + p/c²) u^α u^β + p g^αβ,
where ρ is the mass–energy density (kilograms per cubic meter), p is the hydrostatic pressure (pascals), u^α is the fluid's four-velocity, and g^αβ is the matrix inverse of the metric tensor. Therefore, the trace is given by
T^α_α = 3p − ρc².
The four-velocity satisfies
u^α u_α = −c².
In an inertial frame of reference comoving with the fluid, better known as the fluid's proper frame of reference, the four-velocity is
the matrix inverse of the metric tensor is simply
and the stress–energy tensor is a diagonal matrix
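As a small numerical illustration of this rest-frame form, the following Python sketch (with illustrative density and pressure values, not taken from the article) builds the perfect-fluid tensor in the comoving frame and checks its diagonal entries and trace against the expressions above:

```python
# Minimal sketch (illustrative values): perfect-fluid stress-energy tensor
# T^{ab} = (rho + p/c^2) u^a u^b + p g^{ab} in the comoving frame,
# metric signature (-, +, +, +).
import numpy as np

c = 299_792_458.0          # speed of light, m/s
rho = 1000.0               # mass-energy density, kg/m^3 (illustrative)
p = 101_325.0              # pressure, Pa (illustrative)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])       # Minkowski metric
eta_inv = np.diag([-1.0, 1.0, 1.0, 1.0])   # its matrix inverse (identical here)
u = np.array([c, 0.0, 0.0, 0.0])           # four-velocity in the fluid's rest frame

T = (rho + p / c**2) * np.outer(u, u) + p * eta_inv

# In the comoving frame T should be diag(rho c^2, p, p, p)
assert np.allclose(np.diag(T), [rho * c**2, p, p, p])

# Trace T^a_a = g_{ab} T^{ab} = 3p - rho c^2
trace = np.einsum('ab,ab->', eta, T)
print(trace, 3 * p - rho * c**2)
```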
Electromagnetic stress–energy tensor
The Hilbert stress–energy tensor of a source-free electromagnetic field is
$$T^{\mu\nu} = \frac{1}{\mu_0}\left( F^{\mu\alpha} g_{\alpha\beta} F^{\nu\beta} - \frac{1}{4} g^{\mu\nu} F_{\alpha\beta} F^{\alpha\beta} \right),$$
where $F_{\mu\nu}$ is the electromagnetic field tensor and $\mu_0$ is the vacuum permeability.
Scalar field
The stress–energy tensor for a complex scalar field that satisfies the Klein–Gordon equation is
and when the metric is flat (Minkowski in Cartesian coordinates) its components work out to be:
Variant definitions of stress–energy
There are a number of inequivalent definitions of non-gravitational stress–energy:
Hilbert stress–energy tensor
The Hilbert stress–energy tensor is defined as the functional derivative
$$T_{\mu\nu} = \frac{-2}{\sqrt{-g}}\frac{\delta S_{\mathrm{matter}}}{\delta g^{\mu\nu}} = \frac{-2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\,\mathcal{L}_{\mathrm{matter}}\right)}{\delta g^{\mu\nu}} = -2\frac{\delta \mathcal{L}_{\mathrm{matter}}}{\delta g^{\mu\nu}} + g_{\mu\nu}\,\mathcal{L}_{\mathrm{matter}},$$
where $S_{\mathrm{matter}}$ is the nongravitational part of the action, $\mathcal{L}_{\mathrm{matter}}$ is the nongravitational part of the Lagrangian density, and the Euler–Lagrange equation has been used. This is symmetric and gauge-invariant. See Einstein–Hilbert action for more information.
Canonical stress–energy tensor
Noether's theorem implies that there is a conserved current associated with translations through space and time; for details see the section above on the stress–energy tensor in special relativity. This is called the canonical stress–energy tensor. Generally, this is not symmetric and if we have some gauge theory, it may not be gauge invariant because space-dependent gauge transformations do not commute with spatial translations.
In general relativity, the translations are with respect to the coordinate system and as such, do not transform covariantly. See the section below on the gravitational stress–energy pseudotensor.
Belinfante–Rosenfeld stress–energy tensor
In the presence of spin or other intrinsic angular momentum, the canonical Noether stress–energy tensor fails to be symmetric. The Belinfante–Rosenfeld stress–energy tensor is constructed from the canonical stress–energy tensor and the spin current in such a way as to be symmetric and still conserved. In general relativity, this modified tensor agrees with the Hilbert stress–energy tensor.
Gravitational stress–energy
By the equivalence principle gravitational stress–energy will always vanish locally at any chosen point in some chosen frame, therefore gravitational stress–energy cannot be expressed as a non-zero tensor; instead we have to use a pseudotensor.
In general relativity, there are many possible distinct definitions of the gravitational stress–energy–momentum pseudotensor. These include the Einstein pseudotensor and the Landau–Lifshitz pseudotensor. The Landau–Lifshitz pseudotensor can be reduced to zero at any event in spacetime by choosing an appropriate coordinate system.
See also
Electromagnetic stress–energy tensor
Energy condition
Energy density of electric and magnetic fields
Maxwell stress tensor
Poynting vector
Ricci calculus
Segre classification
Notes and references
External links
Lecture, Stephan Waner
Caltech Tutorial on Relativity — A simple discussion of the relation between the Stress–Energy tensor of General Relativity and the metric
Tensor physical quantities
Density | 0.797248 | 0.996347 | 0.794336 |
Potential energy | In physics, potential energy is the energy held by an object because of its position relative to other objects, stresses within itself, its electric charge, or other factors. The term potential energy was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to the ancient Greek philosopher Aristotle's concept of potentiality.
Common types of potential energy include the gravitational potential energy of an object, the elastic potential energy of a deformed spring, and the electric potential energy of an electric charge in an electric field. The unit for energy in the International System of Units (SI) is the joule (symbol J).
Potential energy is associated with forces that act on a body in a way that the total work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, whose total work is path independent, are called conservative forces. If the force acting on a body varies over space, then one has a force field; such a field is described by vectors at every point in space, which is in-turn called a vector field. A conservative vector field can be simply expressed as the gradient of a certain scalar function, called a scalar potential. The potential energy is related to, and can be obtained from, this potential function.
Overview
There are various types of potential energy, each associated with a particular type of force. For example, the work of an elastic force is called elastic potential energy; work of the gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the strong nuclear force or weak nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of configurations of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their configuration.
Forces derivable from a potential are also called conservative forces. The work done by a conservative force is
$$W = -\Delta U,$$
where $\Delta U$ is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy, while work done by the force field decreases potential energy. Common notations for potential energy are PE, U, V, and Ep.
Potential energy is the energy by virtue of an object's position relative to other objects. Potential energy is often associated with restoring forces such as a spring or the force of gravity. The action of stretching a spring or lifting a mass is performed by an external force that works against the force field of the potential. This work is stored in the force field, which is said to be stored as potential energy. If the external force is removed the force field acts on the body to perform the work as it moves the body back to the initial position, reducing the stretch of the spring or causing a body to fall.
Consider a ball whose mass $m$ is dropped from height $h$. The acceleration of free fall $g$ is approximately constant, so the weight force $mg$ of the ball is constant. The product of force and displacement gives the work done, which is equal to the gravitational potential energy, thus
$$U_g = mgh.$$
The more formal definition is that potential energy is the energy difference between the energy of an object in a given position and its energy at a reference position.
History
From around 1840 scientists sought to define and understand energy and work.
The term "potential energy" was coined by William Rankine a Scottish engineer and physicist in 1853 as part of a specific effort to develop terminology. He chose the term as part of the pair "actual" vs "potential" going back to work by Aristotle. In his 1867 discussion of the same topic Rankine describes potential energy as ‘energy of configuration’ in contrast to actual energy as 'energy of activity'. Also in 1867, William Thomson introduced "kinetic energy" as the opposite of "potential energy", asserting that all actual energy took the form of mv2. Once this hypothesis became widely accepted, the term "actual energy" gradually faded.
Work and potential energy
Potential energy is closely linked with forces. If the work done by a force on a body that moves from A to B does not depend on the path between these points (if the work is done by a conservative force), then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field.
If the work for an applied force is independent of the path, then the work done by the force is evaluated from the start to the end of the trajectory of the point of application. This means that there is a function U(x), called a "potential", that can be evaluated at the two points xA and xB to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is
$$W = \int_{C} \mathbf{F} \cdot \mathrm{d}\mathbf{x} = U(\mathbf{x}_A) - U(\mathbf{x}_B),$$
where C is the trajectory taken from A to B. Because the work done is independent of the path taken, this expression is true for any trajectory, C, from A to B.
The function U(x) is called the potential energy associated with the applied force. Examples of forces that have potential energies are gravity and spring forces.
Derivable from a potential
In this section the relationship between work and potential energy is presented in more detail. The line integral that defines work along curve C takes a special form if the force F is related to a scalar field U′(x) so that
$$\mathbf{F} = \nabla U'(\mathbf{x}).$$
This means that the units of U′ must be joules. In this case, work along the curve is given by
$$W = \int_C \mathbf{F} \cdot \mathrm{d}\mathbf{x} = \int_C \nabla U' \cdot \mathrm{d}\mathbf{x},$$
which can be evaluated using the gradient theorem to obtain
$$W = U'(\mathbf{x}_B) - U'(\mathbf{x}_A).$$
This shows that when forces are derivable from a scalar field, the work of those forces along a curve C is computed by evaluating the scalar field at the start point A and the end point B of the curve. This means the work integral does not depend on the path between A and B and is said to be independent of the path.
Potential energy is traditionally defined as the negative of this scalar field, $U(\mathbf{x}) = -U'(\mathbf{x})$, so that work by the force field decreases potential energy, that is
$$W = U(\mathbf{x}_A) - U(\mathbf{x}_B).$$
In this case, the application of the del operator to the work function yields
$$\nabla W = -\nabla U = \mathbf{F},$$
and the force F is said to be "derivable from a potential". This also necessarily implies that F must be a conservative vector field. The potential U defines a force F at every point x in space, so the set of forces is called a force field.
Computing potential energy
Given a force field F(x), evaluation of the work integral using the gradient theorem can be used to find the scalar function associated with potential energy. This is done by introducing a parameterized curve from to , and computing,
For the force field F, let , then the gradient theorem yields,
The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity v of the point of application, that is
Examples of work that can be computed from potential functions are gravity and spring forces.
Potential energy for near-Earth gravity
For small height changes, gravitational potential energy can be computed using
$$U = mgh,$$
where m is the mass in kilograms, g is the local gravitational field (approximately 9.8 metres per second squared on Earth), h is the height above a reference level in metres, and U is the energy in joules.
In classical physics, gravity exerts a constant downward force on the center of mass of a body moving near the surface of the Earth. The work of gravity on a body moving along a trajectory , such as the track of a roller coaster is calculated using its velocity, , to obtain
where the integral of the vertical component of velocity is the vertical distance. The work of gravity depends only on the vertical movement of the curve .
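The path-independence claim above is easy to check numerically. The following short Python sketch (the mass and the two paths are chosen purely for illustration, they are not from the article) integrates the gravity force along a straight path and along a wiggly path with the same endpoints and finds the same work:

```python
# Minimal sketch (illustrative values): the work of gravity along two different
# paths between the same endpoints equals -m*g*(z_B - z_A) in both cases.
import numpy as np

m, g = 0.145, 9.81                     # assumed mass (kg) and gravity (m/s^2)
A = np.array([0.0, 0.0, 0.0])
B = np.array([10.0, 0.0, 5.0])

def work_of_gravity(path_points):
    """Sum F . dr along a polyline, with F = (0, 0, -m*g)."""
    F = np.array([0.0, 0.0, -m * g])
    segments = np.diff(path_points, axis=0)
    return float(np.sum(segments @ F))

t = np.linspace(0.0, 1.0, 2001)[:, None]
straight = A + t * (B - A)                       # straight line from A to B
wiggle_z = 0.8 * np.sin(6 * np.pi * t[:, 0])     # vertical wiggle, zero at both ends
wiggly = straight + np.column_stack([np.zeros_like(wiggle_z),
                                     np.zeros_like(wiggle_z),
                                     wiggle_z])

print(work_of_gravity(straight), work_of_gravity(wiggly), -m * g * (B[2] - A[2]))
```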
Potential energy for a linear spring
A horizontal spring exerts a force that is proportional to its deformation in the axial or x direction. The work of this spring on a body moving along the space curve , is calculated using its velocity, , to obtain
For convenience, consider that contact with the spring occurs at $x = 0$; then the integral of the product of the distance $x$ and the $x$-velocity, $x v_x$, is $x^2/2$.
The function
$$U(x) = \tfrac{1}{2} k x^2,$$
where $k$ is the spring constant, is called the potential energy of a linear spring.
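To see the spring result numerically, the sketch below (using an assumed spring constant and an arbitrary trajectory, neither taken from the text) integrates the spring power along the motion and compares the work with the change in $\tfrac{1}{2}kx^2$:

```python
# Minimal sketch (assumed constants): integrate the spring power -k*x*v_x over a
# trajectory x(t) and compare with the change in U(x) = k*x^2/2.
import numpy as np

k = 50.0                                    # spring constant, N/m (assumed)
t = np.linspace(0.0, 0.7, 5001)
x = 0.1 * np.sin(3.0 * t) + 0.02 * t**2     # an arbitrary trajectory x(t)
vx = np.gradient(x, t)

work_by_spring = np.trapz(-k * x * vx, t)   # W = -integral of k*x*v_x dt
delta_U = 0.5 * k * x[-1]**2 - 0.5 * k * x[0]**2

print(work_by_spring, -delta_U)             # W = -dU, independent of path details
```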
Elastic potential energy is the potential energy of an elastic object (for example a bow or a catapult) that is deformed under tension or compression (or stressed in formal terminology). It arises as a consequence of a force that tries to restore the object to its original shape, which is most often the electromagnetic force between the atoms and molecules that constitute the object. If the stretch is released, the energy is transformed into kinetic energy.
Potential energy for gravitational forces between two bodies
The gravitational potential function, also known as gravitational potential energy, is:
$$U = -\frac{GMm}{r}.$$
The negative sign follows the convention that work is gained from a loss of potential energy.
Derivation
The gravitational force between two bodies of mass M and m separated by a distance r is given by Newton's law of universal gravitation
$$\mathbf{F} = -\frac{GMm}{r^2}\,\hat{\mathbf{r}},$$
where $\hat{\mathbf{r}}$ is a vector of length 1 pointing from M to m and G is the gravitational constant.
Let the mass m move at the velocity then the work of gravity on this mass as it moves from position to is given by
The position and velocity of the mass m are given by
where er and et are the radial and tangential unit vectors directed relative to the vector from M to m. Use this to simplify the formula for work of gravity to,
This calculation uses the fact that
Potential energy for electrostatic forces between two bodies
The electrostatic force exerted by a charge Q on another charge q separated by a distance r is given by Coulomb's law
$$\mathbf{F} = \frac{1}{4\pi\varepsilon_0}\frac{Qq}{r^2}\,\hat{\mathbf{r}},$$
where $\hat{\mathbf{r}}$ is a vector of length 1 pointing from Q to q and ε0 is the vacuum permittivity.
The work W required to move q from A to any point B in the electrostatic force field is given by the potential function
$$U(r) = \frac{Qq}{4\pi\varepsilon_0 r}.$$
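A short numerical sketch of this potential function follows; the charges and separations are assumed values chosen only for illustration:

```python
# Minimal sketch (assumed charges and distances): electrostatic potential energy
# U(r) = Q*q / (4*pi*eps0*r), with U -> 0 as r -> infinity.
import math

eps0 = 8.8541878128e-12      # vacuum permittivity, F/m

def electrostatic_potential_energy(Q, q, r):
    return Q * q / (4.0 * math.pi * eps0 * r)

Q, q = 1e-6, 2e-6            # coulombs (assumed)
r_A, r_B = 0.10, 0.05        # metres (assumed)

U_A = electrostatic_potential_energy(Q, q, r_A)
U_B = electrostatic_potential_energy(Q, q, r_B)
# U_A - U_B is the work done by the Coulomb force when q moves from A to B
print(U_A, U_B, U_A - U_B)
```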
Reference level
The potential energy is a function of the state a system is in, and is defined relative to that for a particular state. This reference state is not always a real state; it may also be a limit, such as with the distances between all bodies tending to infinity, provided that the energy involved in tending to that limit is finite, such as in the case of inverse-square law forces. Any arbitrary reference state could be used; therefore it can be chosen based on convenience.
Typically the potential energy of a system depends on the relative positions of its components only, so the reference state can also be expressed in terms of relative positions.
Gravitational potential energy
Gravitational energy is the potential energy associated with gravitational force, as work is required to elevate objects against Earth's gravity. The potential energy due to elevated positions is called gravitational potential energy, and is evidenced by water in an elevated reservoir or kept behind a dam. If an object falls from one point to another point inside a gravitational field, the force of gravity will do positive work on the object, and the gravitational potential energy will decrease by the same amount.
Consider a book placed on top of a table. As the book is raised from the floor to the table, some external force works against the gravitational force. If the book falls back to the floor, the "falling" energy the book receives is provided by the gravitational force. Thus, if the book falls off the table, this potential energy goes to accelerate the mass of the book and is converted into kinetic energy. When the book hits the floor this kinetic energy is converted into heat, deformation, and sound by the impact.
The factors that affect an object's gravitational potential energy are its height relative to some reference point, its mass, and the strength of the gravitational field it is in. Thus, a book lying on a table has less gravitational potential energy than the same book on top of a taller cupboard and less gravitational potential energy than a heavier book lying on the same table. An object at a certain height above the Moon's surface has less gravitational potential energy than at the same height above the Earth's surface because the Moon's gravity is weaker. "Height" in the common sense of the term cannot be used for gravitational potential energy calculations when gravity is not assumed to be a constant. The following sections provide more detail.
Local approximation
The strength of a gravitational field varies with location. However, when the change of distance is small in relation to the distances from the center of the source of the gravitational field, this variation in field strength is negligible and we can assume that the force of gravity on a particular object is constant. Near the surface of the Earth, for example, we assume that the acceleration due to gravity is a constant (standard gravity). In this case, a simple expression for gravitational potential energy can be derived using the equation for work, $W = Fd$.
The amount of gravitational potential energy held by an elevated object is equal to the work done against gravity in lifting it. The work done equals the force required to move it upward multiplied by the vertical distance it is moved (remember $W = Fd$). The upward force required while moving at a constant velocity is equal to the weight, $mg$, of an object, so the work done in lifting it through a height $h$ is the product $mgh$. Thus, when accounting only for mass, gravity, and altitude, the equation is:
$$U = mgh,$$
where $U$ is the potential energy of the object relative to its being on the Earth's surface, $m$ is the mass of the object, $g$ is the acceleration due to gravity, and $h$ is the altitude of the object.
Hence, the potential difference is
$$\Delta U = mg\,\Delta h.$$
General formula
However, over large variations in distance, the approximation that $g$ is constant is no longer valid, and we have to use calculus and the general mathematical definition of work to determine gravitational potential energy. For the computation of the potential energy, we can integrate the gravitational force, whose magnitude is given by Newton's law of gravitation, with respect to the distance between the two bodies. Using that definition, the gravitational potential energy of a system of masses $M$ and $m$ at a distance $r$ using the Newtonian constant of gravitation $G$ is
$$U = -\frac{GMm}{r} + K,$$
where $K$ is an arbitrary constant dependent on the choice of datum from which potential is measured. Choosing the convention that $K = 0$ (i.e. in relation to a point at infinity) makes calculations simpler, albeit at the cost of making $U$ negative; for why this is physically reasonable, see below.
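As a quick numerical illustration of this convention (the mass and radii below are rounded, textbook-style figures used only for the example), the following sketch evaluates $U = -GMm/r$ at two separations and shows that raising the mass makes $U$ less negative:

```python
# Minimal sketch (rounded illustrative figures): gravitational potential energy
# with U -> 0 at infinite separation, U(r) = -G*M*m/r.
G = 6.674e-11                # Newtonian constant of gravitation, m^3 kg^-1 s^-2
M_earth = 5.972e24           # kg (approximate)
R_earth = 6.371e6            # m (approximate)

def gravitational_potential_energy(M, m, r):
    return -G * M * m / r

m = 1.0                      # 1 kg test mass
U_surface = gravitational_potential_energy(M_earth, m, R_earth)
U_far = gravitational_potential_energy(M_earth, m, 2 * R_earth)
print(U_surface, U_far, U_far - U_surface)   # raising the mass increases U (less negative)
```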
Given this formula for , the total potential energy of a system of bodies is found by summing, for all pairs of two bodies, the potential energy of the system of those two bodies.
Considering the system of bodies as the combined set of small particles the bodies consist of, and applying the previous on the particle level we get the negative gravitational binding energy. This potential energy is more strongly negative than the total potential energy of the system of bodies as such since it also includes the negative gravitational binding energy of each body. The potential energy of the system of bodies as such is the negative of the energy needed to separate the bodies from each other to infinity, while the gravitational binding energy is the energy needed to separate all particles from each other to infinity.
therefore,
Negative gravitational energy
As with all potential energies, only differences in gravitational potential energy matter for most physical purposes, and the choice of zero point is arbitrary. Given that there is no reasonable criterion for preferring one particular finite $r$ over another, there seem to be only two reasonable choices for the distance at which $U$ becomes zero: $r = 0$ and $r = \infty$. The choice of $U = 0$ at infinity may seem peculiar, and the consequence that gravitational energy is always negative may seem counterintuitive, but this choice allows gravitational potential energy values to be finite, albeit negative.
The singularity at $r = 0$ in the formula for gravitational potential energy means that the only other apparently reasonable alternative choice of convention, with $U = 0$ for $r = 0$, would result in potential energy being positive, but infinitely large for all nonzero values of $r$, and would make calculations involving sums or differences of potential energies beyond what is possible with the real number system. Since physicists abhor infinities in their calculations, and $r$ is always non-zero in practice, the choice of $U = 0$ at infinity is by far the more preferable choice, even if the idea of negative energy in a gravity well appears to be peculiar at first.
The negative value for gravitational energy also has deeper implications that make it seem more reasonable in cosmological calculations where the total energy of the universe can meaningfully be considered; see inflation theory for more on this.
Uses
Gravitational potential energy has a number of practical uses, notably the generation of pumped-storage hydroelectricity. For example, in Dinorwig, Wales, there are two lakes, one at a higher elevation than the other. At times when surplus electricity is not required (and so is comparatively cheap), water is pumped up to the higher lake, thus converting the electrical energy (running the pump) to gravitational potential energy. At times of peak demand for electricity, the water flows back down through electrical generator turbines, converting the potential energy into kinetic energy and then back into electricity. The process is not completely efficient and some of the original energy from the surplus electricity is in fact lost to friction.
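A back-of-the-envelope sketch of the energy balance in such a scheme is shown below. The reservoir volume, head, and efficiency are assumed round numbers for illustration only; they are not Dinorwig's actual figures:

```python
# Illustrative sketch (assumed figures, not real plant data): energy stored by
# pumping water to a higher reservoir, E = m*g*h, and the electrical energy
# recovered at an assumed round-trip efficiency.
rho_water = 1000.0           # kg/m^3
g = 9.81                     # m/s^2
volume = 6.0e6               # m^3 of water pumped uphill (assumed)
head = 500.0                 # height difference between the lakes, m (assumed)
efficiency = 0.75            # assumed round-trip efficiency

stored_J = rho_water * volume * g * head
recovered_J = efficiency * stored_J
print(stored_J / 3.6e12, "GWh stored;", recovered_J / 3.6e12, "GWh recoverable")
```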
Gravitational potential energy is also used to power clocks in which falling weights operate the mechanism. It is also used by counterweights for lifting up an elevator, crane, or sash window.
Roller coasters are an entertaining way to utilize potential energy – chains are used to move a car up an incline (building up gravitational potential energy), to then have that energy converted into kinetic energy as it falls.
Another practical use is utilizing gravitational potential energy to descend (perhaps coast) downhill in transportation such as the descent of an automobile, truck, railroad train, bicycle, airplane, or fluid in a pipeline. In some cases the kinetic energy obtained from the potential energy of descent may be used to start ascending the next grade such as what happens when a road is undulating and has frequent dips. The commercialization of stored energy (in the form of rail cars raised to higher elevations) that is then converted to electrical energy when needed by an electrical grid, is being undertaken in the United States in a system called Advanced Rail Energy Storage (ARES).
Chemical potential energy
Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or otherwise. Chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. As an example, when a fuel is burned the chemical energy is converted to heat, same is the case with digestion of food metabolized in a biological organism. Green plants transform solar energy to chemical energy through the process known as photosynthesis, and electrical energy can be converted to chemical energy through electrochemical reactions.
The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc.
Electric potential energy
An object can have potential energy by virtue of its electric charge and several forces related to its presence. There are two main types of this kind of potential energy: electrostatic potential energy and electrodynamic potential energy (also sometimes called magnetic potential energy).
Electrostatic potential energy
Electrostatic potential energy between two bodies in space is obtained from the force exerted by a charge Q on another charge q, which is given by
$$\mathbf{F} = \frac{1}{4\pi\varepsilon_0}\frac{Qq}{r^2}\,\hat{\mathbf{r}},$$
where $\hat{\mathbf{r}}$ is a vector of length 1 pointing from Q to q and ε0 is the vacuum permittivity.
If the electric charge of an object can be assumed to be at rest, then it has potential energy due to its position relative to other charged objects. The electrostatic potential energy is the energy of an electrically charged particle (at rest) in an electric field. It is defined as the work that must be done to move it from an infinite distance away to its present location, adjusted for non-electrical forces on the object. This energy will generally be non-zero if there is another electrically charged object nearby.
The work W required to move q from A to any point B in the electrostatic force field is given by
typically given in joules (J). A related quantity called electric potential (commonly denoted with a V for voltage) is equal to the electric potential energy per unit charge.
Magnetic potential energy
The potential energy of a magnetic moment $\mathbf{m}$ in an externally produced magnetic B-field is
$$U = -\mathbf{m} \cdot \mathbf{B}.$$
The potential energy of a magnetization $\mathbf{M}$ in a field is
$$U = -\frac{1}{2}\int \mathbf{M} \cdot \mathbf{B}\, \mathrm{d}V,$$
where the integral can be over all space or, equivalently, where $\mathbf{M}$ is nonzero.
Magnetic potential energy is the form of energy related not only to the distance between magnetic materials, but also to the orientation, or alignment, of those materials within the field. For example, the needle of a compass has the lowest magnetic potential energy when it is aligned with the north and south poles of the Earth's magnetic field. If the needle is moved by an outside force, torque is exerted on the magnetic dipole of the needle by the Earth's magnetic field, causing it to move back into alignment. The magnetic potential energy of the needle is highest when its field is in the same direction as the Earth's magnetic field. Two magnets will have potential energy in relation to each other and the distance between them, but this also depends on their orientation. If the opposite poles are held apart, the potential energy will be higher the further they are apart and lower the closer they are. Conversely, like poles will have the highest potential energy when forced together, and the lowest when they spring apart.
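The compass-needle behaviour described above follows directly from $U = -\mathbf{m}\cdot\mathbf{B}$. The sketch below (with an assumed moment and a roughly Earth-like field magnitude, chosen only for illustration) tabulates the energy as a function of the angle between the moment and the field:

```python
# Minimal sketch (assumed values): U = -m.B for a magnetic dipole as a function of
# the angle between the moment and the field; lowest when aligned, highest when
# anti-aligned, as for the compass needle described above.
import numpy as np

m = 2.0                      # magnetic moment magnitude, A m^2 (assumed)
B = 50e-6                    # field magnitude, T (roughly Earth-like, illustrative)

theta = np.linspace(0.0, np.pi, 7)     # angle between m and B
U = -m * B * np.cos(theta)             # U = -m.B

for th, u in zip(np.degrees(theta), U):
    print(f"{th:6.1f} deg  U = {u: .2e} J")
```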
Nuclear potential energy
Nuclear potential energy is the potential energy of the particles inside an atomic nucleus. The nuclear particles are bound together by the strong nuclear force. Weak nuclear forces provide the potential energy for certain kinds of radioactive decay, such as beta decay.
Nuclear particles like protons and neutrons are not destroyed in fission and fusion processes, but collections of them can have less mass than if they were individually free, in which case this mass difference can be liberated as heat and radiation in nuclear reactions (the heat and radiation have the missing mass, but it often escapes from the system, where it is not measured). The energy from the Sun is an example of this form of energy conversion. In the Sun, the process of hydrogen fusion converts about 4 million tonnes of solar matter per second into electromagnetic energy, which is radiated into space.
Forces and potential energy
Potential energy is closely linked with forces. If the work done by a force on a body that moves from A to B does not depend on the path between these points, then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field.
For example, gravity is a conservative force. The associated potential is the gravitational potential, often denoted by $\phi$ or $V$, corresponding to the energy per unit mass as a function of position. The gravitational potential energy of two particles of mass M and m separated by a distance r is
$$U = -\frac{GMm}{r}.$$
The gravitational potential (specific energy) of the two bodies is
where is the reduced mass.
The work done against gravity by moving an infinitesimal mass from point A with to point B with is and the work done going back the other way is so that the total work done in moving from A to B and returning to A is
If the potential is redefined at A to be and the potential at B to be , where is a constant (i.e. can be any number, positive or negative, but it must be the same at A as it is at B) then the work done going from A to B is
as before.
In practical terms, this means that one can set the zero of and anywhere one likes. One may set it to be zero at the surface of the Earth, or may find it more convenient to set zero at infinity (as in the expressions given earlier in this section).
A conservative force can be expressed in the language of differential geometry as a closed form. As Euclidean space is contractible, its de Rham cohomology vanishes, so every closed form is also an exact form, and can be expressed as the gradient of a scalar field. This gives a mathematical justification of the fact that all conservative forces are gradients of a potential field.
Notes
References
External links
What is potential energy?
Energy (physics)
Forms of energy
Mechanical quantities | 0.795275 | 0.998369 | 0.793978 |
Fluid dynamics | In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids — liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space and modelling fission weapon detonation.
Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time.
Before the twentieth century, "hydrodynamics" was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases.
Equations
The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum, and conservation of energy (also known as the First Law of Thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds transport theorem.
In addition to the above, fluids are assumed to obey the continuum assumption. At small scale, all fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption assumes that fluids are continuous, rather than discrete. Consequently, it is assumed that properties such as density, pressure, temperature, and flow velocity are well-defined at infinitesimally small points in space and vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored.
For fluids that are sufficiently dense to be a continuum, do not contain ionized species, and have flow velocities that are small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier–Stokes equations, a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on flow velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in several ways, all of which make them easier to solve. Some of the simplifications allow some simple fluid dynamics problems to be solved in closed form.
In addition to the mass, momentum, and energy conservation equations, a thermodynamic equation of state that gives the pressure as a function of other thermodynamic variables is required to completely describe the problem. An example of this would be the perfect gas equation of state:
$$p = \frac{\rho R_u T}{M},$$
where $p$ is pressure, $\rho$ is density, and $T$ is the absolute temperature, while $R_u$ is the gas constant and $M$ is the molar mass for a particular gas. A constitutive relation may also be useful.
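A minimal sketch of this equation of state, rearranged for density under assumed sea-level-like conditions (illustrative inputs, not from the article), is:

```python
# Minimal sketch of the perfect gas equation of state quoted above,
# p = rho * R_u * T / M, rearranged for the density of air.
R_u = 8.314462618            # universal gas constant, J mol^-1 K^-1
M_air = 0.028964             # molar mass of dry air, kg/mol (approximate)

def density(p, T, M=M_air):
    """rho = p * M / (R_u * T) for a perfect gas."""
    return p * M / (R_u * T)

print(density(p=101_325.0, T=288.15))   # roughly 1.22 kg/m^3 at assumed conditions
```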
Conservation laws
Three conservation laws are used to solve fluid dynamics problems, and may be written in integral or differential form. The conservation laws may be applied to a region of the flow called a control volume. A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Differential formulations of the conservation laws apply Stokes' theorem to yield an expression that may be interpreted as the integral form of the law applied to an infinitesimally small volume (at a point) within the flow.
Classifications
Compressible versus incompressible flow
All fluids are compressible to an extent; that is, changes in pressure or temperature cause changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modelled as an incompressible flow. Otherwise the more general compressible flow equations must be used.
Mathematically, incompressibility is expressed by saying that the density $\rho$ of a fluid parcel does not change as it moves in the flow field, that is,
$$\frac{\mathrm{D}\rho}{\mathrm{D}t} = 0,$$
where $\frac{\mathrm{D}}{\mathrm{D}t}$ is the material derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density.
For flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow is evaluated. As a rough guide, compressible effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate.
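The Mach-number rule of thumb can be sketched in a few lines. The flow speeds, gas properties, and temperature below are assumed values used only for illustration:

```python
# Rough sketch (assumed flow and gas values): compute the Mach number and flag
# whether compressibility can reasonably be neglected (M < ~0.3 rule of thumb).
import math

def mach_number(speed, temperature_K, gamma=1.4, R_specific=287.05):
    a = math.sqrt(gamma * R_specific * temperature_K)   # speed of sound, perfect gas
    return speed / a

for speed in (50.0, 100.0, 250.0):                      # m/s, illustrative
    M = mach_number(speed, temperature_K=288.15)
    regime = "incompressible model acceptable" if M < 0.3 else "treat as compressible"
    print(f"U = {speed:6.1f} m/s  ->  M = {M:.2f}  ({regime})")
```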
Newtonian versus non-Newtonian fluids
All fluids, except superfluids, are viscous, meaning that they exert some resistance to deformation: neighbouring parcels of fluid moving at different velocities exert viscous forces on each other. The velocity gradient is referred to as a strain rate; it has dimensions of inverse time, $T^{-1}$. Isaac Newton showed that for many familiar fluids such as water and air, the stress due to these viscous forces is linearly related to the strain rate. Such fluids are called Newtonian fluids. The coefficient of proportionality is called the fluid's viscosity; for Newtonian fluids, it is a fluid property that is independent of the strain rate.
Non-Newtonian fluids have a more complicated, non-linear stress-strain behaviour. The sub-discipline of rheology describes the stress-strain behaviours of such fluids, which include emulsions and slurries, some viscoelastic materials such as blood and some polymers, and sticky liquids such as latex, honey and lubricants.
Inviscid versus viscous versus Stokes flow
The dynamics of fluid parcels is described with the help of Newton's second law. An accelerating parcel of fluid is subject to inertial effects.
The Reynolds number is a dimensionless quantity which characterises the magnitude of inertial effects compared to the magnitude of viscous effects. A low Reynolds number indicates that viscous forces are very strong compared to inertial forces. In such cases, inertial forces are sometimes neglected; this flow regime is called Stokes or creeping flow.
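The Reynolds number itself is simple to evaluate. The sketch below (the fluid properties, speeds, and length scales are illustrative assumptions, not values from the article) uses $Re = \rho U L / \mu$ to contrast a creeping-flow case with two high-Reynolds-number cases:

```python
# Minimal sketch (illustrative inputs): Reynolds number Re = rho * U * L / mu,
# used to judge whether inertial or viscous effects dominate.
def reynolds_number(rho, velocity, length, mu):
    return rho * velocity * length / mu

cases = {
    "bacterium in water": reynolds_number(rho=1000.0, velocity=1e-5, length=2e-6, mu=1e-3),
    "water in a pipe":    reynolds_number(rho=1000.0, velocity=1.0,  length=0.05, mu=1e-3),
    "air over a car":     reynolds_number(rho=1.2,    velocity=30.0, length=4.0,  mu=1.8e-5),
}
for name, Re in cases.items():
    print(f"{name:20s} Re = {Re:.3g}")
```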
In contrast, high Reynolds numbers indicate that the inertial effects have more effect on the velocity field than the viscous (friction) effects. In high Reynolds number flows, the flow is often modeled as an inviscid flow, an approximation in which viscosity is completely neglected. Eliminating viscosity allows the Navier–Stokes equations to be simplified into the Euler equations. The integration of the Euler equations along a streamline in an inviscid flow yields Bernoulli's equation. When, in addition to being inviscid, the flow is irrotational everywhere, Bernoulli's equation can completely describe the flow everywhere. Such flows are called potential flows, because the velocity field may be expressed as the gradient of a potential energy expression.
This idea can work fairly well when the Reynolds number is high. However, problems such as those involving solid boundaries may require that the viscosity be included. Viscosity cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate, the boundary layer, in which viscosity effects dominate and which thus generates vorticity. Therefore, to calculate net forces on bodies (such as wings), viscous flow equations must be used: inviscid flow theory fails to predict drag forces, a limitation known as d'Alembert's paradox.
A commonly used model, especially in computational fluid dynamics, is to use two flow models: the Euler equations away from the body, and boundary layer equations in a region close to the body. The two solutions can then be matched with each other, using the method of matched asymptotic expansions.
Steady versus unsteady flow
A flow that is not a function of time is called steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Time-dependent flow is known as unsteady (also called transient). Whether a particular flow is steady or unsteady can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady.
Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. The random velocity field is statistically stationary if all statistics are invariant under a shift in time. This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow.
Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension fewer (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field.
Laminar versus turbulent flow
Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. The presence of eddies or recirculation alone does not necessarily indicate turbulent flow—these phenomena may be present in laminar flow as well. Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component and a perturbation component.
It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations. Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS have been found to agree well with experimental data for some flows.
Most flows of interest have Reynolds numbers much too high for DNS to be a viable option, given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human (length greater than about 3 m), moving faster than about 20 m/s, is well beyond the limit of DNS simulation (Re = 4 million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord dimension). Solving these real-life flow problems requires turbulence models for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modelling provides a model of the effects of the turbulent flow. Such modelling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the form of detached eddy simulation (DES), a combination of LES and RANS turbulence modelling.
Other approximations
There are a large number of other possible approximations to fluid dynamic problems. Some of the more commonly used are listed below.
The Boussinesq approximation neglects variations in density except to calculate buoyancy forces. It is often used in free convection problems where density changes are small.
Lubrication theory and Hele–Shaw flow exploits the large aspect ratio of the domain to show that certain terms in the equations are small and so can be neglected.
Slender-body theory is a methodology used in Stokes flow problems to estimate the force on, or flow field around, a long slender object in a viscous fluid.
The shallow-water equations can be used to describe a layer of relatively inviscid fluid with a free surface, in which surface gradients are small.
Darcy's law is used for flow in porous media, and works with variables averaged over several pore-widths.
In rotating systems, the quasi-geostrophic equations assume an almost perfect balance between pressure gradients and the Coriolis force. It is useful in the study of atmospheric dynamics.
Multidisciplinary types
Flows according to Mach regimes
While many flows (such as flow of water through a pipe) occur at low Mach numbers (subsonic flows), many flows of practical interest in aerodynamics or in turbomachines occur at high fractions of the speed of sound (transonic flows) or in excess of it (supersonic or even hypersonic flows). New phenomena occur at these regimes, such as instabilities in transonic flow, shock waves for supersonic flow, or non-equilibrium chemical behaviour due to ionization in hypersonic flows. In practice, each of those flow regimes is treated separately.
Reactive versus non-reactive flows
Reactive flows are flows that are chemically reactive, which find applications in many areas, including combustion (IC engines), propulsion devices (rockets, jet engines, and so on), detonations, fire and safety hazards, and astrophysics. In addition to conservation of mass, momentum and energy, conservation of individual species (for example, mass fraction of methane in methane combustion) needs to be derived, where the production/depletion rate of any species is obtained by simultaneously solving the equations of chemical kinetics.
Magnetohydrodynamics
Magnetohydrodynamics is the multidisciplinary study of the flow of electrically conducting fluids in electromagnetic fields. Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism.
Relativistic fluid dynamics
Relativistic fluid dynamics studies the macroscopic and microscopic fluid motion at large velocities comparable to the velocity of light. This branch of fluid dynamics accounts for the relativistic effects both from the special theory of relativity and the general theory of relativity. The governing equations are derived in Riemannian geometry for Minkowski spacetime.
Fluctuating hydrodynamics
This branch of fluid dynamics augments the standard hydrodynamic equations with stochastic fluxes that model thermal fluctuations. As formulated by Landau and Lifshitz, a white-noise contribution obtained from the fluctuation-dissipation theorem of statistical mechanics is added to the viscous stress tensor and heat flux.
Terminology
The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods.
Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics.
Characteristic numbers
Terminology in incompressible fluid dynamics
The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense—they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field.
A point in a fluid flow where the flow has come to rest (that is to say, speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name—a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name—stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field.
Terminology in compressible fluid dynamics
In a compressible fluid, it is convenient to define the total conditions (also called stagnation conditions) for all thermodynamic state properties (such as total temperature, total enthalpy, total speed of sound). These total flow conditions are a function of the fluid velocity and have different values in frames of reference with different motion.
To avoid potential ambiguity when referring to the properties of the fluid associated with the state of the fluid rather than its motion, the prefix "static" is commonly used (such as static temperature and static enthalpy). Where there is no prefix, the fluid property is the static condition (so "density" and "static density" mean the same thing). The static conditions are independent of the frame of reference.
Because the total flow conditions are defined by isentropically bringing the fluid to rest, there is no need to distinguish between total entropy and static entropy as they are always equal by definition. As such, entropy is most commonly referred to as simply "entropy".
See also
List of publications in fluid dynamics
List of fluid dynamicists
References
Further reading
Originally published in 1879, the 6th extended edition appeared first in 1932.
Originally published in 1938.
Encyclopedia: Fluid dynamics Scholarpedia
External links
National Committee for Fluid Mechanics Films (NCFMF), containing films on several subjects in fluid dynamics (in RealMedia format)
Gallery of fluid motion, "a visual record of the aesthetic and science of contemporary fluid mechanics," from the American Physical Society
List of Fluid Dynamics books
Piping
Aerodynamics
Continuum mechanics | 0.795764 | 0.997496 | 0.793771 |
Action (physics) | In physics, action is a scalar quantity that describes how the balance of kinetic versus potential energy of a physical system changes with trajectory. Action is significant because it is an input to the principle of stationary action, an approach to classical mechanics that is simpler for multiple objects. Action and the variational principle are used in Feynman's formulation of quantum mechanics and in general relativity. For systems with small values of action similar to the Planck constant, quantum effects are significant.
In the simple case of a single particle moving with a constant velocity (thereby undergoing uniform linear motion), the action is the momentum of the particle times the distance it moves, added up along its path; equivalently, action is the difference between the particle's kinetic energy and its potential energy, times the duration for which it has that amount of energy.
More formally, action is a mathematical functional which takes the trajectory (also called path or history) of the system as its argument and has a real number as its result. Generally, the action takes different values for different paths. Action has dimensions of energy × time or momentum × length, and its SI unit is joule-second (like the Planck constant h).
Introduction
Introductory physics often begins with Newton's laws of motion, relating force and motion; action is part of a completely equivalent alternative approach with practical and educational advantages. However the concept took many decades to supplant Newtonian approaches and remains a challenge to introduce to students.
Simple example
For a trajectory of a baseball moving in the air on Earth, the action is defined between two points in time, $t_1$ and $t_2$, as the kinetic energy (KE) minus the potential energy (PE), integrated over time.
The action balances kinetic against potential energy.
The kinetic energy of a baseball of mass $m$ is $\tfrac{1}{2}mv^2$, where $v$ is the velocity of the ball; the potential energy is $mgx$, where $x$ is the height of the ball and $g$ is the acceleration due to gravity. Then the action between $t_1$ and $t_2$ is
$$\mathcal{S} = \int_{t_1}^{t_2} \left(\tfrac{1}{2}mv^2 - mgx\right)\mathrm{d}t.$$
The action value depends upon the trajectory taken by the baseball through $x(t)$ and $v(t)$. This makes the action an input to the powerful stationary-action principle for classical and for quantum mechanics. Newton's equations of motion for the baseball can be derived from the action using the stationary-action principle, but the advantages of action-based mechanics only begin to appear in cases where Newton's laws are difficult to apply. Replace the baseball with an electron: classical mechanics fails but stationary action continues to work. The energy difference in the simple action definition, kinetic minus potential energy, is generalized and called the Lagrangian for more complex cases.
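The stationary-action property can be seen numerically. In the sketch below (the mass, launch speed, and time window are assumed values, not from the article), the action is evaluated for the true free-fall trajectory and for perturbed trajectories sharing the same endpoints; the true path gives the smallest action:

```python
# Numerical sketch (assumed parameters): evaluate S = integral of (KE - PE) dt for
# the true free-fall trajectory of a ball and for perturbed trajectories with the
# same endpoints; the true path makes the action stationary (here, a minimum).
import numpy as np

m, g = 0.145, 9.81                                  # assumed mass (kg) and gravity
t1, t2 = 0.0, 2.0
t = np.linspace(t1, t2, 4001)
x_true = 20.0 * (t - t1) - 0.5 * g * (t - t1)**2    # x(t) for v0 = 20 m/s upward

def action(x):
    v = np.gradient(x, t)
    lagrangian = 0.5 * m * v**2 - m * g * x         # KE - PE
    return np.trapz(lagrangian, t)

bump = np.sin(np.pi * (t - t1) / (t2 - t1))         # vanishes at both endpoints
for eps in (0.0, 0.05, 0.2):
    print(f"perturbation {eps:4.2f} m  ->  S = {action(x_true + eps * bump):.4f} J s")
```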
Planck's quantum of action
The Planck constant, written as $h$, or $\hbar$ when a factor of $2\pi$ is included, is called the quantum of action. Like action, this constant has units of energy times time. It figures in all significant quantum equations, like the uncertainty principle and the de Broglie wavelength. Whenever the value of the action approaches the Planck constant, quantum effects are significant.
History
Pierre Louis Maupertuis and Leonhard Euler working in the 1740s developed early versions of the action principle. Joseph Louis Lagrange clarified the mathematics when he invented the calculus of variations. William Rowan Hamilton made the next big breakthrough, formulating Hamilton's principle in 1853. Hamilton's principle became the cornerstone for classical work with different forms of action until Richard Feynman and Julian Schwinger developed quantum action principles.
Definitions
Expressed in mathematical language, using the calculus of variations, the evolution of a physical system (i.e., how the system actually progresses from one state to another) corresponds to a stationary point (usually, a minimum) of the action.
Action has the dimensions of [energy] × [time], and its SI unit is joule-second, which is identical to the unit of angular momentum.
Several different definitions of "the action" are in common use in physics. The action is usually an integral over time. However, when the action pertains to fields, it may be integrated over spatial variables as well. In some cases, the action is integrated along the path followed by the physical system.
The action is typically represented as an integral over time, taken along the path of the system between the initial time and the final time of the development of the system:
$$\mathcal{S} = \int_{t_1}^{t_2} L\, \mathrm{d}t,$$
where the integrand L is called the Lagrangian. For the action integral to be well-defined, the trajectory has to be bounded in time and space.
Action (functional)
Most commonly, the term is used for a functional which takes a function of time and (for fields) space as input and returns a scalar. In classical mechanics, the input function is the evolution q(t) of the system between two times t1 and t2, where q represents the generalized coordinates. The action is defined as the integral of the Lagrangian L for an input evolution between the two times:
$$\mathcal{S}[q(t)] = \int_{t_1}^{t_2} L\bigl(q(t), \dot{q}(t), t\bigr)\,\mathrm{d}t,$$
where the endpoints of the evolution are fixed and defined as $q_1 = q(t_1)$ and $q_2 = q(t_2)$. According to Hamilton's principle, the true evolution qtrue(t) is an evolution for which the action is stationary (a minimum, maximum, or a saddle point). This principle results in the equations of motion in Lagrangian mechanics.
Abbreviated action (functional)
In addition to the action functional, there is another functional called the abbreviated action. In the abbreviated action, the input function is the path followed by the physical system without regard to its parameterization by time. For example, the path of a planetary orbit is an ellipse, and the path of a particle in a uniform gravitational field is a parabola; in both cases, the path does not depend on how fast the particle traverses the path.
The abbreviated action (sometimes written as $\mathcal{S}_0$) is defined as the integral of the generalized momenta,
$$p_i = \frac{\partial L}{\partial \dot{q}_i},$$
for a system Lagrangian $L$ along a path in the generalized coordinates $q_i$:
$$\mathcal{S}_0 = \int_{q_1}^{q_2} \mathbf{p}\cdot \mathrm{d}\mathbf{q} = \int_{q_1}^{q_2} \sum_i p_i \, \mathrm{d}q_i,$$
where $q_1$ and $q_2$ are the starting and ending coordinates.
According to Maupertuis' principle, the true path of the system is a path for which the abbreviated action is stationary.
Hamilton's characteristic function
When the total energy E is conserved, the Hamilton–Jacobi equation can be solved with the additive separation of variables:
$$S(q_1, \dots, q_N, t) = W(q_1, \dots, q_N) - Et,$$
where the time-independent function W(q1, q2, ..., qN) is called Hamilton's characteristic function. The physical significance of this function is understood by taking its total time derivative
$$\frac{\mathrm{d}W}{\mathrm{d}t} = \frac{\partial W}{\partial q_i}\,\dot{q}_i = p_i \dot{q}_i.$$
This can be integrated to give
$$W(q_1, \dots, q_N) = \int p_i \dot{q}_i \,\mathrm{d}t = \int p_i\, \mathrm{d}q_i,$$
which is just the abbreviated action.
Action of a generalized coordinate
A variable Jk in the action-angle coordinates, called the "action" of the generalized coordinate qk, is defined by integrating a single generalized momentum around a closed path in phase space, corresponding to rotating or oscillating motion:
$$J_k = \oint p_k \, \mathrm{d}q_k.$$
The corresponding canonical variable conjugate to Jk is its "angle" wk, for reasons described more fully under action-angle coordinates. The integration is only over a single variable qk and is, therefore, unlike the integrated dot product in the abbreviated action integral above. The Jk variable equals the change in Sk(qk) as qk is varied around the closed path. For several physical systems of interest, Jk is either a constant or varies very slowly; hence, the variable Jk is often used in perturbation calculations and in determining adiabatic invariants. For example, they are used in the calculation of planetary and satellite orbits.
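For a simple harmonic oscillator, the closed-path integral above can be checked numerically against the known closed form. The oscillator parameters in the sketch below are assumed values, and the $\oint p\,\mathrm{d}q$ convention quoted above (without a $1/2\pi$ factor) is used:

```python
# Sketch (assumed oscillator parameters): the action variable J = closed-path
# integral of p dq for a simple harmonic oscillator, evaluated numerically over
# one period and compared with the known result J = 2*pi*E/omega.
import numpy as np

m, k = 1.0, 4.0                     # mass and spring constant (assumed)
omega = np.sqrt(k / m)
A = 0.3                             # oscillation amplitude, m (assumed)
E = 0.5 * k * A**2                  # total energy

t = np.linspace(0.0, 2 * np.pi / omega, 20001)   # one full period
q = A * np.cos(omega * t)
p = -m * A * omega * np.sin(omega * t)

J = np.trapz(p * np.gradient(q, t), t)           # integral of p (dq/dt) dt over one period
print(J, 2 * np.pi * E / omega)
```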
Single relativistic particle
When relativistic effects are significant, the action of a point particle of mass m travelling a world line C parametrized by the proper time $\tau$ is
$$\mathcal{S} = -mc^2 \int_C \mathrm{d}\tau.$$
If instead, the particle is parametrized by the coordinate time t of the particle and the coordinate time ranges from t1 to t2, then the action becomes
$$\mathcal{S} = \int_{t_1}^{t_2} L\, \mathrm{d}t,$$
where the Lagrangian is
$$L = -mc^2\sqrt{1 - \frac{v^2}{c^2}}.$$
Action principles and related ideas
Physical laws are frequently expressed as differential equations, which describe how physical quantities such as position and momentum change continuously with time, space or a generalization thereof. Given the initial and boundary conditions for the situation, the "solution" to these empirical equations is one or more functions that describe the behavior of the system and are called equations of motion.
Action is a part of an alternative approach to finding such equations of motion. Classical mechanics postulates that the path actually followed by a physical system is that for which the action is minimized, or more generally, is stationary. In other words, the action satisfies a variational principle: the principle of stationary action (see also below). The action is defined by an integral, and the classical equations of motion of a system can be derived by minimizing the value of that integral.
The action principle provides deep insights into physics, and is an important concept in modern theoretical physics. Various action principles and related concepts are summarized below.
Maupertuis's principle
In classical mechanics, Maupertuis's principle (named after Pierre Louis Maupertuis) states that the path followed by a physical system is the one of least length (with a suitable interpretation of path and length). Maupertuis's principle uses the abbreviated action between two generalized points on a path.
Hamilton's principal function
Hamilton's principle states that the differential equations of motion for any physical system can be re-formulated as an equivalent integral equation. Thus, there are two distinct approaches for formulating dynamical models.
Hamilton's principle applies not only to the classical mechanics of a single particle, but also to classical fields such as the electromagnetic and gravitational fields. Hamilton's principle has also been extended to quantum mechanics and quantum field theory—in particular the path integral formulation of quantum mechanics makes use of the concept—where a physical system explores all possible paths, with the phase of the probability amplitude for each path being determined by the action for the path; the final probability amplitude adds all paths using their complex amplitude and phase.
Hamilton–Jacobi equation
Hamilton's principal function is obtained from the action functional by fixing the initial time and the initial endpoint while allowing the upper time limit and the second endpoint to vary. The Hamilton's principal function satisfies the Hamilton–Jacobi equation, a formulation of classical mechanics. Due to a similarity with the Schrödinger equation, the Hamilton–Jacobi equation provides, arguably, the most direct link with quantum mechanics.
Euler–Lagrange equations
In Lagrangian mechanics, the requirement that the action integral be stationary under small perturbations is equivalent to a set of differential equations (called the Euler–Lagrange equations) that may be obtained using the calculus of variations.
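For a Lagrangian L(q, q̇, t) with generalized coordinates q_k, the resulting equations take the standard form (stated here for reference):
\[
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_k}\right) - \frac{\partial L}{\partial q_k} = 0 .
\]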
Classical fields
The action principle can be extended to obtain the equations of motion for fields, such as the electromagnetic field or gravitational field.
Maxwell's equations can be derived as conditions of stationary action.
The Einstein equation utilizes the Einstein–Hilbert action as constrained by a variational principle. The trajectory (path in spacetime) of a body in a gravitational field can be found using the action principle. For a free falling body, this trajectory is a geodesic.
Conservation laws
Implications of symmetries in a physical situation can be found with the action principle, together with the Euler–Lagrange equations, which are derived from the action principle. An example is Noether's theorem, which states that to every continuous symmetry in a physical situation there corresponds a conservation law (and conversely). This deep connection requires that the action principle be assumed.
Path integral formulation of quantum field theory
In quantum mechanics, the system does not follow a single path whose action is stationary, but the behavior of the system depends on all permitted paths and the value of their action. The action corresponding to the various paths is used to calculate the path integral, which gives the probability amplitudes of the various outcomes.
Although equivalent in classical mechanics with Newton's laws, the action principle is better suited for generalizations and plays an important role in modern physics. Indeed, this principle is one of the great generalizations in physical science. It is best understood within quantum mechanics, particularly in Richard Feynman's path integral formulation, where it arises out of destructive interference of quantum amplitudes.
Modern extensions
The action principle can be generalized still further. For example, the action need not be an integral, because nonlocal actions are possible. The configuration space need not even be a functional space, given certain features such as noncommutative geometry. However, a physical basis for these mathematical extensions remains to be established experimentally.
See also
Calculus of variations
Functional derivative
Functional integral
Hamiltonian mechanics
Lagrangian
Lagrangian mechanics
Measure (physics)
Noether's theorem
Path integral formulation
Principle of least action
Principle of maximum entropy
Some actions:
Nambu–Goto action
Polyakov action
Bagger–Lambert–Gustavsson action
Einstein–Hilbert action
References
Further reading
The Cambridge Handbook of Physics Formulas, G. Woan, Cambridge University Press, 2010.
Dare A. Wells, Lagrangian Dynamics, Schaum's Outline Series (McGraw-Hill, 1967). A 350-page comprehensive "outline" of the subject.
External links
Principle of least action interactive – an interactive explanation/webpage
Lagrangian mechanics
Hamiltonian mechanics
Calculus of variations
Dynamics (mechanics) | 0.797927 | 0.994417 | 0.793472 |
Lorentz factor | The Lorentz factor or Lorentz term (also known as the gamma factor) is a quantity expressing how much the measurements of time, length, and other physical properties change for an object while it moves. The expression appears in several equations in special relativity, and it arises in derivations of the Lorentz transformations. The name originates from its earlier appearance in Lorentzian electrodynamics – named after the Dutch physicist Hendrik Lorentz.
It is generally denoted γ (the Greek lowercase letter gamma). Sometimes (especially in discussion of superluminal motion) the factor is written as Γ (Greek uppercase gamma) rather than γ.
Definition
The Lorentz factor is defined as
γ = 1/√(1 − v²/c²) = 1/√(1 − β²) = dt/dτ,
where:
v is the relative velocity between inertial reference frames,
c is the speed of light in vacuum,
β is the ratio of v to c,
t is coordinate time,
τ is the proper time for an observer (measuring time intervals in the observer's own frame).
This is the most frequently used form in practice, though not the only one (see below for alternative forms).
To complement the definition, some authors define the reciprocal α = 1/γ = √(1 − β²); see velocity addition formula.
Occurrence
Following is a list of formulae from Special relativity which use as a shorthand:
The Lorentz transformation: The simplest case is a boost in the -direction (more general forms including arbitrary directions and rotations not listed here), which describes how spacetime coordinates change from one inertial frame using coordinates to another with relative velocity :
Corollaries of the above transformations are the results:
Time dilation: The time between two ticks as measured in the frame in which the clock is moving, is longer than the time between these ticks as measured in the rest frame of the clock:
Length contraction: The length of an object as measured in the frame in which it is moving, is shorter than its length in its own rest frame:
Applying conservation of momentum and energy leads to these results:
Relativistic mass: The mass of an object in motion is dependent on and the rest mass :
Relativistic momentum: The relativistic momentum relation takes the same form as for classical momentum, but using the above relativistic mass:
Relativistic kinetic energy: The relativistic kinetic energy relation takes the slightly modified form: As is a function of , the non-relativistic limit gives , as expected from Newtonian considerations.
Numerical values
In the table below, the left-hand column shows speeds as different fractions of the speed of light (i.e. in units of c). The middle column shows the corresponding Lorentz factor, and the final column shows its reciprocal. Values in bold are exact.
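As an illustration of the values such a table contains, a short script along these lines (the helper name lorentz_factor is an arbitrary choice) computes the Lorentz factor and its reciprocal for a few speeds:

```python
import math

def lorentz_factor(beta):
    """Lorentz factor for a speed given as a fraction beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# Speeds as fractions of the speed of light, as in the rows of such a table.
for beta in [0.0, 0.1, 0.5, 0.75, 0.9, 0.99, 0.999]:
    gamma = lorentz_factor(beta)
    print(f"beta = {beta:<6}  gamma = {gamma:10.6f}  1/gamma = {1 / gamma:8.6f}")
```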
Alternative representations
There are other ways to write the factor. Above, velocity was used, but related variables such as momentum and rapidity may also be convenient.
Momentum
Solving the previous relativistic momentum equation for γ leads to
γ = √(1 + (p/(m₀c))²),
where m₀ is the rest mass.
This form is rarely used, although it does appear in the Maxwell–Jüttner distribution.
Rapidity
Applying the definition of rapidity as the hyperbolic angle φ, defined by tanh φ = β,
also leads to (by use of hyperbolic identities):
γ = cosh φ and βγ = sinh φ.
Using the property of Lorentz transformation, it can be shown that rapidity is additive, a useful property that velocity does not have. Thus the rapidity parameter forms a one-parameter group, a foundation for physical models.
Bessel function
The Bunney identity represents the Lorentz factor in terms of an infinite series of Bessel functions:
Series expansion (velocity)
The Lorentz factor has the Maclaurin series
γ = 1 + (1/2)β² + (3/8)β⁴ + (5/16)β⁶ + ⋯,
which is a special case of a binomial series.
The approximation γ ≈ 1 + (1/2)β² may be used to calculate relativistic effects at low speeds. It holds to within 1% error for β < 0.4 (v < 120,000 km/s), and to within 0.1% error for β < 0.22 (v < 66,000 km/s).
The truncated versions of this series also allow physicists to prove that special relativity reduces to Newtonian mechanics at low speeds. For example, in special relativity, the following two equations hold:
For and , respectively, these reduce to their Newtonian equivalents:
The Lorentz factor equation can also be inverted to yield
β = √(1 − 1/γ²).
This has the asymptotic form
β ≈ 1 − (1/2)γ⁻² − (1/8)γ⁻⁴ − ⋯.
The first two terms are occasionally used to quickly calculate velocities from large values. The approximation holds to within 1% tolerance for and to within 0.1% tolerance for
Applications in astronomy
The standard model of long-duration gamma-ray bursts (GRBs) holds that these explosions are ultra-relativistic (initial Lorentz factor greater than approximately 100), which is invoked to explain the so-called "compactness" problem: absent this ultra-relativistic expansion, the ejecta would be optically thick to pair production at typical peak spectral energies of a few 100 keV, whereas the prompt emission is observed to be non-thermal.
Muons, a type of subatomic particle, travel at speeds such that they have a relatively high Lorentz factor and therefore experience extreme time dilation. Since muons have a mean lifetime of just 2.2 μs, muons generated from cosmic-ray collisions high in Earth's atmosphere should be nondetectable on the ground due to their decay rate. However, roughly 10% of muons from these collisions are still detectable on the surface, thereby demonstrating the effect of time dilation on their decay rate.
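A rough numerical sketch of this effect is given below; the production altitude and Lorentz factor are illustrative assumptions rather than measured values, so only the qualitative contrast matters:

```python
import math

C = 299_792_458.0        # speed of light in vacuum, m/s
MUON_LIFETIME = 2.2e-6   # mean proper lifetime of the muon, s

altitude = 15_000.0      # assumed production altitude, m
gamma = 15.0             # assumed Lorentz factor of the muon
beta = math.sqrt(1.0 - 1.0 / gamma**2)

lab_time = altitude / (beta * C)   # travel time measured in the Earth frame
proper_time = lab_time / gamma     # elapsed time in the muon's own frame

print(f"survival without time dilation: {math.exp(-lab_time / MUON_LIFETIME):.2e}")
print(f"survival with time dilation:    {math.exp(-proper_time / MUON_LIFETIME):.2%}")
```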
See also
Inertial frame of reference
Proper velocity
Pseudorapidity
References
External links
Doppler effects
Equations
Hendrik Lorentz
Minkowski spacetime
Special relativity | 0.796579 | 0.996032 | 0.793419 |
Wick rotation | In physics, Wick rotation, named after Italian physicist Gian Carlo Wick, is a method of finding a solution to a mathematical problem in Minkowski space from a solution to a related problem in Euclidean space by means of a transformation that substitutes an imaginary-number variable for a real-number variable.
Wick rotations are useful because of an analogy between two important but seemingly distinct fields of physics: statistical mechanics and quantum mechanics. In this analogy, inverse temperature plays a role in statistical mechanics formally akin to imaginary time in quantum mechanics: that is, , where is time and is the imaginary unit.
More precisely, in statistical mechanics, the Gibbs measure describes the relative probability of the system to be in any given state at temperature , where is a function describing the energy of each state and is the Boltzmann constant. In quantum mechanics, the transformation describes time evolution, where is an operator describing the energy (the Hamiltonian) and is the reduced Planck constant. The former expression resembles the latter when we replace with , and this replacement is called Wick rotation.
Wick rotation is called a rotation because when we represent complex numbers as a plane, the multiplication of a complex number by the imaginary unit is equivalent to counter-clockwise rotating the vector representing that number by an angle of magnitude π/2 about the origin.
Overview
Wick rotation is motivated by the observation that the Minkowski metric in natural units (with metric signature convention (−, +, +, +))
ds² = −(dt²) + dx² + dy² + dz²
and the four-dimensional Euclidean metric
ds² = dτ² + dx² + dy² + dz²
are equivalent if one permits the coordinate t to take on imaginary values. The Minkowski metric becomes Euclidean when t is restricted to the imaginary axis, and vice versa. Taking a problem expressed in Minkowski space with coordinates x, y, z, t, and substituting t = −iτ,
sometimes yields a problem in real Euclidean coordinates x, y, z, τ which is easier to solve. This solution may then, under reverse substitution, yield a solution to the original problem.
Statistical and quantum mechanics
Wick rotation connects statistical mechanics to quantum mechanics by replacing inverse temperature with imaginary time, or more precisely replacing 1/(k_B T) with it/ħ, where T is temperature, k_B is the Boltzmann constant, t is time, and ħ is the reduced Planck constant.
For example, consider a quantum system whose Hamiltonian has eigenvalues . When this system is in thermal equilibrium at temperature , the probability of finding it in its th energy eigenstate is proportional to . Thus, the expected value of any observable that commutes with the Hamiltonian is, up to a normalizing constant,
where runs over all energy eigenstates and is the value of in the th eigenstate.
Alternatively, consider this system in a superposition of energy eigenstates, evolving for a time under the Hamiltonian . After time , the relative phase change of the th eigenstate is . Thus, the probability amplitude that a uniform (equally weighted) superposition of states
evolves to an arbitrary superposition
is, up to a normalizing constant,
Note that this formula can be obtained from the formula for thermal equilibrium by replacing 1/(k_B T) with it/ħ.
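Written out with explicit symbols (the index j and the amplitude coefficients a_j are notation introduced here), the two weighted sums compared above are
\[
\langle A \rangle \;\propto\; \sum_j A_j \, e^{-E_j/(k_B T)},
\qquad\qquad
\text{amplitude} \;\propto\; \sum_j a_j \, e^{-i E_j t/\hbar},
\]
so the thermal expression passes into the quantum-mechanical one under the replacement 1/(k_B T) → it/ħ.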
Statics and dynamics
Wick rotation relates statics problems in n dimensions to dynamics problems in n − 1 dimensions, trading one dimension of space for one dimension of time. A simple example, with n = 1, is a hanging spring with fixed endpoints in a gravitational field. The shape of the spring is a curve y(x). The spring is in equilibrium when the energy associated with this curve is at a critical point (an extremum); this critical point is typically a minimum, so this idea is usually called "the principle of least energy". To compute the energy, we integrate the energy spatial density over space:
where k is the spring constant, and V(y(x)) is the gravitational potential.
The corresponding dynamics problem is that of a rock thrown upwards. The path the rock follows is that which extremalizes the action; as before, this extremum is typically a minimum, so this is called the "principle of least action". Action is the time integral of the Lagrangian:
We get the solution to the dynamics problem (up to a factor of −i) from the statics problem by Wick rotation, replacing x by it and the spring constant k by the mass of the rock m:
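In the notation suggested by the text (k the spring constant, m the mass of the rock, V the gravitational potential), the two functionals being compared are, as a sketch,
\[
E[y] = \int \left[ \tfrac{1}{2} k \left( \frac{dy}{dx} \right)^{2} + V\big(y(x)\big) \right] dx,
\qquad
S[y] = \int \left[ \tfrac{1}{2} m \left( \frac{dy}{dt} \right)^{2} - V\big(y(t)\big) \right] dt,
\]
and the substitution x → it together with k → m carries E[y] into −iS[y].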
Both thermal/quantum and static/dynamic
Taken together, the previous two examples show how the path integral formulation of quantum mechanics is related to statistical mechanics. From statistical mechanics, the shape of each spring in a collection at temperature will deviate from the least-energy shape due to thermal fluctuations; the probability of finding a spring with a given shape decreases exponentially with the energy difference from the least-energy shape. Similarly, a quantum particle moving in a potential can be described by a superposition of paths, each with a phase : the thermal variations in the shape across the collection have turned into quantum uncertainty in the path of the quantum particle.
Further details
The Schrödinger equation and the heat equation are also related by Wick rotation.
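For the free-particle case this can be made explicit; the following is a standard illustration rather than a formula taken from the text above:
\[
i\hbar \, \frac{\partial \psi}{\partial t} = -\frac{\hbar^{2}}{2m} \nabla^{2} \psi
\quad\xrightarrow{\;\; t \,=\, -i\tau \;\;}\quad
\frac{\partial \psi}{\partial \tau} = \frac{\hbar}{2m} \nabla^{2} \psi ,
\]
which is a heat (diffusion) equation with diffusion coefficient ħ/2m.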
Wick rotation also relates a quantum field theory at a finite inverse temperature β to a statistical-mechanical model over the "tube" with the imaginary time coordinate being periodic with period β. However, there is a slight difference. Statistical-mechanical n-point functions satisfy positivity, whereas Wick-rotated quantum field theories satisfy reflection positivity.
Note, however, that the Wick rotation cannot be viewed as a rotation on a complex vector space that is equipped with the conventional norm and metric induced by the inner product, as in this case the rotation would cancel out and have no effect.
Rigorous proof
Dirk Schlingemann proved that a more rigorous link between Euclidean and quantum field theory can be constructed using the Osterwalder–Schrader axioms.
See also
Complex spacetime
Imaginary time
Schwinger function
References
External links
A Spring in Imaginary Time – a worksheet in Lagrangian mechanics illustrating how replacing length by imaginary time turns the parabola of a hanging spring into the inverted parabola of a thrown particle
Euclidean Gravity – a short note by Ray Streater on the "Euclidean Gravity" programme.
Quantum field theory
Statistical mechanics | 0.802645 | 0.988327 | 0.793275 |
Lorentz transformation | In physics, the Lorentz transformations are a six-parameter family of linear transformations from a coordinate frame in spacetime to another frame that moves at a constant velocity relative to the former. The respective inverse transformation is then parameterized by the negative of this velocity. The transformations are named after the Dutch physicist Hendrik Lorentz.
The most common form of the transformation, parametrized by the real constant v representing a velocity confined to the x-direction, is expressed as
t′ = γ(t − vx/c²), x′ = γ(x − vt), y′ = y, z′ = z,
where (t, x, y, z) and (t′, x′, y′, z′) are the coordinates of an event in two frames with the spatial origins coinciding at t = t′ = 0, where the primed frame is seen from the unprimed frame as moving with speed v along the x-axis, where c is the speed of light, and γ = 1/√(1 − v²/c²) is the Lorentz factor. When speed v is much smaller than c, the Lorentz factor is negligibly different from 1, but as v approaches c, γ grows without bound. The value of v must be smaller than c for the transformation to make sense.
Expressing the speed as β = v/c, an equivalent form of the transformation is
ct′ = γ(ct − βx), x′ = γ(x − βct), y′ = y, z′ = z.
Frames of reference can be divided into two groups: inertial (relative motion with constant velocity) and non-inertial (accelerating, moving in curved paths, rotational motion with constant angular velocity, etc.). The term "Lorentz transformations" only refers to transformations between inertial frames, usually in the context of special relativity.
In each reference frame, an observer can use a local coordinate system (usually Cartesian coordinates in this context) to measure lengths, and a clock to measure time intervals. An event is something that happens at a point in space at an instant of time, or more formally a point in spacetime. The transformations connect the space and time coordinates of an event as measured by an observer in each frame.
They supersede the Galilean transformation of Newtonian physics, which assumes an absolute space and time (see Galilean relativity). The Galilean transformation is a good approximation only at relative speeds much less than the speed of light. Lorentz transformations have a number of unintuitive features that do not appear in Galilean transformations. For example, they reflect the fact that observers moving at different velocities may measure different distances, elapsed times, and even different orderings of events, but always such that the speed of light is the same in all inertial reference frames. The invariance of light speed is one of the postulates of special relativity.
Historically, the transformations were the result of attempts by Lorentz and others to explain how the speed of light was observed to be independent of the reference frame, and to understand the symmetries of the laws of electromagnetism. The transformations later became a cornerstone for special relativity.
The Lorentz transformation is a linear transformation. It may include a rotation of space; a rotation-free Lorentz transformation is called a Lorentz boost. In Minkowski space—the mathematical model of spacetime in special relativity—the Lorentz transformations preserve the spacetime interval between any two events. This property is the defining property of a Lorentz transformation. They describe only the transformations in which the spacetime event at the origin is left fixed. They can be considered as a hyperbolic rotation of Minkowski space. The more general set of transformations that also includes translations is known as the Poincaré group.
History
Many physicists—including Woldemar Voigt, George FitzGerald, Joseph Larmor, and Hendrik Lorentz himself—had been discussing the physics implied by these equations since 1887. Early in 1889, Oliver Heaviside had shown from Maxwell's equations that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the luminiferous aether. FitzGerald then conjectured that Heaviside's distortion result might be applied to a theory of intermolecular forces. Some months later, FitzGerald published the conjecture that bodies in motion are being contracted, in order to explain the baffling outcome of the 1887 aether-wind experiment of Michelson and Morley. In 1892, Lorentz independently presented the same idea in a more detailed manner, which was subsequently called FitzGerald–Lorentz contraction hypothesis. Their explanation was widely known before 1905.
Lorentz (1892–1904) and Larmor (1897–1900), who believed the luminiferous aether hypothesis, also looked for the transformation under which Maxwell's equations are invariant when transformed from the aether to a moving frame. They extended the FitzGerald–Lorentz contraction hypothesis and found out that the time coordinate has to be modified as well ("local time"). Henri Poincaré gave a physical interpretation to local time (to first order in v/c, the relative velocity of the two reference frames normalized to the speed of light) as the consequence of clock synchronization, under the assumption that the speed of light is constant in moving frames. Larmor is credited to have been the first to understand the crucial time dilation property inherent in his equations.
In 1905, Poincaré was the first to recognize that the transformation has the properties of a mathematical group,
and he named it after Lorentz.
Later in the same year Albert Einstein published what is now called special relativity, by deriving the Lorentz transformation under the assumptions of the principle of relativity and the constancy of the speed of light in any inertial reference frame, and by abandoning the mechanistic aether as unnecessary.
Derivation of the group of Lorentz transformations
An event is something that happens at a certain point in spacetime, or more generally, the point in spacetime itself. In any inertial frame an event is specified by a time coordinate ct and a set of Cartesian coordinates to specify position in space in that frame. Subscripts label individual events.
From Einstein's second postulate of relativity (invariance of c) it follows that:
in all inertial frames for events connected by light signals. The quantity on the left is called the spacetime interval between events and . The interval between any two events, not necessarily separated by light signals, is in fact invariant, i.e., independent of the state of relative motion of observers in different inertial frames, as is shown using homogeneity and isotropy of space. The transformation sought after thus must possess the property that:
where are the spacetime coordinates used to define events in one frame, and are the coordinates in another frame. First one observes that is satisfied if an arbitrary -tuple of numbers are added to events and . Such transformations are called spacetime translations and are not dealt with further here. Then one observes that a linear solution preserving the origin of the simpler problem solves the general problem too:
(a solution satisfying the first formula automatically satisfies the second one as well; see polarization identity). Finding the solution to the simpler problem is just a matter of look-up in the theory of classical groups that preserve bilinear forms of various signature. First equation in can be written more compactly as:
where refers to the bilinear form of signature on exposed by the right hand side formula in. The alternative notation defined on the right is referred to as the relativistic dot product. Spacetime mathematically viewed as endowed with this bilinear form is known as Minkowski space . The Lorentz transformation is thus an element of the group , the Lorentz group or, for those that prefer the other metric signature, (also called the Lorentz group). One has:
which is precisely preservation of the bilinear form which implies (by linearity of and bilinearity of the form) that is satisfied. The elements of the Lorentz group are rotations and boosts and mixes thereof. If the spacetime translations are included, then one obtains the inhomogeneous Lorentz group or the Poincaré group.
Generalities
The relations between the primed and unprimed spacetime coordinates are the Lorentz transformations, each coordinate in one frame is a linear function of all the coordinates in the other frame, and the inverse functions are the inverse transformation. Depending on how the frames move relative to each other, and how they are oriented in space relative to each other, other parameters that describe direction, speed, and orientation enter the transformation equations.
Transformations describing relative motion with constant (uniform) velocity and without rotation of the space coordinate axes are called Lorentz boosts or simply boosts, and the relative velocity between the frames is the parameter of the transformation. The other basic type of Lorentz transformation is rotation in the spatial coordinates only, these like boosts are inertial transformations since there is no relative motion, the frames are simply tilted (and not continuously rotating), and in this case quantities defining the rotation are the parameters of the transformation (e.g., axis–angle representation, or Euler angles, etc.). A combination of a rotation and boost is a homogeneous transformation, which transforms the origin back to the origin.
The full Lorentz group also contains special transformations that are neither rotations nor boosts, but rather reflections in a plane through the origin. Two of these can be singled out; spatial inversion in which the spatial coordinates of all events are reversed in sign and temporal inversion in which the time coordinate for each event gets its sign reversed.
Boosts should not be conflated with mere displacements in spacetime; in this case, the coordinate systems are simply shifted and there is no relative motion. However, these also count as symmetries forced by special relativity since they leave the spacetime interval invariant. A combination of a rotation with a boost, followed by a shift in spacetime, is an inhomogeneous Lorentz transformation, an element of the Poincaré group, which is also called the inhomogeneous Lorentz group.
Physical formulation of Lorentz boosts
Coordinate transformation
A "stationary" observer in frame defines events with coordinates . Another frame moves with velocity relative to , and an observer in this "moving" frame defines events using the coordinates .
The coordinate axes in each frame are parallel (the and axes are parallel, the and axes are parallel, and the and axes are parallel), remain mutually perpendicular, and relative motion is along the coincident axes. At , the origins of both coordinate systems are the same, . In other words, the times and positions are coincident at this event. If all these hold, then the coordinate systems are said to be in standard configuration, or synchronized.
If an observer in records an event , then an observer in records the same event with coordinates
where is the relative velocity between frames in the -direction, is the speed of light, and
(lowercase gamma) is the Lorentz factor.
Here, is the parameter of the transformation, for a given boost it is a constant number, but can take a continuous range of values. In the setup used here, positive relative velocity is motion along the positive directions of the axes, zero relative velocity is no relative motion, while negative relative velocity is relative motion along the negative directions of the axes. The magnitude of relative velocity cannot equal or exceed , so only subluminal speeds are allowed. The corresponding range of is .
The transformations are not defined if is outside these limits. At the speed of light is infinite, and faster than light is a complex number, each of which make the transformations unphysical. The space and time coordinates are measurable quantities and numerically must be real numbers.
As an active transformation, an observer in F′ notices the coordinates of the event to be "boosted" in the negative directions of the axes, because of the in the transformations. This has the equivalent effect of the coordinate system F′ boosted in the positive directions of the axes, while the event does not change and is simply represented in another coordinate system, a passive transformation.
The inverse relations ( in terms of ) can be found by algebraically solving the original set of equations. A more efficient way is to use physical principles. Here is the "stationary" frame while is the "moving" frame. According to the principle of relativity, there is no privileged frame of reference, so the transformations from to must take exactly the same form as the transformations from to . The only difference is moves with velocity relative to (i.e., the relative velocity has the same magnitude but is oppositely directed). Thus if an observer in notes an event , then an observer in notes the same event with coordinates
and the value of remains unchanged. This "trick" of simply reversing the direction of relative velocity while preserving its magnitude, and exchanging primed and unprimed variables, always applies to finding the inverse transformation of every boost in any direction.
Sometimes it is more convenient to use β = v/c (lowercase beta) instead of v, so that
which shows much more clearly the symmetry in the transformation. From the allowed ranges of and the definition of , it follows . The use of and is standard throughout the literature.
When the boost velocity is in an arbitrary vector direction with the boost vector , then the transformation from an unprimed spacetime coordinate system to a primed coordinate system is given by
where the Lorentz factor is . The determinant of the transformation matrix is +1 and its trace is . The inverse of the transformation is given by reversing the sign of .
The Lorentz transformations can also be derived in a way that resembles circular rotations in 3d space using the hyperbolic functions. For the boost in the direction, the results are
where ζ (lowercase zeta) is a parameter called rapidity (many other symbols are also used). Given the strong resemblance to rotations of spatial coordinates in 3d space in the Cartesian xy, yz, and zx planes, a Lorentz boost can be thought of as a hyperbolic rotation of spacetime coordinates in the xt, yt, and zt Cartesian-time planes of 4d Minkowski space. The parameter ζ is the hyperbolic angle of rotation, analogous to the ordinary angle for circular rotations. This transformation can be illustrated with a Minkowski diagram.
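In this parametrization the boost along the x-direction takes the standard hyperbolic form (a reconstruction consistent with the relations quoted below):
\[
ct' = ct \cosh\zeta - x \sinh\zeta, \qquad
x' = x \cosh\zeta - ct \sinh\zeta, \qquad
y' = y, \qquad z' = z,
\]
\[
\beta = \tanh\zeta, \qquad \gamma = \cosh\zeta, \qquad \beta\gamma = \sinh\zeta, \qquad \zeta = \operatorname{artanh}\beta .
\]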
The hyperbolic functions arise from the difference between the squares of the time and spatial coordinates in the spacetime interval, rather than a sum. The geometric significance of the hyperbolic functions can be visualized by taking or in the transformations. Squaring and subtracting the results, one can derive hyperbolic curves of constant coordinate values but varying , which parametrizes the curves according to the identity
Conversely the and axes can be constructed for varying coordinates but constant . The definition
provides the link between a constant value of rapidity, and the slope of the axis in spacetime. A consequence of these two hyperbolic formulae is an identity that matches the Lorentz factor
Comparing the Lorentz transformations in terms of the relative velocity and rapidity, or using the above formulae, the connections between , , and are
Taking the inverse hyperbolic tangent gives the rapidity
Since , it follows . From the relation between and , positive rapidity is motion along the positive directions of the axes, zero rapidity is no relative motion, while negative rapidity is relative motion along the negative directions of the axes.
The inverse transformations are obtained by exchanging primed and unprimed quantities to switch the coordinate frames, and negating rapidity since this is equivalent to negating the relative velocity. Therefore,
The inverse transformations can be similarly visualized by considering the cases when and .
So far the Lorentz transformations have been applied to one event. If there are two events, there is a spatial separation and time interval between them. It follows from the linearity of the Lorentz transformations that two values of space and time coordinates can be chosen, the Lorentz transformations can be applied to each, then subtracted to get the Lorentz transformations of the differences;
with inverse relations
where (uppercase delta) indicates a difference of quantities; e.g., for two values of coordinates, and so on.
These transformations on differences rather than spatial points or instants of time are useful for a number of reasons:
in calculations and experiments, it is lengths between two points or time intervals that are measured or of interest (e.g., the length of a moving vehicle, or time duration it takes to travel from one place to another),
the transformations of velocity can be readily derived by making the difference infinitesimally small and dividing the equations, and the process repeated for the transformation of acceleration,
if the coordinate systems are never coincident (i.e., not in standard configuration), and if both observers can agree on an event in and in , then they can use that event as the origin, and the spacetime coordinate differences are the differences between their coordinates and this origin, e.g., , , etc.
Physical implications
A critical requirement of the Lorentz transformations is the invariance of the speed of light, a fact used in their derivation, and contained in the transformations themselves. If in the equation for a pulse of light along the direction is , then in the Lorentz transformations give , and vice versa, for any .
For relative speeds much less than the speed of light, the Lorentz transformations reduce to the Galilean transformation
in accordance with the correspondence principle. It is sometimes said that nonrelativistic physics is a physics of "instantaneous action at a distance".
Three counterintuitive, but correct, predictions of the transformations are:
Relativity of simultaneity
Suppose two events occur along the x axis simultaneously in , but separated by a nonzero displacement . Then in , we find that , so the events are no longer simultaneous according to a moving observer.
Time dilation
Suppose there is a clock at rest in . If a time interval is measured at the same point in that frame, so that , then the transformations give this interval in by . Conversely, suppose there is a clock at rest in . If an interval is measured at the same point in that frame, so that , then the transformations give this interval in F by . Either way, each observer measures the time interval between ticks of a moving clock to be longer by a factor than the time interval between ticks of his own clock.
Length contraction
Suppose there is a rod at rest in aligned along the x axis, with length . In , the rod moves with velocity , so its length must be measured by taking two simultaneous measurements at opposite ends. Under these conditions, the inverse Lorentz transform shows that . In the two measurements are no longer simultaneous, but this does not matter because the rod is at rest in . So each observer measures the distance between the end points of a moving rod to be shorter by a factor than the end points of an identical rod at rest in his own frame. Length contraction affects any geometric quantity related to lengths, so from the perspective of a moving observer, areas and volumes will also appear to shrink along the direction of motion.
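In formulas, with Δx the separation of the two simultaneous events in F, Δt′ the proper time between ticks of the clock at rest in F′, and L₀ the rest length of the rod, the three effects read (a standard summary, not the exact expressions elided above):
\[
\Delta t' = -\gamma \frac{v\,\Delta x}{c^{2}} \quad (\text{simultaneity}),
\qquad
\Delta t = \gamma\,\Delta t' \quad (\text{time dilation}),
\qquad
L = \frac{L_0}{\gamma} \quad (\text{length contraction}).
\]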
Vector transformations
The use of vectors allows positions and velocities to be expressed in arbitrary directions compactly. A single boost in any direction depends on the full relative velocity vector with a magnitude that cannot equal or exceed , so that .
Only time and the coordinates parallel to the direction of relative motion change, while those coordinates perpendicular do not. With this in mind, split the spatial position vector as measured in , and as measured in , each into components perpendicular (⊥) and parallel ( ‖ ) to ,
then the transformations are
where is the dot product. The Lorentz factor retains its definition for a boost in any direction, since it depends only on the magnitude of the relative velocity. The definition with magnitude is also used by some authors.
Introducing a unit vector in the direction of relative motion, the relative velocity is with magnitude and direction , and vector projection and rejection give respectively
Accumulating the results gives the full transformations,
The projection and rejection also applies to . For the inverse transformations, exchange and to switch observed coordinates, and negate the relative velocity (or simply the unit vector since the magnitude is always positive) to obtain
The unit vector has the advantage of simplifying equations for a single boost, allows either or to be reinstated when convenient, and the rapidity parametrization is immediately obtained by replacing and . It is not convenient for multiple boosts.
The vectorial relation between relative velocity and rapidity is
and the "rapidity vector" can be defined as
each of which serves as a useful abbreviation in some contexts. The magnitude of is the absolute value of the rapidity scalar confined to , which agrees with the range .
Transformation of velocities
Defining the coordinate velocities and Lorentz factor by
taking the differentials in the coordinates and time of the vector transformations, then dividing equations, leads to
The velocities and are the velocity of some massive object. They can also be for a third inertial frame (say F′′), in which case they must be constant. Denote either entity by X. Then X moves with velocity relative to F, or equivalently with velocity relative to F′, in turn F′ moves with velocity relative to F. The inverse transformations can be obtained in a similar way, or as with position coordinates exchange and , and change to .
The transformation of velocity is useful in stellar aberration, the Fizeau experiment, and the relativistic Doppler effect.
The Lorentz transformations of acceleration can be similarly obtained by taking differentials in the velocity vectors, and dividing these by the time differential.
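For the special case of a boost with speed v along the x-direction, the resulting velocity transformation reduces to the familiar one-dimensional addition law and its transverse companions (a standard special case of the vector result described above):
\[
u'_x = \frac{u_x - v}{1 - u_x v / c^{2}}, \qquad
u'_y = \frac{u_y}{\gamma \left( 1 - u_x v / c^{2} \right)}, \qquad
u'_z = \frac{u_z}{\gamma \left( 1 - u_x v / c^{2} \right)} .
\]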
Transformation of other quantities
In general, given four quantities and and their Lorentz-boosted counterparts and , a relation of the form
implies the quantities transform under Lorentz transformations similar to the transformation of spacetime coordinates;
The decomposition of (and ) into components perpendicular and parallel to is exactly the same as for the position vector, as is the process of obtaining the inverse transformations (exchange and to switch observed quantities, and reverse the direction of relative motion by the substitution ).
The quantities collectively make up a four-vector, where is the "timelike component", and the "spacelike component". Examples of and are the following:
For a given object (e.g., particle, fluid, field, material), if or correspond to properties specific to the object like its charge density, mass density, spin, etc., its properties can be fixed in the rest frame of that object. Then the Lorentz transformations give the corresponding properties in a frame moving relative to the object with constant velocity. This breaks some notions taken for granted in non-relativistic physics. For example, the energy of an object is a scalar in non-relativistic mechanics, but not in relativistic mechanics because energy changes under Lorentz transformations; its value is different for various inertial frames. In the rest frame of an object, it has a rest energy and zero momentum. In a boosted frame its energy is different and it appears to have a momentum. Similarly, in non-relativistic quantum mechanics the spin of a particle is a constant vector, but in relativistic quantum mechanics spin depends on relative motion. In the rest frame of the particle, the spin pseudovector can be fixed to be its ordinary non-relativistic spin with a zero timelike quantity , however a boosted observer will perceive a nonzero timelike component and an altered spin.
Not all quantities are invariant in the form as shown above, for example orbital angular momentum does not have a timelike quantity, and neither does the electric field E nor the magnetic field B. The definition of angular momentum is L = r × p, and in a boosted frame the altered angular momentum is L′ = r′ × p′. Applying this definition using the transformations of coordinates and momentum leads to the transformation of angular momentum. It turns out L transforms with another vector quantity related to boosts, see relativistic angular momentum for details. For the case of the E and B fields, the transformations cannot be obtained as directly using vector algebra. The Lorentz force is the definition of these fields, and in F it is f = q(E + v × B) while in F′ it is f′ = q(E′ + v′ × B′). A method of deriving the EM field transformations in an efficient way, which also illustrates the unity of the electromagnetic field, uses tensor algebra, given below.
Mathematical formulation
Throughout, italic non-bold capital letters are 4×4 matrices, while non-italic bold letters are 3×3 matrices.
Homogeneous Lorentz group
Writing the coordinates in column vectors and the Minkowski metric as a square matrix
the spacetime interval takes the form (superscript denotes transpose)
and is invariant under a Lorentz transformation
where is a square matrix which can depend on parameters.
The set of all Lorentz transformations in this article is denoted . This set together with matrix multiplication forms a group, in this context known as the Lorentz group. Also, the above expression is a quadratic form of signature (3,1) on spacetime, and the group of transformations which leaves this quadratic form invariant is the indefinite orthogonal group O(3,1), a Lie group. In other words, the Lorentz group is O(3,1). As presented in this article, any Lie groups mentioned are matrix Lie groups. In this context the operation of composition amounts to matrix multiplication.
From the invariance of the spacetime interval it follows that
ΛᵀηΛ = η,
and this matrix equation contains the general conditions on the Lorentz transformation to ensure invariance of the spacetime interval. Taking the determinant of the equation using the product rule gives immediately
(det Λ)² = 1, so det Λ = ±1.
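These conditions are easy to check numerically. The sketch below builds a standard-configuration boost along x and verifies both the defining matrix equation and the unit determinant; the metric signature (−, +, +, +) and the function name boost_x are choices made here:

```python
import numpy as np

def boost_x(beta):
    """Standard-configuration Lorentz boost along x, acting on (ct, x, y, z)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([
        [ gamma,        -gamma * beta, 0.0, 0.0],
        [-gamma * beta,  gamma,        0.0, 0.0],
        [ 0.0,           0.0,          1.0, 0.0],
        [ 0.0,           0.0,          0.0, 1.0],
    ])

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)
L = boost_x(0.6)

print(np.allclose(L.T @ eta @ L, eta))    # True: the spacetime interval is preserved
print(np.isclose(np.linalg.det(L), 1.0))  # True: a proper transformation
```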
Writing the Minkowski metric as a block matrix, and the Lorentz transformation in the most general form,
carrying out the block matrix multiplications obtains general conditions on to ensure relativistic invariance. Not much information can be directly extracted from all the conditions, however one of the results
is useful; always so it follows that
The negative inequality may be unexpected, because multiplies the time coordinate and this has an effect on time symmetry. If the positive equality holds, then is the Lorentz factor.
The determinant and inequality provide four ways to classify Lorentz Transformations (herein LTs for brevity). Any particular LT has only one determinant sign and only one inequality. There are four sets which include every possible pair given by the intersections ("n"-shaped symbol meaning "and") of these classifying sets.
where "+" and "−" indicate the determinant sign, while "↑" for ≥ and "↓" for ≤ denote the inequalities.
The full Lorentz group splits into the union ("u"-shaped symbol meaning "or") of four disjoint sets
A subgroup of a group must be closed under the same operation of the group (here matrix multiplication). In other words, for two Lorentz transformations and from a particular subgroup, the composite Lorentz transformations and must be in the same subgroup as and . This is not always the case: the composition of two antichronous Lorentz transformations is orthochronous, and the composition of two improper Lorentz transformations is proper. In other words, while the sets , , , and all form subgroups, the sets containing improper and/or antichronous transformations without enough proper orthochronous transformations (e.g. , , ) do not form subgroups.
Proper transformations
If a Lorentz covariant 4-vector is measured in one inertial frame with result , and the same measurement made in another inertial frame (with the same orientation and origin) gives result , the two results will be related by
where the boost matrix represents the rotation-free Lorentz transformation between the unprimed and primed frames and is the velocity of the primed frame as seen from the unprimed frame. The matrix is given by
where is the magnitude of the velocity and is the Lorentz factor. This formula represents a passive transformation, as it describes how the coordinates of the measured quantity changes from the unprimed frame to the primed frame. The active transformation is given by .
If a frame is boosted with velocity relative to frame , and another frame is boosted with velocity relative to , the separate boosts are
and the composition of the two boosts connects the coordinates in and ,
Successive transformations act on the left. If and are collinear (parallel or antiparallel along the same line of relative motion), the boost matrices commute: . This composite transformation happens to be another boost, , where is collinear with and .
If and are not collinear but in different directions, the situation is considerably more complicated. Lorentz boosts along different directions do not commute: and are not equal. Although each of these compositions is not a single boost, each composition is still a Lorentz transformation as it preserves the spacetime interval. It turns out the composition of any two Lorentz boosts is equivalent to a boost followed or preceded by a rotation on the spatial coordinates, in the form of or . The and are composite velocities, while and are rotation parameters (e.g. axis-angle variables, Euler angles, etc.). The rotation in block matrix form is simply
where is a 3d rotation matrix, which rotates any 3d vector in one sense (active transformation), or equivalently the coordinate frame in the opposite sense (passive transformation). It is not simple to connect and (or and ) to the original boost parameters and . In a composition of boosts, the matrix is named the Wigner rotation, and gives rise to the Thomas precession. These articles give the explicit formulae for the composite transformation matrices, including expressions for .
In this article the axis-angle representation is used for . The rotation is about an axis in the direction of a unit vector , through angle (positive anticlockwise, negative clockwise, according to the right-hand rule). The "axis-angle vector"
will serve as a useful abbreviation.
Spatial rotations alone are also Lorentz transformations since they leave the spacetime interval invariant. Like boosts, successive rotations about different axes do not commute. Unlike boosts, the composition of any two rotations is equivalent to a single rotation. Some other similarities and differences between the boost and rotation matrices include:
inverses: (relative motion in the opposite direction), and (rotation in the opposite sense about the same axis)
identity transformation for no relative motion/rotation:
unit determinant: . This property makes them proper transformations.
matrix symmetry: is symmetric (equals transpose), while is nonsymmetric but orthogonal (transpose equals inverse, ).
The most general proper Lorentz transformation includes a boost and rotation together, and is a nonsymmetric matrix. As special cases, and . An explicit form of the general Lorentz transformation is cumbersome to write down and will not be given here. Nevertheless, closed form expressions for the transformation matrices will be given below using group theoretical arguments. It will be easier to use the rapidity parametrization for boosts, in which case one writes and .
The Lie group SO+(3,1)
The set of transformations
with matrix multiplication as the operation of composition forms a group, called the "restricted Lorentz group", and is the special indefinite orthogonal group SO+(3,1). (The plus sign indicates that it preserves the orientation of the temporal dimension).
For simplicity, look at the infinitesimal Lorentz boost in the x direction (examining a boost in any other direction, or rotation about any axis, follows an identical procedure). The infinitesimal boost is a small boost away from the identity, obtained by the Taylor expansion of the boost matrix to first order about ,
where the higher order terms not shown are negligible because is small, and is simply the boost matrix in the x direction. The derivative of the matrix is the matrix of derivatives (of the entries, with respect to the same variable), and it is understood the derivatives are found first then evaluated at ,
For now, is defined by this result (its significance will be explained shortly). In the limit of an infinite number of infinitely small steps, the finite boost transformation in the form of a matrix exponential is obtained
where the limit definition of the exponential has been used (see also characterizations of the exponential function). More generally
The axis-angle vector and rapidity vector are altogether six continuous variables which make up the group parameters (in this particular representation), and the generators of the group are and , each vectors of matrices with the explicit forms
These are all defined in an analogous way to above, although the minus signs in the boost generators are conventional. Physically, the generators of the Lorentz group correspond to important symmetries in spacetime: are the rotation generators which correspond to angular momentum, and are the boost generators which correspond to the motion of the system in spacetime. The derivative of any smooth curve with in the group depending on some group parameter with respect to that group parameter, evaluated at , serves as a definition of a corresponding group generator , and this reflects an infinitesimal transformation away from the identity. The smooth curve can always be taken as an exponential as the exponential will always map smoothly back into the group via for all ; this curve will yield again when differentiated at .
Expanding the exponentials in their Taylor series obtains
which compactly reproduce the boost and rotation matrices as given in the previous section.
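A quick numerical check of the exponential map is given below. The explicit matrix for the boost generator K_x and the sign convention exp(−ζK_x) are assumptions consistent with the conventions sketched above, not the elided matrices themselves:

```python
import numpy as np
from scipy.linalg import expm

# Boost generator along x in the 4x4 real-matrix convention assumed here.
K_x = np.zeros((4, 4))
K_x[0, 1] = K_x[1, 0] = 1.0

zeta = 0.8                 # rapidity of the finite boost
B = expm(-zeta * K_x)      # finite boost as a matrix exponential

# The exponential reproduces the hyperbolic-rotation form of the boost matrix.
expected = np.array([
    [ np.cosh(zeta), -np.sinh(zeta), 0.0, 0.0],
    [-np.sinh(zeta),  np.cosh(zeta), 0.0, 0.0],
    [ 0.0,            0.0,           1.0, 0.0],
    [ 0.0,            0.0,           0.0, 1.0],
])
print(np.allclose(B, expected))   # True
```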
It has been stated that the general proper Lorentz transformation is a product of a boost and rotation. At the infinitesimal level the product
is commutative because only linear terms are required (products like and count as higher order terms and are negligible). Taking the limit as before leads to the finite transformation in the form of an exponential
The converse is also true, but the decomposition of a finite general Lorentz transformation into such factors is nontrivial. In particular,
because the generators do not commute. For a description of how to find the factors of a general Lorentz transformation in terms of a boost and a rotation in principle (this usually does not yield an intelligible expression in terms of generators and ), see Wigner rotation. If, on the other hand, the decomposition is given in terms of the generators, and one wants to find the product in terms of the generators, then the Baker–Campbell–Hausdorff formula applies.
The Lie algebra so(3,1)
Lorentz generators can be added together, or multiplied by real numbers, to obtain more Lorentz generators. In other words, the set of all Lorentz generators
together with the operations of ordinary matrix addition and multiplication of a matrix by a number, forms a vector space over the real numbers. The generators form a basis set of V, and the components of the axis-angle and rapidity vectors, , are the coordinates of a Lorentz generator with respect to this basis.
Three of the commutation relations of the Lorentz generators are
where the bracket is known as the commutator, and the other relations can be found by taking cyclic permutations of x, y, z components (i.e. change x to y, y to z, and z to x, repeat).
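In one common convention for the real matrix generators (assumed here; the signs depend on how the generators are defined), the three relations referred to above are
\[
[J_x, J_y] = J_z, \qquad [K_x, K_y] = -J_z, \qquad [J_x, K_y] = K_z .
\]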
These commutation relations, and the vector space of generators, fulfill the definition of the Lie algebra . In summary, a Lie algebra is defined as a vector space V over a field of numbers, and with a binary operation [ , ] (called a Lie bracket in this context) on the elements of the vector space, satisfying the axioms of bilinearity, alternatization, and the Jacobi identity. Here the operation [ , ] is the commutator which satisfies all of these axioms, the vector space is the set of Lorentz generators V as given previously, and the field is the set of real numbers.
Linking terminology used in mathematics and physics: A group generator is any element of the Lie algebra. A group parameter is a component of a coordinate vector representing an arbitrary element of the Lie algebra with respect to some basis. A basis, then, is a set of generators being a basis of the Lie algebra in the usual vector space sense.
The exponential map from the Lie algebra to the Lie group,
provides a one-to-one correspondence between small enough neighborhoods of the origin of the Lie algebra and neighborhoods of the identity element of the Lie group. In the case of the Lorentz group, the exponential map is just the matrix exponential. Globally, the exponential map is not one-to-one, but in the case of the Lorentz group, it is surjective (onto). Hence any group element in the connected component of the identity can be expressed as an exponential of an element of the Lie algebra.
Improper transformations
Lorentz transformations also include parity inversion
which negates all the spatial coordinates only, and time reversal
which negates the time coordinate only, because these transformations leave the spacetime interval invariant. Here is the 3d identity matrix. These are both symmetric, they are their own inverses (see involution (mathematics)), and each have determinant −1. This latter property makes them improper transformations.
If is a proper orthochronous Lorentz transformation, then is improper antichronous, is improper orthochronous, and is proper antichronous.
Inhomogeneous Lorentz group
Two other spacetime symmetries have not been accounted for. In order for the spacetime interval to be invariant, it can be shown that it is necessary and sufficient for the coordinate transformation to be of the form
where C is a constant column containing translations in time and space. If C ≠ 0, this is an inhomogeneous Lorentz transformation or Poincaré transformation. If C = 0, this is a homogeneous Lorentz transformation. Poincaré transformations are not dealt further in this article.
Tensor formulation
Contravariant vectors
Writing the general matrix transformation of coordinates as the matrix equation
allows the transformation of other physical quantities that cannot be expressed as four-vectors; e.g., tensors or spinors of any order in 4d spacetime, to be defined. In the corresponding tensor index notation, the above matrix expression is
where lower and upper indices label covariant and contravariant components respectively, and the summation convention is applied. It is a standard convention to use Greek indices that take the value 0 for time components, and 1, 2, 3 for space components, while Latin indices simply take the values 1, 2, 3, for spatial components (the opposite for Landau and Lifshitz). Note that the first index (reading left to right) corresponds in the matrix notation to a row index. The second index corresponds to the column index.
The transformation matrix is universal for all four-vectors, not just 4-dimensional spacetime coordinates. If is any four-vector, then in tensor index notation
Alternatively, one writes in which the primed indices denote the indices of A in the primed frame. For a general -component object one may write where is the appropriate representation of the Lorentz group, an matrix for every . In this case, the indices should not be thought of as spacetime indices (sometimes called Lorentz indices), and they run from to . E.g., if is a bispinor, then the indices are called Dirac indices.
Covariant vectors
There are also vector quantities with covariant indices. They are generally obtained from their corresponding objects with contravariant indices by the operation of lowering an index; e.g.,
where is the metric tensor. (The linked article also provides more information about what the operation of raising and lowering indices really is mathematically.) The inverse of this transformation is given by
where, when viewed as matrices, is the inverse of . As it happens, . This is referred to as raising an index. To transform a covariant vector , first raise its index, then transform it according to the same rule as for contravariant -vectors, then finally lower the index;
But
That is, it is the -component of the inverse Lorentz transformation. One defines (as a matter of notation),
and may in this notation write
Now for a subtlety. The implied summation on the right hand side of
is running over a row index of the matrix representing . Thus, in terms of matrices, this transformation should be thought of as the inverse transpose of acting on the column vector . That is, in pure matrix notation,
This means exactly that covariant vectors (thought of as column matrices) transform according to the dual representation of the standard representation of the Lorentz group. This notion generalizes to general representations, simply replace with .
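The inverse-transpose rule can likewise be checked numerically. In the sketch below (same assumed conventions as above), contravariant components are transformed with Λ, covariant components with the inverse transpose of Λ, and the contraction of the two is confirmed to be invariant:

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -gamma * beta, 0., 0.],
                [-gamma * beta, gamma, 0., 0.],
                [0., 0., 1., 0.],
                [0., 0., 0., 1.]])
eta = np.diag([1., -1., -1., -1.])

A_up = np.array([1.0, 0.2, -0.7, 0.4])         # contravariant components A^mu
B_dn = eta @ np.array([0.3, 1.1, 0.0, -0.5])   # covariant components B_mu (lowered index)

A_up_p = Lam @ A_up                     # contravariant: transform with Lam
B_dn_p = np.linalg.inv(Lam).T @ B_dn    # covariant: transform with the inverse transpose

print(np.isclose(A_up @ B_dn, A_up_p @ B_dn_p))  # True: the contraction A^mu B_mu is invariant
```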
Tensors
If and are linear operators on vector spaces and , then a linear operator may be defined on the tensor product of and , denoted according to
From this it is immediately clear that if the two factors are four-vectors, then their tensor product transforms as
The second step uses the bilinearity of the tensor product and the last step defines a 2-tensor in component form, or rather, it just renames the tensor.
These observations generalize in an obvious way to more factors, and using the fact that a general tensor on a vector space can be written as a sum of a coefficient (component!) times tensor products of basis vectors and basis covectors, one arrives at the transformation law for any tensor quantity . It is given by
where is defined above. This form can generally be reduced to the form for general -component objects given above with a single matrix operating on column vectors. This latter form is sometimes preferred; e.g., for the electromagnetic field tensor.
Transformation of the electromagnetic field
Lorentz transformations can also be used to illustrate that the magnetic field and electric field are simply different aspects of the same force — the electromagnetic force, as a consequence of relative motion between electric charges and observers. The fact that the electromagnetic field shows relativistic effects becomes clear by carrying out a simple thought experiment.
An observer measures a charge at rest in frame F. The observer will detect a static electric field. As the charge is stationary in this frame, there is no electric current, so the observer does not observe any magnetic field.
The other observer in frame F′ moves at velocity relative to F and the charge. This observer sees a different electric field because the charge moves at velocity in their rest frame. The motion of the charge corresponds to an electric current, and thus the observer in frame F′ also sees a magnetic field.
The electric and magnetic fields transform differently from space and time, but exactly the same way as relativistic angular momentum and the boost vector.
The electromagnetic field strength tensor is given by
in SI units. In relativity, the Gaussian system of units is often preferred over SI units, even in texts whose main choice of units is SI units, because in it the electric field and the magnetic induction have the same units making the appearance of the electromagnetic field tensor more natural. Consider a Lorentz boost in the -direction. It is given by
where the field tensor is displayed side by side for easiest possible reference in the manipulations below.
The general transformation law becomes
For the magnetic field one obtains
For the electric field one obtains
Here, is used. These results can be summarized by
and are independent of the metric signature. For SI units, substitute E → E/c. Some authors refer to this last form as the 3 + 1 view, as opposed to the geometric view represented by the tensor expression
and they make a strong point of the ease with which results that are difficult to achieve using the 3 + 1 view can be obtained and understood. Only objects that have well defined Lorentz transformation properties (in fact under any smooth coordinate transformation) are geometric objects. In the geometric view, the electromagnetic field is a six-dimensional geometric object in spacetime, as opposed to two interdependent but separate 3-vector fields in space and time. The fields E (alone) and B (alone) do not have well defined Lorentz transformation properties. The mathematical underpinnings are the transformation equations above, which immediately yield the result. One should note that the primed and unprimed tensors refer to the same event in spacetime. Thus the complete equation with spacetime dependence is
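To make the mixing of the fields concrete, here is a hedged numerical sketch in Python. It assumes units with c = 1, the (+, −, −, −) signature, and the contravariant layout of the field tensor written out in the code; the field values are arbitrary illustrative numbers. The boosted tensor is compared against the standard component-mixing rules for a boost along x:

```python
import numpy as np

def field_tensor(E, B):
    """Contravariant F^{mu nu} in units with c = 1, signature (+, -, -, -)."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([[0., -Ex, -Ey, -Ez],
                     [Ex,  0., -Bz,  By],
                     [Ey,  Bz,  0., -Bx],
                     [Ez, -By,  Bx,  0.]])

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -gamma * beta, 0., 0.],
                [-gamma * beta, gamma, 0., 0.],
                [0., 0., 1., 0.],
                [0., 0., 0., 1.]])

E = np.array([1.0, 2.0, -0.5])   # illustrative field values
B = np.array([0.3, -1.0, 0.8])

F = field_tensor(E, B)
F_prime = Lam @ F @ Lam.T        # F'^{mu nu} = Lam^mu_alpha Lam^nu_beta F^{alpha beta}

# Read the boosted fields back out of F' and compare with the usual mixing rules.
E_prime = np.array([F_prime[1, 0], F_prime[2, 0], F_prime[3, 0]])
B_prime = np.array([F_prime[3, 2], F_prime[1, 3], F_prime[2, 1]])

E_expect = np.array([E[0], gamma * (E[1] - beta * B[2]), gamma * (E[2] + beta * B[1])])
B_expect = np.array([B[0], gamma * (B[1] + beta * E[2]), gamma * (B[2] - beta * E[1])])
print(np.allclose(E_prime, E_expect), np.allclose(B_prime, B_expect))  # True True
```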
Length contraction has an effect on charge density and current density , and time dilation has an effect on the rate of flow of charge (current), so charge and current distributions must transform in a related way under a boost. It turns out they transform exactly like the space-time and energy-momentum four-vectors,
or, in the simpler geometric view,
Charge density transforms as the time component of a four-vector. It is a rotational scalar. The current density is a 3-vector.
The Maxwell equations are invariant under Lorentz transformations.
Spinors
The transformation law above holds unmodified for any representation of the Lorentz group, including the bispinor representation. There one simply replaces all occurrences of the four-vector representation matrix by the bispinor representation matrix,
The above equation could, for instance, be the transformation of a state in Fock space describing two free electrons.
Transformation of general fields
A general noninteracting multi-particle state (Fock space state) in quantum field theory transforms according to the rule
where the first quantity is an element of Wigner's little group and the second is the corresponding representation of it.
See also
Footnotes
Notes
References
Websites
Papers
Books
Further reading
External links
Derivation of the Lorentz transformations. This web page contains a more detailed derivation of the Lorentz transformation with special emphasis on group properties.
The Paradox of Special Relativity. This webpage poses a problem, the solution of which is the Lorentz transformation, which is presented graphically in its next page.
Relativity – a chapter from an online textbook
Warp Special Relativity Simulator. A computer program demonstrating the Lorentz transformations on everyday objects.
visualizing the Lorentz transformation.
MinutePhysics video on YouTube explaining and visualizing the Lorentz transformation with a mechanical Minkowski diagram
Interactive graph on Desmos (graphing) showing Lorentz transformations with a virtual Minkowski diagram
Interactive graph on Desmos showing Lorentz transformations with points and hyperbolas
Lorentz Frames Animated from John de Pillis. Online Flash animations of Galilean and Lorentz frames, various paradoxes, EM wave phenomena, etc.
Special relativity
Mathematical physics
Spacetime
Coordinate systems
Hendrik Lorentz
Mechanical equilibrium
In classical mechanics, a particle is in mechanical equilibrium if the net force on that particle is zero. By extension, a physical system made up of many parts is in mechanical equilibrium if the net force on each of its individual parts is zero.
In addition to defining mechanical equilibrium in terms of force, there are many alternative definitions for mechanical equilibrium which are all mathematically equivalent.
In terms of momentum, a system is in equilibrium if the momenta of its parts are all constant.
In terms of velocity, the system is in equilibrium if velocity is constant.
In a rotational mechanical equilibrium the angular momentum of the object is conserved and the net torque is zero.
More generally in conservative systems, equilibrium is established at a point in configuration space where the gradient of the potential energy with respect to the generalized coordinates is zero.
If a particle in equilibrium has zero velocity, that particle is in static equilibrium. Since all particles in equilibrium have constant velocity, it is always possible to find an inertial reference frame in which the particle is stationary with respect to the frame.
Stability
An important property of systems at mechanical equilibrium is their stability.
Potential energy stability test
In a function which describes the system's potential energy, the system's equilibria can be determined using calculus. A system is in mechanical equilibrium at the critical points of the function describing the system's potential energy. These points can be located using the fact that the derivative of the function is zero at these points. To determine whether or not the system is stable or unstable, the second derivative test is applied. With denoting the static equation of motion of a system with a single degree of freedom the following calculations can be performed:
Second derivative < 0 The potential energy is at a local maximum, which means that the system is in an unstable equilibrium state. If the system is displaced an arbitrarily small distance from the equilibrium state, the forces of the system cause it to move even farther away.
Second derivative > 0 The potential energy is at a local minimum. This is a stable equilibrium. The response to a small perturbation is forces that tend to restore the equilibrium. If more than one stable equilibrium state is possible for a system, any equilibria whose potential energy is higher than the absolute minimum represent metastable states.
Second derivative = 0 The state is neutral to the lowest order and nearly remains in equilibrium if displaced a small amount. To investigate the precise stability of the system, higher order derivatives can be examined. The state is unstable if the lowest nonzero derivative is of odd order or has a negative value, stable if the lowest nonzero derivative is both of even order and has a positive value. If all derivatives are zero then it is impossible to derive any conclusions from the derivatives alone. For example, the function (defined as 0 at x = 0) has all derivatives equal to zero. At the same time, this function has a local minimum at x = 0, so it is a stable equilibrium. If this function is multiplied by the sign function, all derivatives will still be zero but it will become an unstable equilibrium.
Function is locally constant In a truly neutral state the energy does not vary and the state of equilibrium has a finite width. This is sometimes referred to as a state that is marginally stable, or in a state of indifference, or astable equilibrium.
When considering more than one dimension, it is possible to get different results in different directions, for example stability with respect to displacements in the x-direction but instability in the y-direction, a case known as a saddle point. Generally an equilibrium is only referred to as stable if it is stable in all directions.
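A minimal symbolic sketch of this test in Python (the double-well potential below is an illustrative choice, not taken from the text):

```python
import sympy as sp

x = sp.symbols('x', real=True)
V = x**4 - 2 * x**2                      # illustrative double-well potential energy

equilibria = sp.solve(sp.Eq(sp.diff(V, x), 0), x)   # critical points: -1, 0, 1

for x0 in equilibria:
    curvature = sp.diff(V, x, 2).subs(x, x0)        # second derivative test
    if curvature > 0:
        kind = "stable equilibrium (local minimum of V)"
    elif curvature < 0:
        kind = "unstable equilibrium (local maximum of V)"
    else:
        kind = "neutral to lowest order; examine higher derivatives"
    print(x0, kind)
```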
Statically indeterminate system
Sometimes the equilibrium equations (the force and moment equilibrium conditions) are insufficient to determine the forces and reactions. Such a situation is described as statically indeterminate.
Statically indeterminate situations can often be solved by using information from outside the standard equilibrium equations.
Examples
A stationary object (or set of objects) is in "static equilibrium," which is a special case of mechanical equilibrium. A paperweight on a desk is an example of static equilibrium. Other examples include a rock balance sculpture, or a stack of blocks in the game of Jenga, so long as the sculpture or stack of blocks is not in the state of collapsing.
Objects in motion can also be in equilibrium. A child sliding down a slide at constant speed would be in mechanical equilibrium, but not in static equilibrium (in the reference frame of the earth or slide).
Another example of mechanical equilibrium is a person pressing a spring to a defined point. He or she can push it to an arbitrary point and hold it there, at which point the compressive load and the spring reaction are equal. In this state the system is in mechanical equilibrium. When the compressive force is removed the spring returns to its original state.
The minimal number of static equilibria of homogeneous, convex bodies (when resting under gravity on a horizontal surface) is of special interest. In the planar case, the minimal number is 4, while in three dimensions one can build an object with just one stable and one unstable balance point. Such an object is called a gömböc.
See also
Dynamic equilibrium
Engineering mechanics
Metastability
Statically indeterminate
Statics
Hydrostatic equilibrium
Notes and references
Further reading
Marion JB and Thornton ST. (1995) Classical Dynamics of Particles and Systems. Fourth Edition, Harcourt Brace & Company.
Statics
Annus mirabilis papers
The annus mirabilis papers (from Latin annus mīrābilis, "miraculous year") are the four papers that Albert Einstein published in the scientific journal Annalen der Physik (Annals of Physics) in 1905. As major contributions to the foundation of modern physics, these scientific publications were the ones for which he gained fame among physicists. They revolutionized science's understanding of the fundamental concepts of space, time, mass, and energy. Because Einstein published all four of these papers in a single year, 1905 is called his annus mirabilis (miraculous year).
The first paper explained the photoelectric effect, which established the energy of the light quanta as E = hf, and was the only specific discovery mentioned in the citation awarding Einstein the 1921 Nobel Prize in Physics.
The second paper explained Brownian motion, which established the Einstein relation and compelled physicists to accept the existence of atoms.
The third paper introduced Einstein's special theory of relativity, which proclaims the constancy of the speed of light and derives the Lorentz transformations. Einstein also examined relativistic aberration and the transverse Doppler effect.
The fourth, a consequence of special relativity, developed the principle of mass–energy equivalence, expressed in the equation E = mc², which led to the discovery and use of nuclear power decades later.
These four papers, together with quantum mechanics and Einstein's later general theory of relativity, are the foundation of modern physics.
Papers
Photoelectric effect
The article "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" ("On a Heuristic Viewpoint Concerning the Production and Transformation of Light") received 18 March and published 9 June, proposed the idea of energy quanta. This idea, motivated by Max Planck's earlier derivation of the law of black-body radiation (which was preceded by the discovery of Wien's displacement law, by Wilhelm Wien, several years prior to Planck) assumes that luminous energy can be absorbed or emitted only in discrete amounts, called quanta. Einstein states,
In explaining the photoelectric effect, the hypothesis that energy consists of discrete packets, as Einstein illustrates, can be directly applied to black bodies, as well.
The idea of light quanta contradicts the wave theory of light that follows naturally from James Clerk Maxwell's equations for electromagnetic behavior and, more generally, the assumption of infinite divisibility of energy in physical systems.
Einstein noted that the photoelectric effect depended on the wavelength, and hence the frequency of the light. At too low a frequency, even intense light produced no electrons. However, once a certain frequency was reached, even low intensity light produced electrons. He compared this to Planck's hypothesis that light could be emitted only in packets of energy given by hf, where h is the Planck constant and f is the frequency. He then postulated that light travels in packets whose energy depends on the frequency, and therefore only light above a certain frequency would bring sufficient energy to liberate an electron.
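A short back-of-the-envelope calculation illustrates the threshold idea. The 2.3 eV work function below is an assumed illustrative value, not a figure from the paper:

```python
h = 6.626e-34   # Planck constant, J*s
e = 1.602e-19   # joules per electronvolt
c = 2.998e8     # speed of light, m/s

work_function_eV = 2.3                    # assumed illustrative work function
f_threshold = work_function_eV * e / h    # lowest frequency that can eject an electron
wavelength_max = c / f_threshold          # longest wavelength that still works

# A photon 50% above threshold: the surplus goes into the electron's kinetic energy.
photon_energy_eV = 1.5 * work_function_eV
max_kinetic_eV = photon_energy_eV - work_function_eV

print(f"threshold frequency ~ {f_threshold:.2e} Hz")
print(f"longest effective wavelength ~ {wavelength_max * 1e9:.0f} nm")
print(f"max kinetic energy at 1.5x threshold ~ {max_kinetic_eV:.2f} eV")
```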
Even after experiments confirmed that Einstein's equations for the photoelectric effect were accurate, his explanation was not universally accepted. Niels Bohr, in his 1922 Nobel address, stated, "The hypothesis of light-quanta is not able to throw light on the nature of radiation."
By 1921, when Einstein was awarded the Nobel Prize and his work on photoelectricity was mentioned by name in the award citation, some physicists accepted that the equation was correct and light quanta were possible. In 1923, Arthur Compton's X-ray scattering experiment helped more of the scientific community to accept this formula. The theory of light quanta was a strong indicator of wave–particle duality, a fundamental principle of quantum mechanics. A complete picture of the theory of photoelectricity was realized after the maturity of quantum mechanics.
Brownian motion
The article "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" ("On the Motion of Small Particles Suspended in a Stationary Liquid, as Required by the Molecular Kinetic Theory of Heat"), received 11 May and published 18 July, delineated a stochastic model of Brownian motion.
Einstein derived expressions for the mean squared displacement of particles. Using the kinetic theory of gases, which at the time was controversial, the article established that the phenomenon, which had lacked a satisfactory explanation even decades after it was first observed, provided empirical evidence for the reality of the atom. It also lent credence to statistical mechanics, which had been controversial at that time, as well. Before this paper, atoms were recognized as a useful concept, but physicists and chemists debated whether atoms were real entities. Einstein's statistical discussion of atomic behavior gave experimentalists a way to count atoms by looking through an ordinary microscope. Wilhelm Ostwald, one of the leaders of the anti-atom school, later told Arnold Sommerfeld that he had been convinced of the existence of atoms by Jean Perrin's subsequent Brownian motion experiments.
Special relativity
Einstein's "Zur Elektrodynamik bewegter Körper" ("On the Electrodynamics of Moving Bodies"), his third paper that year, was received on 30 June and published 26 September. It reconciles Maxwell's equations for electricity and magnetism with the laws of mechanics by introducing major changes to mechanics close to the speed of light. This later became known as Einstein's special theory of relativity.
The paper mentions the names of only five other scientists: Isaac Newton, James Clerk Maxwell, Heinrich Hertz, Christian Doppler, and Hendrik Lorentz. It does not have any references to any other publications. Many of the ideas had already been published by others, as detailed in history of special relativity and relativity priority dispute. However, Einstein's paper introduces a theory of time, distance, mass, and energy that was consistent with electromagnetism, but omitted the force of gravity.
At the time, it was known that Maxwell's equations, when applied to moving bodies, led to asymmetries (moving magnet and conductor problem), and that it had not been possible to discover any motion of the Earth relative to the 'light medium' (i.e. aether). Einstein puts forward two postulates to explain these observations. First, he applies the principle of relativity, which states that the laws of physics remain the same for any non-accelerating frame of reference (called an inertial reference frame), to the laws of electrodynamics and optics as well as mechanics. In the second postulate, Einstein proposes that the speed of light has the same value in all frames of reference, independent of the state of motion of the emitting body.
Special relativity is thus consistent with the result of the Michelson–Morley experiment, which had not detected a medium of conductance (or aether) for light waves unlike other known waves that require a medium (such as water or air), and which had been crucial for the development of the Lorentz transformations and the principle of relativity. Einstein may not have known about that experiment, but states,
The speed of light is fixed, and thus not relative to the movement of the observer. This was impossible under Newtonian classical mechanics. Einstein argues,
It had previously been proposed, by George FitzGerald in 1889 and by Lorentz in 1892, independently of each other, that the Michelson–Morley result could be accounted for if moving bodies were contracted in the direction of their motion. Some of the paper's core equations, the Lorentz transforms, had been published by Joseph Larmor (1897, 1900), Hendrik Lorentz (1895, 1899, 1904) and Henri Poincaré (1905), in a development of Lorentz's 1904 paper. Einstein's presentation differed from the explanations given by FitzGerald, Larmor, and Lorentz, but was similar in many respects to the formulation by Poincaré (1905).
His explanation arises from two axioms. The first is Galileo's idea that the laws of nature should be the same for all observers that move with constant speed relative to each other. Einstein writes,
The second axiom is the rule that the speed of light is the same for every observer.
The theory is now called the special theory of relativity to distinguish it from his later general theory of relativity, which considers all observers to be equivalent. Acknowledging the role of Max Planck in the early dissemination of his ideas, Einstein wrote in 1913 "The attention that this theory so quickly received from colleagues is surely to be ascribed in large part to the resoluteness and warmth with which he [Planck] intervened for this theory". In addition, the spacetime formulation by Hermann Minkowski in 1907 was influential in gaining widespread acceptance. Also, and most importantly, the theory was supported by an ever-increasing body of confirmatory experimental evidence.
Mass–energy equivalence
On 21 November Annalen der Physik published a fourth paper (received September 27) "Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?" ("Does the Inertia of a Body Depend Upon Its Energy Content?"), in which Einstein deduced what is sometimes described as the most famous of all equations: E = mc².
Einstein considered the equivalency equation to be of paramount importance because it showed that a massive particle possesses an energy, the "rest energy", distinct from its classical kinetic and potential energies. The paper is based on James Clerk Maxwell's and Heinrich Rudolf Hertz's investigations and, in addition, the axioms of relativity, as Einstein states,
The equation sets forth that the energy of a body at rest equals its mass times the speed of light squared, or E = mc².
The mass–energy relation can be used to predict how much energy will be released or consumed by nuclear reactions; one simply measures the mass of all constituents and the mass of all the products and multiplies the difference between the two by . The result shows how much energy will be released or consumed, usually in the form of light or heat. When applied to certain nuclear reactions, the equation shows that an extraordinarily large amount of energy will be released, millions of times as much as in the combustion of chemical explosives, where the amount of mass converted to energy is negligible. This explains why nuclear weapons and nuclear reactors produce such phenomenal amounts of energy, as they release binding energy during nuclear fission and nuclear fusion, and convert a portion of subatomic mass to energy.
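As a rough numerical illustration (the 0.2 gram mass difference below is an arbitrary assumed figure, not data from any particular reaction):

```python
c = 2.998e8        # speed of light, m/s
delta_m = 0.2e-3   # assumed mass difference between reactants and products, kg

energy_joules = delta_m * c**2                   # E = (delta m) * c^2
tnt_equivalent_tons = energy_joules / 4.184e9    # roughly 4.184 GJ per ton of TNT

print(f"E = {energy_joules:.3e} J (about {tnt_equivalent_tons:,.0f} tons of TNT)")
```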
Commemoration
The International Union of Pure and Applied Physics (IUPAP) resolved to commemorate the 100th year of the publication of Einstein's extensive work in 1905 as the World Year of Physics 2005. This was subsequently endorsed by the United Nations.
Notes
References
Citations
Primary sources
Secondary sources
Gribbin, John, and Gribbin, Mary. Annus Mirabilis: 1905, Albert Einstein, and the Theory of Relativity, Chamberlain Bros., 2005. . (Includes DVD.)
Renn, Jürgen, and Dieter Hoffmann, "1905a miraculous year". 2005 J. Phys. B: At. Mol. Opt. Phys. 38 S437-S448 (Max Planck Institute for the History of Science) [Issue 9 (14 May 2005)]. .
Stachel, John, et al., Einstein's Miraculous Year. Princeton University Press, 1998. .
External links
Collection of the Annus Mirabilis papers and their English translations at the Library of Congress website
1905 documents
1905 in science
Historical physics publications
Old quantum theory
Physics papers
Works by Albert Einstein
Works originally published in Annalen der Physik
Ordinary differential equation
In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) dependent on only a single independent variable. As with other DEs, its unknown(s) consists of one (or more) function(s) and involves the derivatives of those functions. The term "ordinary" is used in contrast with partial differential equations (PDEs), which may involve more than one independent variable, and, less commonly, in contrast with stochastic differential equations (SDEs), where the progression is random.
Differential equations
A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form
a_0(x) y + a_1(x) y′ + a_2(x) y″ + ⋯ + a_n(x) y^(n) + b(x) = 0,
where a_0(x), ..., a_n(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y′, ..., y^(n) are the successive derivatives of the unknown function y of the variable x.
Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with non-linear equations, they are generally approximated by linear differential equations for an easier solution. The few non-linear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example Riccati equation).
Some ODEs can be solved explicitly in terms of known functions and integrals. When that is not possible, the equation for computing the Taylor series of the solutions may be useful. For applied problems, numerical methods for ordinary differential equations can supply an approximation of the solution.
Background
Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, such that a differential equation is a result that describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations.
Specific mathematical fields include geometry and analytical mechanics. Scientific fields include much of physics and astronomy (celestial mechanics), meteorology (weather modeling), chemistry (reaction rates), biology (infectious diseases, genetic variation), ecology and population modeling (population competition), economics (stock trends, interest rates and the market equilibrium price changes).
Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler.
A simple example is Newton's second law of motion: the relationship between the displacement x and the time t of an object under the force F is given by the differential equation
m d²x(t)/dt² = F(x(t)),
which constrains the motion of a particle of constant mass m. In general, F is a function of the position x(t) of the particle at time t. The unknown function x(t) appears on both sides of the differential equation, and is indicated in the notation F(x(t)).
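As a hedged illustration, this law with a linear restoring force F(x) = −kx can be integrated numerically after rewriting it as a first-order system; the mass, spring constant, and initial conditions below are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 4.0   # illustrative mass and spring constant

def rhs(t, state):
    # Newton's second law m x'' = -k x, written as (x, v)' = (v, -k x / m).
    x, v = state
    return [v, -k * x / m]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True, rtol=1e-8)

t = 2.5
x_numeric = sol.sol(t)[0]
x_exact = np.cos(np.sqrt(k / m) * t)   # analytic solution for x(0) = 1, v(0) = 0
print(x_numeric, x_exact)              # the two agree to within the solver tolerance
```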
Definitions
In what follows, y is a dependent variable representing an unknown function of the independent variable x. The notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. In this context, the Leibniz's notation is more useful for differentiation and integration, whereas Lagrange's notation is more useful for representing higher-order derivatives compactly, and Newton's notation is often used in physics for representing derivatives of low order with respect to time.
General definition
Given F, a function of x, y, and derivatives of y. Then an equation of the form
is called an explicit ordinary differential equation of order n.
More generally, an implicit ordinary differential equation of order n takes the form:
There are further classifications:
System of ODEs
A number of coupled differential equations form a system of equations. If y is a vector whose elements are functions; y(x) = [y1(x), y2(x),..., ym(x)], and F is a vector-valued function of y and its derivatives, then
is an explicit system of ordinary differential equations of order n and dimension m. In column vector form:
These are not necessarily linear. The implicit analogue is:
where 0 = (0, 0, ..., 0) is the zero vector. In matrix form
For a system of the form , some sources also require that the Jacobian matrix be non-singular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian non-singularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (nonsingular) ODE systems. Presumably for additional derivatives, the Hessian matrix and so forth are also assumed non-singular according to this scheme, although note that any ODE of order greater than one can be (and usually is) rewritten as system of ODEs of first order, which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders.
The behavior of a system of ODEs can be visualized through the use of a phase portrait.
Solutions
Given a differential equation
a function , where I is an interval, is called a solution or integral curve for F, if u is n-times differentiable on I, and
Given two solutions and , u is called an extension of v if and
A solution that has no extension is called a maximal solution. A solution defined on all of R is called a global solution.
A general solution of an nth-order equation is a solution containing n arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill set initial conditions or boundary conditions. A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution.
In the context of linear ODE, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.
Solutions of finite duration
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning that, from its own dynamics, the system reaches the value zero at some ending time and stays at zero forever after. These finite-duration solutions cannot be analytic functions on the whole real line, and because they are non-Lipschitz functions at their ending time, they are not covered by the uniqueness theorem for solutions of Lipschitz differential equations.
As an example, the equation
admits the finite-duration solution:
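The specific equation and solution referred to above are not reproduced here. The following Python sketch instead uses a standard textbook example of the same phenomenon, y′ = −sign(y)·√|y| with y(0) = 1, whose solution (1 − t/2)² reaches zero at t = 2 and stays there:

```python
import numpy as np

def y_exact(t):
    # Finite-duration solution of y' = -sign(y) * sqrt(|y|), y(0) = 1.
    return np.where(t < 2.0, (1.0 - t / 2.0) ** 2, 0.0)

# Crude forward-Euler check that the dynamics really park at zero after t = 2.
t, y, dt = 0.0, 1.0, 1e-4
while t < 3.0:
    y += dt * (-np.sign(y) * np.sqrt(abs(y)))
    t += dt

print(y, y_exact(3.0))   # y stays pinned at (numerically) zero past the ending time
```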
Theories
Singular solutions
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
Reduction to quadratures
The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the nth degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that complex differential equations require complex numbers. Hence, analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and, if so, what are the characteristic properties.
Fuchsian theory
Two memoirs by Fuchs inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces f = 0 under rational one-to-one transformations.
Lie's theory
From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of transformations of contact.
Lie's group theory of differential equations has been certified, namely: (1) that it unifies the many ad hoc methods known for solving differential equations, and (2) that it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.
A general solution approach uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and non-linear (partial) differential equations for generating integrable equations, to find its Lax pairs, recursion operators, Bäcklund transform, and finally finding exact analytic solutions to DE.
Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.
Sturm–Liouville theory
Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equation. Their solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville problems (SLP) and are named after J. C. F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering. SLPs are also useful in the analysis of certain partial differential equations.
Existence and uniqueness of solutions
There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are
| Theorem | Assumption | Conclusion |
| Peano existence theorem | F continuous | local existence only |
| Picard–Lindelöf theorem | F Lipschitz continuous | local existence and uniqueness |
In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met.
Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (non-linear) algebraic part alone.
Local existence and uniqueness theorem simplified
The theorem can be stated simply as follows. For the equation and initial value problem:
if F and ∂F/∂y are continuous in a closed rectangle
in the x-y plane, where a and b are real (symbolically: ) and denotes the Cartesian product, square brackets denote closed intervals, then there is an interval
for some where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on F to be linear, this applies to non-linear equations that take the form F(x, y), and it can also be applied to systems of equations.
Global uniqueness and maximum domain of solution
When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely:
For each initial condition (x0, y0) there exists a unique maximum (possibly infinite) open interval
such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain .
In the case that , there are exactly two possibilities
explosion in finite time:
leaves domain of definition:
where Ω is the open set in which F is defined, and is its boundary.
Note that the maximum domain of the solution
is always an interval (to have uniqueness)
may be smaller than
may depend on the specific choice of (x0, y0).
Example.
This means that F(x, y) = y², which is C¹ and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem.
Even in such a simple setting, the maximum domain of solution cannot be all of R, since the solution is
which has maximum domain:
This shows clearly that the maximum interval may depend on the initial conditions. The domain of y could be taken as being but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.
The maximum domain is not all of R because
which is one of the two possible cases according to the above theorem.
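A concrete instance of this example (assuming the initial condition y(0) = 1, which is an illustrative choice): the solution is y(x) = 1/(1 − x), which blows up as x approaches 1, so its maximal domain is (−∞, 1) even though F is perfectly smooth. A quick numerical check:

```python
from scipy.integrate import solve_ivp

# y' = y^2 with y(0) = 1; the exact solution 1/(1 - x) blows up at x = 1.
sol = solve_ivp(lambda x, y: y**2, (0.0, 0.99), [1.0], dense_output=True, rtol=1e-8)

for x in (0.5, 0.9, 0.99):
    print(x, sol.sol(x)[0], 1.0 / (1.0 - x))   # numeric vs exact: both grow without bound
```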
Reduction of order
Differential equations are usually easier to solve if the order of the equation can be reduced.
Reduction to a first-order system
Any explicit differential equation of order n,
can be written as a system of n first-order differential equations by defining a new family of unknown functions
y_i = y^(i−1), for i = 1, 2, ..., n.
The n-dimensional system of first-order coupled differential equations is then
y_1′ = y_2, y_2′ = y_3, ..., y_(n−1)′ = y_n, y_n′ = F(x, y_1, ..., y_n),
or more compactly in vector notation:
y′ = F(x, y),
where y = (y_1, ..., y_n) collects the new unknowns and the right-hand side is the vector (y_2, ..., y_n, F(x, y_1, ..., y_n)).
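A small Python sketch of this reduction (the helper function and the third-order test equation are illustrative choices, not from the original text):

```python
import numpy as np
from scipy.integrate import solve_ivp

def as_first_order(f):
    # Turn y^(n) = f(x, y, y', ..., y^(n-1)) into a first-order system for
    # the vector Y = (y, y', ..., y^(n-1)).
    def rhs(x, Y):
        return np.concatenate([Y[1:], [f(x, *Y)]])
    return rhs

# Illustrative third-order equation: y''' = -y'' - y' - y.
f = lambda x, y, yp, ypp: -ypp - yp - y
sol = solve_ivp(as_first_order(f), (0.0, 5.0), [1.0, 0.0, 0.0])
print(sol.y[0, -1])   # value of y at x = 5
```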
Summary of exact solutions
Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here.
In the table below, , , , , and , are any integrable functions of , ; and are real given constants; are arbitrary constants (complex in general). The differential equations are in their equivalent and alternative forms that lead to the solution through integration.
In the integral solutions, λ and ε are dummy variables of integration (the continuum analogues of indices in summation), and the notation just means to integrate with respect to , then after the integration substitute , without adding constants (explicitly stated).
Separable equations
General first-order equations
General second-order equations
Linear to the nth order equations
The guessing method
When all other methods for solving an ODE fail, or in the cases where we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and validating that it is correct. To use this method, we simply guess a solution to the differential equation, and then plug the solution into the differential equation to validate whether it satisfies the equation. If it does, then we have a particular solution to the DE; otherwise, we start over again and try another guess. For instance, we could guess that the solution to a DE has a sinusoidal form, since this is a very common type of solution that physically behaves in a sinusoidal way.
In the case of a first order ODE that is non-homogeneous we need to first find a solution to the homogeneous portion of the DE, otherwise known as the associated homogeneous equation, and then find a solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the general solution to the ODE, that is:
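A short symbolic sketch of the procedure in Python, using an illustrative forcing term (the equation below is an assumed example, not one from the text):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Non-homogeneous first-order ODE: y' + y = cos(t).
ode = sp.Eq(y(t).diff(t) + y(t), sp.cos(t))

# Guess a particular solution with the same "shape" as the forcing term.
A, B = sp.symbols('A B')
guess = A * sp.cos(t) + B * sp.sin(t)
residual = sp.expand(guess.diff(t) + guess - sp.cos(t))
coeffs = sp.solve([residual.coeff(sp.cos(t)), residual.coeff(sp.sin(t))], [A, B])
particular = guess.subs(coeffs)                      # cos(t)/2 + sin(t)/2

# General solution = homogeneous solution + particular solution.
C = sp.symbols('C')
general = C * sp.exp(-t) + particular
print(sp.simplify(general.diff(t) + general - sp.cos(t)))   # 0, so the guess works
print(sp.dsolve(ode))                                       # SymPy's answer agrees
```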
Software for ODE solving
Maxima, an open-source computer algebra system.
COPASI, a free (Artistic License 2.0) software package for the integration and analysis of ODEs.
MATLAB, a technical computing application (MATrix LABoratory)
GNU Octave, a high-level language, primarily intended for numerical computations.
Scilab, an open source application for numerical computation.
Maple, a proprietary application for symbolic calculations.
Mathematica, a proprietary application primarily intended for symbolic calculations.
SymPy, a Python package that can solve ODEs symbolically
Julia (programming language), a high-level language primarily intended for numerical computations.
SageMath, an open-source application that uses a Python-like syntax with a wide range of capabilities spanning several branches of mathematics.
SciPy, a Python package that includes an ODE integration module.
Chebfun, an open-source package, written in MATLAB, for computing with functions to 15-digit accuracy.
GNU R, an open source computational environment primarily intended for statistics, which includes packages for ODE solving.
See also
Boundary value problem
Examples of differential equations
Laplace transform applied to differential equations
List of dynamical systems and differential equations topics
Matrix differential equation
Method of undetermined coefficients
Recurrence relation
Notes
References
Polyanin, A. D. and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003.
Bibliography
W. Johnson, A Treatise on Ordinary and Partial Differential Equations, John Wiley and Sons, 1913, in University of Michigan Historical Math Collection
Witold Hurewicz, Lectures on Ordinary Differential Equations, Dover Publications,
A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002.
D. Zwillinger, Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.
External links
EqWorld: The World of Mathematical Equations, containing a list of ordinary differential equations with their solutions.
Online Notes / Differential Equations by Paul Dawkins, Lamar University.
Differential Equations, S.O.S. Mathematics.
A primer on analytical solution of differential equations from the Holistic Numerical Methods Institute, University of South Florida.
Ordinary Differential Equations and Dynamical Systems lecture notes by Gerald Teschl.
Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC.
Modeling with ODEs using Scilab A tutorial on how to model a physical system described by ODE using Scilab standard programming language by Openeering team.
Solving an ordinary differential equation in Wolfram|Alpha
Differential calculus
AMBER
Assisted Model Building with Energy Refinement (AMBER) is the name of a widely used molecular dynamics software package originally developed by Peter Kollman's group at the University of California, San Francisco. It has also, subsequently, come to designate a family of force fields for molecular dynamics of biomolecules that can be used both within the AMBER software suite and with many modern computational platforms.
The original version of the AMBER software package was written by Paul Weiner as a post-doc in Peter Kollman's laboratory, and was released in 1981.
Subsequently, U Chandra Singh expanded AMBER as a post-doc in Kollman's laboratory, adding molecular dynamics and free energy capabilities.
The next iteration of AMBER was started around 1987 by a group of developers in (and associated with) the Kollman lab, including David Pearlman, David Case, James Caldwell, William Ross, Thomas Cheatham, Stephen DeBolt, David Ferguson, and George Seibel. This team headed development for more than a decade and introduced a variety of improvements, including significant expansion of the free energy capabilities, accommodation for modern parallel and array processing hardware platforms (Cray, Star, etc.), restructuring of the code and revision control for greater maintainability, PME Ewald summations, tools for NMR refinement, and many others.
Currently, AMBER is maintained by an active collaboration between David Case at Rutgers University, Tom Cheatham at the University of Utah, Adrian Roitberg at University of Florida, Ken Merz at Michigan State University, Carlos Simmerling at Stony Brook University, Ray Luo at UC Irvine, and Junmei Wang at University of Pittsburgh.
Force field
The term AMBER force field generally refers to the functional form used by the family of AMBER force fields. This form includes several parameters; each member of the family of AMBER force fields provides values for these parameters and has its own name.
Functional form
The functional form of the AMBER force field is
Despite the term force field, this equation defines the potential energy of the system; the force is the negative derivative (the negative gradient) of this potential with respect to position.
The meanings of right hand side terms are:
First term (summing over bonds): represents the energy between covalently bonded atoms. This harmonic (ideal spring) force is a good approximation near the equilibrium bond length, but becomes increasingly poor as atoms separate.
Second term (summing over angles): represents the energy due to the geometry of electron orbitals involved in covalent bonding.
Third term (summing over torsions): represents the energy for twisting a bond due to bond order (e.g., double bonds) and neighboring bonds or lone pairs of electrons. One bond may have more than one of these terms, such that the total torsional energy is expressed as a Fourier series.
Fourth term (double summation over and ): represents the non-bonded energy between all atom pairs, which can be decomposed into van der Waals (first term of summation) and electrostatic (second term of summation) energies.
The form of the van der Waals energy is calculated using the equilibrium distance and well depth. The factor of ensures that the equilibrium distance is . The energy is sometimes reformulated in terms of , where , as used e.g. in the implementation of the softcore potentials.
The form of the electrostatic energy used here assumes that the charges due to the protons and electrons in an atom can be represented by a single point charge (or in the case of parameter sets that employ lone pairs, a small number of point charges.)
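The following toy Python sketch evaluates terms of this general shape for a single bond, angle, torsion, and non-bonded pair. All numerical parameters are made-up illustrative values, not entries from any actual AMBER parameter set, and prefactor conventions (for example whether a factor of 1/2 is folded into the force constants) differ between presentations:

```python
import numpy as np

def bond_energy(r, k_b, r0):
    return k_b * (r - r0) ** 2                          # harmonic bond stretch

def angle_energy(theta, k_a, theta0):
    return k_a * (theta - theta0) ** 2                  # harmonic angle bend

def torsion_energy(phi, V_n, n, gamma):
    return 0.5 * V_n * (1.0 + np.cos(n * phi - gamma))  # one Fourier torsion term

def nonbonded_energy(r, A, B, qi, qj, eps=1.0):
    return A / r**12 - B / r**6 + qi * qj / (eps * r)   # Lennard-Jones + Coulomb

E_total = (bond_energy(1.09, 340.0, 1.01)
           + angle_energy(np.deg2rad(107.0), 50.0, np.deg2rad(109.5))
           + torsion_energy(np.deg2rad(60.0), 1.4, 3, 0.0)
           + nonbonded_energy(3.5, 1.0e6, 1.0e3, 0.4, -0.4))
print(E_total)   # toy potential energy; units depend on the (made-up) parameters
```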
Parameter sets
To use the AMBER force field, it is necessary to have values for the parameters of the force field (e.g. force constants, equilibrium bond lengths and angles, charges). A fairly large number of these parameter sets exist, and are described in detail in the AMBER software user manual. Each parameter set has a name, and provides parameters for certain types of molecules.
Peptide, protein, and nucleic acid parameters are provided by parameter sets with names starting with "ff" and containing a two-digit year number, for instance "ff99". As of 2018 the primary protein model used by the AMBER suite is the ff14SB force field.
General AMBER force field (GAFF) provides parameters for small organic molecules to facilitate simulations of drugs and small molecule ligands in conjunction with biomolecules.
The GLYCAM force fields have been developed by Rob Woods for simulating carbohydrates.
The primary force field used in the AMBER suite for lipids is lipid14.
Software
The AMBER software suite provides a set of programs to apply the AMBER forcefields to simulations of biomolecules. It is written in the programming languages Fortran 90 and C, with support for most major Unix-like operating systems and compilers. Development is conducted by a loose association of mostly academic labs. New versions are released usually in the spring of even numbered years; AMBER 10 was released in April 2008. The software is available under a site license agreement, which includes full source, currently priced at US$500 for non-commercial and US$20,000 for commercial organizations.
Programs
LEaP prepares input files for the simulation programs.
Antechamber automates the process of parameterizing small organic molecules using GAFF.
Simulated Annealing with NMR-Derived Energy Restraints (SANDER) is the central simulation program and provides facilities for energy minimizing and molecular dynamics with a wide variety of options.
pmemd is a somewhat more feature-limited reimplementation of SANDER by Bob Duke. It was designed for parallel computing, and performs significantly better than SANDER when running on more than 8–16 processors.
pmemd.cuda runs simulations on machines with graphics processing units (GPUs).
pmemd.amoeba handles the extra parameters in the polarizable AMOEBA force field.
nmode calculates normal modes.
ptraj numerically analyzes simulation results. AMBER includes no visualizing abilities, which is commonly performed with Visual Molecular Dynamics (VMD). Ptraj is now unsupported as of AmberTools 13.
cpptraj is a rewritten version of ptraj made in C++ to give faster analysis of simulation results. Several actions have been made parallelizable with OpenMP and MPI.
MM-PBSA allows implicit solvent calculations on snap shots from molecular dynamics simulations.
NAB is a built-in nucleic acid building environment made to aid in the process of manipulating proteins and nucleic acids where an atomic level of description will aid computing.
See also
References
Related reading
External links
AMBER mailing list archive
Amber on the German HPC-C5 Cluster-Systems
Fortran software
Molecular dynamics software
Force fields (chemistry)
Differential calculus
In mathematics, differential calculus is a subfield of calculus that studies the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus—the study of the area beneath a curve.
The primary objects of study in differential calculus are the derivative of a function, related notions such as the differential, and their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of a function at a point generally determines the best linear approximation to the function at that point.
Differential calculus and integral calculus are connected by the fundamental theorem of calculus. This states that differentiation is the reverse process to integration.
Differentiation has applications in nearly all quantitative disciplines. In physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of the velocity with respect to time is acceleration. The derivative of the momentum of a body with respect to time equals the force applied to the body; rearranging this derivative statement leads to the famous equation associated with Newton's second law of motion. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials and design factories.
Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra.
Derivative
The derivative of a function at a point is the slope of the tangent line to the graph of the function at that point. In order to gain an intuition for this, one must first be familiar with finding the slope of a linear equation, written in the form y = mx + b. The slope of an equation is its steepness. It can be found by picking any two points and dividing the change in y by the change in x, so that slope = (change in y)/(change in x). For example, the line y = 2x has a slope of 2, because y increases by 2 units whenever x increases by 1 unit.
For brevity, the change in y divided by the change in x is often written as Δy/Δx, with Δ being the Greek letter delta, meaning 'change in'. The slope of a linear equation is constant, meaning that the steepness is the same everywhere. However, many graphs, such as that of y = x², vary in their steepness. This means that you can no longer pick any two arbitrary points and compute the slope. Instead, the slope of the graph can be computed by considering the tangent line—a line that 'just touches' a particular point. The slope of a curve at a particular point is equal to the slope of the tangent to the curve at that point. For example, the curve y = x² has a slope of 4 at the point x = 2, because the slope of the tangent line to the curve at that point is equal to 4.
The derivative of a function is then simply the slope of this tangent line. Even though the tangent line only touches a single point at the point of tangency, it can be approximated by a line that goes through two points. This is known as a secant line. If the two points that the secant line goes through are close together, then the secant line closely resembles the tangent line, and, as a result, its slope is also very similar:
The advantage of using a secant line is that its slope can be calculated directly. Consider the two points on the graph (x, f(x)) and (x + h, f(x + h)), where h is a small number. As before, the slope of the line passing through these two points can be calculated as the change in y divided by the change in x. This gives
slope = (f(x + h) − f(x))/h.
As h gets closer and closer to 0, the slope of the secant line gets closer and closer to the slope of the tangent line. This is formally written as
lim_(h→0) (f(x + h) − f(x))/h.
The above expression means 'as h gets closer and closer to 0, the slope of the secant line gets closer and closer to a certain value'. The value that is being approached is the derivative of f; this can be written as f′(x). If y = f(x), the derivative can also be written as dy/dx, with d representing an infinitesimal change. For example, dx represents an infinitesimal change in x. In summary, if y = f(x), then the derivative of f is
f′(x) = lim_(h→0) (f(x + h) − f(x))/h,
provided such a limit exists. We have thus succeeded in properly defining the derivative of a function, meaning that the 'slope of the tangent line' now has a precise mathematical meaning. Differentiating a function using the above definition is known as differentiation from first principles. Here is a proof, using differentiation from first principles, that the derivative of x² is 2x:
((x + h)² − x²)/h = (x² + 2xh + h² − x²)/h = (2xh + h²)/h = 2x + h.
As h approaches 0, 2x + h approaches 2x. Therefore, the derivative of x² is 2x. This proof can be generalised to show that the derivative of ax^n is anx^(n−1) if a and n are constants. This is known as the power rule. For example, the derivative of x³ is 3x². However, many other functions cannot be differentiated as easily as polynomial functions, meaning that sometimes further techniques are needed to find the derivative of a function. These techniques include the chain rule, product rule, and quotient rule. Other functions cannot be differentiated at all, giving rise to the concept of differentiability.
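Differentiation from first principles can also be checked numerically: the slope of the secant line approaches the derivative as h shrinks. A minimal Python sketch (the function and the point x = 3 are illustrative choices):

```python
def secant_slope(f, x, h):
    """Slope of the secant line through (x, f(x)) and (x + h, f(x + h))."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2
x = 3.0
for h in (1.0, 0.1, 0.01, 0.001, 1e-6):
    print(h, secant_slope(f, x, h))   # approaches 6.0, i.e. the derivative 2x at x = 3
```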
A closely related concept to the derivative of a function is its differential. When and are real variables, the derivative of at is the slope of the tangent line to the graph of at . Because the source and target of are one-dimensional, the derivative of is a real number. If and are vectors, then the best linear approximation to the graph of depends on how changes in several directions at once. Taking the best linear approximation in a single direction determines a partial derivative, which is usually denoted . The linearization of in all directions at once is called the total derivative.
History of differentiation
The concept of a derivative in the sense of a tangent line is a very old one, familiar to ancient Greek mathematicians such as Euclid (c. 300 BC), Archimedes (c. 287–212 BC), and Apollonius of Perga (c. 262–190 BC). Archimedes also made use of indivisibles, although these were primarily used to study areas and volumes rather than derivatives and tangents (see The Method of Mechanical Theorems).
The use of infinitesimals to compute rates of change was developed significantly by Bhāskara II (1114–1185); indeed, it has been argued that many of the key notions of differential calculus can be found in his work, such as "Rolle's theorem".
The mathematician Sharaf al-Dīn al-Tūsī (1135–1213), in his Treatise on Equations, established conditions for some cubic equations to have solutions, by finding the maxima of appropriate cubic polynomials. He obtained, for example, that the maximum (for positive x) of the cubic ax^2 - x^3 occurs when x = 2a/3, and concluded therefrom that the equation ax^2 - x^3 = c has exactly one positive solution when c = 4a^3/27, and two positive solutions whenever 0 < c < 4a^3/27. The historian of science, Roshdi Rashed, has argued that al-Tūsī must have used the derivative of the cubic to obtain this result. Rashed's conclusion has been contested by other scholars, however, who argue that he could have obtained the result by other methods which do not require the derivative of the function to be known.
The modern development of calculus is usually credited to Isaac Newton (1643–1727) and Gottfried Wilhelm Leibniz (1646–1716), who provided independent and unified approaches to differentiation and derivatives. The key insight, however, that earned them this credit, was the fundamental theorem of calculus relating differentiation and integration: this rendered obsolete most previous methods for computing areas and volumes. For their ideas on derivatives, both Newton and Leibniz built on significant earlier work by mathematicians such as Pierre de Fermat (1607-1665), Isaac Barrow (1630–1677), René Descartes (1596–1650), Christiaan Huygens (1629–1695), Blaise Pascal (1623–1662) and John Wallis (1616–1703). Regarding Fermat's influence, Newton once wrote in a letter that "I had the hint of this method [of fluxions] from Fermat's way of drawing tangents, and by applying it to abstract equations, directly and invertedly, I made it general." Isaac Barrow is generally given credit for the early development of the derivative. Nevertheless, Newton and Leibniz remain key figures in the history of differentiation, not least because Newton was the first to apply differentiation to theoretical physics, while Leibniz systematically developed much of the notation still used today.
Since the 17th century many mathematicians have contributed to the theory of differentiation. In the 19th century, calculus was put on a much more rigorous footing by mathematicians such as Augustin Louis Cauchy (1789–1857), Bernhard Riemann (1826–1866), and Karl Weierstrass (1815–1897). It was also during this period that differentiation was generalized to Euclidean space and the complex plane.
The 20th century brought two major steps towards our present understanding and practice of derivation: Lebesgue integration, besides extending integral calculus to many more functions, clarified the relation between derivation and integration with the notion of absolute continuity. Later the theory of distributions (after Laurent Schwartz) extended derivation to generalized functions (e.g., the Dirac delta function previously introduced in quantum mechanics) and became fundamental to modern applied analysis, especially through the use of weak solutions to partial differential equations.
Applications of derivatives
Optimization
If f is a differentiable function on ℝ (or an open interval) and x is a local maximum or a local minimum of f, then the derivative of f at x is zero. Points where f'(x) = 0 are called critical points or stationary points (and the value of f at x is called a critical value). If f is not assumed to be everywhere differentiable, then points at which it fails to be differentiable are also designated critical points.
If f is twice differentiable, then conversely, a critical point x of f can be analysed by considering the second derivative of f at x:
if it is positive, x is a local minimum;
if it is negative, x is a local maximum;
if it is zero, then x could be a local minimum, a local maximum, or neither. (For example, f(x) = x^3 has a critical point at x = 0, but it has neither a maximum nor a minimum there, whereas f(x) = ±x^4 has a critical point at x = 0 and a minimum and a maximum, respectively, there.)
This is called the second derivative test. An alternative approach, called the first derivative test, involves considering the sign of f' on each side of the critical point.
Taking derivatives and solving for critical points is therefore often a simple way to find local minima or maxima, which can be useful in optimization. By the extreme value theorem, a continuous function on a closed interval must attain its minimum and maximum values at least once. If the function is differentiable, the minima and maxima can only occur at critical points or endpoints.
This also has applications in graph sketching: once the local minima and maxima of a differentiable function have been found, a rough plot of the graph can be obtained from the observation that it will be either increasing or decreasing between critical points.
In higher dimensions, a critical point of a scalar valued function is a point at which the gradient is zero. The second derivative test can still be used to analyse critical points by considering the eigenvalues of the Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is called a "saddle point", and if none of these cases hold (i.e., some of the eigenvalues are zero) then the test is considered to be inconclusive.
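To make the higher-dimensional test concrete, here is an illustrative Python sketch (not from the original text) that builds a numerical Hessian for the assumed example f(x, y) = x^2 - y^2, which has a saddle at the origin, and classifies the critical point from the eigenvalue signs:

```python
import numpy as np

def hessian(f, p, h=1e-5):
    """Numerical Hessian of a scalar function f at point p (central differences)."""
    p = np.asarray(p, dtype=float)
    n = p.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * h * h)
    return H

# Assumed example function with a critical point at the origin.
f = lambda p: p[0] ** 2 - p[1] ** 2
eigenvalues = np.linalg.eigvalsh(hessian(f, [0.0, 0.0]))
print("Hessian eigenvalues at (0, 0):", eigenvalues)

if np.all(eigenvalues > 0):
    print("local minimum")
elif np.all(eigenvalues < 0):
    print("local maximum")
elif np.any(eigenvalues > 0) and np.any(eigenvalues < 0):
    print("saddle point")
else:
    print("test inconclusive (some eigenvalues are zero)")
```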
Calculus of variations
One example of an optimization problem is: Find the shortest curve between two points on a surface, assuming that the curve must also lie on the surface. If the surface is a plane, then the shortest curve is a line. But if the surface is, for example, egg-shaped, then the shortest path is not immediately clear. These paths are called geodesics, and one of the most fundamental problems in the calculus of variations is finding geodesics. Another example is: Find the smallest area surface filling in a closed curve in space. This surface is called a minimal surface and it, too, can be found using the calculus of variations.
Physics
Calculus is of vital importance in physics: many physical processes are described by equations involving derivatives, called differential equations. Physics is particularly concerned with the way quantities change and develop over time, and the concept of the "time derivative" — the rate of change over time — is essential for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in Newtonian physics:
velocity is the derivative (with respect to time) of an object's displacement (distance from the original position)
acceleration is the derivative (with respect to time) of an object's velocity, that is, the second derivative (with respect to time) of an object's position.
For example, if an object's position on a line is given by a quadratic such as x(t) = at^2 + bt + c, then the object's velocity is dx/dt = 2at + b, and the object's acceleration is d^2x/dt^2 = 2a, which is constant.
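The chain of time derivatives can be sketched in a few lines of Python; the coefficients below are assumed purely for illustration (roughly a projectile under constant gravity), and np.polyder simply differentiates a polynomial given its coefficients:

```python
import numpy as np

a, b, c = -4.9, 20.0, 1.5            # assumed coefficients of x(t) = a*t**2 + b*t + c
position = np.array([a, b, c])       # polynomial coefficients, highest degree first
velocity = np.polyder(position)      # dx/dt   = 2a*t + b
acceleration = np.polyder(velocity)  # d2x/dt2 = 2a, a constant

t = 2.0
print("position     x(2) =", np.polyval(position, t))
print("velocity     v(2) =", np.polyval(velocity, t))
print("acceleration a(2) =", np.polyval(acceleration, t))
```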
Differential equations
A differential equation is a relation between a collection of functions and their derivatives. An ordinary differential equation is a differential equation that relates functions of one variable to their derivatives with respect to that variable. A partial differential equation is a differential equation that relates functions of more than one variable to their partial derivatives. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example, Newton's second law, which describes the relationship between acceleration and force, can be stated as the ordinary differential equation
F(t) = m d^2x/dt^2
The heat equation in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation
∂u/∂t = α ∂^2u/∂x^2
Here u(x, t) is the temperature of the rod at position x and time t, and α is a constant that depends on how fast heat diffuses through the rod.
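A minimal explicit finite-difference sketch of this equation is shown below; it is illustrative only, with an assumed diffusivity, grid, and initial hot spot rather than values from the text:

```python
import numpy as np

alpha = 0.01                 # thermal diffusivity (assumed)
nx, dx, dt = 51, 0.02, 0.005
u = np.zeros(nx)             # rod temperature, fixed (zero) ends
u[nx // 2] = 100.0           # a hot spot in the middle of the rod

r = alpha * dt / dx ** 2     # explicit scheme is stable only for r <= 0.5
assert r <= 0.5

for _ in range(500):
    # d2u/dx2 approximated by a central difference at interior points
    u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])

print("temperature near the hot spot after diffusion:", u[nx // 2 - 2 : nx // 2 + 3])
```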
Mean value theorem
The mean value theorem gives a relationship between values of the derivative and values of the original function. If f(x) is a real-valued function and a and b are numbers with a < b, then the mean value theorem says that under mild hypotheses, the slope between the two points (a, f(a)) and (b, f(b)) is equal to the slope of the tangent line to f at some point c between a and b. In other words,
f'(c) = (f(b) - f(a)) / (b - a)
In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose that has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph of must equal the slope of one of the tangent lines of . All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function.
Taylor polynomials and Taylor series
The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function f(x) at the point x_0 is a linear polynomial a + b(x - x_0), and it may be possible to get a better approximation by considering a quadratic polynomial a + b(x - x_0) + c(x - x_0)^2. Still better might be a cubic polynomial a + b(x - x_0) + c(x - x_0)^2 + d(x - x_0)^3, and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients a, b, c, and d that makes the approximation as good as possible.
In the neighbourhood of x_0, for a the best possible choice is always f(x_0), and for b the best possible choice is always f'(x_0). For c, d, and higher-degree coefficients, these coefficients are determined by higher derivatives of f: c should always be f''(x_0)/2, and d should always be f'''(x_0)/3!. Using these coefficients gives the Taylor polynomial of f. The Taylor polynomial of degree d is the polynomial of degree d which best approximates f, and its coefficients can be found by a generalization of the above formulas. Taylor's theorem gives a precise bound on how good the approximation is. If f is a polynomial of degree less than or equal to d, then the Taylor polynomial of degree d equals f.
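The coefficient rule f^(k)(x_0)/k! is easy to check numerically. The sketch below (illustrative, not from the original text) builds Taylor polynomials of the exponential function around x_0 = 0, whose derivatives at 0 are all 1, and watches the approximation of e improve with the degree:

```python
import math

def taylor_eval(derivs_at_x0, x0, x):
    """Evaluate the Taylor polynomial given the derivatives f, f', f'', ... at x0."""
    return sum(d / math.factorial(k) * (x - x0) ** k
               for k, d in enumerate(derivs_at_x0))

x0, x = 0.0, 1.0
for n in (1, 2, 4, 8):
    derivs = [1.0] * (n + 1)          # all derivatives of exp at 0 equal 1
    approx = taylor_eval(derivs, x0, x)
    print(f"degree {n}: {approx:.6f}  (exact e = {math.e:.6f})")
```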
The limit of the Taylor polynomials is an infinite series called the Taylor series. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called analytic functions. It is impossible for functions with discontinuities or sharp corners to be analytic; moreover, there exist smooth functions which are also not analytic.
Implicit function theorem
Some natural geometric shapes, such as circles, cannot be drawn as the graph of a function. For instance, if f(x, y) = x^2 + y^2 - 1, then the circle is the set of all pairs (x, y) such that f(x, y) = 0. This set is called the zero set of f, and is not the same as the graph of f, which is a paraboloid. The implicit function theorem converts relations such as f(x, y) = 0 into functions. It states that if f is continuously differentiable, then around most points, the zero set of f looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of f. The circle, for instance, can be pasted together from the graphs of the two functions ±√(1 - x^2). In a neighborhood of every point on the circle except (-1, 0) and (1, 0), one of these two functions has a graph that looks like the circle. (These two functions also happen to meet (-1, 0) and (1, 0), but this is not guaranteed by the implicit function theorem.)
The implicit function theorem is closely related to the inverse function theorem, which states when a function looks like graphs of invertible functions pasted together.
See also
Differential (calculus)
Numerical differentiation
Techniques for differentiation
List of calculus topics
Notation for differentiation
Notes
References
Citations
Works cited
Other sources
Boman, Eugene, and Robert Rogers. Differential Calculus: From Practice to Theory. 2022, personal.psu.edu/ecb5/DiffCalc.pdf.
Calculus | 0.794321 | 0.996503 | 0.791543 |
Entropy | Entropy is a scientific concept that is most commonly associated with a state of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change, and information systems including the transmission of information in telecommunication.
Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. A consequence of the second law of thermodynamics is that certain processes are irreversible.
The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names thermodynamic function and heat-potential. In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as transformation-content, in German Verwandlungsinhalt, and later coined the term entropy from a Greek word for transformation.
Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, to the macroscopically observable behavior, in form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, that has become one of the defining universal constants for the modern International System of Units (SI).
History
In his 1803 paper Fundamental Principles of Equilibrium and Movement, the French mathematician Lazare Carnot proposed that in any machine, the accelerations and shocks of the moving parts represent losses of moment of activity; in any natural process there exists an inherent tendency towards the dissipation of useful energy. In 1824, building on that work, Lazare's son, Sadi Carnot, published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to cold body. He used an analogy with how water falls in a water wheel. That was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed in 1789 that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body".
The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes; the first law, however, is unsuitable to separately quantify the effects of friction and dissipation.
In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave that change a mathematical interpretation, by questioning the nature of the inherent loss of usable heat when work is done, e.g., heat produced by friction. He described his observations as a dissipative use of energy, resulting in a transformation-content (Verwandlungsinhalt in German) of a thermodynamic system or working body of chemical species during a change of state. That was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Clausius discovered that the non-usable energy increases as steam proceeds from inlet to exhaust in a steam engine. From the prefix en-, as in 'energy', and from the Greek word τροπή [tropē], which is translated in an established lexicon as turning or change and that he rendered in German as Verwandlung, a word often translated into English as transformation, in 1865 Clausius coined the name of that property as entropy. The word was adopted into the English language in 1868.
Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy as proportional to the natural logarithm of the number of microstates such a gas could occupy. The proportionality constant in this definition, called the Boltzmann constant, has become one of the defining universal constants for the modern International System of Units (SI). Henceforth, the essential problem in statistical thermodynamics has been to determine the distribution of a given amount of energy E over N identical systems. Constantin Carathéodory, a Greek mathematician, linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability.
Etymology
In 1865, Clausius named the concept of "the differential of a quantity which depends on the configuration of the system", entropy after the Greek word for 'transformation'. He gave "transformational content" as a synonym, paralleling his "thermal and ergonal content" as the name of U, but preferring the term entropy as a close parallel of the word energy, as he found the concepts nearly "analogous in their physical significance". This term was formed by replacing the root of ἔργον ('ergon', 'work') by that of τροπή ('tropy', 'transformation').
In more detail, Clausius explained his choice of "entropy" as a name as follows:
I prefer going to the ancient languages for the names of important scientific quantities, so that they may mean the same thing in all living tongues. I propose, therefore, to call S the entropy of a body, after the Greek word "transformation". I have designedly coined the word entropy to be similar to energy, for these two quantities are so analogous in their physical significance, that an analogy of denominations seems to me helpful.
Leon Cooper added that in this way "he succeeded in coining a word that meant the same thing to everybody: nothing".
Definitions and descriptions
The concept of entropy is described by two principal approaches, the macroscopic perspective of classical thermodynamics, and the microscopic description central to statistical mechanics. The classical approach defines entropy in terms of macroscopically measurable physical properties, such as bulk mass, volume, pressure, and temperature. The statistical definition of entropy defines it in terms of the statistics of the motions of the microscopic constituents of a system — modeled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.). The two approaches form a consistent, unified view of the same phenomenon as expressed in the second law of thermodynamics, which has found universal applicability to physical processes.
State variables and functions of state
Many thermodynamic properties are defined by physical variables that define a state of thermodynamic equilibrium, which essentially are state variables. State variables depend only on the equilibrium condition, not on the path evolution to that state. State variables can be functions of state, also called state functions, in a sense that one state variable is a mathematical function of other state variables. Often, if some properties of a system are determined, they are sufficient to determine the state of the system and thus other properties' values. For example, temperature and pressure of a given quantity of gas determine its state, and thus also its volume via the ideal gas law. A system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined, and is thus a particular state, and has a particular volume. The fact that entropy is a function of state makes it useful. In the Carnot cycle, the working fluid returns to the same state that it had at the start of the cycle, hence the change or line integral of any state function, such as entropy, over this reversible cycle is zero.
Reversible process
The entropy change dS of a system excluding its surroundings can be well-defined as a small portion of heat δQ_rev transferred to the system during a reversible process divided by the temperature T of the system during this heat transfer:
dS = δQ_rev / T
The reversible process is quasistatic (i.e., it occurs without any dissipation, deviating only infinitesimally from the thermodynamic equilibrium), and it may conserve total entropy. For example, in the Carnot cycle, while the heat flow from a hot reservoir to a cold reservoir represents the increase in the entropy of the cold reservoir, the work output, if reversibly and perfectly stored, represents the decrease in the entropy which could be used to operate the heat engine in reverse, returning to the initial state; thus the total entropy change may still be zero at all times if the entire process is reversible.
In contrast, an irreversible process increases the total entropy of the system and surroundings. Any process that happens quickly enough to deviate from thermal equilibrium cannot be reversible; the total entropy increases, and the potential for maximum work to be done during the process is lost.
Carnot cycle
The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle, a thermodynamic cycle performed by a Carnot heat engine as a reversible heat engine. In a Carnot cycle the heat Q_H is transferred from a hot reservoir to a working gas at the constant temperature T_H during the isothermal expansion stage, and the heat Q_C is transferred from the working gas to a cold reservoir at the constant temperature T_C during the isothermal compression stage. According to Carnot's theorem, a heat engine with two thermal reservoirs can produce work if and only if there is a temperature difference between the reservoirs. Originally, Carnot did not distinguish between the heats Q_H and Q_C, as he assumed caloric theory to be valid and hence that the total heat in the system was conserved. But in fact, the magnitude of heat Q_H is greater than the magnitude of heat Q_C. Through the efforts of Clausius and Kelvin, the work W done by a reversible heat engine was found to be the product of the Carnot efficiency (i.e., the efficiency of all reversible heat engines with the same pair of thermal reservoirs) and the heat Q_H absorbed by the working body of the engine during isothermal expansion:
W = (1 - T_C/T_H) Q_H
To derive the Carnot efficiency Kelvin had to evaluate the ratio of the work output to the heat absorbed during the isothermal expansion with the help of the Carnot–Clapeyron equation, which contained an unknown function called the Carnot function. The possibility that the Carnot function could be the temperature as measured from a zero point of temperature was suggested by Joule in a letter to Kelvin. This allowed Kelvin to establish his absolute temperature scale.
It is known that the work W produced by an engine over a cycle equals the net heat absorbed over a cycle. Thus, with the sign convention for heat transferred in a thermodynamic process (Q > 0 for absorption and Q < 0 for dissipation) we get:
W = Q_H + Q_C
Since this equality holds over an entire Carnot cycle, it gave Clausius the hint that at each stage of the cycle the difference between the work and the net heat would be conserved, rather than the net heat itself. This means there exists a state function U with a change of dU = δQ - δW. It is called the internal energy and forms a central concept for the first law of thermodynamics.
Finally, comparison of both representations of the work output in a Carnot cycle gives us:
Q_H/T_H + Q_C/T_C = 0
Similarly to the derivation of internal energy, this equality implies the existence of a state function S with a change of dS = δQ/T which is conserved over an entire cycle. Clausius called this state function entropy.
In addition, the total change of entropy in both thermal reservoirs over the Carnot cycle is zero too, since the inversion of the heat transfer direction means a sign inversion for the heat transferred during the isothermal stages:
ΔS_r,H + ΔS_r,C = -Q_H/T_H - Q_C/T_C = 0
Here we denote the entropy change for a thermal reservoir by ΔS_r,i, where i is either H for the hot reservoir or C for the cold one.
If we consider a heat engine which is less effective than the Carnot cycle (i.e., the work W produced by this engine is less than the maximum predicted by Carnot's theorem), its work output is capped by the Carnot efficiency as:
W < (1 - T_C/T_H) Q_H
Substitution of the work as the net heat into the inequality above gives us:
Q_H/T_H + Q_C/T_C < 0
or in terms of the entropy change of the reservoirs:
ΔS_r,H + ΔS_r,C > 0
The Carnot cycle and entropy as shown above prove to be useful in the study of any classical thermodynamic heat engine: other cycles, such as the Otto, Diesel or Brayton cycle, can be analyzed from the same standpoint. Notably, any machine or cyclic process converting heat into work (i.e., a heat engine) which is claimed to produce an efficiency greater than that of Carnot is not viable, because it would violate the second law of thermodynamics.
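The cap on work output is simple to evaluate numerically. The following Python sketch (illustrative only, with assumed reservoir temperatures and heat input) computes the Carnot efficiency, the maximum work, and the reservoir entropy change in the reversible limit:

```python
# Carnot efficiency and the resulting cap on work output for a heat engine.
T_hot, T_cold = 500.0, 300.0   # kelvin (assumed)
Q_hot = 1000.0                 # joules absorbed from the hot reservoir (assumed)

eta_carnot = 1.0 - T_cold / T_hot   # efficiency of any reversible engine
W_max = eta_carnot * Q_hot          # maximum work allowed by the second law
Q_cold_min = Q_hot - W_max          # heat that must at least be rejected

print(f"Carnot efficiency: {eta_carnot:.2%}")
print(f"Maximum work:      {W_max:.1f} J")
print(f"Rejected heat:     {Q_cold_min:.1f} J")
# Total reservoir entropy change is zero in the reversible limit:
print("Reservoir entropy change:", -Q_hot / T_hot + Q_cold_min / T_cold, "J/K")
```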
For further analysis of sufficiently discrete systems, such as an assembly of particles, statistical thermodynamics must be used. Additionally, the description of devices operating near the limit of de Broglie waves, e.g. photovoltaic cells, has to be consistent with quantum statistics.
Classical thermodynamics
The thermodynamic definition of entropy was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium with its parts. Clausius created the term entropy as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer in the isotherm steps (isothermal expansion and isothermal compression) of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in an increment of entropy that is equal to incremental heat transfer divided by temperature. Entropy was found to vary in the thermodynamic cycle but eventually returned to the same value at the end of every cycle. Thus it was found to be a function of state, specifically a thermodynamic state of the system.
While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, entropy of an isolated system always increases for irreversible processes. The difference between an isolated system and closed system is that energy may not flow to and from an isolated system, but energy flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed, also in open systems, irreversible thermodynamics processes may occur.
According to the Clausius equality, for a reversible cyclic thermodynamic process:
∮ δQ_rev/T = 0
which means the line integral ∫ δQ_rev/T is path-independent. Thus we can define a state function S, called entropy:
dS = δQ_rev/T
Therefore, thermodynamic entropy has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI).
To find the entropy difference between any two states of the system, the integral must be evaluated for some reversible path between the initial and final states. Since entropy is a state function, the entropy change of the system for an irreversible path is the same as for a reversible path between the same two states. However, the heat transferred to or from the surroundings is different, as is the entropy change of the surroundings.
We can calculate the change of entropy only by integrating the above formula. To obtain the absolute value of the entropy, we consider the third law of thermodynamics: perfect crystals at absolute zero have an entropy S = 0.
From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process where the system gives up an amount ΔE of energy to the surroundings at the temperature T_R, and its entropy falls by ΔS, at least T_R·ΔS of that energy must be given up to the system's surroundings as heat. Otherwise, this process cannot go forward. In classical thermodynamics, the entropy of a system is defined if and only if it is in a thermodynamic equilibrium (though a chemical equilibrium is not required: for example, the entropy of a mixture of two moles of hydrogen and one mole of oxygen in standard conditions is well-defined).
Statistical mechanics
The statistical definition was developed by Ludwig Boltzmann in the 1870s by analyzing the statistical behavior of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant factor—known as the Boltzmann constant. In short, the thermodynamic definition of entropy provides the experimental verification of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature.
The interpretation of entropy in statistical mechanics is the measure of uncertainty, disorder, or mixedupness in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system including the position and momentum of every molecule. The more such states are available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways a system can be arranged, often taken to be a measure of "disorder" (the higher the entropy, the higher the disorder). This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) that could cause the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant.
The Boltzmann constant, and therefore entropy, have dimensions of energy divided by temperature, which has a unit of joules per kelvin (J⋅K−1) in the International System of Units (or kg⋅m2⋅s−2⋅K−1 in terms of base units). The entropy of a substance is usually given as an intensive property — either entropy per unit mass (SI unit: J⋅K−1⋅kg−1) or entropy per unit amount of substance (SI unit: J⋅K−1⋅mol−1).
Specifically, entropy is a logarithmic measure for a system with a number of states, each with a probability p_i of being occupied (usually given by the Boltzmann distribution):
S = -k_B Σ_i p_i ln p_i
where k_B is the Boltzmann constant and the summation is performed over all possible microstates of the system.
In case states are defined in a continuous manner, the summation is replaced by an integral over all possible states, or equivalently we can consider the expected value of the logarithm of the probability that a microstate is occupied:
S = -k_B ⟨ln p⟩
This definition assumes the basis states to be picked in a way that there is no information on their relative phases. In the general case the expression is:
S = -k_B Tr(ρ ln ρ)
where ρ is a density matrix, Tr is a trace operator and ln is a matrix logarithm. The density matrix formalism is not required if the system occurs to be in thermal equilibrium, so long as the basis states are chosen to be eigenstates of the Hamiltonian. For most practical purposes this can be taken as the fundamental definition of entropy since all other formulae for S can be derived from it, but not vice versa.
In what has been called the fundamental postulate in statistical mechanics, among system microstates of the same energy (i.e., degenerate microstates) each microstate is assumed to be populated with equal probability p_i = 1/Ω, where Ω is the number of microstates whose energy equals that of the system. Usually, this assumption is justified for an isolated system in thermodynamic equilibrium. Then in the case of an isolated system the previous formula reduces to:
S = k_B ln Ω
In thermodynamics, such a system is one with a fixed volume, number of molecules, and internal energy, called a microcanonical ensemble.
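The reduction of the Gibbs form to k_B ln Ω for equally likely microstates is easy to verify numerically; the sketch below is illustrative only, with an assumed number of microstates:

```python
import numpy as np

k_B = 1.380649e-23  # J/K, Boltzmann constant

def gibbs_entropy(probabilities):
    """S = -k_B * sum(p_i * ln p_i) over microstates with nonzero probability."""
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]
    return -k_B * np.sum(p * np.log(p))

# For Omega equally likely microstates the formula reduces to S = k_B ln(Omega).
omega = 1_000_000                      # assumed number of microstates
uniform = np.full(omega, 1.0 / omega)
print("Gibbs form    :", gibbs_entropy(uniform))
print("k_B ln(Omega) :", k_B * np.log(omega))
```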
The most general interpretation of entropy is as a measure of the extent of uncertainty about a system. The equilibrium state of a system maximizes the entropy because it does not reflect all information about the initial conditions, except for the conserved variables. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model.
The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications when two observers use different sets of macroscopic variables. For example, consider observer A using the variables U, V, W and observer B using the variables U, V, W, X. If observer B changes variable X, then observer A will see a violation of the second law of thermodynamics, since he does not possess information about variable X and its influence on the system. In other words, one must choose a complete set of macroscopic variables to describe the system, i.e. every independent parameter that may change during the experiment.
Entropy can also be defined for any Markov processes with reversible dynamics and the detailed balance property.
In Boltzmann's 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics.
Entropy of a system
Entropy arises directly from the Carnot cycle. It can also be described as the reversible heat divided by temperature. Entropy is a fundamental function of state.
In a thermodynamic system, pressure and temperature tend to become uniform over time because the equilibrium state has higher probability (more possible combinations of microstates) than any other state.
As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system and not part of the room) decreases as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. In other words, the entropy of the room has decreased as some of its energy has been dispersed to the ice and water, of which the entropy has increased.
However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the "universe" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalization has progressed.
Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry. Historically, the concept of entropy evolved to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy. For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in the total entropy of system and surroundings correspond to irreversible changes, because some energy is expended as waste heat, limiting the amount of work a system can do.
Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Absolute standard molar entropy of a substance can be calculated from the measured temperature dependence of its heat capacity. The molar entropy of ions is obtained as a difference in entropy from a reference state defined as zero entropy. The second law of thermodynamics states that the entropy of an isolated system must increase or remain constant. Therefore, entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform such that entropy increases. Chemical reactions cause changes in entropy and system entropy, in conjunction with enthalpy, plays an important role in determining in which direction a chemical reaction spontaneously proceeds.
One dictionary definition of entropy is that it is "a measure of thermal energy per unit temperature that is not available for useful work" in a cyclic process. For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine.
A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results due to the change in available volume per particle with mixing.
Equivalence of definitions
Proofs of equivalence between the entropy in statistical mechanics, the Gibbs entropy formula
S = -k_B Σ_i p_i ln p_i
and the entropy in classical thermodynamics,
dS = δQ_rev/T
together with the fundamental thermodynamic relation are known for the microcanonical ensemble, the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. These proofs are based on the probability density of microstates of the generalized Boltzmann distribution and the identification of the thermodynamic internal energy as the ensemble average U = ⟨E⟩. Thermodynamic relations are then employed to derive the well-known Gibbs entropy formula. However, the equivalence between the Gibbs entropy formula and the thermodynamic definition of entropy is not a fundamental thermodynamic relation but rather a consequence of the form of the generalized Boltzmann distribution.
Furthermore, it has been shown that the definition of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamic entropy under a certain set of postulates.
Second law of thermodynamics
The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion machine. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient.
It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total of entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics.
In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature T absorbing an infinitesimal amount of heat δq in a reversible way is given by δq/T. More explicitly, an energy T_R·S is not available to do useful work, where T_R is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy.
Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely.
The applicability of a second law of thermodynamics is limited to systems in or sufficiently near equilibrium state, so that they have defined entropy. Some inhomogeneous systems out of thermodynamic equilibrium still satisfy the hypothesis of local thermodynamic equilibrium, so that entropy density is locally defined as an intensive quantity. For such systems, there may apply a principle of maximum time rate of entropy production. It states that such a system may evolve to a steady state that maximizes its time rate of entropy production. This does not mean that such a system is necessarily always in a condition of maximum time rate of entropy production; it means that it may evolve to such a steady state.
Applications
The fundamental thermodynamic relation
The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy to changes in the entropy and the external parameters. This relation is known as the fundamental thermodynamic relation. If external pressure p bears on the volume V as the only external parameter, this relation is:
dU = T dS - p dV
Since both internal energy and entropy are monotonic functions of temperature T, implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium and then the whole-system entropy, pressure, and temperature may not exist).
The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities.
Entropy in chemical thermodynamics
Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system — the combination of a subsystem under study and its surroundings — increases during all spontaneous chemical and physical processes. The Clausius equation introduces the measurement of entropy change which describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems — always from hotter body to cooler one spontaneously.
Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J⋅kg−1⋅K−1). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy with a unit of J⋅mol−1⋅K−1.
Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of q_rev/T constitute each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K. Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture.
Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, ΔS must be incorporated in an expression that includes both the system and its surroundings:
ΔS_universe = ΔS_surroundings + ΔS_system
Via additional steps this expression becomes the equation of Gibbs free energy change ΔG for reactants and products in the system at constant pressure and temperature T:
ΔG = ΔH - TΔS
where ΔH is the enthalpy change and ΔS is the entropy change.
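A minimal spontaneity check using ΔG = ΔH - TΔS can be sketched in Python; the enthalpy and entropy values below are assumed for illustration and do not refer to any specific reaction in the text:

```python
# Spontaneity check via Delta G = Delta H - T * Delta S at constant T and P.
delta_H = -92.0e3   # J/mol, enthalpy change (assumed, exothermic)
delta_S = -199.0    # J/(mol*K), entropy change of the system (assumed)
T = 298.15          # K

delta_G = delta_H - T * delta_S
print(f"Delta G = {delta_G / 1000:.1f} kJ/mol ->",
      "spontaneous" if delta_G < 0 else "non-spontaneous")
```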
World's technological capacity to store and communicate entropic information
A 2011 study in Science estimated the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources. The authors estimate that humankind's technological capacity to store information grew from 2.6 (entropically compressed) exabytes in 1986 to 295 (entropically compressed) exabytes in 2007. The world's technological capacity to receive information through one-way broadcast networks grew from 432 exabytes of (entropically compressed) information in 1986 to 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks grew from 281 petabytes of (entropically compressed) information in 1986 to 65 (entropically compressed) exabytes in 2007.
Entropy balance equation for open systems
In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. In general, flow of heat Q̇, flow of shaft work Ẇ_S and pressure–volume work across the system boundaries cause changes in the entropy of the system. Heat transfer entails entropy transfer Q̇/T, where T is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system.
To derive a generalized entropy balance equation, we start with the general balance equation for the change in any extensive quantity Θ in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that dΘ/dt, i.e. the rate of change of Θ in the system, equals the rate at which Θ enters the system at the boundaries, minus the rate at which Θ leaves the system across the system boundaries, plus the rate at which Θ is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time of the extensive quantity entropy S, the entropy balance equation is:
dS/dt = Σ_k ṁ_k ŝ_k + Q̇/T + Ṡ_gen
where Σ_k ṁ_k ŝ_k is the net rate of entropy flow due to the flows of mass into and out of the system (with ŝ the entropy per unit mass), Q̇/T is the rate of entropy flow due to the flow of heat across the system boundary and Ṡ_gen is the rate of entropy generation within the system, e.g. by chemical reactions, phase transitions, internal heat transfer or frictional effects such as viscosity.
In case of multiple heat flows the term Q̇/T is replaced by Σ_j Q̇_j/T_j, where Q̇_j is the heat flow through the j-th port into the system and T_j is the temperature at the j-th port.
The nomenclature "entropy balance" is misleading and often deemed inappropriate because entropy is not a conserved quantity. In other words, the term is never a known quantity but always a derived one based on the expression above. Therefore, the open system version of the second law is more appropriately described as the "entropy generation equation" since it specifies that:with zero for reversible process and positive values for irreversible one.
Entropy change formulas for simple processes
For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas.
Isothermal expansion or compression of an ideal gas
For the expansion (or compression) of an ideal gas from an initial volume V_0 and pressure P_0 to a final volume V and pressure P at any constant temperature, the change in entropy is given by:
ΔS = nR ln(V/V_0) = -nR ln(P/P_0)
Here n is the amount of gas (in moles) and R is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant.
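As a worked illustration (with assumed amounts, not values from the text), the Python sketch below evaluates ΔS = nR ln(V/V_0) for one mole of ideal gas doubling its volume isothermally:

```python
import math

R = 8.314                            # J/(mol*K), ideal gas constant
n = 1.0                              # mol (assumed)
V_initial, V_final = 0.010, 0.020    # m^3, a doubling of volume (assumed)

delta_S = n * R * math.log(V_final / V_initial)
print(f"Delta S = {delta_S:.3f} J/K")   # about +5.76 J/K for a doubling
```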
Cooling and heating
For pure heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature T_0 to a final temperature T, the entropy change is:
ΔS = n C_P ln(T/T_0)
provided that the constant-pressure molar heat capacity (or specific heat) C_P is constant and that no phase transition occurs in this temperature interval.
Similarly at constant volume, the entropy change is:
ΔS = n C_V ln(T/T_0)
where the constant-volume molar heat capacity C_V is constant and there is no phase change.
At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply.
Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps – heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is:
ΔS = n C_V ln(T/T_0) + nR ln(V/V_0)
Similarly if the temperature and pressure of an ideal gas both vary:
ΔS = n C_P ln(T/T_0) - nR ln(P/P_0)
Phase transitions
Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (i.e., melting) of a solid to a liquid at the melting point T_m, the entropy of fusion is:
ΔS_fus = ΔH_fus / T_m
Similarly, for vaporization of a liquid to a gas at the boiling point T_b, the entropy of vaporization is:
ΔS_vap = ΔH_vap / T_b
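For a concrete feel for these ratios, the sketch below uses textbook-style enthalpy values for water (quoted here from general knowledge, not from this text) to estimate the entropies of fusion and vaporization:

```python
# Entropy of fusion and vaporization, Delta S = Delta H / T at the transition.
delta_H_fus, T_melt = 6.01e3, 273.15    # J/mol, K (approximate values for water)
delta_H_vap, T_boil = 40.7e3, 373.15    # J/mol, K (approximate values for water)

print(f"Delta S_fus = {delta_H_fus / T_melt:.1f} J/(mol*K)")   # ~22 J/(mol*K)
print(f"Delta S_vap = {delta_H_vap / T_boil:.1f} J/(mol*K)")   # ~109 J/(mol*K)
```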
Approaches to understanding entropy
As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid.
Standard textbook definitions
The following is a list of additional definitions of entropy from a collection of textbooks:
a measure of energy dispersal at a specific temperature.
a measure of disorder in the universe or of the availability of the energy in a system to do work.
a measure of a system's thermal energy per unit temperature that is unavailable for doing useful work.
In Boltzmann's analysis in terms of constituent particles, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium.
Order and disorder
Entropy is often loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the state of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies. One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of "disorder" in the system is given by:
Disorder = C_D / C_I
Similarly, the total amount of "order" in the system is given by:
Order = 1 - C_O / C_I
in which C_D is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, C_I is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and C_O is the "order" capacity of the system.
Energy dispersal
The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels.
Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics (compare discussion in next section). Physical chemist Peter Atkins, in his textbook Physical Chemistry, introduces entropy with the statement that "spontaneous changes are always accompanied by a dispersal of energy or matter and often both".
Relating entropy to energy usefulness
It is possible (in a thermal context) to regard lower entropy as a measure of the effectiveness or usefulness of a particular quantity of energy. Energy supplied at a higher temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at a lower temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a "loss" that can never be replaced.
As the entropy of the universe is steadily increasing, its total energy is becoming less useful. Eventually, this is theorized to lead to the heat death of the universe.
Entropy and adiabatic accessibility
A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999. This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909 and the monograph by R. Giles. In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states X_0 and X_1 such that the latter is adiabatically accessible from the former but not conversely. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state X is defined as the largest number λ such that X is adiabatically accessible from a composite state consisting of an amount λ in the state X_1 and a complementary amount, (1 - λ), in the state X_0. A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: it is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling.
Entropy in quantum mechanics
In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy":
$$S = -k_\text{B}\,\operatorname{Tr}(\hat\rho\,\ln\hat\rho),$$
where $\hat\rho$ is the density matrix, $\operatorname{Tr}$ is the trace operator and $k_\text{B}$ is the Boltzmann constant.
This upholds the correspondence principle, because in the classical limit, when the phases between the basis states are purely random, this expression is equivalent to the familiar classical definition of entropy for states with classical probabilities $p_i$:
$$S = -k_\text{B}\sum_i p_i \ln p_i,$$
i.e. in such a basis the density matrix is diagonal.
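A minimal numerical sketch (assuming NumPy and natural-log units, i.e. $k_\text{B} = 1$) of the von Neumann entropy for a 2×2 density matrix, showing that it reduces to the classical expression when the matrix is diagonal and vanishes for a pure state:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from the eigenvalues of rho (k_B = 1)."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]          # 0 ln 0 -> 0 by convention
    return float(-np.sum(eigvals * np.log(eigvals)))

# A diagonal (classical) mixture with probabilities 0.25 and 0.75
rho_diag = np.diag([0.25, 0.75])
# A pure state |+><+|: zero entropy
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
rho_pure = plus @ plus.T

print(von_neumann_entropy(rho_diag))   # ~0.5623, equals -sum p ln p
print(von_neumann_entropy(rho_pure))   # ~0.0
```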
Von Neumann established a rigorous mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik. He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix he extended the classical concept of entropy into the quantum domain.
Information theory
When viewed in terms of information theory, the entropy state function is the amount of information in the system that is needed to fully specify the microstate of the system. Entropy is the measure of the amount of missing information before reception. Often called Shannon entropy, it was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message. The definition of information entropy is expressed in terms of a discrete set of probabilities $p_i$ so that
$$H(X) = -\sum_i p_i \log p_i,$$
where the base of the logarithm determines the units (for example, the binary logarithm corresponds to bits).
In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average size of information of a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of binary questions needed to determine the content of the message.
Most researchers consider information entropy and thermodynamic entropy directly linked to the same concept, while others argue that they are distinct. Both expressions are mathematically similar. If $W$ is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is $p = 1/W$. The Shannon entropy (in nats) is
$$H = -\sum_{i=1}^{W} p_i \ln p_i = \ln W,$$
and if entropy is measured in units of $k_\text{B}$ per nat, then the entropy is given by
$$S = k_\text{B}\ln W,$$
which is the Boltzmann entropy formula, where $k_\text{B}$ is the Boltzmann constant, which may be interpreted as the thermodynamic entropy per nat. Some authors argue for dropping the word entropy for the function of information theory and using Shannon's other term, "uncertainty", instead.
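A short sketch (assuming NumPy) of the correspondence described above: for $W$ equally probable microstates the Shannon entropy in nats equals $\ln W$, and multiplying by the Boltzmann constant gives the Boltzmann entropy:

```python
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant, J/K
W = 1_000_000               # number of equally probable microstates
p = np.full(W, 1.0 / W)     # uniform distribution over microstates

H_nats = -np.sum(p * np.log(p))   # Shannon entropy in nats
S = k_B * H_nats                  # thermodynamic entropy, J/K

print(H_nats, np.log(W))          # both ~13.8155
print(S)                          # ~1.9e-22 J/K, i.e. k_B ln W
```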
Measurement
The entropy of a substance can be measured, although in an indirect way. The measurement, known as entropymetry, is done on a closed system with constant number of particles $N$ and constant volume $V$, and it uses the definition of temperature in terms of entropy, while limiting energy exchange to heat:
$$T := \left(\frac{\partial U}{\partial S}\right)_{V,N} \;\Rightarrow\; dS = \frac{\delta Q}{T}.$$
The resulting relation describes how the entropy changes $dS$ when a small amount of energy $\delta Q$ is introduced into the system at a certain temperature $T$.
The process of measurement goes as follows. First, a sample of the substance is cooled as close to absolute zero as possible. At such temperatures, the entropy approaches zero due to the definition of temperature. Then, small amounts of heat are introduced into the sample and the change in temperature is recorded, until the temperature reaches a desired value (usually 25 °C). The obtained data allows the user to integrate the equation above, yielding the absolute value of entropy of the substance at the final temperature. This value of entropy is called calorimetric entropy.
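A minimal sketch (assuming NumPy, with an illustrative made-up heat-capacity curve, not data for any real substance) of the calorimetric procedure: integrate $dS = C(T)\,dT/T$ from near absolute zero up to the final temperature:

```python
import numpy as np

# Hypothetical heat capacity C(T) in J/K for a 1-mol sample;
# the Debye-like T^3 rise at low T is illustrative only.
def heat_capacity(T, C_room=25.0, T_debye=200.0):
    return np.where(T < T_debye, C_room * (T / T_debye) ** 3, C_room)

T = np.linspace(0.1, 298.15, 10_000)          # from near absolute zero to 25 degC
integrand = heat_capacity(T) / T              # dS/dT = C(T)/T

# Trapezoid rule for the absolute (calorimetric) entropy at the final temperature
S_calorimetric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))
print(f"S(298 K) ~ {S_calorimetric:.1f} J/K")
```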
Interdisciplinary applications
Although the concept of entropy was originally a thermodynamic concept, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution.
Philosophy and theoretical physics
Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. As time progresses, the second law of thermodynamics states that the entropy of an isolated system never decreases in large systems over significant periods of time. Hence, from this perspective, entropy measurement is thought of as a clock in these conditions.
Biology
Chiavazzo et al. proposed that where cave spiders choose to lay their eggs can be explained through entropy minimization.
Entropy has been proven useful in the analysis of base pair sequences in DNA. Many entropy-based measures have been shown to distinguish between different structural regions of the genome, differentiate between coding and non-coding regions of DNA, and can also be applied for the recreation of evolutionary trees by determining the evolutionary distance between different species.
Cosmology
Assuming that a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source.
If the universe can be considered to have generally increasing entropy, then – as Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity (see Hawking radiation).
The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium. Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult.
Current theories suggest the entropy gap to have been originally opened up by the early rapid exponential expansion of the universe.
Economics
Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus on The Entropy Law and the Economic Process. Due to Georgescu-Roegen's work, the laws of thermodynamics form an integral part of the ecological economics school. Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics.
In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'. Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession's most influential proponent of the entropy pessimism position.
See also
Boltzmann entropy
Brownian ratchet
Configuration entropy
Conformational entropy
Entropic explosion
Entropic force
Entropic value at risk
Entropy and life
Entropy unit
Free entropy
Harmonic entropy
Info-metrics
Negentropy (negative entropy)
Phase space
Principle of maximum entropy
Residual entropy
Thermodynamic potential
Notes
References
Further reading
Lambert, Frank L.;
Sharp, Kim (2019). Entropy and the Tao of Counting: A Brief Introduction to Statistical Mechanics and the Second Law of Thermodynamics (SpringerBriefs in Physics). Springer Nature. .
Spirax-Sarco Limited, Entropy – A Basic Understanding A primer on entropy tables for steam engineering
External links
"Entropy" at Scholarpedia
Entropy and the Clausius inequality MIT OCW lecture, part of 5.60 Thermodynamics & Kinetics, Spring 2008
Entropy and the Second Law of Thermodynamics – an A-level physics lecture with 'derivation' of entropy based on Carnot cycle
Khan Academy: entropy lectures, part of Chemistry playlist
Entropy Intuition
More on Entropy
Proof: S (or Entropy) is a valid state variable
Reconciling Thermodynamic and State Definitions of Entropy
Thermodynamic Entropy Definition Clarification
The Discovery of Entropy by Adam Shulman. Hour-long video, January 2013.
The Second Law of Thermodynamics and Entropy – Yale OYC lecture, part of Fundamentals of Physics I (PHYS 200)
Physical quantities
Philosophy of thermal and statistical physics
State functions
Asymmetry
Extensive quantities | 0.791396 | 0.999795 | 0.791233 |
Thermodynamic system | A thermodynamic system is a body of matter and/or radiation separate from its surroundings that can be studied using the laws of thermodynamics.
According to their internal processes, thermodynamic systems are distinguished as passive or active: in a passive system the available energy is merely redistributed, while in an active system one type of energy is converted into another.
Depending on its interaction with the environment, a thermodynamic system may be an isolated system, a closed system, or an open system. An isolated system does not exchange matter or energy with its surroundings. A closed system may exchange heat, experience forces, and exert forces, but does not exchange matter. An open system can interact with its surroundings by exchanging both matter and energy.
The physical condition of a thermodynamic system at a given time is described by its state, which can be specified by the values of a set of thermodynamic state variables. A thermodynamic system is in thermodynamic equilibrium when there are no macroscopically apparent flows of matter or energy within it or between it and other systems.
Overview
Thermodynamic equilibrium is characterized not only by the absence of any flow of mass or energy, but by “the absence of any tendency toward change on a macroscopic scale.”
Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes, by which bodies pass from one equilibrium state to another by transfer of matter and energy between them. The term 'thermodynamic system' is used to refer to bodies of matter and energy in the special context of thermodynamics. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time. Equilibrium thermodynamics is a relatively simple and well settled subject. One reason for this is the existence of a well defined physical quantity called 'the entropy of a body'.
Non-equilibrium thermodynamics, as a subject in physics, considers bodies of matter and energy that are not in states of internal thermodynamic equilibrium, but are usually participating in processes of transfer that are slow enough to allow description in terms of quantities that are closely related to thermodynamic state variables. It is characterized by presence of flows of matter and energy. For this topic, very often the bodies considered have smooth spatial inhomogeneities, so that spatial gradients, for example a temperature gradient, are well enough defined. Thus the description of non-equilibrium thermodynamic systems is a field theory, more complicated than the theory of equilibrium thermodynamics. Non-equilibrium thermodynamics is a growing subject, not an established edifice. Example theories and modeling approaches include the GENERIC formalism for complex fluids, viscoelasticity, and soft materials. In general, it is not possible to find an exactly defined entropy for non-equilibrium problems. For many non-equilibrium thermodynamical problems, an approximately defined quantity called 'time rate of entropy production' is very useful. Non-equilibrium thermodynamics is mostly beyond the scope of the present article.
Another kind of thermodynamic system is considered in most engineering. It takes part in a flow process. The account is in terms that approximate, well enough in practice in many cases, equilibrium thermodynamical concepts. This is mostly beyond the scope of the present article, and is set out in other articles, for example the article Flow process.
History
The classification of thermodynamic systems arose with the development of thermodynamics as a science.
Theoretical studies of thermodynamic processes in the period from the first theory of heat engines (Sadi Carnot, France, 1824) to the theory of dissipative structures (Ilya Prigogine, Belgium, 1971) mainly concerned the patterns of interaction of thermodynamic systems with the environment.
At the same time, thermodynamic systems were mainly classified as isolated, closed and open, with corresponding properties in various thermodynamic states, for example, in states close to equilibrium, nonequilibrium and strongly nonequilibrium.
In 2010, Boris Dobroborsky (Israel, Russia) proposed a classification of thermodynamic systems according to internal processes consisting in energy redistribution (passive systems) and energy conversion (active systems).
Passive systems
If there is a temperature difference inside the thermodynamic system, for example in a rod one end of which is warmer than the other, then thermal energy transfer processes occur in it, in which the temperature of the colder part rises and that of the warmer part falls. As a result, after some time, the temperature in the rod will equalize – the rod will come to a state of thermodynamic equilibrium.
Active systems
If the process of converting one type of energy into another takes place inside a thermodynamic system, for example, in chemical reactions, in electric or pneumatic motors, when one solid body rubs against another, then the processes of energy release or absorption will occur, and the thermodynamic system will always tend to a non-equilibrium state with respect to the environment.
Systems in equilibrium
In isolated systems it is consistently observed that as time goes on internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases. A system in which all processes of change have gone practically to completion is considered in a state of thermodynamic equilibrium. The thermodynamic properties of a system in equilibrium are unchanging in time. Equilibrium system states are much easier to describe in a deterministic manner than non-equilibrium states. In some cases, when analyzing a thermodynamic process, one can assume that each intermediate state in the process is at equilibrium. Such a process is called quasistatic.
For a process to be reversible, each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly.
The very existence of thermodynamic equilibrium, defining states of thermodynamic systems, is the essential, characteristic, and most fundamental postulate of thermodynamics, though it is only rarely cited as a numbered law. According to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate. In reality, practically nothing in nature is in strict thermodynamic equilibrium, but the postulate of thermodynamic equilibrium often provides very useful idealizations or approximations, both theoretically and experimentally; experiments can provide scenarios of practical thermodynamic equilibrium.
In equilibrium thermodynamics the state variables do not include fluxes because in a state of thermodynamic equilibrium all fluxes have zero values by definition. Equilibrium thermodynamic processes may involve fluxes but these must have ceased by the time a thermodynamic process or operation is complete bringing a system to its eventual thermodynamic state. Non-equilibrium thermodynamics allows its state variables to include non-zero fluxes, which describe transfers of mass or energy or entropy between a system and its surroundings.
Walls
A system is enclosed by walls that bound it and connect it to its surroundings. Often a wall restricts passage across it by some form of matter or energy, making the connection indirect. Sometimes a wall is no more than an imaginary two-dimensional closed surface through which the connection to the surroundings is direct.
A wall can be fixed (e.g. a constant volume reactor) or moveable (e.g. a piston). For example, in a reciprocating engine, a fixed wall means the piston is locked at its position; then, a constant volume process may occur. In that same engine, a piston may be unlocked and allowed to move in and out. Ideally, a wall may be declared adiabatic, diathermal, impermeable, permeable, or semi-permeable. Actual physical materials that provide walls with such idealized properties are not always readily available.
The system is delimited by walls or boundaries, either actual or notional, across which conserved (such as matter and energy) or unconserved (such as entropy) quantities can pass into and out of the system. The space outside the thermodynamic system is known as the surroundings, a reservoir, or the environment. The properties of the walls determine what transfers can occur. A wall that allows transfer of a quantity is said to be permeable to it, and a thermodynamic system is classified by the permeabilities of its several walls. A transfer between system and surroundings can arise by contact, such as conduction of heat, or by long-range forces such as an electric field in the surroundings.
A system with walls that prevent all transfers is said to be isolated. This is an idealized conception, because in practice some transfer is always possible, for example by gravitational forces. It is an axiom of thermodynamics that an isolated system eventually reaches internal thermodynamic equilibrium, when its state no longer changes with time.
The walls of a closed system allow transfer of energy as heat and as work, but not of matter, between it and its surroundings. The walls of an open system allow transfer both of matter and of energy. This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is here used.
Anything that passes across the boundary and effects a change in the contents of the system must be accounted for in an appropriate balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. It could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.
Surroundings
The system is the part of the universe being studied, while the surroundings is the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment or the reservoir. Depending on the type of system, it may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in the analysis of the system, except in regards to these interactions.
Closed system
In a closed system, no mass may be transferred in or out of the system boundaries. The system always contains the same amount of matter, but (sensible) heat and (boundary) work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both is dependent on the property of its boundary.
Adiabatic boundary – not allowing any heat exchange: A thermally isolated system
Rigid boundary – not allowing exchange of work: A mechanically isolated system
One example is fluid being compressed by a piston in a cylinder. Another example of a closed system is a bomb calorimeter, a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiates combustion. Heat transfer occurs across the boundary after combustion but no mass transfer takes place either way.
The first law of thermodynamics for energy transfers for a closed system may be stated:
$$\Delta U = Q - W,$$
where $U$ denotes the internal energy of the system, $Q$ the heat added to the system, and $W$ the work done by the system. For infinitesimal changes the first law for closed systems may be stated:
$$dU = \delta Q - \delta W.$$
If the work is due to a volume expansion by $dV$ at a pressure $P$ then:
$$\delta W = P\,dV.$$
For a quasi-reversible heat transfer, the second law of thermodynamics reads:
$$\delta Q = T\,dS,$$
where $T$ denotes the thermodynamic temperature and $S$ the entropy of the system. With these relations the fundamental thermodynamic relation, used to compute changes in internal energy, is expressed as:
$$dU = T\,dS - P\,dV.$$
For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. For systems undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically:
$$\sum_{j=1}^{m} a_{ij} N_j = b_i^{0},$$
where $N_j$ denotes the number of $j$-type molecules, $a_{ij}$ the number of atoms of element $i$ in molecule $j$, and $b_i^{0}$ the total number of atoms of element $i$ in the system, which remains constant, since the system is closed. There is one such equation for each element $i$ in the system.
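A small sketch (with a hypothetical reaction, 2 H2 + O2 → 2 H2O, chosen purely for illustration) of this closure condition: the total count of each elemental atom, $\sum_j a_{ij}N_j$, is unchanged by the reaction:

```python
# Columns: molecular species; a[element][j] = atoms of that element per molecule of species j
species = ["H2", "O2", "H2O"]
a = {
    "H": [2, 0, 2],
    "O": [0, 2, 1],
}

def element_totals(N):
    """b_i = sum_j a_ij * N_j for each element i."""
    return {el: sum(aij * Nj for aij, Nj in zip(row, N)) for el, row in a.items()}

N_before = [4, 2, 0]       # 4 H2 + 2 O2, no water yet
N_after  = [0, 0, 4]       # reaction has run to completion: 4 H2O

print(element_totals(N_before))   # {'H': 8, 'O': 4}
print(element_totals(N_after))    # {'H': 8, 'O': 4} -- unchanged, the system is closed
```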
Isolated system
An isolated system is more restrictive than a closed system as it does not interact with its surroundings in any way. Mass and energy remains constant within the system, and no energy or mass transfer takes place across the boundary. As time passes in an isolated system, internal differences in the system tend to even out and pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium.
Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere. However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena.
In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations which assumed that a system (for example, a gas) was isolated. That is, all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation, Boltzmann's assumption of molecular chaos can be justified.
The second law of thermodynamics for isolated systems states that the entropy of an isolated system not in equilibrium tends to increase over time, approaching maximum value at equilibrium. Overall, in an isolated system, the internal energy is constant and the entropy can never decrease. A closed system's entropy can decrease e.g. when heat is extracted from the system.
Isolated systems are not equivalent to closed systems. Closed systems cannot exchange matter with the surroundings, but can exchange energy. Isolated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe).
'Closed system' is often used in thermodynamics discussions when 'isolated system' would be correct – i.e. there is an assumption that energy does not enter or leave the system.
Selective transfer of matter
For a thermodynamic process, the precise physical properties of the walls and surroundings of the system are important, because they determine the possible processes.
An open system has one or several walls that allow transfer of matter. To account for the internal energy of the open system, this requires energy transfer terms in addition to those for heat and work. It also leads to the idea of the chemical potential.
A wall selectively permeable only to a pure substance can put the system in diffusive contact with a reservoir of that pure substance in the surroundings. Then a process is possible in which that pure substance is transferred between system and surroundings. Also, across that wall a contact equilibrium with respect to that substance is possible. By suitable thermodynamic operations, the pure substance reservoir can be dealt with as a closed system. Its internal energy and its entropy can be determined as functions of its temperature, pressure, and mole number.
A thermodynamic operation can render impermeable to matter all system walls other than the contact equilibrium wall for that substance. This allows the definition of an intensive state variable, with respect to a reference state of the surroundings, for that substance. The intensive variable is called the chemical potential; for component substance $i$ it is usually denoted $\mu_i$. The corresponding extensive variable can be the number of moles $N_i$ of the component substance in the system.
For a contact equilibrium across a wall permeable to a substance, the chemical potentials of the substance must be same on either side of the wall. This is part of the nature of thermodynamic equilibrium, and may be regarded as related to the zeroth law of thermodynamics.
Open system
In an open system, there is an exchange of energy and matter between the system and the surroundings. The presence of reactants in an open beaker is an example of an open system. Here the boundary is an imaginary surface enclosing the beaker and reactants. The system is named closed if its borders are impenetrable to substance but allow transit of energy in the form of heat, and isolated if there is no exchange of heat or substances. The open system cannot exist in the equilibrium state. To describe deviation of the thermodynamic system from equilibrium, in addition to the constitutive variables described above, a set of internal variables has been introduced. The equilibrium state is considered to be stable, and the main property of the internal variables, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearing can be written as a relaxation equation for each internal variable
$$\frac{d\xi_i}{dt} = -\frac{\xi_i}{\tau_i}, \qquad i = 1, 2, \ldots,$$
where $\tau_i$ is a relaxation time of the corresponding variable. It is convenient to consider the initial value equal to zero.
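A brief sketch (assuming NumPy, with arbitrary illustrative values) of this relaxation law for a single internal variable $\xi$, integrated with a simple forward-Euler step:

```python
import numpy as np

tau = 2.0                      # relaxation time (arbitrary units)
dt = 0.01
t = np.arange(0.0, 10.0, dt)

xi = np.empty_like(t)
xi[0] = 1.0                    # initial deviation from equilibrium
for k in range(1, len(t)):
    xi[k] = xi[k-1] - dt * xi[k-1] / tau    # d(xi)/dt = -xi / tau

# The internal variable decays as exp(-t/tau), i.e. it "tends to disappear"
print(xi[-1], np.exp(-t[-1] / tau))         # both ~0.0068
```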
The specific contribution to the thermodynamics of open non-equilibrium systems was made by Ilya Prigogine, who investigated a system of chemically reacting substances. In this case the internal variables appear to be measures of incompleteness of chemical reactions, that is measures of how much the considered system with chemical reactions is out of equilibrium. The theory can be generalized, to consider any deviations from the equilibrium state, such as structure of the system, gradients of temperature, difference of concentrations of substances and so on, to say nothing of degrees of completeness of all chemical reactions, to be internal variables.
The increments of Gibbs free energy and entropy at and are determined as
The stationary states of the system exist due to exchange of both thermal energy and a stream of particles. The sum of the last terms in the equations presents the total energy coming into the system with the stream of particles of substances, which can be positive or negative; the quantity $\mu_k$ is the chemical potential of substance $k$. The middle terms in equations (2) and (3) depict energy dissipation (entropy production) due to the relaxation of the internal variables $\xi_j$, while the quantities conjugate to them are thermodynamic forces.
This approach to the open system allows describing the growth and development of living objects in thermodynamic terms.
See also
Dynamical system
Energy system
Isolated system
Mechanical system
Physical system
Quantum system
Thermodynamic cycle
Thermodynamic process
Two-state quantum system
GENERIC formalism
References
Sources
Carnot, Sadi (1824). Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance (in French). Paris: Bachelier.
Dobroborsky B.S. Machine safety and the human factor / Edited by Doctor of Technical Sciences, prof. S.A. Volkov. — St. Petersburg: SPbGASU, 2011. — pp. 33–35. — 114 p. — ISBN 978-5-9227-0276-8. (Ru)
Thermodynamic systems
Equilibrium chemistry
Thermodynamic cycles
Thermodynamic processes | 0.795195 | 0.994382 | 0.790728 |
Angular momentum | Angular momentum (sometimes called moment of momentum or rotational momentum) is the rotational analog of linear momentum. It is an important physical quantity because it is a conserved quantity – the total angular momentum of a closed system remains constant. Angular momentum has both a direction and a magnitude, and both are conserved. Bicycles and motorcycles, flying discs, rifled bullets, and gyroscopes owe their useful properties to conservation of angular momentum. Conservation of angular momentum is also why hurricanes form spirals and neutron stars have high rotational rates. In general, conservation limits the possible motion of a system, but it does not uniquely determine it.
The three-dimensional angular momentum for a point particle is classically represented as a pseudovector $\mathbf{L} = \mathbf{r}\times\mathbf{p}$, the cross product of the particle's position vector $\mathbf{r}$ (relative to some origin) and its momentum vector; the latter is $\mathbf{p} = m\mathbf{v}$ in Newtonian mechanics. Unlike linear momentum, angular momentum depends on where this origin is chosen, since the particle's position is measured from it.
Angular momentum is an extensive quantity; that is, the total angular momentum of any composite system is the sum of the angular momenta of its constituent parts. For a continuous rigid body or a fluid, the total angular momentum is the volume integral of angular momentum density (angular momentum per unit volume in the limit as volume shrinks to zero) over the entire body.
Similar to conservation of linear momentum, where it is conserved if there is no external force, angular momentum is conserved if there is no external torque. Torque can be defined as the rate of change of angular momentum, analogous to force. The net external torque on any system is always equal to the total torque on the system; the sum of all internal torques of any system is always 0 (this is the rotational analogue of Newton's third law of motion). Therefore, for a closed system (where there is no net external torque), the total torque on the system must be 0, which means that the total angular momentum of the system is constant.
The change in angular momentum for a particular interaction is called angular impulse, sometimes twirl. Angular impulse is the angular analog of (linear) impulse.
Examples
The trivial case of the angular momentum $L$ of a body in an orbit is given by
$$L = 2\pi M f r^2,$$
where $M$ is the mass of the orbiting object, $f$ is the orbit's frequency and $r$ is the orbit's radius.
The angular momentum of a uniform rigid sphere rotating around its axis, instead, is given by
$$L = \frac{4}{5}\pi M f r^2,$$
where $M$ is the sphere's mass, $f$ is the frequency of rotation and $r$ is the sphere's radius.
Thus, for example, the orbital angular momentum of the Earth with respect to the Sun is about 2.66 × 10⁴⁰ kg⋅m²⋅s⁻¹, while its rotational angular momentum is about 7.05 × 10³³ kg⋅m²⋅s⁻¹.
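A quick back-of-the-envelope check of the two figures quoted above (using rounded astronomical values and treating the Earth as a uniform sphere, as in the formula above):

```python
import math

M = 5.972e24        # Earth mass, kg
r_orbit = 1.496e11  # mean Sun-Earth distance, m
T_orbit = 3.156e7   # one year, s
R = 6.371e6         # Earth mean radius, m
T_spin = 86164.0    # sidereal day, s

# Orbital: L = M * v * r with v = 2*pi*r / T
v = 2 * math.pi * r_orbit / T_orbit
L_orbital = M * v * r_orbit          # ~2.7e40 kg m^2 / s

# Spin (uniform sphere): L = I * omega, with I = (2/5) M R^2
omega = 2 * math.pi / T_spin
L_spin = 0.4 * M * R**2 * omega      # ~7.1e33 kg m^2 / s

print(f"{L_orbital:.2e}  {L_spin:.2e}")
```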
In the case of a uniform rigid sphere rotating around its axis, if, instead of its mass, its density is known, the angular momentum is given by
$$L = \frac{16}{15}\pi^2 \rho f r^5,$$
where $\rho$ is the sphere's density, $f$ is the frequency of rotation and $r$ is the sphere's radius.
In the simplest case of a spinning disk, the angular momentum is given by
$$L = \pi M f r^2,$$
where $M$ is the disk's mass, $f$ is the frequency of rotation and $r$ is the disk's radius.
If instead the disk rotates about its diameter (e.g. coin toss), its angular momentum is given by
$$L = \frac{\pi}{2} M f r^2.$$
Definition in classical mechanics
Just as for angular velocity, there are two special types of angular momentum of an object: the spin angular momentum is the angular momentum about the object's centre of mass, while the orbital angular momentum is the angular momentum about a chosen center of rotation. The Earth has an orbital angular momentum by nature of revolving around the Sun, and a spin angular momentum by nature of its daily rotation around the polar axis. The total angular momentum is the sum of the spin and orbital angular momenta. In the case of the Earth the primary conserved quantity is the total angular momentum of the solar system because angular momentum is exchanged to a small but important extent among the planets and the Sun. The orbital angular momentum vector of a point particle is always parallel and directly proportional to its orbital angular velocity vector ω, where the constant of proportionality depends on both the mass of the particle and its distance from origin. The spin angular momentum vector of a rigid body is proportional but not always parallel to the spin angular velocity vector Ω, making the constant of proportionality a second-rank tensor rather than a scalar.
Orbital angular momentum in two dimensions
Angular momentum is a vector quantity (more precisely, a pseudovector) that represents the product of a body's rotational inertia and rotational velocity (in radians/sec) about a particular axis. However, if the particle's trajectory lies in a single plane, it is sufficient to discard the vector nature of angular momentum, and treat it as a scalar (more precisely, a pseudoscalar). Angular momentum can be considered a rotational analog of linear momentum. Thus, where linear momentum $p$ is proportional to mass $m$ and linear speed $v$,
$$p = mv,$$
angular momentum $L$ is proportional to moment of inertia $I$ and angular speed $\omega$ measured in radians per second:
$$L = I\omega.$$
Unlike mass, which depends only on amount of matter, moment of inertia depends also on the position of the axis of rotation and the distribution of the matter. Unlike linear velocity, which does not depend upon the choice of origin, orbital angular velocity is always measured with respect to a fixed origin. Therefore, strictly speaking, should be referred to as the angular momentum relative to that center.
In the case of circular motion of a single particle, we can use $I = mr^2$ and $\omega = v/r$ to expand angular momentum as $L = mr^2\cdot\frac{v}{r}$, reducing to:
$$L = rmv,$$
the product of the radius of rotation $r$ and the linear momentum of the particle $p = mv$, where $v$ is the linear (tangential) speed.
This simple analysis can also apply to non-circular motion if one uses the component of the motion perpendicular to the radius vector:
$$L = rmv_\perp,$$
where $v_\perp$ is the perpendicular component of the motion. Expanding, rearranging, and reducing, angular momentum can also be expressed
$$L = r_\perp m v,$$
where $r_\perp$ is the length of the moment arm, a line dropped perpendicularly from the origin onto the path of the particle. It is this definition, (length of moment arm) × (linear momentum), to which the term moment of momentum refers.
Scalar angular momentum from Lagrangian mechanics
Another approach is to define angular momentum as the conjugate momentum (also called canonical momentum) of the angular coordinate expressed in the Lagrangian of the mechanical system. Consider a mechanical system with a mass $m$ constrained to move in a circle of radius $a$ in the absence of any external force field. The kinetic energy of the system is
$$T = \tfrac{1}{2} m a^2 \dot\phi^2,$$
and the potential energy is
$$U = 0.$$
Then the Lagrangian is
$$\mathcal{L}(\phi,\dot\phi) = T - U = \tfrac{1}{2} m a^2 \dot\phi^2.$$
The generalized momentum "canonically conjugate to" the coordinate $\phi$ is defined by
$$p_\phi = \frac{\partial\mathcal{L}}{\partial\dot\phi} = m a^2 \dot\phi.$$
Orbital angular momentum in three dimensions
To completely define orbital angular momentum in three dimensions, it is required to know the rate at which the position vector sweeps out angle, the direction perpendicular to the instantaneous plane of angular displacement, and the mass involved, as well as how this mass is distributed in space. By retaining this vector nature of angular momentum, the general nature of the equations is also retained, and can describe any sort of three-dimensional motion about the center of rotation – circular, linear, or otherwise. In vector notation, the orbital angular momentum of a point particle in motion about the origin can be expressed as:
$$\mathbf{L} = I\boldsymbol{\omega},$$
where
$I = mr^2$ is the moment of inertia for a point mass,
$\boldsymbol{\omega} = \dfrac{\mathbf{r}\times\mathbf{v}}{r^2}$ is the orbital angular velocity of the particle about the origin,
$\mathbf{r}$ is the position vector of the particle relative to the origin, and $r = |\mathbf{r}|$,
$\mathbf{v}$ is the linear velocity of the particle relative to the origin, and
$m$ is the mass of the particle.
This can be expanded, reduced, and by the rules of vector algebra, rearranged:
$$\mathbf{L} = mr^2\,\frac{\mathbf{r}\times\mathbf{v}}{r^2} = m\,(\mathbf{r}\times\mathbf{v}) = \mathbf{r}\times m\mathbf{v} = \mathbf{r}\times\mathbf{p},$$
which is the cross product of the position vector and the linear momentum of the particle. By the definition of the cross product, the vector is perpendicular to both and . It is directed perpendicular to the plane of angular displacement, as indicated by the right-hand rule – so that the angular velocity is seen as counter-clockwise from the head of the vector. Conversely, the vector defines the plane in which and lie.
By defining a unit vector $\hat{\mathbf{u}}$ perpendicular to the plane of angular displacement, a scalar angular speed $\omega$ results, where
$$\omega\,\hat{\mathbf{u}} = \boldsymbol{\omega},$$
and
$$\omega = \frac{v_\perp}{r},$$
where $v_\perp$ is the perpendicular component of the motion, as above.
The two-dimensional scalar equations of the previous section can thus be given direction:
$$\mathbf{L} = I\boldsymbol{\omega} = I\omega\,\hat{\mathbf{u}} = r_\perp m v\,\hat{\mathbf{u}} = r m v_\perp\,\hat{\mathbf{u}},$$
and $\mathbf{L} = rmv\,\hat{\mathbf{u}}$ for circular motion, where all of the motion is perpendicular to the radius $r$.
In the spherical coordinate system the angular momentum vector is expressed as
$$\mathbf{L} = m\,\mathbf{r}\times\mathbf{v} = m r^2\left(\dot\theta\,\hat{\boldsymbol\varphi} - \dot\varphi\,\sin\theta\,\hat{\boldsymbol\theta}\right).$$
Analogy to linear momentum
Angular momentum can be described as the rotational analog of linear momentum. Like linear momentum it involves elements of mass and displacement. Unlike linear momentum it also involves elements of position and shape.
Many problems in physics involve matter in motion about some certain point in space, be it in actual rotation about it, or simply moving past it, where it is desired to know what effect the moving matter has on the point—can it exert energy upon it or perform work about it? Energy, the ability to do work, can be stored in matter by setting it in motion—a combination of its inertia and its displacement. Inertia is measured by its mass, and displacement by its velocity. Their product,
$$p = mv,$$
is the matter's momentum. Referring this momentum to a central point introduces a complication: the momentum is not applied to the point directly. For instance, a particle of matter at the outer edge of a wheel is, in effect, at the end of a lever of the same length as the wheel's radius, its momentum turning the lever about the center point. This imaginary lever is known as the moment arm. It has the effect of multiplying the momentum's effort in proportion to its length, an effect known as a moment. Hence, the particle's momentum referred to a particular point,
$$L = rmv,$$
is the angular momentum, sometimes called, as here, the moment of momentum of the particle versus that particular center point. The equation $L = rmv$ combines a moment (a mass $m$ turning moment arm $r$) with a linear (straight-line equivalent) speed $v$. Linear speed referred to the central point is simply the product of the distance $r$ and the angular speed $\omega$ versus the point: $v = r\omega$, another moment. Hence, angular momentum contains a double moment: $L = rm\,r\omega$. Simplifying slightly, $L = r^2 m\,\omega$; the quantity $r^2 m$ is the particle's moment of inertia, sometimes called the second moment of mass. It is a measure of rotational inertia.
The above analogy of the translational momentum and rotational momentum can be expressed in vector form:
$\mathbf{p} = m\mathbf{v}$ for linear motion
$\mathbf{L} = I\boldsymbol{\omega}$ for rotation
The direction of momentum is related to the direction of the velocity for linear movement. The direction of angular momentum is related to the angular velocity of the rotation.
Because moment of inertia is a crucial part of the spin angular momentum, the latter necessarily includes all of the complications of the former, which is calculated by multiplying elementary bits of the mass by the squares of their distances from the center of rotation. Therefore, the total moment of inertia, and the angular momentum, is a complex function of the configuration of the matter about the center of rotation and the orientation of the rotation for the various bits.
For a rigid body, for instance a wheel or an asteroid, the orientation of rotation is simply the position of the rotation axis versus the matter of the body. It may or may not pass through the center of mass, or it may lie completely outside of the body. For the same body, angular momentum may take a different value for every possible axis about which rotation may take place. It reaches a minimum when the axis passes through the center of mass.
For a collection of objects revolving about a center, for instance all of the bodies of the Solar System, the orientations may be somewhat organized, as is the Solar System, with most of the bodies' axes lying close to the system's axis. Their orientations may also be completely random.
In brief, the more mass and the farther it is from the center of rotation (the longer the moment arm), the greater the moment of inertia, and therefore the greater the angular momentum for a given angular velocity. In many cases the moment of inertia, and hence the angular momentum, can be simplified by
$$I = k^2 m,$$
where $k$ is the radius of gyration, the distance from the axis at which the entire mass $m$ may be considered as concentrated.
Similarly, for a point mass $m$ the moment of inertia is defined as
$$I = r^2 m,$$
where $r$ is the radius of the point mass from the center of rotation, and for any collection of particles $m_i$ as the sum,
$$I = \sum_i r_i^2 m_i.$$
Angular momentum's dependence on position and shape is reflected in its units versus linear momentum: kg⋅m²/s or N⋅m⋅s for angular momentum versus kg⋅m/s or N⋅s for linear momentum. When calculating angular momentum as the product of the moment of inertia times the angular velocity, the angular velocity must be expressed in radians per second, where the radian assumes the dimensionless value of unity. (When performing dimensional analysis, it may be productive to use orientational analysis which treats radians as a base unit, but this is not done in the International system of units). The units of angular momentum can be interpreted as torque⋅time. An object with angular momentum of $L$ N⋅m⋅s can be reduced to zero angular velocity by an angular impulse of $L$ N⋅m⋅s.
The plane perpendicular to the axis of angular momentum and passing through the center of mass is sometimes called the invariable plane, because the direction of the axis remains fixed if only the interactions of the bodies within the system, free from outside influences, are considered. One such plane is the invariable plane of the Solar System.
Angular momentum and torque
Newton's second law of motion can be expressed mathematically,
$$\mathbf{F} = m\mathbf{a},$$
or force = mass × acceleration. The rotational equivalent for point particles may be derived as follows:
$$\mathbf{L} = I\boldsymbol{\omega},$$
which means that the torque (i.e. the time derivative of the angular momentum) is
$$\boldsymbol{\tau} = \frac{dI}{dt}\,\boldsymbol{\omega} + I\,\frac{d\boldsymbol{\omega}}{dt}.$$
Because the moment of inertia is $mr^2$, it follows that $\frac{dI}{dt} = 2mr\frac{dr}{dt} = 2rp_{\parallel}$, and
$$\frac{d\mathbf{L}}{dt} = I\,\frac{d\boldsymbol{\omega}}{dt} + 2rp_{\parallel}\,\boldsymbol{\omega},$$
which reduces to
$$\boldsymbol{\tau} = I\boldsymbol{\alpha} + 2rp_{\parallel}\,\boldsymbol{\omega}.$$
This is the rotational analog of Newton's second law. Note that the torque is not necessarily proportional or parallel to the angular acceleration (as one might expect). The reason for this is that the moment of inertia of a particle can change with time, something that cannot occur for ordinary mass.
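A small sketch (hypothetical flywheel numbers) of the special case in which the moment of inertia is constant, so the relation reduces to $\tau = dL/dt = I\alpha$:

```python
# Flywheel: solid disk of mass 10 kg and radius 0.3 m, spun up by a 6 N*m torque
m, r = 10.0, 0.3
I = 0.5 * m * r**2            # moment of inertia of a disk about its axis, kg m^2
tau = 6.0                     # applied torque, N*m

alpha = tau / I               # angular acceleration, rad/s^2
t = 5.0                       # apply the torque for 5 s
omega = alpha * t             # angular velocity gained, rad/s
L = I * omega                 # angular momentum, kg m^2 / s

print(I, alpha, omega, L)     # 0.45, ~13.3, ~66.7, 30.0  (L also equals tau * t)
```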
Conservation of angular momentum
General considerations
A rotational analog of Newton's third law of motion might be written, "In a closed system, no torque can be exerted on any matter without the exertion on some other matter of an equal and opposite torque about the same axis." Hence, angular momentum can be exchanged between objects in a closed system, but total angular momentum before and after an exchange remains constant (is conserved).
Seen another way, a rotational analogue of Newton's first law of motion might be written, "A rigid body continues in a state of uniform rotation unless acted upon by an external influence." Thus with no external influence to act upon it, the original angular momentum of the system remains constant.
The conservation of angular momentum is used in analyzing central force motion. If the net force on some body is directed always toward some point, the center, then there is no torque on the body with respect to the center, as all of the force is directed along the radius vector, and none is perpendicular to the radius. Mathematically, torque because in this case and are parallel vectors. Therefore, the angular momentum of the body about the center is constant. This is the case with gravitational attraction in the orbits of planets and satellites, where the gravitational force is always directed toward the primary body and orbiting bodies conserve angular momentum by exchanging distance and velocity as they move about the primary. Central force motion is also used in the analysis of the Bohr model of the atom.
For a planet, angular momentum is distributed between the spin of the planet and its revolution in its orbit, and these are often exchanged by various mechanisms. The conservation of angular momentum in the Earth–Moon system results in the transfer of angular momentum from Earth to Moon, due to tidal torque the Moon exerts on the Earth. This in turn results in the slowing down of the rotation rate of Earth, at about 65.7 nanoseconds per day, and in gradual increase of the radius of Moon's orbit, at about 3.82 centimeters per year.
The conservation of angular momentum explains the angular acceleration of an ice skater as they bring their arms and legs close to the vertical axis of rotation. By bringing part of the mass of their body closer to the axis, they decrease their body's moment of inertia. Because angular momentum is the product of moment of inertia and angular velocity, if the angular momentum remains constant (is conserved), then the angular velocity (rotational speed) of the skater must increase.
The same phenomenon results in extremely fast spin of compact stars (like white dwarfs, neutron stars and black holes) when they are formed out of much larger and slower rotating stars.
Conservation is not always a full explanation for the dynamics of a system but is a key constraint. For example, a spinning top is subject to gravitational torque making it lean over and change the angular momentum about the nutation axis, but neglecting friction at the point of spinning contact, it has a conserved angular momentum about its spinning axis, and another about its precession axis. Also, in any planetary system, the planets, star(s), comets, and asteroids can all move in numerous complicated ways, but only so that the angular momentum of the system is conserved.
Noether's theorem states that every conservation law is associated with a symmetry (invariant) of the underlying physics. The symmetry associated with conservation of angular momentum is rotational invariance. The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved.
Relation to Newton's second law of motion
While angular momentum total conservation can be understood separately from Newton's laws of motion as stemming from Noether's theorem in systems symmetric under rotations, it can also be understood simply as an efficient method of calculation of results that can also be otherwise arrived at directly from Newton's second law, together with laws governing the forces of nature (such as Newton's third law, Maxwell's equations and Lorentz force). Indeed, given initial conditions of position and velocity for every point, and the forces at such a condition, one may use Newton's second law to calculate the second derivative of position, and solving for this gives full information on the development of the physical system with time. Note, however, that this is no longer true in quantum mechanics, due to the existence of particle spin, which is angular momentum that cannot be described by the cumulative effect of point-like motions in space.
As an example, consider decreasing of the moment of inertia, e.g. when a figure skater is pulling in their hands, speeding up the circular motion. In terms of angular momentum conservation, we have, for angular momentum $L$, moment of inertia $I$ and angular velocity $\omega$:
$$0 = dL = d(I\omega) = dI\,\omega + I\,d\omega.$$
Using this, we see that the change requires an energy of:
$$dE = d\left(\tfrac{1}{2}I\omega^2\right) = \tfrac{1}{2}\,dI\,\omega^2 + I\omega\,d\omega = -\tfrac{1}{2}\,dI\,\omega^2,$$
so that a decrease in the moment of inertia requires investing energy.
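A numerical sketch (with made-up skater values) of this bookkeeping: the angular momentum is held fixed while the moment of inertia is halved, so the angular velocity doubles, and the rotational kinetic energy $L^2/(2I)$ rises by exactly the work the skater must invest:

```python
I1, omega1 = 4.0, 3.0                 # initial moment of inertia (kg m^2) and spin (rad/s)
L = I1 * omega1                       # conserved angular momentum

I2 = 0.5 * I1                         # arms pulled in: moment of inertia halved
omega2 = L / I2                       # 6.0 rad/s -- the spin rate doubles

E1 = L**2 / (2 * I1)                  # 18 J
E2 = L**2 / (2 * I2)                  # 36 J
print(omega2, E2 - E1)                # the 18 J difference is supplied by the skater
```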
This can be compared to the work done as calculated using Newton's laws. Each point in the rotating body is accelerating, at each point of time, with radial acceleration of:
$$-r\omega^2.$$
Let us observe a point of mass m, whose position vector relative to the center of motion is perpendicular to the z-axis at a given point of time, and is at a distance z. The centripetal force on this point, keeping the circular motion, is:
$$-mz\omega^2.$$
Thus the work required for moving this point to a distance dz farther from the center of motion is:
$$dW = -mz\omega^2\,dz = -\tfrac{1}{2}\,m\omega^2\,d(z^2).$$
For a non-pointlike body one must integrate over this, with m replaced by the mass density per unit z. This gives:
$$dW = -\tfrac{1}{2}\,\omega^2\,dI,$$
which is exactly the energy required for keeping the angular momentum conserved.
Note, that the above calculation can also be performed per mass, using kinematics only. Thus the phenomena of figure skater accelerating tangential velocity while pulling their hands in, can be understood as follows in layman's language: The skater's palms are not moving in a straight line, so they are constantly accelerating inwards, but do not gain additional speed because the accelerating is always done when their motion inwards is zero. However, this is different when pulling the palms closer to the body: The acceleration due to rotation now increases the speed; but because of the rotation, the increase in speed does not translate to a significant speed inwards, but to an increase of the rotation speed.
Stationary-action principle
In classical mechanics it can be shown that the rotational invariance of action functionals implies conservation of angular momentum. The action is defined in classical physics as a functional of positions, often represented by the use of square brackets, and the final and initial times. It assumes the following form in cartesian coordinates, where the repeated indices indicate summation over the index. If the action is invariant under an infinitesimal transformation, it can be mathematically stated as: $\delta S = 0$.
Under the transformation, , the action becomes:
where we can employ the expansion of the terms up to first order in :
giving the following change in action:
Since all rotations can be expressed as the matrix exponential of skew-symmetric matrices, i.e. as $e^{\theta\Lambda}$, where $\Lambda$ is a skew-symmetric matrix and $\theta$ is the angle of rotation, we can express the change of coordinates due to the rotation, up to first order in the infinitesimal angle of rotation, as:
Combining the equation of motion and rotational invariance of action, we get from the above equations that:Since this is true for any matrix that satisfies it results in the conservation of the following quantity:
as . This corresponds to the conservation of angular momentum throughout the motion.
Lagrangian formalism
In Lagrangian mechanics, angular momentum for rotation around a given axis is the conjugate momentum of the generalized coordinate of the angle around the same axis. For example, $L_z$, the angular momentum around the z axis, is:
$$L_z = \frac{\partial\mathcal{L}}{\partial\dot\theta_z},$$
where $\mathcal{L}$ is the Lagrangian and $\theta_z$ is the angle around the z axis.
Note that $\dot\theta_z$, the time derivative of the angle, is the angular velocity $\omega_z$. Ordinarily, the Lagrangian depends on the angular velocity through the kinetic energy. The latter can be written by separating the velocity into its radial and tangential parts, with the tangential part in the x-y plane, around the z-axis, being equal to:
$$\sum_i \tfrac{1}{2} m_i v_{Ti}^2 = \sum_i \tfrac{1}{2} m_i\,(x_i^2 + y_i^2)\,\omega_{zi}^2,$$
where the subscript i stands for the i-th body, and m, vT and ωz stand for mass, tangential velocity around the z-axis and angular velocity around that axis, respectively.
For a body that is not point-like, with density ρ, we have instead:
$$\tfrac{1}{2}\int \rho(x,y,z)\,(x^2 + y^2)\,\omega_z^2\,dV = \tfrac{1}{2}\,I_z\,\omega_z^2,$$
where integration runs over the body, and $I_z$ is the moment of inertia around the z-axis.
Thus, assuming the potential energy does not depend on ωz (this assumption may fail for electromagnetic systems), we have the angular momentum of the ith object:
We have thus far rotated each object by a separate angle; we may also define an overall angle θz by which we rotate the whole system, thus rotating also each object around the z-axis, and have the overall angular momentum:
From Euler–Lagrange equations it then follows that:
$$\frac{d}{dt}L_{zi} = \frac{\partial\mathcal{L}}{\partial\theta_{zi}}.$$
Since the Lagrangian is dependent upon the angles of the object only through the potential, we have:
$$\frac{d}{dt}L_{zi} = \frac{\partial\mathcal{L}}{\partial\theta_{zi}} = -\frac{\partial V}{\partial\theta_{zi}},$$
which is the torque on the ith object.
Suppose the system is invariant to rotations, so that the potential is independent of an overall rotation by the angle θz (thus it may depend on the angles of objects only through their differences, in the form ). We therefore get for the total angular momentum:
And thus the angular momentum around the z-axis is conserved.
This analysis can be repeated separately for each axis, giving conservation of the angular momentum vector. However, the angles around the three axes cannot be treated simultaneously as generalized coordinates, since they are not independent; in particular, two angles per point suffice to determine its position. While it is true that in the case of a rigid body, fully describing it requires, in addition to three translational degrees of freedom, also specification of three rotational degrees of freedom, these cannot be defined as rotations around the Cartesian axes (see Euler angles). This caveat is reflected in quantum mechanics in the non-trivial commutation relations of the different components of the angular momentum operator.
Hamiltonian formalism
Equivalently, in Hamiltonian mechanics the Hamiltonian can be described as a function of the angular momentum. As before, the part of the kinetic energy related to rotation around the z-axis for the ith object is:
$$\frac{L_{zi}^2}{2 I_{zi}},$$
which is analogous to the energy dependence upon momentum along the z-axis, $\frac{p_{zi}^2}{2 m_i}$.
Hamilton's equations relate the angle around the z-axis to its conjugate momentum, the angular momentum around the same axis:
$$\frac{d\theta_{zi}}{dt} = \frac{\partial H}{\partial L_{zi}}, \qquad \frac{dL_{zi}}{dt} = -\frac{\partial H}{\partial\theta_{zi}}.$$
The first equation gives
$$\frac{d\theta_{zi}}{dt} = \omega_{zi} = \frac{L_{zi}}{I_{zi}},$$
and so we get the same results as in the Lagrangian formalism.
Note, that for combining all axes together, we write the kinetic energy as:
where pr is the momentum in the radial direction, and the moment of inertia is a 3-dimensional matrix; bold letters stand for 3-dimensional vectors.
For point-like bodies we have:
This form of the kinetic energy part of the Hamiltonian is useful in analyzing central potential problems, and is easily transformed to a quantum mechanical work frame (e.g. in the hydrogen atom problem).
Angular momentum in orbital mechanics
While in classical mechanics the language of angular momentum can be replaced by Newton's laws of motion, it is particularly useful for motion in central potential such as planetary motion in the solar system. Thus, the orbit of a planet in the solar system is defined by its energy, angular momentum and angles of the orbit major axis relative to a coordinate frame.
In astrodynamics and celestial mechanics, a quantity closely related to angular momentum is defined as
$$\mathbf{h} = \mathbf{r}\times\mathbf{v},$$
called specific angular momentum. Note that $\mathbf{L} = m\mathbf{h}$. Mass is often unimportant in orbital mechanics calculations, because motion of a body is determined by gravity. The primary body of the system is often so much larger than any bodies in motion about it that the gravitational effect of the smaller bodies on it can be neglected; it maintains, in effect, constant velocity. The motion of all bodies is affected by its gravity in the same way, regardless of mass, and therefore all move approximately the same way under the same conditions.
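A minimal sketch (assuming NumPy, with rounded values for a circular low Earth orbit) of the specific angular momentum $\mathbf{h} = \mathbf{r}\times\mathbf{v}$, which stays constant along a Keplerian orbit:

```python
import numpy as np

# Position and velocity of a satellite in a circular low Earth orbit (rounded values)
r = np.array([6.771e6, 0.0, 0.0])     # m, roughly 400 km altitude
v = np.array([0.0, 7.67e3, 0.0])      # m/s, tangential

h = np.cross(r, v)                     # specific angular momentum, m^2/s
print(h)                               # [0, 0, ~5.19e10] -- normal to the orbital plane
```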
Solid bodies
Angular momentum is also an extremely useful concept for describing rotating rigid bodies such as a gyroscope or a rocky planet.
For a continuous mass distribution with density function ρ(r), a differential volume element dV with position vector r within the mass has a mass element dm = ρ(r)dV. Therefore, the infinitesimal angular momentum of this element is:
and integrating this differential over the volume of the entire mass gives its total angular momentum:
In the derivation which follows, integrals similar to this can replace the sums for the case of continuous mass.
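For instance, the following sketch (an added illustration with assumed numbers, not part of the original text) approximates the volume integral numerically for a uniform solid cylinder spinning about its symmetry axis and compares the z-component with the familiar closed form Lz = (1/2) M R^2 ω.

import numpy as np

# Uniform solid cylinder spinning about the z-axis (assumed example values).
R, H, rho, omega = 0.5, 0.2, 7800.0, 10.0   # radius (m), height (m), density (kg/m^3), rad/s

# Sample the cross-section on a grid; the density is uniform along z, so each
# grid cell represents a thin column of height H with mass dm = rho * dA * H.
n = 400
xs = np.linspace(-R, R, n)
dA = (xs[1] - xs[0]) ** 2
X, Y = np.meshgrid(xs, xs)
inside = X**2 + Y**2 <= R**2
dm = rho * dA * H
M = dm * inside.sum()
Lz = omega * dm * ((X**2 + Y**2) * inside).sum()   # sum of dm * r_perp^2 * omega

print("numeric  Lz =", Lz)
print("analytic Lz =", 0.5 * M * R**2 * omega)     # (1/2) M R^2 omega, using the same sampled M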
Collection of particles
For a collection of particles in motion about an arbitrary origin, it is informative to develop the equation of angular momentum by resolving their motion into components about their own center of mass and about the origin. Given,
is the mass of particle ,
is the position vector of particle w.r.t. the origin,
is the velocity of particle w.r.t. the origin,
is the position vector of the center of mass w.r.t. the origin,
is the velocity of the center of mass w.r.t. the origin,
is the position vector of particle w.r.t. the center of mass,
is the velocity of particle w.r.t. the center of mass,
The total mass of the particles is simply their sum,
The position vector of the center of mass is defined by,
By inspection,
and
The total angular momentum of the collection of particles is the sum of the angular momentum of each particle,
Expanding ,
Expanding ,
It can be shown that (see sidebar),
and
therefore the second and third terms vanish,
The first term can be rearranged,
and total angular momentum for the collection of particles is finally,
The first term is the angular momentum of the center of mass relative to the origin. Similar to the single particle case below, it is the angular momentum of one particle of mass M at the center of mass moving with velocity V. The second term is the angular momentum of the particles moving relative to the center of mass, similar to the case of a fixed center of mass below. The result is general: the motion of the particles is not restricted to rotation or revolution about the origin or center of mass. The particles need not be individual masses, but can be elements of a continuous distribution, such as a solid body.
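A small numeric check of this decomposition is sketched below (an added illustration with made-up particle data): the total angular momentum computed directly about the origin equals the center-of-mass term plus the sum of the angular momenta taken about the center of mass.

import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(1.0, 3.0, size=5)          # particle masses (arbitrary units)
r = rng.normal(size=(5, 3))                # positions w.r.t. the origin
v = rng.normal(size=(5, 3))                # velocities w.r.t. the origin

M = m.sum()
R = (m[:, None] * r).sum(axis=0) / M       # center of mass position
V = (m[:, None] * v).sum(axis=0) / M       # center of mass velocity

L_direct = sum(m[i] * np.cross(r[i], v[i]) for i in range(5))
L_com    = M * np.cross(R, V)              # "one particle of mass M at the center of mass"
L_rel    = sum(m[i] * np.cross(r[i] - R, v[i] - V) for i in range(5))

print(L_direct)
print(L_com + L_rel)                       # matches L_direct to rounding error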
Rearranging the equation by vector identities, multiplying both terms by "one", and grouping appropriately,
gives the total angular momentum of the system of particles in terms of moment of inertia and angular velocity ,
Single particle case
In the case of a single particle moving about the arbitrary origin,
and the equations above for the total angular momentum reduce to,
Case of a fixed center of mass
For the case of the center of mass fixed in space with respect to the origin,
and the equations above for the total angular momentum reduce to,
Angular momentum in general relativity
In modern (20th century) theoretical physics, angular momentum (not including any intrinsic angular momentum – see below) is described using a different formalism, instead of a classical pseudovector. In this formalism, angular momentum is the 2-form Noether charge associated with rotational invariance. As a result, angular momentum is generally not conserved locally for general curved spacetimes, unless they have rotational symmetry; whereas globally the notion of angular momentum itself only makes sense if the spacetime is asymptotically flat. If the spacetime is only axially symmetric, as for the Kerr metric, the total angular momentum is not conserved, but the momentum conjugate to the azimuthal angle is conserved; this reflects the invariance under rotation around the symmetry axis. (The conserved quantity is expressed in terms of the metric, the rest mass, the four-velocity, and the four-position in spherical coordinates.)
In classical mechanics, the angular momentum of a particle can be reinterpreted as a plane element:
in which the exterior product (∧) replaces the cross product (×) (these products have similar characteristics but are nonequivalent). This has the advantage of a clearer geometric interpretation as a plane element, defined using the vectors x and p, and the expression is true in any number of dimensions. In Cartesian coordinates:
or more compactly in index notation:
The angular velocity can also be defined as an anti-symmetric second order tensor, with components ωij. The relation between the two anti-symmetric tensors is given by the moment of inertia which must now be a fourth order tensor:
Again, this equation in L and ω as tensors is true in any number of dimensions. This equation also appears in the geometric algebra formalism, in which L and ω are bivectors, and the moment of inertia is a mapping between them.
In relativistic mechanics, the relativistic angular momentum of a particle is expressed as an anti-symmetric tensor of second order:
in terms of four-vectors, namely the four-position X and the four-momentum P, and absorbs the above L together with the moment of mass, i.e., the product of the relativistic mass of the particle and its centre of mass, which can be thought of as describing the motion of its centre of mass, since mass–energy is conserved.
In each of the above cases, for a system of particles the total angular momentum is just the sum of the individual particle angular momenta, and the centre of mass is for the system.
Angular momentum in quantum mechanics
In quantum mechanics, angular momentum (like other quantities) is expressed as an operator, and its one-dimensional projections have quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, implying that at any time, only one projection (also called "component") can be measured with definite precision; the other two then remain uncertain. Because of this, the axis of rotation of a quantum particle is undefined. Quantum particles do possess a type of non-orbital angular momentum called "spin", but this angular momentum does not correspond to a spinning motion. In relativistic quantum mechanics the above relativistic definition becomes a tensorial operator.
Spin, orbital, and total angular momentum
The classical definition of angular momentum as can be carried over to quantum mechanics, by reinterpreting r as the quantum position operator and p as the quantum momentum operator. L is then an operator, specifically called the orbital angular momentum operator. The components of the angular momentum operator satisfy the commutation relations of the Lie algebra so(3). Indeed, these operators are precisely the infinitesimal action of the rotation group on the quantum Hilbert space. (See also the discussion below of the angular momentum operators as the generators of rotations.)
However, in quantum physics, there is another type of angular momentum, called spin angular momentum, represented by the spin operator S. Spin is often depicted as a particle literally spinning around an axis, but this is a misleading and inaccurate picture: spin is an intrinsic property of a particle, unrelated to any sort of motion in space and fundamentally different from orbital angular momentum. All elementary particles have a characteristic spin (possibly zero), and almost all elementary particles have nonzero spin. For example electrons have "spin 1/2" (this actually means "spin ħ/2"), photons have "spin 1" (this actually means "spin ħ"), and pi-mesons have spin 0.
Finally, there is total angular momentum J, which combines both the spin and orbital angular momentum of all particles and fields. (For one particle, .) Conservation of angular momentum applies to J, but not to L or S; for example, the spin–orbit interaction allows angular momentum to transfer back and forth between L and S, with the total remaining constant. Electrons and photons need not have integer-based values for total angular momentum, but can also have half-integer values.
In molecules the total angular momentum F is the sum of the rovibronic (orbital) angular momentum N, the electron spin angular momentum S, and the nuclear spin angular momentum I. For electronic singlet states the rovibronic angular momentum is denoted J rather than N. As explained by Van Vleck, the components of the molecular rovibronic angular momentum referred to molecule-fixed axes have different commutation relations from those for the components about space-fixed axes.
Quantization
In quantum mechanics, angular momentum is quantized – that is, it cannot vary continuously, but only in "quantum leaps" between certain allowed values. For any system, the following restrictions on measurement results apply, where ħ is the reduced Planck constant and the axis of projection is any Euclidean direction such as x, y, or z:
The reduced Planck constant is tiny by everyday standards, about 10^−34 J·s, and therefore this quantization does not noticeably affect the angular momentum of macroscopic objects. However, it is very important in the microscopic world. For example, the structure of electron shells and subshells in chemistry is significantly affected by the quantization of angular momentum.
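To see why the quantization is invisible for macroscopic objects, here is a back-of-the-envelope comparison (an added illustration with assumed numbers): the spin angular momentum of a small bicycle wheel expressed in units of the reduced Planck constant.

hbar = 1.054571817e-34          # reduced Planck constant, J*s

# Assumed wheel: most of its 1 kg mass in a rim of radius 0.3 m, spun at 10 rad/s.
m, r, omega = 1.0, 0.3, 10.0
I = m * r**2                    # moment of inertia of a thin hoop
L = I * omega                   # about 0.9 J*s

print("L =", L, "J*s")
print("L / hbar =", L / hbar)   # roughly 1e34 quanta, so the discreteness is unobservable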
Quantization of angular momentum was first postulated by Niels Bohr in his model of the atom and was later predicted by Erwin Schrödinger in his Schrödinger equation.
Uncertainty
In the definition , six operators are involved: The position operators , , , and the momentum operators , , . However, the Heisenberg uncertainty principle tells us that it is not possible for all six of these quantities to be known simultaneously with arbitrary precision. Therefore, there are limits to what can be known or measured about a particle's angular momentum. It turns out that the best that one can do is to simultaneously measure both the angular momentum vector's magnitude and its component along one axis.
The uncertainty is closely related to the fact that different components of an angular momentum operator do not commute, for example . (For the precise commutation relations, see angular momentum operator.)
Total angular momentum as generator of rotations
As mentioned above, orbital angular momentum L is defined as in classical mechanics: , but total angular momentum J is defined in a different, more basic way: J is defined as the "generator of rotations". More specifically, J is defined so that the operator
is the rotation operator that takes any system and rotates it by angle about the axis . (The "exp" in the formula refers to operator exponential.) To put this the other way around, whatever our quantum Hilbert space is, we expect that the rotation group SO(3) will act on it. There is then an associated action of the Lie algebra so(3) of SO(3); the operators describing the action of so(3) on our Hilbert space are the (total) angular momentum operators.
The relationship between the angular momentum operator and the rotation operators is the same as the relationship between Lie algebras and Lie groups in mathematics. The close relationship between angular momentum and rotations is reflected in Noether's theorem that proves that angular momentum is conserved whenever the laws of physics are rotationally invariant.
Angular momentum in electrodynamics
When describing the motion of a charged particle in an electromagnetic field, the canonical momentum P (derived from the Lagrangian for this system) is not gauge invariant. As a consequence, the canonical angular momentum L = r × P is not gauge invariant either. Instead, the momentum that is physical, the so-called kinetic momentum (used throughout this article), is (in SI units)
where e is the electric charge of the particle and A the magnetic vector potential of the electromagnetic field. The gauge-invariant angular momentum, that is kinetic angular momentum, is given by
The interplay with quantum mechanics is discussed further in the article on canonical commutation relations.
Angular momentum in optics
In classical Maxwell electrodynamics the Poynting vector
is a linear momentum density of electromagnetic field.
The angular momentum density vector is given by a vector product
as in classical mechanics:
The above identities are valid locally, i.e. at each point of space at a given moment in time.
Angular momentum in nature and the cosmos
Conservation of angular momentum helps explain the dynamics of tropical cyclones and other related weather phenomena. Winds revolve slowly around low pressure systems, mainly due to the Coriolis effect. If the low pressure intensifies and the slowly circulating air is drawn toward the center, the molecules must speed up in order to conserve angular momentum. By the time they reach the center, the speeds become destructive.
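The speed-up can be estimated with the conservation law directly; the sketch below (an added illustration with assumed numbers) scales the tangential wind speed of an air parcel as it is drawn inward, ignoring friction and other torques.

# Angular momentum per unit mass of an air parcel: L = v * r (tangential speed times radius).
# With L conserved and no friction, v2 = v1 * r1 / r2 as the parcel spirals inward.
v1 = 5.0        # tangential speed at large radius, m/s (assumed)
r1 = 500e3      # initial radius, m (assumed)
for r2 in (200e3, 100e3, 50e3, 20e3):
    v2 = v1 * r1 / r2
    print(f"r = {r2/1000:6.0f} km  ->  v = {v2:6.1f} m/s")
# Real storms do not reach these speeds exactly, because friction and pressure
# gradients matter, but the trend illustrates the conservation argument.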
Johannes Kepler determined the laws of planetary motion without knowledge of conservation of momentum. However, not long after his discovery, their derivation from conservation of angular momentum was worked out. Planets move more slowly the farther out they are in their elliptical orbits, which is explained intuitively by the fact that orbital angular momentum is proportional to the radius of the orbit. Since the mass does not change and the angular momentum is conserved, the velocity drops.
Tidal acceleration is an effect of the tidal forces between an orbiting natural satellite (e.g. the Moon) and the primary planet that it orbits (e.g. Earth). The gravitational torque between the Moon and the tidal bulge of Earth causes the Moon to be constantly promoted to a slightly higher orbit (~3.8 cm per year) and Earth to be decelerated (by −25.858 ± 0.003″/cy²) in its rotation (the length of the day increases by ~1.7 ms per century, +2.3 ms from tidal effect and −0.6 ms from post-glacial rebound). The Earth loses angular momentum which is transferred to the Moon such that the overall angular momentum is conserved.
Angular momentum in engineering and technology
Examples of using conservation of angular momentum for practical advantage are abundant. In engines such as steam engines or internal combustion engines, a flywheel is needed to efficiently convert the lateral motion of the pistons to rotational motion.
Inertial navigation systems explicitly use the fact that angular momentum is conserved with respect to the inertial frame of space. Inertial navigation is what enables submarine trips under the polar ice cap, and it is also crucial to all forms of modern navigation.
Rifled bullets use the gyroscopic stability provided by conservation of angular momentum to follow a truer trajectory. The invention of rifled firearms and cannons gave their users a significant strategic advantage in battle, and was thus a technological turning point in history.
History
Isaac Newton, in the Principia, hinted at angular momentum in his examples of the first law of motion:

A top, whose parts by their cohesion are perpetually drawn aside from rectilinear motions, does not cease its rotation, otherwise than as it is retarded by the air. The greater bodies of the planets and comets, meeting with less resistance in more free spaces, preserve their motions both progressive and circular for a much longer time.

He did not further investigate angular momentum directly in the Principia, saying:

From such kind of reflexions also sometimes arise the circular motions of bodies about their own centres. But these are cases which I do not consider in what follows; and it would be too tedious to demonstrate every particular that relates to this subject.

However, his geometric proof of the law of areas is an outstanding example of Newton's genius, and indirectly proves angular momentum conservation in the case of a central force.
The Law of Areas
Newton's derivation
As a planet orbits the Sun, the line between the Sun and the planet sweeps out equal areas in equal intervals of time. This had been known since Kepler expounded his second law of planetary motion. Newton derived a unique geometric proof, and went on to show that the attractive force of the Sun's gravity was the cause of all of Kepler's laws.
During the first interval of time, an object is in motion from point A to point B. Undisturbed, it would continue to point c during the second interval. When the object arrives at B, it receives an impulse directed toward point S. The impulse gives it a small added velocity toward S, such that if this were its only velocity, it would move from B to V during the second interval. By the rules of velocity composition, these two velocities add, and point C is found by construction of parallelogram BcCV. Thus the object's path is deflected by the impulse so that it arrives at point C at the end of the second interval. Because the triangles SBc and SBC have the same base SB and the same height Bc or VC, they have the same area. By symmetry, triangle SBc also has the same area as triangle SAB, therefore the object has swept out equal areas SAB and SBC in equal times.
At point C, the object receives another impulse toward S, again deflecting its path during the third interval from d to D. Thus it continues to E and beyond, the triangles SAB, SBc, SBC, SCd, SCD, SDe, SDE all having the same area. Allowing the time intervals to become ever smaller, the path ABCDE approaches indefinitely close to a continuous curve.
Note that because this derivation is geometric, and no specific force is applied, it proves a more general law than Kepler's second law of planetary motion. It shows that the Law of Areas applies to any central force, attractive or repulsive, continuous or non-continuous, or zero.
Conservation of angular momentum in the Law of Areas
The proportionality of angular momentum to the area swept out by a moving object can be understood by realizing that the bases of the triangles, that is, the lines from S to the object, are equivalent to the radius r, and that the heights of the triangles are proportional to the perpendicular component of the velocity v⊥. Hence, if the area swept per unit time is constant, then by the triangular area formula (area = ½ base × height) the product of base and height, and therefore the product r·v⊥, are constant: if r and the base length are decreased, v⊥ and the height must increase proportionally. Mass is constant, therefore angular momentum is conserved by this exchange of distance and velocity.
In the case of triangle SBC, the area is equal to ½(SB)(VC). Wherever C is eventually located due to the impulse applied at B, the product (SB)(VC), and therefore the angular momentum, remains constant. Similarly so for each of the triangles.
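The geometric argument can be mimicked numerically. In the sketch below (an added illustration, not Newton's construction itself), an object moves in straight segments and receives an impulse directed toward the fixed point S at the end of each interval; the z-component of r × v, and hence the triangle area swept per interval, is unchanged by every impulse.

import numpy as np

S = np.zeros(2)                       # the fixed center S at the origin
r = np.array([1.0, 0.0])              # starting position (assumed units)
v = np.array([0.0, 0.8])              # starting velocity
dt = 0.1

def cross_z(a, b):
    return a[0] * b[1] - a[1] * b[0]

for step in range(5):
    r = r + v * dt                            # free motion during the interval
    impulse = -0.5 * r / np.linalg.norm(r)    # kick directed toward S, arbitrary strength
    v = v + impulse
    # r x v is unchanged because the impulse is parallel to r (a central direction).
    print(f"after impulse {step + 1}: r x v = {cross_z(r, v):.6f}")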
Another areal proof of conservation of angular momentum for any central force uses Mamikon's sweeping tangents theorem.
After Newton
Leonhard Euler, Daniel Bernoulli, and Patrick d'Arcy all understood angular momentum in terms of conservation of areal velocity, a result of their analysis of Kepler's second law of planetary motion. It is unlikely that they realized the implications for ordinary rotating matter.
In 1736 Euler, like Newton, touched on some of the equations of angular momentum in his Mechanica without further developing them.
Bernoulli wrote in a 1744 letter of a "moment of rotational motion", possibly the first conception of angular momentum as we now understand it.
In 1799, Pierre-Simon Laplace first realized that a fixed plane was associated with rotation—his invariable plane.
Louis Poinsot in 1803 began representing rotations as a line segment perpendicular to the rotation, and elaborated on the "conservation of moments".
In 1852 Léon Foucault used a gyroscope in an experiment to display the Earth's rotation.
William J. M. Rankine's 1858 Manual of Applied Mechanics defined angular momentum in the modern sense for the first time:

...a line whose length is proportional to the magnitude of the angular momentum, and whose direction is perpendicular to the plane of motion of the body and of the fixed point, and such, that when the motion of the body is viewed from the extremity of the line, the radius-vector of the body seems to have right-handed rotation.

In an 1872 edition of the same book, Rankine stated that "The term angular momentum was introduced by Mr. Hayward," probably referring to R.B. Hayward's article On a Direct Method of estimating Velocities, Accelerations, and all similar Quantities with respect to Axes moveable in any manner in Space with Applications, which was introduced in 1856, and published in 1864. Rankine was mistaken, as numerous publications feature the term starting in the late 18th to early 19th centuries. However, Hayward's article apparently was the first use of the term and the concept seen by much of the English-speaking world. Before this, angular momentum was typically referred to as "momentum of rotation" in English.
See also
References
Further reading
External links
"What Do a Submarine, a Rocket and a Football Have in Common? Why the prolate spheroid is the shape for success" (Scientific American, November 8, 2010)
Conservation of Angular Momentum – a chapter from an online textbook
Angular Momentum in a Collision Process – derivation of the three-dimensional case
Angular Momentum and Rolling Motion – more momentum theory
Mechanical quantities
Rotation
Conservation laws
Moment (physics)
Angular momentum | 0.791387 | 0.999028 | 0.790617 |
Noether's theorem | Noether's theorem states that every continuous symmetry of the action of a physical system with conservative forces has a corresponding conservation law. This is the first of two theorems (see Noether's second theorem) published by mathematician Emmy Noether in 1918. The action of a physical system is the integral over time of a Lagrangian function, from which the system's behavior can be determined by the principle of least action. This theorem only applies to continuous and smooth symmetries of physical space.
Noether's theorem is used in theoretical physics and the calculus of variations. It reveals the fundamental relation between the symmetries of a physical system and the conservation laws. It also made modern theoretical physicists much more focused on symmetries of physical systems. A generalization of the formulations on constants of motion in Lagrangian and Hamiltonian mechanics (developed in 1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian alone (e.g., systems with a Rayleigh dissipation function). In particular, dissipative systems with continuous symmetries need not have a corresponding conservation law.
Basic illustrations and background
As an illustration, if a physical system behaves the same regardless of how it is oriented in space (that is, it's invariant), its Lagrangian is symmetric under continuous rotation: from this symmetry, Noether's theorem dictates that the angular momentum of the system be conserved, as a consequence of its laws of motion. The physical system itself need not be symmetric; a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry. It is the laws of its motion that are symmetric.
As another example, if a physical process exhibits the same outcomes regardless of place or time, then its Lagrangian is symmetric under continuous translations in space and time respectively: by Noether's theorem, these symmetries account for the conservation laws of linear momentum and energy within this system, respectively.
Noether's theorem is important, both because of the insight it gives into conservation laws, and also as a practical calculational tool. It allows investigators to determine the conserved quantities (invariants) from the observed symmetries of a physical system. Conversely, it allows researchers to consider whole classes of hypothetical Lagrangians with given invariants, to describe a physical system. As an illustration, suppose that a physical theory is proposed which conserves a quantity X. A researcher can calculate the types of Lagrangians that conserve X through a continuous symmetry. Due to Noether's theorem, the properties of these Lagrangians provide further criteria to understand the implications and judge the fitness of the new theory.
There are numerous versions of Noether's theorem, with varying degrees of generality. There are natural quantum counterparts of this theorem, expressed in the Ward–Takahashi identities. Generalizations of Noether's theorem to superspaces also exist.
Informal statement of the theorem
All fine technical points aside, Noether's theorem can be stated informally: if a system has a continuous symmetry property, then there are corresponding quantities whose values are conserved in time.
A more sophisticated version of the theorem involving fields states that to every continuous symmetry generated by local actions there corresponds a conserved current.
The word "symmetry" in the above statement refers more precisely to the covariance of the form that a physical law takes with respect to a one-dimensional Lie group of transformations satisfying certain technical criteria. The conservation law of a physical quantity is usually expressed as a continuity equation.
The formal proof of the theorem utilizes the condition of invariance to derive an expression for a current associated with a conserved physical quantity. In modern terminology, the conserved quantity is called the Noether charge, while the flow carrying that charge is called the Noether current. The Noether current is defined up to a solenoidal (divergenceless) vector field.
In the context of gravitation, Felix Klein's statement of Noether's theorem for action I stipulates for the invariants:
Brief illustration and overview of the concept
The main idea behind Noether's theorem is most easily illustrated by a system with one coordinate and a continuous symmetry (gray arrows on the diagram).
Consider any trajectory (bold on the diagram) that satisfies the system's laws of motion. That is, the action governing this system is stationary on this trajectory, i.e. does not change under any local variation of the trajectory. In particular it would not change under a variation that applies the symmetry flow on a time segment and is motionless outside that segment. To keep the trajectory continuous, we use "buffering" periods of small time to transition between the segments gradually.
The total change in the action now comprises the changes brought by every interval in play. Parts where the variation itself vanishes, i.e. outside the chosen time segment and its buffers, bring no change in the action. The middle part does not change the action either, because its transformation is a symmetry and thus preserves the Lagrangian and the action. The only remaining parts are the "buffering" pieces. In these regions both the coordinate and the velocity change, but the velocity changes appreciably, while the change in the coordinate is negligible by comparison since the time span of the buffering is small (taken to the limit of 0). So the buffering regions contribute mostly through their "slanting" of the velocity.
That changes the Lagrangian by , which integrates to
These last terms, evaluated around the two endpoints of the segment, should cancel each other in order to make the total change in the action zero, as would be expected if the trajectory is a solution. That is
meaning the quantity is conserved, which is the conclusion of Noether's theorem. For instance, if pure translations of the coordinate q by a constant are the symmetry, then the conserved quantity becomes just the canonical momentum.
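The same conclusion can be checked symbolically. The sketch below (an added illustration, not part of the original derivation) uses two particles whose potential depends only on their separation, so a simultaneous translation of both coordinates is a symmetry; the Euler–Lagrange equations then make the total canonical momentum time-independent.

import sympy as sp

t = sp.symbols("t")
m1, m2 = sp.symbols("m1 m2", positive=True)
q1, q2 = sp.Function("q1")(t), sp.Function("q2")(t)
V = sp.Function("V")                                  # arbitrary pair potential

L = sp.Rational(1, 2) * m1 * sp.diff(q1, t) ** 2 \
    + sp.Rational(1, 2) * m2 * sp.diff(q2, t) ** 2 \
    - V(q1 - q2)                                      # depends only on the separation

# Euler-Lagrange equations: d/dt(dL/dq_i') = dL/dq_i on solutions.
p_total = sp.diff(L, sp.diff(q1, t)) + sp.diff(L, sp.diff(q2, t))
dp_dt_on_shell = sp.diff(L, q1) + sp.diff(L, q2)      # equals d(p_total)/dt on solutions

print(sp.simplify(dp_dt_on_shell))                    # prints 0: total momentum conserved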
More general cases follow the same idea:
Historical context
A conservation law states that some quantity X in the mathematical description of a system's evolution remains constant throughout its motion – it is an invariant. Mathematically, the rate of change of X (its derivative with respect to time) is zero,
Such quantities are said to be conserved; they are often called constants of motion (although motion per se need not be involved, just evolution in time). For example, if the energy of a system is conserved, its energy is invariant at all times, which imposes a constraint on the system's motion and may help in solving for it. Aside from insights that such constants of motion give into the nature of a system, they are a useful calculational tool; for example, an approximate solution can be corrected by finding the nearest state that satisfies the suitable conservation laws.
The earliest constants of motion discovered were momentum and kinetic energy, which were proposed in the 17th century by René Descartes and Gottfried Leibniz on the basis of collision experiments, and refined by subsequent researchers. Isaac Newton was the first to enunciate the conservation of momentum in its modern form, and showed that it was a consequence of Newton's laws of motion. According to general relativity, the conservation laws of linear momentum, energy and angular momentum are only exactly true globally when expressed in terms of the sum of the stress–energy tensor (non-gravitational stress–energy) and the Landau–Lifshitz stress–energy–momentum pseudotensor (gravitational stress–energy). The local conservation of non-gravitational linear momentum and energy in a free-falling reference frame is expressed by the vanishing of the covariant divergence of the stress–energy tensor. Another important conserved quantity, discovered in studies of the celestial mechanics of astronomical bodies, is the Laplace–Runge–Lenz vector.
In the late 18th and early 19th centuries, physicists developed more systematic methods for discovering invariants. A major advance came in 1788 with the development of Lagrangian mechanics, which is related to the principle of least action. In this approach, the state of the system can be described by any type of generalized coordinates q; the laws of motion need not be expressed in a Cartesian coordinate system, as was customary in Newtonian mechanics. The action is defined as the time integral I of a function known as the Lagrangian L
where the dot over q signifies the rate of change of the coordinates q,
Hamilton's principle states that the physical path q(t)—the one actually taken by the system—is a path for which infinitesimal variations in that path cause no change in I, at least up to first order. This principle results in the Euler–Lagrange equations,
Thus, if one of the coordinates, say qk, does not appear in the Lagrangian, the right-hand side of the equation is zero, and the left-hand side requires that
where the momentum
is conserved throughout the motion (on the physical path).
Thus, the absence of the ignorable coordinate qk from the Lagrangian implies that the Lagrangian is unaffected by changes or transformations of qk; the Lagrangian is invariant, and is said to exhibit a symmetry under such transformations. This is the seed idea generalized in Noether's theorem.
Several alternative methods for finding conserved quantities were developed in the 19th century, especially by William Rowan Hamilton. For example, he developed a theory of canonical transformations which allowed changing coordinates so that some coordinates disappeared from the Lagrangian, as above, resulting in conserved canonical momenta. Another approach, and perhaps the most efficient for finding conserved quantities, is the Hamilton–Jacobi equation.
Emmy Noether's work on the invariance theorem began in 1915 when she was helping Felix Klein and David Hilbert with their work related to Albert Einstein's theory of general relativity. By March 1918 she had most of the key ideas for the paper which would be published later in the year.
Mathematical expression
Simple form using perturbations
The essence of Noether's theorem is generalizing the notion of ignorable coordinates.
One can assume that the Lagrangian L defined above is invariant under small perturbations (warpings) of the time variable t and the generalized coordinates q. One may write
where the perturbations δt and δq are both small, but variable. For generality, assume there are (say) N such symmetry transformations of the action, i.e. transformations leaving the action unchanged; labelled by an index r = 1, 2, 3, ..., N.
Then the resultant perturbation can be written as a linear sum of the individual types of perturbations,
where εr are infinitesimal parameter coefficients corresponding to each:
generator Tr of time evolution, and
generator Qr of the generalized coordinates.
For translations, Qr is a constant with units of length; for rotations, it is an expression linear in the components of q, and the parameters make up an angle.
Using these definitions, Noether showed that the N quantities
are conserved (constants of motion).
Examples
I. Time invariance
For illustration, consider a Lagrangian that does not depend on time, i.e., that is invariant (symmetric) under changes t → t + δt, without any change in the coordinates q. In this case, N = 1, T = 1 and Q = 0; the corresponding conserved quantity is the total energy H
II. Translational invariance
Consider a Lagrangian which does not depend on an ("ignorable", as above) coordinate qk; so it is invariant (symmetric) under changes qk → qk + δqk. In that case, N = 1, T = 0, and Qk = 1; the conserved quantity is the corresponding linear momentum pk
In special and general relativity, these two conservation laws can be expressed either globally (as it is done above), or locally as a continuity equation. The global versions can be united into a single global conservation law: the conservation of the energy-momentum 4-vector. The local versions of energy and momentum conservation (at any point in space-time) can also be united, into the conservation of a quantity defined locally at the space-time point: the stress–energy tensor (this will be derived in the next section).
III. Rotational invariance
The conservation of the angular momentum L = r × p is analogous to its linear momentum counterpart. It is assumed that the symmetry of the Lagrangian is rotational, i.e., that the Lagrangian does not depend on the absolute orientation of the physical system in space. For concreteness, assume that the Lagrangian does not change under small rotations of an angle δθ about an axis n; such a rotation transforms the Cartesian coordinates by the equation
Since time is not being transformed, T = 0, and N = 1. Taking δθ as the ε parameter and the Cartesian coordinates r as the generalized coordinates q, the corresponding Q variables are given by
Then Noether's theorem states that the following quantity is conserved,
In other words, the component of the angular momentum L along the n axis is conserved. And if n is arbitrary, i.e., if the system is insensitive to any rotation, then every component of L is conserved; in short, angular momentum is conserved.
Field theory version
Although useful in its own right, the version of Noether's theorem just given is a special case of the general version derived in 1915. To give the flavor of the general theorem, a version of Noether's theorem for continuous fields in four-dimensional space–time is now given. Since field theory problems are more common in modern physics than mechanics problems, this field theory version is the most commonly used (or most often implemented) version of Noether's theorem.
Let there be a set of differentiable fields defined over all space and time; for example, the temperature would be representative of such a field, being a number defined at every place and time. The principle of least action can be applied to such fields, but the action is now an integral over space and time
(the theorem can be further generalized to the case where the Lagrangian depends on up to the nth derivative, and can also be formulated using jet bundles).
A continuous transformation of the fields can be written infinitesimally as
where is in general a function that may depend on both and . The condition for to generate a physical symmetry is that the action is left invariant. This will certainly be true if the Lagrangian density is left invariant, but it will also be true if the Lagrangian changes by a divergence,
since the integral of a divergence becomes a boundary term according to the divergence theorem. A system described by a given action might have multiple independent symmetries of this type, indexed by r = 1, 2, ..., N, so the most general symmetry transformation would be written as
with the consequence
For such systems, Noether's theorem states that there are conserved current densities
(where the dot product is understood to contract the field indices, not the spacetime or symmetry indices).
In such cases, the conservation law is expressed in a four-dimensional way
which expresses the idea that the amount of a conserved quantity within a sphere cannot change unless some of it flows out of the sphere. For example, electric charge is conserved; the amount of charge within a sphere cannot change unless some of the charge leaves the sphere.
For illustration, consider a physical system of fields that behaves the same under translations in time and space, as considered above; in other words, the Lagrangian density is constant in its third argument (the explicit position dependence). In that case, N = 4, one for each dimension of space and time. An infinitesimal translation in space (with the Kronecker delta picking out the translated direction) affects the fields as follows: relabelling the coordinates is equivalent to leaving the coordinates in place while translating the field itself, which in turn is equivalent to transforming the field by replacing its value at each point with the value at the point "behind" it, which would be mapped onto it by the infinitesimal displacement under consideration. Since this is infinitesimal, we may write this transformation as
The Lagrangian density transforms in the same way, , so
and thus Noether's theorem corresponds to the conservation law for the stress–energy tensor Tμν, where we have used a spacetime index in place of the symmetry index r. To wit, by using the expression given earlier, and collecting the four conserved currents (one for each translation direction) into a tensor, Noether's theorem gives
with
(one index was relabelled at an intermediate step to avoid a conflict of indices). (However, the tensor obtained in this way may differ from the symmetric tensor used as the source term in general relativity; see Canonical stress–energy tensor.)
The conservation of electric charge, by contrast, can be derived by considering Ψ linear in the fields φ rather than in the derivatives. In quantum mechanics, the probability amplitude ψ(x) of finding a particle at a point x is a complex field φ, because it ascribes a complex number to every point in space and time. The probability amplitude itself is physically unmeasurable; only the probability p = |ψ|2 can be inferred from a set of measurements. Therefore, the system is invariant under transformations of the ψ field and its complex conjugate field ψ* that leave |ψ|2 unchanged, such as
a complex rotation. In the limit when the phase θ becomes infinitesimally small, δθ, it may be taken as the parameter ε, while the Ψ are equal to iψ and −iψ*, respectively. A specific example is the Klein–Gordon equation, the relativistically correct version of the Schrödinger equation for spinless particles, which has the Lagrangian density
In this case, Noether's theorem states that the conserved (∂ ⋅ j = 0) current equals
which, when multiplied by the charge on that species of particle, equals the electric current density due to that type of particle. This "gauge invariance" was first noted by Hermann Weyl, and is one of the prototype gauge symmetries of physics.
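Written out explicitly, in one common convention with ħ = c = 1 (signs and normalizations vary between texts), the Lagrangian density and the corresponding Noether current for the complex Klein–Gordon field take the form:

\mathcal{L} = \partial_\mu \psi^* \, \partial^\mu \psi - m^2 \psi^* \psi,
\qquad
j^\mu = i\left( \psi^* \, \partial^\mu \psi - \psi \, \partial^\mu \psi^* \right),
\qquad
\partial_\mu j^\mu = 0 .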
Derivations
One independent variable
Consider the simplest case, a system with one independent variable, time. Suppose the dependent variables q are such that the action integral
is invariant under brief infinitesimal variations in the dependent variables. In other words, they satisfy the Euler–Lagrange equations
And suppose that the integral is invariant under a continuous symmetry. Mathematically such a symmetry is represented as a flow, φ, which acts on the variables as follows
where ε is a real variable indicating the amount of flow, and T is a real constant (which could be zero) indicating how much the flow shifts time.
The action integral flows to
which may be regarded as a function of ε. Calculating the derivative at ε = 0 and using Leibniz's rule, we get
Notice that the Euler–Lagrange equations imply
Substituting this into the previous equation, one gets
Again using the Euler–Lagrange equations we get
Substituting this into the previous equation, one gets
From which one can see that
is a constant of the motion, i.e., it is a conserved quantity. Since φ[q, 0] = q, the conserved quantity simplifies to
To avoid excessive complication of the formulas, this derivation assumed that the flow does not change as time passes. The same result can be obtained in the more general case.
Field-theoretic derivation
Noether's theorem may also be derived for tensor fields where the index A ranges over the various components of the various tensor fields. These field quantities are functions defined over a four-dimensional space whose points are labeled by coordinates xμ where the index μ ranges over time (μ = 0) and three spatial dimensions (μ = 1, 2, 3). These four coordinates are the independent variables; and the values of the fields at each event are the dependent variables. Under an infinitesimal transformation, the variation in the coordinates is written
whereas the transformation of the field variables is expressed as
By this definition, the field variations result from two factors: intrinsic changes in the field themselves and changes in coordinates, since the transformed field αA depends on the transformed coordinates ξμ. To isolate the intrinsic changes, the field variation at a single point xμ may be defined
If the coordinates are changed, the boundary of the region of space–time over which the Lagrangian is being integrated also changes; the original boundary and its transformed version are denoted as Ω and Ω’, respectively.
Noether's theorem begins with the assumption that a specific transformation of the coordinates and field variables does not change the action, which is defined as the integral of the Lagrangian density over the given region of spacetime. Expressed mathematically, this assumption may be written as
where the comma subscript indicates a partial derivative with respect to the coordinate(s) that follows the comma, e.g.
Since ξ is a dummy variable of integration, and since the change in the boundary Ω is infinitesimal by assumption, the two integrals may be combined using the four-dimensional version of the divergence theorem into the following form
The difference in Lagrangians can be written to first-order in the infinitesimal variations as
However, because the variations are defined at the same point as described above, the variation and the derivative can be done in reverse order; they commute
Using the Euler–Lagrange field equations
the difference in Lagrangians can be written neatly as
Thus, the change in the action can be written as
Since this holds for any region Ω, the integrand must be zero
For any combination of the various symmetry transformations, the perturbation can be written
where is the Lie derivative of
in the Xμ direction. When is a scalar or ,
These equations imply that the field variation taken at one point equals
Differentiating the above divergence with respect to ε at ε = 0 and changing the sign yields the conservation law
where the conserved current equals
Manifold/fiber bundle derivation
Suppose we have an n-dimensional oriented Riemannian manifold M and a target manifold T. Let the configuration space be the set of smooth functions from M to T. (More generally, we can have smooth sections of a fiber bundle over M.)
Examples of this M in physics include:
In classical mechanics, in the Hamiltonian formulation, M is the one-dimensional manifold of the real numbers, representing time, and the target space is the cotangent bundle of the space of generalized positions.
In field theory, M is the spacetime manifold and the target space is the set of values the fields can take at any given point. For example, if there are m real-valued scalar fields, then the target manifold is m-dimensional real space. If the field is a real vector field, then the target manifold is isomorphic to three-dimensional real space.
Now suppose there is a functional
called the action. (It takes values into the real numbers rather than the complex numbers; this is for physical reasons, and is unimportant for this proof.)
To get to the usual version of Noether's theorem, we need additional restrictions on the action. We assume the action is the integral over M of a function
called the Lagrangian density, depending on the field, its derivative, and the position. In other words, for a field in the configuration space,
Suppose we are given boundary conditions, i.e., a specification of the value of the field at the boundary if M is compact, or some limit on the field as x approaches ∞. Then the subspace of the configuration space consisting of functions such that all functional derivatives of the action vanish, that is:
and that satisfies the given boundary conditions, is the subspace of on shell solutions. (See principle of stationary action)
Now, suppose we have an infinitesimal transformation on the configuration space, generated by a functional derivation Q such that
for all compact submanifolds N or in other words,
for all x, where we set
If this holds on shell and off shell, we say Q generates an off-shell symmetry. If this only holds on shell, we say Q generates an on-shell symmetry. Then, we say Q is a generator of a one parameter symmetry Lie group.
Now, for any N, because of the Euler–Lagrange theorem, on shell (and only on-shell), we have
Since this is true for any N, we have
But this is the continuity equation for the current defined by:
which is called the Noether current associated with the symmetry. The continuity equation tells us that if we integrate this current over a space-like slice, we get a conserved quantity called the Noether charge (provided, of course, if M is noncompact, the currents fall off sufficiently fast at infinity).
Comments
Noether's theorem is an on shell theorem: it relies on use of the equations of motion—the classical path. It reflects the relation between the boundary conditions and the variational principle. Assuming no boundary terms in the action, Noether's theorem implies that
The quantum analogs of Noether's theorem involving expectation values (e.g., ) probing off shell quantities as well are the Ward–Takahashi identities.
Generalization to Lie algebras
Suppose we have two symmetry derivations Q1 and Q2. Then, [Q1, Q2] is also a symmetry derivation. Let us see this explicitly. Let us say
and
Then,
where f12 = Q1[f2μ] − Q2[f1μ]. So,
This shows we can extend Noether's theorem to larger Lie algebras in a natural way.
Generalization of the proof
This applies to any local symmetry derivation Q satisfying QS ≈ 0, and also to more general local functional differentiable actions, including ones where the Lagrangian depends on higher derivatives of the fields. Let ε be any arbitrary smooth function of the spacetime (or time) manifold such that the closure of its support is disjoint from the boundary. ε is a test function. Then, because of the variational principle (which does not apply to the boundary, by the way), the derivation distribution q generated by q[ε][Φ(x)] = ε(x)Q[Φ(x)] satisfies q[ε][S] ≈ 0 for every ε, or more compactly, q(x)[S] ≈ 0 for all x not on the boundary (but remember that q(x) is a shorthand for a derivation distribution, not a derivation parametrized by x in general). This is the generalization of Noether's theorem.
To see how the generalization is related to the version given above, assume that the action is the spacetime integral of a Lagrangian that only depends on and its first derivatives. Also, assume
Then,
for all ε.
More generally, if the Lagrangian depends on higher derivatives, then
Examples
Example 1: Conservation of energy
Consider the specific case of a Newtonian particle of mass m, with coordinate x, moving under the influence of a potential V and coordinatized by time t. The action, S, is:
The first term in the brackets is the kinetic energy of the particle, while the second is its potential energy. Consider the generator of time translations Q = d/dt; in other words, Q acts on the coordinate by differentiating it with respect to time. The coordinate x has an explicit dependence on time, whilst V does not; consequently:
so we can set
Then,
The right hand side is the energy, and Noether's theorem states that (i.e. the principle of conservation of energy is a consequence of invariance under time translations).
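A quick symbolic check of this statement (an added sketch, not part of the original example): substitute Newton's equation of motion into the time derivative of the kinetic-plus-potential energy and confirm that it vanishes.

import sympy as sp

t = sp.symbols("t")
m = sp.symbols("m", positive=True)
x = sp.Function("x")(t)
V = sp.Function("V")

E = sp.Rational(1, 2) * m * sp.diff(x, t) ** 2 + V(x)      # kinetic + potential energy
dE_dt = sp.diff(E, t)

# Impose Newton's equation of motion, m*x'' = -dV/dx, by direct substitution.
dE_dt_on_shell = dE_dt.subs(sp.diff(x, t, 2), -sp.diff(V(x), x) / m)
print(sp.simplify(dE_dt_on_shell))                          # prints 0: energy is conserved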
More generally, if the Lagrangian does not depend explicitly on time, the quantity
(called the Hamiltonian) is conserved.
Example 2: Conservation of center of momentum
Still considering 1-dimensional time, let
for Newtonian particles where the potential only depends pairwise upon the relative displacement.
For Q, consider the generator of Galilean transformations (i.e. a change in the frame of reference). In other words,
And
This has the form of so we can set
Then,
where is the total momentum, M is the total mass and is the center of mass. Noether's theorem states:
Example 3: Conformal transformation
Both examples 1 and 2 are over a 1-dimensional manifold (time). An example involving spacetime is a conformal transformation of a massless real scalar field with a quartic potential in (3 + 1)-Minkowski spacetime.
For Q, consider the generator of a spacetime rescaling. In other words,
The second term on the right hand side is due to the "conformal weight" of the field. And
This has the form of
(where we have performed a change of dummy indices) so set
Then
Noether's theorem states that (as one may explicitly check by substituting the Euler–Lagrange equations into the left hand side).
If one tries to find the Ward–Takahashi analog of this equation, one runs into a problem because of anomalies.
Applications
Application of Noether's theorem allows physicists to gain powerful insights into any general theory in physics, by just analyzing the various transformations that would make the form of the laws involved invariant. For example:
Invariance of an isolated system with respect to spatial translation (in other words, that the laws of physics are the same at all locations in space) gives the law of conservation of linear momentum (which states that the total linear momentum of an isolated system is constant)
Invariance of an isolated system with respect to time translation (i.e. that the laws of physics are the same at all points in time) gives the law of conservation of energy (which states that the total energy of an isolated system is constant)
Invariance of an isolated system with respect to rotation (i.e., that the laws of physics are the same with respect to all angular orientations in space) gives the law of conservation of angular momentum (which states that the total angular momentum of an isolated system is constant)
Invariance of an isolated system with respect to Lorentz boosts (i.e., that the laws of physics are the same with respect to all inertial reference frames) gives the center-of-mass theorem (which states that the center-of-mass of an isolated system moves at a constant velocity).
In quantum field theory, the analog to Noether's theorem, the Ward–Takahashi identity, yields further conservation laws, such as the conservation of electric charge from the invariance with respect to a change in the phase factor of the complex field of the charged particle and the associated gauge of the electric potential and vector potential.
The Noether charge is also used in calculating the entropy of stationary black holes.
See also
Conservation law
Charge (physics)
Gauge symmetry
Gauge symmetry (mathematics)
Invariant (physics)
Goldstone boson
Symmetry (physics)
References
Further reading
External links
(Original in Gott. Nachr. 1918:235–257)
Noether's Theorem at MathPages.
Articles containing proofs
Calculus of variations
Conservation laws
Concepts in physics
Eponymous theorems of physics
Partial differential equations
Physics theorems
Quantum field theory
Symmetry | 0.791931 | 0.998171 | 0.790483 |
Energy | Energy is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed; matter and energy may also be converted to one another. The unit of measurement for energy in the International System of Units (SI) is the joule (J).
Forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, the internal energy contained within a thermodynamic system, and rest energy associated with an object's rest mass.
All living organisms constantly take in and release energy. The Earth's climate and ecosystems processes are driven primarily by radiant energy from the sun. The energy industry provides the energy required for human civilization to function, which it obtains from energy resources such as fossil fuels, nuclear fuel, renewable energy, and geothermal energy.
Forms
The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the object's components – while potential energy reflects the potential of an object to have motion, generally being based upon the object's position within a field or what is stored within the field itself.
While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as its own form. For example, the sum of translational and rotational kinetic and potential energy within a system is referred to as mechanical energy, whereas nuclear energy refers to the combined potentials within an atomic nucleus from either the nuclear force or the weak force, among other examples.
History
The word energy derives from the Ancient Greek ἐνέργεια (enérgeia), which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.
In the late 17th century, Gottfried Leibniz proposed the idea of the vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the motions of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. Writing in the early 18th century, Émilie du Châtelet proposed the concept of conservation of energy in the marginalia of her French language translation of Newton's Principia Mathematica, which represented the first formulation of a conserved measurable quantity that was distinct from momentum, and which would later be called "energy".
In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.
These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
Units of measure
In the International System of Units (SI), the unit of energy is the joule. It is a derived unit that is equal to the energy expended, or work done, in applying a force of one newton through a distance of one metre. However energy can also be expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.
The SI unit of power, defined as energy per unit of time, is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.
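For concreteness, a few of these conversion factors expressed in SI units (standard values; the calorie here is the thermochemical calorie, and the BTU value is approximate):

# Common energy units expressed in joules.
joule_per = {
    "watt-hour":      3600.0,            # 1 W * 3600 s
    "kilowatt-hour":  3.6e6,
    "calorie (thermochemical)": 4.184,
    "kilocalorie":    4184.0,
    "BTU":            1055.06,           # International Table BTU, approximately
    "electronvolt":   1.602176634e-19,
    "erg":            1e-7,
    "foot-pound":     1.3558179483314004,
}
for unit, value in joule_per.items():
    print(f"1 {unit:26s} = {value:.6g} J")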
In 1843, English physicist James Prescott Joule, namesake of the unit of measure, discovered that the gravitational potential energy lost by a descending weight, attached via a string to a paddle immersed in water, was equal to the internal energy gained by the water through friction with the paddle.
Scientific use
Classical mechanics
In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.
Work, a function of energy, is force times distance:
$$W = \int_C \mathbf{F} \cdot \mathrm{d}\mathbf{s}$$
This says that the work is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
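The line integral above can also be evaluated numerically. The sketch below (an illustration added here, with an assumed mass and path) computes the work done by uniform gravity along a parametrized arc; because gravity is conservative, the result depends only on the change in height.

```python
# Minimal numerical sketch of W = integral over C of F . ds for uniform gravity.
import numpy as np

m, g = 2.0, 9.81                      # assumed mass (kg) and gravitational acceleration (m/s^2)
F = np.array([0.0, -m * g])           # constant downward force, N

t = np.linspace(0.0, 1.0, 10_001)     # path parameter
path = np.column_stack((3.0 * t, 5.0 * t * (1.0 - t)))   # an arbitrary arc in the x-y plane, m

ds = np.diff(path, axis=0)            # small displacement vectors along the path
work = float(np.sum(ds @ F))          # sum of F . ds over the path

# The arc starts and ends at height 0, so the net work done by gravity is ~0 J.
print(f"Work done by gravity along the arc: {work:.6f} J")
```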
The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have direct analogs in nonrelativistic quantum mechanics.
Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).
Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.
Chemistry
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction have sometimes more but usually less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse.
Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann's population factor e−E/kT; that is, the probability of a molecule to have energy greater than or equal to E at a given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.
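The temperature dependence of the Boltzmann population factor can be made concrete with a short numerical sketch (added here for illustration; the 0.5 eV activation energy is an assumed value):

```python
# Minimal sketch: the Boltzmann population factor e^(-E/kT) at several temperatures.
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
E_a = 0.5 * 1.602e-19     # assumed activation energy of 0.5 eV, in joules

for T in (250, 300, 350, 400):        # temperatures in kelvin
    factor = math.exp(-E_a / (k_B * T))
    print(f"T = {T:3d} K  ->  e^(-E/kT) = {factor:.3e}")
# A modest temperature rise increases the fraction of sufficiently
# energetic molecules by orders of magnitude, as the Arrhenius equation implies.
```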
Biology
In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or organelle of a biological organism. Energy used in respiration is stored in substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts.
For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.
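A worked version of this bookkeeping is sketched below (added for illustration; the power figures are the ones quoted above):

```python
# Minimal sketch: express a power in human equivalents (H-e),
# using the 80 W average metabolic rate quoted in the text.
BASAL_HUMAN_POWER_W = 80.0

def human_equivalent(power_watts: float) -> float:
    """Express a power in multiples of one human's average output."""
    return power_watts / BASAL_HUMAN_POWER_W

for item, watts in [("100 W light bulb", 100.0),
                    ("one official horsepower", 746.0),
                    ("fit human, ~1 hour effort", 300.0)]:
    print(f"{item}: {human_equivalent(watts):.2f} H-e")
```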
Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and oxygen. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action.
All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as food molecules, mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidized to carbon dioxide and water in the mitochondria
C6H12O6 + 6 O2 → 6 CO2 + 6 H2O
C57H110O6 + 81.5 O2 → 57 CO2 + 55 H2O
and some of the energy is used to convert ADP into ATP:
ADP + HPO42− → ATP + H2O
The rest of the chemical energy of the carbohydrate or fat are converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:
gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
Daily food intake of a normal adult: 6–8 MJ
It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology. As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat.
Earth sciences
In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy.
Sunlight is the main input to Earth's energy budget which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example when) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, like those generated by volcanic events for example. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement.
In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms).
Cosmology
In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen).
The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.
Quantum mechanics
In quantum mechanics, energy is defined in terms of the energy operator
(Hamiltonian) as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: $E = h\nu$ (where $h$ is the Planck constant and $\nu$ the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
Relativity
When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:
$$E_0 = m_0 c^2,$$
where
m0 is the rest mass of the body,
c is the speed of light in vacuum,
$E_0$ is the rest energy.
For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons.
In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.
Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws.
In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts).
Transformation
Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work).
Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. The Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that itself (since it still contains the same total energy even in different forms) but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.
There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
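As an illustration of the Carnot limit mentioned above, the following minimal sketch (with assumed reservoir temperatures) computes the maximum fraction of heat that a cyclic engine can convert into work:

```python
# Minimal sketch: Carnot upper bound on the efficiency of a cyclic heat engine.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum efficiency of a heat engine between two reservoirs (temperatures in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# Assumed temperatures: a steam-plant boiler (~823 K) and the environment (~293 K).
print(f"Carnot limit: {carnot_efficiency(823.0, 293.0):.1%}")   # about 64%
```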
Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the Solar System and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time.
Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.
Energy is also transferred from potential energy to kinetic energy and then back to potential energy constantly. This is referred to as conservation of energy. In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
$$E_{p,\text{initial}} + E_{k,\text{initial}} = E_{p,\text{final}} + E_{k,\text{final}}$$
The equation can then be simplified further since $E_p = mgh$ (mass times acceleration due to gravity times the height) and $E_k = \tfrac{1}{2}mv^2$ (half mass times velocity squared). Then the total amount of energy can be found by adding $E_p + E_k = E_{\text{total}}$.
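The following minimal sketch (with an assumed mass, length and release angle) checks this bookkeeping for an idealized frictionless pendulum: at every angle of the swing the sum of potential and kinetic energy equals the energy at release.

```python
# Minimal sketch: E_p + E_k stays constant for an ideal (frictionless) pendulum.
import math

m, g, L = 1.0, 9.81, 2.0          # assumed mass (kg), gravity (m/s^2), rod length (m)
theta_max = math.radians(30.0)    # assumed release angle

h_max = L * (1.0 - math.cos(theta_max))
E_total = m * g * h_max           # released from rest: all energy is potential

for theta_deg in (30, 20, 10, 0):
    theta = math.radians(theta_deg)
    h = L * (1.0 - math.cos(theta))
    E_p = m * g * h                # potential energy m*g*h
    E_k = E_total - E_p            # conservation fixes the kinetic part (1/2 m v^2)
    v = math.sqrt(2.0 * E_k / m)
    print(f"theta = {theta_deg:2d} deg: E_p = {E_p:.3f} J, E_k = {E_k:.3f} J, v = {v:.2f} m/s")
```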
Conservation of energy and mass in transformation
Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula E = mc², derived by Albert Einstein (1905) quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information).
Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since $c^2$ is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (about $9 \times 10^{16}$ joules, equivalent to roughly 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons.
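The magnitude quoted above can be checked directly (a minimal sketch added for illustration; the megaton conversion uses the standard definition of 1 megaton of TNT as 4.184×10¹⁵ J):

```python
# Minimal sketch: rest energy of 1 kg of matter, E = m * c^2.
c = 299_792_458.0                      # speed of light in vacuum, m/s
m = 1.0                                # kilograms

E = m * c**2                           # joules
TNT_MEGATON_J = 4.184e15               # 1 megaton of TNT, in joules (standard definition)

print(f"E = {E:.3e} J  ~= {E / TNT_MEGATON_J:.1f} megatons of TNT")   # ~9e16 J, ~21 Mt
```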
Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws.
Reversible and non-reversible transformations
Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above.
In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomization in a crystal).
As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease.
Conservation of energy
The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.
While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system.
Richard Feynman said during a 1961 lecture:
Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.
This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval (though this is practically significant only for very short time intervals). The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured.
Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.
In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by
$$\Delta E \, \Delta t \ge \frac{\hbar}{2},$$
which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).
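A short numerical sketch (added for illustration; the lifetimes are assumed, representative values) shows the scale of the energy uncertainty implied by a finite lifetime:

```python
# Minimal sketch: energy uncertainty (natural linewidth) implied by a finite lifetime,
# using Delta E ~ hbar / (2 * Delta t).
hbar = 1.054571817e-34      # reduced Planck constant, J*s
eV = 1.602176634e-19        # joules per electronvolt

def energy_uncertainty_eV(lifetime_s: float) -> float:
    return hbar / (2.0 * lifetime_s) / eV

for name, tau in [("typical atomic excited state", 1e-8),
                  ("short-lived particle resonance", 1e-23)]:
    print(f"{name} (tau = {tau:.0e} s): Delta E >= {energy_uncertainty_eV(tau):.3e} eV")
```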
In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum. The exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for the Van der Waals force and some other observable phenomena.
Energy transfer
Closed systems
Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat. Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy, tidal interactions, and the conductive transfer of thermal energy.
Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:
$$\Delta E = W + Q,$$
where $\Delta E$ is the amount of energy transferred, $W$ represents the work done on or by the system, and $Q$ represents the heat flow into or out of the system. As a simplification, the heat term, $Q$, can sometimes be ignored, especially for fast processes involving gases, which are poor conductors of heat, or when the thermal efficiency of the transfer is high. For such adiabatic processes,
$$\Delta E = W.$$
This simplified equation is the one used to define the joule, for example.
Open systems
Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (this process is illustrated by injection of an air-fuel mixture into a car engine, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by $E_{\text{matter}}$, one may write
$$\Delta E = W + Q + E_{\text{matter}}.$$
Thermodynamics
Internal energy
Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.
First law of thermodynamics
The first law of thermodynamics asserts that the total energy of a system and its surroundings (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as
$$\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V,$$
where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and its change dS is positive when heat is added to the system), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).
This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat and PV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by
$$\Delta U = Q + W,$$
where $Q$ is the heat supplied to the system and $W$ is the work applied to the system.
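As a trivial worked example of this sign convention (added for illustration, with assumed numbers): a gas that receives 150 J of heat while 40 J of work is done on it gains 190 J of internal energy.

```python
# Minimal sketch: Delta U = Q + W, with heat supplied to the system and
# work done on the system both counted as positive.
def internal_energy_change(heat_in_J: float, work_on_system_J: float) -> float:
    return heat_in_J + work_on_system_J

print(f"Delta U = {internal_energy_change(150.0, 40.0):.0f} J")   # 190 J
```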
Equipartition of energy
The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over a whole cycle, or over many cycles, average energy is equally split between kinetic and potential. This is an example of the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom, on average.
This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is part of the second law of thermodynamics. The second law of thermodynamics is simple only for systems which are near or in a physical equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still debatable. One of the guiding principles for these systems is the principle of maximum entropy production. It states that nonequilibrium systems behave in such a way as to maximize their entropy production.
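The statement about the harmonic oscillator can be verified numerically; the sketch below (with an assumed mass, spring constant and amplitude) averages the kinetic and potential energies over one full cycle and finds them equal.

```python
# Minimal sketch: for a harmonic oscillator, the cycle-averaged kinetic and
# potential energies are equal (each is half of the constant total energy).
import numpy as np

m, k, A = 1.0, 4.0, 0.5                                 # assumed mass, spring constant, amplitude
omega = np.sqrt(k / m)

t = np.linspace(0.0, 2.0 * np.pi / omega, 100_001)      # one full period
x = A * np.cos(omega * t)
v = -A * omega * np.sin(omega * t)

kinetic = 0.5 * m * v**2
potential = 0.5 * k * x**2

print(f"<E_kin> = {kinetic.mean():.4f} J, <E_pot> = {potential.mean():.4f} J")
print(f"total   = {(kinetic + potential).mean():.4f} J (constant in time)")
```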
See also
Combustion
Efficient energy use
Energy democracy
Energy crisis
Energy recovery
Energy recycling
Index of energy articles
Index of wave articles
List of low-energy building techniques
Orders of magnitude (energy)
Power station
Sustainable energy
Transfer energy
Waste-to-energy
Waste-to-energy plant
Zero-energy building
Notes
References
Further reading
The Biosphere (A Scientific American Book), San Francisco, California, W. H. Freeman and Company, 1970. This book, originally a 1970 Scientific American issue, covers virtually every major concern and concept since debated regarding materials and energy resources, population trends, and environmental degradation.
Energy and Power (A Scientific American Book), San Francisco, California, W. H. Freeman and Company, 1971.
Santos, Gildo M. "Energy in Brazil: a historical overview," The Journal of Energy History (2018), online.
Journals
The Journal of Energy History / Revue d'histoire de l'énergie (JEHRHE), 2018–
External links
Differences between Heat and Thermal energy – BioCab
Main topic articles
Nature
Universe
Scalar physical quantities | 0.790739 | 0.999426 | 0.790285 |
Einstein field equations | In the general theory of relativity, the Einstein field equations (EFE; also known as Einstein's equations) relate the geometry of spacetime to the distribution of matter within it.
The equations were published by Albert Einstein in 1915 in the form of a tensor equation which related the local spacetime curvature (expressed by the Einstein tensor) with the local energy, momentum and stress within that spacetime (expressed by the stress–energy tensor).
Analogously to the way that electromagnetic fields are related to the distribution of charges and currents via Maxwell's equations, the EFE relate the spacetime geometry to the distribution of mass–energy, momentum and stress, that is, they determine the metric tensor of spacetime for a given arrangement of stress–energy–momentum in the spacetime. The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of nonlinear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation.
As well as implying local energy–momentum conservation, the EFE reduce to Newton's law of gravitation in the limit of a weak gravitational field and velocities that are much less than the speed of light.
Exact solutions for the EFE can only be found under simplifying assumptions such as symmetry. Special classes of exact solutions are most often studied since they model many gravitational phenomena, such as rotating black holes and the expanding universe. Further simplification is achieved in approximating the spacetime as having only small deviations from flat spacetime, leading to the linearized EFE. These equations are used to study phenomena such as gravitational waves.
Mathematical form
The Einstein field equations (EFE) may be written in the form:
$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu},$$
where $G_{\mu\nu}$ is the Einstein tensor, $g_{\mu\nu}$ is the metric tensor, $T_{\mu\nu}$ is the stress–energy tensor, $\Lambda$ is the cosmological constant and $\kappa$ is the Einstein gravitational constant.
The Einstein tensor is defined as
$$G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} R \, g_{\mu\nu},$$
where $R_{\mu\nu}$ is the Ricci curvature tensor, and $R$ is the scalar curvature. This is a symmetric second-degree tensor that depends on only the metric tensor and its first and second derivatives.
The Einstein gravitational constant is defined as
$$\kappa = \frac{8\pi G}{c^4} \approx 2.077 \times 10^{-43}\ \mathrm{N^{-1}},$$
where $G$ is the Newtonian constant of gravitation and $c$ is the speed of light in vacuum.
The EFE can thus also be written as
$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}.$$
In standard units, each term on the left has units of 1/length².
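Evaluating the constant numerically (a minimal sketch added for illustration, using CODATA values) makes clear how weakly stress–energy curves spacetime in SI units:

```python
# Minimal sketch: the Einstein gravitational constant kappa = 8*pi*G / c^4.
import math

G = 6.67430e-11          # Newtonian constant of gravitation, m^3 kg^-1 s^-2
c = 299_792_458.0        # speed of light in vacuum, m/s

kappa = 8.0 * math.pi * G / c**4
print(f"kappa = {kappa:.4e} N^-1")    # ~ 2.08e-43 per newton
```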
The expression on the left represents the curvature of spacetime as determined by the metric; the expression on the right represents the stress–energy–momentum content of spacetime. The EFE can then be interpreted as a set of equations dictating how stress–energy–momentum determines the curvature of spacetime.
These equations, together with the geodesic equation, which dictates how freely falling matter moves through spacetime, form the core of the mathematical formulation of general relativity.
The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
Although the Einstein field equations were initially formulated in the context of a four-dimensional theory, some theorists have explored their consequences in $n$ dimensions. The equations in contexts outside of general relativity are still referred to as the Einstein field equations. The vacuum field equations (obtained when $T_{\mu\nu}$ is everywhere zero) define Einstein manifolds.
The equations are more complex than they appear. Given a specified distribution of matter and energy in the form of a stress–energy tensor, the EFE are understood to be equations for the metric tensor , since both the Ricci tensor and scalar curvature depend on the metric in a complicated nonlinear manner. When fully written out, the EFE are a system of ten coupled, nonlinear, hyperbolic-elliptic partial differential equations.
Sign convention
The above form of the EFE is the standard established by Misner, Thorne, and Wheeler (MTW). The authors analyzed conventions that exist and classified these according to three signs ([S1] [S2] [S3]):
The third sign above is related to the choice of convention for the Ricci tensor:
With these definitions Misner, Thorne, and Wheeler classify themselves as (+ + +), whereas Weinberg (1972) is (+ − −), Peebles (1980) and Efstathiou et al. (1990) are (− + +), and Rindler (1977), Atwater (1974), Collins Martin & Squires (1989) and Peacock (1999) are (− + −).
Authors including Einstein have used a different sign in their definition for the Ricci tensor which results in the sign of the constant on the right side being negative:
$$R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu} - \Lambda g_{\mu\nu} = -\kappa T_{\mu\nu}.$$
The sign of the cosmological term would change in both these versions if the (+ − − −) metric sign convention is used rather than the MTW (− + + +) metric sign convention adopted here.
Equivalent formulations
Taking the trace with respect to the metric of both sides of the EFE one gets
$$R - \frac{D}{2} R + D \Lambda = \kappa T,$$
where $D$ is the spacetime dimension. Solving for $R$ and substituting this in the original EFE, one gets the following equivalent "trace-reversed" form:
$$R_{\mu\nu} - \frac{2}{D-2} \Lambda g_{\mu\nu} = \kappa \left( T_{\mu\nu} - \frac{1}{D-2} T g_{\mu\nu} \right).$$
In $D = 4$ dimensions this reduces to
$$R_{\mu\nu} - \Lambda g_{\mu\nu} = \kappa \left( T_{\mu\nu} - \frac{1}{2} T g_{\mu\nu} \right).$$
Reversing the trace again would restore the original EFE. The trace-reversed form may be more convenient in some cases (for example, when one is interested in the weak-field limit and can replace $g_{\mu\nu}$ in the expression on the right with the Minkowski metric without significant loss of accuracy).
The cosmological constant
In the Einstein field equations
$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu},$$
the term containing the cosmological constant $\Lambda$ was absent from the version in which he originally published them. Einstein then included the term with the cosmological constant to allow for a universe that is not expanding or contracting. This effort was unsuccessful because:
any desired steady state solution described by this equation is unstable, and
observations by Edwin Hubble showed that our universe is expanding.
Einstein then abandoned $\Lambda$, remarking to George Gamow "that the introduction of the cosmological term was the biggest blunder of his life".
The inclusion of this term does not create inconsistencies. For many years the cosmological constant was almost universally assumed to be zero. More recent astronomical observations have shown an accelerating expansion of the universe, and to explain this a positive value of $\Lambda$ is needed. The effect of the cosmological constant is negligible at the scale of a galaxy or smaller.
Einstein thought of the cosmological constant as an independent parameter, but its term in the field equation can also be moved algebraically to the other side and incorporated as part of the stress–energy tensor:
$$T_{\mu\nu}^{(\mathrm{vac})} = -\frac{\Lambda}{\kappa} g_{\mu\nu}.$$
This tensor describes a vacuum state with an energy density $\rho_{\mathrm{vac}}$ and isotropic pressure $p_{\mathrm{vac}}$ that are fixed constants and given by
$$\rho_{\mathrm{vac}} = -p_{\mathrm{vac}} = \frac{\Lambda}{\kappa},$$
where it is assumed that $\Lambda$ has SI unit m⁻² and $\kappa$ is defined as above.
The existence of a cosmological constant is thus equivalent to the existence of a vacuum energy and a pressure of opposite sign. This has led to the terms "cosmological constant" and "vacuum energy" being used interchangeably in general relativity.
Features
Conservation of energy and momentum
General relativity is consistent with the local conservation of energy and momentum expressed as
$$\nabla_\beta T^{\alpha\beta} = T^{\alpha\beta}{}_{;\beta} = 0,$$
which expresses the local conservation of stress–energy. This conservation law is a physical requirement. With his field equations Einstein ensured that general relativity is consistent with this conservation condition.
Nonlinearity
The nonlinearity of the EFE distinguishes general relativity from many other fundamental physical theories. For example, Maxwell's equations of electromagnetism are linear in the electric and magnetic fields, and charge and current distributions (i.e. the sum of two solutions is also a solution); another example is Schrödinger's equation of quantum mechanics, which is linear in the wavefunction.
The correspondence principle
The EFE reduce to Newton's law of gravity by using both the weak-field approximation and the slow-motion approximation. In fact, the constant appearing in the EFE is determined by making these two approximations.
Vacuum field equations
If the energy–momentum tensor $T_{\mu\nu}$ is zero in the region under consideration, then the field equations are also referred to as the vacuum field equations. By setting $T_{\mu\nu} = 0$ in the trace-reversed field equations, the vacuum field equations, also known as 'Einstein vacuum equations' (EVE), can be written as
$$R_{\mu\nu} = 0.$$
In the case of a nonzero cosmological constant, the equations are
$$R_{\mu\nu} = \frac{2\Lambda}{D-2}\, g_{\mu\nu},$$
which in four spacetime dimensions reduces to $R_{\mu\nu} = \Lambda g_{\mu\nu}$.
The solutions to the vacuum field equations are called vacuum solutions. Flat Minkowski space is the simplest example of a vacuum solution. Nontrivial examples include the Schwarzschild solution and the Kerr solution.
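As a small worked example related to the Schwarzschild solution mentioned above (added for illustration), the Schwarzschild radius $r_s = 2GM/c^2$ locates the event horizon of a non-rotating mass:

```python
# Minimal sketch: Schwarzschild radius r_s = 2*G*M / c^2 for a few masses.
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 299_792_458.0        # m/s

def schwarzschild_radius(mass_kg: float) -> float:
    return 2.0 * G * mass_kg / c**2

for name, mass in [("Sun", 1.989e30), ("Earth", 5.972e24)]:
    print(f"{name}: r_s ~= {schwarzschild_radius(mass):.3e} m")   # ~3 km and ~9 mm
```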
Manifolds with a vanishing Ricci tensor, $R_{\mu\nu} = 0$, are referred to as Ricci-flat manifolds and manifolds with a Ricci tensor proportional to the metric as Einstein manifolds.
Einstein–Maxwell equations
If the energy–momentum tensor $T_{\mu\nu}$ is that of an electromagnetic field in free space, i.e. if the electromagnetic stress–energy tensor
$$T^{\alpha\beta} = \frac{1}{\mu_0} \left( F^{\alpha\gamma} F^{\beta}{}_{\gamma} - \frac{1}{4} g^{\alpha\beta} F_{\gamma\delta} F^{\gamma\delta} \right)$$
is used, then the Einstein field equations are called the Einstein–Maxwell equations (with cosmological constant $\Lambda$, taken to be zero in conventional relativity theory):
$$G^{\alpha\beta} + \Lambda g^{\alpha\beta} = \frac{8\pi G}{c^4 \mu_0} \left( F^{\alpha\gamma} F^{\beta}{}_{\gamma} - \frac{1}{4} g^{\alpha\beta} F_{\gamma\delta} F^{\gamma\delta} \right)$$
Additionally, the covariant Maxwell equations are also applicable in free space:
$$F^{\alpha\beta}{}_{;\beta} = 0, \qquad F_{[\alpha\beta;\gamma]} = 0,$$
where the semicolon represents a covariant derivative, and the brackets denote anti-symmetrization. The first equation asserts that the 4-divergence of the 2-form $F$ is zero, and the second that its exterior derivative is zero. From the latter, it follows by the Poincaré lemma that in a coordinate chart it is possible to introduce an electromagnetic field potential $A_\alpha$ such that
$$F_{\alpha\beta} = A_{\beta;\alpha} - A_{\alpha;\beta} = A_{\beta,\alpha} - A_{\alpha,\beta},$$
in which the comma denotes a partial derivative. This is often taken as equivalent to the covariant Maxwell equation from which it is derived. However, there are global solutions of the equation that may lack a globally defined potential.
Solutions
The solutions of the Einstein field equations are metrics of spacetime. These metrics describe the structure of the spacetime including the inertial motion of objects in the spacetime. As the field equations are non-linear, they cannot always be completely solved (i.e. without making approximations). For example, there is no known complete solution for a spacetime with two massive bodies in it (which is a theoretical model of a binary star system, for example). However, approximations are usually made in these cases. These are commonly referred to as post-Newtonian approximations. Even so, there are several cases where the field equations have been solved completely, and those are called exact solutions.
The study of exact solutions of Einstein's field equations is one of the activities of cosmology. It leads to the prediction of black holes and to different models of evolution of the universe.
One can also discover new solutions of the Einstein field equations via the method of orthonormal frames as pioneered by Ellis and MacCallum. In this approach, the Einstein field equations are reduced to a set of coupled, nonlinear, ordinary differential equations. As discussed by Hsu and Wainwright, self-similar solutions to the Einstein field equations are fixed points of the resulting dynamical system. New solutions have been discovered using these methods by LeBlanc and Kohli and Haslam.
The linearized EFE
The nonlinearity of the EFE makes finding exact solutions difficult. One way of solving the field equations is to make an approximation, namely, that far from the source(s) of gravitating matter, the gravitational field is very weak and the spacetime approximates that of Minkowski space. The metric is then written as the sum of the Minkowski metric and a term representing the deviation of the true metric from the Minkowski metric, ignoring higher-power terms. This linearization procedure can be used to investigate the phenomena of gravitational radiation.
Polynomial form
Despite the EFE as written containing the inverse of the metric tensor, they can be arranged in a form that contains the metric tensor in polynomial form and without its inverse. First, the determinant of the metric in 4 dimensions can be written
$$\det(g) = \frac{1}{24} \varepsilon^{\alpha\beta\gamma\delta} \varepsilon^{\kappa\lambda\mu\nu} g_{\alpha\kappa} g_{\beta\lambda} g_{\gamma\mu} g_{\delta\nu}$$
using the Levi-Civita symbol; and the inverse of the metric in 4 dimensions can be written as
$$g^{\alpha\kappa} = \frac{1}{6 \det(g)} \varepsilon^{\alpha\beta\gamma\delta} \varepsilon^{\kappa\lambda\mu\nu} g_{\beta\lambda} g_{\gamma\mu} g_{\delta\nu}.$$
Substituting this expression of the inverse of the metric into the equations then multiplying both sides by a suitable power of to eliminate it from the denominator results in polynomial equations in the metric tensor and its first and second derivatives. The Einstein-Hilbert action from which the equations are derived can also be written in polynomial form by suitable redefinitions of the fields.
See also
Conformastatic spacetimes
Einstein–Hilbert action
Equivalence principle
Exact solutions in general relativity
General relativity resources
History of general relativity
Hamilton–Jacobi–Einstein equation
Mathematics of general relativity
Numerical relativity
Ricci calculus
Notes
References
See General relativity resources.
External links
Caltech Tutorial on Relativity — A simple introduction to Einstein's Field Equations.
The Meaning of Einstein's Equation — An explanation of Einstein's field equation, its derivation, and some of its consequences
Video Lecture on Einstein's Field Equations by MIT Physics Professor Edmund Bertschinger.
Arch and scaffold: How Einstein found his field equations Physics Today November 2015, History of the Development of the Field Equations
External images
The Einstein field equation on the wall of the Museum Boerhaave in downtown Leiden
Suzanne Imber, "The impact of general relativity on the Atacama Desert", Einstein field equation on the side of a train in Bolivia.
Albert Einstein
Equations of physics
General relativity
Partial differential equations | 0.791399 | 0.998572 | 0.790269 |
Inertia | Inertia is the natural tendency of objects in motion to stay in motion and objects at rest to stay at rest, unless a force causes its velocity to change. It is one of the fundamental principles in classical physics, and described by Isaac Newton in his first law of motion (also known as The Principle of Inertia). It is one of the primary manifestations of mass, one of the core quantitative properties of physical systems. Newton writes:
In his 1687 work Philosophiæ Naturalis Principia Mathematica, Newton defined inertia as a property:
History and development
Early understanding of inertial motion
Professor John H. Lienhard points to the Mozi – a Chinese text from the Warring States period (475–221 BCE) – as having given the first description of inertia. Before the European Renaissance, the prevailing theory of motion in western philosophy was that of Aristotle (384–322 BCE). On the surface of the Earth, the inertia property of physical objects is often masked by gravity and the effects of friction and air resistance, both of which tend to decrease the speed of moving objects (commonly to the point of rest). This misled the philosopher Aristotle to believe that objects would move only as long as force was applied to them. Aristotle said that all moving objects (on Earth) eventually come to rest unless an external power (force) continued to move them. Aristotle explained the continued motion of projectiles, after being separated from their projector, as an (itself unexplained) action of the surrounding medium continuing to move the projectile.
Despite its general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over nearly two millennia. For example, Lucretius (following, presumably, Epicurus) stated that the "default state" of the matter was motion, not stasis (stagnation). In the 6th century, John Philoponus criticized the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of a surrounding medium, but by some property imparted to the object when it was set in motion. Although this was not the modern concept of inertia, for there was still the need for a power to keep a body in motion, it proved a fundamental step in that direction. This view was strongly opposed by Averroes and by many scholastic philosophers who supported Aristotle. However, this view did not go unchallenged in the Islamic world, where Philoponus had several supporters who further developed his ideas.
In the 11th century, Persian polymath Ibn Sina (Avicenna) claimed that a projectile in a vacuum would not stop unless acted upon.
Theory of impetus
In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named impetus, dissipated spontaneously. Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also believed that impetus could be not only linear but also circular in nature, causing objects (such as celestial bodies) to move in a circle. Buridan's theory was followed up by his pupil Albert of Saxony (1316–1390) and the Oxford Calculators, who performed various experiments which further undermined the Aristotelian model. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of illustrating the laws of motion with graphs.
Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone:
Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion.
Classical inertia
According to science historian Charles Coulston Gillispie, inertia "entered science as a physical consequence of Descartes' geometrization of space-matter, combined with the immutability of God." The first physicist to completely break away from the Aristotelian model of motion was Isaac Beeckman in 1614.
The term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae (published in three parts from 1617 to 1621). However, the meaning of Kepler's term, which he derived from the Latin word for "idleness" or "laziness", was not quite the same as its modern interpretation. Kepler defined inertia only in terms of resistance to movement, once again based on the axiomatic assumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton unified rest and motion in one principle that the term "inertia" could be applied to those concepts as it is today.
The principle of inertia, as formulated by Aristotle for "motions in a void", includes that a mundane object tends to resist a change in motion. The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the Earth is never at rest, but is actually in constant motion around the Sun.
Galileo, in his further development of the Copernican model, recognized these problems with the then-accepted nature of motion and, at least partially, as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle:
A body moving on a level surface will continue in the same direction at a constant speed unless disturbed.
Galileo writes that "all external impediments removed, a heavy body on a spherical surface concentric with the earth will maintain itself in that state in which it has been; if placed in a movement towards the west (for example), it will maintain itself in that movement."
This notion, which is termed "circular inertia" or "horizontal circular inertia" by historians of science, is a precursor to, but is distinct from, Newton's notion of rectilinear inertia. For Galileo, a motion is "horizontal" if it does not carry the moving body towards or away from the center of the Earth, and for him, "a ship, for instance, having once received some impetus through the tranquil sea, would move continually around our globe without ever stopping." It is also worth noting that Galileo later (in 1632) concluded that based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. This observation ultimately came to be the basis for Albert Einstein to develop the theory of special relativity.
Concepts of inertia in Galileo's writings would later come to be refined, modified, and codified by Isaac Newton as the first of his laws of motion (first published in Newton's work, Philosophiæ Naturalis Principia Mathematica, in 1687):
Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon.
Despite having defined the concept in his laws of motion, Newton did not actually use the term "inertia". In fact, he originally viewed the respective phenomena as being caused by "innate forces" inherent in matter which resist any acceleration. Given this perspective, and borrowing from Kepler, Newton conceived of "inertia" as "the innate force possessed by an object which resists changes in motion", thus defining "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself.
However, Newton's original ideas of "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternate mechanism has been readily accepted, and it is now generally accepted that there may not be one that we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical physics has come to be a name for the same phenomenon as described by Newton's first law of motion, and the two concepts are now considered to be equivalent.
Relativity
Albert Einstein's theory of special relativity, as proposed in his 1905 paper entitled "On the Electrodynamics of Moving Bodies", was built on the understanding of inertial reference frames developed by Galileo, Huygens and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass, energy, and distance, Einstein's concept of inertia remained at first unchanged from Newton's original meaning. However, this resulted in a limitation inherent in special relativity: the principle of relativity could only apply to inertial reference frames. To address this limitation, Einstein developed his general theory of relativity ("The Foundation of the General Theory of Relativity", 1916), which provided a theory including noninertial (accelerated) reference frames.
In general relativity, the concept of inertial motion got a broader meaning. Taking into account general relativity, inertial motion is any movement of a body that is not affected by forces of electrical, magnetic, or other origin, but that is only under the influence of gravitational masses. Physically speaking, this happens to be exactly what a properly functioning three-axis accelerometer is indicating when it does not detect any proper acceleration.
Etymology
The term inertia comes from the Latin word iners, meaning idle or sluggish.
Rotational inertia
A quantity related to inertia is rotational inertia (→ moment of inertia), the property that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum remains unchanged unless an external torque is applied; this is called conservation of angular momentum. Rotational inertia is often considered in relation to a rigid body. For example, a gyroscope uses the property that it resists any change in the axis of rotation.
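The conservation statement can be illustrated with a small numerical sketch (the moments of inertia below are assumed values for a spinning skater pulling in their arms):

```python
# Minimal sketch: with no external torque, L = I * omega is conserved,
# so a smaller moment of inertia implies a larger angular velocity.
I_initial = 4.0          # kg*m^2, arms extended (assumed)
omega_initial = 2.0      # rad/s

L = I_initial * omega_initial      # angular momentum, conserved

I_final = 1.5            # kg*m^2, arms pulled in (assumed)
omega_final = L / I_final

print(f"L = {L:.1f} kg*m^2/s, final omega = {omega_final:.2f} rad/s")
```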
See also
Flywheel energy storage devices, which may also be known as inertia batteries
General relativity
Vertical and horizontal
Inertial navigation system
Inertial response of synchronous generators in an electrical grid
Kinetic energy
List of moments of inertia
Mach's principle
Newton's laws of motion
Classical mechanics
Special relativity
Parallel axis theorem
References
Further reading
Butterfield, H (1957), The Origins of Modern Science.
Clement, J (1982), "Students' preconceptions in introductory mechanics", American Journal of Physics vol 50, pp 66–71
Crombie, A C (1959), Medieval and Early Modern Science, vol. 2.
McCloskey, M (1983), "Intuitive physics", Scientific American, April, pp. 114–123.
McCloskey, M & Carmazza, A (1980), "Curvilinear motion in the absence of external forces: naïve beliefs about the motion of objects", Science vol. 210, pp. 1139–1141.
External links
Why Does the Earth Spin? (YouTube)
Classical mechanics
Gyroscopes
Mass
Velocity
Articles containing video clips | 0.791367 | 0.998605 | 0.790263 |
Sankey diagram | Sankey diagrams are a data visualisation technique or flow diagram that emphasizes flow/movement/change from one state to another or one time to another, in which the width of the arrows is proportional to the flow rate of the depicted extensive property.
Sankey diagrams can also visualize the energy accounts, material flow accounts on a regional or national level, and cost breakdowns. The diagrams are often used in the visualization of material flow analysis.
Sankey diagrams emphasize the major transfers or flows within a system. They help locate the most important contributions to a flow. They often show conserved quantities within defined system boundaries.
History
Sankey diagrams are named after Irish Captain Matthew Henry Phineas Riall Sankey, who used this type of diagram in 1898 in a classic figure (see diagram) showing the energy efficiency of a steam engine. The original charts in black and white displayed just one type of flow (e.g. steam); using colors for different types of flows lets the diagram express additional variables.
Over time, it became a standard model used in science and engineering to represent heat balance, energy flows, material flows, and since the 1990s this visual model has been used in life-cycle assessment of products.
One of the most famous Sankey diagrams is Charles Minard's Map of Napoleon's Russian Campaign of 1812. It is a flow map, overlaying a Sankey diagram onto a geographical map. It was created in 1869, predating Sankey's first Sankey diagram of 1898. Minard had used this form of diagram for visualising the flow of goods and the transport of people since at least 1844.
Science
Sankey diagrams are often used in fields of science, especially physics. They are used to represent energy inputs, useful output, and wasted output.
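As an illustration of the proportional-width convention in an energy-balance setting, here is a minimal sketch using matplotlib's Sankey class; the 100/30/70 split of input, useful output and waste is an invented example, not data from any published diagram:

```python
# Minimal Sankey diagram: one energy input split into useful output and losses.
# Arrow widths are drawn proportional to the flow values.
import matplotlib.pyplot as plt
from matplotlib.sankey import Sankey

fig, ax = plt.subplots()
Sankey(ax=ax, scale=0.01, unit=' units').add(
    flows=[100, -30, -70],                  # positive = in, negative = out; sums to zero
    labels=['energy in', 'useful output', 'wasted output'],
    orientations=[0, 0, -1],                # 0 = horizontal, -1 = downward arrow
).finish()
ax.set_title('Energy balance of an idealized engine')
plt.show()
```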
Active examples
The United States Energy Information Administration (EIA) produces numerous Sankey diagrams annually in its Annual Energy Review which illustrate the production and consumption of various forms of energy.
The US Department of Energy's Lawrence Livermore Laboratory maintains a site of Sankey diagrams, including US energy flow and carbon flow.
Eurostat, the Statistical Office of the European Union, has developed an interactive Sankey web tool to visualise energy data by means of flow diagrams. The tool allows the building and customisation of diagrams by playing with different options (country, year, fuel, level of detail).
The International Energy Agency (IEA) created an interactive Sankey web application that details the flow of energy for the entire planet. Users can select specific countries, points of time back to 1973, and modify the arrangement of various flows within the Sankey diagram.
See also
Alluvial diagram, a type of Sankey diagram that uses the same kind of representation to depict how items re-group
Material flow management
Parallel coordinates
Time geography
References
External links
Diagrams
Irish inventions
British inventions | 0.792331 | 0.997282 | 0.790177 |
Gravitoelectromagnetism | Gravitoelectromagnetism, abbreviated GEM, refers to a set of formal analogies between the equations for electromagnetism and relativistic gravitation; specifically: between Maxwell's field equations and an approximation, valid under certain conditions, to the Einstein field equations for general relativity. Gravitomagnetism is a widely used term referring specifically to the kinetic effects of gravity, in analogy to the magnetic effects of moving electric charge. The most common version of GEM is valid only far from isolated sources, and for slowly moving test particles.
The analogy and equations, differing only by some small factors, were first published in 1893, before general relativity, by Oliver Heaviside as a separate theory expanding Newton's law of universal gravitation.
Background
This approximate reformulation of gravitation as described by general relativity in the weak field limit makes an apparent field appear in a frame of reference different from that of a freely moving inertial body. This apparent field may be described by two components that act respectively like the electric and magnetic fields of electromagnetism, and by analogy these are called the gravitoelectric and gravitomagnetic fields, since these arise around a mass in the same way that a moving electric charge gives rise to electric and magnetic fields. The main consequence of the gravitomagnetic field, or velocity-dependent acceleration, is that a moving object near a massive, rotating object will experience acceleration that deviates from that predicted by a purely Newtonian gravity (gravitoelectric) field. More subtle predictions, such as induced rotation of a falling object and precession of a spinning object, are among the last basic predictions of general relativity to be directly tested.
Indirect validations of gravitomagnetic effects have been derived from analyses of relativistic jets. Roger Penrose had proposed a mechanism that relies on frame-dragging-related effects for extracting energy and momentum from rotating black holes. Reva Kay Williams, University of Florida, developed a rigorous proof that validated Penrose's mechanism. Her model showed how the Lense–Thirring effect could account for the observed high energies and luminosities of quasars and active galactic nuclei; the collimated jets about their polar axis; and the asymmetrical jets (relative to the orbital plane). All of those observed properties could be explained in terms of gravitomagnetic effects. Williams's application of Penrose's mechanism can be applied to black holes of any size. Relativistic jets can serve as the largest and brightest form of validations for gravitomagnetism.
A group at Stanford University is currently analyzing data from the first direct test of GEM, the Gravity Probe B satellite experiment, to see whether they are consistent with gravitomagnetism. The Apache Point Observatory Lunar Laser-ranging Operation also plans to observe gravitomagnetism effects.
Equations
According to general relativity, the gravitational field produced by a rotating object (or any rotating mass–energy) can, in a particular limiting case, be described by equations that have the same form as in classical electromagnetism. Starting from the basic equation of general relativity, the Einstein field equation, and assuming a weak gravitational field or reasonably flat spacetime, the gravitational analogs to Maxwell's equations for electromagnetism, called the "GEM equations", can be derived. The GEM equations are compared with Maxwell's equations below, using the following symbols:
Eg is the gravitoelectric field (conventional gravitational field), with SI unit m⋅s⁻²
E is the electric field
Bg is the gravitomagnetic field, with SI unit s⁻¹
B is the magnetic field
ρg is mass density, with SI unit kg⋅m⁻³
ρ is charge density
Jg is mass current density or mass flux (Jg = ρgvρ, where vρ is the velocity of the mass flow), with SI unit kg⋅m⁻²⋅s⁻¹
J is electric current density
G is the gravitational constant
ε0 is the vacuum permittivity
c is both the speed of propagation of gravity and the speed of light.
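With these symbols, a sketch of the four GEM equations, each paired with its Maxwell counterpart, is given below. This is one common convention; the numerical factors and signs differ between authors, as discussed in the Scaling of fields section below.

```latex
\begin{aligned}
&\text{(1)} & \nabla\cdot\mathbf{E}_g &= -4\pi G\rho_g, & \nabla\cdot\mathbf{E} &= \frac{\rho}{\varepsilon_0} \\
&\text{(2)} & \nabla\cdot\mathbf{B}_g &= 0, & \nabla\cdot\mathbf{B} &= 0 \\
&\text{(3)} & \nabla\times\mathbf{E}_g &= -\frac{\partial\mathbf{B}_g}{\partial t}, & \nabla\times\mathbf{E} &= -\frac{\partial\mathbf{B}}{\partial t} \\
&\text{(4)} & \nabla\times\mathbf{B}_g &= -\frac{4\pi G}{c^{2}}\,\mathbf{J}_g + \frac{1}{c^{2}}\frac{\partial\mathbf{E}_g}{\partial t}, & \nabla\times\mathbf{B} &= \frac{1}{\varepsilon_0 c^{2}}\,\mathbf{J} + \frac{1}{c^{2}}\frac{\partial\mathbf{E}}{\partial t}
\end{aligned}
```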
Potentials
Faraday's law of induction (third line of the table) and the Gaussian law for the gravitomagnetic field (second line of the table) can be solved by defining a gravitational scalar potential and a vector potential.
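A sketch of the defining relations, with φg the scalar potential and Ag the vector potential, in the same convention as the table above:

```latex
\mathbf{E}_g = -\nabla\phi_g - \frac{\partial\mathbf{A}_g}{\partial t},
\qquad
\mathbf{B}_g = \nabla\times\mathbf{A}_g
```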
Inserting this four-potential into the Gaussian law for the gravitational field (first line of the table) and Ampère's circuital law (fourth line of the table), and applying the Lorenz gauge, yields a pair of inhomogeneous wave equations.
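In the convention sketched above, these wave equations take the form:

```latex
\nabla^{2}\phi_g - \frac{1}{c^{2}}\frac{\partial^{2}\phi_g}{\partial t^{2}} = 4\pi G\rho_g,
\qquad
\nabla^{2}\mathbf{A}_g - \frac{1}{c^{2}}\frac{\partial^{2}\mathbf{A}_g}{\partial t^{2}} = \frac{4\pi G}{c^{2}}\,\mathbf{J}_g
```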
For a stationary situation the Poisson equation of the classical gravitation theory is obtained. In a vacuum a wave equation is obtained under non-stationary conditions. GEM therefore predicts the existence of gravitational waves. In this way GEM can be regarded as a generalization of Newton's gravitation theory.
The wave equation for the gravitomagnetic potential can also be solved for a rotating spherical body (which is a stationary case) leading to gravitomagnetic moments.
Lorentz force
For a test particle whose mass m is "small", in a stationary system, the net (Lorentz) force acting on it due to a GEM field is described by a GEM analog of the Lorentz force equation, sketched below in terms of the following quantities:
v is the velocity of the test particle
m is the mass of the test particle
q is the electric charge of the test particle.
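A sketch of the two force laws side by side. The numerical factor multiplying Bg in the gravitational case depends on the field scaling chosen (see Scaling of fields below); a factor of 4 is common with the convention used in the table above:

```latex
\mathbf{F}_{\text{GEM}} = m\left(\mathbf{E}_g + \mathbf{v}\times 4\mathbf{B}_g\right),
\qquad
\mathbf{F}_{\text{EM}} = q\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)
```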
Poynting vector
The GEM Poynting vector compared to the electromagnetic Poynting vector is given by:
Scaling of fields
The literature does not adopt a consistent scaling for the gravitoelectric and gravitomagnetic fields, making comparison tricky. For example, to obtain agreement with Mashhoon's writings, all instances of Bg in the GEM equations must be multiplied by − and Eg by −1. These factors variously modify the analogues of the equations for the Lorentz force. There is no scaling choice that allows all the GEM and EM equations to be perfectly analogous. The discrepancy in the factors arises because the source of the gravitational field is the second order stress–energy tensor, as opposed to the source of the electromagnetic field being the first order four-current tensor. This difference becomes clearer when one compares non-invariance of relativistic mass to electric charge invariance. This can be traced back to the spin-2 character of the gravitational field, in contrast to the electromagnetism being a spin-1 field. (See Relativistic wave equations for more on "spin-1" and "spin-2" fields).
Higher-order effects
Some higher-order gravitomagnetic effects can reproduce effects reminiscent of the interactions of more conventional polarized charges. For instance, if two wheels are spun on a common axis, the mutual gravitational attraction between the two wheels will be greater if they spin in opposite directions than in the same direction. This can be expressed as an attractive or repulsive gravitomagnetic component.
Gravitomagnetic arguments also predict that a flexible or fluid toroidal mass undergoing minor axis rotational acceleration (accelerating "smoke ring" rotation) will tend to pull matter through the throat (a case of rotational frame dragging, acting through the throat). In theory, this configuration might be used for accelerating objects (through the throat) without such objects experiencing any g-forces.
Consider a toroidal mass with two degrees of rotation (both major axis and minor-axis spin, both turning inside out and revolving). This represents a "special case" in which gravitomagnetic effects generate a chiral corkscrew-like gravitational field around the object. The reaction forces to dragging at the inner and outer equators would normally be expected to be equal and opposite in magnitude and direction respectively in the simpler case involving only minor-axis spin. When both rotations are applied simultaneously, these two sets of reaction forces can be said to occur at different depths in a radial Coriolis field that extends across the rotating torus, making it more difficult to establish that cancellation is complete.
Modelling this complex behaviour as a curved spacetime problem has yet to be done and is believed to be very difficult.
Gravitomagnetic fields of astronomical objects
A rotating spherical body with a homogeneous density distribution produces a stationary gravitomagnetic potential. Due to the body's angular velocity, the velocity field inside the body can be described as v = ω × r, so the stationary (Poisson-like) form of the wave equation above, with the mass current Jg = ρg ω × r as its source, has to be solved to obtain the gravitomagnetic potential Ag. The analytical solution outside of the body is a dipole-like expression in the following quantities:
L is the angular momentum vector;
I is the moment of inertia of a ball-shaped body (see: list of moments of inertia);
ω is the angular velocity;
m is the mass;
R is the radius;
T is the rotational period.
The formula for the gravitomagnetic field Bg can now be obtained by taking the curl of Ag.
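A sketch of the exterior solution and the resulting field, in the sign convention of the blocks above (the factor of 2 and the overall sign vary with the field scaling chosen; this choice makes the equatorial field point along the angular momentum, i.e. north for the Earth):

```latex
\mathbf{A}_g = \frac{G}{2c^{2}}\,\frac{\mathbf{r}\times\mathbf{L}}{r^{3}},
\qquad
\mathbf{L} = I\,\boldsymbol{\omega} = \tfrac{2}{5}\,m R^{2}\,\frac{2\pi}{T}\,\hat{\boldsymbol{\omega}},
\qquad
\mathbf{B}_g = \nabla\times\mathbf{A}_g
            = \frac{G}{2c^{2}}\,\frac{\mathbf{L}-3\hat{\mathbf{r}}\,(\hat{\mathbf{r}}\cdot\mathbf{L})}{r^{3}}
```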
It is exactly half of the Lense–Thirring precession rate. This suggests that the gravitomagnetic analog of the g-factor is two. This factor of two can be explained completely analogously to the electron's g-factor by taking into account relativistic calculations. At the equatorial plane, r and L are perpendicular, so their dot product vanishes, and the formula reduces to its simple equatorial value, which is used in the numerical examples below.
Gravitational waves have equal gravitomagnetic and gravitoelectric components.
Earth
Therefore, the magnitude of Earth's gravitomagnetic field at its equator can be expressed in terms of Earth's surface gravity g, its radius R and its rotation period T. The field direction coincides with the angular momentum direction, i.e. north. From this calculation it follows that the strength of the Earth's equatorial gravitomagnetic field is of the order of 10⁻¹⁴ s⁻¹ (see the numerical check below). Such a field is extremely weak and requires extremely sensitive measurements to be detected. One experiment to measure such a field was the Gravity Probe B mission.
Pulsar
If the preceding formula is used with the pulsar PSR J1748-2446ad (which rotates 716 times per second), assuming a radius of 16 km and a mass of two solar masses, then the equatorial gravitomagnetic field equals about 166 Hz. This would be easy to notice. However, the pulsar is spinning at a quarter of the speed of light at the equator, and its radius is only three times its Schwarzschild radius. When such fast motion and such strong gravitational fields exist in a system, the simplified approach of separating gravitomagnetic and gravitoelectric forces can be applied only as a very rough approximation.
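A back-of-the-envelope check of both figures above, using the equatorial magnitude that follows from the dipole-like solution sketched earlier, Bg,eq = GL/(2c²R³) (equivalently 2πRg/(5c²T) for a homogeneous sphere); the homogeneous-density assumption and the quoted pulsar radius and mass are taken from the text, not from detailed neutron-star modelling:

```python
# Equatorial gravitomagnetic field of a homogeneous rotating sphere,
# B_g = G*L / (2*c**2 * R**3) with L = (2/5)*M*R**2 * (2*pi/T).
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def b_g_equator(mass, radius, period):
    """Equatorial gravitomagnetic field (units 1/s) of a homogeneous sphere."""
    L = 0.4 * mass * radius**2 * (2 * math.pi / period)   # angular momentum
    return G * L / (2 * c**2 * radius**3)

# Earth: one sidereal day is ~86164 s
print(b_g_equator(5.972e24, 6.371e6, 86164))       # ~1e-14 s^-1, extremely weak

# Pulsar PSR J1748-2446ad: ~2 solar masses, 16 km radius, 716 rotations per second
print(b_g_equator(2 * 1.989e30, 16e3, 1 / 716))    # ~166 s^-1, matching the text
```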
Lack of invariance
While Maxwell's equations are invariant under Lorentz transformations, the GEM equations are not. The fact that ρg and jg do not form a four-vector (instead they are merely a part of the stress–energy tensor) is the basis of this difference.
Although GEM may hold approximately in two different reference frames connected by a Lorentz boost, there is no way to calculate the GEM variables of one such frame from the GEM variables of the other, unlike the situation with the variables of electromagnetism. Indeed, their predictions (about what motion is free fall) will probably conflict with each other.
Note that the GEM equations are invariant under translations and spatial rotations, just not under boosts and more general curvilinear transformations. Maxwell's equations can be formulated in a way that makes them invariant under all of these coordinate transformations.
See also
Anti-gravity
Artificial gravity
Frame-dragging
Geodetic effect
Gravitational radiation
Gravity Probe B
Kaluza–Klein theory
Linearized gravity
Modified Newtonian dynamics
Non-Relativistic Gravitational Fields
Speed of gravity § Electrodynamical analogies
Stationary spacetime
References
Further reading
Books
Papers
External links
Gravity Probe B: Testing Einstein's Universe
Gyroscopic Superconducting Gravitomagnetic Effects news on tentative result of European Space Agency (esa) research
In Search of Gravitomagnetism, NASA, 20 April 2004.
Gravitomagnetic London Moment – New test of General Relativity?
Measurement of Gravitomagnetic and Acceleration Fields Around Rotating Superconductors M. Tajmar, et al., 17 October 2006.
Test of the Lense–Thirring effect with the MGS Mars probe, New Scientist, January 2007.
General relativity
Effects of gravity
Tests of general relativity | 0.797063 | 0.991036 | 0.789919 |
Gravity assist | A gravity assist, gravity assist maneuver, swing-by, or generally a gravitational slingshot in orbital mechanics, is a type of spaceflight flyby which makes use of the relative movement (e.g. orbit around the Sun) and gravity of a planet or other astronomical object to alter the path and speed of a spacecraft, typically to save propellant and reduce expense.
Gravity assistance can be used to accelerate a spacecraft, that is, to increase or decrease its speed or redirect its path. The "assist" is provided by the motion of the gravitating body as it pulls on the spacecraft. Any gain or loss of kinetic energy and linear momentum by a passing spacecraft is correspondingly lost or gained by the gravitational body, in accordance with Newton's Third Law. The gravity assist maneuver was first used in 1959 when the Soviet probe Luna 3 photographed the far side of Earth's Moon, and it was used by interplanetary probes from Mariner 10 onward, including the two Voyager probes' notable flybys of Jupiter and Saturn.
Explanation
A gravity assist around a planet changes a spacecraft's velocity (relative to the Sun) by entering and leaving the gravitational sphere of influence of a planet. The sum of the kinetic energies of both bodies remains constant (see elastic collision). A slingshot maneuver can therefore be used to change the spaceship's trajectory and speed relative to the Sun.
A close terrestrial analogy is provided by a tennis ball bouncing off the front of a moving train. Imagine standing on a train platform, and throwing a ball at 30 km/h toward a train approaching at 50 km/h. The driver of the train sees the ball approaching at 80 km/h and then departing at 80 km/h after the ball bounces elastically off the front of the train. Because of the train's motion, however, that departure is at 130 km/h relative to the train platform; the ball has added twice the train's velocity to its own.
Translating this analogy into space: in the planet reference frame, the spaceship has a vertical velocity of v relative to the planet. After the slingshot occurs the spaceship is leaving on a course 90 degrees to that which it arrived on. It will still have a velocity of v, but in the horizontal direction. In the Sun reference frame, the planet has a horizontal velocity of v, and by using the Pythagorean Theorem, the spaceship initially has a total velocity of √2·v (about 1.4v). After the spaceship leaves the planet, it will have a velocity of v + v = 2v, gaining approximately 0.6v.
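A minimal numerical sketch of this idealized 90° turn (the planet moving along x at speed v, the probe arriving along −y at speed v in the planet frame and leaving along +x); the value of v is arbitrary:

```python
# Idealized 90-degree gravity assist: probe speeds in the Sun frame, before and after.
import math

v = 10.0                                   # planet's orbital speed (arbitrary units)

planet_velocity = (v, 0.0)
arrive_planet_frame = (0.0, -v)            # probe arrives perpendicular to planet's motion
depart_planet_frame = (v, 0.0)             # probe leaves along the planet's motion

def to_sun_frame(u):
    """Add the planet's velocity to a planet-frame velocity."""
    return (u[0] + planet_velocity[0], u[1] + planet_velocity[1])

def speed(u):
    return math.hypot(u[0], u[1])

v_in = speed(to_sun_frame(arrive_planet_frame))    # sqrt(2) * v ~ 1.41 v
v_out = speed(to_sun_frame(depart_planet_frame))   # 2 v
print(v_in / v, v_out / v, (v_out - v_in) / v)     # ~1.41, 2.0, ~0.59
```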
This oversimplified example cannot be refined without additional details regarding the orbit, but if the spaceship travels in a path which forms a hyperbola, it can leave the planet in the opposite direction without firing its engine. This example is one of many trajectories and gains of speed the spaceship can experience.
This explanation might seem to violate the conservation of energy and momentum, apparently adding velocity to the spacecraft out of nothing, but the spacecraft's effects on the planet must also be taken into consideration to provide a complete picture of the mechanics involved. The linear momentum gained by the spaceship is equal in magnitude to that lost by the planet, so the spacecraft gains velocity and the planet loses velocity. However, the planet's enormous mass compared to the spacecraft makes the resulting change in its speed negligibly small even when compared to the orbital perturbations planets undergo due to interactions with other celestial bodies on astronomically short timescales. For example, one metric ton is a typical mass for an interplanetary space probe whereas Jupiter has a mass of almost 2 × 10²⁴ metric tons. Therefore, a one-ton spacecraft passing Jupiter will theoretically cause the planet to lose approximately 5 × 10⁻²⁵ km/s of orbital velocity for every km/s of velocity relative to the Sun gained by the spacecraft. For all practical purposes the effects on the planet can be ignored in the calculation.
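A one-line check of this figure by conservation of linear momentum; the one-metric-ton probe mass is the illustrative value from the text, and Jupiter's mass is taken as roughly 1.9 × 10²⁷ kg:

```python
# Momentum conservation: Jupiter's speed change per km/s gained by a 1 t probe.
m_probe = 1.0e3       # kg (one metric ton)
m_jupiter = 1.9e27    # kg
print(m_probe / m_jupiter)   # ~5e-25: km/s lost by Jupiter per km/s gained by the probe
```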
Realistic portrayals of encounters in space require the consideration of three dimensions. The same principles apply as above except adding the planet's velocity to that of the spacecraft requires vector addition as shown below.
Due to the reversibility of orbits, gravitational slingshots can also be used to reduce the speed of a spacecraft. Both Mariner 10 and MESSENGER performed this maneuver to reach Mercury.
If more speed is needed than available from gravity assist alone, a rocket burn near the periapsis (closest planetary approach) uses the least fuel. A given rocket burn always provides the same change in velocity (Δv), but the change in kinetic energy is proportional to the vehicle's velocity at the time of the burn. Therefore the maximum kinetic energy is obtained when the burn occurs at the vehicle's maximum velocity (periapsis). The Oberth effect describes this technique in more detail.
Historical origins
In his paper "To Those Who Will Be Reading in Order to Build", published in 1938 but dated 1918–1919, Yuri Kondratyuk suggested that a spacecraft traveling between two planets could be accelerated at the beginning and end of its trajectory by using the gravity of the two planets' moons. The portion of his manuscript considering gravity-assists received no later development and was not published until the 1960s. In his 1925 paper "Problems of Flight by Jet Propulsion: Interplanetary Flights", Friedrich Zander showed a deep understanding of the physics behind the concept of gravity assist and its potential for the interplanetary exploration of the solar system.
Italian engineer Gaetano Crocco was the first to calculate an interplanetary journey considering multiple gravity assists.
The gravity assist maneuver was first used in 1959 when the Soviet probe Luna 3 photographed the far side of the Moon. The maneuver relied on research performed under the direction of Mstislav Keldysh at the Keldysh Institute of Applied Mathematics.
In 1961, Michael Minovitch, a UCLA graduate student who worked at NASA's Jet Propulsion Laboratory (JPL), developed a gravity assist technique that would later be used for Gary Flandro's Planetary Grand Tour idea.
During the summer of 1964 at the NASA JPL, Gary Flandro was assigned the task of studying techniques for exploring the outer planets of the solar system. In this study he discovered the rare alignment of the outer planets (Jupiter, Saturn, Uranus, and Neptune) and conceived the Planetary Grand Tour multi-planet mission utilizing gravity assist to reduce mission duration from forty years to less than ten.
Purpose
A spacecraft traveling from Earth to an inner planet will increase its relative speed because it is falling toward the Sun, and a spacecraft traveling from Earth to an outer planet will decrease its speed because it is leaving the vicinity of the Sun.
Although the orbital speed of an inner planet is greater than that of the Earth, a spacecraft traveling to an inner planet, even at the minimum speed needed to reach it, is still accelerated by the Sun's gravity to a speed notably greater than the orbital speed of that destination planet. If the spacecraft's purpose is only to fly by the inner planet, then there is typically no need to slow the spacecraft. However, if the spacecraft is to be inserted into orbit about that inner planet, then there must be some way to slow it down.
Similarly, while the orbital speed of an outer planet is less than that of the Earth, a spacecraft leaving the Earth at the minimum speed needed to travel to some outer planet is slowed by the Sun's gravity to a speed far less than the orbital speed of that outer planet. Therefore, there must be some way to accelerate the spacecraft when it reaches that outer planet if it is to enter orbit about it.
Rocket engines can certainly be used to increase and decrease the speed of the spacecraft. However, rocket thrust takes propellant, propellant has mass, and even a small change in velocity (known as Δv, or "delta-v", the delta symbol being used to represent a change and "v" signifying velocity) translates to a far larger requirement for propellant needed to escape Earth's gravity well. This is because not only must the primary-stage engines lift the extra propellant, they must also lift the further propellant needed to lift that additional propellant. The liftoff mass requirement therefore increases exponentially with the delta-v required of the spacecraft.
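A sketch of why the requirement grows so steeply, using the ideal (Tsiolkovsky) rocket equation; the 4.4 km/s exhaust velocity is an assumed typical value for a hydrogen/oxygen stage, not a figure from the text:

```python
# Ideal rocket equation: the initial/final mass ratio grows exponentially with delta-v.
import math

v_exhaust = 4.4   # km/s, assumed effective exhaust velocity

def mass_ratio(delta_v):
    """Initial-to-final mass ratio needed for a given delta-v (same units as v_exhaust)."""
    return math.exp(delta_v / v_exhaust)

for dv in (3.0, 6.0, 9.0, 12.0):
    print(f"delta-v {dv:4.1f} km/s -> mass ratio {mass_ratio(dv):5.1f}")
```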
Because additional fuel is needed to lift fuel into space, space missions are designed with a tight propellant "budget", known as the "delta-v budget". The delta-v budget is in effect the total propellant that will be available after leaving the earth, for speeding up, slowing down, stabilization against external buffeting (by particles or other external effects), or direction changes, if it cannot acquire more propellant. The entire mission must be planned within that capability. Therefore, methods of speed and direction change that do not require fuel to be burned are advantageous, because they allow extra maneuvering capability and course enhancement, without spending fuel from the limited amount which has been carried into space. Gravity assist maneuvers can greatly change the speed of a spacecraft without expending propellant, and can save significant amounts of propellant, so they are a very common technique to save fuel.
Limits
The main practical limit to the use of a gravity assist maneuver is that planets and other large masses are seldom in the right places to enable a voyage to a particular destination. For example, the Voyager missions which started in the late 1970s were made possible by the "Grand Tour" alignment of Jupiter, Saturn, Uranus and Neptune. A similar alignment will not occur again until the middle of the 22nd century. That is an extreme case, but even for less ambitious missions there are years when the planets are scattered in unsuitable parts of their orbits.
Another limitation is the atmosphere, if any, of the available planet. The closer the spacecraft can approach, the faster its periapsis speed as gravity accelerates the spacecraft, allowing for more kinetic energy to be gained from a rocket burn. However, if a spacecraft gets too deep into the atmosphere, the energy lost to drag can exceed that gained from the planet's gravity. On the other hand, the atmosphere can be used to accomplish aerobraking. There have also been theoretical proposals to use aerodynamic lift as the spacecraft flies through the atmosphere. This maneuver, called an aerogravity assist, could bend the trajectory through a larger angle than gravity alone, and hence increase the gain in energy.
Even in the case of an airless body, there is a limit to how close a spacecraft may approach. The magnitude of the achievable change in velocity depends on the spacecraft's approach velocity and the planet's escape velocity at the point of closest approach (limited by either the surface or the atmosphere.)
Interplanetary slingshots using the Sun itself are not possible because the Sun is at rest relative to the Solar System as a whole. However, thrusting when near the Sun has the same effect as the powered slingshot described as the Oberth effect. This has the potential to magnify a spacecraft's thrusting power enormously, but is limited by the spacecraft's ability to resist the heat.
A rotating black hole might provide additional assistance, if its spin axis is aligned the right way. General relativity predicts that a large spinning mass produces frame-dragging: close to the object, space itself is dragged around in the direction of the spin. Any ordinary rotating object produces this effect. Although attempts to measure frame dragging about the Sun have produced no clear evidence, experiments performed by Gravity Probe B have detected frame-dragging effects caused by Earth. General relativity predicts that a spinning black hole is surrounded by a region of space, called the ergosphere, within which standing still (with respect to the black hole's spin) is impossible, because space itself is dragged at the speed of light in the same direction as the black hole's spin. The Penrose process may offer a way to gain energy from the ergosphere, although it would require the spaceship to dump some "ballast" into the black hole, and the spaceship would have had to expend energy to carry the "ballast" to the black hole.
Notable examples of use
Luna 3
The gravity assist maneuver was first attempted in 1959 for Luna 3, to photograph the far side of the Moon. The satellite did not gain speed, but its orbit was changed in a way that allowed successful transmission of the photos.
Pioneer 10
NASA's Pioneer 10 is a space probe launched in 1972 that completed the first mission to the planet Jupiter. Thereafter, Pioneer 10 became the first of five artificial objects to achieve the escape velocity needed to leave the Solar System. In December 1973, the Pioneer 10 spacecraft became the first to use the gravitational slingshot effect to reach the escape velocity needed to leave the Solar System.
Pioneer 11
Pioneer 11 was launched by NASA in 1973, to study the asteroid belt, the environment around Jupiter and Saturn, solar winds, and cosmic rays. It was the first probe to encounter Saturn, the second to fly through the asteroid belt, and the second to fly by Jupiter. To get to Saturn, the spacecraft got a gravity assist on Jupiter.
Mariner 10
The Mariner 10 probe was the first spacecraft to use the gravitational slingshot effect to reach another planet, passing by Venus on 5 February 1974 on its way to becoming the first spacecraft to explore Mercury.
Voyager 1
Voyager 1 was launched by NASA on September 5, 1977. It gained the energy to escape the Sun's gravity by performing slingshot maneuvers around Jupiter and Saturn. The spacecraft still communicates with the Deep Space Network to receive routine commands and to transmit data to Earth. Real-time distance and velocity data is provided by NASA and JPL. As of January 12, 2020, it is the most distant human-made object from Earth.
Voyager 2
Voyager 2 was launched by NASA on August 20, 1977, to study the outer planets. Its trajectory took longer to reach Jupiter and Saturn than its twin spacecraft but enabled further encounters with Uranus and Neptune.
Galileo
The Galileo spacecraft was launched by NASA in 1989 and on its route to Jupiter got three gravity assists, one from Venus (February 10, 1990), and two from Earth (December 8, 1990 and December 8, 1992). The spacecraft reached Jupiter in December 1995. Gravity assists also allowed Galileo to fly by two asteroids, 243 Ida and 951 Gaspra.
Ulysses
In 1990, NASA launched the ESA spacecraft Ulysses to study the polar regions of the Sun. All the planets orbit approximately in a plane aligned with the equator of the Sun. Thus, to enter an orbit passing over the poles of the Sun, the spacecraft would have to eliminate the speed it inherited from the Earth's orbit around the Sun and gain the speed needed to orbit the Sun in the pole-to-pole plane. It was achieved by a gravity assist from Jupiter on February 8, 1992.
MESSENGER
The MESSENGER mission (launched in August 2004) made extensive use of gravity assists to slow its speed before orbiting Mercury. The MESSENGER mission included one flyby of Earth, two flybys of Venus, and three flybys of Mercury before finally arriving at Mercury in March 2011 with a velocity low enough to permit orbit insertion with available fuel. Although the flybys were primarily orbital maneuvers, each provided an opportunity for significant scientific observations.
Cassini
The Cassini–Huygens spacecraft was launched from Earth on 15 October 1997, followed by gravity assist flybys of Venus (26 April 1998 and 21 June 1999), Earth (18 August 1999), and Jupiter (30 December 2000). Transit to Saturn took 6.7 years; the spacecraft arrived on 1 July 2004. Its trajectory was called "the Most Complex Gravity-Assist Trajectory Flown to Date" in 2019.
After entering orbit around Saturn, the Cassini spacecraft used multiple Titan gravity assists to achieve significant changes in the inclination of its orbit as well, so that instead of staying nearly in the equatorial plane, the spacecraft's flight path was inclined well out of the plane of the rings. A typical Titan encounter changed the spacecraft's velocity by 0.75 km/s, and the spacecraft made 127 Titan encounters. These encounters enabled an orbital tour with a wide range of periapsis and apoapsis distances, various alignments of the orbit with respect to the Sun, and orbital inclinations from 0° to 74°. The multiple flybys of Titan also allowed Cassini to fly by other moons, such as Rhea and Enceladus.
Rosetta
The Rosetta probe, launched in March 2004, used four gravity assist maneuvers (including one just 250 km from the surface of Mars, and three assists from Earth) to accelerate throughout the inner Solar System. That enabled it to fly by the asteroids 21 Lutetia and 2867 Šteins as well as eventually match the velocity of the 67P/Churyumov–Gerasimenko comet at the rendezvous point in August 2014.
New Horizons
New Horizons was launched by NASA in 2006, and reached Pluto in 2015. In 2007 it performed a gravity assist on Jupiter.
Juno
The Juno spacecraft was launched on August 5, 2011 (UTC). The trajectory used a gravity assist speed boost from Earth, accomplished by an Earth flyby in October 2013, two years after its launch. In that way Juno changed its orbit (and speed) toward its final goal, Jupiter, which it reached about five years after launch.
Parker Solar Probe
The Parker Solar Probe, launched by NASA in 2018, has seven planned Venus gravity assists. Each gravity assist brings the Parker Solar Probe progressively closer to the Sun. As of 2022, the spacecraft has performed five of its seven assists. The Parker Solar Probe's mission will make the closest approach to the Sun by any space mission.
Solar Orbiter
Solar Orbiter was launched by ESA in 2020. In its initial cruise phase, which lasts until November 2021, Solar Orbiter performed two gravity-assist manoeuvres around Venus and one around Earth to alter the spacecraft's trajectory, guiding it towards the innermost regions of the Solar System. The first close solar pass will take place on 26 March 2022 at around a third of Earth's distance from the Sun.
BepiColombo
BepiColombo is a joint mission of the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA) to the planet Mercury. It was launched on 20 October 2018. It will use the gravity assist technique once with Earth, twice with Venus, and six times with Mercury. It will arrive in 2025. BepiColombo is named after Giuseppe (Bepi) Colombo, who was a pioneer of this type of maneuver.
Lucy
Lucy was launched by NASA on 16 October 2021. It gained one gravity assist from Earth on 16 October 2022, and after a flyby of the main-belt asteroid 152830 Dinkinesh it will gain another in 2024. In 2025, it will fly by the inner main-belt asteroid 52246 Donaldjohanson. In 2027, it will arrive at the Trojan cloud (the Greek camp of asteroids that orbits about 60° ahead of Jupiter), where it will fly by four Trojans, 3548 Eurybates (with its satellite), 15094 Polymele, 11351 Leucus, and 21900 Orus. After these flybys, Lucy will return to Earth in 2031 for another gravity assist toward the Trojan cloud (the Trojan camp which trails about 60° behind Jupiter), where it will visit the binary Trojan 617 Patroclus with its satellite Menoetius in 2033.
See also
3753 Cruithne, an asteroid which periodically has gravitational slingshot encounters with Earth
Delta-v budget
Low-energy transfer, a type of gravitational assist where a spacecraft is gravitationally snagged into orbit by a celestial body. This method is usually executed in the Earth-Moon system.
Dynamical friction
Flyby anomaly, an anomalous delta-v increase during gravity assists
Gravitational keyhole
Interplanetary Transport Network
n-body problem
Oberth effect, applying thrust near closest approach in a gravity well
Pioneer H, first Out-Of-The-Ecliptic mission (OOE) proposed, for Jupiter and solar (Sun) observations
STEREO, a gravity-assisted mission which used Earth's Moon to eject two spacecraft from Earth's orbit into heliocentric orbit
Notes
References
External links
Basics of Space Flight: A Gravity Assist Primer at NASA.gov
Spaceflight and Spacecraft: Gravity Assist, discussion at Phy6.org
Double-ball drop experiment
Astrodynamics
Soviet inventions
Orbital maneuvers
Spacecraft propulsion
Assist
Articles containing video clips | 0.791752 | 0.996785 | 0.789206 |
Mechanical energy | In physical sciences, mechanical energy is the sum of potential energy and kinetic energy. The principle of conservation of mechanical energy states that if an isolated system is subject only to conservative forces, then the mechanical energy is constant. If an object moves in the opposite direction of a conservative net force, the potential energy will increase; and if the speed (not the velocity) of the object changes, the kinetic energy of the object also changes. In all real systems, however, nonconservative forces, such as frictional forces, will be present, but if they are of negligible magnitude, the mechanical energy changes little and its conservation is a useful approximation. In elastic collisions, the kinetic energy is conserved, but in inelastic collisions some mechanical energy may be converted into thermal energy. The equivalence between lost mechanical energy and an increase in temperature was discovered by James Prescott Joule.
Many devices are used to convert mechanical energy to or from other forms of energy, e.g. an electric motor converts electrical energy to mechanical energy, an electric generator converts mechanical energy into electrical energy and a heat engine converts heat to mechanical energy.
General
Energy is a scalar quantity and the mechanical energy of a system is the sum of the potential energy (which is measured by the position of the parts of the system) and the kinetic energy (which is also called the energy of motion):
The potential energy, U, depends on the position of an object subjected to gravity or some other conservative force. The gravitational potential energy of an object is equal to the weight W of the object multiplied by the height h of the object's center of gravity relative to an arbitrary datum:
The potential energy of an object can be defined as the object's ability to do work and is increased as the object is moved in the opposite direction of the direction of the force. If F represents the conservative force and x the position, the potential energy of the force between the two positions x1 and x2 is defined as the negative integral of F from x1 to x2:
The kinetic energy, K, depends on the speed of an object and is the ability of a moving object to do work on other objects when it collides with them. It is defined as one half the product of the object's mass with the square of its speed, and the total kinetic energy of a system of objects is the sum of the kinetic energies of the respective objects:
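A sketch of the standard expressions referred to above, with m the mass, g the gravitational acceleration, h the height, v the speed, and F a conservative force:

```latex
E_{\text{mechanical}} = U + K,
\qquad
U = W h = m g h,
\qquad
\Delta U = -\int_{x_1}^{x_2} F\,dx,
\qquad
K = \tfrac{1}{2} m v^{2}
```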
The principle of conservation of mechanical energy states that if a body or system is subjected only to conservative forces, the mechanical energy of that body or system remains constant. The difference between a conservative and a non-conservative force is that when a conservative force moves an object from one point to another, the work done by the conservative force is independent of the path. On the contrary, when a non-conservative force acts upon an object, the work done by the non-conservative force is dependent on the path.
Conservation of mechanical energy
According to the principle of conservation of mechanical energy, the mechanical energy of an isolated system remains constant in time, as long as the system is free of friction and other non-conservative forces. In any real situation, frictional forces and other non-conservative forces are present, but in many cases their effects on the system are so small that the principle of conservation of mechanical energy can be used as a fair approximation. Though energy cannot be created or destroyed, it can be converted to another form of energy.
Swinging pendulum
In a mechanical system like a swinging pendulum subjected to the conservative gravitational force where frictional forces like air drag and friction at the pivot are negligible, energy passes back and forth between kinetic and potential energy but never leaves the system. The pendulum reaches greatest kinetic energy and least potential energy when in the vertical position, because it will have the greatest speed and be nearest the Earth at this point. On the other hand, it will have its least kinetic energy and greatest potential energy at the extreme positions of its swing, because it has zero speed and is farthest from Earth at these points. However, when taking the frictional forces into account, the system loses mechanical energy with each swing because of the negative work done on the pendulum by these non-conservative forces.
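A small numerical illustration of this exchange for an ideal (frictionless) pendulum, using conservation of mechanical energy directly rather than integrating the equation of motion; the length, mass and release angle are arbitrary example values:

```python
# Ideal pendulum: potential plus kinetic energy is the same at every angle.
import math

g, L, m = 9.81, 1.0, 0.5          # gravity (m/s^2), length (m), bob mass (kg)
theta0 = math.radians(30)         # release angle, measured from the vertical

def height(theta):
    """Height of the bob above its lowest point."""
    return L * (1 - math.cos(theta))

E_total = m * g * height(theta0)  # released from rest: all of the energy is potential

for theta_deg in (30, 20, 10, 0):
    theta = math.radians(theta_deg)
    U = m * g * height(theta)
    K = E_total - U               # conservation fixes the kinetic energy
    v = math.sqrt(2 * K / m)      # corresponding speed of the bob
    print(f"{theta_deg:2d} deg: U={U:.4f} J  K={K:.4f} J  v={v:.3f} m/s  U+K={U + K:.4f} J")
```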
Irreversibilities
That the loss of mechanical energy in a system always resulted in an increase of the system's temperature has been known for a long time, but it was the amateur physicist James Prescott Joule who first experimentally demonstrated how a certain amount of work done against friction resulted in a definite quantity of heat which should be conceived as the random motions of the particles that comprise matter. This equivalence between mechanical energy and heat is especially important when considering colliding objects. In an elastic collision, mechanical energy is conserved – the sum of the mechanical energies of the colliding objects is the same before and after the collision. After an inelastic collision, however, the mechanical energy of the system will have changed. Usually, the mechanical energy before the collision is greater than the mechanical energy after the collision. In inelastic collisions, some of the mechanical energy of the colliding objects is transformed into kinetic energy of the constituent particles. This increase in kinetic energy of the constituent particles is perceived as an increase in temperature. The collision can be described by saying some of the mechanical energy of the colliding objects has been converted into an equal amount of heat. Thus, the total energy of the system remains unchanged though the mechanical energy of the system has reduced.
Satellite
A satellite of mass m at a distance r from the centre of Earth possesses both kinetic energy, K, (by virtue of its motion) and gravitational potential energy, U, (by virtue of its position within the Earth's gravitational field; Earth's mass is M).
Hence, the mechanical energy of the satellite–Earth system is the sum of these two contributions. If the satellite is in a circular orbit, the energy expression can be further simplified, since in circular motion Newton's second law of motion equates the gravitational force to the required centripetal force.
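A sketch of these relations in the notation above (satellite mass m, speed v, orbital radius r, Earth's mass M):

```latex
E = K + U = \tfrac{1}{2} m v^{2} - \frac{G M m}{r},
\qquad
\frac{G M m}{r^{2}} = \frac{m v^{2}}{r}
\;\Rightarrow\;
\tfrac{1}{2} m v^{2} = \frac{G M m}{2r},
\qquad
E_{\text{circular}} = -\frac{G M m}{2r}
```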
Conversion
Today, many technological devices convert mechanical energy into other forms of energy or vice versa. These devices can be placed in these categories:
An electric motor converts electrical energy into mechanical energy.
A generator converts mechanical energy into electrical energy.
A hydroelectric powerplant converts the mechanical energy of water in a storage dam into electrical energy.
An internal combustion engine is a heat engine that obtains mechanical energy from chemical energy by burning fuel. From this mechanical energy, the internal combustion engine often generates electricity.
A steam engine converts the heat energy of steam into mechanical energy.
A turbine converts the kinetic energy of a stream of gas or liquid into mechanical energy.
Distinction from other types
The classification of energy into different types often follows the boundaries of the fields of study in the natural sciences.
Chemical energy is the kind of potential energy "stored" in chemical bonds and is studied in chemistry.
Nuclear energy is energy stored in interactions between the particles in the atomic nucleus and is studied in nuclear physics.
Electromagnetic energy is in the form of electric charges, magnetic fields, and photons. It is studied in electromagnetism.
Various forms of energy in quantum mechanics; e.g., the energy levels of electrons in an atom.
References
Notes
Citations
Bibliography
Energy (physics)
Mechanical quantities
Articles containing video clips | 0.792267 | 0.995616 | 0.788794 |
Classical electromagnetism | Classical electromagnetism or classical electrodynamics is a branch of theoretical physics that studies the interactions between electric charges and currents using an extension of the classical Newtonian model. It is, therefore, a classical field theory. The theory provides a description of electromagnetic phenomena whenever the relevant length scales and field strengths are large enough that quantum mechanical effects are negligible. For small distances and low field strengths, such interactions are better described by quantum electrodynamics which is a quantum field theory.
Fundamental physical aspects of classical electrodynamics are presented in many textbooks. For the undergraduate level, textbooks like The Feynman Lectures on Physics, Electricity and Magnetism, and Introduction to Electrodynamics are considered classic references; for the graduate level, textbooks like Classical Electricity and Magnetism, Classical Electrodynamics, and Course of Theoretical Physics are considered classic references.
History
The physical phenomena that electromagnetism describes have been studied as separate fields since antiquity. For example, there were many advances in the field of optics centuries before light was understood to be an electromagnetic wave. However, the theory of electromagnetism, as it is currently understood, grew out of Michael Faraday's experiments suggesting the existence of an electromagnetic field and James Clerk Maxwell's use of differential equations to describe it in his A Treatise on Electricity and Magnetism (1873). The development of electromagnetism in Europe included the development of methods to measure voltage, current, capacitance, and resistance. Detailed historical accounts are given by Wolfgang Pauli, E. T. Whittaker, Abraham Pais, and Bruce J. Hunt.
Lorentz force
The electromagnetic field exerts the following force (often called the Lorentz force) on charged particles:
where all boldfaced quantities are vectors: F is the force that a particle with charge q experiences, E is the electric field at the location of the particle, v is the velocity of the particle, and B is the magnetic field at the location of the particle.
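In this notation the force reads:

```latex
\mathbf{F} = q\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)
```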
The above equation illustrates that the Lorentz force is the sum of two vectors. One is the cross product of the velocity and magnetic field vectors. Based on the properties of the cross product, this produces a vector that is perpendicular to both the velocity and magnetic field vectors. The other vector is in the same direction as the electric field. The sum of these two vectors is the Lorentz force.
Although the equation appears to suggest that the electric and magnetic fields are independent, the equation can be rewritten in terms of the four-current (instead of charge) and a single electromagnetic tensor that represents the combined field.
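A sketch of this covariant form, with F the electromagnetic tensor, J the four-current and f the resulting force density:

```latex
f_{\mu} = F_{\mu\nu}\,J^{\nu}
```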
Electric field
The electric field E is defined such that, on a stationary charge:
where q0 is what is known as a test charge and F is the force on that charge. The size of the charge does not really matter, as long as it is small enough not to influence the electric field by its mere presence. What is plain from this definition, though, is that the unit of E is N/C (newtons per coulomb). This unit is equal to V/m (volts per meter); see below.
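In symbols, the definition above reads:

```latex
\mathbf{F} = q_0\,\mathbf{E},
\qquad\text{equivalently}\qquad
\mathbf{E} = \frac{\mathbf{F}}{q_0}
```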
In electrostatics, where charges are not moving, around a distribution of point charges, the forces determined from Coulomb's law may be summed. The result after dividing by q0 is:
where n is the number of charges, qi is the amount of charge associated with the ith charge, ri is the position of the ith charge, r is the position where the electric field is being determined, and ε0 is the electric constant.
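In SI units the summed field takes the familiar Coulomb form:

```latex
\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\sum_{i=1}^{n} q_i\,
\frac{\mathbf{r}-\mathbf{r}_i}{\left|\mathbf{r}-\mathbf{r}_i\right|^{3}}
```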
If the field is instead produced by a continuous distribution of charge, the summation becomes an integral:
where ρ(r′) is the charge density and r − r′ is the vector that points from the volume element at r′ to the point in space where E is being determined.
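A sketch of the integral in this notation:

```latex
\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\int
\rho(\mathbf{r}')\,\frac{\mathbf{r}-\mathbf{r}'}{\left|\mathbf{r}-\mathbf{r}'\right|^{3}}\,d^{3}\mathbf{r}'
```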
Both of the above equations are cumbersome, especially if one wants to determine E as a function of position. A scalar function called the electric potential can help. Electric potential, also called voltage (the units for which are the volt), is defined by the line integral
where φ is the electric potential, and C is the path over which the integral is being taken.
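Written out, with C running from the reference point to the point where the potential is evaluated, the definition is:

```latex
\varphi = -\int_{C} \mathbf{E}\cdot d\boldsymbol{\ell}
```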
Unfortunately, this definition has a caveat. From Maxwell's equations, it is clear that the curl of the electric field, ∇ × E, is not always zero, and hence the scalar potential alone is insufficient to define the electric field exactly. As a result, one must add a correction factor, which is generally done by subtracting the time derivative of the A vector potential described below. Whenever the charges are quasistatic, however, this condition will be essentially met.
From the definition of charge, one can easily show that the electric potential of a point charge as a function of position is:
where q is the point charge's charge, r is the position at which the potential is being determined, and ri is the position of each point charge. The potential for a continuous distribution of charge is:
where ρ(r′) is the charge density, and |r − r′| is the distance from the volume element at r′ to the point in space where φ is being determined.
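A sketch of both expressions in this notation (primes label source points):

```latex
\varphi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\sum_{i=1}^{n}\frac{q_i}{\left|\mathbf{r}-\mathbf{r}_i\right|},
\qquad
\varphi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\int\frac{\rho(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}\,d^{3}\mathbf{r}'
```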
The scalar φ will add to other potentials as a scalar. This makes it relatively easy to break complex problems down into simple parts and add their potentials. Taking the definition of φ backwards, we see that the electric field is just the negative gradient (the del operator) of the potential. Or:
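In symbols, including the time-dependent correction mentioned earlier:

```latex
\mathbf{E} = -\nabla\varphi
\qquad\text{or, in general,}\qquad
\mathbf{E} = -\nabla\varphi - \frac{\partial\mathbf{A}}{\partial t}
```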
From this formula it is clear that E can be expressed in V/m (volts per meter).
Electromagnetic waves
A changing electromagnetic field propagates away from its origin in the form of a wave. These waves travel in vacuum at the speed of light and exist in a wide spectrum of wavelengths. Examples of the dynamic fields of electromagnetic radiation (in order of increasing frequency): radio waves, microwaves, light (infrared, visible light and ultraviolet), x-rays and gamma rays. In the field of particle physics this electromagnetic radiation is the manifestation of the electromagnetic interaction between charged particles.
General field equations
As simple and satisfying as Coulomb's equation may be, it is not entirely correct in the context of classical electromagnetism. Problems arise because changes in charge distributions require a non-zero amount of time to be "felt" elsewhere (required by special relativity).
For the fields of general charge distributions, the retarded potentials can be computed and differentiated accordingly to yield Jefimenko's equations.
Retarded potentials can also be derived for point charges, and the equations are known as the Liénard–Wiechert potentials. The scalar potential is:
where q is the point charge's charge and r is the position of the field point; rs and vs are the position and velocity of the charge, respectively, as a function of retarded time. The vector potential is similar:
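A sketch of the two potentials in SI units, with every source quantity evaluated at the retarded time tr, n the unit vector from the charge's retarded position rs to the field point, and β = v/c:

```latex
\varphi(\mathbf{r},t) = \frac{1}{4\pi\varepsilon_0}
\left[\frac{q}{\bigl(1-\mathbf{n}\cdot\boldsymbol{\beta}\bigr)\left|\mathbf{r}-\mathbf{r}_s\right|}\right]_{t_r},
\qquad
\mathbf{A}(\mathbf{r},t) = \frac{\mu_0 c}{4\pi}
\left[\frac{q\,\boldsymbol{\beta}}{\bigl(1-\mathbf{n}\cdot\boldsymbol{\beta}\bigr)\left|\mathbf{r}-\mathbf{r}_s\right|}\right]_{t_r}
= \frac{\mathbf{v}(t_r)}{c^{2}}\,\varphi(\mathbf{r},t)
```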
These can then be differentiated accordingly to obtain the complete field equations for a moving point particle.
Models
Branches of classical electromagnetism such as optics, electrical and electronic engineering consist of a collection of relevant mathematical models of different degrees of simplification and idealization to enhance the understanding of specific electrodynamics phenomena. An electrodynamics phenomenon is determined by the particular fields, specific densities of electric charges and currents, and the particular transmission medium. Since there are infinitely many of them, in modeling there is a need for some typical, representative
(a) electrical charges and currents, e.g. moving pointlike charges and electric and magnetic dipoles, electric currents in a conductor etc.;
(b) electromagnetic fields, e.g. voltages, the Liénard–Wiechert potentials, the monochromatic plane waves, optical rays, radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays, gamma rays etc.;
(c) transmission media, e.g. electronic components, antennas, electromagnetic waveguides, flat mirrors, mirrors with curved surfaces, convex lenses, concave lenses; resistors, inductors, capacitors, switches; wires, electric and optical cables, transmission lines, integrated circuits etc.; all of which have only a few variable characteristics.
See also
Mathematical descriptions of the electromagnetic field
Weber electrodynamics
Wheeler–Feynman absorber theory
References
Electromagnetism
Electrodynamics | 0.795093 | 0.992042 | 0.788766 |
Accelerationism | Accelerationism is a range of revolutionary and reactionary ideas in left-wing and right-wing ideologies that call for the drastic intensification of capitalist growth, technological change, infrastructure sabotage and other processes of social change to destabilize existing systems and create radical social transformations, otherwise referred to as "acceleration". It has been regarded as an ideological spectrum divided into mutually contradictory left-wing and right-wing variants, both of which support the indefinite intensification of capitalism and its structures as well as the conditions for a technological singularity, a hypothetical point in time at which technological growth becomes uncontrollable and irreversible.
Various ideas, including Gilles Deleuze and Félix Guattari's idea of deterritorialization, Jean Baudrillard's proposals for "fatal strategies", and aspects of the theoretical systems and processes developed by English philosopher and later Dark Enlightenment commentator Nick Land, are crucial influences on accelerationism, which aims to analyze and subsequently promote the social, economic, cultural, and libidinal forces that constitute the process of acceleration. While originally used by the far-left, the term has, in a manner strongly distinguished from original accelerationist theorists, been used by right-wing extremists such as neo-fascists, neo-Nazis, white nationalists and white supremacists to increasingly refer to an "acceleration" of racial conflict through assassinations, murders and terrorist attacks as a means to violently achieve a white ethnostate.
While predominantly a political strategy suited to the industrial economy, acceleration has recently been discussed in debates about humanism and artificial intelligence. Yuk Hui and Louis Morelle consider acceleration and the "Singularity Hypothesis". James Brusseau discusses acceleration as an ethics of innovation where humanistic dilemmas caused by AI innovation are resolved by still more innovation, as opposed to limiting or slowing the technology. A movement known as effective accelerationism (abbreviated to e/acc) advocates for technological progress "at all costs".
Background and precursors
The term "accelerationism" originated with sci-fi author Roger Zelazny in his third novel, 1967's Lord of Light.
The term was popularized by professor and author Benjamin Noys in his 2010 book The Persistence of the Negative to describe the trajectory of certain post-structuralists who embraced unorthodox Marxist and counter-Marxist overviews of capitalist growth, such as Gilles Deleuze and Félix Guattari in their 1972 book, Anti-Oedipus, Jean-François Lyotard in his 1974 book Libidinal Economy and Jean Baudrillard in his 1976 book Symbolic Exchange and Death.
English right-wing philosopher and writer Nick Land, commonly credited with creating and inspiring accelerationism's basic ideas and concepts, cited a number of philosophers who express anticipatory accelerationist attitudes in his 2017 essay "A Quick-and-Dirty Introduction to Accelerationism". Firstly, Friedrich Nietzsche argued in a fragment in The Will to Power that "the leveling process of European man is the great process which should not be checked: one should even accelerate it." Then, taking inspiration from this notion for Anti-Oedipus, Deleuze and Guattari speculated on an unprecedented "revolutionary path" to further perpetuate capitalism's tendencies that would later become a central idea of accelerationism:
Land also cited Karl Marx, who, in his 1848 speech "On the Question of Free Trade", anticipated accelerationist principles a century before Deleuze and Guattari by describing free trade as socially destructive and fuelling class conflict, then effectively arguing for it:
Land attributed the increasing speed of the modern world, along with the associated decrease in time available to think and make decisions about its events, to unregulated capitalism and its ability to exponentially grow and self-improve, describing capitalism as "a positive feedback circuit, within which commercialization and industrialization mutually excite each other in a runaway process." He argued that the best way to deal with capitalism is to participate more to foster even greater exponential growth and self-improvement via creative destruction, believing such acceleration of those abilities and technological progress to be intrinsic to capitalism but impossible for non-capitalist systems, stating that "capital revolutionizes itself more thoroughly than any extrinsic 'revolution' possibly could."
Contemporary accelerationism
The Cybernetic Culture Research Unit (CCRU), an experimental theory collective that existed from 1995 to 2003 at the University of Warwick, included Land as well as other influential social theorists such as Mark Fisher and Sadie Plant as members. Prominent contemporary left-wing accelerationists include Nick Srnicek and Alex Williams, authors of the "Manifesto for an Accelerationist Politics"; and the Laboria Cuboniks collective, who authored the manifesto "Xenofeminism: A Politics for Alienation". For Mark Fisher, writing in 2012, "Land's withering assaults on the academic left [...] remain trenchant", although problematic since "Marxism is nothing if it is not accelerationist". Aria Dean notably synthesized the analysis of racial capitalism with accelerationism, arguing that the binary between humans, and machines and capital, is already blurred by the scars of the Atlantic slave trade. Benjamin H. Bratton's book The Stack: On Software and Sovereignty has been described as concerning accelerationist ideas, focusing on how information technology infrastructures undermine modern political geographies and proposing an open-ended "design brief". Tiziana Terranova's "Red Stack Attack!" links Bratton's stack model and left-wing accelerationism.
Left-wing accelerationism
Left-wing accelerationism, commonly referred to as "L/Acc", is often attributed to Mark Fisher, a prior CCRU member and mentor for Srnicek and Williams. Left-wing accelerationism seeks to explore, in an orthodox and conventional manner, how modern society has the momentum to create futures that are equitable and liberatory. While both strands of accelerationist thinking remain rooted in a similar range of thinkers, left accelerationism appeared with the intent to use their ideas for the goal of achieving an egalitarian future. In response to this strand of accelerationism and its optimism for egalitarianism and liberation, which departs from prior interests in experimentation and delirium, Land rebuked its ideas in an interview with The Guardian, saying that "the notion that self-propelling technology is separable from capitalism is a deep theoretical error".
Other uses of the term
Since "accelerationism" was coined in 2010, the term has taken on several new meanings, particularly by right-wing extremist movements and terrorist organizations, that has led the term to be sensationalized on multiple occasions. Several commentators have used the label accelerationist to describe a controversial political strategy articulated by the Slovenian philosopher, Freudo-Marxist theorist, and writer Slavoj Žižek. An often-cited example of this is Žižek's assertion in a November 2016 interview with Channel 4 News that were he an American citizen, he would vote for former U.S. president Donald Trump as the candidate more likely to disrupt the political status quo in that country.
Far-right accelerationist terrorism
Despite its originally Marxist philosophical and theoretical interests, since the late 2010s, international networks of neo-fascists, neo-Nazis, White nationalists, and White supremacists have increasingly used the term "accelerationism" to refer to right-wing extremist goals, and have been known to refer to an "acceleration" of racial conflict through violent means such as assassinations, murders, terrorist attacks and eventual societal collapse, to achieve the building of a White ethnostate. Far-right accelerationism has been widely considered as detrimental to public safety. The inspiration for this distinct variation is occasionally cited as American Nazi Party and National Socialist Liberation Front member James Mason's newsletter Siege, where he argued for sabotage, mass killings, and assassinations of high-profile targets to destabilize and destroy the current society, seen as a system upholding a Jewish and multicultural New World Order. His works were republished and popularized by the Iron March forum and Atomwaffen Division, right-wing extremist organizations strongly connected to various terrorist attacks, murders, and assaults. According to the Southern Poverty Law Center (SPLC), which tracks hate groups and files class action lawsuits against discriminatory organizations and entities, "on the case of white supremacists, the accelerationist set sees modern society as irredeemable and believe it should be pushed to collapse so a fascist society built on ethnonationalism can take its place. What defines white supremacist accelerationists is their belief that violence is the only way to pursue their political goals."
Brenton Harrison Tarrant, the perpetrator of the Christchurch mosque shootings that killed 51 people and injured 49 others, strongly encouraged right-wing accelerationism in a section of his manifesto titled "Destabilization and Accelerationism: Tactics". The manifesto also influenced John Timothy Earnest, the perpetrator of the Escondido mosque fire at the Dar-ul-Arqam Mosque in Escondido, California, and of the Poway synagogue shooting, which left one person dead and three injured; and Patrick Crusius, the perpetrator of the El Paso Walmart shooting that killed 23 people and injured 23 others. Tarrant and Earnest, in turn, influenced Juraj Krajčík, the perpetrator of the 2022 Bratislava shooting that left two patrons of a gay bar dead. Sich Battalion urged its members to buy a copy of Tarrant's manifesto, encouraging them to "get inspired" by it.
Although these right-wing extremist variants and their connected strings of terrorist attacks and murders are regarded as certainly uninformed by critical theory, which was a prime source of inspiration for Land's original ideas that led to accelerationism, Land himself became interested in the Atomwaffen-affiliated theistic Satanist organization Order of Nine Angles (ONA), that adheres to the ideology of Neo-Nazi terrorist accelerationism, describing the ONA's works as "highly-recommended" in a blog post. Since the 2010s, the political ideology and religious worldview of the Order of Nine Angles, founded by the British neo-Nazi leader David Myatt in 1974, have increasingly influenced militant neo-fascist and neo-Nazi insurgent groups associated with right-wing extremist and White supremacist international networks, most notably the Iron March forum.
Fascist accelerationist organizations
Active Club Network is a decentralized clandestine cell system of white nationalists. It promotes mixed martial arts to fight against what it asserts is a system that is targeting the white race, as well as a "warrior spirit" to prepare for a forthcoming race war. Some extremism researchers have characterized the network as a "shadow or stand-by army" which is awaiting activation as the need for it arises.
Atomwaffen Division is a neo-Nazi terror organization founded in 2013 by Brandon Russell and responsible for multiple murders and mass-casualty plots. Atomwaffen has been proscribed as a terror organization in the United Kingdom, Canada and Australia.
The Base is a neo-Nazi, white supremacist paramilitary hate group and training network, formed in 2018 by Rinaldo Nazzaro and active in the United States, Canada, Australia, South Africa, and Europe. It is considered a terrorist organization in Canada, Australia, New Zealand, and the United Kingdom.
Combat 18 is a neo-Nazi organization that has been proscribed in Canada and Germany and is tied to the assassination of Walter Lübcke and the 2009 Vítkov arson attack.
The Manson Family was a doomsday cult, led by Charles Manson, responsible for the Tate–LaBianca murders, in which seven people were murdered between August 8 and August 10, 1969. Manson was a white supremacist and neo-Nazi who prophesied about a race war in which African-Americans would rise up and exterminate all white people in the United States, with him and his followers hiding in safety. Afterward, the Family would rule over the Black population, with Manson as their "master," as he believed that Black people were not intelligent enough to govern themselves. The Tate–LaBianca murders were an attempt to bring this scenario closer to reality, with Manson believing that the killing of people who he considered "pigs" would inspire Black people to do the same.
Nordic Resistance Movement is a pan-Nordic neo-Nazi organization that adheres to accelerationism and is tied to ONA and multiple terror plots and murders, like the murder of an antifascist in Helsinki in 2016. There has been an international effort to proscribe NRM as a terrorist organization, and it was banned as such in Finland in 2019. On 14 June 2024, the United States Department of State designated NRM and its leaders as Specially Designated Global Terrorists (SDGT).
Order of Nine Angles is a neo-Nazi satanist organization that has been connected to multiple murders and terror plots. There has been an international effort to proscribe the ONA as a terror organization. Further, the ONA is connected to Atomwaffen and the Base, and the founder of the ONA, David Myatt, was a one-time leader of Combat 18.
Russian Imperial Movement is a white supremacist organization founded in Russia and proscribed as a terror organization in the United States and Canada for its connection to neo-fascist terrorists. People trained by RIM have gone on to commit a series of bombings and joined the separatist militants in Donbas.
See also
References
2010s neologisms
Anti-capitalism
Anti-fascism
Critical theory
Far-left politics
Right-wing terrorism
Ideologies of capitalism
Marxism
Neo-Nazism
Reactionary
Revolution terminology
Singularitarianism
Social theories
Social change
Transhumanism
Physics-informed neural networks | Physics-informed neural networks (PINNs), also referred to as Theory-Trained Neural Networks (TTNs), are a type of universal function approximators that can embed the knowledge of any physical laws that govern a given data set in the learning process, where these laws can be described by partial differential equations (PDEs). They overcome the low data availability of some biological and engineering systems that makes most state-of-the-art machine learning techniques lack robustness, rendering them ineffective in these scenarios. The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the correctness of the function approximation. This way, embedding this prior information into a neural network results in enhancing the information content of the available data, facilitating the learning algorithm to capture the right solution and to generalize well even with a low amount of training examples.
Function approximation
Most of the physical laws that govern the dynamics of a system can be described by partial differential equations. For example, the Navier–Stokes equations are a set of partial differential equations derived from the conservation laws (i.e., conservation of mass, momentum, and energy) that govern fluid mechanics. The solution of the Navier–Stokes equations with appropriate initial and boundary conditions allows the quantification of flow dynamics in a precisely defined geometry. However, these equations cannot be solved exactly and therefore numerical methods must be used (such as finite differences, finite elements and finite volumes). In this setting, these governing equations must be solved while accounting for prior assumptions, linearization, and adequate time and space discretization.
Recently, solving the governing partial differential equations of physical phenomena using deep learning has emerged as a new field of scientific machine learning (SciML), leveraging the universal approximation theorem and high expressivity of neural networks. In general, deep neural networks could approximate any high-dimensional function given that sufficient training data are supplied. However, such networks do not consider the physical characteristics underlying the problem, and the level of approximation accuracy provided by them is still heavily dependent on careful specifications of the problem geometry as well as the initial and boundary conditions. Without this preliminary information, the solution is not unique and may lose physical correctness. On the other hand, physics-informed neural networks (PINNs) leverage governing physical equations in neural network training. Namely, PINNs are designed to be trained to satisfy the given training data as well as the imposed governing equations. In this fashion, a neural network can be guided with training data that do not necessarily need to be large and complete. Potentially, an accurate solution of partial differential equations can be found without knowing the boundary conditions. Therefore, with some knowledge about the physical characteristics of the problem and some form of training data (even sparse and incomplete), PINN may be used for finding an optimal solution with high fidelity.
PINNs allow for addressing a wide range of problems in computational science and represent a pioneering technology leading to the development of new classes of numerical solvers for PDEs. PINNs can be thought of as a meshfree alternative to traditional approaches (e.g., CFD for fluid dynamics), and new data-driven approaches for model inversion and system identification. Notably, the trained PINN network can be used for predicting the values on simulation grids of different resolutions without the need to be retrained. In addition, they allow for exploiting automatic differentiation (AD) to compute the required derivatives in the partial differential equations, a class of differentiation techniques widely used to differentiate neural networks and assessed to be superior to numerical or symbolic differentiation.
Modeling and computation
A general nonlinear partial differential equation can be written as
$$u_t + \mathcal{N}[u; \lambda] = 0, \quad x \in \Omega, \quad t \in [0, T],$$
where $u(t, x)$ denotes the solution, $\mathcal{N}[\cdot; \lambda]$ is a nonlinear operator parameterized by $\lambda$, and $\Omega$ is a subset of $\mathbb{R}^D$. This general form of governing equations summarizes a wide range of problems in mathematical physics, such as conservation laws, diffusion processes, advection-diffusion systems, and kinetic equations. Given noisy measurements of a generic dynamic system described by the equation above, PINNs can be designed to solve two classes of problems:
data-driven solution
data-driven discovery of partial differential equations.
Data-driven solution of partial differential equations
The data-driven solution of a PDE computes the hidden state $u(t, x)$ of the system given boundary data and/or measurements $z$, and fixed model parameters $\lambda$. We solve
$$u_t + \mathcal{N}[u] = 0, \quad x \in \Omega, \quad t \in [0, T].$$
By defining the residual $f(t, x)$ as
$$f := u_t + \mathcal{N}[u],$$
and approximating $u(t, x)$ with a deep neural network, one obtains a PINN. This network can be differentiated using automatic differentiation. The parameters of $u(t, x)$ and $f(t, x)$ can then be learned by minimizing the following loss function $L_{tot}$:
$$L_{tot} = L_u + L_f.$$
Here $L_u$ is the error between the PINN $u(t, x)$ and the set of boundary conditions and measured data on the set of points where the boundary conditions and data are defined, and $L_f$ is the mean-squared error of the residual function $f(t, x)$. This second term encourages the PINN to learn the structural information expressed by the partial differential equation during the training process.
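To make the two loss terms concrete, the following is a minimal sketch of how $L_u$ and $L_f$ can be assembled with automatic differentiation. It assumes PyTorch; the 1D viscous Burgers equation, the network size, the sampled points, and the placeholder measurements are all illustrative choices for the sketch, not a prescribed implementation.

```python
import torch
import torch.nn as nn

# small fully connected surrogate u_theta(t, x)
net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def pde_residual(t, x, nu=0.01 / torch.pi):
    """Residual f = u_t + u*u_x - nu*u_xx for the 1D viscous Burgers equation."""
    t.requires_grad_(True)
    x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

# hypothetical boundary/initial data (t_u, x_u, u_obs) and collocation points (t_f, x_f)
t_u, x_u = torch.rand(100, 1), torch.rand(100, 1) * 2 - 1
u_obs = -torch.sin(torch.pi * x_u)              # placeholder measurements for the sketch
t_f, x_f = torch.rand(1000, 1), torch.rand(1000, 1) * 2 - 1

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss_u = torch.mean((net(torch.cat([t_u, x_u], dim=1)) - u_obs) ** 2)  # data/boundary term L_u
    loss_f = torch.mean(pde_residual(t_f, x_f) ** 2)                       # residual term L_f
    loss = loss_u + loss_f
    loss.backward()
    opt.step()
```

In practice the two terms are often given relative weights; the unit weights above are simply the default choice in this sketch.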
This approach has been used to yield computationally efficient physics-informed surrogate models with applications in the forecasting of physical processes, model predictive control, multi-physics and multi-scale modeling, and simulation. It has been shown to converge to the solution of the PDE.
Data-driven discovery of partial differential equations
Given noisy and incomplete measurements $z$ of the state of the system, the data-driven discovery of a PDE consists of computing the unknown state $u(t, x)$ and learning the model parameters $\lambda$ that best describe the observed data; it reads as follows:
$$u_t + \mathcal{N}[u; \lambda] = 0, \quad x \in \Omega, \quad t \in [0, T].$$
By defining $f(t, x)$ as
$$f := u_t + \mathcal{N}[u; \lambda],$$
and approximating $u(t, x)$ with a deep neural network, one again obtains a PINN. This network can be differentiated using automatic differentiation. The parameters of $u(t, x)$ and $f(t, x)$, together with the parameters $\lambda$ of the differential operator, can then be learned by minimizing the following loss function $L_{tot}$:
$$L_{tot} = L_u + L_f.$$
Here $L_u$ measures the mismatch between the state solution $u(t, x)$ and the measurements $z$ at the sparse locations where data are available, and $L_f$ is the mean-squared error of the residual function. This second term requires the structural information represented by the partial differential equation to be satisfied in the training process.
This strategy allows for discovering dynamic models described by nonlinear PDEs assembling computationally efficient and fully differentiable surrogate models that may find application in predictive forecasting, control, and data assimilation.
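For the discovery setting, the only structural change relative to the sketch given earlier is that the operator parameter $\lambda$ becomes trainable alongside the network weights. A hedged illustration follows, again assuming PyTorch and using the Burgers viscosity as the unknown parameter; this choice of parameter and parameterization is an assumption made for the example rather than part of the original formulation.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
log_nu = nn.Parameter(torch.tensor(0.0))     # unknown PDE parameter, learned jointly with the network

def residual(t, x):
    t.requires_grad_(True)
    x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - torch.exp(log_nu) * u_xx   # Burgers residual with learnable viscosity

# the optimizer now updates both the network weights and the PDE parameter
opt = torch.optim.Adam(list(net.parameters()) + [log_nu], lr=1e-3)
```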
Physics-informed neural networks for piece-wise function approximation
A single PINN often struggles to approximate PDEs with strong non-linearities or sharp gradients, which commonly occur in practical fluid flow problems. Piece-wise approximation is a long-standing practice in the field of numerical approximation. Because they can approximate strong non-linearities, extremely lightweight PINNs can be used to solve PDEs on discrete subdomains of a much larger domain, which increases accuracy substantially and decreases the computational load as well. DPINN (distributed physics-informed neural networks) and DPIELM (distributed physics-informed extreme learning machines) are generalizable space-time domain discretizations that improve the approximation. DPIELM is an extremely fast and lightweight approximator with competitive accuracy. Domain scaling, on top of this, has a further beneficial effect. Another school of thought is discretization for parallel computation, to leverage the available computational resources.
XPINN is a generalized space-time domain decomposition approach for physics-informed neural networks (PINNs) to solve nonlinear partial differential equations on arbitrarily complex-geometry domains. XPINN further pushes the boundaries of both PINNs and conservative PINNs (cPINNs), a spatial domain decomposition approach in the PINN framework tailored to conservation laws. Compared to PINN, the XPINN method has larger representation and parallelization capacity due to the inherent deployment of multiple neural networks in smaller subdomains. Unlike cPINN, XPINN can be extended to any type of PDE. Moreover, the domain can be decomposed in any arbitrary way (in space and time), which is not possible with cPINN. Thus, XPINN offers both space and time parallelization, thereby reducing the training cost more effectively. XPINN is particularly effective for large-scale problems (involving large data sets) as well as for high-dimensional problems where a single-network PINN is not adequate. Rigorous bounds on the errors resulting from the approximation of nonlinear PDEs (incompressible Navier–Stokes equations) with PINNs and XPINNs have been proved. However, DPINN debunks the use of residual (flux) matching at the domain interfaces, as it hardly seems to improve the optimization.
Physics-informed neural networks and functional interpolation
In the PINN framework, initial and boundary conditions are not analytically satisfied, thus they need to be included in the loss function of the network to be simultaneously learned with the differential equation (DE) unknown functions. Having competing objectives during the network's training can lead to unbalanced gradients while using gradient-based techniques, which causes PINNs to often struggle to accurately learn the underlying DE solution. This drawback is overcome by using functional interpolation techniques such as the Theory of Functional Connections (TFC)'s constrained expression, in the Deep-TFC framework, which reduces the solution search space of constrained problems to the subspace of neural networks that analytically satisfy the constraints. A further improvement of the PINN and functional interpolation approach is given by the Extreme Theory of Functional Connections (X-TFC) framework, where a single-layer neural network and the extreme learning machine training algorithm are employed. X-TFC allows one to improve the accuracy and performance of regular PINNs, and its robustness and reliability have been demonstrated for stiff problems, optimal control, aerospace, and rarefied gas dynamics applications.
Physics-informed PointNet (PIPN) for multiple sets of irregular geometries
Regular PINNs are only able to obtain the solution of a forward or inverse problem on a single geometry. It means that for any new geometry (computational domain), one must retrain a PINN. This limitation of regular PINNs imposes high computational costs, specifically for a comprehensive investigation of geometric parameters in industrial designs. Physics-informed PointNet (PIPN) is fundamentally the result of a combination of PINN's loss function with PointNet. In fact, instead of using a simple fully connected neural network, PIPN uses PointNet as the core of its neural network. PointNet has been primarily designed for deep learning of 3D object classification and segmentation by the research group of Leonidas J. Guibas. PointNet extracts geometric features of input computational domains in PIPN. Thus, PIPN is able to solve governing equations on multiple computational domains (rather than only a single domain) with irregular geometries, simultaneously. The effectiveness of PIPN has been shown for incompressible flow, heat transfer and linear elasticity.
Physics-informed neural networks (PINNs) for inverse computations
Physics-informed neural networks (PINNs) have proven particularly effective in solving inverse problems within differential equations, demonstrating their applicability across science, engineering, and economics. They have shown useful for solving inverse problems in a variety of fields, including nano-optics, topology optimization/characterization, multiphase flow in porous media, and high-speed fluid flow. PINNs have demonstrated flexibility when dealing with noisy and uncertain observation datasets. They also demonstrated clear advantages in the inverse calculation of parameters for multi-fidelity datasets, meaning datasets with different quality, quantity, and types of observations. Uncertainties in calculations can be evaluated using ensemble-based or Bayesian-based calculations.
Physics-informed neural networks (PINNs) with backward stochastic differential equation
Deep backward stochastic differential equation method is a numerical method that combines deep learning with Backward stochastic differential equation (BSDE) to solve high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods like finite difference methods or Monte Carlo simulations, which struggle with the curse of dimensionality. Deep BSDE methods use neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. Additionally, integrating Physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws into the neural network architecture, ensuring solutions adhere to governing stochastic differential equations, resulting in more accurate and reliable solutions.
Physics-informed neural networks for biology
An extension or adaptation of PINNs are biologically-informed neural networks (BINNs). BINNs introduce two key adaptations to the typical PINN framework: (i) the mechanistic terms of the governing PDE are replaced by neural networks, and (ii) the loss function is modified to include a constraint term used to incorporate domain-specific knowledge that helps enforce biological applicability. For (i), this adaptation has the advantage of relaxing the need to specify the governing differential equation a priori, either explicitly or by using a library of candidate terms. Additionally, this approach circumvents the potential issue of misspecifying regularization terms in stricter theory-informed cases.
A natural example of BINNs can be found in cell dynamics, where the cell density $u(x, t)$ is governed by a reaction-diffusion equation with diffusion and growth functions $D(u)$ and $G(u)$, respectively:
$$\frac{\partial u}{\partial t} = \nabla \cdot \big( D(u) \nabla u \big) + G(u)\,u.$$
In this case, a component of the constraint term could be a penalty on $D(u)$ that penalizes diffusivity values falling outside a biologically relevant range defined by lower and upper bounds. Furthermore, the BINN architecture, when utilizing multilayer perceptrons (MLPs), would function as follows: an MLP is used to construct a surrogate $u_{\text{MLP}}(x, t)$ from the model inputs $(x, t)$, serving as a surrogate model for the cell density $u(x, t)$. This surrogate is then fed into two additional MLPs, $D_{\text{MLP}}(u)$ and $G_{\text{MLP}}(u)$, which model the diffusion and growth functions. Automatic differentiation can then be applied to compute the necessary derivatives of $u_{\text{MLP}}$, $D_{\text{MLP}}$, and $G_{\text{MLP}}$ to form the governing reaction-diffusion equation.
Note that since $u_{\text{MLP}}$ is a surrogate for the cell density, it may contain errors, particularly in regions where the PDE is not fully satisfied. Therefore, the reaction-diffusion equation may be solved numerically, for instance using a method-of-lines approach.
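As a hedged illustration of how such a domain-knowledge term can enter the loss (assuming PyTorch; the bounds D_MIN and D_MAX and the weighting below are invented for the example, not taken from the original formulation):

```python
import torch

D_MIN, D_MAX = 0.0, 1.0          # illustrative biological bounds on the diffusivity

def constraint_penalty(D_pred):
    """Penalize diffusion values predicted outside the admissible range [D_MIN, D_MAX]."""
    below = torch.clamp(D_MIN - D_pred, min=0.0)
    above = torch.clamp(D_pred - D_MAX, min=0.0)
    return torch.mean(below ** 2 + above ** 2)

# BINN-style total loss: data misfit + PDE residual + domain-knowledge constraint (weight illustrative)
# loss = loss_data + loss_pde + 10.0 * constraint_penalty(D_mlp_output)
```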
Limitations
Translation and discontinuous behavior are hard to approximate using PINNs. PINNs fail when solving differential equations with even slight advective dominance, where the asymptotic behaviour causes the method to break down; such PDEs can sometimes be handled by rescaling variables.
This difficulty in training PINNs on advection-dominated PDEs can be explained by the Kolmogorov n-width of the solution.
They also fail to solve systems of dynamical equations, and hence have not been successful in solving chaotic equations. One of the reasons behind the failure of regular PINNs is the soft constraining of Dirichlet and Neumann boundary conditions, which poses a multi-objective optimization problem that requires manually weighting the loss terms in order to optimize.
More generally, posing the solution of a PDE as an optimization problem brings with it all the problems that are faced in the world of optimization, the major one being getting stuck in local optima.
References
External links
Physics Informed Neural Network
PINN – repository to implement physics-informed neural network in Python
XPINN – repository to implement extended physics-informed neural network (XPINN) in Python
PIPN – repository to implement physics-informed PointNet (PIPN) in Python
Differential equations
Deep learning
D'Alembert's principle | D'Alembert's principle, also known as the Lagrange–d'Alembert principle, is a statement of the fundamental classical laws of motion. It is named after its discoverer, the French physicist and mathematician Jean le Rond d'Alembert, and Italian-French mathematician Joseph Louis Lagrange. D'Alembert's principle generalizes the principle of virtual work from static to dynamical systems by introducing forces of inertia which, when added to the applied forces in a system, result in dynamic equilibrium.
D'Alembert's principle can be applied in cases of kinematic constraints that depend on velocities. The principle does not apply for irreversible displacements, such as sliding friction, and more general specification of the irreversibility is required.
Statement of the principle
The principle states that the sum of the differences between the forces acting on a system of massive particles and the time derivatives of the momenta of the system itself projected onto any virtual displacement consistent with the constraints of the system is zero. Thus, in mathematical notation, d'Alembert's principle is written as follows:
$$\sum_i \left( \mathbf{F}_i - m_i \dot{\mathbf{v}}_i \right) \cdot \delta \mathbf{r}_i = 0,$$
where:
$i$ is an integer used to indicate (via subscript) a variable corresponding to a particular particle in the system,
$\mathbf{F}_i$ is the total applied force (excluding constraint forces) on the $i$-th particle,
$m_i$ is the mass of the $i$-th particle,
$\mathbf{v}_i$ is the velocity of the $i$-th particle,
$\delta \mathbf{r}_i$ is the virtual displacement of the $i$-th particle, consistent with the constraints.
Newton's dot notation is used to represent the derivative with respect to time. The above equation is often called d'Alembert's principle, but it was first written in this variational form by Joseph Louis Lagrange. D'Alembert's contribution was to demonstrate that in the totality of a dynamic system the forces of constraint vanish. That is to say that the generalized forces need not include constraint forces. It is equivalent to the somewhat more cumbersome Gauss's principle of least constraint.
Derivations
General case with variable mass
The general statement of d'Alembert's principle mentions "the time derivatives of the momenta of the system." By Newton's second law, the first time derivative of momentum is the force. The momentum of the $i$-th mass is the product of its mass and velocity:
$$\mathbf{p}_i = m_i \mathbf{v}_i,$$
and its time derivative is
$$\dot{\mathbf{p}}_i = \dot{m}_i \mathbf{v}_i + m_i \dot{\mathbf{v}}_i.$$
In many applications, the masses are constant and this equation reduces to
$$\dot{\mathbf{p}}_i = m_i \dot{\mathbf{v}}_i = m_i \mathbf{a}_i.$$
However, some applications involve changing masses (for example, chains being rolled up or being unrolled) and in those cases both terms $\dot{m}_i \mathbf{v}_i$ and $m_i \dot{\mathbf{v}}_i$ have to remain present, giving
$$\sum_i \left( \mathbf{F}_i - \dot{m}_i \mathbf{v}_i - m_i \dot{\mathbf{v}}_i \right) \cdot \delta \mathbf{r}_i = 0.$$
Special case with constant mass
Consider Newton's law for a system of particles of constant mass, $m_i$. The total force on each particle is
$$\mathbf{F}_i^{(T)} = m_i \mathbf{a}_i,$$
where
$\mathbf{F}_i^{(T)}$ are the total forces acting on the system's particles,
$m_i \mathbf{a}_i$ are the inertial forces that result from the total forces.
Moving the inertial forces to the left gives an expression that can be considered to represent quasi-static equilibrium, but which is really just a small algebraic manipulation of Newton's law:
$$\mathbf{F}_i^{(T)} - m_i \mathbf{a}_i = \mathbf{0}.$$
Considering the virtual work, $\delta W$, done by the total and inertial forces together through an arbitrary virtual displacement, $\delta \mathbf{r}_i$, of the system leads to a zero identity, since the forces involved sum to zero for each particle:
$$\delta W = \sum_i \mathbf{F}_i^{(T)} \cdot \delta \mathbf{r}_i - \sum_i m_i \mathbf{a}_i \cdot \delta \mathbf{r}_i = 0.$$
The original vector equation could be recovered by recognizing that the work expression must hold for arbitrary displacements. Separating the total forces into applied forces, $\mathbf{F}_i$, and constraint forces, $\mathbf{C}_i$, yields
$$\delta W = \sum_i \mathbf{F}_i \cdot \delta \mathbf{r}_i + \sum_i \mathbf{C}_i \cdot \delta \mathbf{r}_i - \sum_i m_i \mathbf{a}_i \cdot \delta \mathbf{r}_i = 0.$$
If arbitrary virtual displacements are assumed to be in directions that are orthogonal to the constraint forces (which is not usually the case, so this derivation works only for special cases), the constraint forces don't do any work, $\sum_i \mathbf{C}_i \cdot \delta \mathbf{r}_i = 0$. Such displacements are said to be consistent with the constraints. This leads to the formulation of d'Alembert's principle, which states that the difference of applied forces and inertial forces for a dynamic system does no virtual work:
$$\delta W = \sum_i \left( \mathbf{F}_i - m_i \mathbf{a}_i \right) \cdot \delta \mathbf{r}_i = 0.$$
There is also a corresponding principle for static systems called the principle of virtual work for applied forces.
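As a brief worked illustration of the constant-mass form (an example added here for concreteness, not part of the original derivation), consider an Atwood machine: two masses $m_1$ and $m_2$ hang from an inextensible string over a massless, frictionless pulley. A virtual displacement $\delta x$ of $m_1$ downward moves $m_2$ upward by the same amount, so, measured downward, $\delta \mathbf{r}_1 = \delta x$ and $\delta \mathbf{r}_2 = -\delta x$, and likewise $a_1 = a$ and $a_2 = -a$. The string tension is a constraint force and does no virtual work, so d'Alembert's principle gives
$$\left( m_1 g - m_1 a \right) \delta x + \left( m_2 g + m_2 a \right)\left( -\delta x \right) = 0,$$
hence $a = \dfrac{(m_1 - m_2)\,g}{m_1 + m_2}$, with the tension never entering the calculation.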
D'Alembert's principle of inertial forces
D'Alembert showed that one can transform an accelerating rigid body into an equivalent static system by adding the so-called "inertial force" and "inertial torque" or moment. The inertial force must act through the center of mass and the inertial torque can act anywhere. The system can then be analyzed exactly as a static system subjected to this "inertial force and moment" and the external forces. The advantage is that in the equivalent static system one can take moments about any point (not just the center of mass). This often leads to simpler calculations because any force (in turn) can be eliminated from the moment equations by choosing the appropriate point about which to apply the moment equation (sum of moments = zero). Even in the course of Fundamentals of Dynamics and Kinematics of machines, this principle helps in analyzing the forces that act on a link of a mechanism when it is in motion. In textbooks of engineering dynamics, this is sometimes referred to as d'Alembert's principle.
Some educators caution that attempts to use d'Alembert inertial mechanics lead students to make frequent sign errors. A potential cause for these errors is the sign of the inertial forces. Inertial forces can be used to describe an apparent force in a non-inertial reference frame that has an acceleration with respect to an inertial reference frame. In such a non-inertial reference frame, a mass that is at rest and has zero acceleration in an inertial reference system, because no forces are acting on it, will still have an acceleration and an apparent inertial, or pseudo or fictitious force will seem to act on it: in this situation the inertial force has a minus sign.
Dynamic equilibrium
D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of rigid bodies with $m$ generalized coordinates $q_1, \ldots, q_m$ requires
$$\delta W = \left( Q_1 + Q_1^* \right) \delta q_1 + \dots + \left( Q_m + Q_m^* \right) \delta q_m = 0$$
for any set of virtual displacements $\delta q_j$, with $Q_j$ being a generalized applied force and $Q_j^*$ being a generalized inertia force. This condition yields the $m$ equations
$$Q_j + Q_j^* = 0, \quad j = 1, \ldots, m,$$
which can also be written as
$$\frac{\mathrm{d}}{\mathrm{d}t} \frac{\partial T}{\partial \dot{q}_j} - \frac{\partial T}{\partial q_j} = Q_j, \quad j = 1, \ldots, m.$$
The result is a set of m equations of motion that define the dynamics of the rigid body system.
Formulation using the Lagrangian
D'Alembert's principle can be rewritten in terms of the Lagrangian L=T-V of the system as a generalized version of Hamilton's principle as follows,
where:
$\mathbf{F}_i$ are the applied forces,
$\delta \mathbf{r}_i$ is the virtual displacement of the $i$-th particle, consistent with the constraints,
the critical curve satisfies the constraints.
With the Lagrangian
the previous statement of d'Alembert principle is recovered.
Generalization for thermodynamics
An extension of d'Alembert's principle can be used in thermodynamics. For instance, for an adiabatically closed thermodynamic system described by a Lagrangian depending on a single entropy S and with constant masses , such as
it is written as follows
where the previous constraints and are generalized to involve the entropy as:
Here is the temperature of the system, are the external forces, are the internal dissipative forces. It results in the mechanical and thermal balance equations:
Typical applications of the principle include thermo-mechanical systems, membrane transport, and chemical reactions.
For the classical d'Alembert principle and equations are recovered.
References
Classical mechanics
Dynamical systems
Lagrangian mechanics
Principles
Quantum mechanics | Quantum mechanics is a fundamental theory that describes the behavior of nature at and below the scale of atoms. It is the foundation of all quantum physics, which includes quantum chemistry, quantum field theory, quantum technology, and quantum information science.
Quantum mechanics can describe many systems that classical physics cannot. Classical physics can describe many aspects of nature at an ordinary (macroscopic and (optical) microscopic) scale, but is not sufficient for describing them at very small submicroscopic (atomic and subatomic) scales. Most theories in classical physics can be derived from quantum mechanics as an approximation, valid at large (macroscopic/microscopic) scale.
Quantum systems have bound states that are quantized to discrete values of energy, momentum, angular momentum, and other quantities, in contrast to classical systems where these quantities can be measured continuously. Measurements of quantum systems show characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle).
Quantum mechanics arose gradually from theories to explain observations that could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper, which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield.
Overview and fundamental concepts
Quantum mechanics allows the calculation of properties and behaviour of physical systems. It is typically applied to microscopic systems: molecules, atoms and sub-atomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as Wigner's friend, and its application to the universe as a whole remains speculative. Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. For example, the refinement of quantum mechanics for the interaction of light and matter, known as quantum electrodynamics (QED), has been shown to agree with experiment to within 1 part in 10¹² when predicting the magnetic properties of an electron.
A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.
One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between measurable quantities. The most famous form of this uncertainty principle says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its momentum.
Another consequence of the mathematical rules of quantum mechanics is the phenomenon of quantum interference, which is often illustrated with the double-slit experiment. In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. This behavior is known as wave–particle duality. In addition to light, electrons, atoms, and molecules are all found to exhibit the same dual behavior when fired towards a double slit.
Another non-classical phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential. In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy, tunnel diode and tunnel field-effect transistor.
When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought". Quantum entanglement enables quantum computing and is part of quantum communication protocols, such as quantum key distribution and superdense coding. Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem.
Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables.
It is not possible to present these concepts in more than a superficial way without introducing the mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects. Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples.
Mathematical formulation
In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector $\psi$ belonging to a (separable) complex Hilbert space $\mathcal{H}$. This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys $\langle \psi, \psi \rangle = 1$, and it is well-defined up to a complex number of modulus 1 (the global phase), that is, $\psi$ and $e^{i\alpha}\psi$ represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions $L^2(\mathbb{C})$, while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors $\mathbb{C}^2$ with the usual inner product.
Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue $\lambda$ is non-degenerate and the probability is given by $|\langle \vec{\lambda}, \psi \rangle|^2$, where $\vec{\lambda}$ is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by $\langle \psi, P_\lambda \psi \rangle$, where $P_\lambda$ is the projector onto its associated eigenspace. In the continuous case, these formulas give instead the probability density.
After the measurement, if result $\lambda$ was obtained, the quantum state is postulated to collapse to $\vec{\lambda}$, in the non-degenerate case, or to $P_\lambda \psi \big/ \sqrt{\langle \psi, P_\lambda \psi \rangle}$, in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity (see Measurement in quantum mechanics).
Time evolution of a quantum state
The time evolution of a quantum state is described by the Schrödinger equation:
$$i\hbar \frac{\partial}{\partial t} \psi(t) = H \psi(t).$$
Here $H$ denotes the Hamiltonian, the observable corresponding to the total energy of the system, and $\hbar$ is the reduced Planck constant. The constant $i\hbar$ is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle.
The solution of this differential equation is given by
$$\psi(t) = e^{-iHt/\hbar} \psi(0).$$
The operator $U(t) = e^{-iHt/\hbar}$ is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state $\psi(0)$ – it makes a definite prediction of what the quantum state $\psi(t)$ will be at any later time.
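A minimal numerical sketch of this time evolution (assuming NumPy and SciPy, with a two-level system and natural units chosen purely for illustration) builds the unitary $U(t) = e^{-iHt/\hbar}$ by matrix exponentiation and applies it to an initial state:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                      # natural units, an assumption for the sketch
omega = 2.0 * np.pi             # transition (angular) frequency
H = 0.5 * hbar * omega * np.array([[0, 1], [1, 0]])  # Hamiltonian proportional to sigma_x

psi0 = np.array([1.0, 0.0], dtype=complex)           # start in the "up" basis state

for t in np.linspace(0.0, 1.0, 5):
    U = expm(-1j * H * t / hbar)          # unitary time-evolution operator U(t)
    psi_t = U @ psi0
    p_up = abs(psi_t[0]) ** 2             # Born-rule probability of measuring "up"
    print(f"t = {t:.2f}  P(up) = {p_up:.3f}  norm = {np.vdot(psi_t, psi_t).real:.3f}")
```

The printed norm stays equal to 1, reflecting the unitarity of $U(t)$.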
Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an s orbital (Fig. 1).
Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution in closed form.
However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy. Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion.
Uncertainty principle
One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum. Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator $\hat{X}$ and momentum operator $\hat{P}$ do not commute, but rather satisfy the canonical commutation relation:
$$[\hat{X}, \hat{P}] = i\hbar.$$
Given a quantum state, the Born rule lets us compute expectation values for both $X$ and $P$, and moreover for powers of them. Defining the uncertainty for an observable by a standard deviation, we have
$$\sigma_X = \sqrt{\langle X^2 \rangle - \langle X \rangle^2},$$
and likewise for the momentum:
$$\sigma_P = \sqrt{\langle P^2 \rangle - \langle P \rangle^2}.$$
The uncertainty principle states that
$$\sigma_X \sigma_P \geq \frac{\hbar}{2}.$$
Either standard deviation can in principle be made arbitrarily small, but not both simultaneously. This inequality generalizes to arbitrary pairs of self-adjoint operators $A$ and $B$. The commutator of these two operators is
$$[A, B] = AB - BA,$$
and this provides the lower bound on the product of standard deviations:
$$\sigma_A \sigma_B \geq \frac{1}{2} \left| \langle [A, B] \rangle \right|.$$
Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an $i/\hbar$ factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum is replaced by $-i\hbar \frac{\partial}{\partial x}$, and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times $-\hbar^2$.
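The general uncertainty inequality above is easy to check numerically. A small sketch (assuming NumPy; the choice of Pauli matrices and of a random state is purely illustrative) evaluates both sides of the bound for a pair of non-commuting observables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices sigma_x and sigma_y as a pair of non-commuting observables
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[0, -1j], [1j, 0]], dtype=complex)

# random normalized state
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

def expect(op, state):
    return np.vdot(state, op @ state).real

def sigma(op, state):
    return np.sqrt(expect(op @ op, state) - expect(op, state) ** 2)

lhs = sigma(A, psi) * sigma(B, psi)
comm = A @ B - B @ A
rhs = 0.5 * abs(np.vdot(psi, comm @ psi))
print(f"sigma_A * sigma_B = {lhs:.4f} >= {rhs:.4f} = |<[A,B]>|/2")
```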
Composite systems and entanglement
When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. For example, let $A$ and $B$ be two quantum systems, with Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$, respectively. The Hilbert space of the composite system is then
$$\mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B.$$
If the state for the first system is the vector $\psi_A$ and the state for the second system is $\psi_B$, then the state of the composite system is
$$\psi_A \otimes \psi_B.$$
Not all states in the joint Hilbert space can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if $\psi_A$ and $\phi_A$ are both possible states for system $A$, and likewise $\psi_B$ and $\phi_B$ are both possible states for system $B$, then
$$\tfrac{1}{\sqrt{2}} \left( \psi_A \otimes \psi_B + \phi_A \otimes \phi_B \right)$$
is a valid joint state that is not separable. States that are not separable are called entangled.
If the state for a composite system is entangled, it is impossible to describe either component system $A$ or system $B$ by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory.
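A reduced density matrix can be computed directly by a partial trace. The following sketch (assuming NumPy, with a Bell state as the illustrative entangled state) shows that the subsystem of a pure entangled state is described by a mixed state:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) on the composite Hilbert space H_A (x) H_B
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)

rho = np.outer(psi, psi.conj())              # density matrix of the pure joint state

# partial trace over subsystem B: reshape to indices (a, b, a', b') and sum over b = b'
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A)                                 # 0.5 * identity: maximally mixed single-qubit state
print(np.trace(rho_A @ rho_A).real)          # purity 0.5 < 1, so subsystem A is not in a pure state
```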
As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic.
Equivalence between formulations
There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.
Symmetries and conservation laws
The Hamiltonian $H$ is known as the generator of time evolution, since it defines a unitary time-evolution operator $U(t) = e^{-iHt/\hbar}$ for each value of $t$. From this relation between $U(t)$ and $H$, it follows that any observable $A$ that commutes with $H$ will be conserved: its expectation value will not change over time. This statement generalizes, as mathematically, any Hermitian operator $A$ can generate a family of unitary operators parameterized by a variable $t$. Under the evolution generated by $A$, any observable $B$ that commutes with $A$ will be conserved. Moreover, if $B$ is conserved by evolution under $A$, then $A$ is conserved under the evolution generated by $B$. This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law.
Examples
Free particle
The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy:
$$H = \frac{1}{2m} P^2 = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2}.$$
The general solution of the Schrödinger equation is given by
$$\psi(x, t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \hat{\psi}(k, 0)\, e^{i\left(kx - \frac{\hbar k^2}{2m} t\right)} \, \mathrm{d}k,$$
which is a superposition of all possible plane waves $e^{i\left(kx - \frac{\hbar k^2}{2m} t\right)}$, which are eigenstates of the momentum operator with momentum $p = \hbar k$. The coefficients of the superposition are $\hat{\psi}(k, 0)$, which is the Fourier transform of the initial quantum state $\psi(x, 0)$.
It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet:
$$\psi(x, 0) = \frac{1}{\sqrt[4]{\pi a}}\, e^{-\frac{x^2}{2a}},$$
which has Fourier transform, and therefore momentum distribution
$$\hat{\psi}(k, 0) = \sqrt[4]{\frac{a}{\pi}}\, e^{-\frac{a k^2}{2}}.$$
We see that as we make $a$ smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making $a$ larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle.
As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant.
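This spreading can be reproduced numerically, since free-particle evolution simply multiplies each plane-wave component by a phase. A sketch (assuming NumPy; the grid, natural units, and width parameter are illustrative choices) propagates the Gaussian packet with FFTs and tracks its position spread:

```python
import numpy as np

hbar = m = 1.0                      # natural units, an assumption for the sketch
a = 1.0                             # width parameter of the initial Gaussian

x = np.linspace(-40, 40, 2048)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

psi0 = (np.pi * a) ** (-0.25) * np.exp(-x**2 / (2 * a))   # initial Gaussian packet

def spread(psi):
    p = np.abs(psi) ** 2
    p /= np.sum(p) * dx
    mean = np.sum(x * p) * dx
    return np.sqrt(np.sum((x - mean) ** 2 * p) * dx)

for t in [0.0, 2.0, 4.0]:
    # exact free-particle propagation: multiply each plane wave by exp(-i hbar k^2 t / 2m)
    psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m)))
    print(f"t = {t:.1f}  position spread = {spread(psi_t):.3f}")
```

The printed position spread grows with time, while the momentum distribution (and hence its spread) is unchanged by the phase factor.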
Particle in a box
The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region. For the one-dimensional case in the $x$ direction, the time-independent Schrödinger equation may be written
$$-\frac{\hbar^2}{2m} \frac{\mathrm{d}^2 \psi}{\mathrm{d}x^2} = E \psi.$$
With the differential operator defined by
$$\hat{p}_x = -i\hbar \frac{\mathrm{d}}{\mathrm{d}x},$$
the previous equation is evocative of the classic kinetic energy analogue,
$$\frac{1}{2m} \hat{p}_x^2 = E,$$
with state $\psi$ in this case having energy $E$ coincident with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are
$$\psi(x) = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m},$$
or, from Euler's formula,
$$\psi(x) = C \sin(kx) + D \cos(kx).$$
The infinite potential walls of the box determine the values of $C$, $D$, and $k$ at $x = 0$ and $x = L$ where $\psi$ must be zero. Thus, at $x = 0$,
$$\psi(0) = 0 = C \sin(0) + D \cos(0) = D,$$
and $D = 0$. At $x = L$,
$$\psi(L) = 0 = C \sin(kL),$$
in which $C$ cannot be zero as this would conflict with the postulate that $\psi$ has norm 1. Therefore, since $\sin(kL) = 0$, $kL$ must be an integer multiple of $\pi$,
$$k = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots$$
This constraint on $k$ implies a constraint on the energy levels, yielding
$$E_n = \frac{\hbar^2 \pi^2 n^2}{2 m L^2} = \frac{n^2 h^2}{8 m L^2}.$$
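For a sense of scale, these energy levels can be evaluated directly from the final formula. A small sketch (assuming NumPy; the choice of an electron in a 1 nm box is an illustrative example):

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
L    = 1e-9              # 1 nm box

# E_n = hbar^2 * pi^2 * n^2 / (2 m L^2), converted to electronvolts
for n in range(1, 4):
    E = (hbar * np.pi * n / L) ** 2 / (2 * m_e)
    print(f"n = {n}: E = {E / 1.602176634e-19:.3f} eV")
```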
A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.
Harmonic oscillator
As in the classical case, the potential for the quantum harmonic oscillator is given by
This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by
where Hn are the Hermite polynomials
and the corresponding energy levels are
This is another example illustrating the discretization of energy for bound states.
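A brief numerical sketch (added here; ħ = m = ω = 1 are arbitrary example units) builds the first few eigenfunctions from the physicists' Hermite polynomials and checks that each is normalized, alongside the energies E_n = ħω(n + 1/2):

    import numpy as np
    from numpy.polynomial.hermite import hermval
    from math import factorial

    hbar = m = omega = 1.0     # example units

    def eigenstate(n, x):
        """psi_n(x) built from the physicists' Hermite polynomial H_n."""
        xi = np.sqrt(m * omega / hbar) * x
        coeffs = np.zeros(n + 1)
        coeffs[n] = 1.0                               # selects H_n
        norm = (m * omega / (np.pi * hbar))**0.25 / np.sqrt(2.0**n * factorial(n))
        return norm * hermval(xi, coeffs) * np.exp(-m * omega * x**2 / (2 * hbar))

    x = np.linspace(-10.0, 10.0, 2001)
    dx = x[1] - x[0]
    for n in range(4):
        E_n = hbar * omega * (n + 0.5)
        norm_check = np.sum(eigenstate(n, x)**2) * dx
        print(f"n = {n}: E_n = {E_n:.1f}, integral of |psi_n|^2 = {norm_check:.4f}")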
Mach–Zehnder interferometer
The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement.
We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector that is a superposition of the "lower" path and the "upper" path , that is, for complex . In order to respect the postulate that we require that .
Both beam splitters are modelled as the unitary matrix , which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of , or be reflected to the other path with a probability amplitude of . The phase shifter on the upper arm is modelled as the unitary matrix , which means that if the photon is on the "upper" path it will gain a relative phase of , and it will stay unchanged if it is in the lower path.
A photon that enters the interferometer from the left will then be acted upon with a beam splitter , a phase shifter , and another beam splitter , and so end up in the state
and the probabilities that it will be detected at the right or at the top are given respectively by
One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities.
It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases, there will be no interference between the paths anymore, and the probabilities are given by , independently of the phase . From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths.
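Because the whole description lives in a two-dimensional state space, it is easy to reproduce numerically. The sketch below is an added example; the symmetric beam-splitter matrix (1/√2)[[1, i], [i, 1]] and the phase shifter diag(1, e^{iΔφ}) are one common convention. It prints the two output probabilities with and without the first beam splitter, showing the cos²/sin² interference fringes in the first case and the phase-independent 1/2, 1/2 in the second.

    import numpy as np

    def mzi_probabilities(phi, first_beam_splitter=True):
        B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)    # 50/50 beam splitter
        P = np.array([[1, 0], [0, np.exp(1j * phi)]])    # phase shift on the upper path
        psi = np.array([1.0, 0.0])                       # photon entering on the lower path
        if first_beam_splitter:
            psi = B @ psi
        psi = B @ (P @ psi)
        return np.abs(psi)**2                            # probabilities for the two outputs

    for phi in (0.0, np.pi / 2, np.pi):
        print(f"phi = {phi:4.2f}:  with first splitter {np.round(mzi_probabilities(phi), 3)},"
              f"  without {np.round(mzi_probabilities(phi, False), 3)}")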
Applications
Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics.
In many aspects, modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.
Relation to other scientific theories
Classical mechanics
The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers. One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.
When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.
Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.
Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations. Quantum coherence is not typically evident at macroscopic scales, though at temperatures approaching absolute zero quantum behavior may manifest macroscopically.
Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.
Special relativity and electrodynamics
Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised.
The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential. Likewise, in a Stern–Gerlach experiment, a charged particle is modeled as a quantum system, while the background magnetic field is described classically. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.
Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg.
Relation to general relativity
Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon.
One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force.
Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10−35 m, and so lengths shorter than the Planck length are not physically meaningful in LQG.
Philosophical implications
Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics." According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."
The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation". According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations were adopted by Nobel laureates in quantum physics, including Bohr, Heisenberg, Schrödinger, Feynman, and Zeilinger as well as 21st-century researchers in quantum foundations.
Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to measure these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism.
Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem.
Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful.
Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later.
History
Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light.
During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible units (the word "atom" deriving from the Greek for "uncuttable"), the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons.
The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation. The word quantum derives from the Latin, meaning "how great" or "how much". According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν):
E = hν,
where h is the Planck constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency. In his paper "On the Quantum Theory of Radiation", Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation, which became the basis of the laser.
This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects.
In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.
By 1930, quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors and superfluids.
See also
Bra–ket notation
Einstein's thought experiments
List of textbooks on classical and quantum mechanics
Macroscopic quantum phenomena
Phase-space formulation
Regularization (physics)
Two-state quantum system
Explanatory notes
References
Further reading
The following titles, all by working physicists, attempt to communicate quantum theory to lay people, using a minimum of technical apparatus.
Chester, Marvin (1987). Primer of Quantum Mechanics. John Wiley.
Richard Feynman, 1985. QED: The Strange Theory of Light and Matter, Princeton University Press. . Four elementary lectures on quantum electrodynamics and quantum field theory, yet containing many insights for the expert.
Ghirardi, GianCarlo, 2004. Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra–ket notation can be passed over on a first reading.
N. David Mermin, 1990, "Spooky actions at a distance: mysteries of the QT" in his Boojums All the Way Through. Cambridge University Press: 110–76.
Victor Stenger, 2000. Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo, NY: Prometheus Books. Chpts. 5–8. Includes cosmological and philosophical considerations.
More technical:
Bryce DeWitt, R. Neill Graham, eds., 1973. The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press.
D. Greenberger, K. Hentschel, F. Weinert, eds., 2009. Compendium of quantum physics, Concepts, experiments, history and philosophy, Springer-Verlag, Berlin, Heidelberg. Short articles on many QM topics.
A standard undergraduate text.
Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw Hill.
Hagen Kleinert, 2004. Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. Singapore: World Scientific. Draft of 4th edition.
Online copy
Gunther Ludwig, 1968. Wave Mechanics. London: Pergamon Press.
George Mackey (2004). The mathematical foundations of quantum mechanics. Dover Publications. .
Albert Messiah, 1966. Quantum Mechanics (Vol. I), English translation from French by G.M. Temmer. North Holland, John Wiley & Sons. Cf. chpt. IV, section III. online
Scerri, Eric R., 2006. The Periodic Table: Its Story and Its Significance. Oxford University Press. Considers the extent to which chemistry and the periodic system have been reduced to quantum mechanics.
Veltman, Martinus J.G. (2003), Facts and Mysteries in Elementary Particle Physics.
On Wikibooks
This Quantum World
External links
J. O'Connor and E. F. Robertson: A history of quantum mechanics.
Introduction to Quantum Theory at Quantiki.
Quantum Physics Made Relatively Simple: three video lectures by Hans Bethe.
Course material
Quantum Cook Book and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware.
Modern Physics: With waves, thermodynamics, and optics – an online textbook.
MIT OpenCourseWare: Chemistry and Physics. See 8.04, 8.05 and 8.06.
Examples in Quantum Mechanics.
Imperial College Quantum Mechanics Course.
Philosophy
Vector field
In vector calculus and physics, a vector field is an assignment of a vector to each point in a space, most commonly Euclidean space. A vector field on a plane can be visualized as a collection of arrows with given magnitudes and directions, each attached to a point on the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout three dimensional space, such as the wind, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from one point to another point.
The elements of differential and integral calculus extend naturally to vector fields. When a vector field represents force, the line integral of a vector field represents the work done by a force moving along a path, and under this interpretation conservation of energy is exhibited as a special case of the fundamental theorem of calculus. Vector fields can usefully be thought of as representing the velocity of a moving flow in space, and this physical intuition leads to notions such as the divergence (which represents the rate of change of volume of a flow) and curl (which represents the rotation of a flow).
A vector field is a special case of a vector-valued function, whose domain's dimension has no relation to the dimension of its range; for example, the position vector of a space curve is defined only for a smaller subset of the ambient space.
Likewise, given coordinates, a vector field on a domain in n-dimensional Euclidean space can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system, and there is a well-defined transformation law (covariance and contravariance of vectors) in passing from one coordinate system to the other.
Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point (a tangent vector).
More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales, but may have more complicated structure on larger scales. In this setting, a vector field gives a tangent vector at each point of the manifold (that is, a section of the tangent bundle to the manifold). Vector fields are one kind of tensor field.
Definition
Vector fields on subsets of Euclidean space
Given a subset of , a vector field is represented by a vector-valued function in standard Cartesian coordinates . If each component of is continuous, then is a continuous vector field. It is common to focus on smooth vector fields, meaning that each component is a smooth function (differentiable any number of times). A vector field can be visualized as assigning a vector to individual points within an n-dimensional space.
One standard notation is to write for the unit vectors in the coordinate directions. In these terms, every smooth vector field on an open subset of can be written as
for some smooth functions on . The reason for this notation is that a vector field determines a linear map from the space of smooth functions to itself, , given by differentiating in the direction of the vector field.
Example: The vector field describes a counterclockwise rotation around the origin in . To show that the function is rotationally invariant, compute:
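Assuming the rotation field in the example is the standard V(x, y) = (−y, x), the following added sketch evaluates it at a few points and confirms numerically that it is everywhere perpendicular to the radius vector, which is why the squared distance to the origin is invariant along the flow:

    import numpy as np

    def V(p):
        x, y = p
        return np.array([-y, x])      # assumed counterclockwise rotation field

    for p in (np.array([1.0, 0.0]), np.array([0.5, 2.0]), np.array([-3.0, 1.0])):
        v = V(p)
        # V . grad(x^2 + y^2) = 2 (x, y) . V = 0, so r^2 is constant along the flow.
        print(f"p = {p}, V(p) = {v}, p . V(p) = {np.dot(p, v):.1f}")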
Given vector fields , defined on and a smooth function defined on , the operations of scalar multiplication and vector addition,
make the smooth vector fields into a module over the ring of smooth functions, where multiplication of functions is defined pointwise.
Coordinate transformation law
In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system. The transformation properties of vectors distinguish a vector as a geometrically distinct entity from a simple list of scalars, or from a covector.
Thus, suppose that is a choice of Cartesian coordinates, in terms of which the components of the vector are
and suppose that (y1,...,yn) are n functions of the xi defining a different coordinate system. Then the components of the vector V in the new coordinates are required to satisfy the transformation law
Such a transformation law is called contravariant. A similar transformation law characterizes vector fields in physics: specifically, a vector field is a specification of n functions in each coordinate system subject to the transformation law relating the different coordinate systems.
Vector fields are thus contrasted with scalar fields, which associate a number or scalar to every point in space, and are also contrasted with simple lists of scalar fields, which do not transform under coordinate changes.
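A compact numerical sketch of the contravariant rule (an added example; the change from Cartesian to polar coordinates and the sample field are illustrative choices): the new components are obtained by contracting the old ones with the Jacobian matrix of the coordinate change.

    import numpy as np

    def jacobian_cartesian_to_polar(x, y):
        """Rows are (dr/dx, dr/dy) and (dtheta/dx, dtheta/dy)."""
        r2 = x**2 + y**2
        r = np.sqrt(r2)
        return np.array([[x / r,   y / r],
                         [-y / r2, x / r2]])

    def V_cartesian(x, y):
        return np.array([-y, x])      # rotation field as a sample

    x, y = 1.0, 2.0
    J = jacobian_cartesian_to_polar(x, y)
    V_polar = J @ V_cartesian(x, y)   # new components: contract old ones with dy^i/dx^j
    print(V_polar)                    # (0, 1): no radial component, unit angular rate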
Vector fields on manifolds
Given a differentiable manifold , a vector field on is an assignment of a tangent vector to each point in . More precisely, a vector field is a mapping from into the tangent bundle so that is the identity mapping
where denotes the projection from to . In other words, a vector field is a section of the tangent bundle.
An alternative definition: A smooth vector field on a manifold is a linear map such that is a derivation: for all .
If the manifold is smooth or analytic—that is, the change of coordinates is smooth (analytic)—then one can make sense of the notion of smooth (analytic) vector fields. The collection of all smooth vector fields on a smooth manifold is often denoted by or (especially when thinking of vector fields as sections); the collection of all smooth vector fields is also denoted by (a fraktur "X").
Examples
A vector field for the movement of air on Earth will associate for every point on the surface of the Earth a vector with the wind speed and direction for that point. This can be drawn using arrows to represent the wind; the length (magnitude) of the arrow will be an indication of the wind speed. A "high" on the usual barometric pressure map would then act as a source (arrows pointing away), and a "low" would be a sink (arrows pointing towards), since air tends to move from high pressure areas to low pressure areas.
Velocity field of a moving fluid. In this case, a velocity vector is associated to each point in the fluid.
Streamlines, streaklines and pathlines are 3 types of lines that can be made from (time-dependent) vector fields. They are:
streaklines: the line produced by particles passing through a specific fixed point over various times
pathlines: showing the path that a given particle (of zero mass) would follow.
streamlines (or fieldlines): the path of a particle influenced by the instantaneous field (i.e., the path of a particle if the field is held fixed).
Magnetic fields. The fieldlines can be revealed using small iron filings.
Maxwell's equations allow us to use a given set of initial and boundary conditions to deduce, for every point in Euclidean space, a magnitude and direction for the force experienced by a charged test particle at that point; the resulting vector field is the electric field.
A gravitational field generated by any massive object is also a vector field. For example, the gravitational field vectors for a spherically symmetric body would all point towards the sphere's center with the magnitude of the vectors reducing as radial distance from the body increases.
Gradient field in Euclidean spaces
Vector fields can be constructed out of scalar fields using the gradient operator (denoted by the del: ∇).
A vector field V defined on an open set S is called a gradient field or a conservative field if there exists a real-valued function (a scalar field) f on S such that
The associated flow is called the gradient flow, and is used in the method of gradient descent.
The path integral along any closed curve γ (γ(0) = γ(1)) in a conservative field is zero:
Central field in Euclidean spaces
A -vector field over is called a central field if
where is the orthogonal group. We say central fields are invariant under orthogonal transformations around 0.
The point 0 is called the center of the field.
Since orthogonal transformations are actually rotations and reflections, the invariance conditions mean that vectors of a central field are always directed towards, or away from, 0; this is an alternate (and simpler) definition. A central field is always a gradient field, since defining it on one semiaxis and integrating gives an antigradient.
Operations on vector fields
Line integral
A common technique in physics is to integrate a vector field along a curve, also called determining its line integral. Intuitively this is summing up all vector components in line with the tangents to the curve, expressed as their scalar products. For example, given a particle in a force field (e.g. gravitation), where each vector at some point in space represents the force acting there on the particle, the line integral along a certain path is the work done on the particle, when it travels along this path. Intuitively, it is the sum of the scalar products of the force vector and the small tangent vector in each point along the curve.
The line integral is constructed analogously to the Riemann integral and it exists if the curve is rectifiable (has finite length) and the vector field is continuous.
Given a vector field and a curve , parametrized by in (where and are real numbers), the line integral is defined as
To show vector field topology one can use line integral convolution.
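The definition above translates directly into a numerical approximation: sample the parametrization, estimate the tangent vector by finite differences, and integrate the dot product. The sketch below is an added example; the rotation field and the unit circle are illustrative choices, and the expected circulation is 2π.

    import numpy as np

    def line_integral(F, gamma, a, b, n=10_000):
        """Approximate the integral of F(gamma(t)) . gamma'(t) over [a, b]."""
        t = np.linspace(a, b, n)
        pts = np.array([gamma(ti) for ti in t])        # points on the curve
        vel = np.gradient(pts, t, axis=0)              # finite-difference tangent vectors
        vals = np.array([F(p) for p in pts])           # field evaluated along the curve
        integrand = np.sum(vals * vel, axis=1)         # pointwise dot products
        dt = t[1] - t[0]
        return np.sum((integrand[:-1] + integrand[1:]) / 2) * dt   # trapezoidal rule

    F = lambda p: np.array([-p[1], p[0]])              # rotation field
    circle = lambda t: np.array([np.cos(t), np.sin(t)])
    print(line_integral(F, circle, 0.0, 2 * np.pi))    # approximately 6.2832 (= 2 pi)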
Divergence
The divergence of a vector field on Euclidean space is a function (or scalar field). In three-dimensions, the divergence is defined by
with the obvious generalization to arbitrary dimensions. The divergence at a point represents the degree to which a small volume around the point is a source or a sink for the vector flow, a result which is made precise by the divergence theorem.
The divergence can also be defined on a Riemannian manifold, that is, a manifold with a Riemannian metric that measures the length of vectors.
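In practice the divergence is often estimated on a grid with finite differences. The following added sketch (the sample field F = (x, y, z), whose divergence is exactly 3, is an arbitrary choice) uses numpy.gradient for the partial derivatives:

    import numpy as np

    n = 41
    ax = np.linspace(-1.0, 1.0, n)
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
    Fx, Fy, Fz = X, Y, Z                   # sample field with divergence 3 everywhere
    d = ax[1] - ax[0]

    div = (np.gradient(Fx, d, axis=0)
           + np.gradient(Fy, d, axis=1)
           + np.gradient(Fz, d, axis=2))
    print(div[n // 2, n // 2, n // 2])     # approximately 3.0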
Curl in three dimensions
The curl is an operation which takes a vector field and produces another vector field. The curl is defined only in three dimensions, but some properties of the curl can be captured in higher dimensions with the exterior derivative. In three dimensions, it is defined by
The curl measures the density of the angular momentum of the vector flow at a point, that is, the amount to which the flow circulates around a fixed axis. This intuitive description is made precise by Stokes' theorem.
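A companion finite-difference sketch for the curl (added for illustration; the field (−y, x, 0), a rotation about the z-axis with curl (0, 0, 2), is an example choice):

    import numpy as np

    n = 41
    ax = np.linspace(-1.0, 1.0, n)
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
    Fx, Fy, Fz = -Y, X, np.zeros_like(Z)   # rotation about the z-axis
    d = ax[1] - ax[0]

    def partial(F, axis):
        return np.gradient(F, d, axis=axis)

    curl = np.array([partial(Fz, 1) - partial(Fy, 2),   # dFz/dy - dFy/dz
                     partial(Fx, 2) - partial(Fz, 0),   # dFx/dz - dFz/dx
                     partial(Fy, 0) - partial(Fx, 1)])  # dFy/dx - dFx/dy
    print(curl[:, n // 2, n // 2, n // 2])               # approximately (0, 0, 2)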
Index of a vector field
The index of a vector field is an integer that helps describe its behaviour around an isolated zero (i.e., an isolated singularity of the field). In the plane, the index takes the value −1 at a saddle singularity but +1 at a source or sink singularity.
Let n be the dimension of the manifold on which the vector field is defined. Take a closed surface (homeomorphic to the (n-1)-sphere) S around the zero, so that no other zeros lie in the interior of S. A map from this sphere to a unit sphere of dimension n − 1 can be constructed by dividing each vector on this sphere by its length to form a unit length vector, which is a point on the unit sphere Sn−1. This defines a continuous map from S to Sn−1. The index of the vector field at the point is the degree of this map. It can be shown that this integer does not depend on the choice of S, and therefore depends only on the vector field itself.
The index is not defined at any non-singular point (i.e., a point where the vector is non-zero). It is equal to +1 around a source, and more generally equal to (−1)k around a saddle that has k contracting dimensions and n−k expanding dimensions.
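In the plane the index at an isolated zero can be computed as a winding number: normalize the field along a small circle around the zero and count how many times its direction turns. The added sketch below (the source field (x, y) and the saddle field (x, −y) are example choices) recovers +1 and −1 respectively.

    import numpy as np

    def index_at_origin(V, radius=0.1, n=2000):
        """Winding number of the direction of V around a small circle about the origin."""
        t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
        pts = radius * np.column_stack([np.cos(t), np.sin(t)])
        vecs = np.array([V(p) for p in pts])
        angles = np.arctan2(vecs[:, 1], vecs[:, 0])
        steps = np.diff(np.concatenate([angles, angles[:1]]))
        steps = (steps + np.pi) % (2 * np.pi) - np.pi    # wrap each step to the principal range
        return int(round(steps.sum() / (2 * np.pi)))

    print(index_at_origin(lambda p: np.array([p[0], p[1]])))    # source: +1
    print(index_at_origin(lambda p: np.array([p[0], -p[1]])))   # saddle: -1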
The index of the vector field as a whole is defined when it has just finitely many zeroes. In this case, all zeroes are isolated, and the index of the vector field is defined to be the sum of the indices at all zeroes.
For an ordinary (2-dimensional) sphere in three-dimensional space, it can be shown that the index of any vector field on the sphere must be 2. This shows that every such vector field must have a zero. This implies the hairy ball theorem.
For a vector field on a compact manifold with finitely many zeroes, the Poincaré-Hopf theorem states that the vector field’s index is the manifold’s Euler characteristic.
Physical intuition
Michael Faraday, in his concept of lines of force, emphasized that the field itself should be an object of study, which it has become throughout physics in the form of field theory.
In addition to the magnetic field, other phenomena that were modeled by Faraday include the electrical field and light field.
In recent decades many phenomenological formulations of irreversible dynamics and evolution equations in physics, from the mechanics of complex fluids and solids to chemical kinetics and quantum thermodynamics, have converged towards the geometric idea of "steepest entropy ascent" or "gradient flow" as a consistent universal modeling framework that guarantees compatibility with the second law of thermodynamics and extends well-known near-equilibrium results such as Onsager reciprocity to the far-nonequilibrium realm.
Flow curves
Consider the flow of a fluid through a region of space. At any given time, any point of the fluid has a particular velocity associated with it; thus there is a vector field associated to any flow. The converse is also true: it is possible to associate a flow to a vector field having that vector field as its velocity.
Given a vector field defined on , one defines curves on such that for each in an interval ,
By the Picard–Lindelöf theorem, if is Lipschitz continuous there is a unique -curve for each point in so that, for some ,
The curves are called integral curves or trajectories (or less commonly, flow lines) of the vector field and partition into equivalence classes. It is not always possible to extend the interval to the whole real number line. The flow may for example reach the edge of in a finite time.
In two or three dimensions one can visualize the vector field as giving rise to a flow on . If we drop a particle into this flow at a point it will move along the curve in the flow depending on the initial point . If is a stationary point of (i.e., the vector field is equal to the zero vector at the point ), then the particle will remain at .
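Numerically, integral curves are obtained by integrating the ODE dx/dt = V(x). The added sketch below uses scipy.integrate.solve_ivp on the rotation field (−y, x) (an example choice), whose flow curves are circles around the stationary point at the origin:

    import numpy as np
    from scipy.integrate import solve_ivp

    def V(t, p):                         # the rotation field (-y, x), in solve_ivp's form
        return [-p[1], p[0]]

    sol = solve_ivp(V, (0.0, 2 * np.pi), [1.0, 0.0],
                    t_eval=np.linspace(0.0, 2 * np.pi, 5), rtol=1e-9, atol=1e-12)
    for t, x, y in zip(sol.t, sol.y[0], sol.y[1]):
        print(f"t = {t:4.2f}  point = ({x: .3f}, {y: .3f})  radius = {np.hypot(x, y):.3f}")
    # The trajectory stays on the unit circle; a particle placed at the origin,
    # where the field vanishes, would remain there.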
Typical applications are pathline in fluid, geodesic flow, and one-parameter subgroups and the exponential map in Lie groups.
Complete vector fields
By definition, a vector field on is called complete if each of its flow curves exists for all time. In particular, compactly supported vector fields on a manifold are complete. If is a complete vector field on , then the one-parameter group of diffeomorphisms generated by the flow along exists for all time; it is described by a smooth mapping
On a compact manifold without boundary, every smooth vector field is complete. An example of an incomplete vector field on the real line is given by . For, the differential equation , with initial condition , has as its unique solution if (and for all if ). Hence for , is undefined at so cannot be defined for all values of .
The Lie bracket
The flows associated to two vector fields need not commute with each other. Their failure to commute is described by the Lie bracket of two vector fields, which is again a vector field. The Lie bracket has a simple definition in terms of the action of vector fields on smooth functions :
f-relatedness
Given a smooth function between manifolds, , the derivative is an induced map on tangent bundles, . Given vector fields and , we say that is -related to if the equation holds.
If is -related to , , then the Lie bracket is -related to .
Generalizations
Replacing vectors by p-vectors (pth exterior power of vectors) yields p-vector fields; taking the dual space and exterior powers yields differential k-forms, and combining these yields general tensor fields.
Algebraically, vector fields can be characterized as derivations of the algebra of smooth functions on the manifold, which leads to defining a vector field on a commutative algebra as a derivation on the algebra, which is developed in the theory of differential calculus over commutative algebras.
See also
Eisenbud–Levine–Khimshiashvili signature formula
Field line
Field strength
Gradient flow and balanced flow in atmospheric dynamics
Lie derivative
Scalar field
Time-dependent vector field
Vector fields in cylindrical and spherical coordinates
Tensor fields
Slope field
References
Bibliography
External links
Online Vector Field Editor
Vector field — Mathworld
Vector field — PlanetMath
3D Magnetic field viewer
Vector fields and field lines
Vector field simulation An interactive application to show the effects of vector fields
Differential topology
Field
Functions and mappings
Thermodynamics
Thermodynamics is a branch of physics that deals with heat, work, and temperature, and their relation to energy, entropy, and the physical properties of matter and radiation. The behavior of these quantities is governed by the four laws of thermodynamics, which convey a quantitative description using measurable macroscopic physical quantities, but may be explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry, biochemistry, chemical engineering and mechanical engineering, but also in other complex fields such as meteorology.
Historically, thermodynamics developed out of a desire to increase the efficiency of early steam engines, particularly through the work of French physicist Sadi Carnot (1824) who believed that engine efficiency was the key that could help France win the Napoleonic Wars. Scots-Irish physicist Lord Kelvin was the first to formulate a concise definition of thermodynamics in 1854 which stated, "Thermo-dynamics is the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency." German physicist and mathematician Rudolf Clausius restated Carnot's principle known as the Carnot cycle and gave to the theory of heat a truer and sounder basis. His most important paper, "On the Moving Force of Heat", published in 1850, first stated the second law of thermodynamics. In 1865 he introduced the concept of entropy. In 1870 he introduced the virial theorem, which applied to heat.
The initial application of thermodynamics to mechanical heat engines was quickly extended to the study of chemical compounds and chemical reactions. Chemical thermodynamics studies the nature of the role of entropy in the process of chemical reactions and has provided the bulk of expansion and knowledge of the field. Other formulations of thermodynamics emerged. Statistical thermodynamics, or statistical mechanics, concerns itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a purely mathematical approach in an axiomatic formulation, a description often referred to as geometrical thermodynamics.
Introduction
A description of any thermodynamic system employs the four laws of thermodynamics that form an axiomatic basis. The first law specifies that energy can be transferred between physical systems as heat, as work, and with transfer of matter. The second law defines the existence of a quantity called entropy, that describes the direction, thermodynamically, that a system can evolve and quantifies the state of order of a system and that can be used to quantify the useful work that can be extracted from the system.
In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles, whose average motions define its properties, and those properties are in turn related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. The results of thermodynamics are essential for other fields of physics and for chemistry, chemical engineering, corrosion engineering, aerospace engineering, mechanical engineering, cell biology, biomedical engineering, materials science, and economics, to name a few.
This article is focused mainly on classical thermodynamics which primarily studies systems in thermodynamic equilibrium. Non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field.
History
The history of thermodynamics as a scientific discipline generally begins with Otto von Guericke who, in 1650, built and designed the world's first vacuum pump and demonstrated a vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the Anglo-Irish physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that pressure and volume are inversely proportional. Then, in 1679, based on these concepts, an associate of Boyle's named Denis Papin built a steam digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated.
Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and a cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine, followed by Thomas Newcomen in 1712. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time.
The fundamental concepts of heat capacity and latent heat, which were necessary for the development of thermodynamics, were developed by Professor Joseph Black at the University of Glasgow, where James Watt was employed as an instrument maker. Black and Watt performed experiments together, but it was Watt who conceived the idea of the external condenser which resulted in a large increase in steam engine efficiency. Drawing on all the previous work led Sadi Carnot, the "father of thermodynamics", to publish Reflections on the Motive Power of Fire (1824), a discourse on heat, power, energy and engine efficiency. The book outlined the basic energetic relations between the Carnot engine, the Carnot cycle, and motive power. It marked the start of thermodynamics as a modern science.
The first thermodynamic textbook was written in 1859 by William Rankine, originally trained as a physicist and a civil and mechanical engineering professor at the University of Glasgow. The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin).
The foundations of statistical thermodynamics were set out by physicists such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius and J. Willard Gibbs.
Clausius, who first stated the basic ideas of the second law in his paper "On the Moving Force of Heat", published in 1850, and is called "one of the founding fathers of thermodynamics", introduced the concept of entropy in 1865.
During the years 1873–76 the American mathematical physicist Josiah Willard Gibbs published a series of three papers, the most famous being On the Equilibrium of Heterogeneous Substances, in which he showed how thermodynamic processes, including chemical reactions, could be graphically analyzed, by studying the energy, entropy, volume, temperature and pressure of the thermodynamic system in such a manner, one can determine if a process would occur spontaneously. Also Pierre Duhem in the 19th century wrote about chemical thermodynamics. During the early 20th century, chemists such as Gilbert N. Lewis, Merle Randall, and E. A. Guggenheim applied the mathematical methods of Gibbs to the analysis of chemical processes.
Etymology
Thermodynamics has an intricate etymology.
By a surface-level analysis, the word consists of two parts that can be traced back to Ancient Greek. Firstly, thermo ("of heat"; used in words such as thermometer) can be traced back to the root θέρμη therme, meaning "heat". Secondly, the word dynamics ("science of force [or power]") can be traced back to the root δύναμις dynamis, meaning "power".
In 1849, the adjective thermo-dynamic is used by William Thomson.
In 1854, the noun thermo-dynamics is used by Thomson and William Rankine to represent the science of generalized heat engines.
Pierre Perrot claims that the term thermodynamics was coined by James Joule in 1858 to designate the science of relations between heat and power; however, Joule never used that term, but used instead the term perfect thermo-dynamic engine in reference to Thomson's 1849 phraseology.
Branches of thermodynamics
The study of thermodynamical systems has developed into several related branches, each using a different fundamental model as a theoretical or experimental basis, or applying the principles to varying types of systems.
Classical thermodynamics
Classical thermodynamics is the description of the states of thermodynamic systems at near-equilibrium, that uses macroscopic, measurable properties. It is used to model exchanges of energy, work and heat based on the laws of thermodynamics. The qualifier classical reflects the fact that it represents the first level of understanding of the subject as it developed in the 19th century and describes the changes of a system in terms of macroscopic empirical (large scale, and measurable) parameters. A microscopic interpretation of these concepts was later provided by the development of statistical mechanics.
Statistical mechanics
Statistical mechanics, also known as statistical thermodynamics, emerged with the development of atomic and molecular theories in the late 19th century and early 20th century, and supplemented classical thermodynamics with an interpretation of the microscopic interactions between individual particles or quantum-mechanical states. This field relates the microscopic properties of individual atoms and molecules to the macroscopic, bulk properties of materials that can be observed on the human scale, thereby explaining classical thermodynamics as a natural result of statistics, classical mechanics, and quantum theory at the microscopic level.
Chemical thermodynamics
Chemical thermodynamics is the study of the interrelation of energy with chemical reactions or with a physical change of state within the confines of the laws of thermodynamics. The primary objective of chemical thermodynamics is determining the spontaneity of a given transformation.
Equilibrium thermodynamics
Equilibrium thermodynamics is the study of transfers of matter and energy in systems or bodies that, by agencies in their surroundings, can be driven from one state of thermodynamic equilibrium to another. The term 'thermodynamic equilibrium' indicates a state of balance, in which all macroscopic flows are zero; in the case of the simplest systems or bodies, their intensive properties are homogeneous, and their pressures are perpendicular to their boundaries. In an equilibrium state there are no unbalanced potentials, or driving forces, between macroscopically distinct parts of the system. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial equilibrium state, and given its surroundings, and given its constitutive walls, to calculate what will be the final equilibrium state of the system after a specified thermodynamic operation has changed its walls or surroundings.
Non-equilibrium thermodynamics
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium because they are not in stationary states, and are continuously and discontinuously subject to flux of matter and energy to and from other systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods.
Laws of thermodynamics
Thermodynamics is principally based on a set of four laws which are universally valid when applied to systems that fall within the constraints implied by each. In the various theoretical descriptions of thermodynamics these laws may be expressed in seemingly differing forms, but the most prominent formulations are the following.
Zeroth law
The zeroth law of thermodynamics states: If two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other.
This statement implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems under consideration. Systems are said to be in equilibrium if the small, random exchanges between them (e.g. Brownian motion) do not lead to a net change in energy. This law is tacitly assumed in every measurement of temperature. Thus, if one seeks to decide whether two bodies are at the same temperature, it is not necessary to bring them into contact and measure any changes of their observable properties in time. The law provides an empirical definition of temperature, and justification for the construction of practical thermometers.
The zeroth law was not initially recognized as a separate law of thermodynamics, as its basis in thermodynamical equilibrium was implied in the other laws. The first, second, and third laws had been explicitly stated already, and found common acceptance in the physics community before the importance of the zeroth law for the definition of temperature was realized. As it was impractical to renumber the other laws, it was named the zeroth law.
First law
The first law of thermodynamics states: In a process without transfer of matter, the change in internal energy, ΔU, of a thermodynamic system is equal to the energy gained as heat, Q, less the thermodynamic work, W, done by the system on its surroundings.
ΔU = Q − W.
where ΔU denotes the change in the internal energy of a closed system (for which heat or work through the system boundary are possible, but matter transfer is not possible), Q denotes the quantity of energy supplied to the system as heat, and W denotes the amount of thermodynamic work done by the system on its surroundings. An equivalent statement is that perpetual motion machines of the first kind are impossible; work done by a system on its surroundings requires that the system's internal energy decrease or be consumed, so that the amount of internal energy lost by that work must be resupplied as heat by an external energy source or as work by an external machine acting on the system (so that is recovered) to make the system work continuously.
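As a minimal worked example (added here; the numbers are made up), the bookkeeping of the first law for a closed system reads:

    # Example values in joules for a closed system:
    Q = 1500.0                 # heat supplied to the system
    W = 600.0                  # thermodynamic work done by the system on its surroundings
    delta_U = Q - W
    print(delta_U)             # 900.0 J increase in internal energy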
For processes that include transfer of matter, a further statement is needed: With due account of the respective fiducial reference states of the systems, when two systems, which may be of different chemical compositions, initially separated only by an impermeable wall, and otherwise isolated, are combined into a new system by the thermodynamic operation of removal of the wall, then
U = U1 + U2,
where U denotes the internal energy of the combined system, and U1 and U2 denote the internal energies of the respective separated systems.
Adapted for thermodynamics, this law is an expression of the principle of conservation of energy, which states that energy can be transformed (changed from one form to another), but cannot be created or destroyed.
Internal energy is a principal property of the thermodynamic state, while heat and work are modes of energy transfer by which a process may change this state. A change of internal energy of a system may be achieved by any combination of heat added or removed and work performed on or by the system. As a function of state, the internal energy does not depend on the manner, or on the path through intermediate steps, by which the system arrived at its state.
Second law
A traditional version of the second law of thermodynamics states: Heat does not spontaneously flow from a colder body to a hotter body.
The second law refers to a system of matter and radiation, initially with inhomogeneities in temperature, pressure, chemical potential, and other intensive properties, that are due to internal 'constraints', or impermeable rigid walls, within it, or to externally imposed forces. The law observes that, when the system is isolated from the outside world and from those forces, there is a definite thermodynamic quantity, its entropy, that increases as the constraints are removed, eventually reaching a maximum value at thermodynamic equilibrium, when the inhomogeneities practically vanish. For systems that are initially far from thermodynamic equilibrium, no general physical principle is known that determines the rates of approach to thermodynamic equilibrium (though several have been proposed), and thermodynamics does not deal with such rates. The many versions of the second law all express the general irreversibility of the transitions involved in systems approaching thermodynamic equilibrium.
In macroscopic thermodynamics, the second law is a basic observation applicable to any actual thermodynamic process; in statistical thermodynamics, the second law is postulated to be a consequence of molecular chaos.
Third law
The third law of thermodynamics states: As the temperature of a system approaches absolute zero, all processes cease and the entropy of the system approaches a minimum value.
This law of thermodynamics is a statistical law of nature regarding entropy and the impossibility of reaching absolute zero of temperature. This law provides an absolute reference point for the determination of entropy. The entropy determined relative to this point is the absolute entropy. Alternate definitions include "the entropy of all systems and of all states of a system is smallest at absolute zero," or equivalently "it is impossible to reach the absolute zero of temperature by any finite number of processes".
Absolute zero, at which all activity would stop if it were possible to achieve, is −273.15 °C (degrees Celsius), or −459.67 °F (degrees Fahrenheit), or 0 K (kelvin), or 0° R (degrees Rankine).
System models
An important concept in thermodynamics is the thermodynamic system, which is a precisely defined region of the universe under study. Everything in the universe except the system is called the surroundings. A system is separated from the remainder of the universe by a boundary, which may be physical or notional, but serves to confine the system to a finite volume. Segments of the boundary are often described as walls; they have respective defined 'permeabilities'. Transfers of energy as work, or as heat, or of matter, between the system and the surroundings, take place through the walls, according to their respective permeabilities.
Matter or energy that passes across the boundary so as to effect a change in the internal energy of the system needs to be accounted for in the energy balance equation. The volume contained by the walls can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. The system could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics. When a looser viewpoint is adopted, and the requirement of thermodynamic equilibrium is dropped, the system can be the body of a tropical cyclone, such as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics, or the event horizon of a black hole.
Boundaries are of four types: fixed, movable, real, and imaginary. For example, in an engine, a fixed boundary means the piston is locked at its position, within which a constant volume process might occur. If the piston is allowed to move that boundary is movable while the cylinder and cylinder head boundaries are fixed. For closed systems, boundaries are real while for open systems boundaries are often imaginary. In the case of a jet engine, a fixed imaginary boundary might be assumed at the intake of the engine, fixed boundaries along the surface of the case and a second fixed imaginary boundary across the exhaust nozzle.
Generally, thermodynamics distinguishes three classes of systems, defined in terms of what is allowed to cross their boundaries: open systems (both matter and energy), closed systems (energy but not matter), and isolated systems (neither matter nor energy).
As time passes in an isolated system, internal differences of pressures, densities, and temperatures tend to even out. A system in which all equalizing processes have gone to completion is said to be in a state of thermodynamic equilibrium.
Once in thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than are systems which are not in equilibrium. Often, when analysing a dynamic thermodynamic process, the simplifying assumption is made that each intermediate state in the process is at equilibrium, producing thermodynamic processes which develop so slowly as to allow each intermediate step to be an equilibrium state and are said to be reversible processes.
States and processes
When a system is at equilibrium under a given set of conditions, it is said to be in a definite thermodynamic state. The state of the system can be described by a number of state quantities that do not depend on the process by which the system arrived at its state. They are called intensive variables or extensive variables according to how they change when the size of the system changes. The properties of the system can be described by an equation of state which specifies the relationship between these variables. State may be thought of as the instantaneous quantitative description of a system with a set number of variables held constant.
A thermodynamic process may be defined as the energetic evolution of a thermodynamic system proceeding from an initial state to a final state. It can be described by process quantities. Typically, each thermodynamic process is distinguished from other processes in energetic character according to what parameters, such as temperature, pressure, or volume, etc., are held fixed. Furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair.
Several commonly studied thermodynamic processes are:
Adiabatic process: occurs without loss or gain of energy by heat
Isenthalpic process: occurs at a constant enthalpy
Isentropic process: a reversible adiabatic process, occurs at a constant entropy
Isobaric process: occurs at constant pressure
Isochoric process: occurs at constant volume (also called isometric/isovolumetric)
Isothermal process: occurs at a constant temperature
Steady state process: occurs without a change in the internal energy
Instrumentation
There are two types of thermodynamic instruments, the meter and the reservoir. A thermodynamic meter is any device which measures any parameter of a thermodynamic system. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From the ideal gas law pV=nRT, the volume of such a sample can be used as an indicator of temperature; in this manner it defines temperature. Although pressure is defined mechanically, a pressure-measuring device, called a barometer may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device which is used to measure and define the internal energy of a system.
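As a rough illustration of the idealized gas thermometer just described, the following sketch infers temperature from the measured volume of an ideal-gas sample held at constant pressure; the numbers and helper names are hypothetical.

```python
# Constant-pressure ideal-gas thermometer: from pV = nRT, the volume indicates T = pV/(nR).
R = 8.314  # molar gas constant, J/(mol K)

def temperature_from_volume(pressure_pa, volume_m3, moles):
    """Infer temperature from the volume of an ideal-gas sample at fixed pressure."""
    return pressure_pa * volume_m3 / (moles * R)

p = 101325.0  # Pa, roughly atmospheric pressure
for V in (0.0224, 0.0245, 0.0300):  # hypothetical measured volumes in cubic metres
    print(f"V = {V:.4f} m^3 -> T = {temperature_from_volume(p, V, 1.0):.1f} K")
```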
A thermodynamic reservoir is a system which is so large that its state parameters are not appreciably altered when it is brought into contact with the system of interest. When the reservoir is brought into contact with the system, the system is brought into equilibrium with the reservoir. For example, a pressure reservoir is a system at a particular pressure, which imposes that pressure upon the system to which it is mechanically connected. The Earth's atmosphere is often used as a pressure reservoir. The ocean can act as temperature reservoir when used to cool power plants.
Conjugate variables
The central concept of thermodynamics is that of energy, the ability to do work. By the First Law, the total energy of a system and its surroundings is conserved. Energy may be transferred into a system by heating, compression, or addition of matter, and extracted from a system by cooling, expansion, or extraction of matter. In mechanics, for example, energy transfer equals the product of the force applied to a body and the resulting displacement.
Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a "force" applied to some thermodynamic system, the second being akin to the resulting "displacement", and the product of the two equaling the amount of energy transferred. The common conjugate variables are:
Pressure-volume (the mechanical parameters);
Temperature-entropy (thermal parameters);
Chemical potential-particle number (material parameters).
Potentials
Thermodynamic potentials are different quantitative measures of the stored energy in a system. Potentials are used to measure the energy changes in systems as they evolve from an initial state to a final state. The potential used depends on the constraints of the system, such as constant temperature or pressure. For example, the Helmholtz and Gibbs energies are the energies available in a system to do useful work when the temperature and volume or the pressure and temperature are fixed, respectively. Thermodynamic potentials cannot be measured in laboratories, but can be computed using molecular thermodynamics.
The five most well known potentials are the internal energy U, the Helmholtz free energy F = U − TS, the enthalpy H = U + pV, the Gibbs free energy G = U + pV − TS, and the Landau potential (grand potential) Ω = U − TS − Σᵢ μᵢNᵢ,
where T is the temperature, S the entropy, p the pressure, V the volume, μᵢ the chemical potential, Nᵢ the number of particles in the system, and i is the count of particle types in the system.
Thermodynamic potentials can be derived from the energy balance equation applied to a thermodynamic system. Other thermodynamic potentials can also be obtained through Legendre transformation.
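The Legendre-transform relationships among the common potentials listed above can be illustrated with a short sketch; the state values below are hypothetical, and the helper names are illustrative.

```python
# Thermodynamic potentials obtained from the internal energy U by Legendre transformation.
def enthalpy(U, p, V):        # H = U + pV
    return U + p * V

def helmholtz(U, T, S):       # F = U - TS
    return U - T * S

def gibbs(U, T, S, p, V):     # G = U + pV - TS
    return U + p * V - T * S

# Hypothetical state of a simple closed system.
U, T, S, p, V = 1500.0, 300.0, 2.0, 101325.0, 0.001  # J, K, J/K, Pa, m^3
print("H =", enthalpy(U, p, V), "J")
print("F =", helmholtz(U, T, S), "J")
print("G =", gibbs(U, T, S, p, V), "J")
```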
Axiomatic thermodynamics
Axiomatic thermodynamics is a mathematical discipline that aims to describe thermodynamics in terms of rigorous axioms, for example by finding a mathematically rigorous way to express the familiar laws of thermodynamics.
The first attempt at an axiomatic theory of thermodynamics was Constantin Carathéodory's 1909 work Investigations on the Foundations of Thermodynamics, which made use of Pfaffian systems and the concept of adiabatic accessibility, a notion that was introduced by Carathéodory himself. In this formulation, thermodynamic concepts such as heat, entropy, and temperature are derived from quantities that are more directly measurable. Theories that came after differed in the sense that they made assumptions regarding thermodynamic processes with arbitrary initial and final states, as opposed to considering only neighboring states.
Applied fields
See also
Thermodynamic process path
Lists and timelines
List of important publications in thermodynamics
List of textbooks on thermodynamics and statistical mechanics
List of thermal conductivities
List of thermodynamic properties
Table of thermodynamic equations
Timeline of thermodynamics
Thermodynamic equations
Notes
References
Further reading
External links
Thermodynamics Data & Property Calculation Websites
Thermodynamics Educational Websites
Biochemistry Thermodynamics
Thermodynamics and Statistical Mechanics
Engineering Thermodynamics – A Graphical Approach
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
Energy
Chemical engineering | 0.788593 | 0.998881 | 0.787711 |
Navier–Stokes equations | The Navier–Stokes equations are partial differential equations which describe the motion of viscous fluid substances. They were named after French engineer and physicist Claude-Louis Navier and the Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
The Navier–Stokes equations mathematically express momentum balance for Newtonian fluids and make use of conservation of mass. They are sometimes accompanied by an equation of state relating pressure, temperature and density. They arise from applying Isaac Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term—hence describing viscous flow. The difference between them and the closely related Euler equations is that Navier–Stokes equations take viscosity into account while the Euler equations model only inviscid flow. As a result, the Navier–Stokes equations are parabolic equations and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable).
The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and many other problems. Coupled with Maxwell's equations, they can be used to model and study magnetohydrodynamics.
The Navier–Stokes equations are also of great interest in a purely mathematical sense. Despite their wide range of practical uses, it has not yet been proven whether smooth solutions always exist in three dimensions—i.e., whether they are infinitely differentiable (or even just bounded) at all points in the domain. This is called the Navier–Stokes existence and smoothness problem. The Clay Mathematics Institute has called this one of the seven most important open problems in mathematics and has offered a US$1 million prize for a solution or a counterexample.
Flow velocity
The solution of the equations is a flow velocity. It is a vector field—to every point in a fluid, at any moment in a time interval, it gives a vector whose direction and magnitude are those of the velocity of the fluid at that point in space and at that moment in time. It is usually studied in three spatial dimensions and one time dimension, although two (spatial) dimensional and steady-state cases are often used as models, and higher-dimensional analogues are studied in both pure and applied mathematics. Once the velocity field is calculated, other quantities of interest such as pressure or temperature may be found using dynamical equations and relations. This is different from what one normally sees in classical mechanics, where solutions are typically trajectories of position of a particle or deflection of a continuum. Studying velocity instead of position makes more sense for a fluid, although for visualization purposes one can compute various trajectories. In particular, the streamlines of a vector field, interpreted as flow velocity, are the paths along which a massless fluid particle would travel. These paths are the integral curves whose derivative at each point is equal to the vector field, and they can represent visually the behavior of the vector field at a point in time.
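To make the notion of streamlines as integral curves concrete, the sketch below traces the path of a massless particle through a fixed two-dimensional velocity field with SciPy; the swirl field used here is a hypothetical stand-in for a velocity field obtained from a Navier–Stokes solution.

```python
# Trace a streamline of a steady 2D velocity field by integrating dx/ds = u(x).
import numpy as np
from scipy.integrate import solve_ivp

def velocity(s, xy):
    """Hypothetical steady velocity field: a simple solid-body-like swirl."""
    x, y = xy
    return [-y, x]

sol = solve_ivp(velocity, t_span=(0.0, 6.0), y0=[1.0, 0.0], max_step=0.01)
path = np.column_stack([sol.y[0], sol.y[1]])  # sample points along the integral curve
print(path[:3])
```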
General continuum equations
The Navier–Stokes momentum equation can be derived as a particular form of the Cauchy momentum equation, whose general convective form is:
By setting the Cauchy stress tensor to be the sum of a viscosity term (the deviatoric stress) and a pressure term (volumetric stress), we arrive at:
where
D/Dt is the material derivative, defined as ∂/∂t + u ⋅ ∇,
ρ is the (mass) density,
u is the flow velocity,
∇ ⋅ is the divergence,
p is the pressure,
t is time,
τ is the deviatoric stress tensor, which has order 2,
g represents body accelerations acting on the continuum, for example gravity, inertial accelerations, electrostatic accelerations, and so on.
In this form, it is apparent that in the assumption of an inviscid fluid – no deviatoric stress – Cauchy equations reduce to the Euler equations.
Assuming conservation of mass, with the known properties of divergence and gradient we can use the mass continuity equation, which represents the mass per unit volume of a homogeneous fluid with respect to space and time (i.e., material derivative ) of any finite volume (V) to represent the change of velocity in fluid media:
where
is the material derivative of mass per unit volume (density, ),
is the mathematical operation for the integration throughout the volume (V),
is the partial derivative mathematical operator,
is the divergence of the flow velocity, which is a scalar field, Note 1
is the gradient of density, which is the vector derivative of a scalar field, Note 1
Note 1 - Refer to the mathematical operator del represented by the nabla symbol.
to arrive at the conservation form of the equations of motion. This is often written:
where is the outer product of the flow velocity:
The left side of the equation describes acceleration, and may be composed of time-dependent and convective components (also the effects of non-inertial coordinates if present). The right side of the equation is in effect a summation of hydrostatic effects, the divergence of deviatoric stress and body forces (such as gravity).
All non-relativistic balance equations, such as the Navier–Stokes equations, can be derived by beginning with the Cauchy equations and specifying the stress tensor through a constitutive relation. By expressing the deviatoric (shear) stress tensor in terms of viscosity and the fluid velocity gradient, and assuming constant viscosity, the above Cauchy equations will lead to the Navier–Stokes equations below.
Convective acceleration
A significant feature of the Cauchy equation and consequently all other continuum equations (including Euler and Navier–Stokes) is the presence of convective acceleration: the effect of acceleration of a flow with respect to space. While individual fluid particles indeed experience time-dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle.
Compressible flow
Remark: here, the deviatoric stress tensor is denoted as it was in the general continuum equations and in the incompressible flow section.
The compressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient , or more simply the rate-of-strain tensor:
the deviatoric stress is linear in this variable: , where is independent of the strain rate tensor, is the fourth-order tensor representing the constant of proportionality, called the viscosity or elasticity tensor, and : is the double-dot product.
the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently is an isotropic tensor; furthermore, since the deviatoric stress tensor is symmetric, by Helmholtz decomposition it can be expressed in terms of two scalar Lamé parameters, the second viscosity and the dynamic viscosity , as it is usual in linear elasticity:
where is the identity tensor, and is the trace of the rate-of-strain tensor. So this decomposition can be explicitly defined as:
Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow:
Given this relation, and since the trace of the identity tensor in three dimensions is three:
the trace of the stress tensor in three dimensions becomes:
So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics:
Introducing the bulk viscosity ,
we arrive at the linear constitutive equation in the form usually employed in thermal hydraulics:
which can also be arranged in the other usual form:
Note that in the compressible case the pressure is no more proportional to the isotropic stress term, since there is the additional bulk viscosity term:
and the deviatoric stress tensor is still coincident with the shear stress tensor (i.e. the deviatoric stress in a Newtonian fluid has no normal stress components), and it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity:
Both bulk viscosity and dynamic viscosity need not be constant – in general, they depend on two thermodynamic variables if the fluid contains a single chemical species, say for example, pressure and temperature. Any equation that makes explicit one of these transport coefficients in the conservation variables is called an equation of state.
In its most general form, the Navier–Stokes momentum equation becomes
in index notation, the equation can be written as
The corresponding equation in conservation form can be obtained by considering that, given the mass continuity equation, the left side is equivalent to:
To give finally:
Navier–Stokes momentum equation (conservative form)
Apart from its dependence on pressure and temperature, the second viscosity coefficient also depends on the process; that is to say, the second viscosity coefficient is not just a material property. Example: in the case of a sound wave with a definite frequency that alternately compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called the dispersion. In some cases, the second viscosity can be assumed to be constant, in which case the effect of the volume viscosity is that the mechanical pressure is not equivalent to the thermodynamic pressure, as demonstrated below.
However, this difference is usually neglected most of the time (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming that the bulk viscosity vanishes. The assumption of setting the bulk viscosity to zero is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for a monatomic gas both experimentally and from the kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect. With the Stokes hypothesis, the Navier–Stokes equations become
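For reference, a common textbook form of the Newtonian constitutive relation under the Stokes hypothesis (zero bulk viscosity) is the following, written with τ for the deviatoric stress, μ for the dynamic viscosity and u for the flow velocity; the notation is an assumption chosen to match standard usage.

```latex
% Newtonian deviatoric stress under the Stokes hypothesis (bulk viscosity zero, lambda = -2*mu/3)
\boldsymbol{\tau} = \mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf T}\right)
  - \frac{2}{3}\,\mu\,(\nabla\cdot\mathbf{u})\,\mathbf{I}
```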
If the dynamic and bulk viscosities are assumed to be uniform in space, the equations in convective form can be simplified further. By computing the divergence of the stress tensor, since the divergence of tensor is and the divergence of tensor is , one finally arrives at the compressible Navier–Stokes momentum equation:
where is the material derivative. is the shear kinematic viscosity and is the bulk kinematic viscosity. The left-hand side changes in the conservation form of the Navier–Stokes momentum equation.
By bringing the operator on the flow velocity on the left side, one also has:
The convective acceleration term can also be written as
where the vector is known as the Lamb vector.
For the special case of an incompressible flow, the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow resulting in a solenoidal velocity field with .
Incompressible flow
The incompressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient .
the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently is an isotropic tensor; furthermore, the deviatoric stress tensor can be expressed in terms of the dynamic viscosity :
where
is the rate-of-strain tensor. So this decomposition can be made explicit as:
This constitutive equation is also called the Newtonian law of viscosity.
Dynamic viscosity need not be constant – in incompressible flows it can depend on density and on pressure. Any equation that makes explicit one of these transport coefficients in the conservative variables is called an equation of state.
The divergence of the deviatoric stress in case of uniform viscosity is given by:
because ∇ ⋅ u = 0 for an incompressible fluid.
Incompressibility rules out density and pressure waves like sound or shock waves, so this simplification is not useful if these phenomena are of interest. The incompressible flow assumption typically holds well with all fluids at low Mach numbers (say up to about Mach 0.3), such as for modelling air winds at normal temperatures. The incompressible Navier–Stokes equations are best visualized by dividing by the density:
where is called the kinematic viscosity.
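A commonly quoted form of the incompressible momentum equation divided through by the constant density is sketched below for orientation, with ν = μ/ρ the kinematic viscosity and g the body acceleration; it is the standard textbook expression rather than a quotation of any particular source.

```latex
% Incompressible Navier–Stokes momentum equation with constant density and viscosity
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{g},
\qquad \nabla\cdot\mathbf{u} = 0
```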
By isolating the fluid velocity, one can also state:
If the density is constant throughout the fluid domain, or, in other words, if all fluid elements have the same density, , then we have
where is called the unit pressure head.
In incompressible flows, the pressure field satisfies the Poisson equation,
which is obtained by taking the divergence of the momentum equations.
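A sketch of the Poisson equation mentioned above for the constant-density, constant-viscosity case, assuming any body force is divergence-free; it follows by taking the divergence of the momentum equation and using ∇ ⋅ u = 0.

```latex
% Pressure Poisson equation for incompressible, constant-density flow
\nabla^{2} p = -\rho\,\nabla\cdot\bigl[(\mathbf{u}\cdot\nabla)\mathbf{u}\bigr]
             = -\rho\,\frac{\partial u_i}{\partial x_j}\,\frac{\partial u_j}{\partial x_i}
```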
It is well worth observing the meaning of each term (compare to the Cauchy momentum equation):
The higher-order term, namely the shear stress divergence , has simply reduced to the vector Laplacian term . This Laplacian term can be interpreted as the difference between the velocity at a point and the mean velocity in a small surrounding volume. This implies that – for a Newtonian fluid – viscosity operates as a diffusion of momentum, in much the same way as the heat conduction. In fact neglecting the convection term, incompressible Navier–Stokes equations lead to a vector diffusion equation (namely Stokes equations), but in general the convection term is present, so incompressible Navier–Stokes equations belong to the class of convection–diffusion equations.
In the usual case of an external field being a conservative field:
by defining the hydraulic head:
one can finally condense the whole source in one term, arriving to the incompressible Navier–Stokes equation with conservative external field:
The incompressible Navier–Stokes equations with uniform density and viscosity and a conservative external field constitute the fundamental equation of hydraulics. The domain for these equations is commonly a Euclidean space of three or fewer dimensions, for which an orthogonal coordinate reference frame is usually set to make explicit the system of scalar partial differential equations to be solved. In three dimensions, the commonly used orthogonal coordinate systems are Cartesian, cylindrical, and spherical. Expressing the Navier–Stokes vector equation in Cartesian coordinates is quite straightforward and not much influenced by the number of dimensions of the Euclidean space employed, and this is the case also for the first-order terms (like the variation and convection ones) in non-Cartesian orthogonal coordinate systems. But for the higher-order terms (the two coming from the divergence of the deviatoric stress that distinguish the Navier–Stokes equations from the Euler equations) some tensor calculus is required for deducing an expression in non-Cartesian orthogonal coordinate systems.
A special case of the fundamental equation of hydraulics is Bernoulli's equation.
The incompressible Navier–Stokes equation is composite, the sum of two orthogonal equations,
where and are solenoidal and irrotational projection operators satisfying , and and are the non-conservative and conservative parts of the body force. This result follows from the Helmholtz theorem (also known as the fundamental theorem of vector calculus). The first equation is a pressureless governing equation for the velocity, while the second equation for the pressure is a functional of the velocity and is related to the pressure Poisson equation.
The explicit functional form of the projection operator in 3D is found from the Helmholtz Theorem:
with a similar structure in 2D. Thus the governing equation is an integro-differential equation similar to Coulomb and Biot–Savart law, not convenient for numerical computation.
An equivalent weak or variational form of the equation, proved to produce the same velocity solution as the Navier–Stokes equation, is given by,
for divergence-free test functions satisfying appropriate boundary conditions. Here, the projections are accomplished by the orthogonality of the solenoidal and irrotational function spaces. The discrete form of this is eminently suited to finite element computation of divergence-free flow, as we shall see in the next section. There one will be able to address the question "How does one specify pressure-driven (Poiseuille) problems with a pressureless governing equation?".
The absence of pressure forces from the governing velocity equation demonstrates that the equation is not a dynamic one, but rather a kinematic equation where the divergence-free condition serves the role of a conservation equation. This all would seem to refute the frequent statements that the incompressible pressure enforces the divergence-free condition.
Weak form of the incompressible Navier–Stokes equations
Strong form
Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density in a domain
with boundary
being and portions of the boundary where respectively a Dirichlet and a Neumann boundary condition is applied:
is the fluid velocity, the fluid pressure, a given forcing term, the outward directed unit normal vector to , and the viscous stress tensor defined as:
Let be the dynamic viscosity of the fluid, the second-order identity tensor and the strain-rate tensor defined as:
The functions and are given Dirichlet and Neumann boundary data, while is the initial condition. The first equation is the momentum balance equation, while the second represents the mass conservation, namely the continuity equation.
Assuming constant dynamic viscosity, using the vectorial identity
and exploiting mass conservation, the divergence of the total stress tensor in the momentum equation can also be expressed as:
Moreover, note that the Neumann boundary conditions can be rearranged as:
Weak form
In order to find the weak form of the Navier–Stokes equations, firstly, consider the momentum equation
multiply it by a test function , defined in a suitable space , and integrate both sides with respect to the domain :
Integrating by parts the diffusive and the pressure terms and using Gauss' theorem:
Using these relations, one gets:
In the same fashion, the continuity equation is multiplied by a test function belonging to a space and integrated over the domain :
The space functions are chosen as follows:
Considering that the test function vanishes on the Dirichlet boundary and considering the Neumann condition, the integral on the boundary can be rearranged as:
Having this in mind, the weak formulation of the Navier–Stokes equations is expressed as:
Discrete velocity
With partitioning of the problem domain and defining basis functions on the partitioned domain, the discrete form of the governing equation is
It is desirable to choose basis functions that reflect the essential feature of incompressible flow – the elements must be divergence-free. While the velocity is the variable of interest, the existence of the stream function or vector potential is necessary by the Helmholtz theorem. Further, to determine fluid flow in the absence of a pressure gradient, one can specify the difference of stream function values across a 2D channel, or the line integral of the tangential component of the vector potential around the channel in 3D, the flow being given by Stokes' theorem. Discussion will be restricted to 2D in the following.
We further restrict discussion to continuous Hermite finite elements which have at least first-derivative degrees-of-freedom. With this, one can draw a large number of candidate triangular and rectangular elements from the plate-bending literature. These elements have derivatives as components of the gradient. In 2D, the gradient and curl of a scalar are clearly orthogonal, given by the expressions,
Adopting continuous plate-bending elements, interchanging the derivative degrees-of-freedom and changing the sign of the appropriate one gives many families of stream function elements.
Taking the curl of the scalar stream function elements gives divergence-free velocity elements. The requirement that the stream function elements be continuous assures that the normal component of the velocity is continuous across element interfaces, all that is necessary for vanishing divergence on these interfaces.
Boundary conditions are simple to apply. The stream function is constant on no-flow surfaces, with no-slip velocity conditions on surfaces.
Stream function differences across open channels determine the flow. No boundary conditions are necessary on open boundaries, though consistent values may be used with some problems. These are all Dirichlet conditions.
The algebraic equations to be solved are simple to set up, but of course are non-linear, requiring iteration of the linearized equations.
Similar considerations apply in three dimensions, but extension from 2D is not immediate because of the vector nature of the potential, and there exists no simple relation between the gradient and the curl as was the case in 2D.
Pressure recovery
Recovering pressure from the velocity field is easy. The discrete weak equation for the pressure gradient is,
where the test/weight functions are irrotational. Any conforming scalar finite element may be used. However, the pressure gradient field may also be of interest. In this case, one can use scalar Hermite elements for the pressure. For the test/weight functions one would choose the irrotational vector elements obtained from the gradient of the pressure element.
Non-inertial frame of reference
The rotating frame of reference introduces some interesting pseudo-forces into the equations through the material derivative term. Consider a stationary inertial frame of reference K, and a non-inertial frame of reference K′, which is translating with velocity U(t) and rotating with angular velocity Ω(t) with respect to the stationary frame. The Navier–Stokes equation observed from the non-inertial frame then becomes
Here x and u are measured in the non-inertial frame. The first term in the parenthesis represents Coriolis acceleration, the second term is due to centrifugal acceleration, the third is due to the linear acceleration of K′ with respect to K and the fourth term is due to the angular acceleration of K′ with respect to K.
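A commonly quoted form of the apparent accelerations described above is sketched here, with Ω the angular velocity and U the translational velocity of K′ relative to K, and x and u the position and velocity measured in K′; this is the standard textbook expression, given as an orientation aid.

```latex
% Apparent accelerations in a frame translating with U(t) and rotating with Omega(t)
\mathbf{a}_{\text{apparent}} =
 -\Bigl[\,2\,\boldsymbol{\Omega}\times\mathbf{u}
 + \boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{x})
 + \frac{d\mathbf{U}}{dt}
 + \frac{d\boldsymbol{\Omega}}{dt}\times\mathbf{x}\Bigr]
```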
Other equations
The Navier–Stokes equations are strictly a statement of the balance of momentum. To fully describe fluid flow, more information is needed, how much depending on the assumptions made. This additional information may include boundary data (no-slip, capillary surface, etc.), conservation of mass, balance of energy, and/or an equation of state.
Continuity equation for incompressible fluid
Regardless of the flow assumptions, a statement of the conservation of mass is generally necessary. This is achieved through the mass continuity equation, as discussed above in the "General continuum equations" within this article, as follows:
A fluid medium for which the density is constant is called incompressible. Therefore, the rate of change of density with respect to time and the gradient of density are equal to zero. In this case the general equation of continuity, ∂ρ/∂t + ∇ ⋅ (ρu) = 0, reduces to ρ(∇ ⋅ u) = 0. Furthermore, assuming that the density is a non-zero constant means that both sides of the equation can be divided by the density. Therefore, the continuity equation for an incompressible fluid reduces further to ∇ ⋅ u = 0. This relationship identifies that the divergence of the flow velocity vector is equal to zero, which means that for an incompressible fluid the flow velocity field is a solenoidal vector field or a divergence-free vector field. Note that this relationship can be combined with the vector Laplace operator and the vorticity, which for an incompressible fluid gives ∇²u = −∇ × (∇ × u).
Stream function for incompressible 2D fluid
Taking the curl of the incompressible Navier–Stokes equation results in the elimination of pressure. This is especially easy to see if 2D Cartesian flow is assumed (like in the degenerate 3D case with u_z = 0 and no dependence of anything on z), where the equations reduce to:
Differentiating the first with respect to y, the second with respect to x and subtracting the resulting equations will eliminate pressure and any conservative force.
For incompressible flow, defining the stream function through
results in mass continuity being unconditionally satisfied (given the stream function is continuous), and then incompressible Newtonian 2D momentum and mass conservation condense into one equation:
where ∇⁴ is the 2D biharmonic operator and ν is the kinematic viscosity, ν = μ/ρ. We can also express this compactly using the Jacobian determinant:
This single equation together with appropriate boundary conditions describes 2D fluid flow, taking only kinematic viscosity as a parameter. Note that the equation for creeping flow results when the left side is assumed zero.
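One standard way of writing the single stream-function equation described above, assuming the common convention u = ∂ψ/∂y and v = −∂ψ/∂x and no body-force curl, is the following sketch; ν is the kinematic viscosity.

```latex
% 2D incompressible Navier–Stokes in stream-function (vorticity) form
\frac{\partial}{\partial t}\bigl(\nabla^{2}\psi\bigr)
 + \frac{\partial\psi}{\partial y}\,\frac{\partial}{\partial x}\bigl(\nabla^{2}\psi\bigr)
 - \frac{\partial\psi}{\partial x}\,\frac{\partial}{\partial y}\bigl(\nabla^{2}\psi\bigr)
 = \nu\,\nabla^{4}\psi
```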
In axisymmetric flow another stream function formulation, called the Stokes stream function, can be used to describe the velocity components of an incompressible flow with one scalar function.
The incompressible Navier–Stokes equation is a differential algebraic equation, having the inconvenient feature that there is no explicit mechanism for advancing the pressure in time. Consequently, much effort has been expended to eliminate the pressure from all or part of the computational process. The stream function formulation eliminates the pressure but only in two dimensions and at the expense of introducing higher derivatives and elimination of the velocity, which is the primary variable of interest.
Properties
Nonlinearity
The Navier–Stokes equations are nonlinear partial differential equations in the general case and so remain in almost every real situation. In some cases, such as one-dimensional flow and Stokes flow (or creeping flow), the equations can be simplified to linear equations. The nonlinearity makes most problems difficult or impossible to solve and is the main contributor to the turbulence that the equations model.
The nonlinearity is due to convective acceleration, which is an acceleration associated with the change in velocity over position. Hence, any convective flow, whether turbulent or not, will involve nonlinearity. An example of convective but laminar (nonturbulent) flow would be the passage of a viscous fluid (for example, oil) through a small converging nozzle. Such flows, whether exactly solvable or not, can often be thoroughly studied and understood.
Turbulence
Turbulence is the time-dependent chaotic behaviour seen in many fluid flows. It is generally believed that it is due to the inertia of the fluid as a whole: the culmination of time-dependent and convective acceleration; hence flows where inertial effects are small tend to be laminar (the Reynolds number quantifies how much the flow is affected by inertia). It is believed, though not known with certainty, that the Navier–Stokes equations describe turbulence properly.
The numerical solution of the Navier–Stokes equations for turbulent flow is extremely difficult, and due to the significantly different mixing-length scales that are involved in turbulent flow, the stable solution of this requires such a fine mesh resolution that the computational time becomes infeasible for calculation or direct numerical simulation. Attempts to solve turbulent flow using a laminar solver typically result in a time-unsteady solution, which fails to converge appropriately. To counter this, time-averaged equations such as the Reynolds-averaged Navier–Stokes equations (RANS), supplemented with turbulence models, are used in practical computational fluid dynamics (CFD) applications when modeling turbulent flows. Some models include the Spalart–Allmaras, k–ω, k–ε, and SST models, which add a variety of additional equations to bring closure to the RANS equations. Large eddy simulation (LES) can also be used to solve these equations numerically. This approach is computationally more expensive (in time and in computer memory) than RANS, but produces better results because it explicitly resolves the larger turbulent scales.
Applicability
Together with supplemental equations (for example, conservation of mass) and well-formulated boundary conditions, the Navier–Stokes equations seem to model fluid motion accurately; even turbulent flows seem (on average) to agree with real world observations.
The Navier–Stokes equations assume that the fluid being studied is a continuum (it is infinitely divisible and not composed of particles such as atoms or molecules), and is not moving at relativistic velocities. At very small scales or under extreme conditions, real fluids made out of discrete molecules will produce results different from the continuous fluids modeled by the Navier–Stokes equations. For example, capillarity of internal layers in fluids appears for flow with high gradients. For large Knudsen number of the problem, the Boltzmann equation may be a suitable replacement.
Failing that, one may have to resort to molecular dynamics or various hybrid methods.
Another limitation is simply the complicated nature of the equations. Time-tested formulations exist for common fluid families, but the application of the Navier–Stokes equations to less common families tends to result in very complicated formulations and often to open research problems. For this reason, these equations are usually written for Newtonian fluids where the viscosity model is linear; truly general models for the flow of other kinds of fluids (such as blood) do not exist.
Application to specific problems
The Navier–Stokes equations, even when written explicitly for specific fluids, are rather generic in nature and their proper application to specific problems can be very diverse. This is partly because there is an enormous variety of problems that may be modeled, ranging from as simple as the distribution of static pressure to as complicated as multiphase flow driven by surface tension.
Generally, application to specific problems begins with some flow assumptions and initial/boundary condition formulation; this may be followed by scale analysis to further simplify the problem.
Parallel flow
Assume steady, parallel, one-dimensional, non-convective pressure-driven flow between parallel plates; the resulting scaled (dimensionless) boundary value problem is:
The boundary condition is the no-slip condition. This problem is easily solved for the flow field:
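A minimal numeric sketch of the resulting parabolic profile, assuming the common scaling in which the momentum equation reduces to u″(y) = −1 with no-slip walls at y = 0 and y = 1; the closed form u(y) = y(1 − y)/2 is used directly.

```python
# Plane Poiseuille flow between parallel plates, scaled so that u''(y) = -1, u(0) = u(1) = 0.
import numpy as np

def poiseuille_profile(y):
    """Scaled velocity profile u(y) = y(1 - y)/2 for pressure-driven flow between plates."""
    return 0.5 * y * (1.0 - y)

y = np.linspace(0.0, 1.0, 101)
print("maximum velocity:", poiseuille_profile(0.5))          # 0.125 at mid-channel
print("net flow rate:", np.trapz(poiseuille_profile(y), y))  # ~1/12
```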
From this point onward, more quantities of interest can be easily obtained, such as viscous drag force or net flow rate.
Radial flow
Difficulties may arise when the problem becomes slightly more complicated. A seemingly modest twist on the parallel flow above would be the radial flow between parallel plates; this involves convection and thus non-linearity. The velocity field may be represented by a function that must satisfy:
This ordinary differential equation is what is obtained when the Navier–Stokes equations are written and the flow assumptions applied (additionally, the pressure gradient is solved for). The nonlinear term makes this a very difficult problem to solve analytically (a lengthy implicit solution may be found which involves elliptic integrals and roots of cubic polynomials). Issues with the actual existence of solutions arise for (approximately; this is not ), the parameter being the Reynolds number with appropriately chosen scales. This is an example of flow assumptions losing their applicability, and an example of the difficulty in "high" Reynolds number flows.
Convection
A type of natural convection that can be described by the Navier–Stokes equation is the Rayleigh–Bénard convection. It is one of the most commonly studied convection phenomena because of its analytical and experimental accessibility.
Exact solutions of the Navier–Stokes equations
Some exact solutions to the Navier–Stokes equations exist. Examples of degenerate cases (with the non-linear terms in the Navier–Stokes equations equal to zero) are Poiseuille flow, Couette flow and the oscillatory Stokes boundary layer. But also, more interesting examples, solutions to the full non-linear equations, exist, such as Jeffery–Hamel flow, Von Kármán swirling flow, stagnation point flow, Landau–Squire jet, and Taylor–Green vortex (Landau & Lifshitz 1987, pp. 75–88).
Note that the existence of these exact solutions does not imply they are stable: turbulence may develop at higher Reynolds numbers.
Under additional assumptions, the component parts can be separated.
A three-dimensional steady-state vortex solution
A steady-state example with no singularities comes from considering the flow along the lines of a Hopf fibration. Let be a constant radius of the inner coil. One set of solutions is given by:
for arbitrary constants and . This is a solution in a non-viscous gas (compressible fluid) whose density, velocities and pressure go to zero far from the origin. (Note this is not a solution to the Clay Millennium problem because that refers to incompressible fluids where is a constant, and neither does it deal with the uniqueness of the Navier–Stokes equations with respect to any turbulence properties.) It is also worth pointing out that the components of the velocity vector are exactly those from the Pythagorean quadruple parametrization. Other choices of density and pressure are possible with the same velocity field:
Viscous three-dimensional periodic solutions
Two examples of periodic, fully three-dimensional viscous solutions are described below.
These solutions are defined on a three-dimensional torus and are characterized by positive and negative helicity respectively.
The solution with positive helicity is given by:
where is the wave number and the velocity components are normalized so that the average kinetic energy per unit of mass is at .
The pressure field is obtained from the velocity field as (where and are reference values for the pressure and density fields respectively).
Since both the solutions belong to the class of Beltrami flow, the vorticity field is parallel to the velocity and, for the case with positive helicity, is given by .
These solutions can be regarded as a generalization in three dimensions of the classic two-dimensional Taylor–Green vortex.
Wyld diagrams
Wyld diagrams are bookkeeping graphs that correspond to the Navier–Stokes equations via a perturbation expansion of the fundamental continuum mechanics. Similar to the Feynman diagrams in quantum field theory, these diagrams are an extension of Keldysh's technique for nonequilibrium processes in fluid dynamics. In other words, these diagrams assign graphs to the (often) turbulent phenomena in turbulent fluids by allowing correlated and interacting fluid particles to obey stochastic processes associated to pseudo-random functions in probability distributions.
Representations in 3D
Note that the formulas in this section make use of the single-line notation for partial derivatives, where, e.g. means the partial derivative of with respect to , and means the second-order partial derivative of with respect to .
A 2022 paper provides a less costly, dynamical and recurrent solution of the Navier-Stokes equation for 3D turbulent fluid flows. On suitably short time scales, the dynamics of turbulence is deterministic.
Cartesian coordinates
From the general form of the Navier–Stokes, with the velocity vector expanded as u = (u_x, u_y, u_z), sometimes respectively named u, v, w, we may write the vector equation explicitly,
Note that gravity has been accounted for as a body force, and the values of , , will depend on the orientation of gravity with respect to the chosen set of coordinates.
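For orientation, the x-component of the incompressible momentum equation in Cartesian coordinates is sketched below in its standard textbook form (the y and z components are analogous); the component names u_x, u_y, u_z follow the expansion given above.

```latex
% x-component of the incompressible Navier–Stokes momentum equation in Cartesian coordinates
\rho\left(\frac{\partial u_x}{\partial t}
 + u_x\frac{\partial u_x}{\partial x}
 + u_y\frac{\partial u_x}{\partial y}
 + u_z\frac{\partial u_x}{\partial z}\right)
 = -\frac{\partial p}{\partial x}
 + \mu\left(\frac{\partial^{2} u_x}{\partial x^{2}}
 + \frac{\partial^{2} u_x}{\partial y^{2}}
 + \frac{\partial^{2} u_x}{\partial z^{2}}\right)
 + \rho g_x
```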
The continuity equation reads:
When the flow is incompressible, the density ρ does not change for any fluid particle, and its material derivative vanishes: Dρ/Dt = 0. The continuity equation is reduced to:
Thus, for the incompressible version of the Navier–Stokes equation the second part of the viscous terms fall away (see Incompressible flow).
This system of four equations comprises the most commonly used and studied form. Though comparatively more compact than other representations, this is still a nonlinear system of partial differential equations for which solutions are difficult to obtain.
Cylindrical coordinates
A change of variables on the Cartesian equations will yield the following momentum equations for r, φ, and z:
The gravity components will generally not be constants, however for most applications either the coordinates are chosen so that the gravity components are constant or else it is assumed that gravity is counteracted by a pressure field (for example, flow in horizontal pipe is treated normally without gravity and without a vertical pressure gradient). The continuity equation is:
This cylindrical representation of the incompressible Navier–Stokes equations is the second most commonly seen (the first being Cartesian above). Cylindrical coordinates are chosen to take advantage of symmetry, so that a velocity component can disappear. A very common case is axisymmetric flow with the assumption of no tangential velocity, and the remaining quantities are independent of φ:
Spherical coordinates
In spherical coordinates, the radial, polar, and azimuthal momentum equations are (note the convention used: θ is the polar angle, or colatitude, with 0 ≤ θ ≤ π):
Mass continuity will read:
These equations could be (slightly) compacted by, for example, factoring from the viscous terms. However, doing so would undesirably alter the structure of the Laplacian and other quantities.
Navier–Stokes equations use in games
The Navier–Stokes equations are used extensively in video games in order to model a wide variety of natural phenomena. Simulations of small-scale gaseous fluids, such as fire and smoke, are often based on the seminal paper "Real-Time Fluid Dynamics for Games" by Jos Stam, which elaborates one of the methods proposed in Stam's earlier, more famous paper "Stable Fluids" from 1999. Stam proposes stable fluid simulation using a Navier–Stokes solution method from 1968, coupled with an unconditionally stable semi-Lagrangian advection scheme, as first proposed in 1992.
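A rough Python sketch of the semi-Lagrangian advection step at the heart of the approach described above; the grid layout, field names and bilinear interpolation here are illustrative assumptions, not Stam's actual code.

```python
# Semi-Lagrangian advection: trace each grid point backward along the velocity field
# and sample the old field at the departure point (unconditionally stable for any dt).
import numpy as np

def advect(field, vx, vy, dt):
    """Advect a 2D scalar field through velocity components (vx, vy) on a unit-spaced grid."""
    ny, nx = field.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    x = np.clip(i - dt * vx, 0, nx - 1)   # backtraced x positions, clamped to the grid
    y = np.clip(j - dt * vy, 0, ny - 1)   # backtraced y positions
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, nx - 1), np.minimum(y0 + 1, ny - 1)
    sx, sy = x - x0, y - y0
    # Bilinear interpolation of the old field at the departure points.
    return ((1 - sy) * ((1 - sx) * field[y0, x0] + sx * field[y0, x1])
            + sy * ((1 - sx) * field[y1, x0] + sx * field[y1, x1]))

# Hypothetical usage: advect a smoke-density field one step through a uniform wind.
density = np.zeros((64, 64)); density[30:34, 30:34] = 1.0
u = np.full((64, 64), 2.0); v = np.zeros((64, 64))
density = advect(density, u, v, dt=0.5)
```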
More recent implementations based upon this work run on the game system's graphics processing unit (GPU) as opposed to the central processing unit (CPU) and achieve a much higher degree of performance.
Many improvements have been proposed to Stam's original work, which suffers inherently from high numerical dissipation in both velocity and mass.
An introduction to interactive fluid simulation can be found in the 2007 ACM SIGGRAPH course, Fluid Simulation for Computer Animation.
See also
Citations
General references
V. Girault and P. A. Raviart. Finite Element Methods for Navier–Stokes Equations: Theory and Algorithms. Springer Series in Computational Mathematics. Springer-Verlag, 1986.
Smits, Alexander J. (2014), A Physical Introduction to Fluid Mechanics, Wiley.
Temam, Roger (1984), Navier–Stokes Equations: Theory and Numerical Analysis, ACM Chelsea Publishing.
External links
Simplified derivation of the Navier–Stokes equations
Three-dimensional unsteady form of the Navier–Stokes equations Glenn Research Center, NASA
Aerodynamics
Computational fluid dynamics
Concepts in physics
Equations of fluid dynamics
Functions of space and time
Partial differential equations
Transport phenomena | 0.787942 | 0.999623 | 0.787645 |
Laplace–Runge–Lenz vector | In classical mechanics, the Laplace–Runge–Lenz (LRL) vector is a vector used chiefly to describe the shape and orientation of the orbit of one astronomical body around another, such as a binary star or a planet revolving around a star. For two bodies interacting by Newtonian gravity, the LRL vector is a constant of motion, meaning that it is the same no matter where it is calculated on the orbit; equivalently, the LRL vector is said to be conserved. More generally, the LRL vector is conserved in all problems in which two bodies interact by a central force that varies as the inverse square of the distance between them; such problems are called Kepler problems.
The hydrogen atom is a Kepler problem, since it comprises two charged particles interacting by Coulomb's law of electrostatics, another inverse-square central force. The LRL vector was essential in the first quantum mechanical derivation of the spectrum of the hydrogen atom, before the development of the Schrödinger equation. However, this approach is rarely used today.
In classical and quantum mechanics, conserved quantities generally correspond to a symmetry of the system. The conservation of the LRL vector corresponds to an unusual symmetry; the Kepler problem is mathematically equivalent to a particle moving freely on the surface of a four-dimensional (hyper-)sphere, so that the whole problem is symmetric under certain rotations of the four-dimensional space. This higher symmetry results from two properties of the Kepler problem: the velocity vector always moves in a perfect circle and, for a given total energy, all such velocity circles intersect each other in the same two points.
The Laplace–Runge–Lenz vector is named after Pierre-Simon de Laplace, Carl Runge and Wilhelm Lenz. It is also known as the Laplace vector, the Runge–Lenz vector and the Lenz vector. Ironically, none of those scientists discovered it. The LRL vector has been re-discovered and re-formulated several times; for example, it is equivalent to the dimensionless eccentricity vector of celestial mechanics. Various generalizations of the LRL vector have been defined, which incorporate the effects of special relativity, electromagnetic fields and even different types of central forces.
Context
A single particle moving under any conservative central force has at least four constants of motion: the total energy E and the three Cartesian components of the angular momentum vector L with respect to the center of force. The particle's orbit is confined to the plane defined by the particle's initial momentum p (or, equivalently, its velocity v) and the vector r between the particle and the center of force (see Figure 1). This plane of motion is perpendicular to the constant angular momentum vector L; this may be expressed mathematically by the vector dot product equation r ⋅ L = 0. Given its mathematical definition below, the Laplace–Runge–Lenz vector (LRL vector) A is always perpendicular to the constant angular momentum vector L for all central forces. Therefore, A always lies in the plane of motion. As shown below, A points from the center of force to the periapsis of the motion, the point of closest approach, and its length is proportional to the eccentricity of the orbit.
The LRL vector is constant in length and direction, but only for an inverse-square central force. For other central forces, the vector is not constant, but changes in both length and direction. If the central force is approximately an inverse-square law, the vector is approximately constant in length, but slowly rotates its direction. A generalized conserved LRL vector can be defined for all central forces, but this generalized vector is a complicated function of position, and usually not expressible in closed form.
The LRL vector differs from other conserved quantities in the following property. Whereas for typical conserved quantities, there is a corresponding cyclic coordinate in the three-dimensional Lagrangian of the system, there does not exist such a coordinate for the LRL vector. Thus, the conservation of the LRL vector must be derived directly, e.g., by the method of Poisson brackets, as described below. Conserved quantities of this kind are called "dynamic", in contrast to the usual "geometric" conservation laws, e.g., that of the angular momentum.
History of rediscovery
The LRL vector is a constant of motion of the Kepler problem, and is useful in describing astronomical orbits, such as the motion of planets and binary stars. Nevertheless, it has never been well known among physicists, possibly because it is less intuitive than momentum and angular momentum. Consequently, it has been rediscovered independently several times over the last three centuries.
Jakob Hermann was the first to show that is conserved for a special case of the inverse-square central force, and worked out its connection to the eccentricity of the orbital ellipse. Hermann's work was generalized to its modern form by Johann Bernoulli in 1710. At the end of the century, Pierre-Simon de Laplace rediscovered the conservation of , deriving it analytically, rather than geometrically. In the middle of the nineteenth century, William Rowan Hamilton derived the equivalent eccentricity vector defined below, using it to show that the momentum vector moves on a circle for motion under an inverse-square central force (Figure 3).
At the beginning of the twentieth century, Josiah Willard Gibbs derived the same vector by vector analysis. Gibbs' derivation was used as an example by Carl Runge in a popular German textbook on vectors, which was referenced by Wilhelm Lenz in his paper on the (old) quantum mechanical treatment of the hydrogen atom. In 1926, Wolfgang Pauli used the LRL vector to derive the energy levels of the hydrogen atom using the matrix mechanics formulation of quantum mechanics, after which it became known mainly as the Runge–Lenz vector.
Mathematical definition
An inverse-square central force acting on a single particle is described by the equation
The corresponding potential energy is given by . The constant parameter describes the strength of the central force; it is equal to for gravitational and for electrostatic forces. The force is attractive if and repulsive if .
The LRL vector is defined mathematically by the formula
where
is the mass of the point particle moving under the central force,
is its momentum vector,
is its angular momentum vector,
is the position vector of the particle (Figure 1),
is the corresponding unit vector, i.e., , and
is the magnitude of , the distance of the mass from the center of force.
The SI units of the LRL vector are joule-kilogram-meter (J⋅kg⋅m). This follows because the units of and are kg⋅m/s and J⋅s, respectively. This agrees with the units of (kg) and of (N⋅m²).
This definition of the LRL vector pertains to a single point particle of mass moving under the action of a fixed force. However, the same definition may be extended to two-body problems such as the Kepler problem, by taking as the reduced mass of the two bodies and as the vector between the two bodies.
Since the assumed force is conservative, the total energy is a constant of motion,
The assumed force is also a central force. Hence, the angular momentum vector is also conserved and defines the plane in which the particle travels. The LRL vector is perpendicular to the angular momentum vector because both and are perpendicular to . It follows that lies in the plane of motion.
Alternative formulations for the same constant of motion may be defined, typically by scaling the vector with constants, such as the mass , the force parameter or the angular momentum . The most common variant is to divide by , which yields the eccentricity vector, a dimensionless vector along the semi-major axis whose modulus equals the eccentricity of the conic:
An equivalent formulation multiplies this eccentricity vector by the major semiaxis , giving the resulting vector the units of length. Yet another formulation divides by , yielding an equivalent conserved quantity with units of inverse length, a quantity that appears in the solution of the Kepler problem
where is the angle between and the position vector . Further alternative formulations are given below.
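As a concrete illustration of this definition, the vector can be assembled numerically from a single phase-space state. The following minimal NumPy sketch uses arbitrary illustrative values (unit mass, unit force constant, and a hand-picked bound-orbit state), not quantities taken from this article; it checks that the resulting vector is perpendicular to the angular momentum and reads the orbital eccentricity off its magnitude.

```python
import numpy as np

# Arbitrary illustrative values (not from the text): unit mass and force constant.
m, k = 1.0, 1.0

# A sample phase-space state (position and momentum) for a bound orbit.
r = np.array([1.0, 0.0, 0.0])
p = np.array([0.3, 1.1, 0.0])

L = np.cross(r, p)                                     # angular momentum
A = np.cross(p, L) - m * k * r / np.linalg.norm(r)     # Laplace-Runge-Lenz vector

print("A =", A)
print("A . L =", np.dot(A, L))                         # vanishes: A lies in the plane of motion
print("eccentricity |A|/(m k) =", np.linalg.norm(A) / (m * k))
```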
Derivation of the Kepler orbits
The shape and orientation of the orbits can be determined from the LRL vector as follows. Taking the dot product of with the position vector gives the equation
where is the angle between and (Figure 2). Permuting the scalar triple product yields
Rearranging yields the solution for the Kepler equation
This corresponds to the formula for a conic section of eccentricity e
where the eccentricity and is a constant.
Taking the dot product of with itself yields an equation involving the total energy ,
which may be rewritten in terms of the eccentricity,
Thus, if the energy is negative (bound orbits), the eccentricity is less than one and the orbit is an ellipse. Conversely, if the energy is positive (unbound orbits, also called "scattered orbits"), the eccentricity is greater than one and the orbit is a hyperbola. Finally, if the energy is exactly zero, the eccentricity is one and the orbit is a parabola. In all cases, the direction of lies along the symmetry axis of the conic section and points from the center of force toward the periapsis, the point of closest approach.
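The relation between eccentricity, total energy, and angular momentum can be checked numerically for the same kind of state. In the sketch below (again with arbitrary unit constants and an illustrative initial state), the eccentricity obtained from the magnitude of the LRL vector is compared with the value implied by the energy and angular momentum, and the sign of the energy classifies the orbit.

```python
import numpy as np

m, k = 1.0, 1.0                                    # illustrative constants
r = np.array([1.0, 0.0, 0.0])
p = np.array([0.3, 1.1, 0.0])

L = np.cross(r, p)
A = np.cross(p, L) - m * k * r / np.linalg.norm(r)
E = p @ p / (2 * m) - k / np.linalg.norm(r)        # total energy (negative here: bound orbit)

e_from_A = np.linalg.norm(A) / (m * k)
e_from_E = np.sqrt(1 + 2 * E * (L @ L) / (m * k**2))

print(e_from_A, e_from_E)                          # the two values agree
print("ellipse" if E < 0 else "parabola" if E == 0 else "hyperbola")
```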
Circular momentum hodographs
The conservation of the LRL vector and angular momentum vector is useful in showing that the momentum vector moves on a circle under an inverse-square central force.
Taking the dot product of
with itself yields
Further choosing along the -axis, and the major semiaxis as the -axis, yields the locus equation for ,
In other words, the momentum vector is confined to a circle of radius centered on . For bounded orbits, the eccentricity corresponds to the cosine of the angle shown in Figure 3. For unbounded orbits, we have and so the circle does not intersect the -axis.
In the degenerate limit of circular orbits, and thus vanishing , the circle centers at the origin .
For brevity, it is also useful to introduce the variable .
This circular hodograph is useful in illustrating the symmetry of the Kepler problem.
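This property can be verified numerically by integrating a Kepler orbit and checking that the momentum stays on a circle of radius mk/L whose center is displaced from the origin by A/L, perpendicular to A within the orbital plane. The sketch below is an illustration only; the constants, initial state, and integration interval are arbitrary choices, and the vector form of the circle's center, (L × A)/L², follows from the coordinate form quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 1.0
y0 = np.array([1.0, 0.0, 0.0, 0.3, 1.1, 0.0])           # (r, p) for a bound orbit in the xy-plane

def kepler(t, y):
    r, p = y[:3], y[3:]
    return np.concatenate([p / m, -k * r / np.linalg.norm(r)**3])

sol = solve_ivp(kepler, (0.0, 20.0), y0, rtol=1e-10, atol=1e-12, dense_output=True)

r0, p0 = y0[:3], y0[3:]
Lvec = np.cross(r0, p0)
A = np.cross(p0, Lvec) - m * k * r0 / np.linalg.norm(r0)
Lmag = np.linalg.norm(Lvec)

center = np.cross(Lvec, A) / Lmag**2                    # center of the momentum circle
radius = m * k / Lmag                                   # its radius

ps = sol.sol(np.linspace(0.0, 20.0, 400))[3:].T         # momentum samples along the orbit
deviation = np.abs(np.linalg.norm(ps - center, axis=1) - radius)
print("max deviation from the circle:", deviation.max())   # ~0 at integration tolerance
```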
Constants of motion and superintegrability
The seven scalar quantities , and (being vectors, the latter two contribute three conserved quantities each) are related by two equations, and , giving five independent constants of motion. (Since the magnitude of , hence the eccentricity of the orbit, can be determined from the total angular momentum and the energy , only the direction of is conserved independently; moreover, since must be perpendicular to , it contributes only one additional conserved quantity.)
This is consistent with the six initial conditions (the particle's initial position and velocity vectors, each with three components) that specify the orbit of the particle, since the initial time is not determined by a constant of motion. The resulting 1-dimensional orbit in 6-dimensional phase space is thus completely specified.
A mechanical system with degrees of freedom can have at most constants of motion, since there are initial conditions and the initial time cannot be determined by a constant of motion. A system with more than constants of motion is called superintegrable and a system with constants is called maximally superintegrable. Since the solution of the Hamilton–Jacobi equation in one coordinate system can yield only constants of motion, superintegrable systems must be separable in more than one coordinate system. The Kepler problem is maximally superintegrable, since it has three degrees of freedom and five independent constants of motion; its Hamilton–Jacobi equation is separable in both spherical coordinates and parabolic coordinates, as described below.
Maximally superintegrable systems follow closed, one-dimensional orbits in phase space, since the orbit is the intersection of the phase-space isosurfaces of their constants of motion. Consequently, the orbits are perpendicular to all gradients of all these independent isosurfaces, five in this specific problem, and hence are determined by the generalized cross products of all of these gradients. As a result, all superintegrable systems are automatically describable by Nambu mechanics, alternatively, and equivalently, to Hamiltonian mechanics.
Maximally superintegrable systems can be quantized using commutation relations, as illustrated below. Nevertheless, equivalently, they are also quantized in the Nambu framework, such as this classical Kepler problem into the quantum hydrogen atom.
Evolution under perturbed potentials
The Laplace–Runge–Lenz vector is conserved only for a perfect inverse-square central force. In most practical problems such as planetary motion, however, the interaction potential energy between two bodies is not exactly an inverse square law, but may include an additional central force, a so-called perturbation described by a potential energy . In such cases, the LRL vector rotates slowly in the plane of the orbit, corresponding to a slow apsidal precession of the orbit.
By assumption, the perturbing potential is a conservative central force, which implies that the total energy and angular momentum vector are conserved. Thus, the motion still lies in a plane perpendicular to and the magnitude is conserved, from the equation . The perturbation potential may be any sort of function, but should be significantly weaker than the main inverse-square force between the two bodies.
The rate at which the LRL vector rotates provides information about the perturbing potential . Using canonical perturbation theory and action-angle coordinates, it is straightforward to show that rotates at a rate of,
where is the orbital period, and the identity was used to convert the time integral into an angular integral (Figure 5). The expression in angular brackets, , represents the perturbing potential, but averaged over one full period; that is, averaged over one full passage of the body around its orbit. Mathematically, this time average corresponds to the following quantity in curly braces. This averaging helps to suppress fluctuations in the rate of rotation.
This approach was used to help verify Einstein's theory of general relativity, which adds a small effective inverse-cubic perturbation to the normal Newtonian gravitational potential,
Inserting this function into the integral and using the equation
to express in terms of , the precession rate of the periapsis caused by this non-Newtonian perturbation is calculated to be
which closely matches the observed anomalous precession of Mercury and binary pulsars. This agreement with experiment is strong evidence for general relativity.
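As a rough numerical illustration, the closed-form result of this averaging for the general-relativistic correction is the standard expression Δφ = 6πGM/(c²a(1 − e²)) radians per orbit. Applying it to Mercury with approximate orbital elements reproduces the famous 43 arcseconds per century; the constants below are approximate textbook values, not data from this article.

```python
import math

GM_sun = 1.327e20        # m^3 s^-2 (approximate)
c      = 2.998e8         # speed of light, m/s
a      = 5.79e10         # Mercury's semi-major axis, m
e      = 0.2056          # Mercury's eccentricity
T_days = 87.969          # Mercury's orbital period, days

dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))    # periapsis advance per orbit, radians
per_century = dphi * (36525.0 / T_days)                  # radians per Julian century
print(per_century * (180 / math.pi) * 3600, "arcsec/century")   # roughly 43"
```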
Poisson brackets
The unscaled functions
The algebraic structure of the problem is, as explained in later sections, .
The three components Li of the angular momentum vector have the Poisson brackets
where =1,2,3 and is the fully antisymmetric tensor, i.e., the Levi-Civita symbol; the summation index is used here to avoid confusion with the force parameter defined above. Then since the LRL vector transforms like a vector, we have the following Poisson bracket relations between and :
Finally, the Poisson bracket relations between the different components of are as follows:
where is the Hamiltonian. Note that the span of the components of and the components of is not closed under Poisson brackets, because of the factor of on the right-hand side of this last relation.
Finally, since both and are constants of motion, we have
The Poisson brackets will be extended to quantum mechanical commutation relations in the next section and to Lie brackets in a following section.
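The bracket relations can also be verified symbolically. The sketch below uses SymPy with the definitions adopted in this article (A = p × L − mk r̂ and H = p²/2m − k/r) and checks the component relation {A_x, A_y} = −2mH L_z; the symbol names and the explicit form of the canonical bracket are the only ingredients added here.

```python
import sympy as sp

m, k = sp.symbols('m k', positive=True)
q = sp.Matrix(sp.symbols('x y z', real=True))
p = sp.Matrix(sp.symbols('p_x p_y p_z', real=True))
r = sp.sqrt(q.dot(q))

def poisson(f, g):
    """Canonical Poisson bracket {f, g} in Cartesian coordinates."""
    return sum(sp.diff(f, q[i]) * sp.diff(g, p[i]) - sp.diff(f, p[i]) * sp.diff(g, q[i])
               for i in range(3))

L = q.cross(p)                              # angular momentum
A = p.cross(L) - m * k * q / r              # LRL vector
H = p.dot(p) / (2 * m) - k / r              # Kepler Hamiltonian

# {A_x, A_y} should equal -2 m H L_z, so the difference simplifies to zero.
print(sp.simplify(poisson(A[0], A[1]) + 2 * m * H * L[2]))
```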
The scaled functions
As noted below, a scaled Laplace–Runge–Lenz vector may be defined with the same units as angular momentum by dividing by . Since still transforms like a vector, the Poisson brackets of with the angular momentum vector can then be written in a similar form
The Poisson brackets of with itself depend on the sign of , i.e., on whether the energy is negative (producing closed, elliptical orbits under an inverse-square central force) or positive (producing open, hyperbolic orbits under an inverse-square central force). For negative energies—i.e., for bound systems—the Poisson brackets are
We may now appreciate the motivation for the chosen scaling of : With this scaling, the Hamiltonian no longer appears on the right-hand side of the preceding relation. Thus, the span of the three components of and the three components of forms a six-dimensional Lie algebra under the Poisson bracket. This Lie algebra is isomorphic to , the Lie algebra of the 4-dimensional rotation group .
By contrast, for positive energy, the Poisson brackets have the opposite sign,
In this case, the Lie algebra is isomorphic to .
The distinction between positive and negative energies arises because the desired scaling—the one that eliminates the Hamiltonian from the right-hand side of the Poisson bracket relations between the components of the scaled LRL vector—involves the square root of the Hamiltonian. To obtain real-valued functions, we must then take the absolute value of the Hamiltonian, which distinguishes between positive values (where ) and negative values (where ).
Laplace-Runge-Lenz operator for the hydrogen atom in momentum space
A scaled Laplace–Runge–Lenz operator in momentum space was found in 2022. The formula for the operator is simpler than in position space:
where the "degree operator"
multiplies a homogeneous polynomial by its degree.
Casimir invariants and the energy levels
The Casimir invariants for negative energies are
and have vanishing Poisson brackets with all components of and ,
C2 is trivially zero, since the two vectors are always perpendicular.
However, the other invariant, C1, is non-trivial and depends only on , and . Upon canonical quantization, this invariant allows the energy levels of hydrogen-like atoms to be derived using only quantum mechanical canonical commutation relations, instead of the conventional solution of the Schrödinger equation. This derivation is discussed in detail in the next section.
Quantum mechanics of the hydrogen atom
Poisson brackets provide a simple guide for quantizing most classical systems: the commutation relation of two quantum mechanical operators is specified by the Poisson bracket of the corresponding classical variables, multiplied by .
By carrying out this quantization and calculating the eigenvalues of the C1 Casimir operator for the Kepler problem, Wolfgang Pauli was able to derive the energy levels of hydrogen-like atoms (Figure 6) and, thus, their atomic emission spectrum. This elegant 1926 derivation was obtained before the development of the Schrödinger equation.
A subtlety of the quantum mechanical operator for the LRL vector is that the momentum and angular momentum operators do not commute; hence, the quantum operator cross product of and must be defined carefully. Typically, the operators for the Cartesian components are defined using a symmetrized (Hermitian) product,
Once this is done, one can show that the quantum LRL operators satisfy commutation relations exactly analogous to the Poisson bracket relations in the previous section—just replacing the Poisson bracket with times the commutator.
From these operators, additional ladder operators for can be defined,
These further connect different eigenstates of , i.e., different spin multiplets, among themselves.
A normalized first Casimir invariant operator, quantum analog of the above, can likewise be defined,
where is the inverse of the Hamiltonian energy operator, and is the identity operator.
Applying these ladder operators to the eigenstates |ℓ〉 of the total angular momentum, azimuthal angular momentum and energy operators, the eigenvalues of the first Casimir operator, C1, are seen to be quantized, . Importantly, by dint of the vanishing of C2, they are independent of the ℓ and quantum numbers, making the energy levels degenerate.
Hence, the energy levels are given by
which coincides with the Rydberg formula for hydrogen-like atoms (Figure 6). The additional symmetry operators have connected the different ℓ multiplets among themselves, for a given energy (and C1), dictating states at each level. In effect, they have enlarged the angular momentum group to .
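A quick numerical check of this result is shown below, using SciPy's physical constants. It evaluates E_n = −mk²/(2ħ²n²) with k = e²/(4πε₀) and the bare electron mass (the reduced mass is ignored, a small approximation), recovering the familiar −13.6 eV/n² Rydberg levels.

```python
from scipy.constants import m_e, e, hbar, epsilon_0, pi

k = e**2 / (4 * pi * epsilon_0)      # Coulomb force constant for the electron-proton system

for n in range(1, 4):
    E_n = -m_e * k**2 / (2 * hbar**2 * n**2)    # Pauli's result for the bound-state energies
    print(n, round(E_n / e, 2), "eV")           # -13.61, -3.4, -1.51 eV
```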
Conservation and symmetry
The conservation of the LRL vector corresponds to a subtle symmetry of the system. In classical mechanics, symmetries are continuous operations that map one orbit onto another without changing the energy of the system; in quantum mechanics, symmetries are continuous operations that "mix" electronic orbitals of the same energy, i.e., degenerate energy levels. A conserved quantity is usually associated with such symmetries. For example, every central force is symmetric under the rotation group SO(3), leading to the conservation of the angular momentum . Classically, an overall rotation of the system does not affect the energy of an orbit; quantum mechanically, rotations mix the spherical harmonics of the same quantum number without changing the energy.
The symmetry for the inverse-square central force is higher and more subtle. The peculiar symmetry of the Kepler problem results in the conservation of both the angular momentum vector and the LRL vector (as defined above) and, quantum mechanically, ensures that the energy levels of hydrogen do not depend on the angular momentum quantum numbers and . The symmetry is more subtle, however, because the symmetry operation must take place in a higher-dimensional space; such symmetries are often called "hidden symmetries".
Classically, the higher symmetry of the Kepler problem allows for continuous alterations of the orbits that preserve energy but not angular momentum; expressed another way, orbits of the same energy but different angular momentum (eccentricity) can be transformed continuously into one another. Quantum mechanically, this corresponds to mixing orbitals that differ in the and quantum numbers, such as the and atomic orbitals. Such mixing cannot be done with ordinary three-dimensional translations or rotations, but is equivalent to a rotation in a higher dimension.
For negative energies – i.e., for bound systems – the higher symmetry group is , which preserves the length of four-dimensional vectors
In 1935, Vladimir Fock showed that the quantum mechanical bound Kepler problem is equivalent to the problem of a free particle confined to a three-dimensional unit sphere in four-dimensional space. Specifically, Fock showed that the Schrödinger wavefunction in the momentum space for the Kepler problem was the stereographic projection of the spherical harmonics on the sphere. Rotation of the sphere and re-projection results in a continuous mapping of the elliptical orbits without changing the energy, an symmetry sometimes known as Fock symmetry; quantum mechanically, this corresponds to a mixing of all orbitals of the same energy quantum number . Valentine Bargmann noted subsequently that the Poisson brackets for the angular momentum vector and the scaled LRL vector formed the Lie algebra for . Simply put, the six quantities and correspond to the six conserved angular momenta in four dimensions, associated with the six possible simple rotations in that space (there are six ways of choosing two axes from four). This conclusion does not imply that our universe is a three-dimensional sphere; it merely means that this particular physics problem (the two-body problem for inverse-square central forces) is mathematically equivalent to a free particle on a three-dimensional sphere.
For positive energies – i.e., for unbound, "scattered" systems – the higher symmetry group is , which preserves the Minkowski length of 4-vectors
Both the negative- and positive-energy cases were considered by Fock and Bargmann and have been reviewed encyclopedically by Bander and Itzykson.
The orbits of central-force systems – and those of the Kepler problem in particular – are also symmetric under reflection. Therefore, the , and groups cited above are not the full symmetry groups of their orbits; the full groups are , , and O(3,1), respectively. Nevertheless, only the connected subgroups, , , and , are needed to demonstrate the conservation of the angular momentum and LRL vectors; the reflection symmetry is irrelevant for conservation, which may be derived from the Lie algebra of the group.
Rotational symmetry in four dimensions
The connection between the Kepler problem and four-dimensional rotational symmetry can be readily visualized. Let the four-dimensional Cartesian coordinates be denoted where represent the Cartesian coordinates of the normal position vector . The three-dimensional momentum vector is associated with a four-dimensional vector on a three-dimensional unit sphere
where is the unit vector along the new axis. The transformation mapping to can be uniquely inverted; for example, the component of the momentum equals
and similarly for and . In other words, the three-dimensional vector is a stereographic projection of the four-dimensional vector, scaled by (Figure 8).
Without loss of generality, we may eliminate the normal rotational symmetry by choosing the Cartesian coordinates such that the axis is aligned with the angular momentum vector and the momentum hodographs are aligned as they are in Figure 7, with the centers of the circles on the axis. Since the motion is planar, and and are perpendicular, and attention may be restricted to the three-dimensional vector The family of Apollonian circles of momentum hodographs (Figure 7) correspond to a family of great circles on the three-dimensional sphere, all of which intersect the axis at the two foci , corresponding to the momentum hodograph foci at . These great circles are related by a simple rotation about the -axis (Figure 8). This rotational symmetry transforms all the orbits of the same energy into one another; however, such a rotation is orthogonal to the usual three-dimensional rotations, since it transforms the fourth dimension . This higher symmetry is characteristic of the Kepler problem and corresponds to the conservation of the LRL vector.
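The projection can be written down explicitly and checked numerically. In the sketch below, the standard Fock map with p₀ = √(−2mE) sends the 3-momentum of a bound state to a point on the unit 3-sphere, and the inverse stereographic projection recovers the momentum; the state and constants are arbitrary illustrative choices.

```python
import numpy as np

m, k = 1.0, 1.0
rvec = np.array([1.0, 0.0, 0.0])
pvec = np.array([0.3, 1.1, 0.0])

E  = pvec @ pvec / (2 * m) - k / np.linalg.norm(rvec)   # negative: bound motion
p0 = np.sqrt(-2 * m * E)

def to_sphere(p):
    """Fock projection of a 3-momentum onto the unit 3-sphere."""
    p2 = p @ p
    return np.concatenate([2 * p0 * p / (p2 + p0**2), [(p2 - p0**2) / (p2 + p0**2)]])

eta = to_sphere(pvec)
print("|eta| =", np.linalg.norm(eta))                  # 1.0: the image lies on the unit sphere
print("p recovered:", p0 * eta[:3] / (1 - eta[3]))     # inverse stereographic projection returns p
```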
An elegant action-angle variables solution for the Kepler problem can be obtained by eliminating the redundant four-dimensional coordinates in favor of elliptic cylindrical coordinates
where , and are Jacobi's elliptic functions.
Generalizations to other potentials and relativity
The Laplace–Runge–Lenz vector can also be generalized to identify conserved quantities that apply to other situations.
In the presence of a uniform electric field , the generalized Laplace–Runge–Lenz vector is
where is the charge of the orbiting particle. Although is not conserved, it gives rise to a conserved quantity, namely .
Further generalizing the Laplace–Runge–Lenz vector to other potentials and special relativity, the most general form can be written as
where and , with the angle defined by
and is the Lorentz factor. As before, we may obtain a conserved binormal vector by taking the cross product with the conserved angular momentum vector
These two vectors may likewise be combined into a conserved dyadic tensor ,
In illustration, the LRL vector for a non-relativistic, isotropic harmonic oscillator can be calculated. Since the force is central,
the angular momentum vector is conserved and the motion lies in a plane.
The conserved dyadic tensor can be written in a simple form
although and are not necessarily perpendicular.
The corresponding Runge–Lenz vector is more complicated,
where
is the natural oscillation frequency, and
Proofs that the Laplace–Runge–Lenz vector is conserved in Kepler problems
The following are arguments showing that the LRL vector is conserved under central forces that obey an inverse-square law.
Direct proof of conservation
A central force acting on the particle is
for some function of the radius . Since the angular momentum is conserved under central forces, and
where the momentum and where the triple cross product has been simplified using Lagrange's formula
The identity
yields the equation
For the special case of an inverse-square central force , this equals
Therefore, is conserved for inverse-square central forces
A shorter proof is obtained by using the relation of angular momentum to angular velocity, , which holds for a particle traveling in a plane perpendicular to . Specifying to inverse-square central forces, the time derivative of is
where the last equality holds because a unit vector can only change by rotation, and is the orbital velocity of the rotating vector. Thus, is seen to be a difference of two vectors with equal time derivatives.
As described elsewhere in this article, this LRL vector is a special case of a general conserved vector that can be defined for all central forces. However, since most central forces do not produce closed orbits (see Bertrand's theorem), the analogous vector rarely has a simple definition and is generally a multivalued function of the angle between and .
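The conclusion of the proof can also be illustrated numerically: integrating the equations of motion shows that the vector stays fixed for the pure inverse-square force but drifts (rotates) once a small non-inverse-square term is added. The sketch below is illustrative only; the perturbation strength, constants, and initial state are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 1.0
y0 = np.array([1.0, 0.0, 0.0, 0.3, 1.1, 0.0])

def central_force(eps):
    """Force of magnitude -k/r^2 - eps/r^3 along r_hat; eps = 0 is the pure Kepler case."""
    def rhs(t, y):
        r, p = y[:3], y[3:]
        rn = np.linalg.norm(r)
        return np.concatenate([p / m, (-k / rn**2 - eps / rn**3) * r / rn])
    return rhs

def lrl(y):
    r, p = y[:3], y[3:]
    return np.cross(p, np.cross(r, p)) - m * k * r / np.linalg.norm(r)

for eps in (0.0, 0.05):
    sol = solve_ivp(central_force(eps), (0.0, 50.0), y0, rtol=1e-10, atol=1e-12)
    drift = np.linalg.norm(lrl(sol.y[:, -1]) - lrl(y0))
    print(f"eps = {eps}: change in A over the run = {drift:.2e}")
```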
Hamilton–Jacobi equation in parabolic coordinates
The constancy of the LRL vector can also be derived from the Hamilton–Jacobi equation in parabolic coordinates , which are defined by the equations
where represents the radius in the plane of the orbit
The inversion of these coordinates is
Separation of the Hamilton–Jacobi equation in these coordinates yields the two equivalent equations
where is a constant of motion. Subtraction and re-expression in terms of the Cartesian momenta and shows that is equivalent to the LRL vector
Noether's theorem
The connection between the rotational symmetry described above and the conservation of the LRL vector can be made quantitative by way of Noether's theorem. This theorem, which is used for finding constants of motion, states that any infinitesimal variation of the generalized coordinates of a physical system
that causes the Lagrangian to vary to first order by a total time derivative
corresponds to a conserved quantity
In particular, the conserved LRL vector component corresponds to the variation in the coordinates
where equals 1, 2 and 3, with and being the -th components of the position and momentum vectors and , respectively; as usual, represents the Kronecker delta. The resulting first-order change in the Lagrangian is
Substitution into the general formula for the conserved quantity yields the conserved component of the LRL vector,
Lie transformation
Noether's theorem derivation of the conservation of the LRL vector is elegant, but has one drawback: the coordinate variation involves not only the position , but also the momentum or, equivalently, the velocity . This drawback may be eliminated by instead deriving the conservation of using an approach pioneered by Sophus Lie. Specifically, one may define a Lie transformation in which the coordinates and the time are scaled by different powers of a parameter λ (Figure 9),
This transformation changes the total angular momentum and energy ,
but preserves their product EL2. Therefore, the eccentricity and the magnitude are preserved, as may be seen from the equation for
The direction of is preserved as well, since the semiaxes are not altered by a global scaling. This transformation also preserves Kepler's third law, namely, that the semiaxis and the period form a constant .
Alternative scalings, symbols and formulations
Unlike the momentum and angular momentum vectors and , there is no universally accepted definition of the Laplace–Runge–Lenz vector; several different scaling factors and symbols are used in the scientific literature. The most common definition is given above, but another common alternative is to divide by the quantity to obtain a dimensionless conserved eccentricity vector
where is the velocity vector. This scaled vector has the same direction as and its magnitude equals the eccentricity of the orbit, and thus vanishes for circular orbits.
Other scaled versions are also possible, e.g., by dividing by alone
or by
which has the same units as the angular momentum vector .
In rare cases, the sign of the LRL vector may be reversed, i.e., scaled by . Other common symbols for the LRL vector include , , , and . However, the choice of scaling and symbol for the LRL vector do not affect its conservation.
An alternative conserved vector is the binormal vector studied by William Rowan Hamilton,
which is conserved and points along the minor semiaxis of the ellipse. (It is not defined for vanishing eccentricity.)
The LRL vector is the cross product of and (Figure 4). On the momentum hodograph in the relevant section above, is readily seen to connect the origin of momenta with the center of the circular hodograph, and to possess magnitude . At perihelion, it points in the direction of the momentum.
The vector is denoted as "binormal" since it is perpendicular to both and . Similar to the LRL vector itself, the binormal vector can be defined with different scalings and symbols.
The two conserved vectors, and can be combined to form a conserved dyadic tensor ,
where and are arbitrary scaling constants and represents the tensor product (which is not related to the vector cross product, despite their similar symbol). Written in explicit components, this equation reads
Being perpendicular to one another, the vectors and can be viewed as the principal axes of the conserved tensor , i.e., its scaled eigenvectors. is perpendicular to ,
since and are both perpendicular to as well, .
More directly, this equation reads, in explicit components,
See also
Astrodynamics
Orbit
Eccentricity vector
Orbital elements
Bertrand's theorem
Binet equation
Two-body problem
References
Further reading
Classical mechanics
Orbits
Rotational symmetry
Vectors (mathematics and physics)
Articles containing proofs
Mathematical physics | 0.7962 | 0.989015 | 0.787455 |
Quantization (physics) | Quantization (in British English quantisation) is the systematic transition procedure from a classical understanding of physical phenomena to a newer understanding known as quantum mechanics. It is a procedure for constructing quantum mechanics from classical mechanics. A generalization involving infinite degrees of freedom is field quantization, as in the "quantization of the electromagnetic field", referring to photons as field "quanta" (for instance as light quanta). This procedure is basic to theories of atomic physics, chemistry, particle physics, nuclear physics, condensed matter physics, and quantum optics.
Historical overview
In 1901, when Max Planck was developing the distribution function of statistical mechanics to solve the ultraviolet catastrophe problem, he realized that the properties of blackbody radiation can be explained by the assumption that the amount of energy must come in countable fundamental units, i.e. the amount of energy is not continuous but discrete. That is, a minimum unit of energy exists and the following relationship holds
for the frequency . Here, is called the Planck constant, which represents the magnitude of the quantum mechanical effect. This marked a fundamental change in the mathematical modelling of physical quantities.
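As a simple worked example of this relation, the minimum energy exchanged at the frequency of green light can be evaluated directly; the numbers below are approximate constants, used only for illustration.

```python
h  = 6.626e-34          # Planck constant, J*s (approximate)
nu = 5.5e14             # frequency of green light, Hz (illustrative)

E = h * nu              # smallest amount of energy exchangeable at this frequency
print(E, "J  =", E / 1.602e-19, "eV")   # about 3.6e-19 J, roughly 2.3 eV
```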
In 1905, Albert Einstein published a paper, "On a heuristic viewpoint concerning the emission and transformation of light", which explained the photoelectric effect on quantized electromagnetic waves. The energy quantum referred to in this paper was later called "photon". In July 1913, Niels Bohr used quantization to describe the spectrum of a hydrogen atom in his paper "On the constitution of atoms and molecules".
The preceding theories were successful, but they were highly phenomenological. The French mathematician Henri Poincaré was the first to give a systematic and rigorous definition of what quantization is, in his 1912 paper "Sur la théorie des quanta".
The term "quantum physics" was first used in Johnston's Planck's Universe in Light of Modern Physics. (1931).
Canonical quantization
Canonical quantization develops quantum mechanics from classical mechanics. One introduces a commutation relation among canonical coordinates. Technically, one converts coordinates to operators, through combinations of creation and annihilation operators. The operators act on quantum states of the theory. The lowest energy state is called the vacuum state.
Quantization schemes
Even within the setting of canonical quantization, there is difficulty associated with quantizing arbitrary observables on the classical phase space. This is the ordering ambiguity: classically, the position and momentum variables x and p commute, but their quantum mechanical operator counterparts do not. Various quantization schemes have been proposed to resolve this ambiguity, of which the most popular is the Weyl quantization scheme. Nevertheless, the Groenewold–van Hove theorem dictates that no perfect quantization scheme exists. Specifically, if the quantizations of x and p are taken to be the usual position and momentum operators, then no quantization scheme can perfectly reproduce the Poisson bracket relations among the classical observables. See Groenewold's theorem for one version of this result.
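The ordering ambiguity can be made concrete with finite matrices. The sketch below builds position and momentum operators in a truncated harmonic-oscillator basis (units with ħ = 1 and an arbitrary truncation size, both illustrative assumptions) and compares two quantizations of the same classical observable xp: the naive product and the Weyl-symmetrized product. Away from the truncation edge they differ by iħ/2, showing that distinct orderings of the same classical expression yield distinct operators.

```python
import numpy as np

hbar, N = 1.0, 12                                  # hbar = 1; truncate the oscillator basis at N levels
a  = np.diag(np.sqrt(np.arange(1, N)), k=1)        # annihilation operator
ad = a.conj().T                                    # creation operator

x = (a + ad) / np.sqrt(2)                          # dimensionless position (m = omega = 1)
p = (a - ad) / (1j * np.sqrt(2))                   # dimensionless momentum

naive = x @ p                                      # "x then p" ordering of the classical xp
weyl  = (x @ p + p @ x) / 2                        # Weyl (symmetric) ordering of the same observable

diff  = naive - weyl                               # equals (1/2)[x, p]
block = slice(0, N - 1)                            # ignore the last row/column (truncation artifact)
print(np.allclose(diff[block, block], 1j * hbar / 2 * np.eye(N - 1)))   # True: they differ by i*hbar/2
```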
Covariant canonical quantization
There is a way to perform a canonical quantization without having to resort to the non covariant approach of foliating spacetime and choosing a Hamiltonian. This method is based upon a classical action, but is different from the functional integral approach.
The method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge "flows"). It starts with the classical algebra of all (smooth) functionals over the configuration space. This algebra is quotiented by the ideal generated by the Euler–Lagrange equations. Then, this quotient algebra is converted into a Poisson algebra by introducing a Poisson bracket derivable from the action, called the Peierls bracket. This Poisson algebra is then ℏ-deformed in the same way as in canonical quantization.
In quantum field theory, there is also a way to quantize actions with gauge "flows". It involves the Batalin–Vilkovisky formalism, an extension of the BRST formalism.
Deformation quantization
One of the earliest attempts at a natural quantization was Weyl quantization, proposed by Hermann Weyl in 1927. Here, an attempt is made to associate a quantum-mechanical observable (a self-adjoint operator on a Hilbert space) with a real-valued function on classical phase space. The position and momentum in this phase space are mapped to the generators of the Heisenberg group, and the Hilbert space appears as a group representation of the Heisenberg group. In 1946, H. J. Groenewold considered the product of a pair of such observables and asked what the corresponding function would be on the classical phase space. This led him to discover the phase-space star-product of a pair of functions.
More generally, this technique leads to deformation quantization, where the ★-product is taken to be a deformation of the algebra of functions on a symplectic manifold or Poisson manifold. However, as a natural quantization scheme (a functor), Weyl's map is not satisfactory.
For example, the Weyl map of the classical angular-momentum-squared is not just the quantum angular momentum squared operator, but it further contains a constant term . (This extra term offset is pedagogically significant, since it accounts for the nonvanishing angular momentum of the ground-state Bohr orbit in the hydrogen atom, even though the standard QM ground state of the atom has vanishing .)
As a mere representation change, however, Weyl's map is useful and important, as it underlies the alternate equivalent phase space formulation of conventional quantum mechanics.
Geometric quantization
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in.
A more geometric approach to quantization, in which the classical phase space can be a general symplectic manifold, was developed in the 1970s by Bertram Kostant and Jean-Marie Souriau. The method proceeds in two stages. First, one constructs a "prequantum Hilbert space" consisting of square-integrable functions (or, more properly, sections of a line bundle) over the phase space. Here one can construct operators satisfying commutation relations corresponding exactly to the classical Poisson-bracket relations. On the other hand, this prequantum Hilbert space is too big to be physically meaningful. One then restricts to functions (or sections) depending on half the variables on the phase space, yielding the quantum Hilbert space.
Path integral quantization
A classical mechanical theory is given by an action with the permissible configurations being the ones which are extremal with respect to functional variations of the action. A quantum-mechanical description of the classical system can also be constructed from the action of the system by means of the path integral formulation.
Other types
Loop quantum gravity (loop quantization)
Uncertainty principle (quantum statistical mechanics approach)
Schwinger's quantum action principle
See also
First quantization
Feynman path integral
Light front quantization
Photon polarization
Quantum Hall effect
Quantum number
Stochastic quantization
References
Ali, S. T., & Engliš, M. (2005). "Quantization methods: a guide for physicists and analysts". Reviews in Mathematical Physics 17 (04), 391-490.
Abraham, R. & Marsden (1985): Foundations of Mechanics, ed. Addison–Wesley,
M. Peskin, D. Schroeder, An Introduction to Quantum Field Theory (Westview Press, 1995)
Weinberg, Steven, The Quantum Theory of Fields (3 volumes)
G. Giachetta, L. Mangiarotti, G. Sardanashvily, Geometric and Algebraic Topological Methods in Quantum Mechanics (World Scientific, 2005)
Notes
Physical phenomena
Quantum field theory
Mathematical quantization
Mathematical physics | 0.794779 | 0.990647 | 0.787345 |
Applied mechanics | Applied mechanics is the branch of science concerned with the motion of any substance that can be experienced or perceived by humans without the help of instruments. In short, when mechanics concepts surpass being theoretical and are applied and executed, general mechanics becomes applied mechanics. It is this stark difference that makes applied mechanics an essential understanding for practical everyday life. It has numerous applications in a wide variety of fields and disciplines, including but not limited to structural engineering, astronomy, oceanography, meteorology, hydraulics, mechanical engineering, aerospace engineering, nanotechnology, structural design, earthquake engineering, fluid dynamics, planetary sciences, and other life sciences. Connecting research between numerous disciplines, applied mechanics plays an important role in both science and engineering.
Pure mechanics describes the response of bodies (solids and fluids) or systems of bodies, in either an initial state of rest or of motion, to the action of external forces. Applied mechanics bridges the gap between physical theory and its application to technology.
Applied mechanics can be split into two main categories: classical mechanics, the study of the mechanics of macroscopic solids, and fluid mechanics, the study of the mechanics of macroscopic fluids. Each branch of applied mechanics contains subcategories of its own. Classical mechanics is divided into statics and dynamics, which are further subdivided: statics into the study of rigid bodies and rigid structures, and dynamics into kinematics and kinetics. Like classical mechanics, fluid mechanics is also divided into two sections: statics and dynamics.
Within the practical sciences, applied mechanics is useful in formulating new ideas and theories, discovering and interpreting phenomena, and developing experimental and computational tools. In the application of the natural sciences, mechanics was said to be complemented by thermodynamics, the study of heat and more generally energy, and electromechanics, the study of electricity and magnetism.
Overview
Engineering problems are generally tackled with applied mechanics through the application of theories of classical mechanics and fluid mechanics. Because applied mechanics can be applied in engineering disciplines like civil engineering, mechanical engineering, aerospace engineering, materials engineering, and biomedical engineering, it is sometimes referred to as engineering mechanics.
Science and engineering are interconnected with respect to applied mechanics, as research in science is linked to research processes in civil, mechanical, aerospace, materials and biomedical engineering disciplines. In civil engineering, applied mechanics’ concepts can be applied to structural design and a variety of engineering sub-topics like structural, coastal, geotechnical, construction, and earthquake engineering. In mechanical engineering, it can be applied in mechatronics and robotics, design and drafting, nanotechnology, machine elements, structural analysis, friction stir welding, and acoustical engineering. In aerospace engineering, applied mechanics is used in aerodynamics, aerospace structural mechanics and propulsion, aircraft design and flight mechanics. In materials engineering, applied mechanics’ concepts are used in thermoelasticity, elasticity theory, fracture and failure mechanisms, structural design optimisation, fracture and fatigue, active materials and composites, and computational mechanics. Research in applied mechanics can be directly linked to biomedical engineering areas of interest like orthopaedics; biomechanics; human body motion analysis; soft tissue modelling of muscles, tendons, ligaments, and cartilage; biofluid mechanics; and dynamic systems, performance enhancement, and optimal control.
Brief history
The first science with a theoretical foundation based in mathematics was mechanics; the underlying principles of mechanics were first delineated by Isaac Newton in his 1687 book Philosophiæ Naturalis Principia Mathematica. One of the earliest works to define applied mechanics as its own discipline was the three volume Handbuch der Mechanik written by German physicist and engineer Franz Josef Gerstner. The first seminal work on applied mechanics to be published in English was A Manual of Applied Mechanics in 1858 by English mechanical engineer William Rankine. August Föppl, a German mechanical engineer and professor, published Vorlesungen über technische Mechanik in 1898 in which he introduced calculus to the study of applied mechanics.
Applied mechanics was established as a discipline separate from classical mechanics in the early 1920s with the publication of Journal of Applied Mathematics and Mechanics, the creation of the Society of Applied Mathematics and Mechanics, and the first meeting of the International Congress of Applied Mechanics. In 1921 Austrian scientist Richard von Mises started the Journal of Applied Mathematics and Mechanics (Zeitschrift für Angewandte Mathematik und Mechanik) and in 1922 with German scientist Ludwig Prandtl founded the Society of Applied Mathematics and Mechanics (Gesellschaft für Angewandte Mathematik und Mechanik). During a 1922 conference on hydrodynamics and aerodynamics in Innsbruck, Austria, Theodore von Kármán, a Hungarian engineer, and Tullio Levi-Civita, an Italian mathematician, met and decided to organize a conference on applied mechanics. In 1924 the first meeting of the International Congress of Applied Mechanics was held in Delft, the Netherlands, attended by more than 200 scientists from around the world. Since this first meeting the congress has been held every four years, except during World War II; the name of the meeting was changed to International Congress of Theoretical and Applied Mechanics in 1960.
Due to the unpredictable political landscape in Europe after the First World War and the upheaval of World War II, many European scientists and engineers emigrated to the United States. Ukrainian engineer Stephan Timoshenko fled the Bolshevik Red Army in 1918 and eventually emigrated to the U.S. in 1922; over the next twenty-two years he taught applied mechanics at the University of Michigan and Stanford University. Timoshenko authored thirteen textbooks in applied mechanics, many considered the gold standard in their fields; he also founded the Applied Mechanics Division of the American Society of Mechanical Engineers in 1927 and is considered “America’s Father of Engineering Mechanics.” In 1930 Theodore von Kármán left Germany and became the first director of the Aeronautical Laboratory at the California Institute of Technology; von Kármán would later co-found the Jet Propulsion Laboratory in 1944. With the leadership of Timoshenko and von Kármán, the influx of talent from Europe, and the rapid growth of the aeronautical and defense industries, applied mechanics became a mature discipline in the U.S. by 1950.
Branches
Dynamics
Dynamics, the study of the motion of bodies, can be further divided into two branches, kinematics and kinetics. In classical mechanics, kinematics is the analysis of moving bodies in terms of time, velocity, displacement, and acceleration, while kinetics is the study of moving bodies in terms of the effects of forces and masses. In the context of fluid mechanics, fluid dynamics describes the flow and motion of various fluids.
Statics
Statics is the study and description of bodies at rest. Static analysis in classical mechanics can be broken down into two categories: non-deformable (rigid) bodies, for which the forces acting on the structure are analyzed, and deformable bodies, for which the strength of the structure and its material is examined. In the context of fluid mechanics, fluid statics considers fluids at rest and the pressures within them.
Relationship to classical mechanics
Applied mechanics is the result of the practical application of various engineering and mechanical disciplines.
Examples
Newtonian foundation
Being one of the first sciences for which a systematic theoretical framework was developed, mechanics was spearheaded by Sir Isaac Newton's Principia (published in 1687). It is the "divide and rule" strategy developed by Newton that splits the study of motion into dynamics and statics. The type of force, the type of matter, and the external forces acting on that matter dictate how this strategy is applied within dynamic and static studies.
Archimedes' principle
Archimedes' principle contains many defining propositions pertaining to fluid mechanics. As stated by proposition 7 of Archimedes' principle, a solid that is heavier than the fluid it is placed in will descend to the bottom of the fluid; if the solid is weighed within the fluid, it will appear lighter than its true weight by the weight of the fluid it displaces. Proposition 5 addresses the opposite case: a solid that is lighter than the fluid will sink only until the weight of the displaced fluid equals the weight of the solid, and it must be forcibly immersed to be fully covered by the liquid.
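A small worked example of proposition 7 follows; the densities and volume are illustrative values only. A steel block denser than water sinks, and its apparent weight in the water is reduced by exactly the weight of the displaced water.

```python
rho_fluid = 1000.0      # water, kg/m^3
rho_solid = 7850.0      # steel, kg/m^3 (heavier than the fluid, so it sinks)
V = 0.001               # volume of the solid, m^3 (one litre)
g = 9.81                # m/s^2

weight_in_air   = rho_solid * V * g
buoyant_force   = rho_fluid * V * g                # weight of the displaced fluid
weight_in_fluid = weight_in_air - buoyant_force    # apparent weight when weighed in the fluid

print(weight_in_air, buoyant_force, weight_in_fluid)   # about 77.0 N, 9.8 N, 67.2 N
```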
Major topics
This section is based on the "AMR Subject Classification Scheme" from the journal Applied Mechanics Reviews.
Foundations and basic methods
Continuum mechanics
Finite element method
Finite difference method
Other computational methods
Experimental system analysis
Dynamics and vibration
Dynamics (mechanics)
Kinematics
Vibrations of solids (basic)
Vibrations (structural elements)
Vibrations (structures)
Wave motion in solids
Impact on solids
Waves in incompressible fluids
Waves in compressible fluids
Solid fluid interactions
Astronautics (celestial and orbital mechanics)
Explosions and ballistics
Acoustics
Automatic control
System theory and design
Optimal control system
System and control applications
Robotics
Manufacturing
Mechanics of solids
Elasticity
Viscoelasticity
Plasticity and viscoplasticity
Composite material mechanics
Cables, rope, beams, etc
Plates, shells, membranes, etc
Structural stability (buckling, postbuckling)
Electromagneto solid mechanics
Soil mechanics (basic)
Soil mechanics (applied)
Rock mechanics
Material processing
Fracture and damage processes
Fracture and damage mechanics
Experimental stress analysis
Material Testing
Structures (basic)
Structures (ground)
Structures (ocean and coastal)
Structures (mobile)
Structures (containment)
Friction and wear
Machine elements
Machine design
Fastening and joining
Mechanics of fluids
Rheology
Hydraulics
Incompressible flow
Compressible flow
Rarefied flow
Multiphase flow
Wall Layers (incl boundary layers)
Internal flow (pipe, channel, and couette)
Internal flow (inlets, nozzles, diffusers, and cascades)
Free shear layers (mixing layers, jets, wakes, cavities, and plumes)
Flow stability
Turbulence
Electromagneto fluid and plasma dynamics
Hydromechanics
Aerodynamics
Machinery fluid dynamics
Lubrication
Flow measurements and visualization
Thermal sciences
Thermodynamics
Heat transfer (one phase convection)
Heat transfer (two phase convection)
Heat transfer (conduction)
Heat transfer (radiation and combined modes)
Heat transfer (devices and systems)
Thermodynamics of solids
Mass transfer (with and without heat transfer)
Combustion
Prime movers and propulsion systems
Earth sciences
Micromeritics
Porous media
Geomechanics
Earthquake mechanics
Hydrology, oceanology, and meteorology
Energy systems and environment
Fossil fuel systems
Nuclear systems
Geothermal systems
Solar energy systems
Wind energy systems
Ocean energy system
Energy distribution and storage
Environmental fluid mechanics
Hazardous waste containment and disposal
Biosciences
Biomechanics
Human factor engineering
Rehabilitation engineering
Sports mechanics
Applications
Electrical Engineering
Civil engineering
Mechanical Engineering
Nuclear engineering
Architectural engineering
Chemical engineering
Petroleum engineering
Publications
Journal of Applied Mathematics and Mechanics
Newsletters of the Applied Mechanics Division
Journal of Applied Mechanics
Applied Mechanics Reviews
Applied Mechanics
Quarterly Journal of Mechanics and Applied Mathematics
Journal of Applied Mathematics and Mechanics (PMM)
Gesellschaft für Angewandte Mathematik und Mechanik
Acta Mechanica Sinica
See also
Biomechanics
Geomechanics
Mechanicians
Mechanics
Physics
Principle of moments
Structural analysis
Kinetics (physics)
Kinematics
Dynamics (physics)
Statics
References
Further reading
J.P. Den Hartog, Strength of Materials, Dover, New York, 1949.
F.P. Beer, E.R. Johnston, J.T. DeWolf, Mechanics of Materials, McGraw-Hill, New York, 1981.
S.P. Timoshenko, History of Strength of Materials, Dover, New York, 1953.
J.E. Gordon, The New Science of Strong Materials, Princeton, 1984.
H. Petroski, To Engineer Is Human, St. Martins, 1985.
T.A. McMahon and J.T. Bonner, On Size and Life, Scientific American Library, W.H. Freeman, 1983.
M. F. Ashby, Materials Selection in Design, Pergamon, 1992.
A.H. Cottrell, Mechanical Properties of Matter, Wiley, New York, 1964.
S.A. Wainwright, W.D. Biggs, J.D. Organisms, Edward Arnold, 1976.
S. Vogel, Comparative Biomechanics, Princeton, 2003.
J. Howard, Mechanics of Motor Proteins and the Cytoskeleton, Sinauer Associates, 2001.
J.L. Meriam, L.G. Kraige. Engineering Mechanics Volume 2: Dynamics, John Wiley & Sons., New York, 1986.
J.L. Meriam, L.G. Kraige. Engineering Mechanics Volume 1: Statics, John Wiley & Sons., New York, 1986.
External links
Video and web lectures
Engineering Mechanics Video Lectures and Web Notes
Applied Mechanics Video Lectures By Prof.SK. Gupta, Department of Applied Mechanics, IIT Delhi
Mechanics
Structural engineering | 0.798853 | 0.985528 | 0.787292 |
Hamiltonian mechanics | In physics, Hamiltonian mechanics is a reformulation of Lagrangian mechanics that emerged in 1833. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces (generalized) velocities used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena.
Hamiltonian mechanics has a close relationship with geometry (notably, symplectic geometry and Poisson structures) and serves as a link between classical and quantum mechanics.
Overview
Phase space coordinates (p, q) and Hamiltonian H
Let be a mechanical system with configuration space and smooth Lagrangian Select a standard coordinate system on The quantities are called momenta. (Also generalized momenta, conjugate momenta, and canonical momenta). For a time instant the Legendre transformation of is defined as the map which is assumed to have a smooth inverse For a system with degrees of freedom, the Lagrangian mechanics defines the energy function
The Legendre transform of turns into a function known as the Hamiltonian. The Hamiltonian satisfies
which implies that
where the velocities are found from the (-dimensional) equation which, by assumption, is uniquely solvable for . The (-dimensional) pair is called phase space coordinates. (Also canonical coordinates).
From Euler–Lagrange equation to Hamilton's equations
In phase space coordinates , the (-dimensional) Euler–Lagrange equation
becomes Hamilton's equations in dimensions
From stationary action principle to Hamilton's equations
Let be the set of smooth paths for which and The action functional is defined via
where , and (see above). A path is a stationary point of (and hence is an equation of motion) if and only if the path in phase space coordinates obeys Hamilton's equations.
Basic physical interpretation
A simple interpretation of Hamiltonian mechanics comes from its application on a one-dimensional system consisting of one nonrelativistic particle of mass . The value of the Hamiltonian is the total energy of the system, in this case the sum of kinetic and potential energy, traditionally denoted and , respectively. Here is the momentum and is the space coordinate. Then
is a function of alone, while is a function of alone (i.e., and are scleronomic).
In this example, the time derivative of is the velocity, and so the first Hamilton equation means that the particle's velocity equals the derivative of its kinetic energy with respect to its momentum. The time derivative of the momentum equals the Newtonian force, and so the second Hamilton equation means that the force equals the negative gradient of potential energy.
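These two equations are easy to integrate directly. The sketch below, a minimal illustration with an arbitrary harmonic potential V(q) = ½kq² and unit constants, advances Hamilton's equations numerically and confirms that the value of the Hamiltonian (the total energy) stays constant along the trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 1.0                        # illustrative mass and spring constant
V  = lambda q: 0.5 * k * q**2          # potential energy
dV = lambda q: k * q                   # dV/dq

def hamilton(t, y):
    q, p = y
    return [p / m, -dV(q)]             # dq/dt = dH/dp,  dp/dt = -dH/dq

sol = solve_ivp(hamilton, (0.0, 20.0), [1.0, 0.0], rtol=1e-10, atol=1e-12, dense_output=True)
q, p = sol.sol(np.linspace(0.0, 20.0, 5))
print(p**2 / (2 * m) + V(q))           # the Hamiltonian stays at 0.5 throughout
```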
Example
A spherical pendulum consists of a mass m moving without friction on the surface of a sphere. The only forces acting on the mass are the reaction from the sphere and gravity. Spherical coordinates are used to describe the position of the mass in terms of , where is fixed, .
The Lagrangian for this system is
Thus the Hamiltonian is
where
and
In terms of coordinates and momenta, the Hamiltonian reads
Hamilton's equations give the time evolution of coordinates and conjugate momenta in four first-order differential equations,
Momentum , which corresponds to the vertical component of angular momentum , is a constant of motion. That is a consequence of the rotational symmetry of the system around the vertical axis. Being absent from the Hamiltonian, azimuth is a cyclic coordinate, which implies conservation of its conjugate momentum.
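The four equations can be integrated numerically to exhibit both conserved quantities. The sketch below assumes the common convention H = p_θ²/(2ml²) + p_φ²/(2ml² sin²θ) − mgl cos θ, with illustrative values for the mass, length, and initial state; it confirms that p_φ and H are unchanged along the motion.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g, l = 1.0, 9.81, 1.0                        # illustrative mass, gravity, sphere radius

def H(y):
    th, phi, pth, pphi = y
    return pth**2/(2*m*l**2) + pphi**2/(2*m*l**2*np.sin(th)**2) - m*g*l*np.cos(th)

def rhs(t, y):
    th, phi, pth, pphi = y
    return [pth / (m*l**2),                                                   # d(theta)/dt
            pphi / (m*l**2*np.sin(th)**2),                                    # d(phi)/dt
            pphi**2*np.cos(th)/(m*l**2*np.sin(th)**3) - m*g*l*np.sin(th),     # d(p_theta)/dt
            0.0]                                                              # d(p_phi)/dt: phi is cyclic

y0 = [1.0, 0.0, 0.0, 0.8]                       # start away from the poles so sin(theta) != 0
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12)

print("p_phi:", sol.y[3, 0], "->", sol.y[3, -1])          # conserved
print("H:    ", H(sol.y[:, 0]), "->", H(sol.y[:, -1]))    # conserved
```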
Deriving Hamilton's equations
Hamilton's equations can be derived by a calculation with the Lagrangian , generalized positions , and generalized velocities , where . Here we work off-shell, meaning , , are independent coordinates in phase space, not constrained to follow any equations of motion (in particular, is not a derivative of ). The total differential of the Lagrangian is:
The generalized momentum coordinates were defined as , so we may rewrite the equation as:
After rearranging, one obtains:
The term in parentheses on the left-hand side is just the Hamiltonian defined previously, therefore:
One may also calculate the total differential of the Hamiltonian with respect to coordinates , , instead of , , , yielding:
One may now equate these two expressions for , one in terms of , the other in terms of :
Since these calculations are off-shell, one can equate the respective coefficients of , , on the two sides:
On-shell, one substitutes parametric functions which define a trajectory in phase space with velocities , obeying Lagrange's equations:
Rearranging and writing in terms of the on-shell gives:
Thus Lagrange's equations are equivalent to Hamilton's equations:
In the case of time-independent and , i.e. , Hamilton's equations consist of first-order differential equations, while Lagrange's equations consist of second-order equations. Hamilton's equations usually do not reduce the difficulty of finding explicit solutions, but important theoretical results can be derived from them, because coordinates and momenta are independent variables with nearly symmetric roles.
Hamilton's equations have another advantage over Lagrange's equations: if a system has a symmetry, so that some coordinate does not occur in the Hamiltonian (i.e. a cyclic coordinate), the corresponding momentum coordinate is conserved along each trajectory, and that coordinate can be reduced to a constant in the other equations of the set. This effectively reduces the problem from n coordinates to n − 1 coordinates: this is the basis of symplectic reduction in geometry. In the Lagrangian framework, the conservation of momentum also follows immediately; however, all the generalized velocities still occur in the Lagrangian, and a system of n equations in n coordinates still has to be solved.
The Lagrangian and Hamiltonian approaches provide the groundwork for deeper results in classical mechanics, and suggest analogous formulations in quantum mechanics: the path integral formulation and the Schrödinger equation.
Properties of the Hamiltonian
The value of the Hamiltonian is the total energy of the system if and only if the energy function has the same property. (See definition of ).
when , form a solution of Hamilton's equations. Indeed, and everything but the final term cancels out.
does not change under point transformations, i.e. smooth changes of space coordinates. (Follows from the invariance of the energy function under point transformations. The invariance of can be established directly).
(See ).
. (Compare Hamilton's and Euler-Lagrange equations or see ).
if and only if . A coordinate for which the last equation holds is called cyclic (or ignorable). Every cyclic coordinate reduces the number of degrees of freedom by one, causes the corresponding momentum to be conserved, and makes Hamilton's equations easier to solve.
Hamiltonian as the total system energy
In its application to a given system, the Hamiltonian is often taken to be
where is the kinetic energy and is the potential energy. Using this relation can be simpler than first calculating the Lagrangian, and then deriving the Hamiltonian from the Lagrangian. However, the relation is not true for all systems.
The relation holds true for nonrelativistic systems when all of the following conditions are satisfied
where is time, is the number of degrees of freedom of the system, and each is an arbitrary scalar function of .
In words, this means that the relation holds true if does not contain time as an explicit variable (it is scleronomic), does not contain generalised velocity as an explicit variable, and each term of is quadratic in generalised velocity.
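Restating the three conditions just listed in symbols (a reconstruction of the missing display; the c_ij below are the arbitrary scalar functions of the generalized coordinates mentioned above):

\[
\frac{\partial T(\mathbf{q},\dot{\mathbf{q}},t)}{\partial t} = 0,
\qquad
\frac{\partial V(\mathbf{q},\dot{\mathbf{q}},t)}{\partial \dot{q}_i} = 0,
\qquad
T = \sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}(\mathbf{q})\,\dot{q}_i \dot{q}_j .
\]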
Proof
Preliminary to this proof, it is important to address an ambiguity in the related mathematical notation. While a change of variables can be used to equate
,
it is important to note that
.
In this case, the right hand side always evaluates to 0. To perform a change of variables inside of a partial derivative, the multivariable chain rule should be used. Hence, to avoid ambiguity, the function arguments of any term inside of a partial derivative should be stated.
Additionally, this proof uses the notation to imply that .
Application to systems of point masses
For a system of point masses, the requirement for to be quadratic in generalised velocity is always satisfied for the case where , which is a requirement for anyway.
Conservation of energy
If the conditions for are satisfied, then conservation of the Hamiltonian implies conservation of energy. This requires the additional condition that does not contain time as an explicit variable.
With respect to the extended Euler-Lagrange formulation (See ), the Rayleigh dissipation function represents a genuine dissipation of energy. Therefore, energy is not conserved when the dissipation function is nonzero. This is similar to the case of a velocity-dependent potential.
In summary, the requirements for to be satisfied for a nonrelativistic system are
the kinetic energy does not depend explicitly on time
the potential energy does not depend on the generalised velocities
the kinetic energy is a homogeneous quadratic function in the generalised velocities
Hamiltonian of a charged particle in an electromagnetic field
A sufficient illustration of Hamiltonian mechanics is given by the Hamiltonian of a charged particle in an electromagnetic field. In Cartesian coordinates the Lagrangian of a non-relativistic classical particle in an electromagnetic field is (in SI Units):
where is the electric charge of the particle, is the electric scalar potential, and the are the components of the magnetic vector potential that may all explicitly depend on and .
This Lagrangian, combined with Euler–Lagrange equation, produces the Lorentz force law
and is called minimal coupling.
The canonical momenta are given by:
The Hamiltonian, as the Legendre transformation of the Lagrangian, is therefore:
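The displayed expressions in this subsection are missing; the standard SI-unit forms (charge q, scalar potential φ, vector potential A; a reconstruction rather than a quotation) are

\[
\mathcal{L} = \tfrac{1}{2} m\,\dot{\mathbf{x}}^{2} + q\,\dot{\mathbf{x}} \cdot \mathbf{A} - q\varphi,
\qquad
\mathbf{p} = m\dot{\mathbf{x}} + q\mathbf{A},
\qquad
\mathcal{H} = \mathbf{p}\cdot\dot{\mathbf{x}} - \mathcal{L} = \frac{\left(\mathbf{p} - q\mathbf{A}\right)^{2}}{2m} + q\varphi .
\]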
This equation is used frequently in quantum mechanics.
Under gauge transformation:
where is any scalar function of space and time. The aforementioned Lagrangian, the canonical momenta, and the Hamiltonian transform like:
which still produces the same Hamilton's equation:
In quantum mechanics, the wave function will also undergo a local U(1) group transformation during the gauge transformation, which implies that all physical results must be invariant under local U(1) transformations.
Relativistic charged particle in an electromagnetic field
The relativistic Lagrangian for a particle (rest mass and charge ) is given by:
Thus the particle's canonical momentum is
that is, the sum of the kinetic momentum and the potential momentum.
Solving for the velocity, we get
So the Hamiltonian is
This results in the force equation (equivalent to the Euler–Lagrange equation)
from which one can derive
The above derivation makes use of the vector calculus identity:
An equivalent expression for the Hamiltonian as function of the relativistic (kinetic) momentum, , is
This has the advantage that kinetic momentum can be measured experimentally whereas canonical momentum cannot. Notice that the Hamiltonian (total energy) can be viewed as the sum of the relativistic energy (kinetic+rest), , plus the potential energy, .
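For reference, the standard forms of the quantities discussed in this subsection (a reconstruction; canonical momentum p, kinetic momentum P = γmẋ, with γ the Lorentz factor) are

\[
\mathbf{p} = \gamma m \dot{\mathbf{x}} + q\mathbf{A},
\qquad
\mathcal{H} = c\sqrt{\left(\mathbf{p} - q\mathbf{A}\right)^{2} + m^{2}c^{2}} + q\varphi
= \gamma m c^{2} + q\varphi
= \sqrt{\mathbf{P}^{2}c^{2} + m^{2}c^{4}} + q\varphi .
\]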
From symplectic geometry to Hamilton's equations
Geometry of Hamiltonian systems
The Hamiltonian can induce a symplectic structure on a smooth even-dimensional manifold in several equivalent ways, the best known being the following:
As a closed nondegenerate symplectic 2-form ω. According to Darboux's theorem, in a small neighbourhood around any point on there exist suitable local coordinates (canonical or symplectic coordinates) in which the symplectic form becomes:
The form induces a natural isomorphism of the tangent space with the cotangent space: . This is done by mapping a vector to the 1-form , where for all . Due to the bilinearity and non-degeneracy of , and the fact that , the mapping is indeed a linear isomorphism. This isomorphism is natural in that it does not change with change of coordinates on . Repeating over all , we end up with an isomorphism between the infinite-dimensional space of smooth vector fields and that of smooth 1-forms. For every and ,
(In algebraic terms, one would say that the -modules and are isomorphic). If , then, for every fixed , , and . is known as a Hamiltonian vector field. The respective differential equation on
is called Hamilton's equation. Here and is the (time-dependent) value of the vector field at .
A Hamiltonian system may be understood as a fiber bundle over time , with the fiber being the position space at time . The Lagrangian is thus a function on the jet bundle over ; taking the fiberwise Legendre transform of the Lagrangian produces a function on the dual bundle over time whose fiber at is the cotangent space , which comes equipped with a natural symplectic form, and this latter function is the Hamiltonian. The correspondence between Lagrangian and Hamiltonian mechanics is achieved with the tautological one-form.
Any smooth real-valued function on a symplectic manifold can be used to define a Hamiltonian system. The function is known as "the Hamiltonian" or "the energy function." The symplectic manifold is then called the phase space. The Hamiltonian induces a special vector field on the symplectic manifold, known as the Hamiltonian vector field.
The Hamiltonian vector field induces a Hamiltonian flow on the manifold. This is a one-parameter family of transformations of the manifold (the parameter of the curves is commonly called "the time"); in other words, an isotopy of symplectomorphisms, starting with the identity. By Liouville's theorem, each symplectomorphism preserves the volume form on the phase space. The collection of symplectomorphisms induced by the Hamiltonian flow is commonly called "the Hamiltonian mechanics" of the Hamiltonian system.
The symplectic structure induces a Poisson bracket. The Poisson bracket gives the space of functions on the manifold the structure of a Lie algebra.
If and are smooth functions on then the smooth function is properly defined; it is called a Poisson bracket of functions and and is denoted . The Poisson bracket has the following properties:
bilinearity
antisymmetry
Leibniz rule:
Jacobi identity:
non-degeneracy: if the point on is not critical for then a smooth function exists such that .
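In canonical coordinates the bracket defined above takes the familiar form (a standard reconstruction, stated here because the displays are missing):

\[
\{f, g\} = \sum_{i=1}^{n}\left( \frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i} \right),
\qquad
\frac{\mathrm{d}f}{\mathrm{d}t} = \{f, \mathcal{H}\} + \frac{\partial f}{\partial t} .
\]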
Given a function
if there is a probability distribution , then (since the phase space velocity has zero divergence and probability is conserved) its convective derivative can be shown to be zero and so
This is called Liouville's theorem. Every smooth function over the symplectic manifold generates a one-parameter family of symplectomorphisms and if , then is conserved and the symplectomorphisms are symmetry transformations.
A Hamiltonian may have multiple conserved quantities . If the symplectic manifold has dimension and there are functionally independent conserved quantities which are in involution (i.e., ), then the Hamiltonian is Liouville integrable. The Liouville–Arnold theorem says that, locally, any Liouville integrable Hamiltonian can be transformed via a symplectomorphism into a new Hamiltonian with the conserved quantities as coordinates; the new coordinates are called action–angle coordinates. The transformed Hamiltonian depends only on the , and hence the equations of motion have the simple form
for some function . There is an entire field focusing on small deviations from integrable systems governed by the KAM theorem.
The integrability of Hamiltonian vector fields is an open question. In general, Hamiltonian systems are chaotic; concepts of measure, completeness, integrability and stability are poorly defined.
Riemannian manifolds
An important special case consists of those Hamiltonians that are quadratic forms, that is, Hamiltonians that can be written as
where is a smoothly varying inner product on the fibers , the cotangent space to the point in the configuration space, sometimes called a cometric. This Hamiltonian consists entirely of the kinetic term.
If one considers a Riemannian manifold or a pseudo-Riemannian manifold, the Riemannian metric induces a linear isomorphism between the tangent and cotangent bundles. (See Musical isomorphism). Using this isomorphism, one can define a cometric. (In coordinates, the matrix defining the cometric is the inverse of the matrix defining the metric.) The solutions to the Hamilton–Jacobi equations for this Hamiltonian are then the same as the geodesics on the manifold. In particular, the Hamiltonian flow in this case is the same thing as the geodesic flow. The existence of such solutions, and the completeness of the set of solutions, are discussed in detail in the article on geodesics. See also Geodesics as Hamiltonian flows.
Sub-Riemannian manifolds
When the cometric is degenerate, then it is not invertible. In this case, one does not have a Riemannian manifold, as one does not have a metric. However, the Hamiltonian still exists. In the case where the cometric is degenerate at every point of the configuration space manifold , so that the rank of the cometric is less than the dimension of the manifold , one has a sub-Riemannian manifold.
The Hamiltonian in this case is known as a sub-Riemannian Hamiltonian. Every such Hamiltonian uniquely determines the cometric, and vice versa. This implies that every sub-Riemannian manifold is uniquely determined by its sub-Riemannian Hamiltonian, and that the converse is true: every sub-Riemannian manifold has a unique sub-Riemannian Hamiltonian. The existence of sub-Riemannian geodesics is given by the Chow–Rashevskii theorem.
The continuous, real-valued Heisenberg group provides a simple example of a sub-Riemannian manifold. For the Heisenberg group, the Hamiltonian is given by
is not involved in the Hamiltonian.
Poisson algebras
Hamiltonian systems can be generalized in various ways. Instead of simply looking at the algebra of smooth functions over a symplectic manifold, Hamiltonian mechanics can be formulated on general commutative unital real Poisson algebras. A state is a continuous linear functional on the Poisson algebra (equipped with some suitable topology) such that for any element of the algebra, maps to a nonnegative real number.
A further generalization is given by Nambu dynamics.
Generalization to quantum mechanics through Poisson bracket
Hamilton's equations above work well for classical mechanics, but not for quantum mechanics, since the differential equations discussed assume that one can specify the exact position and momentum of the particle simultaneously at any point in time. However, the equations can be further generalized so that they apply to quantum mechanics as well as to classical mechanics, through the deformation of the Poisson algebra over and to the algebra of Moyal brackets.
Specifically, the more general form of the Hamilton's equation reads
where is some function of and , and is the Hamiltonian. To find out the rules for evaluating a Poisson bracket without resorting to differential equations, see Lie algebra; a Poisson bracket is the name for the Lie bracket in a Poisson algebra. These Poisson brackets can then be extended to Moyal brackets comporting to an inequivalent Lie algebra, as proven by Hilbrand J. Groenewold, and thereby describe quantum mechanical diffusion in phase space (See Phase space formulation and Wigner–Weyl transform). This more algebraic approach not only permits ultimately extending probability distributions in phase space to Wigner quasi-probability distributions, but, at the mere Poisson bracket classical setting, also provides more power in helping analyze the relevant conserved quantities in a system.
See also
Canonical transformation
Classical field theory
Hamiltonian field theory
Covariant Hamiltonian field theory
Classical mechanics
Dynamical systems theory
Hamiltonian system
Hamilton–Jacobi equation
Hamilton–Jacobi–Einstein equation
Lagrangian mechanics
Maxwell's equations
Hamiltonian (quantum mechanics)
Quantum Hamilton's equations
Quantum field theory
Hamiltonian optics
De Donder–Weyl theory
Geometric mechanics
Routhian mechanics
Nambu mechanics
Hamiltonian fluid mechanics
Hamiltonian vector field
References
Further reading
External links
Classical mechanics
Dynamical systems
Mathematical physics | 0.788822 | 0.997901 | 0.787166 |
Internal energy | The internal energy of a thermodynamic system is the energy of the system as a state function, measured as the quantity of energy necessary to bring the system from its standard internal state to its present internal state of interest, accounting for the gains and losses of energy due to changes in its internal state, including such quantities as magnetization. It excludes the kinetic energy of motion of the system as a whole and the potential energy of position of the system as a whole, with respect to its surroundings and external force fields. It includes the thermal energy, i.e., the constituent particles' kinetic energies of motion relative to the motion of the system as a whole. The internal energy of an isolated system cannot change, as expressed in the law of conservation of energy, a foundation of the first law of thermodynamics. The notion has been introduced to describe the systems characterized by temperature variations, temperature being added to the set of state parameters, the position variables known in mechanics (and their conjugated generalized force parameters), in a similar way to potential energy of the conservative fields of force, gravitational and electrostatic. Internal energy changes equal the algebraic sum of the heat transferred and the work done. In systems without temperature changes, potential energy changes equal the work done by/on the system.
The internal energy cannot be measured absolutely. Thermodynamics concerns changes in the internal energy, not its absolute value. The processes that change the internal energy are transfers, into or out of the system, of substance, or of energy, as heat, or by thermodynamic work. These processes are measured by changes in the system's properties, such as temperature, entropy, volume, electric polarization, and molar constitution. The internal energy depends only on the internal state of the system and not on the particular choice from many possible processes by which energy may pass into or out of the system. It is a state variable, a thermodynamic potential, and an extensive property.
Thermodynamics defines internal energy macroscopically, for the body as a whole. In statistical mechanics, the internal energy of a body can be analyzed microscopically in terms of the kinetic energies of microscopic motion of the system's particles from translations, rotations, and vibrations, and of the potential energies associated with microscopic forces, including chemical bonds.
The unit of energy in the International System of Units (SI) is the joule (J). The internal energy relative to the mass with unit J/kg is the specific internal energy. The corresponding quantity relative to the amount of substance with unit J/mol is the molar internal energy.
Cardinal functions
The internal energy of a system depends on its entropy S, its volume V and its number of massive particles: . It expresses the thermodynamics of a system in the energy representation. As a function of state, its arguments are exclusively extensive variables of state. Alongside the internal energy, the other cardinal function of state of a thermodynamic system is its entropy, as a function, , of the same list of extensive variables of state, except that the entropy, , is replaced in the list by the internal energy, . It expresses the entropy representation.
Each cardinal function is a monotonic function of each of its natural or canonical variables. Each provides its characteristic or fundamental equation, for example , that by itself contains all thermodynamic information about the system. The fundamental equations for the two cardinal functions can in principle be interconverted by solving, for example, for , to get .
In contrast, Legendre transformations are necessary to derive fundamental equations for other thermodynamic potentials and Massieu functions. The entropy as a function only of extensive state variables is the one and only cardinal function of state for the generation of Massieu functions. It is not itself customarily designated a 'Massieu function', though rationally it might be thought of as such, corresponding to the term 'thermodynamic potential', which includes the internal energy.
For real and practical systems, explicit expressions of the fundamental equations are almost always unavailable, but the functional relations exist in principle. Formal, in principle, manipulations of them are valuable for the understanding of thermodynamics.
Description and definition
The internal energy of a given state of the system is determined relative to that of a standard state of the system, by adding up the macroscopic transfers of energy that accompany a change of state from the reference state to the given state:
where denotes the difference between the internal energy of the given state and that of the reference state,
and the are the various energies transferred to the system in the steps from the reference state to the given state.
It is the energy needed to create the given state of the system from the reference state. From a non-relativistic microscopic point of view, it may be divided into microscopic potential energy, , and microscopic kinetic energy, , components:
The microscopic kinetic energy of a system arises as the sum of the motions of all the system's particles with respect to the center-of-mass frame, whether it be the motion of atoms, molecules, atomic nuclei, electrons, or other particles. The microscopic potential energy algebraic summative components are those of the chemical and nuclear particle bonds, and the physical force fields within the system, such as due to internal induced electric or magnetic dipole moment, as well as the energy of deformation of solids (stress-strain). Usually, the split into microscopic kinetic and potential energies is outside the scope of macroscopic thermodynamics.
Internal energy does not include the energy due to motion or location of a system as a whole. That is to say, it excludes any kinetic or potential energy the body may have because of its motion or location in external gravitational, electrostatic, or electromagnetic fields. It does, however, include the contribution of such a field to the energy due to the coupling of the internal degrees of freedom of the object with the field. In such a case, the field is included in the thermodynamic description of the object in the form of an additional external parameter.
For practical considerations in thermodynamics or engineering, it is rarely necessary, convenient, or even possible to consider all energies belonging to the total intrinsic energy of a sample system, such as the energy given by the equivalence of mass. Typically, descriptions only include components relevant to the system under study. Indeed, in most systems under consideration, especially through thermodynamics, it is impossible to calculate the total internal energy. Therefore, a convenient null reference point may be chosen for the internal energy.
The internal energy is an extensive property: it depends on the size of the system, or on the amount of substance it contains.
At any temperature greater than absolute zero, microscopic potential energy and kinetic energy are constantly converted into one another, but the sum remains constant in an isolated system (cf. table). In the classical picture of thermodynamics, kinetic energy vanishes at zero temperature and the internal energy is purely potential energy. However, quantum mechanics has demonstrated that even at zero temperature particles maintain a residual energy of motion, the zero point energy. A system at absolute zero is merely in its quantum-mechanical ground state, the lowest energy state available. At absolute zero a system of given composition has attained its minimum attainable entropy.
The microscopic kinetic energy portion of the internal energy gives rise to the temperature of the system. Statistical mechanics relates the pseudo-random kinetic energy of individual particles to the mean kinetic energy of the entire ensemble of particles comprising a system. Furthermore, it relates the mean microscopic kinetic energy to the macroscopically observed empirical property that is expressed as temperature of the system. While temperature is an intensive measure, this energy expresses the concept as an extensive property of the system, often referred to as the thermal energy. The scaling property between temperature and thermal energy is the entropy change of the system.
Statistical mechanics considers any system to be statistically distributed across an ensemble of microstates. In a system that is in thermodynamic contact equilibrium with a heat reservoir, each microstate has an energy and is associated with a probability . The internal energy is the mean value of the system's total energy, i.e., the sum of all microstate energies, each weighted by its probability of occurrence:
This is the statistical expression of the law of conservation of energy.
Internal energy changes
Thermodynamics is chiefly concerned with the changes in internal energy .
For a closed system, with mass transfer excluded, the changes in internal energy are due to heat transfer and due to thermodynamic work done by the system on its surroundings. Accordingly, the internal energy change for a process may be written
When a closed system receives energy as heat, this energy increases the internal energy. It is distributed between microscopic kinetic and microscopic potential energies. In general, thermodynamics does not trace this distribution. In an ideal gas all of the extra energy results in a temperature increase, as it is stored solely as microscopic kinetic energy; such heating is said to be sensible.
A second mechanism of change of the internal energy of a closed system is the work that the system does on its surroundings. Such work may be simply mechanical, as when the system expands to drive a piston, or, for example, when the system changes its electric polarization so as to drive a change in the electric field in the surroundings.
If the system is not closed, the third mechanism that can increase the internal energy is transfer of substance into the system. This increase cannot be split into heat and work components. If the system is so set up physically that heat transfer and work that it does are by pathways separate from and independent of matter transfer, then the transfers of energy add to change the internal energy:
If a system undergoes certain phase transformations while being heated, such as melting and vaporization, it may be observed that the temperature of the system does not change until the entire sample has completed the transformation. The energy introduced into the system while the temperature does not change is called latent energy or latent heat, in contrast to sensible heat, which is associated with temperature change.
Internal energy of the ideal gas
Thermodynamics often uses the concept of the ideal gas for teaching purposes, and as an approximation for working systems. The ideal gas consists of particles considered as point objects that interact only by elastic collisions and fill a volume such that their mean free path between collisions is much larger than their diameter. Such systems approximate monatomic gases such as helium and other noble gases. For an ideal gas the kinetic energy consists only of the translational energy of the individual atoms. Monatomic particles do not possess rotational or vibrational degrees of freedom, and are not electronically excited to higher energies except at very high temperatures.
Therefore, the internal energy of an ideal gas depends solely on its temperature (and the number of gas particles): . It is not dependent on other thermodynamic quantities such as pressure or density.
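A minimal numerical illustration of this statement, assuming a monatomic ideal gas (so that the molar heat capacity at constant volume is 3R/2) and purely illustrative amounts and temperatures:

R = 8.314462618            # molar gas constant, J/(mol K)
n = 2.0                     # amount of substance, mol (illustrative)
cv_molar = 1.5 * R          # monatomic ideal gas assumption

def internal_energy(T):
    # U = n * C_V,m * T, measured from the U = 0 reference at T = 0
    return n * cv_molar * T

dU = internal_energy(400.0) - internal_energy(300.0)
print(f"Heating 2 mol of a monatomic ideal gas from 300 K to 400 K: dU = {dU:.0f} J")
# The answer involves only n and T; the pressure and volume play no role.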
The internal energy of an ideal gas is proportional to its amount of substance (number of moles) and to its temperature
where is the isochoric (at constant volume) molar heat capacity of the gas; is constant for an ideal gas. The internal energy of any gas (ideal or not) may be written as a function of the three extensive properties , , (entropy, volume, number of moles). In case of the ideal gas it is in the following way
where is an arbitrary positive constant and where is the universal gas constant. It is easily seen that is a linearly homogeneous function of the three variables (that is, it is extensive in these variables), and that it is weakly convex. Knowing temperature and pressure to be the derivatives
the ideal gas law immediately follows as below:
Internal energy of a closed thermodynamic system
The above summation of all components of change in internal energy assumes that a positive energy denotes heat added to the system or the negative of work done by the system on its surroundings.
This relationship may be expressed in infinitesimal terms using the differentials of each term, though only the internal energy is an exact differential. For a closed system, with transfers only as heat and work, the change in the internal energy is
expressing the first law of thermodynamics. It may be expressed in terms of other thermodynamic parameters. Each term is composed of an intensive variable (a generalized force) and its conjugate infinitesimal extensive variable (a generalized displacement).
For example, the mechanical work done by the system may be related to the pressure and volume change . The pressure is the intensive generalized force, while the volume change is the extensive generalized displacement:
This defines the direction of work, , to be energy transfer from the working system to the surroundings, indicated by a positive term. Taking the direction of heat transfer to be into the working fluid and assuming a reversible process, the heat is
where denotes the temperature, and denotes the entropy.
The change in internal energy becomes
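The missing display is the standard combined statement (a reconstruction consistent with the sign conventions stated above):

\[
\mathrm{d}U = \delta Q - \delta W = T\,\mathrm{d}S - P\,\mathrm{d}V .
\]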
Changes due to temperature and volume
The expression relating changes in internal energy to changes in temperature and volume is
This is useful if the equation of state is known.
In case of an ideal gas, we can derive that , i.e. the internal energy of an ideal gas can be written as a function that depends only on the temperature.
The expression relating changes in internal energy to changes in temperature and volume is
The equation of state is the ideal gas law
Solve for pressure:
Substitute in to internal energy expression:
Take the derivative of pressure with respect to temperature:
Replace:
And simplify:
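The displayed steps above did not survive extraction; a standard reconstruction of the chain of substitutions is

\[
\mathrm{d}U = C_V\,\mathrm{d}T + \left[\, T\left(\frac{\partial P}{\partial T}\right)_{V} - P \right]\mathrm{d}V,
\qquad
P = \frac{nRT}{V}
\;\Rightarrow\;
\left(\frac{\partial P}{\partial T}\right)_{V} = \frac{nR}{V},
\]
\[
T\,\frac{nR}{V} - P = P - P = 0,
\qquad\text{so}\qquad
\mathrm{d}U = C_V\,\mathrm{d}T
\]

for the ideal gas, confirming that its internal energy depends on temperature alone.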
To express in terms of and , the term
is substituted in the fundamental thermodynamic relation
This gives
The term is the heat capacity at constant volume
The partial derivative of with respect to can be evaluated if the equation of state is known. From the fundamental thermodynamic relation, it follows that the differential of the Helmholtz free energy is given by
The symmetry of second derivatives of with respect to and yields the Maxwell relation:
This gives the expression above.
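In symbols, the step just described is (a reconstruction of the missing displays)

\[
\mathrm{d}F = -S\,\mathrm{d}T - P\,\mathrm{d}V
\quad\Rightarrow\quad
\left(\frac{\partial S}{\partial V}\right)_{T} = \left(\frac{\partial P}{\partial T}\right)_{V},
\]

the Maxwell relation used to obtain the volume dependence of the internal energy.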
Changes due to temperature and pressure
When considering fluids or solids, an expression in terms of the temperature and pressure is usually more useful:
where it is assumed that the heat capacity at constant pressure is related to the heat capacity at constant volume according to
The partial derivative of the pressure with respect to temperature at constant volume can be expressed in terms of the coefficient of thermal expansion
and the isothermal compressibility
by writing
and equating dV to zero and solving for the ratio dP/dT. This gives
Substituting and in gives the above expression.
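For reference, the standard definitions and the resulting identity (a reconstruction of the missing displays) are

\[
\alpha \equiv \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{P},
\qquad
\kappa_T \equiv -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T},
\qquad
\left(\frac{\partial P}{\partial T}\right)_{V} = \frac{\alpha}{\kappa_T} .
\]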
Changes due to volume at constant temperature
The internal pressure is defined as a partial derivative of the internal energy with respect to the volume at constant temperature:
Internal energy of multi-component systems
In addition to including the entropy and volume terms in the internal energy, a system is often described also in terms of the number of particles or chemical species it contains:
where are the molar amounts of constituents of type in the system. Because the internal energy is an extensive function of the extensive variables , , and the amounts , it may be written as a linearly homogeneous function of first degree:
where is a factor describing the growth of the system. The differential internal energy may be written as
which shows (or defines) temperature to be the partial derivative of with respect to entropy and pressure to be the negative of the similar derivative with respect to volume ,
and where the coefficients are the chemical potentials for the components of type in the system. The chemical potentials are defined as the partial derivatives of the internal energy with respect to the variations in composition:
As conjugate variables to the composition , the chemical potentials are intensive properties, intrinsically characteristic of the qualitative nature of the system, and not proportional to its extent. Under conditions of constant and , because of the extensive nature of and its independent variables, using Euler's homogeneous function theorem, the differential may be integrated and yields an expression for the internal energy:
The sum over the composition of the system is the Gibbs free energy:
that arises from changing the composition of the system at constant temperature and pressure. For a single component system, the chemical potential equals the Gibbs energy per amount of substance, i.e. particles or moles according to the original definition of the unit for .
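The integrated (Euler) form referred to above and the resulting Gibbs energy read, as a reconstruction of the missing displays,

\[
U = T S - P V + \sum_{i} \mu_i N_i,
\qquad
G \equiv U - T S + P V = \sum_{i} \mu_i N_i,
\]

where the N_i are the amounts of the constituents.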
Internal energy in an elastic medium
For an elastic medium the potential energy component of the internal energy has an elastic nature expressed in terms of the stress and strain involved in elastic processes. In Einstein notation for tensors, with summation over repeated indices, for unit volume, the infinitesimal statement is
Euler's theorem yields for the internal energy:
For a linearly elastic material, the stress is related to the strain by
where the are the components of the 4th-rank elastic constant tensor of the medium.
Elastic deformations, such as sound, passing through a body, or other forms of macroscopic internal agitation or turbulent motion create states when the system is not in thermodynamic equilibrium. While such energies of motion continue, they contribute to the total energy of the system; thermodynamic internal energy pertains only when such motions have ceased.
History
James Joule studied the relationship between heat, work, and temperature. He observed that friction in a liquid, such as caused by its agitation with work by a paddle wheel, caused an increase in its temperature, which he described as producing a quantity of heat. Expressed in modern units, he found that c. 4186 joules of energy were needed to raise the temperature of one kilogram of water by one degree Celsius.
Notes
See also
Calorimetry
Enthalpy
Exergy
Thermodynamic equations
Thermodynamic potentials
Gibbs free energy
Helmholtz free energy
References
Bibliography of cited references
Adkins, C. J. (1968/1975). Equilibrium Thermodynamics, second edition, McGraw-Hill, London, .
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, .
Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
Callen, H. B. (1960/1985), Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York, .
Crawford, F. H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081.
Münster, A. (1970), Classical Thermodynamics, translated by E. S. Halberstadt, Wiley–Interscience, London, .
Planck, M., (1923/1927). Treatise on Thermodynamics, translated by A. Ogg, third English edition, Longmans, Green and Co., London.
Tschoegl, N. W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, .
Bibliography
Physical quantities
Thermodynamic properties
State functions
Statistical mechanics
Energy (physics) | 0.788735 | 0.997809 | 0.787007 |
Brownian motion | Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas).
This motion pattern typically consists of random fluctuations in a particle's position inside a fluid sub-domain, followed by a relocation to another sub-domain. Each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid, there exists no preferential direction of flow (as in transport phenomena). More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy (the equipartition theorem).
This motion is named after the botanist Robert Brown, who first described the phenomenon in 1827, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. In 1900, the French mathematician Louis Bachelier modeled the stochastic process now called Brownian motion in his doctoral thesis, The Theory of Speculation (Théorie de la spéculation), prepared under the supervision of Henri Poincaré. Then, in 1905, theoretical physicist Albert Einstein published a paper where he modeled the motion of the pollen particles as being moved by individual water molecules, making one of his first major scientific contributions.
The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist and was further verified experimentally by Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter".
The many-body interactions that yield the Brownian pattern cannot be solved by a model accounting for every involved molecule. Consequently, only probabilistic models applied to molecular populations can be employed to describe it. Two such models of the statistical mechanics, due to Einstein and Smoluchowski, are presented below. Another, pure probabilistic class of models is the class of the stochastic process models. There exist sequences of both simpler and more complicated stochastic processes which converge (in the limit) to Brownian motion (see random walk and Donsker's theorem).
History
The Roman philosopher-poet Lucretius' scientific poem "On the Nature of Things" has a remarkable description of the motion of dust particles in verses 113–140 from Book II. He uses this as a proof of the existence of atoms:
Although the mingling, tumbling motion of dust particles is caused largely by air currents, the glittering, jiggling motion of small dust particles is caused chiefly by true Brownian dynamics; Lucretius "perfectly describes and explains the Brownian movement by a wrong example".
While Jan Ingenhousz described the irregular motion of coal dust particles on the surface of alcohol in 1785, the discovery of this phenomenon is often credited to the botanist Robert Brown in 1827. Brown was studying pollen grains of the plant Clarkia pulchella suspended in water under a microscope when he observed minute particles, ejected by the pollen grains, executing a jittery motion. By repeating the experiment with particles of inorganic matter he was able to rule out that the motion was life-related, although its origin was yet to be explained.
The first person to describe the mathematics behind Brownian motion was Thorvald N. Thiele in a paper on the method of least squares published in 1880. This was followed independently by Louis Bachelier in 1900 in his PhD thesis "The theory of speculation", in which he presented a stochastic analysis of the stock and option markets. The Brownian model of financial markets is often cited, but Benoit Mandelbrot rejected its applicability to stock price movements in part because these are discontinuous.
Albert Einstein (in one of his 1905 papers) and Marian Smoluchowski (1906) brought the solution of the problem to the attention of physicists, and presented it as a way to indirectly confirm the existence of atoms and molecules. Their equations describing Brownian motion were subsequently verified by the experimental work of Jean Baptiste Perrin in 1908.
Statistical mechanics theories
Einstein's theory
There are two parts to Einstein's theory: the first part consists in the formulation of a diffusion equation for Brownian particles, in which the diffusion coefficient is related to the mean squared displacement of a Brownian particle, while the second part consists in relating the diffusion coefficient to measurable physical quantities. In this way Einstein was able to determine the size of atoms, and how many atoms there are in a mole, or the molecular weight in grams, of a gas. In accordance with Avogadro's law, the molar volume is the same for all ideal gases: 22.414 liters at standard temperature and pressure. The number of atoms contained in this volume is referred to as the Avogadro number, and the determination of this number is tantamount to the knowledge of the mass of an atom, since the latter is obtained by dividing the molar mass of the gas by the Avogadro constant.
The first part of Einstein's argument was to determine how far a Brownian particle travels in a given time interval. Classical mechanics is unable to determine this distance because of the enormous number of bombardments a Brownian particle will undergo, roughly of the order of 10¹⁴ collisions per second.
He regarded the increment of particle positions in time in a one-dimensional (x) space (with the coordinates chosen so that the origin lies at the initial position of the particle) as a random variable with some probability density function (i.e., is the probability density for a jump of magnitude , i.e., the probability density of the particle incrementing its position from to in the time interval ). Further, assuming conservation of particle number, he expanded the number density (number of particles per unit volume around ) at time in a Taylor series,
where the second equality is by definition of . The integral in the first term is equal to one by the definition of probability, and the second and other even terms (i.e. first and other odd moments) vanish because of space symmetry. What is left gives rise to the following relation:
Where the coefficient after the Laplacian, the second moment of probability of displacement , is interpreted as mass diffusivity D:
Then the density of Brownian particles at point at time satisfies the diffusion equation:
Assuming that N particles start from the origin at the initial time t = 0, the diffusion equation has the solution
This expression (which is a normal distribution with the mean and variance usually called Brownian motion ) allowed Einstein to calculate the moments directly. The first moment is seen to vanish, meaning that the Brownian particle is equally likely to move to the left as it is to move to the right. The second moment is, however, non-vanishing, being given by
This equation expresses the mean squared displacement in terms of the time elapsed and the diffusivity. From this expression Einstein argued that the displacement of a Brownian particle is not proportional to the elapsed time, but rather to its square root. His argument is based on a conceptual switch from the "ensemble" of Brownian particles to the "single" Brownian particle: we can speak of the relative number of particles at a single instant just as well as of the time it takes a Brownian particle to reach a given point.
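The two displays referred to in this paragraph are, in standard form (a reconstruction for the one-dimensional case with N particles starting at the origin),

\[
\rho(x,t) = \frac{N}{\sqrt{4\pi D t}}\,\exp\!\left(-\frac{x^{2}}{4 D t}\right),
\qquad
\overline{x^{2}} = 2 D t,
\]

so the root-mean-square displacement grows as the square root of the elapsed time.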
The second part of Einstein's theory relates the diffusion constant to physically measurable quantities, such as the mean squared displacement of a particle in a given time interval. This result enables the experimental determination of the Avogadro number and therefore the size of molecules. Einstein analyzed a dynamic equilibrium being established between opposing forces. The beauty of his argument is that the final result does not depend upon which forces are involved in setting up the dynamic equilibrium.
In his original treatment, Einstein considered an osmotic pressure experiment, but the same conclusion can be reached in other ways.
Consider, for instance, particles suspended in a viscous fluid in a gravitational field. Gravity tends to make the particles settle, whereas diffusion acts to homogenize them, driving them into regions of smaller concentration. Under the action of gravity, a particle acquires a downward speed of , where is the mass of the particle, is the acceleration due to gravity, and is the particle's mobility in the fluid. George Stokes had shown that the mobility for a spherical particle with radius is , where is the dynamic viscosity of the fluid. In a state of dynamic equilibrium, and under the hypothesis of isothermal fluid, the particles are distributed according to the barometric distribution
where is the difference in density of particles separated by a height difference, of , is the Boltzmann constant (the ratio of the universal gas constant, , to the Avogadro constant, ), and is the absolute temperature.
Dynamic equilibrium is established because the more that particles are pulled down by gravity, the greater the tendency for the particles to migrate to regions of lower concentration. The flux is given by Fick's law,
where . Introducing the formula for , we find that
In a state of dynamical equilibrium, this speed must also be equal to . Both expressions for are proportional to , reflecting that the derivation is independent of the type of forces considered. Similarly, one can derive an equivalent formula for identical charged particles of charge in a uniform electric field of magnitude , where is replaced with the electrostatic force . Equating these two expressions yields the Einstein relation for the diffusivity, independent of or or other such forces:
Here the first equality follows from the first part of Einstein's theory, the third equality follows from the definition of the Boltzmann constant as , and the fourth equality follows from Stokes's formula for the mobility. By measuring the mean squared displacement over a time interval along with the universal gas constant , the temperature , the viscosity , and the particle radius , the Avogadro constant can be determined.
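Collecting the chain of equalities described here into symbols (a reconstruction; r is the particle radius and η the dynamic viscosity):

\[
\overline{x^{2}} = 2 D t,
\qquad
D = \mu\, k_{B} T = \frac{k_{B} T}{6 \pi \eta r} = \frac{R T}{6 \pi \eta r\, N_{A}},
\]

so a measurement of the mean squared displacement, together with T, η and r, determines the Avogadro constant N_A.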
The type of dynamical equilibrium proposed by Einstein was not new. It had been pointed out previously by J. J. Thomson in his series of lectures at Yale University in May 1903 that the dynamic equilibrium between the velocity generated by a concentration gradient given by Fick's law and the velocity due to the variation of the partial pressure caused when ions are set in motion "gives us a method of determining Avogadro's constant which is independent of any hypothesis as to the shape or size of molecules, or of the way in which they act upon each other".
An identical expression to Einstein's formula for the diffusion coefficient was also found by Walther Nernst in 1888 in which he expressed the diffusion coefficient as the ratio of the osmotic pressure to the ratio of the frictional force and the velocity to which it gives rise. The former was equated to the law of van 't Hoff while the latter was given by Stokes's law. He writes for the diffusion coefficient , where is the osmotic pressure and is the ratio of the frictional force to the molecular viscosity which he assumes is given by Stokes's formula for the viscosity. Introducing the ideal gas law per unit volume for the osmotic pressure, the formula becomes identical to that of Einstein's. The use of Stokes's law in Nernst's case, as well as in Einstein and Smoluchowski, is not strictly applicable since it does not apply to the case where the radius of the sphere is small in comparison with the mean free path.
At first, the predictions of Einstein's formula were seemingly refuted by a series of experiments by Svedberg in 1906 and 1907, which gave displacements of the particles as 4 to 6 times the predicted value, and by Henri in 1908 who found displacements 3 times greater than Einstein's formula predicted. But Einstein's predictions were finally confirmed in a series of experiments carried out by Chaudesaigues in 1908 and Perrin in 1909. The confirmation of Einstein's theory constituted empirical progress for the kinetic theory of heat. In essence, Einstein showed that the motion can be predicted directly from the kinetic model of thermal equilibrium. The importance of the theory lay in the fact that it confirmed the kinetic theory's account of the second law of thermodynamics as being an essentially statistical law.
Smoluchowski model
Smoluchowski's theory of Brownian motion starts from the same premise as that of Einstein and derives the same probability distribution for the displacement of a Brownian particle along the x-axis in time t. He therefore gets the same expression for the mean squared displacement: However, when he relates it to a particle of mass moving at a velocity which is the result of a frictional force governed by Stokes's law, he finds
where is the viscosity coefficient, and is the radius of the particle. Associating the kinetic energy with the thermal energy , the expression for the mean squared displacement is 64/27 times that found by Einstein. The fraction 27/64 was commented on by Arnold Sommerfeld in his necrology on Smoluchowski: "The numerical coefficient of Einstein, which differs from Smoluchowski by 27/64 can only be put in doubt."
Smoluchowski attempts to answer the question of why a Brownian particle should be displaced by bombardments of smaller particles when the probabilities for striking it in the forward and rear directions are equal.
If the probability of gains and losses follows a binomial distribution,
with equal probabilities of 1/2, the mean total gain is
If is large enough so that Stirling's approximation can be used in the form
then the expected total gain will be
showing that it increases as the square root of the total population.
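A small simulation makes the square-root growth concrete. The sketch below draws many realizations of a fair binomial process and compares the mean absolute excess of forward over backward outcomes with the asymptotic value sqrt(2n/π); the sample sizes and the number of realizations are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
trials = 5000                                    # number of independent realizations
for n in (100, 10_000, 1_000_000):
    forward = rng.binomial(n, 0.5, size=trials)  # number of "forward" outcomes out of n
    net = np.abs(2 * forward - n)                # |forward - backward|
    print(f"n = {n:>9}: mean |net gain| = {net.mean():9.1f}   sqrt(2n/pi) = {np.sqrt(2*n/np.pi):9.1f}")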
Suppose that a Brownian particle of mass is surrounded by lighter particles of mass which are traveling at a speed . Then, reasons Smoluchowski, in any collision between a surrounding particle and the Brownian particle, the velocity transmitted to the latter will be . This ratio is of the order of . But we also have to take into consideration that in a gas there will be more than 10¹⁶ collisions in a second, and even more in a liquid, where we expect 10²⁰ collisions in one second. Some of these collisions will tend to accelerate the Brownian particle; others will tend to decelerate it. If there is a mean excess of one kind of collision or the other of the order of 10⁸ to 10¹⁰ collisions in one second, then the velocity of the Brownian particle may be anywhere between . Thus, even though there are equal probabilities for forward and backward collisions, there will be a net tendency to keep the Brownian particle in motion, just as the ballot theorem predicts.
These orders of magnitude are not exact because they don't take into consideration the velocity of the Brownian particle, which depends on the collisions that tend to accelerate and decelerate it. The larger is, the greater will be the number of collisions that retard it, so that the velocity of a Brownian particle can never increase without limit. Could such a process occur, it would be tantamount to a perpetual motion of the second type. And since equipartition of energy applies, the kinetic energy of the Brownian particle will be equal, on average, to the kinetic energy of the surrounding fluid particles.
In 1906 Smoluchowski published a one-dimensional model to describe a particle undergoing Brownian motion. The model assumes collisions with where is the test particle's mass and the mass of one of the individual particles composing the fluid. It is assumed that the particle collisions are confined to one dimension and that it is equally probable for the test particle to be hit from the left as from the right. It is also assumed that every collision always imparts the same magnitude of . If is the number of collisions from the right and the number of collisions from the left then after collisions the particle's velocity will have changed by . The multiplicity is then simply given by:
and the total number of possible states is given by . Therefore, the probability of the particle being hit from the right times is:
As a result of its simplicity, Smoluchowski's 1D model can only qualitatively describe Brownian motion. For a realistic particle undergoing Brownian motion in a fluid, many of the assumptions don't apply. For example, the assumption that, on average, an equal number of collisions occurs from the right as from the left falls apart once the particle is in motion. Also, in a realistic situation there would be a distribution of different possible velocity increments rather than always the same one.
Other physics models using partial differential equations
The diffusion equation yields an approximation of the time evolution of the probability density function associated with the position of the particle going under a Brownian movement under the physical definition. The approximation is valid on timescales long compared with the particle's momentum relaxation time.
The time evolution of the position of the Brownian particle itself is best described using the Langevin equation, an equation that involves a random force field representing the effect of the thermal fluctuations of the solvent on the particle. In Langevin dynamics and Brownian dynamics, the Langevin equation is used to efficiently simulate the dynamics of molecular systems that exhibit a strong Brownian component.
The displacement of a particle undergoing Brownian motion is obtained by solving the diffusion equation under appropriate boundary conditions and finding the rms of the solution. This shows that the displacement varies as the square root of the time (not linearly), which explains why previous experimental results concerning the velocity of Brownian particles gave nonsensical results. A linear time dependence was incorrectly assumed.
At very short time scales, however, the motion of a particle is dominated by its inertia and its displacement will be linearly dependent on time: . So the instantaneous velocity of the Brownian motion can be measured as , when , where is the momentum relaxation time. In 2010, the instantaneous velocity of a Brownian particle (a glass microsphere trapped in air with optical tweezers) was measured successfully. The velocity data verified the Maxwell–Boltzmann velocity distribution, and the equipartition theorem for a Brownian particle.
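A minimal overdamped (Brownian-dynamics) sketch of the behaviour described above: on timescales long compared with the momentum relaxation time the position performs a pure diffusion, so the mean squared displacement grows linearly in time and the displacement itself grows as the square root of time. All parameter values below are illustrative.

import numpy as np

rng = np.random.default_rng(1)
D, dt = 1.0, 1e-3                        # diffusion coefficient and time step (illustrative)
n_steps, n_particles = 10_000, 5000
x = np.zeros(n_particles)
for step in range(1, n_steps + 1):
    # Euler-Maruyama update for pure diffusion: dx = sqrt(2 D dt) * standard normal
    x += np.sqrt(2 * D * dt) * rng.standard_normal(n_particles)
    if step in (1_000, 5_000, 10_000):
        t = step * dt
        print(f"t = {t:5.2f}: <x^2> = {np.mean(x**2):6.3f}   2*D*t = {2*D*t:6.3f}")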
Astrophysics: star motion within galaxies
In stellar dynamics, a massive body (star, black hole, etc.) can experience Brownian motion as it responds to gravitational forces from surrounding stars. The rms velocity of the massive object, of mass , is related to the rms velocity of the background stars by
where is the mass of the background stars. The gravitational force from the massive object causes nearby stars to move faster than they otherwise would, increasing both and . The Brownian velocity of Sgr A*, the supermassive black hole at the center of the Milky Way galaxy, is predicted from this formula to be less than 1 km s⁻¹.
Mathematics
In mathematics, Brownian motion is described by the Wiener process, a continuous-time stochastic process named in honor of Norbert Wiener. It is one of the best known Lévy processes (càdlàg stochastic processes with stationary independent increments) and occurs frequently in pure and applied mathematics, economics and physics.
The Wiener process W(t) is characterized by four facts:
W(0) = 0 almost surely
W has independent increments
the increment W(t) − W(s), for 0 ≤ s ≤ t, is normally distributed with expected value 0 and variance t − s
W is almost surely continuous
Here N(μ, σ²) denotes the normal distribution with expected value μ and variance σ². The condition of independent increments means that if 0 ≤ s₁ < t₁ ≤ s₂ < t₂ then W(t₁) − W(s₁) and W(t₂) − W(s₂) are independent random variables. In addition, for some filtration, W(t) is measurable with respect to that filtration for all t.
An alternative characterisation of the Wiener process is the so-called Lévy characterisation that says that the Wiener process is an almost surely continuous martingale with and quadratic variation
A third characterisation is that the Wiener process has a spectral representation as a sine series whose coefficients are independent random variables. This representation can be obtained using the Kosambi–Karhunen–Loève theorem.
The Wiener process can be constructed as the scaling limit of a random walk, or other discrete-time stochastic processes with stationary independent increments. This is known as Donsker's theorem. Like the random walk, the Wiener process is recurrent in one or two dimensions (meaning that it returns almost surely to any fixed neighborhood of the origin infinitely often) whereas it is not recurrent in dimensions three and higher. Unlike the random walk, it is scale invariant.
The time evolution of the position of the Brownian particle itself can be described approximately by a Langevin equation, an equation which involves a random force field representing the effect of the thermal fluctuations of the solvent on the Brownian particle. On long timescales, the mathematical Brownian motion is well described by a Langevin equation. On small timescales, inertial effects are prevalent in the Langevin equation; the mathematical Brownian motion, by contrast, has no such inertial effects. Inertial effects have to be retained in the Langevin equation, otherwise the equation becomes singular: simply removing the inertia term would not yield an exact description, but rather a singular behavior in which the particle doesn't move at all.
A d-dimensional Gaussian free field has been described as "a d-dimensional-time analog of Brownian motion."
Statistics
The Brownian motion can be modeled by a random walk.
In the general case, Brownian motion is a Markov process and described by stochastic integral equations.
Lévy characterisation
The French mathematician Paul Lévy proved the following theorem, which gives a necessary and sufficient condition for a continuous -valued stochastic process to actually be -dimensional Brownian motion. Hence, Lévy's condition can actually be used as an alternative definition of Brownian motion.
Let be a continuous stochastic process on a probability space taking values in . Then the following are equivalent:
is a Brownian motion with respect to , i.e., the law of with respect to is the same as the law of an -dimensional Brownian motion, i.e., the push-forward measure is classical Wiener measure on .
both
is a martingale with respect to (and its own natural filtration); and
for all , is a martingale with respect to (and its own natural filtration), where denotes the Kronecker delta.
Spectral content
The spectral content of a stochastic process can be found from the power spectral density, formally defined as
where stands for the expected value. The power spectral density of Brownian motion is found to be
where is the diffusion coefficient of . For naturally occurring signals, the spectral content can be found from the power spectral density of a single realization, with finite available time, i.e.,
which, for an individual realization of a Brownian motion trajectory, is found to have expected value
and variance
For sufficiently long realization times, the expected value of the power spectrum of a single trajectory converges to the formally defined power spectral density, but its coefficient of variation tends to a finite nonzero value. This implies that the distribution of the single-trajectory power spectrum remains broad even in the infinite time limit.
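The broad, roughly 1/f²-shaped spectrum of an individual Brownian trajectory can be checked with a simple periodogram. The sketch below relies only on the 1/f² scaling at low frequencies; the diffusion coefficient, sampling step, and record length are arbitrary assumptions.

```python
import numpy as np

# Periodogram of one simulated Brownian trajectory; the spectrum should fall
# off roughly as 1/f^2 at low frequencies. D and the sampling are arbitrary.
rng = np.random.default_rng(2)
D, dt, n = 1.0, 1e-3, 2 ** 18
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), n))   # Brownian path

freqs = np.fft.rfftfreq(n, d=dt)
power = (np.abs(np.fft.rfft(x)) ** 2) * dt / n            # single-realization periodogram
slope = np.polyfit(np.log(freqs[1:200]), np.log(power[1:200]), 1)[0]
print("low-frequency log-log slope (expect about -2):", slope)
```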
Riemannian manifold
The infinitesimal generator (and hence characteristic operator) of a Brownian motion on is easily calculated to be , where denotes the Laplace operator. In image processing and computer vision, the Laplacian operator has been used for various tasks such as blob and edge detection. This observation is useful in defining Brownian motion on an -dimensional Riemannian manifold : a Brownian motion on is defined to be a diffusion on whose characteristic operator in local coordinates , , is given by , where is the Laplace–Beltrami operator given in local coordinates by
where g^ij = (g_ij)^−1 in the sense of the inverse of a square matrix.
Narrow escape
The narrow escape problem is a ubiquitous problem in biology, biophysics and cellular biology which has the following formulation: a Brownian particle (ion, molecule, or protein) is confined to a bounded domain (a compartment or a cell) by a reflecting boundary, except for a small window through which it can escape. The narrow escape problem is that of calculating the mean escape time. This time diverges as the window shrinks, thus rendering the calculation a singular perturbation problem.
See also
Brownian bridge: a Brownian motion that is required to "bridge" specified values at specified times
Brownian covariance
Brownian dynamics
Brownian motion of sol particles
Brownian motor
Brownian noise
Brownian ratchet
Brownian surface
Brownian tree
Brownian web
Rotational Brownian motion
Clinamen
Complex system
Continuity equation
Diffusion equation
Geometric Brownian motion
Itô diffusion: a generalisation of Brownian motion
Langevin equation
Lévy arcsine law
Local time (mathematics)
Many-body problem
Marangoni effect
Nanoparticle tracking analysis
Narrow escape problem
Osmosis
Random walk
Schramm–Loewner evolution
Single particle trajectories
Single particle tracking
Statistical mechanics
Stochastic Eulerian Lagrangian methods: simulation methods for the Brownian motion of spatially extended structures and hydrodynamic coupling.
Stokesian dynamics
Surface diffusion: a type of constrained Brownian motion.
Thermal equilibrium
Thermodynamic equilibrium
Triangulation sensing
Tyndall effect: a phenomenon where particles are involved; used to differentiate between the different types of mixtures.
Ultramicroscope
References
Further reading
Also includes a subsequent defense by Brown of his original observations, Additional remarks on active molecules.
Lucretius, On The Nature of Things, translated by William Ellery Leonard. (on-line version, from Project Gutenberg. See the heading 'Atomic Motions'; this translation differs slightly from the one quoted).
Nelson, Edward, (1967). Dynamical Theories of Brownian Motion. (PDF version of this out-of-print book, from the author's webpage.) This is primarily a mathematical work, but the first four chapters discuss the history of the topic, in the era from Brown to Einstein.
See also Perrin's book "Les Atomes" (1914).
Thiele, T. N.
Danish version: "Om Anvendelse af mindste Kvadraters Methode i nogle Tilfælde, hvor en Komplikation af visse Slags uensartede tilfældige Fejlkilder giver Fejlene en 'systematisk' Karakter".
French version: "Sur la compensation de quelques erreurs quasi-systématiques par la méthode des moindres carrés", published simultaneously in Vidensk. Selsk. Skr. 5. Rk., naturvid. og mat. Afd., 12:381–408, 1880.
External links
Einstein on Brownian Motion
Discusses history, botany and physics of Brown's original observations, with videos
"Einstein's prediction finally witnessed one century later" : a test to observe the velocity of Brownian motion
Large-Scale Brownian Motion Demonstration
Statistical mechanics
Wiener process
Fractals
Colloidal chemistry
Robert Brown (botanist, born 1773)
Albert Einstein
Articles containing video clips
Lévy processes | 0.787638 | 0.999127 | 0.78695 |
Geodesy | Geodesy or geodetics is the science of measuring and representing the geometry, gravity, and spatial orientation of the Earth in temporally varying 3D. It is called planetary geodesy when studying other astronomical bodies, such as planets or circumplanetary systems. Geodesy is an earth science and many consider the study of Earth's shape and gravity to be central to that science. It is also a discipline of applied mathematics.
Geodynamical phenomena, including crustal motion, tides, and polar motion, can be studied by designing global and national control networks, applying space geodesy and terrestrial geodetic techniques, and relying on datums and coordinate systems. Geodetic job titles include geodesist and geodetic surveyor.
History
Geodesy began in pre-scientific antiquity; the very word geodesy comes from the Ancient Greek geodaisia (literally, "division of Earth").
Early ideas about the figure of the Earth held the Earth to be flat and the heavens a physical dome spanning over it. Two early arguments for a spherical Earth were that lunar eclipses appear to an observer as circular shadows and that Polaris appears lower and lower in the sky to a traveler headed South.
Definition
In English, geodesy refers to the science of measuring and representing geospatial information, while geomatics encompasses practical applications of geodesy on local and regional scales, including surveying.
In German, geodesy can refer to either higher geodesy (höhere Geodäsie or Erdmessung, literally "geomensuration") — concerned with measuring Earth on the global scale, or engineering geodesy (Ingenieurgeodäsie) that includes surveying — measuring parts or regions of Earth.
For the longest time, geodesy was the science of measuring and understanding Earth's geometric shape, orientation in space, and gravitational field; however, geodetic science and operations are applied to other astronomical bodies in our Solar System also.
To a large extent, Earth's shape is the result of rotation, which causes its equatorial bulge, and the competition of geological processes such as the collision of plates, as well as of volcanism, resisted by Earth's gravitational field. This applies to the solid surface, the liquid surface (dynamic sea surface topography), and Earth's atmosphere. For this reason, the study of Earth's gravitational field is called physical geodesy.
Geoid and reference ellipsoid
The geoid essentially is the figure of Earth abstracted from its topographical features. It is an idealized equilibrium surface of seawater, the mean sea level surface in the absence of currents and air pressure variations, continued under the continental masses. Unlike a reference ellipsoid, the geoid is irregular and too complicated to serve as the computational surface for solving geometrical problems like point positioning. The geometrical separation between the geoid and a reference ellipsoid is called the geoidal undulation, and it varies globally between about −110 m and +110 m relative to the GRS 80 ellipsoid.
A reference ellipsoid, customarily chosen to be the same size (volume) as the geoid, is described by its semi-major axis (equatorial radius) a and flattening f. The quantity f = (a − b)/a, where b is the semi-minor axis (polar radius), is purely geometrical. The mechanical ellipticity of Earth (dynamical flattening, symbol J2) can be determined to high precision by observation of satellite orbit perturbations. Its relationship with geometrical flattening is indirect and depends on the internal density distribution or, in simplest terms, the degree of central concentration of mass.
The 1980 Geodetic Reference System (GRS 80), adopted at the XVII General Assembly of the International Union of Geodesy and Geophysics (IUGG), posited a 6,378,137 m semi-major axis and a 1:298.257 flattening. GRS 80 essentially constitutes the basis for geodetic positioning by the Global Positioning System (GPS) and is thus also in widespread use outside the geodetic community. Numerous systems used for mapping and charting are becoming obsolete as countries increasingly move to global, geocentric reference systems utilizing the GRS 80 reference ellipsoid.
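For concreteness, the GRS 80 defining values quoted above can be converted into the derived geometric quantities used in practice with the standard relations b = a(1 − f) and e² = f(2 − f); the printed numbers in the sketch below are only as accurate as the rounded flattening it uses.

```python
# GRS 80 reference ellipsoid: derive the semi-minor axis and eccentricity
# from the semi-major axis and flattening quoted in the text.
a = 6378137.0          # semi-major axis [m]
inv_f = 298.257        # reciprocal flattening as quoted (GRS 80 defines 298.257222101)
f = 1.0 / inv_f
b = a * (1.0 - f)      # semi-minor (polar) axis
e2 = f * (2.0 - f)     # first eccentricity squared

print(f"b   = {b:.3f} m")      # about 6 356 752 m
print(f"e^2 = {e2:.10f}")
```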
The geoid is a "realizable" surface, meaning it can be consistently located on Earth by suitable simple measurements from physical objects like a tide gauge. The geoid can, therefore, be considered a physical ("real") surface. The reference ellipsoid, however, has many possible instantiations and is not readily realizable, so it is an abstract surface. The third primary surface of geodetic interest — the topographic surface of Earth — is also realizable.
Coordinate systems in space
The locations of points in 3D space most conveniently are described by three cartesian or rectangular coordinates, X, Y, and Z. Since the advent of satellite positioning, such coordinate systems are typically geocentric, with the Z-axis aligned to Earth's (conventional or instantaneous) rotation axis.
Before the era of satellite geodesy, the coordinate systems associated with a geodetic datum attempted to be geocentric, but with the origin differing from the geocenter by hundreds of meters due to regional deviations in the direction of the plumbline (vertical). These regional geodetic datums, such as ED 50 (European Datum 1950) or NAD 27 (North American Datum 1927), have ellipsoids associated with them that are regional "best fits" to the geoids within their areas of validity, minimizing the deflections of the vertical over these areas.
It is only because GPS satellites orbit about the geocenter that this point becomes naturally the origin of a coordinate system defined by satellite geodetic means, as the satellite positions in space themselves get computed within such a system.
Geocentric coordinate systems used in geodesy can be divided naturally into two classes:
The inertial reference systems, where the coordinate axes retain their orientation relative to the fixed stars or, equivalently, to the rotation axes of ideal gyroscopes. The X-axis points to the vernal equinox.
The co-rotating reference systems (also ECEF or "Earth Centred, Earth Fixed"), in which the axes are "attached" to the solid body of Earth. The X-axis lies within the Greenwich observatory's meridian plane.
The coordinate transformation between these two systems to good approximation is described by (apparent) sidereal time, which accounts for variations in Earth's axial rotation (length-of-day variations). A more accurate description also accounts for polar motion as a phenomenon closely monitored by geodesists.
Coordinate systems in the plane
In geodetic applications like surveying and mapping, two general types of coordinate systems in the plane are in use:
Plano-polar, with points in the plane defined by their distance, s, from a specified point along a ray having a direction α from a baseline or axis.
Rectangular, with points defined by distances from two mutually perpendicular axes, x and y. Contrary to the mathematical convention, in geodetic practice, the x-axis points North and the y-axis East.
One can intuitively use rectangular coordinates in the plane for one's current location, in which case the x-axis will point to the local north. More formally, such coordinates can be obtained from 3D coordinates using the artifice of a map projection. It is impossible to map the curved surface of Earth onto a flat map surface without deformation. The compromise most often chosen — called a conformal projection — preserves angles and length ratios so that small circles get mapped as small circles and small squares as squares.
An example of such a projection is UTM (Universal Transverse Mercator). Within the map plane, we have rectangular coordinates x and y. In this case, the north direction used for reference is the map north, not the local north. The difference between the two is called meridian convergence.
It is easy enough to "translate" between polar and rectangular coordinates in the plane: let, as above, direction and distance be α and s respectively; then (with x pointing north and y east) we have
x = s cos α, y = s sin α
The reverse transformation is given by:
s = √(x² + y²), α = arctan(y / x)
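A minimal sketch of both transformations under the geodetic convention stated above (x toward north, y toward east, azimuth α measured clockwise from north); using atan2 keeps the quadrant of the direction angle correct.

```python
import math

# Geodetic plane convention: x = northing, y = easting,
# azimuth alpha measured clockwise from north, distance s.
def polar_to_rect(s: float, alpha_deg: float) -> tuple[float, float]:
    a = math.radians(alpha_deg)
    return s * math.cos(a), s * math.sin(a)       # (x, y)

def rect_to_polar(x: float, y: float) -> tuple[float, float]:
    s = math.hypot(x, y)
    alpha = math.degrees(math.atan2(y, x)) % 360.0
    return s, alpha

x, y = polar_to_rect(100.0, 30.0)
print(x, y)                   # about 86.60, 50.00
print(rect_to_polar(x, y))    # recovers (100.0, 30.0)
```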
Heights
In geodesy, point or terrain heights are "above sea level" as an irregular, physically defined surface.
Height systems in use are:
Orthometric heights
Dynamic heights
Geopotential heights
Normal heights
Each system has its advantages and disadvantages. Both orthometric and normal heights are expressed in metres above sea level, whereas geopotential numbers are measures of potential energy (unit: m2 s−2) and not metric. The reference surface is the geoid, an equigeopotential surface approximating the mean sea level as described above. For normal heights, the reference surface is the so-called quasi-geoid, which has a few-metre separation from the geoid due to the density assumption in its continuation under the continental masses.
One can relate these heights through the geoid undulation concept to ellipsoidal heights (also known as geodetic heights), representing the height of a point above the reference ellipsoid. Satellite positioning receivers typically provide ellipsoidal heights unless fitted with special conversion software based on a model of the geoid.
Geodetic datums
Because coordinates and heights of geodetic points always get obtained within a system that itself was constructed based on real-world observations, geodesists introduced the concept of a "geodetic datum" (plural datums): a physical (real-world) realization of a coordinate system used for describing point locations. This realization follows from choosing (therefore conventional) coordinate values for one or more datum points. In the case of height data, it suffices to choose one datum point — the reference benchmark, typically a tide gauge at the shore. Thus we have vertical datums, such as the NAVD 88 (North American Vertical Datum 1988), NAP (Normaal Amsterdams Peil), the Kronstadt datum, the Trieste datum, and numerous others.
In both mathematics and geodesy, a coordinate system is a "coordinate system" per ISO terminology, whereas the International Earth Rotation and Reference Systems Service (IERS) uses the term "reference system" for the same. When coordinates are realized by choosing datum points and fixing a geodetic datum, ISO speaks of a "coordinate reference system", whereas IERS uses a "reference frame" for the same. The ISO term for a datum transformation again is a "coordinate transformation".
Positioning
General geopositioning, or simply positioning, is the determination of the location of points on Earth, by myriad techniques. Geodetic positioning employs geodetic methods to determine a set of precise geodetic coordinates of a point on land, at sea, or in space. It may be done within a coordinate system (point positioning or absolute positioning) or relative to another point (relative positioning). One computes the position of a point in space from measurements linking terrestrial or extraterrestrial points of known location ("known points") with terrestrial ones of unknown location ("unknown points"). The computation may involve transformations between or among astronomical and terrestrial coordinate systems. Known points used in point positioning can be GNSS continuously operating reference stations or triangulation points of a higher-order network.
Traditionally, geodesists built a hierarchy of networks to allow point positioning within a country. The highest in this hierarchy were triangulation networks, densified into the networks of traverses (polygons) into which local mapping and surveying measurements, usually collected using a measuring tape, a corner prism, and the red-and-white poles, are tied.
Commonly used nowadays is GPS, except for specialized measurements (e.g., in underground or high-precision engineering). The higher-order networks are measured with static GPS, using differential measurement to determine vectors between terrestrial points. These vectors then get adjusted in a traditional network fashion. A global polyhedron of permanently operating GPS stations under the auspices of the IERS is the basis for defining a single global, geocentric reference frame that serves as the "zero-order" (global) reference to which national measurements are attached.
Real-time kinematic positioning (RTK GPS) is employed frequently in survey mapping. In that measurement technique, unknown points can get quickly tied into nearby terrestrial known points.
One purpose of point positioning is the provision of known points for mapping measurements, also known as (horizontal and vertical) control. There can be thousands of those geodetically determined points in a country, usually documented by national mapping agencies. Surveyors involved in real estate and insurance will use these to tie their local measurements.
Geodetic problems
In geometrical geodesy, there are two main problems:
First geodetic problem (also known as direct or forward geodetic problem): given the coordinates of a point and the directional (azimuth) and distance to a second point, determine the coordinates of that second point.
Second geodetic problem (also known as inverse or reverse geodetic problem): given the coordinates of two points, determine the azimuth and length of the (straight, curved, or geodesic) line connecting those points.
The solutions to both problems in plane geometry reduce to simple trigonometry and are valid for small areas on Earth's surface; on a sphere, solutions become significantly more complex as, for example, in the inverse problem, the azimuths differ going between the two end points along the arc of the connecting great circle.
The general solution is called the geodesic for the surface considered, and the differential equations for the geodesic are solvable numerically. On the ellipsoid of revolution, geodesics are expressible in terms of elliptic integrals, which are usually evaluated in terms of a series expansion — see, for example, Vincenty's formulae.
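The sketch below solves the direct and inverse problems on a sphere, which is only a rough approximation that ignores the ellipsoidal flattening discussed above; for precise work one would use an ellipsoidal algorithm such as Vincenty's formulae. The mean Earth radius and the test coordinates are assumptions of the example.

```python
import math

R = 6371000.0   # mean Earth radius in metres (spherical approximation)

def direct(lat1, lon1, azimuth, distance):
    """First (direct) geodetic problem on a sphere. Angles in degrees, distance in metres."""
    phi1, lam1, alpha = map(math.radians, (lat1, lon1, azimuth))
    delta = distance / R
    phi2 = math.asin(math.sin(phi1) * math.cos(delta) +
                     math.cos(phi1) * math.sin(delta) * math.cos(alpha))
    lam2 = lam1 + math.atan2(math.sin(alpha) * math.sin(delta) * math.cos(phi1),
                             math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)

def inverse(lat1, lon1, lat2, lon2):
    """Second (inverse) geodetic problem on a sphere: azimuth at point 1 and distance."""
    phi1, lam1, phi2, lam2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlam = lam2 - lam1
    delta = math.acos(math.sin(phi1) * math.sin(phi2) +
                      math.cos(phi1) * math.cos(phi2) * math.cos(dlam))
    alpha = math.atan2(math.sin(dlam) * math.cos(phi2),
                       math.cos(phi1) * math.sin(phi2) -
                       math.sin(phi1) * math.cos(phi2) * math.cos(dlam))
    return math.degrees(alpha) % 360.0, R * delta

lat2, lon2 = direct(60.0, 25.0, 45.0, 100000.0)
print(inverse(60.0, 25.0, lat2, lon2))   # should return roughly (45.0, 100000.0)
```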
Observational concepts
As defined in geodesy (and also astronomy), some basic observational concepts like angles and coordinates include (most commonly from the viewpoint of a local observer):
Plumbline or vertical: (the line along) the direction of local gravity.
Zenith: the (direction to the) intersection of the upwards-extending gravity vector at a point and the celestial sphere.
Nadir: the (direction to the) antipodal point where the downward-extending gravity vector intersects the (obscured) celestial sphere.
Celestial horizon: a plane perpendicular to the gravity vector at a point.
Azimuth: the direction angle within the plane of the horizon, typically counted clockwise from the north (in geodesy and astronomy) or the south (in France).
Elevation: the angular height of an object above the horizon; alternatively: zenith distance equal to 90 degrees minus elevation.
Local topocentric coordinates: azimuth (direction angle within the plane of the horizon), elevation angle (or zenith angle), distance.
North celestial pole: the extension of Earth's (precessing and nutating) instantaneous spin axis extended northward to intersect the celestial sphere. (Similarly for the south celestial pole.)
Celestial equator: the (instantaneous) intersection of Earth's equatorial plane with the celestial sphere.
Meridian plane: any plane perpendicular to the celestial equator and containing the celestial poles.
Local meridian: the plane which contains the direction to the zenith and the celestial pole.
Measurements
The reference surface (level) used to determine height differences and height reference systems is known as mean sea level. The traditional spirit level directly produces such (for practical purposes most useful) heights above sea level; the more economical use of GPS instruments for height determination requires precise knowledge of the figure of the geoid, as GPS only gives heights above the GRS80 reference ellipsoid. As geoid determination improves, one may expect that the use of GPS in height determination shall increase, too.
The theodolite is an instrument used to measure horizontal and vertical (relative to the local vertical) angles to target points. In addition, the tachymeter determines, electronically or electro-optically, the distance to a target and is highly automated or even robotic in operations. Widely used for the same purpose is the method of free station position.
Commonly for local detail surveys, tachymeters are employed, although the old-fashioned rectangular technique using an angle prism and steel tape is still an inexpensive alternative. As mentioned, also there are quick and relatively accurate real-time kinematic (RTK) GPS techniques. Data collected are tagged and recorded digitally for entry into Geographic Information System (GIS) databases.
Geodetic GNSS (most commonly GPS) receivers directly produce 3D coordinates in a geocentric coordinate frame. One such frame is WGS84, as well as frames by the International Earth Rotation and Reference Systems Service (IERS). GNSS receivers have almost completely replaced terrestrial instruments for large-scale base network surveys.
To monitor the Earth's rotation irregularities and plate tectonic motions and for planet-wide geodetic surveys, methods of very-long-baseline interferometry (VLBI) measuring distances to quasars, lunar laser ranging (LLR) measuring distances to prisms on the Moon, and satellite laser ranging (SLR) measuring distances to prisms on artificial satellites, are employed.
Gravity is measured using gravimeters, of which there are two kinds. First are absolute gravimeters, based on measuring the acceleration of free fall (e.g., of a reflecting prism in a vacuum tube). They are used to establish vertical geospatial control or in the field. Second, relative gravimeters are spring-based and more common. They are used in gravity surveys over large areas — to establish the figure of the geoid over these areas. The most accurate relative gravimeters are called superconducting gravimeters, which are sensitive to one-thousandth of one-billionth of Earth-surface gravity. Twenty-some superconducting gravimeters are used worldwide in studying Earth's tides, rotation, interior, oceanic and atmospheric loading, as well as in verifying the Newtonian constant of gravitation.
In the future, gravity and altitude might become measurable using the special-relativistic concept of time dilation as gauged by optical clocks.
Units and measures on the ellipsoid
Geographical latitude and longitude are stated in the units degree, minute of arc, and second of arc. They are angles, not metric
measures, and describe the direction of the local normal to the reference ellipsoid of revolution. This direction is approximately the same as the direction of the plumbline, i.e., local gravity, which is also the normal to the geoid surface. For this reason, astronomical position determination – measuring the direction of the plumbline by astronomical means – works reasonably well when one also uses an ellipsoidal model of the figure of the Earth.
One geographical mile, defined as one minute of arc on the equator, equals 1,855.32571922 m. One nautical mile is one minute of astronomical latitude. The radius of curvature of the ellipsoid varies with latitude, being longest at the pole and shortest at the equator, and the length of the nautical mile varies with it.
A metre was originally defined as the 10-millionth part of the length from the equator to the North Pole along the meridian through Paris (the target was not quite reached in actual implementation, as it is off by 200 ppm in the current definitions). This situation means that one kilometre roughly equals (1/40,000) * 360 * 60 meridional minutes of arc, or 0.54 nautical miles. (This is not exactly so as the two units had been defined on different bases, so the international nautical mile is 1,852 m exactly, which corresponds to the rounding of 1,000/0.54 m to four digits).
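The rough arithmetic in this paragraph can be checked directly; the snippet below merely reproduces the quoted numbers.

```python
# Check the rough metre / nautical-mile relations quoted above.
minutes_per_km = 360 * 60 / 40000      # meridional minutes of arc in one kilometre
print(minutes_per_km)                  # 0.54
print(1000 / minutes_per_km)           # about 1851.85 m, rounded to the 1852 m nautical mile
print(1852 * 0.54)                     # about 1000 m, the reverse check
```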
Temporal changes
Various techniques are used in geodesy to study temporally changing surfaces, bodies of mass, physical fields, and dynamical systems. Points on Earth's surface change their location due to a variety of mechanisms:
Continental plate motion, plate tectonics
The episodic motion of tectonic origin, especially close to fault lines
Periodic effects due to tides and tidal loading
Postglacial land uplift due to isostatic adjustment
Mass variations due to hydrological changes, including the atmosphere, cryosphere, land hydrology, and oceans
Sub-daily polar motion
Length-of-day variability
Earth's center-of-mass (geocenter) variations
Anthropogenic movements such as reservoir construction or petroleum or water extraction
Geodynamics is the discipline that studies deformations and motions of Earth's crust and its solidity as a whole. Often the study of Earth's irregular rotation is included in the above definition. Geodynamical studies require terrestrial reference frames realized by the stations belonging to the Global Geodetic Observing System (GGOS).
Techniques for studying geodynamic phenomena on global scales include:
Satellite positioning by GPS, GLONASS, Galileo, and BeiDou
Very-long-baseline interferometry (VLBI)
Satellite laser ranging (SLR) and lunar laser ranging (LLR)
DORIS
Regionally and locally precise leveling
Precise tachymeters
Monitoring of gravity change using land, airborne, shipborne, and spaceborne gravimetry
Satellite altimetry based on microwave and laser observations for studying the ocean surface, sea level rise, and ice cover monitoring
Interferometric synthetic aperture radar (InSAR) using satellite images.
Notable geodesists
See also
Fundamentals
Geodesy (book)
Concepts and Techniques in Modern Geography
Geodesics on an ellipsoid
History of geodesy
Physical geodesy
Earth's circumference
Physics
Geosciences
Governmental agencies
National mapping agencies
U.S. National Geodetic Survey
National Geospatial-Intelligence Agency
Ordnance Survey
United States Coast and Geodetic Survey
United States Geological Survey
International organizations
International Union of Geodesy and Geophysics (IUGG)
International Association of Geodesy (IAG)
International Federation of Surveyors (FIG)
International Geodetic Student Organisation (IGSO)
Other
EPSG Geodetic Parameter Dataset
Meridian arc
Surveying
References
Further reading
F. R. Helmert, Mathematical and Physical Theories of Higher Geodesy, Part 1, ACIC (St. Louis, 1964). This is an English translation of Die mathematischen und physikalischen Theorieen der höheren Geodäsie, Vol 1 (Teubner, Leipzig, 1880).
F. R. Helmert, Mathematical and Physical Theories of Higher Geodesy, Part 2, ACIC (St. Louis, 1964). This is an English translation of Die mathematischen und physikalischen Theorieen der höheren Geodäsie, Vol 2 (Teubner, Leipzig, 1884).
B. Hofmann-Wellenhof and H. Moritz, Physical Geodesy, Springer-Verlag Wien, 2005. (This text is an updated edition of the 1967 classic by W.A. Heiskanen and H. Moritz).
W. Kaula, Theory of Satellite Geodesy : Applications of Satellites to Geodesy, Dover Publications, 2000. (This text is a reprint of the 1966 classic).
Vaníček P. and E.J. Krakiwsky, Geodesy: the Concepts, pp. 714, Elsevier, 1986.
Torge, W (2001), Geodesy (3rd edition), published by de Gruyter.
Thomas H. Meyer, Daniel R. Roman, and David B. Zilkoski. "What does height really mean?" (This is a series of four articles published in Surveying and Land Information Science, SaLIS.)
"Part I: Introduction" SaLIS Vol. 64, No. 4, pages 223–233, December 2004.
"Part II: Physics and gravity" SaLIS Vol. 65, No. 1, pages 5–15, March 2005.
"Part III: Height systems" SaLIS Vol. 66, No. 2, pages 149–160, June 2006.
"Part IV: GPS heighting" SaLIS Vol. 66, No. 3, pages 165–183, September 2006.
External links
Geodetic awareness guidance note, Geodesy Subcommittee, Geomatics Committee, International Association of Oil & Gas Producers
Earth sciences
Cartography
Measurement
Navigation
Applied mathematics
Articles containing video clips | 0.788693 | 0.996703 | 0.786092 |
Lorentz force | In physics, specifically in electromagnetism, the Lorentz force law is the combination of electric and magnetic force on a point charge due to electromagnetic fields. The Lorentz force, on the other hand, is a physical effect that occurs in the vicinity of electrically neutral, current-carrying conductors causing moving electrical charges to experience a magnetic force.
The Lorentz force law states that a particle of charge q moving with a velocity v in an electric field E and a magnetic field B experiences a force (in SI units) of
F = q(E + v × B)
It says that the electromagnetic force on a charge is a combination of (1) a force in the direction of the electric field (proportional to the magnitude of the field and the quantity of charge), and (2) a force at right angles to both the magnetic field and the velocity of the charge (proportional to the magnitude of the field, the charge, and the velocity).
Variations on this basic formula describe the magnetic force on a current-carrying wire (sometimes called Laplace force), the electromotive force in a wire loop moving through a magnetic field (an aspect of Faraday's law of induction), and the force on a moving charged particle.
Historians suggest that the law is implicit in a paper by James Clerk Maxwell, published in 1865. Hendrik Lorentz arrived at a complete derivation in 1895, identifying the contribution of the electric force a few years after Oliver Heaviside correctly identified the contribution of the magnetic force.
Lorentz force law as the definition of E and B
In many textbook treatments of classical electromagnetism, the Lorentz force law is used as the definition of the electric and magnetic fields E and B. To be specific, the Lorentz force is understood to be the following empirical statement:
The electromagnetic force on a test charge at a given point and time is a certain function of its charge q and velocity v, which can be parameterized by exactly two vectors E and B, in the functional form:
F = q(E + v × B)
This is valid even for particles approaching the speed of light (that is, for any magnitude of v up to c). So the two vector fields E and B are thereby defined throughout space and time, and these are called the "electric field" and "magnetic field". The fields are defined everywhere in space and time with respect to what force a test charge would receive, regardless of whether a charge is present to experience the force.
As a definition of and , the Lorentz force is only a definition in principle because a real particle (as opposed to the hypothetical "test charge" of infinitesimally-small mass and charge) would generate its own finite and fields, which would alter the electromagnetic force that it experiences. In addition, if the charge experiences acceleration, as if forced into a curved trajectory, it emits radiation that causes it to lose kinetic energy. See for example Bremsstrahlung and synchrotron light. These effects occur through both a direct effect (called the radiation reaction force) and indirectly (by affecting the motion of nearby charges and currents).
Physical interpretation of the Lorentz force
Coulomb's law is only valid for point charges at rest. In fact, the electromagnetic force between two point charges depends not only on the distance but also on the relative velocity. For small relative velocities and very small accelerations, instead of the Coulomb force, the Weber force can be applied. The sum of the Weber forces of all charge carriers in a closed DC loop on a single test charge produces - regardless of the shape of the current loop - the Lorentz force.
The interpretation of magnetism by means of a modified Coulomb law was first proposed by Carl Friedrich Gauss. In 1835, Gauss assumed that each segment of a DC loop contains an equal number of negative and positive point charges that move at different speeds. If Coulomb's law were completely correct, no force should act between any two short segments of such current loops. However, around 1825, André-Marie Ampère demonstrated experimentally that this is not the case. Ampère also formulated a force law. Based on this law, Gauss concluded that the electromagnetic force between two point charges depends not only on the distance but also on the relative velocity.
The Weber force is a central force and complies with Newton's third law. This demonstrates not only the conservation of momentum but also that the conservation of energy and the conservation of angular momentum apply. Weber electrodynamics is only a quasistatic approximation, i.e. it should not be used for higher velocities and accelerations. However, the Weber force illustrates that the Lorentz force can be traced back to central forces between numerous point-like charge carriers.
Equation
Charged particle
The force F acting on a particle of electric charge q with instantaneous velocity v, due to an external electric field E and magnetic field B, is given by (SI definition of quantities):
F = q(E + v × B)
where × is the vector cross product (all boldface quantities are vectors). In terms of Cartesian components, we have:
Fx = q(Ex + vy Bz − vz By)
Fy = q(Ey + vz Bx − vx Bz)
Fz = q(Ez + vx By − vy Bx)
In general, the electric and magnetic fields are functions of the position and time. Therefore, explicitly, the Lorentz force can be written as:
F(r, ṙ, t, q) = q[E(r, t) + ṙ × B(r, t)]
in which r is the position vector of the charged particle, t is time, and the overdot is a time derivative.
A positively charged particle will be accelerated in the same linear orientation as the E field, but will curve perpendicularly to both the instantaneous velocity vector v and the B field according to the right-hand rule (in detail, if the fingers of the right hand are extended to point in the direction of v and are then curled to point in the direction of B, then the extended thumb will point in the direction of F).
The term qE is called the electric force, while the term qv × B is called the magnetic force. According to some definitions, the term "Lorentz force" refers specifically to the formula for the magnetic force, with the total electromagnetic force (including the electric force) given some other (nonstandard) name. This article will not follow this nomenclature: in what follows, the term "Lorentz force" will refer to the expression for the total force.
The magnetic force component of the Lorentz force manifests itself as the force that acts on a current-carrying wire in a magnetic field. In that context, it is also called the Laplace force.
The Lorentz force is a force exerted by the electromagnetic field on the charged particle, that is, it is the rate at which linear momentum is transferred from the electromagnetic field to the particle. Associated with it is the power, which is the rate at which energy is transferred from the electromagnetic field to the particle. That power is
P = F · v = qE · v
Notice that the magnetic field does not contribute to the power because the magnetic force is always perpendicular to the velocity of the particle.
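A small numerical sketch of the force law and of the statement that the magnetic part does no work; the charge, field, and velocity values are arbitrary illustrative choices.

```python
import numpy as np

def lorentz_force(q, E, v, B):
    """F = q (E + v x B), SI units; inputs are 3-vectors."""
    return q * (E + np.cross(v, B))

q = 1.602e-19                         # elementary charge [C]
E = np.array([2.0e3, 0.0, 1.0e3])     # electric field [V/m] (illustrative)
B = np.array([0.0, 0.5, 0.0])         # magnetic field [T] (illustrative)
v = np.array([1.0e5, 0.0, 0.0])       # velocity [m/s] (illustrative)

F = lorentz_force(q, E, v, B)
F_magnetic = q * np.cross(v, B)
print("total force:", F)
print("magnetic force . velocity (should be ~0):", np.dot(F_magnetic, v))
print("power delivered, q E . v:", q * np.dot(E, v))
```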
Continuous charge distribution
For a continuous charge distribution in motion, the Lorentz force equation becomes:
dF = dq (E + v × B)
where dF is the force on a small piece of the charge distribution with charge dq. If both sides of this equation are divided by the volume dV of this small piece of the charge distribution, the result is:
f = ρ (E + v × B)
where f is the force density (force per unit volume) and ρ is the charge density (charge per unit volume). Next, the current density corresponding to the motion of the charge continuum is
J = ρ v
so the continuous analogue to the equation is
f = ρ E + J × B
The total force is the volume integral over the charge distribution:
F = ∫ (ρ E + J × B) dV
By eliminating and , using Maxwell's equations, and manipulating using the theorems of vector calculus, this form of the equation can be used to derive the Maxwell stress tensor , in turn this can be combined with the Poynting vector to obtain the electromagnetic stress–energy tensor T used in general relativity.
In terms of and , another way to write the Lorentz force (per unit volume) is
where is the speed of light and ∇· denotes the divergence of a tensor field. Rather than the amount of charge and its velocity in electric and magnetic fields, this equation relates the energy flux (flow of energy per unit time per unit distance) in the fields to the force exerted on a charge distribution. See Covariant formulation of classical electromagnetism for more details.
The density of power associated with the Lorentz force in a material medium is
If we separate the total charge and total current into their free and bound parts, we get that the density of the Lorentz force is
where: is the density of free charge; is the polarization density; is the density of free current; and is the magnetization density. In this way, the Lorentz force can explain the torque applied to a permanent magnet by the magnetic field. The density of the associated power is
Equations with Gaussian quantities
The above-mentioned formulae use the conventions for the definition of the electric and magnetic field used with the SI, which is the most common. However, other conventions with the same physics (i.e. forces on e.g. an electron) are possible and used. In the conventions used with the older CGS-Gaussian units, which are somewhat more common among some theoretical physicists as well as condensed matter experimentalists, one has instead
where c is the speed of light. Although this equation looks slightly different, it is equivalent, since one has the following relations:
where is the vacuum permittivity and the vacuum permeability. In practice, the subscripts "G" and "SI" are omitted, and the used convention (and unit) must be determined from context.
History
Early attempts to quantitatively describe the electromagnetic force were made in the mid-18th century. It was proposed that the force on magnetic poles (by Johann Tobias Mayer and others in 1760) and on electrically charged objects (by Henry Cavendish in 1762) obeyed an inverse-square law. However, in both cases the experimental proof was neither complete nor conclusive. It was not until 1784 that Charles-Augustin de Coulomb, using a torsion balance, was able to definitively show through experiment that this was true. Soon after the discovery in 1820 by Hans Christian Ørsted that a magnetic needle is acted on by a voltaic current, André-Marie Ampère that same year was able to devise through experimentation the formula for the angular dependence of the force between two current elements. In all these descriptions, the force was always described in terms of the properties of the matter involved and the distances between two masses or charges rather than in terms of electric and magnetic fields.
The modern concept of electric and magnetic fields first arose in the theories of Michael Faraday, particularly his idea of lines of force, later to be given full mathematical description by Lord Kelvin and James Clerk Maxwell. From a modern perspective it is possible to identify in Maxwell's 1865 formulation of his field equations a form of the Lorentz force equation in relation to electric currents, although in the time of Maxwell it was not evident how his equations related to the forces on moving charged objects. J. J. Thomson was the first to attempt to derive from Maxwell's field equations the electromagnetic forces on a moving charged object in terms of the object's properties and external fields. Interested in determining the electromagnetic behavior of the charged particles in cathode rays, Thomson published a paper in 1881 wherein he gave the force on the particles due to an external magnetic field as
Thomson derived the correct basic form of the formula, but, because of some miscalculations and an incomplete description of the displacement current, included an incorrect scale-factor of a half in front of the formula. Oliver Heaviside invented the modern vector notation and applied it to Maxwell's field equations; he also (in 1885 and 1889) had fixed the mistakes of Thomson's derivation and arrived at the correct form of the magnetic force on a moving charged object. Finally, in 1895, Hendrik Lorentz derived the modern form of the formula for the electromagnetic force which includes the contributions to the total force from both the electric and the magnetic fields. Lorentz began by abandoning the Maxwellian descriptions of the ether and conduction. Instead, Lorentz made a distinction between matter and the luminiferous aether and sought to apply the Maxwell equations at a microscopic scale. Using Heaviside's version of the Maxwell equations for a stationary ether and applying Lagrangian mechanics (see below), Lorentz arrived at the correct and complete form of the force law that now bears his name.
Trajectories of particles due to the Lorentz force
In many cases of practical interest, the motion in a magnetic field of an electrically charged particle (such as an electron or ion in a plasma) can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation.
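The decomposition into fast gyration about a guiding center plus a slow drift can be seen by integrating the equation of motion directly. The sketch below uses a Boris-type update in uniform E and B fields and compares the measured mean velocity with the E × B drift; the units, field values, and initial conditions are arbitrary.

```python
import numpy as np

# Charged-particle motion in uniform E and B fields (Boris scheme).
# Expect fast gyration about the guiding center plus a slow E x B drift.
q, m = 1.0, 1.0                       # arbitrary illustrative units
E = np.array([0.0, 1.0, 0.0])
B = np.array([0.0, 0.0, 2.0])
dt, n_steps = 0.01, 20000

x = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])
positions = np.empty((n_steps, 3))
for i in range(n_steps):
    v_minus = v + q * E / m * (dt / 2)
    t = q * B / m * (dt / 2)
    s = 2 * t / (1 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v = v_minus + np.cross(v_prime, s) + q * E / m * (dt / 2)
    x = x + v * dt
    positions[i] = x

drift_velocity = (positions[-1] - positions[0]) / (dt * (n_steps - 1))
print("measured drift :", drift_velocity)
print("E x B / |B|^2  :", np.cross(E, B) / np.dot(B, B))
```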
Significance of the Lorentz force
While the modern Maxwell's equations describe how electrically charged particles and currents or moving charged particles give rise to electric and magnetic fields, the Lorentz force law completes that picture by describing the force acting on a moving point charge q in the presence of electromagnetic fields. The Lorentz force law describes the effect of E and B upon a point charge, but such electromagnetic forces are not the entire picture. Charged particles are possibly coupled to other forces, notably gravity and nuclear forces. Thus, Maxwell's equations do not stand separate from other physical laws, but are coupled to them via the charge and current densities. The response of a point charge to the Lorentz law is one aspect; the generation of E and B by currents and charges is another.
In real materials the Lorentz force is inadequate to describe the collective behavior of charged particles, both in principle and as a matter of computation. The charged particles in a material medium not only respond to the E and B fields but also generate these fields. Complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, stellar evolution. An entire physical apparatus for dealing with these matters has developed. See for example, Green–Kubo relations and Green's function (many-body theory).
Force on a current-carrying wire
When a wire carrying an electric current is placed in a magnetic field, each of the moving charges, which comprise the current, experiences the Lorentz force, and together they can create a macroscopic force on the wire (sometimes called the Laplace force). By combining the Lorentz force law above with the definition of electric current, the following equation results, in the case of a straight stationary wire in a homogeneous field:
F = I ℓ × B
where ℓ is a vector whose magnitude is the length of the wire, and whose direction is along the wire, aligned with the direction of the conventional current I.
If the wire is not straight, the force on it can be computed by applying this formula to each infinitesimal segment of wire dℓ, then adding up all these forces by integration. This results in the same formal expression, but ℓ should now be understood as the vector connecting the end points of the curved wire, with direction from the starting point to the end point of the conventional current. Usually, there will also be a net torque.
If, in addition, the magnetic field is inhomogeneous, the net force on a stationary rigid wire carrying a steady current I is given by integration along the wire,
F = I ∫ dℓ × B
One application of this is Ampère's force law, which describes how two current-carrying wires can attract or repel each other, since each experiences a Lorentz force from the other's magnetic field.
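A numerical version of the line integral just described; the wire shape (a semicircle) and the field value are arbitrary choices. Because the field in this example is uniform, the result should also agree with F = I ℓ × B computed from the straight segment joining the end points.

```python
import numpy as np

# Force on a current-carrying wire: F = I * integral of dl x B along the wire.
# Example wire: a semicircle of radius 0.1 m in the x-y plane, uniform B field.
I = 2.0                                  # current [A] (illustrative)
B = np.array([0.0, 0.0, 1.5])            # uniform field [T] (illustrative)

theta = np.linspace(0.0, np.pi, 2001)
points = 0.1 * np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
dl = np.diff(points, axis=0)             # small segments along the wire

F = I * np.cross(dl, B).sum(axis=0)
print("integrated force:", F)

# Uniform field: the same as I * (end - start) x B
l_vec = points[-1] - points[0]
print("I * l x B       :", I * np.cross(l_vec, B))
```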
EMF
The magnetic force component of the Lorentz force is responsible for motional electromotive force (or motional EMF), the phenomenon underlying many electrical generators. When a conductor is moved through a magnetic field, the magnetic field exerts opposite forces on electrons and nuclei in the wire, and this creates the EMF. The term "motional EMF" is applied to this phenomenon, since the EMF is due to the motion of the wire.
In other electrical generators, the magnets move, while the conductors do not. In this case, the EMF is due to the electric force (qE) term in the Lorentz Force equation. The electric field in question is created by the changing magnetic field, resulting in an induced EMF, as described by the Maxwell–Faraday equation (one of the four modern Maxwell's equations).
Both of these EMFs, despite their apparently distinct origins, are described by the same equation, namely, the EMF is the rate of change of magnetic flux through the wire. (This is Faraday's law of induction, see below.) Einstein's special theory of relativity was partially motivated by the desire to better understand this link between the two effects. In fact, the electric and magnetic fields are different facets of the same electromagnetic field, and in moving from one inertial frame to another, the solenoidal vector field portion of the E-field can change in whole or in part to a B-field or vice versa.
Lorentz force and Faraday's law of induction
Given a loop of wire in a magnetic field, Faraday's law of induction states the induced electromotive force (EMF) in the wire is:
EMF = −dΦB/dt
where
ΦB is the magnetic flux through the loop, B is the magnetic field, Σ(t) is a surface bounded by the closed contour ∂Σ(t) at time t, and dA is an infinitesimal vector area element of Σ(t) (its magnitude is the area of an infinitesimal patch of surface, its direction orthogonal to that surface patch).
The sign of the EMF is determined by Lenz's law. Note that this is valid not only for a stationary wire but also for a moving wire.
From Faraday's law of induction (that is valid for a moving wire, for instance in a motor) and the Maxwell Equations, the Lorentz Force can be deduced. The reverse is also true, the Lorentz force and the Maxwell Equations can be used to derive the Faraday Law.
Let be the moving wire, moving together without rotation and with constant velocity and be the internal surface of the wire. The EMF around the closed path is given by:
where is the electric field and is an infinitesimal vector element of the contour .
NB: Both and have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin–Stokes theorem.
The above result can be compared with the version of Faraday's law of induction that appears in the modern Maxwell's equations, called here the Maxwell–Faraday equation:
The Maxwell–Faraday equation also can be written in an integral form using the Kelvin–Stokes theorem.
So we have, the Maxwell Faraday equation:
and the Faraday Law,
The two are equivalent if the wire is not moving. Using the Leibniz integral rule and the fact that ∇ · B = 0 results in,
and using the Maxwell Faraday equation,
since this is valid for any wire position it implies that,
Faraday's law of induction holds whether the loop of wire is rigid and stationary, or in motion or in process of deformation, and it holds whether the magnetic field is constant in time or changing. However, there are cases where Faraday's law is either inadequate or difficult to use, and application of the underlying Lorentz force law is necessary. See inapplicability of Faraday's law.
If the magnetic field is fixed in time and the conducting loop moves through the field, the magnetic flux linking the loop can change in several ways. For example, if the -field varies with position, and the loop moves to a location with different -field, will change. Alternatively, if the loop changes orientation with respect to the B-field, the differential element will change because of the different angle between and , also changing . As a third example, if a portion of the circuit is swept through a uniform, time-independent -field, and another portion of the circuit is held stationary, the flux linking the entire closed circuit can change due to the shift in relative position of the circuit's component parts with time (surface time-dependent). In all three cases, Faraday's law of induction then predicts the EMF generated by the change in .
Note that the Maxwell–Faraday equation implies that the electric field E is non-conservative when the magnetic field B varies in time: it is not expressible as the gradient of a scalar field, and it is not subject to the gradient theorem, since its curl is not zero.
Lorentz force in terms of potentials
The E and B fields can be replaced by the magnetic vector potential A and (scalar) electrostatic potential ϕ by
E = −∇ϕ − ∂A/∂t
B = ∇ × A
where ∇ is the gradient, ∇· is the divergence, and ∇× is the curl.
The force becomes
Using an identity for the triple product this can be rewritten as,
(Notice that the coordinates and the velocity components should be treated as independent variables, so the del operator acts only on not on thus, there is no need of using Feynman's subscript notation in the equation above). Using the chain rule, the total derivative of is:
so that the above expression becomes:
With , we can put the equation into the convenient Euler–Lagrange form
where and
Lorentz force and analytical mechanics
The Lagrangian for a charged particle of mass m and charge q in an electromagnetic field equivalently describes the dynamics of the particle in terms of its energy, rather than the force exerted on it. The classical expression is given by:
L = (1/2) m ṙ · ṙ + q A · ṙ − q ϕ
where A and ϕ are the potential fields as above. The quantity q(ϕ − A · ṙ) can be thought of as a velocity-dependent potential function. Using Lagrange's equations, the equation for the Lorentz force given above can be obtained again.
The potential energy depends on the velocity of the particle, so the force is velocity-dependent and therefore not conservative.
The relativistic Lagrangian is
The action is the relativistic arclength of the path of the particle in spacetime, minus the potential energy contribution, plus an extra contribution which quantum mechanically is an extra phase a charged particle gets when it is moving along a vector potential.
Relativistic form of the Lorentz force
Covariant form of the Lorentz force
Field tensor
Using the metric signature , the Lorentz force for a charge can be written in covariant form:
where is the four-momentum, defined as
the proper time of the particle, the contravariant electromagnetic tensor
and is the covariant 4-velocity of the particle, defined as:
in which
is the Lorentz factor.
The fields are transformed to a frame moving with constant relative velocity by:
where is the Lorentz transformation tensor.
Translation to vector notation
The component (x-component) of the force is
Substituting the components of the covariant electromagnetic tensor F yields
Using the components of covariant four-velocity yields
The calculation for (force components in the and directions) yields similar results, so collecting the 3 equations into one:
and since differentials in coordinate time and proper time are related by the Lorentz factor,
so we arrive at
This is precisely the Lorentz force law; however, it is important to note that p is the relativistic expression,
p = γ(v) m v
Lorentz force in spacetime algebra (STA)
The electric and magnetic fields are dependent on the velocity of an observer, so the relativistic form of the Lorentz force law can best be exhibited starting from a coordinate-independent expression for the electromagnetic and magnetic fields , and an arbitrary time-direction, . This can be settled through Space-Time Algebra (or the geometric algebra of space-time), a type of Clifford algebra defined on a pseudo-Euclidean space, as
and
is a space-time bivector (an oriented plane segment, just like a vector is an oriented line segment), which has six degrees of freedom corresponding to boosts (rotations in space-time planes) and rotations (rotations in space-space planes). The dot product with the vector pulls a vector (in the space algebra) from the translational part, while the wedge-product creates a trivector (in the space algebra) which is dual to a vector, namely the usual magnetic field vector.
The relativistic velocity is given by the (time-like) changes in a time-position vector where
(which shows our choice for the metric) and the velocity is
The proper (invariant is an inadequate term because no transformation has been defined) form of the Lorentz force law is simply
Note that the order is important because between a bivector and a vector the dot product is anti-symmetric. Upon a spacetime split like one can obtain the velocity, and fields as above yielding the usual expression.
Lorentz force in general relativity
In the general theory of relativity the equation of motion for a particle with mass and charge , moving in a space with metric tensor and electromagnetic field , is given as
where ( is taken along the trajectory), and
The equation can also be written as
where is the Christoffel symbol (of the torsion-free metric connection in general relativity), or as
where is the covariant differential in general relativity (metric, torsion-free).
Applications
The Lorentz force occurs in many devices, including:
Cyclotrons and other circular path particle accelerators
Mass spectrometers
Velocity Filters
Magnetrons
Lorentz force velocimetry
In its manifestation as the Laplace force on an electric current in a conductor, this force occurs in many devices including:
Electric motors
Railguns
Linear motors
Loudspeakers
Magnetoplasmadynamic thrusters
Electrical generators
Homopolar generators
Linear alternators
See also
Hall effect
Electromagnetism
Gravitomagnetism
Ampère's force law
Hendrik Lorentz
Maxwell's equations
Formulation of Maxwell's equations in special relativity
Moving magnet and conductor problem
Abraham–Lorentz force
Larmor formula
Cyclotron radiation
Magnetoresistance
Scalar potential
Helmholtz decomposition
Guiding center
Field line
Coulomb's law
Electromagnetic buoyancy
Footnotes
References
The numbered references refer in part to the list immediately below.
: volume 2.
External links
Lorentz force (demonstration)
Interactive Java applet on the magnetic deflection of a particle beam in a homogeneous magnetic field by Wolfgang Bauer
Physical phenomena
Electromagnetism
Maxwell's equations
Hendrik Lorentz | 0.787161 | 0.998506 | 0.785985 |
Fluctuation–dissipation theorem | The fluctuation–dissipation theorem (FDT) or fluctuation–dissipation relation (FDR) is a powerful tool in statistical physics for predicting the behavior of systems that obey detailed balance. Given that a system obeys detailed balance, the theorem is a proof that thermodynamic fluctuations in a physical variable predict the response quantified by the admittance or impedance (in their general sense, not only in electromagnetic terms) of the same physical variable (like voltage, temperature difference, etc.), and vice versa. The fluctuation–dissipation theorem applies both to classical and quantum mechanical systems.
The fluctuation–dissipation theorem was proven by Herbert Callen and Theodore Welton in 1951 and expanded by Ryogo Kubo. There are antecedents to the general theorem, including Einstein's explanation of Brownian motion during his annus mirabilis and Harry Nyquist's explanation in 1928 of Johnson noise in electrical resistors.
Qualitative overview and examples
The fluctuation–dissipation theorem says that when there is a process that dissipates energy, turning it into heat (e.g., friction), there is a reverse process related to thermal fluctuations. This is best understood by considering some examples:
Drag and Brownian motion
If an object is moving through a fluid, it experiences drag (air resistance or fluid resistance). Drag dissipates kinetic energy, turning it into heat. The corresponding fluctuation is Brownian motion. An object in a fluid does not sit still, but rather moves around with a small and rapidly-changing velocity, as molecules in the fluid bump into it. Brownian motion converts heat energy into kinetic energy—the reverse of drag.
Resistance and Johnson noise
If electric current is running through a wire loop with a resistor in it, the current will rapidly go to zero because of the resistance. Resistance dissipates electrical energy, turning it into heat (Joule heating). The corresponding fluctuation is Johnson noise. A wire loop with a resistor in it does not actually have zero current, it has a small and rapidly-fluctuating current caused by the thermal fluctuations of the electrons and atoms in the resistor. Johnson noise converts heat energy into electrical energy—the reverse of resistance.
Light absorption and thermal radiation
When light impinges on an object, some fraction of the light is absorbed, making the object hotter. In this way, light absorption turns light energy into heat. The corresponding fluctuation is thermal radiation (e.g., the glow of a "red hot" object). Thermal radiation turns heat energy into light energy—the reverse of light absorption. Indeed, Kirchhoff's law of thermal radiation confirms that the more effectively an object absorbs light, the more thermal radiation it emits.
Examples in detail
The fluctuation–dissipation theorem is a general result of statistical thermodynamics that quantifies the relation between the fluctuations in a system that obeys detailed balance and the response of the system to applied perturbations.
Brownian motion
For example, Albert Einstein noted in his 1905 paper on Brownian motion that the same random forces that cause the erratic motion of a particle in Brownian motion would also cause drag if the particle were pulled through the fluid. In other words, the fluctuation of the particle at rest has the same origin as the dissipative frictional force one must do work against, if one tries to perturb the system in a particular direction.
From this observation Einstein was able to use statistical mechanics to derive the Einstein–Smoluchowski relation
D = μ kB T
which connects the diffusion constant D and the particle mobility μ, the ratio of the particle's terminal drift velocity to an applied force; kB is the Boltzmann constant, and T is the absolute temperature.
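As a rough numerical illustration (a sketch added here, not part of the original text), the Einstein–Smoluchowski relation can be combined with Stokes drag, which is an additional assumption, to estimate the diffusion constant of a small sphere in water; the radius, viscosity and temperature below are illustrative values.

```python
import math

# Illustrative estimate (assumed values): diffusion of a 1-micrometre-diameter
# sphere in water at room temperature, using D = mu * kB * T with the mobility
# taken from Stokes drag, mu = 1 / (6 * pi * eta * r).
kB = 1.380649e-23    # Boltzmann constant, J/K
T = 298.0            # absolute temperature, K
eta = 1.0e-3         # viscosity of water, Pa*s (approximate)
r = 0.5e-6           # particle radius, m

mu = 1.0 / (6.0 * math.pi * eta * r)   # mobility, m/(N*s)
D = mu * kB * T                        # diffusion constant, m^2/s

print(f"mobility           mu = {mu:.3e} m/(N s)")
print(f"diffusion constant D  = {D:.3e} m^2/s")   # ~4e-13 m^2/s
```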
Thermal noise in a resistor
In 1928, John B. Johnson discovered and Harry Nyquist explained Johnson–Nyquist noise. With no applied current, the mean-square voltage depends on the resistance R, the temperature T, and the bandwidth Δν over which the voltage is measured:
⟨V²⟩ = 4 kB T R Δν
This observation can be understood through the lens of the fluctuation-dissipation theorem. Take, for example, a simple circuit consisting of a resistor with a resistance and a capacitor with a small capacitance . Kirchhoff's voltage law yields
and so the response function for this circuit is
In the low-frequency limit , its imaginary part is simply
which then can be linked to the power spectral density function of the voltage via the fluctuation-dissipation theorem
The Johnson–Nyquist voltage noise was observed within a small frequency bandwidth centered around a given frequency in the low-frequency regime; hence the mean-square voltage in that band reproduces the Johnson–Nyquist result quoted above.
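For a concrete feel for the size of this effect, the sketch below (not part of the original text) evaluates the Johnson–Nyquist result ⟨V²⟩ = 4 kB T R Δν for an assumed resistance, temperature and measurement bandwidth.

```python
import math

# Illustrative values; only the formula <V^2> = 4*kB*T*R*df is taken from the text.
kB = 1.380649e-23    # J/K
T = 300.0            # K
R = 10e3             # ohm
df = 10e3            # Hz, measurement bandwidth

v_rms = math.sqrt(4.0 * kB * T * R * df)
print(f"rms Johnson noise voltage: {v_rms * 1e6:.2f} microvolts")   # ~1.3 uV
```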
General formulation
The fluctuation–dissipation theorem can be formulated in many ways; one particularly useful form is the following:
Let be an observable of a dynamical system with Hamiltonian subject to thermal fluctuations.
The observable will fluctuate around its mean value
with fluctuations characterized by a power spectrum .
Suppose that we can switch on a time-varying, spatially constant field which alters the Hamiltonian
to .
The response of the observable to a time-dependent field is
characterized to first order by the susceptibility or linear response function
of the system
where the perturbation is adiabatically (very slowly) switched on at .
The fluctuation–dissipation theorem relates the two-sided power spectrum (i.e. both positive and negative frequencies) of to the imaginary part of the Fourier transform of the susceptibility :
which holds under the Fourier transform convention . The left-hand side describes fluctuations in , the right-hand side is closely related to the energy dissipated by the system when pumped by an oscillatory field . The spectrum of fluctuations reveals the linear response, because past fluctuations cause future fluctuations via a linear response of the system upon itself.
This is the classical form of the theorem; quantum fluctuations are taken into account by replacing with (whose limit for is ). A proof can be found by means of the LSZ reduction, an identity from quantum field theory.
The fluctuation–dissipation theorem can be generalized in a straightforward way to the case of space-dependent fields, to the case of several variables or to a quantum-mechanics setting.
Derivation
Classical version
We derive the fluctuation–dissipation theorem in the form given above, using the same notation.
Consider the following test case: the field f has been on for infinite time and is switched off at t=0
where is the Heaviside function.
We can express the expectation value of by the probability distribution W(x,0) and the transition probability
The probability distribution function W(x,0) is an equilibrium distribution and hence
given by the Boltzmann distribution for the Hamiltonian
where .
For a weak field , we can expand the right-hand side
here is the equilibrium distribution in the absence of a field.
Plugging this approximation in the formula for yields
where A(t) is the auto-correlation function of x in the absence of a field:
Note that in the absence of a field the system is invariant under time-shifts.
We can rewrite using the susceptibility
of the system and hence find with the above equation (*)
Consequently,
To make a statement about frequency dependence, it is necessary to take the Fourier transform of equation (**). By integrating by parts, it is possible to show that
Since is real and symmetric, it follows that
Finally, for stationary processes, the Wiener–Khinchin theorem states that the two-sided spectral density is equal to the Fourier transform of the auto-correlation function:
Therefore, it follows that the two-sided power spectrum of the fluctuations equals 2kBT/ω times the imaginary part of the Fourier transform of the susceptibility, which is precisely the fluctuation–dissipation theorem stated above.
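As a quick numerical sanity check of this classical relation (a sketch added here, not part of the original derivation), one can take the overdamped Brownian harmonic oscillator, for which the susceptibility χ(ω) = 1/(k - iγω) and the Lorentzian power spectrum are standard results assumed below; the stiffness and drag values are arbitrary illustrative choices.

```python
import numpy as np

# Overdamped Brownian harmonic oscillator, gamma*dx/dt = -k*x + thermal noise.
# Assumed textbook results: chi(omega) = 1/(k - i*gamma*omega) and
# S_x(omega) = 2*gamma*kB*T / (k^2 + gamma^2*omega^2).
kB_T = 4.11e-21    # J, thermal energy at about 298 K
k = 1e-6           # N/m, trap stiffness (illustrative)
gamma = 1e-8       # N*s/m, drag coefficient (illustrative)

omega = np.logspace(0, 6, 200)                              # rad/s
chi = 1.0 / (k - 1j * gamma * omega)                        # response function
S_fdt = (2.0 * kB_T / omega) * chi.imag                     # FDT prediction
S_known = 2.0 * gamma * kB_T / (k**2 + (gamma * omega)**2)  # known spectrum

print("max relative mismatch:", np.max(np.abs(S_fdt - S_known) / S_known))  # ~1e-16
```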
Quantum version
The fluctuation-dissipation theorem relates the correlation function of the observable of interest (a measure of fluctuation) to the imaginary part of the response function in the frequency domain (a measure of dissipation). A link between these quantities can be found through the so-called Kubo formula
which follows, under the assumptions of the linear response theory, from the time evolution of the ensemble average of the observable in the presence of a perturbing source. Once Fourier transformed, the Kubo formula allows writing the imaginary part of the response function as
In the canonical ensemble, the second term can be re-expressed as
where in the second equality we re-positioned using the cyclic property of trace. Next, in the third equality, we inserted next to the trace and interpreted as a time evolution operator with imaginary time interval . The imaginary time shift turns into a factor after Fourier transform
and thus the expression for can be easily rewritten as the quantum fluctuation-dissipation relation
where the power spectral density is the Fourier transform of the auto-correlation and is the Bose-Einstein distribution function. The same calculation also yields
thus, unlike the classical case, the power spectral density is not exactly frequency-symmetric in the quantum limit. Consistently, has an imaginary part originating from the commutation rules of operators. The additional term in the expression of at positive frequencies can also be thought of as linked to spontaneous emission. An often cited result is also the symmetrized power spectral density
The extra term can be thought of as linked to quantum fluctuations, or to the zero-point motion of the observable . At high enough temperatures the quantum contribution is negligible, and we recover the classical version.
Violations in glassy systems
While the fluctuation–dissipation theorem provides a general relation between the fluctuations and the response of systems obeying detailed balance, when detailed balance is violated the comparison of fluctuations to dissipation is more complex. Below the so-called glass temperature, glassy systems are not equilibrated and slowly approach their equilibrium state. This slow approach to equilibrium is synonymous with the violation of detailed balance. Thus these systems require large time-scales to be studied while they slowly move toward equilibrium.
To study the violation of the fluctuation-dissipation relation in glassy systems, particularly spin glasses, researchers have performed numerical simulations of macroscopic systems (i.e. large compared to their correlation lengths) described by the three-dimensional Edwards-Anderson model using supercomputers. In their simulations, the system is initially prepared at a high temperature, rapidly cooled to a temperature below the glass temperature , and left to equilibrate for a very long time under a magnetic field . Then, at a later time , two dynamical observables are probed, namely the response function
and the spin-temporal correlation function
where is the spin living on the node of the cubic lattice of volume , and is the magnetization density. The fluctuation-dissipation relation in this system can be written in terms of these observables as
Their results confirm the expectation that as the system is left to equilibrate for longer times, the fluctuation-dissipation relation comes closer to being satisfied.
In the mid-1990s, in the study of dynamics of spin glass models, a generalization of the fluctuation–dissipation theorem was discovered that holds for asymptotic non-stationary states, where the temperature appearing in the equilibrium relation is substituted by an effective temperature with a non-trivial dependence on the time scales. This relation is proposed to hold in glassy systems beyond the models for which it was initially found.
See also
Non-equilibrium thermodynamics
Green–Kubo relations
Onsager reciprocal relations
Equipartition theorem
Boltzmann distribution
Dissipative system
Notes
References
Further reading
Audio recording of a lecture by Prof. E. W. Carlson of Purdue University
Kubo's famous text: Fluctuation-dissipation theorem
Statistical mechanics
Non-equilibrium thermodynamics
Physics theorems
Statistical mechanics theorems | 0.794478 | 0.989254 | 0.78594 |
Radiation pressure | Radiation pressure (also known as light pressure) is mechanical pressure exerted upon a surface due to the exchange of momentum between the object and the electromagnetic field. This includes the momentum of light or electromagnetic radiation of any wavelength that is absorbed, reflected, or otherwise emitted (e.g. black-body radiation) by matter on any scale (from macroscopic objects to dust particles to gas molecules). The associated force is called the radiation pressure force, or sometimes just the force of light.
The forces generated by radiation pressure are generally too small to be noticed under everyday circumstances; however, they are important in some physical processes and technologies. This particularly includes objects in outer space, where it is usually the main force acting on objects besides gravity, and where the net effect of a tiny force may have a large cumulative effect over long periods of time. For example, had the effects of the Sun's radiation pressure on the spacecraft of the Viking program been ignored, the spacecraft would have missed Mars orbit by about . Radiation pressure from starlight is crucial in a number of astrophysical processes as well. The significance of radiation pressure increases rapidly at extremely high temperatures and can sometimes dwarf the usual gas pressure, for instance, in stellar interiors and thermonuclear weapons. Furthermore, large lasers operating in space have been suggested as a means of propelling sail craft in beam-powered propulsion.
Radiation pressure forces are the bedrock of laser technology and the branches of science that rely heavily on lasers and other optical technologies. That includes, but is not limited to, biomicroscopy (where light is used to irradiate and observe microbes, cells, and molecules), quantum optics, and optomechanics (where light is used to probe and control objects like atoms, qubits and macroscopic quantum objects). Direct applications of the radiation pressure force in these fields are, for example, laser cooling (the subject of the 1997 Nobel Prize in Physics), quantum control of macroscopic objects and atoms (2012 Nobel Prize in Physics), interferometry (2017 Nobel Prize in Physics) and optical tweezers (2018 Nobel Prize in Physics).
Radiation pressure can equally well be accounted for by considering the momentum of a classical electromagnetic field or in terms of the momenta of photons, particles of light. The interaction of electromagnetic waves or photons with matter may involve an exchange of momentum. Due to the law of conservation of momentum, any change in the total momentum of the waves or photons must involve an equal and opposite change in the momentum of the matter it interacted with (Newton's third law of motion), as is illustrated in the accompanying figure for the case of light being perfectly reflected by a surface. This transfer of momentum is the general explanation for what we term radiation pressure.
Discovery
Johannes Kepler put forward the concept of radiation pressure in 1619 to explain the observation that a tail of a comet always points away from the Sun.
The assertion that light, as electromagnetic radiation, has the property of momentum and thus exerts a pressure upon any surface that is exposed to it was published by James Clerk Maxwell in 1862, and proven experimentally by Russian physicist Pyotr Lebedev in 1900 and by Ernest Fox Nichols and Gordon Ferrie Hull in 1901. The pressure is very small, but can be detected by allowing the radiation to fall upon a delicately poised vane of reflective metal in a Nichols radiometer (this should not be confused with the Crookes radiometer, whose characteristic motion is not caused by radiation pressure but by air flow caused by temperature differentials.)
Theory
Radiation pressure can be viewed as a consequence of the conservation of momentum given the momentum attributed to electromagnetic radiation. That momentum can be equally well calculated on the basis of electromagnetic theory or from the combined momenta of a stream of photons, giving identical results as is shown below.
Radiation pressure from momentum of an electromagnetic wave
According to Maxwell's theory of electromagnetism, an electromagnetic wave carries momentum. Momentum will be transferred to any surface it strikes that absorbs or reflects the radiation.
Consider the momentum transferred to a perfectly absorbing (black) surface. The energy flux (irradiance) of a plane wave is calculated using the Poynting vector , which is the cross product of the electric field vector E and the magnetic field's auxiliary field vector (or magnetizing field) H. The magnitude, denoted by S, divided by the speed of light is the density of the linear momentum per unit area (pressure) of the electromagnetic field. So, dimensionally, the Poynting vector is , which is the speed of light, , times pressure, . That pressure is experienced as radiation pressure on the surface:
where is pressure (usually in pascals), is the incident irradiance (usually in W/m2) and is the speed of light in vacuum. Here, .
If the surface is planar at an angle α to the incident wave, the intensity across the surface will be geometrically reduced by the cosine of that angle and the component of the radiation force against the surface will also be reduced by the cosine of α, resulting in a pressure:
The momentum from the incident wave is in the same direction of that wave. But only the component of that momentum normal to the surface contributes to the pressure on the surface, as given above. The component of that force tangent to the surface is not called pressure.
Radiation pressure from reflection
The above treatment for an incident wave accounts for the radiation pressure experienced by a black (totally absorbing) body. If the wave is specularly reflected, then the recoil due to the reflected wave will further contribute to the radiation pressure. In the case of a perfect reflector, this pressure will be identical to the pressure caused by the incident wave:
thus doubling the net radiation pressure on the surface:
For a partially reflective surface, the second term must be multiplied by the reflectivity (also known as reflection coefficient of intensity), so that the increase is less than double. For a diffusely reflective surface, the details of the reflection and geometry must be taken into account, again resulting in an increased net radiation pressure of less than double.
Radiation pressure by emission
Just as a wave reflected from a body contributes to the net radiation pressure experienced, a body that emits radiation of its own (rather than reflected) obtains a radiation pressure again given by the irradiance of that emission in the direction normal to the surface Ie:
The emission can be from black-body radiation or any other radiative mechanism. Since all materials emit black-body radiation (unless they are totally reflective or at absolute zero), this source for radiation pressure is ubiquitous but usually tiny. However, because black-body radiation increases rapidly with temperature (as the fourth power of temperature, given by the Stefan–Boltzmann law), radiation pressure due to the temperature of a very hot object (or due to incoming black-body radiation from similarly hot surroundings) can become significant. This is important in stellar interiors.
Radiation pressure in terms of photons
Electromagnetic radiation can be viewed in terms of particles rather than waves; these particles are known as photons. Photons do not have a rest-mass; however, photons are never at rest (they move at the speed of light) and acquire a momentum nonetheless which is given by:
where is momentum, is the Planck constant, is wavelength, and is speed of light in vacuum. And is the energy of a single photon given by:
The radiation pressure again can be seen as the transfer of each photon's momentum to the opaque surface, plus the momentum due to a (possible) recoil photon for a (partially) reflecting surface. Since an incident wave of irradiance over an area has a power of , this implies a flux of photons per second per unit area striking the surface. Combining this with the above expression for the momentum of a single photon, results in the same relationships between irradiance and radiation pressure described above using classical electromagnetics. And again, reflected or otherwise emitted photons will contribute to the net radiation pressure identically.
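The equivalence of the photon and wave pictures can be checked numerically; the sketch below (with an assumed irradiance roughly that of bright sunlight and an assumed wavelength) multiplies the photon flux by the single-photon momentum and compares the result with I/c.

```python
# Assumed irradiance and wavelength; physical constants are standard values.
h = 6.62607015e-34    # Planck constant, J*s
c = 299792458.0       # speed of light, m/s
lam = 500e-9          # wavelength, m
I = 1000.0            # irradiance, W/m^2

E_photon = h * c / lam        # energy per photon, J
p_photon = h / lam            # momentum per photon, kg*m/s
flux = I / E_photon           # photons per second per square metre

pressure = flux * p_photon    # momentum delivered per second per unit area
print(f"pressure from photon picture: {pressure:.4e} Pa")
print(f"I/c from the wave picture   : {I / c:.4e} Pa")   # identical
```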
Compression in a uniform radiation field
In general, the pressure of electromagnetic waves can be obtained from the vanishing of the trace of the electromagnetic stress tensor: since this trace equals 3P − u, we get
where is the radiation energy per unit volume.
This can also be shown in the specific case of the pressure exerted on surfaces of a body in thermal equilibrium with its surroundings, at a temperature : the body will be surrounded by a uniform radiation field described by the Planck black-body radiation law and will experience a compressive pressure due to that impinging radiation, its reflection, and its own black-body emission. From that it can be shown that the resulting pressure is equal to one third of the total radiant energy per unit volume in the surrounding space.
By using Stefan–Boltzmann law, this can be expressed as
where is the Stefan–Boltzmann constant.
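A short numerical sketch (with illustrative temperatures, not values from the text) of the resulting pressure P = 4σT⁴/(3c) follows.

```python
# Illustrative temperatures: room temperature, roughly the solar photosphere,
# and roughly the solar core. Only P = 4*sigma*T^4/(3*c) is taken from the text.
sigma = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
c = 299792458.0           # m/s

def radiation_pressure(T):
    """Pressure of isotropic black-body radiation at temperature T (K), in Pa."""
    return 4.0 * sigma * T**4 / (3.0 * c)

for T in (300.0, 6000.0, 1.5e7):
    print(f"T = {T:.3g} K  ->  P_rad = {radiation_pressure(T):.3e} Pa")
```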
Solar radiation pressure
Solar radiation pressure is due to the Sun's radiation at closer distances, thus especially within the Solar System. (The radiation pressure of sunlight on Earth is very small: it is equivalent to that exerted by the weight of about a milligram on an area of 1 square metre, or 10 μN/m2.) While it acts on all objects, its net effect is generally greater on smaller bodies, since they have a larger ratio of surface area to mass. All spacecraft experience such a pressure, except when they are behind the shadow of a larger orbiting body.
Solar radiation pressure on objects near the Earth may be calculated using the Sun's irradiance at 1 AU, known as the solar constant, or GSC, whose value is set at 1361 W/m2 as of 2011.
All stars have a spectral energy distribution that depends on their surface temperature. The distribution is approximately that of black-body radiation. This distribution must be taken into account when calculating the radiation pressure or identifying reflector materials for optimizing a solar sail, for instance.
Momentary or hours-long increases in solar radiation pressure do occur during solar flares and coronal mass ejections, but their effects remain essentially immeasurable in relation to Earth's orbit. These pressures do, however, persist over eons, and cumulatively they have produced a measurable movement of the Earth–Moon system's orbit.
Pressures of absorption and reflection
Solar radiation pressure at the Earth's distance from the Sun, may be calculated by dividing the solar constant GSC (above) by the speed of light c. For an absorbing sheet facing the Sun, this is simply:
This result is in pascals, equivalent to N/m2 (newtons per square meter). For a sheet at an angle α to the Sun, the effective area A of a sheet is reduced by a geometrical factor resulting in a force in the direction of the sunlight of:
To find the component of this force normal to the surface, another cosine factor must be applied resulting in a pressure P on the surface of:
Note, however, that in order to account for the net effect of solar radiation on a spacecraft for instance, one would need to consider the total force (in the direction away from the Sun) given by the preceding equation, rather than just the component normal to the surface that we identify as "pressure".
The solar constant is defined for the Sun's radiation at the distance to the Earth, also known as one astronomical unit (au). Consequently, at a distance of R astronomical units (R thus being dimensionless), applying the inverse-square law, we would find:
Finally, considering not an absorbing but a perfectly reflecting surface, the pressure is doubled due to the reflected wave, resulting in:
Note that unlike the case of an absorbing material, the resulting force on a reflecting body is given exactly by this pressure acting normal to the surface, with the tangential forces from the incident and reflecting waves canceling each other. In practice, materials are neither totally reflecting nor totally absorbing, so the resulting force will be a weighted average of the forces calculated using these formulas.
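Putting numbers to these formulas (a sketch; the sail area, the distance and the perfectly reflecting assumption are illustrative, while the solar-constant value is the one quoted above):

```python
# Solar constant from the text; sail area, distance and perfect reflection are
# assumptions of this sketch.
G_SC = 1361.0          # W/m^2 at 1 au
c = 299792458.0        # m/s

P_absorb = G_SC / c            # perfectly absorbing sheet facing the Sun
P_reflect = 2.0 * G_SC / c     # perfectly reflecting sheet facing the Sun

area = 100.0                   # m^2, illustrative sail area
R_au = 1.0                     # distance from the Sun in astronomical units
force = P_reflect * area / R_au**2

print(f"absorbing sheet : {P_absorb * 1e6:.2f} uPa")     # ~4.5 uPa
print(f"reflecting sheet: {P_reflect * 1e6:.2f} uPa")    # ~9.1 uPa
print(f"force on a {area:.0f} m^2 ideal sail at {R_au} au: {force * 1e3:.2f} mN")
```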
Radiation pressure perturbations
Solar radiation pressure is a source of orbital perturbations. It significantly affects the orbits and trajectories of small bodies including all spacecraft.
Solar radiation pressure affects bodies throughout much of the Solar System. Small bodies are more affected than large ones because of their lower mass relative to their surface area. Spacecraft are affected along with natural bodies (comets, asteroids, dust grains, gas molecules).
The radiation pressure results in forces and torques on the bodies that can change their translational and rotational motions. Translational changes affect the orbits of the bodies. Rotational rates may increase or decrease. Loosely aggregated bodies may break apart under high rotation rates. Dust grains can either leave the Solar System or spiral into the Sun.
A whole body is typically composed of numerous surfaces that have different orientations on the body. The facets may be flat or curved, they will have different areas, and they may have optical properties that differ from those of the other facets.
At any particular time, some facets are exposed to the Sun, and some are in shadow. Each surface exposed to the Sun is reflecting, absorbing, and emitting radiation. Facets in shadow are emitting radiation. The summation of pressures across all of the facets defines the net force and torque on the body. These can be calculated using the equations in the preceding sections.
The Yarkovsky effect affects the translation of a small body. It results from a face leaving solar exposure being at a higher temperature than a face approaching solar exposure. The radiation emitted from the warmer face is more intense than that of the opposite face, resulting in a net force on the body that affects its motion.
The YORP effect is a collection of effects expanding upon the earlier concept of the Yarkovsky effect, but of a similar nature. It affects the spin properties of bodies.
The Poynting–Robertson effect applies to grain-size particles. From the perspective of a grain of dust circling the Sun, the Sun's radiation appears to be coming from a slightly forward direction (aberration of light). Therefore, the absorption of this radiation leads to a force with a component against the direction of movement. (The angle of aberration is tiny, since the radiation is moving at the speed of light, while the dust grain is moving many orders of magnitude slower than that.) The result is a gradual spiral of dust grains into the Sun. Over long periods of time, this effect cleans out much of the dust in the Solar System.
While rather small in comparison to other forces, the radiation pressure force is inexorable. Over long periods of time, the net effect of the force is substantial. Such feeble pressures can produce marked effects upon minute particles like gas ions and electrons, and are essential in the theory of electron emission from the Sun, of cometary material, and so on.
Because the ratio of surface area to volume (and thus mass) increases with decreasing particle size, dusty (micrometre-size) particles are susceptible to radiation pressure even in the outer Solar System. For example, the evolution of the outer rings of Saturn is significantly influenced by radiation pressure.
As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction", which would oppose the movement of matter. He wrote: "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief."
Solar sails
Solar sailing, an experimental method of spacecraft propulsion, uses radiation pressure from the Sun as a motive force. The idea of interplanetary travel by light was mentioned by Jules Verne in his 1865 novel From the Earth to the Moon.
A sail reflects about 90% of the incident radiation. The 10% that is absorbed is radiated away from both surfaces, with the proportion emitted from the unlit surface depending on the thermal conductivity of the sail. A sail has curvature, surface irregularities, and other minor factors that affect its performance.
The Japan Aerospace Exploration Agency (JAXA) has successfully unfurled a solar sail in space, which has already succeeded in propelling its payload with the IKAROS project.
Cosmic effects of radiation pressure
Radiation pressure has had a major effect on the development of the cosmos, from the birth of the universe to ongoing formation of stars and shaping of clouds of dust and gasses on a wide range of scales.
Early universe
The photon epoch is a phase when the energy of the universe was dominated by photons, between 10 seconds and 380,000 years after the Big Bang.
Galaxy formation and evolution
The process of galaxy formation and evolution began early in the history of the cosmos. Observations of the early universe strongly suggest that objects grew from bottom-up (i.e., smaller objects merging to form larger ones). As stars are thereby formed and become sources of electromagnetic radiation, radiation pressure from the stars becomes a factor in the dynamics of remaining circumstellar material.
Clouds of dust and gases
The gravitational compression of clouds of dust and gases is strongly influenced by radiation pressure, especially when the condensations lead to star births. The larger young stars forming within the compressed clouds emit intense levels of radiation that shift the clouds, causing either dispersion or condensations in nearby regions, which influences birth rates in those nearby regions.
Clusters of stars
Stars predominantly form in regions of large clouds of dust and gases, giving rise to star clusters. Radiation pressure from the member stars eventually disperses the clouds, which can have a profound effect on the evolution of the cluster.
Many open clusters are inherently unstable, with a small enough mass that the escape velocity of the system is lower than the average velocity of the constituent stars. These clusters will rapidly disperse within a few million years. In many cases, the stripping away of the gas from which the cluster formed by the radiation pressure of the hot young stars reduces the cluster mass enough to allow rapid dispersal.
Star formation
Star formation is the process by which dense regions within molecular clouds in interstellar space collapse to form stars. As a branch of astronomy, star formation includes the study of the interstellar medium and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products. Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function.
Stellar planetary systems
Planetary systems are generally believed to form as part of the same process that results in star formation. A protoplanetary disk forms by gravitational collapse of a molecular cloud, called a solar nebula, and then evolves into a planetary system by collisions and gravitational capture. Radiation pressure can clear a region in the immediate vicinity of the star. As the formation process continues, radiation pressure continues to play a role in affecting the distribution of matter. In particular, dust and grains can spiral into the star or escape the stellar system under the action of radiation pressure.
Stellar interiors
In stellar interiors the temperatures are very high. Stellar models predict a temperature of 15 MK in the center of the Sun, and at the cores of supergiant stars the temperature may exceed 1 GK. As the radiation pressure scales as the fourth power of the temperature, it becomes important at these high temperatures. In the Sun, radiation pressure is still quite small when compared to the gas pressure. In the heaviest non-degenerate stars, radiation pressure is the dominant pressure component.
Comets
Solar radiation pressure strongly affects comet tails. Solar heating causes gases to be released from the comet nucleus, which also carry away dust grains. Radiation pressure and solar wind then drive the dust and gases away from the Sun's direction. The gases form a generally straight tail, while slower moving dust particles create a broader, curving tail.
Laser applications of radiation pressure
Optical tweezers
Lasers can be used as a source of monochromatic light with wavelength λ. With a set of lenses, one can focus the laser beam to a spot whose diameter is comparable to the wavelength.
The radiation pressure of a P = 30 mW laser with λ = 1064 nm can therefore be computed as follows.
Area:
force:
pressure:
This is used to trap or levitate particles in optical tweezers.
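A hedged reconstruction of the estimate above, assuming the focal spot diameter equals the wavelength and the target reflects the beam perfectly (both assumptions of this sketch):

```python
import math

c = 299792458.0      # m/s
P_laser = 30e-3      # W, laser power from the text
lam = 1064e-9        # m, wavelength from the text

spot_radius = lam / 2.0                 # assumed: spot diameter ~ one wavelength
area = math.pi * spot_radius**2         # focal-spot area, ~8.9e-13 m^2
force = 2.0 * P_laser / c               # assumed perfectly reflecting target
pressure = force / area

print(f"spot area : {area:.2e} m^2")
print(f"force     : {force * 1e12:.0f} pN")   # ~200 pN
print(f"pressure  : {pressure:.0f} Pa")       # ~2e2 Pa
```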
Light–matter interactions
The reflection of a laser pulse from the surface of an elastic solid can give rise to various types of elastic waves that propagate inside the solid or liquid. In other words, the light can excite and/or amplify motion of, and in, materials. This is the subject of study in the field of optomechanics. The weakest waves are generally those that are generated by the radiation pressure acting during the reflection of the light. Such light-pressure-induced elastic waves have, for example, been observed inside an ultrahigh-reflectivity dielectric mirror. These waves are the most basic fingerprint of a light–solid matter interaction on the macroscopic scale. In the field of cavity optomechanics, light is trapped and resonantly enhanced in optical cavities, for example between mirrors. This serves the purpose of greatly enhancing the power of the light, and the radiation pressure it can exert on objects and materials. Optical control (that is, manipulation of the motion) of a plethora of objects has been realized: from kilometers-long beams (such as in the LIGO interferometer) to clouds of atoms, and from micro-engineered trampolines to superfluids.
As opposed to exciting or amplifying motion, light can also damp the motion of objects. Laser cooling is a method of cooling materials very close to absolute zero by converting some of the material's motional energy into light. Kinetic energy and thermal energy of the material are synonymous here, because they represent the energy associated with Brownian motion of the material. Atoms traveling towards a laser light source perceive a Doppler effect tuned to the absorption frequency of the target element. The radiation pressure on the atom slows movement in a particular direction until the Doppler effect moves out of the frequency range of the element, causing an overall cooling effect.
Another active research area of laser–matter interaction is the radiation pressure acceleration of ions or protons from thin-foil targets. High-energy ion beams can be generated for medical applications (for example in ion beam therapy) by the radiation pressure of short laser pulses on ultra-thin foils.
See also
Absorption (electromagnetic radiation)
Cavity optomechanics
Laser cooling
LIGO
Optical tweezers
Photon
Poynting vector
Poynting's theorem
Poynting–Robertson effect
Quantum optics
Solar constant
Solar sail
Sunlight
Wave–particle duality
Yarkovsky effect
Yarkovsky–O'Keefe–Radzievskii–Paddack effect
References
Further reading
Demir, Dilek, "A table-top demonstration of radiation pressure", 2011, Diplomathesis, E-Theses univie
Celestial mechanics
Radiation effects
Radiation | 0.790783 | 0.993775 | 0.785861 |
CGh physics | cGh physics refers to the historical attempts in physics to unify relativity, gravitation, and quantum mechanics, in particular following the ideas of Matvei Petrovich Bronstein and George Gamow. The letters are the standard symbols for the speed of light, the gravitational constant, and the Planck constant.
If one considers these three universal constants as the basis for a 3-D coordinate system and envisions a cube, then this pedagogic construction provides a framework, which is referred to as the cGh cube, or physics cube, or cube of theoretical physics (CTP). This cube can be used for organizing major subjects within physics as occupying each of the eight corners. The eight corners of the cGh physics cube are:
Classical mechanics (_, _, _)
Special relativity (c, _, _), gravitation (_, G, _), quantum mechanics (_, _, h)
General relativity (c, G, _), quantum field theory (c, _, h), non-relativistic quantum theory with gravity (_, G, h)
Theory of everything, or relativistic quantum gravity (c, G, h)
Other cGh physics topics include Hawking radiation and black-hole thermodynamics.
While there are several other physical constants, these three are given special consideration because they can be used to define all Planck units and thus all physical quantities. The three constants are therefore used sometimes as a framework for philosophical study and as one of pedagogical patterns.
Overview
Before the first successful estimate of the speed of light in 1676, it was not known whether light was transmitted instantaneously or not. Because of the tremendously large value of the speed of light—c (i.e. 299,792,458 metres per second in vacuum)—compared to the range of human perceptual response and visual processing, the propagation of light is normally perceived as instantaneous. Hence, the ratio 1/c is sufficiently close to zero that all subsequent differences of calculations in relativistic mechanics are similarly 'invisible' relative to human perception. However, at speeds comparable to the speed of light (c), Lorentz transformation (as per special relativity) produces substantially different results which agree more accurately with (sufficiently precise) experimental measurement. Non-relativistic theory can then be derived by taking the limit as the speed of light tends to infinity—i.e. ignoring terms (in the Taylor expansion) with a factor of 1/c—producing a first-order approximation of the formulae.
The gravitational constant (G) is irrelevant for a system where gravitational forces are negligible. For example, the special theory of relativity is the special case of general relativity in the limit G → 0.
Similarly, in the theories where the effects of quantum mechanics are irrelevant, the value of Planck constant (h) can be neglected. For example, setting h → 0 in the commutation relation of quantum mechanics, the uncertainty in the simultaneous measurement of two conjugate variables tends to zero, approximating quantum mechanics with classical mechanics.
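As an illustration of the 1/c limit described above (a sketch added here, not part of the original text), a symbolic expansion of the relativistic kinetic energy recovers the classical expression when terms carrying factors of 1/c are dropped:

```python
import sympy as sp

v, c, m = sp.symbols('v c m', positive=True)

# Relativistic kinetic energy, m*c^2*(gamma - 1), expanded for v << c.
E_kin = m * c**2 * (1 / sp.sqrt(1 - v**2 / c**2) - 1)
expansion = sp.series(E_kin, v, 0, 6).removeO()

print(sp.simplify(expansion))
# m*v**2/2 + 3*m*v**4/(8*c**2): dropping the 1/c^2 term leaves the classical
# kinetic energy m*v**2/2.
```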
In popular culture
George Gamow chose "C. G. H." as the initials of his fictitious character, Mr C. G. H. Tompkins.
References
Theoretical physics | 0.796959 | 0.985923 | 0.78574 |
Thermal fluids | Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Fluid flow and continuity
Momentum in fluids
Static and dynamic forces on a boundary
Laminar and turbulent flow
Metacentric height and vessel stability
Applications
Pump design
Hydroelectric power generation
Naval architecture
Combustion
Combustion is the sequence of exothermic chemical reactions between a fuel and an oxidant accompanied by the production of heat and conversion of chemical species. The release of heat can result in the production of light in the form of either glowing or a flame. Fuels of interest often include organic compounds (especially hydrocarbons) in the gas, liquid or solid phase.
References
External links
Thermal-Fluids Central
Continuum mechanics
Branches of thermodynamics | 0.809178 | 0.971024 | 0.785731 |
Maxwell–Boltzmann distribution | In physics (in particular in statistical mechanics), the Maxwell–Boltzmann distribution, or Maxwell(ian) distribution, is a particular probability distribution named after James Clerk Maxwell and Ludwig Boltzmann.
It was first defined and used for describing particle speeds in idealized gases, where the particles move freely inside a stationary container without interacting with one another, except for very brief collisions in which they exchange energy and momentum with each other or with their thermal environment. The term "particle" in this context refers to gaseous particles only (atoms or molecules), and the system of particles is assumed to have reached thermodynamic equilibrium. The energies of such particles follow what is known as Maxwell–Boltzmann statistics, and the statistical distribution of speeds is derived by equating particle energies with kinetic energy.
Mathematically, the Maxwell–Boltzmann distribution is the chi distribution with three degrees of freedom (the components of the velocity vector in Euclidean space), with a scale parameter measuring speeds in units proportional to the square root of (the ratio of temperature and particle mass).
The Maxwell–Boltzmann distribution is a result of the kinetic theory of gases, which provides a simplified explanation of many fundamental gaseous properties, including pressure and diffusion. The Maxwell–Boltzmann distribution applies fundamentally to particle velocities in three dimensions, but turns out to depend only on the speed (the magnitude of the velocity) of the particles. A particle speed probability distribution indicates which speeds are more likely: a randomly chosen particle will have a speed selected randomly from the distribution, and is more likely to be within one range of speeds than another. The kinetic theory of gases applies to the classical ideal gas, which is an idealization of real gases. In real gases, there are various effects (e.g., van der Waals interactions, vortical flow, relativistic speed limits, and quantum exchange interactions) that can make their speed distribution different from the Maxwell–Boltzmann form. However, rarefied gases at ordinary temperatures behave very nearly like an ideal gas and the Maxwell speed distribution is an excellent approximation for such gases. This is also true for ideal plasmas, which are ionized gases of sufficiently low density.
The distribution was first derived by Maxwell in 1860 on heuristic grounds. Boltzmann later, in the 1870s, carried out significant investigations into the physical origins of this distribution. The distribution can be derived on the ground that it maximizes the entropy of the system. A list of derivations are:
Maximum entropy probability distribution in the phase space, with the constraint of conservation of average energy
Canonical ensemble.
Distribution function
For a system containing a large number of identical non-interacting, non-relativistic classical particles in thermodynamic equilibrium, the fraction of the particles within an infinitesimal element of the three-dimensional velocity space , centered on a velocity vector of magnitude , is given by
where:
is the particle mass;
is the Boltzmann constant;
is thermodynamic temperature;
is a probability distribution function, properly normalized so that over all velocities is unity.
One can write the element of velocity space as , for velocities in a standard Cartesian coordinate system, or as in a standard spherical coordinate system, where is an element of solid angle and .
The Maxwellian distribution function for particles moving in only one direction, if this direction is , is
which can be obtained by integrating the three-dimensional form given above over and .
Recognizing the symmetry of , one can integrate over solid angle and write a probability distribution of speeds as the function
This probability density function gives the probability, per unit speed, of finding the particle with a speed near . This equation is simply the Maxwell–Boltzmann distribution (given in the infobox) with distribution parameter
The Maxwell–Boltzmann distribution is equivalent to the chi distribution with three degrees of freedom and scale parameter
The simplest ordinary differential equation satisfied by the distribution is:
or in unitless presentation:
With the Darwin–Fowler method of mean values, the Maxwell–Boltzmann distribution is obtained as an exact result.
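A minimal numerical sketch (not part of the original text) of the speed distribution in its standard form f(v) = 4πv² (m/(2πkBT))^(3/2) exp(-mv²/(2kBT)), checking its normalization and mean speed; the gas (nitrogen) and temperature are assumptions of the example.

```python
import numpy as np

# Assumed gas (N2) and temperature; the distribution itself is the standard form.
kB = 1.380649e-23          # J/K
T = 300.0                  # K
m = 28.0 * 1.66054e-27     # kg, approximate mass of an N2 molecule

v = np.linspace(0.0, 4000.0, 200001)    # m/s
a2 = kB * T / m                         # kT/m, the squared scale parameter
f = 4.0 * np.pi * v**2 * (1.0 / (2.0 * np.pi * a2))**1.5 * np.exp(-v**2 / (2.0 * a2))

print("normalization:", np.trapz(f, v))        # ~1.0
print("mean speed   :", np.trapz(v * f, v))    # ~475 m/s
```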
Relaxation to the 2D Maxwell–Boltzmann distribution
For particles confined to move in a plane, the speed distribution is given by
This distribution is used for describing systems in equilibrium. However, most systems do not start out in their equilibrium state. The evolution of a system towards its equilibrium state is governed by the Boltzmann equation. The equation predicts that for short-range interactions, the equilibrium velocity distribution will follow a Maxwell–Boltzmann distribution. For example, in a molecular dynamics (MD) simulation in which 900 hard-sphere particles are constrained to move in a rectangle and interact via perfectly elastic collisions, a system initialized out of equilibrium rapidly converges to the 2D Maxwell–Boltzmann velocity distribution.
Typical speeds
The mean speed , most probable speed (mode) , and root-mean-square speed can be obtained from properties of the Maxwell distribution.
This works well for nearly ideal, monatomic gases like helium, but also for molecular gases like diatomic oxygen. This is because despite the larger heat capacity (larger internal energy at the same temperature) due to their larger number of degrees of freedom, their translational kinetic energy (and thus their speed) is unchanged.
The most probable speed, , is the speed most likely to be possessed by any molecule (of the same mass ) in the system and corresponds to the maximum value or the mode of . To find it, we calculate the derivative set it to zero and solve for : with the solution: where:
is the gas constant;
is molar mass of the substance, and thus may be calculated as a product of particle mass, , and Avogadro constant, :
For diatomic nitrogen (N2, the primary component of air) at room temperature, this gives
The mean speed is the expected value of the speed distribution, setting :
The mean square speed is the second-order raw moment of the speed distribution. The "root mean square speed" is the square root of the mean square speed, corresponding to the speed of a particle with average kinetic energy, setting :
In summary, the typical speeds are related as follows:
The root mean square speed is directly related to the speed of sound in the gas, by
where is the adiabatic index, is the number of degrees of freedom of the individual gas molecule. For the example above, diatomic nitrogen (approximating air) at , and
the true value for air can be approximated by using the average molar weight of air, yielding at (corrections for variable humidity are of the order of 0.1% to 0.6%).
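For concreteness, the sketch below (an illustration; the molar mass and adiabatic index are the usual approximate values for diatomic nitrogen and are assumptions here) evaluates the typical speeds and the speed of sound at 300 K:

```python
import math

R = 8.314462618    # gas constant, J/(mol K)
M = 0.028          # kg/mol, molar mass of N2 (approximate)
T = 300.0          # K
gamma = 7.0 / 5.0  # adiabatic index of a diatomic gas

v_p = math.sqrt(2.0 * R * T / M)                 # most probable speed
v_mean = math.sqrt(8.0 * R * T / (math.pi * M))  # mean speed
v_rms = math.sqrt(3.0 * R * T / M)               # root-mean-square speed
v_sound = math.sqrt(gamma * R * T / M)           # speed of sound

print(f"most probable speed: {v_p:6.1f} m/s")     # ~422 m/s
print(f"mean speed         : {v_mean:6.1f} m/s")  # ~476 m/s
print(f"rms speed          : {v_rms:6.1f} m/s")   # ~517 m/s
print(f"speed of sound     : {v_sound:6.1f} m/s") # ~353 m/s
```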
The average relative velocity
where the three-dimensional velocity distribution is
The integral can easily be done by changing to coordinates and
Limitations
The Maxwell–Boltzmann distribution assumes that the velocities of individual particles are much less than the speed of light, i.e. that . For electrons, the temperature of electrons must be .
Derivation and related distributions
Maxwell–Boltzmann statistics
The original derivation in 1860 by James Clerk Maxwell was an argument based on molecular collisions of the Kinetic theory of gases as well as certain symmetries in the speed distribution function; Maxwell also gave an early argument that these molecular collisions entail a tendency towards equilibrium. After Maxwell, Ludwig Boltzmann in 1872 also derived the distribution on mechanical grounds and argued that gases should over time tend toward this distribution, due to collisions (see H-theorem). He later (1877) derived the distribution again under the framework of statistical thermodynamics. The derivations in this section are along the lines of Boltzmann's 1877 derivation, starting with result known as Maxwell–Boltzmann statistics (from statistical thermodynamics). Maxwell–Boltzmann statistics gives the average number of particles found in a given single-particle microstate. Under certain assumptions, the logarithm of the fraction of particles in a given microstate is linear in the ratio of the energy of that state to the temperature of the system: there are constants and such that, for all ,
The assumptions of this equation are that the particles do not interact, and that they are classical; this means that each particle's state can be considered independently from the other particles' states. Additionally, the particles are assumed to be in thermal equilibrium.
This relation can be written as an equation by introducing a normalizing factor:
where:
is the expected number of particles in the single-particle microstate ,
is the total number of particles in the system,
is the energy of microstate ,
the sum over index takes into account all microstates,
is the equilibrium temperature of the system,
is the Boltzmann constant.
The denominator in is a normalizing factor so that the ratios add up to unity — in other words it is a kind of partition function (for the single-particle system, not the usual partition function of the entire system).
Because velocity and speed are related to energy, Equation can be used to derive relationships between temperature and the speeds of gas particles. All that is needed is to discover the density of microstates in energy, which is determined by dividing up momentum space into equal sized regions.
Distribution for the momentum vector
The potential energy is taken to be zero, so that all energy is in the form of kinetic energy.
The relationship between kinetic energy and momentum for massive non-relativistic particles is
where is the square of the momentum vector . We may therefore rewrite Equation as:
where:
is the partition function, corresponding to the denominator in ;
is the molecular mass of the gas;
is the thermodynamic temperature;
is the Boltzmann constant.
This distribution of is proportional to the probability density function for finding a molecule with these values of momentum components, so:
The normalizing constant can be determined by recognizing that the probability of a molecule having some momentum must be 1.
Integrating the exponential in over all , , and yields a factor of
So that the normalized distribution function is:
The distribution is seen to be the product of three independent normally distributed variables , , and , with variance . Additionally, it can be seen that the magnitude of momentum will be distributed as a Maxwell–Boltzmann distribution, with . The Maxwell–Boltzmann distribution for the momentum (or equally for the velocities) can be obtained more fundamentally using the H-theorem at equilibrium within the Kinetic theory of gases framework.
Distribution for the energy
The energy distribution is found imposing
where is the infinitesimal phase-space volume of momenta corresponding to the energy interval .
Making use of the spherical symmetry of the energy-momentum dispersion relation this can be expressed in terms of as
Using then in, and expressing everything in terms of the energy , we get
and finally
Since the energy is proportional to the sum of the squares of the three normally distributed momentum components, this energy distribution can be written equivalently as a gamma distribution, using a shape parameter, and a scale parameter,
Using the equipartition theorem, given that the energy is evenly distributed among all three degrees of freedom in equilibrium, we can also split into a set of chi-squared distributions, where the energy per degree of freedom, is distributed as a chi-squared distribution with one degree of freedom,
At equilibrium, this distribution will hold true for any number of degrees of freedom. For example, if the particles are rigid mass dipoles of fixed dipole moment, they will have three translational degrees of freedom and two additional rotational degrees of freedom. The energy in each degree of freedom will be described according to the above chi-squared distribution with one degree of freedom, and the total energy will be distributed according to a chi-squared distribution with five degrees of freedom. This has implications in the theory of the specific heat of a gas.
Distribution for the velocity vector
Recognizing that the velocity probability density is proportional to the momentum probability density function by
and using we get
which is the Maxwell–Boltzmann velocity distribution. The probability of finding a particle with velocity in the infinitesimal element about velocity is
Like the momentum, this distribution is seen to be the product of three independent normally distributed variables , , and , but with variance .
It can also be seen that the Maxwell–Boltzmann velocity distribution for the vector velocity
is the product of the distributions for each of the three directions:
where the distribution for a single direction is
Each component of the velocity vector has a normal distribution with mean and standard deviation , so the vector has a 3-dimensional normal distribution, a particular kind of multivariate normal distribution, with mean and covariance , where is the identity matrix.
Distribution for the speed
The Maxwell–Boltzmann distribution for the speed follows immediately from the distribution of the velocity vector, above. Note that the speed is
and the volume element in spherical coordinates
where and are the spherical coordinate angles of the velocity vector. Integration of the probability density function of the velocity over the solid angles yields an additional factor of .
The speed distribution, obtained by substituting the speed for the sum of the squares of the vector components, is f(v) = 4πv² (m/(2πkBT))^(3/2) exp(-mv²/(2kBT)).
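A Monte Carlo sketch (illustrative, not from the original text) of this construction: drawing the three velocity components from independent normal distributions and taking the magnitude reproduces the Maxwell–Boltzmann speed statistics; the gas and temperature chosen are assumptions.

```python
import numpy as np

# Assumed gas (N2) and temperature; each velocity component is normal with
# standard deviation a = sqrt(kB*T/m), as stated above.
rng = np.random.default_rng(0)
kB = 1.380649e-23
T = 300.0
m = 28.0 * 1.66054e-27
a = np.sqrt(kB * T / m)

velocities = rng.normal(0.0, a, size=(1_000_000, 3))
speeds = np.linalg.norm(velocities, axis=1)

print("sampled mean speed    :", speeds.mean())
print("theoretical mean speed:", 2.0 * a * np.sqrt(2.0 / np.pi))
print("sampled rms speed     :", np.sqrt((speeds**2).mean()))
print("theoretical rms speed :", np.sqrt(3.0) * a)
```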
In n-dimensional space
In -dimensional space, Maxwell–Boltzmann distribution becomes:
Speed distribution becomes:
where is a normalizing constant.
The following integral result is useful:
where is the Gamma function. This result can be used to calculate the moments of speed distribution function:
which is the mean speed itself
which gives root-mean-square speed
The derivative of the speed distribution function:
This yields the most probable speed (mode)
See also
Quantum Boltzmann equation
Maxwell–Boltzmann statistics
Maxwell–Jüttner distribution
Boltzmann distribution
Rayleigh distribution
Kinetic theory of gases
Notes
References
Further reading
External links
"The Maxwell Speed Distribution" from The Wolfram Demonstrations Project at Mathworld
Continuous distributions
Gases
Ludwig Boltzmann
James Clerk Maxwell
Normal distribution
Particle distributions | 0.786763 | 0.998571 | 0.785638 |
Continuum mechanics | Continuum mechanics is a branch of mechanics that deals with the deformation of and transmission of forces through materials modeled as a continuous medium (also called a continuum) rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century.
Continuum mechanics deals with deformable bodies, as opposed to rigid bodies.
A continuum model assumes that the substance of the object completely fills the space it occupies. While ignoring the fact that matter is made of atoms, this provides a sufficiently accurate description of matter on length scales much greater than that of inter-atomic distances. The concept of a continuous medium allows for intuitive analysis of bulk matter by using differential equations that describe the behavior of such matter according to physical laws, such as mass conservation, momentum conservation, and energy conservation. Information about the specific material is expressed in constitutive relationships.
Continuum mechanics treats the physical properties of solids and fluids independently of any particular coordinate system in which they are observed. These properties are represented by tensors, which are mathematical objects with the salient property of being independent of coordinate systems. This permits definition of physical properties at any point in the continuum, according to mathematically convenient continuous functions. The theories of elasticity, plasticity and fluid mechanics are based on the concepts of continuum mechanics.
Concept of a continuum
The concept of a continuum underlies the mathematical framework for studying large-scale forces and deformations in materials. Although materials are composed of discrete atoms and molecules, separated by empty space or microscopic cracks and crystallographic defects, physical phenomena can often be modeled by considering a substance distributed throughout some region of space. A continuum is a body that can be continually sub-divided into infinitesimal elements with local material properties defined at any particular point. Properties of the bulk material can therefore be described by continuous functions, and their evolution can be studied using the mathematics of calculus.
Apart from the assumption of continuity, two other independent assumptions are often employed in the study of continuum mechanics. These are homogeneity (assumption of identical properties at all locations) and isotropy (assumption of directionally invariant vector properties). If these auxiliary assumptions are not globally applicable, the material may be segregated into sections where they are applicable in order to simplify the analysis. For more complex cases, one or both of these assumptions can be dropped. In these cases, computational methods are often used to solve the differential equations describing the evolution of material properties.
Major areas
The major areas of continuum mechanics are solid mechanics, covering theories such as elasticity and plasticity, and fluid mechanics, covering Newtonian and non-Newtonian fluids. An additional area of continuum mechanics comprises elastomeric foams, which exhibit a curious hyperbolic stress-strain relationship. The elastomer is a true continuum, but a homogeneous distribution of voids gives it unusual properties.
Formulation of models
Continuum mechanics models begin by assigning a region in three-dimensional Euclidean space to the material body being modeled. The points within this region are called particles or material points. Different configurations or states of the body correspond to different regions in Euclidean space. The region corresponding to the body's configuration at time is labeled .
A particular particle within the body in a particular configuration is characterized by a position vector
where are the coordinate vectors in some frame of reference chosen for the problem (See figure 1). This vector can be expressed as a function of the particle position in some reference configuration, for example the configuration at the initial time, so that
This function needs to have various properties so that the model makes physical sense. needs to be:
continuous in time, so that the body changes in a way which is realistic,
globally invertible at all times, so that the body cannot intersect itself,
orientation-preserving, as transformations which produce mirror reflections are not possible in nature.
For the mathematical formulation of the model, is also assumed to be twice continuously differentiable, so that differential equations describing the motion may be formulated.
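A minimal numerical sketch of these admissibility conditions, in Python with NumPy (the homogeneous deformation used here is hypothetical): invertibility and preservation of orientation at an instant can be checked through the sign of the determinant of the deformation gradient.

import numpy as np

# Hypothetical homogeneous deformation x = F X + c
F = np.array([[1.1, 0.2, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.9]])

J = np.linalg.det(F)
print(J)                                   # 0.99 > 0
assert J > 0.0, "not orientation-preserving, hence not an admissible motion"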
Forces in a continuum
A solid is a deformable body that possesses shear strength, sc. a solid can support shear forces (forces parallel to the material surface on which they act). Fluids, on the other hand, do not sustain shear forces.
Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces which are assumed to be of two kinds: surface forces and body forces . Thus, the total force applied to a body or to a portion of the body can be expressed as:
Surface forces
Surface forces or contact forces, expressed as force per unit area, can act either on the bounding surface of the body, as a result of mechanical contact with other bodies, or on imaginary internal surfaces that bound portions of the body, as a result of the mechanical interaction between the parts of the body to either side of the surface (Euler-Cauchy's stress principle). When a body is acted upon by external contact forces, internal contact forces are then transmitted from point to point inside the body to balance their action, according to Newton's third law of motion and the conservation of linear momentum and angular momentum (for continuous bodies these laws are called Euler's equations of motion). The internal contact forces are related to the body's deformation through constitutive equations. The internal contact forces may be mathematically described by how they relate to the motion of the body, independent of the body's material makeup.
The distribution of internal contact forces throughout the volume of the body is assumed to be continuous. Therefore, there exists a contact force density or Cauchy traction field that represents this distribution in a particular configuration of the body at a given time . It is not a vector field because it depends not only on the position of a particular material point, but also on the local orientation of the surface element as defined by its normal vector .
Any differential area with normal vector of a given internal surface area , bounding a portion of the body, experiences a contact force arising from the contact between both portions of the body on each side of , and it is given by
where is the surface traction, also called stress vector, traction, or traction vector. The stress vector is a frame-indifferent vector (see Euler-Cauchy's stress principle).
The total contact force on the particular internal surface is then expressed as the sum (surface integral) of the contact forces on all differential surfaces :
In continuum mechanics a body is considered stress-free if the only forces present are those inter-atomic forces (ionic, metallic, and van der Waals forces) required to hold the body together and to keep its shape in the absence of all external influences, including gravitational attraction. Stresses generated during manufacture of the body to a specific configuration are also excluded when considering stresses in a body. Therefore, the stresses considered in continuum mechanics are only those produced by deformation of the body, sc. only relative changes in stress are considered, not the absolute values of stress.
Body forces
Body forces are forces originating from sources outside of the body that act on the volume (or mass) of the body. Saying that body forces are due to outside sources implies that the interaction between different parts of the body (internal forces) are manifested through the contact forces alone. These forces arise from the presence of the body in force fields, e.g. gravitational field (gravitational forces) or electromagnetic field (electromagnetic forces), or from inertial forces when bodies are in motion. As the mass of a continuous body is assumed to be continuously distributed, any force originating from the mass is also continuously distributed. Thus, body forces are specified by vector fields which are assumed to be continuous over the entire volume of the body, i.e. acting on every point in it. Body forces are represented by a body force density (per unit of mass), which is a frame-indifferent vector field.
In the case of gravitational forces, the intensity of the force depends on, or is proportional to, the mass density of the material, and it is specified in terms of force per unit mass or per unit volume. These two specifications are related through the material density by the equation . Similarly, the intensity of electromagnetic forces depends upon the strength (electric charge) of the electromagnetic field.
The total body force applied to a continuous body is expressed as
Body forces and contact forces acting on the body lead to corresponding moments of force (torques) relative to a given point. Thus, the total applied torque about the origin is given by
In certain situations, not commonly considered in the analysis of the mechanical behavior of materials, it becomes necessary to include two other types of forces: these are couple stresses (surface couples, contact torques) and body moments. Couple stresses are moments per unit area applied on a surface. Body moments, or body couples, are moments per unit volume or per unit mass applied to the volume of the body. Both are important in the analysis of stress for a polarized dielectric solid under the action of an electric field, materials where the molecular structure is taken into consideration (e.g. bones), solids under the action of an external magnetic field, and the dislocation theory of metals.
Materials that exhibit body couples and couple stresses in addition to moments produced exclusively by forces are called polar materials. Non-polar materials are then those materials with only moments of forces. In the classical branches of continuum mechanics the development of the theory of stresses is based on non-polar materials.
Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) in the body can be given by
Kinematics: motion and deformation
A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration to a current or deformed configuration (Figure 2).
The motion of a continuum body is a continuous time sequence of displacements. Thus, the material body will occupy different configurations at different times so that a particle occupies a series of points in space which describe a path line.
There is continuity during motion or deformation of a continuum body in the sense that:
The material points forming a closed curve at any instant will always form a closed curve at any subsequent time.
The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within.
It is convenient to identify a reference configuration or initial condition which all subsequent configurations are referenced from. The reference configuration need not be one that the body will ever occupy. Often, the configuration at is considered the reference configuration, . The components of the position vector of a particle, taken with respect to the reference configuration, are called the material or reference coordinates.
When analyzing the motion or deformation of solids, or the flow of fluids, it is necessary to describe the sequence or evolution of configurations throughout time. One description for motion is made in terms of the material or referential coordinates, called material description or Lagrangian description.
Lagrangian description
In the Lagrangian description the position and physical properties of the particles are described in terms of the material or referential coordinates and time. In this case the reference configuration is the configuration at . An observer standing in the frame of reference observes the changes in the position and physical properties as the material body moves in space as time progresses. The results obtained are independent of the choice of initial time and reference configuration, . This description is normally used in solid mechanics.
In the Lagrangian description, the motion of a continuum body is expressed by the mapping function (Figure 2),
which is a mapping of the initial configuration onto the current configuration , giving a geometrical correspondence between them, i.e. giving the position vector that a particle , with a position vector in the undeformed or reference configuration , will occupy in the current or deformed configuration at time . The components are called the spatial coordinates.
Physical and kinematic properties , i.e. thermodynamic properties and flow velocity, which describe or characterize features of the material body, are expressed as continuous functions of position and time, i.e. .
The material derivative of any property of a continuum, which may be a scalar, vector, or tensor, is the time rate of change of that property for a specific group of particles of the moving continuum body. The material derivative is also known as the substantial derivative, or comoving derivative, or convective derivative. It can be thought as the rate at which the property changes when measured by an observer traveling with that group of particles.
In the Lagrangian description, the material derivative of is simply the partial derivative with respect to time, and the position vector is held constant as it does not change with time. Thus, we have
The instantaneous position is a property of a particle, and its material derivative is the instantaneous flow velocity of the particle. Therefore, the flow velocity field of the continuum is given by
Similarly, the acceleration field is given by
Continuity in the Lagrangian description is expressed by the spatial and temporal continuity of the mapping from the reference configuration to the current configuration of the material points. All physical quantities characterizing the continuum are described this way. In this sense, the function and are single-valued and continuous, with continuous derivatives with respect to space and time to whatever order is required, usually to the second or third.
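A minimal symbolic sketch of the Lagrangian description, using SymPy and a hypothetical simple-shear motion x1 = X1 + k t X2 (introduced here only for illustration): velocity and acceleration follow from partial time derivatives taken at fixed material coordinates.

import sympy as sp

X1, X2, X3, t, k = sp.symbols('X1 X2 X3 t k', real=True)

# Hypothetical motion: simple shear growing linearly in time
x1, x2, x3 = X1 + k * t * X2, X2, X3

v1 = sp.diff(x1, t)    # flow velocity component: k*X2
a1 = sp.diff(v1, t)    # acceleration component: 0 (the shear rate is constant)
print(v1, a1)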
Eulerian description
Continuity allows for the inverse of to trace backwards where the particle currently located at was located in the initial or referenced configuration . In this case the description of motion is made in terms of the spatial coordinates, in which case is called the spatial description or Eulerian description, i.e. the current configuration is taken as the reference configuration.
The Eulerian description, introduced by d'Alembert, focuses on the current configuration , giving attention to what is occurring at a fixed point in space as time progresses, instead of giving attention to individual particles as they move through space and time. This approach is conveniently applied in the study of fluid flow where the kinematic property of greatest interest is the rate at which change is taking place rather than the shape of the body of fluid at a reference time.
Mathematically, the motion of a continuum using the Eulerian description is expressed by the mapping function
which provides a tracing of the particle which now occupies the position in the current configuration to its original position in the initial configuration .
A necessary and sufficient condition for this inverse function to exist is that the determinant of the Jacobian matrix, often referred to simply as the Jacobian, should be different from zero. Thus,
In the Eulerian description, the physical properties are expressed as
where the functional form of in the Lagrangian description is not the same as the form of in the Eulerian description.
The material derivative of , using the chain rule, is then
The first term on the right-hand side of this equation gives the local rate of change of the property occurring at position . The second term of the right-hand side is the convective rate of change and expresses the contribution of the particle changing position in space (motion).
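In standard notation (a reconstruction consistent with the description above; the symbols P for the property of interest and v for the flow velocity are introduced here for illustration), the material derivative in the Eulerian description reads

\[
  \frac{\mathrm{D}P}{\mathrm{D}t} \;=\; \frac{\partial P}{\partial t} \;+\; \mathbf{v}\cdot\nabla P .
\]

The first term is the local rate of change at a fixed position and the second is the convective rate of change, matching the two contributions described above.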
Continuity in the Eulerian description is expressed by the spatial and temporal continuity and continuous differentiability of the flow velocity field. All physical quantities are defined this way at each instant of time, in the current configuration, as a function of the vector position .
Displacement field
The vector joining the positions of a particle in the undeformed configuration and deformed configuration is called the displacement vector , in the Lagrangian description, or , in the Eulerian description.
A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field, In general, the displacement field is expressed in terms of the material coordinates as
or in terms of the spatial coordinates as
where are the direction cosines between the material and spatial coordinate systems with unit vectors and , respectively. Thus
and the relationship between and is then given by
Knowing that
then
It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in , and the direction cosines become Kronecker deltas, i.e.
Thus, we have
or in terms of the spatial coordinates as
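Numerically, with the coordinate systems superimposed as above, the displacement of a material point is simply the difference between its current and reference positions. A minimal Python sketch (NumPy assumed; the simple-shear motion is hypothetical):

import numpy as np

def displacement(X, t, k=0.1):
    # u(X, t) = x(X, t) - X for a hypothetical simple-shear motion
    x = X.copy()
    x[0] = X[0] + k * t * X[1]        # current position of the particle
    return x - X

print(displacement(np.array([1.0, 2.0, 0.0]), t=3.0))   # [0.6, 0.0, 0.0]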
Governing equations
Continuum mechanics deals with the behavior of materials that can be approximated as continuous for certain length and time scales. The equations that govern the mechanics of such materials include the balance laws for mass, momentum, and energy. Kinematic relations and constitutive equations are needed to complete the system of governing equations. Physical restrictions on the form of the constitutive relations can be applied by requiring that the second law of thermodynamics be satisfied under all conditions. In the continuum mechanics of solids, the second law of thermodynamics is satisfied if the Clausius–Duhem form of the entropy inequality is satisfied.
The balance laws express the idea that the rate of change of a quantity (mass, momentum, energy) in a volume must arise from three causes:
the physical quantity itself flows through the surface that bounds the volume,
there is a source of the physical quantity on the surface of the volume, and/or
there is a source of the physical quantity inside the volume.
Let be the body (an open subset of Euclidean space) and let be its surface (the boundary of ).
Let the motion of material points in the body be described by the map
where is the position of a point in the initial configuration and is the location of the same point in the deformed configuration.
The deformation gradient is given by the gradient of the motion with respect to the material coordinates, F = ∂x/∂X.
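A small numerical sketch (Python with NumPy; the motion used is hypothetical) approximates the deformation gradient by central finite differences and checks its determinant:

import numpy as np

def motion(X, t, k=0.1):
    # Hypothetical simple-shear motion x = chi(X, t)
    return np.array([X[0] + k * t * X[1], X[1], X[2]])

def deformation_gradient(X, t, h=1e-6):
    # F_iJ = d x_i / d X_J, approximated by central finite differences
    F = np.zeros((3, 3))
    for J in range(3):
        dX = np.zeros(3)
        dX[J] = h
        F[:, J] = (motion(X + dX, t) - motion(X - dX, t)) / (2.0 * h)
    return F

F = deformation_gradient(np.array([1.0, 2.0, 0.0]), t=3.0)
print(F)                    # [[1, 0.3, 0], [0, 1, 0], [0, 0, 1]]
print(np.linalg.det(F))     # 1.0: this shear is volume-preserving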
Balance laws
Let be a physical quantity that is flowing through the body. Let be sources on the surface of the body and let be sources inside the body. Let be the outward unit normal to the surface . Let be the flow velocity of the physical particles that carry the physical quantity that is flowing. Also, let the speed at which the bounding surface is moving be (in the direction ).
Then, balance laws can be expressed in the general form
The functions , , and can be scalar valued, vector valued, or tensor valued - depending on the physical quantity that the balance equation deals with. If there are internal boundaries in the body, jump discontinuities also need to be specified in the balance laws.
If we take the Eulerian point of view, it can be shown that the balance laws of mass, momentum, and energy for a solid can be written as (assuming the source term is zero for the mass and angular momentum equations)
In the above equations is the mass density (current), is the material time derivative of , is the particle velocity, is the material time derivative of , is the Cauchy stress tensor, is the body force density, is the internal energy per unit mass, is the material time derivative of , is the heat flux vector, and is an energy source per unit mass. The operators used are defined below.
With respect to the reference configuration (the Lagrangian point of view), the balance laws can be written as
In the above, is the first Piola-Kirchhoff stress tensor, and is the mass density in the reference configuration. The first Piola-Kirchhoff stress tensor is related to the Cauchy stress tensor by
We can alternatively define the nominal stress tensor which is the transpose of the first Piola-Kirchhoff stress tensor such that
Then the balance laws become
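As a small numerical illustration of the stress measures introduced above (assuming the common convention in which the first Piola-Kirchhoff stress is related to the Cauchy stress by P = J σ F^(-T), with J = det F), the following Python sketch converts a Cauchy stress to the first Piola-Kirchhoff and nominal stresses for a hypothetical shear deformation:

import numpy as np

def first_piola_kirchhoff(sigma, F):
    # Common relation P = J * sigma * F^{-T}, with J = det(F)
    J = np.linalg.det(F)
    return J * sigma @ np.linalg.inv(F).T

F = np.array([[1.0, 0.3, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
sigma = np.array([[2.0, 0.5, 0.0],     # a symmetric Cauchy stress (illustrative units)
                  [0.5, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

P = first_piola_kirchhoff(sigma, F)
N = P.T                                 # nominal stress: the transpose of P
print(P)
print(N)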
Operators
The operators in the above equations are defined as
where is a vector field, is a second-order tensor field, and are the components of an orthonormal basis in the current configuration. Also,
where is a vector field, is a second-order tensor field, and are the components of an orthonormal basis in the reference configuration.
The inner product is defined as
Clausius–Duhem inequality
The Clausius–Duhem inequality can be used to express the second law of thermodynamics for elastic-plastic materials. This inequality is a statement concerning the irreversibility of natural processes, especially when energy dissipation is involved.
Just like in the balance laws in the previous section, we assume that there is a flux of a quantity, a source of the quantity, and an internal density of the quantity per unit mass. The quantity of interest in this case is the entropy. Thus, we assume that there is an entropy flux, an entropy source, an internal mass density and an internal specific entropy (i.e. entropy per unit mass) in the region of interest.
Let be such a region and let be its boundary. Then the second law of thermodynamics states that the rate of increase of in this region is greater than or equal to the sum of that supplied to (as a flux or from internal sources) and the change of the internal entropy density due to material flowing in and out of the region.
Let move with a flow velocity and let particles inside have velocities . Let be the unit outward normal to the surface . Let be the density of matter in the region, be the entropy flux at the surface, and be the entropy source per unit mass.
Then the entropy inequality may be written as
The scalar entropy flux can be related to the vector flux at the surface by the relation . Under the assumption of incrementally isothermal conditions, we have
where is the heat flux vector, is an energy source per unit mass, and is the absolute temperature of a material point at at time .
We then have the Clausius–Duhem inequality in integral form:
We can show that the entropy inequality may be written in differential form as
In terms of the Cauchy stress and the internal energy, the Clausius–Duhem inequality may be written as
Validity
The validity of the continuum assumption may be verified by a theoretical analysis, in which either some clear periodicity is identified or statistical homogeneity and ergodicity of the microstructure exist. More specifically, the continuum hypothesis hinges on the concepts of a representative elementary volume and separation of scales based on the Hill–Mandel condition. This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations (linear and nonlinear elastic/inelastic or coupled fields) as well as a way of spatial and statistical averaging of the microstructure.
When the separation of scales does not hold, or when one wants to establish a continuum of a finer resolution than the size of the representative volume element (RVE), a statistical volume element (SVE) is employed, which results in random continuum fields. The latter then provide a micromechanics basis for stochastic finite elements (SFE). The levels of SVE and RVE link continuum mechanics to statistical mechanics. Experimentally, the RVE can only be evaluated when the constitutive response is spatially homogeneous.
Applications
Continuum mechanics
Solid mechanics
Fluid mechanics
Engineering
Civil engineering
Mechanical engineering
Aerospace engineering
Biomedical engineering
Chemical engineering
See also
Transport phenomena
Bernoulli's principle
Cauchy elastic material
Configurational mechanics
Curvilinear coordinates
Equation of state
Finite deformation tensors
Finite strain theory
Hyperelastic material
Lagrangian and Eulerian specification of the flow field
Movable cellular automaton
Peridynamics (a non-local continuum theory leading to integral equations)
Stress (physics)
Stress measures
Tensor calculus
Tensor derivative (continuum mechanics)
Theory of elasticity
Knudsen number
Explanatory notes
References
Citations
Works cited
General references
External links
"Objectivity in classical continuum mechanics: Motions, Eulerian and Lagrangian functions; Deformation gradient; Lie derivatives; Velocity-addition formula, Coriolis; Objectivity" by Gilles Leborgne, April 7, 2021: "Part IV Velocity-addition formula and Objectivity"
Classical mechanics | 0.788742 | 0.995816 | 0.785442 |
Free energy principle | The free energy principle is a theoretical framework suggesting that the brain reduces surprise or uncertainty by making predictions based on internal models and updating them using sensory input. It highlights the brain's objective of aligning its internal model and the external world to enhance prediction accuracy. This principle integrates Bayesian inference with active inference, where actions are guided by predictions and sensory feedback refines them. It has wide-ranging implications for comprehending brain function, perception, and action.
Overview
In biophysics and cognitive science, the free energy principle is a mathematical principle describing a formal account of the representational capacities of physical systems: that is, why things that exist look as if they track properties of the systems to which they are coupled.
It establishes that the dynamics of physical systems minimise a quantity known as surprisal (which is the negative log probability of some outcome); or equivalently, its variational upper bound, called free energy. The principle is used especially in Bayesian approaches to brain function, but also some approaches to artificial intelligence; it is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception-action loops in neuroscience.
The free energy principle models the behaviour of systems that are distinct from, but coupled to, another system (e.g., an embedding environment), where the degrees of freedom that implement the interface between the two systems are known as a Markov blanket. More formally, the free energy principle says that if a system has a "particular partition" (i.e., into particles, with their Markov blankets), then subsets of that system will track the statistical structure of other subsets (which are known as internal and external states or paths of a system).
The free energy principle is based on the Bayesian idea of the brain as an “inference engine.” Under the free energy principle, systems pursue paths of least surprise, or equivalently, minimize the difference between predictions based on their model of the world and their sense and associated perception. This difference is quantified by variational free energy and is minimized by continuous correction of the world model of the system, or by making the world more like the predictions of the system. By actively changing the world to make it closer to the expected state, systems can also minimize the free energy of the system. Friston assumes this to be the principle of all biological reaction. Friston also believes his principle applies to mental disorders as well as to artificial intelligence. AI implementations based on the active inference principle have shown advantages over other methods.
The free energy principle is a mathematical principle of information physics: much like the principle of maximum entropy or the principle of least action, it is true on mathematical grounds. To attempt to falsify the free energy principle is a category mistake, akin to trying to falsify calculus by making empirical observations. (One cannot invalidate a mathematical theory in this way; instead, one would need to derive a formal contradiction from the theory.) In a 2018 interview, Friston explained what it entails for the free energy principle to not be subject to falsification: "I think it is useful to make a fundamental distinction at this point—that we can appeal to later. The distinction is between a state and process theory; i.e., the difference between a normative principle that things may or may not conform to, and a process theory or hypothesis about how that principle is realized. Under this distinction, the free energy principle stands in stark distinction to things like predictive coding and the Bayesian brain hypothesis. This is because the free energy principle is what it is — a principle. Like Hamilton's principle of stationary action, it cannot be falsified. It cannot be disproven. In fact, there’s not much you can do with it, unless you ask whether measurable systems conform to the principle. On the other hand, hypotheses that the brain performs some form of Bayesian inference or predictive coding are what they are—hypotheses. These hypotheses may or may not be supported by empirical evidence." There are many examples of these hypotheses being supported by empirical evidence.
Background
The notion that self-organising biological systems – like a cell or brain – can be understood as minimising variational free energy is based upon Helmholtz’s work on unconscious inference and subsequent treatments in psychology and machine learning. Variational free energy is a function of observations and a probability density over their hidden causes. This variational density is defined in relation to a probabilistic model that generates predicted observations from hypothesized causes. In this setting, free energy provides an approximation to Bayesian model evidence. Therefore, its minimisation can be seen as a Bayesian inference process. When a system actively makes observations to minimise free energy, it implicitly performs active inference and maximises the evidence for its model of the world.
However, free energy is also an upper bound on the self-information of outcomes, where the long-term average of surprise is entropy. This means that if a system acts to minimise free energy, it will implicitly place an upper bound on the entropy of the outcomes – or sensory states – it samples.
Relationship to other theories
Active inference is closely related to the good regulator theorem and related accounts of self-organisation, such as self-assembly, pattern formation, autopoiesis and practopoiesis. It addresses the themes considered in cybernetics, synergetics and embodied cognition. Because free energy can be expressed as the expected energy of observations under the variational density minus its entropy, it is also related to the maximum entropy principle. Finally, because the time average of energy is action, the principle of minimum variational free energy is a principle of least action. Active inference allowing for scale invariance has also been applied to other theories and domains. For instance, it has been applied to sociology, linguistics and communication, semiotics, and epidemiology among others.
Negative free energy is formally equivalent to the evidence lower bound, which is commonly used in machine learning to train generative models, such as variational autoencoders.
Action and perception
Active inference applies the techniques of approximate Bayesian inference to infer the causes of sensory data from a 'generative' model of how that data is caused and then uses these inferences to guide action.
Bayes' rule characterizes the probabilistically optimal inversion of such a causal model, but applying it is typically computationally intractable, leading to the use of approximate methods.
In active inference, the leading class of such approximate methods are variational methods, for both practical and theoretical reasons: practical, as they often lead to simple inference procedures; and theoretical, because they are related to fundamental physical principles, as discussed above.
These variational methods proceed by minimizing an upper bound on the divergence between the Bayes-optimal inference (or 'posterior') and its approximation according to the method.
This upper bound is known as the free energy, and we can accordingly characterize perception as the minimization of the free energy with respect to inbound sensory information, and action as the minimization of the same free energy with respect to outbound action information.
This holistic dual optimization is characteristic of active inference, and the free energy principle is the hypothesis that all systems which perceive and act can be characterized in this way.
In order to exemplify the mechanics of active inference via the free energy principle, a generative model must be specified, and this typically involves a collection of probability density functions which together characterize the causal model.
One such specification is as follows.
The system is modelled as inhabiting a state space , in the sense that its states form the points of this space.
The state space is then factorized according to , where is the space of 'external' states that are 'hidden' from the agent (in the sense of not being directly perceived or accessible), is the space of sensory states that are directly perceived by the agent, is the space of the agent's possible actions, and is a space of 'internal' states that are private to the agent.
In keeping with Figure 1, note that in the following the states are functions of (continuous) time. The generative model is the specification of the following density functions:
A sensory model, , often written as , characterizing the likelihood of sensory data given external states and actions;
a stochastic model of the environmental dynamics, , often written , characterizing how the external states are expected by the agent to evolve over time , given the agent's actions;
an action model, , written , characterizing how the agent's actions depend upon its internal states and sensory data; and
an internal model, , written , characterizing how the agent's internal states depend upon its sensory data.
These density functions determine the factors of a "joint model", which represents the complete specification of the generative model, and which can be written as the product of these four densities.
Bayes' rule then determines the "posterior density" , which expresses a probabilistically optimal belief about the external state given the preceding state and the agent's actions, sensory signals, and internal states.
Since computing is computationally intractable, the free energy principle asserts the existence of a "variational density" , where is an approximation to .
One then defines the free energy as
and defines action and perception as the joint optimization problem
where the internal states are typically taken to encode the parameters of the 'variational' density and hence the agent's "best guess" about the posterior belief over .
Note that the free energy is also an upper bound on a measure of the agent's (marginal, or average) sensory surprise, and hence free energy minimization is often motivated by the minimization of surprise.
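A minimal numerical sketch of this construction, in Python with NumPy and a toy two-state generative model (all numbers are hypothetical): the variational free energy F(q) = E_q[ln q(ψ) − ln p(s, ψ)] upper-bounds the sensory surprise −ln p(s), and the bound is tight exactly when q equals the Bayesian posterior.

import numpy as np

prior      = np.array([0.7, 0.3])     # p(psi), hypothetical
likelihood = np.array([0.9, 0.2])     # p(s | psi) for the observed outcome s, hypothetical

def free_energy(q):
    # F(q) = E_q[ln q(psi) - ln p(s, psi)]
    joint = likelihood * prior        # p(s, psi)
    return np.sum(q * (np.log(q) - np.log(joint)))

posterior = likelihood * prior / np.sum(likelihood * prior)   # exact Bayes
surprise  = -np.log(np.sum(likelihood * prior))               # -ln p(s)

print(free_energy(np.array([0.5, 0.5])), surprise)   # F >= surprise
print(free_energy(posterior), surprise)              # equal: the bound is tight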
Free energy minimisation
Free energy minimisation and self-organisation
Free energy minimisation has been proposed as a hallmark of self-organising systems when cast as random dynamical systems. This formulation rests on a Markov blanket (comprising action and sensory states) that separates internal and external states. If internal states and action minimise free energy, then they place an upper bound on the entropy of sensory states:
This is because – under ergodic assumptions – the long-term average of surprise is entropy. This bound resists a natural tendency to disorder – of the sort associated with the second law of thermodynamics and the fluctuation theorem. However, formulating a unifying principle for the life sciences in terms of concepts from statistical physics, such as random dynamical system, non-equilibrium steady state and ergodicity, places substantial constraints on the theoretical and empirical study of biological systems with the risk of obscuring all features that make biological systems interesting kinds of self-organizing systems.
Free energy minimisation and Bayesian inference
All Bayesian inference can be cast in terms of free energy minimisation. When free energy is minimised with respect to internal states, the Kullback–Leibler divergence between the variational and posterior density over hidden states is minimised. This corresponds to approximate Bayesian inference – when the form of the variational density is fixed – and exact Bayesian inference otherwise. Free energy minimisation therefore provides a generic description of Bayesian inference and filtering (e.g., Kalman filtering). It is also used in Bayesian model selection, where free energy can be usefully decomposed into complexity and accuracy:
Models with minimum free energy provide an accurate explanation of data, under complexity costs (c.f., Occam's razor and more formal treatments of computational costs). Here, complexity is the divergence between the variational density and prior beliefs about hidden states (i.e., the effective degrees of freedom used to explain the data).
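A short Python sketch of this decomposition (NumPy assumed; the toy discrete model and its numbers are hypothetical): free energy equals complexity, the Kullback-Leibler divergence of the variational density from the prior, minus accuracy, the expected log-likelihood of the data.

import numpy as np

prior      = np.array([0.7, 0.3])     # p(psi), hypothetical
likelihood = np.array([0.9, 0.2])     # p(s | psi) for the observed s, hypothetical
q          = np.array([0.8, 0.2])     # a variational density over psi

complexity = np.sum(q * np.log(q / prior))        # KL(q || prior)
accuracy   = np.sum(q * np.log(likelihood))       # E_q[ln p(s | psi)]
F = complexity - accuracy

# The same quantity computed directly from the joint density p(s, psi):
F_check = np.sum(q * (np.log(q) - np.log(likelihood * prior)))
print(F, F_check)                                  # the two agree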
Free energy minimisation and thermodynamics
Variational free energy is an information-theoretic functional and is distinct from thermodynamic (Helmholtz) free energy. However, the complexity term of variational free energy shares the same fixed point as Helmholtz free energy (under the assumption the system is thermodynamically closed but not isolated). This is because if sensory perturbations are suspended (for a suitably long period of time), complexity is minimised (because accuracy can be neglected). At this point, the system is at equilibrium and internal states minimise Helmholtz free energy, by the principle of minimum energy.
Free energy minimisation and information theory
Free energy minimisation is equivalent to maximising the mutual information between sensory states and internal states that parameterise the variational density (for a fixed entropy variational density). This relates free energy minimization to the principle of minimum redundancy.
Free energy minimisation in neuroscience
Free energy minimisation provides a useful way to formulate normative (Bayes optimal) models of neuronal inference and learning under uncertainty and therefore subscribes to the Bayesian brain hypothesis. The neuronal processes described by free energy minimisation depend on the nature of hidden states: that can comprise time-dependent variables, time-invariant parameters and the precision (inverse variance or temperature) of random fluctuations. Minimising variables, parameters, and precision correspond to inference, learning, and the encoding of uncertainty, respectively.
Perceptual inference and categorisation
Free energy minimisation formalises the notion of unconscious inference in perception and provides a normative (Bayesian) theory of neuronal processing. The associated process theory of neuronal dynamics is based on minimising free energy through gradient descent. This corresponds to generalised Bayesian filtering (where ~ denotes a variable in generalised coordinates of motion and is a derivative matrix operator):
Usually, the generative models that define free energy are non-linear and hierarchical (like cortical hierarchies in the brain). Special cases of generalised filtering include Kalman filtering, which is formally equivalent to predictive coding – a popular metaphor for message passing in the brain. Under hierarchical models, predictive coding involves the recurrent exchange of ascending (bottom-up) prediction errors and descending (top-down) predictions that is consistent with the anatomy and physiology of sensory and motor systems.
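A deliberately simplified single-level sketch of such a gradient-descent scheme in Python (an illustrative toy, not the full generalised filtering described above; the nonlinear mapping, precisions and prior are all assumed): a belief mu about a hidden cause is updated by descending a quadratic free energy built from precision-weighted prediction errors.

g = lambda mu: mu**2                       # hypothetical sensory mapping s = g(psi) + noise
eta, pi_prior, pi_sens = 1.0, 1.0, 4.0     # prior mean and precisions (assumed)
s = 2.5                                    # observed sensory sample

mu, step = 1.0, 0.02
for _ in range(300):
    eps_s = s - g(mu)                      # sensory prediction error
    eps_p = mu - eta                       # prior prediction error
    # F ~ 0.5*(pi_sens*eps_s**2 + pi_prior*eps_p**2); descend its gradient in mu
    dF_dmu = -2.0 * mu * pi_sens * eps_s + pi_prior * eps_p
    mu -= step * dF_dmu

print(mu, g(mu))                           # the belief settles where the weighted errors balance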
Perceptual learning and memory
In predictive coding, optimising model parameters through a gradient descent on the time integral of free energy (free action) reduces to associative or Hebbian plasticity and is associated with synaptic plasticity in the brain.
Perceptual precision, attention and salience
Optimizing the precision parameters corresponds to optimizing the gain of prediction errors (c.f., Kalman gain). In neuronally plausible implementations of predictive coding, this corresponds to optimizing the excitability of superficial pyramidal cells and has been interpreted in terms of attentional gain.
With regard to the top-down vs. bottom-up controversy, which has been addressed as a major open problem of attention, a computational model has succeeded in illustrating the circular nature of the interplay between top-down and bottom-up mechanisms. Using an established emergent model of attention, namely SAIM, the authors proposed a model called PE-SAIM, which, in contrast to the standard version, approaches selective attention from a top-down position. The model takes into account the transmission of prediction errors to the same level or a level above, in order to minimise the energy function that indicates the difference between the data and its cause, or, in other words, between the generative model and the posterior. To increase validity, they also incorporated neural competition between stimuli into their model. A notable feature of this model is the reformulation of the free energy function only in terms of prediction errors during task performance:
where the total energy function is the one entailed by the neural networks, and the prediction error is the discrepancy between the generative model (prior) and the posterior, changing over time.
Comparing the two models reveals a notable similarity between their respective results while also highlighting a remarkable discrepancy, whereby – in the standard version of the SAIM – the model's focus is mainly upon the excitatory connections, whereas in the PE-SAIM, the inhibitory connections are leveraged to make an inference. The model has also proved able to predict EEG and fMRI data drawn from human experiments with high precision. In the same vein, Yahya et al. also applied the free energy principle to propose a computational model for template matching in covert selective visual attention that mostly relies on SAIM. According to this study, the total free energy of the whole state-space is reached by inserting top-down signals in the original neural networks, whereby a dynamical system comprising both feed-forward and backward prediction errors is derived.
Active inference
When gradient descent is applied to action , motor control can be understood in terms of classical reflex arcs that are engaged by descending (corticospinal) predictions. This provides a formalism that generalizes the equilibrium point solution – to the degrees of freedom problem – to movement trajectories.
Active inference and optimal control
Active inference is related to optimal control by replacing value or cost-to-go functions with prior beliefs about state transitions or flow. This exploits the close connection between Bayesian filtering and the solution to the Bellman equation. However, active inference starts with (priors over) flow that are specified with scalar and vector value functions of state space (c.f., the Helmholtz decomposition). Here, is the amplitude of random fluctuations and cost is . The priors over flow induce a prior over states that is the solution to the appropriate forward Kolmogorov equations. In contrast, optimal control optimises the flow, given a cost function, under the assumption that (i.e., the flow is curl free or has detailed balance). Usually, this entails solving backward Kolmogorov equations.
Active inference and optimal decision (game) theory
Optimal decision problems (usually formulated as partially observable Markov decision processes) are treated within active inference by absorbing utility functions into prior beliefs. In this setting, states that have a high utility (low cost) are states an agent expects to occupy. By equipping the generative model with hidden states that model control, policies (control sequences) that minimise variational free energy lead to high utility states.
Neurobiologically, neuromodulators such as dopamine are considered to report the precision of prediction errors by modulating the gain of principal cells encoding prediction error. This is closely related to – but formally distinct from – the role of dopamine in reporting prediction errors per se and related computational accounts.
Active inference and cognitive neuroscience
Active inference has been used to address a range of issues in cognitive neuroscience, brain function and neuropsychiatry, including action observation, mirror neurons, saccades and visual search, eye movements, sleep, illusions, attention, action selection, consciousness, hysteria and psychosis. Explanations of action in active inference often depend on the idea that the brain has 'stubborn predictions' that it cannot update, leading to actions that cause these predictions to come true.
See also
Constructal law - Law of design evolution in nature, animate and inanimate
References
External links
Behavioral and Brain Sciences (by Andy Clark)
Biological systems
Systems theory
Computational neuroscience
Mathematical and theoretical biology | 0.789956 | 0.994046 | 0.785253 |
Wind | Wind is the natural movement of air or other gases relative to a planet's surface. Winds occur on a range of scales, from thunderstorm flows lasting tens of minutes, to local breezes generated by heating of land surfaces and lasting a few hours, to global winds resulting from the difference in absorption of solar energy between the climate zones on Earth. The two main causes of large-scale atmospheric circulation are the differential heating between the equator and the poles, and the rotation of the planet (Coriolis effect). Within the tropics and subtropics, thermal low circulations over terrain and high plateaus can drive monsoon circulations. In coastal areas the sea breeze/land breeze cycle can define local winds; in areas that have variable terrain, mountain and valley breezes can prevail.
Winds are commonly classified by their spatial scale, their speed and direction, the forces that cause them, the regions in which they occur, and their effect. Winds have various defining aspects such as velocity (wind speed), the density of the gases involved, and energy content or wind energy. In meteorology, winds are often referred to according to their strength, and the direction from which the wind is blowing. The convention for directions refers to where the wind comes from; therefore, a 'western' or 'westerly' wind blows from the west to the east, a 'northern' wind blows from the north towards the south, and so on. This is sometimes counter-intuitive.
Short bursts of high speed wind are termed gusts. Strong winds of intermediate duration (around one minute) are termed squalls. Long-duration winds have various names associated with their average strength, such as breeze, gale, storm, and hurricane.
In outer space, solar wind is the movement of gases or charged particles from the Sun through space, while planetary wind is the outgassing of light chemical elements from a planet's atmosphere into space. The strongest observed winds on a planet in the Solar System occur on Neptune and Saturn.
In human civilization, the concept of wind has been explored in mythology, influenced the events of history, expanded the range of transport and warfare, and provided a power source for mechanical work, electricity, and recreation. Wind powers the voyages of sailing ships across Earth's oceans. Hot air balloons use the wind to take short trips, and powered flight uses it to increase lift and reduce fuel consumption. Areas of wind shear caused by various weather phenomena can lead to dangerous situations for aircraft. When winds become strong, trees and human-made structures can be damaged or destroyed.
Winds can shape landforms, via a variety of aeolian processes such as the formation of fertile soils, for example loess, and by erosion. Dust from large deserts can be moved great distances from its source region by the prevailing winds; winds that are accelerated by rough topography and associated with dust outbreaks have been assigned regional names in various parts of the world because of their significant effects on those regions. Wind also affects the spread of wildfires. Winds can disperse seeds from various plants, enabling the survival and dispersal of those plant species, as well as flying insect and bird populations. When combined with cold temperatures, the wind has a negative impact on livestock. Wind affects animals' food stores, as well as their hunting and defensive strategies.
Causes
Wind is caused by differences in atmospheric pressure, which are mainly due to temperature differences. When a difference in atmospheric pressure exists, air moves from the higher to the lower pressure area, resulting in winds of various speeds. On a rotating planet, air will also be deflected by the Coriolis effect, except exactly on the equator. Globally, the two major driving factors of large-scale wind patterns (the atmospheric circulation) are the differential heating between the equator and the poles (difference in absorption of solar energy leading to buoyancy forces) and the rotation of the planet. Outside the tropics and aloft from frictional effects of the surface, the large-scale winds tend to approach geostrophic balance. Near the Earth's surface, friction causes the wind to be slower than it would be otherwise. Surface friction also causes winds to blow more inward into low-pressure areas.
Winds defined by an equilibrium of physical forces are used in the decomposition and analysis of wind profiles. They are useful for simplifying the atmospheric equations of motion and for making qualitative arguments about the horizontal and vertical distribution of horizontal winds. The geostrophic wind component is the result of the balance between Coriolis force and pressure gradient force. It flows parallel to isobars and approximates the flow above the atmospheric boundary layer in the midlatitudes. The thermal wind is the difference in the geostrophic wind between two levels in the atmosphere. It exists only in an atmosphere with horizontal temperature gradients. The ageostrophic wind component is the difference between actual and geostrophic wind, which is responsible for air "filling up" cyclones over time. The gradient wind is similar to the geostrophic wind but also includes centrifugal force (or centripetal acceleration).
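As a small worked example of geostrophic balance (Python; the pressure gradient, air density and latitude are illustrative), the geostrophic wind speed follows from equating the pressure-gradient force to the Coriolis force, V_g = |∇p| / (ρ f), with the Coriolis parameter f = 2Ω sin(latitude):

import numpy as np

def geostrophic_wind_speed(dp_dn, rho=1.2, lat_deg=45.0):
    # V_g = |grad p| / (rho * f), with Coriolis parameter f = 2*Omega*sin(lat)
    omega = 7.2921e-5                     # Earth's rotation rate, rad/s
    f = 2.0 * omega * np.sin(np.radians(lat_deg))
    return dp_dn / (rho * f)

# A horizontal gradient of 1 hPa per 100 km at 45 degrees latitude:
print(geostrophic_wind_speed(100.0 / 100e3))   # ~ 8 m/s, a typical mid-latitude value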
Measurement
Wind direction is usually expressed in terms of the direction from which it originates. For example, a northerly wind blows from the north to the south. Weather vanes pivot to indicate the direction of the wind. At airports, windsocks indicate wind direction, and can also be used to estimate wind speed by the angle of hang. Wind speed is measured by anemometers, most commonly using rotating cups or propellers. When a high measurement frequency is needed (such as in research applications), wind can be measured by the propagation speed of ultrasound signals or by the effect of ventilation on the resistance of a heated wire. Another type of anemometer uses pitot tubes that take advantage of the pressure differential between an inner tube and an outer tube that is exposed to the wind to determine the dynamic pressure, which is then used to compute the wind speed.
Sustained wind speeds are reported globally at a height and are averaged over a 10‑minute time frame. The United States reports winds over a 1‑minute average for tropical cyclones, and a 2‑minute average within weather observations. India typically reports winds over a 3‑minute average. Knowing the wind sampling average is important, as the value of a one-minute sustained wind is typically 14% greater than a ten-minute sustained wind. A short burst of high speed wind is termed a wind gust; one technical definition of a wind gust is: the maxima that exceed the lowest wind speed measured during a ten-minute time interval by for periods of seconds. A squall is an increase of the wind speed above a certain threshold, which lasts for a minute or more.
To determine winds aloft, radiosondes determine wind speed by GPS, radio navigation, or radar tracking of the probe. Alternatively, movement of the parent weather balloon position can be tracked from the ground visually using theodolites. Remote sensing techniques for wind include SODAR, Doppler lidars and radars, which can measure the Doppler shift of electromagnetic radiation scattered or reflected off suspended aerosols or molecules, and radiometers and radars can be used to measure the surface roughness of the ocean from space or airplanes. Ocean roughness can be used to estimate wind velocity close to the sea surface over oceans. Geostationary satellite imagery can be used to estimate the winds at cloud top based upon how far clouds move from one image to the next. Wind engineering describes the study of the effects of the wind on the built environment, including buildings, bridges and other artificial objects.
Models
Models can provide spatial and temporal information about airflow. Spatial information can be obtained through the interpolation of data from various measurement stations, allowing for horizontal data calculation. Alternatively, profiles, such as the logarithmic wind profile, can be utilized to derive vertical information.
Temporal information is typically computed by solving the Navier-Stokes equations within numerical weather prediction models, generating global data for General Circulation Models or specific regional data. The calculation of wind fields is influenced by factors such as radiation differentials, Earth's rotation, and friction, among others. Solving the Navier-Stokes equations is a time-consuming numerical process, but machine learning techniques can help expedite computation time.
Numerical weather prediction models have significantly advanced our understanding of atmospheric dynamics and have become indispensable tools in weather forecasting and climate research. By leveraging both spatial and temporal data, these models enable scientists to analyze and predict global and regional wind patterns, contributing to our comprehension of the Earth's complex atmospheric system.
Wind force scale
Historically, the Beaufort wind force scale (created by Beaufort) provides an empirical description of wind speed based on observed sea conditions. Originally it was a 13-level scale (0–12), but during the 1940s, the scale was expanded to 18 levels (0–17). There are general terms that differentiate winds of different average speeds such as a breeze, a gale, a storm, or a hurricane. Within the Beaufort scale, gale-force winds lie between and with preceding adjectives such as moderate, fresh, strong, and whole used to differentiate the wind's strength within the gale category. A storm has winds of to . The terminology for tropical cyclones differs from one region to another globally. Most ocean basins use the average wind speed to determine the tropical cyclone's category. Below is a summary of the classifications used by Regional Specialized Meteorological Centers worldwide:
Enhanced Fujita scale
The Enhanced Fujita Scale (EF Scale) rates the strength of tornadoes by using damage to estimate wind speed. It has six levels, from visible damage to complete destruction. It is used in the United States and in some other countries, including Canada and France, with small modifications.
Station model
The station model plotted on surface weather maps uses a wind barb to show both wind direction and speed. The wind barb shows the speed using "flags" on the end.
Each half of a flag depicts 5 knots of wind.
Each full flag depicts 10 knots of wind.
Each pennant (filled triangle) depicts 50 knots of wind.
Winds are depicted as blowing from the direction the barb is facing. Therefore, a northeast wind will be depicted with a line extending from the cloud circle to the northeast, with flags indicating wind speed on the northeast end of this line. Once plotted on a map, an analysis of isotachs (lines of equal wind speeds) can be accomplished. Isotachs are particularly useful in diagnosing the location of the jet stream on upper-level constant pressure charts, and are usually located at or above the 300 hPa level.
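A tiny Python helper (illustrative only) that decodes a wind barb into a speed using the symbol values listed above:

def barb_speed_knots(pennants, full_flags, half_flags):
    # Station-model wind barb: 50 kn per pennant, 10 kn per full flag, 5 kn per half flag
    return 50 * pennants + 10 * full_flags + 5 * half_flags

print(barb_speed_knots(1, 2, 1))   # 75 knots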
Global climatology
Easterly winds, on average, dominate the flow pattern across the poles, westerly winds blow across the mid-latitudes of the Earth, polewards of the subtropical ridge, while easterlies again dominate the tropics.
Directly under the subtropical ridge are the doldrums, or horse latitudes, where winds are lighter. Many of the Earth's deserts lie near the average latitude of the subtropical ridge, where descent reduces the relative humidity of the air mass. The strongest winds are in the mid-latitudes where cold polar air meets warm air from the tropics.
Tropics
The trade winds (also called trades) are the prevailing pattern of easterly surface winds found in the tropics towards the Earth's equator. The trade winds blow predominantly from the northeast in the Northern Hemisphere and from the southeast in the Southern Hemisphere. The trade winds act as the steering flow for tropical cyclones that form over the world's oceans. Trade winds also steer African dust westward across the Atlantic Ocean into the Caribbean, as well as portions of southeast North America.
A monsoon is a seasonal prevailing wind that lasts for several months within tropical regions. The term was first used in English in India, Bangladesh, Pakistan, and neighboring countries to refer to the big seasonal winds blowing from the Indian Ocean and Arabian Sea in the southwest bringing heavy rainfall to the area. Its poleward progression is accelerated by the development of a heat low over the Asian, African, and North American continents during May through July, and over Australia in December.
Westerlies and their impact
The Westerlies or the Prevailing Westerlies are the prevailing winds in the middle latitudes between 35 and 65 degrees latitude. These prevailing winds blow from the west to the east, and steer extratropical cyclones in this general manner. The winds are predominantly from the southwest in the Northern Hemisphere and from the northwest in the Southern Hemisphere. They are strongest in the winter, when the pressure is lower over the poles, and weakest during the summer, when pressures are higher over the poles.
Together with the trade winds, the westerlies enabled a round-trip trade route for sailing ships crossing the Atlantic and Pacific Oceans, as the westerlies lead to the development of strong ocean currents on the western sides of oceans in both hemispheres through the process of western intensification. These western ocean currents transport warm, sub-tropical water polewards toward the polar regions. The westerlies can be particularly strong, especially in the southern hemisphere, where there is less land in the middle latitudes to cause the flow pattern to amplify, which slows the winds down. The strongest westerly winds in the middle latitudes are within a band known as the Roaring Forties, between 40 and 50 degrees latitude south of the equator. The Westerlies play an important role in carrying the warm, equatorial waters and winds to the western coasts of continents, especially in the southern hemisphere because of its vast oceanic expanse.
Polar easterlies
The polar easterlies, also known as Polar Hadley cells, are dry, cold prevailing winds that blow from the high-pressure areas of the polar highs at the north and South Poles towards the low-pressure areas within the Westerlies at high latitudes. Unlike the Westerlies, these prevailing winds blow from the east to the west, and are often weak and irregular. Because of the low sun angle, cold air builds up and subsides at the pole creating surface high-pressure areas, forcing an equatorward outflow of air; that outflow is deflected westward by the Coriolis effect.
Local considerations
Sea and land breezes
In coastal regions, sea breezes and land breezes can be important factors in a location's prevailing winds. The sea is warmed by the sun more slowly because of water's greater specific heat compared to land. As the temperature of the surface of the land rises, the land heats the air above it by conduction. The warm air is less dense than the surrounding environment and so it rises. The cooler air above the sea, now with higher sea level pressure, flows inland into the lower pressure, creating a cooler breeze near the coast. A background along-shore wind either strengthens or weakens the sea breeze, depending on its orientation with respect to the Coriolis force.
At night, the land cools off more quickly than the ocean because of differences in their specific heat values. This temperature change causes the daytime sea breeze to dissipate. When the temperature onshore cools below the temperature offshore, the pressure over the water will be lower than that of the land, establishing a land breeze, as long as an onshore wind is not strong enough to oppose it.
Near mountains
Over elevated surfaces, heating of the ground exceeds the heating of the surrounding air at the same altitude above sea level, creating an associated thermal low over the terrain and enhancing any thermal lows that would have otherwise existed, and changing the wind circulation of the region. In areas where there is rugged topography that significantly interrupts the environmental wind flow, the wind circulation between mountains and valleys is the most important contributor to the prevailing winds. Hills and valleys substantially distort the airflow by increasing friction between the atmosphere and landmass by acting as a physical block to the flow, deflecting the wind parallel to the range just upstream of the topography, which is known as a barrier jet. This barrier jet can increase the low-level wind by 45%. Wind direction also changes because of the contour of the land.
If there is a pass in the mountain range, winds will rush through the pass with considerable speed because of the Bernoulli principle that describes an inverse relationship between speed and pressure. The airflow can remain turbulent and erratic for some distance downwind into the flatter countryside. These conditions are dangerous to ascending and descending airplanes. Cool winds accelerating through mountain gaps have been given regional names. In Central America, examples include the Papagayo wind, the Panama wind, and the Tehuano wind. In Europe, similar winds are known as the Bora, Tramontane, and Mistral. When these winds blow over open waters, they increase mixing of the upper layers of the ocean that elevates cool, nutrient rich waters to the surface, which leads to increased marine life.
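As a toy illustration of the inverse speed-pressure relationship, the sketch below applies Bernoulli's equation for steady, incompressible, frictionless flow to a gap wind; the air density, upstream speed, and pressure drop are assumed values, and real gap winds also involve friction and stratification.

```python
import math

RHO_AIR = 1.2  # kg/m^3, typical near-surface air density (assumed)

def gap_wind_speed(v_upstream, pressure_drop_pa):
    """Downstream speed after a pressure drop, from Bernoulli's equation
    p + 0.5*rho*v**2 = constant (steady, incompressible, frictionless)."""
    return math.sqrt(v_upstream**2 + 2.0 * pressure_drop_pa / RHO_AIR)

# Illustrative only: 5 m/s upstream and a 4 hPa (400 Pa) drop across the gap.
print(f"{gap_wind_speed(5.0, 400.0):.1f} m/s")  # roughly 26 m/s
```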
In mountainous areas, local distortion of the airflow becomes severe. Jagged terrain combines to produce unpredictable flow patterns and turbulence, such as rotors, which can be topped by lenticular clouds. Strong updrafts, downdrafts, and eddies develop as the air flows over hills and down valleys. Orographic precipitation occurs on the windward side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, also known as upslope flow, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air on the descending and generally warming, leeward side where a rain shadow is observed.
Winds that flow over mountains down into lower elevations are known as downslope winds. These winds are warm and dry. In Europe downwind of the Alps, they are known as foehn. In Poland, an example is the halny wiatr. In Argentina, the local name for downslope winds is zonda. In Java, the local name for such winds is koembang. In New Zealand, they are known as the Nor'west arch, and are accompanied by the cloud formation they are named after that has inspired artwork over the years. In the Great Plains of the United States, these winds are known as a chinook. Downslope winds also occur in the foothills of the Appalachian mountains of the United States, and they can be as strong as other downslope winds and unusual compared to other foehn winds in that the relative humidity typically changes little due to the increased moisture in the source air mass. In California, downslope winds are funneled through mountain passes, which intensify their effect, and examples include the Santa Ana and sundowner winds. Wind speeds during downslope wind events can exceed .
Shear
Wind shear, sometimes referred to as wind gradient, is a difference in wind speed and direction over a relatively short distance in the Earth's atmosphere. Wind shear can be broken down into vertical and horizontal components, with horizontal wind shear seen across weather fronts and near the coast, and vertical shear typically near the surface, though also at higher levels in the atmosphere near upper level jets and frontal zones aloft.
Wind shear itself is a microscale meteorological phenomenon occurring over a very small distance, but it can be associated with mesoscale or synoptic scale weather features such as squall lines and cold fronts. It is commonly observed near microbursts and downbursts caused by thunderstorms, weather fronts, areas of locally higher low level winds referred to as low level jets, near mountains, radiation inversions that occur because of clear skies and calm winds, buildings, wind turbines, and sailboats. Wind shear has a significant effect on the control of aircraft during take-off and landing, and was a significant cause of aircraft accidents involving large loss of life within the United States.
Sound movement through the atmosphere is affected by wind shear, which can bend the wave front, causing sounds to be heard where they normally would not, or vice versa. Strong vertical wind shear within the troposphere also inhibits tropical cyclone development, but helps individual thunderstorms organize into longer-lived systems that can then produce severe weather. The thermal wind concept explains how differences in wind speed with height depend on horizontal temperature differences, and explains the existence of the jet stream.
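To make the thermal wind idea concrete, the sketch below evaluates its simplest mid-latitude, zonal form, du_g/dz = −(g/(f·T))·∂T/∂y, relating vertical shear of the geostrophic wind to the meridional temperature gradient; the Coriolis parameter, mean temperature, and temperature gradient are illustrative assumptions.

```python
G = 9.81              # m/s^2, gravitational acceleration
F_CORIOLIS = 1.0e-4   # s^-1, mid-latitude Coriolis parameter (assumed)
T_MEAN = 280.0        # K, layer-mean temperature (assumed)

def thermal_wind_shear(dT_dy):
    """Vertical shear of the zonal geostrophic wind, du_g/dz (s^-1),
    from the simplest form of the thermal wind relation:
        du_g/dz = -(g / (f * T)) * dT/dy
    where dT_dy is the meridional temperature gradient in K/m."""
    return -(G / (F_CORIOLIS * T_MEAN)) * dT_dy

# Temperature falling 30 K poleward over 10,000 km: dT/dy = -3e-6 K/m.
shear = thermal_wind_shear(-3.0e-6)
print(f"shear ≈ {shear * 1000:.1f} (m/s) per km of height")
print(f"extrapolated over 10 km: ≈ {shear * 10_000:.0f} m/s of extra westerly wind")
```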
In civilization
Religion
As a natural force, the wind was often personified as one or more wind gods or as an expression of the supernatural in many cultures. Vayu is the Vedic and Hindu God of Wind. The Greek wind gods include Boreas, Notus, Eurus, and Zephyrus. Aeolus, in varying interpretations the ruler or keeper of the four winds, has also been described as Astraeus, the god of dusk who fathered the four winds with Eos, goddess of dawn. The ancient Greeks also observed the seasonal change of the winds, as evidenced by the Tower of the Winds in Athens. Venti are the Roman gods of the winds. Fūjin is the Japanese wind god and is one of the eldest Shinto gods. According to legend, he was present at the creation of the world and first let the winds out of his bag to clear the world of mist. In Norse mythology, Njörðr is the god of the wind. The four dvärgar (Norse dwarves) named Norðri, Suðri, Austri and Vestri, and probably also the four stags of Yggdrasil, personify the four winds and parallel the four Greek wind gods. Stribog is the name of the Slavic god of winds, sky and air. He is said to be the ancestor (grandfather) of the winds of the eight directions.
History
Kamikaze is a Japanese word, usually translated as divine wind, believed to be a gift from the gods. The term is first known to have been used as the name of a pair or series of typhoons that are said to have saved Japan from two Mongol fleets under Kublai Khan that attacked Japan in 1274 and again in 1281. Protestant Wind is a name for the storm that deterred the Spanish Armada from an invasion of England in 1588, where the wind played a pivotal role, or the favorable winds that enabled William of Orange to invade England in 1688. During Napoleon's Egyptian Campaign, the French soldiers had a hard time with the khamsin wind: when the storm appeared "as a blood-tint in the distant sky", the Ottomans went to take cover, while the French "did not react until it was too late, then choked and fainted in the blinding, suffocating walls of dust". During the North African Campaign of World War II, "allied and German troops were several times forced to halt in mid-battle because of sandstorms caused by khamsin... Grains of sand whirled by the wind blinded the soldiers and created electrical disturbances that rendered compasses useless."
Transportation
There are many different forms of sailing ships, but they all have certain basic things in common. Except for rotor ships using the Magnus effect, every sailing ship has a hull, rigging and at least one mast to hold up the sails that use the wind to power the ship. Ocean journeys by sailing ship can take many months, and a common hazard is becoming becalmed because of lack of wind, or being blown off course by severe storms or winds that do not allow progress in the desired direction. A severe storm could lead to shipwreck, and the loss of all hands. Sailing ships can only carry a certain quantity of supplies in their hold, so they have to plan long voyages carefully to include appropriate provisions, including fresh water.
For aerodynamic aircraft which operate relative to the air, winds affect groundspeed, and in the case of lighter-than-air vehicles, wind may play a significant or solitary role in their movement and ground track. The velocity of surface wind is generally the primary factor governing the direction of flight operations at an airport, and airfield runways are aligned to account for the common wind direction(s) of the local area. While taking off with a tailwind may be necessary under certain circumstances, a headwind is generally desirable. A tailwind increases takeoff distance required and decreases the climb gradient.
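A small sketch of the headwind/crosswind decomposition used when relating wind to a runway or course is given below; the runway heading and wind values are hypothetical, and the sign convention (positive headwind opposes the direction of travel) is an assumption of this example.

```python
import math

def wind_components(wind_from_deg, wind_speed, heading_deg):
    """Headwind and crosswind components relative to a heading.

    wind_from_deg is the direction the wind blows FROM (meteorological
    convention). Positive headwind opposes the direction of travel;
    positive crosswind comes from the right. Values are illustrative."""
    angle = math.radians(wind_from_deg - heading_deg)
    headwind = wind_speed * math.cos(angle)
    crosswind = wind_speed * math.sin(angle)
    return headwind, crosswind

# Hypothetical example: wind from 240 degrees at 15 kt, runway heading 270 degrees.
h, c = wind_components(240, 15, 270)
print(f"headwind {h:.1f} kt, crosswind {c:.1f} kt (negative = from the left)")
```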
Power source
The ancient Sinhalese of Anuradhapura and in other cities around Sri Lanka used the monsoon winds to power furnaces as early as 300 BCE. The furnaces were constructed on the path of the monsoon winds to bring the temperatures inside up to . A rudimentary windmill was used to power an organ in the first century CE. Windmills were later built in Sistan, Afghanistan, from the 7th century CE. These were vertical-axle windmills, with sails covered in reed matting or cloth material. These windmills were used to grind corn and draw up water, and were used in the gristmilling and sugarcane industries. Horizontal-axle windmills were later used extensively in Northwestern Europe to grind flour beginning in the 1180s, and many Dutch windmills still exist.
Wind power is now one of the main sources of renewable energy, and its use is growing rapidly, driven by innovation and falling prices. Most of the installed capacity in wind power is onshore, but offshore wind power offers a large potential, as wind speeds are typically higher and more constant away from the coast. Wind energy, the kinetic energy of moving air, is proportional to the third power of wind velocity. Betz's law describes the theoretical upper limit on the fraction of this energy that wind turbines can extract, which is about 59%.
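The cubic dependence on wind speed and the Betz limit can be illustrated with a short calculation; the air density and rotor diameter below are assumed, and the result is the idealized power through the rotor disc, not the output of any real turbine.

```python
import math

RHO_AIR = 1.225           # kg/m^3, sea-level air density (assumed)
BETZ_LIMIT = 16.0 / 27.0  # about 0.593, the maximum extractable fraction

def wind_power(v, rotor_diameter):
    """Kinetic power (W) carried by the wind through a rotor disc:
    P = 0.5 * rho * A * v**3, so doubling v multiplies P by 8."""
    area = math.pi * (rotor_diameter / 2.0) ** 2
    return 0.5 * RHO_AIR * area * v**3

d = 100.0  # m, illustrative rotor diameter
for v in (5.0, 10.0):
    p = wind_power(v, d)
    print(f"v = {v:4.1f} m/s: in the wind {p / 1e6:.2f} MW, "
          f"Betz-limited {p * BETZ_LIMIT / 1e6:.2f} MW")
```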
Recreation
Wind figures prominently in several popular sports, including recreational hang gliding, hot air ballooning, kite flying, snowkiting, kite landboarding, kite surfing, paragliding, sailing, and windsurfing. In gliding, wind gradients just above the surface affect the takeoff and landing phases of flight of a glider. Wind gradient can have a noticeable effect on ground launches, also known as winch launches or wire launches. If the wind gradient is significant or sudden, or both, and the pilot maintains the same pitch attitude, the indicated airspeed will increase, possibly exceeding the maximum ground launch tow speed. The pilot must adjust the airspeed to deal with the effect of the gradient. When landing, wind shear is also a hazard, particularly when the winds are strong. As the glider descends through the wind gradient on final approach to landing, airspeed decreases while sink rate increases, and there is insufficient time to accelerate prior to ground contact. The pilot must anticipate the wind gradient and use a higher approach speed to compensate for it.
In the natural world
In arid climates, the main source of erosion is wind. The general wind circulation moves small particulates such as dust across wide oceans thousands of kilometers downwind of their point of origin, which is known as deflation. Westerly winds in the mid-latitudes of the planet drive the movement of ocean currents from west to east across the world's oceans. Wind has a very important role in aiding plants and other immobile organisms in dispersal of seeds, spores, pollen, etc. Although wind is not the primary form of seed dispersal in plants, it provides dispersal for a large percentage of the biomass of land plants.
Erosion
Erosion can be the result of material movement by the wind. There are two main effects. First, wind causes small particles to be lifted and therefore moved to another region. This is called deflation. Second, these suspended particles may impact on solid objects, causing erosion by abrasion. Wind erosion generally occurs in areas with little or no vegetation, often in areas where there is insufficient rainfall to support vegetation. An example is the formation of sand dunes, on a beach or in a desert. Loess is a homogeneous, typically nonstratified, porous, friable, slightly coherent, often calcareous, fine-grained, silty, pale yellow or buff, windblown (aeolian) sediment. It generally occurs as a widespread blanket deposit that covers areas of hundreds of square kilometers to a thickness of tens of meters. Loess often stands in either steep or vertical faces. Loess tends to develop into highly rich soils. Under appropriate climatic conditions, areas with loess are among the most agriculturally productive in the world. Loess deposits are geologically unstable by nature, and will erode very readily. Therefore, windbreaks (such as big trees and bushes) are often planted by farmers to reduce the wind erosion of loess.
Desert dust migration
During mid-summer (July in the northern hemisphere), the westward-moving trade winds south of the northward-moving subtropical ridge expand northwestward from the Caribbean into southeastern North America. When dust from the Sahara moving around the southern periphery of the ridge within the belt of trade winds moves over land, rainfall is suppressed and the sky changes from a blue to a white appearance, which leads to an increase in red sunsets. Its presence negatively impacts air quality by adding to the count of airborne particulates. Over 50% of the African dust that reaches the United States affects Florida. Since 1970, dust outbreaks have worsened because of periods of drought in Africa. There is a large variability in the dust transport to the Caribbean and Florida from year to year. Dust events have been linked to a decline in the health of coral reefs across the Caribbean and Florida, primarily since the 1970s. Similar dust plumes originate in the Gobi Desert, which combined with pollutants, spread large distances downwind, or eastward, into North America.
There are local names for winds associated with sand and dust storms. The Calima carries dust on southeast winds into the Canary islands. The Harmattan carries dust during the winter into the Gulf of Guinea. The Sirocco brings dust from north Africa into southern Europe because of the movement of extratropical cyclones through the Mediterranean. Spring storm systems moving across the eastern Mediterranean Sea cause dust to carry across Egypt and the Arabian peninsula, which are locally known as Khamsin. The Shamal is caused by cold fronts lifting dust into the atmosphere for days at a time across the Persian Gulf states.
Effect on plants
Wind dispersal of seeds, or anemochory, is one of the more primitive means of dispersal. Wind dispersal can take on one of two primary forms: seeds can float on the breeze or alternatively, they can flutter to the ground. The classic examples of these dispersal mechanisms include dandelions (Taraxacum spp., Asteraceae), which have a feathery pappus attached to their seeds and can be dispersed long distances, and maples (Acer (genus) spp., Sapindaceae), which have winged seeds and flutter to the ground. An important constraint on wind dispersal is the need for abundant seed production to maximize the likelihood of a seed landing in a site suitable for germination. There are also strong evolutionary constraints on this dispersal mechanism. For instance, species in the Asteraceae on islands tended to have reduced dispersal capabilities (i.e., larger seed mass and smaller pappus) relative to the same species on the mainland. Reliance upon wind dispersal is common among many weedy or ruderal species. Unusual mechanisms of wind dispersal include tumbleweeds. A related process to anemochory is anemophily, which is the process where pollen is distributed by wind. Large families of plants are pollinated in this manner, which is favored when individuals of the dominant plant species are spaced closely together.
Wind also limits tree growth. On coasts and isolated mountains, the tree line is often much lower than in corresponding altitudes inland and in larger, more complex mountain systems, because strong winds reduce tree growth. High winds scour away thin soils through erosion, as well as damage limbs and twigs. When high winds knock down or uproot trees, the process is known as windthrow. This is most likely on windward slopes of mountains, with severe cases generally occurring to tree stands that are 75 years or older. Plant varieties near the coast, such as the Sitka spruce and sea grape, are pruned back by wind and salt spray near the coastline.
Wind can also cause plants damage through sand abrasion. Strong winds will pick up loose sand and topsoil and hurl it through the air at speeds ranging from to . Such windblown sand causes extensive damage to plant seedlings because it ruptures plant cells, making them vulnerable to evaporation and drought. Using a mechanical sandblaster in a laboratory setting, scientists affiliated with the Agricultural Research Service studied the effects of windblown sand abrasion on cotton seedlings. The study showed that the seedlings responded to the damage created by the windblown sand abrasion by shifting energy from stem and root growth to the growth and repair of the damaged stems. After a period of four weeks, the growth of the seedling once again became uniform throughout the plant, as it was before the windblown sand abrasion occurred.
Besides seeds and pollen, wind also helps disperse plants' enemies: spores and other propagules of plant pathogens are even lighter and able to travel long distances. A few plant diseases are known to have traveled across marginal seas and even entire oceans. Humans are unable to prevent or even slow down wind dispersal of plant pathogens, requiring prediction and amelioration instead.
Effect on animals
Cattle and sheep are prone to wind chill caused by a combination of wind and cold temperatures, when winds exceed , rendering their hair and wool coverings ineffective. Although penguins use both a layer of fat and feathers to help guard against coldness in both water and air, their flippers and feet are less immune to the cold. In the coldest climates such as Antarctica, emperor penguins use huddling behavior to survive the wind and cold, continuously alternating the members on the outside of the assembled group, which reduces heat loss by 50%. Flying insects, a subset of arthropods, are swept along by the prevailing winds, while birds follow their own course taking advantage of wind conditions, in order to either fly or glide. As such, fine line patterns within weather radar imagery, associated with converging winds, are dominated by insect returns. Bird migration, which tends to occur overnight within the lowest of the Earth's atmosphere, contaminates wind profiles gathered by weather radar, particularly the WSR-88D, by increasing the environmental wind returns by to .
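Wind chill combines air temperature and wind speed into an equivalent cooling temperature; the sketch below uses the metric form of the wind chill index adopted in North America in 2001, which is an outside convention rather than something defined in this article, so treat the coefficients as quoted from that formula.

```python
def wind_chill_c(temp_c, wind_kmh):
    """Wind chill index (deg C) from the 2001 North American formula,
    valid roughly for temperatures at or below 10 C and winds above ~5 km/h."""
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * temp_c - 11.37 * v + 0.3965 * temp_c * v

print(f"{wind_chill_c(-10, 30):.1f} C")  # roughly -20 C at -10 C with a 30 km/h wind
```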
Pikas use a wall of pebbles to store dry plants and grasses for the winter in order to protect the food from being blown away. Cockroaches use slight winds that precede the attacks of potential predators, such as toads, to survive their encounters. Their cerci are very sensitive to the wind, and help them survive half of their attacks. Elk have a keen sense of smell that can detect potential upwind predators at a distance of . Increases in wind speed signal glaucous gulls to increase their foraging and aerial attacks on thick-billed murres.
Related damage
High winds are known to cause damage, depending upon the magnitude of their velocity and pressure differential. Wind pressures are positive on the windward side of a structure and negative on the leeward side. Infrequent wind gusts can cause poorly designed suspension bridges to sway. When wind gusts are at a similar frequency to the swaying of the bridge, the bridge can be destroyed more easily, as occurred with the Tacoma Narrows Bridge in 1940. Wind speeds as low as can lead to power outages due to tree branches disrupting the flow of energy through power lines. While no species of tree is guaranteed to stand up to hurricane-force winds, those with shallow roots are more prone to uproot, and brittle trees such as eucalyptus, sea hibiscus, and avocado are more prone to damage. Hurricane-force winds cause substantial damage to mobile homes, and begin to structurally damage homes with foundations. Winds of this strength due to downslope winds off terrain have been known to shatter windows and sandblast paint from cars. Once winds exceed , homes completely collapse, and significant damage is done to larger buildings. Total destruction of artificial structures occurs when winds reach . The Saffir–Simpson scale and Enhanced Fujita scale were designed to help estimate wind speed from the damage caused by high winds related to tropical cyclones and tornadoes, and vice versa.
Australia's Barrow Island holds the record for the strongest wind gust, reaching 408 km/h (253 mph) during tropical Cyclone Olivia on 10 April 1996, surpassing the previous record of 372 km/h (231 mph) set on Mount Washington (New Hampshire) on the afternoon of 12 April 1934.
Wildfire intensity increases during daytime hours. For example, burn rates of smoldering logs are up to five times greater during the day because of lower humidity, increased temperatures, and increased wind speeds. Sunlight warms the ground during the day and causes air currents to travel uphill, and downhill during the night as the land cools. Wildfires are fanned by these winds and often follow the air currents over hills and through valleys. United States wildfire operations revolve around a 24-hour fire day that begins at 10:00 a.m. because of the predictable increase in intensity resulting from the daytime warmth.
In outer space
The solar wind is quite different from a terrestrial wind, in that its origin is the Sun, and it is composed of charged particles that have escaped the Sun's atmosphere. Similar to the solar wind, the planetary wind is composed of light gases that escape planetary atmospheres. Over long periods of time, the planetary wind can radically change the composition of planetary atmospheres.
The fastest wind ever recorded came from the accretion disc of the IGR J17091-3624 black hole. Its speed is , which is 3% of the speed of light.
Planetary wind
The hydrodynamic wind within the upper portion of a planet's atmosphere allows light chemical elements such as hydrogen to move up to the exobase, the lower limit of the exosphere, where the gases can then reach escape velocity, entering outer space without impacting other particles of gas. This type of gas loss from a planet into space is known as planetary wind. Such a process over geologic time causes water-rich planets such as the Earth to evolve into planets like Venus. Additionally, planets with hotter lower atmospheres could accelerate the loss rate of hydrogen.
Solar wind
Rather than air, the solar wind is a stream of charged particles—a plasma—ejected from the upper atmosphere of the Sun at a rate of . It consists mostly of electrons and protons with energies of about 1 keV. The stream of particles varies in temperature and speed with the passage of time. These particles are able to escape the Sun's gravity, in part because of the high temperature of the corona, but also because of high kinetic energy that particles gain through a process that is not well understood. The solar wind creates the Heliosphere, a vast bubble in the interstellar medium surrounding the Solar System. Planets require large magnetic fields in order to reduce the ionization of their upper atmosphere by the solar wind. Other phenomena caused by the solar wind include geomagnetic storms that can knock out power grids on Earth, the aurorae such as the Northern Lights, and the plasma tails of comets that always point away from the Sun.
On other planets
Strong winds at Venus's cloud tops circle the planet every four to five Earth days. When the poles of Mars are exposed to sunlight after their winter, the frozen CO2 sublimates, creating significant winds that sweep off the poles as fast as , which subsequently transports large amounts of dust and water vapor over its landscape. Other Martian winds have resulted in cleaning events and dust devils. On Jupiter, wind speeds of are common in zonal jet streams. Saturn's winds are among the Solar System's fastest. Cassini–Huygens data indicated peak easterly winds of . On Uranus, northern hemisphere wind speeds reach as high as near 50 degrees north latitude. At the cloud tops of Neptune, prevailing winds range in speed from along the equator to at the poles. At 70° S latitude on Neptune, a high-speed jet stream travels at a speed of . The fastest wind on any known planet is on HD 80606 b located 190 light years away, where it blows at more than 11,000 mph or 5 km/s.
See also
References
External links
Current map of global surface winds
Atmospheric dynamics
Meteorological phenomena | 0.786755 | 0.998082 | 0.785246 |
Anabolism | Anabolism is the set of metabolic pathways that construct macromolecules like DNA or RNA from smaller units. These reactions require energy, known also as an endergonic process. Anabolism is the building-up aspect of metabolism, whereas catabolism is the breaking-down aspect. Anabolism is usually synonymous with biosynthesis.
Pathway
Polymerization, an anabolic pathway used to build macromolecules such as nucleic acids, proteins, and polysaccharides, uses condensation reactions to join monomers. Macromolecules are created from smaller molecules using enzymes and cofactors.
Energy source
Anabolism is powered by catabolism, where large molecules are broken down into smaller parts and then used up in cellular respiration. Many anabolic processes are powered by the cleavage of adenosine triphosphate (ATP). Anabolism usually involves reduction and decreases entropy, making it unfavorable without energy input. The starting materials, called the precursor molecules, are joined using the chemical energy made available from hydrolyzing ATP, reducing the cofactors NAD+, NADP+, and FAD, or performing other favorable side reactions. Occasionally it can also be driven by entropy without energy input, in cases like the formation of the phospholipid bilayer of a cell, where hydrophobic interactions aggregate the molecules.
Cofactors
The reducing agents NADH, NADPH, and FADH2, as well as metal ions, act as cofactors at various steps in anabolic pathways. NADH, NADPH, and FADH2 act as electron carriers, while charged metal ions within enzymes stabilize charged functional groups on substrates.
Substrates
Substrates for anabolism are mostly intermediates taken from catabolic pathways during periods of high energy charge in the cell.
Functions
Anabolic processes build organs and tissues. These processes produce growth and differentiation of cells and increase in body size, a process that involves synthesis of complex molecules. Examples of anabolic processes include the growth and mineralization of bone and increases in muscle mass.
Anabolic hormones
Endocrinologists have traditionally classified hormones as anabolic or catabolic, depending on which part of metabolism they stimulate. The classic anabolic hormones are the anabolic steroids, which stimulate protein synthesis and muscle growth, and insulin.
Photosynthetic carbohydrate synthesis
Photosynthetic carbohydrate synthesis in plants and certain bacteria is an anabolic process that produces glucose, cellulose, starch, lipids, and proteins from CO2. It uses the energy produced from the light-driven reactions of photosynthesis, and creates the precursors to these large molecules via carbon assimilation in the photosynthetic carbon reduction cycle, a.k.a. the Calvin cycle.
Amino acid biosynthesis
All amino acids are formed from intermediates in the catabolic processes of glycolysis, the citric acid cycle, or the pentose phosphate pathway. From glycolysis, glucose 6-phosphate is a precursor for histidine; 3-phosphoglycerate is a precursor for glycine and cysteine; phosphoenol pyruvate, combined with the 3-phosphoglycerate-derivative erythrose 4-phosphate, forms tryptophan, phenylalanine, and tyrosine; and pyruvate is a precursor for alanine, valine, leucine, and isoleucine. From the citric acid cycle, α-ketoglutarate is converted into glutamate and subsequently glutamine, proline, and arginine; and oxaloacetate is converted into aspartate and subsequently asparagine, methionine, threonine, and lysine.
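The precursor-to-amino-acid groupings in the paragraph above can be summarized as a small lookup table; this is only a restatement of the list, not a complete map of the biosynthetic pathways.

```python
# Precursor metabolites -> amino acids derived from them, as grouped above.
AMINO_ACID_PRECURSORS = {
    # from glycolysis
    "glucose 6-phosphate": ["histidine"],
    "3-phosphoglycerate": ["glycine", "cysteine"],
    "phosphoenolpyruvate + erythrose 4-phosphate":
        ["tryptophan", "phenylalanine", "tyrosine"],
    "pyruvate": ["alanine", "valine", "leucine", "isoleucine"],
    # from the citric acid cycle
    "alpha-ketoglutarate": ["glutamate", "glutamine", "proline", "arginine"],
    "oxaloacetate": ["aspartate", "asparagine", "methionine", "threonine", "lysine"],
}

def precursors_of(amino_acid):
    """Return the precursor group(s) listed above for a given amino acid."""
    return [k for k, v in AMINO_ACID_PRECURSORS.items() if amino_acid in v]

print(precursors_of("proline"))  # ['alpha-ketoglutarate']
```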
Glycogen storage
During periods of high blood sugar, glucose 6-phosphate from glycolysis is diverted to the glycogen-storing pathway. It is changed to glucose-1-phosphate by phosphoglucomutase and then to UDP-glucose by UTP–glucose-1-phosphate uridylyltransferase. Glycogen synthase adds this UDP-glucose to a glycogen chain.
Gluconeogenesis
Glucagon is traditionally a catabolic hormone, but also stimulates the anabolic process of gluconeogenesis by the liver, and to a lesser extent the kidney cortex and intestines, during starvation to prevent low blood sugar. It is the process of converting pyruvate into glucose. Pyruvate can come from the breakdown of glucose, lactate, amino acids, or glycerol. The gluconeogenesis pathway has many reversible enzymatic processes in common with glycolysis, but it is not the process of glycolysis in reverse. It uses different irreversible enzymes to ensure the overall pathway runs in one direction only.
Regulation
Anabolism operates with separate enzymes from catabolism, which undergo irreversible steps at some point in their pathways. This allows the cell to regulate the rate of production and prevent an infinite loop, also known as a futile cycle, from forming with catabolism.
The balance between anabolism and catabolism is sensitive to ADP and ATP, otherwise known as the energy charge of the cell. High amounts of ATP cause cells to favor the anabolic pathway and slow catabolic activity, while excess ADP slows anabolism and favors catabolism. These pathways are also regulated by circadian rhythms, with processes such as glycolysis fluctuating to match an animal's normal periods of activity throughout the day.
Etymology
The word anabolism is from Neo-Latin, with roots from the Greek aná, "upward", and bállein, "to throw".
References
Metabolism | 0.789604 | 0.99432 | 0.785119 |
Vector calculus | Vector calculus or vector analysis is a branch of mathematics concerned with the differentiation and integration of vector fields, primarily in three-dimensional Euclidean space, The term vector calculus is sometimes used as a synonym for the broader subject of multivariable calculus, which spans vector calculus as well as partial differentiation and multiple integration. Vector calculus plays an important role in differential geometry and in the study of partial differential equations. It is used extensively in physics and engineering, especially in the description of electromagnetic fields, gravitational fields, and fluid flow.
Vector calculus was developed from the theory of quaternions by J. Willard Gibbs and Oliver Heaviside near the end of the 19th century, and most of the notation and terminology was established by Gibbs and Edwin Bidwell Wilson in their 1901 book, Vector Analysis. In its standard form using the cross product, vector calculus does not generalize to higher dimensions, but the alternative approach of geometric algebra, which uses the exterior product, does (see below for more).
Basic objects
Scalar fields
A scalar field associates a scalar value to every point in a space. The scalar is a mathematical number representing a physical quantity. Examples of scalar fields in applications include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields (known as scalar bosons), such as the Higgs field. These fields are the subject of scalar field theory.
Vector fields
A vector field is an assignment of a vector to each point in a space. A vector field in the plane, for instance, can be visualized as a collection of arrows with a given magnitude and direction each attached to a point in the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout space, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from point to point. This can be used, for example, to calculate work done over a line.
Vectors and pseudovectors
In more advanced treatments, one further distinguishes pseudovector fields and pseudoscalar fields, which are identical to vector fields and scalar fields, except that they change sign under an orientation-reversing map: for example, the curl of a vector field is a pseudovector field, and if one reflects a vector field, the curl points in the opposite direction. This distinction is clarified and elaborated in geometric algebra, as described below.
Vector algebra
The algebraic (non-differential) operations in vector calculus are referred to as vector algebra, being defined for a vector space and then applied pointwise to a vector field. The basic algebraic operations consist of scalar multiplication, vector addition, the dot product, and the cross product.
Also commonly used are the two triple products: the scalar triple product and the vector triple product.
Operators and theorems
Differential operators
Vector calculus studies various differential operators defined on scalar or vector fields, which are typically expressed in terms of the del operator, also known as "nabla". The three basic vector operators are the gradient, the divergence, and the curl.
Also commonly used are the two Laplace operators: the scalar Laplacian and the vector Laplacian.
A quantity called the Jacobian matrix is useful for studying functions when both the domain and range of the function are multivariable, such as a change of variables during integration.
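As a concrete check of these operators, the sketch below evaluates the gradient, divergence, and curl of arbitrarily chosen fields with SymPy's vector module, and verifies two standard identities; it assumes a reasonably recent SymPy release in which CoordSys3D, gradient, divergence, and curl are available.

```python
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')  # right-handed Cartesian frame with unit vectors N.i, N.j, N.k

# Arbitrary fields chosen purely for illustration.
f = N.x**2 * N.y + N.z                                    # scalar field
F = N.x * N.y * N.i + N.y * N.z * N.j + N.z * N.x * N.k   # vector field

print(gradient(f))          # 2*x*y i + x**2 j + 1 k (in the components of N)
print(divergence(F))        # x + y + z
print(curl(F))              # -y i - z j - x k
print(divergence(curl(F)))  # 0: the divergence of a curl vanishes identically
print(curl(gradient(f)))    # zero vector: the curl of a gradient vanishes
```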
Integral theorems
The three basic vector operators have corresponding theorems which generalize the fundamental theorem of calculus to higher dimensions: the gradient theorem, Stokes' (curl) theorem, and the divergence theorem.
In two dimensions, the divergence and curl theorems reduce to Green's theorem.
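A quick symbolic check of Green's theorem on the unit square is sketched below: for a field (L, M), the counterclockwise line integral of L dx + M dy around the boundary equals the double integral of ∂M/∂x − ∂L/∂y over the region. The field and region are chosen arbitrarily, and SymPy is used for the integrals.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
L, M = -y, x  # an arbitrary smooth field (L, M)

# Area-integral side: integral of dM/dx - dL/dy over the unit square.
rhs = sp.integrate(sp.diff(M, x) - sp.diff(L, y), (x, 0, 1), (y, 0, 1))

# Line-integral side: L dx + M dy around the boundary, counterclockwise,
# with each edge parameterized as (x(t), y(t)) for t in [0, 1].
edges = [(t, 0), (1, t), (1 - t, 1), (0, 1 - t)]
lhs = sum(
    sp.integrate(
        L.subs({x: xt, y: yt}) * sp.diff(xt, t)
        + M.subs({x: xt, y: yt}) * sp.diff(yt, t),
        (t, 0, 1),
    )
    for xt, yt in edges
)

print(lhs, rhs)  # both equal 2, as Green's theorem requires
```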
Applications
Linear approximations
Linear approximations are used to replace complicated functions with linear functions that are almost the same. Given a differentiable function f(x, y) with real values, one can approximate f(x, y) for (x, y) close to (a, b) by the formula

f(x, y) ≈ f(a, b) + (∂f/∂x)(a, b) · (x − a) + (∂f/∂y)(a, b) · (y − b).

The right-hand side is the equation of the plane tangent to the graph of z = f(x, y) at (a, b).
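A minimal numeric check of this tangent-plane approximation follows; the function, expansion point, and nearby point are all chosen arbitrarily for illustration.

```python
import math

def f(x, y):
    return math.exp(x) * math.sin(y)  # an arbitrary smooth function

a, b = 0.0, math.pi / 4               # expansion point (illustrative)
fx = math.exp(a) * math.sin(b)        # partial derivative df/dx at (a, b)
fy = math.exp(a) * math.cos(b)        # partial derivative df/dy at (a, b)

def tangent_plane(x, y):
    return f(a, b) + fx * (x - a) + fy * (y - b)

x1, y1 = 0.1, math.pi / 4 + 0.1       # a point close to (a, b)
print(f"f(x1, y1)      = {f(x1, y1):.5f}")
print(f"tangent plane  = {tangent_plane(x1, y1):.5f}  (close for small offsets)")
```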
Optimization
For a continuously differentiable function of several real variables, a point P (that is, a set of values for the input variables, which is viewed as a point in ℝⁿ) is critical if all of the partial derivatives of the function are zero at P, or, equivalently, if its gradient is zero there. The critical values are the values of the function at the critical points.
If the function is smooth, or, at least twice continuously differentiable, a critical point may be either a local maximum, a local minimum or a saddle point. The different cases may be distinguished by considering the eigenvalues of the Hessian matrix of second derivatives.
By Fermat's theorem, all local maxima and minima of a differentiable function occur at critical points. Therefore, to find the local maxima and minima, it suffices, theoretically, to compute the zeros of the gradient and the eigenvalues of the Hessian matrix at these zeros.
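The sketch below carries out this recipe for an arbitrarily chosen two-variable function: it solves for the zeros of the gradient and classifies each critical point by the signs of the Hessian eigenvalues, using SymPy.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - 3*x + y**2  # an arbitrary smooth function for illustration

grad = [sp.diff(f, v) for v in (x, y)]
critical_points = sp.solve(grad, (x, y), dict=True)

H = sp.hessian(f, (x, y))
for pt in critical_points:
    eigs = list(H.subs(pt).eigenvals())
    if all(e > 0 for e in eigs):
        kind = "local minimum"
    elif all(e < 0 for e in eigs):
        kind = "local maximum"
    else:
        kind = "saddle point"
    print(pt, kind)
# Output: (x=1, y=0) is a local minimum, (x=-1, y=0) is a saddle point.
```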
Generalizations
Vector calculus can also be generalized to other 3-manifolds and higher-dimensional spaces.
Different 3-manifolds
Vector calculus is initially defined for Euclidean 3-space, which has additional structure beyond simply being a 3-dimensional real vector space, namely: a norm (giving a notion of length) defined via an inner product (the dot product), which in turn gives a notion of angle, and an orientation, which gives a notion of left-handed and right-handed. These structures give rise to a volume form, and also the cross product, which is used pervasively in vector calculus.
The gradient and divergence require only the inner product, while the curl and the cross product also require the handedness of the coordinate system to be taken into account (see the remarks on orientation below for more detail).
Vector calculus can be defined on other 3-dimensional real vector spaces if they have an inner product (or more generally a symmetric nondegenerate form) and an orientation; this is less data than an isomorphism to Euclidean space, as it does not require a set of coordinates (a frame of reference), which reflects the fact that vector calculus is invariant under rotations (the special orthogonal group ).
More generally, vector calculus can be defined on any 3-dimensional oriented Riemannian manifold, or more generally pseudo-Riemannian manifold. This structure simply means that the tangent space at each point has an inner product (more generally, a symmetric nondegenerate form) and an orientation, or more globally that there is a symmetric nondegenerate metric tensor and an orientation, and works because vector calculus is defined in terms of tangent vectors at each point.
Other dimensions
Most of the analytic results are easily understood, in a more general form, using the machinery of differential geometry, of which vector calculus forms a subset. Grad and div generalize immediately to other dimensions, as do the gradient theorem, divergence theorem, and Laplacian (yielding harmonic analysis), while curl and cross product do not generalize as directly.
From a general point of view, the various fields in (3-dimensional) vector calculus are uniformly seen as being k-vector fields: scalar fields are 0-vector fields, vector fields are 1-vector fields, pseudovector fields are 2-vector fields, and pseudoscalar fields are 3-vector fields. In higher dimensions there are additional types of fields (scalar, vector, pseudovector or pseudoscalar, corresponding to 0, 1, n − 1 or n dimensions respectively, which is exhaustive in dimension 3), so one cannot work only with (pseudo)scalars and (pseudo)vectors.
In any dimension, assuming a nondegenerate form, grad of a scalar function is a vector field, and div of a vector field is a scalar function, but only in dimension 3 or 7 (and, trivially, in dimension 0 or 1) is the curl of a vector field a vector field, and only in 3 or 7 dimensions can a cross product be defined (generalizations in other dimensionalities either require n − 1 vectors to yield 1 vector, or are alternative Lie algebras, which are more general antisymmetric bilinear products). The generalization of grad and div, and how curl may be generalized, is elaborated at Curl § Generalizations; in brief, the curl of a vector field is a bivector field, which may be interpreted as the special orthogonal Lie algebra of infinitesimal rotations; however, this cannot be identified with a vector field because the dimensions differ – there are 3 dimensions of rotations in 3 dimensions, but 6 dimensions of rotations in 4 dimensions (and more generally n(n − 1)/2 dimensions of rotations in n dimensions).
There are two important alternative generalizations of vector calculus. The first, geometric algebra, uses k-vector fields instead of vector fields (in 3 or fewer dimensions, every k-vector field can be identified with a scalar function or vector field, but this is not true in higher dimensions). This replaces the cross product, which is specific to 3 dimensions, taking in two vector fields and giving as output a vector field, with the exterior product, which exists in all dimensions and takes in two vector fields, giving as output a bivector (2-vector) field. This product yields Clifford algebras as the algebraic structure on vector spaces (with an orientation and nondegenerate form). Geometric algebra is mostly used in generalizations of physics and other applied fields to higher dimensions.
The second generalization uses differential forms (k-covector fields) instead of vector fields or k-vector fields, and is widely used in mathematics, particularly in differential geometry, geometric topology, and harmonic analysis, in particular yielding Hodge theory on oriented pseudo-Riemannian manifolds. From this point of view, grad, curl, and div correspond to the exterior derivative of 0-forms, 1-forms, and 2-forms, respectively, and the key theorems of vector calculus are all special cases of the general form of Stokes' theorem.
From the point of view of both of these generalizations, vector calculus implicitly identifies mathematically distinct objects, which makes the presentation simpler but the underlying mathematical structure and generalizations less clear.
From the point of view of geometric algebra, vector calculus implicitly identifies k-vector fields with vector fields or scalar functions: 0-vectors and 3-vectors with scalars, 1-vectors and 2-vectors with vectors. From the point of view of differential forms, vector calculus implicitly identifies k-forms with scalar fields or vector fields: 0-forms and 3-forms with scalar fields, 1-forms and 2-forms with vector fields. Thus for example the curl naturally takes as input a vector field or 1-form, but naturally has as output a 2-vector field or 2-form (hence pseudovector field), which is then interpreted as a vector field, rather than directly taking a vector field to a vector field; this is reflected in the curl of a vector field in higher dimensions not having as output a vector field.
See also
Vector calculus identities
Vector algebra relations
Directional derivative
Conservative vector field
Solenoidal vector field
Laplacian vector field
Helmholtz decomposition
Tensor
Geometric calculus
References
Citations
Sources
Sandro Caparrini (2002) "The discovery of the vector representation of moments and angular velocity", Archive for History of Exact Sciences 56:151–81.
Barry Spain (1965) Vector Analysis, 2nd edition, link from Internet Archive.
Chen-To Tai (1995). A historical study of vector analysis. Technical Report RL 915, Radiation Laboratory, University of Michigan.
External links
The Feynman Lectures on Physics Vol. II Ch. 2: Differential Calculus of Vector Fields
A survey of the improper use of ∇ in vector analysis (1994) Tai, Chen-To
Vector Analysis: A Text-book for the Use of Students of Mathematics and Physics, (based upon the lectures of Willard Gibbs) by Edwin Bidwell Wilson, published 1902.
Mathematical physics | 0.78777 | 0.996426 | 0.784955 |
Milankovitch cycles | Milankovitch cycles describe the collective effects of changes in the Earth's movements on its climate over thousands of years. The term was coined and named after the Serbian geophysicist and astronomer Milutin Milanković. In the 1920s, he hypothesized that variations in eccentricity, axial tilt, and precession combined to result in cyclical variations in the intra-annual and latitudinal distribution of solar radiation at the Earth's surface, and that this orbital forcing strongly influenced the Earth's climatic patterns.
Earth movements
The Earth's rotation around its axis, and revolution around the Sun, evolve over time due to gravitational interactions with other bodies in the Solar System. The variations are complex, but a few cycles are dominant.
The Earth's orbit varies between nearly circular and mildly elliptical (its eccentricity varies). When the orbit is more elongated, there is more variation in the distance between the Earth and the Sun, and in the amount of solar radiation, at different times in the year. In addition, the rotational tilt of the Earth (its obliquity) changes slightly. A greater tilt makes the seasons more extreme. Finally, the direction in the fixed stars pointed to by the Earth's axis changes (axial precession), while the Earth's elliptical orbit around the Sun rotates (apsidal precession). The combined effect of precession with eccentricity is that proximity to the Sun occurs during different astronomical seasons.
Milankovitch studied changes in these movements of the Earth, which alter the amount and location of solar radiation reaching the Earth. This is known as solar forcing (an example of radiative forcing). Milankovitch emphasized the changes experienced at 65° north due to the great amount of land at that latitude. Land masses change temperature more quickly than oceans, because of the mixing of surface and deep water and the fact that soil has a lower volumetric heat capacity than water.
Orbital eccentricity
The Earth's orbit approximates an ellipse. Eccentricity measures the departure of this ellipse from circularity. The shape of the Earth's orbit varies between nearly circular (theoretically the eccentricity can hit zero) and mildly elliptical (highest eccentricity was 0.0679 in the last 250 million years). Its geometric or logarithmic mean is 0.0019. The major component of these variations occurs with a period of 405,000 years (eccentricity variation of ±0.012). Other components have 95,000-year and 124,000-year cycles (with a beat period of 400,000 years). They loosely combine into a 100,000-year cycle (variation of −0.03 to +0.02). The present eccentricity is 0.0167 and decreasing.
Eccentricity varies primarily due to the gravitational pull of Jupiter and Saturn. The semi-major axis of the orbital ellipse, however, remains unchanged; according to perturbation theory, which computes the evolution of the orbit, the semi-major axis is invariant. The orbital period (the length of a sidereal year) is also invariant, because according to Kepler's third law, it is determined by the semi-major axis. Longer-term variations are caused by interactions involving the perihelia and nodes of the planets Mercury, Venus, Earth, Mars, and Jupiter.
Effect on temperature
The semi-major axis is a constant. Therefore, when Earth's orbit becomes more eccentric, the semi-minor axis shortens. This increases the magnitude of seasonal changes.
The relative increase in solar irradiation at closest approach to the Sun (perihelion) compared to the irradiation at the furthest distance (aphelion) is slightly larger than four times the eccentricity. For Earth's current orbital eccentricity, incoming solar radiation varies by about 6.8%, while the distance from the Sun currently varies by only 3.4%.
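These percentages follow from the inverse-square law applied to the perihelion and aphelion distances; the short check below uses the current eccentricity quoted earlier in this article.

```python
ECCENTRICITY = 0.0167  # current value quoted above

# Perihelion and aphelion distances in units of the semi-major axis (a = 1).
r_peri = 1 - ECCENTRICITY
r_aph = 1 + ECCENTRICITY

distance_variation = r_aph - r_peri                # 2e, about 3.3% of a
irradiance_variation = (r_aph / r_peri) ** 2 - 1   # inverse-square law, about 6.9% (roughly 4e)

print(f"Sun-Earth distance varies by about {distance_variation:.1%}")
print(f"solar irradiance varies by about {irradiance_variation:.1%}")
```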
Perihelion presently occurs around 3 January, while aphelion is around 4 July. When the orbit is at its most eccentric, the amount of solar radiation at perihelion will be about 23% more than at aphelion. However, the Earth's eccentricity is so small (at least at present) that the variation in solar irradiation is a minor factor in seasonal climate variation, compared to axial tilt and even compared to the relative ease of heating the larger land masses of the northern hemisphere.
Effect on lengths of seasons
The seasons are quadrants of the Earth's orbit, marked by the two solstices and the two equinoxes. Kepler's second law states that a body in orbit traces equal areas over equal times; its orbital velocity is highest around perihelion and lowest around aphelion. The Earth spends less time near perihelion and more time near aphelion. This means that the lengths of the seasons vary. Perihelion currently occurs around 3 January, so the Earth's greater velocity shortens winter and autumn in the northern hemisphere. Summer in the northern hemisphere is 4.66 days longer than winter, and spring is 2.9 days longer than autumn. Greater eccentricity increases the variation in the Earth's orbital velocity. Currently, however, the Earth's orbit is becoming less eccentric (more nearly circular). This will make the seasons in the immediate future more similar in length.
Axial tilt (obliquity)
The angle of the Earth's axial tilt with respect to the orbital plane (the obliquity of the ecliptic) varies between 22.1° and 24.5°, over a cycle of about 41,000 years. The current tilt is 23.44°, roughly halfway between its extreme values. The tilt last reached its maximum in 8,700 BCE, which correlates with the beginning of the Holocene, the current geological epoch. It is now in the decreasing phase of its cycle, and will reach its minimum around the year 11,800 CE. Increased tilt increases the amplitude of the seasonal cycle in insolation, providing more solar radiation in each hemisphere's summer and less in winter. However, these effects are not uniform everywhere on the Earth's surface. Increased tilt increases the total annual solar radiation at higher latitudes, and decreases the total closer to the equator.
The current trend of decreasing tilt, by itself, will promote milder seasons (warmer winters and colder summers), as well as an overall cooling trend. Because most of the planet's snow and ice lies at high latitude, decreasing tilt may encourage the termination of an interglacial period and the onset of a glacial period for two reasons: 1) there is less overall summer insolation, and 2) there is less insolation at higher latitudes (which melts less of the previous winter's snow and ice).
Axial precession
Axial precession is the trend in the direction of the Earth's axis of rotation relative to the fixed stars, with a period of about 25,700 years. Also known as the precession of the equinoxes, this motion means that eventually Polaris will no longer be the north pole star. This precession is caused by the tidal forces exerted by the Sun and the Moon on the rotating Earth; both contribute roughly equally to this effect.
Currently, perihelion occurs during the southern hemisphere's summer. This means that solar radiation due to both the axial tilt inclining the southern hemisphere toward the Sun, and the Earth's proximity to the Sun, will reach maximum during the southern summer and reach minimum during the southern winter. These effects on heating are thus additive, which means that seasonal variation in irradiation of the southern hemisphere is more extreme. In the northern hemisphere, these two factors reach maximum at opposite times of the year: the north is tilted toward the Sun when the Earth is furthest from the Sun. The two effects work in opposite directions, resulting in less extreme variations in insolation.
In about 10,000 years, the north pole will be tilted toward the Sun when the Earth is at perihelion. Axial tilt and orbital eccentricity will both contribute their maximum increase in solar radiation during the northern hemisphere's summer. Axial precession will promote more extreme variation in irradiation of the northern hemisphere and less extreme variation in the south. When the Earth's axis is aligned such that aphelion and perihelion occur near the equinoxes, axial tilt will not be aligned with or against eccentricity.
Apsidal precession
The orbital ellipse itself precesses in space, in an irregular fashion, completing a full cycle in about 112,000 years relative to the fixed stars. Apsidal precession occurs in the plane of the ecliptic and alters the orientation of the Earth's orbit relative to the ecliptic. This happens primarily as a result of interactions with Jupiter and Saturn. Smaller contributions are also made by the sun's oblateness and by the effects of general relativity that are well known for Mercury.
Apsidal precession combines with the 25,700-year cycle of axial precession (see above) to vary the position in the year that the Earth reaches perihelion. Apsidal precession shortens this period to about 21,000 years, at present. According to a relatively old source (1965), the average value over the last 300,000 years was 23,000 years, varying between 20,800 and 29,000 years.
As the orientation of Earth's orbit changes, each season will gradually start earlier in the year. Precession means the Earth's nonuniform motion (see above) will affect different seasons. Winter, for instance, will be in a different section of the orbit. When the Earth's apsides (extremes of distance from the sun) are aligned with the equinoxes, the length of spring and summer combined will equal that of autumn and winter. When they are aligned with the solstices, the difference in the length of these seasons will be greatest.
Orbital inclination
The inclination of Earth's orbit drifts up and down relative to its present orbit. This three-dimensional movement is known as "precession of the ecliptic" or "planetary precession". Earth's current inclination relative to the invariable plane (the plane that represents the angular momentum of the Solar System—approximately the orbital plane of Jupiter) is 1.57°. Milankovitch did not study planetary precession. It was discovered more recently and measured, relative to Earth's orbit, to have a period of about 70,000 years. When measured independently of Earth's orbit, but relative to the invariable plane, however, precession has a period of about 100,000 years. This period is very similar to the 100,000-year eccentricity period. Both periods closely match the 100,000-year pattern of glacial events.
Theory constraints
Materials taken from the Earth have been studied to infer the cycles of past climate. Antarctic ice cores contain trapped air bubbles whose ratios of different oxygen isotopes are a reliable proxy for global temperatures around the time the ice was formed. Study of this data concluded that the climatic response documented in the ice cores was driven by northern hemisphere insolation as proposed by the Milankovitch hypothesis. Similar astronomical hypotheses had been advanced in the 19th century by Joseph Adhemar, James Croll, and others.
Analysis of deep-ocean cores and of lake depths, and a seminal paper by Hays, Imbrie, and Shackleton provide additional validation through physical evidence. Climate records contained in a core of rock drilled in Arizona show a pattern synchronized with Earth's eccentricity, and cores drilled in New England match it, going back 215 million years.
100,000-year issue
Of all the orbital cycles, Milankovitch believed that obliquity had the greatest effect on climate, and that it did so by varying the summer insolation in northern high latitudes. Therefore, he deduced a 41,000-year period for ice ages. However, subsequent research has shown that ice age cycles of the Quaternary glaciation over the last million years have been at a period of 100,000 years, which matches the eccentricity cycle. Various explanations for this discrepancy have been proposed, including frequency modulation or various feedbacks (from carbon dioxide, or ice sheet dynamics). Some models can reproduce the 100,000-year cycles as a result of non-linear interactions between small changes in the Earth's orbit and internal oscillations of the climate system. In particular, the mechanism of the stochastic resonance was originally proposed in order to describe this interaction.
Jung-Eun Lee of Brown University proposes that precession changes the amount of energy that Earth absorbs, because the southern hemisphere's greater ability to grow sea ice reflects more energy away from Earth. Moreover, Lee says, "Precession only matters when eccentricity is large. That's why we see a stronger 100,000-year pace than a 21,000-year pace." Some others have argued that the length of the climate record is insufficient to establish a statistically significant relationship between climate and eccentricity variations.
Transition changes
Between 3 and 1 million years ago, climate cycles matched the 41,000-year cycle in obliquity. Around one million years ago, the Mid-Pleistocene Transition (MPT) occurred, with a switch to the 100,000-year cycle matching eccentricity. The transition problem refers to the need to explain what changed one million years ago. The MPT can now be reproduced in numerical simulations that include a decreasing trend in carbon dioxide and glacially induced removal of regolith.
Interpretation of unsplit peak variances
Even the well-dated climate records of the last million years do not exactly match the shape of the eccentricity curve. Eccentricity has component cycles of 95,000 and 125,000 years. Some researchers, however, say the records do not show these peaks, but only indicate a single cycle of 100,000 years. The split between the two eccentricity components, however, is observed at least once in a drill core from the 500-million year-old Scandinavian Alum Shale.
Unsynced stage five observation
Deep-sea core samples show that the interglacial interval known as marine isotope stage 5 began 130,000 years ago. This is 10,000 years before the solar forcing that the Milankovitch hypothesis predicts. (This is also known as the causality problem because the effect precedes the putative cause.)
Present and future conditions
Since orbital variations are predictable, any model that relates orbital variations to climate can be run forward to predict future climate, with two caveats: the mechanism by which orbital forcing influences climate is not definitive; and non-orbital effects can be important (for example, the human impact on the environment principally increases greenhouse gases resulting in a warmer climate).
An often-cited 1980 orbital model by Imbrie predicted "the long-term cooling trend that began some 6,000 years ago will continue for the next 23,000 years." Another work suggests that solar insolation at 65° N will reach a peak of 460 W·m−2 in around 6,500 years, before decreasing back to current levels (450 W·m−2) in around 16,000 years. Earth's orbit will become less eccentric for about the next 100,000 years, so changes in this insolation will be dominated by changes in obliquity, and should not decline enough to permit a new glacial period in the next 50,000 years.
Other celestial bodies
Mars
Since 1972, speculation sought a relationship between the formation of Mars' alternating bright and dark layers in the polar layered deposits, and the planet's orbital climate forcing. In 2002, Laskar, Levrard, and Mustard showed that ice-layer radiance, as a function of depth, correlates with the insolation variations in summer at the Martian north pole, similar to palaeoclimate variations on Earth. They also showed Mars' precession had a period of about 51 kyr, obliquity had a period of about 120 kyr, and eccentricity had a period ranging between 95 and 99 kyr. In 2003, Head, Mustard, Kreslavsky, Milliken, and Marchant proposed Mars was in an interglacial period for the past 400 kyr, and in a glacial period between 400 and 2100 kyr, due to Mars' obliquity exceeding 30°. At this extreme obliquity, insolation is dominated by the regular periodicity of Mars' obliquity variation. Fourier analysis of Mars' orbital elements shows an obliquity period of 128 kyr and a precession index period of 73 kyr.
Mars has no moon large enough to stabilize its obliquity, which has varied from 10 to 70 degrees. This would explain recent observations of its surface compared to evidence of different conditions in its past, such as the extent of its polar caps.
Outer Solar system
Saturn's moon Titan has a cycle of approximately 60,000 years that could change the location of the methane lakes. Neptune's moon Triton has a variation similar to Titan's, which could cause its solid nitrogen deposits to migrate over long time scales.
Exoplanets
Scientists using computer models to study extreme axial tilts have concluded that high obliquity could cause extreme climate variations, and while that would probably not render a planet uninhabitable, it could pose difficulty for land-based life in affected areas. Most such planets would nevertheless allow development of both simple and more complex lifeforms. Although the obliquity they studied is more extreme than Earth ever experiences, there are scenarios 1.5 to 4.5 billion years from now, as the Moon's stabilizing effect lessens, where obliquity could leave its current range and the poles could eventually point almost directly at the Sun.
See also
References
Bibliography
This is the first work that investigated the derivative of the ice volume in relation to insolation (page 698).
In Ancient Rocks Scientists See a Climate Cycle Working Across Deep Time (Columbia Climate School, Kevin Krajick, May 7, 2018)
This shows that Milankovitch theory fits the data extremely well, over the past million years, provided that we consider derivatives.
The oldest reference for Milankovitch cycles is:
Tying celestial mechanics to Earth's ice age (Physics Today 73 (5), Maslin M. A. 01 May 2020)
This review article discusses cycles and great-scale changes in the global climate during the Cenozoic Era.
External links
Campisano, C. J. (2012) Milankovitch Cycles, Paleoclimatic Change, and Hominin Evolution. Nature Education Knowledge 4(3):5
Ice Age – Milankovitch Cycles – National Geographic Channel
The Milankovitch band, Internet Archive of American Geophysical Union lecture
Paleoclimatology
Climate forcing
Ice ages
Periodic phenomena
Lagrangian mechanics
In physics, Lagrangian mechanics is a formulation of classical mechanics founded on the stationary-action principle (also known as the principle of least action). It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760, culminating in his 1788 grand opus, Mécanique analytique.
Lagrangian mechanics describes a mechanical system as a pair (M, L) consisting of a configuration space M and a smooth function L within that space called a Lagrangian. For many systems, L = T − V, where T and V are the kinetic and potential energy of the system, respectively.
The stationary action principle requires that the action functional of the system derived from L must remain at a stationary point (a maximum, minimum, or saddle) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations.
Introduction
Newton's laws and the concept of forces are the usual starting point for teaching about mechanical systems. This method works well for many problems, but for others the approach is
nightmarishly complicated. For example, in calculating the motion of a torus rolling on a horizontal surface with a pearl sliding inside, the time-varying constraint forces and constrained quantities, such as the angular velocity of the torus and the motion of the pearl in relation to the torus, make it difficult to determine the motion of the torus with Newton's equations. Lagrangian mechanics adopts energy rather than force as its basic ingredient, leading to more abstract equations capable of tackling more complex problems.
Particularly, Lagrange's approach was to set up independent generalized coordinates for the position and speed of every object, which allows the writing down of a general form of the Lagrangian (total kinetic energy minus potential energy of the system); integrating this over the time of a trial path of motion yields a quantity called the 'action', and requiring the action to be stationary gives a generalized set of equations. The action is stationary along the path that the particle actually takes. This choice eliminates the need for the constraint forces to enter into the resultant generalized system of equations. There are fewer equations since one is not directly calculating the influence of the constraint on the particle at a given moment.
For a wide variety of physical systems, if the size and shape of a massive object are negligible, it is a useful simplification to treat it as a point particle. For a system of N point particles with masses m1, m2, ..., mN, each particle has a position vector, denoted r1, r2, ..., rN. Cartesian coordinates are often sufficient, so r1 = (x1, y1, z1), r2 = (x2, y2, z2), and so on. In three-dimensional space, each position vector requires three coordinates to uniquely define the location of a point, so there are 3N coordinates to uniquely define the configuration of the system. These are all specific points in space to locate the particles; a general point in space is written r = (x, y, z). The velocity of each particle is how fast the particle moves along its path of motion, and is the time derivative of its position, thus vk = drk/dt.
In Newtonian mechanics, the equations of motion are given by Newton's laws. The second law "net force equals mass times acceleration", Fk = mk d²rk/dt²,
applies to each particle. For an N-particle system in 3 dimensions, there are 3N second-order ordinary differential equations in the positions of the particles to solve for.
Lagrangian
Instead of forces, Lagrangian mechanics uses the energies in the system. The central quantity of Lagrangian mechanics is the Lagrangian, a function which summarizes the dynamics of the entire system. Overall, the Lagrangian has units of energy, but no single expression for all physical systems. Any function which generates the correct equations of motion, in agreement with physical laws, can be taken as a Lagrangian. It is nevertheless possible to construct general expressions for large classes of applications. The non-relativistic Lagrangian for a system of particles in the absence of an electromagnetic field is given by
L = T − V,
where
T = ½ Σk mk vk²
is the total kinetic energy of the system, equaling the sum Σ of the kinetic energies of the particles. Each particle labeled k has mass mk, and vk² = vk · vk is the magnitude squared of its velocity, equivalent to the dot product of the velocity with itself.
Kinetic energy is the energy of the system's motion and is a function only of the velocities vk, not the positions rk, nor time t, so T = T(v1, v2, ..., vN).
V, the potential energy of the system, reflects the energy of interaction between the particles, i.e. how much energy any one particle has due to all the others, together with any external influences. For conservative forces (e.g. Newtonian gravity), it is a function of the position vectors of the particles only, so V = V(r1, r2, ..., rN). For those non-conservative forces which can be derived from an appropriate potential (e.g. electromagnetic potential), the velocities will appear also, V = V(r1, ..., rN, v1, ..., vN). If there is some external field or external driving force changing with time, the potential changes with time, so most generally V = V(r1, ..., rN, v1, ..., vN, t).
As already noted, this form of L is applicable to many important classes of system, but not everywhere. For relativistic Lagrangian mechanics it must be replaced as a whole by a function consistent with special relativity (scalar under Lorentz transformations) or general relativity (4-scalar). Where a magnetic field is present, the expression for the potential energy needs restating. And for dissipative forces (e.g., friction), another function must be introduced alongside Lagrangian often referred to as a "Rayleigh dissipation function" to account for the loss of energy.
One or more of the particles may each be subject to one or more holonomic constraints; such a constraint is described by an equation of the form f(r, t) = 0. If the number of constraints in the system is C, then each constraint has an equation f1(r, t) = 0, f2(r, t) = 0, ..., fC(r, t) = 0, each of which could apply to any of the particles. If particle k is subject to constraint i, then fi(rk, t) = 0. At any instant of time, the coordinates of a constrained particle are linked together and not independent. The constraint equations determine the allowed paths the particles can move along, but not where they are or how fast they go at every instant of time. Nonholonomic constraints depend on the particle velocities, accelerations, or higher derivatives of position. Lagrangian mechanics can only be applied to systems whose constraints, if any, are all holonomic. Three examples of nonholonomic constraints are: when the constraint equations are non-integrable, when the constraints have inequalities, or when complicated non-conservative forces like friction are involved. Nonholonomic constraints require special treatment, and one may have to revert to Newtonian mechanics or use other methods.
If T or V or both depend explicitly on time due to time-varying constraints or external influences, the Lagrangian is explicitly time-dependent. If neither the potential nor the kinetic energy depend on time, then the Lagrangian is explicitly independent of time. In either case, the Lagrangian always has implicit time dependence through the generalized coordinates.
With these definitions, Lagrange's equations of the first kind are
∂L/∂rk − d/dt(∂L/∂ṙk) + Σi λi ∂fi/∂rk = 0,
where k = 1, 2, ..., N labels the particles, there is a Lagrange multiplier λi for each constraint equation fi, and ∂/∂rk and ∂/∂ṙk are each shorthands for a vector of partial derivatives with respect to the indicated variables (not a derivative with respect to the entire vector). Each overdot is a shorthand for a time derivative. This procedure does increase the number of equations to solve compared to Newton's laws, from 3N to 3N + C, because there are 3N coupled second-order differential equations in the position coordinates and multipliers, plus C constraint equations. However, when solved alongside the position coordinates of the particles, the multipliers can yield information about the constraint forces. The coordinates do not need to be eliminated by solving the constraint equations.
In the Lagrangian, the position coordinates and velocity components are all independent variables, and derivatives of the Lagrangian are taken with respect to these separately according to the usual differentiation rules (e.g. the partial derivative of L with respect to the z velocity component of particle 2, defined by vz,2 = dz2/dt, is just m2vz,2; no awkward chain rules or total derivatives need to be used to relate the velocity component to the corresponding coordinate z2).
In each constraint equation, one coordinate is redundant because it is determined from the other coordinates. The number of independent coordinates is therefore n = 3N − C. We can transform each position vector to a common set of n generalized coordinates, conveniently written as an n-tuple q = (q1, q2, ..., qn), by expressing each position vector, and hence the position coordinates, as functions of the generalized coordinates and time:
rk = rk(q, t) = rk(q1, q2, ..., qn, t).
The vector q is a point in the configuration space of the system. The time derivatives of the generalized coordinates are called the generalized velocities, and for each particle the transformation of its velocity vector, the total derivative of its position with respect to time, is
vk = drk/dt = Σj (∂rk/∂qj) q̇j + ∂rk/∂t.
Given this vk, the kinetic energy in generalized coordinates depends on the generalized velocities, generalized coordinates, and time if the position vectors depend explicitly on time due to time-varying constraints, so T = T(q, q̇, t).
With these definitions, the Euler–Lagrange equations, or Lagrange's equations of the second kind,
d/dt(∂L/∂q̇j) − ∂L/∂qj = 0,   j = 1, 2, ..., n,
are mathematical results from the calculus of variations, which can also be used in mechanics. Substituting in the Lagrangian L(q, q̇, t) gives the equations of motion of the system. The number of equations has decreased compared to Newtonian mechanics, from 3N to n = 3N − C coupled second-order differential equations in the generalized coordinates. These equations do not include constraint forces at all; only non-constraint forces need to be accounted for.
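As a concrete sketch of this procedure (not part of the original text), the following uses SymPy's euler_equations helper on the Lagrangian of a simple pendulum, with the angle θ as the single generalized coordinate; the symbol names are illustrative.

```python
# Sketch: obtain the equation of motion of a simple pendulum from its Lagrangian.
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
theta = sp.Function('theta')

T = sp.Rational(1, 2) * m * l**2 * sp.Derivative(theta(t), t)**2  # kinetic energy
V = -m * g * l * sp.cos(theta(t))        # potential energy, measured from the pivot
L = T - V

eqs = euler_equations(L, [theta(t)], t)   # Lagrange's equations of the second kind
print(sp.simplify(eqs[0]))
# The result is equivalent to m*l**2*theta'' = -m*g*l*sin(theta),
# i.e. theta'' = -(g/l)*sin(theta), the familiar pendulum equation.
```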
Although the equations of motion include partial derivatives, the results of the partial derivatives are still ordinary differential equations in the position coordinates of the particles. The total time derivative denoted d/dt often involves implicit differentiation. Both equations are linear in the Lagrangian, but generally are nonlinear coupled equations in the coordinates.
From Newtonian to Lagrangian mechanics
Newton's laws
For simplicity, Newton's laws can be illustrated for one particle without much loss of generality (for a system of N particles, all of these equations apply to each particle in the system). The equation of motion for a particle of constant mass m is Newton's second law of 1687, in modern vector notation
F = ma,
where a is its acceleration and F the resultant force acting on it. Where the mass is varying, the equation needs to be generalised to take the time derivative of the momentum. In three spatial dimensions, this is a system of three coupled second-order ordinary differential equations to solve, since there are three components in this vector equation. The solution is the position vector r of the particle at time t, subject to the initial conditions of r and v when t = 0.
Newton's laws are easy to use in Cartesian coordinates, but Cartesian coordinates are not always convenient, and for other coordinate systems the equations of motion can become complicated. In a set of curvilinear coordinates the law in tensor index notation is the "Lagrangian form"
where Fa is the a-th contravariant component of the resultant force acting on the particle, Γabc are the Christoffel symbols of the second kind,
is the kinetic energy of the particle, and gbc the covariant components of the metric tensor of the curvilinear coordinate system. All the indices a, b, c, each take the values 1, 2, 3. Curvilinear coordinates are not the same as generalized coordinates.
It may seem like an overcomplication to cast Newton's law in this form, but there are advantages. The acceleration components in terms of the Christoffel symbols can be avoided by evaluating derivatives of the kinetic energy instead. If there is no resultant force acting on the particle, it does not accelerate, but moves with constant velocity in a straight line. Mathematically, the solutions of the differential equation are geodesics, the curves of extremal length between two points in space (these may end up being minimal, that is the shortest paths, but not necessarily). In flat 3D real space the geodesics are simply straight lines. So for a free particle, Newton's second law coincides with the geodesic equation and states that free particles follow geodesics, the extremal trajectories it can move along. If the particle is subject to forces the particle accelerates due to forces acting on it and deviates away from the geodesics it would follow if free. With appropriate extensions of the quantities given here in flat 3D space to 4D curved spacetime, the above form of Newton's law also carries over to Einstein's general relativity, in which case free particles follow geodesics in curved spacetime that are no longer "straight lines" in the ordinary sense.
However, we still need to know the total resultant force F acting on the particle, which in turn requires the resultant non-constraint force N plus the resultant constraint force C,
F = C + N.
The constraint forces can be complicated, since they generally depend on time. Also, if there are constraints, the curvilinear coordinates are not independent but related by one or more constraint equations.
The constraint forces can either be eliminated from the equations of motion, so only the non-constraint forces remain, or included by including the constraint equations in the equations of motion.
D'Alembert's principle
A fundamental result in analytical mechanics is D'Alembert's principle, introduced in 1708 by Jacques Bernoulli to understand static equilibrium, and developed by D'Alembert in 1743 to solve dynamical problems. The principle asserts for N particles the virtual work, i.e. the work along a virtual displacement, δrk, is zero:
Σk (Nk + Ck − mk ak) · δrk = 0.
The virtual displacements, δrk, are by definition infinitesimal changes in the configuration of the system consistent with the constraint forces acting on the system at an instant of time, i.e. in such a way that the constraint forces maintain the constrained motion. They are not the same as the actual displacements in the system, which are caused by the resultant constraint and non-constraint forces acting on the particle to accelerate and move it. Virtual work is the work done along a virtual displacement for any force (constraint or non-constraint).
Since the constraint forces act perpendicular to the motion of each particle in the system to maintain the constraints, the total virtual work by the constraint forces acting on the system is zero:
Σk Ck · δrk = 0,
so that
Σk (Nk − mk ak) · δrk = 0.
Thus D'Alembert's principle allows us to concentrate on only the applied non-constraint forces, and exclude the constraint forces in the equations of motion. The form shown is also independent of the choice of coordinates. However, it cannot be readily used to set up the equations of motion in an arbitrary coordinate system since the displacements δrk might be connected by a constraint equation, which prevents us from setting the N individual summands to 0. We will therefore seek a system of mutually independent coordinates for which the total sum will be 0 if and only if the individual summands are 0. Setting each of the summands to 0 will eventually give us our separated equations of motion.
Equations of motion from D'Alembert's principle
If there are constraints on particle k, then since the coordinates of the position rk = (xk, yk, zk) are linked together by a constraint equation, so are those of the virtual displacements δrk = (δxk, δyk, δzk). Since the generalized coordinates are independent, we can avoid the complications with the δrk by converting to virtual displacements in the generalized coordinates. These are related in the same form as a total differential,
δrk = Σj (∂rk/∂qj) δqj.
There is no partial time derivative with respect to time multiplied by a time increment, since this is a virtual displacement, one along the constraints in an instant of time.
The first term in D'Alembert's principle above is the virtual work done by the non-constraint forces Nk along the virtual displacements δrk, and can without loss of generality be converted into the generalized analogues by the definition of generalized forces
Qj = Σk Nk · ∂rk/∂qj,
so that
Σk Nk · δrk = Σj Qj δqj.
This is half of the conversion to generalized coordinates. It remains to convert the acceleration term into generalized coordinates, which is not immediately obvious. Recalling the Lagrange form of Newton's second law, the partial derivatives of the kinetic energy with respect to the generalized coordinates and velocities can be found to give the desired result:
Σk mk ak · ∂rk/∂qj = d/dt(∂T/∂q̇j) − ∂T/∂qj.
Now D'Alembert's principle is in the generalized coordinates as required,
Σj [Qj − (d/dt(∂T/∂q̇j) − ∂T/∂qj)] δqj = 0,
and since these virtual displacements δqj are independent and nonzero, the coefficients can be equated to zero, resulting in Lagrange's equations or the generalized equations of motion,
d/dt(∂T/∂q̇j) − ∂T/∂qj = Qj.
These equations are equivalent to Newton's laws for the non-constraint forces. The generalized forces in this equation are derived from the non-constraint forces only – the constraint forces have been excluded from D'Alembert's principle and do not need to be found. The generalized forces may be non-conservative, provided they satisfy D'Alembert's principle.
Euler–Lagrange equations and Hamilton's principle
For a non-conservative force which depends on velocity, it may be possible to find a potential energy function V that depends on positions and velocities. If the generalized forces Qi can be derived from a potential V such that
Qi = d/dt(∂V/∂q̇i) − ∂V/∂qi,
equating to Lagrange's equations and defining the Lagrangian as L = T − V obtains Lagrange's equations of the second kind or the Euler–Lagrange equations of motion
d/dt(∂L/∂q̇i) − ∂L/∂qi = 0.
However, the Euler–Lagrange equations can only account for non-conservative forces if a potential can be found as shown. This may not always be possible for non-conservative forces, and Lagrange's equations do not involve any potential, only generalized forces; therefore they are more general than the Euler–Lagrange equations.
The Euler–Lagrange equations also follow from the calculus of variations. The variation of the Lagrangian is
δL = Σj (∂L/∂qj δqj + ∂L/∂q̇j δq̇j),
which has a form similar to the total differential of L, but the virtual displacements and their time derivatives replace differentials, and there is no time increment in accordance with the definition of the virtual displacements. An integration by parts with respect to time can transfer the time derivative of δqj to the ∂L/∂(dqj/dt), in the process exchanging d(δqj)/dt for δqj, allowing the independent virtual displacements to be factorized from the derivatives of the Lagrangian,
Now, if the condition holds for all j, the terms not integrated are zero. If in addition the entire time integral of δL is zero, then because the δqj are independent, and the only way for a definite integral to be zero is if the integrand equals zero, each of the coefficients of δqj must also be zero. Then we obtain the equations of motion. This can be summarized by Hamilton's principle:
The time integral of the Lagrangian is another quantity called the action, defined as
S = ∫t1t2 L dt,
which is a functional; it takes in the Lagrangian function for all times between t1 and t2 and returns a scalar value. Its dimensions are the same as those of angular momentum: [energy]·[time], or [momentum]·[length]. With this definition Hamilton's principle is
δS = 0.
Instead of thinking about particles accelerating in response to applied forces, one might think of them picking out the path with a stationary action, with the end points of the path in configuration space held fixed at the initial and final times. Hamilton's principle is one of several action principles.
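Hamilton's principle can also be illustrated numerically. The sketch below (illustrative values, not from the text) discretizes the action for a particle falling freely in one dimension and compares the true path with perturbed paths that share the same end points; the action is smallest for the true path.

```python
# Numerical check of stationary action for 1D free fall: S(eps) has a minimum at eps = 0.
import numpy as np

m, g = 1.0, 9.81
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

def action(z):
    v = np.gradient(z, dt)                      # velocity along the path
    lagr = 0.5 * m * v**2 - m * g * z           # Lagrangian L = T - V at each time
    return float(np.sum((lagr[:-1] + lagr[1:]) * dt / 2))   # trapezoidal time integral

z_true = -0.5 * g * t**2                        # true free-fall path, z(0) = 0, v(0) = 0
for eps in (-0.1, -0.01, 0.0, 0.01, 0.1):
    z = z_true + eps * np.sin(np.pi * t / t[-1])    # perturbation vanishing at both ends
    print(f"eps = {eps:+.2f}   S = {action(z):.6f}")
# The printed action grows quadratically away from eps = 0, the stationary point.
```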
Historically, the idea of finding the shortest path a particle can follow subject to a force motivated the first applications of the calculus of variations to mechanical problems, such as the Brachistochrone problem solved by Jean Bernoulli in 1696, as well as by Leibniz, Jakob Bernoulli and L'Hôpital around the same time, and by Newton the following year. Newton himself was thinking along the lines of the variational calculus, but did not publish. These ideas in turn led to the variational principles of mechanics, of Fermat, Maupertuis, Euler, Hamilton, and others.
Hamilton's principle can be applied to nonholonomic constraints if the constraint equations can be put into a certain form, a linear combination of first-order differentials in the coordinates. The resulting constraint equation can be rearranged into a first-order differential equation. This will not be given here.
Lagrange multipliers and constraints
The Lagrangian L can be varied in the Cartesian rk coordinates, for N particles,
Hamilton's principle is still valid even if the coordinates L is expressed in are not independent, here rk, but the constraints are still assumed to be holonomic. As always the end points are fixed, δrk(t1) = δrk(t2) = 0 for all k. What cannot be done is to simply equate the coefficients of δrk to zero because the δrk are not independent. Instead, the method of Lagrange multipliers can be used to include the constraints. Multiplying each constraint equation by a Lagrange multiplier λi for i = 1, 2, ..., C, and adding the results to the original Lagrangian, gives the new Lagrangian
L′ = L(r1, r2, ..., ṙ1, ṙ2, ..., t) + Σi λi(t) fi(rk, t).
The Lagrange multipliers are arbitrary functions of time t, but not functions of the coordinates rk, so the multipliers are on equal footing with the position coordinates. Varying this new Lagrangian and integrating with respect to time gives
The introduced multipliers can be found so that the coefficients of δrk are zero, even though the rk are not independent. The equations of motion follow. From the preceding analysis, obtaining the solution to this integral is equivalent to the statement
which are Lagrange's equations of the first kind. Also, the λi Euler-Lagrange equations for the new Lagrangian return the constraint equations
For the case of a conservative force given by the gradient of some potential energy V, a function of the rk coordinates only, substituting the Lagrangian gives
and identifying the derivatives of kinetic energy as the (negative of the) resultant force, and the derivatives of the potential equaling the non-constraint force, it follows the constraint forces are
thus giving the constraint forces explicitly in terms of the constraint equations and the Lagrange multipliers.
Properties of the Lagrangian
Non-uniqueness
The Lagrangian of a given system is not unique. A Lagrangian L can be multiplied by a nonzero constant a and shifted by an arbitrary constant b, and the new Lagrangian L′ = aL + b will describe the same motion as L. If one restricts, as above, to trajectories q over a given time interval [t1, t2] and fixed end points P1 = q(t1) and P2 = q(t2), then two Lagrangians describing the same system can differ by the "total time derivative" of a function f(q, t):
L′ = L + df/dt,
where df/dt means Σi (∂f/∂qi) q̇i + ∂f/∂t.
Both Lagrangians L and L′ produce the same equations of motion since the corresponding actions S and S′ are related via
S′ = S + f(q(t2), t2) − f(q(t1), t1),
with the last two terms f(q(t2), t2) and f(q(t1), t1) independent of the varied trajectory q.
Invariance under point transformations
Given a set of generalized coordinates q, if we change these variables to a new set of generalized coordinates Q according to a point transformation Q = Q(q, t), which is invertible as q = q(Q, t), the new Lagrangian L′ is a function of the new coordinates,
L′(Q, Q̇, t) = L(q(Q, t), q̇(Q, Q̇, t), t),
and by the chain rule for partial differentiation, Lagrange's equations are invariant under this transformation;
d/dt(∂L′/∂Q̇i) − ∂L′/∂Qi = 0.
This may simplify the equations of motion.
Cyclic coordinates and conserved momenta
An important property of the Lagrangian is that conserved quantities can easily be read off from it. The generalized momentum "canonically conjugate to" the coordinate qi is defined by
pi = ∂L/∂q̇i.
If the Lagrangian L does not depend on some coordinate qi, it follows immediately from the Euler–Lagrange equations that
ṗi = d/dt(∂L/∂q̇i) = ∂L/∂qi = 0,
and integrating shows the corresponding generalized momentum equals a constant, a conserved quantity. This is a special case of Noether's theorem. Such coordinates are called "cyclic" or "ignorable".
For example, a system may have a Lagrangian
where r and z are lengths along straight lines, s is an arc length along some curve, and θ and φ are angles. Notice z, s, and φ are all absent in the Lagrangian even though their velocities are not. Then the momenta
are all conserved quantities. The units and nature of each generalized momentum will depend on the corresponding coordinate; in this case pz is a translational momentum in the z direction, ps is also a translational momentum along the curve s is measured, and pφ is an angular momentum in the plane the angle φ is measured in. However complicated the motion of the system is, all the coordinates and velocities will vary in such a way that these momenta are conserved.
Energy
Given a Lagrangian L, the Hamiltonian of the corresponding mechanical system is, by definition,
H = Σi q̇i ∂L/∂q̇i − L.
This quantity will be equivalent to energy if the generalized coordinates are natural coordinates, i.e., they have no explicit time dependence when expressing the position vectors: rk = rk(q1, ..., qn). In that case the kinetic energy is a homogeneous quadratic form in the generalized velocities, T = ½ Σij aij(q) q̇i q̇j,
where aij is a symmetric matrix that is defined for the derivation; Euler's theorem for homogeneous functions then gives Σi q̇i ∂T/∂q̇i = 2T, so that H = 2T − (T − V) = T + V.
Invariance under coordinate transformations
At every time instant t, the energy is invariant under configuration space coordinate changes , i.e. (using natural coordinates)
Besides this result, the proof below shows that, under such change of coordinates, the derivatives change as coefficients of a linear form.
Conservation
In Lagrangian mechanics, the system is closed if and only if its Lagrangian does not explicitly depend on time. The energy conservation law states that the energy of a closed system is an integral of motion.
More precisely, let q(t) be an extremal. (In other words, q(t) satisfies the Euler–Lagrange equations). Taking the total time derivative of L along this extremal and using the EL equations leads to
dH/dt = −∂L/∂t.
If the Lagrangian L does not explicitly depend on time, then ∂L/∂t = 0, so H does not vary with the time evolution of the particle; it is indeed an integral of motion, meaning that
H(q(t), q̇(t)) = constant.
Hence, if the chosen coordinates were natural coordinates, the energy is conserved.
Kinetic and potential energies
Under all these circumstances, the constant
E = T + V
is the total energy of the system. The kinetic and potential energies still change as the system evolves, but the motion of the system will be such that their sum, the total energy, is constant. This is a valuable simplification, since the energy E is a constant of integration that counts as an arbitrary constant for the problem, and it may be possible to integrate the velocities from this energy relation to solve for the coordinates.
Mechanical similarity
If the potential energy is a homogeneous function of degree k in the coordinates and independent of time, and all position vectors are scaled by the same nonzero constant α, rk′ = αrk, so that
V(αr1, αr2, ..., αrN) = α^k V(r1, r2, ..., rN),
and time is scaled by a factor β, t′ = βt, then the velocities vk are scaled by a factor of α/β and the kinetic energy T by (α/β)². The entire Lagrangian has been scaled by the same factor if
α^k = (α/β)², that is, β = α^(1 − k/2).
Since the lengths and times have been scaled, the trajectories of the particles in the system follow geometrically similar paths differing in size. The length l traversed in time t in the original trajectory corresponds to a new length l′ traversed in time t′ in the new trajectory, given by the ratios
t′/t = (l′/l)^(1 − k/2).
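A familiar special case, following directly from the scaling relation above (and stated here as an illustration rather than as part of the original text), is Newtonian gravity, whose potential is homogeneous of degree k = −1:

```latex
% Mechanical similarity for an inverse-distance potential (k = -1):
\[
  \frac{t'}{t} \;=\; \left(\frac{l'}{l}\right)^{1 - k/2}
  \;=\; \left(\frac{l'}{l}\right)^{3/2} \quad (k = -1),
\]
% i.e. the squares of orbital periods scale as the cubes of orbital sizes,
% which is Kepler's third law.
```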
Interacting particles
For a given system, if two subsystems A and B are non-interacting, the Lagrangian L of the overall system is the sum of the Lagrangians LA and LB for the subsystems:
L = LA + LB.
If they do interact this is not possible. In some situations, it may be possible to separate the Lagrangian of the system L into the sum of non-interacting Lagrangians, plus another Lagrangian LAB containing information about the interaction,
L = LA + LB + LAB.
This may be physically motivated by taking the non-interacting Lagrangians to be kinetic energies only, while the interaction Lagrangian is the system's total potential energy. Also, in the limiting case of negligible interaction, LAB tends to zero reducing to the non-interacting case above.
The extension to more than two non-interacting subsystems is straightforward – the overall Lagrangian is the sum of the separate Lagrangians for each subsystem. If there are interactions, then interaction Lagrangians may be added.
Consequences of singular Lagrangians
From the Euler–Lagrange equations, it follows that
Σj Wij q̈j = ∂L/∂qi − Σj (∂²L/∂q̇i∂qj) q̇j − ∂²L/∂q̇i∂t,
where the matrix Wij is defined as Wij = ∂²L/∂q̇i∂q̇j. If the matrix W is non-singular, the above equations can be solved to represent q̈ as a function of (q, q̇, t). If the matrix is non-invertible, it is not possible to represent all the q̈i as functions of (q, q̇, t); moreover, the Hamiltonian equations of motion will not take the standard form.
Examples
The following examples apply Lagrange's equations of the second kind to mechanical problems.
Conservative force
A particle of mass m moves under the influence of a conservative force derived from the gradient ∇ of a scalar potential,
F = −∇V(r).
If there are more particles, in accordance with the above results, the total kinetic energy is a sum over all the particle kinetic energies, and the potential is a function of all the coordinates.
Cartesian coordinates
The Lagrangian of the particle can be written
L(x, y, z, ẋ, ẏ, ż) = ½m(ẋ² + ẏ² + ż²) − V(x, y, z).
The equations of motion for the particle are found by applying the Euler–Lagrange equation, for the x coordinate
d/dt(∂L/∂ẋ) − ∂L/∂x = 0,
with derivatives
∂L/∂x = −∂V/∂x,  ∂L/∂ẋ = mẋ,  d/dt(∂L/∂ẋ) = mẍ,
hence
mẍ = −∂V/∂x,
and similarly for the y and z coordinates. Collecting the equations in vector form we find
m d²r/dt² = −∇V,
which is Newton's second law of motion for a particle subject to a conservative force.
Polar coordinates in 2D and 3D
Using the spherical coordinates (r, θ, φ) as commonly used in physics (ISO 80000-2:2019 convention), where r is the radial distance to origin, θ is the polar angle (also known as colatitude, zenith angle, normal angle, or inclination angle), and φ is the azimuthal angle, the Lagrangian for a central potential is
L = ½m(ṙ² + r²θ̇² + r² sin²θ φ̇²) − V(r).
So, in spherical coordinates, the Euler–Lagrange equations are
m r̈ − m r(θ̇² + sin²θ φ̇²) + dV/dr = 0,
d/dt(m r² θ̇) − m r² sin θ cos θ φ̇² = 0,
d/dt(m r² sin²θ φ̇) = 0.
The φ coordinate is cyclic since it does not appear in the Lagrangian, so the conserved momentum in the system is the angular momentum
pφ = ∂L/∂φ̇ = m r² sin²θ φ̇,
in which r, θ and dφ/dt can all vary with time, but only in such a way that pφ is constant.
The Lagrangian in two-dimensional polar coordinates is recovered by fixing θ to the constant value π/2.
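A quick symbolic check of this cyclic coordinate is sketched below (illustrative; coordinates and velocities are treated as independent symbols for the purpose of taking partial derivatives of the Lagrangian).

```python
# Sketch: phi is absent from the central-potential Lagrangian, so its conjugate
# momentum p_phi = dL/d(phi_dot) = m*r^2*sin^2(theta)*phi_dot is conserved.
import sympy as sp

m = sp.symbols('m', positive=True)
r, theta, phi = sp.symbols('r theta phi', real=True)
r_dot, theta_dot, phi_dot = sp.symbols('rdot thetadot phidot', real=True)
V = sp.Function('V')

L = sp.Rational(1, 2) * m * (r_dot**2 + r**2 * theta_dot**2
                             + r**2 * sp.sin(theta)**2 * phi_dot**2) - V(r)

print(sp.diff(L, phi))        # 0: phi is cyclic (it does not appear in L)
print(sp.diff(L, phi_dot))    # m*r**2*sin(theta)**2*phidot, the conserved p_phi
```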
Pendulum on a movable support
Consider a pendulum of mass m and length ℓ, which is attached to a support with mass M, which can move along a line in the x-direction. Let x be the coordinate along the line of the support, and let us denote the position of the pendulum by the angle θ from the vertical. The coordinates and velocity components of the pendulum bob are
xpend = x + ℓ sin θ,  ypend = −ℓ cos θ,
ẋpend = ẋ + ℓθ̇ cos θ,  ẏpend = ℓθ̇ sin θ.
The generalized coordinates can be taken to be x and θ. The kinetic energy of the system is then
T = ½Mẋ² + ½m[(ẋ + ℓθ̇ cos θ)² + (ℓθ̇ sin θ)²],
and the potential energy is
V = −mgℓ cos θ,
giving the Lagrangian
L = T − V = ½(M + m)ẋ² + mℓẋθ̇ cos θ + ½mℓ²θ̇² + mgℓ cos θ.
Since x is absent from the Lagrangian, it is a cyclic coordinate. The conserved momentum is
px = ∂L/∂ẋ = (M + m)ẋ + mℓθ̇ cos θ,
and the Lagrange equation for the support coordinate x is
(M + m)ẍ + mℓθ̈ cos θ − mℓθ̇² sin θ = 0.
The Lagrange equation for the angle θ is
d/dt[m(ẋℓ cos θ + ℓ²θ̇)] + mℓ(ẋθ̇ + g) sin θ = 0,
and simplifying
θ̈ + (ẍ/ℓ) cos θ + (g/ℓ) sin θ = 0.
These equations may look quite complicated, but finding them with Newton's laws would have required carefully identifying all forces, which would have been much more laborious and prone to errors. By considering limit cases, the correctness of this system can be verified: for example, the limit of an immovable support should give the equations of motion for a simple pendulum that is at rest in some inertial frame, while the limit of a uniformly accelerating support should give the equations for a pendulum in a constantly accelerating system, etc. Furthermore, it is trivial to obtain the results numerically, given suitable starting conditions and a chosen time step, by stepping through the results iteratively.
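A minimal sketch of such an iterative numerical solution is given below (illustrative parameter values and initial conditions); the two Lagrange equations above are rearranged at each step into a linear system for the accelerations.

```python
# Sketch: time-stepping the pendulum on a movable support.
import numpy as np

M, m, l, g = 2.0, 1.0, 1.0, 9.81          # support mass, bob mass, length, gravity
dt, steps = 1e-3, 5000

x, theta = 0.0, 0.5                        # initial positions (theta in radians)
x_dot, theta_dot = 0.0, 0.0                # initial velocities

for _ in range(steps):
    c, s = np.cos(theta), np.sin(theta)
    # Mass matrix and right-hand side from the two Lagrange equations above
    A = np.array([[M + m,     m * l * c],
                  [m * l * c, m * l**2 ]])
    b = np.array([m * l * s * theta_dot**2,
                  -m * g * l * s])
    x_ddot, theta_ddot = np.linalg.solve(A, b)
    # Semi-implicit Euler step: update velocities first, then positions
    x_dot += x_ddot * dt
    theta_dot += theta_ddot * dt
    x += x_dot * dt
    theta += theta_dot * dt

print(f"after {steps * dt:.1f} s:  x = {x:.3f} m,  theta = {theta:.3f} rad")
```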
Two-body central force problem
Two bodies of masses m1 and m2 with position vectors r1 and r2 are in orbit about each other due to an attractive central potential V. We may write down the Lagrangian in terms of the position coordinates as they are, but it is an established procedure to convert the two-body problem into a one-body problem as follows. Introduce the Jacobi coordinates: the separation of the bodies r = r2 − r1 and the location of the center of mass R = (m1r1 + m2r2)/(m1 + m2). The Lagrangian is then
L = ½MṘ² + ½μṙ² − V(r) = Lcm + Lrel,
where M = m1 + m2 is the total mass, μ = m1m2/(m1 + m2) is the reduced mass, and V the potential of the radial force, which depends only on the magnitude of the separation r = |r2 − r1|. The Lagrangian splits into a center-of-mass term Lcm and a relative motion term Lrel.
The Euler–Lagrange equation for R is simply
MR̈ = 0,
which states the center of mass moves in a straight line at constant velocity.
Since the relative motion only depends on the magnitude of the separation, it is ideal to use polar coordinates (r, θ) and take r = |r|, so the relative-motion Lagrangian is
Lrel = ½μ(ṙ² + r²θ̇²) − V(r),
so θ is a cyclic coordinate with the corresponding conserved (angular) momentum
ℓ = ∂Lrel/∂θ̇ = μr²θ̇.
The radial coordinate r and angular velocity dθ/dt can vary with time, but only in such a way that ℓ is constant. The Lagrange equation for r is
μr̈ − μrθ̇² + dV/dr = 0.
This equation is identical to the radial equation obtained using Newton's laws in a co-rotating reference frame, that is, a frame rotating with the reduced mass so it appears stationary. Eliminating the angular velocity dθ/dt from this radial equation,
μr̈ = −dV/dr + ℓ²/(μr³),
which is the equation of motion for a one-dimensional problem in which a particle of mass μ is subjected to the inward central force −dV/dr and a second outward force, called in this context the (Lagrangian) centrifugal force (see centrifugal force#Other uses of the term):
Fcf = μrθ̇² = ℓ²/(μr³).
Of course, if one remains entirely within the one-dimensional formulation, enters only as some imposed parameter of the external outward force, and its interpretation as angular momentum depends upon the more general two-dimensional problem from which the one-dimensional problem originated.
If one arrives at this equation using Newtonian mechanics in a co-rotating frame, the interpretation is evident as the centrifugal force in that frame due to the rotation of the frame itself. If one arrives at this equation directly by using the generalized coordinates and simply following the Lagrangian formulation without thinking about frames at all, the interpretation is that the centrifugal force is an outgrowth of using polar coordinates. As Hildebrand says:
"Since such quantities are not true physical forces, they are often called inertia forces. Their presence or absence depends, not upon the particular problem at hand, but upon the coordinate system chosen." In particular, if Cartesian coordinates are chosen, the centrifugal force disappears, and the formulation involves only the central force itself, which provides the centripetal force for a curved motion.
This viewpoint, that fictitious forces originate in the choice of coordinates, often is expressed by users of the Lagrangian method. This view arises naturally in the Lagrangian approach, because the frame of reference is (possibly unconsciously) selected by the choice of coordinates. For example, see for a comparison of Lagrangians in an inertial and in a noninertial frame of reference. See also the discussion of "total" and "updated" Lagrangian formulations in. Unfortunately, this usage of "inertial force" conflicts with the Newtonian idea of an inertial force. In the Newtonian view, an inertial force originates in the acceleration of the frame of observation (the fact that it is not an inertial frame of reference), not in the choice of coordinate system. To keep matters clear, it is safest to refer to the Lagrangian inertial forces as generalized inertial forces, to distinguish them from the Newtonian vector inertial forces. That is, one should avoid following Hildebrand when he says (p. 155) "we deal always with generalized forces, velocities accelerations, and momenta. For brevity, the adjective "generalized" will be omitted frequently."
It is known that the Lagrangian of a system is not unique. Within the Lagrangian formalism the Newtonian fictitious forces can be identified by the existence of alternative Lagrangians in which the fictitious forces disappear, sometimes found by exploiting the symmetry of the system.
Extensions to include non-conservative forces
Dissipative forces
Dissipation (i.e. non-conservative systems) can also be treated with an effective Lagrangian formulated by a certain doubling of the degrees of freedom.
In a more general formulation, the forces could be both conservative and viscous. If an appropriate transformation can be found from the Fi, Rayleigh suggests using a dissipation function, D, of the following form:
D = ½ Σj Σk Cjk q̇j q̇k,
where Cjk are constants that are related to the damping coefficients in the physical system, though not necessarily equal to them. If D is defined this way, then
Qj = −∂V/∂qj − ∂D/∂q̇j
and
d/dt(∂L/∂q̇j) − ∂L/∂qj + ∂D/∂q̇j = 0.
Electromagnetism
A test particle is a particle whose mass and charge are assumed to be so small that its effect on external system is insignificant. It is often a hypothetical simplified point particle with no properties other than mass and charge. Real particles like electrons and up quarks are more complex and have additional terms in their Lagrangians. Not only can the fields form non conservative potentials, these potentials can also be velocity dependent.
The Lagrangian for a charged particle with electrical charge q, interacting with an electromagnetic field, is the prototypical example of a velocity-dependent potential. The electric scalar potential φ = φ(r, t) and magnetic vector potential A = A(r, t) are defined from the electric field E = E(r, t) and magnetic field B = B(r, t) as follows:
E = −∇φ − ∂A/∂t,  B = ∇ × A.
The Lagrangian of a massive charged test particle in an electromagnetic field,
L = ½m ṙ · ṙ + q ṙ · A − qφ,
is called minimal coupling. This is a good example of when the common rule of thumb that the Lagrangian is the kinetic energy minus the potential energy is incorrect. Combined with the Euler–Lagrange equation, it produces the Lorentz force law
m r̈ = qE + q ṙ × B.
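As an illustrative check (not part of the original text), the following SymPy sketch applies the Euler–Lagrange machinery to this minimal-coupling Lagrangian for the special case of a uniform magnetic field along z, represented by the vector potential A = (−By/2, Bx/2, 0) with φ = 0; it recovers the magnetic part of the Lorentz force.

```python
# Sketch: Lorentz force from the minimal-coupling Lagrangian, uniform B along z.
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, q, B = sp.symbols('m q B', positive=True)
x, y = sp.Function('x')(t), sp.Function('y')(t)
x_dot, y_dot = sp.diff(x, t), sp.diff(y, t)

A_x, A_y = -B * y / 2, B * x / 2                       # vector potential for B = B*z_hat
L = sp.Rational(1, 2) * m * (x_dot**2 + y_dot**2) + q * (x_dot * A_x + y_dot * A_y)

for eq in euler_equations(L, [x, y], t):
    print(sp.simplify(eq))
# The two equations simplify to  m*x'' = q*B*y'  and  m*y'' = -q*B*x',
# i.e. the Lorentz force on a charge circling in a uniform magnetic field.
```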
Under the gauge transformation
A → A + ∇f,  φ → φ − ∂f/∂t,
where f(r, t) is any scalar function of space and time, the aforementioned Lagrangian transforms like
L → L + q(ṙ · ∇f + ∂f/∂t) = L + q df/dt,
which still produces the same Lorentz force law.
Note that the canonical momentum (conjugate to position r) is the kinetic momentum plus a contribution from the A field (known as the potential momentum):
p = ∂L/∂ṙ = mṙ + qA.
This relation is also used in the minimal coupling prescription in quantum mechanics and quantum field theory. From this expression, we can see that the canonical momentum is not gauge invariant, and therefore not a measurable physical quantity; However, if is cyclic (i.e. Lagrangian is independent of position ), which happens if the and fields are uniform, then this canonical momentum given here is the conserved momentum, while the measurable physical kinetic momentum is not.
Other contexts and formulations
The ideas in Lagrangian mechanics have numerous applications in other areas of physics, and can adopt generalized results from the calculus of variations.
Alternative formulations of classical mechanics
A closely related formulation of classical mechanics is Hamiltonian mechanics. The Hamiltonian is defined by
H = Σi q̇i (∂L/∂q̇i) − L
and can be obtained by performing a Legendre transformation on the Lagrangian, which introduces new variables canonically conjugate to the original variables. For example, given a set of generalized coordinates, the variables canonically conjugate are the generalized momenta. This doubles the number of variables, but makes differential equations first order. The Hamiltonian is a particularly ubiquitous quantity in quantum mechanics (see Hamiltonian (quantum mechanics)).
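A minimal sketch of this Legendre transformation for a one-dimensional harmonic oscillator is given below (SymPy; the coordinate and velocity are treated as independent symbols, and the symbol names are illustrative).

```python
# Sketch: Legendre transform of the harmonic-oscillator Lagrangian to a Hamiltonian.
import sympy as sp

m, k = sp.symbols('m k', positive=True)
q, q_dot, p = sp.symbols('q qdot p', real=True)

L = sp.Rational(1, 2) * m * q_dot**2 - sp.Rational(1, 2) * k * q**2
p_def = sp.diff(L, q_dot)                          # canonical momentum p = dL/d(q_dot)
q_dot_of_p = sp.solve(sp.Eq(p, p_def), q_dot)[0]   # invert: q_dot = p/m

H = (p * q_dot - L).subs(q_dot, q_dot_of_p)        # H = p*q_dot - L in terms of (q, p)
print(sp.simplify(H))                              # p**2/(2*m) + k*q**2/2
```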
Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, which is not often used in practice but an efficient formulation for cyclic coordinates.
Momentum space formulation
The Euler–Lagrange equations can also be formulated in terms of the generalized momenta rather than generalized coordinates. Performing a Legendre transformation on the generalized coordinate Lagrangian obtains the generalized momenta Lagrangian in terms of the original Lagrangian, as well the EL equations in terms of the generalized momenta. Both Lagrangians contain the same information, and either can be used to solve for the motion of the system. In practice generalized coordinates are more convenient to use and interpret than generalized momenta.
Higher derivatives of generalized coordinates
There is no mathematical reason to restrict the derivatives of generalized coordinates to first order only. It is possible to derive modified EL equations for a Lagrangian containing higher order derivatives, see Euler–Lagrange equation for details. However, from the physical point-of-view there is an obstacle to include time derivatives higher than the first order, which is implied by Ostrogradsky's construction of a canonical formalism for nondegenerate higher derivative Lagrangians, see Ostrogradsky instability
Optics
Lagrangian mechanics can be applied to geometrical optics, by applying variational principles to rays of light in a medium, and solving the EL equations gives the equations of the paths the light rays follow.
Relativistic formulation
Lagrangian mechanics can be formulated in special relativity and general relativity. Some features of Lagrangian mechanics are retained in the relativistic theories but difficulties quickly appear in other respects. In particular, the EL equations take the same form, and the connection between cyclic coordinates and conserved momenta still applies, however the Lagrangian must be modified and is not simply the kinetic minus the potential energy of a particle. Also, it is not straightforward to handle multiparticle systems in a manifestly covariant way, it may be possible if a particular frame of reference is singled out.
Quantum mechanics
In quantum mechanics, action and quantum-mechanical phase are related via the Planck constant, and the principle of stationary action can be understood in terms of constructive interference of wave functions.
In 1948, Feynman discovered the path integral formulation extending the principle of least action to quantum mechanics for electrons and photons. In this formulation, particles travel every possible path between the initial and final states; the probability of a specific final state is obtained by summing over all possible trajectories leading to it. In the classical regime, the path integral formulation cleanly reproduces Hamilton's principle, and Fermat's principle in optics.
Classical field theory
In Lagrangian mechanics, the generalized coordinates form a discrete set of variables that define the configuration of a system. In classical field theory, the physical system is not a set of discrete particles, but rather a continuous field defined over a region of 3D space. Associated with the field is a Lagrangian density
defined in terms of the field and its space and time derivatives at a location r and time t. Analogous to the particle case, for non-relativistic applications the Lagrangian density is also the kinetic energy density of the field, minus its potential energy density (this is not true in general, and the Lagrangian density has to be "reverse engineered"). The Lagrangian is then the volume integral of the Lagrangian density over 3D space,
L(t) = ∫ ℒ d³r,
where d³r is a 3D differential volume element. The Lagrangian is a function of time since the Lagrangian density has implicit space dependence via the fields, and may have explicit spatial dependence, but these are removed in the integral, leaving only time as the variable for the Lagrangian.
Noether's theorem
The action principle, and the Lagrangian formalism, are tied closely to Noether's theorem, which connects physical conserved quantities to continuous symmetries of a physical system.
If the Lagrangian is invariant under a symmetry, then the resulting equations of motion are also invariant under that symmetry. This characteristic is very helpful in showing that theories are consistent with either special relativity or general relativity.
See also
Canonical coordinates
Fundamental lemma of the calculus of variations
Functional derivative
Generalized coordinates
Hamiltonian mechanics
Hamiltonian optics
Inverse problem for Lagrangian mechanics, the general topic of finding a Lagrangian for a system given the equations of motion.
Lagrangian and Eulerian specification of the flow field
Lagrangian point
Lagrangian system
Non-autonomous mechanics
Plateau's problem
Restricted three-body problem
Footnotes
Notes
References
The Principle of Least Action, R. Feynman
Further reading
Gupta, Kiran Chandra, Classical mechanics of particles and rigid bodies (Wiley, 1988).
Goldstein, Herbert, et al. Classical Mechanics. 3rd ed., Pearson, 2002.
External links
Principle of least action interactive Excellent interactive explanation/webpage
Joseph Louis de Lagrange - Œuvres complètes (Gallica-Math)
Constrained motion and generalized coordinates, page 4
Dynamical systems
Mathematical physics
Energy density
In physics, energy density is the quotient between the amount of energy stored in a given system or contained in a given region of space and the volume of the system or region considered. Often only the useful or extractable energy is measured. It is sometimes confused with stored energy per unit mass, which is called specific energy or gravimetric energy density.
There are different types of energy stored, corresponding to a particular type of reaction. In order of the typical magnitude of the energy stored, examples of reactions are: nuclear, chemical (including electrochemical), electrical, pressure, material deformation or in electromagnetic fields. Nuclear reactions take place in stars and nuclear power plants, both of which derive energy from the binding energy of nuclei. Chemical reactions are used by organisms to derive energy from food and by automobiles from the combustion of gasoline. Liquid hydrocarbons (fuels such as gasoline, diesel and kerosene) are today the densest way known to economically store and transport chemical energy at a large scale (1 kg of diesel fuel burns with the oxygen contained in ≈15 kg of air). Burning local biomass fuels supplies household energy needs (cooking fires, oil lamps, etc.) worldwide. Electrochemical reactions are used by devices such as laptop computers and mobile phones to release energy from batteries.
Energy per unit volume has the same physical units as pressure, and in many situations is synonymous. For example, the energy density of a magnetic field may be expressed as u = B²/(2μ0) and behaves like a physical pressure. The energy required to compress a gas to a certain volume may be determined by multiplying the difference between the gas pressure and the external pressure by the change in volume. A pressure gradient describes the potential to perform work on the surroundings by converting internal energy to work until equilibrium is reached.
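A quick numerical illustration of this identification of magnetic energy density with pressure (field values chosen purely for illustration):

```python
# u = B^2 / (2*mu_0) is numerically a pressure in pascals (J/m^3 = N/m^2).
import math

mu_0 = 4 * math.pi * 1e-7           # vacuum permeability, T*m/A
for B in (0.1, 1.0, 10.0):          # magnetic flux density in tesla
    u = B**2 / (2 * mu_0)           # energy density, J/m^3
    print(f"B = {B:5.1f} T  ->  u = {u:.3e} J/m^3  (= {u / 101325:.2f} atm)")
```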
In cosmological and other contexts in general relativity, the energy densities considered relate to the elements of the stress-energy tensor and therefore do include the rest mass energy as well as energy densities associated with pressure.
Chemical energy
When discussing the chemical energy contained, there are different types which can be quantified depending on the intended purpose. One is the theoretical total amount of thermodynamic work that can be derived from a system, at a given temperature and pressure imposed by the surroundings, called exergy. Another is the theoretical amount of electrical energy that can be derived from reactants that are at room temperature and atmospheric pressure. This is given by the change in standard Gibbs free energy. But as a source of heat or for use in a heat engine, the relevant quantity is the change in standard enthalpy or the heat of combustion.
There are two kinds of heat of combustion:
The higher value (HHV), or gross heat of combustion, includes all the heat released as the products cool to room temperature and whatever water vapor is present condenses.
The lower value (LHV), or net heat of combustion, does not include the heat which could be released by condensing water vapor, and may not include the heat released on cooling all the way down to room temperature.
A convenient table of HHV and LHV of some fuels can be found in the references.
In energy storage and fuels
For energy storage, the energy density relates the stored energy to the volume of the storage equipment, e.g. the fuel tank. The higher the energy density of the fuel, the more energy may be stored or transported for the same amount of volume. The energy of a fuel per unit mass is called its specific energy.
The adjacent figure shows the gravimetric and volumetric energy density of some fuels and storage technologies (modified from the Gasoline article). Some values may not be precise because of isomers or other irregularities. The heating values of the fuel describe their specific energies more comprehensively.
The density values for chemical fuels do not include the weight of the oxygen required for combustion. The atomic weights of carbon and oxygen are similar, while hydrogen is much lighter. Figures are presented in this way for those fuels where in practice air would only be drawn in locally to the burner. This explains the apparently lower energy density of materials that contain their own oxidizer (such as gunpowder and TNT), where the mass of the oxidizer in effect adds weight, and absorbs some of the energy of combustion to dissociate and liberate oxygen to continue the reaction. This also explains some apparent anomalies, such as the energy density of a sandwich appearing to be higher than that of a stick of dynamite.
Given the high energy density of gasoline, the exploration of alternative media to store the energy needed to power a car, such as hydrogen or batteries, is strongly limited by the energy density of the alternative medium. The same mass of lithium-ion storage, for example, would result in a car with only 2% of the range of its gasoline counterpart. If sacrificing range is undesirable, much more storage volume is necessary. Alternative options for energy storage that increase energy density and decrease charging time, such as supercapacitors, are also under discussion.
No single energy storage method boasts the best in specific power, specific energy, and energy density. Peukert's law describes how the amount of useful energy that can be obtained (for a lead-acid cell) depends on how quickly it is pulled out.
Efficiency
In general an engine will generate less kinetic energy due to inefficiencies and thermodynamic considerations—hence the specific fuel consumption of an engine will always be greater than its rate of production of the kinetic energy of motion.
Energy density differs from energy conversion efficiency (net output per input) or embodied energy (the energy output costs to provide, as harvesting, refining, distributing, and dealing with pollution all use energy). Large scale, intensive energy use impacts and is impacted by climate, waste storage, and environmental consequences.
Nuclear energy
The greatest energy source by far is matter itself, according to the mass-energy equivalence. This energy is described by E = mc², where c is the speed of light. In terms of density, m = ρV, where ρ is the mass per unit volume and V is the volume of the mass itself. This energy can be released by the processes of nuclear fission (~0.1%), nuclear fusion (~1%), or the annihilation of some or all of the matter in the volume V by matter-antimatter collisions (100%).
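As a rough numerical illustration of these fractions (a back-of-the-envelope sketch, not figures from the text), applying E = mc² to one kilogram of matter and scaling by the approximate conversion fractions quoted above gives:

c = 2.998e8  # speed of light, m/s

def released_energy(mass_kg, fraction):
    """Energy released (J) when a given fraction of the rest mass is converted, via E = mc^2."""
    return fraction * mass_kg * c**2

for label, fraction in [("fission (~0.1%)", 1e-3),
                        ("fusion (~1%)", 1e-2),
                        ("annihilation (100%)", 1.0)]:
    e_joules = released_energy(1.0, fraction)
    print(f"{label:20s} ≈ {e_joules:.2e} J per kg (≈ {e_joules / 3.6e6:.2e} kW·h)")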
The most effective ways of accessing this energy, aside from antimatter, are fusion and fission. Fusion is the process by which the sun produces energy which will be available for billions of years (in the form of sunlight and heat). However as of 2024, sustained fusion power production continues to be elusive. Power from fission in nuclear power plants (using uranium and thorium) will be available for at least many decades or even centuries because of the plentiful supply of the elements on earth, though the full potential of this source can only be realized through breeder reactors, which are, apart from the BN-600 reactor, not yet used commercially.
Fission reactors
Nuclear fuels typically have volumetric energy densities at least tens of thousands of times higher than chemical fuels. A 1 inch tall uranium fuel pellet is equivalent to about 1 ton of coal, 120 gallons of crude oil, or 17,000 cubic feet of natural gas. In light-water reactors, 1 kg of natural uranium – following a corresponding enrichment and used for power generation– is equivalent to the energy content of nearly 10,000 kg of mineral oil or 14,000 kg of coal. Comparatively, coal, gas, and petroleum are the current primary energy sources in the U.S. but have a much lower energy density.
The density of thermal energy contained in the core of a light-water reactor (pressurized water reactor (PWR) or boiling water reactor (BWR)) of typically 1 GWe (1,000 MW electrical corresponding to ≈3,000 MW thermal) is in the range of 10 to 100 MW of thermal energy per cubic meter of cooling water depending on the location considered in the system (the core itself (≈30 m3), the reactor pressure vessel (≈50 m3), or the whole primary circuit (≈300 m3)). This represents a considerable density of energy that requires a continuous water flow at high velocity at all times in order to remove heat from the core, even after an emergency shutdown of the reactor.
The incapacity to cool the cores of three BWRs at Fukushima after the 2011 tsunami and the resulting loss of external electrical power and cold source caused the meltdown of the three cores in only a few hours, even though the three reactors were correctly shut down just after the Tōhoku earthquake. This extremely high power density distinguishes nuclear power plants (NPP's) from any thermal power plants (burning coal, fuel or gas) or any chemical plants and explains the large redundancy required to permanently control the neutron reactivity and to remove the residual heat from the core of NPP's.
Antimatter annihilation
Because antimatter-matter interactions result in complete conversion from the rest mass to radiant energy, the energy density of this reaction depends on the density of the matter and antimatter used. A neutron star would approximate the most dense system capable of matter-antimatter annihilation. A black hole, although denser than a neutron star, does not have an equivalent anti-particle form, but would offer the same 100% conversion rate of mass to energy in the form of Hawking radiation. Even in the case of relatively small black holes (smaller than astronomical objects) the power output would be tremendous.
Electric and magnetic fields
Electric and magnetic fields can store energy, and its density relates to the strength of the fields within a given volume. This (volumetric) energy density is given by
u = (ε/2)E² + B²/(2μ),
where E is the electric field, B is the magnetic field, and ε and μ are the permittivity and permeability of the surroundings respectively. The result is (in SI units) in joules per cubic metre.
In ideal (linear and nondispersive) substances, the energy density (in SI units) is
u = ½(E·D + H·B),
where D is the electric displacement field and H is the magnetizing field. In the absence of magnetic fields, by exploiting Fröhlich's relationships it is also possible to extend these equations to anisotropic and nonlinear dielectrics, as well as to calculate the correlated Helmholtz free energy and entropy densities.
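For a concrete sense of scale, the sketch below evaluates the first expression above (with vacuum permittivity and permeability) for a modest electric field and a strong laboratory magnetic field; the field strengths are assumed for illustration only.

import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m
MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def field_energy_density(E=0.0, B=0.0):
    """Volumetric energy density u = (eps0/2) E^2 + B^2 / (2 mu0), in J/m^3."""
    return 0.5 * EPS0 * E**2 + B**2 / (2 * MU0)

print(field_energy_density(E=1e6))   # 1 MV/m electric field -> about 4.4 J/m^3
print(field_energy_density(B=1.0))   # 1 T magnetic field    -> about 4.0e5 J/m^3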
In the context of magnetohydrodynamics, the physics of conductive fluids, the magnetic energy density behaves like an additional pressure that adds to the gas pressure of a plasma.
Pulsed sources
When a pulsed laser impacts a surface, the radiant exposure, i.e. the energy deposited per unit of surface, may also be called energy density or fluence.
Table of material energy densities
The following unit conversions may be helpful when considering the data in the tables: 3.6 MJ = 1 kW⋅h ≈ 1.34 hp⋅h. Since 1 J = 10⁻⁶ MJ and 1 m³ = 10³ L, divide joule/m³ by 10⁹ to get MJ/L = GJ/m³. Divide MJ/L by 3.6 to get kW⋅h/L.
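These conversions are simple enough to script; the small helpers below (with an assumed round figure roughly representative of gasoline as the example input) just apply them in sequence.

def j_per_m3_to_mj_per_l(u):
    """Convert an energy density from J/m^3 to MJ/L (equivalently GJ/m^3)."""
    return u / 1e9

def mj_per_l_to_kwh_per_l(u):
    """Convert MJ/L to kW*h/L using 3.6 MJ = 1 kW*h."""
    return u / 3.6

print(j_per_m3_to_mj_per_l(3.4e10))   # -> 34.0 MJ/L
print(mj_per_l_to_kwh_per_l(34.0))    # -> about 9.4 kW*h/L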
Chemical reactions (oxidation)
Unless otherwise stated, the values in the following table are lower heating values for perfect combustion, not counting oxidizer mass or volume. When used to produce electricity in a fuel cell or to do work, it is the Gibbs free energy of reaction (ΔG) that sets the theoretical upper limit. If the water produced is vapor, this is generally greater than the lower heat of combustion, whereas if the water produced is liquid, it is generally less than the higher heat of combustion. But in the most relevant case of hydrogen, ΔG is 113 MJ/kg if water vapor is produced, and 118 MJ/kg if liquid water is produced, both being less than the lower heat of combustion (120 MJ/kg).
Electrochemical reactions (batteries)
Common battery formats
Nuclear reactions
In material deformation
The mechanical energy storage capacity, or resilience, of a Hookean material when it is deformed to the point of failure can be computed as one half of the ultimate tensile strength times the maximum elongation. The maximum elongation of a Hookean material can be computed by dividing its ultimate tensile strength by its stiffness. The following table lists these values computed using the Young's modulus as the measure of stiffness:
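Combining the two statements above, the resilience per unit volume of a Hookean material works out to the square of the ultimate tensile strength divided by twice the Young's modulus; the sketch below checks this with rough, assumed figures for a spring steel.

def resilience_j_per_m3(uts_pa, youngs_modulus_pa):
    """Elastic energy density at failure for a Hookean material:
    one half of stress times strain = uts**2 / (2 * E), in J/m^3."""
    max_elongation = uts_pa / youngs_modulus_pa   # dimensionless strain at failure
    return 0.5 * uts_pa * max_elongation

uts = 2.0e9   # assumed ultimate tensile strength, Pa
E = 200e9     # assumed Young's modulus, Pa
print(resilience_j_per_m3(uts, E))   # -> 1.0e7 J/m^3, i.e. about 0.01 MJ/L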
Other release mechanisms
See also
Energy content of biofuel
Energy density Extended Reference Table
Figure of merit
Food energy
Heat of combustion
High-energy-density matter
Power density and specifically
Power-to-weight ratio
Rechargeable battery
Solid-state battery
Specific energy
Specific impulse
Orders of magnitude (energy)
Footnotes
Further reading
The Inflationary Universe: The Quest for a New Theory of Cosmic Origins by Alan H. Guth (1998)
Cosmological Inflation and Large-Scale Structure by Andrew R. Liddle, David H. Lyth (2000)
Richard Becker, "Electromagnetic Fields and Interactions", Dover Publications Inc., 1964
External links
"Aircraft Fuels." Energy, Technology and the Environment Ed. Attilio Bisio. Vol. 1. New York: John Wiley and Sons, Inc., 1995. 257–259
"Fuels of the Future for Cars and Trucks" – Dr. James J. Eberhardt – Energy Efficiency and Renewable Energy, U.S. Department of Energy – 2002 Diesel Engine Emissions Reduction (DEER) Workshop San Diego, California - August 25–29, 2002
Energy
Density
Volume-specific quantities
Physical cosmological concepts
Physical quantities | 0.786644 | 0.997708 | 0.784841 |
Agility | Agility or nimbleness is an ability to change the body's position quickly and requires the integration of isolated movement skills using a combination of balance, coordination, endurance, flexibility, speed and strength. More specifically, it is dependent on these six motor skills:
Balance: The ability to maintain equilibrium when stationary or moving (i.e., not to fall over) through the coordinated actions of our sensory functions (eyes, ears and the proprioceptive organs in our joints);
Static balance: The ability to retain the center of mass above the base of support in a stationary position;
Dynamic balance: The ability to maintain balance with body movement; an equal distribution of weight;
Coordination: The ability to control the movement of the body in co-operation with the body's sensory functions (e.g., in catching a ball [ball, hand, and eye coordination]).
Endurance:
Basic endurance:
General endurance:
Specific endurance:
Flexibility:
Active flexibility:
Static flexibility:
Dynamic flexibility:
Passive flexibility:
Speed: The ability to move all or part of the body quickly;
Strength: The ability of a muscle or muscle group to overcome a resistance;
Static strength:
Dynamic strength:
Explosive strength:
Maximal strength:
In sports, agility is often defined in terms of an individual sport, due to it being an integration of many components each used differently (specific to all sorts of different sports). Sheppard and Young (2006) defined agility as a "rapid whole body movement with change of direction or velocity in response to a stimulus".
Agility is also an important attribute in many role playing games, both video games such as Pokémon, and tabletop games such as Dungeons & Dragons. Agility may affect the character's ability to evade an enemy's attack or land their own, or pickpocket and pick locks.
In modern-day psychology, author, psychologist, and executive coach Susan David introduces a concept that she terms “emotional agility,” defined as: “being flexible with your thoughts and feelings so that you can respond optimally to everyday situations.”
The concept has also been applied to higher education management and leadership, where it was used to accelerate slower traditional and deliberative processes and to replace them with corporate decision-making.
See also
Illinois agility test
Agility drill
Fitness trail
Freerunning
Parkour
References
Physical exercise
Physical fitness | 0.791741 | 0.991269 | 0.784828 |
Lorenz system | The Lorenz system is a system of ordinary differential equations first studied by mathematician and meteorologist Edward Lorenz. It is notable for having chaotic solutions for certain parameter values and initial conditions. In particular, the Lorenz attractor is a set of chaotic solutions of the Lorenz system. The term "butterfly effect" in popular media may stem from the real-world implications of the Lorenz attractor, namely that tiny changes in initial conditions evolve to completely different trajectories. This underscores that chaotic systems can be completely deterministic and yet still be inherently impractical or even impossible to predict over longer periods of time. For example, even the small flap of a butterfly's wings could set the earth's atmosphere on a vastly different trajectory, in which for example a hurricane occurs where it otherwise would have not (see Saddle points). The shape of the Lorenz attractor itself, when plotted in phase space, may also be seen to resemble a butterfly.
Overview
In 1963, Edward Lorenz, with the help of Ellen Fetter who was responsible for the numerical simulations and figures, and Margaret Hamilton who helped in the initial, numerical computations leading up to the findings of the Lorenz model, developed a simplified mathematical model for atmospheric convection. The model is a system of three ordinary differential equations now known as the Lorenz equations:
dx/dt = σ(y − x),
dy/dt = x(ρ − z) − y,
dz/dt = xy − βz.
The equations relate the properties of a two-dimensional fluid layer uniformly warmed from below and cooled from above. In particular, the equations describe the rate of change of three quantities with respect to time: x is proportional to the rate of convection, y to the horizontal temperature variation, and z to the vertical temperature variation. The constants σ, ρ, and β are system parameters proportional to the Prandtl number, Rayleigh number, and certain physical dimensions of the layer itself.
The Lorenz equations can arise in simplified models for lasers, dynamos, thermosyphons, brushless DC motors, electric circuits, chemical reactions and forward osmosis. The Lorenz equations are also the governing equations in Fourier space for the Malkus waterwheel. The Malkus waterwheel exhibits chaotic motion where instead of spinning in one direction at a constant speed, its rotation will speed up, slow down, stop, change directions, and oscillate back and forth between combinations of such behaviors in an unpredictable manner.
From a technical standpoint, the Lorenz system is nonlinear, aperiodic, three-dimensional and deterministic. The Lorenz equations have been the subject of hundreds of research articles, and at least one book-length study.
Analysis
One normally assumes that the parameters σ, ρ, and β are positive. Lorenz used the values σ = 10, β = 8/3 and ρ = 28. The system exhibits chaotic behavior for these (and nearby) values.
If ρ < 1 then there is only one equilibrium point, which is at the origin. This point corresponds to no convection. All orbits converge to the origin, which is a global attractor, when ρ < 1.
A pitchfork bifurcation occurs at ρ = 1, and for ρ > 1 two additional critical points appear at
(x, y, z) = (±√(β(ρ − 1)), ±√(β(ρ − 1)), ρ − 1).
These correspond to steady convection. This pair of equilibrium points is stable only if
ρ < σ(σ + β + 3)/(σ − β − 1),
which can hold only for positive ρ if σ > β + 1. At the critical value, both equilibrium points lose stability through a subcritical Hopf bifurcation.
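A quick numerical check of the expressions above for the classical parameter values (a minimal sketch; nothing here goes beyond the formulas just stated):

import math

sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0

# Nontrivial fixed points for rho > 1: (±sqrt(beta*(rho - 1)), ±sqrt(beta*(rho - 1)), rho - 1)
r = math.sqrt(beta * (rho - 1.0))
print((r, r, rho - 1.0), (-r, -r, rho - 1.0))

# Hopf threshold: the steady-convection points are stable only for rho below this value
rho_hopf = sigma * (sigma + beta + 3.0) / (sigma - beta - 1.0)
print(round(rho_hopf, 2))   # 24.74, so rho = 28 lies beyond the threshold, in the chaotic regime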
When σ = 10, ρ = 28, and β = 8/3, the Lorenz system has chaotic solutions (but not all solutions are chaotic). Almost all initial points will tend to an invariant set (the Lorenz attractor), which is a strange attractor, a fractal, and a self-excited attractor with respect to all three equilibria. Its Hausdorff dimension is estimated from above by the Lyapunov dimension (Kaplan–Yorke dimension) as 2.06 ± 0.01, and the correlation dimension is estimated to be 2.05 ± 0.01.
The exact Lyapunov dimension formula of the global attractor can be found analytically under classical restrictions on the parameters:
D_L = 3 − 2(σ + β + 1)/(σ + 1 + √((σ − 1)² + 4σρ)).
The Lorenz attractor is difficult to analyze, but the action of the differential equation on the attractor is described by a fairly simple geometric model. Proving that this is indeed the case is the fourteenth problem on the list of Smale's problems. This problem was the first one to be resolved, by Warwick Tucker in 2002.
For other values of ρ, the system displays knotted periodic orbits. For example, with ρ = 99.96 it becomes a T(3,2) torus knot.
Connection to tent map
In Figure 4 of his paper, Lorenz plotted the relative maximum value in the z direction achieved by the system against the previous relative maximum in the z direction. This procedure later became known as a Lorenz map (not to be confused with a Poincaré plot, which plots the intersections of a trajectory with a prescribed surface). The resulting plot has a shape very similar to the tent map. Lorenz also found that when the maximum value is above a certain cut-off, the system will switch to the next lobe. Combining this with the chaos known to be exhibited by the tent map, he showed that the system switches between the two lobes chaotically.
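This procedure is straightforward to reproduce numerically. The sketch below uses a simple fixed-step Runge–Kutta integrator with an arbitrarily chosen step size and duration, collects the successive local maxima of z, and pairs each maximum with the next one, as in Lorenz's Figure 4.

import numpy as np

def lorenz_rhs(v, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, n_steps = 0.005, 50_000   # step size and duration chosen arbitrarily
v = np.array([1.0, 1.0, 1.0])
zs = np.empty(n_steps)
for i in range(n_steps):
    # classical fourth-order Runge-Kutta step
    k1 = lorenz_rhs(v)
    k2 = lorenz_rhs(v + 0.5 * dt * k1)
    k3 = lorenz_rhs(v + 0.5 * dt * k2)
    k4 = lorenz_rhs(v + dt * k3)
    v = v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    zs[i] = v[2]

# local maxima of z(t): samples larger than both neighbours
interior = zs[1:-1]
maxima = interior[(interior > zs[:-2]) & (interior > zs[2:])]
pairs = list(zip(maxima[:-1], maxima[1:]))   # (M_n, M_{n+1}) points of the Lorenz map
print(pairs[:5])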
A Generalized Lorenz System
Over the past several years, a series of papers regarding high-dimensional Lorenz models have yielded a generalized Lorenz model, which can be simplified into the classical Lorenz model for three state variables or the following five-dimensional Lorenz model for five state variables:
A choice of the additional parameter has been applied to be consistent with the choice of the other parameters; see the references for details.
Simulations
Julia simulation
using Plots
# define the Lorenz attractor
@kwdef mutable struct Lorenz
dt::Float64 = 0.02
σ::Float64 = 10
ρ::Float64 = 28
β::Float64 = 8/3
x::Float64 = 2
y::Float64 = 1
z::Float64 = 1
end
function step!(l::Lorenz)
dx = l.σ * (l.y - l.x); l.x += l.dt * dx
dy = l.x * (l.ρ - l.z) - l.y; l.y += l.dt * dy
dz = l.x * l.y - l.β * l.z; l.z += l.dt * dz
end
attractor = Lorenz()
# initialize a 3D plot with 1 empty series
plt = plot3d(
1,
xlim = (-30, 30),
ylim = (-30, 30),
zlim = (0, 60),
title = "Lorenz Attractor",
marker = 2,
)
# build an animated gif by pushing new points to the plot, saving every 10th frame
@gif for i=1:1500
step!(attractor)
push!(plt, attractor.x, attractor.y, attractor.z)
end every 10
Maple simulation
deq := [diff(x(t), t) = 10*(y(t) - x(t)), diff(y(t), t) = 28*x(t) - y(t) - x(t)*z(t), diff(z(t), t) = x(t)*y(t) - 8/3*z(t)]:
with(DEtools):
DEplot3d(deq, {x(t), y(t), z(t)}, t = 0 .. 100, [[x(0) = 10, y(0) = 10, z(0) = 10]], stepsize = 0.01, x = -20 .. 20, y = -25 .. 25, z = 0 .. 50, linecolour = sin(t*Pi/3), thickness = 1, orientation = [-40, 80], title = `Lorenz Chaotic Attractor`);
Maxima simulation
[sigma, rho, beta]: [10, 28, 8/3]$
eq: [sigma*(y-x), x*(rho-z)-y, x*y-beta*z]$
sol: rk(eq, [x, y, z], [1, 0, 0], [t, 0, 50, 1/100])$
len: length(sol)$
x: makelist(sol[k][2], k, len)$
y: makelist(sol[k][3], k, len)$
z: makelist(sol[k][4], k, len)$
draw3d(points_joined=true, point_type=-1, points(x, y, z), proportional_axes=xyz)$
MATLAB simulation
% Solve over time interval [0,100] with initial conditions [1,1,1]
% ''f'' is set of differential equations
% ''a'' is array containing x, y, and z variables
% ''t'' is time variable
sigma = 10;
beta = 8/3;
rho = 28;
f = @(t,a) [-sigma*a(1) + sigma*a(2); rho*a(1) - a(2) - a(1)*a(3); -beta*a(3) + a(1)*a(2)];
[t,a] = ode45(f,[0 100],[1 1 1]); % Runge-Kutta 4th/5th order ODE solver
plot3(a(:,1),a(:,2),a(:,3))
Mathematica simulation
Standard way:
tend = 50;
eq = {x'[t] == σ (y[t] - x[t]),
y'[t] == x[t] (ρ - z[t]) - y[t],
z'[t] == x[t] y[t] - β z[t]};
init = {x[0] == 10, y[0] == 10, z[0] == 10};
pars = {σ->10, ρ->28, β->8/3};
{xs, ys, zs} =
NDSolveValue[{eq /. pars, init}, {x, y, z}, {t, 0, tend}];
ParametricPlot3D[{xs[t], ys[t], zs[t]}, {t, 0, tend}]
Less verbose:
lorenz = NonlinearStateSpaceModel[{{σ (y - x), x (ρ - z) - y, x y - β z}, {}}, {x, y, z}, {σ, ρ, β}];
soln[t_] = StateResponse[{lorenz, {10, 10, 10}}, {10, 28, 8/3}, {t, 0, 50}];
ParametricPlot3D[soln[t], {t, 0, 50}]
Python simulation
import matplotlib.pyplot as plt
import numpy as np
def lorenz(xyz, *, s=10, r=28, b=2.667):
"""
Parameters
----------
xyz : array-like, shape (3,)
Point of interest in three-dimensional space.
s, r, b : float
Parameters defining the Lorenz attractor.
Returns
-------
xyz_dot : array, shape (3,)
Values of the Lorenz attractor's partial derivatives at *xyz*.
"""
x, y, z = xyz
x_dot = s*(y - x)
y_dot = r*x - y - x*z
z_dot = x*y - b*z
return np.array([x_dot, y_dot, z_dot])
dt = 0.01
num_steps = 10000
xyzs = np.empty((num_steps + 1, 3)) # Need one more for the initial values
xyzs[0] = (0., 1., 1.05) # Set initial values
# Step through "time", calculating the partial derivatives at the current point
# and using them to estimate the next point
for i in range(num_steps):
xyzs[i + 1] = xyzs[i] + lorenz(xyzs[i]) * dt
# Plot
ax = plt.figure().add_subplot(projection='3d')
ax.plot(*xyzs.T, lw=0.6)
ax.set_xlabel("X Axis")
ax.set_ylabel("Y Axis")
ax.set_zlabel("Z Axis")
ax.set_title("Lorenz Attractor")
plt.show()
R simulation
library(deSolve)
library(plotly)
# parameters
prm <- list(sigma = 10, rho = 28, beta = 8/3)
# initial values
varini <- c(
X = 1,
Y = 1,
Z = 1
)
Lorenz <- function (t, vars, prm) {
with(as.list(vars), {
dX <- prm$sigma*(Y - X)
dY <- X*(prm$rho - Z) - Y
dZ <- X*Y - prm$beta*Z
return(list(c(dX, dY, dZ)))
})
}
times <- seq(from = 0, to = 100, by = 0.01)
# call ode solver
out <- ode(y = varini, times = times, func = Lorenz,
parms = prm)
# to assign color to points
gfill <- function (repArr, long) {
rep(repArr, ceiling(long/length(repArr)))[1:long]
}
dout <- as.data.frame(out)
dout$color <- gfill(rainbow(10), nrow(dout))
# Graphics production with Plotly:
plot_ly(
data=dout, x = ~X, y = ~Y, z = ~Z,
type = 'scatter3d', mode = 'lines',
opacity = 1, line = list(width = 6, color = ~color, reversescale = FALSE)
)
Applications
Model for atmospheric convection
As shown in Lorenz's original paper, the Lorenz system is a reduced version of a larger system studied earlier by Barry Saltzman. The Lorenz equations are derived from the Oberbeck–Boussinesq approximation to the equations describing fluid circulation in a shallow layer of fluid, heated uniformly from below and cooled uniformly from above. This fluid circulation is known as Rayleigh–Bénard convection. The fluid is assumed to circulate in two dimensions (vertical and horizontal) with periodic rectangular boundary conditions.
The partial differential equations modeling the system's stream function and temperature are subjected to a spectral Galerkin approximation: the hydrodynamic fields are expanded in Fourier series, which are then severely truncated to a single term for the stream function and two terms for the temperature. This reduces the model equations to a set of three coupled, nonlinear ordinary differential equations. A detailed derivation may be found, for example, in nonlinear dynamics texts from , Appendix C; , Appendix D; or Shen (2016), Supplementary Materials.
Model for the nature of chaos and order in the atmosphere
The scientific community accepts that the chaotic features found in low-dimensional Lorenz models could represent features of the Earth's atmosphere, yielding the statement that “weather is chaotic.” By comparison, based on the concept of attractor coexistence within the generalized Lorenz model and the original Lorenz model, Shen and his co-authors proposed a revised view that “weather possesses both chaos and order with distinct predictability”. The revised view, which builds on the conventional view, is used to suggest that “the chaotic and regular features found in theoretical Lorenz models could better represent features of the Earth's atmosphere”.
Resolution of Smale's 14th problem
Smale's 14th problem asks, 'Do the properties of the Lorenz attractor exhibit that of a strange attractor?'. The problem was answered affirmatively by Warwick Tucker in 2002. To prove this result, Tucker used rigorous numerical methods like interval arithmetic and normal forms. First, Tucker defined a cross section that is cut transversely by the flow trajectories. From this, one can define the first-return map, which assigns to each point of the cross section the point where the trajectory starting there first intersects the cross section again.
Then the proof is split in three main points that are proved and imply the existence of a strange attractor. The three points are:
There exists a region invariant under the first-return map, meaning that the region is mapped into itself.
The return map admits a forward invariant cone field.
Vectors inside this invariant cone field are uniformly expanded by the derivative of the return map.
To prove the first point, we notice that the cross section is cut by two arcs formed by the intersection with the flow. Tucker covers the location of these two arcs by small rectangles; the union of these rectangles gives a region covering the arcs. Now, the goal is to prove that for all points in this region, the flow will bring the points back into it. To do that, we take a plane below the cross section at a small distance; then, by taking the center of a rectangle and using the Euler integration method, one can estimate where the flow will carry that center, which gives a new point. Then, one can estimate where all the points of the rectangle will be mapped using a Taylor expansion; this gives a new rectangle centered on the new point. Thus we know that all points of the original rectangle are mapped into this new rectangle. The goal is to apply this method recursively until the flow comes back to the cross section, obtaining a rectangle there that is known to contain the image of the original rectangle. The problem is that our estimate may become imprecise after several iterations, so Tucker splits the rectangle into smaller rectangles and then applies the process recursively.
Another problem is that as we are applying this algorithm, the flow becomes more 'horizontal', leading to a dramatic increase in imprecision. To prevent this, the algorithm changes the orientation of the cross sections, becoming either horizontal or vertical.
Gallery
See also
Eden's conjecture on the Lyapunov dimension
Lorenz 96 model
List of chaotic maps
Takens' theorem
Notes
References
Shen, B.-W. (2015-12-21). "Nonlinear feedback in a six-dimensional Lorenz model: impact of an additional heating term". Nonlinear Processes in Geophysics. 22 (6): 749–764. doi:10.5194/npg-22-749-2015. ISSN 1607-7946.
Further reading
External links
Lorenz attractor by Rob Morris, Wolfram Demonstrations Project.
Lorenz equation on planetmath.org
Synchronized Chaos and Private Communications, with Kevin Cuomo. The implementation of Lorenz attractor in an electronic circuit.
Lorenz attractor interactive animation (you need the Adobe Shockwave plugin)
3D Attractors: Mac program to visualize and explore the Lorenz attractor in 3 dimensions
Lorenz Attractor implemented in analog electronic
Lorenz Attractor interactive animation (implemented in Ada with GTK+. Sources & executable)
Interactive web based Lorenz Attractor made with Iodide
Chaotic maps
Articles containing video clips
Articles with example Python (programming language) code
Articles with example MATLAB/Octave code
Articles with example Julia code | 0.7873 | 0.996829 | 0.784803 |
Electromotive force | In electromagnetism and electronics, electromotive force (also electromotance, abbreviated emf, denoted ℰ) is an energy transfer to an electric circuit per unit of electric charge, measured in volts. Devices called electrical transducers provide an emf by converting other forms of energy into electrical energy. Other types of electrical equipment also produce an emf, such as batteries, which convert chemical energy, and generators, which convert mechanical energy. This energy conversion is achieved by physical forces applying physical work on electric charges. However, electromotive force itself is not a physical force, and ISO/IEC standards have deprecated the term in favor of source voltage or source tension instead (denoted U_s).
An electronic–hydraulic analogy may view emf as the mechanical work done to water by a pump, which results in a pressure difference (analogous to voltage).
In electromagnetic induction, emf can be defined around a closed loop of a conductor as the electromagnetic work that would be done on an elementary electric charge (such as an electron) if it travels once around the loop.
For two-terminal devices modeled as a Thévenin equivalent circuit, an equivalent emf can be measured as the open-circuit voltage between the two terminals. This emf can drive an electric current if an external circuit is attached to the terminals, in which case the device becomes the voltage source of that circuit.
Although an emf gives rise to a voltage and can be measured as a voltage and may sometimes informally be called a "voltage", they are not the same phenomenon (see ).
Overview
Devices that can provide emf include electrochemical cells, thermoelectric devices, solar cells, photodiodes, electrical generators, inductors, transformers and even Van de Graaff generators. In nature, emf is generated when magnetic field fluctuations occur through a surface. For example, the shifting of the Earth's magnetic field during a geomagnetic storm induces currents in an electrical grid as the lines of the magnetic field are shifted about and cut across the conductors.
In a battery, the charge separation that gives rise to a potential difference (voltage) between the terminals is accomplished by chemical reactions at the electrodes that convert chemical potential energy into electromagnetic potential energy. A voltaic cell can be thought of as having a "charge pump" of atomic dimensions at each electrode, that is, a mechanism that moves charge from the terminal of lower potential to the terminal of higher potential at the expense of chemical energy.
In an electrical generator, a time-varying magnetic field inside the generator creates an electric field via electromagnetic induction, which creates a potential difference between the generator terminals. Charge separation takes place within the generator because electrons flow away from one terminal toward the other, until, in the open-circuit case, an electric field is developed that makes further charge separation impossible. The emf is countered by the electrical voltage due to charge separation. If a load is attached, this voltage can drive a current. The general principle governing the emf in such electrical machines is Faraday's law of induction.
History
In 1801, Alessandro Volta introduced the term "force motrice électrique" to describe the active agent of a battery (which he had invented around 1798).
This is called the "electromotive force" in English.
Around 1830, Michael Faraday established that chemical reactions at each of two electrode–electrolyte interfaces provide the "seat of emf" for the voltaic cell. That is, these reactions drive the current and are not an endless source of energy as the earlier obsolete theory thought. In the open-circuit case, charge separation continues until the electrical field from the separated charges is sufficient to arrest the reactions. Years earlier, Alessandro Volta, who had measured a contact potential difference at the metal–metal (electrode–electrode) interface of his cells, held the incorrect opinion that contact alone (without taking into account a chemical reaction) was the origin of the emf.
Notation and units of measurement
Electromotive force is often denoted by the symbol ℰ (a script capital E).
In a device without internal resistance, if an electric charge Q passing through that device gains an energy W via work, the net emf for that device is the energy gained per unit charge: ℰ = W/Q. Like other measures of energy per charge, emf uses the SI unit volt, which is equivalent to a joule (SI unit of energy) per coulomb (SI unit of charge).
Electromotive force in electrostatic units is the statvolt (in the centimeter gram second system of units equal in amount to an erg per electrostatic unit of charge).
Formal definitions
Inside a source of emf (such as a battery) that is open-circuited, a charge separation occurs between the negative terminal N and the positive terminal P.
This leads to an electrostatic field that points from P to N, whereas the emf of the source must be able to drive current from N to P when connected to a circuit.
This led Max Abraham to introduce the concept of a nonelectrostatic field that exists only inside the source of emf.
In the open-circuit case, the two fields are equal and opposite, while when the source is connected to a circuit the electric field inside the source changes but the nonelectrostatic field remains essentially the same.
In the open-circuit case, the conservative electrostatic field created by separation of charge exactly cancels the forces producing the emf.
Mathematically:
ℰ = −∫_N^P E_cs · dℓ = V_P − V_N,
where E_cs is the conservative electrostatic field created by the charge separation associated with the emf, dℓ is an element of the path from terminal N to terminal P, '·' denotes the vector dot product, and V is the electric scalar potential.
This emf is the work done on a unit charge by the source's nonelectrostatic field when the charge moves from N to P.
When the source is connected to a load, its emf is just the work per unit charge done by the nonelectrostatic field as the charge moves from N to P, and no longer has a simple relation to the electric field inside it.
In the case of a closed path in the presence of a varying magnetic field, the integral of the electric field around the (stationary) closed loop may be nonzero.
Then, the "induced emf" (often called the "induced voltage") in the loop is:
where is the entire electric field, conservative and non-conservative, and the integral is around an arbitrary, but stationary, closed curve through which there is a time-varying magnetic flux , and is the vector potential.
The electrostatic field does not contribute to the net emf around a circuit because the electrostatic portion of the electric field is conservative (i.e., the work done against the field around a closed path is zero, see Kirchhoff's voltage law, which is valid, as long as the circuit elements remain at rest and radiation is ignored).
That is, the "induced emf" (like the emf of a battery connected to a load) is not a "voltage" in the sense of a difference in the electric scalar potential.
If the loop is a conductor that carries current I in the direction of integration around the loop, and the magnetic flux is due to that current, we have that Φ_B = LI, where L is the self inductance of the loop.
If in addition, the loop includes a coil that extends from point 1 to 2, such that the magnetic flux is largely localized to that region, it is customary to speak of that region as an inductor, and to consider that its emf is localized to that region.
Then, we can consider a different loop that consists of the coiled conductor from 1 to 2, and an imaginary line down the center of the coil from 2 back to 1.
The magnetic flux and emf in this new loop are essentially the same as in the original loop.
For a good conductor, the electric field within the wire is negligible, so we have, to a good approximation, V₁ − V₂ = L dI/dt, where V is the electric scalar potential along the centerline between points 1 and 2.
Thus, we can associate an effective "voltage drop" with an inductor (even though our basic understanding of induced emf is based on the vector potential rather than the scalar potential), and consider it as a load element in Kirchhoff's voltage law,
where now the induced emf is not considered to be a source emf.
This definition can be extended to arbitrary sources of emf and to paths moving with velocity v through the electric field E and magnetic field B, though the resulting expression is mainly conceptual, because the determination of the "effective forces" involved is difficult.
The term ∮ (v × B) · dℓ is often called a "motional emf".
In (electrochemical) thermodynamics
When multiplied by an amount of charge dQ, the emf ℰ yields a thermodynamic work term ℰ dQ that is used in the formalism for the change in Gibbs energy when charge is passed in a battery:
dG = −S dT + V dP + ℰ dQ,
where G is the Gibbs free energy, S is the entropy, V is the system volume, P is its pressure and T is its absolute temperature.
The combination (ℰ, Q) is an example of a conjugate pair of variables. At constant pressure the above relationship produces a Maxwell relation that links the change in open cell voltage with temperature (a measurable quantity) to the change in entropy when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is:
(∂ℰ/∂T)_Q = −(∂S/∂Q)_T.
If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is:
ΔQ = −n₀F,
where n₀ is the number of electrons/ion, F is the Faraday constant, and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by:
ΔH = −n₀F(ℰ − T dℰ/dT),
where ΔH is the enthalpy of reaction. The quantities on the right are all directly measurable. Assuming constant temperature and pressure:
ΔG = −n₀Fℰ,
which is used in the derivation of the Nernst equation.
Distinction with potential difference
Although an electrical potential difference (voltage) is sometimes called an emf, they are formally distinct concepts:
Potential difference is a more general term that includes emf.
Emf is the cause of a potential difference.
In a circuit of a voltage source and a resistor, the sum of the source's applied voltage plus the ohmic voltage drop through the resistor is zero. But the resistor provides no emf, only the voltage source does:
For a circuit using a battery source, the emf is due solely to the chemical forces in the battery.
For a circuit using an electric generator, the emf is due solely to a time-varying magnetic forces within the generator.
Both a 1 volt emf and a 1 volt potential difference correspond to 1 joule per coulomb of charge.
In the case of an open circuit, the electric charge that has been separated by the mechanism generating the emf creates an electric field opposing the separation mechanism. For example, the chemical reaction in a voltaic cell stops when the opposing electric field at each electrode is strong enough to arrest the reactions. A larger opposing field can reverse the reactions in what are called reversible cells.
The electric charge that has been separated creates an electric potential difference that can (in many cases) be measured with a voltmeter between the terminals of the device, when not connected to a load. The magnitude of the emf for the battery (or other source) is the value of this open-circuit voltage.
When the battery is charging or discharging, the emf itself cannot be measured directly using the external voltage because some voltage is lost inside the source.
It can, however, be inferred from a measurement of the current I and potential difference V, provided that the internal resistance r already has been measured: ℰ = V + Ir.
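As a small numerical sketch of this inference (all figures assumed for illustration only):

def inferred_emf(terminal_voltage, current, internal_resistance):
    """emf inferred from the terminal voltage during discharge: emf = V + I*r."""
    return terminal_voltage + current * internal_resistance

# Assumed example: 11.4 V at the terminals while 2 A is drawn, with r = 0.3 ohm
print(inferred_emf(11.4, 2.0, 0.3))   # -> 12.0 V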
"Potential difference" is not the same as "induced emf" (often called "induced voltage").
The potential difference (difference in the electric scalar potential) between two points A and B is independent of the path we take from A to B.
If a voltmeter always measured the potential difference between A and B, then the position of the voltmeter would make no difference.
However, it is quite possible for the measurement by a voltmeter between points A and B to depend on the position of the voltmeter, if a time-dependent magnetic field is present.
For example, consider an infinitely long solenoid using an AC current to generate a varying flux in the interior of the solenoid.
Outside the solenoid we have two resistors connected in a ring around the solenoid.
The resistor on the left is 100 Ω and the one on the right is 200 Ω, they are connected at the top and bottom at points A and B.
The induced voltage, by Faraday's law, is some value V, so the current is I = V/(100 Ω + 200 Ω). Therefore, the voltage across the 100 Ω resistor is V/3 and the voltage across the 200 Ω resistor is 2V/3, yet the two resistors are connected at both ends; the reading obtained with the voltmeter to the left of the solenoid is not the same as the reading obtained with the voltmeter to the right of the solenoid.
Generation
Chemical sources
The question of how batteries (galvanic cells) generate an emf occupied scientists for most of the 19th century. The "seat of the electromotive force" was eventually determined in 1889 by Walther Nernst to be primarily at the interfaces between the electrodes and the electrolyte.
Atoms in molecules or solids are held together by chemical bonding, which stabilizes the molecule or solid (i.e. reduces its energy). When molecules or solids of relatively high energy are brought together, a spontaneous chemical reaction can occur that rearranges the bonding and reduces the (free) energy of the system. In batteries, coupled half-reactions, often involving metals and their ions, occur in tandem, with a gain of electrons (termed "reduction") by one conductive electrode and loss of electrons (termed "oxidation") by another (reduction-oxidation or redox reactions). The spontaneous overall reaction can only occur if electrons move through an external wire between the electrodes. The electrical energy given off is the free energy lost by the chemical reaction system.
As an example, a Daniell cell consists of a zinc anode (an electron collector) that is oxidized as it dissolves into a zinc sulfate solution. The dissolving zinc leaving behind its electrons in the electrode according to the oxidation reaction (s = solid electrode; aq = aqueous solution):
The zinc sulfate is the electrolyte in that half cell. It is a solution which contains zinc cations , and sulfate anions with charges that balance to zero.
In the other half cell, the copper cations in a copper sulfate electrolyte move to the copper cathode to which they attach themselves as they adopt electrons from the copper electrode by the reduction reaction:
which leaves a deficit of electrons on the copper cathode. The difference of excess electrons on the anode and deficit of electrons on the cathode creates an electrical potential between the two electrodes. (A detailed discussion of the microscopic process of electron transfer between an electrode and the ions in an electrolyte may be found in Conway.) The electrical energy released by this reaction (213 kJ per 65.4 g of zinc) can be attributed mostly to the 207 kJ weaker bonding (smaller magnitude of the cohesive energy) of zinc, which has filled 3d- and 4s-orbitals, compared to copper, which has an unfilled orbital available for bonding.
If the cathode and anode are connected by an external conductor, electrons pass through that external circuit (light bulb in figure), while ions pass through the salt bridge to maintain charge balance until the anode and cathode reach electrical equilibrium of zero volts as chemical equilibrium is reached in the cell. In the process the zinc anode is dissolved while the copper electrode is plated with copper. The salt bridge has to close the electrical circuit while preventing the copper ions from moving to the zinc electrode and being reduced there without generating an external current. It is not made of salt but of material able to wick cations and anions (a dissociated salt) into the solutions. The flow of positively charged cations along the bridge is equivalent to the same number of negative charges flowing in the opposite direction.
If the light bulb is removed (open circuit) the emf between the electrodes is opposed by the electric field due to the charge separation, and the reactions stop.
For this particular cell chemistry, at 298 K (room temperature), the emf ℰ = 1.0934 V, with a temperature coefficient of dℰ/dT = −4.53×10⁻⁴ V/K.
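Combining these figures with the thermodynamic relations given earlier (a minimal sketch assuming n₀ = 2 electrons transferred per zinc atom, as in the half-reactions above):

F = 96485.0         # Faraday constant, C/mol
n0 = 2              # electrons transferred per zinc atom (assumed from the half-reactions)
T = 298.0           # K
emf = 1.0934        # V, value quoted above
demf_dT = -4.53e-4  # V/K, temperature coefficient quoted above

dG = -n0 * F * emf         # Gibbs energy change, J per mole of zinc
dS = n0 * F * demf_dT      # reaction entropy, J/(mol*K)
dH = dG + T * dS           # reaction enthalpy, J/mol

print(round(dG / 1000))   # -> -211 kJ/mol, close to the ~213 kJ per 65.4 g of zinc quoted earlier
print(round(dS, 1))       # -> -87.4 J/(mol*K)
print(round(dH / 1000))   # -> -237 kJ/mol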
Voltaic cells
Volta developed the voltaic cell about 1792, and presented his work March 20, 1800. Volta correctly identified the role of dissimilar electrodes in producing the voltage, but incorrectly dismissed any role for the electrolyte. Volta ordered the metals in a 'tension series', "that is to say in an order such that any one in the list becomes positive when in contact with any one that succeeds, but negative by contact with any one that precedes it." A typical symbolic convention in a schematic of this circuit ( –||– ) would have a long electrode 1 and a short electrode 2, to indicate that electrode 1 dominates. Volta's law about opposing electrode emfs implies that, given ten electrodes (for example, zinc and nine other materials), 45 unique combinations of voltaic cells (10 × 9/2) can be created.
Typical values
The electromotive force produced by primary (single-use) and secondary (rechargeable) cells is usually of the order of a few volts. The figures quoted below are nominal, because emf varies according to the size of the load and the state of exhaustion of the cell.
Other chemical sources
Other chemical sources include fuel cells.
Electromagnetic induction
Electromagnetic induction is the production of a circulating electric field by a time-dependent magnetic field. A time-dependent magnetic field can be produced either by motion of a magnet relative to a circuit, by motion of a circuit relative to another circuit (at least one of these must be carrying an electric current), or by changing the electric current in a fixed circuit. The effect on the circuit itself, of changing the electric current, is known as self-induction; the effect on another circuit is known as mutual induction.
For a given circuit, the electromagnetically induced emf is determined purely by the rate of change of the magnetic flux through the circuit according to Faraday's law of induction.
An emf is induced in a coil or conductor whenever there is change in the flux linkages. Depending on the way in which the changes are brought about, there are two types: When the conductor is moved in a stationary magnetic field to procure a change in the flux linkage, the emf is statically induced. The electromotive force generated by motion is often referred to as motional emf. When the change in flux linkage arises from a change in the magnetic field around the stationary conductor, the emf is dynamically induced. The electromotive force generated by a time-varying magnetic field is often referred to as transformer emf.
Contact potentials
When solids of two different materials are in contact, thermodynamic equilibrium requires that one of the solids assume a higher electrical potential than the other. This is called the contact potential. Dissimilar metals in contact produce what is known also as a contact electromotive force or Galvani potential. The magnitude of this potential difference is often expressed as a difference in Fermi levels in the two solids when they are at charge neutrality, where the Fermi level (a name for the chemical potential of an electron system) describes the energy necessary to remove an electron from the body to some common point (such as ground). If there is an energy advantage in taking an electron from one body to the other, such a transfer will occur. The transfer causes a charge separation, with one body gaining electrons and the other losing electrons. This charge transfer causes a potential difference between the bodies, which partly cancels the potential originating from the contact, and eventually equilibrium is reached. At thermodynamic equilibrium, the Fermi levels are equal (the electron removal energy is identical) and there is now a built-in electrostatic potential between the bodies.
The original difference in Fermi levels, before contact, is referred to as the emf.
The contact potential cannot drive steady current through a load attached to its terminals because that current would involve a charge transfer. No mechanism exists to continue such transfer and, hence, maintain a current, once equilibrium is attained.
One might inquire why the contact potential does not appear in Kirchhoff's law of voltages as one contribution to the sum of potential drops. The customary answer is that any circuit involves not only a particular diode or junction, but also all the contact potentials due to wiring and so forth around the entire circuit. The sum of all the contact potentials is zero, and so they may be ignored in Kirchhoff's law.
Solar cell
Operation of a solar cell can be understood from its equivalent circuit. Photons with energy greater than the bandgap of the semiconductor create mobile electron–hole pairs. Charge separation occurs because of a pre-existing electric field associated with the p-n junction. This electric field is created from a built-in potential, which arises from the contact potential between the two different materials in the junction. The charge separation between positive holes and negative electrons across the p–n diode yields a forward voltage, the photo voltage, between the illuminated diode terminals, which drives current through any attached load. Photo voltage is sometimes referred to as the photo emf, distinguishing between the effect and the cause.
Solar cell current–voltage relationship
Two internal current losses limit the total current available to the external circuit. The light-induced charge separation eventually creates a forward current through the cell's internal resistance in the direction opposite the light-induced current . In addition, the induced voltage tends to forward bias the junction, which at high enough voltages will cause a recombination current in the diode opposite the light-induced current.
When the output is short-circuited, the output voltage is zeroed, and so the voltage across the diode is smallest. Thus, short-circuiting results in the smallest losses and consequently the maximum output current, which for a high-quality solar cell is approximately equal to the light-induced current . Approximately this same current is obtained for forward voltages up to the point where the diode conduction becomes significant.
The current delivered by the illuminated diode to the external circuit can be simplified (based on certain assumptions) to:
I = I_L − I_0 (e^(V/(m·V_T)) − 1).
I_0 is the reverse saturation current. Two parameters that depend on the solar cell construction and to some degree upon the voltage itself are the ideality factor m and the thermal voltage V_T = kT/q, which is about 26 millivolts at room temperature.
Solar cell photo emf
Solving the illuminated diode's above simplified current–voltage relationship for output voltage yields:
V = m·V_T · ln((I_L − I)/I_0 + 1),
which is plotted against I in the figure.
The solar cell's photo emf has the same value as the open-circuit voltage V_oc, which is determined by zeroing the output current I:
V_oc = m·V_T · ln(I_L/I_0 + 1).
It has a logarithmic dependence on the light-induced current and is the point where the junction's forward bias voltage is just enough that the forward current completely balances the light-induced current. For silicon junctions, it is typically not much more than 0.5 volts, while for high-quality silicon panels it can exceed 0.7 volts in direct sunlight.
When driving a resistive load, the output voltage can be determined using Ohm's law and will lie between the short-circuit value of zero volts and the open-circuit voltage V_oc. When that resistance is small enough (the near-vertical part of the two illustrated curves), the solar cell acts more like a current generator rather than a voltage generator, since the current drawn is nearly fixed over a range of output voltages. This contrasts with batteries, which act more like voltage generators.
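A minimal numerical sketch of this single-diode relationship is given below; every parameter value is assumed for illustration and is not taken from the text.

import math

def cell_current(v, i_l=3.0, i_0=3e-10, m=1.0, v_t=0.026):
    """Single-diode model: I = I_L - I_0*(exp(V/(m*V_T)) - 1), in amperes."""
    return i_l - i_0 * (math.exp(v / (m * v_t)) - 1.0)

def open_circuit_voltage(i_l=3.0, i_0=3e-10, m=1.0, v_t=0.026):
    """V_oc found by setting the output current to zero."""
    return m * v_t * math.log(i_l / i_0 + 1.0)

print(open_circuit_voltage())   # about 0.60 V for these assumed parameters
print(cell_current(0.0))        # short circuit: I equals the light-induced current, 3.0 A
print(cell_current(0.5))        # about 2.9 A: still close to I_L until diode conduction becomes significant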
Other sources that generate emf
A transformer coupling two circuits may be considered a source of emf for one of the circuits, just as if it were caused by an electrical generator; this is the origin of the term "transformer emf".
For converting sound waves into voltage signals:
a microphone generates an emf from a moving diaphragm.
a magnetic pickup generates an emf from a varying magnetic field produced by an instrument.
a piezoelectric sensor generates an emf from strain on a piezoelectric crystal.
Devices that use temperature to produce emfs include thermocouples and thermopiles.
Any electrical transducer which converts a physical energy into electrical energy.
See also
Counter-electromotive force
Electric battery
Electrochemical cell
Electrolytic cell
Galvanic cell
Voltaic pile
References
Further reading
George F. Barker, "On the measurement of electromotive force". Proceedings of the American Philosophical Society Held at Philadelphia for Promoting Useful Knowledge, American Philosophical Society. January 19, 1883.
Andrew Gray, "Absolute Measurements in Electricity and Magnetism", Electromotive force. Macmillan and co., 1884.
Charles Albert Perkins, "Outlines of Electricity and Magnetism", Measurement of Electromotive Force. Henry Holt and co., 1896.
John Livingston Rutgers Morgan, "The Elements of Physical Chemistry", Electromotive force. J. Wiley, 1899.
"Abhandlungen zur Thermodynamik, von H. Helmholtz. Hrsg. von Max Planck". (Tr. "Papers to thermodynamics, on H. Helmholtz. Hrsg. by Max Planck".) Leipzig, W. Engelmann, Of Ostwald classical author of the accurate sciences series. New consequence. No. 124, 1902.
Theodore William Richards and Gustavus Edward Behr, jr., "The electromotive force of iron under varying conditions, and the effect of occluded hydrogen". Carnegie Institution of Washington publication series, 1906.
Henry S. Carhart, "Thermo-electromotive force in electric cells, the thermo-electromotive force between a metal and a solution of one of its salts". New York, D. Van Nostrand company, 1920.
Hazel Rossotti, "Chemical applications of potentiometry". London, Princeton, N.J., Van Nostrand, 1969.
Nabendu S. Choudhury, 1973. "Electromotive force measurements on cells involving beta-alumina solid electrolyte". NASA technical note, D-7322.
G. W. Burns, et al., "Temperature-electromotive force reference functions and tables for the letter-designated thermocouple types based on the ITS-90". Gaithersburg, MD : U.S. Dept. of Commerce, National Institute of Standards and Technology, Washington, Supt. of Docs., U.S. G.P.O., 1993.
Electromagnetism
Electrodynamics
Voltage | 0.785896 | 0.998492 | 0.78471 |
Inverse-square law | In science, an inverse-square law is any scientific law stating that the observed "intensity" of a specified physical quantity is inversely proportional to the square of the distance from the source of that physical quantity. The fundamental cause for this can be understood as geometric dilution corresponding to point-source radiation into three-dimensional space.
Radar energy expands during both the signal transmission and the reflected return, so the inverse square for both paths means that the radar will receive energy according to the inverse fourth power of the range.
To prevent dilution of energy while propagating a signal, certain methods can be used such as a waveguide, which acts like a canal does for water, or how a gun barrel restricts hot gas expansion to one dimension in order to prevent loss of energy transfer to a bullet.
Formula
In mathematical notation the inverse square law can be expressed as an intensity (I) varying as a function of distance (d) from some centre. The intensity is proportional (see ∝) to the reciprocal of the square of the distance thus:
I ∝ 1/d²
It can also be mathematically expressed as:
I₁/I₂ = d₂²/d₁²
or as the formulation of a constant quantity:
I₁ × d₁² = I₂ × d₂²
The divergence of a vector field which is the resultant of radial inverse-square law fields with respect to one or more sources is proportional to the strength of the local sources, and hence zero outside sources. Newton's law of universal gravitation follows an inverse-square law, as do the effects of electric, light, sound, and radiation phenomena.
Justification
The inverse-square law generally applies when some force, energy, or other conserved quantity is evenly radiated outward from a point source in three-dimensional space. Since the surface area of a sphere (which is 4πr2) is proportional to the square of the radius, as the emitted radiation gets farther from the source, it is spread out over an area that is increasing in proportion to the square of the distance from the source. Hence, the intensity of radiation passing through any unit area (directly facing the point source) is inversely proportional to the square of the distance from the point source. Gauss's law for gravity is similarly applicable, and can be used with any physical quantity that acts in accordance with the inverse-square relationship.
Occurrences
Gravitation
Gravitation is the attraction between objects that have mass. Newton's law states:
The gravitational attraction force between two point masses is directly proportional to the product of their masses and inversely proportional to the square of their separation distance. The force is always attractive and acts along the line joining them.
If the distribution of matter in each body is spherically symmetric, then the objects can be treated as point masses without approximation, as shown in the shell theorem. Otherwise, if we want to calculate the attraction between massive bodies, we need to add all the point-point attraction forces vectorially and the net attraction might not be exact inverse square. However, if the separation between the massive bodies is much larger compared to their sizes, then to a good approximation, it is reasonable to treat the masses as a point mass located at the object's center of mass while calculating the gravitational force.
As the law of gravitation, this law was suggested in 1645 by Ismaël Bullialdus. But Bullialdus did not accept Kepler's second and third laws, nor did he appreciate Christiaan Huygens's solution for circular motion (motion in a straight line pulled aside by the central force). Indeed, Bullialdus maintained the sun's force was attractive at aphelion and repulsive at perihelion. Robert Hooke and Giovanni Alfonso Borelli both expounded gravitation in 1666 as an attractive force. Hooke's lecture "On gravity" was at the Royal Society, in London, on 21 March. Borelli's "Theory of the Planets" was published later in 1666. Hooke's 1670 Gresham lecture explained that gravitation applied to "all celestiall bodys" and added the principles that the gravitating power decreases with distance and that in the absence of any such power bodies move in straight lines. By 1679, Hooke thought gravitation had inverse square dependence and communicated this in a letter to Isaac Newton:
my supposition is that the attraction always is in duplicate proportion to the distance from the center reciprocall.
Hooke remained bitter about Newton claiming the invention of this principle, even though Newton's 1686 Principia acknowledged that Hooke, along with Wren and Halley, had separately appreciated the inverse square law in the solar system, as well as giving some credit to Bullialdus.
Electrostatics
The force of attraction or repulsion between two electrically charged particles, in addition to being directly proportional to the product of the electric charges, is inversely proportional to the square of the distance between them; this is known as Coulomb's law. The deviation of the exponent from 2 is less than one part in 1015.
Light and other electromagnetic radiation
The intensity (or illuminance or irradiance) of light or other linear waves radiating from a point source (energy per unit of area perpendicular to the source) is inversely proportional to the square of the distance from the source, so an object (of the same size) twice as far away receives only one-quarter the energy (in the same time period).
More generally, the irradiance, i.e., the intensity (or power per unit area in the direction of propagation), of a spherical wavefront varies inversely with the square of the distance from the source (assuming there are no losses caused by absorption or scattering).
For example, the intensity of radiation from the Sun is 9126 watts per square meter at the distance of Mercury (0.387 AU); but only 1367 watts per square meter at the distance of Earth (1 AU)—an approximate threefold increase in distance results in an approximate ninefold decrease in intensity of radiation.
For non-isotropic radiators such as parabolic antennas, headlights, and lasers, the effective origin is located far behind the beam aperture. Close to the origin, only a short distance is needed to double the radius, so the signal drops quickly. Far from the origin, where a narrow beam such as a laser is still strong, the radius must grow by a large distance before it doubles and the signal falls off. A narrow beam therefore delivers a stronger signal, or equivalently has antenna gain, in its direction relative to a wide beam radiating in all directions from an isotropic antenna.
In photography and stage lighting, the inverse-square law is used to determine the “fall off” or the difference in illumination on a subject as it moves closer to or further from the light source. For quick approximations, it is enough to remember that doubling the distance reduces illumination to one quarter; or similarly, to halve the illumination increase the distance by a factor of 1.4 (the square root of 2), and to double illumination, reduce the distance to 0.7 (square root of 1/2). When the illuminant is not a point source, the inverse square rule is often still a useful approximation; when the size of the light source is less than one-fifth of the distance to the subject, the calculation error is less than 1%.
The fractional reduction in electromagnetic fluence (Φ) for indirectly ionizing radiation with increasing distance from a point source can be calculated using the inverse-square law. Since emissions from a point source travel radially outward, they cross any concentric spherical shell at perpendicular incidence. The area of such a shell is 4πr², where r is the radial distance from the center. The law is particularly important in diagnostic radiography and radiotherapy treatment planning, though this proportionality holds in practice only when the source dimensions are much smaller than the distance to the point of measurement.
Example
Let P be the total power radiated from a point source (for example, an omnidirectional isotropic radiator). At large distances from the source (compared to the size of the source), this power is distributed over larger and larger spherical surfaces as the distance from the source increases. Since the surface area of a sphere of radius r is A = 4πr², the intensity I (power per unit area) of radiation at distance r is

I = P/A = P/(4πr²)

The energy or intensity decreases by a factor of 4 as the distance r is doubled; measured in dB, it decreases by 6.02 dB per doubling of distance. When referring to measurements of power quantities, a ratio can be expressed as a level in decibels by evaluating ten times the base-10 logarithm of the ratio of the measured quantity to the reference value.
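A short Python sketch of the decibel arithmetic just described; the source power and the pair of distances are arbitrary illustrative values, chosen so the second distance is double the first.

```python
import math

def intensity(P, r):
    """Intensity (W/m^2) of an isotropic source of power P (W) at distance r (m)."""
    return P / (4 * math.pi * r**2)

P = 1.0              # watts, illustrative
r1, r2 = 10.0, 20.0  # doubling the distance
level_change_db = 10 * math.log10(intensity(P, r2) / intensity(P, r1))
print(f"{level_change_db:.2f} dB")  # about -6.02 dB per doubling of distance
```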
Sound in a gas
In acoustics, the sound pressure of a spherical wavefront radiating from a point source decreases by 50% as the distance r is doubled; measured in dB, the decrease is still 6.02 dB, since dB represents an intensity (power) ratio. The pressure ratio (as opposed to power ratio) is not inverse-square, but is inverse-proportional (inverse distance law):

p ∝ 1/r

The same is true for the component of particle velocity v that is in phase with the instantaneous sound pressure p:

v ∝ 1/r

In the near field there is a quadrature component of the particle velocity that is 90° out of phase with the sound pressure and does not contribute to the time-averaged energy or the intensity of the sound. The sound intensity is the product of the RMS sound pressure and the in-phase component of the RMS particle velocity, both of which are inverse-proportional. Accordingly, the intensity follows an inverse-square behaviour:

I = p v ∝ 1/r²
Field theory interpretation
For an irrotational vector field in three-dimensional space, the inverse-square law corresponds to the property that the divergence is zero outside the source. This can be generalized to higher dimensions. Generally, for an irrotational vector field in n-dimensional Euclidean space, the intensity I of the vector field falls off with the distance r following the inverse (n − 1)th power law

I ∝ 1/r^(n−1)

given that the space outside the source is divergence-free.
Non-Euclidean implications
The inverse-square law, fundamental in Euclidean spaces, also applies to non-Euclidean geometries, including hyperbolic space. The curvature present in these spaces alters physical laws, influencing a variety of fields such as cosmology, general relativity, and string theory.
John D. Barrow, in his 2020 paper "Non-Euclidean Newtonian Cosmology," expands on the behavior of force (F) and potential (Φ) within hyperbolic 3-space (H3). He explains that F and Φ obey the relationships F ∝ 1 / R² sinh²(r/R) and Φ ∝ coth(r/R), where R represents the curvature radius and r represents the distance from the focal point.
The concept of spatial dimensionality, first proposed by Immanuel Kant, remains a topic of debate concerning the inverse-square law. Dimitria Electra Gatzia and Rex D. Ramsier, in their 2021 paper, contend that the inverse-square law is more closely related to force distribution symmetry than to the dimensionality of space.
In the context of non-Euclidean geometries and general relativity, deviations from the inverse-square law do not arise from the law itself but rather from the assumption that the force between two bodies is instantaneous, which contradicts special relativity. General relativity reinterprets gravity as the curvature of spacetime, leading particles to move along geodesics in this curved spacetime.
History
John Dumbleton of the 14th-century Oxford Calculators was one of the first to express functional relationships in graphical form. He gave a proof of the mean speed theorem stating that "the latitude of a uniformly difform movement corresponds to the degree of the midpoint" and used this method to study the quantitative decrease in intensity of illumination in his Summa logicæ et philosophiæ naturalis (ca. 1349), stating that it was not linearly proportional to the distance, but he was unable to arrive at the inverse-square law.
In proposition 9 of Book 1 in his book Ad Vitellionem paralipomena, quibus astronomiae pars optica traditur (1604), the astronomer Johannes Kepler argued that the spreading of light from a point source obeys an inverse square law:
In 1645, in his book Astronomia Philolaica ..., the French astronomer Ismaël Bullialdus (1605–1694) refuted Johannes Kepler's suggestion that "gravity" weakens as the inverse of the distance; instead, Bullialdus argued, "gravity" weakens as the inverse square of the distance:
In England, the Anglican bishop Seth Ward (1617–1689) publicized the ideas of Bullialdus in his critique In Ismaelis Bullialdi astronomiae philolaicae fundamenta inquisitio brevis (1653) and publicized the planetary astronomy of Kepler in his book Astronomia geometrica (1656).
In 1663–1664, the English scientist Robert Hooke was writing his book Micrographia (1666) in which he discussed, among other things, the relation between the height of the atmosphere and the barometric pressure at the surface. Since the atmosphere surrounds the Earth, which itself is a sphere, the volume of atmosphere bearing on any unit area of the Earth's surface is a truncated cone (which extends from the Earth's center to the vacuum of space; obviously only the section of the cone from the Earth's surface to space bears on the Earth's surface). Although the volume of a cone is proportional to the cube of its height, Hooke argued that the air's pressure at the Earth's surface is instead proportional to the height of the atmosphere because gravity diminishes with altitude. Although Hooke did not explicitly state so, the relation that he proposed would be true only if gravity decreases as the inverse square of the distance from the Earth's center.
See also
Flux
Antenna (radio)
Gauss's law
Kepler's laws of planetary motion
Kepler problem
Telecommunications, particularly:
William Thomson, 1st Baron Kelvin
Power-aware routing protocols
Inverse proportionality
Multiplicative inverse
Distance decay
Fermi paradox
Square–cube law
Principle of similitude
References
External links
Damping of sound level with distance
Sound pressure p and the inverse distance law 1/r
Philosophy of physics
Scientific method
Covariant formulation of classical electromagnetism

The covariant formulation of classical electromagnetism refers to ways of writing the laws of classical electromagnetism (in particular, Maxwell's equations and the Lorentz force) in a form that is manifestly invariant under Lorentz transformations, in the formalism of special relativity using rectilinear inertial coordinate systems. These expressions both make it simple to prove that the laws of classical electromagnetism take the same form in any inertial coordinate system, and also provide a way to translate the fields and forces from one frame to another. However, this is not as general as Maxwell's equations in curved spacetime or non-rectilinear coordinate systems.
Covariant objects
Preliminary four-vectors
Lorentz tensors of the following kinds may be used in this article to describe bodies or particles:
Four-displacement: x^α = (ct, x)
Four-velocity: U^α = γ(u)(c, u), where γ(u) is the Lorentz factor at the 3-velocity u.
Four-momentum: p^α = (E/c, p), where p is 3-momentum, E is the total energy, and m is rest mass.
Four-gradient: ∂_α = ∂/∂x^α = ((1/c) ∂/∂t, ∇)
The d'Alembertian operator is denoted □, with □ = ∂^α ∂_α = (1/c²) ∂²/∂t² − ∇².
The signs in the following tensor analysis depend on the convention used for the metric tensor. The convention used here is (+ − − −), corresponding to the Minkowski metric tensor:

η^{αβ} = diag(+1, −1, −1, −1)
Electromagnetic tensor
The electromagnetic tensor is the combination of the electric and magnetic fields into a covariant antisymmetric tensor whose entries are B-field quantities.
and the result of raising its indices is
where E is the electric field, B the magnetic field, and c the speed of light.
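As a concrete illustration, the following Python/NumPy sketch assembles the covariant field tensor from given E and B components under the (+, −, −, −) convention assumed above, and checks its antisymmetry and the effect of raising indices. The field values are arbitrary illustrative numbers, and sign conventions vary between texts, so treat this as a sketch rather than the definitive component layout.

```python
import numpy as np

c = 299_792_458.0              # speed of light, m/s
E = np.array([1.0, 2.0, 3.0])  # illustrative electric field components, V/m
B = np.array([0.1, 0.2, 0.3])  # illustrative magnetic field components, T

Ex, Ey, Ez = E / c
Bx, By, Bz = B

# Covariant field tensor F_{alpha beta} in the (+,-,-,-) convention
F_lower = np.array([
    [0.0,  Ex,   Ey,   Ez],
    [-Ex,  0.0, -Bz,   By],
    [-Ey,  Bz,   0.0, -Bx],
    [-Ez, -By,   Bx,   0.0],
])

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric
F_upper = eta @ F_lower @ eta           # raise both indices

print(np.allclose(F_lower, -F_lower.T))  # True: the tensor is antisymmetric
print(F_upper[0, 1], -Ex)                # the E-field entries flip sign when indices are raised
```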
Four-current
The four-current is the contravariant four-vector which combines electric charge density ρ and electric current density j:

J^α = (cρ, j)
Four-potential
The electromagnetic four-potential is a covariant four-vector containing the electric potential (also called the scalar potential) ϕ and magnetic vector potential (or vector potential) A, as follows:

A^α = (ϕ/c, A)
The differential of the electromagnetic potential is

F_{αβ} = ∂_α A_β − ∂_β A_α
In the language of differential forms, which provides the generalisation to curved spacetimes, these are the components of a 1-form and a 2-form respectively. Here, is the exterior derivative and the wedge product.
Electromagnetic stress–energy tensor
The electromagnetic stress–energy tensor can be interpreted as the flux density of the momentum four-vector, and is a contravariant symmetric tensor that is the contribution of the electromagnetic fields to the overall stress–energy tensor:
where is the electric permittivity of vacuum, μ0 is the magnetic permeability of vacuum, the Poynting vector is
and the Maxwell stress tensor is given by
The electromagnetic field tensor F constructs the electromagnetic stress–energy tensor T by the equation:
where η is the Minkowski metric tensor (with signature (+ − − −)). Notice that we use the fact that

ε₀ μ₀ c² = 1
which is predicted by Maxwell's equations.
Maxwell's equations in vacuum
In vacuum (or for the microscopic equations, not including macroscopic material descriptions), Maxwell's equations can be written as two tensor equations.
The two inhomogeneous Maxwell's equations, Gauss's law and Ampère's law (with Maxwell's correction), combine into (with the (+ − − −) metric):

∂_α F^{αβ} = μ₀ J^β

The homogeneous equations – Faraday's law of induction and Gauss's law for magnetism – combine to form ∂_γ F_{αβ} + ∂_β F_{γα} + ∂_α F_{βγ} = 0, which may be written using Levi-Civita duality as:

ε^{αβγδ} ∂_β F_{γδ} = 0
where Fαβ is the electromagnetic tensor, Jα is the four-current, εαβγδ is the Levi-Civita symbol, and the indices behave according to the Einstein summation convention.
Each of these tensor equations corresponds to four scalar equations, one for each value of β.
Using the antisymmetric tensor notation and comma notation for the partial derivative (see Ricci calculus), the second equation can also be written more compactly as:
In the absence of sources, Maxwell's equations reduce to:
which is an electromagnetic wave equation in the field strength tensor.
Maxwell's equations in the Lorenz gauge
The Lorenz gauge condition is a Lorentz-invariant gauge condition. (This can be contrasted with other gauge conditions such as the Coulomb gauge, which if it holds in one inertial frame will generally not hold in any other.) It is expressed in terms of the four-potential as follows:

∂_α A^α = 0

In the Lorenz gauge, the microscopic Maxwell's equations can be written as:

□ A^α = μ₀ J^α
Lorentz force
Charged particle
Electromagnetic (EM) fields affect the motion of electrically charged matter: due to the Lorentz force. In this way, EM fields can be detected (with applications in particle physics, and natural occurrences such as in aurorae). In relativistic form, the Lorentz force uses the field strength tensor as follows.
Expressed in terms of coordinate time t, it is:
where pα is the four-momentum, q is the charge, and xβ is the position.
Expressed in frame-independent form, we have the four-force
where uβ is the four-velocity, and τ is the particle's proper time, which is related to coordinate time by .
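For a non-relativistic illustration of how the field components drive a charge's motion, the sketch below integrates the familiar three-dimensional Lorentz force q(E + v × B) with a simple Euler step; the charge, mass, field values, and step size are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

q, m = 1.6e-19, 9.1e-31           # charge and mass (electron-like values)
E = np.array([0.0, 0.0, 1.0e3])   # illustrative uniform E field, V/m
B = np.array([0.0, 0.0, 1.0e-3])  # illustrative uniform B field, T

r = np.zeros(3)
v = np.array([1.0e5, 0.0, 0.0])   # initial velocity, m/s
dt = 1.0e-12                      # time step, s

for _ in range(1000):
    a = (q / m) * (E + np.cross(v, B))  # Lorentz force per unit mass
    v = v + a * dt                      # simple forward-Euler update
    r = r + v * dt

print(r)  # position after 1 ns of motion
```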
Charge continuum
The density of force due to electromagnetism, whose spatial part is the Lorentz force, is given by
and is related to the electromagnetic stress–energy tensor by
Conservation laws
Electric charge
The continuity equation:

∂_α J^α = 0
expresses charge conservation.
Electromagnetic energy–momentum
Using the Maxwell equations, one can see that the electromagnetic stress–energy tensor (defined above) satisfies the following differential equation, relating it to the electromagnetic tensor and the current four-vector
or
which expresses the conservation of linear momentum and energy by electromagnetic interactions.
Covariant objects in matter
Free and bound four-currents
In order to solve the equations of electromagnetism given here, it is necessary to add information about how to calculate the electric current, Jν. Frequently, it is convenient to separate the current into two parts, the free current and the bound current, which are modeled by different equations;
where
Maxwell's macroscopic equations have been used, in addition the definitions of the electric displacement D and the magnetic intensity H:
where M is the magnetization and P the electric polarization.
Magnetization–polarization tensor
The bound current is derived from the P and M fields which form an antisymmetric contravariant magnetization-polarization tensor
which determines the bound current
Electric displacement tensor
If this is combined with Fμν we get the antisymmetric contravariant electromagnetic displacement tensor which combines the D and H fields as follows:
The three field tensors are related by:
which is equivalent to the definitions of the D and H fields given above.
Maxwell's equations in matter
The result is that Ampère's law,
and Gauss's law,
combine into one equation:
The bound current and free current as defined above are automatically and separately conserved
Constitutive equations
Vacuum
In vacuum, the constitutive relations between the field tensor and displacement tensor are:
Antisymmetry reduces these 16 equations to just six independent equations. Because it is usual to define Fμν by
the constitutive equations may, in vacuum, be combined with the Gauss–Ampère law to get:
The electromagnetic stress–energy tensor in terms of the displacement is:
where δαπ is the Kronecker delta. When the upper index is lowered with η, it becomes symmetric and is part of the source of the gravitational field.
Linear, nondispersive matter
Thus we have reduced the problem of modeling the current, Jν to two (hopefully) easier problems — modeling the free current, Jνfree and modeling the magnetization and polarization, . For example, in the simplest materials at low frequencies, one has
where one is in the instantaneously comoving inertial frame of the material, σ is its electrical conductivity, χe is its electric susceptibility, and χm is its magnetic susceptibility.
The constitutive relations between the and F tensors, proposed by Minkowski for a linear materials (that is, E is proportional to D and B proportional to H), are:
where u is the four-velocity of material, ε and μ are respectively the proper permittivity and permeability of the material (i.e. in rest frame of material), and denotes the Hodge star operator.
Lagrangian for classical electrodynamics
Vacuum
The Lagrangian density for classical electrodynamics is composed of two components: a field component and a source component:

𝓛 = 𝓛_field + 𝓛_interaction = −(1/(4μ₀)) F_{αβ} F^{αβ} − A_α J^α
In the interaction term, the four-current should be understood as an abbreviation of many terms expressing the electric currents of other charged fields in terms of their variables; the four-current is not itself a fundamental field.
The Lagrange equations for the electromagnetic lagrangian density can be stated as follows:
Noting
the expression inside the square bracket is
The second term is
Therefore, the electromagnetic field's equations of motion are
which is the Gauss–Ampère equation above.
Matter
Separating the free currents from the bound currents, another way to write the Lagrangian density is as follows:
Using Lagrange equation, the equations of motion for can be derived.
The equivalent expression in vector notation is:
See also
Covariant classical field theory
Electromagnetic tensor
Electromagnetic wave equation
Liénard–Wiechert potential for a charge in arbitrary motion
Moving magnet and conductor problem
Inhomogeneous electromagnetic wave equation
Proca action
Quantum electrodynamics
Relativistic electromagnetism
Stueckelberg action
Wheeler–Feynman absorber theory
Notes
References
Further reading
The Feynman Lectures on Physics Vol. II Ch. 25: Electrodynamics in Relativistic Notation
Concepts in physics
Electromagnetism
Special relativity
Delta-v

Delta-v (also known as "change in velocity"), symbolized as Δv and pronounced delta-vee, as used in spacecraft flight dynamics, is a measure of the impulse per unit of spacecraft mass that is needed to perform a maneuver such as launching from or landing on a planet or moon, or an in-space orbital maneuver. It is a scalar that has the units of speed. As used in this context, it is not the same as the physical change in velocity of said spacecraft.
A simple example might be the case of a conventional rocket-propelled spacecraft, which achieves thrust by burning fuel. Such a spacecraft's delta-v, then, would be the change in velocity that spacecraft can achieve by burning its entire fuel load.
Delta-v is produced by reaction engines, such as rocket engines, and is proportional to the thrust per unit mass and the burn time. It is used to determine the mass of propellant required for the given maneuver through the Tsiolkovsky rocket equation.
For multiple maneuvers, delta-v sums linearly.
For interplanetary missions, delta-v is often plotted on a porkchop plot, which displays the required mission delta-v as a function of launch date.
Definition
Δv = ∫ |T(t)| / m(t) dt (integral taken from t₀ to t₁)

where
T(t) is the instantaneous thrust at time t.
m(t) is the instantaneous mass at time t.
Change in velocity is useful in many cases, such as determining the change in momentum (impulse): Δp = m Δv, where p is momentum and m is mass.
Specific cases
In the absence of external forces:

Δv = ∫ |a| dt

where a is the coordinate acceleration.
When thrust is applied in a constant direction (v/|v| is constant) this simplifies to:

Δv = |v₁ − v₀|

which is simply the magnitude of the change in velocity. However, this relation does not hold in the general case: if, for instance, a constant, unidirectional acceleration is reversed halfway through the burn, then the velocity difference is 0, but delta-v is the same as for the non-reversed thrust.
For rockets, "absence of external forces" is taken to mean the absence of gravity and atmospheric drag, as well as the absence of aerostatic back pressure on the nozzle, and hence the vacuum I is used for calculating the vehicle's delta-v capacity via the rocket equation. In addition, the costs for atmospheric losses and gravity drag are added into the delta-v budget when dealing with launches from a planetary surface.
Orbital maneuvers
Orbit maneuvers are made by firing a thruster to produce a reaction force acting on the spacecraft. The size of this force will be

F = V_exh ρ

where
V_exh is the velocity of the exhaust gas in the rocket frame
ρ is the propellant flow rate to the combustion chamber
The acceleration of the spacecraft caused by this force will be

a = F/m = V_exh ρ / m

where m is the mass of the spacecraft
During the burn the mass of the spacecraft will decrease due to use of fuel, the time derivative of the mass being

dm/dt = −ρ

If now the direction of the force, i.e. the direction of the nozzle, is fixed during the burn, one gets the velocity increase from the thruster force of a burn starting at time t₀ and ending at t₁ as

Δv = ∫ (V_exh ρ / m) dt (from t₀ to t₁)

Changing the integration variable from time t to the spacecraft mass m one gets

Δv = −∫ (V_exh / m) dm (from m₀ to m₁)

Assuming V_exh to be a constant not depending on the amount of fuel left, this relation is integrated to

Δv = V_exh ln(m₀/m₁)

which is the Tsiolkovsky rocket equation.
If for example 20% of the launch mass is fuel, giving a constant V_exh of 2100 m/s (a typical value for a hydrazine thruster), the capacity of the reaction control system is

Δv = 2100 ln(1/0.8) ≈ 469 m/s
If V_exh is a non-constant function of the amount of fuel left,

Δv = −∫ (V_exh(m) / m) dm (from m₀ to m₁)

the capacity of the reaction control system is computed by evaluating this integral.
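A minimal Python sketch of the rocket-equation arithmetic just described; the 20% propellant fraction and the 2100 m/s exhaust velocity are the illustrative values from the text, while the absolute launch mass is an arbitrary scale.

```python
import math

def delta_v(v_exh, m_initial, m_final):
    """Tsiolkovsky rocket equation: delta-v for a constant exhaust velocity."""
    return v_exh * math.log(m_initial / m_final)

v_exh = 2100.0   # m/s, typical hydrazine thruster
m0 = 1000.0      # kg launch mass (arbitrary scale)
m1 = 0.8 * m0    # 20% of the launch mass is propellant

print(f"{delta_v(v_exh, m0, m1):.0f} m/s")  # about 469 m/s
```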
The acceleration caused by the thruster force is just an additional acceleration to be added to the other accelerations (force per unit mass) affecting the spacecraft, and the orbit can easily be propagated with a numerical algorithm that includes this thruster force. But for many purposes, typically for studies or for maneuver optimization, burns are approximated by impulsive maneuvers, with a delta-v as given by the rocket equation above. In this way one can, for example, use a "patched conics" approach, modeling the maneuver as a shift from one Kepler orbit to another by an instantaneous change of the velocity vector.
This approximation with impulsive maneuvers is in most cases very accurate, at least when chemical propulsion is used. For low thrust systems, typically electrical propulsion systems, this approximation is less accurate. But even for geostationary spacecraft using electrical propulsion for out-of-plane control with thruster burn periods extending over several hours around the nodes this approximation is fair.
Production
Delta-v is typically provided by the thrust of a rocket engine, but can be created by other engines. The time-rate of change of delta-v is the magnitude of the acceleration caused by the engines, i.e., the thrust per total vehicle mass. The actual acceleration vector would be found by adding thrust per mass on to the gravity vector and the vectors representing any other forces acting on the object.
The total delta-v needed is a good starting point for early design decisions since consideration of the added complexities are deferred to later times in the design process.
The rocket equation shows that the required amount of propellant dramatically increases with increasing delta-v. Therefore, in modern spacecraft propulsion systems considerable study is put into reducing the total delta-v needed for a given spaceflight, as well as designing spacecraft that are capable of producing larger delta-v.
Increasing the delta-v provided by a propulsion system can be achieved by:
staging
increasing specific impulse
improving propellant mass fraction
Multiple maneuvers
Because the mass ratios apply to any given burn, when multiple maneuvers are performed in sequence, the mass ratios multiply.
Thus it can be shown that, provided the exhaust velocity is fixed, delta-v values can be summed:

Δv = Δv₁ + Δv₂ = V_exh ln(m₀/m₁) + V_exh ln(m₁/m₂) = V_exh ln(m₀/m₂)

where m₀ is the mass before the first maneuver and m₁ and m₂ are the masses after the first and second maneuvers respectively. This is just the rocket equation applied to the sum of the two maneuvers.
This is convenient since it means that delta-v can be calculated and simply added and the mass ratio calculated only for the overall vehicle for the entire mission. Thus delta-v is commonly quoted rather than mass ratios which would require multiplication.
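A short sketch, with arbitrary illustrative masses and exhaust velocity, showing that multiplying the per-burn mass ratios is equivalent to adding the per-burn delta-v values.

```python
import math

v_exh = 3000.0                     # m/s, illustrative exhaust velocity
m0, m1, m2 = 1000.0, 800.0, 650.0  # kg: before burn 1, after burn 1, after burn 2

dv1 = v_exh * math.log(m0 / m1)
dv2 = v_exh * math.log(m1 / m2)
dv_total = v_exh * math.log(m0 / m2)  # single rocket-equation evaluation over both burns

print(f"{dv1 + dv2:.1f} m/s == {dv_total:.1f} m/s")  # the delta-v values add
```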
Delta-v budgets
When designing a trajectory, delta-v budget is used as a good indicator of how much propellant will be required. Propellant usage is an exponential function of delta-v in accordance with the rocket equation, it will also depend on the exhaust velocity.
It is not possible to determine delta-v requirements from conservation of energy by considering only the total energy of the vehicle in the initial and final orbits since energy is carried away in the exhaust (see also below). For example, most spacecraft are launched in an orbit with inclination fairly near to the latitude at the launch site, to take advantage of the Earth's rotational surface speed. If it is necessary, for mission-based reasons, to put the spacecraft in an orbit of different inclination, a substantial delta-v is required, though the specific kinetic and potential energies in the final orbit and the initial orbit are equal.
When rocket thrust is applied in short bursts, the other sources of acceleration may be negligible, and the magnitude of the velocity change of one burst may be simply approximated by the delta-v. The total delta-v to be applied can then simply be found by adding each of the delta-vs needed at the discrete burns, even though between bursts the magnitude and direction of the velocity changes due to gravity, e.g. in an elliptic orbit.
For examples of calculating delta-v, see Hohmann transfer orbit, gravitational slingshot, and Interplanetary Transport Network. It is also notable that large thrust can reduce gravity drag.
Delta-v is also required to keep satellites in orbit and is expended in propulsive orbital stationkeeping maneuvers. Since the propellant load on most satellites cannot be replenished, the amount of propellant initially loaded on a satellite may well determine its useful lifetime.
Oberth effect
From power considerations, it turns out that when applying delta-v in the direction of the velocity the specific orbital energy gained per unit delta-v is equal to the instantaneous speed. This is called the Oberth effect.
For example, a satellite in an elliptical orbit is boosted more efficiently at high speed (that is, small altitude) than at low speed (that is, high altitude).
Another example is that when a vehicle is making a pass of a planet, burning the propellant at closest approach rather than further out gives significantly higher final speed, and this is even more so when the planet is a large one with a deep gravity field, such as Jupiter.
Porkchop plot
Due to the relative positions of planets changing over time, different delta-vs are required at different launch dates. A diagram that shows the required delta-v plotted against time is sometimes called a porkchop plot. Such a diagram is useful since it enables calculation of a launch window, since launch should only occur when the mission is within the capabilities of the vehicle to be employed.
Around the Solar System
Delta-v needed for various orbital manoeuvres using conventional rockets; red arrows show where optional aerobraking can be performed in that particular direction, black numbers give delta-v in km/s that apply in either direction. Typical figures are 8.6 km/s from Earth's surface to LEO, 4.1 and 3.8 km/s from LEO to lunar orbit (or L5) and to GEO respectively, 0.7 km/s from L5 to lunar orbit, and 2.2 km/s from lunar orbit to the lunar surface. These figures come from Chapter 2 of Space Settlements: A Design Study on the NASA website. Lower-delta-v transfers than shown can often be achieved, but involve rare transfer windows or take significantly longer.
C3 Escape orbit
GEO Geosynchronous orbit
GTO Geostationary transfer orbit
L4/5 Earth–Moon Lagrangian point
LEO Low Earth orbit
LEO reentry
For example, the Soyuz spacecraft makes a de-orbit from the ISS in two steps. First, it needs a delta-v of 2.18 m/s for a safe separation from the space station. Then it needs another 128 m/s for reentry.
See also
Delta-v budget
Gravity drag
Orbital maneuver
Orbital stationkeeping
Spacecraft propulsion
Orbital propellant depot
Specific impulse
Tsiolkovsky rocket equation
Delta-v (physics)
References
Astrodynamics
Spacecraft propulsion
Newton's laws of motion

Newton's laws of motion are three physical laws that describe the relationship between the motion of an object and the forces acting on it. These laws, which provide the basis for Newtonian mechanics, can be paraphrased as follows:
A body remains at rest, or in motion at a constant speed in a straight line, except insofar as it is acted upon by a force.
At any instant of time, the net force on a body is equal to the body's acceleration multiplied by its mass or, equivalently, the rate at which the body's momentum is changing with time.
If two bodies exert forces on each other, these forces have the same magnitude but opposite directions.
The three laws of motion were first stated by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), originally published in 1687. Newton used them to investigate and explain the motion of many physical objects and systems. In the time since Newton, new insights, especially around the concept of energy, built the field of classical mechanics on his foundations. Limitations to Newton's laws have also been discovered; new theories are necessary when objects move at very high speeds (special relativity), are very massive (general relativity), or are very small (quantum mechanics).
Prerequisites
Newton's laws are often stated in terms of point or particle masses, that is, bodies whose volume is negligible. This is a reasonable approximation for real bodies when the motion of internal parts can be neglected, and when the separation between bodies is much larger than the size of each. For instance, the Earth and the Sun can both be approximated as pointlike when considering the orbit of the former around the latter, but the Earth is not pointlike when considering activities on its surface.
The mathematical description of motion, or kinematics, is based on the idea of specifying positions using numerical coordinates. Movement is represented by these numbers changing over time: a body's trajectory is represented by a function that assigns to each value of a time variable the values of all the position coordinates. The simplest case is one-dimensional, that is, when a body is constrained to move only along a straight line. Its position can then be given by a single number, indicating where it is relative to some chosen reference point. For example, a body might be free to slide along a track that runs left to right, and so its location can be specified by its distance from a convenient zero point, or origin, with negative numbers indicating positions to the left and positive numbers indicating positions to the right.

If the body's location as a function of time is s(t), then its average velocity over the time interval from t₀ to t₁ is

Δs/Δt = (s(t₁) − s(t₀)) / (t₁ − t₀)

Here, the Greek letter Δ (delta) is used, per tradition, to mean "change in". A positive average velocity means that the position coordinate increases over the interval in question, a negative average velocity indicates a net decrease over that interval, and an average velocity of zero means that the body ends the time interval in the same place as it began.

Calculus gives the means to define an instantaneous velocity, a measure of a body's speed and direction of movement at a single moment of time, rather than over an interval. One notation for the instantaneous velocity is to replace Δ with the symbol d, for example,

v = ds/dt

This denotes that the instantaneous velocity is the derivative of the position with respect to time. It can roughly be thought of as the ratio between an infinitesimally small change in position to the infinitesimally small time interval over which it occurs. More carefully, the velocity and all other derivatives can be defined using the concept of a limit. A function f(t) has a limit of L at a given input value t₀ if the difference between f(t) and L can be made arbitrarily small by choosing an input sufficiently close to t₀. One writes

lim (t → t₀) f(t) = L

Instantaneous velocity can be defined as the limit of the average velocity as the time interval shrinks to zero:

v = lim (Δt → 0) Δs/Δt

Acceleration is to velocity as velocity is to position: it is the derivative of the velocity with respect to time. Acceleration can likewise be defined as a limit:

a = lim (Δt → 0) Δv/Δt

Consequently, the acceleration is the second derivative of position, often written d²s/dt².
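To make the limiting process concrete, here is a small Python sketch, using an arbitrary example trajectory s(t) = 5t², that computes average velocities over shrinking intervals and watches them approach the instantaneous velocity ds/dt = 10t.

```python
def s(t):
    """Example position function: s(t) = 5 t^2 (an arbitrary illustrative trajectory)."""
    return 5.0 * t**2

t0 = 2.0
for dt in (1.0, 0.1, 0.01, 0.001):
    avg_v = (s(t0 + dt) - s(t0)) / dt  # average velocity over [t0, t0 + dt]
    print(f"dt = {dt:>6}: average velocity = {avg_v:.4f}")
# The values approach 20.0, the instantaneous velocity ds/dt = 10 * t0 at t0 = 2.
```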
Position, when thought of as a displacement from an origin point, is a vector: a quantity with both magnitude and direction. Velocity and acceleration are vector quantities as well. The mathematical tools of vector algebra provide the means to describe motion in two, three or more dimensions. Vectors are often denoted with an arrow, as in , or in bold typeface, such as . Often, vectors are represented visually as arrows, with the direction of the vector being the direction of the arrow, and the magnitude of the vector indicated by the length of the arrow. Numerically, a vector can be represented as a list; for example, a body's velocity vector might be , indicating that it is moving at 3 metres per second along the horizontal axis and 4 metres per second along the vertical axis. The same motion described in a different coordinate system will be represented by different numbers, and vector algebra can be used to translate between these alternatives.
The study of mechanics is complicated by the fact that household words like energy are used with a technical meaning. Moreover, words which are synonymous in everyday speech are not so in physics: force is not the same as power or pressure, for example, and mass has a different meaning than weight. The physics concept of force makes quantitative the everyday idea of a push or a pull. Forces in Newtonian mechanics are often due to strings and ropes, friction, muscle effort, gravity, and so forth. Like displacement, velocity, and acceleration, force is a vector quantity.
Laws
First law
Translated from Latin, Newton's first law reads,
Every object perseveres in its state of rest, or of uniform motion in a right line, except insofar as it is compelled to change that state by forces impressed thereon.
Newton's first law expresses the principle of inertia: the natural behavior of a body is to move in a straight line at constant speed. A body's motion preserves the status quo, but external forces can perturb this.
The modern understanding of Newton's first law is that no inertial observer is privileged over any other. The concept of an inertial observer makes quantitative the everyday idea of feeling no effects of motion. For example, a person standing on the ground watching a train go past is an inertial observer. If the observer on the ground sees the train moving smoothly in a straight line at a constant speed, then a passenger sitting on the train will also be an inertial observer: the train passenger feels no motion. The principle expressed by Newton's first law is that there is no way to say which inertial observer is "really" moving and which is "really" standing still. One observer's state of rest is another observer's state of uniform motion in a straight line, and no experiment can deem either point of view to be correct or incorrect. There is no absolute standard of rest. Newton himself believed that absolute space and time existed, but that the only measures of space or time accessible to experiment are relative.
Second law
The change of motion of an object is proportional to the force impressed; and is made in the direction of the straight line in which the force is impressed.
By "motion", Newton meant the quantity now called momentum, which depends upon the amount of matter contained in a body, the speed at which that body is moving, and the direction in which it is moving. In modern notation, the momentum of a body is the product of its mass and its velocity:
where all three quantities can change over time.
Newton's second law, in modern form, states that the time derivative of the momentum is the force:
If the mass does not change with time, then the derivative acts only upon the velocity, and so the force equals the product of the mass and the time derivative of the velocity, which is the acceleration:
As the acceleration is the second derivative of position with respect to time, this can also be written
The forces acting on a body add as vectors, and so the total force on a body depends upon both the magnitudes and the directions of the individual forces. When the net force on a body is equal to zero, then by Newton's second law, the body does not accelerate, and it is said to be in mechanical equilibrium. A state of mechanical equilibrium is stable if, when the position of the body is changed slightly, the body remains near that equilibrium. Otherwise, the equilibrium is unstable.
A common visual representation of forces acting in concert is the free body diagram, which schematically portrays a body of interest and the forces applied to it by outside influences. For example, a free body diagram of a block sitting upon an inclined plane can illustrate the combination of gravitational force, "normal" force, friction, and string tension.
Newton's second law is sometimes presented as a definition of force, i.e., a force is that which exists when an inertial observer sees a body accelerating. In order for this to be more than a tautology — acceleration implies force, force implies acceleration — some other statement about force must also be made. For example, an equation detailing the force might be specified, like Newton's law of universal gravitation. By inserting such an expression for into Newton's second law, an equation with predictive power can be written. Newton's second law has also been regarded as setting out a research program for physics, establishing that important goals of the subject are to identify the forces present in nature and to catalogue the constituents of matter.
Third law
To every action, there is always opposed an equal reaction; or, the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.
Overly brief paraphrases of the third law, like "action equals reaction" might have caused confusion among generations of students: the "action" and "reaction" apply to different bodies. For example, consider a book at rest on a table. The Earth's gravity pulls down upon the book. The "reaction" to that "action" is not the support force from the table holding up the book, but the gravitational pull of the book acting on the Earth.
Newton's third law relates to a more fundamental principle, the conservation of momentum. The latter remains true even in cases where Newton's statement does not, for instance when force fields as well as material bodies carry momentum, and when momentum is defined properly, in quantum mechanics as well. In Newtonian mechanics, if two bodies have momenta p₁ and p₂ respectively, then the total momentum of the pair is p = p₁ + p₂, and the rate of change of p is

dp/dt = dp₁/dt + dp₂/dt

By Newton's second law, the first term is the total force upon the first body, and the second term is the total force upon the second body. If the two bodies are isolated from outside influences, the only force upon the first body can be that from the second, and vice versa. By Newton's third law, these forces have equal magnitude but opposite direction, so they cancel when added, and p is constant. Alternatively, if p is known to be constant, it follows that the forces have equal magnitude and opposite direction.
Candidates for additional laws
Various sources have proposed elevating other ideas used in classical mechanics to the status of Newton's laws. For example, in Newtonian mechanics, the total mass of a body made by bringing together two smaller bodies is the sum of their individual masses. Frank Wilczek has suggested calling attention to this assumption by designating it "Newton's Zeroth Law". Another candidate for a "zeroth law" is the fact that at any instant, a body reacts to the forces applied to it at that instant. Likewise, the idea that forces add like vectors (or in other words obey the superposition principle), and the idea that forces change the energy of a body, have both been described as a "fourth law".
Examples
The study of the behavior of massive bodies using Newton's laws is known as Newtonian mechanics. Some example problems in Newtonian mechanics are particularly noteworthy for conceptual or historical reasons.
Uniformly accelerated motion
If a body falls from rest near the surface of the Earth, then in the absence of air resistance, it will accelerate at a constant rate. This is known as free fall. The speed attained during free fall is proportional to the elapsed time, and the distance traveled is proportional to the square of the elapsed time. Importantly, the acceleration is the same for all bodies, independently of their mass. This follows from combining Newton's second law of motion with his law of universal gravitation. The latter states that the magnitude of the gravitational force from the Earth upon the body is

F = G M m / r²

where m is the mass of the falling body, M is the mass of the Earth, G is Newton's constant, and r is the distance from the center of the Earth to the body's location, which is very nearly the radius of the Earth. Setting this equal to m a, the body's mass m cancels from both sides of the equation, leaving an acceleration that depends upon G, M, and r, and can be taken to be constant. This particular value of acceleration is typically denoted g:

g = G M / r² ≈ 9.8 m/s²
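A quick Python check of that value, using rounded standard constants (gravitational constant, Earth's mass, and mean Earth radius):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of the Earth, kg
R_earth = 6.371e6   # mean radius of the Earth, m

g = G * M_earth / R_earth**2
print(f"g = {g:.2f} m/s^2")  # about 9.82 m/s^2
```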
If the body is not released from rest but instead launched upwards and/or horizontally with nonzero velocity, then free fall becomes projectile motion. When air resistance can be neglected, projectiles follow parabola-shaped trajectories, because gravity affects the body's vertical motion and not its horizontal. At the peak of the projectile's trajectory, its vertical velocity is zero, but its acceleration is downwards, as it is at all times. Setting the wrong vector equal to zero is a common confusion among physics students.
Uniform circular motion
When a body is in uniform circular motion, the force on it changes the direction of its motion but not its speed. For a body moving in a circle of radius r at a constant speed v, its acceleration has a magnitude

a = v²/r

and is directed toward the center of the circle. The force required to sustain this acceleration, called the centripetal force, is therefore also directed toward the center of the circle and has magnitude m v²/r. Many orbits, such as that of the Moon around the Earth, can be approximated by uniform circular motion. In such cases, the centripetal force is gravity, and by Newton's law of universal gravitation has magnitude G M m / r², where M is the mass of the larger body being orbited. Therefore, the mass of a body can be calculated from observations of another body orbiting around it.
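For example, equating m v²/r to G M m / r² gives M = v² r / G, so the mass of the Sun can be estimated from the Earth's (approximately circular) orbit. The sketch below uses rounded textbook values for the orbital radius and period.

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
r = 1.496e11            # Earth-Sun distance, m (1 AU)
T = 365.25 * 24 * 3600  # orbital period, s

v = 2 * math.pi * r / T  # orbital speed for a circular orbit
M_sun = v**2 * r / G     # centripetal force equals gravitational force
print(f"M_sun = {M_sun:.3e} kg")  # about 2e30 kg
```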
Newton's cannonball is a thought experiment that interpolates between projectile motion and uniform circular motion. A cannonball that is lobbed weakly off the edge of a tall cliff will hit the ground in the same amount of time as if it were dropped from rest, because the force of gravity only affects the cannonball's momentum in the downward direction, and its effect is not diminished by horizontal movement. If the cannonball is launched with a greater initial horizontal velocity, then it will travel farther before it hits the ground, but it will still hit the ground in the same amount of time. However, if the cannonball is launched with an even larger initial velocity, then the curvature of the Earth becomes significant: the ground itself will curve away from the falling cannonball. A very fast cannonball will fall away from the inertial straight-line trajectory at the same rate that the Earth curves away beneath it; in other words, it will be in orbit (imagining that it is not slowed by air resistance or obstacles).
Harmonic motion
Consider a body of mass m able to move along the x axis, and suppose an equilibrium point exists at the position x = 0. That is, at x = 0, the net force upon the body is the zero vector, and by Newton's second law, the body will not accelerate. If the force upon the body is proportional to the displacement from the equilibrium point, and directed toward the equilibrium point, then the body will perform simple harmonic motion. Writing the force as F = −kx, Newton's second law becomes

m d²x/dt² = −k x

This differential equation has the solution

x(t) = A cos ωt + B sin ωt

where the frequency ω is equal to √(k/m), and the constants A and B can be calculated knowing, for example, the position and velocity the body has at a given time, like t = 0.
One reason that the harmonic oscillator is a conceptually important example is that it is a good approximation for many systems near a stable mechanical equilibrium. For example, a pendulum has a stable equilibrium in the vertical position: if motionless there, it will remain there, and if pushed slightly, it will swing back and forth. Neglecting air resistance and friction in the pivot, the force upon the pendulum is gravity, and Newton's second law becomes

m L d²θ/dt² = −m g sin θ

where L is the length of the pendulum and θ is its angle from the vertical. When the angle θ is small, the sine of θ is nearly equal to θ (see Taylor series), and so this expression simplifies to the equation for a simple harmonic oscillator with frequency ω = √(g/L).
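A short Python sketch, with an arbitrary pendulum length and a small initial angle, that integrates the full pendulum equation numerically and compares the time of the first zero crossing with the quarter period predicted by the small-angle frequency √(g/L):

```python
import math

g, L = 9.81, 1.0         # gravitational acceleration (m/s^2) and pendulum length (m)
theta, omega = 0.1, 0.0  # start from rest at a small angle (radians)
dt = 1e-5
t = 0.0

# Integrate L * theta'' = -g * sin(theta) until theta first crosses zero (a quarter period)
while theta > 0.0:
    alpha = -(g / L) * math.sin(theta)
    omega += alpha * dt  # semi-implicit Euler step
    theta += omega * dt
    t += dt

quarter_period_small_angle = (math.pi / 2) * math.sqrt(L / g)
print(f"numerical quarter period: {t:.4f} s")
print(f"small-angle prediction:   {quarter_period_small_angle:.4f} s")
```

For a 0.1 rad swing the two values agree to within a fraction of a percent, illustrating why the small-angle approximation is so useful.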
A harmonic oscillator can be damped, often by friction or viscous drag, in which case energy bleeds out of the oscillator and the amplitude of the oscillations decreases over time. Also, a harmonic oscillator can be driven by an applied force, which can lead to the phenomenon of resonance.
Objects with variable mass
Newtonian physics treats matter as being neither created nor destroyed, though it may be rearranged. It can be the case that an object of interest gains or loses mass because matter is added to or removed from it. In such a situation, Newton's laws can be applied to the individual pieces of matter, keeping track of which pieces belong to the object of interest over time. For instance, if a rocket of mass M(t), moving at velocity v(t), ejects matter at a velocity u relative to the rocket, then

M dv/dt = u dM/dt + F

where F is the net external force (e.g., a planet's gravitational pull).
Work and energy
Physicists developed the concept of energy after Newton's time, but it has become an inseparable part of what is considered "Newtonian" physics. Energy can broadly be classified into kinetic, due to a body's motion, and potential, due to a body's position relative to others. Thermal energy, the energy carried by heat flow, is a type of kinetic energy not associated with the macroscopic motion of objects but instead with the movements of the atoms and molecules of which they are made. According to the work-energy theorem, when a force acts upon a body while that body moves along the line of the force, the force does work upon the body, and the amount of work done is equal to the change in the body's kinetic energy. In many cases of interest, the net work done by a force when a body moves in a closed loop — starting at a point, moving along some trajectory, and returning to the initial point — is zero. If this is the case, then the force can be written in terms of the gradient of a function called a scalar potential:

F = −∇U
This is true for many forces including that of gravity, but not for friction; indeed, almost any problem in a mechanics textbook that does not involve friction can be expressed in this way. The fact that the force can be written in this way can be understood from the conservation of energy. Without friction to dissipate a body's energy into heat, the body's energy will trade between potential and (non-thermal) kinetic forms while the total amount remains constant. Any gain of kinetic energy, which occurs when the net force on the body accelerates it to a higher speed, must be accompanied by a loss of potential energy. So, the net force upon the body is determined by the manner in which the potential energy decreases.
Rigid-body motion and rotation
A rigid body is an object whose size is too large to neglect and which maintains the same shape over time. In Newtonian mechanics, the motion of a rigid body is often understood by separating it into movement of the body's center of mass and movement around the center of mass.
Center of mass
Significant aspects of the motion of an extended body can be understood by imagining the mass of that body concentrated to a single point, known as the center of mass. The location of a body's center of mass depends upon how that body's material is distributed. For a collection of pointlike objects with masses m₁, ..., m_N at positions r₁, ..., r_N, the center of mass is located at

R = (1/M) Σᵢ mᵢ rᵢ

where M is the total mass of the collection. In the absence of a net external force, the center of mass moves at a constant speed in a straight line. This applies, for example, to a collision between two bodies. If the total external force is not zero, then the center of mass changes velocity as though it were a point body of mass M. This follows from the fact that the internal forces within the collection, the forces that the objects exert upon each other, occur in balanced pairs by Newton's third law. In a system of two bodies with one much more massive than the other, the center of mass will approximately coincide with the location of the more massive body.
Rotational analogues of Newton's laws
When Newton's laws are applied to rotating extended bodies, they lead to new quantities that are analogous to those invoked in the original laws. The analogue of mass is the moment of inertia, the counterpart of momentum is angular momentum, and the counterpart of force is torque.
Angular momentum is calculated with respect to a reference point. If the displacement vector from a reference point to a body is r and the body has momentum p, then the body's angular momentum with respect to that point is, using the vector cross product,

L = r × p

Taking the time derivative of the angular momentum gives

dL/dt = (dr/dt) × p + r × (dp/dt)

The first term vanishes because dr/dt and p point in the same direction. The remaining term is the torque,

τ = r × F

When the torque is zero, the angular momentum is constant, just as when the force is zero, the momentum is constant. The torque can vanish even when the force is non-zero, if the body is located at the reference point (r = 0) or if the force and the displacement vector are directed along the same line.
The angular momentum of a collection of point masses, and thus of an extended body, is found by adding the contributions from each of the points. This provides a means to characterize a body's rotation about an axis, by adding up the angular momenta of its individual pieces. The result depends on the chosen axis, the shape of the body, and the rate of rotation.
Multi-body gravitational system
Newton's law of universal gravitation states that any body attracts any other body along the straight line connecting them. The size of the attracting force is proportional to the product of their masses, and inversely proportional to the square of the distance between them. Finding the shape of the orbits that an inverse-square force law will produce is known as the Kepler problem. The Kepler problem can be solved in multiple ways, including by demonstrating that the Laplace–Runge–Lenz vector is constant, or by applying a duality transformation to a 2-dimensional harmonic oscillator. However it is solved, the result is that orbits will be conic sections, that is, ellipses (including circles), parabolas, or hyperbolas. The eccentricity of the orbit, and thus the type of conic section, is determined by the energy and the angular momentum of the orbiting body. Planets do not have sufficient energy to escape the Sun, and so their orbits are ellipses, to a good approximation; because the planets pull on one another, actual orbits are not exactly conic sections.
If a third mass is added, the Kepler problem becomes the three-body problem, which in general has no exact solution in closed form. That is, there is no way to start from the differential equations implied by Newton's laws and, after a finite sequence of standard mathematical operations, obtain equations that express the three bodies' motions over time. Numerical methods can be applied to obtain useful, albeit approximate, results for the three-body problem. The positions and velocities of the bodies can be stored in variables within a computer's memory; Newton's laws are used to calculate how the velocities will change over a short interval of time, and knowing the velocities, the changes of position over that time interval can be computed. This process is looped to calculate, approximately, the bodies' trajectories. Generally speaking, the shorter the time interval, the more accurate the approximation.
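A minimal Python sketch of the numerical procedure described above, stepping three gravitating point masses forward in time with a simple Euler update. The masses, positions, velocities, and step size are arbitrary illustrative values; a practical integrator would use a smaller step or a higher-order scheme.

```python
import numpy as np

G = 6.674e-11
masses = np.array([5.0e24, 7.0e22, 1.0e3])                   # kg, illustrative three-body system
pos = np.array([[0.0, 0.0], [3.8e8, 0.0], [2.0e8, 1.0e8]])   # positions, m
vel = np.array([[0.0, 0.0], [0.0, 1.0e3], [0.0, 2.0e3]])     # velocities, m/s
dt = 10.0                                                     # time step, s

def accelerations(pos):
    """Newtonian gravitational acceleration on each body from all the others."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d)**3
    return acc

for _ in range(1000):      # advance the system through 1000 short time steps
    vel += accelerations(pos) * dt
    pos += vel * dt

print(pos)                 # approximate positions after the simulated interval
```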
Chaos and unpredictability
Nonlinear dynamics
Newton's laws of motion allow the possibility of chaos. That is, qualitatively speaking, physical systems obeying Newton's laws can exhibit sensitive dependence upon their initial conditions: a slight change of the position or velocity of one part of a system can lead to the whole system behaving in a radically different way within a short time. Noteworthy examples include the three-body problem, the double pendulum, dynamical billiards, and the Fermi–Pasta–Ulam–Tsingou problem.
Newton's laws can be applied to fluids by considering a fluid as composed of infinitesimal pieces, each exerting forces upon neighboring pieces. The Euler momentum equation is an expression of Newton's second law adapted to fluid dynamics. A fluid is described by a velocity field, i.e., a function v(x, t) that assigns a velocity vector to each point in space and time. A small object being carried along by the fluid flow can change velocity for two reasons: first, because the velocity field at its position is changing over time, and second, because it moves to a new location where the velocity field has a different value. Consequently, when Newton's second law is applied to an infinitesimal portion of fluid, the acceleration has two terms, a combination known as a total or material derivative. The mass of an infinitesimal portion depends upon the fluid density, and there is a net force upon it if the fluid pressure varies from one side of it to another. Accordingly, F = ma becomes

∂v/∂t + (v · ∇)v = −(1/ρ) ∇P + f

where ρ is the density, P is the pressure, and f stands for an external influence like a gravitational pull. Incorporating the effect of viscosity turns the Euler equation into a Navier–Stokes equation:

∂v/∂t + (v · ∇)v = −(1/ρ) ∇P + ν ∇²v + f

where ν is the kinematic viscosity.
Singularities
It is mathematically possible for a collection of point masses, moving in accord with Newton's laws, to launch some of themselves away so forcefully that they fly off to infinity in a finite time. This unphysical behavior, known as a "noncollision singularity", depends upon the masses being pointlike and able to approach one another arbitrarily closely, as well as the lack of a relativistic speed limit in Newtonian physics.
It is not yet known whether or not the Euler and Navier–Stokes equations exhibit the analogous behavior of initially smooth solutions "blowing up" in finite time. The question of existence and smoothness of Navier–Stokes solutions is one of the Millennium Prize Problems.
Relation to other formulations of classical physics
Classical mechanics can be mathematically formulated in multiple different ways, other than the "Newtonian" description (which itself, of course, incorporates contributions from others both before and after Newton). The physical content of these different formulations is the same as the Newtonian, but they provide different insights and facilitate different types of calculations. For example, Lagrangian mechanics helps make apparent the connection between symmetries and conservation laws, and it is useful when calculating the motion of constrained bodies, like a mass restricted to move along a curving track or on the surface of a sphere. Hamiltonian mechanics is convenient for statistical physics, leads to further insight about symmetry, and can be developed into sophisticated techniques for perturbation theory. Due to the breadth of these topics, the discussion here will be confined to concise treatments of how they reformulate Newton's laws of motion.
Lagrangian
Lagrangian mechanics differs from the Newtonian formulation by considering entire trajectories at once rather than predicting a body's motion at a single instant. It is traditional in Lagrangian mechanics to denote position with $q$ and velocity with $\dot{q}$. The simplest example is a massive point particle, the Lagrangian for which can be written as the difference between its kinetic and potential energies:
$$L(q, \dot{q}) = T - V,$$
where the kinetic energy is
$$T = \tfrac{1}{2} m \dot{q}^2$$
and the potential energy is some function of the position, $V(q)$. The physical path that the particle will take between an initial point and a final point is the path for which the integral of the Lagrangian is "stationary". That is, the physical path has the property that small perturbations of it will, to a first approximation, not change the integral of the Lagrangian. Calculus of variations provides the mathematical tools for finding this path. Applying the calculus of variations to the task of finding the path yields the Euler–Lagrange equation for the particle,
$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) = \frac{\partial L}{\partial q}.$$
Evaluating the partial derivatives of the Lagrangian gives
$$\frac{d}{dt}\left(m\dot{q}\right) = -\frac{dV}{dq},$$
which is a restatement of Newton's second law. The left-hand side is the time derivative of the momentum, and the right-hand side is the force, represented in terms of the potential energy.
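As an illustration (a sketch, not part of the original text; the symbol names and the generic potential V are assumptions), a computer algebra system can carry out the variational calculation symbolically and recover the same equation of motion:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
q = sp.Function('q')        # generalized coordinate q(t)
V = sp.Function('V')        # unspecified potential energy V(q)

# Lagrangian L = T - V for a single particle moving along a line
L = sp.Rational(1, 2) * m * sp.diff(q(t), t)**2 - V(q(t))

# euler_equations applies the calculus of variations and returns the
# Euler-Lagrange equation, equivalent to m*q''(t) = -dV/dq (Newton's second law)
print(euler_equations(L, [q(t)], [t]))
```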
Landau and Lifshitz argue that the Lagrangian formulation makes the conceptual content of classical mechanics more clear than starting with Newton's laws. Lagrangian mechanics provides a convenient framework in which to prove Noether's theorem, which relates symmetries and conservation laws. The conservation of momentum can be derived by applying Noether's theorem to a Lagrangian for a multi-particle system, and so, Newton's third law is a theorem rather than an assumption.
Hamiltonian
In Hamiltonian mechanics, the dynamics of a system are represented by a function called the Hamiltonian, which in many cases of interest is equal to the total energy of the system. The Hamiltonian is a function of the positions and the momenta of all the bodies making up the system, and it may also depend explicitly upon time. The time derivatives of the position and momentum variables are given by partial derivatives of the Hamiltonian, via Hamilton's equations. The simplest example is a point mass constrained to move in a straight line, under the effect of a potential. Writing $q$ for the position coordinate and $p$ for the body's momentum, the Hamiltonian is
$$\mathcal{H}(p, q) = \frac{p^2}{2m} + V(q).$$
In this example, Hamilton's equations are
$$\frac{dq}{dt} = \frac{\partial \mathcal{H}}{\partial p}$$
and
$$\frac{dp}{dt} = -\frac{\partial \mathcal{H}}{\partial q}.$$
Evaluating these partial derivatives, the former equation becomes
$$\frac{dq}{dt} = \frac{p}{m},$$
which reproduces the familiar statement that a body's momentum is the product of its mass and velocity. The time derivative of the momentum is
$$\frac{dp}{dt} = -\frac{dV}{dq},$$
which, upon identifying the negative derivative of the potential with the force, is just Newton's second law once again.
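A short numerical sketch of these equations (illustrative only; the harmonic potential and all parameter values are assumptions, not from the text) uses the semi-implicit Euler method, which updates the momentum from the position equation and then the position from the momentum equation:

```python
import numpy as np

m, k = 1.0, 1.0                 # mass and spring constant (illustrative)
dVdq = lambda q: k * q          # V(q) = 0.5*k*q**2, so V'(q) = k*q
energy = lambda q, p: p**2 / (2 * m) + 0.5 * k * q**2   # the Hamiltonian

q, p = 1.0, 0.0                 # initial position and momentum
dt, steps = 1e-3, 10_000

for _ in range(steps):
    p -= dt * dVdq(q)           # dp/dt = -dH/dq = -V'(q)
    q += dt * p / m             # dq/dt =  dH/dp =  p/m

# The Hamiltonian (total energy) stays approximately constant along the computed orbit.
print(q, p, energy(q, p))
```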
As in the Lagrangian formulation, in Hamiltonian mechanics the conservation of momentum can be derived using Noether's theorem, making Newton's third law an idea that is deduced rather than assumed.
Among the proposals to reform the standard introductory-physics curriculum is one that teaches the concept of energy before that of force, essentially "introductory Hamiltonian mechanics".
Hamilton–Jacobi
The Hamilton–Jacobi equation provides yet another formulation of classical mechanics, one which makes it mathematically analogous to wave optics. This formulation also uses Hamiltonian functions, but in a different way than the formulation described above. The paths taken by bodies or collections of bodies are deduced from a function $S(\mathbf{q}_1, \mathbf{q}_2, \ldots, t)$ of positions and time. The Hamiltonian is incorporated into the Hamilton–Jacobi equation, a differential equation for $S$. Bodies move over time in such a way that their trajectories are perpendicular to the surfaces of constant $S$, analogously to how a light ray propagates in the direction perpendicular to its wavefront. This is simplest to express for the case of a single point mass, in which $S$ is a function $S(\mathbf{q}, t)$, and the point mass moves in the direction along which $S$ changes most steeply. In other words, the momentum of the point mass is the gradient of $S$:
$$\mathbf{v} = \frac{1}{m} \nabla S.$$
The Hamilton–Jacobi equation for a point mass is
$$-\frac{\partial S}{\partial t} = H\left(\mathbf{q}, \nabla S, t\right).$$
The relation to Newton's laws can be seen by considering a point mass moving in a time-independent potential $V(\mathbf{q})$, in which case the Hamilton–Jacobi equation becomes
$$-\frac{\partial S}{\partial t} = \frac{1}{2m}\left(\nabla S\right)^2 + V(\mathbf{q}).$$
Taking the gradient of both sides, this becomes
$$-\nabla \frac{\partial S}{\partial t} = \frac{1}{2m}\nabla\left(\nabla S\right)^2 + \nabla V.$$
Interchanging the order of the partial derivatives on the left-hand side, and using the power and chain rules on the first term on the right-hand side,
$$-\frac{\partial}{\partial t}\nabla S = \frac{1}{m}\left(\nabla S \cdot \nabla\right)\nabla S + \nabla V.$$
Gathering together the terms that depend upon the gradient of $S$,
$$\left[\frac{\partial}{\partial t} + \frac{1}{m}\left(\nabla S \cdot \nabla\right)\right]\nabla S = -\nabla V.$$
This is another re-expression of Newton's second law. The expression in brackets is a total or material derivative as mentioned above, in which the first term indicates how the function being differentiated changes over time at a fixed location, and the second term captures how a moving particle will see different values of that function as it travels from place to place:
$$\frac{df}{dt} = \frac{\partial f}{\partial t} + \mathbf{v} \cdot \nabla f.$$
Relation to other physical theories
Thermodynamics and statistical physics
In statistical physics, the kinetic theory of gases applies Newton's laws of motion to large numbers (typically on the order of the Avogadro number) of particles. Kinetic theory can explain, for example, the pressure that a gas exerts upon the container holding it as the aggregate of many impacts of atoms, each imparting a tiny amount of momentum.
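For a dilute gas of identical molecules, this picture leads to the standard elementary result (quoted here for concreteness, not derived in the text above) relating the pressure to the mean squared molecular speed:
$$P = \tfrac{1}{3}\, n\, m\, \langle v^2 \rangle,$$
where $n$ is the number of molecules per unit volume, $m$ is the molecular mass, and $\langle v^2 \rangle$ is the mean of the squared speed.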
The Langevin equation is a special case of Newton's second law, adapted for the case of describing a small object bombarded stochastically by even smaller ones. It can be written
$$m \frac{d^2 \mathbf{x}}{dt^2} = -\gamma \frac{d\mathbf{x}}{dt} + \boldsymbol{\xi},$$
where $\gamma$ is a drag coefficient and $\boldsymbol{\xi}$ is a force that varies randomly from instant to instant, representing the net effect of collisions with the surrounding particles. This is used to model Brownian motion.
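A minimal simulation sketch of such an equation (illustrative assumptions throughout: one dimension, unit parameters, and Gaussian white noise as the random force) integrates it with the Euler–Maruyama method:

```python
import numpy as np

rng = np.random.default_rng(0)
m, gamma, noise_strength = 1.0, 1.0, 0.5     # illustrative parameter values
dt, steps = 1e-3, 100_000

v, x = 0.0, 0.0
positions = []
for _ in range(steps):
    xi = noise_strength * rng.standard_normal() / np.sqrt(dt)   # random force (white noise)
    v += dt * (-gamma * v + xi) / m          # Langevin equation: m dv/dt = -gamma*v + xi
    x += dt * v
    positions.append(x)

# The recorded positions wander irregularly, qualitatively like Brownian motion.
```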
Electromagnetism
Newton's three laws can be applied to phenomena involving electricity and magnetism, though subtleties and caveats exist.
Coulomb's law for the electric force between two stationary, electrically charged bodies has much the same mathematical form as Newton's law of universal gravitation: the force is proportional to the product of the charges, inversely proportional to the square of the distance between them, and directed along the straight line between them. The Coulomb force that a charge $q_1$ exerts upon a charge $q_2$ is equal in magnitude to the force that $q_2$ exerts upon $q_1$, and it points in the exact opposite direction. Coulomb's law is thus consistent with Newton's third law.
Electromagnetism treats forces as produced by fields acting upon charges. The Lorentz force law provides an expression for the force upon a charged body that can be plugged into Newton's second law in order to calculate its acceleration. According to the Lorentz force law, a charged body in an electric field experiences a force in the direction of that field, a force proportional to its charge and to the strength of the electric field. In addition, a moving charged body in a magnetic field experiences a force that is also proportional to its charge, in a direction perpendicular to both the field and the body's direction of motion. Using the vector cross product,
$$\mathbf{F} = q\mathbf{E} + q\mathbf{v} \times \mathbf{B}.$$
If the electric field vanishes ($\mathbf{E} = 0$), then the force will be perpendicular to the charge's motion, just as in the case of uniform circular motion studied above, and the charge will circle (or more generally move in a helix) around the magnetic field lines at the cyclotron frequency $\omega = qB/m$. Mass spectrometry works by applying electric and/or magnetic fields to moving charges and measuring the resulting acceleration, which by the Lorentz force law yields the mass-to-charge ratio.
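The cyclotron motion can be checked numerically with a short sketch (all values below are illustrative assumptions): a charge in a uniform magnetic field along z, accelerated only by the Lorentz force, traces an approximately circular orbit of radius m|v|/(qB).

```python
import numpy as np

q, m = 1.0, 1.0                      # charge and mass (illustrative units)
B = np.array([0.0, 0.0, 1.0])        # uniform magnetic field along z

r = np.array([1.0, 0.0, 0.0])        # initial position
v = np.array([0.0, 1.0, 0.0])        # initial velocity, perpendicular to B

dt, steps = 1e-4, 62_832             # roughly one cyclotron period 2*pi/omega, with omega = q*|B|/m = 1
for _ in range(steps):
    a = q * np.cross(v, B) / m       # Newton's second law with F = q v x B (electric field set to zero)
    v = v + dt * a
    r = r + dt * v

# After one period r is back near its starting point, and |v| is approximately unchanged.
print(r, np.linalg.norm(v))
```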
Collections of charged bodies do not always obey Newton's third law: there can be a change of one body's momentum without a compensatory change in the momentum of another. The discrepancy is accounted for by momentum carried by the electromagnetic field itself. The momentum per unit volume of the electromagnetic field is proportional to the Poynting vector.
There is subtle conceptual conflict between electromagnetism and Newton's first law: Maxwell's theory of electromagnetism predicts that electromagnetic waves will travel through empty space at a constant, definite speed. Thus, some inertial observers seemingly have a privileged status over the others, namely those who measure the speed of light and find it to be the value predicted by the Maxwell equations. In other words, light provides an absolute standard for speed, yet the principle of inertia holds that there should be no such standard. This tension is resolved in the theory of special relativity, which revises the notions of space and time in such a way that all inertial observers will agree upon the speed of light in vacuum.
Special relativity
In special relativity, the rule that Wilczek called "Newton's Zeroth Law" breaks down: the mass of a composite object is not merely the sum of the masses of the individual pieces. Newton's first law, inertial motion, remains true. A form of Newton's second law, that force is the rate of change of momentum, also holds, as does the conservation of momentum. However, the definition of momentum is modified. Among the consequences of this is the fact that the more quickly a body moves, the harder it is to accelerate, and so, no matter how much force is applied, a body cannot be accelerated to the speed of light. Depending on the problem at hand, momentum in special relativity can be represented as a three-dimensional vector, $\mathbf{p} = m\gamma\mathbf{v}$, where $m$ is the body's rest mass and $\gamma$ is the Lorentz factor, which depends upon the body's speed. Alternatively, momentum and force can be represented as four-vectors.
Newton's third law must be modified in special relativity. The third law refers to the forces between two bodies at the same moment in time, and a key feature of special relativity is that simultaneity is relative. Events that happen at the same time relative to one observer can happen at different times relative to another. So, in a given observer's frame of reference, action and reaction may not be exactly opposite, and the total momentum of interacting bodies may not be conserved. The conservation of momentum is restored by including the momentum stored in the field that describes the bodies' interaction.
Newtonian mechanics is a good approximation to special relativity when the speeds involved are small compared to that of light.
General relativity
General relativity is a theory of gravity that advances beyond that of Newton. In general relativity, the gravitational force of Newtonian mechanics is reimagined as curvature of spacetime. A curved path like an orbit, attributed to a gravitational force in Newtonian mechanics, is not the result of a force deflecting a body from an ideal straight-line path, but rather the body's attempt to fall freely through a background that is itself curved by the presence of other masses. A remark by John Archibald Wheeler that has become proverbial among physicists summarizes the theory: "Spacetime tells matter how to move; matter tells spacetime how to curve." Wheeler himself thought of this reciprocal relationship as a modern, generalized form of Newton's third law. The relation between matter distribution and spacetime curvature is given by the Einstein field equations, which require tensor calculus to express.
The Newtonian theory of gravity is a good approximation to the predictions of general relativity when gravitational effects are weak and objects are moving slowly compared to the speed of light.
Quantum mechanics
Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is very different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence.
The Ehrenfest theorem provides a connection between quantum expectation values and Newton's second law, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, position and momentum are represented by mathematical entities known as Hermitian operators, and the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance.
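For a single particle of mass m in a potential V(x), the theorem can be written in the following standard textbook form (stated here for concreteness; it is not quoted from the text above):
$$\frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m}, \qquad \frac{d}{dt}\langle \hat{p} \rangle = -\left\langle \frac{\partial V}{\partial x}(\hat{x}) \right\rangle.$$
The second equation resembles Newton's second law, but the average of the force is taken over the quantum state rather than evaluated at the average position, which is one way the correspondence becomes inexact.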
History
The concepts invoked in Newton's laws of motion — mass, velocity, momentum, force — have predecessors in earlier work, and the content of Newtonian physics was further developed after Newton's time. Newton combined knowledge of celestial motions with the study of events on Earth and showed that one theory of mechanics could encompass both.
Antiquity and medieval background
Aristotle and "violent" motion
The subject of physics is often traced back to Aristotle, but the history of the concepts involved is obscured by multiple factors. An exact correspondence between Aristotelian and modern concepts is not simple to establish: Aristotle did not clearly distinguish what we would call speed and force, used the same term for density and viscosity, and conceived of motion as always through a medium, rather than through space. In addition, some concepts often termed "Aristotelian" might better be attributed to his followers and commentators upon him. These commentators found that Aristotelian physics had difficulty explaining projectile motion. Aristotle divided motion into two types: "natural" and "violent". The "natural" motion of terrestrial solid matter was to fall downwards, whereas a "violent" motion could push a body sideways. Moreover, in Aristotelian physics, a "violent" motion requires an immediate cause; separated from the cause of its "violent" motion, a body would revert to its "natural" behavior. Yet, a javelin continues moving after it leaves the thrower's hand. Aristotle concluded that the air around the javelin must be imparted with the ability to move the javelin forward.
Philoponus and impetus
John Philoponus, a Byzantine Greek thinker active during the sixth century, found this absurd: the same medium, air, was somehow responsible both for sustaining motion and for impeding it. If Aristotle's idea were true, Philoponus said, armies would launch weapons by blowing upon them with bellows. Philoponus argued that setting a body into motion imparted a quality, impetus, that would be contained within the body itself. As long as its impetus was sustained, the body would continue to move. In the following centuries, versions of impetus theory were advanced by individuals including Nur ad-Din al-Bitruji, Avicenna, Abu'l-Barakāt al-Baghdādī, John Buridan, and Albert of Saxony. In retrospect, the idea of impetus can be seen as a forerunner of the modern concept of momentum. The intuition that objects move according to some kind of impetus persists in many students of introductory physics.
Inertia and the first law
The French philosopher René Descartes introduced the concept of inertia by way of his "laws of nature" in The World (Traité du monde et de la lumière), written 1629–33. However, The World presented a heliocentric worldview, and in 1633 this view had given rise to a great conflict between Galileo Galilei and the Roman Catholic Inquisition. Descartes knew about this controversy and did not wish to get involved. The World was not published until 1664, ten years after his death.
The modern concept of inertia is credited to Galileo. Based on his experiments, Galileo concluded that the "natural" behavior of a moving body was to keep moving, until something else interfered with it. He stated this principle in Two New Sciences (1638). Galileo recognized that in projectile motion, the Earth's gravity affects vertical but not horizontal motion. However, Galileo's idea of inertia was not exactly the one that would be codified into Newton's first law. Galileo thought that a body moving a long distance inertially would follow the curve of the Earth. This idea was corrected by Isaac Beeckman, Descartes, and Pierre Gassendi, who recognized that inertial motion should be motion in a straight line. Descartes published his laws of nature (laws of motion) with this correction in Principles of Philosophy (Principia Philosophiae) in 1644, with the heliocentric part toned down.
According to American philosopher Richard J. Blackwell, Dutch scientist Christiaan Huygens had worked out his own, concise version of the law in 1656. It was not published until 1703, eight years after his death, in the opening paragraph of De Motu Corporum ex Percussione.
According to Huygens, this law was already known by Galileo and Descartes among others.
Force and the second law
Christiaan Huygens, in his Horologium Oscillatorium (1673), put forth the hypothesis that "By the action of gravity, whatever its sources, it happens that bodies are moved by a motion composed both of a uniform motion in one direction or another and of a motion downward due to gravity." Newton's second law generalized this hypothesis from gravity to all forces.
One important characteristic of Newtonian physics is that forces can act at a distance without requiring physical contact. For example, the Sun and the Earth pull on each other gravitationally, despite being separated by millions of kilometres. This contrasts with the idea, championed by Descartes among others, that the Sun's gravity held planets in orbit by swirling them in a vortex of transparent matter, aether. Newton considered aetherial explanations of force but ultimately rejected them. The study of magnetism by William Gilbert and others created a precedent for thinking of immaterial forces, and unable to find a quantitatively satisfactory explanation of his law of gravity in terms of an aetherial model, Newton eventually declared, "I feign no hypotheses": whether or not a model like Descartes's vortices could be found to underlie the Principia's theories of motion and gravity, the first grounds for judging them must be the successful predictions they made. And indeed, since Newton's time every attempt at such a model has failed.
Momentum conservation and the third law
Johannes Kepler suggested that gravitational attractions were reciprocal — that, for example, the Moon pulls on the Earth while the Earth pulls on the Moon — but he did not argue that such pairs are equal and opposite. In his Principles of Philosophy (1644), Descartes introduced the idea that during a collision between bodies, a "quantity of motion" remains unchanged. Descartes defined this quantity somewhat imprecisely by adding up the products of the speed and "size" of each body, where "size" for him incorporated both volume and surface area. Moreover, Descartes thought of the universe as a plenum, that is, filled with matter, so all motion required a body to displace a medium as it moved.
During the 1650s, Huygens studied collisions between hard spheres and deduced a principle that is now identified as the conservation of momentum. Christopher Wren would later deduce the same rules for elastic collisions that Huygens had, and John Wallis would apply momentum conservation to study inelastic collisions. Newton cited the work of Huygens, Wren, and Wallis to support the validity of his third law.
Newton arrived at his set of three laws incrementally. In a 1684 manuscript written to Huygens, he listed four laws: the principle of inertia, the change of motion by force, a statement about relative motion that would today be called Galilean invariance, and the rule that interactions between bodies do not change the motion of their center of mass. In a later manuscript, Newton added a law of action and reaction, while saying that this law and the law regarding the center of mass implied one another. Newton probably settled on the presentation in the Principia, with three primary laws and then other statements reduced to corollaries, during 1685.
After the Principia
Newton expressed his second law by saying that the force on a body is proportional to its change of motion, or momentum. By the time he wrote the Principia, he had already developed calculus (which he called "the science of fluxions"), but in the Principia he made no explicit use of it, perhaps because he believed geometrical arguments in the tradition of Euclid to be more rigorous. Consequently, the Principia does not express acceleration as the second derivative of position, and so it does not give the second law as . This form of the second law was written (for the special case of constant force) at least as early as 1716, by Jakob Hermann; Leonhard Euler would employ it as a basic premise in the 1740s. Euler pioneered the study of rigid bodies and established the basic theory of fluid dynamics. Pierre-Simon Laplace's five-volume Traité de mécanique céleste (1798–1825) forsook geometry and developed mechanics purely through algebraic expressions, while resolving questions that the Principia had left open, like a full theory of the tides.
The concept of energy became a key part of Newtonian mechanics in the post-Newton period. Huygens' solution of the collision of hard spheres showed that in that case, not only is momentum conserved, but kinetic energy is as well (or, rather, a quantity that in retrospect we can identify as one-half the total kinetic energy). The question of what is conserved during all other processes, like inelastic collisions and motion slowed by friction, was not resolved until the 19th century. Debates on this topic overlapped with philosophical disputes between the metaphysical views of Newton and Leibniz, and variants of the term "force" were sometimes used to denote what we would call types of energy. For example, in 1742, Émilie du Châtelet wrote, "Dead force consists of a simple tendency to motion: such is that of a spring ready to relax; living force is that which a body has when it is in actual motion." In modern terminology, "dead force" and "living force" correspond to potential energy and kinetic energy respectively. Conservation of energy was not established as a universal principle until it was understood that the energy of mechanical work can be dissipated into heat. With the concept of energy given a solid grounding, Newton's laws could then be derived within formulations of classical mechanics that put energy first, as in the Lagrangian and Hamiltonian formulations described above.
Modern presentations of Newton's laws use the mathematics of vectors, a topic that was not developed until the late 19th and early 20th centuries. Vector algebra, pioneered by Josiah Willard Gibbs and Oliver Heaviside, stemmed from and largely supplanted the earlier system of quaternions invented by William Rowan Hamilton.
See also
Euler's laws of motion
History of classical mechanics
List of eponymous laws
List of equations in classical mechanics
List of scientific laws named after people
List of textbooks on classical mechanics and quantum mechanics
Norton's dome
Notes
References
Further reading
Newton’s Laws of Dynamics - The Feynman Lectures on Physics
Classical mechanics
Isaac Newton
Texts in Latin
Equations of physics
Scientific observation
Experimental physics
Copernican Revolution
Articles containing video clips
Scientific laws
Eponymous laws of physics | 0.784662 | 0.999792 | 0.784499 |
Navier–Stokes existence and smoothness | The Navier–Stokes existence and smoothness problem concerns the mathematical properties of solutions to the Navier–Stokes equations, a system of partial differential equations that describe the motion of a fluid in space. Solutions to the Navier–Stokes equations are used in many practical applications. However, theoretical understanding of the solutions to these equations is incomplete. In particular, solutions of the Navier–Stokes equations often include turbulence, which remains one of the greatest unsolved problems in physics, despite its immense importance in science and engineering.
Even more basic (and seemingly intuitive) properties of the solutions to Navier–Stokes have never been proven. For the three-dimensional system of equations, and given some initial conditions, mathematicians have neither proved that smooth solutions always exist, nor found any counter-examples. This is called the Navier–Stokes existence and smoothness problem.
Since understanding the Navier–Stokes equations is considered to be the first step to understanding the elusive phenomenon of turbulence, the Clay Mathematics Institute in May 2000 made this problem one of its seven Millennium Prize problems in mathematics. It offered a US$1,000,000 prize to the first person providing a solution for a specific statement of the problem:
The Navier–Stokes equations
In mathematics, the Navier–Stokes equations are a system of nonlinear partial differential equations for abstract vector fields of any size. In physics and engineering, they are a system of equations that model the motion of liquids or non-rarefied gases (in which the mean free path is short enough so that the fluid can be treated as a continuum medium instead of a collection of particles) using continuum mechanics. The equations are a statement of Newton's second law, with the forces modeled according to those in a viscous Newtonian fluid—as the sum of contributions by pressure, viscous stress and an external body force. Since the setting of the problem proposed by the Clay Mathematics Institute is in three dimensions, for an incompressible and homogeneous fluid, only that case is considered below.
Let $\mathbf{v}(x, t)$ be a 3-dimensional vector field, the velocity of the fluid, and let $p(x, t)$ be the pressure of the fluid. The Navier–Stokes equations are:
$$\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} = -\nabla p + \nu\, \Delta \mathbf{v} + \mathbf{f}(x, t),$$
where $\nu > 0$ is the kinematic viscosity, $\mathbf{f}(x, t)$ the external volumetric force, $\nabla$ is the gradient operator and $\Delta$ is the Laplacian operator, which is also denoted by $\nabla \cdot \nabla$ or $\nabla^2$. Note that this is a vector equation, i.e. it has three scalar equations. Writing down the coordinates of the velocity and the external force
$$\mathbf{v}(x, t) = \big(v_1(x, t),\, v_2(x, t),\, v_3(x, t)\big), \qquad \mathbf{f}(x, t) = \big(f_1(x, t),\, f_2(x, t),\, f_3(x, t)\big),$$
then for each $i = 1, 2, 3$ there is the corresponding scalar Navier–Stokes equation:
$$\frac{\partial v_i}{\partial t} + \sum_{j=1}^{3} v_j \frac{\partial v_i}{\partial x_j} = -\frac{\partial p}{\partial x_i} + \nu \sum_{j=1}^{3} \frac{\partial^2 v_i}{\partial x_j^2} + f_i(x, t).$$
The unknowns are the velocity $\mathbf{v}(x, t)$ and the pressure $p(x, t)$. Since in three dimensions there are three equations and four unknowns (three scalar velocities and the pressure), a supplementary equation is needed. This extra equation is the continuity equation for incompressible fluids, which describes the conservation of mass of the fluid:
$$\nabla \cdot \mathbf{v} = 0.$$
Due to this last property, the solutions for the Navier–Stokes equations are sought in the set of solenoidal ("divergence-free") functions. For this flow of a homogeneous medium, density and viscosity are constants.
Since only its gradient appears, the pressure p can be eliminated by taking the curl of both sides of the Navier–Stokes equations. In this case the Navier–Stokes equations reduce to the vorticity-transport equations.
The Navier–Stokes equations are nonlinear because the terms in the equations do not have a simple linear relationship with each other. This means that the equations cannot be solved using traditional linear techniques, and more advanced methods must be used instead. Nonlinearity is important in the Navier–Stokes equations because it allows the equations to describe a wide range of fluid dynamics phenomena, including the formation of shock waves and other complex flow patterns. However, the nonlinearity of the Navier–Stokes equations also makes them more difficult to solve, as traditional linear methods may not work.
One way to understand the nonlinearity of the Navier–Stokes equations is to consider the term (v · ∇)v in the equations. This term represents the advective acceleration of the fluid, and it is built from the velocity vector v and the gradient operator ∇. Although the gradient operator itself is linear, the term (v · ∇)v multiplies the velocity by its own spatial derivatives, so it is quadratic, and hence nonlinear, in the velocity vector v. This means that the acceleration of the fluid depends on the magnitude and direction of the velocity, as well as the spatial distribution of the velocity within the fluid.
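Written out in Cartesian components (a standard identity included here for clarity, not taken from the original text), the advection term is
$$\left[(\mathbf{v} \cdot \nabla)\mathbf{v}\right]_i = \sum_{j=1}^{3} v_j\, \frac{\partial v_i}{\partial x_j}, \qquad i = 1, 2, 3,$$
and each summand is a product of one velocity component with a spatial derivative of another, which is what makes the term quadratic in the velocity field.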
The nonlinear nature of the Navier–Stokes equations can be seen in the term $(\mathbf{v} \cdot \nabla)\mathbf{v}$, which represents the acceleration of the fluid due to its own velocity. This term is nonlinear because it involves the product of two velocity factors, and the resulting acceleration is therefore dependent on the magnitude and direction of both.
Another source of difficulty is the pressure term $-\nabla p$. Although this term is linear in the pressure, the pressure itself is not governed by an evolution equation of its own; it is determined implicitly by the requirement that the velocity field remain divergence-free, which couples it nonlocally to the nonlinear velocity dynamics.
One example of the nonlinear nature of the Navier–Stokes equations can be seen in the case of a fluid flowing around a circular obstacle. In this case, the velocity of the fluid near the obstacle will be higher than the velocity of the fluid farther away from the obstacle. This results in a pressure gradient, with higher pressure near the obstacle and lower pressure farther away.
To see this more explicitly, consider the case of a circular obstacle of radius $R$ placed in a uniform flow with velocity $U$ and density $\rho$. Let $\mathbf{v}(\mathbf{x}, t)$ be the velocity of the fluid at position $\mathbf{x}$ and time $t$, and let $p(\mathbf{x}, t)$ be the pressure at the same position and time.
The Navier–Stokes equations in this case are:
$$\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{v},$$
where $\nu$ is the kinematic viscosity of the fluid.
Assuming that the flow is steady (meaning that the velocity and pressure do not vary with time), we can set the time derivative terms equal to zero:
$$(\mathbf{v} \cdot \nabla)\mathbf{v} = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{v}.$$
We can now consider the flow near the circular obstacle. In this region, the velocity of the fluid will be higher than the uniform flow velocity due to the presence of the obstacle. This results in a nonlinear term in the Navier–Stokes equations that is proportional to the velocity of the fluid.
At the same time, the presence of the obstacle will also result in a pressure gradient, with higher pressure near the obstacle and lower pressure farther away. This can be seen by considering the continuity equation, which states that the mass flow rate through any surface must be constant. Since the velocity is higher near the obstacle, the mass flow rate through a surface near the obstacle will be higher than the mass flow rate through a surface farther away from the obstacle. This can be compensated for by a pressure gradient, with higher pressure near the obstacle and lower pressure farther away.
As a result of these nonlinear effects, the Navier–Stokes equations in this case become difficult to solve, and approximations or numerical methods must be used to find the velocity and pressure fields in the flow.
Consider the case of a two-dimensional fluid flow in a rectangular domain, with a velocity field $\mathbf{v}(x, y, t)$ and a pressure field $p(x, y, t)$. We can use a finite element method to solve the Navier–Stokes equation for the velocity field.
To do this, we divide the domain into a series of smaller elements, and represent the velocity field as
$$\mathbf{v}(x, y, t) \approx \sum_{i=1}^{N} \mathbf{v}_i(t)\, \phi_i(x, y),$$
where $N$ is the number of elements and the $\phi_i$ are the shape functions associated with each element. Substituting this expression into the Navier–Stokes equation, multiplying by each shape function in turn, and integrating over the domain $\Omega$, we can derive a system of ordinary differential equations for the coefficients $\mathbf{v}_i(t)$. This system of ordinary differential equations can then be solved in time, for example with the finite difference method described next, or with spectral methods.
Here, we will use the finite difference method for the time integration. To do this, we can divide the time interval into a series of smaller time steps, and approximate the time derivative at each step using a finite difference formula such as
$$\frac{du}{dt} \approx \frac{u_{n+1} - u_n}{\Delta t},$$
where $\Delta t$ is the size of the time step, and $u_n$ and $u_{n+1}$ are the values of the unknown $u$ at time steps $n$ and $n+1$.
Using this approximation, we can iterate through the time steps and compute the value of $u$ at each one. For example, starting at time step $n = 0$ and using the approximation above, we can compute the value of $u$ at time step $1$:
$$u_1 = u_0 + \Delta t\, f(u_0, t_0),$$
where $f$ denotes the right-hand side of the ordinary differential equations obtained above.
This process can be repeated until we reach the final time step.
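A minimal sketch of this time-stepping loop (illustrative only: the right-hand side f below is a placeholder standing in for the spatially discretized equations, and all values are assumptions) looks as follows:

```python
import numpy as np

def f(u, t):
    """Placeholder right-hand side of du/dt = f(u, t); in practice this would be
    the system of ordinary differential equations obtained from the spatial
    discretization."""
    return -u                    # simple linear decay, for illustration only

t, t_final, dt = 0.0, 1.0, 1e-3
u = np.array([1.0])              # initial condition u at t = 0

while t < t_final:
    u = u + dt * f(u, t)         # forward-difference update: u[n+1] = u[n] + dt * f(u[n], t[n])
    t += dt
```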
There are many other approaches to solving ordinary differential equations, each with its own advantages and disadvantages. The choice of approach depends on the specific equation being solved, and the desired accuracy and efficiency of the solution.
Two settings: unbounded and periodic space
There are two different settings for the one-million-dollar-prize Navier–Stokes existence and smoothness problem. The original problem is in the whole space $\mathbb{R}^3$, which needs extra conditions on the growth behavior of the initial condition and the solutions. In order to rule out problems at infinity, the Navier–Stokes equations can instead be set in a periodic framework, so that they are posed not on the whole space but on the 3-dimensional torus $\mathbb{T}^3$. Each case will be treated separately.
Statement of the problem in the whole space
Hypotheses and growth conditions
The initial condition $\mathbf{v}_0(x)$ is assumed to be a smooth and divergence-free function (see smooth function) such that, for every multi-index $\alpha$ (see multi-index notation) and any $K > 0$, there exists a constant $C = C(\alpha, K) > 0$ such that
$$\vert \partial^{\alpha} \mathbf{v}_0(x) \vert \le \frac{C}{(1 + \vert x \vert)^K} \qquad \text{for all } x \in \mathbb{R}^3.$$
The external force $\mathbf{f}(x, t)$ is assumed to be a smooth function as well, and satisfies a very analogous inequality (now the multi-index includes time derivatives as well):
$$\vert \partial_x^{\alpha} \partial_t^{m} \mathbf{f}(x, t) \vert \le \frac{C}{(1 + \vert x \vert + t)^K} \qquad \text{for all } (x, t) \in \mathbb{R}^3 \times [0, \infty).$$
For physically reasonable conditions, the type of solutions expected are smooth functions that do not grow large as $\vert x \vert \to \infty$. More precisely, the following assumptions are made:
1. $\mathbf{v}(x, t)$ and $p(x, t)$ are smooth on $\mathbb{R}^3 \times [0, \infty)$
2. There exists a constant $E \in (0, \infty)$ such that $\int_{\mathbb{R}^3} \vert \mathbf{v}(x, t) \vert^2 \, dx < E$ for all $t \ge 0$
Condition 1 implies that the functions are smooth and globally defined and condition 2 means that the kinetic energy of the solution is globally bounded.
The Millennium Prize conjectures in the whole space
(A) Existence and smoothness of the Navier–Stokes solutions in $\mathbb{R}^3$
Let $\mathbf{f}(x, t) \equiv 0$. For any initial condition $\mathbf{v}_0(x)$ satisfying the above hypotheses there exist smooth and globally defined solutions to the Navier–Stokes equations, i.e. there is a velocity vector $\mathbf{v}(x, t)$ and a pressure $p(x, t)$ satisfying conditions 1 and 2 above.
(B) Breakdown of the Navier–Stokes solutions in $\mathbb{R}^3$
There exist an initial condition $\mathbf{v}_0(x)$ and an external force $\mathbf{f}(x, t)$ such that there are no solutions $\mathbf{v}(x, t)$ and $p(x, t)$ satisfying conditions 1 and 2 above.
These two statements constitute the Clay Mathematics Institute's formulation of the Navier–Stokes Millennium Prize problem; establishing either one would settle it. The first conjecture, the "existence and smoothness" conjecture, states that smooth and globally defined solutions to the Navier–Stokes equations always exist in three-dimensional space. The second, the "breakdown" conjecture, states that there is at least one set of initial conditions and external forces for which no smooth solutions to the Navier–Stokes equations exist.
The Navier–Stokes equations are a set of partial differential equations that describe the motion of fluids. For an incompressible fluid they can be written as:
$$\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{v} + \mathbf{f}, \qquad \nabla \cdot \mathbf{v} = 0,$$
where $\mathbf{v}$ is the velocity field of the fluid, $p$ is the pressure, $\rho$ is the density, $\nu$ is the kinematic viscosity, and $\mathbf{f}$ is an external force. The first equation is known as the momentum equation, and the second equation is known as the continuity equation.
These equations are typically accompanied by boundary conditions, which describe the behavior of the fluid at the edges of the domain. For example, in the case of a fluid flowing through a pipe, the boundary conditions might specify that the velocity and pressure are fixed at the walls of the pipe.
The Navier–Stokes equations are nonlinear and highly coupled, making them difficult to solve in general. In particular, the difficulty of solving these equations lies in the term $(\mathbf{v} \cdot \nabla)\mathbf{v}$, which represents the nonlinear advection of the velocity field by itself. This term makes the Navier–Stokes equations highly sensitive to initial conditions, and it is the main reason why the Millennium Prize conjectures are so challenging.
In addition to the mathematical challenges of solving the Navier–Stokes equations, there are also many practical challenges in applying these equations to real-world situations. For example, the Navier–Stokes equations are often used to model fluid flows that are turbulent, which means that the fluid is highly chaotic and unpredictable. Turbulence is a difficult phenomenon to model and understand, and it adds another layer of complexity to the problem of solving the Navier–Stokes equations.
To solve the Navier–Stokes equations, we need to find a velocity field and a pressure field that satisfy the equations and the given boundary conditions. This can be done using a variety of numerical techniques, such as finite element methods, spectral methods, or finite difference methods.
For example, consider the case of a two-dimensional fluid flow in a rectangular domain, with a velocity field $\mathbf{v}(x, y, t)$ and a pressure field $p(x, y, t)$, respectively. The Navier–Stokes equations can be written as:
$$\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{v} + \mathbf{f}, \qquad \nabla \cdot \mathbf{v} = 0,$$
where $\rho$ is the density, $\nu$ is the kinematic viscosity, and $\mathbf{f}$ is an external force. The boundary conditions might specify that the velocity is fixed at the walls of the domain, or that the pressure is fixed at certain points. The last identity holds because the flow is solenoidal.
To solve these equations numerically, we can divide the domain into a series of smaller elements, and solve the equations locally within each element. For example, using a finite element method, we might represent the velocity and pressure fields as
$$\mathbf{v}(x, y, t) \approx \sum_{i=1}^{N} \mathbf{v}_i(t)\, \phi_i(x, y), \qquad p(x, y, t) \approx \sum_{i=1}^{N} p_i(t)\, \phi_i(x, y),$$
where $N$ is the number of elements and the $\phi_i$ are the shape functions associated with each element. Substituting these expressions into the Navier–Stokes equations and applying the finite element method, we can derive a system of ordinary differential equations for the coefficients.
Statement of the periodic problem
Hypotheses
The functions sought now are periodic in the space variables with period 1. More precisely, let $e_i$ be the unit vector in the $i$-th direction:
$$e_1 = (1, 0, 0), \qquad e_2 = (0, 1, 0), \qquad e_3 = (0, 0, 1).$$
Then $\mathbf{v}(x, t)$ is periodic in the space variables if for any $i = 1, 2, 3$:
$$\mathbf{v}(x + e_i, t) = \mathbf{v}(x, t) \qquad \text{for all } x \in \mathbb{R}^3 \text{ and } t \ge 0.$$
Notice that this is considering the coordinates mod 1. This allows working not on the whole space $\mathbb{R}^3$ but on the quotient space $\mathbb{R}^3 / \mathbb{Z}^3$, which turns out to be the 3-dimensional torus $\mathbb{T}^3$.
Now the hypotheses can be stated properly. The initial condition $\mathbf{v}_0(x)$ is assumed to be a smooth and divergence-free function, and the external force $\mathbf{f}(x, t)$ is assumed to be a smooth function as well. The type of solutions that are physically relevant are those that satisfy these conditions:
3. $\mathbf{v}(x, t)$ and $p(x, t)$ are smooth on $\mathbb{T}^3 \times [0, \infty)$
4. There exists a constant $E \in (0, \infty)$ such that $\int_{\mathbb{T}^3} \vert \mathbf{v}(x, t) \vert^2 \, dx < E$ for all $t \ge 0$
Just as in the previous case, condition 3 implies that the functions are smooth and globally defined and condition 4 means that the kinetic energy of the solution is globally bounded.
The periodic Millennium Prize conjectures
(C) Existence and smoothness of the Navier–Stokes solutions in $\mathbb{T}^3$
Let $\mathbf{f}(x, t) \equiv 0$. For any initial condition $\mathbf{v}_0(x)$ satisfying the above hypotheses there exist smooth and globally defined solutions to the Navier–Stokes equations, i.e. there is a velocity vector $\mathbf{v}(x, t)$ and a pressure $p(x, t)$ satisfying conditions 3 and 4 above.
(D) Breakdown of the Navier–Stokes solutions in $\mathbb{T}^3$
There exist an initial condition $\mathbf{v}_0(x)$ and an external force $\mathbf{f}(x, t)$ such that there are no solutions $\mathbf{v}(x, t)$ and $p(x, t)$ satisfying conditions 3 and 4 above.
Partial results
The finite difference method was proved to be convergent for the Navier–Stokes equations, and the equations were being solved numerically by the 1960s. It has been proved that there are smooth and globally defined solutions to the Navier–Stokes equations in 2 dimensions.
If the initial velocity $\mathbf{v}_0(x)$ is sufficiently small then the statement is true: there are smooth and globally defined solutions to the Navier–Stokes equations.
Given an initial velocity $\mathbf{v}_0(x)$ there exists a finite time T, depending on $\mathbf{v}_0(x)$, such that the Navier–Stokes equations on $\mathbb{R}^3 \times (0, T)$ have smooth solutions $\mathbf{v}(x, t)$ and $p(x, t)$. It is not known if the solutions exist beyond that "blowup time" T.
Jean Leray in 1934 proved the existence of so-called weak solutions to the Navier–Stokes equations, satisfying the equations in mean value, not pointwise.
Terence Tao in 2016 published a finite time blowup result for an averaged version of the 3-dimensional Navier–Stokes equation. He writes that the result formalizes a "supercriticality barrier" for the global regularity problem for the true Navier–Stokes equations, and claims that the method of proof hints at a possible route to establishing blowup for the true equations.
In popular culture
Unsolved problems have been used to indicate a rare mathematical talent in fiction. The Navier–Stokes problem features in The Mathematician's Shiva (2014), a book about a prestigious, deceased, fictional mathematician named Rachela Karnokovitch taking the proof to her grave in protest of academia. The movie Gifted (2017) referenced the Millennium Prize problems and dealt with the potential of a 7-year-old girl, and of her deceased mathematician mother, to solve the Navier–Stokes problem.
See also
List of unsolved problems in mathematics
List of unsolved problems in physics
Notes
References
Further reading
External links
Contributed by: Yakov Sinai
The Clay Mathematics Institute's Navier–Stokes equation prize
Why global regularity for Navier–Stokes is hard — Possible routes to resolution are scrutinized by Terence Tao.
Navier–Stokes existence and smoothness (Millennium Prize Problem) A lecture on the problem by Luis Caffarelli.
Fluid dynamics
Millennium Prize Problems
Partial differential equations
Unsolved problems in mathematics
Unsolved problems in physics | 0.785205 | 0.998986 | 0.784408 |
Einstein's thought experiments | A hallmark of Albert Einstein's career was his use of visualized thought experiments as a fundamental tool for understanding physical issues and for elucidating his concepts to others. Einstein's thought experiments took diverse forms. In his youth, he mentally chased beams of light. For special relativity, he employed moving trains and flashes of lightning to explain his most penetrating insights. For general relativity, he considered a person falling off a roof, accelerating elevators, blind beetles crawling on curved surfaces and the like. In his debates with Niels Bohr on the nature of reality, he proposed imaginary devices that attempted to show, at least in concept, how the Heisenberg uncertainty principle might be evaded. In a profound contribution to the literature on quantum mechanics, Einstein considered two particles briefly interacting and then flying apart so that their states are correlated, anticipating the phenomenon known as quantum entanglement.
Introduction
A thought experiment is a logical argument or mental model cast within the context of an imaginary (hypothetical or even counterfactual) scenario. A scientific thought experiment, in particular, may examine the implications of a theory, law, or set of principles with the aid of fictive and/or natural particulars (demons sorting molecules, cats whose lives hinge upon a radioactive disintegration, men in enclosed elevators) in an idealized environment (massless trapdoors, absence of friction). They describe experiments that, except for some specific and necessary idealizations, could conceivably be performed in the real world.
As opposed to physical experiments, thought experiments do not report new empirical data. They can only provide conclusions based on deductive or inductive reasoning from their starting assumptions. Thought experiments invoke particulars that are irrelevant to the generality of their conclusions. It is the invocation of these particulars that give thought experiments their experiment-like appearance. A thought experiment can always be reconstructed as a straightforward argument, without the irrelevant particulars. John D. Norton, a well-known philosopher of science, has noted that "a good thought experiment is a good argument; a bad thought experiment is a bad argument."
When effectively used, the irrelevant particulars that convert a straightforward argument into a thought experiment can act as "intuition pumps" that stimulate readers' ability to apply their intuitions to their understanding of a scenario. Thought experiments have a long history. Perhaps the best known in the history of modern science is Galileo's demonstration that falling objects must fall at the same rate regardless of their masses. This has sometimes been taken to be an actual physical demonstration, involving his climbing up the Leaning Tower of Pisa and dropping two heavy weights off it. In fact, it was a logical demonstration described by Galileo in Discorsi e dimostrazioni matematiche (1638).
Einstein had a highly visual understanding of physics. His work in the patent office "stimulated [him] to see the physical ramifications of theoretical concepts." These aspects of his thinking style inspired him to fill his papers with vivid practical detail making them quite different from, say, the papers of Lorentz or Maxwell. This included his use of thought experiments.
Special relativity
Pursuing a beam of light
Late in life, Einstein recalled
Einstein's recollections of his youthful musings are widely cited because of the hints they provide of his later great discovery. However, Norton has noted that Einstein's reminiscences were probably colored by a half-century of hindsight. Norton lists several problems with Einstein's recounting, both historical and scientific:
1. At 16 years old and a student at the Gymnasium in Aarau, Einstein would have had the thought experiment in late 1895 to early 1896. But various sources note that Einstein did not learn Maxwell's theory until 1898, in university.
2. A 19th century aether theorist would have had no difficulties with the thought experiment. Einstein's statement, "...there seems to be no such thing...on the basis of experience," would not have counted as an objection, but would have represented a mere statement of fact, since no one had ever traveled at such speeds.
3. An aether theorist would have regarded "...nor according to Maxwell's equations" as simply representing a misunderstanding on Einstein's part. Unfettered by any notion that the speed of light represents a cosmic limit, the aether theorist would simply have set velocity equal to c, noted that yes indeed, the light would appear to be frozen, and then thought no more of it.
Rather than the thought experiment being at all incompatible with aether theories (which it is not), the youthful Einstein appears to have reacted to the scenario out of an intuitive sense of wrongness. He felt that the laws of optics should obey the principle of relativity. As he grew older, his early thought experiment acquired deeper levels of significance: Einstein felt that Maxwell's equations should be the same for all observers in inertial motion. From Maxwell's equations, one can deduce a single speed of light, and there is nothing in this computation that depends on an observer's speed. Einstein sensed a conflict between Newtonian mechanics and the constant speed of light determined by Maxwell's equations.
Regardless of the historical and scientific issues described above, Einstein's early thought experiment was part of the repertoire of test cases that he used to check on the viability of physical theories. Norton suggests that the real importance of the thought experiment was that it provided a powerful objection to emission theories of light, which Einstein had worked on for several years prior to 1905.
Magnet and conductor
In the very first paragraph of Einstein's seminal 1905 work introducing special relativity, he writes:
This opening paragraph recounts well-known experimental results obtained by Michael Faraday in 1831. The experiments describe what appeared to be two different phenomena: the motional EMF generated when a wire moves through a magnetic field (see Lorentz force), and the transformer EMF generated by a changing magnetic field (due to the Maxwell–Faraday equation). James Clerk Maxwell himself drew attention to this fact in his 1861 paper On Physical Lines of Force. In the latter half of Part II of that paper, Maxwell gave a separate physical explanation for each of the two phenomena.
Although Einstein calls the asymmetry "well-known", there is no evidence that any of Einstein's contemporaries considered the distinction between motional EMF and transformer EMF to be in any way odd or pointing to a lack of understanding of the underlying physics. Maxwell, for instance, had repeatedly discussed Faraday's laws of induction, stressing that the magnitude and direction of the induced current was a function only of the relative motion of the magnet and the conductor, without being bothered by the clear distinction between conductor-in-motion and magnet-in-motion in the underlying theoretical treatment.
Yet Einstein's reflection on this experiment represented the decisive moment in his long and tortuous path to special relativity. Although the equations describing the two scenarios are entirely different, there is no measurement that can distinguish whether the magnet is moving, the conductor is moving, or both.
In a 1920 review on the Fundamental Ideas and Methods of the Theory of Relativity (unpublished), Einstein related how disturbing he found this asymmetry:
Einstein needed to extend the relativity of motion that he perceived between magnet and conductor in the above thought experiment to a full theory. For years, however, he did not know how this might be done. The exact path that Einstein took to resolve this issue is unknown. We do know, however, that Einstein spent several years pursuing an emission theory of light, encountering difficulties that eventually led him to give up the attempt.
That decision ultimately led to his development of special relativity as a theory founded on two postulates. Einstein's original expression of these postulates was:
"The laws governing the changes of the state of any physical system do not depend on which one of two coordinate systems in uniform translational motion relative to each other these changes of the state are referred to.
Each ray of light moves in the coordinate system "at rest" with the definite velocity V independent of whether this ray of light is emitted by a body at rest or a body in motion."
In their modern form:
1. The laws of physics take the same form in all inertial frames.
2. In any given inertial frame, the velocity of light c is the same whether the light be emitted by a body at rest or by a body in uniform motion. [Emphasis added by editor]
Einstein's wording of the first postulate was one with which nearly all theorists of his day could agree. His second postulate expresses a new idea about the character of light. Modern textbooks combine the two postulates. One popular textbook expresses the second postulate as, "The speed of light in free space has the same value c in all directions and in all inertial reference frames."
Trains, embankments, and lightning flashes
The topic of how Einstein arrived at special relativity has been a fascinating one to many scholars: A lowly, twenty-six year old patent officer (third class), largely self-taught in physics and completely divorced from mainstream research, nevertheless in the year 1905 produced four extraordinary works (Annus Mirabilis papers), only one of which (his paper on Brownian motion) appeared related to anything that he had ever published before.
Einstein's paper, On the Electrodynamics of Moving Bodies, is a polished work that bears few traces of its gestation. Documentary evidence concerning the development of the ideas that went into it consist of, quite literally, only two sentences in a handful of preserved early letters, and various later historical remarks by Einstein himself, some of them known only second-hand and at times contradictory.
In regards to the relativity of simultaneity, Einstein's 1905 paper develops the concept vividly by carefully considering the basics of how time may be disseminated through the exchange of signals between clocks. In his popular work, Relativity: The Special and General Theory, Einstein translates the formal presentation of his paper into a thought experiment using a train, a railway embankment, and lightning flashes. The essence of the thought experiment is as follows:
Observer M stands on an embankment, while observer M′ rides on a rapidly traveling train. At the precise moment that M and M′ coincide in their positions, lightning strikes points A and B equidistant from M and M′.
Light from these two flashes reaches M at the same time, from which M concludes that the bolts were synchronous.
The combination of Einstein's first and second postulates implies that, despite the rapid motion of the train relative to the embankment, M′ measures exactly the same speed of light as does M. Since M′ was equidistant from A and B when lightning struck, the fact that M′ receives light from B before light from A means that to M′, the bolts were not synchronous. Instead, the bolt at B struck first.
A routine supposition among historians of science is that, in accordance with the analysis given in his 1905 special relativity paper and in his popular writings, Einstein discovered the relativity of simultaneity by thinking about how clocks could be synchronized by light signals. The Einstein synchronization convention was originally developed by telegraphers in the middle 19th century. The dissemination of precise time was an increasingly important topic during this period. Trains needed accurate time to schedule use of track, cartographers needed accurate time to determine longitude, while astronomers and surveyors dared to consider the worldwide dissemination of time to accuracies of thousandths of a second. Following this line of argument, Einstein's position in the patent office, where he specialized in evaluating electromagnetic and electromechanical patents, would have exposed him to the latest developments in time technology, which would have guided him in his thoughts towards understanding the relativity of simultaneity.
However, all of the above is supposition. In later recollections, when Einstein was asked about what inspired him to develop special relativity, he would mention his riding a light beam and his magnet and conductor thought experiments. He would also mention the importance of the Fizeau experiment and the observation of stellar aberration. "They were enough", he said. He never mentioned thought experiments about clocks and their synchronization.
The routine analyses of the Fizeau experiment and of stellar aberration, which treat light as Newtonian corpuscles, do not require relativity. But problems arise if one considers light as waves traveling through an aether, problems which are resolved by applying the relativity of simultaneity. It is entirely possible, therefore, that Einstein arrived at special relativity through a different path than that commonly assumed: through his examination of Fizeau's experiment and of stellar aberration.
We therefore do not know just how important clock synchronization and the train and embankment thought experiment were to Einstein's development of the concept of the relativity of simultaneity. We do know, however, that the train and embankment thought experiment was the preferred means whereby he chose to teach this concept to the general public.
Relativistic center-of-mass theorem
Einstein proposed the equivalence of mass and energy in his final Annus Mirabilis paper. Over the next several decades, the understanding of energy and its relationship with momentum was further developed by Einstein and other physicists including Max Planck, Gilbert N. Lewis, Richard C. Tolman, Max von Laue (who in 1911 gave a comprehensive proof of E = mc² from the stress–energy tensor), and Paul Dirac (whose investigations of negative solutions in his 1928 formulation of the energy–momentum relation led to the 1930 prediction of the existence of antimatter).
Einstein's relativistic center-of-mass theorem of 1906 is a case in point. In 1900, Henri Poincaré had noted a paradox in modern physics as it was then understood: When he applied well-known results of Maxwell's equations to the equality of action and reaction, he could describe a cyclic process which would result in creation of a reactionless drive, i.e. a device which could displace its center of mass without the exhaust of a propellant, in violation of the conservation of momentum. Poincaré resolved this paradox by imagining electromagnetic energy to be a fluid having a given density, which is created and destroyed with a given momentum as energy is absorbed and emitted. The motions of this fluid would oppose displacement of the center of mass in such fashion as to preserve the conservation of momentum.
Einstein demonstrated that Poincaré's artifice was superfluous. Rather, he argued that mass-energy equivalence was a necessary and sufficient condition to resolve the paradox. In his demonstration, Einstein provided a derivation of mass-energy equivalence that was distinct from his original derivation. Einstein began by recasting Poincaré's abstract mathematical argument into the form of a thought experiment:
Einstein considered (a) an initially stationary, closed, hollow cylinder free-floating in space, of mass M and length L, (b) with some sort of arrangement for sending a quantity of radiative energy E (a burst of photons) from the left to the right. The radiation has momentum E/c. Since the total momentum of the system is zero, the cylinder recoils with a speed v = E/(Mc). (c) The radiation hits the other end of the cylinder in time Δt ≈ L/c (assuming v ≪ c), bringing the cylinder to a stop after it has moved through a distance Δx = vΔt = EL/(Mc²).
(d) The energy deposited on the right wall of the cylinder is transferred to a massless shuttle mechanism (e) which transports the energy to the left wall (f) and then returns to re-create the starting configuration of the system, except with the cylinder displaced to the left. The cycle may then be repeated.
The reactionless drive described here violates the laws of mechanics, according to which the center of mass of a body at rest cannot be displaced in the absence of external forces. Einstein argued that the shuttle cannot be massless while transferring energy from the right to the left. If the energy E possesses the inertia E/c², the contradiction disappears.
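The bookkeeping in this thought experiment can be checked numerically. The following sketch is not from the source; the mass, length, and energy values are arbitrary assumptions. It verifies that the recoil displacement EL/(Mc²) obtained from the kinematics equals the displacement required to keep the center of mass fixed when the transported energy is assigned the mass E/c².

```python
# A minimal numerical sketch of Einstein's 1906 center-of-mass argument.
# All values are illustrative assumptions, not figures from the source.
c = 3.0e8          # speed of light, m/s
M = 1000.0         # cylinder mass, kg (assumed)
L = 10.0           # cylinder length, m (assumed)
E = 9.0e16         # radiated energy, J (assumed)

# Kinematics of the recoil: the burst carries momentum E/c, so the
# cylinder recoils at v = E/(M c) and drifts for a time of roughly L/c.
v = E / (M * c)
dx_recoil = v * (L / c)           # equals E*L/(M*c**2)

# Center-of-mass bookkeeping: if the energy has mass m = E/c**2 and is
# carried from one end to the other (distance L), the cylinder must move
# by m*L/M in the opposite direction for the center of mass to stay put.
m = E / c**2
dx_com = m * L / M

print(dx_recoil, dx_com)          # the two displacements agree
assert abs(dx_recoil - dx_com) <= 1e-9 * dx_com
```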
Modern analysis suggests that neither Einstein's original 1905 derivation of mass-energy equivalence nor the alternate derivation implied by his 1906 center-of-mass theorem is definitively correct. For instance, the center-of-mass thought experiment regards the cylinder as a completely rigid body. In reality, the impulse provided to the cylinder by the burst of light in step (b) cannot travel faster than light, so that when the burst of photons reaches the right wall in step (c), the wall has not yet begun to move. Ohanian has credited von Laue (1911) as having provided the first truly definitive derivation of E = mc².
Impossibility of faster-than-light signaling
In 1907, Einstein noted that from the composition law for velocities, one could deduce that there cannot exist an effect that allows faster-than-light signaling.
Einstein imagined a strip of material that allows propagation of signals at the faster-than-light speed W (as viewed from the material strip). Imagine two observers, A and B, standing on the x-axis and separated by the distance L. They stand next to the material strip, which is not at rest, but rather is moving in the negative x-direction with speed v. A uses the strip to send a signal to B. From the velocity composition formula, the signal propagates from A to B with speed (W − v)/(1 − Wv/c²). The time T required for the signal to propagate from A to B is given by T = L(1 − Wv/c²)/(W − v).
The strip can move at any speed v < c. Given the starting assumption W > c, one can always set the strip moving at a speed v such that T < 0.
In other words, given the existence of a means of transmitting signals faster-than-light, scenarios can be envisioned whereby the recipient of a signal will receive the signal before the transmitter has transmitted it.
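A short numerical sketch of the argument follows. The signal speed W > c and the strip speed v are illustrative assumptions; the point is only that the velocity-composition formula then yields a negative transit time.

```python
# Sketch of Einstein's 1907 argument: if a signal could travel at W > c
# relative to a moving strip, velocity composition allows the arrival
# to precede the emission. Values of W, v, and L are assumptions.
c = 1.0            # work in units where c = 1
W = 2.0 * c        # hypothetical faster-than-light signal speed (assumed)
v = 0.6 * c        # speed of the strip in the negative x-direction (assumed)
L = 1.0            # distance between A and B

# Speed of the signal relative to the embankment, by velocity composition
u = (W - v) / (1 - W * v / c**2)

# Travel time from A to B, equivalently L * (1 - W*v/c**2) / (W - v)
T = L / u

print(u, T)        # with W = 2c and v = 0.6c, T is negative:
                   # the signal "arrives" before it is sent
```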
About this thought experiment, Einstein wrote:
General relativity
Falling painters and accelerating elevators
In his unpublished 1920 review, Einstein related the genesis of his thoughts on the equivalence principle:
The realization "startled" Einstein, and inspired him to begin an eight-year quest that led to what is considered to be his greatest work, the theory of general relativity. Over the years, the story of the falling man has become an iconic one, much embellished by other writers. In most retellings of Einstein's story, the falling man is identified as a painter. In some accounts, Einstein was inspired after he witnessed a painter falling from the roof of a building adjacent to the patent office where he worked. This version of the story leaves unanswered the question of why Einstein might consider his observation of such an unfortunate accident to represent the happiest thought in his life.
Einstein later refined his thought experiment to consider a man inside a large enclosed chest or elevator falling freely in space. While in free fall, the man would consider himself weightless, and any loose objects that he emptied from his pockets would float alongside him. Then Einstein imagined a rope attached to the roof of the chamber. A powerful "being" of some sort begins pulling on the rope with constant force. The chamber begins to move "upwards" with a uniformly accelerated motion. Within the chamber, all of the man's perceptions are consistent with his being in a uniform gravitational field. Einstein asked, "Ought we to smile at the man and say that he errs in his conclusion?" Einstein answered no. Rather, the thought experiment provided "good grounds for extending the principle of relativity to include bodies of reference which are accelerated with respect to each other, and as a result we have gained a powerful argument for a generalised postulate of relativity."
Through this thought experiment, Einstein addressed an issue that was so well known that scientists rarely worried about it or considered it puzzling: Objects have "gravitational mass," which determines the force with which they are attracted to other objects. Objects also have "inertial mass," which determines the relationship between the force applied to an object and how much it accelerates. Newton had pointed out that, even though they are defined differently, gravitational mass and inertial mass always seem to be equal. But until Einstein, no one had conceived a good explanation as to why this should be so. From the correspondence revealed by his thought experiment, Einstein concluded that "it is impossible to discover by experiment whether a given system of coordinates is accelerated, or whether...the observed effects are due to a gravitational field." This correspondence between gravitational mass and inertial mass is the equivalence principle.
An extension to his accelerating observer thought experiment allowed Einstein to deduce that "rays of light are propagated curvilinearly in gravitational fields."
Early applications of the equivalence principle
Einstein's formulation of special relativity was in terms of kinematics (the study of moving bodies without reference to forces). Late in 1907, his former mathematics professor, Hermann Minkowski, presented an alternative, geometric interpretation of special relativity in a lecture to the Göttingen Mathematical society, introducing the concept of spacetime. Einstein was initially dismissive of Minkowski's geometric interpretation, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness).
As with special relativity, Einstein's early results in developing what was ultimately to become general relativity were accomplished using kinematic analysis rather than geometric techniques of analysis.
In his 1907 Jahrbuch paper, Einstein first addressed the question of whether the propagation of light is influenced by gravitation, and whether there is any effect of a gravitational field on clocks. In 1911, Einstein returned to this subject, in part because he had realized that certain predictions of his nascent theory were amenable to experimental test.
By the time of his 1911 paper, Einstein and other scientists had offered several alternative demonstrations that the inertial mass of a body increases with its energy content: If the energy increase of the body is ΔE, then the increase in its inertial mass is Δm = ΔE/c².
Einstein asked whether there is an increase of gravitational mass corresponding to the increase in inertial mass, and if there is such an increase, is the increase in gravitational mass precisely the same as its increase in inertial mass? Using the equivalence principle, Einstein concluded that this must be so.
To show that the equivalence principle necessarily implies the gravitation of energy, Einstein considered a light source S₂ separated along the z-axis by a distance h above a receiver S₁ in a homogeneous gravitational field having a force per unit mass of g. A certain amount of electromagnetic energy E₂ is emitted by S₂ towards S₁. According to the equivalence principle, this system is equivalent to a gravitation-free system which moves with uniform acceleration g in the direction of the positive z-axis, with S₂ separated by a constant distance h from S₁.
In the accelerated system, light emitted from S₂ takes (to a first approximation) a time h/c to arrive at S₁. But in this time, the velocity of S₁ will have increased by v = gh/c from its velocity when the light was emitted. The energy arriving at S₁ will therefore not be the energy E₂ but the greater energy E₁ given by E₁ ≈ E₂(1 + gh/c²).
According to the equivalence principle, the same relation holds for the non-accelerated system in a gravitational field, where we replace gh by the gravitational potential difference Φ between S₂ and S₁, so that E₁ = E₂ + (E₂/c²)Φ.
The energy E₁ arriving at S₁ is greater than the energy E₂ emitted by S₂ by the potential energy (E₂/c²)Φ of the mass E₂/c² in the gravitational field. Hence E/c² corresponds to the gravitational mass as well as the inertial mass of a quantity of energy E.
To further clarify that the energy of gravitational mass must equal the energy of inertial mass, Einstein proposed the following cyclic process: (a) A light source S₂ is situated a distance h above a receiver S₁ in a uniform gravitational field. A movable mass M can shuttle between S₂ and S₁. (b) A pulse of electromagnetic energy E is sent from S₂ to S₁. The energy is absorbed by S₁. (c) The mass M is lowered from S₂ to S₁, releasing an amount of work equal to Mgh. (d) The energy absorbed by S₁ is transferred to the mass M. This increases the gravitational mass of M to a new value M′. (e) The mass is lifted back to S₂, requiring the input of work M′gh. (f) The energy carried by the mass is then transferred to S₂, completing the cycle.
Conservation of energy demands that the difference in work between raising the mass and lowering the mass, M′gh − Mgh, must equal the energy (E/c²)gh gained by the pulse in descending through the height h, or one could potentially define a perpetual motion machine. Therefore, M′ − M = E/c².
In other words, the increase in gravitational mass predicted by the above arguments is precisely equal to the increase in inertial mass predicted by special relativity.
Einstein then considered sending a continuous electromagnetic beam of frequency ν₂ (as measured at S₂) from S₂ to S₁ in a homogeneous gravitational field. The frequency of the light as measured at S₁ will be a larger value ν₁ given by ν₁ = ν₂(1 + Φ/c²).
Einstein noted that the above equation seemed to imply something absurd: Given that the transmission of light from S₂ to S₁ is continuous, how could the number of periods emitted per second from S₂ be different from that received at S₁? It is impossible for wave crests to appear on the way down from S₂ to S₁. The simple answer is that this question presupposes an absolute nature of time, when in fact there is nothing that compels us to assume that clocks situated at different gravitational potentials must be conceived of as going at the same rate. The principle of equivalence implies gravitational time dilation.
It is important to realize that Einstein's arguments predicting gravitational time dilation are valid for any theory of gravity that respects the principle of equivalence. This includes Newtonian gravitation. Experiments such as the Pound–Rebka experiment, which have firmly established gravitational time dilation, therefore do not serve to distinguish general relativity from Newtonian gravitation.
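As a rough illustration of the size of the effect, the fractional shift gh/c² can be evaluated for the parameters of the Pound–Rebka experiment mentioned above (a tower of roughly 22.5 m; the numbers below are approximate and only illustrative).

```python
# Rough numerical sketch of the gravitational frequency shift implied by
# the equivalence-principle argument above, using approximate parameters
# of the Pound–Rebka experiment.
g = 9.81            # gravitational acceleration, m/s^2
h = 22.5            # height difference between emitter and receiver, m
c = 2.998e8         # speed of light, m/s

# Fractional shift of frequency for a photon falling through height h
fractional_shift = g * h / c**2
print(fractional_shift)   # roughly 2.5e-15, the tiny shift Pound and Rebka measured
```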
In the remainder of Einstein's 1911 paper, he discussed the bending of light rays in a gravitational field, but given the incomplete nature of Einstein's theory as it existed at the time, the value that he predicted was half the value that would later be predicted by the full theory of general relativity.
Non-Euclidean geometry and the rotating disk
By 1912, Einstein had reached an impasse in his kinematic development of general relativity, realizing that he needed to go beyond the mathematics with which he was familiar.
Stachel has identified Einstein's analysis of the rigid relativistic rotating disk as being key to this realization. The rigid rotating disk had been a topic of lively discussion since Max Born and Paul Ehrenfest, in 1909, both presented analyses of rigid bodies in special relativity. An observer on the edge of a rotating disk experiences an apparent ("fictitious" or "pseudo") force called "centrifugal force". By 1912, Einstein had become convinced of a close relationship between gravitation and pseudo-forces such as centrifugal force:
In the accompanying illustration, A represents a circular disk of 10 units diameter at rest in an inertial reference frame. The circumference of the disk is π times the diameter, and the illustration shows 31.4 rulers laid out along the circumference. B represents a circular disk of 10 units diameter that is spinning rapidly. According to a non-rotating observer, each of the rulers along the circumference is length-contracted along its line of motion. More rulers are required to cover the circumference, while the number of rulers required to span the diameter is unchanged. Note that we have not stated that we set A spinning to get B. In special relativity, it is not possible to set spinning a disk that is "rigid" in Born's sense of the term. Since spinning up disk A would cause the material to contract in the circumferential direction but not in the radial direction, a rigid disk would become fragmented from the induced stresses.
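The ruler counting described above can be made concrete with a small calculation. The rim speed chosen below is an arbitrary assumption; only the qualitative conclusion matters (more rulers are needed around the rim, while the same number spans the diameter).

```python
# Sketch of the rotating-disk bookkeeping described above (illustrative speed).
import math

diameter = 10.0                     # disk diameter in ruler-lengths
circumference = math.pi * diameter  # about 31.4 rulers at rest

v_over_c = 0.5                      # rim speed as a fraction of c (assumed)
gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)

# For the non-rotating observer each rim ruler is contracted by 1/gamma,
# so more rulers are needed around the circumference; the diameter,
# being perpendicular to the motion, needs the same number as before.
rulers_on_rim = circumference * gamma
rulers_on_diameter = diameter

print(rulers_on_rim, rulers_on_diameter)   # about 36.3 versus 10
```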
In later years, Einstein repeatedly stated that consideration of the rapidly rotating disk was of "decisive importance" to him because it showed that a gravitational field causes non-Euclidean arrangements of measuring rods.
Einstein realized that he did not have the mathematical skills to describe the non-Euclidean view of space and time that he envisioned, so he turned to his mathematician friend, Marcel Grossmann, for help. After researching in the library, Grossmann found a review article by Ricci and Levi-Civita on absolute differential calculus (tensor calculus). Grossmann tutored Einstein on the subject, and in 1913 and 1914, they published two joint papers describing an initial version of a generalized theory of gravitation. Over the next several years, Einstein used these mathematical tools to generalize Minkowski's geometric approach to relativity so as to encompass curved spacetime.
Quantum mechanics
Background: Einstein and the quantum
Many myths have grown up about Einstein's relationship with quantum mechanics. Freshman physics students are aware that Einstein explained the photoelectric effect and introduced the concept of the photon. But students who have grown up with the photon may not be aware of how revolutionary the concept was for his time. The best-known factoids about Einstein's relationship with quantum mechanics are his statement, "God does not play dice with the universe" and the indisputable fact that he just did not like the theory in its final form. This has led to the general impression that, despite his initial contributions, Einstein was out of touch with quantum research and played at best a secondary role in its development. Concerning Einstein's estrangement from the general direction of physics research after 1925, his well-known scientific biographer, Abraham Pais, wrote:
In hindsight, we know that Pais was incorrect in his assessment.
Einstein was arguably the greatest single contributor to the "old" quantum theory.
In his 1905 paper on light quanta, Einstein created the quantum theory of light. His proposal that light exists as tiny packets (photons) was so revolutionary, that even such major pioneers of quantum theory as Planck and Bohr refused to believe that it could be true. Bohr, in particular, was a passionate disbeliever in light quanta, and repeatedly argued against them until 1925, when he yielded in the face of overwhelming evidence for their existence.
In his 1906 theory of specific heats, Einstein was the first to realize that quantized energy levels explained the specific heat of solids. In this manner, he found a rational justification for the third law of thermodynamics (i.e. the entropy of any system approaches zero as the temperature approaches absolute zero): at very cold temperatures, atoms in a solid do not have enough thermal energy to reach even the first excited quantum level, and so cannot vibrate.
Einstein proposed the wave–particle duality of light. In 1909, using a rigorous fluctuation argument based on a thought experiment and drawing on his previous work on Brownian motion, he predicted the emergence of a "fusion theory" that would combine the two views. Basically, he demonstrated that the Brownian motion experienced by a mirror in thermal equilibrium with black-body radiation would be the sum of two terms, one due to the wave properties of radiation, the other due to its particulate properties.
Although Planck is justly hailed as the father of quantum mechanics, his derivation of the law of black-body radiation rested on fragile ground, since it required ad hoc assumptions of an unreasonable character. Furthermore, Planck's derivation represented an analysis of classical harmonic oscillators merged with quantum assumptions in an improvised fashion. In his 1916 theory of radiation, Einstein was the first to create a purely quantum explanation. This paper, well known for broaching the possibility of stimulated emission (the basis of the laser), changed the nature of the evolving quantum theory by introducing the fundamental role of random chance.
In 1924, Einstein received a short manuscript by an unknown Indian professor, Satyendra Nath Bose, outlining a new method of deriving the law of blackbody radiation. Einstein was intrigued by Bose's peculiar method of counting the number of distinct ways of putting photons into the available states, a method of counting that Bose apparently did not realize was unusual. Einstein, however, understood that Bose's counting method implied that photons are, in a deep sense, indistinguishable. He translated the paper into German and had it published. Einstein then followed Bose's paper with an extension to Bose's work which predicted Bose–Einstein condensation, one of the fundamental research topics of condensed matter physics.
While trying to develop a mathematical theory of light which would fully encompass its wavelike and particle-like aspects, Einstein developed the concept of "ghost fields". A guiding wave obeying Maxwell's classical laws would propagate following the normal laws of optics, but would not transmit any energy. This guiding wave, however, would govern the appearance of quanta of energy on a statistical basis, so that the appearance of these quanta would be proportional to the intensity of the interference radiation. These ideas became widely known in the physics community, and through Born's work in 1926, later became a key concept in the modern quantum theory of radiation and matter.
Therefore, Einstein before 1925 originated most of the key concepts of quantum theory: light quanta, wave–particle duality, the fundamental randomness of physical processes, the concept of indistinguishability, and the probability density interpretation of the wave equation. In addition, Einstein can arguably be considered the father of solid state physics and condensed matter physics. He provided a correct derivation of the blackbody radiation law and sparked the notion of the laser.
What of after 1925? In 1935, working with two younger colleagues, Einstein issued a final challenge to quantum mechanics, attempting to show that it could not represent a final solution. Despite the questions raised by this paper, it made little or no difference to how physicists employed quantum mechanics in their work. Of this paper, Pais was to write:
In contrast to Pais' negative assessment, this paper, outlining the EPR paradox, has become one of the most widely cited articles in the entire physics literature. It is considered the centerpiece of the development of quantum information theory, which has been termed the "third quantum revolution."
Wave–particle duality
All of Einstein's major contributions to the old quantum theory were arrived at via statistical argument. This includes his 1905 paper arguing that light has particle properties, his 1906 work on specific heats, his 1909 introduction of the concept of wave–particle duality, his 1916 work presenting an improved derivation of the blackbody radiation formula, and his 1924 work that introduced the concept of indistinguishability.
Einstein's 1909 arguments for the wave–particle duality of light were based on a thought experiment. Einstein imagined a mirror in a cavity containing particles of an ideal gas and filled with black-body radiation, with the entire system in thermal equilibrium. The mirror is constrained in its motions to a direction perpendicular to its surface.
The mirror jiggles from Brownian motion due to collisions with the gas molecules. Since the mirror is in a radiation field, the moving mirror transfers some of its kinetic energy to the radiation field as a result of the difference in the radiation pressure between its forwards and reverse surfaces. This implies that there must be fluctuations in the black-body radiation field, and hence fluctuations in the black-body radiation pressure. Reversing the argument shows that there must be a route for the return of energy from the fluctuating black-body radiation field back to the gas molecules.
Given the known shape of the radiation field given by Planck's law, Einstein could calculate the mean square energy fluctuation of the black-body radiation. He found the mean square energy fluctuation ⟨ε²⟩ in a small volume v of a cavity filled with thermal radiation in the frequency interval between ν and ν + dν to be a function of frequency and temperature: ⟨ε²⟩ = hν⟨E⟩ + c³⟨E⟩²/(8πν² v dν),
where ⟨E⟩ would be the average energy of the volume in contact with the thermal bath. The above expression has two terms, the second corresponding to the classical Rayleigh–Jeans law (i.e. a wavelike term), and the first corresponding to the Wien distribution law (which, from Einstein's 1905 analysis, would result from point-like quanta with energy hν). From this, Einstein concluded that radiation had simultaneous wave and particle aspects.
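A brief numerical sketch can show how the two terms trade dominance. It uses Planck's spectral energy density and the two-term fluctuation formula as commonly stated; the frequency, temperature, volume, and bandwidth below are illustrative assumptions.

```python
# Sketch comparing the "particle" and "wave" terms of the energy-fluctuation
# formula discussed above. With <E> = rho * volume * dnu, these are the
# h*nu*<E> and c^3*<E>^2/(8*pi*nu^2*volume*dnu) terms written in terms of rho.
import math

h = 6.626e-34      # Planck constant, J s
k = 1.381e-23      # Boltzmann constant, J/K
c = 2.998e8        # speed of light, m/s

def planck_density(nu, T):
    """Spectral energy density rho(nu, T) of black-body radiation."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

def fluctuation_terms(nu, T, volume, dnu):
    """Return (particle_term, wave_term) of the mean-square energy fluctuation."""
    rho = planck_density(nu, T)
    particle = h * nu * rho * volume * dnu
    wave = (c**3 / (8 * math.pi * nu**2)) * rho**2 * volume * dnu
    return particle, wave

# High frequency relative to kT: the particle (Wien) term dominates.
print(fluctuation_terms(nu=5e14, T=3000, volume=1e-6, dnu=1e9))
# Low frequency relative to kT: the wave (Rayleigh-Jeans) term dominates.
print(fluctuation_terms(nu=1e11, T=3000, volume=1e-6, dnu=1e9))
```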
Bubble paradox
From 1905 to 1923, Einstein was virtually the only physicist who took light-quanta seriously. Throughout most of this period, the physics community treated the light-quanta hypothesis with "skepticism bordering on derision" and maintained this attitude even after Einstein's photoelectric law was validated. The citation for Einstein's 1921 Nobel Prize (awarded in 1922) very deliberately avoided all mention of light-quanta, instead stating that it was being awarded for "his services to theoretical physics and especially for his discovery of the law of the photoelectric effect". This dismissive stance contrasts sharply with the enthusiastic manner in which Einstein's other major contributions were accepted, including his work on Brownian motion, special relativity, general relativity, and his numerous other contributions to the "old" quantum theory.
Various explanations have been given for this neglect on the part of the physics community. First and foremost was wave theory's long and indisputable success in explaining purely optical phenomena. Second was the fact that his 1905 paper, which pointed out that certain phenomena would be more readily explained under the assumption that light is particulate, presented the hypothesis only as a "heuristic viewpoint". The paper offered no compelling, comprehensive alternative to existing electromagnetic theory. Third was the fact that his 1905 paper introducing light quanta and his two 1909 papers that argued for a wave–particle fusion theory approached their subjects via statistical arguments that his contemporaries "might accept as theoretical exercise—crazy, perhaps, but harmless".
Most of Einstein's contemporaries adopted the position that light is ultimately a wave, but appears particulate in certain circumstances only because atoms absorb wave energy in discrete units.
Among the thought experiments that Einstein presented in his 1909 lecture on the nature and constitution of radiation was one that he used to point out the implausibility of the above argument. He
used this thought experiment to argue that atoms emit light as discrete particles rather than as continuous waves: (a) An electron in a cathode ray beam strikes an atom in a target. The intensity of the beam is set so low that we can consider one electron at a time as impinging on the target. (b) The atom emits a spherically radiating electromagnetic wave. (c) This wave excites an atom in a secondary target, causing it to release an electron of energy comparable to that of the original electron. The energy of the secondary electron depends only on the energy of the original electron and not at all on the distance between the primary and secondary targets. All the energy spread around the circumference of the radiating electromagnetic wave would appear to be instantaneously focused on the target atom, an action that Einstein considered implausible. Far more plausible would be to say that the first atom emitted a particle in the direction of the second atom.
Although Einstein originally presented this thought experiment as an argument for light having a particulate nature, it has been noted that this thought experiment, which has been termed the "bubble paradox", foreshadows the famous 1935 EPR paper. In his 1927 Solvay debate with Bohr, Einstein employed this thought experiment to illustrate that according to the Copenhagen interpretation of quantum mechanics that Bohr championed, the quantum wavefunction of a particle would abruptly collapse like a "popped bubble" no matter how widely dispersed the wavefunction. The transmission of energy from opposite sides of the bubble to a single point would occur faster than light, violating the principle of locality.
In the end, it was experiment, not any theoretical argument, that finally enabled the concept of the light quantum to prevail. In 1923, Arthur Compton was studying the scattering of high energy X-rays from a graphite target. Unexpectedly, he found that the scattered X-rays were shifted in wavelength, corresponding to inelastic scattering of the X-rays by the electrons in the target. His observations were totally inconsistent with wave behavior, but instead could only be explained if the X-rays acted as particles. This observation of the Compton effect rapidly brought about a change in attitude, and by 1926, the concept of the "photon" was generally accepted by the physics community.
Einstein's light box
Einstein did not like the direction in which quantum mechanics had turned after 1925. Although excited by Heisenberg's matrix mechanics, Schroedinger's wave mechanics, and Born's clarification of the meaning of the Schroedinger wave equation (i.e. that the absolute square of the wave function is to be interpreted as a probability density), his instincts told him that something was missing. In a letter to Born, he wrote:
The Solvay Debates between Bohr and Einstein began in dining-room discussions at the Fifth Solvay International Conference on Electrons and Photons in 1927. Einstein's issue with the new quantum mechanics was not just that, with the probability interpretation, it rendered invalid the notion of rigorous causality. After all, as noted above, Einstein himself had introduced random processes in his 1916 theory of radiation. Rather, by defining and delimiting the maximum amount of information obtainable in a given experimental arrangement, the Heisenberg uncertainty principle denied the existence of any knowable reality in terms of a complete specification of the momenta and positions of individual particles, an objective reality that would exist whether or not we could ever observe it.
Over dinner, during after-dinner discussions, and at breakfast, Einstein debated with Bohr and his followers on the question whether quantum mechanics in its present form could be called complete. Einstein illustrated his points with increasingly clever thought experiments intended to prove that position and momentum could in principle be simultaneously known to arbitrary precision. For example, one of his thought experiments involved sending a beam of electrons through a shuttered screen, recording the positions of the electrons as they struck a photographic screen. Bohr and his allies would always be able to counter Einstein's proposal, usually by the end of the same day.
On the final day of the conference, Einstein revealed that the uncertainty principle was not the only aspect of the new quantum mechanics that bothered him. Quantum mechanics, at least in the Copenhagen interpretation, appeared to allow action at a distance, the ability for two separated objects to communicate at speeds greater than light. By 1928, the consensus was that Einstein had lost the debate, and even his closest allies during the Fifth Solvay Conference, for example Louis de Broglie, conceded that quantum mechanics appeared to be complete.
At the Sixth Solvay International Conference on Magnetism (1930), Einstein came armed with a new thought experiment. This involved a box with a shutter that operated so quickly, it would allow only one photon to escape at a time. The box would first be weighed exactly. Then, at a precise moment, the shutter would open, allowing a photon to escape. The box would then be re-weighed. The well-known relationship between mass and energy would allow the energy of the particle to be precisely determined. With this gadget, Einstein believed that he had demonstrated a means to obtain, simultaneously, a precise determination of the energy of the photon as well as its exact time of departure from the system.
Bohr was shaken by this thought experiment. Unable to think of a refutation, he went from one conference participant to another, trying to convince them that Einstein's thought experiment could not be true, that if it were true, it would literally mean the end of physics. After a sleepless night, he finally worked out a response which, ironically, depended on Einstein's general relativity. Consider the illustration of Einstein's light box:
1. After emitting a photon, the loss of weight causes the box to rise in the gravitational field.
2. The observer returns the box to its original height by adding weights until the pointer points to its initial position. It takes a certain amount of time for the observer to perform this procedure. How long it takes depends on the strength of the spring and on how well-damped the system is. If undamped, the box will bounce up and down forever. If over-damped, the box will return to its original position sluggishly (See Damped spring-mass system).
3. The longer that the observer allows the damped spring-mass system to settle, the closer the pointer will reach its equilibrium position. At some point, the observer will conclude that his setting of the pointer to its initial position is within an allowable tolerance. There will be some residual error in returning the pointer to its initial position. Correspondingly, there will be some residual error in the weight measurement.
4. Adding the weights imparts a momentum p to the box which can be measured with an accuracy Δp delimited by Δp Δq ≈ h. It is clear that p < g t Δm, where g is the gravitational acceleration, t is the duration of the weighing procedure, and Δm is the accuracy of the mass measurement. Plugging in yields Δq > h/(g t Δm).
5. General relativity informs us that while the box has been at a height different than its original height, it has been ticking at a rate different than its original rate. The red shift formula informs us that there will be an uncertainty Δt = (g Δq/c²) t in the determination of the emission time of the photon.
6. Hence, Δt ΔE = (g Δq t/c²)(c² Δm) = g Δq t Δm > h. The accuracy with which the energy of the photon is measured restricts the precision with which its moment of emission can be measured, in accordance with the Heisenberg uncertainty principle.
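The chain of estimates in steps 4 through 6 can be collected into a single display. This is a compact restatement of the standard textbook presentation of Bohr's reply, not a quotation; the symbols are those introduced above (weighing time t, mass accuracy Δm, pointer accuracy Δq).

```latex
% Compact restatement of Bohr's reply (standard textbook presentation).
\begin{align*}
  \Delta q \,\Delta p \approx h,\qquad p < g\,t\,\Delta m
    &\;\Longrightarrow\; \Delta q > \frac{h}{g\,t\,\Delta m},\\[4pt]
  \Delta t = \frac{g\,\Delta q}{c^{2}}\,t
    &\;\Longrightarrow\; \Delta t\,\Delta E
       = \frac{g\,\Delta q\,t}{c^{2}}\,c^{2}\Delta m
       = g\,\Delta q\,t\,\Delta m > h .
\end{align*}
```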
After his last attempt to find a loophole around the uncertainty principle was refuted, Einstein quit trying to search for inconsistencies in quantum mechanics. Instead, he shifted his focus to the other aspects of quantum mechanics with which he was uncomfortable, focusing on his critique of action at a distance. His next paper on quantum mechanics foreshadowed his later paper on the EPR paradox.
Einstein was gracious in his defeat. The following September, Einstein nominated Heisenberg and Schroedinger for the Nobel Prize, stating, "I am convinced that this theory undoubtedly contains a part of the ultimate truth."
EPR paradox
Einstein's fundamental dispute with quantum mechanics was not about whether God rolled dice, whether the uncertainty principle allowed simultaneous measurement of position and momentum, or even whether quantum mechanics was complete. It was about reality. Does a physical reality exist independent of our ability to observe it? To Bohr and his followers, such questions were meaningless. All that we can know are the results of measurements and observations. It makes no sense to speculate about an ultimate reality that exists beyond our perceptions.
Einstein's beliefs had evolved over the years from those that he had held when he was young, when, as a logical positivist heavily influenced by his reading of David Hume and Ernst Mach, he had rejected such unobservable concepts as absolute time and space. Einstein believed:
1. A reality exists independent of our ability to observe it.
2. Objects are located at distinct points in spacetime and have their own independent, real existence. In other words, he believed in separability and locality.
3. Although at a superficial level, quantum events may appear random, at some ultimate level, strict causality underlies all processes in nature.
Einstein considered that realism and locality were fundamental underpinnings of physics. After leaving Nazi Germany and settling in Princeton at the Institute for Advanced Study, Einstein began writing up a thought experiment that he had been mulling over since attending a lecture by Léon Rosenfeld in 1933. Since the paper was to be in English, Einstein enlisted the help of the 46-year-old Boris Podolsky, a fellow who had moved to the institute from Caltech; he also enlisted the help of the 26-year-old Nathan Rosen, also at the institute, who did much of the math. The result of their collaboration was the four-page EPR paper, which in its title asked the question Can Quantum-Mechanical Description of Physical Reality be Considered Complete?
After seeing the paper in print, Einstein found himself unhappy with the result. His clear conceptual visualization had been buried under layers of mathematical formalism.
Einstein's thought experiment involved two particles that have collided or which have been created in such a way that they have properties which are correlated. The total wave function for the pair links the positions of the particles as well as their linear momenta. The figure depicts the spreading of the wave function from the collision point. However, observation of the position of the first particle allows us to determine precisely the position of the second particle no matter how far the pair have separated. Likewise, measuring the momentum of the first particle allows us to determine precisely the momentum of the second particle. "In accordance with our criterion for reality, in the first case we must consider the quantity P as being an element of reality, in the second case the quantity Q is an element of reality."
Einstein concluded that the second particle, which we have never directly observed, must have at any moment a position that is real and a momentum that is real. Quantum mechanics does not account for these features of reality. Therefore, quantum mechanics is not complete. It is known, from the uncertainty principle, that position and momentum cannot be measured at the same time. But even though their values can only be determined in distinct contexts of measurement, can they both be definite at the same time? Einstein concluded that the answer must be yes.
The only alternative, claimed Einstein, would be to assert that measuring the first particle instantaneously affected the reality of the position and momentum of the second particle. "No reasonable definition of reality could be expected to permit this."
Bohr was stunned when he read Einstein's paper and spent more than six weeks framing his response, which he gave exactly the same title as the EPR paper. The EPR paper forced Bohr to make a major revision in his understanding of complementarity in the Copenhagen interpretation of quantum mechanics.
Prior to EPR, Bohr had maintained that disturbance caused by the act of observation was the physical explanation for quantum uncertainty. In the EPR thought experiment, however, Bohr had to admit that "there is no question of a mechanical disturbance of the system under investigation." On the other hand, he noted that the two particles were one system described by one quantum function. Furthermore, the EPR paper did nothing to dispel the uncertainty principle.
Later commentators have questioned the strength and coherence of Bohr's response. As a practical matter, however, physicists for the most part did not pay much attention to the debate between Bohr and Einstein, since the opposing views did not affect one's ability to apply quantum mechanics to practical problems, but only affected one's interpretation of the quantum formalism. If they thought about the problem at all, most working physicists tended to follow Bohr's leadership.
In 1964, John Stewart Bell made the groundbreaking discovery that Einstein's local realist world view made experimentally verifiable predictions that would be in conflict with those of quantum mechanics. Bell's discovery shifted the Einstein–Bohr debate from philosophy to the realm of experimental physics. Bell's theorem showed that, for any local realist formalism, there exist limits on the predicted correlations between pairs of particles in an experimental realization of the EPR thought experiment. In 1972, the first experimental tests were carried out that demonstrated violation of these limits. Successive experiments improved the accuracy of observation and closed loopholes. To date, it is virtually certain that local realist theories have been falsified.
The EPR paper has recently been recognized as prescient, since it identified the phenomenon of quantum entanglement, which has inspired approaches to quantum mechanics different from the Copenhagen interpretation, and has been at the forefront of major technological advances in quantum computing, quantum encryption, and quantum information theory.
External links
NOVA: Inside Einstein's Mind (2015) — Retrace the thought experiments that inspired his theory on the nature of reality.
Thermal radiation | Thermal radiation is electromagnetic radiation emitted by the thermal motion of particles in matter. All matter with a temperature greater than absolute zero emits thermal radiation. The emission of energy arises from a combination of electronic, molecular, and lattice oscillations in a material. Kinetic energy is converted to electromagnetism due to charge-acceleration or dipole oscillation. At room temperature, most of the emission is in the infrared (IR) spectrum, though above around 525 °C (977 °F) enough of it becomes visible for the matter to visibly glow. This visible glow is called incandescence. Thermal radiation is one of the fundamental mechanisms of heat transfer, along with conduction and convection.
The primary method by which the Sun transfers heat to the Earth is thermal radiation. This energy is partially absorbed and scattered in the atmosphere, the latter process being the reason why the sky is visibly blue. Much of the Sun's radiation transmits through the atmosphere to the surface where it is either absorbed or reflected.
Thermal radiation can be used to detect objects or phenomena normally invisible to the human eye. Thermographic cameras create an image by sensing infrared radiation. These images can represent the temperature gradient of a scene and are commonly used to locate objects at a higher temperature than their surroundings. In a dark environment where visible light is at low levels, infrared images can be used to locate animals or people due to their body temperature. Cosmic microwave background radiation is another example of thermal radiation.
Blackbody radiation is a concept used to analyze thermal radiation in idealized systems. This model applies if a radiating object meets the physical characteristics of a black body in thermodynamic equilibrium. Planck's law describes the spectrum of blackbody radiation, and relates the radiative heat flux from a body to its temperature. Wien's displacement law determines the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the radiant intensity. Where blackbody radiation is not an accurate approximation, emission and absorption can be modeled using quantum electrodynamics (QED).
Overview
Thermal radiation is the emission of electromagnetic waves from all matter that has a temperature greater than absolute zero. Thermal radiation reflects the conversion of thermal energy into electromagnetic energy. Thermal energy is the kinetic energy of random movements of atoms and molecules in matter. It is present in all matter of nonzero temperature. These atoms and molecules are composed of charged particles, i.e., protons and electrons. The kinetic interactions among matter particles result in charge acceleration and dipole oscillation. This results in the electrodynamic generation of coupled electric and magnetic fields, resulting in the emission of photons, radiating energy away from the body. Electromagnetic radiation, including visible light, will propagate indefinitely in vacuum.
The characteristics of thermal radiation depend on various properties of the surface from which it is emanating, including its temperature and its spectral emissivity, as expressed by Kirchhoff's law. The radiation is not monochromatic, i.e., it does not consist of only a single frequency, but comprises a continuous spectrum of photon energies, its characteristic spectrum. If the radiating body and its surface are in thermodynamic equilibrium and the surface has perfect absorptivity at all wavelengths, it is characterized as a black body. A black body is also a perfect emitter. The radiation of such perfect emitters is called black-body radiation. The ratio of any body's emission relative to that of a black body is the body's emissivity, so a black body has an emissivity of one.
Absorptivity, reflectivity, and emissivity of all bodies are dependent on the wavelength of the radiation. Due to reciprocity, absorptivity and emissivity for any particular wavelength are equal at equilibrium – a good absorber is necessarily a good emitter, and a poor absorber is a poor emitter. The temperature determines the wavelength distribution of the electromagnetic radiation.
The distribution of power that a black body emits with varying frequency is described by Planck's law. At any given temperature, there is a frequency fmax at which the power emitted is a maximum. Wien's displacement law, and the fact that the frequency is inversely proportional to the wavelength, indicates that the peak frequency fmax is proportional to the absolute temperature T of the black body. The photosphere of the sun, at a temperature of approximately 6000 K, emits radiation principally in the (human-)visible portion of the electromagnetic spectrum. Earth's atmosphere is partly transparent to visible light, and the light reaching the surface is absorbed or reflected. Earth's surface emits the absorbed radiation, approximating the behavior of a black body at 300 K with spectral peak at fmax. At these lower frequencies, the atmosphere is largely opaque and radiation from Earth's surface is absorbed or scattered by the atmosphere. Though about 10% of this radiation escapes into space, most is absorbed and then re-emitted by atmospheric gases. It is this spectral selectivity of the atmosphere that is responsible for the planetary greenhouse effect, contributing to global warming and climate change in general (but also critically contributing to climate stability when the composition and properties of the atmosphere are not changing).
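The proportionality between peak frequency and temperature can be illustrated with a two-line calculation, using the frequency form of Wien's displacement constant (approximate value; the 6000 K and 300 K figures follow the text).

```python
# Sketch of the Wien displacement relation discussed above: the peak of the
# per-frequency Planck spectrum, f_max, scales linearly with temperature.
b_freq = 5.879e10                 # Wien frequency displacement constant, Hz/K (approx.)

f_max_sun = b_freq * 6000.0       # about 3.5e14 Hz, near the visible band
f_max_earth = b_freq * 300.0      # about 1.8e13 Hz, in the infrared

print(f_max_sun, f_max_earth, f_max_sun / f_max_earth)   # ratio = 6000/300 = 20
```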
History
Ancient Greece
Burning glasses are known to date back to about 700 BC. One of the first accurate mentions of burning glasses appears in Aristophanes's comedy, The Clouds, written in 423 BC. According to the Archimedes' heat ray anecdote, Archimedes is purported to have developed mirrors to concentrate heat rays in order to burn attacking Roman ships during the Siege of Syracuse (c. 213–212 BC), but no sources from the time have been confirmed. Catoptrics is a book attributed to Euclid on how to focus light in order to produce heat, but the book might have been written in 300 AD.
Renaissance
During the Renaissance, Santorio Santorio came up with one of the earliest thermoscopes. In 1612 he published his results on the heating effects from the Sun, and his attempts to measure heat from the Moon.
Earlier, in 1589, Giambattista della Porta reported on the heat felt on his face, emitted by a remote candle and facilitated by a concave metallic mirror. He also reported the cooling felt from a solid ice block. Della Porta's experiment would be replicated many times with increasing accuracy. It was replicated by astronomers Giovanni Antonio Magini and Christopher Heydon in 1603, and supplied instructions for Rudolf II, Holy Roman Emperor who performed it in 1611. In 1660, della Porta's experiment was updated by the Accademia del Cimento using a thermometer invented by Ferdinand II, Grand Duke of Tuscany.
Enlightenment
In 1761, Benjamin Franklin wrote a letter describing his experiments on the relationship between color and heat absorption. He found that darker-colored clothes got hotter when exposed to sunlight than lighter-colored clothes. One experiment he performed consisted of placing square pieces of cloth of various colors out in the snow on a sunny day. After some time he found that the black pieces had sunk furthest into the snow of all the colors, indicating that they had become the hottest and melted the most snow.
Caloric theory
Antoine Lavoisier considered that radiation of heat was concerned with the condition of the surface of a physical body rather than the material of which it was composed. Lavoisier described a poor radiator as a substance with a polished or smooth surface: its molecules lay in a plane, closely bound together, thus creating a surface layer of caloric fluid which insulated the release of the rest within. He described a good radiator as a substance with a rough surface, since only a small proportion of its molecules held caloric within a given plane, allowing for greater escape from within. Count Rumford would later cite this explanation of caloric movement as insufficient to explain the radiation of cold, which became a point of contention for the theory as a whole.
In his first memoir, Augustin-Jean Fresnel responded to a view he extracted from a French translation of Isaac Newton's Optics. He says that Newton imagined particles of light traversing space uninhibited by the caloric medium filling it, and refutes this view (never actually held by Newton) by saying that a body under illumination would increase indefinitely in heat.
In Marc-Auguste Pictet's famous experiment of 1790, it was reported that a thermometer detected a lower temperature when a set of mirrors were used to focus "frigorific rays" from a cold object.
In 1791, Pierre Prevost, a colleague of Pictet, introduced the concept of radiative equilibrium, wherein all objects both radiate and absorb heat. When an object is cooler than its surroundings, it absorbs more heat than it emits, causing its temperature to increase until it reaches equilibrium. Even at equilibrium, it continues to radiate heat, balancing absorption and emission.
The discovery of infrared radiation is ascribed to astronomer William Herschel. Herschel published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the sun and detected the calorific rays, beyond the red part of the spectrum, by an increase in the temperature recorded on a thermometer in that region.
Aether theory
The earlier theory of radiant heat originated from the concept of a hypothetical medium referred to as the aether. The aether supposedly fills all evacuated and non-evacuated spaces. The transmission of light or of radiant heat is allowed by the propagation of electromagnetic waves in the aether. Television and radio broadcasting waves are types of electromagnetic waves with specific wavelengths. All electromagnetic waves travel at the same speed; therefore, shorter wavelengths are associated with higher frequencies. Since every body or fluid is submerged in the aether, due to the vibration of the molecules, any body or fluid can potentially initiate an electromagnetic wave. All bodies generate and receive electromagnetic waves at the expense of their stored energy.
In 1860, Gustav Kirchhoff published a mathematical description of thermal equilibrium (i.e. Kirchhoff's law of thermal radiation). By 1884 the emissive power of a perfect blackbody had been inferred by Josef Stefan using John Tyndall's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principles. This relation is known as the Stefan–Boltzmann law.
Quantum theory
The microscopic theory of radiation is best known as the quantum theory and was first offered by Max Planck in 1900. According to this theory, energy emitted by a radiator is not continuous but is in the form of quanta. Planck held that the energy of each quantum is tied to its frequency of vibration, in keeping with the wave description of radiation. The energy E of a quantum of an electromagnetic wave of frequency f in vacuum is given by the expression E = hf, where h is the Planck constant.
Bodies at higher temperatures emit radiation at higher frequencies with an increasing energy per quantum. While the propagation of electromagnetic waves of all wavelengths is often referred to as "radiation", thermal radiation is often constrained to the visible and infrared regions. For engineering purposes, it may be stated that thermal radiation is a form of electromagnetic radiation which varies with the nature of a surface and its temperature.
Radiation waves may travel in unusual patterns compared to conduction heat flow. Radiation allows waves to travel from a heated body through a cold non-absorbing or partially absorbing medium and reach a warmer body again. An example is the case of the radiation waves that travel from the Sun to the Earth.
Characteristics
Frequency
Thermal radiation emitted by a body at any temperature consists of a wide range of frequencies. The frequency distribution is given by Planck's law of black-body radiation for an idealized emitter as shown in the diagram at top.
The dominant frequency (or color) range of the emitted radiation shifts to higher frequencies as the temperature of the emitter increases. For example, a red-hot object radiates mainly in the long wavelengths (red and orange) of the visible band. If it is heated further, it also begins to emit discernible amounts of green and blue light, and the spread of frequencies in the entire visible range causes it to appear white to the human eye; it is white hot. Even at a white-hot temperature of 2000 K, 99% of the energy of the radiation is still in the infrared. This is determined by Wien's displacement law. In the diagram the peak value for each curve moves to the left as the temperature increases.
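The claim that roughly 99% of the emission at 2000 K remains in the infrared can be checked by numerically integrating Planck's law. The sketch below is illustrative; the 700 nm cutoff for "visible" and the integration limits are assumptions.

```python
# Rough numerical check of the claim that at 2000 K about 99% of black-body
# emission is still at infrared wavelengths (here taken as longer than 700 nm).
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
T = 2000.0

def planck_lambda(lam, T):
    """Black-body spectral distribution per unit wavelength (arbitrary overall units)."""
    return lam**-5 / math.expm1(h * c / (lam * k * T))

def integrate(f, a, b, n=20000):
    """Simple midpoint integration."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

visible_and_shorter = integrate(lambda lam: planck_lambda(lam, T), 100e-9, 700e-9)
total = integrate(lambda lam: planck_lambda(lam, T), 100e-9, 200e-6)

print(1 - visible_and_shorter / total)   # roughly 0.99: the infrared share
```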
Relationship to temperature
The total radiation intensity of a black body rises as the fourth power of the absolute temperature, as expressed by the Stefan–Boltzmann law. A kitchen oven, at a temperature about double room temperature on the absolute temperature scale (600 K vs. 300 K) radiates 16 times as much power per unit area. An object at the temperature of the filament in an incandescent light bulb—roughly 3000 K, or 10 times room temperature—radiates 10,000 times as much energy per unit area.
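The fourth-power scaling quoted above follows directly from the Stefan–Boltzmann law, as the short calculation below illustrates.

```python
# The fourth-power scaling quoted above, checked numerically.
sigma = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power_per_area(T):
    """Black-body emissive power, W/m^2."""
    return sigma * T**4

room, oven, filament = 300.0, 600.0, 3000.0
print(radiated_power_per_area(oven) / radiated_power_per_area(room))       # 16
print(radiated_power_per_area(filament) / radiated_power_per_area(room))   # 10000
```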
As for photon statistics, thermal light obeys Super-Poissonian statistics.
Appearance
When the temperature of a body is high enough, its thermal radiation spectrum becomes strong enough in the visible range to visibly glow. The visible component of thermal radiation is sometimes called incandescence, though this term can also refer to thermal radiation in general. The term derives from the Latin verb incandescere, 'to glow white'.
In practice, virtually all solid or liquid substances start to glow around 525 °C (977 °F), with a mildly dull red color, whether or not a chemical reaction takes place that produces light as a result of an exothermic process. This limit is called the Draper point. The incandescence does not vanish below that temperature, but it is too weak in the visible spectrum to be perceptible.
Reciprocity
The rate of electromagnetic radiation emitted by a body at a given frequency is proportional to the rate that the body absorbs radiation at that frequency, a property known as reciprocity. Thus, a surface that absorbs more red light thermally radiates more red light. This principle applies to all properties of the wave, including wavelength (color), direction, polarization, and even coherence. It is therefore possible to have thermal radiation which is polarized, coherent, and directional; though polarized and coherent sources are fairly rare in nature.
Fundamental principles
Thermal radiation is one of the three principal mechanisms of heat transfer. It entails the emission of a spectrum of electromagnetic radiation due to an object's temperature. Other mechanisms are convection and conduction.
Electromagnetic waves
Thermal radiation is characteristically different from conduction and convection in that it does not require a medium and, in fact, it reaches maximum efficiency in a vacuum. Thermal radiation is a type of electromagnetic radiation which is often modeled by the propagation of waves. These waves have the standard wave properties of frequency, ν, and wavelength, λ, which are related by the equation c = λν, where c is the speed of light in the medium.
Irradiation
Thermal irradiation is the rate at which radiation is incident upon a surface per unit area. It is measured in watts per square meter. Irradiation can either be reflected, absorbed, or transmitted. The components of irradiation can then be characterized by the equation
α + ρ + τ = 1,
where α, ρ, and τ represent the absorptivity, reflectivity and transmissivity. These components are a function of the wavelength of the electromagnetic wave as well as the material properties of the medium.
Absorptivity and emissivity
The spectral absorptivity α_λ is equal to the emissivity ε_λ; this relation is known as Kirchhoff's law of thermal radiation. An object is called a black body if this holds for all frequencies, and the following formula applies:
α = ε = 1.
If objects appear white (reflective in the visual spectrum), they are not necessarily equally reflective (and thus non-emissive) in the thermal infrared – see the diagram at the left. Most household radiators are painted white, which is sensible given that they are not hot enough to radiate any significant amount of heat, and are not designed as thermal radiators at all – instead, they are actually convectors, and painting them matt black would make little difference to their efficacy. Acrylic and urethane based white paints have 93% blackbody radiation efficiency at room temperature (meaning the term "black body" does not always correspond to the visually perceived color of an object). Materials that do not follow the "black color = high emissivity/absorptivity" rule of thumb will most likely have a wavelength-dependent spectral emissivity/absorptivity.
Only truly gray systems (relative equivalent emissivity/absorptivity and no directional transmissivity dependence in all control volume bodies considered) can achieve reasonable steady-state heat flux estimates through the Stefan-Boltzmann law. Encountering this "ideally calculable" situation is almost impossible (although common engineering procedures surrender the dependency of these unknown variables and "assume" this to be the case). Optimistically, these "gray" approximations will get close to real solutions, as most divergence from Stefan-Boltzmann solutions is very small (especially in most standard temperature and pressure lab controlled environments).
Reflectivity
Reflectivity deviates from the other properties in that it is bidirectional in nature. In other words, this property depends on the direction of the incident of radiation as well as the direction of the reflection. Therefore, the reflected rays of a radiation spectrum incident on a real surface in a specified direction forms an irregular shape that is not easily predictable. In practice, surfaces are often assumed to reflect either in a perfectly specular or a diffuse manner. In a specular reflection, the angles of reflection and incidence are equal. In diffuse reflection, radiation is reflected equally in all directions. Reflection from smooth and polished surfaces can be assumed to be specular reflection, whereas reflection from rough surfaces approximates diffuse reflection. In radiation analysis a surface is defined as smooth if the height of the surface roughness is much smaller relative to the wavelength of the incident radiation.
Transmissivity
A medium that experiences no transmission (τ = 0) is opaque, in which case absorptivity and reflectivity sum to unity:
α + ρ = 1.
Radiation intensity
Radiation emitted from a surface can propagate in any direction from the surface. Irradiation can also be incident upon a surface from any direction. The amount of irradiation on a surface is therefore dependent on the relative orientation of both the emitter and the receiver. The parameter radiation intensity, is used to quantify how much radiation makes it from one surface to another.
Radiation intensity is often modeled using a spherical coordinate system.
Emissive power
Emissive power is the rate at which radiation is emitted per unit area. It is a measure of heat flux. The total emissive power from a surface is denoted as E and can be determined by
E = ∫ I_e cos θ dω,
where the integral runs over the hemisphere above the surface, the solid angle ω is in units of steradians and I_e is the total intensity.
The total emissive power can also be found by integrating the spectral emissive power over all possible wavelengths. This is calculated as
E = ∫₀^∞ E_λ dλ,
where λ represents wavelength.
The spectral emissive power can also be determined from the spectral intensity I_λ,e as follows,
E_λ = ∫ I_λ,e cos θ dω,
where both spectral emissive power and emissive intensity are functions of wavelength.
Blackbody radiation
A "black body" is a body which has the property of allowing all incident rays to enter without surface reflection and not allowing them to leave again.
Blackbodies are idealized surfaces that act as the perfect absorber and emitter. They serve as the standard against which real surfaces are compared when characterizing thermal radiation. A blackbody is defined by three characteristics:
A blackbody absorbs all incident radiation, regardless of wavelength and direction.
No surface can emit more energy than a blackbody for a given temperature and wavelength.
A blackbody is a diffuse emitter.
The Planck distribution
The spectral intensity of a blackbody, I_λ,b, was first determined by Max Planck. It is given by Planck's law per unit wavelength as:
I_λ,b(λ, T) = (2hc²/λ⁵) · 1/(exp(hc/(λk_B T)) − 1).
This formula mathematically follows from calculation of spectral distribution of energy in quantized electromagnetic field which is in complete thermal equilibrium with the radiating object. Planck's law shows that radiative energy increases with temperature, and explains why the peak of an emission spectrum shifts to shorter wavelengths at higher temperatures. It can also be found that energy emitted at shorter wavelengths increases more rapidly with temperature relative to longer wavelengths.
The equation is derived as an infinite sum over all possible frequencies in a semi-sphere region. The energy, , of each photon is multiplied by the number of states available at that frequency, and the probability that each of those states will be occupied.
Stefan-Boltzmann law
The Planck distribution can be used to find the spectral emissive power of a blackbody, as follows:
E_λ,b = π I_λ,b.
The total emissive power of a blackbody is then calculated as
E_b = ∫₀^∞ E_λ,b dλ.
The solution of the above integral yields a remarkably elegant equation for the total emissive power of a blackbody, the Stefan–Boltzmann law, which is given as
E_b = σT⁴,
where σ is the Stefan–Boltzmann constant.
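As an illustrative check (not part of the original text), the Stefan–Boltzmann result can be recovered numerically by integrating the Planck spectral emissive power over wavelength; the Python sketch below assumes an arbitrary emitter temperature of 1500 K and standard values of the physical constants.

import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI values
sigma = 5.670374419e-8                                   # Stefan-Boltzmann constant, W m^-2 K^-4

def spectral_emissive_power(lam, T):
    # Blackbody spectral emissive power E_lambda,b = pi * I_lambda,b (W per m^2 per m).
    return (2.0 * np.pi * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

T = 1500.0                                  # assumed emitter temperature, K
lam = np.linspace(1e-7, 1e-3, 200_000)      # 0.1 um to 1 mm covers essentially all the power
E = spectral_emissive_power(lam, T)
E_total = np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(lam))   # trapezoidal integration

print(E_total)        # ~2.87e5 W/m^2
print(sigma * T**4)   # Stefan-Boltzmann prediction, in close agreement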
Wien's displacement law
The wavelength for which the emission intensity is highest is given by Wien's displacement law as:
λ_max = b/T,
where b ≈ 2.898 × 10⁻³ m·K is Wien's displacement constant.
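For concreteness, Wien's law can be evaluated for a few representative temperatures; the sketch below assumes 300 K (room temperature), 3000 K (roughly an incandescent filament) and 5778 K (a commonly quoted effective temperature for the Sun's surface).

# Peak emission wavelength lambda_max = b / T from Wien's displacement law.
b = 2.898e-3   # Wien's displacement constant, m*K

for T in (300.0, 3000.0, 5778.0):
    print(T, "K ->", b / T, "m")   # ~9.7e-6 m, ~9.7e-7 m, ~5.0e-7 m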
Constants
Definitions of constants used in the above equations:
Variables
Definitions of variables, with example values:
Emission from non-black surfaces
For surfaces which are not black bodies, one has to consider the (generally frequency dependent) emissivity factor ε(ν). This factor has to be multiplied with the radiation spectrum formula before integration. If it is taken as a constant, the resulting formula for the power output can be written in a way that contains ε as a factor:
P = εσAT⁴.
This type of theoretical model, with frequency-independent emissivity lower than that of a perfect black body, is often known as a grey body. For frequency-dependent emissivity, the solution for the integrated power depends on the functional form of the dependence, though in general there is no simple expression for it. Practically speaking, if the emissivity of the body is roughly constant around the peak emission wavelength, the gray body model tends to work fairly well since the weight of the curve around the peak emission tends to dominate the integral.
Heat transfer between surfaces
Calculation of radiative heat transfer between groups of objects, including a 'cavity' or 'surroundings' requires solution of a set of simultaneous equations using the radiosity method. In these calculations, the geometrical configuration of the problem is distilled to a set of numbers called view factors, which give the proportion of radiation leaving any given surface that hits another specific surface. These calculations are important in the fields of solar thermal energy, boiler and furnace design and raytraced computer graphics.
The net radiative heat transfer from one surface to another is the radiation leaving the first surface for the other minus that arriving from the second surface.
Formulas for radiative heat transfer can be derived for more particular or more elaborate physical arrangements, such as between parallel plates, concentric spheres and the internal surfaces of a cylinder.
Applications
Thermal radiation is an important factor of many engineering applications, especially for those dealing with high temperatures.
Solar energy
Sunlight is the incandescence of the "white hot" surface of the Sun. Electromagnetic radiation from the sun has a peak wavelength of about 550 nm, and can be harvested to generate heat or electricity.
Thermal radiation can be concentrated on a tiny spot via reflecting mirrors, which concentrating solar power takes advantage of. Instead of mirrors, Fresnel lenses can also be used to concentrate radiant energy. Either method can be used to quickly vaporize water into steam using sunlight. For example, the sunlight reflected from mirrors heats the PS10 Solar Power Plant, and during the day it can heat water into steam.
A selective surface can be used when energy is being extracted from the sun. Selective surfaces are surfaces tuned to maximize the amount of energy they absorb from the sun's radiation while minimizing the amount of energy they lose to their own thermal radiation. Selective surfaces can also be used on solar collectors.
Incandescent light bulbs
The incandescent light bulb creates light by heating a filament to a temperature at which it emits significant visible thermal radiation. For a tungsten filament at a typical temperature of 3000 K, only a small fraction of the emitted radiation is visible, and the majority is infrared light. This infrared light does not help a person see, but still transfers heat to the environment, making incandescent lights relatively inefficient as a light source.
If the filament could be made hotter, efficiency would increase; however, there are currently no materials able to withstand such temperatures which would be appropriate for use in lamps.
More efficient light sources, such as fluorescent lamps and LEDs, do not function by incandescence.
Thermal comfort
Thermal radiation plays a crucial role in human comfort, influencing perceived temperature sensation. Various technologies have been developed to enhance thermal comfort, including personal heating and cooling devices.
The mean radiant temperature is a metric used to quantify the exchange of radiant heat between a human and their surrounding environment.
Personal heating
Radiant personal heaters are devices that convert energy into infrared radiation that are designed to increase a user's perceived temperature. They typically are either gas-powered or electric. In domestic and commercial applications, gas-powered radiant heaters can produce a higher heat flux than electric heaters which are limited by the amount of current that can be drawn through a circuit breaker.
Personal cooling
Personalized cooling technology is an example of an application where optical spectral selectivity can be beneficial. Conventional personal cooling is typically achieved through heat conduction and convection. However, the human body is a very efficient emitter of infrared radiation, which provides an additional cooling mechanism. Most conventional fabrics are opaque to infrared radiation and block thermal emission from the body to the environment. Fabrics for personalized cooling applications have been proposed that enable infrared transmission to directly pass through clothing, while being opaque at visible wavelengths, allowing the wearer to remain cooler.
Windows
Low-emissivity windows in houses are a more complicated technology, since they must have low emissivity at thermal wavelengths while remaining transparent to visible light. To reduce the heat transfer from a surface, such as a glass window, a clear reflective film with a low emissivity coating can be placed on the interior of the surface. "Low-emittance (low-E) coatings are microscopically thin, virtually invisible, metal or metallic oxide layers deposited on a window or skylight glazing surface primarily to reduce the U-factor by suppressing radiative heat flow". Adding this coating limits the amount of radiation that leaves the window, thus increasing the amount of heat that is retained inside.
Spacecraft
Shiny metal surfaces have low emissivities both in the visible wavelengths and in the far infrared. Such surfaces can be used to reduce heat transfer in both directions; an example of this is the multi-layer insulation used to insulate spacecraft.
Since any electromagnetic radiation, including thermal radiation, conveys momentum as well as energy, thermal radiation also induces very small forces on the radiating or absorbing objects. Normally these forces are negligible, but they must be taken into account when considering spacecraft navigation. The Pioneer anomaly, where the motion of the craft slightly deviated from that expected from gravity alone, was eventually tracked down to asymmetric thermal radiation from the spacecraft. Similarly, the orbits of asteroids are perturbed since the asteroid absorbs solar radiation on the side facing the Sun, but then re-emits the energy at a different angle as the rotation of the asteroid carries the warm surface out of the Sun's view (the YORP effect).
Nanostructures
Nanostructures with spectrally selective thermal emittance properties offer numerous technological applications for energy generation and efficiency, e.g., for daytime radiative cooling of photovoltaic cells and buildings. These applications require high emittance in the frequency range corresponding to the atmospheric transparency window in 8 to 13 micron wavelength range. A selective emitter radiating strongly in this range is thus exposed to the clear sky, enabling the use of the outer space as a very low temperature heat sink.
Health and safety
Metabolic temperature regulation
In a practical, room-temperature setting, humans lose considerable energy due to infrared thermal radiation in addition to that lost by conduction to air (aided by concurrent convection, or other air movement like drafts). The heat energy lost is partially regained by absorbing heat radiation from walls or other surroundings. Human skin has an emissivity very close to 1.0. A human, having roughly 2 m² in surface area, and a temperature of about 307 K, continuously radiates approximately 1000 W. If people are indoors, surrounded by surfaces at 296 K, they receive back about 900 W from the wall, ceiling, and other surroundings, resulting in a net loss of 100 W. These estimates are highly dependent on extrinsic variables, such as wearing clothes.
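The estimate in the preceding paragraph can be reproduced with the Stefan–Boltzmann law; this is a minimal sketch assuming the same surface area, an emissivity of 1, and the skin and room temperatures quoted above.

sigma = 5.670374419e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
area, emissivity = 2.0, 1.0       # m^2, far-infrared emissivity of skin ~1
T_skin, T_room = 307.0, 296.0     # kelvin

emitted = emissivity * sigma * area * T_skin**4    # ~1000 W radiated by the body
absorbed = emissivity * sigma * area * T_room**4   # ~900 W absorbed from the surroundings
print(emitted, absorbed, emitted - absorbed)       # net radiative loss on the order of 100 W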
Lighter colors and also whites and metallic substances absorb less of the illuminating light, and as a result heat up less. However, color makes little difference in the heat transfer between an object at everyday temperatures and its surroundings. This is because the dominant emitted wavelengths are not in the visible spectrum, but rather infrared. Emissivities at those wavelengths are largely unrelated to visual emissivities (visible colors); in the far infra-red, most objects have high emissivities. Thus, except in sunlight, the color of clothing makes little difference as regards warmth; likewise, paint color of houses makes little difference to warmth except when the painted part is sunlit.
Burns
Thermal radiation is a phenomenon that can burn skin and ignite flammable materials. The time to damage from exposure to thermal radiation is a function of the rate of delivery of the heat. Radiative heat flux and effects are given as follows:
Near-field radiative heat transfer
At distances on the scale of the wavelength of a radiated electromagnetic wave or smaller, Planck's law is not accurate. For objects this small and close together, the quantum tunneling of EM waves has a significant impact on the rate of radiation.
A more sophisticated framework involving electromagnetic theory must be used for smaller distances from the thermal source or surface. For example, although far-field thermal radiation at distances from surfaces of more than one wavelength is generally not coherent to any extent, near-field thermal radiation (i.e., radiation at distances of a fraction of various radiation wavelengths) may exhibit a degree of both temporal and spatial coherence.
Planck's law of thermal radiation has been challenged in recent decades by predictions and successful demonstrations of the radiative heat transfer between objects separated by nanoscale gaps that deviate significantly from the law predictions. This deviation is especially strong (up to several orders in magnitude) when the emitter and absorber support surface polariton modes that can couple through the gap separating cold and hot objects. However, to take advantage of the surface-polariton-mediated near-field radiative heat transfer, the two objects need to be separated by ultra-narrow gaps on the order of microns or even nanometers. This limitation significantly complicates practical device designs.
Another way to modify the object thermal emission spectrum is by reducing the dimensionality of the emitter itself. This approach builds upon the concept of confining electrons in quantum wells, wires and dots, and tailors thermal emission by engineering confined photon states in two- and three-dimensional potential traps, including wells, wires, and dots. Such spatial confinement concentrates photon states and enhances thermal emission at select frequencies. To achieve the required level of photon confinement, the dimensions of the radiating objects should be on the order of or below the thermal wavelength predicted by Planck's law. Most importantly, the emission spectrum of thermal wells, wires and dots deviates from Planck's law predictions not only in the near field, but also in the far field, which significantly expands the range of their applications.
See also
Incandescence
Infrared photography
Interior radiation control coating
Heat transfer
Microwave Radiation
Planck radiation
Radiant cooling
Sakuma–Hattori equation
Thermal dose unit
View factor
References
Further reading
E.M. Sparrow and R.D. Cess. Radiation Heat Transfer. Hemisphere Publishing Corporation, 1978.
Kuenzer, C. and S. Dech (2013): Thermal Infrared Remote Sensing: Sensors, Methods, Applications (= Remote Sensing and Digital Image Processing 17). Dordrecht: Springer.
External links
Black Body Emission Calculator
Heat transfer
Atmospheric Radiation
Infrared Temperature Calibration 101
Electromagnetic radiation
Heat transfer
Thermodynamics
Temperature
Infrared
Oscillation
Oscillation is the repetitive or periodic variation, typically in time, of some measure about a central value (often a point of equilibrium) or between two or more different states. Familiar examples of oscillation include a swinging pendulum and alternating current. Oscillations can be used in physics to approximate complex interactions, such as those between atoms.
Oscillations occur not only in mechanical systems but also in dynamic systems in virtually every area of science: for example the beating of the human heart (for circulation), business cycles in economics, predator–prey population cycles in ecology, geothermal geysers in geology, vibration of strings in guitar and other string instruments, periodic firing of nerve cells in the brain, and the periodic swelling of Cepheid variable stars in astronomy. The term vibration is precisely used to describe a mechanical oscillation.
Oscillation, especially rapid oscillation, may be an undesirable phenomenon in process control and control theory (e.g. in sliding mode control), where the aim is convergence to stable state. In these cases it is called chattering or flapping, as in valve chatter, and route flapping.
Simple harmonic oscillation
The simplest mechanical oscillating system is a weight attached to a linear spring subject to only weight and tension. Such a system may be approximated on an air table or ice surface. The system is in an equilibrium state when the spring is static. If the system is displaced from the equilibrium, there is a net restoring force on the mass, tending to bring it back to equilibrium. However, in moving the mass back to the equilibrium position, it has acquired momentum which keeps it moving beyond that position, establishing a new restoring force in the opposite sense. If a constant force such as gravity is added to the system, the point of equilibrium is shifted. The time taken for an oscillation to occur is often referred to as the oscillatory period.
The systems where the restoring force on a body is directly proportional to its displacement, such as the dynamics of the spring-mass system, are described mathematically by the simple harmonic oscillator and the regular periodic motion is known as simple harmonic motion. In the spring-mass system, oscillations occur because, at the static equilibrium displacement, the mass has kinetic energy which is converted into potential energy stored in the spring at the extremes of its path. The spring-mass system illustrates some common features of oscillation, namely the existence of an equilibrium and the presence of a restoring force which grows stronger the further the system deviates from equilibrium.
In the case of the spring-mass system, Hooke's law states that the restoring force of a spring is:
F = −kx.
By using Newton's second law, the differential equation can be derived:
d²x/dt² = −(k/m)x = −ω²x,
where ω = √(k/m).
The solution to this differential equation produces a sinusoidal position function:
x(t) = A cos(ωt − δ),
where ω is the angular frequency of the oscillation, A is the amplitude, and δ is the phase shift of the function. These are determined by the initial conditions of the system. Because cosine oscillates between 1 and −1 infinitely, our spring-mass system would oscillate between the positive and negative amplitude forever without friction.
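A short numerical sketch of the solution above, assuming illustrative values for the mass, spring constant and initial conditions (none of which are specified in the text):

import numpy as np

m, k = 0.5, 8.0                      # assumed mass (kg) and spring constant (N/m)
omega = np.sqrt(k / m)               # natural angular frequency, rad/s

x0, v0 = 0.1, 0.0                    # assumed initial displacement (m) and velocity (m/s)
A = np.hypot(x0, v0 / omega)         # amplitude set by the initial conditions
delta = np.arctan2(v0 / omega, x0)   # phase shift set by the initial conditions

t = np.linspace(0.0, 5.0, 6)
x = A * np.cos(omega * t - delta)    # x(t) = A*cos(omega*t - delta)
print(omega, A, x)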
Two-dimensional oscillators
In two or three dimensions, harmonic oscillators behave similarly to one dimension. The simplest example of this is an isotropic oscillator, where the restoring force is proportional to the displacement from equilibrium with the same restorative constant in all directions.
This produces a similar solution, but now there is a different equation for every direction.
Anisotropic oscillators
With anisotropic oscillators, different directions have different constants of restoring forces. The solution is similar to isotropic oscillators, but there is a different frequency in each direction. Varying the frequencies relative to each other can produce interesting results. For example, if the frequency in one direction is twice that of another, a figure eight pattern is produced. If the ratio of frequencies is irrational, the motion is quasiperiodic. This motion is periodic on each axis, but is not periodic with respect to r, and will never repeat.
Damped oscillations
All real-world oscillator systems are thermodynamically irreversible. This means there are dissipative processes such as friction or electrical resistance which continually convert some of the energy stored in the oscillator into heat in the environment. This is called damping. Thus, oscillations tend to decay with time unless there is some net source of energy into the system. The simplest description of this decay process can be illustrated by oscillation decay of the harmonic oscillator.
Damped oscillators are created when a resistive force is introduced, which is dependent on the first derivative of the position, or in this case velocity. The differential equation created by Newton's second law adds in this resistive force with an arbitrary constant b. This example assumes a linear dependence on velocity:
m d²x/dt² + b dx/dt + kx = 0.
This equation can be rewritten as before:
d²x/dt² + 2β dx/dt + ω₀²x = 0,
where 2β = b/m and ω₀² = k/m.
This produces the general solution:
x(t) = e^(−βt) (C₁ e^(ω₁t) + C₂ e^(−ω₁t)),
where ω₁ = √(β² − ω₀²).
The exponential term outside of the parenthesis is the decay function and β is the damping coefficient. There are 3 categories of damped oscillators: under-damped, where β < ω₀; over-damped, where β > ω₀; and critically damped, where β = ω₀.
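The three damping regimes can be checked numerically; the sketch below assumes illustrative values of m, k and the damping constant b, and integrates the equation of motion with a simple semi-implicit Euler step.

import numpy as np

m, k, b = 1.0, 4.0, 0.4                      # assumed mass, stiffness and damping constant
omega0, beta = np.sqrt(k / m), b / (2.0 * m)
regime = ("under-damped" if beta < omega0
          else "critically damped" if beta == omega0
          else "over-damped")
print(regime, omega0, beta)                  # under-damped: beta = 0.2 < omega0 = 2.0

x, v, dt = 1.0, 0.0, 1e-3                    # start displaced by 1, at rest
for _ in range(20_000):                      # 20 s of motion
    v += (-(b / m) * v - (k / m) * x) * dt   # acceleration from m*x'' = -b*x' - k*x
    x += v * dt
print(x)                                     # the oscillation has decayed toward zero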
Driven oscillations
In addition, an oscillating system may be subject to some external force, as when an AC circuit is connected to an outside power source. In this case the oscillation is said to be driven.
The simplest example of this is a spring-mass system with a sinusoidal driving force:
d²x/dt² + 2β dx/dt + ω₀²x = f₀ cos(ωt),
where f₀ = F₀/m.
This gives the solution:
x(t) = A cos(ωt − δ) + A_tr e^(−βt) cos(ω₁t − δ_tr),
where A = f₀ / √((ω₀² − ω²)² + 4β²ω²) and δ = arctan(2βω / (ω₀² − ω²)).
The second term of x(t) is the transient solution to the differential equation. The transient solution can be found by using the initial conditions of the system.
Some systems can be excited by energy transfer from the environment. This transfer typically occurs where systems are embedded in some fluid flow. For example, the phenomenon of flutter in aerodynamics occurs when an arbitrarily small displacement of an aircraft wing (from its equilibrium) results in an increase in the angle of attack of the wing on the air flow and a consequential increase in lift coefficient, leading to a still greater displacement. At sufficiently large displacements, the stiffness of the wing dominates to provide the restoring force that enables an oscillation.
Resonance
Resonance occurs in a damped driven oscillator when ω = ω0, that is, when the driving frequency is equal to the natural frequency of the system. When this occurs, the denominator of the amplitude is minimized, which maximizes the amplitude of the oscillations.
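Using the steady-state amplitude given above, a short sweep over driving frequencies shows the peak sitting essentially at the natural frequency when damping is weak; f0, omega0 and beta below are assumed illustrative values.

import numpy as np

f0, omega0, beta = 1.0, 2.0, 0.1      # assumed driving strength, natural frequency, damping

omega = np.linspace(0.1, 4.0, 4001)
A = f0 / np.sqrt((omega0**2 - omega**2)**2 + 4.0 * beta**2 * omega**2)
print(omega[np.argmax(A)])            # very close to omega0 = 2 for weak damping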
Coupled oscillations
The harmonic oscillator and the systems it models have a single degree of freedom. More complicated systems have more degrees of freedom, for example, two masses and three springs (each mass being attached to fixed points and to each other). In such cases, the behavior of each variable influences that of the others. This leads to a coupling of the oscillations of the individual degrees of freedom. For example, two pendulum clocks (of identical frequency) mounted on a common wall will tend to synchronise. This phenomenon was first observed by Christiaan Huygens in 1665. The apparent motions of the compound oscillations typically appears very complicated but a more economic, computationally simpler and conceptually deeper description is given by resolving the motion into normal modes.
The simplest form of coupled oscillators is a 3 spring, 2 mass system, where masses and spring constants are the same. This problem begins with deriving Newton's second law for both masses:
m d²x₁/dt² = −k x₁ + k(x₂ − x₁),
m d²x₂/dt² = −k(x₂ − x₁) − k x₂.
The equations are then generalized into matrix form:
M d²x/dt² = −K x,
where M = [[m, 0], [0, m]], x = (x₁, x₂), and K = [[2k, −k], [−k, 2k]].
The values of m and k can be substituted into the matrices.
Assuming a solution of the form x = G e^(iωt), these matrices can now be plugged into the general solution, giving (K − ω²M) G = 0.
The determinant of this matrix yields a quadratic equation in ω²; the nontrivial solutions require det(K − ω²M) = 0, with roots ω₁ = √(k/m) and ω₂ = √(3k/m).
Depending on the starting point of the masses, this system has 2 possible frequencies (or a combination of the two). If the masses are started with their displacements in the same direction, the frequency is that of a single mass system, because the middle spring is never extended. If the two masses are started in opposite directions, the second, faster frequency is the frequency of the system.
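The two frequencies can also be obtained numerically as an eigenvalue problem, det(K − ω²M) = 0; the sketch below assumes m = 1 and k = 2 purely for illustration.

import numpy as np

m, k = 1.0, 2.0                      # assumed values
K = np.array([[2.0 * k, -k],
              [-k, 2.0 * k]])        # stiffness matrix of the 3-spring, 2-mass chain

omega_sq = np.linalg.eigvalsh(K) / m # eigenvalues of M^-1 K (M = m * identity here)
print(np.sqrt(omega_sq))             # [sqrt(k/m), sqrt(3k/m)] = [1.414..., 2.449...]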
More special cases are the coupled oscillators where energy alternates between two forms of oscillation. Well-known is the Wilberforce pendulum, where the oscillation alternates between the elongation of a vertical spring and the rotation of an object at the end of that spring.
Coupled oscillators are a common description of two related, but different phenomena. One case is where both oscillations affect each other mutually, which usually leads to the occurrence of a single, entrained oscillation state, where both oscillate with a compromise frequency. Another case is where one external oscillation affects an internal oscillation, but is not affected by this. In this case the regions of synchronization, known as Arnold Tongues, can lead to highly complex phenomena as for instance chaotic dynamics.
Small oscillation approximation
In physics, a system with a set of conservative forces and an equilibrium point can be approximated as a harmonic oscillator near equilibrium. An example of this is the Lennard-Jones potential, where the potential is given by:
U(r) = 4ε [ (σ/r)¹² − (σ/r)⁶ ].
The equilibrium points of the function are then found by setting dU/dr = 0:
r₀ = 2^(1/6) σ.
The second derivative is then found, and used to be the effective potential constant:
k_eff = U''(r₀) = 36·2^(2/3) ε/σ².
The system will undergo oscillations near the equilibrium point. The force that creates these oscillations is derived from the effective potential constant above:
F = −k_eff (r − r₀) = m d²r/dt².
This differential equation can be re-written in the form of a simple harmonic oscillator:
d²x/dt² + (k_eff/m) x = 0, with x = r − r₀.
Thus, the frequency of small oscillations is:
ω₀ = √(k_eff / m).
Or, in general form
ω₀ = √( U''(x₀) / m ).
This approximation can be better understood by looking at the potential curve of the system. By thinking of the potential curve as a hill, in which, if one placed a ball anywhere on the curve, the ball would roll down with the slope of the potential curve. This is true due to the relationship between potential energy and force.
By thinking of the potential in this way, one will see that at any local minimum there is a "well" in which the ball would roll back and forth (oscillate) between and . This approximation is also useful for thinking of Kepler orbits.
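As a worked example of the procedure above, the effective spring constant of the Lennard-Jones well can be estimated with a numerical second derivative; the ε, σ and mass values below are argon-like numbers assumed purely for illustration.

import numpy as np

eps, sig, mass = 1.65e-21, 3.4e-10, 6.63e-26   # J, m, kg (assumed, roughly argon)

def U(r):
    return 4.0 * eps * ((sig / r)**12 - (sig / r)**6)

r0 = 2.0**(1.0 / 6.0) * sig                    # equilibrium separation, where dU/dr = 0
h = 1e-14                                      # small step for the finite difference
k_eff = (U(r0 + h) - 2.0 * U(r0) + U(r0 - h)) / h**2   # central-difference U''(r0)

print(k_eff)                        # effective spring constant, ~0.8 N/m here
print(np.sqrt(k_eff / mass))        # small-oscillation angular frequency, ~3.5e12 rad/s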
Continuous systems – waves
As the number of degrees of freedom becomes arbitrarily large, a system approaches continuity; examples include a string or the surface of a body of water. Such systems have (in the classical limit) an infinite number of normal modes and their oscillations occur in the form of waves that can characteristically propagate.
Mathematics
The mathematics of oscillation deals with the quantification of the amount that a sequence or function tends to move between extremes. There are several related notions: oscillation of a sequence of real numbers, oscillation of a real-valued function at a point, and oscillation of a function on an interval (or open set).
Examples
Mechanical
Double pendulum
Foucault pendulum
Helmholtz resonator
Oscillations in the Sun (helioseismology), stars (asteroseismology) and Neutron-star oscillations.
Quantum harmonic oscillator
Playground swing
String instruments
Torsional vibration
Tuning fork
Vibrating string
Wilberforce pendulum
Lever escapement
Electrical
Alternating current
Armstrong (or Tickler or Meissner) oscillator
Astable multivibrator
Blocking oscillator
Butler oscillator
Clapp oscillator
Colpitts oscillator
Delay-line oscillator
Electronic oscillator
Extended interaction oscillator
Hartley oscillator
Oscillistor
Phase-shift oscillator
Pierce oscillator
Relaxation oscillator
RLC circuit
Royer oscillator
Vačkář oscillator
Wien bridge oscillator
Electro-mechanical
Crystal oscillator
Optical
Laser (oscillation of electromagnetic field with frequency of order 10¹⁵ Hz)
Oscillator Toda or self-pulsation (pulsation of output power of laser at frequencies 10⁴ Hz – 10⁶ Hz in the transient regime)
Quantum oscillator may refer to an optical local oscillator, as well as to a usual model in quantum optics.
Biological
Circadian rhythm
Bacterial Circadian Rhythms
Circadian oscillator
Lotka–Volterra equation
Neural oscillation
Oscillating gene
Segmentation clock
Human oscillation
Neural oscillation
Insulin release oscillations
gonadotropin releasing hormone pulsations
Pilot-induced oscillation
Voice production
Economic and social
Business cycle
Generation gap
Malthusian economics
News cycle
Climate and geophysics
Atlantic multidecadal oscillation
Chandler wobble
Climate oscillation
El Niño-Southern Oscillation
Pacific decadal oscillation
Quasi-biennial oscillation
Astrophysics
Neutron stars
Cyclic Model
Quantum mechanical
Neutral particle oscillation, e.g. neutrino oscillations
Quantum harmonic oscillator
Chemical
Belousov–Zhabotinsky reaction
Mercury beating heart
Briggs–Rauscher reaction
Bray–Liebhafsky reaction
Computing
Cellular Automata oscillator
See also
Antiresonance
Beat (acoustics)
BIBO stability
Critical speed
Cycle (music)
Dynamical system
Earthquake engineering
Feedback
Fourier transform for computing periodicity in evenly spaced data
Frequency
Hidden oscillation
Madden–Julian oscillation
Least-squares spectral analysis for computing periodicity in unevenly spaced data
Oscillator phase noise
Periodic function
Phase noise
Quasiperiodicity
Reciprocating motion
Resonator
Rhythm
Seasonality
Self-oscillation
Signal generator
Squegging
Strange attractor
Structural stability
Tuned mass damper
Vibration
Vibrator (mechanical)
References
External links
Vibrations – a chapter from an online textbook
Four-momentum
In special relativity, four-momentum (also called momentum–energy or momenergy) is the generalization of the classical three-dimensional momentum to four-dimensional spacetime. Momentum is a vector in three dimensions; similarly four-momentum is a four-vector in spacetime. The contravariant four-momentum of a particle with relativistic energy E and three-momentum p = (px, py, pz) = γmv, where v is the particle's three-velocity and γ the Lorentz factor, is
p^μ = (p⁰, p¹, p², p³) = (E/c, px, py, pz).
The quantity mv above is the ordinary non-relativistic momentum of the particle and m its rest mass. The four-momentum is useful in relativistic calculations because it is a Lorentz covariant vector. This means that it is easy to keep track of how it transforms under Lorentz transformations.
Minkowski norm
Calculating the Minkowski norm squared of the four-momentum gives a Lorentz invariant quantity equal (up to factors of the speed of light c) to the square of the particle's proper mass:
p · p = η_μν p^μ p^ν = −E²/c² + |p|² = −m²c²,
where η = diag(−1, 1, 1, 1) is the metric tensor of special relativity with metric signature for definiteness chosen to be (−, +, +, +). The negativity of the norm reflects that the momentum is a timelike four-vector for massive particles. The other choice of signature would flip signs in certain formulas (like for the norm here). This choice is not important, but once made it must for consistency be kept throughout.
The Minkowski norm is Lorentz invariant, meaning its value is not changed by Lorentz transformations/boosting into different frames of reference. More generally, for any two four-momenta p and q, the quantity p · q is invariant.
Relation to four-velocity
For a massive particle, the four-momentum is given by the particle's invariant mass m multiplied by the particle's four-velocity u^μ,
p^μ = m u^μ,
where the four-velocity is
u^μ = dx^μ/dτ = γ(c, vx, vy, vz),
and
γ = 1/√(1 − v²/c²)
is the Lorentz factor (associated with the speed v), and c is the speed of light.
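A small numerical check of the relations above and of the invariance of the Minkowski norm (not part of the original text); units with c = 1, and the particle mass, speed and boost velocity below are assumed for illustration.

import numpy as np

# Four-momentum p^mu = m * u^mu = (gamma*m*c, gamma*m*v), checked before and after a boost.
c, m = 1.0, 1.0
v = 0.6                                   # particle speed along x
gamma = 1.0 / np.sqrt(1.0 - v**2 / c**2)
p = np.array([gamma * m * c, gamma * m * v, 0.0, 0.0])

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # metric signature (-, +, +, +) as in the text
print(p @ eta @ p)                        # -> -(m*c)**2 = -1.0

beta_boost = 0.8                          # boost the frame along x
g = 1.0 / np.sqrt(1.0 - beta_boost**2)
L = np.array([[g, -g * beta_boost, 0.0, 0.0],
              [-g * beta_boost, g, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
p_boosted = L @ p
print(p_boosted @ eta @ p_boosted)        # unchanged: the norm is Lorentz invariant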
Derivation
There are several ways to arrive at the correct expression for four-momentum. One way is to first define the four-velocity u = dx/dτ and simply define p = mu, being content that it is a four-vector with the correct units and correct behavior. Another, more satisfactory, approach is to begin with the principle of least action and use the Lagrangian framework to derive the four-momentum, including the expression for the energy. One may at once, using the observations detailed below, define four-momentum from the action S. Given that in general for a closed system with generalized coordinates q_i and canonical momenta p_i,
p_i = ∂S/∂q_i,  E = −∂S/∂t,
it is immediate (recalling x⁰ = ct, x¹ = x, x² = y, x³ = z and x₀ = −x⁰, x₁ = x¹, x₂ = x², x₃ = x³ in the present metric convention) that
p_μ = −∂S/∂x^μ = (E/c, −p)
is a covariant four-vector with the three-vector part being the (negative of) canonical momentum.
Consider initially a system of one degree of freedom . In the derivation of the equations of motion from the action using Hamilton's principle, one finds (generally) in an intermediate stage for the variation of the action,
The assumption is then that the varied paths satisfy , from which Lagrange's equations follow at once. When the equations of motion are known (or simply assumed to be satisfied), one may let go of the requirement . In this case the path is assumed to satisfy the equations of motion, and the action is a function of the upper integration limit , but is still fixed. The above equation becomes with , and defining , and letting in more degrees of freedom,
Observing that
one concludes
In a similar fashion, keep endpoints fixed, but let vary. This time, the system is allowed to move through configuration space at "arbitrary speed" or with "more or less energy", the field equations still assumed to hold and variation can be carried out on the integral, but instead observe
by the fundamental theorem of calculus. Compute using the above expression for canonical momenta,
Now using
where is the Hamiltonian, leads to, since in the present case,
Incidentally, using with in the above equation yields the Hamilton–Jacobi equations. In this context, is called Hamilton's principal function.
The action S is given by
S = ∫ L dt = −mc² ∫ dτ,
where L = −mc²√(1 − v²/c²) is the relativistic Lagrangian for a free particle. From this,
The variation of the action is
To calculate , observe first that and that
So
or
and thus
which is just
where the second step employs the field equations , , and as in the observations above. Now compare the last three expressions to find
with norm −m²c², and the famed result for the relativistic energy,
E = m_rel c² = γmc²,
where m_rel = γm is the now unfashionable relativistic mass, follows. By comparing the expressions for momentum and energy directly, one has
p = (E/c²) v,
that holds for massless particles as well. Squaring the expressions for energy and three-momentum and relating them gives the energy–momentum relation,
E² = (pc)² + (mc²)².
Substituting p_μ = −∂S/∂x^μ
in the equation for the norm gives the relativistic Hamilton–Jacobi equation,
η^μν (∂S/∂x^μ)(∂S/∂x^ν) = −m²c².
It is also possible to derive the results from the Lagrangian directly. By definition,
p = ∂L/∂v,  E = p · v − L,
which constitute the standard formulae for canonical momentum and energy of a closed (time-independent Lagrangian) system. With this approach it is less clear that the energy and momentum are parts of a four-vector.
The energy and the three-momentum are separately conserved quantities for isolated systems in the Lagrangian framework. Hence four-momentum is conserved as well. More on this below.
More pedestrian approaches include expected behavior in electrodynamics. In this approach, the starting point is application of Lorentz force law and Newton's second law in the rest frame of the particle. The transformation properties of the electromagnetic field tensor, including invariance of electric charge, are then used to transform to the lab frame, and the resulting expression (again Lorentz force law) is interpreted in the spirit of Newton's second law, leading to the correct expression for the relativistic three-momentum. The disadvantage, of course, is that it isn't immediately clear that the result applies to all particles, whether charged or not, and that it doesn't yield the complete four-vector.
It is also possible to avoid electromagnetism and use well-tuned thought experiments involving well-trained physicists throwing billiard balls, utilizing knowledge of the velocity addition formula and assuming conservation of momentum. This too gives only the three-vector part.
Conservation of four-momentum
As shown above, there are three conservation laws (not independent, the last two imply the first and vice versa):
The four-momentum (either covariant or contravariant) is conserved.
The total energy is conserved.
The 3-space momentum p = γmv is conserved (not to be confused with the classic non-relativistic momentum mv).
Note that the invariant mass of a system of particles may be more than the sum of the particles' rest masses, since kinetic energy in the system center-of-mass frame and potential energy from forces between the particles contribute to the invariant mass. As an example, two particles with four-momenta (5 GeV/c, 4 GeV/c, 0, 0) and (5 GeV/c, −4 GeV/c, 0, 0) each have (rest) mass 3 GeV/c² separately, but their total mass (the system mass) is 10 GeV/c². If these particles were to collide and stick, the mass of the composite object would be 10 GeV/c².
One practical application from particle physics of the conservation of the invariant mass involves combining the four-momenta p_A and p_B of two daughter particles produced in the decay of a heavier particle with four-momentum p_C to find the mass of the heavier particle. Conservation of four-momentum gives p_C^μ = p_A^μ + p_B^μ, while the mass M of the heavier particle is given by −p_C · p_C = M²c². By measuring the energies and three-momenta of the daughter particles, one can reconstruct the invariant mass of the two-particle system, which must be equal to M. This technique is used, e.g., in experimental searches for Z′ bosons at high-energy particle colliders, where the Z′ boson would show up as a bump in the invariant mass spectrum of electron–positron or muon–antimuon pairs.
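The reconstruction described above is easy to reproduce numerically; the sketch below uses the illustrative daughter four-momenta from the previous paragraph (GeV units, c = 1) and the same (−, +, +, +) convention.

import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # metric with signature (-, +, +, +)

def invariant_mass(p):
    return np.sqrt(-(p @ eta @ p))            # m = sqrt(-p.p) in this signature

p1 = np.array([5.0,  4.0, 0.0, 0.0])          # (E, px, py, pz) of daughter 1
p2 = np.array([5.0, -4.0, 0.0, 0.0])          # (E, px, py, pz) of daughter 2

print(invariant_mass(p1), invariant_mass(p2)) # 3.0 and 3.0 GeV/c^2
print(invariant_mass(p1 + p2))                # 10.0 GeV/c^2 for the combined system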
If the mass of an object does not change, the Minkowski inner product of its four-momentum and corresponding four-acceleration A^μ is simply zero. The four-acceleration is proportional to the proper time derivative of the four-momentum divided by the particle's mass, so
p_μ A^μ = (1/m) p_μ dp^μ/dτ = (1/2m) d(p · p)/dτ = (1/2m) d(−m²c²)/dτ = 0.
Canonical momentum in the presence of an electromagnetic potential
For a charged particle of charge q, moving in an electromagnetic field given by the electromagnetic four-potential:
A^μ = (φ/c, Ax, Ay, Az),
where φ is the scalar potential and A = (Ax, Ay, Az) the vector potential, the components of the (not gauge-invariant) canonical momentum four-vector are
P^μ = p^μ + qA^μ.
This, in turn, allows the potential energy from the charged particle in an electrostatic potential and the Lorentz force on the charged particle moving in a magnetic field to be incorporated in a compact way, in relativistic quantum mechanics.
Four-momentum in curved spacetime
In the case when there is a moving physical system with a continuous distribution of matter in curved spacetime, the primary expression for four-momentum is a four-vector with a covariant index:
Four-momentum is expressed through the energy of physical system and relativistic momentum . At the same time, the four-momentum can be represented as the sum of two non-local four-vectors of integral type:
Four-vector is the generalized four-momentum associated with the action of fields on particles; four-vector is the four-momentum of the fields arising from the action of particles on the fields.
Energy and momentum , as well as components of four-vectors and can be calculated if the Lagrangian density of the system is given. The following formulas are obtained for the energy and momentum of the system:
Here is that part of the Lagrangian density that contains terms with four-currents; is the velocity of matter particles; is the time component of four-velocity of particles; is determinant of metric tensor; is the part of the Lagrangian associated with the Lagrangian density ; is velocity of a particle of matter with number .
See also
Four-force
Four-gradient
Pauli–Lubanski pseudovector
References
Wikisource version
Four-vectors
Momentum
Biophysics
Biophysics is an interdisciplinary science that applies approaches and methods traditionally used in physics to study biological phenomena. Biophysics covers all scales of biological organization, from molecular to organismic and populations. Biophysical research shares significant overlap with biochemistry, molecular biology, physical chemistry, physiology, nanotechnology, bioengineering, computational biology, biomechanics, developmental biology and systems biology.
The term biophysics was originally introduced by Karl Pearson in 1892. The term biophysics is also regularly used in academia to indicate the study of the physical quantities (e.g. electric current, temperature, stress, entropy) in biological systems. Other biological sciences also perform research on the biophysical properties of living organisms including molecular biology, cell biology, chemical biology, and biochemistry.
Overview
Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions.
Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) both with X-rays and neutrons (SAXS/SANS) are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM, can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules.
In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology involving both experimental and theoretical tools. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems. Biophysical models are used extensively in the study of electrical conduction in single neurons, as well as neural circuit analysis in both tissue and whole brain.
Medical physics, a branch of biophysics, is any application of physics to medicine or healthcare, ranging from radiology to microscopy and nanomedicine. For example, physicist Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines (see nanomachines). Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay There's Plenty of Room at the Bottom.
History
The studies of Luigi Galvani (1737–1798) laid groundwork for the later field of biophysics. Some of the earlier studies in biophysics were conducted in the 1840s by a group known as the Berlin school of physiologists. Among its members were pioneers such as Hermann von Helmholtz, Ernst Heinrich Weber, Carl F. W. Ludwig, and Johannes Peter Müller.
William T. Bovie (1882–1958) is credited as a leader of the field's further development in the mid-20th century. He was a leader in developing electrosurgery.
The popularity of the field rose when the book What Is Life? by Erwin Schrödinger was published. Since 1957, biophysicists have organized themselves into the Biophysical Society, which now has about 9,000 members around the world.
Some authors such as Robert Rosen criticize biophysics on the ground that the biophysical method does not take into account the specificity of biological phenomena.
Focus as a subfield
While some colleges and universities have dedicated departments of biophysics, usually at the graduate level, many do not have university-level biophysics departments, instead having groups in related departments such as biochemistry, cell biology, chemistry, computer science, engineering, mathematics, medicine, molecular biology, neuroscience, pharmacology, physics, and physiology. Depending on the strengths of a department at a university, differing emphasis will be given to fields of biophysics. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is hardly all-inclusive. Nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules and there is much overlap between departments.
Biology and molecular biology – Gene regulation, single protein dynamics, bioenergetics, patch clamping, biomechanics, virophysics.
Structural biology – Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof.
Biochemistry and chemistry – biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships.
Computer science – Neural networks, biomolecular and drug databases.
Computational chemistry – molecular dynamics simulation, molecular docking, quantum chemistry
Bioinformatics – sequence alignment, structural alignment, protein structure prediction
Mathematics – graph/network theory, population modeling, dynamical systems, phylogenetics.
Medicine – biophysical research that emphasizes medicine. Medical biophysics is a field closely related to physiology. It explains various aspects and systems of the body from a physical and mathematical perspective. Examples are fluid dynamics of blood flow, gas physics of respiration, radiation in diagnostics/treatment and much more. Biophysics is taught as a preclinical subject in many medical schools, mainly in Europe.
Neuroscience – studying neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity.
Pharmacology and physiology – channelomics, electrophysiology, biomolecular interactions, cellular membranes, polyketides.
Physics – negentropy, stochastic processes, and the development of new physical techniques and instrumentation as well as their application.
Quantum biology – The field of quantum biology applies quantum mechanics to biological objects and problems, for example the study of decohered isomers yielding time-dependent base substitutions. These studies imply applications in quantum computing.
Agronomy and agriculture
Many biophysical techniques are unique to this field. Research efforts in biophysics are often initiated by scientists who were biologists, chemists or physicists by training.
See also
Biophysical Society
Index of biophysics articles
List of publications in biology – Biophysics
List of publications in physics – Biophysics
List of biophysicists
Outline of biophysics
Biophysical chemistry
European Biophysical Societies' Association
Mathematical and theoretical biology
Medical biophysics
Membrane biophysics
Molecular biophysics
Neurophysics
Physiomics
Virophysics
Single-particle trajectory
References
Sources
External links
Biophysical Society
Journal of Physiology: 2012 virtual issue Biophysics and Beyond
bio-physics-wiki
Link archive of learning resources for students: biophysika.de (60% English, 40% German)
Applied and interdisciplinary physics
Perpetual motion
Perpetual motion is the motion of bodies that continues forever in an unperturbed system. A perpetual motion machine is a hypothetical machine that can do work indefinitely without an external energy source. This kind of machine is impossible, since its existence would violate the first and/or second laws of thermodynamics.
These laws of thermodynamics apply regardless of the size of the system. For example, the motions and rotations of celestial bodies such as planets may appear perpetual, but are actually subject to many processes that slowly dissipate their kinetic energy, such as solar wind, interstellar medium resistance, gravitational radiation and thermal radiation, so they will not keep moving forever.
Thus, machines that extract energy from finite sources cannot operate indefinitely because they are driven by the energy stored in the source, which will eventually be exhausted. A common example is devices powered by ocean currents, whose energy is ultimately derived from the Sun, which itself will eventually burn out.
In 2016, new states of matter, time crystals, were discovered in which, on a microscopic scale, the component atoms are in continual repetitive motion, thus satisfying the literal definition of "perpetual motion". However, these do not constitute perpetual motion machines in the traditional sense, or violate thermodynamic laws, because they are in their quantum ground state, so no energy can be extracted from them; they exhibit motion without energy.
History
The history of perpetual motion machines dates back to the Middle Ages. For millennia, it was not clear whether perpetual motion devices were possible or not, until the development of modern theories of thermodynamics showed that they were impossible. Despite this, many attempts have been made to create such machines, continuing into modern times. Modern designers and proponents often use other terms, such as "over unity", to describe their inventions.
Basic principles
There is a scientific consensus that perpetual motion in an isolated system violates either the first law of thermodynamics, the second law of thermodynamics, or both. The first law of thermodynamics is a version of the law of conservation of energy. The second law can be phrased in several different ways, the most intuitive of which is that heat flows spontaneously from hotter to colder places; relevant here is that the law observes that in every macroscopic process, there is friction or something close to it; another statement is that no heat engine (an engine which produces work while moving heat from a high temperature to a low temperature) can be more efficient than a Carnot heat engine operating between the same two temperatures.
In other words:
In any isolated system, one cannot create new energy (law of conservation of energy). As a result, the thermal efficiency—the produced work power divided by the input heating power—cannot be greater than one.
The output work power of heat engines is always smaller than the input heating power. The rest of the heat energy supplied is wasted as heat to the ambient surroundings. The thermal efficiency therefore has a maximum, given by the Carnot efficiency, which is always less than one.
The efficiency of real heat engines is even lower than the Carnot efficiency due to irreversibility arising from the speed of processes, including friction.
Statements 2 and 3 apply to heat engines. Other types of engines that convert e.g. mechanical into electromagnetic energy, cannot operate with 100% efficiency, because it is impossible to design any system that is free of energy dissipation.
Machines that comply with both laws of thermodynamics by accessing energy from unconventional sources are sometimes referred to as perpetual motion machines, although they do not meet the standard criteria for the name. By way of example, clocks and other low-power machines, such as Cox's timepiece, have been designed to run on the differences in barometric pressure or temperature between night and day. These machines have a source of energy, albeit one which is not readily apparent, so that they only seem to violate the laws of thermodynamics.
Even machines that extract energy from long-lived sources - such as ocean currents - will run down when their energy sources inevitably do. They are not perpetual motion machines because they are consuming energy from an external source and are not isolated systems.
Classification
One classification of perpetual motion machines refers to the particular law of thermodynamics the machines purport to violate:
A perpetual motion machine of the first kind produces work without the input of energy. It thus violates the first law of thermodynamics: the law of conservation of energy.
A perpetual motion machine of the second kind is a machine that spontaneously converts thermal energy into mechanical work. When the thermal energy is equivalent to the work done, this does not violate the law of conservation of energy. However, it does violate the more subtle second law of thermodynamics in a cyclic process (see also entropy). The signature of a perpetual motion machine of the second kind is that there is only one heat reservoir involved, which is being spontaneously cooled without involving a transfer of heat to a cooler reservoir. This conversion of heat into useful work, without any side effect, is impossible, according to the second law of thermodynamics.
A perpetual motion machine of the third kind is defined as one that completely eliminates friction and other dissipative forces, to maintain motion forever due to its mass inertia (third in this case refers solely to the position in the above classification scheme, not the third law of thermodynamics). It is impossible to make such a machine, as dissipation can never be completely eliminated in a mechanical system, no matter how close a system gets to this ideal (see examples at below).
Impossibility
"Epistemic impossibility" describes things which absolutely cannot occur within our current formulation of the physical laws. This interpretation of the word "impossible" is what is intended in discussions of the impossibility of perpetual motion in a closed system.
The conservation laws are particularly robust from a mathematical perspective. Noether's theorem, which was proven mathematically in 1915, states that any conservation law can be derived from a corresponding continuous symmetry of the action of a physical system. The symmetry which is equivalent to conservation of energy is the time invariance of physical laws. Therefore, if the laws of physics do not change with time, then the conservation of energy follows. For energy conservation to be violated so as to allow perpetual motion, the foundations of physics would have to change.
Scientific investigations as to whether the laws of physics are invariant over time use telescopes to examine the universe in the distant past to discover, to the limits of our measurements, whether ancient stars were identical to stars today. Combining different measurements such as spectroscopy, direct measurement of the speed of light in the past and similar measurements demonstrates that physics has remained substantially the same, if not identical, for all of observable time spanning billions of years.
The principles of thermodynamics are so well established, both theoretically and experimentally, that proposals for perpetual motion machines are universally dismissed by physicists. Any proposed perpetual motion design offers a potentially instructive challenge to physicists: one is certain that it cannot work, so one must explain how it fails to work. The difficulty (and the value) of such an exercise depends on the subtlety of the proposal; the best ones tend to arise from physicists' own thought experiments and often shed light upon certain aspects of physics. So, for example, the thought experiment of a Brownian ratchet as a perpetual motion machine was first discussed by Gabriel Lippmann in 1900 but it was not until 1912 that Marian Smoluchowski gave an adequate explanation for why it cannot work. However, during that twelve-year period scientists did not believe that the machine was possible. They were merely unaware of the exact mechanism by which it would inevitably fail.
In the mid-19th-century Henry Dircks investigated the history of perpetual motion experiments, writing a vitriolic attack on those who continued to attempt what he believed to be impossible:
Techniques
Some common ideas recur repeatedly in perpetual motion machine designs. Many ideas that continue to appear today were stated as early as 1670 by John Wilkins, Bishop of Chester and an official of the Royal Society. He outlined three potential sources of power for a perpetual motion machine: "Chymical Extractions", "Magnetical Virtues" and "the Natural Affection of Gravity".
The seemingly mysterious ability of magnets to influence motion at a distance without any apparent energy source has long appealed to inventors. One of the earliest examples of a magnetic motor was proposed by Wilkins and has been widely copied since: it consists of a ramp with a magnet at the top, which pulled a metal ball up the ramp. Near the magnet was a small hole that was supposed to allow the ball to drop under the ramp and return to the bottom, where a flap allowed it to return to the top again. However, if the magnet is strong enough to pull the ball up the ramp, it is then too strong to allow gravity to pull the ball through the hole. Faced with this problem, more modern versions typically use a series of ramps and magnets, positioned so that the ball is handed off from one magnet to another as it moves. The problem remains the same.
Gravity also acts at a distance, without an apparent energy source, but to get energy out of a gravitational field (for instance, by dropping a heavy object, producing kinetic energy as it falls) one has to put energy in (for instance, by lifting the object up), and some energy is always dissipated in the process. A typical application of gravity in a perpetual motion machine is Bhaskara's wheel in the 12th century, whose key idea is itself a recurring theme, often called the overbalanced wheel: moving weights are attached to a wheel in such a way that they fall to a position further from the wheel's center for one half of the wheel's rotation, and closer to the center for the other half. Since weights further from the center apply a greater torque, it was thought that the wheel would rotate forever. However, since the side with weights further from the center has fewer weights than the other side, at that moment, the torque is balanced and perpetual movement is not achieved. The moving weights may be hammers on pivoted arms, or rolling balls, or mercury in tubes; the principle is the same.
Another theoretical machine involves a frictionless environment for motion. This involves the use of diamagnetic or electromagnetic levitation to float an object. This is done in a vacuum to eliminate air friction and friction from an axle. The levitated object is then free to rotate around its center of gravity without interference. However, this machine has no practical purpose because the rotated object cannot do any work as work requires the levitated object to cause motion in other objects, bringing friction into the problem. Furthermore, a perfect vacuum is an unattainable goal since both the container and the object itself would slowly vaporize, thereby degrading the vacuum.
To extract work from heat, thus producing a perpetual motion machine of the second kind, the most common approach (dating back at least to Maxwell's demon) is unidirectionality. Only molecules moving fast enough and in the right direction are allowed through the demon's trap door. In a Brownian ratchet, forces tending to turn the ratchet one way are able to do so while forces in the other direction are not. A diode in a heat bath allows through currents in one direction and not the other. These schemes typically fail in two ways: either maintaining the unidirectionality costs energy (requiring Maxwell's demon to perform more thermodynamic work to gauge the speed of the molecules than the amount of energy gained by the difference of temperature caused) or the unidirectionality is an illusion and occasional big violations make up for the frequent small non-violations (the Brownian ratchet will be subject to internal Brownian forces and therefore will sometimes turn the wrong way).
Buoyancy is another frequently misunderstood phenomenon. Some proposed perpetual-motion machines miss the fact that to push a volume of air down in a fluid takes the same work as to raise a corresponding volume of fluid up against gravity. These types of machines may involve two chambers with pistons, and a mechanism to squeeze the air out of the top chamber into the bottom one, which then becomes buoyant and floats to the top. The squeezing mechanism in these designs would not be able to do enough work to move the air down, or would leave no excess work available to be extracted.
Patents
Proposals for such inoperable machines have become so common that the United States Patent and Trademark Office (USPTO) has made an official policy of refusing to grant patents for perpetual motion machines without a working model. The USPTO Manual of Patent Examining Practice states:
And, further, that:
The filing of a patent application is a clerical task, and the USPTO will not refuse filings for perpetual motion machines; the application will be filed and then most probably rejected by the patent examiner, after he has done a formal examination. Even if a patent is granted, it does not mean that the invention actually works, it just means that the examiner believes that it works, or was unable to figure out why it would not work.
The United Kingdom Patent Office has a specific practice on perpetual motion; Section 4.05 of the UKPO Manual of Patent Practice states:
Examples of decisions by the UK Patent Office to refuse patent applications for perpetual motion machines include:
Decision BL O/044/06, John Frederick Willmott's application no. 0502841
Decision BL O/150/06, Ezra Shimshi's application no. 0417271
The European Patent Classification (ECLA) has classes for patent applications on perpetual motion systems: ECLA class "F03B17/04: Alleged perpetua mobilia" and class "F03B17/00B: Installations wherein the liquid circulates in a closed loop; Alleged perpetua mobilia of this or similar kind".
Apparent perpetual motion machines
As a perpetual motion machine can only be defined in a finite isolated system with discrete parameters, and since true isolated systems do not exist (among other things, due to quantum uncertainty), "perpetual motion" in the context of this article is better discussed in terms of a "perpetual motion machine": a machine is "a device that directs and controls energy, often in the form of movement or electricity, to produce a certain effect", whereas "motion" is simply movement (such as Brownian motion). Distinctions aside, on the macro scale, there are concepts and technical drafts that propose "perpetual motion", but on closer analysis it is revealed that they actually "consume" some sort of natural resource or latent energy, such as the phase changes of water or other fluids or small natural temperature gradients, or simply cannot sustain indefinite operation. In general, extracting work from these devices is impossible.
Resource consuming
Some examples of such devices include:
The drinking bird toy functions using small ambient temperature gradients and evaporation. It runs until all water is evaporated.
A capillary action-based water pump functions using small ambient temperature gradients and vapour pressure differences. With the "capillary bowl", it was thought that the capillary action would keep the water flowing in the tube, but since the cohesion force that draws the liquid up the tube in the first place holds the droplet from releasing into the bowl, the flow is not perpetual.
A Crookes radiometer consists of a partial vacuum glass container with a lightweight propeller moved by (light-induced) temperature gradients.
Any device picking up minimal amounts of energy from the natural electromagnetic radiation around it, such as a solar-powered motor.
Any device powered by changes in air pressure, such as some clocks (Cox's timepiece, Beverly Clock). The motion leeches energy from moving air, which in turn gained its energy from being acted on by its surroundings.
A heat pump, because it has a coefficient of performance (COP) above 1: the energy it consumes as work is less than the energy it moves as heat (see the numeric sketch after this list).
The Atmos clock uses changes in the vapor pressure of ethyl chloride with temperature to wind the clock spring.
A device powered by induced nuclear reactions or by radioactive decay from an isotope with a relatively long half-life; such a device could plausibly operate for hundreds or thousands of years.
The Oxford Electric Bell and the Karpen pile are driven by dry pile batteries.
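The heat-pump item above deserves a numeric illustration, because a COP greater than 1 is sometimes mistaken for over-unity operation. The figures below are assumed, typical-order-of-magnitude values only; the point is that the extra heat delivered indoors is drawn from the cold surroundings, so the first law still balances.

```python
# A heat pump moving heat from cold outdoor air to a warm interior.
# Illustrative numbers only.
W_input = 1000.0      # J of electrical work consumed (assumed)
Q_delivered = 3500.0  # J of heat delivered indoors (assumed)

cop = Q_delivered / W_input          # coefficient of performance
Q_extracted = Q_delivered - W_input  # heat drawn from the outdoor air

print(f"COP: {cop:.1f}")                           # 3.5 > 1, yet nothing is violated
print(f"Heat taken from outdoors: {Q_extracted:.0f} J")
# First-law check: work in + heat extracted = heat delivered.
assert abs(W_input + Q_extracted - Q_delivered) < 1e-9
```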
Low friction
In flywheel energy storage, "modern flywheels can have a zero-load rundown time measurable in years".
Once spun up, objects in the vacuum of space—stars, black holes, planets, moons, spin-stabilized satellites, etc.—dissipate energy very slowly, allowing them to spin for long periods. Tides on Earth are dissipating the gravitational energy of the Moon/Earth system at an average rate of about 3.75 terawatts.
In certain quantum-mechanical systems (such as superfluidity and superconductivity), very low friction movement is possible. However, the motion stops when the system reaches an equilibrium state (e.g. all the liquid helium arrives at the same level). Similarly, seemingly entropy-reversing effects like superfluids climbing the walls of containers operate by ordinary capillary action.
Thought experiments
In some cases a thought experiment appears to suggest that perpetual motion may be possible through accepted and understood physical processes. However, in all cases, a flaw has been found when all of the relevant physics is considered. Examples include:
Maxwell's demon: This was originally proposed to show that the second law of thermodynamics applied in the statistical sense only, by postulating a "demon" that could select energetic molecules and extract their energy. Subsequent analysis (and experiment) have shown there is no way to physically implement such a system that does not result in an overall increase in entropy.
Brownian ratchet: In this thought experiment, one imagines a paddle wheel connected to a ratchet. Brownian motion would cause surrounding gas molecules to strike the paddles, but the ratchet would only allow it to turn in one direction. A more thorough analysis showed that when a physical ratchet was considered at this molecular scale, Brownian motion would also affect the ratchet and cause it to randomly fail resulting in no net gain. Thus, the device would not violate the laws of thermodynamics.
Vacuum energy and zero-point energy: In order to explain effects such as virtual particles and the Casimir effect, many formulations of quantum physics include a background energy which pervades empty space, known as vacuum or zero-point energy. The ability to harness zero-point energy for useful work is considered pseudoscience by the scientific community at large. Inventors have proposed various methods for extracting useful work from zero-point energy, but none have been found to be viable, no claims for extraction of zero-point energy have ever been validated by the scientific community, and there is no evidence that zero-point energy can be used in violation of conservation of energy.
Ellipsoid paradox: This paradox considers a perfectly reflecting cavity with two black bodies at points A and B. The reflecting surface is composed of two elliptical sections E1 and E2 and a spherical section S, and the bodies at A and B are located at the joint foci of the two ellipses, with B at the center of S. This configuration is such that the black body at B apparently heats up relative to A: the radiation originating from the blackbody at A will land on and be absorbed by the blackbody at B. Similarly, rays originating from point B that land on E1 and E2 will be reflected to A. However, a significant proportion of rays that start from B will land on S and be reflected back to B. This paradox is resolved when the finite sizes of the black bodies are considered instead of point-like black bodies.
Conspiracy theories
Despite being dismissed as pseudoscientific, perpetual motion machines have become the focus of conspiracy theories, alleging that they are being hidden from the public by corporations or governments, who would lose economic control if a power source capable of producing energy cheaply was made available.
See also
Anti-gravity
Faster-than-light
Incredible utility
Johann Bessler
Pathological science
Time travel
Notes
References
External links
The Museum of Unworkable Devices
"Perpetual Motion - Just Isn't." Popular Mechanics, January 1954, pp. 108–111.
In Our Time: Perpetual Motion, BBC discussion with Ruth Gregory, Frank Close and Steven Bramwell, hosted by Melvyn Bragg, first broadcast 24 September 2015.
What is known about perpetual motion in detail, Published on USIIC May 21, 2023
Pseudoscience | 0.785475 | 0.998113 | 0.783992 |
Conservation law | In physics, a conservation law states that a particular measurable property of an isolated physical system does not change as the system evolves over time. Exact conservation laws include conservation of mass-energy, conservation of linear momentum, conservation of angular momentum, and conservation of electric charge. There are also many approximate conservation laws, which apply to such quantities as mass, parity, lepton number, baryon number, strangeness, hypercharge, etc. These quantities are conserved in certain classes of physics processes, but not in all.
A local conservation law is usually expressed mathematically as a continuity equation, a partial differential equation which gives a relation between the amount of the quantity and the "transport" of that quantity. It states that the amount of the conserved quantity at a point or within a volume can only change by the amount of the quantity which flows in or out of the volume.
From Noether's theorem, every differentiable symmetry leads to a conservation law. Other conserved quantities can exist as well.
Conservation laws as fundamental laws of nature
Conservation laws are fundamental to our understanding of the physical world, in that they describe which processes can or cannot occur in nature. For example, the conservation law of energy states that the total quantity of energy in an isolated system does not change, though it may change form. In general, the total quantity of the property governed by that law remains unchanged during physical processes. With respect to classical physics, conservation laws include conservation of energy, mass (or matter), linear momentum, angular momentum, and electric charge. With respect to particle physics, particles cannot be created or destroyed except in pairs, where one is ordinary and the other is an antiparticle. With respect to symmetries and invariance principles, three special conservation laws have been described, associated with inversion or reversal of space, time, and charge.
Conservation laws are considered to be fundamental laws of nature, with broad application in physics, as well as in other fields such as chemistry, biology, geology, and engineering.
Most conservation laws are exact, or absolute, in the sense that they apply to all possible processes. Some conservation laws are partial, in that they hold for some processes but not for others.
One particularly important result concerning conservation laws is Noether's theorem, which states that there is a one-to-one correspondence between each one of them and a differentiable symmetry of the Universe. For example, the conservation of energy follows from the uniformity of time and the conservation of angular momentum arises from the isotropy of space, i.e. because there is no preferred direction of space. Notably, there is no conservation law associated with time-reversal, although more complex conservation laws combining time-reversal with other symmetries are known.
Exact laws
A partial listing of physical conservation equations due to symmetry that are said to be exact laws, or more precisely have never been proven to be violated:
Another exact symmetry is CPT symmetry, the simultaneous inversion of space and time coordinates, together with swapping all particles with their antiparticles; however being a discrete symmetry Noether's theorem does not apply to it. Accordingly, the conserved quantity, CPT parity, can usually not be meaningfully calculated or determined.
Approximate laws
There are also approximate conservation laws. These are approximately true in particular situations, such as low speeds, short time scales, or certain interactions.
Conservation of mechanical energy
Conservation of mass (approximately true for nonrelativistic speeds)
Conservation of baryon number (See chiral anomaly and sphaleron)
Conservation of lepton number (In the Standard Model)
Conservation of flavor (violated by the weak interaction)
Conservation of strangeness (violated by the weak interaction)
Conservation of space-parity (violated by the weak interaction)
Conservation of charge-parity (violated by the weak interaction)
Conservation of time-parity (violated by the weak interaction)
Conservation of CP parity (violated by the weak interaction); in the Standard Model, this is equivalent to conservation of time-parity.
Global and local conservation laws
The total amount of some conserved quantity in the universe could remain unchanged if an equal amount were to appear at one point A and simultaneously disappear from another separate point B. For example, an amount of energy could appear on Earth without changing the total amount in the Universe if the same amount of energy were to disappear from some other region of the Universe. This weak form of "global" conservation is really not a conservation law because it is not Lorentz invariant, so phenomena like the above do not occur in nature. Due to special relativity, if the appearance of the energy at A and disappearance of the energy at B are simultaneous in one inertial reference frame, they will not be simultaneous in other inertial reference frames moving with respect to the first. In a moving frame one will occur before the other: the energy at A will appear either before or after the energy at B disappears. In either case, energy will not be conserved during the interval.
A stronger form of conservation law requires that, for the amount of a conserved quantity at a point to change, there must be a flow, or flux of the quantity into or out of the point. For example, the amount of electric charge at a point is never found to change without an electric current into or out of the point that carries the difference in charge. Since it only involves continuous local changes, this stronger type of conservation law is Lorentz invariant; a quantity conserved in one reference frame is conserved in all moving reference frames. This is called a local conservation law. Local conservation also implies global conservation; that the total amount of the conserved quantity in the Universe remains constant. All of the conservation laws listed above are local conservation laws. A local conservation law is expressed mathematically by a continuity equation, which states that the change in the quantity in a volume is equal to the total net "flux" of the quantity through the surface of the volume. The following sections discuss continuity equations in general.
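The way a local, flux-based update enforces global conservation can be seen in a small numerical sketch. The model below is a toy: a one-dimensional advected density on a periodic grid with a simple upwind flux, and all parameters are assumed values. Because each cell changes only by the flux through its two faces, the grid total is preserved to rounding error.

```python
import numpy as np

# Toy 1-D conservation law in flux form:
# rho_i(new) = rho_i - (dt/dx) * (F_{i+1/2} - F_{i-1/2}),  with F = u * rho (upwind, u > 0).
n = 200
dx = 1.0 / n
u = 1.0                       # advection speed (assumed)
dt = 0.4 * dx / u             # CFL-stable time step
x = (np.arange(n) + 0.5) * dx
rho = np.exp(-((x - 0.5) ** 2) / 0.01)   # arbitrary initial density bump

total_before = rho.sum() * dx
for _ in range(500):
    flux = u * rho                                     # flux leaving each cell to the right
    rho = rho - (dt / dx) * (flux - np.roll(flux, 1))  # periodic boundaries
total_after = rho.sum() * dx

print(total_before, total_after)   # equal to machine precision: the total is conserved
```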
Differential forms
In continuum mechanics, the most general form of an exact conservation law is given by a continuity equation. For example, conservation of electric charge is
∂ρ/∂t + ∇ ⋅ j = 0
where ∇ ⋅ is the divergence operator, ρ is the density of the conserved quantity q (amount per unit volume), j is the flux of q (amount crossing a unit area in unit time), and t is time.
If we assume that the motion u of the charge is a continuous function of position and time, then
In one space dimension this can be put into the form of a homogeneous first-order quasilinear hyperbolic equation:
where the dependent variable is called the density of a conserved quantity, and is called the current Jacobian, and the subscript notation for partial derivatives has been employed. The more general inhomogeneous case:
is not a conservation equation but the general kind of balance equation describing a dissipative system. The dependent variable is called a nonconserved quantity, and the inhomogeneous term is the source, or dissipation. For example, balance equations of this kind are the momentum and energy Navier-Stokes equations, or the entropy balance for a general isolated system.
In the one-dimensional space a conservation equation is a first-order quasilinear hyperbolic equation that can be put into the advection form:
where the dependent variable is called the density of the conserved (scalar) quantity, and is called the current coefficient, usually corresponding to the partial derivative in the conserved quantity of a current density of the conserved quantity :
In this case since the chain rule applies:
the conservation equation can be put into the current density form:
In a space with more than one dimension the former definition can be extended to an equation that can be put into the form:
where the conserved quantity is , denotes the scalar product, is the nabla operator, here indicating a gradient, and is a vector of current coefficients, analogously corresponding to the divergence of a vector current density associated to the conserved quantity :
This is the case for the continuity equation:
Here the conserved quantity is the mass, with density and current density , identical to the momentum density, while is the flow velocity.
In the general case a conservation equation can be also a system of this kind of equations (a vector equation) in the form:
where is called the conserved (vector) quantity, is its gradient, is the zero vector, and is called the Jacobian of the current density. In fact as in the former scalar case, also in the vector case A(y) usually corresponding to the Jacobian of a current density matrix :
and the conservation equation can be put into the form:
For example, this the case for Euler equations (fluid dynamics). In the simple incompressible case they are:
where:
is the flow velocity vector, with components in a N-dimensional space ,
is the specific pressure (pressure per unit density) giving the source term,
It can be shown that the conserved (vector) quantity and the current density matrix for these equations are respectively:
where denotes the outer product.
Integral and weak forms
Conservation equations can usually also be expressed in integral form: the advantage of the latter is substantially that it requires less smoothness of the solution, which paves the way to weak form, extending the class of admissible solutions to include discontinuous solutions. By integrating in any space-time domain the current density form in 1-D space:
and by using Green's theorem, the integral form is:
In a similar fashion, for the scalar multidimensional space, the integral form is:
where the line integration is performed along the boundary of the domain, in an anticlockwise manner.
Moreover, by defining a test function φ(r,t) continuously differentiable both in time and space with compact support, the weak form can be obtained pivoting on the initial condition. In 1-D space it is:
In the weak form all the partial derivatives of the density and current density have been passed on to the test function, which with the former hypothesis is sufficiently smooth to admit these derivatives.
See also
Invariant (physics)
Momentum
Cauchy momentum equation
Energy
Conservation of energy and the First law of thermodynamics
Conservative system
Conserved quantity
Some kinds of helicity are conserved in dissipationless limit: hydrodynamical helicity, magnetic helicity, cross-helicity.
Principle of mutability
Conservation law of the Stress–energy tensor
Riemann invariant
Philosophy of physics
Totalitarian principle
Convection–diffusion equation
Uniformity of nature
Examples and applications
Advection
Mass conservation, or Continuity equation
Charge conservation
Euler equations (fluid dynamics)
inviscid Burgers equation
Kinematic wave
Conservation of energy
Traffic flow
Notes
References
Philipson, Schuster, Modeling by Nonlinear Differential Equations: Dissipative and Conservative Processes, World Scientific Publishing Company 2009.
Victor J. Stenger, 2000. Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpt. 12 is a gentle introduction to symmetry, invariance, and conservation laws.
E. Godlewski and P.A. Raviart, Hyperbolic systems of conservation laws, Ellipses, 1991.
External links
Conservation Laws – Ch. 11–15 in an online textbook
Scientific laws
Symmetry
Thermodynamic systems | 0.789582 | 0.992904 | 0.783979 |
Elastic energy | Elastic energy is the mechanical potential energy stored in the configuration of a material or physical system as it is subjected to elastic deformation by work performed upon it. Elastic energy occurs when objects are impermanently compressed, stretched or generally deformed in any manner. Elasticity theory primarily develops formalisms for the mechanics of solid bodies and materials. (Note however, the work done by a stretched rubber band is not an example of elastic energy. It is an example of entropic elasticity.) The elastic potential energy equation is used in calculations of positions of mechanical equilibrium. The energy is potential as it will be converted into other forms of energy, such as kinetic energy and sound energy, when the object is allowed to return to its original shape (reformation) by its elasticity.
The essence of elasticity is reversibility. Forces applied to an elastic material transfer energy into the material which, upon yielding that energy to its surroundings, can recover its original shape. However, all materials have limits to the degree of distortion they can endure without breaking or irreversibly altering their internal structure. Hence, the characterizations of solid materials include specification, usually in terms of strains, of its elastic limits. Beyond the elastic limit, a material is no longer storing all of the energy from mechanical work performed on it in the form of elastic energy.
Elastic energy of or within a substance is static energy of configuration. It corresponds to energy stored principally by changing the interatomic distances between nuclei. Thermal energy is the randomized distribution of kinetic energy within the material, resulting in statistical fluctuations of the material about the equilibrium configuration. There is some interaction, however. For example, for some solid objects, twisting, bending, and other distortions may generate thermal energy, causing the material's temperature to rise. Thermal energy in solids is often carried by internal elastic waves, called phonons. Elastic waves that are large on the scale of an isolated object usually produce macroscopic vibrations.
Although elasticity is most commonly associated with the mechanics of solid bodies or materials, even the early literature on classical thermodynamics defines and uses "elasticity of a fluid" in ways compatible with the broad definition provided in the Introduction above.
Solids include complex crystalline materials with sometimes complicated behavior. By contrast, the behavior of compressible fluids, and especially gases, demonstrates the essence of elastic energy with negligible complication. A simple thermodynamic formula applies: dU = −P dV, where dU is an infinitesimal change in recoverable internal energy U, P is the uniform pressure (a force per unit area) applied to the material sample of interest, and dV is the infinitesimal change in volume that corresponds to the change in internal energy. The minus sign appears because dV is negative under compression by a positive applied pressure which also increases the internal energy. Upon reversal, the work that is done by a system is the negative of the change in its internal energy corresponding to the positive dV of an increasing volume. In other words, the system loses stored internal energy when doing work on its surroundings. Pressure is stress and volumetric change corresponds to changing the relative spacing of points within the material. The stress-strain-internal energy relationship of the foregoing formula is repeated in formulations for elastic energy of solid materials with complicated crystalline structure.
Elastic potential energy in mechanical systems
Components of mechanical systems store elastic potential energy if they are deformed when forces are applied to the system. Energy is transferred to an object by work when an external force displaces or deforms the object. The quantity of energy transferred is the vector dot product of the force and the displacement of the object. As forces are applied to the system they are distributed internally to its component parts. While some of the energy transferred can end up stored as the kinetic energy of acquired velocity, the deformation of component objects results in stored elastic energy.
A prototypical elastic component is a coiled spring. The linear elastic performance of a spring is parametrized by a constant of proportionality, called the spring constant. This constant is usually denoted as k (see also Hooke's Law) and depends on the geometry, cross-sectional area, undeformed length and nature of the material from which the coil is fashioned. Within a certain range of deformation, k remains constant and is defined as the negative ratio of the restoring force produced by the spring to the displacement at which that force is produced.
The deformed length, L, can be larger or smaller than Lo, the undeformed length, so to keep k positive, Fr must be given as a vector component of the restoring force whose sign is negative for L>Lo and positive for L< Lo. If the displacement is abbreviated as x = L − Lo, then Hooke's Law can be written in the usual form Fr = −k x.
Energy absorbed and held in the spring can be derived using Hooke's Law to compute the restoring force as a measure of the applied force. This requires the assumption, sufficiently correct in most circumstances, that at a given moment, the magnitude of the applied force equals that of the restoring force.
For each infinitesimal displacement dx, the applied force is simply k x and the product of these is the infinitesimal transfer of energy into the spring dU. The total elastic energy placed into the spring from zero displacement to final length L is thus the integral
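A quick numerical check of this integral is shown below; the spring constant and displacement are arbitrary illustrative values, and the result is compared against the familiar closed form U = (1/2) k x².

```python
import numpy as np

k = 250.0   # N/m, spring constant (assumed)
x = 0.12    # m, final displacement from the natural length (assumed)

# Numerically integrate the applied force k*x' from 0 to x (trapezoidal rule) ...
xs = np.linspace(0.0, x, 10001)
f = k * xs
U_numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(xs))

# ... and compare with the closed form U = (1/2) k x^2.
U_closed = 0.5 * k * x**2
print(U_numeric, U_closed)   # both ~1.8 J
```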
For a material of Young's modulus, Y (same as modulus of elasticity λ), cross sectional area, A0, and initial length, l0, which is stretched by a length Δl, the stored energy is Ue = (Y A0 / 2 l0) Δl²,
where Ue is the elastic potential energy.
The elastic potential energy per unit volume is given by: ue = Y ε² / 2,
where ε = Δl/l0 is the strain in the material.
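To make these formulas concrete, the sketch below evaluates the stored energy of a stretched rod both from the total-energy expression and from the energy density. The modulus is a rough nominal value for steel and the dimensions are assumptions chosen only for illustration.

```python
Y = 200e9      # Pa, Young's modulus (roughly steel; illustrative)
A0 = 1.0e-4    # m^2, cross-sectional area (assumed)
l0 = 2.0       # m, initial length (assumed)
dl = 1.0e-3    # m, elongation (assumed, small enough to stay elastic)

strain = dl / l0
U_total = 0.5 * Y * A0 * dl**2 / l0       # total elastic potential energy, J
u_density = 0.5 * Y * strain**2           # energy per unit volume, J/m^3

print(U_total)                    # 5.0 J
print(u_density * (A0 * l0))      # 5.0 J as well: density times volume gives the same energy
```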
In the general case, elastic energy is given by the free energy per unit of volume f as a function of the strain tensor components εij
where λ and μ are the Lamé elastic coefficients and we use Einstein summation convention. Noting the thermodynamic connection between stress tensor components and strain tensor components,
where the subscript T denotes that temperature is held constant, then we find that if Hooke's law is valid, we can write the elastic energy density as
Continuum systems
Matter in bulk can be distorted in many different ways: stretching, shearing, bending, twisting, etc. Each kind of distortion contributes to the elastic energy of a deformed material. In orthogonal coordinates, the elastic energy per unit volume due to strain is thus a sum of contributions:
where is a 4th rank tensor, called the elastic tensor or stiffness tensor which is a generalization of the elastic moduli of mechanical systems, and is the strain tensor (Einstein summation notation has been used to imply summation over repeated indices). The values of depend upon the crystal structure of the material: in the general case, due to symmetric nature of and , the elastic tensor consists of 21 independent elastic coefficients. This number can be further reduced by the symmetry of the material: 9 for an orthorhombic crystal, 5 for an hexagonal structure, and 3 for a cubic symmetry. Finally, for an isotropic material, there are only two independent parameters, with , where and are the Lamé constants, and is the Kronecker delta.
The strain tensor itself can be defined to reflect distortion in any way that results in invariance under total rotation, but the most common definition with regard to which elastic tensors are usually expressed defines strain as the symmetric part of the gradient of displacement with all nonlinear terms suppressed:
where is the displacement at a point in the -th direction and is the partial derivative in the -th direction. Note that:
where no summation is intended. Although full Einstein notation sums over raised and lowered pairs of indices, the values of elastic and strain tensor components are usually expressed with all indices lowered. Thus beware (as here) that in some contexts a repeated index does not imply a sum over values of that index, but merely a single component of a tensor.
See also
Clockwork
Elasto-capillarity
Rubber elasticity
References
Sources
Classical mechanics
Forms of energy
simple:Elastic energy
sv:Elastisk energi | 0.791302 | 0.990711 | 0.783951 |
Acceleration | In mechanics, acceleration is the rate of change of the velocity of an object with respect to time. Acceleration is one of several components of kinematics, the study of motion. Accelerations are vector quantities (in that they have magnitude and direction). The orientation of an object's acceleration is given by the orientation of the net force acting on that object. The magnitude of an object's acceleration, as described by Newton's Second Law, is the combined effect of two causes:
the net balance of all external forces acting onto that object — magnitude is directly proportional to this net resulting force;
that object's mass, depending on the materials out of which it is made — magnitude is inversely proportional to the object's mass.
The SI unit for acceleration is metre per second squared (, ).
For example, when a vehicle starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the vehicle turns, an acceleration occurs toward the new direction and changes its motion vector. The acceleration of the vehicle in its current direction of motion is called a linear (or tangential during circular motions) acceleration, the reaction to which the passengers on board experience as a force pushing them back into their seats. When changing direction, the effecting acceleration is called radial (or centripetal during circular motions) acceleration, the reaction to which the passengers experience as a centrifugal force. If the speed of the vehicle decreases, this is an acceleration in the opposite direction of the velocity vector (mathematically a negative, if the movement is unidimensional and the velocity is positive), sometimes called deceleration or retardation, and passengers experience the reaction to deceleration as an inertial force pushing them forward. Such negative accelerations are often achieved by retrorocket burning in spacecraft. Both acceleration and deceleration are treated the same, as they are both changes in velocity. Each of these accelerations (tangential, radial, deceleration) is felt by passengers until their relative (differential) velocity is neutralized in reference to the vehicle.
Definition and properties
Average acceleration
An object's average acceleration over a period of time is its change in velocity, , divided by the duration of the period, . Mathematically,
Instantaneous acceleration
Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. In the terms of calculus, instantaneous acceleration is the derivative of the velocity vector with respect to time:
As acceleration is defined as the derivative of velocity, v, with respect to time and velocity is defined as the derivative of position, x, with respect to time, acceleration can be thought of as the second derivative of x with respect to t:
(Here and elsewhere, if motion is in a straight line, vector quantities can be substituted by scalars in the equations.)
By the fundamental theorem of calculus, it can be seen that the integral of the acceleration function is the velocity function ; that is, the area under the curve of an acceleration vs. time ( vs. ) graph corresponds to the change of velocity.
Likewise, the integral of the jerk function , the derivative of the acceleration function, can be used to find the change of acceleration at a certain time:
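These definitions translate directly into a finite-difference computation. The velocity history below, v(t) = 3t², is an arbitrary example chosen so the exact answer a(t) = 6t is known for comparison.

```python
import numpy as np

t = np.linspace(0.0, 5.0, 501)   # time samples, s
v = 3.0 * t**2                   # example velocity history (assumed), m/s

# Average acceleration over the whole interval: change in velocity over duration.
a_avg = (v[-1] - v[0]) / (t[-1] - t[0])

# Instantaneous acceleration: numerical derivative dv/dt.
a_inst = np.gradient(v, t)

print(a_avg)        # 15.0 m/s^2
print(a_inst[250])  # ~15.0 m/s^2 at t = 2.5 s, matching the exact a(t) = 6t
```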
Units
Acceleration has the dimensions of velocity (L/T) divided by time, i.e. L T−2. The SI unit of acceleration is the metre per second squared (m s−2); or "metre per second per second", as the velocity in metres per second changes by the acceleration value, every second.
Other forms
An object moving in a circular motion—such as a satellite orbiting the Earth—is accelerating due to the change of direction of motion, although its speed may be constant. In this case it is said to be undergoing centripetal (directed towards the center) acceleration.
Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer.
In classical mechanics, for a body with constant mass, the (vector) acceleration of the body's center of mass is proportional to the net force vector (i.e. sum of all forces) acting on it (Newton's second law):
where is the net force acting on the body, is the mass of the body, and is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large.
Tangential and centripetal acceleration
The velocity of a particle moving on a curved path as a function of time can be written as:
with equal to the speed of travel along the path, and
a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time. Taking into account both the changing speed and the changing direction of , the acceleration of a particle moving on a curved path can be written using the chain rule of differentiation for the product of two functions of time as:
where is the unit (inward) normal vector to the particle's trajectory (also called the principal normal), and is its instantaneous radius of curvature based upon the osculating circle at time . The components
are called the tangential acceleration and the normal or radial acceleration (or centripetal acceleration in circular motion, see also circular motion and centripetal force), respectively.
Geometrical analysis of three-dimensional space curves, which explains tangent, (principal) normal and binormal, is described by the Frenet–Serret formulas.
Special cases
Uniform acceleration
Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period.
A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength (also called acceleration due to gravity). By Newton's Second Law the force acting on a body is given by:
Because of the simple analytic properties of the case of constant acceleration, there are simple formulas relating the displacement, initial and time-dependent velocities, and acceleration to the time elapsed:
where
is the elapsed time,
is the initial displacement from the origin,
is the displacement from the origin at time ,
is the initial velocity,
is the velocity at time , and
is the uniform rate of acceleration.
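A minimal numeric check of these relations, with arbitrary illustrative values, is shown below.

```python
x0, v0, a = 2.0, 5.0, -1.5   # initial position (m), initial velocity (m/s), acceleration (m/s^2); assumed
t = 4.0                      # elapsed time, s

v = v0 + a * t                       # velocity at time t
x = x0 + v0 * t + 0.5 * a * t**2     # position at time t

print(v, x)                          # -1.0 m/s and 10.0 m
# Consistency check for constant acceleration: v^2 = v0^2 + 2 a (x - x0).
assert abs(v**2 - (v0**2 + 2 * a * (x - x0))) < 1e-9
```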
In particular, the motion can be resolved into two orthogonal parts, one of constant velocity and the other according to the above equations. As Galileo showed, the net result is parabolic motion, which describes, e.g., the trajectory of a projectile in vacuum near the surface of Earth.
Circular motion
In uniform circular motion, that is moving with constant speed along a circular path, a particle experiences an acceleration resulting from the change of the direction of the velocity vector, while its magnitude remains constant. The derivative of the location of a point on a curve with respect to time, i.e. its velocity, turns out to be always exactly tangential to the curve, respectively orthogonal to the radius in this point. Since in uniform motion the velocity in the tangential direction does not change, the acceleration must be in radial direction, pointing to the center of the circle. This acceleration constantly changes the direction of the velocity to be tangent in the neighboring point, thereby rotating the velocity vector along the circle.
For a given speed v, the magnitude of this geometrically caused acceleration (centripetal acceleration) is inversely proportional to the radius r of the circle, and increases as the square of this speed: a = v² / r.
For a given angular velocity , the centripetal acceleration is directly proportional to radius . This is due to the dependence of velocity on the radius .
Expressing centripetal acceleration vector in polar components, where is a vector from the centre of the circle to the particle with magnitude equal to this distance, and considering the orientation of the acceleration towards the center, yields
As usual in rotations, the speed v of a particle may be expressed as an angular speed ω with respect to a point at the distance r as v = ω r.
Thus a = ω² r.
This acceleration and the mass of the particle determine the necessary centripetal force, directed toward the centre of the circle, as the net force acting on this particle to keep it in this uniform circular motion. The so-called 'centrifugal force', appearing to act outward on the body, is a so-called pseudo force experienced in the frame of reference of the body in circular motion, due to the body's linear momentum, a vector tangent to the circle of motion.
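This result can be verified by differentiating the position of a point moving uniformly on a circle twice with respect to time; the radius and angular speed below are arbitrary assumed values.

```python
import numpy as np

R = 2.0       # m, radius of the circle (assumed)
omega = 3.0   # rad/s, angular speed (assumed)
t = np.linspace(0.0, 2.0, 20001)

x = R * np.cos(omega * t)   # uniform circular motion
y = R * np.sin(omega * t)

ax = np.gradient(np.gradient(x, t), t)   # second time derivatives
ay = np.gradient(np.gradient(y, t), t)
a_mag = np.hypot(ax, ay)[10000]          # magnitude at the midpoint, away from the ends

v = omega * R
print(a_mag, v**2 / R, omega**2 * R)     # all approximately 18.0 m/s^2
```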
In a nonuniform circular motion, i.e., the speed along the curved path is changing, the acceleration has a non-zero component tangential to the curve, and is not confined to the principal normal, which directs to the center of the osculating circle, that determines the radius for the centripetal acceleration. The tangential component is given by the angular acceleration , i.e., the rate of change of the angular speed times the radius . That is,
The sign of the tangential component of the acceleration is determined by the sign of the angular acceleration, and the tangent is always directed at right angles to the radius vector.
Coordinate systems
In multi-dimensional Cartesian coordinate systems, acceleration is broken up into components that correspond with each dimensional axis of the coordinate system. In a two-dimensional system, where there is an x-axis and a y-axis, the corresponding acceleration components are defined as ax = dvx/dt and ay = dvy/dt. The two-dimensional acceleration vector is then defined as a = (ax, ay). The magnitude of this vector is found by the distance formula as |a| = √(ax² + ay²). In three-dimensional systems, where there is an additional z-axis, the corresponding acceleration component is defined as az = dvz/dt. The three-dimensional acceleration vector is defined as a = (ax, ay, az), with its magnitude being determined by |a| = √(ax² + ay² + az²).
Relation to relativity
Special relativity
The special theory of relativity describes the behavior of objects traveling relative to other objects at speeds approaching that of light in vacuum. Newtonian mechanics is then revealed to be an approximation to reality, valid to great accuracy at lower speeds. As the relevant speeds increase toward the speed of light, acceleration no longer follows classical equations.
As speeds approach that of light, the acceleration produced by a given force decreases, becoming infinitesimally small as light speed is approached; an object with mass can approach this speed asymptotically, but never reach it.
General relativity
Unless the state of motion of an object is known, it is impossible to distinguish whether an observed force is due to gravity or to acceleration—gravity and inertial acceleration have identical effects. Albert Einstein called this the equivalence principle, and said that only observers who feel no force at all—including the force of gravity—are justified in concluding that they are not accelerating.
Conversions
See also
Acceleration (differential geometry)
Four-vector: making the connection between space and time explicit
Gravitational acceleration
Inertia
Orders of magnitude (acceleration)
Shock (mechanics)
Shock and vibration data logger measuring 3-axis acceleration
Space travel using constant acceleration
Specific force
References
External links
Acceleration Calculator Simple acceleration unit converter
Acceleration Calculator Acceleration Conversion calculator converts units from metre per second squared, kilometre per second squared, millimetre per second squared & more with metric conversion.
Dynamics (mechanics)
Kinematic properties
Vector physical quantities | 0.78503 | 0.998151 | 0.783578 |
Radiative transfer | Radiative transfer (also called radiation transport) is the physical phenomenon of energy transfer in the form of electromagnetic radiation. The propagation of radiation through a medium is affected by absorption, emission, and scattering processes. The equation of radiative transfer describes these interactions mathematically. Equations of radiative transfer have application in a wide variety of subjects including optics, astrophysics, atmospheric science, and remote sensing. Analytic solutions to the radiative transfer equation (RTE) exist for simple cases but for more realistic media, with complex multiple scattering effects, numerical methods are required.
The present article is largely focused on the condition of radiative equilibrium.
Definitions
The fundamental quantity that describes a field of radiation is called spectral radiance in radiometric terms (in other fields it is often called specific intensity). For a very small area element in the radiation field, there can be electromagnetic radiation passing in both senses in every spatial direction through it. In radiometric terms, the passage can be completely characterized by the amount of energy radiated in each of the two senses in each spatial direction, per unit time, per unit area of surface of sourcing passage, per unit solid angle of reception at a distance, per unit wavelength interval being considered (polarization will be ignored for the moment).
In terms of the spectral radiance, Iν, the energy flowing across an area element of area dA located at r in time dt, in the solid angle dΩ about the direction n̂, in the frequency interval ν to ν + dν, is
dEν = Iν cos θ dA dΩ dν dt
where θ is the angle that the unit direction vector n̂ makes with a normal to the area element. The units of the spectral radiance are seen to be energy/time/area/solid angle/frequency. In MKS units this would be W·m−2·sr−1·Hz−1 (watts per square-metre-steradian-hertz).
The equation of radiative transfer
The equation of radiative transfer simply says that as a beam of radiation travels, it loses energy to absorption, gains energy by emission processes, and redistributes energy by scattering. The differential form of the equation for radiative transfer is:
where is the speed of light, is the emission coefficient, is the scattering opacity, is the absorption opacity, is the mass density and the term represents radiation scattered from other directions onto a surface.
Solutions to the equation of radiative transfer
Solutions to the equation of radiative transfer form an enormous body of work. The differences however, are essentially due to the various forms for the emission and absorption coefficients. If scattering is ignored, then a general steady state solution in terms of the emission and absorption coefficients may be written:
where is the optical depth of the medium between positions and :
Local thermodynamic equilibrium
A particularly useful simplification of the equation of radiative transfer occurs under the conditions of local thermodynamic equilibrium (LTE). It is important to note that local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas do not need to be in a thermodynamic equilibrium with each other or with the massive particles of the gas in order for LTE to exist.
In this situation, the absorbing/emitting medium consists of massive particles which are locally in equilibrium with each other, and therefore have a definable temperature (Zeroth Law of Thermodynamics). The radiation field is not, however in equilibrium and is being entirely driven by the presence of the massive particles. For a medium in LTE, the emission coefficient and absorption coefficient are functions of temperature and density only, and are related by:
where is the black body spectral radiance at temperature T. The solution to the equation of radiative transfer is then:
Knowing the temperature profile and the density profile of the medium is sufficient to calculate a solution to the equation of radiative transfer.
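As a sanity check, the sketch below integrates the transfer equation for a slab at a single temperature (constant B) and compares the result with the closed-form solution for that case, I(τ) = I(0) e^(−τ) + B (1 − e^(−τ)). This is a toy calculation with arbitrary numbers, not an implementation of any particular radiative-transfer code; τ is taken to increase along the direction of propagation.

```python
import numpy as np

B = 1.0          # black-body spectral radiance of the medium (arbitrary units)
I0 = 0.2         # radiance entering the slab (assumed)
tau_max = 5.0    # total optical depth of the slab (assumed)

# Integrate dI/dtau = B - I with a simple explicit Euler step.
n = 100_000
dtau = tau_max / n
I = I0
for _ in range(n):
    I += (B - I) * dtau

# Closed-form solution for constant B.
I_exact = I0 * np.exp(-tau_max) + B * (1.0 - np.exp(-tau_max))
print(I, I_exact)   # both ~0.995: at large optical depth the emergent radiance approaches B
```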
The Eddington approximation
The Eddington approximation is distinct from the two-stream approximation. The two-stream approximation assumes that the intensity is constant with angle in the upward hemisphere, with a different constant value in the downward hemisphere. The Eddington approximation instead assumes that the intensity is a linear function of μ = cos θ, i.e.
Iν(μ, z) = a(z) + b(z) μ
where z is the normal direction to the slab-like medium. Note that expressing angular integrals in terms of μ simplifies things because dμ = −sin θ dθ appears in the Jacobian of integrals in spherical coordinates. The Eddington approximation can be used to obtain the spectral radiance in a "plane-parallel" medium (one in which properties only vary in the perpendicular direction) with isotropic frequency-independent scattering.
Extracting the first few moments of the spectral radiance with respect to μ yields
Jν = (1/2) ∫ Iν dμ, Hν = (1/2) ∫ μ Iν dμ, Kν = (1/2) ∫ μ² Iν dμ,
with the integrals taken over μ from −1 to 1.
Thus the Eddington approximation is equivalent to setting Kν = Jν/3. Higher order versions of the Eddington approximation also exist, and consist of more complicated linear relations of the intensity moments. This extra equation can be used as a closure relation for the truncated system of moments.
Note that the first two moments have simple physical meanings. Jν is the isotropic intensity at a point, and Hν is the flux through that point in the z direction.
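A small quadrature check of these moment relations, using an arbitrary linear intensity profile, is shown below; the coefficients are assumed values.

```python
import numpy as np

a, b = 2.0, 0.7                     # assumed coefficients of I(mu) = a + b*mu
mu = np.linspace(-1.0, 1.0, 20001)
I = a + b * mu

def half_moment(weight):
    """(1/2) * integral of weight * I over mu from -1 to 1 (trapezoidal rule)."""
    f = weight * I
    return 0.5 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(mu))

J = half_moment(np.ones_like(mu))   # mean intensity
H = half_moment(mu)                 # flux-like first moment
K = half_moment(mu**2)              # second moment

print(J, H, K)   # ~2.0, ~0.233, ~0.667
print(K / J)     # ~1/3, the Eddington closure K = J/3
```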
The radiative transfer through an isotropically scattering medium with scattering coefficient at local thermodynamic equilibrium is given by
Integrating over all angles yields
Premultiplying by , and then integrating over all angles gives
Substituting in the closure relation, and differentiating with respect to allows the two above equations to be combined to form the radiative diffusion equation
This equation shows how the effective optical depth in scattering-dominated systems may be significantly different from that given by the scattering opacity if the absorptive opacity is small.
See also
Beer-Lambert law
Kirchhoff's law of thermal radiation
List of atmospheric radiative transfer codes
Optical depth
Planck's law
Radiative transfer equation and diffusion theory for photon transport in biological tissue
Schwarzschild's equation for radiative transfer
Vector radiative transfer
References
Further reading
Radiometry
Electromagnetic radiation
Atmospheric radiation | 0.794511 | 0.98612 | 0.783483 |
Vis-viva equation | In astrodynamics, the vis-viva equation, also referred to as orbital-energy-invariance law or Burgas formula, is one of the equations that model the motion of orbiting bodies. It is the direct result of the principle of conservation of mechanical energy which applies when the only force acting on an object is its own weight which is the gravitational force determined by the product of the mass of the object and the strength of the surrounding gravitational field.
Vis viva (Latin for "living force") is a term from the history of mechanics, and it survives in this sole context. It represents the principle that the difference between the total work of the accelerating forces of a system and that of the retarding forces is equal to one half the vis viva accumulated or lost in the system while the work is being done.
Equation
For any Keplerian orbit (elliptic, parabolic, hyperbolic, or radial), the vis-viva equation is as follows: v² = GM (2/r − 1/a),
where:
is the relative speed of the two bodies
is the distance between the two bodies' centers of mass
is the length of the semi-major axis (a > 0 for ellipses, a = ∞ or 1/a = 0 for parabolas, and a < 0 for hyperbolas)
is the gravitational constant
is the mass of the central body
The product GM can also be expressed as the standard gravitational parameter using the Greek letter μ.
Derivation for elliptic orbits (0 ≤ eccentricity < 1)
In the vis-viva equation the mass of the orbiting body (e.g., a spacecraft) is taken to be negligible in comparison to the mass of the central body (e.g., the Earth). The central body and orbiting body are also often referred to as the primary and a particle respectively. In the specific cases of an elliptical or circular orbit, the vis-viva equation may be readily derived from conservation of energy and momentum.
Specific total energy is constant throughout the orbit. Thus, using the subscripts and to denote apoapsis (apogee) and periapsis (perigee), respectively,
Rearranging,
Recalling that for an elliptical orbit (and hence also a circular orbit) the velocity and radius vectors are perpendicular at apoapsis and periapsis, conservation of angular momentum requires specific angular momentum , thus :
Isolating the kinetic energy at apoapsis and simplifying,
From the geometry of an ellipse, where a is the length of the semimajor axis. Thus,
Substituting this into our original expression for specific orbital energy,
Thus, and the vis-viva equation may be written
or
Therefore, the conserved angular momentum can be derived using and , where is semi-major axis and is semi-minor axis of the elliptical orbit, as follows:
and alternately,
Therefore, specific angular momentum , and
Total angular momentum
Practical applications
Given the total mass and the scalars r and v at a single point of the orbit, one can compute:
r and v at any other point in the orbit; and
the specific orbital energy \varepsilon, allowing an object orbiting a larger object to be classified as having not enough energy to remain in orbit, hence being "suborbital" (a ballistic missile, for example); having enough energy to be "orbital", but without the possibility to complete a full orbit anyway because it eventually collides with the other body; or having enough energy to come from and/or go to infinity (as a meteor, for example).
The formula for escape velocity can be obtained from the vis-viva equation by taking the limit as a approaches ∞:
v_{\text{esc}}^2 = GM \left( \frac{2}{r} - 0 \right) \quad\Rightarrow\quad v_{\text{esc}} = \sqrt{\frac{2GM}{r}}
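A corresponding sketch of the escape-speed limit, under the same assumed value of GM for the Earth:

```python
import math

mu = 3.986e14  # GM for Earth, m^3 s^-2 (assumed value)

def escape_speed(r):
    # Limit of the vis-viva equation as the semi-major axis a -> infinity:
    # v^2 = mu * (2/r - 1/a) -> 2 * mu / r
    return math.sqrt(2.0 * mu / r)

print(escape_speed(6.371e6))  # ~11.2 km/s at the Earth's surface
```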
Notes
References
Orbits
Conservation laws
Equations of astronomy | 0.791667 | 0.989604 | 0.783436 |
Diffusion | Diffusion is the net movement of anything (for example, atoms, ions, molecules, energy) generally from a region of higher concentration to a region of lower concentration. Diffusion is driven by a gradient in Gibbs free energy or chemical potential. It is possible to diffuse "uphill" from a region of lower concentration to a region of higher concentration, as in spinodal decomposition. Diffusion is a stochastic process due to the inherent randomness of the diffusing entity and can be used to model many real-life stochastic scenarios. Therefore, diffusion and the corresponding mathematical models are used in several fields beyond physics, such as statistics, probability theory, information theory, neural networks, finance, and marketing.
The concept of diffusion is widely used in many fields, including physics (particle diffusion), chemistry, biology, sociology, economics, statistics, data science, and finance (diffusion of people, ideas, data and price values). The central idea of diffusion, however, is common to all of these: a substance or collection undergoing diffusion spreads out from a point or location at which there is a higher concentration of that substance or collection.
A gradient is the change in the value of a quantity; for example, concentration, pressure, or temperature with the change in another variable, usually distance. A change in concentration over a distance is called a concentration gradient, a change in pressure over a distance is called a pressure gradient, and a change in temperature over a distance is called a temperature gradient.
The word diffusion derives from the Latin word, diffundere, which means "to spread out".
A distinguishing feature of diffusion is that it depends on particle random walk, and results in mixing or mass transport without requiring directed bulk motion. Bulk motion, or bulk flow, is the characteristic of advection. The term convection is used to describe the combination of both transport phenomena.
If a diffusion process can be described by Fick's laws, it is called a normal diffusion (or Fickian diffusion); Otherwise, it is called an anomalous diffusion (or non-Fickian diffusion).
When talking about the extent of diffusion, two length scales are used in two different scenarios:
Brownian motion of an impulsive point source (for example, one single spray of perfume)—the square root of the mean squared displacement from this point. In Fickian diffusion, this is \sqrt{2nDt}, where n is the dimension of this Brownian motion;
Constant concentration source in one dimension—the diffusion length. In Fickian diffusion, this is 2\sqrt{Dt} (a short numerical illustration follows this list).
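A minimal sketch evaluating both length scales for an assumed vapour-in-air diffusion coefficient; the numbers are purely illustrative.

```python
import math

D = 2.0e-5   # assumed diffusion coefficient of a vapour in air, m^2/s
t = 60.0     # elapsed time, s
dim = 3      # spatial dimension

rms_displacement = math.sqrt(2 * dim * D * t)   # impulsive point source
diffusion_length = 2 * math.sqrt(D * t)         # constant-concentration source

print(rms_displacement, diffusion_length)       # both of order several centimetres
```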
Diffusion vs. bulk flow
"Bulk flow" is the movement/flow of an entire body due to a pressure gradient (for example, water coming out of a tap). "Diffusion" is the gradual movement/dispersion of concentration within a body with no net movement of matter. An example of a process where both bulk motion and diffusion occur is human breathing.
First, there is a "bulk flow" process. The lungs are located in the thoracic cavity, which expands as the first step in external respiration. This expansion leads to an increase in volume of the alveoli in the lungs, which causes a decrease in pressure in the alveoli. This creates a pressure gradient between the air outside the body at relatively high pressure and the alveoli at relatively low pressure. The air moves down the pressure gradient through the airways of the lungs and into the alveoli until the pressure of the air and that in the alveoli are equal, that is, the movement of air by bulk flow stops once there is no longer a pressure gradient.
Second, there is a "diffusion" process. The air arriving in the alveoli has a higher concentration of oxygen than the "stale" air in the alveoli. The increase in oxygen concentration creates a concentration gradient for oxygen between the air in the alveoli and the blood in the capillaries that surround the alveoli. Oxygen then moves by diffusion, down the concentration gradient, into the blood. The other consequence of the air arriving in alveoli is that the concentration of carbon dioxide in the alveoli decreases. This creates a concentration gradient for carbon dioxide to diffuse from the blood into the alveoli, as fresh air has a very low concentration of carbon dioxide compared to the blood in the body.
Third, there is another "bulk flow" process. The pumping action of the heart then transports the blood around the body. As the left ventricle of the heart contracts, the volume decreases, which increases the pressure in the ventricle. This creates a pressure gradient between the heart and the capillaries, and blood moves through blood vessels by bulk flow down the pressure gradient.
Diffusion in the context of different disciplines
There are two ways to introduce the notion of diffusion: either a phenomenological approach starting with Fick's laws of diffusion and their mathematical consequences, or a physical and atomistic one, by considering the random walk of the diffusing particles.
In the phenomenological approach, diffusion is the movement of a substance from a region of high concentration to a region of low concentration without bulk motion. According to Fick's laws, the diffusion flux is proportional to the negative gradient of concentrations. It goes from regions of higher concentration to regions of lower concentration. Sometime later, various generalizations of Fick's laws were developed in the frame of thermodynamics and non-equilibrium thermodynamics.
From the atomistic point of view, diffusion is considered as a result of the random walk of the diffusing particles. In molecular diffusion, the moving molecules in a gas, liquid, or solid are self-propelled by kinetic energy. Random walk of small particles in suspension in a fluid was discovered in 1827 by Robert Brown, who found that minute particles suspended in a liquid medium, just large enough to be visible under an optical microscope, exhibit a rapid and continually irregular motion known as Brownian movement. The theory of the Brownian motion and the atomistic backgrounds of diffusion were developed by Albert Einstein.
The concept of diffusion is typically applied to any subject matter involving random walks in ensembles of individuals.
In chemistry and materials science, diffusion also refers to the movement of fluid molecules in porous solids. Different types of diffusion are distinguished in porous solids. Molecular diffusion occurs when the collision with another molecule is more likely than the collision with the pore walls. Under such conditions, the diffusivity is similar to that in a non-confined space and is proportional to the mean free path. Knudsen diffusion occurs when the pore diameter is comparable to or smaller than the mean free path of the molecule diffusing through the pore. Under this condition, the collision with the pore walls becomes gradually more likely and the diffusivity is lower. Finally there is configurational diffusion, which happens if the molecules have comparable size to that of the pore. Under this condition, the diffusivity is much lower compared to molecular diffusion and small differences in the kinetic diameter of the molecule cause large differences in diffusivity.
Biologists often use the terms "net movement" or "net diffusion" to describe the movement of ions or molecules by diffusion. For example, oxygen can diffuse through cell membranes so long as there is a higher concentration of oxygen outside the cell. However, because the movement of molecules is random, occasionally oxygen molecules move out of the cell (against the concentration gradient). Because there are more oxygen molecules outside the cell, the probability that oxygen molecules will enter the cell is higher than the probability that oxygen molecules will leave the cell. Therefore, the "net" movement of oxygen molecules (the difference between the number of molecules either entering or leaving the cell) is into the cell. In other words, there is a net movement of oxygen molecules down the concentration gradient.
History of diffusion in physics
Diffusion in solids was used long before the theory of diffusion was created. For example, Pliny the Elder had previously described the cementation process, which produces steel from the element iron (Fe) through carbon diffusion. Another example has been well known for many centuries: the diffusion of the colors of stained glass or earthenware and Chinese ceramics.
In modern science, the first systematic experimental study of diffusion was performed by Thomas Graham. He studied diffusion in gases, and the main phenomenon was described by him in 1831–1833:
"...gases of different nature, when brought into contact, do not arrange themselves according to their density, the heaviest undermost, and the lighter uppermost, but they spontaneously diffuse, mutually and equally, through each other, and so remain in the intimate state of mixture for any length of time."
The measurements of Graham contributed to James Clerk Maxwell deriving, in 1867, the coefficient of diffusion for CO2 in air, with an error of less than 5%.
In 1855, Adolf Fick, the 26-year-old anatomy demonstrator from Zürich, proposed his law of diffusion. He used Graham's research, stating his goal as "the development of a fundamental law, for the operation of diffusion in a single element of space". He asserted a deep analogy between diffusion and conduction of heat or electricity, creating a formalism similar to Fourier's law for heat conduction (1822) and Ohm's law for electric current (1827).
Robert Boyle demonstrated diffusion in solids in the 17th century by penetration of zinc into a copper coin. Nevertheless, diffusion in solids was not systematically studied until the second part of the 19th century. William Chandler Roberts-Austen, the well-known British metallurgist and former assistant of Thomas Graham, systematically studied solid-state diffusion using the example of gold in lead in 1896:
"... My long connection with Graham's researches made it almost a duty to attempt to extend his work on liquid diffusion to metals."
In 1858, Rudolf Clausius introduced the concept of the mean free path. In the same year, James Clerk Maxwell developed the first atomistic theory of transport processes in gases. The modern atomistic theory of diffusion and Brownian motion was developed by Albert Einstein, Marian Smoluchowski and Jean-Baptiste Perrin. Ludwig Boltzmann, in the development of the atomistic backgrounds of the macroscopic transport processes, introduced the Boltzmann equation, which has served mathematics and physics with a source of transport process ideas and concerns for more than 140 years.
In 1920–1921, George de Hevesy measured self-diffusion using radioisotopes. He studied self-diffusion of radioactive isotopes of lead in the liquid and solid lead.
Yakov Frenkel (sometimes, Jakov/Jacob Frenkel) proposed, and elaborated in 1926, the idea of diffusion in crystals through local defects (vacancies and interstitial atoms). He concluded that the diffusion process in condensed matter is an ensemble of elementary jumps and quasichemical interactions of particles and defects. He introduced several mechanisms of diffusion and found rate constants from experimental data.
Sometime later, Carl Wagner and Walter H. Schottky developed Frenkel's ideas about mechanisms of diffusion further. Presently, it is universally recognized that atomic defects are necessary to mediate diffusion in crystals.
Henry Eyring, with co-authors, applied his theory of absolute reaction rates to Frenkel's quasichemical model of diffusion. The analogy between reaction kinetics and diffusion leads to various nonlinear versions of Fick's law.
Basic models of diffusion
Definition of diffusion flux
Each model of diffusion expresses the diffusion flux with the use of concentrations, densities and their derivatives. Flux is a vector \mathbf{J} representing the quantity and direction of transfer. Given a small area \Delta S with normal \boldsymbol{\nu}, the transfer of a physical quantity N through the area \Delta S per time \Delta t is
\Delta N = (\mathbf{J}, \boldsymbol{\nu}) \,\Delta S \,\Delta t + o(\Delta S \,\Delta t),
where (\mathbf{J}, \boldsymbol{\nu}) is the inner product and o(\cdots) is the little-o notation. If we use the notation of vector area \Delta \mathbf{S} = \boldsymbol{\nu} \,\Delta S then
\Delta N = (\mathbf{J}, \Delta \mathbf{S}) \,\Delta t + o(\Delta t).
The dimension of the diffusion flux is [flux] = [quantity]/([time]·[area]). The diffusing physical quantity N may be the number of particles, mass, energy, electric charge, or any other scalar extensive quantity. For its density, n, the diffusion equation has the form
\frac{\partial n}{\partial t} = -\nabla \cdot \mathbf{J} + W,
where W is the intensity of any local source of this quantity (for example, the rate of a chemical reaction).
For the diffusion equation, the no-flux boundary conditions can be formulated as (\mathbf{J}(x), \boldsymbol{\nu}(x)) = 0 on the boundary, where \boldsymbol{\nu} is the normal to the boundary at point x.
Normal single component concentration gradient
Fick's first law: The diffusion flux, \mathbf{J}, is proportional to the negative gradient of spatial concentration, n(x, t):
\mathbf{J} = -D \,\nabla n(x, t),
where D is the diffusion coefficient. The corresponding diffusion equation (Fick's second law) is
\frac{\partial n(x,t)}{\partial t} = \nabla \cdot \bigl( D \,\nabla n(x,t) \bigr).
In case the diffusion coefficient is independent of n, Fick's second law can be simplified to
\frac{\partial n(x,t)}{\partial t} = D \,\Delta n(x,t),
where \Delta is the Laplace operator, \Delta n = \sum_i \frac{\partial^2 n}{\partial x_i^2}.
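For illustration, a minimal explicit finite-difference sketch of Fick's second law in one dimension with constant D; the grid, time step, and diffusion coefficient are assumed values chosen only to keep the explicit scheme stable.

```python
import numpy as np

D = 1.0e-9          # diffusion coefficient, m^2/s (assumed, typical for a solute in water)
L = 1.0e-3          # domain length, m
nx = 101
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D      # explicit scheme is stable for dt <= dx^2 / (2 D)

n = np.zeros(nx)
n[nx // 2] = 1.0            # impulsive point source in the middle

for _ in range(2000):
    # Fick's second law: dn/dt = D * d^2 n / dx^2 (ends held fixed, crude boundaries)
    lap = np.zeros_like(n)
    lap[1:-1] = (n[2:] - 2 * n[1:-1] + n[:-2]) / dx**2
    n += dt * D * lap

print(n.sum())   # total amount is conserved up to boundary effects
```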
Multicomponent diffusion and thermodiffusion
Fick's law describes diffusion of an admixture in a medium. The concentration of this admixture should be small and the gradient of this concentration should also be small. The driving force of diffusion in Fick's law is the antigradient of concentration, -\nabla n.
In 1931, Lars Onsager included the multicomponent transport processes in the general context of linear non-equilibrium thermodynamics. For
multi-component transport,
\mathbf{J}_i = \sum_j L_{ij} X_j,
where \mathbf{J}_i is the flux of the ith physical quantity (component), X_j is the jth thermodynamic force and L_{ij} is Onsager's matrix of kinetic transport coefficients.
The thermodynamic forces for the transport processes were introduced by Onsager as the space gradients of the derivatives of the entropy density s (he used the term "force" in quotation marks or "driving force"):
X_i = \operatorname{grad} \frac{\partial s(n)}{\partial n_i},
where n_i are the "thermodynamic coordinates".
For the heat and mass transfer one can take n_0 = u (the density of internal energy) and n_i is the concentration of the ith component. The corresponding driving forces are the space vectors
X_0 = \operatorname{grad} \frac{1}{T}, \qquad X_i = -\operatorname{grad} \frac{\mu_i}{T} \ (i > 0),
because
\mathrm{d}s = \frac{1}{T}\,\mathrm{d}u - \sum_{i \ge 1} \frac{\mu_i}{T}\,\mathrm{d}n_i,
where T is the absolute temperature and \mu_i is the chemical potential of the ith component. It should be stressed that the separate diffusion equations describe the mixing or mass transport without bulk motion. Therefore, the terms with variation of the total pressure are neglected. It is possible for diffusion of small admixtures and for small gradients.
For the linear Onsager equations, we must take the thermodynamic forces in the linear approximation near equilibrium:
where the derivatives of are calculated at equilibrium .
The matrix of the kinetic coefficients should be symmetric (Onsager reciprocal relations) and positive definite (for the entropy growth).
The transport equations are
Here, all the indexes are related to the internal energy (0) and various components. The expression in the square brackets is the matrix of the diffusion (i,k > 0), thermodiffusion (i > 0, k = 0 or k > 0, i = 0) and thermal conductivity coefficients.
Under isothermal conditions T = constant. The relevant thermodynamic potential is the free energy (or the free entropy). The thermodynamic driving forces for the isothermal diffusion are antigradients of chemical potentials, , and the matrix of diffusion coefficients is
(i,k > 0).
There is intrinsic arbitrariness in the definition of the thermodynamic forces and kinetic coefficients because they are not measurable separately and only their combinations can be measured. For example, in the original work of Onsager the thermodynamic forces include additional multiplier T, whereas in the Course of Theoretical Physics this multiplier is omitted but the sign of the thermodynamic forces is opposite. All these changes are supplemented by the corresponding changes in the coefficients and do not affect the measurable quantities.
Nondiagonal diffusion must be nonlinear
The formalism of linear irreversible thermodynamics (Onsager) generates the systems of linear diffusion equations in the form
If the matrix of diffusion coefficients is diagonal, then this system of equations is just a collection of decoupled Fick's equations for various components. Assume that diffusion is non-diagonal, for example, D_{12} \ne 0, and consider the state with c_2 = \cdots = c_n = 0. At this state, \partial c_2 / \partial t = D_{12} \,\Delta c_1. If D_{12} \,\Delta c_1(x) < 0 at some points, then c_2 becomes negative at these points in a short time. Therefore, linear non-diagonal diffusion does not preserve positivity of concentrations. Non-diagonal equations of multicomponent diffusion must be non-linear.
Applied forces
The Einstein relation (kinetic theory) connects the diffusion coefficient and the mobility (the ratio of the particle's terminal drift velocity to an applied force). For charged particles:
D = \frac{\mu \, k_B T}{q},
where D is the diffusion constant, μ is the "mobility", kB is the Boltzmann constant, T is the absolute temperature, and q is the elementary charge, that is, the charge of one electron.
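A short numerical sketch of the Einstein relation; the temperature and the electron mobility value (roughly that of silicon) are assumptions for illustration.

```python
k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # absolute temperature, K (assumed)
mu_e = 0.14          # assumed electron mobility, m^2 V^-1 s^-1 (silicon-like)

# Einstein relation for charged particles: D = mu * k_B * T / q
D = mu_e * k_B * T / q
print(D)   # ~3.6e-3 m^2/s
```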
Below, to combine in the same formula the chemical potential μ and the mobility, we use for mobility the notation .
Diffusion across a membrane
The mobility-based approach was further applied by T. Teorell. In 1935, he studied the diffusion of ions through a membrane. He formulated the essence of his approach in the formula:
the flux is equal to mobility × concentration × force per gram-ion.
This is the so-called Teorell formula. The term "gram-ion" ("gram-particle") is used for a quantity of a substance that contains the Avogadro number of ions (particles). The common modern term is mole.
The force under isothermal conditions consists of two parts:
Diffusion force caused by concentration gradient: .
Electrostatic force caused by electric potential gradient: .
Here R is the gas constant, T is the absolute temperature, n is the concentration, the equilibrium concentration is marked by a superscript "eq", q is the charge and φ is the electric potential.
The simple but crucial difference between the Teorell formula and the Onsager laws is the concentration factor in the Teorell expression for the flux. In the Einstein–Teorell approach, if for the finite force the concentration tends to zero then the flux also tends to zero, whereas the Onsager equations violate this simple and physically obvious rule.
The general formulation of the Teorell formula for non-perfect systems under isothermal conditions is
where μ is the chemical potential, μ0 is the standard value of the chemical potential.
The expression is the so-called activity. It measures the "effective concentration" of a species in a non-ideal mixture. In this notation, the Teorell formula for the flux has a very simple form
The standard derivation of the activity includes a normalization factor and for small concentrations , where is the standard concentration. Therefore, this formula for the flux describes the flux of the normalized dimensionless quantity :
Ballistic time scale
The Einstein model neglects the inertia of the diffusing particle. The alternative
Langevin equation starts with Newton's second law of motion:
m \frac{d^2 x}{dt^2} = -\frac{1}{\mu} \frac{dx}{dt} + F(t),
where
x is the position.
μ is the mobility of the particle in the fluid or gas, which can be calculated using the Einstein relation (kinetic theory).
m is the mass of the particle.
F is the random force applied to the particle.
t is time.
Solving this equation, one obtains the time-dependent diffusion constant in the long-time limit and when the particle is significantly denser than the surrounding fluid,
D(t) = \mu \, k_B T \left( 1 - e^{-t/(m\mu)} \right),
where
kB is the Boltzmann constant;
T is the absolute temperature.
μ is the mobility of the particle in the fluid or gas, which can be calculated using the Einstein relation (kinetic theory).
m is the mass of the particle.
t is time.
At long time scales, Einstein's result is recovered, but at short time scales the ballistic regime is also explained. Moreover, unlike the Einstein approach, a velocity can be defined, leading to the fluctuation-dissipation theorem, connecting the competition between friction and random forces in defining the temperature.
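A minimal Langevin-type sketch (Euler–Maruyama discretisation) illustrating the crossover from the ballistic to the diffusive regime; the particle mass, mobility, time step, and particle count are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

k_B, T = 1.380649e-23, 300.0   # J/K, K
m = 1.0e-15                    # particle mass, kg (assumed)
mu = 1.0e8                     # mobility, m s^-1 N^-1 (assumed)
gamma = 1.0 / mu               # friction coefficient

dt = 1e-9
n_steps, n_particles = 20000, 2000
v = np.zeros(n_particles)
x = np.zeros(n_particles)

for step in range(n_steps):
    noise = rng.standard_normal(n_particles)
    # Euler–Maruyama step for m dv = -gamma v dt + sqrt(2 gamma k_B T) dW
    v += (-gamma * v * dt + np.sqrt(2 * gamma * k_B * T * dt) * noise) / m
    x += v * dt

# At times much longer than m*mu, <x^2> ~ 2 D t with D = mu k_B T (Einstein relation)
print(np.mean(x**2), 2 * mu * k_B * T * n_steps * dt)
```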
Jumps on the surface and in solids
Diffusion of reagents on the surface of a catalyst may play an important role in heterogeneous catalysis. The model of diffusion in the ideal monolayer is based on the jumps of the reagents on the nearest free places. This model was used for the oxidation of CO on Pt under low gas pressure.
The system includes several reagents on the surface. Their surface concentrations are c_1, \ldots, c_n. The surface is a lattice of the adsorption places. Each
reagent molecule fills a place on the surface. Some of the places are free. The concentration of the free places is z = c_0. The sum of all c_i (including free places) is constant, the density of adsorption places b.
The jump model gives for the diffusion flux of (i = 1, ..., n):
The corresponding diffusion equation is:
Due to the conservation law, and we
have the system of m diffusion equations. For one component we get Fick's law and linear equations because . For two and more components the equations are nonlinear.
If all particles can exchange their positions with their closest neighbours then a simple generalization gives
where is a symmetric matrix of coefficients that characterize the intensities of jumps. The free places (vacancies) should be considered as special "particles" with concentration .
Various versions of these jump models are also suitable for simple diffusion mechanisms in solids.
Porous media
For diffusion in porous media the basic equations are (if Φ is constant):
where D is the diffusion coefficient, Φ is porosity, n is the concentration, m > 0 (usually m > 1, the case m = 1 corresponds to Fick's law).
Care must be taken to properly account for the porosity (Φ) of the porous medium in both the flux terms and the accumulation terms. For example, as the porosity goes to zero, the molar flux in the porous medium goes to zero for a given concentration gradient. Upon applying the divergence of the flux, the porosity terms cancel out and the second equation above is formed.
For diffusion of gases in porous media this equation is the formalization of Darcy's law: the volumetric flux of a gas in the porous media is
where k is the permeability of the medium, μ is the viscosity and p is the pressure.
The advective molar flux is given as
J = nq
and for Darcy's law gives the equation of diffusion in porous media with m = γ + 1.
In porous media, the average linear velocity (ν), is related to the volumetric flux as:
Combining the advective molar flux with the diffusive flux gives the advection dispersion equation
For underground water infiltration, the Boussinesq approximation gives the same equation with m = 2.
For plasma with the high level of radiation, the Zeldovich–Raizer equation gives m > 4 for the heat transfer.
Diffusion in physics
Diffusion coefficient in kinetic theory of gases
The diffusion coefficient D is the coefficient in Fick's first law J = -D \,\partial n / \partial x, where J is the diffusion flux (amount of substance) per unit area per unit time, n (for ideal mixtures) is the concentration, and x is the position [length].
Consider two gases with molecules of the same diameter d and mass m (self-diffusion). In this case, the elementary mean free path theory of diffusion gives for the diffusion coefficient
D = \frac{1}{3} \,\ell \, v_T,
where kB is the Boltzmann constant, T is the temperature, P is the pressure, \ell is the mean free path, and vT is the mean thermal speed:
\ell = \frac{k_B T}{\sqrt{2}\,\pi d^2 P}, \qquad v_T = \sqrt{\frac{8 k_B T}{\pi m}}.
We can see that the diffusion coefficient in the mean free path approximation grows with T as T3/2 and decreases with P as 1/P. If we use for P the ideal gas law P = RnT with the total concentration n, then we can see that for given concentration n the diffusion coefficient grows with T as T1/2 and for given temperature it decreases with the total concentration as 1/n.
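A rough numerical sketch of this mean-free-path estimate of the self-diffusion coefficient; the molecular diameter and mass (nitrogen-like values) are assumptions.

```python
import math

k_B = 1.380649e-23   # J/K
T = 300.0            # K
P = 1.0e5            # Pa
d = 3.7e-10          # assumed molecular diameter, m
m = 4.65e-26         # assumed molecular mass (~N2), kg

mean_free_path = k_B * T / (math.sqrt(2) * math.pi * d**2 * P)
v_thermal = math.sqrt(8 * k_B * T / (math.pi * m))

# Elementary mean-free-path estimate of the self-diffusion coefficient
D = mean_free_path * v_thermal / 3
print(mean_free_path, v_thermal, D)   # ~7e-8 m, ~4.8e2 m/s, ~1e-5 m^2/s
```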
For two different gases, A and B, with molecular masses mA, mB and molecular diameters dA, dB, the mean free path estimate of the diffusion coefficient of A in B and B in A is:
The theory of diffusion in gases based on Boltzmann's equation
In Boltzmann's kinetics of the mixture of gases, each gas has its own distribution function, , where t is the time moment, x is position and c is velocity of molecule of the ith component of the mixture. Each component has its mean velocity . If the velocities do not coincide then there exists diffusion.
In the Chapman–Enskog approximation, all the distribution functions are expressed through the densities of the conserved quantities:
individual concentrations of particles, (particles per volume),
density of momentum (mi is the ith particle mass),
density of kinetic energy
The kinetic temperature T and pressure P are defined in 3D space as
where is the total density.
For two gases, the difference between velocities, C_1 - C_2, is given by the expression:
where is the force applied to the molecules of the ith component and is the thermodiffusion ratio.
The coefficient D12 is positive. This is the diffusion coefficient. Four terms in the formula for C1−C2 describe four main effects in the diffusion of gases:
describes the flux of the first component from the areas with the high ratio n1/n to the areas with lower values of this ratio (and, analogously the flux of the second component from high n2/n to low n2/n because n2/n = 1 – n1/n);
describes the flux of the heavier molecules to the areas with higher pressure and the lighter molecules to the areas with lower pressure, this is barodiffusion;
describes diffusion caused by the difference of the forces applied to molecules of different types. For example, in the Earth's gravitational field, the heavier molecules should go down, or in electric field the charged molecules should move, until this effect is not equilibrated by the sum of other terms. This effect should not be confused with barodiffusion caused by the pressure gradient.
describes thermodiffusion, the diffusion flux caused by the temperature gradient.
All these effects are called diffusion because they describe the differences between velocities of different components in the mixture. Therefore, these effects cannot be described as a bulk transport and differ from advection or convection.
In the first approximation,
for rigid spheres;
for repulsing force
The number is defined by quadratures (formulas (3.7), (3.9), Ch. 10 of the classical Chapman and Cowling book)
We can see that the dependence on T for the rigid spheres is the same as for the simple mean free path theory but for the power repulsion laws the exponent is different. Dependence on a total concentration n for a given temperature has always the same character, 1/n.
In applications to gas dynamics, the diffusion flux and the bulk flow should be joined in one system of transport equations. The bulk flow describes the mass transfer. Its velocity V is the mass average velocity. It is defined through the momentum density and the mass concentrations:
where is the mass concentration of the ith species, is the mass density.
By definition, the diffusion velocity of the ith component is , .
The mass transfer of the ith component is described by the continuity equation
where is the net mass production rate in chemical reactions, .
In these equations, the term describes advection of the ith component and the term represents diffusion of this component.
In 1948, Wendell H. Furry proposed to use the form of the diffusion rates found in kinetic theory as a framework for the new phenomenological approach to diffusion in gases. This approach was developed further by F.A. Williams and S.H. Lam. For the diffusion velocities in multicomponent gases (N components) they used
Here, is the diffusion coefficient matrix, is the thermal diffusion coefficient, is the body force per unit mass acting on the ith species, is the partial pressure fraction of the ith species (and is the partial pressure), is the mass fraction of the ith species, and
Diffusion of electrons in solids
When the density of electrons in solids is not in equilibrium, diffusion of electrons occurs. For example, when a bias is applied to two ends of a chunk of semiconductor, or a light shines on one end (see right figure), electrons diffuse from high density regions (center) to low density regions (two ends), forming a gradient of electron density. This process generates current, referred to as diffusion current.
Diffusion current can also be described by Fick's first law
J = -D \,\partial n / \partial x,
where J is the diffusion current density (amount of substance) per unit area per unit time, n (for ideal mixtures) is the electron density, and x is the position [length].
Diffusion in geophysics
Analytical and numerical models that solve the diffusion equation for different initial and boundary conditions have been popular for studying a wide variety of changes to the Earth's surface. Diffusion has been used extensively in erosion studies of hillslope retreat, bluff erosion, fault scarp degradation, wave-cut terrace/shoreline retreat, alluvial channel incision, coastal shelf retreat, and delta progradation. Although the Earth's surface is not literally diffusing in many of these cases, the process of diffusion effectively mimics the holistic changes that occur over decades to millennia. Diffusion models may also be used to solve inverse boundary value problems in which some information about the depositional environment is known from paleoenvironmental reconstruction and the diffusion equation is used to figure out the sediment influx and time series of landform changes.
Dialysis
Dialysis works on the principles of the diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane. Diffusion is a property of substances in water; substances in water tend to move from an area of high concentration to an area of low concentration. Blood flows by one side of a semi-permeable membrane, and a dialysate, or special dialysis fluid, flows by the opposite side. A semipermeable membrane is a thin layer of material that contains holes of various sizes, or pores. Smaller solutes and fluid pass through the membrane, but the membrane blocks the passage of larger substances (for example, red blood cells and large proteins). This replicates the filtering process that takes place in the kidneys when the blood enters the kidneys and the larger substances are separated from the smaller ones in the glomerulus.
Random walk (random motion)
One common misconception is that individual atoms, ions or molecules move randomly, which they do not. In the animation on the right, the ion in the left panel appears to have "random" motion in the absence of other ions. As the right panel shows, however, this motion is not random but is the result of "collisions" with other ions. As such, the movement of a single atom, ion, or molecule within a mixture just appears random when viewed in isolation. The movement of a substance within a mixture by "random walk" is governed by the kinetic energy within the system that can be affected by changes in concentration, pressure or temperature. (This is a classical description. At smaller scales, quantum effects will be non-negligible, in general. Thus, the study of the movement of a single atom becomes more subtle since particles at such small scales are described by probability amplitudes rather than deterministic measures of position and velocity.)
Separation of diffusion from convection in gases
While Brownian motion of multi-molecular mesoscopic particles (like the pollen grains studied by Brown) is observable under an optical microscope, molecular diffusion can only be probed in carefully controlled experimental conditions. Since Graham's experiments, it has been well known that avoiding convection is necessary, and this may be a non-trivial task.
Under normal conditions, molecular diffusion dominates only at lengths in the nanometre-to-millimetre range. On larger length scales, transport in liquids and gases is normally due to another transport phenomenon, convection. To separate diffusion in these cases, special efforts are needed.
In contrast, heat conduction through solid media is an everyday occurrence (for example, a metal spoon partly immersed in a hot liquid). This explains why the diffusion of heat was explained mathematically before the diffusion of mass.
Other types of diffusion
Anisotropic diffusion, also known as the Perona–Malik equation, enhances high gradients
Atomic diffusion, in solids
Bohm diffusion, spread of plasma across magnetic fields
Eddy diffusion, in coarse-grained description of turbulent flow
Effusion of a gas through small holes
Electronic diffusion, resulting in an electric current called the diffusion current
Facilitated diffusion, present in some organisms
Gaseous diffusion, used for isotope separation
Heat equation, diffusion of thermal energy
Itō diffusion, mathematisation of Brownian motion, continuous stochastic process.
Knudsen diffusion of gas in long pores with frequent wall collisions
Lévy flight
Molecular diffusion, diffusion of molecules from more dense to less dense areas
Momentum diffusion, e.g. the diffusion of the hydrodynamic velocity field
Photon diffusion
Plasma diffusion
Random walk, model for diffusion
Reverse diffusion, against the concentration gradient, in phase separation
Rotational diffusion, random reorientation of molecules
Spin diffusion, diffusion of spin magnetic moments in solids
Surface diffusion, diffusion of adparticles on a surface
Taxis is an animal's directional movement activity in response to a stimulus
Kinesis is an animal's non-directional movement activity in response to a stimulus
Trans-cultural diffusion, diffusion of cultural traits across geographical area
Turbulent diffusion, transport of mass, heat, or momentum within a turbulent fluid
See also
References
Time evolution | Time evolution is the change of state brought about by the passage of time, applicable to systems with internal state (also called stateful systems). In this formulation, time is not required to be a continuous parameter, but may be discrete or even finite. In classical physics, time evolution of a collection of rigid bodies is governed by the principles of classical mechanics. In their most rudimentary form, these principles express the relationship between forces acting on the bodies and their acceleration given by Newton's laws of motion. These principles can be equivalently expressed more abstractly by Hamiltonian mechanics or Lagrangian mechanics.
The concept of time evolution may be applicable to other stateful systems as well. For instance, the operation of a Turing machine can be regarded as the time evolution of the machine's control state together with the state of the tape (or possibly multiple tapes) including the position of the machine's read-write head (or heads). In this case, time is considered to be discrete steps.
Stateful systems often have dual descriptions in terms of states or in terms of observable values. In such systems, time evolution can also refer to the change in observable values. This is particularly relevant in quantum mechanics where the Schrödinger picture and Heisenberg picture are (mostly) equivalent descriptions of time evolution.
Time evolution operators
Consider a system with state space X for which evolution is deterministic and reversible. For concreteness let us also suppose time is a parameter that ranges over the set of real numbers R. Then time evolution is given by a family of bijective state transformations
(F_{t,s} : X \to X)_{s, t \in \mathbb{R}}.
F_{t, s}(x) is the state of the system at time t, whose state at time s is x. The following identity holds:
F_{u,t}(F_{t,s}(x)) = F_{u,s}(x).
To see why this is true, suppose x ∈ X is the state at time s. Then by the definition of F, Ft, s(x) is the state of the system at time t and consequently applying the definition once more, Fu, t(Ft, s(x)) is the state at time u. But this is also Fu, s(x).
In some contexts in mathematical physics, the mappings Ft, s are called propagation operators or simply propagators. In classical mechanics, the propagators are functions that operate on the phase space of a physical system. In quantum mechanics, the propagators are usually unitary operators on a Hilbert space. The propagators can be expressed as time-ordered exponentials of the integrated Hamiltonian. The asymptotic properties of time evolution are given by the scattering matrix.
A state space with a distinguished propagator is also called a dynamical system.
To say time evolution is homogeneous means that
F_{t,s} = F_{t-s,0} \quad for all t, s \in \mathbb{R}.
In the case of a homogeneous system, the mappings Gt = Ft,0 form a one-parameter group of transformations of X, that is
G_{t+s} = G_t G_s.
For non-reversible systems, the propagation operators Ft, s are defined whenever t ≥ s and satisfy the propagation identity
F_{u,t} F_{t,s} = F_{u,s}
for any u ≥ t ≥ s.
In the homogeneous case the propagators are exponentials of the Hamiltonian.
In quantum mechanics
In the Schrödinger picture, the Hamiltonian operator H generates the time evolution of quantum states. If \left|\psi(t)\right\rangle is the state of the system at time t, then
H \left|\psi(t)\right\rangle = i\hbar \frac{\partial}{\partial t} \left|\psi(t)\right\rangle.
This is the Schrödinger equation. Given the state at some initial time (t = 0), if H is independent of time, then the unitary time evolution operator U(t) is the exponential operator as shown in the equation
\left|\psi(t)\right\rangle = U(t) \left|\psi(0)\right\rangle = e^{-iHt/\hbar} \left|\psi(0)\right\rangle.
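A minimal sketch of this time-evolution operator for an assumed two-level Hamiltonian, computed with a matrix exponential; natural units with ħ = 1 are an assumption made for simplicity.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                       # natural units (assumed)
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # assumed two-level Hamiltonian (Pauli-x coupling)

psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state |0>

def evolve(psi, t):
    # U(t) = exp(-i H t / hbar), valid for a time-independent H
    U = expm(-1j * H * t / hbar)
    return U @ psi

for t in (0.0, np.pi / 4, np.pi / 2):
    psi_t = evolve(psi0, t)
    print(t, np.abs(psi_t)**2)   # populations oscillate between the two levels
```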
See also
Arrow of time
Time translation symmetry
Hamiltonian system
Propagator
Time evolution operator
Hamiltonian (control theory)
References
Dynamical systems
Vis viva | Vis viva (from the Latin for "living force") is a historical term used to describe a quantity similar to kinetic energy in an early formulation of the principle of conservation of energy.
Overview
Proposed by Gottfried Leibniz over the period 1676–1689, the theory was controversial as it seemed to oppose the theory of conservation of quantity of motion advocated by René Descartes. Descartes' quantity of motion was different from momentum, but Newton defined the quantity of motion as the conjunction of the quantity of matter and velocity in Definition II of his Principia. In Definition III, he defined the force that resists a change in motion as the vis inertia of Descartes. Newton’s Third Law of Motion (for every action there is an equal and opposite reaction) is also equivalent to the principle of conservation of momentum. Leibniz accepted the principle of conservation of momentum, but rejected the Cartesian version of it. The difference between these ideas was whether the quantity of motion was simply related to a body's resistance to a change in velocity (vis inertia) or whether a body's amount of force due to its motion (vis viva) was related to the square of its velocity.
The theory was eventually absorbed into the modern theory of energy, though the term still survives in the context of celestial mechanics through the vis viva equation. The English equivalent "living force" was also used, for example by George William Hill.
The term is due to the German philosopher Gottfried Wilhelm Leibniz, who was the first to attempt a mathematical formulation from 1676 to 1689. Leibniz noticed that in many mechanical systems (of several masses m_i, each with velocity v_i) the quantity
\sum_i m_i v_i^2
was conserved. He called this quantity the vis viva or "living force" of the system. The principle represented an accurate statement of the conservation of kinetic energy in elastic collisions that was independent of the conservation of momentum.
However, many physicists at the time were unaware of this fact and, instead, were influenced by the prestige of Sir Isaac Newton in England and of René Descartes in France, both of whom advanced the conservation of momentum as a guiding principle. Thus the momentum:
\sum_i m_i v_i
was held by the rival camp to be the conserved vis viva. It was largely engineers such as John Smeaton, Peter Ewart, Karl Holtzmann, Gustave-Adolphe Hirn and Marc Seguin who objected that conservation of momentum alone was not adequate for practical calculation and who made use of Leibniz's principle. The principle was also championed by some chemists such as William Hyde Wollaston.
The French mathematician Émilie du Châtelet, who had a sound grasp of Newtonian mechanics, developed Leibniz's concept and, combining it with the observations of Willem 's Gravesande, showed that vis viva was dependent on the square of the velocities.
Members of the academic establishment such as John Playfair were quick to point out that kinetic energy is clearly not conserved. This is obvious to a modern analysis based on the second law of thermodynamics, but in the 18th and 19th centuries, the fate of the lost energy was still unknown. Gradually it came to be suspected that the heat inevitably generated by motion was another form of vis viva. In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing theories of vis viva and caloric theory. Count Rumford's 1798 observations of heat generation during the boring of cannons added more weight to the view that mechanical motion could be converted into heat. Vis viva began to be known as energy after Thomas Young first used the term in 1807.
The recalibration of vis viva to include the coefficient of a half, namely:
\frac{1}{2} \sum_i m_i v_i^2
was largely the result of the work of Gaspard-Gustave Coriolis and Jean-Victor Poncelet over the period 1819–1839, although the present-day definition can occasionally be found earlier (e.g., in Daniel Bernoulli's texts). The former called it the quantité de travail (quantity of work) and the latter, travail mécanique (mechanical work) and both championed its use in engineering calculation.
See also
Conservation of energy: Historical development
Élan vital
Kinetic energy
Orthogenesis
Potentiality and actuality
Vis-viva equation
Notes
References
George E. Smith, "The Vis Viva Dispute: A Controversy at the Dawn of Dynamics", Physics Today 59 (October 2006) Issue 10 pp 31–36. (see also erratum)
Natural philosophy
Obsolete theories in physics
Mechanics
Thermodynamics
Gottfried Wilhelm Leibniz
History of thermodynamics | 0.800354 | 0.978668 | 0.78328 |
Equations of motion | In physics, equations of motion are equations that describe the behavior of a physical system in terms of its motion as a function of time. More specifically, the equations of motion describe the behavior of a physical system as a set of mathematical functions in terms of dynamic variables. These variables are usually spatial coordinates and time, but may include momentum components. The most general choice are generalized coordinates which can be any convenient variables characteristic of the physical system. The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations are the solutions for the differential equations describing the motion of the dynamics.
Types
There are two main descriptions of motion: dynamics and kinematics. Dynamics is general, since the momenta, forces and energy of the particles are taken into account. In this instance, sometimes the term dynamics refers to the differential equations that the system satisfies (e.g., Newton's second law or Euler–Lagrange equations), and sometimes to the solutions to those equations.
However, kinematics is simpler. It concerns only variables derived from the positions of objects and time. In circumstances of constant acceleration, these simpler equations of motion are usually referred to as the SUVAT equations, arising from the definitions of kinematic quantities: displacement, initial velocity, final velocity, acceleration, and time.
A differential equation of motion, usually identified as some physical law (for example, F = ma) and applying definitions of physical quantities, is used to set up an equation for the problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a family of solutions. A particular solution can be obtained by setting the initial values, which fixes the values of the constants.
To state this formally, in general an equation of motion M is a function of the position r of the object, its velocity (the first time derivative of r, v = dr/dt), its acceleration (the second derivative of r, a = d^2r/dt^2), and time t. Euclidean vectors in 3D are denoted throughout in bold. This is equivalent to saying an equation of motion in r is a second-order ordinary differential equation (ODE) in r,
M\left[ \mathbf{r}(t), \dot{\mathbf{r}}(t), \ddot{\mathbf{r}}(t), t \right] = 0,
where t is time, and each overdot denotes one time derivative. The initial conditions are given by the constant values at t = 0,
\mathbf{r}(0), \quad \dot{\mathbf{r}}(0).
The solution to the equation of motion, with specified initial values, describes the system for all times t after t = 0. Other dynamical variables like the momentum p of the object, or quantities derived from r and p like angular momentum, can be used in place of r as the quantity to solve for from some equation of motion, although the position of the object at time t is by far the most sought-after quantity.
Sometimes, the equation will be linear and is more likely to be exactly solvable. In general, the equation will be non-linear, and cannot be solved exactly so a variety of approximations must be used. The solutions to nonlinear equations may show chaotic behavior depending on how sensitive the system is to the initial conditions.
History
Kinematics, dynamics and the mathematical models of the universe developed incrementally over three millennia, thanks to many thinkers, only some of whose names we know. In antiquity, priests, astrologers and astronomers predicted solar and lunar eclipses, the solstices and the equinoxes of the Sun and the period of the Moon. But they had nothing other than a set of algorithms to guide them. Equations of motion were not written down for another thousand years.
Medieval scholars in the thirteenth century — for example at the relatively new universities in Oxford and Paris — drew on ancient mathematicians (Euclid and Archimedes) and philosophers (Aristotle) to develop a new body of knowledge, now called physics.
At Oxford, Merton College sheltered a group of scholars devoted to natural science, mainly physics, astronomy and mathematics, who were of similar stature to the intellectuals at the University of Paris. Thomas Bradwardine extended Aristotelian quantities such as distance and velocity, and assigned intensity and extension to them. Bradwardine suggested an exponential law involving force, resistance, distance, velocity and time. Nicholas Oresme further extended Bradwardine's arguments. The Merton school proved that the quantity of motion of a body undergoing a uniformly accelerated motion is equal to the quantity of a uniform motion at the speed achieved halfway through the accelerated motion.
For writers on kinematics before Galileo, since small time intervals could not be measured, the affinity between time and motion was obscure. They used time as a function of distance, and in free fall, greater velocity as a result of greater elevation. Only Domingo de Soto, a Spanish theologian, in his commentary on Aristotle's Physics published in 1545, after defining "uniform difform" motion (which is uniformly accelerated motion) – the word velocity was not used – as proportional to time, declared correctly that this kind of motion was identifiable with freely falling bodies and projectiles, without his proving these propositions or suggesting a formula relating time, velocity and distance. De Soto's comments are remarkably correct regarding the definitions of acceleration (acceleration was a rate of change of motion (velocity) in time) and the observation that acceleration would be negative during ascent.
Discourses such as these spread throughout Europe, shaping the work of Galileo Galilei and others, and helped in laying the foundation of kinematics. Galileo deduced the equation s = \frac{1}{2} g t^2 in his work geometrically, using the Merton rule, now known as a special case of one of the equations of kinematics.
Galileo was the first to show that the path of a projectile is a parabola. Galileo had an understanding of centrifugal force and gave a correct definition of momentum. This emphasis of momentum as a fundamental quantity in dynamics is of prime importance. He measured momentum by the product of velocity and weight; mass is a later concept, developed by Huygens and Newton. In the swinging of a simple pendulum, Galileo says in Discourses that "every momentum acquired in the descent along an arc is equal to that which causes the same moving body to ascend through the same arc." His analysis on projectiles indicates that Galileo had grasped the first law and the second law of motion. He did not generalize and make them applicable to bodies not subject to the earth's gravitation. That step was Newton's contribution.
The term "inertia" was used by Kepler who applied it to bodies at rest. (The first law of motion is now often called the law of inertia.)
Galileo did not fully grasp the third law of motion, the law of the equality of action and reaction, though he corrected some errors of Aristotle. With Stevin and others Galileo also wrote on statics. He formulated the principle of the parallelogram of forces, but he did not fully recognize its scope.
Galileo also was interested by the laws of the pendulum, his first observations of which were as a young man. In 1583, while he was praying in the cathedral at Pisa, his attention was arrested by the motion of the great lamp, lighted and left swinging, which he timed against his own pulse. To him the period appeared the same, even after the motion had greatly diminished, and he thus discovered the isochronism of the pendulum.
More careful experiments carried out by him later, and described in his Discourses, revealed that the period of oscillation varies with the square root of the length but is independent of the mass of the pendulum.
Thus we arrive at René Descartes, Isaac Newton, Gottfried Leibniz, et al.; and the evolved forms of the equations of motion that begin to be recognized as the modern ones.
Later the equations of motion also appeared in electrodynamics, when describing the motion of charged particles in electric and magnetic fields, the Lorentz force is the general equation which serves as the definition of what is meant by an electric field and magnetic field. With the advent of special relativity and general relativity, the theoretical modifications to spacetime meant the classical equations of motion were also modified to account for the finite speed of light, and curvature of spacetime. In all these cases the differential equations were in terms of a function describing the particle's trajectory in terms of space and time coordinates, as influenced by forces or energy transformations.
However, the equations of quantum mechanics can also be considered "equations of motion", since they are differential equations of the wavefunction, which describes how a quantum state behaves analogously using the space and time coordinates of the particles. There are analogs of equations of motion in other areas of physics, for collections of physical phenomena that can be considered waves, fluids, or fields.
Kinematic equations for one particle
Kinematic quantities
From the instantaneous position r = r(t), instantaneous meaning at an instant value of time t, the instantaneous velocity v = v(t) and acceleration a = a(t) have the general, coordinate-independent definitions:
\mathbf{v} = \frac{d\mathbf{r}}{dt}, \qquad \mathbf{a} = \frac{d\mathbf{v}}{dt} = \frac{d^2\mathbf{r}}{dt^2}.
Notice that velocity always points in the direction of motion, in other words for a curved path it is the tangent vector. Loosely speaking, first order derivatives are related to tangents of curves. Still for curved paths, the acceleration is directed towards the center of curvature of the path. Again, loosely speaking, second order derivatives are related to curvature.
The rotational analogues are the "angular vector" (angle the particle rotates about some axis) θ, angular velocity ω, and angular acceleration α:
\boldsymbol{\theta} = \theta \hat{\mathbf{n}}, \qquad \boldsymbol{\omega} = \frac{d\boldsymbol{\theta}}{dt}, \qquad \boldsymbol{\alpha} = \frac{d\boldsymbol{\omega}}{dt},
where \hat{\mathbf{n}} is a unit vector in the direction of the axis of rotation, and θ is the angle the object turns through about the axis.
The following relation holds for a point-like particle, orbiting about some axis with angular velocity ω:
\mathbf{v} = \boldsymbol{\omega} \times \mathbf{r},
where r is the position vector of the particle (radial from the rotation axis) and v is the tangential velocity of the particle. For a rotating continuum rigid body, these relations hold for each point in the rigid body.
Uniform acceleration
The differential equation of motion for a particle of constant or uniform acceleration in a straight line is simple: the acceleration is constant, so the second derivative of the position of the object is constant. The results of this case are summarized below.
Constant translational acceleration in a straight line
These equations apply to a particle moving linearly, in three dimensions in a straight line with constant acceleration. Since the position, velocity, and acceleration are collinear (parallel, and lie on the same line) – only the magnitudes of these vectors are necessary, and because the motion is along a straight line, the problem effectively reduces from three dimensions to one.
where:
r_0 is the particle's initial position
r is the particle's final position
v_0 is the particle's initial velocity
v is the particle's final velocity
a is the particle's acceleration
t is the time interval
Equations [1] and [2] are from integrating the definitions of velocity and acceleration, subject to the initial conditions and ;
in magnitudes,
Equation [3] involves the average velocity . Intuitively, the velocity increases linearly, so the average velocity multiplied by time is the distance traveled while increasing the velocity from to , as can be illustrated graphically by plotting velocity against time as a straight line graph. Algebraically, it follows from solving [1] for
and substituting into [2]
then simplifying to get
or in magnitudes
From [3],
substituting for in [1]:
From [3],
substituting into [2]:
Usually only the first 4 are needed, the fifth is optional.
Here a is constant acceleration, or in the case of bodies moving under the influence of gravity, the standard gravity g is used. Note that each of the equations contains four of the five variables, so in this situation it is sufficient to know three out of the five variables to calculate the remaining two.
In elementary physics the same formulae are frequently written in different notation as:
where s has replaced r − r_0, and u replaces v_0. They are often referred to as the SUVAT equations, where "SUVAT" is an acronym from the variables: s = displacement, u = initial velocity, v = final velocity, a = acceleration, t = time.
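A small sketch of the SUVAT relations for a thrown ball; the initial speed and the sign convention (up is positive) are assumptions for illustration, and the Torricelli relation [4] is used as a cross-check.

```python
import math

def suvat(u, a, t):
    """Constant-acceleration kinematics: return (s, v) after time t.

    s = u t + (1/2) a t^2,  v = u + a t   (assumed sign convention: up is positive).
    """
    s = u * t + 0.5 * a * t**2
    v = u + a * t
    return s, v

# Example: a ball thrown upwards at 20 m/s under gravity g = 9.81 m/s^2
u, a = 20.0, -9.81
for t in (0.5, 1.0, 2.0):
    s, v = suvat(u, a, t)
    # Cross-check with the Torricelli relation v^2 = u^2 + 2 a s
    assert math.isclose(v**2, u**2 + 2 * a * s, rel_tol=1e-9)
    print(t, s, v)
```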
Constant linear acceleration in any direction
The initial position, initial velocity, and acceleration vectors need not be collinear, and the equations of motion take an almost identical form. The only difference is that the square magnitudes of the velocities require the dot product. The derivations are essentially the same as in the collinear case,
although the Torricelli equation [4] can be derived using the distributive property of the dot product as follows:
Applications
Elementary and frequent examples in kinematics involve projectiles, for example a ball thrown upwards into the air. Given initial speed u, one can calculate how high the ball will travel before it begins to fall. The acceleration is the local acceleration of gravity g. While these quantities appear to be scalars, the direction of displacement, speed and acceleration is important. They could in fact be considered as unidirectional vectors. Choosing s to measure up from the ground, the acceleration a must be in fact −g, since the force of gravity acts downwards and therefore also the acceleration on the ball due to it.
At the highest point, the ball will be at rest: therefore v = 0. Using equation [4] in the set above, we have:
s = \frac{v^2 - u^2}{-2g}.
Substituting and cancelling minus signs gives:
s = \frac{u^2}{2g}.
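The same worked example as a two-line sketch; the initial speed of 20 m/s is an assumed value.

```python
u = 20.0      # initial upward speed, m/s (assumed)
g = 9.81      # m/s^2

# From v^2 = u^2 - 2 g s with v = 0 at the highest point:
s_max = u**2 / (2 * g)
print(s_max)  # ~20.4 m
```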
Constant circular acceleration
The analogues of the above equations can be written for rotation. Again these axial vectors must all be parallel to the axis of rotation, so only the magnitudes of the vectors are necessary,
where α is the constant angular acceleration, ω is the angular velocity, ω_0 is the initial angular velocity, θ is the angle turned through (angular displacement), θ_0 is the initial angle, and t is the time taken to rotate from the initial state to the final state.
General planar motion
These are the kinematic equations for a particle traversing a path in a plane, described by position . They are simply the time derivatives of the position vector in plane polar coordinates using the definitions of physical quantities above for angular velocity and angular acceleration . These are instantaneous quantities which change with time.
The position of the particle is
where ê_r and ê_θ are the polar unit vectors. Differentiating with respect to time gives the velocity
with radial component and an additional component due to the rotation. Differentiating with respect to time again obtains the acceleration
which breaks into the radial acceleration , centripetal acceleration , Coriolis acceleration , and angular acceleration .
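These components can be reproduced symbolically; the sketch below (assuming the sympy library is available) differentiates the position vector written in Cartesian components and projects the result onto the rotating unit vectors.

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)          # radial coordinate r(t)
th = sp.Function('theta')(t)     # angular coordinate theta(t)

# Cartesian components of position, velocity and acceleration
x, y = r * sp.cos(th), r * sp.sin(th)
vx, vy = sp.diff(x, t), sp.diff(y, t)
ax, ay = sp.diff(vx, t), sp.diff(vy, t)

# Polar unit vectors e_r and e_theta
e_r = sp.Matrix([sp.cos(th), sp.sin(th)])
e_t = sp.Matrix([-sp.sin(th), sp.cos(th)])

v, acc = sp.Matrix([vx, vy]), sp.Matrix([ax, ay])
print(sp.simplify(v.dot(e_r)), sp.simplify(v.dot(e_t)))      # r',  r*theta'
print(sp.simplify(acc.dot(e_r)), sp.simplify(acc.dot(e_t)))  # r'' - r*theta'**2,  2*r'*theta' + r*theta''
```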
Special cases of motion described by these equations are summarized qualitatively in the table below. Two have already been discussed above, in the cases that either the radial components or the angular components are zero, and the non-zero component of motion describes uniform acceleration.
General 3D motions
In 3D space, the equations in spherical coordinates with corresponding unit vectors , and , the position, velocity, and acceleration generalize respectively to
In the case of a constant this reduces to the planar equations above.
Dynamic equations of motion
Newtonian mechanics
The first general equation of motion developed was Newton's second law of motion. In its most general form it states the rate of change of momentum of an object equals the force acting on it,
The force in the equation is not the force the object exerts. Replacing momentum by mass times velocity, the law is also written more famously as
since is a constant in Newtonian mechanics.
Newton's second law applies to point-like particles, and to all points in a rigid body. It also applies to each point in a mass continuum, like deformable solids or fluids, but the motion of the system must be accounted for; see material derivative. In the case that the mass is not constant, it is not sufficient to use the product rule for the time derivative on the mass and velocity, and Newton's second law requires some modification consistent with conservation of momentum; see variable-mass system.
It may be simple to write down the equations of motion in vector form using Newton's laws of motion, but the components may vary in complicated ways with spatial coordinates and time, and solving them is not easy. Often there is an excess of variables to solve for the problem completely, so Newton's laws are not always the most efficient way to determine the motion of a system. In simple cases of rectangular geometry, Newton's laws work fine in Cartesian coordinates, but in other coordinate systems can become dramatically complex.
The momentum form is preferable since it is readily generalized to more complex systems, such as special and general relativity (see four-momentum). It can also be used with momentum conservation. However, Newton's laws are not more fundamental than momentum conservation, because Newton's laws are merely consistent with the fact that zero resultant force acting on an object implies constant momentum, while a resultant force implies the momentum is not constant. Momentum conservation is always true for an isolated system not subject to resultant forces.
For a number of particles (see many body problem), the equation of motion for one particle influenced by other particles is
where p_i is the momentum of particle i, F_ij is the force on particle i by particle j, and F_E is the resultant external force due to any agent not part of the system. Particle i does not exert a force on itself.
Euler's laws of motion are similar to Newton's laws, but they are applied specifically to the motion of rigid bodies. The Newton–Euler equations combine the forces and torques acting on a rigid body into a single equation.
Newton's second law for rotation takes a similar form to the translational case,
by equating the torque acting on the body to the rate of change of its angular momentum . Analogous to mass times acceleration, the moment of inertia tensor depends on the distribution of mass about the axis of rotation, and the angular acceleration is the rate of change of angular velocity,
Again, these equations apply to point like particles, or at each point of a rigid body.
Likewise, for a number of particles, the equation of motion for one particle is
where L_i is the angular momentum of particle i, τ_ij the torque on particle i by particle j, and τ_E is the resultant external torque (due to any agent not part of the system). Particle i does not exert a torque on itself.
Applications
Some examples of Newton's law include describing the motion of a simple pendulum,
and a damped, sinusoidally driven harmonic oscillator,
For describing the motion of masses due to gravity, Newton's law of gravity can be combined with Newton's second law. For two examples, a ball of mass thrown in the air, in air currents (such as wind) described by a vector field of resistive forces ,
where is the gravitational constant, the mass of the Earth, and is the acceleration of the projectile due to the air currents at position and time .
The classical -body problem for particles each interacting with each other due to gravity is a set of nonlinear coupled second order ODEs,
where the index i labels the quantities (mass, position, etc.) associated with each particle.
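As a hedged sketch of what such a coupled system looks like in practice, the following Python fragment (using numpy and scipy, with arbitrary example masses and initial conditions loosely resembling the Earth–Moon system) integrates the gravitational equations of motion for two bodies; nothing in it is specific to the original text.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 6.674e-11
masses = np.array([5.0e24, 7.0e22])          # kg, assumed example values

def nbody(t, s):
    n = len(masses)
    pos = s[:2 * n].reshape(n, 2)
    vel = s[2 * n:].reshape(n, 2)
    acc = np.zeros_like(pos)
    for i in range(n):                       # pairwise inverse-square attractions
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([vel.ravel(), acc.ravel()])

state0 = np.array([0.0, 0.0, 3.8e8, 0.0,     # positions (x1, y1, x2, y2), m
                   0.0, 0.0, 0.0, 1.0e3])    # velocities (vx1, vy1, vx2, vy2), m/s
sol = solve_ivp(nbody, (0.0, 86400.0), state0, rtol=1e-9)   # one day of motion
print(sol.y[:, -1])
```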
Analytical mechanics
Using all three coordinates of 3D space is unnecessary if there are constraints on the system. If the system has degrees of freedom, then one can use a set of generalized coordinates , to define the configuration of the system. They can be in the form of arc lengths or angles. They are a considerable simplification to describe motion, since they take advantage of the intrinsic constraints that limit the system's motion, and the number of coordinates is reduced to a minimum. The time derivatives of the generalized coordinates are the generalized velocities
The Euler–Lagrange equations are
where the Lagrangian is a function of the configuration and its time rate of change (and possibly time )
Setting up the Lagrangian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of coupled second order ODEs in the coordinates are obtained.
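For instance, the procedure can be carried out symbolically; the sketch below (assuming sympy) applies the Euler–Lagrange equation to the simple pendulum Lagrangian L = ½ m l² θ̇² + m g l cos θ and recovers the familiar equation of motion.

```python
import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
th = sp.Function('theta')(t)

# Lagrangian of a simple pendulum (kinetic minus potential energy)
L = sp.Rational(1, 2) * m * l**2 * sp.diff(th, t)**2 + m * g * l * sp.cos(th)

# d/dt (dL/d(theta')) - dL/d(theta) = 0
eom = sp.diff(sp.diff(L, sp.diff(th, t)), t) - sp.diff(L, th)
print(sp.simplify(eom))   # m*l**2*theta'' + m*g*l*sin(theta), i.e. theta'' = -(g/l) sin(theta)
```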
Hamilton's equations are
where the Hamiltonian
is a function of the configuration and conjugate "generalized" momenta
in which is a shorthand notation for a vector of partial derivatives with respect to the indicated variables (see for example matrix calculus for this denominator notation), and possibly time ,
Setting up the Hamiltonian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of coupled first order ODEs in the coordinates and momenta are obtained.
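As an illustrative sketch, Hamilton's equations dq/dt = ∂H/∂p and dp/dt = −∂H/∂q for a one-dimensional harmonic oscillator with H = p²/2m + ½kq² can be integrated directly; all numerical values below are assumptions chosen for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 4.0                       # assumed mass and spring constant

def hamilton(t, state):
    q, p = state
    return [p / m,                    #  dq/dt =  dH/dp
            -k * q]                   #  dp/dt = -dH/dq

sol = solve_ivp(hamilton, (0.0, 10.0), [1.0, 0.0], rtol=1e-9)
print(sol.y[:, -1])                   # oscillates at angular frequency sqrt(k/m) = 2 rad/s
```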
The Hamilton–Jacobi equation is
where
is Hamilton's principal function, also called the classical action, a functional of the Lagrangian L. In this case, the momenta are given by
Although the equation has a simple general form, for a given Hamiltonian it is actually a single first-order non-linear PDE in N + 1 variables (the N generalized coordinates and time). The action allows identification of conserved quantities for mechanical systems, even when the mechanical problem itself cannot be solved fully, because any differentiable symmetry of the action of a physical system has a corresponding conservation law, a theorem due to Emmy Noether.
All classical equations of motion can be derived from the variational principle known as Hamilton's principle of least action
stating the path the system takes through the configuration space is the one with the least action .
Electrodynamics
In electrodynamics, the force on a charged particle of charge is the Lorentz force:
Combining with Newton's second law gives a first order differential equation of motion, in terms of position of the particle:
or its momentum:
The same equation can be obtained using the Lagrangian (and applying Lagrange's equations above) for a charged particle of mass and charge :
where and are the electromagnetic scalar and vector potential fields. The Lagrangian indicates an additional detail: the canonical momentum in Lagrangian mechanics is given by:
instead of just , implying the motion of a charged particle is fundamentally determined by the mass and charge of the particle. The Lagrangian expression was first used to derive the force equation.
Alternatively the Hamiltonian (and substituting into the equations):
can derive the Lorentz force equation.
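A hedged numerical sketch of this equation of motion, for a charge in a uniform magnetic field (the unit charge, mass and field strength are assumed example values), shows the expected circular cyclotron orbit.

```python
import numpy as np
from scipy.integrate import solve_ivp

q, m = 1.0, 1.0                              # assumed charge and mass
E = np.array([0.0, 0.0, 0.0])                # no electric field in this example
B = np.array([0.0, 0.0, 1.0])                # uniform magnetic field along z

def lorentz(t, state):
    v = state[3:]
    a = (q / m) * (E + np.cross(v, B))       # m dv/dt = q (E + v x B)
    return np.concatenate([v, a])

state0 = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])   # start at the origin, moving along +x
sol = solve_ivp(lorentz, (0.0, 2 * np.pi), state0, rtol=1e-9)
print(sol.y[:3, -1])   # after one cyclotron period (omega = qB/m = 1) the particle returns near the origin
```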
General relativity
Geodesic equation of motion
The above equations are valid in flat spacetime. In curved spacetime, things become mathematically more complicated since there is no straight line; this is generalized and replaced by a geodesic of the curved spacetime (the shortest length of curve between two points). For curved manifolds with a metric tensor , the metric provides the notion of arc length (see line element for details). The differential arc length is given by:
and the geodesic equation is a second-order differential equation in the coordinates. The general solution is a family of geodesics:
where Γ is a Christoffel symbol of the second kind, which contains the metric (with respect to the coordinate system).
Given the mass-energy distribution provided by the stress–energy tensor , the Einstein field equations are a set of non-linear second-order partial differential equations in the metric, and imply the curvature of spacetime is equivalent to a gravitational field (see equivalence principle). Mass falling in curved spacetime is equivalent to a mass falling in a gravitational field - because gravity is a fictitious force. The relative acceleration of one geodesic to another in curved spacetime is given by the geodesic deviation equation:
where is the separation vector between two geodesics, (not just ) is the covariant derivative, and is the Riemann curvature tensor, containing the Christoffel symbols. In other words, the geodesic deviation equation is the equation of motion for masses in curved spacetime, analogous to the Lorentz force equation for charges in an electromagnetic field.
For flat spacetime, the metric is a constant tensor so the Christoffel symbols vanish, and the geodesic equation has the solutions of straight lines. This is also the limiting case when masses move according to Newton's law of gravity.
Spinning objects
In general relativity, rotational motion is described by the relativistic angular momentum tensor, including the spin tensor, which enter the equations of motion under covariant derivatives with respect to proper time. The Mathisson–Papapetrou–Dixon equations describe the motion of spinning objects moving in a gravitational field.
Analogues for waves and fields
Unlike the equations of motion for describing particle mechanics, which are systems of coupled ordinary differential equations, the analogous equations governing the dynamics of waves and fields are always partial differential equations, since the waves or fields are functions of space and time. For a particular solution, boundary conditions along with initial conditions need to be specified.
Sometimes in the following contexts, the wave or field equations are also called "equations of motion".
Field equations
Equations that describe the spatial dependence and time evolution of fields are called field equations. These include
Maxwell's equations for the electromagnetic field,
Poisson's equation for Newtonian gravitational or electrostatic field potentials,
the Einstein field equation for gravitation (Newton's law of gravity is a special case for weak gravitational fields and low velocities of particles).
This terminology is not universal: for example although the Navier–Stokes equations govern the velocity field of a fluid, they are not usually called "field equations", since in this context they represent the momentum of the fluid and are called the "momentum equations" instead.
Wave equations
Equations of wave motion are called wave equations. The solutions to a wave equation give the time-evolution and spatial dependence of the amplitude. Boundary conditions determine if the solutions describe traveling waves or standing waves.
From classical equations of motion and field equations; mechanical, gravitational wave, and electromagnetic wave equations can be derived. The general linear wave equation in 3D is:
where is any mechanical or electromagnetic field amplitude, say:
the transverse or longitudinal displacement of a vibrating rod, wire, cable, membrane etc.,
the fluctuating pressure of a medium, sound pressure,
the electric fields or , or the magnetic fields or ,
the voltage or current in an alternating current circuit,
and is the phase velocity. Nonlinear equations model the dependence of phase velocity on amplitude, replacing by . There are other linear and nonlinear wave equations for very specific applications, see for example the Korteweg–de Vries equation.
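A minimal sketch of how such a wave equation is solved in practice: the fragment below integrates the 1D linear wave equation for a string with fixed ends using an explicit finite-difference scheme; the grid sizes and the initial profile are arbitrary illustrative choices.

```python
import numpy as np

c, nx = 1.0, 201                          # assumed phase velocity and grid size
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.5 * dx / c                         # time step satisfying the CFL stability condition
r2 = (c * dt / dx) ** 2

u_prev = np.sin(np.pi * x)                # initial displacement: fundamental standing-wave mode
u = u_prev.copy()                         # zero initial velocity

for _ in range(400):                      # explicit update of u_tt = c**2 * u_xx
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next                 # the ends stay clamped at zero

print(float(u[nx // 2]))                  # about -1: the profile has inverted after half a period
```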
Quantum theory
In quantum theory, the wave and field concepts both appear.
In quantum mechanics the analogue of the classical equations of motion (Newton's law, Euler–Lagrange equation, Hamilton–Jacobi equation, etc.) is the Schrödinger equation in its most general form:
where ψ is the wavefunction of the system, Ĥ is the quantum Hamiltonian operator (rather than a function as in classical mechanics), and ħ is the reduced Planck constant, the Planck constant divided by 2π. Setting up the Hamiltonian and inserting it into the equation results in a wave equation; the solution is the wavefunction as a function of space and time. The Schrödinger equation itself reduces to the Hamilton–Jacobi equation when one considers the correspondence principle, in the limit that ħ becomes zero. To compare to measurements, operators for observables must be applied to the quantum wavefunction according to the experiment performed, leading to either wave-like or particle-like results.
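As a hedged illustration of "setting up the Hamiltonian and inserting it into the equation", the time-independent problem for a one-dimensional harmonic oscillator can be discretised on a grid and solved as a matrix eigenvalue problem (units with ħ = m = ω = 1 are assumed); the lowest eigenvalue approximates the zero-point energy ½.

```python
import numpy as np

n, L = 1000, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + (1/2) x^2, with the second derivative by central differences
kinetic = (-0.5 / dx**2) * (np.diag(np.ones(n - 1), 1)
                            - 2.0 * np.diag(np.ones(n))
                            + np.diag(np.ones(n - 1), -1))
H = kinetic + np.diag(0.5 * x**2)

print(np.linalg.eigvalsh(H)[:3])   # approximately 0.5, 1.5, 2.5 — the ladder E_n = (n + 1/2)
```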
Throughout all aspects of quantum theory, relativistic or non-relativistic, there are various formulations alternative to the Schrödinger equation that govern the time evolution and behavior of a quantum system, for instance:
the Heisenberg equation of motion resembles the time evolution of classical observables as functions of position, momentum, and time, if one replaces dynamical observables by their quantum operators and the classical Poisson bracket by the commutator,
the phase space formulation closely follows classical Hamiltonian mechanics, placing position and momentum on equal footing,
the Feynman path integral formulation extends the principle of least action to quantum mechanics and field theory, placing emphasis on the use of Lagrangians rather than Hamiltonians.
See also
Scalar (physics)
Vector
Distance
Displacement
Speed
Velocity
Acceleration
Angular displacement
Angular speed
Angular velocity
Angular acceleration
Equations for a falling body
Parabolic trajectory
Curvilinear coordinates
Orthogonal coordinates
Newton's laws of motion
Projectile motion
Torricelli's equation
Euler–Lagrange equation
Generalized forces
Newton–Euler laws of motion for a rigid body
References
Classical mechanics
Equations of physics | 0.785156 | 0.997507 | 0.783198 |
Zero-point energy | Zero-point energy (ZPE) is the lowest possible energy that a quantum mechanical system may have. Unlike in classical mechanics, quantum systems constantly fluctuate in their lowest energy state as described by the Heisenberg uncertainty principle. Therefore, even at absolute zero, atoms and molecules retain some vibrational motion. Apart from atoms and molecules, the empty space of the vacuum also has these properties. According to quantum field theory, the universe can be thought of not as isolated particles but continuous fluctuating fields: matter fields, whose quanta are fermions (i.e., leptons and quarks), and force fields, whose quanta are bosons (e.g., photons and gluons). All these fields have zero-point energy. These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics since some systems can detect the existence of this energy. However, this aether cannot be thought of as a physical medium if it is to be Lorentz invariant such that there is no contradiction with Einstein's theory of special relativity.
The notion of a zero-point energy is also important for cosmology, and physics currently lacks a full theoretical model for understanding zero-point energy in this context; in particular, the discrepancy between theorized and observed vacuum energy in the universe is a source of major contention. Yet according to Einstein's theory of general relativity, any such energy would gravitate, and the experimental evidence from the expansion of the universe, dark energy and the Casimir effect shows any such energy to be exceptionally weak. One proposal that attempts to address this issue is to say that the fermion field has a negative zero-point energy, while the boson field has positive zero-point energy and thus these energies somehow cancel out each other. This idea would be true if supersymmetry were an exact symmetry of nature; however, the Large Hadron Collider at CERN has so far found no evidence to support it. Moreover, it is known that if supersymmetry is valid at all, it is at most a broken symmetry, only true at very high energies, and no one has been able to show a theory where zero-point cancellations occur in the low-energy universe we observe today. This discrepancy is known as the cosmological constant problem and it is one of the greatest unsolved mysteries in physics. Many physicists believe that "the vacuum holds the key to a full understanding of nature".
Etymology and terminology
The term zero-point energy (ZPE) is a translation from the German . Sometimes used interchangeably with it are the terms zero-point radiation and ground state energy. The term zero-point field (ZPF) can be used when referring to a specific vacuum field, for instance the QED vacuum which specifically deals with quantum electrodynamics (e.g., electromagnetic interactions between photons, electrons and the vacuum) or the QCD vacuum which deals with quantum chromodynamics (e.g., color charge interactions between quarks, gluons and the vacuum). A vacuum can be viewed not as empty space but as the combination of all zero-point fields. In quantum field theory this combination of fields is called the vacuum state, its associated zero-point energy is called the vacuum energy and the average energy value is called the vacuum expectation value (VEV) also called its condensate.
Overview
In classical mechanics all particles can be thought of as having some energy made up of their potential energy and kinetic energy. Temperature, for example, arises from the intensity of random particle motion caused by kinetic energy (known as Brownian motion). As temperature is reduced to absolute zero, it might be thought that all motion ceases and particles come completely to rest. In fact, however, kinetic energy is retained by particles even at the lowest possible temperature. The random motion corresponding to this zero-point energy never vanishes; it is a consequence of the uncertainty principle of quantum mechanics.
The uncertainty principle states that no object can ever have precise values of position and velocity simultaneously. The total energy of a quantum mechanical object (potential and kinetic) is described by its Hamiltonian which also describes the system as a harmonic oscillator, or wave function, that fluctuates between various energy states (see wave-particle duality). All quantum mechanical systems undergo fluctuations even in their ground state, a consequence of their wave-like nature. The uncertainty principle requires every quantum mechanical system to have a fluctuating zero-point energy greater than the minimum of its classical potential well. This results in motion even at absolute zero. For example, liquid helium does not freeze under atmospheric pressure regardless of temperature due to its zero-point energy.
Given the equivalence of mass and energy expressed by Albert Einstein's E = mc², any point in space that contains energy can be thought of as having mass to create particles. Modern physics has developed quantum field theory (QFT) to understand the fundamental interactions between matter and forces; it treats every single point of space as a quantum harmonic oscillator. According to QFT the universe is made up of matter fields, whose quanta are fermions (i.e. leptons and quarks), and force fields, whose quanta are bosons (e.g. photons and gluons). All these fields have zero-point energy. Recent experiments support the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions of the zero-point field.
The idea that "empty" space can have an intrinsic energy associated with it, and that there is no such thing as a "true vacuum" is seemingly unintuitive. It is often argued that the entire universe is completely bathed in the zero-point radiation, and as such it can add only some constant amount to calculations. Physical measurements will therefore reveal only deviations from this value. For many practical calculations zero-point energy is dismissed by fiat in the mathematical model as a term that has no physical effect. Such treatment causes problems however, as in Einstein's theory of general relativity the absolute energy value of space is not an arbitrary constant and gives rise to the cosmological constant. For decades most physicists assumed that there was some undiscovered fundamental principle that will remove the infinite zero-point energy and make it completely vanish. If the vacuum has no intrinsic, absolute value of energy it will not gravitate. It was believed that as the universe expands from the aftermath of the Big Bang, the energy contained in any unit of empty space will decrease as the total energy spreads out to fill the volume of the universe; galaxies and all matter in the universe should begin to decelerate. This possibility was ruled out in 1998 by the discovery that the expansion of the universe is not slowing down but is in fact accelerating, meaning empty space does indeed have some intrinsic energy. The discovery of dark energy is best explained by zero-point energy, though it still remains a mystery as to why the value appears to be so small compared to the huge value obtained through theory – the cosmological constant problem.
Many physical effects attributed to zero-point energy have been experimentally verified, such as spontaneous emission, Casimir force, Lamb shift, magnetic moment of the electron and Delbrück scattering. These effects are usually called "radiative corrections". In more complex nonlinear theories (e.g. QCD) zero-point energy can give rise to a variety of complex phenomena such as multiple stable states, symmetry breaking, chaos and emergence. Active areas of research include the effects of virtual particles, quantum entanglement, the difference (if any) between inertial and gravitational mass, variation in the speed of light, a reason for the observed value of the cosmological constant and the nature of dark energy.
History
Early aether theories
Zero-point energy evolved from historical ideas about the vacuum. To Aristotle the vacuum was τὸ κενόν, "the empty"; i.e., space independent of body. He believed this concept violated basic physical principles and asserted that the elements of fire, air, earth, and water were not made of atoms, but were continuous. To the atomists the concept of emptiness had absolute character: it was the distinction between existence and nonexistence. Debate about the characteristics of the vacuum was largely confined to the realm of philosophy; it was not until much later, with the beginning of the Renaissance, that Otto von Guericke invented the first vacuum pump and the first testable scientific ideas began to emerge. It was thought that a totally empty volume of space could be created by simply removing all gases. This was the first generally accepted concept of the vacuum.
Late in the 19th century, however, it became apparent that the evacuated region still contained thermal radiation. The existence of the aether as a substitute for a true void was the most prevalent theory of the time. According to the successful electromagnetic aether theory based upon Maxwell's electrodynamics, this all-encompassing aether was endowed with energy and hence very different from nothingness. The fact that electromagnetic and gravitational phenomena were transmitted in empty space was considered evidence that their associated aethers were part of the fabric of space itself. However Maxwell noted that for the most part these aethers were ad hoc:
Moreover, the results of the Michelson–Morley experiment in 1887 were the first strong evidence that the then-prevalent aether theories were seriously flawed, and they initiated a line of research that eventually led to special relativity, which ruled out the idea of a stationary aether altogether. To scientists of the period, it seemed that a true vacuum in space might be created by cooling and thus eliminating all radiation or energy. From this idea evolved the second concept of achieving a real vacuum: cool a region of space down to absolute zero temperature after evacuation. Absolute zero was technically impossible to achieve in the 19th century, so the debate remained unresolved.
Second quantum theory
In 1900, Max Planck derived the average energy of a single energy radiator, e.g., a vibrating atomic unit, as a function of absolute temperature:
where h is the Planck constant, ν is the frequency, k is the Boltzmann constant, and T is the absolute temperature. The zero-point energy makes no contribution to Planck's original law, as its existence was unknown to Planck in 1900.
The concept of zero-point energy was developed by Max Planck in Germany in 1911 as a corrective term added to a zero-grounded formula developed in his original quantum theory in 1900.
In 1912, Max Planck published the first journal article to describe the discontinuous emission of radiation, based on the discrete quanta of energy. In Planck's "second quantum theory" resonators absorbed energy continuously, but emitted energy in discrete energy quanta only when they reached the boundaries of finite cells in phase space, where their energies became integer multiples of hν. This theory led Planck to his new radiation law, but in this version energy resonators possessed a zero-point energy, the smallest average energy a resonator could take on. Planck's radiation equation contained a residual energy factor, one hν/2, as an additional term dependent on the frequency ν, which was greater than zero (where h is the Planck constant). It is therefore widely agreed that "Planck's equation marked the birth of the concept of zero-point energy." In a series of papers from 1911 to 1913, Planck found the average energy of an oscillator to be:
Soon, the idea of zero-point energy attracted the attention of Albert Einstein and his assistant Otto Stern. In 1913 they published a paper that attempted to prove the existence of zero-point energy by calculating the specific heat of hydrogen gas and compared it with the experimental data. However, after assuming they had succeeded, they retracted support for the idea shortly after publication because they found Planck's second theory may not apply to their example. In a letter to Paul Ehrenfest of the same year Einstein declared zero-point energy "dead as a doornail". Zero-point energy was also invoked by Peter Debye, who noted that zero-point energy of the atoms of a crystal lattice would cause a reduction in the intensity of the diffracted radiation in X-ray diffraction even as the temperature approached absolute zero. In 1916 Walther Nernst proposed that empty space was filled with zero-point electromagnetic radiation. With the development of general relativity Einstein found the energy density of the vacuum to contribute towards a cosmological constant in order to obtain static solutions to his field equations; the idea that empty space, or the vacuum, could have some intrinsic energy associated with it had returned, with Einstein stating in 1920:
Kurt Bennewitz and Francis Simon (1923), who worked at Walther Nernst's laboratory in Berlin, studied the melting process of chemicals at low temperatures. Their calculations of the melting points of hydrogen, argon and mercury led them to conclude that the results provided evidence for a zero-point energy. Moreover, they suggested correctly, as was later verified by Simon (1934), that this quantity was responsible for the difficulty in solidifying helium even at absolute zero. In 1924 Robert Mulliken provided direct evidence for the zero-point energy of molecular vibrations by comparing the band spectrum of 10BO and 11BO: the isotopic difference in the transition frequencies between the ground vibrational states of two different electronic levels would vanish if there were no zero-point energy, in contrast to the observed spectra. Then just a year later in 1925, with the development of matrix mechanics in Werner Heisenberg's article "Quantum theoretical re-interpretation of kinematic and mechanical relations", the zero-point energy was derived from quantum mechanics.
In 1913 Niels Bohr had proposed what is now called the Bohr model of the atom, but despite this it remained a mystery as to why electrons do not fall into their nuclei. According to classical ideas, the fact that an accelerating charge loses energy by radiating implied that an electron should spiral into the nucleus and that atoms should not be stable. This problem of classical mechanics was nicely summarized by James Hopwood Jeans in 1915: "There would be a very real difficulty in supposing that the (force) law held down to the zero values of . For the forces between two charges at zero distance would be infinite; we should have charges of opposite sign continually rushing together and, when once together, no force would tend to shrink into nothing or to diminish indefinitely in size." The resolution to this puzzle came in 1926 when Erwin Schrödinger introduced the Schrödinger equation. This equation explained the new, non-classical fact that an electron confined to be close to a nucleus would necessarily have a large kinetic energy so that the minimum total energy (kinetic plus potential) actually occurs at some positive separation rather than at zero separation; in other words, zero-point energy is essential for atomic stability.
Quantum field theory and beyond
In 1926, Pascual Jordan published the first attempt to quantize the electromagnetic field. In a joint paper with Max Born and Werner Heisenberg he considered the field inside a cavity as a superposition of quantum harmonic oscillators. In his calculation he found that in addition to the "thermal energy" of the oscillators there also had to exist an infinite zero-point energy term. He was able to obtain the same fluctuation formula that Einstein had obtained in 1909. However, Jordan did not think that his infinite zero-point energy term was "real", writing to Einstein that "it is just a quantity of the calculation having no direct physical meaning". Jordan found a way to get rid of the infinite term, publishing a joint work with Pauli in 1928, performing what has been called "the first infinite subtraction, or renormalisation, in quantum field theory".
Building on the work of Heisenberg and others, Paul Dirac's theory of emission and absorption (1927) was the first application of the quantum theory of radiation. Dirac's work was seen as crucially important to the emerging field of quantum mechanics; it dealt directly with the process in which "particles" are actually created: spontaneous emission. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. The theory showed that spontaneous emission depends upon the zero-point energy fluctuations of the electromagnetic field in order to get started. In a process in which a photon is annihilated (absorbed), the photon can be thought of as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. In the words of Dirac:
Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. This view was popularized by Victor Weisskopf who in 1935 wrote:
This view was also later supported by Theodore Welton (1948), who argued that spontaneous emission "can be thought of as forced emission taking place under the action of the fluctuating field". This new theory, which Dirac coined quantum electrodynamics (QED), predicted a fluctuating zero-point or "vacuum" field existing even in the absence of sources.
Throughout the 1940s improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift, and measurement of the magnetic moment of the electron. Discrepancies between these experiments and Dirac's theory led to the idea of incorporating renormalisation into QED to deal with zero-point infinities. Renormalization was originally developed by Hans Kramers and also Victor Weisskopf (1936), and first successfully applied to calculate a finite value for the Lamb shift by Hans Bethe (1947). As per spontaneous emission, these effects can in part be understood with interactions with the zero-point field. But in light of renormalisation being able to remove some zero-point infinities from calculations, not all physicists were comfortable attributing zero-point energy any physical meaning, viewing it instead as a mathematical artifact that might one day be eliminated. In Wolfgang Pauli's 1945 Nobel lecture he made clear his opposition to the idea of zero-point energy stating "It is clear that this zero-point energy has no physical reality".
In 1948 Hendrik Casimir showed that one consequence of the zero-point field is an attractive force between two uncharged, perfectly conducting parallel plates, the so-called Casimir effect. At the time, Casimir was studying the properties of colloidal solutions. These are viscous materials, such as paint and mayonnaise, that contain micron-sized particles in a liquid matrix. The properties of such solutions are determined by Van der Waals forces – short-range, attractive forces that exist between neutral atoms and molecules. One of Casimir's colleagues, Theo Overbeek, realized that the theory that was used at the time to explain Van der Waals forces, which had been developed by Fritz London in 1930, did not properly explain the experimental measurements on colloids. Overbeek therefore asked Casimir to investigate the problem. Working with Dirk Polder, Casimir discovered that the interaction between two neutral molecules could be correctly described only if the fact that light travels at a finite speed was taken into account. Soon afterwards, after a conversation with Bohr about zero-point energy, Casimir noticed that this result could be interpreted in terms of vacuum fluctuations. He then asked himself what would happen if there were two mirrors – rather than two molecules – facing each other in a vacuum. It was this work that led to his prediction of an attractive force between reflecting plates. The work by Casimir and Polder opened up the way to a unified theory of van der Waals and Casimir forces and a smooth continuum between the two phenomena. This was done by Lifshitz (1956) in the case of plane parallel dielectric plates. The generic name for both van der Waals and Casimir forces is dispersion forces, because both of them are caused by dispersions of the operator of the dipole moment. The role of relativistic forces becomes dominant at separations on the order of a hundred nanometers.
In 1951 Herbert Callen and Theodore Welton proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. FDT has been shown to be true experimentally under certain quantum, non-classical, conditions.
In 1963 the Jaynes–Cummings model was developed describing the system of a two-level atom interacting with a quantized field mode (i.e. the vacuum) within an optical cavity. It gave nonintuitive predictions such as that an atom's spontaneous emission could be driven by field of effectively constant frequency (Rabi frequency). In the 1970s experiments were being performed to test aspects of quantum optics and showed that the rate of spontaneous emission of an atom could be controlled using reflecting surfaces. These results were at first regarded with suspicion in some quarters: it was argued that no modification of a spontaneous emission rate would be possible, after all, how can the emission of a photon be affected by an atom's environment when the atom can only "see" its environment by emitting a photon in the first place? These experiments gave rise to cavity quantum electrodynamics (CQED), the study of effects of mirrors and cavities on radiative corrections. Spontaneous emission can be suppressed (or "inhibited") or amplified. Amplification was first predicted by Purcell in 1946 (the Purcell effect) and has been experimentally verified. This phenomenon can be understood, partly, in terms of the action of the vacuum field on the atom.
Uncertainty principle
Zero-point energy is fundamentally related to the Heisenberg uncertainty principle. Roughly speaking, the uncertainty principle states that complementary variables (such as a particle's position and momentum, or a field's value and derivative at a point in space) cannot simultaneously be specified precisely by any given quantum state. In particular, there cannot exist a state in which the system simply sits motionless at the bottom of its potential well, for then its position and momentum would both be completely determined to arbitrarily great precision. Therefore, the lowest-energy state (the ground state) of the system must have a distribution in position and momentum that satisfies the uncertainty principle, which implies its energy must be greater than the minimum of the potential well.
Near the bottom of a potential well, the Hamiltonian of a general system (the quantum-mechanical operator giving its energy) can be approximated as a quantum harmonic oscillator,
where is the minimum of the classical potential well.
The uncertainty principle tells us that
making the expectation values of the kinetic and potential terms above satisfy
The expectation value of the energy must therefore be at least
where ω is the angular frequency at which the system oscillates.
A more thorough treatment, showing that the energy of the ground state actually saturates this bound and is exactly E0 = ħω/2, requires solving for the ground state of the system.
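A standard way to spell out the bound, assuming a state in which ⟨x⟩ = x0 and ⟨p⟩ = 0 (restated here as a sketch, not quoted from the source), is:

```latex
\langle E\rangle - V(x_0)
  = \frac{\sigma_p^{2}}{2m} + \frac{1}{2} m\omega^{2}\sigma_x^{2}
  \;\ge\; 2\sqrt{\frac{\sigma_p^{2}}{2m}\cdot\frac{1}{2} m\omega^{2}\sigma_x^{2}}
  = \omega\,\sigma_x\sigma_p
  \;\ge\; \frac{\hbar\omega}{2},
\qquad \text{using } \sigma_x\sigma_p \ge \frac{\hbar}{2}.
```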
Atomic physics
The idea of a quantum harmonic oscillator and its associated energy can apply to either an atom or a subatomic particle. In ordinary atomic physics, the zero-point energy is the energy associated with the ground state of the system. The professional physics literature tends to measure frequency, as denoted by ν above, using angular frequency, denoted by ω and defined by ω = 2πν. This leads to a convention of writing the Planck constant with a bar through its top (ħ) to denote the quantity h/2π. In these terms, an example of zero-point energy is the E = ħω/2 above, associated with the ground state of the quantum harmonic oscillator. In quantum mechanical terms, the zero-point energy is the expectation value of the Hamiltonian of the system in the ground state.
If more than one ground state exists, they are said to be degenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator which acts non-trivially on a ground state and commutes with the Hamiltonian of the system.
According to the third law of thermodynamics, a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature.
The wave function of the ground state of a particle in a one-dimensional well is a half-period sine wave which goes to zero at the two edges of the well. The energy of the particle is given by:
where h is the Planck constant, m is the mass of the particle, n is the energy state (n = 1 corresponds to the ground-state energy), and L is the width of the well.
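A small numerical sketch of this formula (the standard infinite-well result E_n = n²h²/8mL²), evaluated for an electron in an assumed 1 nm wide well:

```python
h = 6.626e-34     # Planck constant, J s
m_e = 9.109e-31   # electron mass, kg
L = 1.0e-9        # well width, m (assumed example value)

E1 = 1**2 * h**2 / (8 * m_e * L**2)        # ground state, n = 1
print(E1, "J =", E1 / 1.602e-19, "eV")     # roughly 6e-20 J, i.e. about 0.38 eV
```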
Quantum field theory
In quantum field theory (QFT), the fabric of "empty" space is visualized as consisting of fields, with the field at every point in space and time being a quantum harmonic oscillator, with neighboring oscillators interacting with each other. According to QFT the universe is made up of matter fields whose quanta are fermions (e.g. electrons and quarks), force fields whose quanta are bosons (i.e. photons and gluons) and a Higgs field whose quantum is the Higgs boson. The matter and force fields have zero-point energy. A related term is zero-point field (ZPF), which is the lowest energy state of a particular field. The vacuum can be viewed not as empty space, but as the combination of all zero-point fields.
In QFT the zero-point energy of the vacuum state is called the vacuum energy and the average expectation value of the Hamiltonian is called the vacuum expectation value (also called condensate or simply VEV). The QED vacuum is a part of the vacuum state which specifically deals with quantum electrodynamics (e.g. electromagnetic interactions between photons, electrons and the vacuum) and the QCD vacuum deals with quantum chromodynamics (e.g. color charge interactions between quarks, gluons and the vacuum). Recent experiments advocate the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions with the zero-point field.
Each point in space makes a contribution of ħω/2, resulting in a calculation of infinite zero-point energy in any finite volume; this is one reason renormalization is needed to make sense of quantum field theories. In cosmology, the vacuum energy is one possible explanation for the cosmological constant and the source of dark energy.
Scientists are not in agreement about how much energy is contained in the vacuum. Quantum mechanics requires the energy to be large, as Paul Dirac claimed it is, like a sea of energy. Other scientists specializing in general relativity require the energy to be small enough for the curvature of space to agree with observed astronomy. The Heisenberg uncertainty principle allows the energy to be as large as needed to promote quantum actions for a brief moment of time, even if the average energy is small enough to satisfy relativity and flat space. To cope with disagreements, the vacuum energy is described as a virtual energy potential of positive and negative energy.
In quantum perturbation theory, it is sometimes said that the contribution of one-loop and multi-loop Feynman diagrams to elementary particle propagators are the contribution of vacuum fluctuations, or the zero-point energy to the particle masses.
Quantum electrodynamic vacuum
The oldest and best known quantized force field is the electromagnetic field. Maxwell's equations have been superseded by quantum electrodynamics (QED). By considering the zero-point energy that arises from QED it is possible to gain a characteristic understanding of zero-point energy that arises not just through electromagnetic interactions but in all quantum field theories.
Redefining the zero of energy
In the quantum theory of the electromagnetic field, classical wave amplitudes and are replaced by operators and that satisfy:
The classical quantity appearing in the classical expression for the energy of a field mode is replaced in quantum theory by the photon number operator . The fact that:
implies that quantum theory does not allow states of the radiation field for which the photon number and a field amplitude can be precisely defined, i.e., we cannot have simultaneous eigenstates for and . The reconciliation of wave and particle attributes of the field is accomplished via the association of a probability amplitude with a classical mode pattern. The calculation of field modes is an entirely classical problem, while the quantum properties of the field are carried by the mode "amplitudes" and associated with these classical modes.
The zero-point energy of the field arises formally from the non-commutativity of and . This is true for any harmonic oscillator: the zero-point energy appears when we write the Hamiltonian:
It is often argued that the entire universe is completely bathed in the zero-point electromagnetic field, and as such it can add only some constant amount to expectation values. Physical measurements will therefore reveal only deviations from the vacuum state. Thus the zero-point energy can be dropped from the Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on Heisenberg equations of motion. Thus we can choose to declare by fiat that the ground state has zero energy and a field Hamiltonian, for example, can be replaced by:
without affecting any physical predictions of the theory. The new Hamiltonian is said to be normally ordered (or Wick ordered) and is denoted by a double-dot symbol. The normally ordered Hamiltonian is denoted , i.e.:
In other words, within the normal ordering symbol we can commute and . Since zero-point energy is intimately connected to the non-commutativity of and , the normal ordering procedure eliminates any contribution from the zero-point field. This is especially reasonable in the case of the field Hamiltonian, since the zero-point term merely adds a constant energy which can be eliminated by a simple redefinition for the zero of energy. Moreover, this constant energy in the Hamiltonian obviously commutes with and and so cannot have any effect on the quantum dynamics described by the Heisenberg equations of motion.
However, things are not quite that simple. The zero-point energy cannot be eliminated by dropping its energy from the Hamiltonian: When we do this and solve the Heisenberg equation for a field operator, we must include the vacuum field, which is the homogeneous part of the solution for the field operator. In fact we can show that the vacuum field is essential for the preservation of the commutators and the formal consistency of QED. When we calculate the field energy we obtain not only a contribution from particles and forces that may be present but also a contribution from the vacuum field itself i.e. the zero-point field energy. In other words, the zero-point energy reappears even though we may have deleted it from the Hamiltonian.
Electromagnetic field in free space
From Maxwell's equations, the electromagnetic energy of a "free" field i.e. one with no sources, is described by:
We introduce the "mode function" that satisfies the Helmholtz equation:
where and assume it is normalized such that:
We wish to "quantize" the electromagnetic energy of free space for a multimode field. The field intensity of free space should be independent of position such that should be independent of for each mode of the field. The mode function satisfying these conditions is:
where in order to have the transversality condition satisfied for the Coulomb gauge in which we are working.
To achieve the desired normalization we pretend that space is divided into cubes of volume V = L³ and impose on the field the periodic boundary condition:
or equivalently
where can assume any integer value. This allows us to consider the field in any one of the imaginary cubes and to define the mode function:
which satisfies the Helmholtz equation, transversality, and the "box normalization":
where is chosen to be a unit vector which specifies the polarization of the field mode. The condition means that there are two independent choices of , which we call and where and . Thus we define the mode functions:
in terms of which the vector potential becomes:
or:
where and , are photon annihilation and creation operators for the mode with wave vector and polarization . This gives the vector potential for a plane wave mode of the field. The condition for shows that there are infinitely many such modes. The linearity of Maxwell's equations allows us to write:
for the total vector potential in free space. Using the fact that:
we find the field Hamiltonian is:
This is the Hamiltonian for an infinite number of uncoupled harmonic oscillators. Thus different modes of the field are independent and satisfy the commutation relations:
Clearly the least eigenvalue for is:
This state describes the zero-point energy of the vacuum. It appears that this sum is divergent – in fact highly divergent, as putting in the density factor
shows. The summation becomes approximately the integral:
for high values of the frequency. It diverges in proportion to the fourth power of the frequency.
There are two separate questions to consider. First, is the divergence a real one such that the zero-point energy really is infinite? If we consider the volume is contained by perfectly conducting walls, very high frequencies can only be contained by taking more and more perfect conduction. No actual method of containing the high frequencies is possible. Such modes will not be stationary in our box and thus not countable in the stationary energy content. So from this physical point of view the above sum should only extend to those frequencies which are countable; a cut-off energy is thus eminently reasonable. However, on the scale of a "universe" questions of general relativity must be included. Suppose even the boxes could be reproduced, fit together and closed nicely by curving spacetime. Then exact conditions for running waves may be possible. However the very high frequency quanta will still not be contained. As per John Wheeler's "geons" these will leak out of the system. So again a cut-off is permissible, almost necessary. The question here becomes one of consistency since the very high energy quanta will act as a mass source and start curving the geometry.
This leads to the second question. Divergent or not, finite or infinite, is the zero-point energy of any physical significance? Ignoring the whole zero-point energy is often encouraged for all practical calculations. The reason for this is that energies are not typically defined by an arbitrary data point, but rather by changes in data points, so adding or subtracting a constant (even if infinite) should be allowed. However this is not the whole story; in reality energy is not so arbitrarily defined: in general relativity the seat of the curvature of spacetime is the energy content, and there the absolute amount of energy has real physical meaning. There is no such thing as an arbitrary additive constant with density of field energy. Energy density curves space, and an increase in energy density produces an increase of curvature. Furthermore, the zero-point energy density has other physical consequences, e.g. the Casimir effect, a contribution to the Lamb shift, and the anomalous magnetic moment of the electron; it is clear that it is not just a mathematical constant or artifact that can be cancelled out.
Necessity of the vacuum field in QED
The vacuum state of the "free" electromagnetic field (that with no sources) is defined as the ground state in which for all modes . The vacuum state, like all stationary states of the field, is an eigenstate of the Hamiltonian but not the electric and magnetic field operators. In the vacuum state, therefore, the electric and magnetic fields do not have definite values. We can imagine them to be fluctuating about their mean value of zero.
In a process in which a photon is annihilated (absorbed), we can think of the photon as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. An atom, for instance, can be considered to be "dressed" by emission and reabsorption of "virtual photons" from the vacuum. The vacuum state energy described by is infinite. We can make the replacement:
the zero-point energy density is:
or in other words the spectral energy density of the vacuum field:
The zero-point energy density in the frequency range from to is therefore:
This can be large even in relatively narrow "low frequency" regions of the spectrum. In the optical region from 400 to 700 nm, for instance, the above equation yields around 220 erg/cm3.
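This figure can be checked with a few lines of Python, integrating the vacuum spectral density quoted above, ρ0(ω) = ħω³/2π²c³, over the stated band; the numerical constants are standard values and the band limits are those given in the text.

```python
import numpy as np

hbar = 1.055e-34        # reduced Planck constant, J s
c = 2.998e8             # speed of light, m/s

w_low = 2 * np.pi * c / 700e-9     # angular frequency at 700 nm
w_high = 2 * np.pi * c / 400e-9    # angular frequency at 400 nm

# integral of hbar*w**3 / (2*pi**2*c**3) dw between the two limits
u = hbar * (w_high**4 - w_low**4) / (8 * np.pi**2 * c**3)
print(u, "J/m^3 =", u * 10.0, "erg/cm^3")   # about 22 J/m^3, i.e. roughly 220 erg/cm^3
```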
We showed in the above section that the zero-point energy can be eliminated from the Hamiltonian by the normal ordering prescription. However, this elimination does not mean that the vacuum field has been rendered unimportant or without physical consequences. To illustrate this point we consider a linear dipole oscillator in the vacuum. The Hamiltonian for the oscillator plus the field with which it interacts is:
This has the same form as the corresponding classical Hamiltonian and the Heisenberg equations of motion for the oscillator and the field are formally the same as their classical counterparts. For instance the Heisenberg equations for the coordinate and the canonical momentum of the oscillator are:
or:
since the rate of change of the vector potential in the frame of the moving charge is given by the convective derivative
For nonrelativistic motion we may neglect the magnetic force and replace the expression for by:
Above we have made the electric dipole approximation in which the spatial dependence of the field is neglected. The Heisenberg equation for is found similarly from the Hamiltonian to be:
in the electric dipole approximation.
In deriving these equations for , , and we have used the fact that equal-time particle and field operators commute. This follows from the assumption that particle and field operators commute at some time (say, ) when the matter-field interpretation is presumed to begin, together with the fact that a Heisenberg-picture operator evolves in time as , where is the time evolution operator satisfying
Alternatively, we can argue that these operators must commute if we are to obtain the correct equations of motion from the Hamiltonian, just as the corresponding Poisson brackets in classical theory must vanish in order to generate the correct Hamilton equations. The formal solution of the field equation is:
and therefore the equation for may be written:
where
and
It can be shown that in the radiation reaction field, if the mass is regarded as the "observed" mass then we can take
The total field acting on the dipole has two parts, and . is the free or zero-point field acting on the dipole. It is the homogeneous solution of the Maxwell equation for the field acting on the dipole, i.e., the solution, at the position of the dipole, of the wave equation
satisfied by the field in the (source free) vacuum. For this reason is often referred to as the "vacuum field", although it is of course a Heisenberg-picture operator acting on whatever state of the field happens to be appropriate at . is the source field, the field generated by the dipole and acting on the dipole.
Using the above equation for we obtain an equation for the Heisenberg-picture operator that is formally the same as the classical equation for a linear dipole oscillator:
where . In this instance we have considered a dipole in the vacuum, without any "external" field acting on it. The role of the external field in the above equation is played by the vacuum electric field acting on the dipole.
Classically, a dipole in the vacuum is not acted upon by any "external" field: if there are no sources other than the dipole itself, then the only field acting on the dipole is its own radiation reaction field. In quantum theory however there is always an "external" field, namely the source-free or vacuum field .
According to our earlier equation for the free field is the only field in existence at as the time at which the interaction between the dipole and the field is "switched on". The state vector of the dipole-field system at is therefore of the form
where is the vacuum state of the field and is the initial state of the dipole oscillator. The expectation value of the free field is therefore at all times equal to zero:
since . However, the energy density associated with the free field is infinite:
The important point of this is that the zero-point field energy does not affect the Heisenberg equation for since it is a c-number or constant (i.e. an ordinary number rather than an operator) and commutes with . We can therefore drop the zero-point field energy from the Hamiltonian, as is usually done. But the zero-point field re-emerges as the homogeneous solution for the field equation. A charged particle in the vacuum will therefore always see a zero-point field of infinite density. This is the origin of one of the infinities of quantum electrodynamics, and it cannot be eliminated by the trivial expedient of dropping the zero-point term from the field Hamiltonian.
The free field is in fact necessary for the formal consistency of the theory. In particular, it is necessary for the preservation of the commutation relations, which is required by the unitarity of time evolution in quantum theory:
We can calculate from the formal solution of the operator equation of motion
Using the fact that
and that equal-time particle and field operators commute, we obtain:
For the dipole oscillator under consideration it can be assumed that the radiative damping rate is small compared with the natural oscillation frequency, i.e., . Then the integrand above is sharply peaked at and:
the necessity of the vacuum field can also be appreciated by making the small damping approximation in
and
Without the free field in this equation the operator would be exponentially damped, and commutators like would approach zero for . With the vacuum field included, however, the commutator retains its canonical value at all times, as required by unitarity, and as we have just shown. A similar result is easily worked out for the case of a free particle instead of a dipole oscillator.
What we have here is an example of a "fluctuation-dissipation relation". Generally speaking, if a system is coupled to a bath that can take energy from the system in an effectively irreversible way, then the bath must also cause fluctuations. The fluctuations and the dissipation go hand in hand; we cannot have one without the other. In the current example the coupling of a dipole oscillator to the electromagnetic field has a dissipative component, in the form of radiation reaction, and a fluctuation component, in the form of the zero-point (vacuum) field; given the existence of radiation reaction, the vacuum field must also exist in order to preserve the canonical commutation rule and all it entails.
The spectral density of the vacuum field is fixed by the form of the radiation reaction field, or vice versa: because the radiation reaction field varies with the third derivative of the dipole coordinate, the spectral energy density of the vacuum field must be proportional to the third power of the frequency in order for the canonical commutation relation to hold. In the case of a dissipative force proportional to the velocity, by contrast, the fluctuation force must have a correspondingly different spectral dependence in order to maintain the canonical commutation relation. This relation between the form of the dissipation and the spectral density of the fluctuation is the essence of the fluctuation-dissipation theorem.
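For orientation, a form often quoted for this cubic frequency scaling (following treatments such as Milonni's; the precise normalisation given here is stated as an assumption rather than taken from this article's references) is the zero-point spectral energy density per unit volume,

\rho_0(\omega)\, d\omega \;=\; \frac{\hbar\,\omega^{3}}{2\pi^{2} c^{3}}\, d\omega ,

i.e. the energy ħω/2 per mode multiplied by the free-space mode density ω²/π²c³.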
The fact that the canonical commutation relation for a harmonic oscillator coupled to the vacuum field is preserved implies that the zero-point energy of the oscillator is preserved. It is easy to show that after a few damping times the zero-point motion of the oscillator is in fact sustained by the driving zero-point field.
Quantum chromodynamic vacuum
The QCD vacuum is the vacuum state of quantum chromodynamics (QCD). It is an example of a non-perturbative vacuum state, characterized by non-vanishing condensates such as the gluon condensate and the quark condensate in the complete theory which includes quarks. The presence of these condensates characterizes the confined phase of quark matter. In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than QED (quantum electrodynamics), since it involves nonlinear equations that characterize such interactions.
Higgs field
The Standard Model hypothesises a field called the Higgs field (symbol: ), which has the unusual property of a non-zero amplitude in its ground state (zero-point) energy after renormalization; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual "Mexican hat" shaped potential whose lowest "point" is not at its "centre". Below a certain extremely high energy level the existence of this non-zero vacuum expectation spontaneously breaks electroweak gauge symmetry which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. This effect occurs because scalar field components of the Higgs field are "absorbed" by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. The expectation value of in the ground state (the vacuum expectation value or VEV) is then , where . The measured value of this parameter is approximately 246 GeV. It has units of mass, and is the only free parameter of the Standard Model that is not a dimensionless number.
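As an illustrative sketch of the statements above (sign and normalisation conventions vary between texts; the form below is one common choice, not taken from this article's references), the "Mexican hat" potential and the resulting vacuum expectation value can be written as

V(\phi) \;=\; -\mu^{2}\,\phi^{\dagger}\phi \;+\; \lambda\,(\phi^{\dagger}\phi)^{2},
\qquad
\langle\phi\rangle \;=\; \frac{v}{\sqrt{2}}, \quad v=\frac{\mu}{\sqrt{\lambda}} \approx 246\ \mathrm{GeV},

whose minimum lies on a circle of nonzero field amplitude rather than at φ = 0, which is the unusual shape referred to above.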
The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged and thus the field has a nonzero vacuum expectation value. Interaction with the vacuum energy filling the space prevents certain forces from propagating over long distances (as it does in a superconducting medium; e.g., in the Ginzburg–Landau theory).
Experimental observations
Zero-point energy has many observed physical consequences. It is important to note that zero-point energy is not merely an artifact of mathematical formalism that can, for instance, be dropped from a Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on Heisenberg equations of motion without further consequence. Indeed, such treatment could create a problem in a deeper, as yet undiscovered, theory. For instance, in general relativity the zero of energy (i.e. the energy density of the vacuum) contributes to a cosmological constant of the type introduced by Einstein in order to obtain static solutions to his field equations. The zero-point energy density of the vacuum, due to all quantum fields, is extremely large, even when we cut off the largest allowable frequencies based on plausible physical arguments. It implies a cosmological constant larger than the limits imposed by observation by about 120 orders of magnitude. This "cosmological constant problem" remains one of the greatest unsolved mysteries of physics.
Casimir effect
A phenomenon that is commonly presented as evidence for the existence of zero-point energy in vacuum is the Casimir effect, proposed in 1948 by Dutch physicist Hendrik Casimir, who considered the quantized electromagnetic field between a pair of grounded, neutral metal plates. The vacuum energy contains contributions from all wavelengths, except those excluded by the spacing between plates. As the plates draw together, more wavelengths are excluded and the vacuum energy decreases. The decrease in energy means there must be a force doing work on the plates as they move.
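As a rough numerical illustration (a minimal sketch, using the standard ideal-plate result P = π²ħc/240a⁴ for perfectly conducting plates at zero temperature; real materials and finite temperature modify this), the attractive Casimir pressure can be evaluated for a few plate separations:

```python
import math

# Casimir pressure between two ideal, perfectly conducting parallel plates:
#   P = pi^2 * hbar * c / (240 * a^4)
HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
C = 299_792_458.0         # speed of light, m/s

def casimir_pressure(a: float) -> float:
    """Attractive pressure (Pa) for plate separation a (m)."""
    return math.pi**2 * HBAR * C / (240.0 * a**4)

for a in (1e-6, 100e-9, 10e-9):
    print(f"a = {a*1e9:7.1f} nm  ->  P ~ {casimir_pressure(a):.3e} Pa")
```

At a separation of one micrometre the pressure is of order a millipascal, rising steeply as the plates approach, which is consistent with the difficulty of the early measurements described below.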
Early experimental tests from the 1950s onwards gave positive results showing the force was real, but other external factors could not be ruled out as the primary cause, with the range of experimental error sometimes being nearly 100%. That changed in 1997 with Lamoreaux conclusively showing that the Casimir force was real. Results have been repeatedly replicated since then.
In 2009, Munday et al. published experimental proof that (as predicted in 1961) the Casimir force could also be repulsive as well as being attractive. Repulsive Casimir forces could allow quantum levitation of objects in a fluid and lead to a new class of switchable nanoscale devices with ultra-low static friction.
An interesting hypothetical side effect of the Casimir effect is the Scharnhorst effect, a hypothetical phenomenon in which light signals travel slightly faster than c between two closely spaced conducting plates.
Lamb shift
The quantum fluctuations of the electromagnetic field have important physical consequences. In addition to the Casimir effect, they also lead to a splitting between the two energy levels 2S1/2 and 2P1/2 (in term symbol notation) of the hydrogen atom which was not predicted by the Dirac equation, according to which these states should have the same energy. Charged particles can interact with the fluctuations of the quantized vacuum field, leading to slight shifts in energy; this effect is called the Lamb shift. The shift of about 4.4×10−6 eV is tiny compared with the roughly 10 eV difference between the energies of the 1s and 2s levels, and amounts to 1,058 MHz in frequency units. A small part of this shift (27 MHz ≈ 3%) arises not from fluctuations of the electromagnetic field, but from fluctuations of the electron–positron field. The creation of (virtual) electron–positron pairs has the effect of screening the Coulomb field and acts as a vacuum dielectric constant. This effect is much more important in muonic atoms.
Fine-structure constant
Taking ħ (the Planck constant divided by 2π), c (the speed of light), and e²/(4πε0) (the electromagnetic coupling constant, i.e. a measure of the strength of the electromagnetic force, where e is the absolute value of the electronic charge and ε0 is the vacuum permittivity), we can form a dimensionless quantity called the fine-structure constant:

\alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137.036}
The fine-structure constant is the coupling constant of quantum electrodynamics (QED) determining the strength of the interaction between electrons and photons. It turns out that the fine-structure constant is not really a constant at all owing to the zero-point energy fluctuations of the electron-positron field. The quantum fluctuations caused by zero-point energy have the effect of screening electric charges: owing to (virtual) electron-positron pair production, the charge of the particle measured far from the particle is far smaller than the charge measured when close to it.
The Heisenberg inequality, where Δx and Δp are the standard deviations of position and momentum, states that:

\Delta x \, \Delta p \geq \frac{\hbar}{2}
It means that a short distance implies large momentum and therefore high energy, i.e. particles of high energy must be used to explore short distances. QED concludes that the fine-structure constant is an increasing function of energy. It has been shown that at energies of the order of the Z0 boson rest energy, about 90 GeV:

\alpha(90\ \mathrm{GeV}) \approx \frac{1}{129}
rather than the low-energy value α ≈ 1/137. The renormalization procedure of eliminating zero-point energy infinities allows the choice of an arbitrary energy (or distance) scale for defining α. All in all, α depends on the energy scale characteristic of the process under study, and also on details of the renormalization procedure. The energy dependence of α has been observed for several years now in precision experiments in high-energy physics.
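A minimal sketch of the low-energy value, computed directly from CODATA-style constants (the ≈1/129 figure at 90 GeV is an experimental result and is only quoted in a comment, not computed here):

```python
import math

# Fine-structure constant: alpha = e^2 / (4*pi*eps0*hbar*c) ~ 1/137 at low energy.
# At ~90 GeV the measured effective value rises to roughly 1/129.
E = 1.602_176_634e-19      # elementary charge, C
EPS0 = 8.854_187_8128e-12  # vacuum permittivity, F/m
HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
C = 299_792_458.0          # speed of light, m/s

alpha = E**2 / (4 * math.pi * EPS0 * HBAR * C)
print(f"alpha   = {alpha:.9f}")
print(f"1/alpha = {1/alpha:.3f}")   # ~137.036
```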
Vacuum birefringence
In the presence of strong electrostatic fields it is predicted that virtual particles become separated from the vacuum state and form real matter. The fact that electromagnetic radiation can be transformed into matter and vice versa leads to fundamentally new features in quantum electrodynamics. One of the most important consequences is that, even in the vacuum, the Maxwell equations have to be replaced by more complicated formulas. In general, it will not be possible to separate processes in the vacuum from the processes involving matter since electromagnetic fields can create matter if the field fluctuations are strong enough. This leads to highly complex nonlinear interaction – gravity will have an effect on the light at the same time the light has an effect on gravity. These effects were first predicted by Werner Heisenberg and Hans Heinrich Euler in 1936 and independently the same year by Victor Weisskopf who stated: "The physical properties of the vacuum originate in the "zero-point energy" of matter, which also depends on absent particles through the external field strengths and therefore contributes an additional term to the purely Maxwellian field energy". Thus strong magnetic fields vary the energy contained in the vacuum. The scale above which the electromagnetic field is expected to become nonlinear is known as the Schwinger limit. At this point the vacuum has all the properties of a birefringent medium, thus in principle a rotation of the polarization frame (the Faraday effect) can be observed in empty space.
Both Einstein's theories of special and general relativity state that light should pass freely through a vacuum without being altered, a principle known as Lorentz invariance. Yet, in theory, large nonlinear self-interaction of light due to quantum fluctuations should lead to this principle being measurably violated if the interactions are strong enough. Nearly all theories of quantum gravity predict that Lorentz invariance is not an exact symmetry of nature. It is predicted that the speed at which light travels through the vacuum depends on its direction, polarization and the local strength of the magnetic field. There have been a number of inconclusive results which claim to show evidence of a Lorentz violation by finding a rotation of the polarization plane of light coming from distant galaxies. The first concrete evidence for vacuum birefringence was published in 2017 when a team of astronomers looked at the light coming from the star RX J1856.5-3754, the closest discovered neutron star to Earth.
Roberto Mignani at the National Institute for Astrophysics in Milan who led the team of astronomers has commented that "When Einstein came up with the theory of general relativity 100 years ago, he had no idea that it would be used for navigational systems. The consequences of this discovery probably will also have to be realised on a longer timescale." The team found that visible light from the star had undergone linear polarisation of around 16%. If the birefringence had been caused by light passing through interstellar gas or plasma, the effect should have been no more than 1%. Definitive proof would require repeating the observation at other wavelengths and on other neutron stars. At X-ray wavelengths the polarization from the quantum fluctuations should be near 100%. Although no telescope currently exists that can make such measurements, there are several proposed X-ray telescopes that may soon be able to verify the result conclusively such as China's Hard X-ray Modulation Telescope (HXMT) and NASA's Imaging X-ray Polarimetry Explorer (IXPE).
Speculated involvement in other phenomena
Dark energy
In the late 1990s it was discovered that very distant supernovae were dimmer than expected suggesting that the universe's expansion was accelerating rather than slowing down. This revived discussion that Einstein's cosmological constant, long disregarded by physicists as being equal to zero, was in fact some small positive value. This would indicate empty space exerted some form of negative pressure or energy.
There is no natural candidate for what might cause what has been called dark energy; the current best guess is that it is the zero-point energy of the vacuum, although this guess is known to be off by 120 orders of magnitude.
The European Space Agency's Euclid telescope, launched on 1 July 2023, will map galaxies up to 10 billion light years away. By seeing how dark energy influences their arrangement and shape, the mission will allow scientists to see if the strength of dark energy has changed. If dark energy is found to vary throughout time it would indicate it is due to quintessence, where observed acceleration is due to the energy of a scalar field, rather than the cosmological constant. No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses again due to zero-point energy.
Cosmic inflation
Cosmic inflation is a phase of accelerated cosmic expansion just after the Big Bang. It explains the origin of the large-scale structure of the cosmos. It is believed that quantum vacuum fluctuations caused by zero-point energy arose in the microscopic inflationary period and were later magnified to cosmic size, becoming the gravitational seeds for galaxies and structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the Universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the Universe is flat, and why no magnetic monopoles have been observed.
The mechanism for inflation is unclear; it is similar in effect to dark energy but is a far more energetic and short-lived process. As with dark energy the best explanation is some form of vacuum energy arising from quantum fluctuations. It may be that inflation caused baryogenesis, the hypothetical physical processes that produced an asymmetry (imbalance) between baryons and antibaryons produced in the very early universe, but this is far from certain.
Cosmology
Paul S. Wesson examined the cosmological implications of assuming that zero-point energy is real. Among numerous difficulties, general relativity requires that such energy not gravitate, so it cannot be similar to electromagnetic radiation.
Alternative theories
There has been a long debate over the question of whether zero-point fluctuations of quantized vacuum fields are "real", i.e. do they have physical effects that cannot be interpreted by an equally valid alternative theory? Schwinger, in particular, attempted to formulate QED without reference to zero-point fluctuations via his "source theory". From such an approach it is possible to derive the Casimir effect without reference to a fluctuating field. Such a derivation was first given by Schwinger (1975) for a scalar field, and then generalized to the electromagnetic case by Schwinger, DeRaad, and Milton (1978), in which they state "the vacuum is regarded as truly a state with all physical properties equal to zero". Jaffe (2005) has highlighted a similar approach in deriving the Casimir effect stating "the concept of zero-point fluctuations is a heuristic and calculational aid in the description of the Casimir effect, but not a necessity in QED."
Milonni has shown the necessity of the vacuum field for the formal consistency of QED. Modern physics does not know any better way to construct gauge-invariant, renormalizable theories than with zero-point energy and they would seem to be a necessity for any attempt at a unified theory.
Nevertheless, as pointed out by Jaffe, "no known phenomenon, including the Casimir effect, demonstrates that zero point energies are 'real'".
Chaotic and emergent phenomena
The mathematical models used in classical electromagnetism, quantum electrodynamics (QED) and the Standard Model all view the electromagnetic vacuum as a linear system with no overall observable consequence. For example, phenomena such as the Casimir effect and the Lamb shift can be explained by mechanisms other than the action of the vacuum, for instance by arbitrary changes to the normal ordering of field operators (see the alternative theories section). This is a consequence of viewing electromagnetism as a U(1) gauge theory, which topologically does not allow the complex interaction of a field with and on itself. In higher symmetry groups and in reality, the vacuum is not a calm, randomly fluctuating, largely immaterial and passive substance, but at times can be viewed as a turbulent virtual plasma that can have complex vortices (i.e. solitons vis-à-vis particles), entangled states and a rich nonlinear structure. There are many observed nonlinear physical electromagnetic phenomena such as Aharonov–Bohm (AB) and Altshuler–Aronov–Spivak (AAS) effects, Berry, Aharonov–Anandan, Pancharatnam and Chiao–Wu phase rotation effects, the Josephson effect, the quantum Hall effect, the De Haas–Van Alphen effect, the Sagnac effect and many other physically observable phenomena which would indicate that the electromagnetic potential field has real physical meaning rather than being a mathematical artifact; an all-encompassing theory would therefore not confine electromagnetism to a local force, as is currently done, but would treat it as an SU(2) gauge theory or higher geometry. Higher symmetries allow for nonlinear, aperiodic behaviour which manifests as a variety of complex non-equilibrium phenomena that do not arise in the linearised U(1) theory, such as multiple stable states, symmetry breaking, chaos and emergence.
What are called Maxwell's equations today, are in fact a simplified version of the original equations reformulated by Heaviside, FitzGerald, Lodge and Hertz. The original equations used Hamilton's more expressive quaternion notation, a kind of Clifford algebra, which fully subsumes the standard Maxwell vectorial equations largely used today. In the late 1880s there was a debate over the relative merits of vector analysis and quaternions. According to Heaviside the electromagnetic potential field was purely metaphysical, an arbitrary mathematical fiction, that needed to be "murdered". It was concluded that there was no need for the greater physical insights provided by the quaternions if the theory was purely local in nature. Local vector analysis has become the dominant way of using Maxwell's equations ever since. However, this strictly vectorial approach has led to a restrictive topological understanding in some areas of electromagnetism, for example, a full understanding of the energy transfer dynamics in Tesla's oscillator-shuttle-circuit can only be achieved in quaternionic algebra or higher SU(2) symmetries. It has often been argued that quaternions are not compatible with special relativity, but multiple papers have shown ways of incorporating relativity.
A good example of nonlinear electromagnetics is in high-energy dense plasmas, where vortical phenomena occur which seemingly violate the second law of thermodynamics by increasing the energy gradient within the electromagnetic field and violate Maxwell's laws by creating ion currents which capture and concentrate their own and surrounding magnetic fields. In particular, the Lorentz force law, which elaborates Maxwell's equations, is violated by these force-free vortices. These apparent violations are due to the fact that the traditional conservation laws in classical and quantum electrodynamics (QED) only display linear U(1) symmetry (in particular, by the extended Noether theorem, conservation laws such as the laws of thermodynamics need not always apply to dissipative systems, which are expressed in gauges of higher symmetry). The second law of thermodynamics states that in a closed linear system entropy flow can only be positive (or exactly zero at the end of a cycle). However, negative entropy (i.e. increased order, structure or self-organisation) can spontaneously appear in an open nonlinear thermodynamic system that is far from equilibrium, so long as this emergent order accelerates the overall flow of entropy in the total system. The 1977 Nobel Prize in Chemistry was awarded to thermodynamicist Ilya Prigogine for his theory of dissipative systems that described this notion. Prigogine described the principle as "order through fluctuations" or "order out of chaos". It has been argued by some that all emergent order in the universe, from galaxies, solar systems, planets, weather, complex chemistry and evolutionary biology to consciousness, technology and civilizations, consists of examples of thermodynamic dissipative systems; nature having naturally selected these structures to accelerate entropy flow within the universe to an ever-increasing degree. For example, it has been estimated that the human body is 10,000 times more effective at dissipating energy per unit of mass than the sun.
One may query what this has to do with zero-point energy. Given the complex and adaptive behaviour that arises from nonlinear systems, considerable attention in recent years has gone into studying a new class of phase transitions which occur at absolute zero temperature. These are quantum phase transitions which are driven by EM field fluctuations as a consequence of zero-point energy. A good example of a spontaneous phase transition that is attributed to zero-point fluctuations can be found in superconductors. Superconductivity is one of the best known empirically quantified macroscopic electromagnetic phenomena whose basis is recognised to be quantum mechanical in origin. The behaviour of the electric and magnetic fields under superconductivity is governed by the London equations. However, it has been questioned in a series of journal articles whether the quantum mechanically canonised London equations can be given a purely classical derivation. Bostick, for instance, has claimed to show that the London equations do indeed have a classical origin that applies to superconductors and to some collisionless plasmas as well. In particular it has been asserted that the Beltrami vortices in the plasma focus display the same paired flux-tube morphology as Type II superconductors. Others have also pointed out this connection; Fröhlich has shown that the hydrodynamic equations of compressible fluids, together with the London equations, lead to a macroscopic parameter (the ratio of electric charge density to mass density), without involving either quantum phase factors or the Planck constant. In essence, it has been asserted that Beltrami plasma vortex structures are able to at least simulate the morphology of Type I and Type II superconductors. This occurs because the "organised" dissipative energy of the vortex configuration comprising the ions and electrons far exceeds the "disorganised" dissipative random thermal energy. The transition from disorganised fluctuations to organised helical structures is a phase transition involving a change in the condensate's energy (i.e. the ground state or zero-point energy) but without any associated rise in temperature. This is an example of zero-point energy having multiple stable states (see Quantum phase transition, Quantum critical point, Topological degeneracy, Topological order) and where the overall system structure is independent of a reductionist or deterministic view, that "classical" macroscopic order can also causally affect quantum phenomena. Furthermore, the pair production of Beltrami vortices has been compared to the morphology of pair production of virtual particles in the vacuum.
The idea that the vacuum energy can have multiple stable energy states is a leading hypothesis for the cause of cosmic inflation. In fact, it has been argued that these early vacuum fluctuations led to the expansion of the universe and in turn have guaranteed the non-equilibrium conditions necessary to drive order from chaos, as without such expansion the universe would have reached thermal equilibrium and no complexity could have existed. With the continued accelerated expansion of the universe, the cosmos generates an energy gradient that increases the "free energy" (i.e. the available, usable or potential energy for useful work) which the universe is able to use to create ever more complex forms of order. The only reason Earth's environment does not decay into an equilibrium state is that it receives a daily dose of sunshine and that, in turn, is due to the sun "polluting" interstellar space with entropy. The sun's fusion power is only possible due to the gravitational disequilibrium of matter that arose from cosmic expansion. In this essence, the vacuum energy can be viewed as the key cause of the structure throughout the universe. That humanity might alter the morphology of the vacuum energy to create an energy gradient for useful work is the subject of much controversy.
Purported applications
Physicists overwhelmingly reject any possibility that the zero-point energy field can be exploited to obtain useful energy (work) or uncompensated momentum; such efforts are seen as tantamount to perpetual motion machines.
Nevertheless, the allure of free energy has motivated such research, usually falling in the category of fringe science. As long ago as 1889 (before quantum theory or discovery of the zero point energy) Nikola Tesla proposed that useful energy could be obtained from free space, or what was assumed at that time to be an all-pervasive aether. Others have since claimed to exploit zero-point or vacuum energy with a large amount of pseudoscientific literature causing ridicule around the subject. Despite rejection by the scientific community, harnessing zero-point energy remains an interest of research, particularly in the US where it has attracted the attention of major aerospace/defence contractors and the U.S. Department of Defense as well as in China, Germany, Russia and Brazil.
Casimir batteries and engines
A common assumption is that the Casimir force is of little practical use; the argument is made that the only way to actually gain energy from the two plates is to allow them to come together (getting them apart again would then require more energy), and therefore it is a one-use-only tiny force in nature. In 1984 Robert Forward published work showing how a "vacuum-fluctuation battery" could be constructed; the battery can be recharged by making the electrical forces slightly stronger than the Casimir force to reexpand the plates.
In 1999, Pinto, a former scientist at NASA's Jet Propulsion Laboratory at Caltech in Pasadena, published in Physical Review his thought experiment (Gedankenexperiment) for a "Casimir engine". The paper showed that continuous positive net exchange of energy from the Casimir effect was possible, even stating in the abstract "In the event of no other alternative explanations, one should conclude that major technological advances in the area of endless, by-product free-energy production could be achieved."
Garret Moddel at the University of Colorado has highlighted that he believes such devices hinge on the assumption that the Casimir force is a nonconservative force. He argues that there is sufficient evidence (e.g. the analysis by Scandurra (2001)) to say that the Casimir force is a conservative force, and therefore, even though such an engine can exploit the Casimir force for useful work, it cannot produce more output energy than has been input into the system.
In 2008, DARPA solicited research proposals in the area of Casimir Effect Enhancement (CEE). The goal of the program is to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir force.
A 2008 patent by Haisch and Moddel details a device that is able to extract power from zero-point fluctuations using a gas that circulates through a Casimir cavity. A published test of this concept by Moddel was performed in 2012 and seemed to give excess energy that could not be attributed to another source. However it has not been conclusively shown to be from zero-point energy and the theory requires further investigation.
Single heat baths
In 1951 Callen and Welton proved the quantum fluctuation-dissipation theorem (FDT) which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force and as such energy could, in part, be extracted from the vacuum for potentially useful work. Such a theory has met with resistance: Macdonald (1962) and Harris (1971) claimed that extracting power from the zero-point energy is impossible, so FDT could not be true. Grau and Kleen (1982) and Kleen (1986) argued that the Johnson noise of a resistor connected to an antenna must satisfy Planck's thermal radiation formula, thus the noise must be zero at zero temperature and FDT must be invalid. Kiss (1988) pointed out that the existence of the zero-point term may indicate that there is a renormalization problem—i.e., a mathematical artifact—producing an unphysical term that is not actually present in measurements (in analogy with renormalization problems of ground states in quantum electrodynamics). Later, Abbott et al. (1996) arrived at a different but unclear conclusion that "zero-point energy is infinite thus it should be renormalized but not the 'zero-point fluctuations'". Despite such criticism, FDT has been shown to be true experimentally under certain quantum, non-classical conditions. Zero-point fluctuations can, and do, contribute towards systems which dissipate energy. A paper by Armen Allahverdyan and Theo Nieuwenhuizen in 2000 showed the feasibility of extracting zero-point energy for useful work from a single bath, without contradicting the laws of thermodynamics, by exploiting certain quantum mechanical properties.
There have been a growing number of papers showing that in some instances the classical laws of thermodynamics, such as limits on the Carnot efficiency, can be violated by exploiting negative entropy of quantum fluctuations.
Despite efforts to reconcile quantum mechanics and thermodynamics over the years, their compatibility is still an open fundamental problem. The full extent to which quantum properties can alter classical thermodynamic bounds is unknown.
Space travel and gravitational shielding
The use of zero-point energy for space travel is speculative and does not form part of the mainstream scientific consensus. A complete quantum theory of gravitation (that would deal with the role of quantum phenomena like zero-point energy) does not yet exist. Speculative papers explaining a relationship between zero-point energy and gravitational shielding effects have been proposed, but the interaction (if any) is not yet fully understood. According to the general theory of relativity, rotating matter can generate a new force of nature, known as the gravitomagnetic interaction, whose intensity is proportional to the rate of spin. In certain conditions the gravitomagnetic field can be repulsive. In neutron stars for example, it can produce a gravitational analogue of the Meissner effect, but the force produced in such an example is theorized to be exceedingly weak.
In 1963 Robert Forward, a physicist and aerospace engineer at Hughes Research Laboratories, published a paper showing how within the framework of general relativity "anti-gravitational" effects might be achieved. Since all atoms have spin, gravitational permeability may be able to differ from material to material. A strong toroidal gravitational field that acts against the force of gravity could be generated by materials that have nonlinear properties that enhance time-varying gravitational fields. Such an effect would be analogous to the nonlinear electromagnetic permeability of iron, making it an effective core (i.e. the doughnut of iron) in a transformer, whose properties are dependent on magnetic permeability. In 1966 Dewitt was the first to identify the significance of gravitational effects in superconductors. Dewitt demonstrated that a magnetic-type gravitational field must result in the presence of fluxoid quantization. In 1983, Dewitt's work was substantially expanded by Ross.
From 1971 to 1974 Henry William Wallace, a scientist at GE Aerospace was issued with three patents. Wallace used Dewitt's theory to develop an experimental apparatus for generating and detecting a secondary gravitational field, which he named the kinemassic field (now better known as the gravitomagnetic field). In his three patents, Wallace describes three different methods used for detection of the gravitomagnetic field – change in the motion of a body on a pivot, detection of a transverse voltage in a semiconductor crystal, and a change in the specific heat of a crystal material having spin-aligned nuclei. There are no publicly available independent tests verifying Wallace's devices. Such an effect if any would be small. Referring to Wallace's patents, a New Scientist article in 1980 stated "Although the Wallace patents were initially ignored as cranky, observers believe that his invention is now under serious but secret investigation by the military authorities in the USA. The military may now regret that the patents have already been granted and so are available for anyone to read." A further reference to Wallace's patents occur in an electric propulsion study prepared for the Astronautics Laboratory at Edwards Air Force Base which states: "The patents are written in a very believable style which include part numbers, sources for some components, and diagrams of data. Attempts were made to contact Wallace using patent addresses and other sources but he was not located nor is there a trace of what became of his work. The concept can be somewhat justified on general relativistic grounds since rotating frames of time varying fields are expected to emit gravitational waves."
In 1986 the U.S. Air Force's then Rocket Propulsion Laboratory (RPL) at Edwards Air Force Base solicited "Non Conventional Propulsion Concepts" under a small business research and innovation program. One of the six areas of interest was "Esoteric energy sources for propulsion, including the quantum dynamic energy of vacuum space..." In the same year BAE Systems launched "Project Greenglow" to provide a "focus for research into novel propulsion systems and the means to power them".
In 1988 Kip Thorne et al. published work showing how traversable wormholes can exist in spacetime only if they are threaded by quantum fields generated by some form of exotic matter that has negative energy. In 1993 Scharnhorst and Barton showed that the speed of a photon will be increased if it travels between two Casimir plates, an example of negative energy. In the most general sense, the exotic matter needed to create wormholes would share the repulsive properties of the inflationary energy, dark energy or zero-point radiation of the vacuum. Building on the work of Thorne, in 1994 Miguel Alcubierre proposed a method for changing the geometry of space by creating a wave that would cause the fabric of space ahead of a spacecraft to contract and the space behind it to expand (see Alcubierre drive). The ship would then ride this wave inside a region of flat space, known as a warp bubble and would not move within this bubble but instead be carried along as the region itself moves due to the actions of the drive.
In 1992 Evgeny Podkletnov published a heavily debated journal article claiming a specific type of rotating superconductor could shield gravitational force. Independently of this, from 1991 to 1993 Ning Li and Douglas Torr published a number of articles about gravitational effects in superconductors. One finding they derived is that the source of gravitomagnetic flux in a type II superconductor material is the spin alignment of the lattice ions. Quoting from their third paper: "It is shown that the coherent alignment of lattice ion spins will generate a detectable gravitomagnetic field, and in the presence of a time-dependent applied magnetic vector potential field, a detectable gravitoelectric field." The claimed size of the generated force has been disputed by some but defended by others. In 1997 Li published a paper attempting to replicate Podkletnov's results and showed the effect was very small, if it existed at all. Li is reported to have left the University of Alabama in 1999 to found the company AC Gravity LLC. AC Gravity was awarded a U.S. Department of Defense grant for $448,970 in 2001 to continue anti-gravity research. The grant period ended in 2002 but no results from this research were made public.
In 2002 Phantom Works, Boeing's advanced research and development facility in Seattle, approached Evgeny Podkletnov directly. Phantom Works was blocked by Russian technology transfer controls. At this time Lieutenant General George Muellner, the outgoing head of the Boeing Phantom Works, confirmed that attempts by Boeing to work with Podkletnov had been blocked by the Russian government, also commenting that "The physical principles – and Podkletnov's device is not the only one – appear to be valid... There is basic science there. They're not breaking the laws of physics. The issue is whether the science can be engineered into something workable".
Froning and Roach (2002) put forward a paper that builds on the work of Puthoff, Haisch and Alcubierre. They used fluid dynamic simulations to model the interaction of a vehicle (like that proposed by Alcubierre) with the zero-point field. Vacuum field perturbations are simulated by fluid field perturbations and the aerodynamic resistance of viscous drag exerted on the interior of the vehicle is compared to the Lorentz force exerted by the zero-point field (a Casimir-like force is exerted on the exterior by unbalanced zero-point radiation pressures). They find that the negative energy required for an Alcubierre drive is reduced when the vehicle is saucer-shaped with toroidal electromagnetic fields. The EM fields distort the vacuum field perturbations surrounding the craft sufficiently to affect the permeability and permittivity of space.
In 2009, Giorgio Fontana and Bernd Binder presented a new method to potentially extract the zero-point energy of the electromagnetic field and nuclear forces in the form of gravitational waves. In the spheron model of the nucleus, proposed by the two-time Nobel laureate Linus Pauling, dineutrons are among the components of this structure. Similarly to a dumbbell put in a suitable rotational state, but with nuclear mass density, dineutrons are nearly ideal sources of gravitational waves at X-ray and gamma-ray frequencies. The dynamical interplay, mediated by nuclear forces, between the electrically neutral dineutrons and the electrically charged core nucleus is the fundamental mechanism by which nuclear vibrations can be converted to a rotational state of dineutrons with emission of gravitational waves. Gravity and gravitational waves are well described by general relativity, which is not a quantum theory; this implies that there is no zero-point energy for gravity in this theory, and therefore dineutrons will emit gravitational waves like any other known source of gravitational waves. In Fontana and Binder's paper, nuclear species with dynamical instabilities, related to the zero-point energy of the electromagnetic field and nuclear forces, and possessing dineutrons, will emit gravitational waves. In experimental physics this approach is still unexplored.
In 2014 NASA's Eagleworks Laboratories announced that they had successfully validated the use of a Quantum Vacuum Plasma Thruster which makes use of the Casimir effect for propulsion. In 2016 a scientific paper by the team of NASA scientists passed peer review for the first time. The paper suggests that the zero-point field acts as pilot-wave and that the thrust may be due to particles pushing off the quantum vacuum. While peer review doesn't guarantee that a finding or observation is valid, it does indicate that independent scientists looked over the experimental setup, results, and interpretation and that they could not find any obvious errors in the methodology and that they found the results reasonable. In the paper, the authors identify and discuss nine potential sources of experimental errors, including rogue air currents, leaky electromagnetic radiation, and magnetic interactions. Not all of them could be completely ruled out, and further peer-reviewed experimentation is needed in order to rule these potential errors out.
Zero-point energy in fiction
The concept of Zero-point energy used as an energy source has been an element used in science fiction and related media.
See also
Casimir effect
Ground state
Lamb shift
QED vacuum
QCD vacuum
Quantum fluctuation
Quantum foam
Scalar field
Time crystal
Topological order
Unruh effect
Vacuum energy
Vacuum expectation value
Vacuum state
Virtual particle
References
Notes
Articles in the press
Via Calphysics Institute.
Bibliography
Further reading
Press articles
Journal articles
Books
External links
Nima Arkani-Hamed on the issue of vacuum energy and dark energy.
Steven Weinberg on the cosmological constant problem.
Energy (physics)
Quantum field theory
Quantum electrodynamics
Concepts in physics
Mathematical physics
Condensed matter physics
Materials science
Quantum phases
Non-equilibrium thermodynamics
Perpetual motion
Physical paradoxes
Thermodynamics
Electrostatics | Electrostatics is a branch of physics that studies slow-moving or stationary electric charges.
Since classical times, it has been known that some materials, such as amber, attract lightweight particles after rubbing. The Greek word for amber, , was thus the source of the word electricity. Electrostatic phenomena arise from the forces that electric charges exert on each other. Such forces are described by Coulomb's law.
There are many examples of electrostatic phenomena, from those as simple as the attraction of plastic wrap to one's hand after it is removed from a package, to the apparently spontaneous explosion of grain silos, the damage of electronic components during manufacturing, and photocopier and laser printer operation.
The electrostatic model accurately predicts electrical phenomena in "classical" cases where the velocities are low and the system is macroscopic so no quantum effects are involved. It also plays a role in quantum mechanics, where additional terms also need to be included.
Coulomb's law
Coulomb's law states that:
The force is along the straight line joining them. If the two charges have the same sign, the electrostatic force between them is repulsive; if they have different signs, the force between them is attractive.
If r is the distance (in meters) between two charges, then the force between two point charges q and Q is:

F = \frac{1}{4\pi\varepsilon_0}\,\frac{qQ}{r^{2}}

where ε0 ≈ 8.854×10−12 F⋅m−1 is the vacuum permittivity.
The SI unit of ε0 is equivalently A2⋅s4 ⋅kg−1⋅m−3 or C2⋅N−1⋅m−2 or F⋅m−1.
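As a minimal worked example of Coulomb's law (illustrative charge and distance values only):

```python
import math

EPS0 = 8.854_187_8128e-12          # vacuum permittivity, F/m
K = 1.0 / (4.0 * math.pi * EPS0)   # Coulomb constant, ~8.988e9 N*m^2/C^2

def coulomb_force(q1: float, q2: float, r: float) -> float:
    """Force magnitude (N) between point charges q1, q2 (C) separated by r (m).
    A positive result means repulsion (like charges), negative means attraction."""
    return K * q1 * q2 / r**2

# Example: two +1 uC charges separated by 10 cm.
print(coulomb_force(1e-6, 1e-6, 0.10))   # ~0.899 N, repulsive
```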
Electric field
The electric field, E, in units of newtons per coulomb or volts per meter, is a vector field that can be defined everywhere, except at the location of point charges (where it diverges to infinity). It is defined as the electrostatic force F on a hypothetical small test charge q at the point due to Coulomb's law, divided by the charge: E = F / q.
Electric field lines are useful for visualizing the electric field. Field lines begin on positive charge and terminate on negative charge. They are parallel to the direction of the electric field at each point, and the density of these field lines is a measure of the magnitude of the electric field at any given point.
A collection of particles of charge q_i, located at points r_i (called source points), generates the electric field at r (called the field point) of:

\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\sum_i q_i\,\frac{\hat{\boldsymbol{\mathcal{R}}}_i}{\|\boldsymbol{\mathcal{R}}_i\|^{2}}

where \boldsymbol{\mathcal{R}}_i = \mathbf{r} - \mathbf{r}_i is the displacement vector from a source point r_i to the field point r, and \hat{\boldsymbol{\mathcal{R}}}_i is a unit vector that indicates the direction of the field. For a single point charge q at the origin, the magnitude of this electric field is E = q/(4\pi\varepsilon_0 r^{2})
and points away from that charge if it is positive. The fact that the force (and hence the field) can be calculated by summing over all the contributions due to individual source particles is an example of the superposition principle. The electric field produced by a distribution of charges is given by the volume charge density ρ(r) and can be obtained by converting this sum into a triple integral:

\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\iiint \rho(\mathbf{r}')\,\frac{\mathbf{r}-\mathbf{r}'}{\|\mathbf{r}-\mathbf{r}'\|^{3}}\, d^{3}r'
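A short sketch of the superposition sum for a set of point charges (illustrative charges and positions; a continuous distribution would replace the loop with a numerical quadrature of the integral above):

```python
import numpy as np

EPS0 = 8.854_187_8128e-12
K = 1.0 / (4.0 * np.pi * EPS0)

def e_field(field_point, charges, positions):
    """Electric field (V/m) at field_point from point charges (C) at positions (m),
    by direct superposition of Coulomb fields."""
    r = np.asarray(field_point, dtype=float)
    E = np.zeros(3)
    for q, r_src in zip(charges, np.asarray(positions, dtype=float)):
        d = r - r_src                          # displacement from source to field point
        E += K * q * d / np.linalg.norm(d)**3
    return E

# Example: a +1 nC / -1 nC pair along x, field sampled 10 cm above the midpoint.
print(e_field([0.0, 0.0, 0.1],
              charges=[1e-9, -1e-9],
              positions=[[0.005, 0, 0], [-0.005, 0, 0]]))
```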
Gauss's law
Gauss's law states that "the total electric flux through any closed surface in free space of any shape drawn in an electric field is proportional to the total electric charge enclosed by the surface." Many numerical problems can be solved by considering a Gaussian surface around a body. Mathematically, Gauss's law takes the form of an integral equation:

\oint_S \mathbf{E}\cdot d\mathbf{A} = \frac{Q_{\text{enclosed}}}{\varepsilon_0} = \frac{1}{\varepsilon_0}\int_V \rho\, dV

where dV is a volume element. If the charge is distributed over a surface or along a line, replace ρ dV by σ dA or λ dℓ. The divergence theorem allows Gauss's law to be written in differential form:

\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}

where ∇· is the divergence operator.
Poisson and Laplace equations
The definition of electrostatic potential, combined with the differential form of Gauss's law (above), provides a relationship between the potential Φ and the charge density ρ:

\nabla^{2}\Phi = -\frac{\rho}{\varepsilon_0}

This relationship is a form of Poisson's equation. In the absence of unpaired electric charge, the equation becomes Laplace's equation:

\nabla^{2}\Phi = 0
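A minimal sketch of solving Laplace's equation numerically by Jacobi relaxation on a square box (the boundary values are arbitrary illustrative choices; adding a source term h²ρ/(4ε₀) to the update would turn it into a Poisson solver):

```python
import numpy as np

def solve_laplace(n=50, n_iter=5000):
    """Jacobi relaxation for Laplace's equation on an n x n grid.
    Boundary conditions: top edge held at 1 V, the other edges at 0 V."""
    phi = np.zeros((n, n))
    phi[0, :] = 1.0                        # top edge at 1 V
    for _ in range(n_iter):
        # each interior point relaxes towards the average of its four neighbours
        phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                  phi[1:-1, :-2] + phi[1:-1, 2:])
    return phi

phi = solve_laplace()
print(phi[25, 25])   # potential near the centre of the box
```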
Electrostatic approximation
The validity of the electrostatic approximation rests on the assumption that the electric field is irrotational:

\nabla\times\mathbf{E} = 0

From Faraday's law, this assumption implies the absence or near-absence of time-varying magnetic fields:

\frac{\partial\mathbf{B}}{\partial t} \approx 0
In other words, electrostatics does not require the absence of magnetic fields or electric currents. Rather, if magnetic fields or electric currents do exist, they must not change with time, or in the worst-case, they must change with time only very slowly. In some problems, both electrostatics and magnetostatics may be required for accurate predictions, but the coupling between the two can still be ignored. Electrostatics and magnetostatics can both be seen as non-relativistic Galilean limits for electromagnetism. In addition, conventional electrostatics ignore quantum effects which have to be added for a complete description.
Electrostatic potential
As the electric field is irrotational, it is possible to express the electric field as the gradient of a scalar function, Φ, called the electrostatic potential (also known as the voltage). An electric field, E, points from regions of high electric potential to regions of low electric potential, expressed mathematically as

\mathbf{E} = -\nabla\Phi
The gradient theorem can be used to establish that the electrostatic potential is the amount of work per unit charge required to move a charge from point a to point b with the following line integral:

\Phi(b) - \Phi(a) = -\int_a^b \mathbf{E}\cdot d\boldsymbol{\ell}
From these equations, we see that the electric potential is constant in any region for which the electric field vanishes (such as occurs inside a conducting object).
Electrostatic energy
A test particle's potential energy, U_E, can be calculated from a line integral of the work done on it. We integrate from a point at infinity, and assume a collection of N particles of charge Q_i are already situated at the points r_i. This potential energy (in joules) is:

U_E(\mathbf{r}) = q\,\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0}\sum_{i=1}^{N}\frac{Q_i}{\|\mathbf{r}-\mathbf{r}_i\|}

where \|\mathbf{r}-\mathbf{r}_i\| is the distance of each charge Q_i from the test charge q, which is situated at the point r, and Φ(r) is the electric potential that would be at r if the test charge were not present. If only two charges are present, the potential energy is Q_1 Q_2/(4\pi\varepsilon_0 r). The total electric potential energy due to a collection of N charges is calculated by assembling these particles one at a time:

U_E = \frac{1}{4\pi\varepsilon_0}\sum_{j=1}^{N} Q_j \sum_{i=1}^{j-1}\frac{Q_i}{r_{ij}} = \frac{1}{2}\sum_{i=1}^{N} Q_i\,\Phi_i

where the following sum, from j = 1 to N, excludes i = j:

\Phi_i = \frac{1}{4\pi\varepsilon_0}\sum_{\substack{j=1 \\ j\neq i}}^{N}\frac{Q_j}{r_{ij}}

This electric potential, Φ_i, is what would be measured at r_i if the charge Q_i were missing. This formula obviously excludes the (infinite) energy that would be required to assemble each point charge from a disperse cloud of charge. The sum over charges can be converted into an integral over charge density using the prescription \sum_i Q_i(\cdots) \rightarrow \int \rho(\mathbf{r})(\cdots)\, d^{3}r:

U_E = \frac{1}{2}\int \rho(\mathbf{r})\,\Phi(\mathbf{r})\, d^{3}r = \frac{\varepsilon_0}{2}\int \left|\mathbf{E}\right|^{2} d^{3}r
This second expression for electrostatic energy uses the fact that the electric field is the negative gradient of the electric potential, as well as vector calculus identities in a way that resembles integration by parts. These two integrals for electric field energy seem to indicate two mutually exclusive formulas for electrostatic energy density, namely \tfrac{\varepsilon_0}{2}|\mathbf{E}|^{2} and \tfrac{1}{2}\rho\Phi; they yield equal values for the total electrostatic energy only if both are integrated over all space.
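A small sketch of the pairwise assembly energy for point charges (illustrative values; the i = j self-energy terms are excluded, as in the sum above):

```python
import itertools
import math

EPS0 = 8.854_187_8128e-12
K = 1.0 / (4.0 * math.pi * EPS0)

def assembly_energy(charges, positions):
    """Total electrostatic potential energy (J) of a set of point charges,
    summing K*qi*qj/rij over each unordered pair."""
    U = 0.0
    for (q1, r1), (q2, r2) in itertools.combinations(zip(charges, positions), 2):
        U += K * q1 * q2 / math.dist(r1, r2)
    return U

# Example: three +1 nC charges on the corners of an equilateral triangle (1 cm sides).
q = [1e-9] * 3
pos = [(0.0, 0.0), (0.01, 0.0), (0.005, 0.01 * math.sqrt(3) / 2)]
print(assembly_energy(q, pos))   # ~2.7e-6 J
```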
Electrostatic pressure
On a conductor, a surface charge will experience a force in the presence of an electric field. This force is the average of the discontinuous electric field at the surface charge. This average in terms of the field just outside the surface amounts to:

P = \frac{\varepsilon_0}{2}E^{2}
This pressure tends to draw the conductor into the field, regardless of the sign of the surface charge.
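A one-line numerical illustration of this surface pressure (illustrative field value, roughly the breakdown strength of air):

```python
EPS0 = 8.854_187_8128e-12

def electrostatic_pressure(E_outside: float) -> float:
    """Electrostatic pressure (N/m^2) on a conductor surface, P = eps0*E^2/2,
    with E the field magnitude just outside the surface."""
    return 0.5 * EPS0 * E_outside**2

print(electrostatic_pressure(3e6))   # ~40 Pa at 3 MV/m
```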
See also
Electrostatic generator, machines that create static electricity.
Electrostatic induction, separation of charges due to electric fields.
Permittivity and relative permittivity, the electric polarizability of materials.
Quantisation of charge, the charge units carried by electrons or protons.
Static electricity, stationary charge accumulated on a material.
Triboelectric effect, separation of charges due to sliding or contact.
References
Further reading
External links
The Feynman Lectures on Physics Vol. II Ch. 4: Electrostatics
Introduction to Electrostatics: Point charges can be treated as a distribution using the Dirac delta function
Molecular modelling | Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic level description of the molecular systems. This may include treating atoms as the smallest individual unit (a molecular mechanics approach), or explicitly modelling protons and neutrons with their quarks, anti-quarks and gluons, and electrons with their photons (a quantum chemistry approach).
Molecular mechanics
Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics (Newtonian mechanics) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and Van der Waals forces. The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics.
This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters are collectively termed a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using chemical theory, experimental reference data, and high level quantum calculations. The method, termed energy minimization, is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower energy states are more stable and are commonly investigated because of their role in chemical and biological processes. A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, F = ma. Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful to obtain a static picture for comparing between states of similar systems, while molecular dynamics provides information about the dynamic processes with the intrinsic inclusion of temperature effects.
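As a minimal sketch of the energy-minimisation idea described above, the following gradient descent finds the minimum of a single Lennard-Jones pair potential (roughly argon-like parameters assumed purely for illustration; production force fields add bonded terms, electrostatics and many particles):

```python
# Steepest-descent minimisation of a Lennard-Jones dimer separation.
EPS, SIGMA = 0.996, 0.3405      # well depth (kJ/mol) and size parameter (nm), illustrative

def lj_force(r):
    """F = -dU/dr for U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    sr6 = (SIGMA / r) ** 6
    return 24.0 * EPS * (2.0 * sr6**2 - sr6) / r

r, step = 0.5, 1e-3             # start away from the minimum
for _ in range(2000):
    r += step * lj_force(r)     # move down the potential-energy gradient
print(r, 2 ** (1 / 6) * SIGMA)  # converges towards r_min = 2^(1/6)*sigma ~ 0.382 nm
```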
Variables
Molecules can be modelled either in vacuum, or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed implicit solvation simulations.
Coordinate representations
Most force fields are distance-dependent, which makes Cartesian coordinates the most convenient representation for evaluating them. Yet the comparatively rigid nature of the bonds between specific atoms, which in essence defines what is meant by the designation "molecule", makes an internal coordinate system the most logical representation in other respects. In some fields the internal coordinate (IC) representation (bond length, angle between bonds, and twist angle about the bond) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation; conversely, a simple displacement of an atom in Cartesian space may not be a straight-line trajectory in internal coordinates because of the constraints of the interconnected bonds. Thus, it is very common for computational optimization programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself and, in long-chain molecules, introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy. Currently, the fastest and most accurate torsion-to-Cartesian conversion is the Natural Extension Reference Frame (NERF) method.
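A minimal sketch of the NERF-style placement step in Python is shown below; it converts one atom's internal coordinates (bond length, bond angle, torsion) into a Cartesian position, given the three previously placed atoms. The function name and the sign/angle conventions are illustrative assumptions, since conventions vary between implementations; this is a sketch of the idea, not a reference implementation.

    import numpy as np

    def nerf_place(a, b, c, bond_length, bond_angle, torsion):
        """Place atom D from internal coordinates, given the Cartesian positions of
        the three previously placed atoms A, B, C.
        bond_angle is the B-C-D angle and torsion the A-B-C-D dihedral, in radians."""
        bc = (c - b) / np.linalg.norm(c - b)              # unit vector along B -> C
        n = np.cross(b - a, bc)
        n /= np.linalg.norm(n)                            # normal of the A, B, C plane
        m = np.cross(n, bc)                               # completes the local frame
        # Position of D expressed in the local frame (bc, m, n):
        d_local = bond_length * np.array([-np.cos(bond_angle),
                                          np.sin(bond_angle) * np.cos(torsion),
                                          np.sin(bond_angle) * np.sin(torsion)])
        # Rotate into the global frame and translate to C.
        frame = np.column_stack([bc, m, n])
        return c + frame @ d_local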
Applications
Molecular modelling methods are used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. A large number of force-field parameter sets and prebuilt molecular models are readily available in databases today. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes.
See also
References
Further reading
Bioinformatics
Molecular biology
Computational chemistry
Poynting's theorem | In electrodynamics, Poynting's theorem is a statement of conservation of energy for electromagnetic fields developed by British physicist John Henry Poynting. It states that the rate at which the electromagnetic energy stored in a given volume decreases equals the rate at which the fields do work on the charges within the volume plus the rate at which energy leaves the volume through its surface. It is only strictly true in media which are not dispersive, but it can be extended to the dispersive case.
The theorem is analogous to the work-energy theorem in classical mechanics, and mathematically similar to the continuity equation.
Definition
Poynting's theorem states that the rate of energy transfer per unit volume from a region of space equals the rate of work done on the charge distribution in the region, plus the energy flux leaving that region.
Mathematically:
−∂u/∂t = ∇·S + J·E
where:
∂u/∂t is the rate of change of the energy density in the volume.
∇·S is the energy flow out of the volume, given by the divergence of the Poynting vector S.
J·E is the rate at which the fields do work on charges in the volume (J is the current density corresponding to the motion of charge, E is the electric field, and · is the dot product).
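As an illustration, the short numpy sketch below checks this differential form numerically for a vacuum plane wave (so J = 0), with E = E0 cos(kx − ωt) along y and B = (E0/c) cos(kx − ωt) along z. The amplitude, wavenumber, grid and time step are arbitrary choices made only for the example.

    import numpy as np

    eps0, mu0 = 8.854e-12, 4e-7 * np.pi
    c = 1 / np.sqrt(eps0 * mu0)
    E0, k = 1.0, 2 * np.pi                 # arbitrary amplitude and wavenumber
    omega = c * k

    x = np.linspace(0.0, 1.0, 2001)
    t, dt = 0.0, 1e-12

    def fields(x, t):
        E = E0 * np.cos(k * x - omega * t)          # E along y
        B = (E0 / c) * np.cos(k * x - omega * t)    # B along z
        return E, B

    def u(x, t):                                    # electromagnetic energy density
        E, B = fields(x, t)
        return 0.5 * eps0 * E**2 + B**2 / (2 * mu0)

    def Sx(x, t):                                   # x component of S = (E x B) / mu0
        E, B = fields(x, t)
        return E * B / mu0

    du_dt = (u(x, t + dt) - u(x, t - dt)) / (2 * dt)
    div_S = np.gradient(Sx(x, t), x)
    # With J = 0, Poynting's theorem reduces to -du/dt = div S:
    print(np.max(np.abs(-du_dt - div_S)) / np.max(np.abs(div_S)))  # small, set by the finite differences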
Integral form
Using the divergence theorem, Poynting's theorem can also be written in integral form:
−d/dt ∫_V u dV = ∮_∂V S · dA + ∫_V J · E dV
where
S is the energy flow, given by the Poynting vector.
u is the energy density in the volume.
∂V is the boundary of the volume. The shape of the volume is arbitrary but fixed for the calculation.
Continuity equation analog
In an electrical engineering context the theorem is sometimes written with the energy density term u expanded as shown below. This form resembles the continuity equation:
∇·S + ε₀ E·(∂E/∂t) + (1/μ₀) B·(∂B/∂t) + J·E = 0,
where
ε₀ is the vacuum permittivity and μ₀ is the vacuum permeability.
ε₀ E·(∂E/∂t) is the density of reactive power driving the build-up of electric field,
(1/μ₀) B·(∂B/∂t) is the density of reactive power driving the build-up of magnetic field, and
J·E is the density of electric power dissipated by the Lorentz force acting on charge carriers.
Derivation
For an individual charge q moving with velocity v in an electromagnetic field, the rate of work done by the field on the charge is given by the Lorentz force law as:
dW/dt = qv · E
(the magnetic part of the Lorentz force does no work). Extending this to a continuous distribution of charges, moving with current density J, gives a rate of work per unit volume of:
J · E
By Ampère's circuital law:
∇ × H = J + ∂D/∂t
(Note that the H and D forms of the magnetic and electric fields are used here. The B and E forms could also be used in an equivalent derivation.)
Substituting this into the expression for rate of work gives:
J · E = E · (∇ × H) − E · ∂D/∂t
Using the vector identity ∇ · (E × H) = H · (∇ × E) − E · (∇ × H):
J · E = H · (∇ × E) − ∇ · (E × H) − E · ∂D/∂t
By Faraday's law:
∇ × E = −∂B/∂t
giving:
J · E = −∇ · (E × H) − H · ∂B/∂t − E · ∂D/∂t
Continuing the derivation requires the following assumptions:
the charges are moving in a medium which is not dispersive.
the total electromagnetic energy density, even for time-varying fields, is given by
u = ½ (E · D + B · H)
It can be shown that:
E · ∂D/∂t = ½ ∂(E · D)/∂t
and
H · ∂B/∂t = ½ ∂(B · H)/∂t
and so:
∂u/∂t = E · ∂D/∂t + H · ∂B/∂t
Returning to the equation for rate of work and integrating over a volume V,
∫_V J · E dV = −∫_V (∂u/∂t + ∇ · (E × H)) dV.
Since the volume is arbitrary, this can be cast in differential form as:
−∂u/∂t = ∇ · S + J · E
where S = E × H is the Poynting vector.
Poynting vector in macroscopic media
In a macroscopic medium, electromagnetic effects are described by spatially averaged (macroscopic) fields. The Poynting vector in a macroscopic medium can be defined self-consistently with microscopic theory, in such a way that the spatially averaged microscopic Poynting vector is exactly predicted by a macroscopic formalism. This result is strictly valid in the limit of low-loss and allows for the unambiguous identification of the Poynting vector form in macroscopic electrodynamics.
Alternative forms
It is possible to derive alternative versions of Poynting's theorem. Instead of the flux vector as above, it is possible to follow the same style of derivation, but instead choose , the Minkowski form , or perhaps . Each choice represents the response of the propagation medium in its own way: the form above has the property that the response happens only due to electric currents, while the form uses only (fictitious) magnetic monopole currents. The other two forms (Abraham and Minkowski) use complementary combinations of electric and magnetic currents to represent the polarization and magnetization responses of the medium.
Modification
The derivation of the statement is dependent on the assumption that the materials the equation models can be described by a set of susceptibility properties that are linear, isotropic, homogeneous and independent of frequency. The assumption that the materials have no absorption must also be made. A modification to Poynting's theorem to account for variations includes a term for the rate of non-Ohmic absorption in a material, which can be calculated by a simplified approximation based on the Drude model.
Complex Poynting vector theorem
This form of the theorem is useful in antenna theory, where one often has to consider harmonic fields propagating in space.
In this case, using phasor notation, E(t) = Re[E e^{jωt}] and H(t) = Re[H e^{jωt}].
Then the following mathematical identity holds:
−½ ∮_∂V (E × H*) · dA = ½ ∫_V E · J* dV + 2jω ∫_V (w_m − w_e) dV,
where J is the current density, w_e = ¼ ε₀ E · E* is the time-averaged electric energy density and w_m = ¼ μ₀ H · H* is the time-averaged magnetic energy density.
Note that in free space, ε₀ and μ₀ are real; thus,
taking the real part of the above formula, it expresses the fact that the averaged radiated power flowing through ∂V is equal to the work on the charges.
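In this phasor picture, the time-averaged power density carried by the fields is ½ Re(E × H*). The following Python snippet, with made-up field values purely for illustration, computes this quantity for a plane wave in free space.

    import numpy as np

    eta0 = 376.73                                # impedance of free space, ohms (approx.)

    # Phasor amplitudes of a plane wave travelling along +z (illustrative values).
    E = np.array([10.0 + 0j, 0.0, 0.0])          # V/m, polarized along x
    H = np.array([0.0, 10.0 / eta0, 0.0])        # A/m, along y, with H = E / eta0

    # Time-averaged Poynting vector: 0.5 * Re(E x H*)
    S_avg = 0.5 * np.real(np.cross(E, np.conj(H)))
    print(S_avg)                                 # ~ [0, 0, 0.133] W/m^2, flowing along +z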
References
External links
Eric W. Weisstein "Poynting Theorem" From ScienceWorld – A Wolfram Web Resource.
Electrodynamics
Eponymous theorems of physics
Circuit theorems
Galilean transformation | In physics, a Galilean transformation is used to transform between the coordinates of two reference frames which differ only by constant relative motion within the constructs of Newtonian physics. These transformations together with spatial rotations and translations in space and time form the inhomogeneous Galilean group (assumed throughout below). Without the translations in space and time the group is the homogeneous Galilean group. The Galilean group is the group of motions of Galilean relativity acting on the four dimensions of space and time, forming the Galilean geometry. This is the passive transformation point of view. In special relativity the homogeneous and inhomogeneous Galilean transformations are, respectively, replaced by the Lorentz transformations and Poincaré transformations; conversely, the group contraction in the classical limit of Poincaré transformations yields Galilean transformations.
The equations below are only physically valid in a Newtonian framework, and not applicable to coordinate systems moving relative to each other at speeds approaching the speed of light.
Galileo formulated these concepts in his description of uniform motion.
The topic was motivated by his description of the motion of a ball rolling down a ramp, by which he measured the numerical value for the acceleration of gravity near the surface of the Earth.
Translation
Although the transformations are named for Galileo, it is the absolute time and space as conceived by Isaac Newton that provides their domain of definition. In essence, the Galilean transformations embody the intuitive notion of addition and subtraction of velocities as vectors.
The notation below describes the relationship under the Galilean transformation between the coordinates (x, y, z, t) and (x′, y′, z′, t′) of a single arbitrary event, as measured in two coordinate systems S and S′, in uniform relative motion (velocity v) in their common x and x′ directions, with their spatial origins coinciding at time t = t′ = 0:
x′ = x − vt
y′ = y
z′ = z
t′ = t
Note that the last equation holds for all Galilean transformations up to addition of a constant, and expresses the assumption of a universal time independent of the relative motion of different observers.
In the language of linear algebra, this transformation is considered a shear mapping, and is described with a matrix acting on a vector. With motion parallel to the x-axis, the transformation acts on only two components:
(x′, t′)ᵀ = [[1, −v], [0, 1]] (x, t)ᵀ
Though matrix representations are not strictly necessary for Galilean transformation, they provide the means for direct comparison to transformation methods in special relativity.
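A small Python sketch of this matrix picture is given below; it applies the shear matrix for a boost of (arbitrarily chosen) velocity v to a few (x, t) events and checks the result against x′ = x − vt, t′ = t.

    import numpy as np

    v = 3.0                                   # relative velocity (arbitrary units)
    boost = np.array([[1.0, -v],
                      [0.0, 1.0]])            # shear matrix acting on (x, t)

    events = np.array([[0.0, 0.0],
                       [10.0, 2.0],
                       [-4.0, 1.5]])          # each row is an event (x, t)

    transformed = events @ boost.T            # (x', t') for each event
    # Check against the explicit formulas x' = x - v t and t' = t:
    assert np.allclose(transformed[:, 0], events[:, 0] - v * events[:, 1])
    assert np.allclose(transformed[:, 1], events[:, 1])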
Galilean transformations
The Galilean symmetries can be uniquely written as the composition of a rotation, a translation and a uniform motion of spacetime. Let x represent a point in three-dimensional space, and t a point in one-dimensional time. A general point in spacetime is given by an ordered pair (x, t).
A uniform motion, with velocity v, is given by
(x, t) ↦ (x + tv, t)
where v ∈ ℝ³. A translation is given by
(x, t) ↦ (x + a, t + s)
where a ∈ ℝ³ and s ∈ ℝ. A rotation is given by
(x, t) ↦ (Rx, t)
where R : ℝ³ → ℝ³ is an orthogonal transformation.
As a Lie group, the group of Galilean transformations has dimension 10.
Galilean group
Two Galilean transformations G(R, v, a, s) and G(R′, v′, a′, s′) compose to form a third Galilean transformation,
G(R′, v′, a′, s′) · G(R, v, a, s) = G(R′R, R′v + v′, R′a + v′s + a′, s′ + s).
The set of all Galilean transformations Gal(3) forms a group with composition as the group operation.
The group is sometimes represented as a matrix group with spacetime events (x, t, 1) as vectors, where t is real and x ∈ ℝ³ is a position in space.
The action is given by
(x, t, 1) ↦ (Rx + vt + a, t + s, 1),
where s is real and v, x, a ∈ ℝ³ and R is a rotation matrix.
The composition of transformations is then accomplished through matrix multiplication. Care must be taken in the discussion whether one restricts oneself to the connected component group of the orthogonal transformations.
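One convenient matrix realization (an illustrative choice, not the only one) embeds a Galilean transformation as a 5×5 matrix acting on the column vector (x, t, 1). The Python sketch below composes two such transformations by matrix multiplication and checks the result against the composition law quoted above.

    import numpy as np

    def galilean_matrix(R, v, a, s):
        """5x5 matrix sending (x, t, 1) to (R x + v t + a, t + s, 1)."""
        M = np.eye(5)
        M[:3, :3] = R
        M[:3, 3] = v
        M[:3, 4] = a
        M[3, 4] = s
        return M

    def random_rotation(rng):
        """A random 3x3 rotation matrix obtained from a QR decomposition."""
        Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        return Q * np.linalg.det(Q)           # force determinant +1

    rng = np.random.default_rng(0)
    R1, R2 = random_rotation(rng), random_rotation(rng)
    v1, a1, s1 = rng.normal(size=3), rng.normal(size=3), 0.7
    v2, a2, s2 = rng.normal(size=3), rng.normal(size=3), -1.2

    # Matrix product of the two transformations...
    composed = galilean_matrix(R2, v2, a2, s2) @ galilean_matrix(R1, v1, a1, s1)
    # ...agrees with G' . G = G(R'R, R'v + v', R'a + v's + a', s' + s):
    expected = galilean_matrix(R2 @ R1, R2 @ v1 + v2, R2 @ a1 + v2 * s1 + a2, s2 + s1)
    assert np.allclose(composed, expected)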
Gal(3) has named subgroups. The identity component is denoted SGal(3).
Let m represent the transformation matrix with parameters v, R, s, a:
anisotropic transformations.
isochronous transformations.
spatial Euclidean transformations.
uniformly special transformations / homogeneous transformations, isomorphic to Euclidean transformations.
shifts of origin / translation in Newtonian spacetime.
rotations (of reference frame) (see SO(3)), a compact group.
uniform frame motions / boosts.
The parameters v, R, s, a span ten dimensions. Since the transformations depend continuously on v, R, s, a, Gal(3) is a continuous group, also called a topological group.
The structure of Gal(3) can be understood by reconstruction from subgroups. The semidirect product combination of groups is required.
( is a normal subgroup)
Origin in group contraction
The Lie algebra of the Galilean group is spanned by H, Pᵢ, Cᵢ and Lᵢⱼ (an antisymmetric tensor), subject to commutation relations, where
H is the generator of time translations (Hamiltonian), Pᵢ is the generator of translations (momentum operator), Cᵢ is the generator of rotationless Galilean transformations (Galilean boosts), and Lᵢⱼ stands for a generator of rotations (angular momentum operator).
This Lie algebra is seen to be a special classical limit of the algebra of the Poincaré group, in the limit c → ∞. Technically, the Galilean group is a celebrated group contraction of the Poincaré group (which, in turn, is a group contraction of the de Sitter group SO(1,4)).
Formally, renaming the generators of momentum and boost of the latter as in
P₀ ↦ H/c, Kᵢ ↦ c·Cᵢ,
where c is the speed of light (or any unbounded function thereof), the commutation relations (structure constants) in the limit c → ∞ take on the relations of the former.
Generators of time translations and rotations are identified. Also note the group invariants and .
In matrix form, for , one may consider the regular representation (embedded in , from which it could be derived by a single group contraction, bypassing the Poincaré group),
The infinitesimal group element is then
Central extension of the Galilean group
One may consider a central extension of the Lie algebra of the Galilean group, spanned by H′, P′ᵢ, C′ᵢ, L′ᵢⱼ and an operator M:
The so-called Bargmann algebra is obtained by imposing [C′ᵢ, P′ⱼ] = iMδᵢⱼ, such that M lies in the center, i.e. commutes with all other operators.
In full, this algebra is given as
and finally
where the new parameter M shows up.
This extension, and the projective representations that it enables, is determined by its group cohomology.
See also
Galilean invariance
Representation theory of the Galilean group
Galilei-covariant tensor formulation
Poincaré group
Lorentz group
Lagrangian and Eulerian coordinates
Notes
References
Theoretical physics
Time in physics