Such systems are known as "ideal gas thermometers".
In a sense, focused on the zeroth law, there is only one kind of diathermal wall or one kind of heat, as expressed by Maxwell's dictum that "All heat is of the same kind". But in another sense, heat is transferred in different ranks, as expressed by Arnold Sommerfeld's dictum "Thermodynamics investigates the conditions that govern the transformation of heat into work. It teaches us to recognize temperature as the measure of the work-value of heat. Heat of higher temperature is richer, is capable of doing more work. Work may be regarded as heat of an infinitely high temperature, as unconditionally available heat." This is why temperature is the particular variable indicated by the zeroth law's statement of equivalence.
## Dependence on the existence of walls permeable only to heat
In Constantin Carathéodory's (1909) theory, it is postulated that there exist walls "permeable only to heat", though heat is not explicitly defined in that paper. This postulate is a physical postulate of existence. It does not say that there is only one kind of heat. This paper of Carathéodory states as proviso 4 of its account of such walls: "Whenever each of the systems S1 and S2 is made to reach equilibrium with a third system S3 under identical conditions, systems S1 and S2 are in mutual equilibrium".
It is the function of this statement in the paper, not there labeled as the zeroth law, to provide not only for the existence of transfer of energy other than by work or transfer of matter, but further to provide that such transfer is unique in the sense that there is only one kind of such wall, and one kind of such transfer. This is signaled in the postulate of this paper of Carathéodory that precisely one non-deformation variable is needed to complete the specification of a thermodynamic state, beyond the necessary deformation variables, which are not restricted in number. It is therefore not exactly clear what Carathéodory means when in the introduction of this paper he writes
It is possible to develop the whole theory without assuming the existence of heat, that is of a quantity that is of a different nature from the normal mechanical quantities.
It is the opinion of Elliott H. Lieb and Jakob Yngvason (1999) that the derivation from statistical mechanics of the law of entropy increase is a goal that has so far eluded the deepest thinkers. Thus the idea remains open to consideration that the existence of heat and temperature are needed as coherent primitive concepts for thermodynamics, as expressed, for example, by Maxwell and Max Planck. On the other hand, Planck (1926) clarified how the second law can be stated without reference to heat or temperature, by referring to the irreversible and universal nature of friction in natural thermodynamic processes.
## History
Writing long before the term "zeroth law" was coined, in 1871 Maxwell discussed at some length ideas which he summarized by the words "All heat is of the same kind". Modern theorists sometimes express this idea by postulating the existence of a unique one-dimensional hotness manifold, into which every proper temperature scale has a monotonic mapping. This may be expressed by the statement that there is only one kind of temperature, regardless of the variety of scales in which it is expressed. Another modern expression of this idea is that "All diathermal walls are equivalent".
This might also be expressed by saying that there is precisely one kind of non-mechanical, non-matter-transferring contact equilibrium between thermodynamic systems.
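The equivalence-relation reading of the zeroth law can be illustrated with a small toy computation. The sketch below (plain Python, with system names echoing Carathéodory's S1, S2, S3; nothing in it comes from the sources discussed here) groups systems observed to be in mutual thermal equilibrium into classes; one temperature label per class is then all that is ever needed.

```python
class EquilibriumClasses:
    """Union-find over thermodynamic systems: because mutual thermal equilibrium is
    transitive (the zeroth law), it partitions systems into classes, and each class
    can be labelled by a single temperature."""

    def __init__(self):
        self.parent = {}

    def _find(self, s):
        # Follow parent links to the representative of s's class, with path halving.
        self.parent.setdefault(s, s)
        while self.parent[s] != s:
            self.parent[s] = self.parent[self.parent[s]]
            s = self.parent[s]
        return s

    def observe_equilibrium(self, a, b):
        """Record the observation that systems a and b are in thermal equilibrium."""
        self.parent[self._find(a)] = self._find(b)

    def same_temperature(self, a, b) -> bool:
        """True if a and b must share a temperature, by transitivity."""
        return self._find(a) == self._find(b)


eq = EquilibriumClasses()
eq.observe_equilibrium("S1", "S3")       # S1 equilibrates with the reference system S3
eq.observe_equilibrium("S2", "S3")       # S2 equilibrates with the same S3
print(eq.same_temperature("S1", "S2"))   # True: the zeroth law's conclusion
```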
According to Sommerfeld, Ralph H. Fowler coined the term zeroth law of thermodynamics while discussing the 1935 text by Meghnad Saha and B.N. Srivastava.
They write on page 1 that "every physical quantity must be measurable in numerical terms". They presume that temperature is a physical quantity and then deduce the statement "If a body A is in temperature equilibrium with two bodies B and C, then B and C themselves are in temperature equilibrium with each other". Then they italicize a self-standing paragraph, as if to state their basic postulate:
They do not themselves here use the phrase "zeroth law of thermodynamics".
There are very many statements of these same physical ideas in the physics literature long before this text, in very similar language. What was new here was just the label zeroth law of thermodynamics.
Fowler & Guggenheim (1936/1965) wrote of the zeroth law as follows:
They then proposed that
The first sentence of this present article is a version of this statement. It is not explicitly evident in the existence statement of Fowler and Edward A. Guggenheim that temperature refers to a unique attribute of a state of a system, such as is expressed in the idea of the hotness manifold. Also their statement refers explicitly to statistical mechanical assemblies, not explicitly to macroscopic thermodynamically defined systems.
Superconductivity is a set of physical properties observed in superconductors: materials where electrical resistance vanishes and magnetic fields are expelled from the material. Unlike an ordinary metallic conductor, whose resistance decreases gradually as its temperature is lowered, even down to near absolute zero, a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. An electric current through a loop of superconducting wire can persist indefinitely with no power source.
The superconductivity phenomenon was discovered in 1911 by Dutch physicist Heike Kamerlingh Onnes. Like ferromagnetism and atomic spectral lines, superconductivity is a phenomenon which can only be explained by quantum mechanics. It is characterized by the Meissner effect, the complete cancellation of the magnetic field in the interior of the superconductor during its transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics.
In 1986, it was discovered that some cuprate-perovskite ceramic materials have a critical temperature above . It was shortly found (by Ching-Wu Chu) that replacing the lanthanum with yttrium, i.e. making YBCO, raised the critical temperature to , which was important because liquid nitrogen could then be used as a refrigerant. Such a high transition temperature is theoretically impossible for a conventional superconductor, leading the materials to be termed high-temperature superconductors. The cheaply available coolant liquid nitrogen boils at and thus the existence of superconductivity at higher temperatures than this facilitates many experiments and applications that are less practical at lower temperatures.
## History
Superconductivity was discovered on April 8, 1911, by Heike Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently produced liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistance abruptly disappeared. In the same experiment, he also observed the superfluid transition of helium at 2.2 K, without recognizing its significance. The precise date and circumstances of the discovery were only reconstructed a century later, when Onnes's notebook was found.
In subsequent decades, superconductivity was observed in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K.
Great efforts have been devoted to finding out how and why superconductivity works; the important step occurred in 1933, when Meissner and Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon which has come to be known as the Meissner effect. In 1935, Fritz and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.
### London constitutive equations
The theoretical model that was first conceived for superconductivity was completely classical: it is summarized by London constitutive equations. It was put forward by the brothers Fritz and Heinz London in 1935, shortly after the discovery that magnetic fields are expelled from superconductors.
A major triumph of the equations of this theory is their ability to explain the Meissner effect, wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold. By using the London equation, one can obtain the dependence of the magnetic field inside the superconductor on the distance to the surface.
The two constitutive equations for a superconductor by London are:
$$
\frac{\partial \mathbf{j}}{\partial t} = \frac{n e^2}{m}\mathbf{E}, \qquad \mathbf{\nabla}\times\mathbf{j} =-\frac{n e^2}{m}\mathbf{B}.
$$
The first equation follows from Newton's second law for superconducting electrons.
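Combining the second London equation with Ampère's law gives ∇²B = B/λ² with λ² = m/(μ0 n e²). The short Python sketch below evaluates this penetration depth and the resulting exponential decay; the carrier density n = 10²⁸ m⁻³ is an assumed, merely representative value, not a figure from the article.

```python
import math

# Physical constants (SI units)
MU_0 = 4e-7 * math.pi          # vacuum permeability, T*m/A
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def london_penetration_depth(n_s: float) -> float:
    """lambda = sqrt(m / (mu_0 * n_s * e^2)) for superfluid carrier density n_s (m^-3)."""
    return math.sqrt(M_ELECTRON / (MU_0 * n_s * E_CHARGE ** 2))

def field_inside(b_surface: float, depth: float, lam: float) -> float:
    """Exponential decay B(x) = B_surface * exp(-x / lambda) for a field
    parallel to a flat surface, as implied by the London equations."""
    return b_surface * math.exp(-depth / lam)

if __name__ == "__main__":
    n_s = 1e28                      # assumed carrier density, m^-3 (illustrative)
    lam = london_penetration_depth(n_s)
    print(f"penetration depth ~ {lam * 1e9:.0f} nm")
    for x_nm in (0, 50, 100, 200):
        b = field_inside(0.01, x_nm * 1e-9, lam)   # 10 mT applied at the surface
        print(f"B at {x_nm:4d} nm inside: {b * 1e3:.3f} mT")
```

With this assumed density the depth comes out near 50 nm, the same order of magnitude as the roughly 100 nm quoted later in the article for real superconductors.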
### Conventional theories (1950s)
During the 1950s, theoretical condensed matter physicists arrived at an understanding of "conventional" superconductivity, through a pair of remarkable and important theories: the phenomenological Ginzburg–Landau theory (1950) and the microscopic BCS theory (1957).
In 1950, the phenomenological Ginzburg–Landau theory of superconductivity was devised by Landau and Ginzburg. This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger-like wave equation, had great success in explaining the macroscopic properties of superconductors.
In particular, Abrikosov showed that Ginzburg–Landau theory predicts the division of superconductors into the two categories now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau had received the 1962 Nobel Prize for other work, and died in 1968). The four-dimensional extension of the Ginzburg–Landau theory, the Coleman-Weinberg model, is important in quantum field theory and cosmology.
Also in 1950, Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element. This important discovery pointed to the electron–phonon interaction as the microscopic mechanism responsible for superconductivity.
The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper and Schrieffer. This BCS theory explained the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons.
For this work, the authors were awarded the Nobel Prize in 1972.
The BCS theory was set on a firmer footing in 1958, when N. N. Bogolyubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian. In 1959, Lev Gor'kov showed that the BCS theory reduced to the Ginzburg–Landau theory close to the critical temperature.
Generalizations of BCS theory for conventional superconductors form the basis for the understanding of the phenomenon of superfluidity, because they fall into the lambda transition universality class. The extent to which such generalizations can be applied to unconventional superconductors is still controversial.
### Niobium
The first practical application of superconductivity was developed in 1954 with Dudley Allen Buck's invention of the cryotron. Two superconductors with greatly different values of the critical magnetic field are combined to produce a fast, simple switch for computer elements.
Soon after discovering superconductivity in 1911, Kamerlingh Onnes attempted to make an electromagnet with superconducting windings but found that relatively low magnetic fields destroyed superconductivity in the materials he investigated. Much later, in 1955, G. B. Yntema succeeded in constructing a small 0.7-tesla iron-core electromagnet with superconducting niobium wire windings. Then, in 1961, J. E. Kunzler, E. Buehler, F. S. L. Hsu, and J. H. Wernick made the startling discovery that, at 4.2 kelvin, niobium–tin, a compound consisting of three parts niobium and one part tin, was capable of supporting a current density of more than 100,000 amperes per square centimeter in a magnetic field of 8.8 tesla. The alloy was brittle and difficult to fabricate, but niobium–tin proved useful for generating magnetic fields as high as 20 tesla.
In 1962, T. G. Berlincourt and R. R. Hake discovered that more ductile alloys of niobium and titanium are suitable for applications up to 10 tesla. Commercial production of niobium–titanium supermagnet wire immediately commenced at Westinghouse Electric Corporation and at Wah Chang Corporation. Although niobium–titanium boasts less-impressive superconducting properties than those of niobium–tin, niobium–titanium became the most widely used "workhorse" supermagnet material, in large measure a consequence of its very high ductility and ease of fabrication. However, both niobium–tin and niobium–titanium found wide application in MRI medical imagers, bending and focusing magnets for enormous high-energy-particle accelerators, and other applications.
Conectus, a European superconductivity consortium, estimated that in 2014, global economic activity for which superconductivity was indispensable amounted to about five billion euros, with MRI systems accounting for about 80% of that total.
### Josephson effect
In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973.
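As a quick concrete check (not part of the original text), the flux quantum Φ0 = h/(2e) can be evaluated directly from the exact SI values of the constants:

```python
# Magnetic flux quantum Phi_0 = h / (2e), the quantity measured via the Josephson effect.
H_PLANCK = 6.62607015e-34    # Planck constant, J*s (exact in the 2019 SI)
E_CHARGE = 1.602176634e-19   # elementary charge, C (exact in the 2019 SI)

phi_0 = H_PLANCK / (2 * E_CHARGE)
print(f"Phi_0 = {phi_0:.6e} Wb")   # ~2.067834e-15 Wb
```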
In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance. The first development and study of superconducting Bose–Einstein condensate (BEC) in 2020 suggested a "smooth transition between" BEC and Bardeen–Cooper–Schrieffer regimes.
### 2D materials
Multiple types of superconductivity are reported in devices made of single-layer materials. Some of these materials can switch between conducting, insulating, and other behaviors.
Twisting materials imbues them with a “moiré” pattern involving tiled hexagonal cells that act like atoms and host electrons. In this environment, the electrons move slowly enough for their collective interactions to guide their behavior. When each cell has a single electron, the electrons take on an antiferromagnetic arrangement; each electron can have a preferred location and magnetic orientation. Their intrinsic magnetic fields tend to alternate between pointing up and down. Adding electrons allows superconductivity by causing Cooper pairs to form.
Fu and Schrade argued that electron-on-electron action was allowing both antiferromagnetic and superconducting states.
The first success with 2D materials involved a twisted bilayer graphene sheet (2018, Tc ~1.7 K, 1.1° twist). A twisted three-layer graphene device was later shown to superconduct (2021, Tc ~2.8 K). Then an untwisted trilayer graphene device was reported to superconduct (2022, Tc ~1–2 K). The latter was later shown to be tunable, easily reproducing behavior found in millions of other configurations. Directly observing what happens when electrons are added to a material, or when its electric field is slightly weakened, lets physicists quickly try out an unprecedented number of recipes to see which lead to superconductivity.
These devices have applications in quantum computing.
2D materials other than graphene have also been made to superconduct. Transition metal dichalcogenide (TMD) sheets twisted at 5 degrees intermittently achieved superconductivity by creating a Josephson junction.
The device used thin layers of palladium to connect to the sides of a tungsten telluride layer surrounded and protected by boron nitride. Another group demonstrated superconductivity in molybdenum telluride (MoTe₂) in 2D van der Waals materials using ferroelectric domain walls. The Tc was implied to be higher than that of typical TMDs (~5–10 K).
A Cornell group added a 3.5-degree twist to an insulator that allowed electrons to slow down and interact strongly, leaving one electron per cell and exhibiting superconductivity. Existing theories do not explain this behavior.
Fu and collaborators proposed that the electrons arrange themselves into a repeating crystal that can float independently of the background atomic nuclei, allowing the electron grid to relax. Its ripples pair electrons the way phonons do, although this is unconfirmed.
## Classification
Superconductors are classified according to many criteria. The most common are:
### Response to a magnetic field
A superconductor can be Type I, meaning it has a single critical field, above which superconductivity is lost and below which the magnetic field is completely expelled from the superconductor; or Type II, meaning it has two critical fields, between which it allows partial penetration of the magnetic field through isolated points called vortices. Furthermore, in multicomponent superconductors it is possible to combine the two behaviours. In that case the superconductor is of Type-1.5.
### Theory of operation
A superconductor is conventional if it is driven by electron–phonon interaction and explained by the BCS theory or its extension, the Eliashberg theory. Otherwise, it is unconventional. Alternatively, a superconductor is called unconventional if the superconducting order parameter transforms according to a non-trivial irreducible representation of the system's point group or space group.
### Critical temperature
A superconductor is generally considered high-temperature if it reaches a superconducting state above a temperature of 30 K (−243.15 °C); as in the initial discovery by Georg Bednorz and K. Alex Müller. It may also reference materials that transition to superconductivity when cooled using liquid nitrogen – that is, at only Tc > 77 K, although this is generally used only to emphasize that liquid nitrogen coolant is sufficient. Low temperature superconductors refer to materials with a critical temperature below 30 K, and are cooled mainly by liquid helium (Tc > 4.2 K). One exception to this rule is the iron pnictide group of superconductors that display behaviour and properties typical of high-temperature superconductors, yet some of the group have critical temperatures below 30 K.
### Material
Superconductor material classes include chemical elements (e.g. mercury or lead), alloys (such as niobium–titanium, germanium–niobium, and niobium nitride), ceramics (YBCO and magnesium diboride), superconducting pnictides (like fluorine-doped LaOFeAs), single-layer materials such as graphene and transition metal dichalcogenides, or organic superconductors (fullerenes and carbon nanotubes; though perhaps these examples should be included among the chemical elements, as they are composed entirely of carbon).
## Elementary properties
Several physical properties of superconductors vary from material to material, such as the critical temperature, the value of the superconducting gap, the critical magnetic field, and the critical current density at which superconductivity is destroyed. On the other hand, there is a class of properties that are independent of the underlying material.
The Meissner effect, the quantization of the magnetic flux or permanent currents, i.e. the state of zero resistance are the most important examples. The existence of these "universal" properties is rooted in the nature of the broken symmetry of the superconductor and the emergence of off-diagonal long range order. Superconductivity is a thermodynamic phase, and thus possesses certain distinguishing properties which are largely independent of microscopic details. Off-diagonal long range order is closely connected to the formation of Cooper pairs.
### Zero electrical DC resistance
The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source I and measure the resulting voltage V across the sample. The resistance of the sample is given by Ohm's law as R = V / I. If the voltage is zero, this means that the resistance is zero.
Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a lifetime of at least 100,000 years. Theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature. In practice, currents injected in superconducting coils persisted for 28 years, 7 months, 27 days in a superconducting gravimeter in Belgium, from August 4, 1995 until March 31, 2024. In such instruments, the measurement is based on the monitoring of the levitation of a superconducting niobium sphere with a mass of four grams.
In a normal conductor, an electric current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat, which is essentially the vibrational kinetic energy of the lattice ions.
As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance and Joule heating.
The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons. Instead, it consists of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. This pairing is very weak, and small thermal vibrations can fracture the bond. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice, given by kT, where k is the Boltzmann constant and T is the temperature, the fluid will not be scattered by the lattice.
The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation.
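A rough numerical illustration of this criterion is sketched below. It uses the weak-coupling BCS estimate Δ ≈ 1.76 kB Tc for the zero-temperature gap, which is a standard textbook relation rather than something stated in the article, together with mercury's Tc ≈ 4.2 K as the example.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def bcs_gap(tc_kelvin: float) -> float:
    """Zero-temperature gap per electron, Delta ~ 1.76 * kB * Tc
    (weak-coupling BCS estimate; assumed here, not quoted in the article)."""
    return 1.76 * K_B * tc_kelvin

tc = 4.2                   # solid mercury's critical temperature, K
delta = bcs_gap(tc)
for t in (1.0, 2.0, 4.0):  # sample temperatures below Tc
    print(f"T = {t:.1f} K: Delta / kT = {delta / (K_B * t):.2f}")
```

Well below Tc the ratio Δ/kT is comfortably above one, which is the regime in which the pair fluid is not scattered by the lattice.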
In the class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely low but non-zero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, the vortices are stationary, and the resistivity vanishes. The resistance due to this effect is minuscule compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen into a disordered but stationary phase known as a "vortex glass".
Below this vortex glass transition temperature, the resistance of the material becomes truly zero.
### Phase transition
In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from around 20 K to less than 1 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2015, the highest critical temperature found for a conventional superconductor is 203 K for H2S, although high pressures of approximately 90 gigapascals were required. Cuprate superconductors can have much higher critical temperatures: YBa2Cu3O7, one of the first cuprate superconductors to be discovered, has a critical temperature above 90 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The basic physical mechanism responsible for the high critical temperature is not yet clear.
However, it is clear that a two-electron pairing is involved, although the nature of the pairing (s-wave vs. d-wave) remains controversial.
Similarly, at a fixed temperature below the critical temperature, superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the critical magnetic field. This is because the Gibbs free energy of the superconducting phase increases quadratically with the magnetic field while the free energy of the normal phase is roughly independent of the magnetic field. If the material superconducts in the absence of a field, then the superconducting phase free energy is lower than that of the normal phase and so for some finite value of the magnetic field (proportional to the square root of the difference of the free energies at zero magnetic field) the two free energies will be equal and a phase transition to the normal phase will occur.
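As a short worked version of this argument (written in SI form with the standard magnetic energy density, which the article does not spell out), the transition occurs when the field-dependent superconducting free energy density reaches the normal-state value:
$$
f_s(0) + \frac{\mu_0 H_c^2}{2} = f_n
\qquad\Longrightarrow\qquad
H_c = \sqrt{\frac{2\,\bigl[f_n - f_s(0)\bigr]}{\mu_0}} ,
$$
which makes explicit why the critical field is proportional to the square root of the free-energy difference at zero field.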
More generally, a higher temperature and a stronger magnetic field lead to a smaller fraction of electrons that are superconducting and consequently to a longer London penetration depth of external magnetic fields and currents. The penetration depth becomes infinite at the phase transition.
The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition, it suffers a discontinuous jump and thereafter ceases to be linear. At low temperatures, it varies instead as e^(−α/T) for some constant α. This exponential behavior is one of the pieces of evidence for the existence of the energy gap.
The order of the superconducting phase transition was long a matter of debate. Experiments indicate that the transition is second-order, meaning there is no latent heat.
However, in the presence of an external magnetic field there is latent heat, because the superconducting phase has a lower entropy below the critical temperature than the normal phase. It has been experimentally demonstrated that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material.
Calculations in the 1970s suggested that it may actually be weakly first-order due to the effect of long-range fluctuations in the electromagnetic field. In the 1980s it was shown theoretically with the help of a disorder field theory, in which the vortex lines of the superconductor play a major role, that the transition is of second order within the type II regime and of first order (i.e., latent heat) within the type I regime, and that the two regions are separated by a tricritical point. The results were strongly supported by Monte Carlo computer simulations.
### Meissner effect
When a superconductor is placed in a weak external magnetic field H, and cooled below its transition temperature, the magnetic field is ejected.
The Meissner effect does not cause the field to be completely ejected; rather, the field penetrates the superconductor only to a very small distance, characterized by a parameter λ, called the London penetration depth, decaying exponentially to zero within the bulk of the material. The Meissner effect is a defining characteristic of superconductivity. For most superconductors, the London penetration depth is on the order of 100 nm.
The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law, when a changing magnetic field is applied to a conductor, it will induce an electric current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field.
The Meissner effect is distinct from this: it is the spontaneous expulsion that occurs during the transition to superconductivity. Suppose we have a material in its normal state, containing a constant internal magnetic field.
When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law.
The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided
$$
\nabla^2\mathbf{H} = \lambda^{-2} \mathbf{H}\,
$$
where H is the magnetic field and λ is the London penetration depth.
This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.
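For concreteness, for a field applied parallel to a flat surface filling the region x > 0, the decaying solution of the equation above is
$$
H(x) = H(0)\, e^{-x/\lambda} ,
$$
so the field falls to roughly 37% of its surface value one penetration depth into the material.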
A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc.
Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.
### London moment
Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axis. The effect, the London moment, was put to good use in Gravity Probe B.
This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere.
## High-temperature superconductivity
## Applications
Superconductors are promising candidate materials for devising fundamental circuit elements of electronic, spintronic, and quantum technologies. One such example is the superconducting diode, in which supercurrent flows along one direction only, promising dissipationless superconducting and semiconducting-superconducting hybrid technologies.
Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI/NMR machines, mass spectrometers, the beam-steering magnets used in particle accelerators and plasma confining magnets in some tokamaks. They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less or non-magnetic particles, as in the pigment industries.
They can also be used in large wind turbines to overcome the restrictions imposed by high electrical currents, with an industrial grade 3.6 megawatt superconducting windmill generator having been tested successfully in Denmark.
In the 1950s and 1960s, superconductors were used to build experimental digital computers using cryotron switches. More recently, superconductors have been used to make digital circuits based on rapid single flux quantum technology and RF and microwave filters for mobile phone base stations.
Superconductors are used to build Josephson junctions which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. SQUIDs are used in scanning SQUID microscopes and magnetoencephalography. Series of Josephson devices are used to realize the SI volt. Superconducting photon detectors can be realised in a variety of device configurations.
Depending on the particular mode of operation, a superconductor–insulator–superconductor Josephson junction can be used as a photon detector or as a mixer. The large resistance change at the transition from the normal to the superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors. The same effect is used in ultrasensitive bolometers made from superconducting materials. Superconducting nanowire single-photon detectors offer high speed, low noise single-photon detection and have been employed widely in advanced photon-counting applications.
Other early markets are arising where the relative efficiency, size and weight advantages of devices based on high-temperature superconductivity outweigh the additional costs involved. For example, in wind turbines the lower weight and volume of superconducting generators could lead to savings in construction and tower costs, offsetting the higher costs for the generator and lowering the total levelized cost of electricity (LCOE).
Promising future applications include high-performance smart grid, electric power transmission, transformers, power storage devices, compact fusion power devices, electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains), magnetic levitation devices, fault current limiters, enhancing spintronic devices with superconducting materials, and superconducting magnetic refrigeration. However, superconductivity is sensitive to moving magnetic fields, so applications that use alternating current (e.g. transformers) will be more difficult to develop than those that rely upon direct current. Compared to traditional power lines, superconducting transmission lines are more efficient and require only a fraction of the space, which would not only lead to a better environmental performance but could also improve public acceptance for expansion of the electric grid. Another attractive industrial aspect is the ability for high power transmission at lower voltages. Advancements in the efficiency of cooling systems and use of cheap coolants such as liquid nitrogen have also significantly decreased cooling costs needed for superconductivity.
## Nobel Prizes
As of 2022, there have been five Nobel Prizes in Physics for superconductivity related subjects:
- Heike Kamerlingh Onnes (1913), "for his investigations on the properties of matter at low temperatures which led, inter alia, to the production of liquid helium".
- John Bardeen, Leon N. Cooper, and J. Robert Schrieffer (1972), "for their jointly developed theory of superconductivity, usually called the BCS-theory".
- Leo Esaki, Ivar Giaever, and Brian D. Josephson (1973), "for their experimental discoveries regarding tunneling phenomena in semiconductors and superconductors, respectively" and "for his theoretical predictions of the properties of a supercurrent through a tunnel barrier, in particular those phenomena which are generally known as the Josephson effects".
- Georg Bednorz and K. Alex Müller (1987), "for their important break-through in the discovery of superconductivity in ceramic materials".
- Alexei A. Abrikosov, Vitaly L. Ginzburg, and Anthony J. Leggett (2003), "for pioneering contributions to the theory of superconductors and superfluids".
Version control (also known as revision control, source control, and source code management) is the software engineering practice of controlling, organizing, and tracking different versions in history of computer files; primarily source code text files, but generally any type of file.
Version control is a component of software configuration management.
A version control system is a software tool that automates version control. Alternatively, version control is embedded as a feature of some systems such as word processors, spreadsheets, collaborative web docs, and content management systems, e.g., Wikipedia's page history.
Version control includes viewing old versions and enables reverting a file to a previous version.
## Overview
As teams develop software, it is common to deploy multiple versions of the same software, and for different developers to work on one or more different versions simultaneously. Bugs or features of the software are often only present in certain versions (because of the fixing of some problems and the introduction of others as the program develops). Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and run different versions of the software to determine in which version(s) the problem occurs. It may also be necessary to develop two versions of the software concurrently: for instance, where one version has bugs fixed, but no new features (branch), while the other version is where new features are worked on (trunk).
At the simplest level, developers could simply retain multiple copies of the different versions of the program, and label them appropriately. This simple approach has been used in many large software projects. While this method can work, it is inefficient as many near-identical copies of the program have to be maintained. This requires a lot of self-discipline on the part of developers and often leads to mistakes. Since the code base is the same, it also requires granting read-write-execute permission to a set of developers, and this adds the pressure of someone managing permissions so that the code base is not compromised, which adds more complexity. Consequently, systems to automate some or all of the revision control process have been developed. This abstracts most operational steps (hides them from ordinary users).
Moreover, in software development, legal and business practice, and other environments, it has become increasingly common for a single document or snippet of code to be edited by a team, the members of which may be geographically dispersed and may pursue different and even contrary interests. Sophisticated revision control that tracks and accounts for ownership of changes to documents and code may be extremely helpful or even indispensable in such situations.
Revision control may also track changes to configuration files, such as those typically stored in `/etc` or `/usr/local/etc` on Unix systems. This gives system administrators another way to easily track changes made and a way to roll back to earlier versions should the need arise.
Many version control systems identify the version of a file as a number or letter, called the version number, version, revision number, revision, or revision level. For example, the first version of a file might be version 1. When the file is changed the next version is 2. Each version is associated with a timestamp and the person making the change. Revisions can be compared, restored, and, with some types of files, merged.
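A minimal sketch of this bookkeeping in Python is shown below; the names (Revision, FileHistory, commit, restore) are invented for illustration and do not correspond to any particular version control system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    """One recorded version of a file: a number, an author, the content, a timestamp."""
    number: int
    author: str
    content: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class FileHistory:
    """Keeps every revision of a single file so old versions can be viewed or restored."""

    def __init__(self, author: str, content: str):
        self.revisions = [Revision(1, author, content)]

    def commit(self, author: str, content: str) -> Revision:
        rev = Revision(len(self.revisions) + 1, author, content)
        self.revisions.append(rev)
        return rev

    def restore(self, number: int) -> str:
        """Return the content of an earlier version; reverting means committing it again."""
        return self.revisions[number - 1].content

history = FileHistory("alice", "first draft")
history.commit("bob", "second draft")
print(history.restore(1))   # -> "first draft"
```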
## History
IBM's OS/360 IEBUPDTE software update tool dates back to 1962, arguably a precursor to version control system tools.
Two source management and version control packages that were heavily used by IBM 360/370 installations were The Librarian and Panvalet.
A full system designed for source code control was started in 1972: the Source Code Control System (SCCS), again for the OS/360. SCCS's user manual, published on December 4, 1975, implied in its introduction that it was the first deliberate revision control system. The Revision Control System (RCS) followed in 1982 and, later, Concurrent Versions System (CVS) added network and concurrent development features to RCS. After CVS, a dominant successor was Subversion, followed by the rise of distributed version control tools such as Git.
## Structure
Revision control manages changes to a set of data over time. These changes can be structured in various ways.
Often the data is thought of as a collection of many individual items, such as files or documents, and changes to individual files are tracked. This accords with intuitions about separate files but causes problems when identity changes, such as during renaming, splitting or merging of files.
Accordingly, some systems such as Git, instead consider changes to the data as a whole, which is less intuitive for simple changes but simplifies more complex changes.
When data that is under revision control is modified, after being retrieved by checking out, this is not in general immediately reflected in the revision control system (in the repository), but must instead be checked in or committed. A copy outside revision control is known as a "working copy". As a simple example, when editing a computer file, the data stored in memory by the editing program is the working copy, which is committed by saving. Concretely, one may print out a document, edit it by hand, and only later manually input the changes into a computer and save it. For source code control, the working copy is instead a copy of all files in a particular revision, generally stored locally on the developer's computer; in this case saving the file only changes the working copy, and checking into the repository is a separate step.
If multiple people are working on a single data set or document, they are implicitly creating branches of the data (in their working copies), and thus issues of merging arise, as discussed below.
For simple collaborative document editing, this can be prevented by using file locking or simply avoiding working on the same document that someone else is working on.
Revision control systems are often centralized, with a single authoritative data store, the repository, and check-outs and check-ins done with reference to this central repository. Alternatively, in distributed revision control, no single repository is authoritative, and data can be checked out and checked into any repository. When checking into a different repository, this is interpreted as a merge or patch.
### Graph structure
In terms of graph theory, revisions are generally thought of as a line of development (the trunk) with branches off of this, forming a directed tree, visualized as one or more parallel lines of development (the "mainlines" of the branches) branching off a trunk. In reality the structure is more complicated, forming a directed acyclic graph, but for many purposes "tree with merges" is an adequate approximation.
Revisions occur in sequence over time, and thus can be arranged in order, either by revision number or timestamp. Revisions are based on past revisions, though it is possible to largely or completely replace an earlier revision, such as "delete all existing text, insert new text". In the simplest case, with no branching or undoing, each revision is based on its immediate predecessor alone, and they form a simple line, with a single latest version, the "HEAD" revision or tip. In graph theory terms, drawing each revision as a point and each "derived revision" relationship as an arrow (conventionally pointing from older to newer, in the same direction as time), this is a linear graph.
If there is branching, so multiple future revisions are based on a past revision, or undoing, so a revision can depend on a revision older than its immediate predecessor, then the resulting graph is instead a directed tree (each node can have more than one child), and has multiple tips, corresponding to the revisions without children ("latest revision on each branch"). In principle the resulting tree need not have a preferred tip ("main" latest revision) – just various different revisions – but in practice one tip is generally identified as HEAD. When a new revision is based on HEAD, it is either identified as the new HEAD, or considered a new branch. The list of revisions from the start to HEAD (in graph theory terms, the unique path in the tree, which forms a linear graph as before) is the trunk or mainline.
Conversely, when a revision can be based on more than one previous revision (when a node can have more than one parent), the resulting process is called a merge, and is one of the most complex aspects of revision control. This most often occurs when changes occur in multiple branches (most often two, but more are possible), which are then merged into a single branch incorporating both changes. If these changes overlap, it may be difficult or impossible to merge, and require manual intervention or rewriting.
In the presence of merges, the resulting graph is no longer a tree, as nodes can have multiple parents, but is instead a rooted directed acyclic graph (DAG). The graph is acyclic since parents are always backwards in time, and rooted because there is an oldest version. Assuming there is a trunk, merges from branches can be considered as "external" to the tree – the changes in the branch are packaged up as a patch, which is applied to HEAD (of the trunk), creating a new revision without any explicit reference to the branch, and preserving the tree structure. Thus, while the actual relations between versions form a DAG, this can be considered a tree plus merges, and the trunk itself is a line.
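The graph structure described above can be made concrete with a small data model. The following Python sketch is illustrative only (it does not follow any particular tool's internal format, and the revision identifiers are invented): each revision records the identifiers of its parents, a merge is simply a revision with two parents, and the tips are the revisions that no other revision lists as a parent.

```python
# Illustrative revision graph: a rooted DAG in which each revision
# records its parent identifiers. A merge is a revision with two
# parents; tips ("heads") are revisions no other revision points to.

class Revision:
    def __init__(self, rev_id, parents=(), message=""):
        self.rev_id = rev_id
        self.parents = tuple(parents)   # ids of the revisions this one is based on
        self.message = message

def tips(history):
    """Return the ids of revisions without children (the branch heads)."""
    all_parents = {p for rev in history.values() for p in rev.parents}
    return [rev_id for rev_id in history if rev_id not in all_parents]

# A tiny hypothetical history: a trunk r1-r2-r3, a branch r2b off r1,
# and a merge revision r4 that joins the branch back into the trunk.
history = {
    "r1":  Revision("r1"),
    "r2":  Revision("r2",  ("r1",), "trunk change"),
    "r2b": Revision("r2b", ("r1",), "branch change"),
    "r3":  Revision("r3",  ("r2",), "more trunk work"),
    "r4":  Revision("r4",  ("r3", "r2b"), "merge branch into trunk"),
}

print(tips(history))   # ['r4'] -- after the merge there is a single head
```

Before r4 is committed, both r3 and r2b would be reported as tips, matching the description above of one latest revision per branch.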
In distributed revision control, in the presence of multiple repositories these may be based on a single original version (a root of the tree), but there need not be an original root - instead there can be a separate root (oldest revision) for each repository. This can happen, for example, if two people start working on a project separately. Similarly, in the presence of multiple data sets (multiple projects) that exchange data or merge, there is no single root, though for simplicity one may think of one project as primary and the other as secondary, merged into the first with or without its own revision history.
## Specialized strategies
Engineering revision control developed from formalized processes based on tracking revisions of early blueprints or bluelines. This system of control implicitly allowed returning to an earlier state of the design, for cases in which an engineering dead-end was reached in the development of the design. A revision table was used to keep track of the changes made. Additionally, the modified areas of the drawing were highlighted using revision clouds.
### In business and law
Version control is widespread in business and law. Indeed, "contract redline" and "legal blackline" are some of the earliest forms of revision control, and are still employed in business and law with varying degrees of sophistication. The most sophisticated techniques are beginning to be used for the electronic tracking of changes to CAD files (see product data management), supplanting the "manual" electronic implementation of traditional revision control.
## Source-management models
Traditional revision control systems use a centralized model where all the revision control functions take place on a shared server. If two developers try to change the same file at the same time, without some method of managing access the developers may end up overwriting each other's work. Centralized revision control systems solve this problem in one of two different "source management models": file locking and version merging.
### Atomic operations
An operation is atomic if the system is left in a consistent state even if the operation is interrupted. The commit operation is usually the most critical in this sense. Commits tell the revision control system to make a group of changes final, and available to all users. Not all revision control systems have atomic commits; Concurrent Versions System lacks this feature.
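As an illustration of the atomicity requirement (a minimal sketch, not the commit machinery of any real system), the snippet below writes a new repository state to a temporary file and then renames it into place. The rename step is atomic on typical filesystems when source and destination are on the same filesystem, so an interrupted commit leaves either the old state or the new state, never a half-written one; the file name and snapshot format are assumptions made for the example.

```python
import json
import os
import tempfile

def atomic_commit(repo_path, snapshot):
    """Replace the repository file so readers see either the old or the
    new contents, never a partially written file."""
    directory = os.path.dirname(os.path.abspath(repo_path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)   # temp file on the same filesystem
    try:
        with os.fdopen(fd, "w") as tmp:
            json.dump(snapshot, tmp)
            tmp.flush()
            os.fsync(tmp.fileno())       # force the bytes to disk first
        os.replace(tmp_path, repo_path)  # atomic rename into place
    except BaseException:
        os.unlink(tmp_path)              # clean up the temp file on failure
        raise

atomic_commit("repo.json", {"revision": 42, "files": {"a.txt": "hello"}})
```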
### File locking
The simplest method of preventing "concurrent access" problems involves locking files so that only one developer at a time has write access to the central "repository" copies of those files. Once one developer "checks out" a file, others can read that file, but no one else may change that file until that developer "checks in" the updated version (or cancels the checkout).
File locking has both merits and drawbacks. It can provide some protection against difficult merge conflicts when a user is making radical changes to many sections of a large file (or group of files). If the files are left exclusively locked for too long, other developers may be tempted to bypass the revision control software and change the files locally, forcing a difficult manual merge when the other changes are finally checked in.
In a large organization, files can be left "checked out" and locked and forgotten about as developers move between projects; these tools may or may not make it easy to see who has a file checked out.
### Version merging
Most version control systems allow multiple developers to edit the same file at the same time. The first developer to "check in" changes to the central repository always succeeds. The system may provide facilities to merge further changes into the central repository, and preserve the changes from the first developer when other developers check in.
Merging two files can be a very delicate operation, and usually possible only if the data structure is simple, as in text files. The result of a merge of two image files might not result in an image file at all. The second developer checking in the code will need to take care with the merge, to make sure that the changes are compatible and that the merge operation does not introduce its own logic errors within the files. These problems limit the availability of automatic or semi-automatic merge operations mainly to simple text-based documents, unless a specific merge plugin is available for the file types.
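To make the merge problem concrete, here is a deliberately simplified three-way merge in Python. It assumes the base version and both edited versions have the same number of lines (in-place edits only, no insertions or deletions), which real tools do not assume; they use diff3-style algorithms instead. The sample file contents are invented.

```python
def merge3(base, ours, theirs):
    """Toy line-by-line three-way merge; all inputs must be equal length."""
    merged, conflicts = [], 0
    for b, o, t in zip(base, ours, theirs):
        if o == t:                 # both sides agree (or neither changed the line)
            merged.append(o)
        elif b == o:               # only "theirs" changed this line
            merged.append(t)
        elif b == t:               # only "ours" changed this line
            merged.append(o)
        else:                      # both changed it differently: a conflict
            conflicts += 1
            merged.extend(["<<<<<<< ours", o, "=======", t, ">>>>>>> theirs"])
    return merged, conflicts

base   = ["red", "green", "blue"]
ours   = ["red", "GREEN", "blue"]
theirs = ["red", "green", "BLUE"]
result, n_conflicts = merge3(base, ours, theirs)
print(n_conflicts)   # 0 -- the edits touch different lines and merge cleanly
print(result)        # ['red', 'GREEN', 'BLUE']
```

If both sides had edited the same line to different values, the function would instead emit conflict markers for a person to resolve, which is the manual intervention referred to above.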
The concept of a reserved edit can provide an optional means to explicitly lock a file for exclusive write access, even when a merging capability exists.
### Baselines, labels and tags
Most revision control tools will use only one of these similar terms (baseline, label, tag) to refer to the action of identifying a snapshot ("label the project") or the record of the snapshot ("try it with baseline X"). Typically only one of the terms baseline, label, or tag is used in documentation or discussion; they can be considered synonyms.
In most projects, some snapshots are more significant than others, such as those used to indicate published releases, branches, or milestones.
When both the term baseline and either of label or tag are used together in the same context, label and tag usually refer to the mechanism within the tool of identifying or making the record of the snapshot, and baseline indicates the increased significance of any given label or tag.
Most formal discussion of configuration management uses the term baseline.
## Distributed revision control
Distributed revision control systems (DRCS) take a peer-to-peer approach, as opposed to the client–server approach of centralized systems. Rather than a single, central repository on which clients synchronize, each peer's working copy of the codebase is a bona-fide repository.
Distributed revision control conducts synchronization by exchanging patches (change-sets) from peer to peer. This results in some important differences from a centralized system:
- No canonical, reference copy of the codebase exists by default; only working copies.
- Common operations (such as commits, viewing history, and reverting changes) are fast, because there is no need to communicate with a central server.
Rather, communication is only necessary when pushing or pulling changes to or from other peers.
- Each working copy effectively functions as a remote backup of the codebase and of its change-history, providing inherent protection against data loss.
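A minimal sketch of the peer-to-peer exchange described above, under the simplifying assumption that a repository is nothing more than a mapping from change-set identifiers to change-sets (real systems also exchange ancestry information and verify integrity); the peer names and change-set ids are invented.

```python
def pull(local, remote):
    """Copy into `local` any change-sets that exist only in `remote`."""
    missing = {cid: cs for cid, cs in remote.items() if cid not in local}
    local.update(missing)
    return sorted(missing)          # ids that were transferred

def push(local, remote):
    """A push is simply a pull initiated from the other side."""
    return pull(remote, local)

alice = {"c1": "initial import", "c2": "fix typo"}
bob   = {"c1": "initial import", "c3": "add feature"}

print(pull(alice, bob))              # ['c3'] -- Alice fetches Bob's change-set
print(push(alice, bob))              # ['c2'] -- Alice's change-set is copied to Bob
print(sorted(alice) == sorted(bob))  # True  -- neither copy was ever authoritative
```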
## Best practices
Following best practices is necessary to obtain the full benefits of version control.
Best practices vary by version control tool and by the field to which version control is applied. Generally accepted best practices in software development include: making small, incremental changes; making commits that involve only one task or fix (a corollary is to commit only code that works and does not knowingly break existing functionality); using branching to complete functionality before release; writing clear and descriptive commit messages that make the what, why, and how clear in either the commit description or the code; and using a consistent branching strategy. Other software development practices, such as code review and automated regression testing, may assist in following version control best practices.
## Costs and benefits
Costs and benefits vary depending on the version control tool chosen and the field in which it is applied. This section speaks to the field of software development, where version control is widely applied.
### Costs
In addition to the costs of licensing the version control software, using version control requires time and effort. The concepts underlying version control must be understood and the technical particulars required to operate the version control software chosen must be learned. Version control best practices must be learned and integrated into the organization's existing software development practices. Management effort may be required to maintain the discipline needed to follow best practices in order to obtain useful benefit.
### Benefits
#### Allows for reverting changes
A core benefit is the ability to keep history and revert changes, allowing the developer to easily undo changes. This gives the developer more opportunity to experiment, eliminating the fear of breaking existing code.
#### Branching simplifies deployment, maintenance and development
Branching assists with deployment. Branching and merging, together with the production, packaging, and labeling of source code patches and the easy application of patches to code bases, simplify the maintenance and concurrent development of the multiple code bases associated with the various stages of the deployment process: development, testing, staging, production, and so on.
#### Damage mitigation, accountability and process and design improvement
There can be damage mitigation, accountability, process and design improvement, and other benefits associated with the record keeping provided by version control, the tracking of who did what, when, why, and how.
When bugs arise, knowing what was done when helps with damage mitigation and recovery by assisting in the identification of what problems exist, how long they have existed, and determining problem scope and solutions. Previous versions can be installed and tested to verify conclusions reached by examination of code and commit messages.
#### Simplifies debugging
Version control can greatly simplify debugging. The application of a test case to multiple versions can quickly identify the change which introduced a bug. The developer need not be familiar with the entire code base and can focus instead on the code that introduced the problem.
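The idea of applying a test case to multiple versions can be sketched as a binary search over an ordered revision history. The sketch below assumes the test passes on the earliest revision, fails on the latest, and flips from pass to fail exactly once; the revision numbers and the failing revision are hypothetical.

```python
def first_bad_revision(revisions, test_passes):
    """Binary search for the first revision on which the test fails.

    Assumes test_passes(revisions[0]) is True, test_passes(revisions[-1])
    is False, and the result changes exactly once along the history.
    """
    lo, hi = 0, len(revisions) - 1       # lo is known good, hi is known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if test_passes(revisions[mid]):
            lo = mid                     # the bug was introduced after mid
        else:
            hi = mid                     # the bug was introduced at or before mid
    return revisions[hi]

revisions = list(range(1, 101))                    # pretend revision numbers 1..100
test_passes = lambda rev: rev < 73                 # hypothetical: the bug landed in r73
print(first_bad_revision(revisions, test_passes))  # 73, found in about 7 test runs
```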
#### Improves collaboration and communication
Version control enhances collaboration in multiple ways. Since version control can identify conflicting changes, i.e. incompatible changes made to the same lines of code, there is less need for coordination among developers.
The packaging of commits, branches, and all the associated commit messages and version labels, improves communication between developers, both in the moment and over time. Better communication, whether instant or deferred, can improve the code review process, the testing process, and other critical aspects of the software development process.
## Integration
Some of the more advanced revision-control tools offer many other facilities, allowing deeper integration with other tools and software-engineering processes.
### Integrated development environment
Plugins are often available for IDEs such as Oracle JDeveloper, IntelliJ IDEA, Eclipse, Visual Studio, Delphi, NetBeans IDE, Xcode, and GNU Emacs (via vc.el). Advanced research prototypes generate appropriate commit messages.
## Common terminology
Terminology can vary from system to system, but some terms in common usage include:
### Baseline
An approved revision of a document or source file to which subsequent changes can be made. See baselines, labels and tags.
### Blame
A search for the author and revision that last modified a particular line.
### Branch
A set of files under version control may be branched or forked at a point in time so that, from that time forward, two copies of those files may develop at different speeds or in different ways independently of each other.
### Change
A change (or diff, or delta) represents a specific modification to a document under version control. The granularity of the modification considered a change varies between version control systems.
### Change list
On many version control systems with atomic multi-change commits, a change list (or CL), change set, update, or patch identifies the set of changes made in a single commit. This can also represent a sequential view of the source code, allowing the examination of source as of any particular changelist ID.
### Checkout
To check out (or co) is to create a local working copy from the repository. A user may specify a specific revision or obtain the latest. The term 'checkout' can also be used as a noun to describe the working copy. When a file has been checked out from a shared file server, it cannot be edited by other users. Think of it like a hotel: when you check out, you no longer have access to its amenities.
### Clone
Cloning means creating a repository containing the revisions from another repository. This is equivalent to pushing or pulling into an empty (newly initialized) repository. As a noun, two repositories can be said to be clones if they are kept synchronized, and contain the same revisions.
### Commit (noun)
### Commit (verb)
To commit (check in, ci or, more rarely, install, submit or record) is to write or merge the changes made in the working copy back to the repository. A commit contains metadata, typically the author information and a commit message that describes the change.
### Commit message
A short note, written by the developer, stored with the commit, which describes the commit. Ideally, it records why the modification was made, a description of the modification's effect or purpose, and non-obvious aspects of how the change works.
### Conflict
A conflict occurs when different parties make changes to the same document, and the system is unable to reconcile the changes. A user must resolve the conflict by combining the changes, or by selecting one change in favour of the other.
### Delta compression
Most revision control software uses delta compression, which retains only the differences between successive versions of files. This allows for more efficient storage of many different versions of files.
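As a rough illustration of delta storage (a sketch using Python's standard difflib module, not how any particular system encodes its deltas), the first version is kept in full and a later version is represented only by its differences from the predecessor; the sample text is invented.

```python
import difflib

# Keep the first version in full; store the next version as a diff.
v1 = ["the quick brown fox\n", "jumps over the lazy dog\n"]
v2 = ["the quick brown fox\n", "jumps over the sleeping dog\n"]

delta = list(difflib.unified_diff(v1, v2, lineterm="\n"))
print("".join(delta))
# Only the changed line (plus a small header and context) is stored,
# rather than a complete second copy of the file. Some systems store
# reverse deltas so that the newest version is the cheapest to read.
```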
### Dynamic stream
A stream in which some or all file versions are mirrors of the parent stream's versions.
### Export
Exporting is the act of obtaining the files from the repository. It is similar to checking out except that it creates a clean directory tree without the version-control metadata used in a working copy. This is often used prior to publishing the contents, for example.
### Fetch
See pull.
### Forward integration
The process of merging changes made in the main trunk into a development (feature or team) branch.
### Head
Also sometimes called tip, this refers to the most recent commit, either to the trunk or to a branch. The trunk and each branch have their own head, though HEAD is sometimes loosely used to refer to the trunk.
### Import
Importing is the act of copying a local directory tree (that is not currently a working copy) into the repository for the first time.
### Initialize
To create a new, empty repository.
### Interleaved deltas
Some revision control software uses interleaved deltas, a method that allows storing the history of text-based files more efficiently than delta compression.
### Label
See tag.
### Locking
When a developer locks a file, no one else can update that file until it is unlocked. Locking can be supported by the version control system, or via informal communications between developers (aka social locking).
### Mainline
Similar to trunk, but there can be a mainline for each branch.
### Merge
A merge or integration is an operation in which two sets of changes are applied to a file or set of files. Some sample scenarios are as follows:
- A user, working on a set of files, updates or syncs their working copy with changes made, and checked into the repository, by other users.
- A user tries to check in files that have been updated by others since the files were checked out, and the revision control software automatically merges the files (typically, after prompting the user if it should proceed with the automatic merge, and in some cases only doing so if the merge can be clearly and reasonably resolved).
- A branch is created, the code in the files is independently edited, and the updated branch is later incorporated into a single, unified trunk.
- A set of files is branched, a problem that existed before the branching is fixed in one branch, and the fix is then merged into the other branch. (This type of selective merge is sometimes known as a cherry pick to distinguish it from the complete merge in the previous case.)
### Promote
The act of copying file content from a less controlled location into a more controlled location. For example, from a user's workspace into a repository, or from a stream to its parent.
### Pull, push
Copy revisions from one repository into another. Pull is initiated by the receiving repository, while push is initiated by the source. Fetch is sometimes used as a synonym for pull, or to mean a pull followed by an update.
### Pull request
### Repository
### Resolve
The act of user intervention to address a conflict between different changes to the same document.
### Reverse integration
The process of merging different team branches into the main trunk of the versioning system.
### Revision and version
A version is any change in form. In SVK, a Revision is the state at a point in time of the entire tree in the repository.
### Share
The act of making one file or folder available in multiple branches at the same time. When a shared file is changed in one branch, it is changed in other branches.
### Stream
A container for branched files that has a known relationship to other such containers. Streams form a hierarchy; each stream can inherit various properties (like versions, namespace, workflow rules, subscribers, etc.) from its parent stream.
### Tag
A tag or label refers to an important snapshot in time, consistent across many files. These files at that point may all be tagged with a user-friendly, meaningful name or revision number. See baselines, labels and tags.
### Trunk
The trunk is the unique line of development that is not a branch (sometimes also called Baseline, Mainline or Master).
### Update
An update (or sync, but sync can also mean a combined push and pull) merges changes made in the repository (by other people, for example) into the local working copy. Update is also the term used by some CM tools (CM+, PLS, SMS) for the change package concept (see changelist). Synonymous with checkout in revision control systems that require each repository to have exactly one working copy (common in distributed systems).
### Unlocking
Releasing a lock.
### Working copy
The working copy is the local copy of files from a repository, at a specific time or revision. All work done to the files in a repository is initially done on a working copy, hence the name. Conceptually, it is a sandbox.
|
https://en.wikipedia.org/wiki/Version_control
|
The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. Various cosmological models based on the Big Bang concept explain a broad range of phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The uniformity of the universe, known as the horizon and flatness problems, is explained through cosmic inflation: a phase of accelerated expansion during the earliest stages. A wide range of empirical evidence strongly favors the Big Bang event, which is now essentially universally accepted. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated 13.8 billion years ago, which is considered the age of the universe.
Extrapolating this cosmic expansion backward in time using the known laws of physics, the models describe an extraordinarily hot and dense primordial universe. Physics lacks a widely accepted theory that can model the earliest conditions of the Big Bang. As the universe expanded, it cooled sufficiently to allow the formation of subatomic particles, and later atoms. These primordial elements—mostly hydrogen, with some helium and lithium—then coalesced under the force of gravity aided by dark matter, forming early stars and galaxies. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to a concept called dark energy.
The concept of an expanding universe was scientifically originated by the physicist Alexander Friedmann in 1922 with the mathematical derivation of the Friedmann equations. The earliest empirical observation of an expanding universe is known as Hubble's law, published in work by physicist Edwin Hubble in 1929, which discerned that galaxies are moving away from Earth at speeds proportional to their distance. Independent of Friedmann's work, and independent of Hubble's observations, physicist Georges Lemaître proposed that the universe emerged from a "primeval atom" in 1931, introducing the modern notion of the Big Bang. In 1964, the CMB was discovered. Over the next few years measurements showed this radiation to be uniform over directions in the sky and the shape of the energy versus intensity curve, both consistent with the Big Bang models of high temperatures and densities in the distant past.
By the late 1960s, most cosmologists were convinced that the competing steady-state model of cosmic evolution was incorrect.
There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. These include the unequal abundances of matter and antimatter known as baryon asymmetry, the detailed nature of dark matter surrounding galaxies, and the origin of dark energy.
## Features of the models
### Assumptions
Big Bang cosmology models depend on three major assumptions: the universality of physical laws, the cosmological principle, and that the matter content can be modeled as a perfect fluid. The universality of physical laws is one of the underlying principles of the theory of relativity. The cosmological principle states that on large scales the universe is homogeneous and isotropic—appearing the same in all directions regardless of location. A perfect fluid has no viscosity; the pressure of a perfect fluid is proportional to its density.
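The perfect-fluid assumption is usually expressed as a linear equation of state relating pressure to energy density. For reference, in standard cosmological notation (not notation introduced by this article):

```latex
% Perfect-fluid equation of state: pressure proportional to energy density.
p = w\,\rho c^{2},
\qquad
w = \begin{cases}
0 & \text{pressureless matter (dust)} \\
\tfrac{1}{3} & \text{radiation} \\
-1 & \text{cosmological constant (dark energy)}
\end{cases}
```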
These ideas were initially taken as postulates, but later efforts were made to test each of them. For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much of the age of the universe is of order 10⁻⁵. The key physical law behind these models, general relativity, has passed stringent tests on the scale of the Solar System and binary stars.
The cosmological principle has been confirmed to a level of 10⁻⁵ via observations of the temperature of the CMB. At the scale of the CMB horizon, the universe has been measured to be homogeneous with an upper bound on the order of 10% inhomogeneity, as of 1995.
### Expansion prediction
The cosmological principle dramatically simplifies the equations of general relativity, giving the Friedmann–Lemaître–Robertson–Walker metric to describe the geometry of the universe and, with the assumption of a perfect fluid, the Friedmann equations giving the time dependence of that geometry. The only parameter at this level of description is the mass-energy density: the geometry of the universe and its expansion is a direct consequence of its density. All of the major features of Big Bang cosmology are related to these results.
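For reference, the metric and the first Friedmann equation mentioned above take the following textbook form, with a(t) the scale factor, k the curvature constant, ρ the mass-energy density and Λ the cosmological constant (standard notation, not specific to this article):

```latex
% Friedmann–Lemaître–Robertson–Walker metric in comoving spherical coordinates:
ds^{2} = -c^{2}dt^{2}
       + a(t)^{2}\left[\frac{dr^{2}}{1-kr^{2}}
       + r^{2}\bigl(d\theta^{2}+\sin^{2}\theta\,d\phi^{2}\bigr)\right]

% First Friedmann equation: the expansion rate follows from the density,
% the spatial curvature and the cosmological constant.
H^{2}\equiv\left(\frac{\dot a}{a}\right)^{2}
      =\frac{8\pi G}{3}\rho-\frac{kc^{2}}{a^{2}}+\frac{\Lambda c^{2}}{3}
```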
### Mass-energy density
In Big Bang cosmology, the mass–energy density controls the shape and evolution of the universe. By combining astronomical observations with known laws of thermodynamics and particle physics, cosmologists have worked out the components of the density over the lifespan of the universe. In the current universe, luminous matter (the stars, planets, and so on) makes up less than 5% of the density, dark matter accounts for 27%, and dark energy the remaining 68%.
### Horizons
An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not yet had time to reach earth. This places a limit or a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric that describes the expansion of the universe.
Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well.
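In standard notation, the two horizons discussed in this subsection are the following integrals over the scale factor a(t) (the textbook expressions, given here for reference):

```latex
% Particle (past) horizon: the greatest distance from which light emitted
% at t = 0 can have reached an observer by time t.
d_{\mathrm{p}}(t) = a(t)\int_{0}^{t}\frac{c\,dt'}{a(t')}

% Event (future) horizon: the greatest distance that light emitted at
% time t can ever reach, provided the integral converges.
d_{\mathrm{e}}(t) = a(t)\int_{t}^{t_{\mathrm{max}}}\frac{c\,dt'}{a(t')}
```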
### Thermalization
Some processes in the early universe occurred too slowly, compared to the expansion rate of the universe, to reach approximate thermodynamic equilibrium. Others were fast enough to reach thermalization. The parameter usually used to find out whether a process in the very early universe has reached thermal equilibrium is the ratio between the rate of the process (usually rate of collisions between particles) and the Hubble parameter. The larger the ratio, the more time particles had to thermalize before they were too far away from each other.
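Written out, the criterion described above compares the interaction rate Γ of a process with the Hubble parameter H (standard notation, stated for reference):

```latex
% A process remains in thermal equilibrium while it is fast compared
% with the expansion of the universe, and freezes out once it is not.
\frac{\Gamma}{H}\gtrsim 1 \;\Rightarrow\; \text{thermalized},
\qquad
\frac{\Gamma}{H}\ll 1 \;\Rightarrow\; \text{frozen out (decoupled)}
```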
## Timeline
According to the Big Bang models, the universe at the beginning was very hot and very compact, and since then it has been expanding and cooling.
### Singularity
Existing theories of physics cannot tell us about the moment of the Big Bang.
Extrapolation of the expansion of the universe backwards in time using only general relativity yields a gravitational singularity with infinite density and temperature at a finite time in the past, but the meaning of this extrapolation in the context of the Big Bang is unclear. Moreover, classical gravitational theories are expected to be inadequate to describe physics under these conditions. Quantum gravity effects are expected to be dominant during the Planck epoch, when the temperature of the universe was close to the Planck scale (around 10³² K or 10²⁸ eV).
Even below the Planck scale, undiscovered physics could greatly influence the expansion history of the universe.
The Standard Model of particle physics is only tested up to temperatures of order 10¹⁷ K (10 TeV) in particle colliders, such as the Large Hadron Collider. Moreover, new physical phenomena decoupled from the Standard Model could have been important before the time of neutrino decoupling, when the temperature of the universe was only about 10¹⁰ K (1 MeV).
### Inflation and baryogenesis
The earliest phases of the Big Bang are subject to much speculation, given the lack of available data. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling. The period up to 10⁻⁴³ seconds into the expansion, the Planck epoch, was a phase in which the four fundamental forces (the electromagnetic force, the strong nuclear force, the weak nuclear force, and the gravitational force) were unified as one.
In this stage, the characteristic scale length of the universe was the Planck length, about 1.6×10⁻³⁵ m, and the temperature was approximately 10³² degrees Celsius. Even the very concept of a particle breaks down in these conditions. A proper understanding of this period awaits the development of a theory of quantum gravity. The Planck epoch was succeeded by the grand unification epoch beginning at 10⁻⁴³ seconds, where gravitation separated from the other forces as the universe's temperature fell.
At approximately 10⁻³⁷ seconds into the expansion, a phase transition caused a cosmic inflation, during which the universe grew exponentially, unconstrained by the light speed invariance, and temperatures dropped by a factor of 100,000. This concept is motivated by the flatness problem, where the density of matter and energy is very close to the critical density needed to produce a flat universe.
That is, the shape of the universe has no overall geometric curvature due to gravitational influence. Microscopic quantum fluctuations that occurred because of Heisenberg's uncertainty principle were "frozen in" by inflation, becoming amplified into the seeds that would later form the large-scale structure of the universe. At a time around 10⁻³⁶ seconds, the electroweak epoch began when the strong nuclear force separated from the other forces, with only the electromagnetic force and weak nuclear force remaining unified.
Inflation stopped locally at around 10⁻³³ to 10⁻³² seconds, with the observable universe's volume having increased by a factor of at least 10⁷⁸. Reheating followed as the inflaton field decayed, until the universe obtained the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles.
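To restate the quoted volume factor in linear terms (a back-of-the-envelope conversion, not a figure given in the text): a volume increase of at least 10⁷⁸ corresponds to a linear expansion of at least 10²⁶, or roughly 60 e-folds.

```latex
% Linear expansion implied by a volume increase of at least 10^{78}:
\left(\frac{a_{\mathrm{end}}}{a_{\mathrm{start}}}\right)^{3}\ge 10^{78}
\;\Rightarrow\;
\frac{a_{\mathrm{end}}}{a_{\mathrm{start}}}\ge 10^{26},
\qquad
N=\ln\!\bigl(10^{26}\bigr)\approx 60\ \text{e-folds}
```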
Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe.
### Cooling
The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form, with the electromagnetic force and weak nuclear force separating at about 10⁻¹² seconds.
After about 10⁻¹¹ seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators.
At about 10⁻⁶ seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was no longer high enough to create either new proton–antiproton or neutron–antineutron pairs. A mass annihilation immediately followed, leaving just one in 10⁸ of the original matter particles and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos).
A few minutes into the expansion, when the temperature was about a billion kelvin and the density of matter in the universe was comparable to the current density of Earth's atmosphere, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis (BBN). Most protons remained uncombined as hydrogen nuclei.
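The helium yield of Big Bang nucleosynthesis is fixed almost entirely by the neutron-to-proton ratio at that time. As a standard illustrative estimate (the ratio n/p ≈ 1/7 is a textbook value, not a number quoted in this article), assuming essentially all neutrons end up bound in helium-4 gives a helium mass fraction of about one quarter:

```latex
% Primordial helium-4 mass fraction from the neutron-to-proton ratio:
Y_{\mathrm{p}} \simeq \frac{2\,(n/p)}{1+(n/p)}
              = \frac{2\times\tfrac{1}{7}}{1+\tfrac{1}{7}}
              = \frac{2}{8} = 0.25
```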
As the universe cooled, the rest energy density of matter came to gravitationally dominate over that of the photon and neutrino radiation at a time of about 50,000 years. At a time of about 380,000 years, the universe cooled enough that electrons and nuclei combined into neutral atoms (mostly hydrogen) in an event called recombination. This process made the previously opaque universe transparent, and the photons that last scattered during this epoch comprise the cosmic microwave background.
### Structure formation
After the recombination epoch, the slightly denser regions of the uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter (CDM), warm dark matter, hot dark matter, and baryonic matter. The best measurements available, from the Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well-fit by a Lambda-CDM model in which dark matter is assumed to be cold. This CDM is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%.
### Cosmic acceleration
Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which appears to homogeneously permeate all of space. Observations suggest that 73% of the total energy density of the present day universe is in this form. When the universe was very young it was likely infused with dark energy, but with everything closer together, gravity predominated, braking the expansion. Eventually, after billions of years of expansion, the declining density of matter relative to the density of dark energy allowed the expansion of the universe to begin to accelerate.
Dark energy in its simplest formulation is modeled by a cosmological constant term in the Einstein field equations of general relativity, but its composition and mechanism are unknown. More generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both through observation and theory.
All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the lambda-CDM model of cosmology, which uses the independent frameworks of quantum mechanics and general relativity. There are no easily testable models that would describe the situation prior to approximately 10⁻¹⁵ seconds. Understanding this earliest of eras in the history of the universe is one of the greatest unsolved problems in physics.
## Concept history
### Etymology
English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a talk for a March 1949 BBC Radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." However, it did not catch on until the 1970s.
It is popularly reported that Hoyle, who favored an alternative "steady-state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models: "To create a picture in the mind of the listener, Hoyle had likened the explosive theory of the universe's origin to a 'big bang'." Helge Kragh writes that the evidence for the claim that it was meant as a pejorative is "unconvincing", and mentions a number of indications that it was not a pejorative.
A primordial singularity is sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase. The term itself has been argued to be a misnomer because it evokes an explosion. The argument is that whereas an explosion suggests expansion into a surrounding space, the Big Bang only describes the intrinsic expansion of the contents of the universe. Another issue pointed out by Santhosh Mathew is that bang implies sound, which is not an important feature of the model. However, an attempt to find a more suitable alternative was not successful. According to Timothy Ferris:
The term 'big bang' was coined with derisive intent by Fred Hoyle, and its endurance testifies to Sir Fred's creativity and wit. Indeed, the term survived an international competition in which three judges — the television science reporter Hugh Downs, the astronomer Carl Sagan, and myself — sifted through 13,099 entries from 41 countries and concluded that none was apt enough to replace it. No winner was declared, and like it or not, we are stuck with 'big bang'.
### Before the name
Early cosmological models developed from observations of the structure of the universe and from theoretical considerations. In 1912, Vesto Slipher measured the first Doppler shift of a "spiral nebula" (spiral nebula is the obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way.
Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from the Einstein field equations, showing that the universe might be expanding, in contrast to the static universe model advocated by Albert Einstein at that time (Friedmann 1922, translated into English).
In 1924, American astronomer Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Starting that same year, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929, Hubble discovered a correlation between distance and recessional velocity—now known as Hubble's law.
Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the recession of the nebulae was due to the expansion of the universe (Lemaître 1927, translated into English). He inferred the relation that Hubble would later observe, given the cosmological principle. In 1931, Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the universe was concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence.
In the 1920s and 1930s, almost every major cosmologist preferred an eternal steady-state universe, and several complained that the beginning of time implied by an expanding universe imported religious concepts into physics; this objection was later repeated by supporters of the steady-state theory. This perception was enhanced by the fact that the originator of the expanding universe concept, Lemaître, was a Roman Catholic priest.
Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however, disagreed.
During the 1930s, other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model, the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard C. Tolman) and Fritz Zwicky's tired light hypothesis.
After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady-state model, whereby new matter would be created as the universe seemed to expand. In this model the universe is roughly the same at any point in time. The other was Lemaître's expanding universe theory, advocated and developed by George Gamow, who used it to develop a theory for the abundance of chemical elements in the universe, and whose associates, Ralph Alpher and Robert Herman, predicted the cosmic background radiation.
### As a named model
Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during a BBC Radio broadcast in March 1949. For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over steady state. The discovery and confirmation of the CMB in 1964 secured the Big Bang as the best theory of the origin and evolution of the universe.
In 1968 and 1970, Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers where they showed that mathematical singularities were an inevitable initial condition of relativistic models of the Big Bang. Then, from the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving outstanding problems.
In 1981, Alan Guth made a breakthrough in theoretical work on resolving certain outstanding problems in the Big Bang models with the introduction of an epoch of rapid expansion in the early universe, which he called "inflation". Meanwhile, during these decades, two questions in observational cosmology that generated much discussion and disagreement were the precise values of the Hubble constant and the matter density of the universe (before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe).
Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances in telescope technology as well as the analysis of data from satellites such as the Cosmic Background Explorer (COBE), the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating.
## Observational evidence
The Big Bang models offer a comprehensive explanation for a broad range of observed phenomena, including the abundances of the light elements, the cosmic microwave background, large-scale structure, and Hubble's law.
The earliest and most direct observational evidence for the validity of the theory includes the expansion of the universe according to Hubble's law (as indicated by the redshifts of galaxies), the discovery and measurement of the cosmic microwave background, and the relative abundances of light elements produced by Big Bang nucleosynthesis (BBN). More recent evidence includes observations of galaxy formation and evolution, and the distribution of large-scale cosmic structures. These are sometimes called the "four pillars" of the Big Bang models.
Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics.
Of these features, dark matter is currently the subject of most active laboratory investigations. Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible. Inflation and baryogenesis remain more speculative features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are unsolved problems in physics.
### Hubble's law and the expansion of the universe
Observations of distant galaxies and quasars show that these objects are redshifted: the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions.
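As a concrete illustration of how a redshift is read off a spectrum, the following minimal sketch compares the observed position of a spectral line with its laboratory rest wavelength. The rest wavelength of the hydrogen-alpha line is a standard physical value; the "observed" wavelength is an illustrative number, not a measurement from this article.

```python
# Minimal sketch: estimating redshift from a shifted spectral line.
# The H-alpha rest wavelength (656.3 nm) is standard; the "observed"
# wavelength is an illustrative value, not real data.

REST_WAVELENGTH_NM = 656.3      # H-alpha in the laboratory
observed_wavelength_nm = 689.1  # hypothetical observed position of the same line

# Redshift z = (lambda_observed - lambda_rest) / lambda_rest
z = (observed_wavelength_nm - REST_WAVELENGTH_NM) / REST_WAVELENGTH_NM
print(f"Redshift z = {z:.3f}")  # ~0.050: the line is shifted to longer wavelengths
```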
If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed:
$$
v = H_0D
$$
where
- $v$ is the recessional velocity of the galaxy or other distant object,
- $D$ is the proper distance to the object, and
- $H_0$ is Hubble's constant, measured by WMAP to be approximately 70 km/s/Mpc.
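A short numerical sketch of Hubble's law follows. The round value of $H_0$ (70 km/s/Mpc) and the sample distance are illustrative inputs, not measurements taken from the text.

```python
# Minimal sketch of Hubble's law, v = H0 * D.
# H0 is taken as a round 70 km/s/Mpc; the distance is an illustrative value.

H0 = 70.0        # Hubble constant in km/s per megaparsec (assumed round value)
D_mpc = 100.0    # proper distance to a hypothetical galaxy, in megaparsecs

v_km_s = H0 * D_mpc
print(f"Predicted recessional velocity: {v_km_s:.0f} km/s")  # 7000 km/s
```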
Hubble's law implies that the universe is uniformly expanding everywhere. This cosmic expansion was predicted from general relativity by Friedmann in 1922 and Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang model as developed by Friedmann, Lemaître, Robertson, and Walker.
The theory requires the relation
$$
v = HD
$$
to hold at all times, where $D$ is the proper distance, $v$ is the recessional velocity, and $v$, $H$, and $D$ all vary as the universe expands (hence we write $H_0$ to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity $v$. For distances comparable to the size of the observable universe, the attribution of the cosmological redshift becomes more ambiguous, although its interpretation as a kinematic Doppler shift remains the most natural one.
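For small redshifts the Doppler reading reduces to $v \approx cz$, which the sketch below combines with Hubble's law to estimate distances. The sample redshifts and the round value of $H_0$ are illustrative assumptions, and the linear formula is only a low-redshift approximation.

```python
# Minimal sketch: the low-redshift Doppler approximation v ~ c*z and the
# corresponding Hubble-law distance D ~ v / H0. Valid only for z << 1.

C_KM_S = 299_792.458   # speed of light in km/s
H0 = 70.0              # assumed round value of the Hubble constant, km/s/Mpc

for z in (0.001, 0.01, 0.1):       # illustrative redshifts, all well below 1
    v = C_KM_S * z                 # approximate recession velocity
    d_mpc = v / H0                 # approximate proper distance
    print(f"z = {z:<6} v ~ {v:8.0f} km/s   D ~ {d_mpc:6.0f} Mpc")

# For z approaching or exceeding 1 this linear approximation fails and a
# full cosmological treatment of the redshift is required.
```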
An unexplained discrepancy between different determinations of the Hubble constant is known as the Hubble tension. Techniques based on observation of the CMB suggest a lower value of this constant compared to the quantity derived from measurements based on the cosmic distance ladder.
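To give a sense of how the tension is quantified, the sketch below compares two representative published values, roughly 67.4 ± 0.5 km/s/Mpc from CMB fits and roughly 73.0 ± 1.0 km/s/Mpc from the distance ladder; these figures are assumptions quoted from memory and should be checked against the current literature.

```python
# Minimal sketch: expressing the Hubble tension as a number of standard deviations.
# The (value, uncertainty) pairs are representative figures (CMB-based vs.
# distance-ladder-based), assumed for illustration, not authoritative numbers.

h0_cmb, sigma_cmb = 67.4, 0.5        # km/s/Mpc, CMB-based estimate (assumed)
h0_ladder, sigma_ladder = 73.0, 1.0  # km/s/Mpc, distance-ladder estimate (assumed)

difference = h0_ladder - h0_cmb
combined_sigma = (sigma_cmb**2 + sigma_ladder**2) ** 0.5
print(f"Difference: {difference:.1f} km/s/Mpc, "
      f"about {difference / combined_sigma:.1f} sigma")  # ~5 sigma for these inputs
```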
### Cosmic microwave background radiation
In 1964, Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band. Their discovery provided substantial confirmation of the big-bang predictions by Alpher, Herman and Gamow around 1950. Through the 1970s, the radiation was found to be approximately consistent with a blackbody spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded the 1978 Nobel Prize in Physics.
The surface of last scattering corresponding to emission of the CMB occurs shortly after recombination, the epoch when neutral hydrogen becomes stable. Prior to this, the universe comprised a hot dense photon-baryon plasma sea where photons were quickly scattered from free charged particles. Peaking at around 370,000 years after the Big Bang, the mean free path for a photon becomes long enough to reach the present day and the universe becomes transparent.
In 1989, NASA launched COBE, which made two major advances: in 1990, high-precision spectrum measurements showed that the CMB frequency spectrum is an almost perfect blackbody with no deviations at a level of 1 part in 10⁴, and measured a residual temperature of 2.726 K (more recent measurements have revised this figure down slightly to 2.7255 K); then in 1992, further COBE measurements discovered tiny fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in 10⁵. John C. Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results.
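As a small check of what a 2.725 K blackbody implies, the sketch below applies Wien's displacement law to locate the peak of the CMB spectrum; the Wien constants are standard physics values, not figures taken from this article, and the frequency and wavelength forms peak at different places because they describe different spectral densities.

```python
# Minimal sketch: locating the peak of a 2.725 K blackbody with Wien's
# displacement law. The constants are standard physics values. Note that the
# frequency-form and wavelength-form peaks are not the same point converted.

T_CMB = 2.725                 # present-day CMB temperature in kelvin
WIEN_FREQ_GHZ_PER_K = 58.79   # frequency-form Wien constant (GHz per K)
WIEN_WAVELENGTH_MM_K = 2.898  # wavelength-form Wien constant (mm * K)

peak_freq_ghz = WIEN_FREQ_GHZ_PER_K * T_CMB
peak_wavelength_mm = WIEN_WAVELENGTH_MM_K / T_CMB
print(f"Peak frequency  ~ {peak_freq_ghz:.0f} GHz")     # ~160 GHz
print(f"Peak wavelength ~ {peak_wavelength_mm:.2f} mm")  # ~1.06 mm
```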
During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001, several experiments, most notably BOOMERanG, found the shape of the universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies.
In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was launched in May 2009. Other ground and balloon-based cosmic microwave background experiments are ongoing.
### Abundance of primordial elements
Using Big Bang models, it is possible to calculate the expected concentration of the isotopes helium-4 (⁴He), helium-3 (³He), deuterium (²H), and lithium-7 (⁷Li) in the universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations.
The ratios predicted (by mass, not by abundance) are about 0.25 for ⁴He:H, about 10⁻³ for ²H:H, about 10⁻⁴ for ³He:H, and about 10⁻⁹ for ⁷Li:H.
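The rough 25% helium figure follows from a simple counting argument: if the neutron-to-proton ratio at nucleosynthesis is about 1/7 and essentially all neutrons end up in ⁴He, the helium mass fraction is Y ≈ 2(n/p)/(1 + n/p). The sketch below spells out that arithmetic; the 1/7 ratio is the standard textbook value, not something stated in this article.

```python
# Minimal sketch: the standard counting argument for the primordial helium
# mass fraction. Assumes a neutron-to-proton ratio of ~1/7 at nucleosynthesis
# (a textbook value) and that essentially all neutrons are locked into 4He.

n_over_p = 1.0 / 7.0

# Each 4He nucleus binds 2 neutrons with 2 protons, so the fraction of the
# total nucleon mass ending up in helium is 2(n/p) / (1 + n/p).
Y_helium = 2.0 * n_over_p / (1.0 + n_over_p)
print(f"Predicted 4He mass fraction: {Y_helium:.2f}")  # ~0.25
```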
The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for ⁴He, and off by a factor of two for ⁷Li (this anomaly is known as the cosmological lithium problem); in the latter two cases, there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by BBN is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed, there is no obvious reason outside of the Big Bang that, for example, the young universe before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products, should have more helium than deuterium or more deuterium than ³He, and in constant ratios, too.
### Galactic evolution and distribution
Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current Big Bang models. A combination of observations and theory suggests that the first quasars and galaxies formed within a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy clusters and superclusters.
Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model.
Observations of star formation, galaxy and quasar distributions, and larger structures agree well with Big Bang simulations of the formation of structure in the universe and are helping to complete details of the theory.
### Primordial gas clouds
In 2011, astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements that are formed in stars. Although the measurements were sensitive to carbon, oxygen, and silicon, these three elements were not detected in the two clouds. Since the clouds of gas have no detectable levels of heavy elements, they likely formed in the first few minutes after the Big Bang, during BBN.
### Other lines of evidence
The age of the universe as estimated from the Hubble expansion and the CMB is now in agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars.
It is also in agreement with age estimates based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background. The agreement of independent measurements of this age supports the Lambda-CDM (ΛCDM) model, since the model is used to relate some of the measurements to an age estimate, and all of the estimates agree. Still, some observations of objects from the relatively early universe (in particular quasar APM 08279+5255) raise concerns as to whether these objects had enough time to form so early in the ΛCDM model.
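As a rough consistency check, the inverse of the Hubble constant sets the characteristic expansion timescale. The sketch below converts 1/H₀ into years for an assumed H₀ near 67 km/s/Mpc, which comes out close to the roughly 13.8 billion-year ΛCDM age; the exact correspondence depends on the full expansion history, not on 1/H₀ alone.

```python
# Minimal sketch: the "Hubble time" 1/H0 as a rough age scale for the universe.
# H0 is an assumed representative value; the actual Lambda-CDM age (~13.8 Gyr)
# also depends on the matter and dark-energy content.

KM_PER_MPC = 3.0857e19        # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16    # seconds in one billion years

H0 = 67.4                                 # km/s/Mpc (assumed representative value)
H0_per_second = H0 / KM_PER_MPC           # convert to 1/s
hubble_time_gyr = 1.0 / H0_per_second / SECONDS_PER_GYR
print(f"Hubble time 1/H0 ~ {hubble_time_gyr:.1f} Gyr")  # ~14.5 Gyr
```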
The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift.
Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise measurements difficult.
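The prediction in question is the simple scaling T(z) = T₀(1 + z). The sketch below evaluates it at a few illustrative redshifts; the redshifts are examples, not values taken from the text.

```python
# Minimal sketch: the predicted CMB temperature at earlier epochs,
# T(z) = T0 * (1 + z). The sample redshifts are illustrative.

T0 = 2.725  # present-day CMB temperature in kelvin

for z in (1.0, 2.5, 6.0):
    print(f"z = {z}: predicted CMB temperature ~ {T0 * (1 + z):.2f} K")
```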
### Future observations
Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early universe dating from less than a second after the Big Bang.
## Problems and related issues in physics
As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang models. Some of these mysteries and problems have been resolved, while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflation theory, but the details of the inflationary universe are still left unresolved and many, including some founders of the theory, say it has been disproven. What follows is a list of the mysterious aspects of the Big Bang concept still under intense investigation by cosmologists and astrophysicists.
### Baryon asymmetry
It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. However, observations suggest that the universe, including its most distant parts, is made almost entirely of normal matter, rather than antimatter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number is not conserved, that C-symmetry and CP-symmetry are violated, and that the universe departs from thermodynamic equilibrium (Sakharov 1967, translated into English).
All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry.
### Dark energy
Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, cosmological models require that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy".
Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way, it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of large-scale structure (baryon acoustic oscillations) as a cosmic ruler.
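The critical density referred to here is ρ_c = 3H₀²/(8πG). The sketch below evaluates it for an assumed H₀ of 70 km/s/Mpc and then applies the roughly 30% matter fraction quoted above; both inputs are illustrative assumptions.

```python
# Minimal sketch: the critical density rho_c = 3 * H0^2 / (8 * pi * G),
# evaluated for an assumed H0 of 70 km/s/Mpc, and the implied matter density
# if matter supplies only ~30% of it.

import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
KM_PER_MPC = 3.0857e19   # kilometres per megaparsec

H0_si = 70.0 * 1000.0 / (KM_PER_MPC * 1000.0)   # convert km/s/Mpc to 1/s
rho_critical = 3.0 * H0_si**2 / (8.0 * math.pi * G)
print(f"Critical density ~ {rho_critical:.2e} kg/m^3")               # ~9e-27 kg/m^3
print(f"Matter density (30% of critical) ~ {0.3 * rho_critical:.2e} kg/m^3")
```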