| id (int32) | text (string) | source (string) | similarity (float32) |
|---|---|---|---|
| 2,200 | Because of its generality, abstract algebra is used in many fields of mathematics and science. For instance, algebraic topology uses algebraic objects to study topologies. The Poincaré conjecture, proved in 2003, asserts that the fundamental group of a manifold, which encodes information about connectedness, can be used to determine whether a manifold is a sphere or not. Algebraic number theory studies various number rings that generalize the set of integers. | Abstract Algebra | 0.835271 |
| 2,201 | Several areas of mathematics led to the study of groups. Lagrange's 1770 study of the solutions of the quintic equation led to the Galois group of a polynomial. Gauss's 1801 study of Fermat's little theorem led to the ring of integers modulo n, the multiplicative group of integers modulo n, and the more general concepts of cyclic groups and abelian groups. Klein's 1872 Erlangen program studied geometry and led to symmetry groups such as the Euclidean group and the group of projective transformations. | Abstract Algebra | 0.835271 |
| 2,202 | Noted algebraist Irving Kaplansky called this work "revolutionary"; results which seemed inextricably connected to properties of polynomial rings were shown to follow from a single axiom. Artin, inspired by Noether's work, came up with the descending chain condition. These definitions marked the birth of abstract ring theory. | Abstract Algebra | 0.835271 |
| 2,203 | In 1868 Gordan proved that the graded algebra of invariants of a binary form over the complex numbers was finitely generated, i.e., has a basis. Hilbert wrote a thesis on invariants in 1885 and in 1890 showed that any form of any degree or number of variables has a basis. He extended this further in 1890 to Hilbert's basis theorem. Once these theories had been developed, it was still several decades until an abstract ring concept emerged. | Abstract Algebra | 0.835271 |
| 2,204 | He further defined the discriminant of these forms, which is an invariant of a binary form. Between the 1860s and 1890s invariant theory developed and became a major field of algebra. Cayley, Sylvester, Gordan and others found the Jacobian and the Hessian for binary quartic forms and cubic forms. | Abstract Algebra | 0.835271 |
| 2,205 | Lasker proved a special case of the Lasker–Noether theorem, namely that every ideal in a polynomial ring is a finite intersection of primary ideals. Macaulay proved the uniqueness of this decomposition. Overall, this work led to the development of algebraic geometry. In 1801 Gauss introduced binary quadratic forms over the integers and defined their equivalence. | Abstract Algebra | 0.835271 |
| 2,206 | In particular, Noether studied what conditions were required for a polynomial to be an element of the ideal generated by two algebraic curves in the polynomial ring $\mathbb{R}$, although Noether did not use this modern language. In 1882 Dedekind and Weber, in analogy with Dedekind's earlier work on algebraic number theory, created a theory of algebraic function fields which allowed the first rigorous definition of a Riemann surface and a rigorous proof of the Riemann–Roch theorem. Kronecker in the 1880s, Hilbert in 1890, Lasker in 1905, and Macaulay in 1913 further investigated the ideals of polynomial rings implicit in E. Noether's work. | Abstract Algebra | 0.835271 |
| 2,207 | Riemann's methods relied on an assumption he called Dirichlet's principle, which in 1870 was questioned by Weierstrass. Much later, in 1900, Hilbert justified Riemann's approach by developing the direct method in the calculus of variations. In the 1860s and 1870s, Clebsch, Gordan, Brill, and especially M. Noether studied algebraic functions and curves. | Abstract Algebra | 0.835271 |
| 2,208 | In 1846 and 1847 Kummer introduced ideal numbers and proved unique factorization into ideal primes for cyclotomic fields. Dedekind extended this in 1871 to show that every nonzero ideal in the domain of integers of an algebraic number field is a unique product of prime ideals, a precursor of the theory of Dedekind domains. Overall, Dedekind's work created the subject of algebraic number theory. In the 1850s, Riemann introduced the fundamental concept of a Riemann surface. | Abstract Algebra | 0.835271 |
| 2,209 | Jacobi and Eisenstein at around the same time proved a cubic reciprocity law for the Eisenstein integers. The study of Fermat's last theorem led to the algebraic integers. In 1847, Gabriel Lamé thought he had proven FLT, but his proof was faulty as he assumed all the cyclotomic fields were UFDs, yet as Kummer pointed out, $\mathbb{Q}(\zeta_{23})$ was not a UFD. | Abstract Algebra | 0.835271 |
| 2,210 | Cartan was the first to define concepts such as direct sum and simple algebra, and these concepts proved quite influential. In 1907 Wedderburn extended Cartan's results to an arbitrary field, in what are now called the Wedderburn principal theorem and Artin–Wedderburn theorem. For commutative rings, several areas together led to commutative ring theory. In two papers in 1828 and 1832, Gauss formulated the Gaussian integers and showed that they form a unique factorization domain (UFD) and proved the biquadratic reciprocity law. | Abstract Algebra | 0.835271 |
| 2,211 | Presently, the term "abstract algebra" is typically used for naming courses in mathematical education, and is rarely used in advanced mathematics. Algebraic structures, with their associated homomorphisms, form mathematical categories. Category theory is a formalism that allows a unified way for expressing properties and constructions that are similar for various structures. Universal algebra is a related subject that studies types of algebraic structures as single objects. For example, the structure of groups is a single object in universal algebra, which is called the variety of groups. | Abstract Algebra | 0.835271 |
| 2,212 | In mathematics, more specifically algebra, abstract algebra or modern algebra is the study of algebraic structures. Algebraic structures include groups, rings, fields, modules, vector spaces, lattices, and algebras over a field. The term abstract algebra was coined in the early 20th century to distinguish it from older parts of algebra, and more specifically from elementary algebra, the use of variables to represent numbers in computation and reasoning. | Abstract Algebra | 0.835271 |
| 2,213 | However, European mathematicians, for the most part, resisted these concepts until the middle of the 19th century. George Peacock's 1830 Treatise on Algebra was the first attempt to place algebra on a strictly symbolic basis. He distinguished a new symbolical algebra, distinct from the old arithmetical algebra. Whereas in arithmetical algebra $a - b$ is restricted to $a \geq b$, in symbolical algebra all rules of operations hold with no restrictions. | Abstract Algebra | 0.835271 |
| 2,214 | Muhammad ibn Mūsā al-Khwārizmī originated the word "algebra" in 830 AD, but his work was entirely rhetorical algebra. Fully symbolic algebra did not appear until François Viète's 1591 New Algebra, and even this had some spelled-out words that were given symbols in Descartes's 1637 La Géométrie. The formal study of solving symbolic equations led Leonhard Euler to accept what were then considered "nonsense" roots such as negative numbers and imaginary numbers, in the late 18th century. | Abstract Algebra | 0.835271 |
| 2,215 | The study of polynomial equations or algebraic equations has a long history. By c. 1700 BC, the Babylonians were able to solve quadratic equations specified as word problems. This word problem stage is classified as rhetorical algebra and was the dominant approach up to the 16th century. | Abstract Algebra | 0.835271 |
| 2,216 | Questions of structure and classification of various mathematical objects came to the forefront. These processes were occurring throughout all of mathematics, but became especially pronounced in algebra. Formal definitions through primitive operations and axioms were proposed for many basic algebraic structures, such as groups, rings, and fields. Hence such things as group theory and ring theory took their places in pure mathematics. The algebraic investigations of general fields by Ernst Steinitz and of commutative and then general rings by David Hilbert, Emil Artin and Emmy Noether, building on the work of Ernst Kummer, Leopold Kronecker and Richard Dedekind, who had considered ideals in commutative rings, and of Georg Frobenius and Issai Schur, concerning representation theory of groups, came to define abstract algebra. These developments of the last quarter of the 19th century and the first quarter of the 20th century were systematically exposed in Bartel van der Waerden's Moderne Algebra, the two-volume monograph published in 1930–1931 that forever changed for the mathematical world the meaning of the word algebra from the theory of equations to the theory of algebraic structures. | Abstract Algebra | 0.835271 |
| 2,217 | With additional structure, more theorems could be proved, but the generality is reduced. The "hierarchy" of algebraic objects (in terms of generality) creates a hierarchy of the corresponding theories: for instance, the theorems of group theory may be used when studying rings (algebraic objects that have two binary operations with certain axioms) since a ring is a group over one of its operations. In general there is a balance between the amount of generality and the richness of the theory: more general structures usually have fewer nontrivial theorems and fewer applications. Examples of algebraic structures with a single binary operation are: magma, quasigroup, monoid, semigroup, group. Examples involving several operations include: | Abstract Algebra | 0.835271 |
| 2,218 | By abstracting away various amounts of detail, mathematicians have defined various algebraic structures that are used in many areas of mathematics. For instance, almost all systems studied are sets, to which the theorems of set theory apply. Those sets that have a certain binary operation defined on them form magmas, to which the concepts concerning magmas, as well as those concerning sets, apply. We can add additional constraints on the algebraic structure, such as associativity (to form semigroups); identity, and inverses (to form groups); and other more complex structures. | Abstract Algebra | 0.835271 |
| 2,219 | Solving of systems of linear equations, which led to linear algebra | Abstract Algebra | 0.835271 |
| 2,220 | He also completed the Jordan–Hölder theorem. Dedekind and Miller independently characterized Hamiltonian groups and introduced the notion of the commutator of two elements. Burnside, Frobenius, and Molien created the representation theory of finite groups at the end of the nineteenth century. J. A. de Séguier's 1905 monograph Elements of the Theory of Abstract Groups presented many of these results in an abstract, general form, relegating "concrete" groups to an appendix, although it was limited to finite groups. The first monograph on both finite and infinite abstract groups was O. K. Schmidt's 1916 Abstract Theory of Groups. | Abstract Algebra | 0.835271 |
| 2,221 | Walther von Dyck in 1882 was the first to require inverse elements as part of the definition of a group. Once this abstract group concept emerged, results were reformulated in this abstract setting. For example, Sylow's theorem was reproven by Frobenius in 1887 directly from the laws of a finite group, although Frobenius remarked that the theorem followed from Cauchy's theorem on permutation groups and the fact that every finite group is a subgroup of a permutation group. Otto Hölder was particularly prolific in this area, defining quotient groups in 1889, group automorphisms in 1893, as well as simple groups. | Abstract Algebra | 0.835271 |
| 2,222 | In 1874 Lie introduced the theory of Lie groups, aiming for "the Galois theory of differential equations". In 1876 Poincaré and Klein introduced the group of Möbius transformations, and its subgroups such as the modular group and Fuchsian group, based on work on automorphic functions in analysis. The abstract concept of group emerged slowly over the middle of the nineteenth century. Galois in 1832 was the first to use the term "group", signifying a collection of permutations closed under composition. | Abstract Algebra | 0.835271 |
| 2,223 | Frobenius in 1878 and Charles Sanders Peirce in 1881 independently proved that the only finite-dimensional division algebras over $\mathbb{R}$ were the real numbers, the complex numbers, and the quaternions. In the 1880s Killing and Cartan showed that semisimple Lie algebras could be decomposed into simple ones, and classified all simple Lie algebras. Inspired by this, in the 1890s Cartan, Frobenius, and Molien proved (independently) that a finite-dimensional associative algebra over $\mathbb{R}$ or $\mathbb{C}$ uniquely decomposes into the direct sum of a nilpotent algebra and a semisimple algebra that is the product of some number of simple algebras, square matrices over division algebras. | Abstract Algebra | 0.835271 |
| 2,224 | In an 1870 monograph, Benjamin Peirce classified the more than 150 hypercomplex number systems of dimension below 6, and gave an explicit definition of an associative algebra. He defined nilpotent and idempotent elements and proved that any algebra contains one or the other. He also defined the Peirce decomposition. | Abstract Algebra | 0.835271 |
| 2,225 | William Kingdon Clifford introduced split-biquaternions in 1873. In addition Cayley introduced group algebras over the real and complex numbers in 1854 and square matrices in two papers of 1855 and 1858. Once there were sufficient examples, it remained to classify them. | Abstract Algebra | 0.835271 |
| 2,226 | Noncommutative ring theory began with extensions of the complex numbers to hypercomplex numbers, specifically William Rowan Hamilton's quaternions in 1843. Many other number systems followed shortly. In 1844, Hamilton presented biquaternions, Cayley introduced octonions, and Grassmann introduced exterior algebras. James Cockle presented tessarines in 1848 and coquaternions in 1849. | Abstract Algebra | 0.835271 |
| 2,227 | In the 3D ideal chain model in chemistry, two angles are necessary to describe the orientation of each monomer. It is often useful to specify quadratic degrees of freedom. These are degrees of freedom that contribute in a quadratic function to the energy of the system. Depending on what one is counting, there are several different ways that degrees of freedom can be defined, each with a different value. | Degrees of freedom (physics) | 0.835227 |
| 2,228 | In physics and chemistry, a degree of freedom is an independent physical parameter in the formal description of the state of a physical system. The set of all states of a system is known as the system's phase space, and the degrees of freedom of the system are the dimensions of the phase space. The location of a particle in three-dimensional space requires three position coordinates. Similarly, the direction and speed at which a particle moves can be described in terms of three velocity components, each in reference to the three dimensions of space. | Degrees of freedom (physics) | 0.835227 |
| 2,229 | A degree of freedom $X_i$ is quadratic if the energy terms associated with this degree of freedom can be written as $E = \alpha_i X_i^2 + \beta_i X_i Y$, where $Y$ is a linear combination of other quadratic degrees of freedom. For example, if $X_1$ and $X_2$ are two degrees of freedom and $E$ is the associated energy: if $E = X_1^4 + X_1^3 X_2 + X_2^4$, then the two degrees of freedom are not independent and non-quadratic; if $E = X_1^4 + X_2^4$, then the two degrees of freedom are independent and non-quadratic; if $E = X_1^2 + X_1 X_2 + 2X_2^2$, then the two degrees of freedom are not independent but are quadratic; if $E = X_1^2 + 2X_2^2$, then the two degrees of freedom are independent and quadratic. For example, in Newtonian mechanics, the dynamics of a system of quadratic degrees of freedom are controlled by a set of homogeneous linear differential equations with constant coefficients. | Degrees of freedom (physics) | 0.835227 |
| 2,230 | Application of the formula for the distance between two coordinates, $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}$, results in one equation with one unknown, in which we can solve for $z_2$. One of $x_1$, $x_2$, $y_1$, $y_2$, $z_1$, or $z_2$ can be unknown. Contrary to the classical equipartition theorem, at room temperature the vibrational motion of molecules typically makes negligible contributions to the heat capacity. This is because these degrees of freedom are frozen: the spacing between the energy eigenvalues exceeds the energy corresponding to ambient temperatures ($k_B T$). | Degrees of freedom (physics) | 0.835227 |
| 2,231 | Thus RB can be applied in practice to characterize errors in arbitrarily large quantum processors. Additionally, in experimental quantum computing, procedures for state preparation and measurement (SPAM) are also error-prone, and thus quantum process tomography is unable to distinguish errors associated with gate operations from errors associated with SPAM. In contrast, RB protocols are robust to state-preparation and measurement errors. Randomized benchmarking protocols estimate key features of the errors that affect a set of quantum operations by examining how the observed fidelity of the final quantum state decreases as the length of the random sequence increases. If the set of operations satisfies certain mathematical properties, such as comprising a sequence of twirls with unitary two-designs, then the measured decay can be shown to be an invariant exponential with a rate fixed uniquely by features of the error model. | Randomized benchmarking | 0.835225 |
| 2,232 | The now-standard protocol for randomized benchmarking (RB) relies on uniformly random Clifford operations, as proposed in 2006 by Dankert et al. as an application of the theory of unitary t-designs. In current usage randomized benchmarking sometimes refers to the broader family of generalizations of the 2005 protocol involving different random gate sets that can identify various features of the strength and type of errors affecting the elementary quantum gate operations. Randomized benchmarking protocols are an important means of verifying and validating quantum operations and are also routinely used for the optimization of quantum control procedures. | Randomized benchmarking | 0.835225 |
| 2,233 | Randomized benchmarking is an experimental method for measuring the average error rates of quantum computing hardware platforms. The protocol estimates the average error rates by implementing long sequences of randomly sampled quantum gate operations. Randomized benchmarking is the industry-standard protocol used by quantum hardware developers such as IBM and Google to test the performance of the quantum operations. The original theory of randomized benchmarking, proposed by Joseph Emerson and collaborators, considered the implementation of sequences of Haar-random operations, but this had several practical limitations. | Randomized benchmarking | 0.835225 |
| 2,234 | Let R be a commutative ring (so R could be a field). An associative R-algebra (or more simply, an R-algebra) is a ring that is also an R-module in such a way that the two additions (the ring addition and the module addition) are the same operation, and scalar multiplication satisfies $r \cdot (xy) = (r \cdot x)y = x(r \cdot y)$ for all r in R and x, y in the algebra. (This definition implies that the algebra is unital, since rings are supposed to have a multiplicative identity.) Equivalently, an associative algebra A is a ring together with a ring homomorphism from R to the center of A. If f is such a homomorphism, the scalar multiplication is $(r, x) \mapsto f(r)x$ (here the multiplication is the ring multiplication); if the scalar multiplication is given, the ring homomorphism is given by $r \mapsto r \cdot 1_A$ (see also § From ring homomorphisms below). Every ring is an associative $\mathbb{Z}$-algebra, where $\mathbb{Z}$ denotes the ring of the integers. A commutative algebra is an associative algebra that is also a commutative ring. | Associative algebra | 0.835212 |
| 2,235 | Mechanomics is the study of how forces are transmitted and the influence they have on biological function. Mechanomics is also an emerging field between biology and biomechanics. Physicomics is the complex of forces other than mechanical ones involved in cellular physiology and the cell's response to its environment. Besides mechanical forces, one should think of other physical parameters such as pressure, temperature, and electromagnetic fields such as light. | Mechanome | 0.835208 |
| 2,236 | The concept of functional groups is central in organic chemistry, both as a means to classify structures and for predicting properties. A functional group is a molecular module, and the reactivity of that functional group is assumed, within limits, to be the same in a variety of molecules. Functional groups can have a decisive influence on the chemical and physical properties of organic compounds. Molecules are classified based on their functional groups. | Molecular structure elucidation | 0.835186 |
| 2,237 | Organic compounds containing bonds of carbon to nitrogen, oxygen and the halogens are not normally grouped separately. Others are sometimes put into major groups within organic chemistry and discussed under titles such as organosulfur chemistry, organometallic chemistry, organophosphorus chemistry and organosilicon chemistry. | Molecular structure elucidation | 0.835186 |
| 2,238 | Synthetic organic chemistry is an applied science as it borders engineering, the "design, analysis, and/or construction of works for practical purposes". Organic synthesis of a novel compound is a problem-solving task, where a synthesis is designed for a target molecule by selecting optimal reactions from optimal starting materials. Complex compounds can have tens of reaction steps that sequentially build the desired molecule. The synthesis proceeds by utilizing the reactivity of the functional groups in the molecule. | Molecular structure elucidation | 0.835186 |
| 2,239 | In organic chemistry, secondary amino acids are amino acids which do not contain the amino group −NH2 but rather a secondary amine (>NH). Secondary amino acids can be classified into cyclic acids, such as proline, and acyclic N-substituted amino acids. In nature, proline, hydroxyproline, pipecolic acid and sarcosine are well-known secondary amino acids. Proline is the only proteinogenic secondary amino acid. Other secondary amino acids are non-proteinogenic amino acids. | Secondary amino acid | 0.835185 |
| 2,240 | Phalloidin functionalized with a fluorophore is used in microscopy as a stain due to its high affinity for actin. Anantin is a RiPP used in cell biology as an atrial natriuretic peptide receptor inhibitor. In 2012–2013, a derivatized RiPP in clinical trials was LFF571. Phase II clinical trials of LFF571, a derivative of the thiopeptide GE2270-A, for the treatment of Clostridium difficile infections, with comparable safety and efficacy to vancomycin, were terminated early as the results were unfavorable. | Ribosomally synthesized and post-translationally modified peptides | 0.835184 |
| 2,241 | Probability Surveys is an open-access electronic journal that is jointly sponsored by the Bernoulli Society and the Institute of Mathematical Statistics. It publishes review articles on topics of interest in probability theory. | Probability Surveys | 0.835175 |
| 2,242 | Cell mechanics is a sub-field of biophysics that focuses on the mechanical properties and behavior of living cells and how they relate to cell function. It encompasses aspects of cell biophysics, biomechanics, soft matter physics and rheology, mechanobiology and cell biology. | Cell mechanics | 0.835164 |
| 2,243 | Plant cell mechanics combines principles of biomechanics and mechanobiology to investigate the growth and shaping of plant cells. Plant cells, similar to animal cells, respond to externally applied forces, such as by reorganization of their cytoskeletal network. The presence of a considerably rigid extracellular matrix, the cell wall, however, bestows the plant cells with a set of particular properties. Mainly, the growth of plant cells is controlled by the mechanics and chemical composition of the cell wall. A major part of research in plant cell mechanics is directed toward the measurement and modeling of cell wall mechanics to understand how modification of its composition and mechanical properties affects cell function, growth and morphogenesis. | Cell mechanics | 0.835164 |
| 2,244 | How to Pronounce a Term: Much of the terminology of genetics and biology is unique in its pronunciation. Below each term name is a "Pronunciation" button. Click the button to hear the term spoken. | Talking Glossary of Genetic Terms | 0.835149 |
| 2,245 | The process of developing the Talking Glossary began by examining some of the most popular American middle school and high school science textbooks. Genetics-related terms from these textbooks provided the foundation for the Talking Glossary. These terms are associated with biological concepts addressed by the National Science Education Standards and common in high school and college biology courses. | Talking Glossary of Genetic Terms | 0.835149 |
| 2,246 | In this light, the Glossary was designed to enable people without a formal scientific background to better understand the terms and concepts behind genetic research. Special attention has been paid to users who are learning or teaching genetics in the classroom. However, the Glossary is designed to be valuable for a much wider audience including patients, doctors, nurses, parents, and professionals dealing with genetic concepts and terminology, such as judges, lawyers, law enforcement officials, and others. | Talking Glossary of Genetic Terms | 0.835149 |
| 2,247 | Developing the Talking Glossary: The Talking Glossary of Genetics is a science learning tool developed by the National Human Genome Research Institute (NHGRI) at the National Institutes of Health (NIH). NHGRI oversaw the NIH's role in the Human Genome Project, the international research effort aimed at mapping the genes in the human body and developing tools for gene discovery. Many of the Talking Glossary terms are commonly used today in news reports, by researchers and medical professionals, in classrooms and, increasingly, as part of daily conversation. | Talking Glossary of Genetic Terms | 0.835149 |
| 2,248 | A new multimedia, and significantly updated, version of the English Talking Glossary of Genetics was released by the National Human Genome Research Institute in October 2009. An identical update of the Spanish-language version was released in October 2011. In September 2011, an iPhone App of the English Talking Glossary was released by NHGRI and made available as a free download in the Apple App store. The App version contains all 3-D animations, high quality illustrations, the "Test Your Gene IQ" quiz, and similar user functions such as "Suggest a Term" and "Mail This Term to a Friend." | Talking Glossary of Genetic Terms | 0.835149 |
| 2,249 | Students, teachers and parents will find the glossary an easy-to-use, always available learning source on genetics. The first version was published in English online in September 1998 by the NHGRI Office of Science Education under the title of "Talking Glossary of Genetics". The Spanish-language version was released 18 months later. | Talking Glossary of Genetic Terms | 0.835149 |
| 2,250 | Differential equations containing partial derivatives are called partial differential equations or PDEs. These equations are generally more difficult to solve than ordinary differential equations, which contain derivatives with respect to only one variable. | Multivariable Calculus | 0.83512 |
| 2,251 | The partial derivative generalizes the notion of the derivative to higher dimensions. A partial derivative of a multivariable function is a derivative with respect to one variable with all other variables held constant. Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator ($\nabla$) is used to define the concepts of gradient, divergence, and curl in terms of partial derivatives. | Multivariable Calculus | 0.83512 |
| 2,252 | The multiple integral expands the concept of the integral to functions of any number of variables. Double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral or iterated integral as long as the integrand is continuous throughout the domain of integration. The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves. | Multivariable Calculus | 0.83512 |
| 2,253 | Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus with functions of several variables: the differentiation and integration of functions involving multiple variables (multivariate), rather than just one. Multivariable calculus may be thought of as an elementary part of advanced calculus. For advanced calculus, see calculus on Euclidean space. The special case of calculus in three-dimensional space is often called vector calculus. | Multivariable Calculus | 0.83512 |
| 2,254 | In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral. The link between the derivative and the integral in multivariable calculus is embodied by the integral theorems of vector calculus: the gradient theorem, Stokes' theorem, the divergence theorem, and Green's theorem. In a more advanced study of multivariable calculus, it is seen that these four theorems are specific incarnations of a more general theorem, the generalized Stokes' theorem, which applies to the integration of differential forms over manifolds. | Multivariable Calculus | 0.83512 |
| 2,255 | According to the College Board web site, the Physics B course provided "a foundation in physics for students in the life sciences, a pre medical career path, and some applied sciences, as well as other fields not directly related to science." | AP Physics B | 0.835116 |
| 2,256 | Starting in the 2014–2015 school year, AP Physics B was no longer offered, and AP Physics 1 and AP Physics 2 took its place. Like AP Physics B, both are algebra-based, and both are designed to be taught as year-long courses. | AP Physics B | 0.835116 |
| 2,257 | The grade distributions for the Physics B scores from 2010 until its discontinuation in 2014 are as follows: | AP Physics B | 0.835116 |
| 2,258 | Advanced Placement (AP) Physics B was a physics course administered by the College Board as part of its Advanced Placement program. It was equivalent to a year-long introductory university course covering Newtonian mechanics, electromagnetism, fluid mechanics, thermal physics, waves, optics, and modern physics. The course was algebra-based and heavily computational; in 2015, it was replaced by the more concept-focused AP Physics 1 and AP Physics 2. | AP Physics B | 0.835116 |
2,259
|
The Viral Bioinformatics Resource Center (VBRC) is an online resource providing access to a database of curated viral genomes and a variety of tools for bioinformatic genome analysis. This resource was one of eight BRCs (Bioinformatics Resource Centers) funded by NIAID with the goal of promoting research against emerging and re-emerging pathogens, particularly those seen as potential bioterrorism threats. The VBRC is now supported by Dr. Chris Upton at the University of Victoria. The curated VBRC database contains all publicly available genomic sequences for poxviruses and African Swine Fever Viruses (ASFV).
|
Viral Bioinformatics Resource Center
| 0.835116
|
2,260
|
The Peano existence theorem however proves that even for f merely continuous, solutions are guaranteed to exist locally in time; the problem is that there is no guarantee of uniqueness. The result may be found in Coddington & Levinson (1955, Theorem 1.3) or Robinson (2001, Theorem 2.6). An even more general result is the Carathéodory existence theorem, which proves existence for some discontinuous functions f.
|
Initial value problem
| 0.835105
|
2,261
|
Such a construction is sometimes called "Picard's method" or "the method of successive approximations". This version is essentially a special case of the Banach fixed point theorem.
|
Initial value problem
| 0.835105
|
2,262
|
The Picard–Lindelöf theorem guarantees a unique solution on some interval containing t0 if f is continuous on a region containing t0 and y0 and satisfies the Lipschitz condition on the variable y. The proof of this theorem proceeds by reformulating the problem as an equivalent integral equation. The integral can be considered an operator which maps one function into another, such that the solution is a fixed point of the operator. The Banach fixed point theorem is then invoked to show that there exists a unique fixed point, which is the solution of the initial value problem. An older proof of the Picard–Lindelöf theorem constructs a sequence of functions which converge to the solution of the integral equation, and thus, the solution of the initial value problem.
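As a sketch of the successive-approximations construction, take the concrete problem y′ = y, y(0) = 1 (chosen here only for illustration). Each Picard iterate y_{k+1}(t) = 1 + ∫₀ᵗ y_k(s) ds can be computed exactly, because every iterate is a polynomial:

```python
def picard(n_iter, t):
    """Picard iterates for the IVP y' = y, y(0) = 1.

    y_{k+1}(t) = 1 + integral_0^t y_k(s) ds; every iterate is a
    polynomial, stored here as a list of coefficients.
    """
    coeffs = [1.0]                                    # y_0(t) = 1
    for _ in range(n_iter):
        integral = [c / (i + 1) for i, c in enumerate(coeffs)]
        coeffs = [1.0] + integral                     # 1 + integral of y_k
    return sum(c * t ** i for i, c in enumerate(coeffs))
```

The iterates are exactly the Taylor partial sums of eᵗ, so `picard(12, 1.0)` agrees with e to better than 10⁻⁶, illustrating the convergence guaranteed by the fixed-point argument.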
|
Initial value problem
| 0.835105
|
2,263
|
In mathematics, particularly in algebra, an indeterminate equation is an equation for which there is more than one solution. For example, the equation ax + by = c is a simple indeterminate equation, as is x^2 = 1. Indeterminate equations cannot be solved uniquely.
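A brute-force check makes this concrete; the search window below is arbitrary, chosen only to keep the enumeration finite:

```python
# Integer solutions of 2x + 3y = 7 in a small window: there are several,
# which is what makes the equation indeterminate.
solutions = [(x, y)
             for x in range(-10, 11)
             for y in range(-10, 11)
             if 2 * x + 3 * y == 7]
```

Within this window the search finds seven solutions, among them (2, 1) and (5, -1).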
|
Indeterminate equations
| 0.835091
|
2,264
|
Shortly after, the coil was damaged in a control test in February 2007 and replaced in May 2007. The replacement coil was inferior, a copper wound electromagnet, that was also water cooled. Scientific results, including the observation of an inward turbulent pinch, were reported in Nature Physics.
|
Levitated Dipole Experiment
| 0.835067
|
2,265
|
In the case of deuterium fusion (the cheapest and most straightforward fusion fuel) the geometry of the LDX has a unique advantage over other concepts. Deuterium fusion makes two products that occur with near-equal probability: D + D → T + ¹H and D + D → ³He + n. In this machine, the secondary tritium could be partially removed, a unique property of the dipole. Another fuel choice is tritium and deuterium. This reaction can be done at lower temperatures and pressures.
|
Levitated Dipole Experiment
| 0.835067
|
2,266
|
Keeping in mind that LCS is a paradigm for genetic-based machine learning rather than a specific method, the following outlines key elements of a generic, modern (i.e. post-XCS) LCS algorithm. For simplicity let us focus on Michigan-style architecture with supervised learning. See the illustrations on the right laying out the sequential steps involved in this type of generic LCS.
|
Learning classifier system
| 0.835062
|
2,267
|
John Henry Holland was best known for his work popularizing genetic algorithms (GA), through his ground-breaking book "Adaptation in Natural and Artificial Systems" in 1975 and his formalization of Holland's schema theorem. In 1976, Holland conceptualized an extension of the GA concept to what he called a "cognitive system", and provided the first detailed description of what would become known as the first learning classifier system in the paper "Cognitive Systems based on Adaptive Algorithms". This first system, named Cognitive System One (CS-1) was conceived as a modeling tool, designed to model a real system (i.e. environment) with unknown underlying dynamics using a population of human readable rules. The goal was for a set of rules to perform online machine learning to adapt to the environment based on infrequent payoff/reward (i.e. reinforcement learning) and apply these rules to generate a behavior that matched the real system.
|
Learning classifier system
| 0.835062
|
2,268
|
Individual LCS rules are typically human-readable IF:THEN expressions. Rules that constitute the LCS prediction model can be ranked by different rule parameters and manually inspected. Global strategies to guide knowledge discovery using statistical and graphical methods have also been proposed. With respect to other advanced machine learning approaches, such as artificial neural networks, random forests, or genetic programming, learning classifier systems are particularly well suited to problems that require interpretable solutions.
|
Learning classifier system
| 0.835062
|
2,269
|
Browne and Iqbal explored the concept of reusing building blocks in the form of code fragments and were the first to solve the 135-bit multiplexer benchmark problem by first learning useful building blocks from simpler multiplexer problems. ExSTraCS 2.0 was later introduced to improve Michigan-style LCS scalability, successfully solving the 135-bit multiplexer benchmark problem for the first time directly. The n-bit multiplexer problem is highly epistatic and heterogeneous, making it a very challenging machine learning task.
|
Learning classifier system
| 0.835062
|
2,270
|
Urbanowicz extended the UCS framework and introduced ExSTraCS, explicitly designed for supervised learning in noisy problem domains (e.g. epidemiology and bioinformatics). ExSTraCS integrated (1) expert knowledge to drive covering and genetic algorithm towards important features in the data, (2) a form of long-term memory referred to as attribute tracking, allowing for more efficient learning and the characterization of heterogeneous data patterns, and (3) a flexible rule representation similar to Bacardit's mixed discrete-continuous attribute list representation. Both Bacardit and Urbanowicz explored statistical and visualization strategies to interpret LCS rules and perform knowledge discovery for data mining.
|
Learning classifier system
| 0.835062
|
2,271
|
Bacardit introduced GAssist and BioHEL, Pittsburgh-style LCSs designed for data mining and scalability to large datasets in bioinformatics applications. In 2008, Drugowitsch published the book titled "Design and Analysis of Learning Classifier Systems" including some theoretical examination of LCS algorithms. Butz introduced the first rule online learning visualization within a GUI for XCSF (see the image at the top of this page).
|
Learning classifier system
| 0.835062
|
2,272
|
XCS inspired the development of a whole new generation of LCS algorithms and applications. In 1995, Congdon was the first to apply LCS to real-world epidemiological investigations of disease, followed closely by Holmes who developed BOOLE++, EpiCS, and later EpiXCS for epidemiological classification. These early works inspired later interest in applying LCS algorithms to complex and large-scale data mining tasks epitomized by bioinformatics applications. In 1998, Stolzmann introduced anticipatory classifier systems (ACS), which included rules in the form of 'condition-action-effect' rather than the classic 'condition-action' representation.
|
Learning classifier system
| 0.835062
|
2,273
|
In Michigan-style LCS, each rule has its own fitness, as well as a number of other rule parameters associated with it that can describe the number of copies of that rule that exist (i.e. the numerosity), the age of the rule, its accuracy, or the accuracy of its reward predictions, and other descriptive or experiential statistics. A rule along with its parameters is often referred to as a classifier. In Michigan-style systems, classifiers are contained within a population that has a user-defined maximum number of classifiers.
|
Learning classifier system
| 0.835062
|
2,274
|
A rule is a context-dependent relationship between state values and some prediction. Rules typically take the form of an {IF:THEN} expression (e.g. {IF 'condition' THEN 'action'}, or as a more specific example, {IF 'red' AND 'octagon' THEN 'stop-sign'}). A critical concept in LCS and rule-based machine learning alike is that an individual rule is not in itself a model, since the rule is only applicable when its condition is satisfied.
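A minimal sketch of such a rule, using the ternary 0/1/# encoding common in the LCS literature (the wildcard '#' matches either bit); the specific rule below is invented for illustration:

```python
def matches(condition, instance):
    """A rule applies only when its condition matches the input instance."""
    return all(c == '#' or c == v for c, v in zip(condition, instance))

# IF the first bit is '1' and the third bit is '0' THEN predict action 1
rule = {"condition": "1#0", "action": 1}
```

Here `matches("1#0", "110")` is True while `matches("1#0", "011")` is False, so the rule contributes a prediction only for the first instance.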
|
Learning classifier system
| 0.835062
|
2,275
|
This early, ambitious implementation was later regarded as overly complex, yielding inconsistent results. Beginning in 1980, Kenneth de Jong and his student Stephen Smith took a different approach to rule-based machine learning with (LS-1), where learning was viewed as an offline optimization process rather than an online adaptation process. This new approach was more similar to a standard genetic algorithm but evolved independent sets of rules. Since that time, LCS methods inspired by the online learning framework introduced by Holland at the University of Michigan have been referred to as Michigan-style LCS, and those inspired by Smith and De Jong at the University of Pittsburgh have been referred to as Pittsburgh-style LCS. In 1986, Holland developed what would be considered the standard Michigan-style LCS for the next decade. Other important concepts that emerged in the early days of LCS research included (1) the formalization of a bucket brigade algorithm (BBA) for credit assignment/learning, (2) selection of parent rules from a common 'environmental niche' (i.e. the match set) rather than from the whole population, (3) covering, first introduced as a create operator, (4) the formalization of an action set, (5) a simplified algorithm architecture, (6) strength-based fitness, (7) consideration of single-step, or supervised learning problems and the introduction of the correct set, (8) accuracy-based fitness, (9) the combination of fuzzy logic with LCS (which later spawned a lineage of fuzzy LCS algorithms), (10) encouraging long action chains and default hierarchies for improving performance on multi-step problems, (11) examining latent learning (which later inspired a new branch of anticipatory classifier systems (ACS)), and (12) the introduction of the first Q-learning-like credit assignment technique. While not all of these concepts are applied in modern LCS algorithms, each was a landmark in the development of the LCS paradigm.
|
Learning classifier system
| 0.835062
|
2,276
|
The characteristic polynomial of an integer matrix has integer coefficients. Since the eigenvalues of a matrix are the roots of this polynomial, the eigenvalues of an integer matrix are algebraic integers. In dimension less than 5, they can thus be expressed by radicals involving integers. Integer matrices are sometimes called integral matrices, although this use is discouraged.
|
Integer matrices
| 0.835051
|
2,277
|
Integer matrices of determinant 1 form the group SL_n(Z), which has far-reaching applications in arithmetic and geometry. For n = 2, it is closely related to the modular group. The intersection of the integer matrices with the orthogonal group is the group of signed permutation matrices.
|
Integer matrices
| 0.835051
|
2,278
|
Invertibility of integer matrices is in general more numerically stable than that of non-integer matrices. The determinant of an integer matrix is itself an integer, thus the numerically smallest possible magnitude of the determinant of an invertible integer matrix is one; hence, where inverses exist, they do not become excessively large (see condition number). Theorems from matrix theory that infer properties from determinants thus avoid the traps induced by ill-conditioned (nearly zero determinant) real or floating-point valued matrices. The inverse of an integer matrix M is again an integer matrix if and only if the determinant of M equals 1 or -1.
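A quick numerical check of the last statement; the matrix here is an arbitrary unimodular (determinant-1) example:

```python
import numpy as np

M = np.array([[2, 1], [1, 1]])                 # det(M) = 1
M_inv = np.round(np.linalg.inv(M)).astype(int)  # rounds away float noise

# Because det(M) = ±1, the inverse is again an integer matrix,
# and M @ M_inv recovers the identity exactly.
assert np.array_equal(M @ M_inv, np.eye(2, dtype=int))
```

For an integer matrix with |det| > 1, the same construction would fail: the true inverse has non-integer entries of the form (adjugate entry)/det.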
|
Integer matrices
| 0.835051
|
2,279
|
Furthermore, when the temperature is lowered and the molecules described above pass through the column, the chimeric protein undergoes self-splicing and only the target protein is eluted. This novel technique eliminates the need for a proteolysis step, and modified Sce VMA stays in the column attached to chitin through CBD. Recently, inteins have been used to purify proteins based on self-aggregating peptides. Elastin-like polypeptides (ELPs) are a useful tool in biotechnology.
|
Expressed protein ligation
| 0.83505
|
2,280
|
Inteins are very efficient at protein splicing, and they have accordingly found an important role in biotechnology. There are more than 200 inteins identified to date; sizes range from 100–800 AAs. Inteins have been engineered for particular applications such as protein semisynthesis and the selective labeling of protein segments, which is useful for NMR studies of large proteins.Pharmaceutical inhibition of intein excision may be a useful tool for drug development; the protein that contains the intein will not carry out its normal function if the intein does not excise, since its structure will be disrupted. It has been suggested that inteins could prove useful for achieving allotopic expression of certain highly hydrophobic proteins normally encoded by the mitochondrial genome, for example in gene therapy.
|
Expressed protein ligation
| 0.83505
|
2,281
|
Under this correspondence, (equivalence classes of) ultrametric places of K correspond to prime ideals of O_K. For K = Q, this gives back Ostrowski's theorem: any prime ideal in Z (which is necessarily generated by a single prime number) corresponds to a non-Archimedean place and vice versa. However, for more general number fields, the situation becomes more involved, as will be explained below.
|
Number fields
| 0.835048
|
2,282
|
Let K be a number field of degree n. Among all possible bases of K (seen as a Q-vector space), there are particular ones known as power bases, that are bases of the form B_x = {1, x, x^2, ..., x^(n-1)} for some element x in K. By the primitive element theorem, there exists such an x, called a primitive element.
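For instance, in the quadratic field K = Q(√2) the element x = √2 is a primitive element, giving the power basis

```latex
B_{\sqrt{2}} = \{1, \sqrt{2}\}, \qquad K = \{\, a + b\sqrt{2} : a, b \in \mathbb{Q} \,\}.
```

Here n = 2, so the power basis stops at the first power of x.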
|
Number fields
| 0.835048
|
2,283
|
To find the non-Archimedean places, let again f and x be as above. In Q_p, f splits in factors of various degrees, none of which are repeated, and the degrees of which add up to n, the degree of f. For each of these p-adically irreducible factors f_i, we may suppose that x satisfies f_i and obtain an embedding of K into an algebraic extension of finite degree over Q_p. Such a local field behaves in many ways like a number field, and the p-adic numbers may similarly play the role of the rationals; in particular, we can define the norm and trace in exactly the same way, now giving functions mapping to Q_p.
|
Number fields
| 0.835048
|
2,284
|
M equilibrium accomplishes this by replacing the two main assumptions underlying classical game theory, perfect maximization and rational expectations, with the weaker notions of ordinal monotonicity – players' choice probabilities are ranked the same as the expected payoffs based on their beliefs – and ordinal consistency – players' beliefs yield the same ranking of expected payoffs as their choices. M equilibria do not follow from the fixed points obtained by imposing rational expectations, which have long dominated economics. Instead, the mathematical machinery used to characterize M equilibria is semi-algebraic geometry. Interestingly, some of this machinery was developed by Nash himself. The characterization of M equilibria as semi-algebraic sets allows for mathematically precise and empirically testable predictions.
|
M equilibrium
| 0.835037
|
2,285
|
A large body of work in experimental game theory has documented systematic departures from Nash equilibrium, the cornerstone of classic game theory. The lack of empirical support for Nash equilibrium led Nash himself to return to doing research in pure mathematics. Selten, who shared the 1994 Nobel Prize with Nash, likewise concluded that “game theory is for proving theorems, not for playing games”. M equilibrium is motivated by the desire for an empirically relevant game theory.
|
M equilibrium
| 0.835037
|
2,286
|
It is a diffuse emitter: measured per unit area perpendicular to the direction, the energy is radiated isotropically, independent of direction. Real materials emit energy at a fraction, called the emissivity, of black-body energy levels. By definition, a black body in thermal equilibrium has an emissivity ε = 1. A source with a lower emissivity, independent of frequency, is often referred to as a gray body. Constructing black bodies with an emissivity as close to 1 as possible remains a topic of current interest. In astronomy, the radiation from stars and planets is sometimes characterized in terms of an effective temperature, the temperature of a black body that would emit the same total flux of electromagnetic energy.
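Concretely, the effective temperature T_eff is defined through the Stefan–Boltzmann law, equating the object's total emitted flux per unit area to that of a black body:

```latex
F = \sigma T_{\mathrm{eff}}^{4},
```

where σ ≈ 5.670 × 10⁻⁸ W m⁻² K⁻⁴ is the Stefan–Boltzmann constant.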
|
Black-body absorption
| 0.835029
|
2,287
|
Molecular Omics is a bimonthly peer-reviewed scientific journal published by the Royal Society of Chemistry. It covers the interface between chemistry, the "omic" sciences, and systems biology. The editor-in-chief is Robert L. Moritz (Institute for Systems Biology).
|
Molecular Omics
| 0.835001
|
2,288
|
Earth science – the science of the planet Earth, as of 2018 the only identified life-bearing planet. Its studies include the following: The water cycle and the process of transpiration Freshwater Oceanography Weathering and erosion Rocks Agrophysics Soil science Pedogenesis Soil fertility Earth's tectonic structure Geomorphology and geophysics Physical geography Seismology: stress, strain, and earthquakes Characteristics of mountains and volcanoes Characteristics and formation of fossils Atmospheric sciences – the branches of science that study the atmosphere, its processes, the effects other systems have on the atmosphere, and the effects of the atmosphere on these other systems. Atmosphere of Earth Atmospheric pressure and winds Evaporation, condensation, and humidity Fog and clouds Meteorology, weather, climatology, and climate Hydrology, clouds and precipitation Air masses and weather fronts Major storms: thunderstorms, tornadoes, and hurricanes Major climate groups Speleology Cave
|
Physical Science
| 0.834984
|
2,289
|
Sears and Zemansky's University Physics with Modern Physics Technology Update (13th ed.). Pearson Education. ISBN 978-1-292-02063-1.
|
Physical Science
| 0.834984
|
2,290
|
Physics is the study of your world and universe around you. Maxwell, J.C. (1878).
|
Physical Science
| 0.834984
|
2,291
|
Physics for Dummies. John Wiley & Sons. ISBN 0-470-61841-8.
|
Physical Science
| 0.834984
|
2,292
|
The Feynman Lectures on Physics. Vol. 1.
|
Physical Science
| 0.834984
|
2,293
|
Ancient cultures saw the Earth as the centre of the Solar System or universe (geocentrism). In the 16th century, Nicolaus Copernicus advanced the ideas of heliocentrism, recognizing the Sun as the centre of the Solar System. The structure of solar systems, planets, comets, asteroids, and meteors The shape and structure of Earth (roughly spherical, see also Spherical Earth) Earth in the Solar System Time measurement The composition and features of the Moon Interactions of the Earth and Moon. (Note: Astronomy should not be confused with astrology, which assumes that people's destiny and human affairs in general correlate to the apparent positions of astronomical objects in the sky – although the two fields share a common origin, they are quite different; astronomers embrace the scientific method, while astrologers do not.)
|
Physical Science
| 0.834984
|
2,294
|
Astronomy – science of celestial bodies and their interactions in space. Its studies include the following: The life and characteristics of stars and galaxies Origins of the universe. Physical science uses the Big Bang theory as the commonly accepted scientific theory of the origin of the universe. A heliocentric Solar System.
|
Physical Science
| 0.834984
|
2,295
|
Branches of chemistry Earth science – all-embracing term referring to the fields of science dealing with planet Earth. Earth science is the study of how the natural environment (ecosphere or Earth system) works and how it evolved to its current state. It includes the study of the atmosphere, hydrosphere, lithosphere, and biosphere.
|
Physical Science
| 0.834984
|
2,296
|
Physics – natural and physical science could involve the study of matter and its motion through space and time, along with related concepts such as energy and force. More broadly, it is the general analysis of nature, conducted in order to understand how the universe behaves. Branches of physics Astronomy – study of celestial objects (such as stars, galaxies, planets, moons, asteroids, comets and nebulae), the physics, chemistry, and evolution of such objects, and phenomena that originate outside the atmosphere of Earth, including supernovae explosions, gamma-ray bursts, and cosmic microwave background radiation. Branches of astronomy Chemistry – studies the composition, structure, properties and change of matter. In this realm, chemistry deals with such topics as the properties of individual atoms, the manner in which atoms form chemical bonds in the formation of compounds, the interactions of substances through intermolecular forces to give matter its general properties, and the interactions between substances through chemical reactions to form different substances.
|
Physical Science
| 0.834984
|
2,297
|
There are generally two classes of physics engines: real-time and high-precision. High-precision physics engines require more processing power to calculate very precise physics and are usually used by scientists and computer-animated movies. Real-time physics engines—as used in video games and other forms of interactive computing—use simplified calculations and decreased accuracy to compute in time for the game to respond at an appropriate rate for game play.
|
Physics engines
| 0.834981
|
2,298
|
Physics engines for video games typically have two core components, a collision detection/collision response system, and the dynamics simulation component responsible for solving the forces affecting the simulated objects. Modern physics engines may also contain fluid simulations, animation control systems and asset integration tools. There are three major paradigms for the physical simulation of solids: Penalty methods, where interactions are commonly modelled as mass-spring systems. This type of engine is popular for deformable, or soft-body physics. Constraint based methods, where constraint equations are solved that estimate physical laws. Impulse based methods, where impulses are applied to object interactions.Finally, hybrid methods are possible that combine aspects of the above paradigms.
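A sketch of the impulse-based paradigm in its simplest form: a one-dimensional, frictionless head-on collision of two point masses. This is a deliberately minimal model, not any particular engine's API:

```python
def collision_impulse(m1, v1, m2, v2, e=1.0):
    """Resolve a head-on collision by applying a single scalar impulse.

    e is the coefficient of restitution: 1 = perfectly elastic, 0 = plastic.
    """
    v_rel = v1 - v2                              # closing speed along the normal
    if v_rel <= 0:                               # bodies already separating
        return v1, v2
    j = -(1 + e) * v_rel / (1 / m1 + 1 / m2)     # scalar impulse
    return v1 + j / m1, v2 - j / m2              # equal and opposite application
```

For equal masses and e = 1 the velocities are simply exchanged, and total momentum m1·v1 + m2·v2 is conserved for any e; real engines extend the same impulse calculation to 3D contact normals, rotation, and friction.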
|
Physics engines
| 0.834981
|
2,299
|
Due to the requirements of speed and high precision, special computer processors known as vector processors were developed to accelerate the calculations. The techniques can be used to model weather patterns in weather forecasting, wind tunnel data for designing air- and watercraft or motor vehicles including racecars, and thermal cooling of computer processors for improving heat sinks. As with many calculation-laden processes in computing, the accuracy of the simulation is related to the resolution of the simulation and the precision of the calculations; small fluctuations not modeled in the simulation can drastically change the predicted results. Tire manufacturers use physics simulations to examine how new tire tread types will perform under wet and dry conditions, using new tire materials of varying flexibility and under different levels of weight loading.
|
Physics engines
| 0.834981
|