| id (int32, 0–100k) | text (string, 21–3.54k chars) | source (string, 1–124 chars) | similarity (float32, 0.78–0.88) |
|---|---|---|---|
1,500
|
The journal was established in 1923 as the Journal of Scientific Instruments. The first issue was introduced by J. J. Thomson, then president of the Institute of Physics, who stated that no publication existed at that time in the English language specially devoted to scientific instruments. The idea for the journal was promoted by Richard Glazebrook, the first president, then director, of the National Physical Laboratory, where the journal was initially edited. The need for interdisciplinarity was recognised even then, with the desire to co-opt biologists, engineers, chemists, and instrument makers, "as well as physicists", on the scientific advisory committee.
|
Measurement Science and Technology
| 0.838883
|
1,501
|
Measurement Science and Technology is a monthly peer-reviewed scientific journal, published by IOP Publishing, covering the areas of measurement, instrumentation, and sensor technology in the sciences. The editor-in-chief is Andrew Yacoot (National Physical Laboratory).
|
Measurement Science and Technology
| 0.838883
|
1,502
|
In physics, angular velocity (symbol ω or $\vec{\omega}$, the lowercase Greek letter omega), also known as the angular frequency vector, is a pseudovector representation of how the angular position or orientation of an object changes with time, i.e. how quickly an object rotates (spins or revolves) around an axis of rotation and how fast the axis itself changes direction. The magnitude of the pseudovector, $\omega = \|\boldsymbol{\omega}\|$, represents the angular speed (or angular frequency), the angular rate at which the object rotates (spins or revolves). The pseudovector direction $\hat{\boldsymbol{\omega}} = \boldsymbol{\omega}/\omega$ is normal to the instantaneous plane of rotation or angular displacement. There are two types of angular velocity: orbital angular velocity refers to how fast a point object revolves about a fixed origin, i.e. the time rate of change of its angular position relative to the origin.
|
Rotation velocity
| 0.838881
|
1,503
|
In mathematics, analytic geometry (also called Cartesian geometry) describes every point in two-dimensional space by means of two coordinates. Two perpendicular coordinate axes are given which cross each other at the origin. They are usually labeled x and y. Relative to these axes, the position of any point in two-dimensional space is given by an ordered pair of real numbers, each number giving the distance of that point from the origin measured along the given axis, which is equal to the distance of that point from the other axis. Another widely used coordinate system is the polar coordinate system, which specifies a point in terms of its distance from the origin and its angle relative to a rightward reference ray.
|
Plane coordinates
| 0.838873
|
1,504
|
A reordering of the rows and columns of such a matrix can assemble all the ones into a rectangular part of the matrix. Let h be the vector of all ones. Then if v is an arbitrary logical vector, the relation $R = vh^{\operatorname{T}}$ has constant rows determined by v. In the calculus of relations such an R is called a vector. A particular instance is the universal relation $hh^{\operatorname{T}}$.
|
Binary matrix
| 0.838839
|
1,505
|
A logical matrix may represent an adjacency matrix in graph theory: non-symmetric matrices correspond to directed graphs, symmetric matrices to ordinary graphs, and a 1 on the diagonal corresponds to a loop at the corresponding vertex. The biadjacency matrix of a simple, undirected bipartite graph is a (0, 1)-matrix, and any (0, 1)-matrix arises in this way. The prime factors of a list of m square-free, n-smooth numbers can be described as an m × π(n) (0, 1)-matrix, where π is the prime-counting function, and aij is 1 if and only if the j th prime divides the i th number.
|
Binary matrix
| 0.838839
|
1,506
|
A permutation matrix is a (0, 1)-matrix, all of whose columns and rows each have exactly one nonzero element. A Costas array is a special case of a permutation matrix. An incidence matrix in combinatorics and finite geometry has ones to indicate incidence between points (or vertices) and lines of a geometry, blocks of a block design, or edges of a graph. A design matrix in analysis of variance is a (0, 1)-matrix with constant row sums.
|
Binary matrix
| 0.838839
|
1,507
|
A major branch of virology is virus classification. It is artificial in that it is not based on evolutionary phylogenetics but on shared or distinguishing properties of viruses. It seeks to describe the diversity of viruses by naming and grouping them on the basis of similarities. In 1962, André Lwoff, Robert Horne, and Paul Tournier were the first to develop a means of virus classification, based on the Linnaean hierarchical system.
|
Molecular virology
| 0.838838
|
1,508
|
Reverse genetics is a powerful research method in virology. In this procedure, complementary DNA (cDNA) copies of virus genomes called "infectious clones" are used to produce genetically modified viruses that can then be tested for changes in, say, virulence or transmissibility.
|
Molecular virology
| 0.838838
|
1,509
|
Reassortment is the switching of genes from different parents and it is particularly useful when studying the genetics of viruses that have segmented genomes (fragmented into two or more nucleic acid molecules) such as influenza viruses and rotaviruses. The genes that encode properties such as serotype can be identified in this way.
|
Molecular virology
| 0.838838
|
1,510
|
The methods for separating viral nucleic acids (RNA and DNA) and proteins, which are now the mainstay of virology, did not exist. Now there are many methods for observing the structure and functions of viruses and their component parts. Thousands of different viruses are now known about and virologists often specialize in either the viruses that infect plants, or bacteria and other microorganisms, or animals. Viruses that infect humans are now studied by medical virologists. Virology is a broad subject covering biology, health, animal welfare, agriculture and ecology.
|
Molecular virology
| 0.838838
|
1,511
|
Virology is the scientific study of biological viruses. It is a subfield of microbiology that focuses on their detection, structure, classification and evolution, their methods of infection and exploitation of host cells for reproduction, their interaction with host organism physiology and immunity, the diseases they cause, the techniques to isolate and culture them, and their use in research and therapy. The identification of the causative agent of tobacco mosaic disease (TMV) as a novel pathogen by Martinus Beijerinck (1898) is now acknowledged as being the official beginning of the field of virology as a discipline distinct from bacteriology. He realized the source was neither a bacterial nor a fungal infection, but something completely different.
|
Molecular virology
| 0.838838
|
1,512
|
Scientific calculators are used widely in situations that require quick access to certain mathematical functions, especially those that were once looked up in mathematical tables, such as trigonometric functions or logarithms. They are also used for calculations of very large or very small numbers, as in some aspects of astronomy, physics, and chemistry. They are very often required for math classes from the junior high school level through college, and are generally either permitted or required on many standardized tests covering math and science subjects; as a result, many are sold into educational markets to cover this demand, and some high-end models include features making it easier to translate a problem on a textbook page into calculator input, e.g. by providing a method to enter an entire problem as it is written on the page using simple formatting tools.
|
Scientific calculator
| 0.838817
|
1,513
|
The GRE subject test in mathematics is a standardized test in the United States created by the Educational Testing Service (ETS), and is designed to assess a candidate's potential for graduate or post-graduate study in the field of mathematics. It contains questions from many fields of mathematics; about 50% of the questions come from calculus (including pre-calculus topics, multivariate calculus, and differential equations), 25% come from algebra (including linear algebra, abstract algebra, and number theory), and 25% come from a broad variety of other topics typically encountered in undergraduate mathematics courses, such as point-set topology, probability and statistics, geometry, and real analysis. Similar to all the GRE subject tests, the GRE Mathematics test is paper-based, as opposed to the GRE general test which is usually computer-based. It contains approximately 66 multiple-choice questions, which are to be answered within 2 hours and 50 minutes.
|
GRE Mathematics Test
| 0.838772
|
1,514
|
$L(R)=\{x\mid \exists y\,R(x,y)\}.$ This definition may be generalized to n-ary relations using any suitable encoding which allows multiple strings to be compressed into one string (for instance by listing them consecutively with a delimiter). More formally, a relation R can be viewed as a search problem, and a Turing machine which calculates R is also said to solve it. More formally, if R is a binary relation such that field(R) ⊆ Γ+ and T is a Turing machine, then T calculates R if: if x is such that there is some y such that R(x, y), then T accepts x with output z such that R(x, z) (there may be multiple y, and T need only find one of them); if x is such that there is no y such that R(x, y), then T rejects x. (Note that the graph of a partial function is a binary relation, and if T calculates a partial function then there is at most one possible output.) Such problems occur very frequently in graph theory and combinatorial optimization, for example, where searching for structures such as particular matchings, optional cliques, particular stable sets, etc. are subjects of interest.
|
Search problem
| 0.838772
|
1,515
|
Macro-level quantum entanglement has been proposed, but it is so far out of the realm of current physics that nobody believes it. The space tunnels also lead to the discovery of the Fallers, an alien race who refused to establish communications and immediately launched a war, which they are winning. No Faller has been captured alive—they prefer to suicide or kamikaze—but forensic examination of corpses indicates they evolved separately from humans, instead of being seeded by the progenitors.
|
Probability Moon
| 0.838736
|
1,516
|
Kirkus Reviews wrote that the book is "twisty and compelling, brimful of ideas with Kress’s usual life-sized characters", and called it a "top-notch work from a major talent". Roland Green of Booklist wrote that "Kress' characterizations are as sound as ever, but many will be agreeably surprised at her proficiency with military hardware and action scenes." Jackie Cassada of Library Journal wrote that despite "occasionally reading more like a drawn-out short story rather than a novel", the book is a "fine debut by a writer with potential to grow". Publishers Weekly wrote that "Kress does a good job of working out the ramifications of her shared-reality society, but her human characters lack the depth of those in her best work" and that "her military figures in particular are thinly drawn" and "the physics, although interesting, is introduced in large, sometimes indigestible chunks that slow the plot to a crawl."
|
Probability Moon
| 0.838736
|
1,517
|
In general relativity, the laws of physics can be expressed in a generally covariant form. In other words, the description of the world as given by the laws of physics does not depend on our choice of coordinate systems. However, it is often useful to fix upon a particular coordinate system, in order to solve actual problems or make actual predictions. A coordinate condition selects such coordinate system(s).
|
Coordinate condition
| 0.838734
|
1,518
|
An example of an under-determinative condition is the algebraic statement that the determinant of the metric tensor is −1, which still leaves considerable gauge freedom. This condition would have to be supplemented by other conditions in order to remove the ambiguity in the metric tensor. An example of an over-determinative condition is the algebraic statement that the difference between the metric tensor and the Minkowski tensor is simply a null four-vector times itself, which is known as a Kerr-Schild form of the metric.
|
Coordinate condition
| 0.838734
|
1,519
|
As it passes through the point where the tangent line and the curve meet, called the point of tangency, the tangent line is "going in the same direction" as the curve, and is thus the best straight-line approximation to the curve at that point. Similarly, the tangent plane to a surface at a given point is the plane that "just touches" the surface at that point. The concept of a tangent is one of the most fundamental notions in differential geometry and has been extensively generalized; see Tangent space.
|
Coordinate geometry
| 0.838701
|
1,520
|
In geometry, the tangent line (or simply tangent) to a plane curve at a given point is the straight line that "just touches" the curve at that point. Informally, it is a line through a pair of infinitely close points on the curve. More precisely, a straight line is said to be a tangent of a curve y = f(x) at a point x = c on the curve if the line passes through the point (c, f(c)) on the curve and has slope f'(c) where f' is the derivative of f. A similar definition applies to space curves and curves in n-dimensional Euclidean space.
|
Coordinate geometry
| 0.838701
|
1,521
|
However, although Apollonius came close to developing analytic geometry, he did not manage to do so since he did not take into account negative magnitudes and in every case the coordinate system was superimposed upon a given curve a posteriori instead of a priori. That is, equations were determined by curves, but curves were not determined by equations. Coordinates, variables, and equations were subsidiary notions applied to a specific geometric situation.
|
Coordinate geometry
| 0.838701
|
1,522
|
The Greek mathematician Menaechmus solved problems and proved theorems by using a method that had a strong resemblance to the use of coordinates and it has sometimes been maintained that he had introduced analytic geometry. Apollonius of Perga, in On Determinate Section, dealt with problems in a manner that may be called an analytic geometry of one dimension; with the question of finding points on a line that were in a ratio to the others. Apollonius in the Conics further developed a method that is so similar to analytic geometry that his work is sometimes thought to have anticipated the work of Descartes by some 1800 years. His application of reference lines, a diameter and a tangent is essentially no different from our modern use of a coordinate frame, where the distances measured along the diameter from the point of tangency are the abscissas, and the segments parallel to the tangent and intercepted between the axis and the curve are the ordinates. He further developed relations between the abscissas and the corresponding ordinates that are equivalent to rhetorical equations (expressed in words) of curves.
|
Coordinate geometry
| 0.838701
|
1,523
|
Lines in a Cartesian plane, or more generally, in affine coordinates, can be described algebraically by linear equations. In two dimensions, the equation for non-vertical lines is often given in the slope-intercept form $y = mx + b$, where: m is the slope or gradient of the line, b is the y-intercept of the line, and x is the independent variable of the function y = f(x). In a manner analogous to the way lines in a two-dimensional space are described using a point-slope form for their equations, planes in a three-dimensional space have a natural description using a point in the plane and a vector orthogonal to it (the normal vector) to indicate its "inclination".
|
Coordinate geometry
| 0.838701
|
1,524
|
It is the foundation of most modern fields of geometry, including algebraic, differential, discrete and computational geometry. Usually the Cartesian coordinate system is applied to manipulate equations for planes, straight lines, and circles, often in two and sometimes three dimensions. Geometrically, one studies the Euclidean plane (two dimensions) and Euclidean space. As taught in school books, analytic geometry can be explained more simply: it is concerned with defining and representing geometric shapes in a numerical way and extracting numerical information from shapes' numerical definitions and representations. That the algebra of the real numbers can be employed to yield results about the linear continuum of geometry relies on the Cantor–Dedekind axiom.
|
Coordinate geometry
| 0.838701
|
1,525
|
In mathematics, analytic geometry, also known as coordinate geometry or Cartesian geometry, is the study of geometry using a coordinate system. This contrasts with synthetic geometry. Analytic geometry is used in physics and engineering, and also in aviation, rocketry, space science, and spaceflight.
|
Coordinate geometry
| 0.838701
|
1,526
|
Some aspects of brane physics have been applied to cosmology. For example, brane gas cosmology attempts to explain why there are three dimensions of space using topological and thermodynamic considerations. According to this idea, it would be because three is the largest number of spatial dimensions in which strings can generically intersect.
|
N-dimensional space
| 0.838675
|
1,527
|
In 1921, Kaluza–Klein theory presented 5D including an extra dimension of space. At the level of quantum field theory, Kaluza–Klein theory unifies gravity with gauge interactions, based on the realization that gravity propagating in small, compact extra dimensions is equivalent to gauge interactions at long distances. In particular when the geometry of the extra dimensions is trivial, it reproduces electromagnetism.
|
N-dimensional space
| 0.838675
|
1,528
|
In physics, three dimensions of space and one of time is the accepted norm. However, there are theories that attempt to unify the four fundamental forces by introducing extra dimensions/hyperspace. Most notably, superstring theory requires 10 spacetime dimensions, and originates from a more fundamental 11-dimensional theory tentatively called M-theory which subsumes five previously distinct superstring theories. Supergravity theory also promotes 11D spacetime = 7D hyperspace + 4 common dimensions.
|
N-dimensional space
| 0.838675
|
1,529
|
": 290 Davis Baird has argued that the major change associated with Floris Cohen's identification of a "fourth big scientific revolution" after World War II is the development of scientific instrumentation, not only in chemistry but across the sciences. In chemistry, the introduction of new instrumentation in the 1940s was "nothing less than a scientific and technological revolution": 28–29 in which classical wet-and-dry methods of structural organic chemistry were discarded, and new areas of research opened up. : 38 As early as 1954, W. A. Wildhack discussed both the productive and destructive potential inherent in process control. The ability to make precise, verifiable and reproducible measurements of the natural world, at levels that were not previously observable, using scientific instrumentation, has "provided a different texture of the world". This instrumentation revolution fundamentally changes human abilities to monitor and respond, as is illustrated in the examples of DDT monitoring and the use of UV spectrophotometry and gas chromatography to monitor water pollutants.
|
Measuring instrument
| 0.838636
|
1,530
|
For example, they could be used to identify and destroy cancer cells. Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, biological machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damages and infections, but these are considered to be far beyond current capabilities.
|
Molecular machinery
| 0.838625
|
1,531
|
AMMs have been pivotal in the design of several stimuli-responsive smart materials, such as 2D and 3D self-assembled materials and nanoparticle-based systems, for versatile applications ranging from 3D printing to drug delivery. AMMs are gradually moving from the conventional solution-phase chemistry to surfaces and interfaces. For instance, AMM-immobilized surfaces (AMMISs) are a novel class of functional materials consisting of AMMs attached to inorganic surfaces forming features like self-assembled monolayers; this gives rise to tunable properties such as fluorescence, aggregation and drug-release activity. Most of these applications remain at the proof-of-concept level, and need major modifications to be adapted to the industrial scale. Challenges in streamlining macroscale applications include autonomous operation, the complexity of the machines, stability in the synthesis of the machines and the working conditions.
|
Molecular machinery
| 0.838625
|
1,532
|
Chemical energy (or "chemical fuels") was an attractive option at the beginning, given the broad array of reversible chemical reactions (heavily based on acid-base chemistry) to switch molecules between different states. However, this comes with the issue of practically regulating the delivery of the chemical fuel and the removal of waste generated to maintain the efficiency of the machine as in biological systems. Though some AMMs have found ways to circumvent this, more recently waste-free reactions such as those based on electron transfers or isomerization have gained attention (such as redox-responsive viologens). Eventually, several different forms of energy (electric, magnetic, optical and so on) have become the primary energy sources used to power AMMs, even producing autonomous systems such as light-driven motors.
|
Molecular machinery
| 0.838625
|
1,533
|
The first example of an artificial molecular machine (AMM) was reported in 1994, featuring a rotaxane with a ring and two different possible binding sites. In 2016 the Nobel Prize in Chemistry was awarded to Jean-Pierre Sauvage, Sir J. Fraser Stoddart, and Bernard L. Feringa for the design and synthesis of molecular machines.
|
Molecular machinery
| 0.838625
|
1,534
|
AMMs have diversified rapidly over the past few decades and their design principles, properties, and characterization methods have been outlined better. A major starting point for the design of AMMs is to exploit the existing modes of motion in molecules, such as rotation about single bonds or cis-trans isomerization. Different AMMs are produced by introducing various functionalities, such as the introduction of bistability to create switches. A broad range of AMMs has been designed, featuring different properties and applications; some of these include molecular motors, switches, and logic gates. A wide range of applications have been demonstrated for AMMs, including those integrated into polymeric, liquid crystal, and crystalline systems for varied functions (such as materials research, homogenous catalysis and surface chemistry).
|
Molecular machinery
| 0.838625
|
1,535
|
However, the most crucial aspect of the process is that the reference gene must be stable. The selection of these reference genes was traditionally carried out in molecular biology using qualitative or semi-quantitative studies such as the visual examination of RNA gels, northern blot densitometry or semi-quantitative PCR (PCR mimics). Now, in the genome era, it is possible to carry out a more detailed estimate for many organisms using transcriptomic technologies.
|
Real-time polymerase chain reaction
| 0.838617
|
1,536
|
The first was in the January 1984 issue of Acorn User magazine, and Banthorpe followed this with a three-dimensional version in the May 1984 issue. Susan Stepney, Professor of Computer Science at the University of York, followed this up in 1988 with Life on the Line, a program that generated one-dimensional cellular automata. There are now thousands of Game of Life programs online, so a full list will not be provided here. The following is a small selection of programs with some special claim to notability, such as popularity or unusual features.
|
Conway game
| 0.838602
|
1,537
|
However, in isotropic rules, the positions of neighbour cells relative to each other may be taken into account in determining a cell's future state—not just the total number of those neighbours. Some variations on the Game of Life modify the geometry of the universe as well as the rule. The above variations can be thought of as a two-dimensional square, because the world is two-dimensional and laid out in a square grid.
|
Conway game
| 0.838602
|
1,538
|
Ranging is a technique that measures distance or slant range from the observer to a target, especially a far and moving target. Active methods use unilateral transmission and passive reflection. Active rangefinding methods include laser (lidar), radar, sonar, and ultrasonic rangefinding. Other devices which measure distance using trigonometry are stadiametric, coincidence and stereoscopic rangefinders.
|
Length measurement
| 0.838575
|
1,539
|
Three-dimensional space has a number of topological properties that distinguish it from spaces of other dimension numbers. For example, at least three dimensions are required to tie a knot in a piece of string. In differential geometry the generic three-dimensional spaces are 3-manifolds, which locally resemble $\mathbb{R}^3$.
|
Spatial geometry
| 0.838528
|
1,540
|
The fundamental theorem of line integrals says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve. Let $\varphi : U \subseteq \mathbb{R}^n \to \mathbb{R}$. Then
$$\varphi(\mathbf{q}) - \varphi(\mathbf{p}) = \int_{\gamma} \nabla \varphi(\mathbf{r}) \cdot d\mathbf{r}.$$
|
Spatial geometry
| 0.838528
|
1,541
|
Suppose V is a subset of $\mathbb{R}^n$ (in the case of n = 3, V represents a volume in 3D space) which is compact and has a piecewise smooth boundary S (also indicated with ∂V = S). If F is a continuously differentiable vector field defined on a neighborhood of V, then the divergence theorem says:
$$\iiint_V (\nabla \cdot \mathbf{F})\, dV = \oiint_S (\mathbf{F} \cdot \mathbf{n})\, dS.$$
The left side is a volume integral over the volume V, the right side is the surface integral over the boundary of the volume V. The closed manifold ∂V is quite generally the boundary of V oriented by outward-pointing normals, and n is the outward-pointing unit normal field of the boundary ∂V. (dS may be used as a shorthand for n dS.)
|
Spatial geometry
| 0.838528
|
1,542
|
Experimental physics is a branch of physics that is concerned with data acquisition, data-acquisition methods, and the detailed conceptualization (beyond simple thought experiments) and realization of laboratory experiments. It is often contrasted with theoretical physics, which is more concerned with predicting and explaining the physical behaviour of nature than the acquisition of empirical data. Although experimental and theoretical physics are concerned with different aspects of nature, they both share the same goal of understanding it and have a symbiotic relationship. The former provides data about the universe, which can then be analyzed in order to be understood, while the latter provides explanations for the data and thus offers insight into how to better acquire data and set up experiments. Theoretical physics can also offer insight into what data is needed in order to gain a better understanding of the universe, and into what experiments to devise in order to obtain it.
|
Experimental Physics
| 0.838525
|
1,543
|
LIGO, the Laser Interferometer Gravitational-Wave Observatory, is a large-scale physics experiment and observatory to detect cosmic gravitational waves and to develop gravitational-wave observations as an astronomical tool. Currently two LIGO observatories exist: LIGO Livingston Observatory in Livingston, Louisiana, and LIGO Hanford Observatory near Richland, Washington. JWST, or the James Webb Space Telescope, launched in 2021.
|
Experimental Physics
| 0.838525
|
1,544
|
Some examples of prominent experimental physics projects are: Relativistic Heavy Ion Collider which collides heavy ions such as gold ions (it is the first heavy ion collider) and protons, it is located at Brookhaven National Laboratory, on Long Island, USA. HERA, which collides electrons or positrons and protons, and is part of DESY, located in Hamburg, Germany. LHC, or the Large Hadron Collider, which completed construction in 2008 but suffered a series of setbacks. The LHC began operations in 2008, but was shut down for maintenance until the summer of 2009.
|
Experimental Physics
| 0.838525
|
1,545
|
See the timelines below for listings of physics experiments: Timeline of atomic and subatomic physics, Timeline of classical mechanics, Timeline of electromagnetism and classical optics, Timeline of gravitational physics and relativity, Timeline of nuclear fusion, Timeline of particle discoveries, Timeline of particle physics technology, Timeline of states of matter and phase transitions, and Timeline of thermodynamics.
|
Experimental Physics
| 0.838525
|
1,546
|
Phase 1 ended in July 2006. HPF Phase 2 (HPF2) applies the Rosetta v4.8x software in higher resolution, "full atom refinement" mode, concentrating on cancer biomarkers (proteins found at dramatically increased levels in cancer tissues), human secreted proteins and malaria. Phase 1 ran on two volunteer computing grids: on United Devices' grid.org, and on the World Community Grid, an IBM philanthropic initiative. Phase 2 of the project ran exclusively on the World Community Grid; it terminated in 2013 after more than 9 years of IBM involvement. The Institute for Systems Biology will use the results of the computations within its larger research efforts.
|
Human Proteome Folding Project
| 0.838522
|
1,547
|
The Human Proteome Folding Project (HPF) is a collaborative effort between New York University (Bonneau Lab), the Institute for Systems Biology (ISB) and the University of Washington (Baker Lab), using the Rosetta software developed by the Rosetta Commons. The project is managed by the Bonneau lab. HPF Phase 1 applied Rosetta v4.2x software on the human genome and 89 others, starting in November 2004.
|
Human Proteome Folding Project
| 0.838522
|
1,548
|
The question of how many vertices/watchmen/guards were needed was posed to Chvátal by Victor Klee in 1973. Chvátal proved it shortly thereafter. Chvátal's proof was later simplified by Steve Fisk, via a 3-coloring argument. Chvátal has a more geometrical approach, whereas Fisk uses well-known results from graph theory.
|
Art gallery problem
| 0.83852
|
1,549
|
Chvátal's upper bound remains valid if the restriction to guards at corners is loosened to guards at any point not exterior to the polygon. There are a number of other generalizations and specializations of the original art-gallery theorem. For instance, for orthogonal polygons, those whose edges/walls meet at right angles, only $\lfloor n/4 \rfloor$ guards are needed. There are at least three distinct proofs of this result, none of them simple: by Kahn, Klawe, and Kleitman; by Lubiw; and by Sack and Toussaint. A related problem asks for the number of guards to cover the exterior of an arbitrary polygon (the "Fortress Problem"): $\lceil n/2 \rceil$ are sometimes necessary and always sufficient if guards are placed on the boundary of the polygon, while $\lceil n/3 \rceil$ are sometimes necessary and always sufficient if guards are placed anywhere in the exterior of the polygon. In other words, the infinite exterior is more challenging to cover than the finite interior.
|
Art gallery problem
| 0.83852
|
1,550
|
Chvátal's art gallery theorem, named after Václav Chvátal, gives an upper bound on the minimal number of guards. It states:
|
Art gallery problem
| 0.83852
|
1,551
|
The art gallery problem or museum problem is a well-studied visibility problem in computational geometry. It originates from the following real-world problem: In the geometric version of the problem, the layout of the art gallery is represented by a simple polygon and each guard is represented by a point in the polygon. A set $S$ of points is said to guard a polygon if, for every point $p$ in the polygon, there is some $q \in S$ such that the line segment between $p$ and $q$ does not leave the polygon. The art gallery problem can be applied in several domains such as in robotics, when artificial intelligences (AI) need to execute movements depending on their surroundings. Other domains, where this problem is applied, are in image editing, lighting problems of a stage or installation of infrastructures for the warning of natural disasters.
|
Art gallery problem
| 0.83852
|
1,552
|
To illustrate the proof, we consider the polygon below. The first step is to triangulate the polygon (see Figure 1). Then, one applies a proper 3-colouring (Figure 2) and observes that there are 4 red, 4 blue and 6 green vertices. The colour with the fewest vertices is blue or red, thus the polygon can be covered by 4 guards (Figure 3). This agrees with the art gallery theorem, because the polygon has 14 vertices, and $\left\lfloor \tfrac{14}{3} \right\rfloor = 4$.
|
Art gallery problem
| 0.83852
|
1,553
|
Let $r_i$ be a rotamer at residue position i in the protein chain, and $E(r_i)$ the potential energy between the internal atoms of the rotamer. Let $E(r_i, r_j)$ be the potential energy between $r_i$ and rotamer $r_j$ at residue position j. Then, we define the optimization problem as one of finding the conformation of minimum energy ($E_T$):
$$E_T = \sum_i E(r_i) + \sum_i \sum_{j>i} E(r_i, r_j).$$
The problem of minimizing $E_T$ is an NP-hard problem. Even though the class of problems is NP-hard, in practice many instances of protein design can be solved exactly or optimized satisfactorily through heuristic methods.
|
Protein Design
| 0.838477
|
1,554
|
Furthermore, even if amino acid side-chain conformations are limited to a few rotamers (see Structural flexibility), this results in an exponential number of conformations for each sequence. Thus, in our 100 residue protein, and assuming that each amino acid has exactly 10 rotamers, a search algorithm that searches this space will have to search over $200^{100}$ protein conformations. The most common energy functions can be decomposed into pairwise terms between rotamers and amino acid types, which casts the problem as a combinatorial one, and powerful optimization algorithms can be used to solve it.
|
Protein Design
| 0.838477
|
1,555
|
When the first proteins were rationally designed during the 1970s and 1980s, the sequence for these was optimized manually based on analyses of other known proteins, the sequence composition, amino acid charges, and the geometry of the desired structure. The first designed proteins are attributed to Bernd Gutte, who designed a reduced version of a known catalyst, bovine ribonuclease, and tertiary structures consisting of beta-sheets and alpha-helices, including a binder of DDT. Urry and colleagues later designed elastin-like fibrous peptides based on rules on sequence composition.
|
Protein Design
| 0.838477
|
1,556
|
Hence, it is also termed inverse folding. Protein design is then an optimization problem: using some scoring criteria, an optimized sequence that will fold to the desired structure is chosen.
|
Protein Design
| 0.838477
|
1,557
|
Recently, several alternatives based on message-passing algorithms have been designed specifically for the optimization of the LP relaxation of the protein design problem. These algorithms can approximate both the dual or the primal instances of the integer programming, but in order to maintain guarantees on optimality, they are most useful when used to approximate the dual of the protein design problem, because approximating the dual guarantees that no solutions are missed. Message-passing based approximations include the tree reweighted max-product message passing algorithm, and the message passing linear programming algorithm.
|
Protein Design
| 0.838477
|
1,558
|
ILP solvers depend on linear programming (LP) algorithms, such as the Simplex or barrier-based methods to perform the LP relaxation at each branch. These LP algorithms were developed as general-purpose optimization methods and are not optimized for the protein design problem (Equation (1)). In consequence, the LP relaxation becomes the bottleneck of ILP solvers when the problem size is large.
|
Protein Design
| 0.838477
|
1,559
|
Thus, these algorithms provide a good perspective on the different kinds of algorithms available for protein design. In 2020 scientists reported the development of an AI-based process using genome databases for evolution-based designing of novel proteins. They used deep learning to identify design-rules. In 2022, a study reported deep learning software that can design proteins that contain prespecified functional sites.
|
Protein Design
| 0.838477
|
1,560
|
Although these algorithms address only the most basic formulation of the protein design problem, Equation (1), when the optimization goal changes because designers introduce improvements and extensions to the protein design model, such as improvements to the structural flexibility allowed (e.g., protein backbone flexibility) or including sophisticated energy terms, many of the extensions on protein design that improve modeling are built atop these algorithms. For example, Rosetta Design incorporates sophisticated energy terms, and backbone flexibility using Monte Carlo as the underlying optimizing algorithm. OSPREY's algorithms build on the dead-end elimination algorithm and A* to incorporate continuous backbone and side-chain movements.
|
Protein Design
| 0.838477
|
1,561
|
Several algorithms have been developed specifically for the protein design problem. These algorithms can be divided into two broad classes: exact algorithms, such as dead-end elimination, that lack runtime guarantees but guarantee the quality of the solution; and heuristic algorithms, such as Monte Carlo, that are faster than exact algorithms but have no guarantees on the optimality of the results. Exact algorithms guarantee that the optimization process produced the optimal solution according to the protein design model. Thus, if the predictions of exact algorithms fail when these are experimentally validated, then the source of error can be attributed to the energy function, the allowed flexibility, the sequence space or the target structure (e.g., if it cannot be designed for). Some protein design algorithms are listed below.
|
Protein Design
| 0.838477
|
1,562
|
Molecular mechanics force-fields, which have been used mostly in molecular dynamics simulations, are optimized for the simulation of single sequences, but protein design searches through many conformations of many sequences. Thus, molecular mechanics force-fields must be tailored for protein design. In practice, protein design energy functions often incorporate both statistical terms and physics-based terms. For example, the Rosetta energy function, one of the most-used energy functions, incorporates physics-based energy terms originating in the CHARMM energy function, and statistical energy terms, such as rotamer probability and knowledge-based electrostatics. Typically, energy functions are highly customized between laboratories, and specifically tailored for every design.
|
Protein Design
| 0.838477
|
1,563
|
Statistical potentials, in contrast to physics-based potentials, have the advantage of being fast to compute, of accounting implicitly for complex effects and of being less sensitive to small changes in the protein structure. These energy functions are based on deriving energy values from the frequency of appearance in a structural database. Protein design, however, has requirements that can sometimes be limited in molecular mechanics force-fields.
|
Protein Design
| 0.838477
|
1,564
|
The trend has been toward using more physics-based potential energy functions. Physics-based energy functions, such as AMBER and CHARMM, are typically derived from quantum mechanical simulations, and experimental data from thermodynamics, crystallography, and spectroscopy. These energy functions typically simplify physical energy functions and make them pairwise decomposable, meaning that the total energy of a protein conformation can be calculated by adding the pairwise energy between each atom pair, which makes them attractive for optimization algorithms. Physics-based energy functions typically model an attractive-repulsive Lennard-Jones term between atoms and a pairwise electrostatic Coulombic term between non-bonded atoms.
|
Protein Design
| 0.838477
|
1,565
|
The most accurate energy functions are those based on quantum mechanical simulations. However, such simulations are too slow and typically impractical for protein design. Instead, many protein design algorithms use either physics-based energy functions adapted from molecular mechanics simulation programs, knowledge based energy-functions, or a hybrid mix of both.
|
Protein Design
| 0.838477
|
1,566
|
Suppose the winning-sets are all of size k (i.e., the game-hypergraph is k-uniform). In a Maker-Breaker game, the Erdős–Selfridge theorem implies that Breaker wins if the number of winning-sets is less than $2^{k-1}$. By the above conjecture, we would expect the same to hold in the corresponding Client-Waiter game - Waiter "should" win (as Breaker) whenever the number of winning-sets is less than $2^{k-1}$.
|
Picker-Chooser game
| 0.838466
|
1,567
|
Introduced in 1962, Petri nets were an early attempt to codify the rules of concurrent execution. Dataflow theory later built upon these, and Dataflow architectures were created to physically implement the ideas of dataflow theory. Beginning in the late 1970s, process calculi such as Calculus of Communicating Systems (CCS) and Communicating Sequential Processes (CSP) were developed to permit algebraic reasoning about systems composed of interacting components. The π-calculus added the capability for reasoning about dynamic topologies.
|
Concurrent algorithm
| 0.838456
|
1,568
|
One of the earliest applications of PairWise to problems in bioinformatics was by Ewan Birney. Frameshifting refers to the phenomenon where, in one DNA strand, there is more than one translation frame. For normal Protein-DNA alignment tools, they first choose one of three frames to translate the DNA into a protein sequence, and then compare it with the given protein. Such alignment is based on the assumption that the DNA translation frame is not interrupted for the whole DNA strand.
|
Pairwise Algorithm
| 0.838443
|
1,569
|
In theoretical computer science, nondeterministic constraint logic is a combinatorial system in which an orientation is given to the edges of a weighted undirected graph, subject to certain constraints. One can change this orientation by steps in which a single edge is reversed, subject to the same constraints. The constraint logic problem and its variants have been proven to be PSPACE-complete to determine whether there exists a sequence of moves that reverses a specified edge and are very useful to show various games and puzzles are PSPACE-hard or PSPACE-complete. This is a form of reversible logic in that each sequence of edge orientation changes can be undone. The hardness of this problem has been used to prove that many games and puzzles have high game complexity.
|
Constraint logic problem
| 0.838436
|
1,570
|
In practice, in bialgebras, this map is required to be the identity, which can be obtained by normalizing the counit by dividing by dimension ($\epsilon := \tfrac{1}{n}\operatorname{tr}$), so in these cases the normalizing constant corresponds to dimension. Alternatively, it may be possible to take the trace of operators on an infinite-dimensional space; in this case a (finite) trace is defined, even though no (finite) dimension exists, and gives a notion of "dimension of the operator". These fall under the rubric of "trace class operators" on a Hilbert space, or more generally nuclear operators on a Banach space.
|
Vector space dimension
| 0.838421
|
1,571
|
Firstly, it allows for a definition of a notion of dimension when one has a trace but no natural sense of basis. For example, one may have an algebra $A$ with maps $\eta : K \to A$ (the inclusion of scalars, called the unit) and a map $\epsilon : A \to K$ (corresponding to trace, called the counit). The composition $\epsilon \circ \eta : K \to K$ is a scalar (being a linear operator on a 1-dimensional space) that corresponds to the "trace of identity", and gives a notion of dimension for an abstract algebra.
|
Vector space dimension
| 0.838421
|
1,572
|
Translating back to the lattice world by using the theorem above and using a lattice-theoretical analogue of the V(R) construction, called the dimension monoid, introduced by Wehrung in 1998, yields the following result. Theorem (Wehrung 2004). There exists a distributive (∨,0,1)-semilattice of cardinality ℵ1 that is not isomorphic to Conc L, for any modular lattice L every finitely generated sublattice of which has finite length. Problem 3 (Goodearl 1991). Is the positive cone of any dimension group with order-unit isomorphic to V(R), for some von Neumann regular ring R?
|
Congruence lattice problem
| 0.838416
|
1,573
|
Theorem (Wehrung 1999). Let R be a von Neumann regular ring. Then the (∨,0)-semilattices Idc R and Conc L(R) are both isomorphic to the maximal semilattice quotient of V(R).
|
Congruence lattice problem
| 0.838416
|
1,574
|
The proof of the negative solution for CLP shows that the problem of representing distributive semilattices by compact congruences of lattices already appears for congruence lattices of semilattices. The question whether the structure of a partially ordered set would cause similar problems is answered by the following result. Theorem (Wehrung 2008). For any distributive (∨,0)-semilattice S, there are a (∧,0)-semilattice P and a map μ: P × P → S such that the following conditions hold: (1) x ≤ y implies that μ(x,y)=0, for all x, y in P. (2) μ(x,z) ≤ μ(x,y) ∨ μ(y,z), for all x, y, z in P.
|
Congruence lattice problem
| 0.838416
|
1,575
|
Theorem. Every distributive (∨,0)-semilattice of cardinality at most ℵ1 is isomorphic to (1) Conc L, for some locally finite, relatively complemented modular lattice L (Tůma 1998 and Grätzer, Lakser, and Wehrung 2000). (2) The semilattice of finitely generated two-sided ideals of some (not necessarily unital) von Neumann regular ring (Wehrung 2000).
|
Congruence lattice problem
| 0.838416
|
1,576
|
It should be observed that while the transition homomorphisms used in the Ershov-Pudlák Theorem are (∨,0)-embeddings, the transition homomorphisms used in the result above are not necessarily one-to-one, for example when one tries to represent the three-element chain. Practically this does not cause much trouble, and makes it possible to prove the following results.
|
Congruence lattice problem
| 0.838416
|
1,577
|
Furthermore, it follows from deep 1998 results of universal algebra by Kearnes and Szendrei in so-called commutator theory of varieties that the result above can be extended from the variety of all lattices to any variety $\mathcal{V}$ such that all Con A, for $A \in \mathcal{V}$, satisfy a fixed nontrivial identity in the signature (∨,∧) (in short, with a nontrivial congruence identity). We should also mention that many attempts at CLP were also based on the following result, first proved by Bulman-Fleming and McDowell in 1978 by using a categorical 1974 result of Shannon; see also Goodearl and Wehrung in 2001 for a direct argument. Theorem (Bulman-Fleming and McDowell 1978). Every distributive (∨,0)-semilattice is a direct limit of finite Boolean (∨,0)-semilattices and (∨,0)-homomorphisms.
|
Congruence lattice problem
| 0.838416
|
1,578
|
The problem remained open until it was recently solved in the negative by Tůma and Wehrung. Theorem (Tůma and Wehrung 2006). There exists a diagram D of finite Boolean (∨,0)-semilattices and (∨,0,1)-embeddings, indexed by a finite partially ordered set, that cannot be lifted, with respect to the Conc functor, by any diagram of lattices and lattice homomorphisms. In particular, this implies immediately that CLP has no functorial solution.
|
Congruence lattice problem
| 0.838416
|
1,579
|
Theorem (Pudlák 1985). There exists a direct limits preserving functor Φ, from the category of all distributive lattices with zero and 0-lattice embeddings to the category of all lattices with zero and 0-lattice embeddings, such that ConcΦ is naturally equivalent to the identity. Furthermore, Φ(S) is a finite atomistic lattice, for any finite distributive (∨,0)-semilattice S. This result is improved further, by an even far more complex construction, to locally finite, sectionally complemented modular lattices by Růžička in 2004 and 2006. Pudlák asked in 1985 whether his result above could be extended to the whole category of distributive (∨,0)-semilattices with (∨,0)-embeddings.
|
Congruence lattice problem
| 0.838416
|
1,580
|
The approach of CLP suggested by Pudlák in his 1985 paper is different. It is based on the following result, Fact 4, p. 100 in Pudlák's 1985 paper, obtained earlier by Yuri L. Ershov as the main theorem in Section 3 of the Introduction of his 1977 monograph. Theorem (Ershov 1977, Pudlák 1985). Every distributive (∨,0)-semilattice is the directed union of its finite distributive (∨,0)-subsemilattices.
|
Congruence lattice problem
| 0.838416
|
1,581
|
Comparing with classical mechanics, which gives accurate results dealing with macroscopic objects moving much slower than the speed of light, total momentum of the two colliding bodies is frame-dependent. In the center of momentum frame, according to classical mechanics, this agrees with the relativistic calculation $u_1 = -v_1$, despite other differences. One of the postulates in Special Relativity states that the laws of physics, such as conservation of momentum, should be invariant in all inertial frames of reference.
|
Elastic interaction
| 0.838393
|
1,582
|
In physics, an elastic collision is an encounter (collision) between two bodies in which the total kinetic energy of the two bodies remains the same. In an ideal, perfectly elastic collision, there is no net conversion of kinetic energy into other forms such as heat, noise, or potential energy. During the collision of small objects, kinetic energy is first converted to potential energy associated with a repulsive or attractive force between the particles (when the particles move against this force, i.e. the angle between the force and the relative velocity is obtuse), then this potential energy is converted back to kinetic energy (when the particles move with this force, i.e. the angle between the force and the relative velocity is acute). Collisions of atoms are elastic, for example Rutherford backscattering.
|
Elastic interaction
| 0.838393
|
1,583
|
A general form triangle has six main characteristics (see picture): three linear (side lengths a, b, c) and three angular (α, β, γ). The classical plane trigonometry problem is to specify three of the six characteristics and determine the other three. A triangle can be uniquely determined in this sense when given any of the following: three sides (SSS); two sides and the included angle (SAS, side-angle-side); two sides and an angle not included between them (SSA), if the side length adjacent to the angle is shorter than the other side length; a side and the two angles adjacent to it (ASA); or a side, the angle opposite to it and an angle adjacent to it (AAS). For all cases in the plane, at least one of the side lengths must be specified. If only the angles are given, the side lengths cannot be determined, because any similar triangle is a solution.
|
Solution of triangles
| 0.838382
|
1,584
|
Solution of triangles (Latin: solutio triangulorum) is the main trigonometric problem of finding the characteristics of a triangle (angles and lengths of sides), when some of these are known. The triangle can be located on a plane or on a sphere. Applications requiring triangle solutions include geodesy, astronomy, construction, and navigation.
|
Solution of triangles
| 0.838382
|
1,585
|
Spherical geometry differs from planar Euclidean geometry, so the solution of spherical triangles is built on different rules. For example, the sum of the three angles α + β + γ depends on the size of the triangle.
|
Solution of triangles
| 0.838382
|
1,586
|
The Krull dimension of a commutative ring is the maximal length of chains of prime ideals in it, a chain of length n being a sequence $\mathcal{P}_0 \subsetneq \mathcal{P}_1 \subsetneq \cdots \subsetneq \mathcal{P}_n$ of prime ideals related by inclusion. It is strongly related to the dimension of an algebraic variety, because of the natural correspondence between sub-varieties and prime ideals of the ring of the polynomials on the variety. For an algebra over a field, the dimension as vector space is finite if and only if its Krull dimension is 0.
|
High-dimensional space
| 0.83838
|
1,587
|
In biology, there is generally no well established theory of measurement. However, the importance of the theoretical context is emphasized. Moreover, the theoretical context stemming from the theory of evolution leads to articulate the theory of measurement and historicity as a fundamental notion. Among the most developed fields of measurement in biology are the measurement of genetic diversity and species diversity.
|
Theory of measurement
| 0.838357
|
1,588
|
Many people measure their height in feet and inches and their weight in stone and pounds, to give just a few examples. Imperial units are used in many other places, for example, in many Commonwealth countries that are considered metricated, land area is measured in acres and floor space in square feet, particularly for commercial transactions (rather than government statistics). Similarly, gasoline is sold by the gallon in many countries that are considered metricated.
|
Theory of measurement
| 0.838357
|
1,589
|
Information theory recognises that all data are inexact and statistical in nature. Thus the definition of measurement is: "A set of observations that reduce uncertainty where the result is expressed as a quantity." This definition is implied in what scientists actually do when they measure something and report both the mean and statistics of the measurements.
|
Theory of measurement
| 0.838357
|
1,590
|
In quantum mechanics, a measurement is an action that determines a particular property (position, momentum, energy, etc.) of a quantum system. Quantum measurements are always statistical samples from a probability distribution; the distribution for many quantum phenomena is discrete. Quantum measurements alter quantum states and yet repeated measurements on a quantum state are reproducible. The measurement appears to act as a filter, changing the quantum state into one with the single measured quantum value. The unambiguous meaning of the quantum measurement is an unresolved fundamental problem in quantum mechanics; the most common interpretation is that when a measurement is performed, the wavefunction of the quantum system "collapses" to a single, definite value.
|
Theory of measurement
| 0.838357
|
1,591
|
However, in other fields such as statistics as well as the social and behavioural sciences, measurements can have multiple levels, which would include nominal, ordinal, interval and ratio scales. Measurement is a cornerstone of trade, science, technology and quantitative research in many disciplines. Historically, many measurement systems existed for the varied fields of human existence to facilitate comparisons in these fields.
|
Theory of measurement
| 0.838357
|
1,592
|
Pascal's work on this problem began an important correspondence between him and fellow mathematician Pierre de Fermat (1601-1665). Communicating through letters, the two continued to exchange their ideas and thoughts. These interactions led to the conception of basic probability theory. To this day, many gamblers still rely on the basic concepts of probability theory in order to make informed decisions while gambling.
|
Poker probability
| 0.838344
|
1,593
|
His work from 1550, titled Liber de Ludo Aleae, discussed the concepts of probability and how they were directly related to gambling. However, his work did not receive any immediate recognition since it was not published until after his death. Blaise Pascal (1623-1662) also contributed to probability theory.
|
Poker probability
| 0.838344
|
1,594
|
Probability and gambling have been ideas since long before the invention of poker. The development of probability theory in the late 1400s was attributed to gambling; when playing a game with high stakes, players wanted to know what the chance of winning would be. In 1494, Fra Luca Paccioli released his work Summa de arithmetica, geometria, proportioni e proportionalita which was the first written text on probability. Motivated by Paccioli's work, Girolamo Cardano (1501-1576) made further developments in probability theory.
|
Poker probability
| 0.838344
|
1,595
|
The cumulative probability is determined by adding one hand's probability with the probabilities of all hands above it. The Odds are defined as the ratio of the number of ways not to draw the hand, to the number of ways to draw it. In statistics, this is called odds against.
|
Poker probability
| 0.838344
|
1,596
|
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline. For example, for differential geometry, the top-level code is 53, and the second-level codes are: A for classical differential geometry, B for local differential geometry, C for global differential geometry, and D for symplectic geometry and contact geometry. In addition, the special second-level code "-" is used for specific kinds of materials.
|
Mathematics Subject Classification
| 0.838341
|
1,597
|
For physics papers the Physics and Astronomy Classification Scheme (PACS) is often used. Due to the large overlap between mathematics and physics research it is quite common to see both PACS and MSC codes on research papers, particularly for multidisciplinary journals and repositories such as the arXiv. The ACM Computing Classification System (CCS) is a similar hierarchical classification scheme for computer science. There is some overlap between the AMS and ACM classification schemes, in subjects related to both mathematics and computer science, however the two schemes differ in the details of their organization of those topics.
|
Mathematics Subject Classification
| 0.838341
|
1,598
|
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including fluid mechanics, quantum mechanics, geophysics, and optics and electromagnetic theory. All valid MSC classification codes must have at least the first-level identifier.
|
Mathematics Subject Classification
| 0.838341
|
1,599
|
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used. The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example: 53 is the classification for differential geometry; 53A is the classification for classical differential geometry; 53A45 is the classification for vector and tensor analysis.
|
Mathematics Subject Classification
| 0.838341
|
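The rows above follow a fixed four-column schema (id, text, source, similarity). As a minimal illustrative sketch of working with records in that shape, and assuming the preview has been exported to a JSONL file (the filename `retrieval_preview.jsonl` and the similarity threshold below are hypothetical, not part of the dataset), one might filter passages by similarity score like this:

```python
import json

# Minimal sketch: read records with the schema shown above
# (id: int, text: str, source: str, similarity: float) and keep
# only the passages whose similarity score clears a threshold.
# File name and threshold are illustrative assumptions.

def load_records(path):
    """Yield one dict per line of a JSONL export of the table."""
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line:
                yield json.loads(line)

def filter_by_similarity(records, threshold=0.8385):
    """Return (source, similarity, text) triples at or above the threshold."""
    return [
        (rec["source"], rec["similarity"], rec["text"])
        for rec in records
        if rec["similarity"] >= threshold
    ]

if __name__ == "__main__":
    rows = load_records("retrieval_preview.jsonl")  # hypothetical export path
    for source, score, text in filter_by_similarity(rows)[:5]:
        print(f"{score:.6f}  {source}: {text[:80]}...")
```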