| id (int32: 0 – 100k) | text (string: 21 – 3.54k chars) | source (string: 1 – 124 chars) | similarity (float32: 0.78 – 0.88) |
|---|---|---|---|
1,200
|
Plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. Plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. Some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. This information can reveal a great deal about how the land was thousands of years ago and how it has changed over that time.
|
Plant sciences
| 0.840526
|
1,201
|
Other historically important work sought to provide biotic indices for classifying waters according to the biotic communities that they supported. This work continues to this day in Europe in the development of classification tools for assessing water bodies for the EU water framework directive. A hydrobiologist technician conducts field analysis for hydrobiology. They identify plants and living species, locate their habitat, and count them.
|
Hydrobiology
| 0.840526
|
1,202
|
Hydrobiology is the science of life and life processes in water. Much of modern hydrobiology can be viewed as a sub-discipline of ecology but the sphere of hydrobiology includes taxonomy, economic and industrial biology, morphology, and physiology. The one distinguishing aspect is that all fields relate to aquatic organisms. Most work is related to limnology and can be divided into lotic system ecology (flowing waters) and lentic system ecology (still waters).
|
Hydrobiology
| 0.840526
|
1,203
|
The biologist technician usually has a training level of bac +2 or bac +3: DUT in biological engineering, options biological and biochemical analyses (ABB) or environmental engineering; BTSA water professions; BTS GEMEAU – water management and control; BTS and regional controls; BTSA Agricultural, Biological and Biotechnological Analyses (ANABIOTEC); DEUST analysis of biological media; Bachelor's degree in biology. The engineer in hydrobiology has a training level of bac +5: engineering school diploma (INA, ENSA, Polytech Montpellier sciences and water technologies); master's degree in environmental sciences or biology (training examples: environmental management and coastal ecology (University of La Rochelle); biology of organisms and populations (University of Burgundy); continental and coastal environments sciences; Environment, Soils, Waters and Biodiversity (University of Rouen); operation and restoration of continental aquatic environments (University of Clermont-Ferrand)), etc.
|
Hydrobiology
| 0.840526
|
1,204
|
The following are the research interests of hydrobiologists: Acidification impact on lake and reservoir ecosystems; Ocean acidification; Paleolimnology of remote mountain lakes; Molecular ecology, phylogeography and taxonomy of Cladocera; Chemical communication in plankton (prey-predator interaction); Biomanipulation of water reservoirs; Phosphorus and nitrogen nutrient cycles.
|
Hydrobiology
| 0.840526
|
1,205
|
The NCNR provides scientists access to a variety of neutron scattering instruments, which they use in many research fields (materials science, fuel cells, biotechnology, etc.). The SURF III Synchrotron Ultraviolet Radiation Facility is a source of synchrotron radiation, in continuous operation since 1961. SURF III now serves as the US national standard for source-based radiometry throughout the generalized optical spectrum.
|
Physical Measurement Laboratory
| 0.84049
|
1,206
|
The reports confirm suspicions and technical grounds publicly raised by cryptographers in 2007 that the EC-DRBG could contain a kleptographic backdoor (perhaps placed in the standard by NSA). NIST responded to the allegations, stating that "NIST works to publish the strongest cryptographic standards possible" and that it uses "a transparent, public process to rigorously vet our recommended standards". The agency stated that "there has been some confusion about the standards development process and the role of different organizations in it...The National Security Agency (NSA) participates in the NIST cryptography process because of its recognized expertise. NIST is also required by statute to consult with the NSA." Recognizing the concerns expressed, the agency reopened the public comment period for the SP800-90 publications, promising that "if vulnerabilities are found in these or any other NIST standards, we will work with the cryptographic community to address them as quickly as possible". Due to public concern over this cryptovirology attack, NIST rescinded the EC-DRBG algorithm from the NIST SP 800-90 standard.
|
Physical Measurement Laboratory
| 0.84049
|
1,207
|
In February 2014 NIST published the NIST Cybersecurity Framework, which serves as voluntary guidance for organizations to manage and reduce cybersecurity risk. It was later amended, and Version 1.1 was published in April 2018. Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, made the Framework mandatory for U.S. federal government agencies. An extension to the NIST Cybersecurity Framework is the Cybersecurity Maturity Model Certification (CMMC), which was introduced in 2019 (though the origin of CMMC began with Executive Order 13556). It emphasizes the importance of implementing Zero-trust architecture (ZTA), which focuses on protecting resources over the network perimeter.
|
Physical Measurement Laboratory
| 0.84049
|
1,208
|
Four scientific researchers at NIST have been awarded Nobel Prizes for work in physics: William Daniel Phillips in 1997, Eric Allin Cornell in 2001, John Lewis Hall in 2005 and David Jeffrey Wineland in 2012, which is the largest number for any US government laboratory. All four were recognized for their work related to laser cooling of atoms, which is directly related to the development and advancement of the atomic clock. In 2011, Dan Shechtman was awarded the Nobel Prize in chemistry for his work on quasicrystals in the Metallurgy Division from 1982 to 1984. In addition, John Werner Cahn was awarded the 2011 Kyoto Prize for Materials Science, and the National Medal of Science has been awarded to NIST researchers Cahn (1998) and Wineland (2007). Other notable people who have worked at NBS or NIST include:
|
Physical Measurement Laboratory
| 0.84049
|
1,209
|
All NASA-borne, extreme-ultraviolet observation instruments have been calibrated at SURF since the 1970s, and SURF is used for measurement and characterization of systems for extreme ultraviolet lithography. The Center for Nanoscale Science and Technology (CNST) performs research in nanotechnology, both through internal research efforts and by running a user-accessible cleanroom nanomanufacturing facility. This "NanoFab" is equipped with tools for lithographic patterning and imaging (e.g., electron microscopes and atomic force microscopes).
|
Physical Measurement Laboratory
| 0.84049
|
1,210
|
In computer vision, face images have been used extensively to develop facial recognition systems, face detection, and many other projects that use images of faces.
|
Comparison of datasets in machine learning
| 0.840453
|
1,211
|
As datasets come in myriad formats and can sometimes be difficult to use, there has been considerable work put into curating and standardizing the format of datasets to make them easier to use for machine learning research. OpenML: Web platform with Python, R, Java, and other APIs for downloading hundreds of machine learning datasets, evaluating algorithms on datasets, and benchmarking algorithm performance against dozens of other algorithms. PMLB: A large, curated repository of benchmark datasets for evaluating supervised machine learning algorithms. Provides classification and regression datasets in a standardized format that are accessible through a Python API.
|
Comparison of datasets in machine learning
| 0.840453
|
1,212
|
The data portals which are suitable for a specific subtype of machine learning application are listed in the subsequent sections.
|
Comparison of datasets in machine learning
| 0.840453
|
1,213
|
The data portal sometimes lists a wide variety of subtypes of datasets pertaining to many machine learning applications.
|
Comparison of datasets in machine learning
| 0.840453
|
1,214
|
These datasets consist primarily of text for tasks such as natural language processing, sentiment analysis, translation, and cluster analysis.
|
Comparison of datasets in machine learning
| 0.840453
|
1,215
|
These datasets are applied for machine learning (ML) research and have been cited in peer-reviewed academic journals. Datasets are an integral part of the field of machine learning. Major advances in this field can result from advances in learning algorithms (such as deep learning), computer hardware, and, less-intuitively, the availability of high-quality training datasets. High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data.
|
Comparison of datasets in machine learning
| 0.840453
|
1,216
|
In the case of nickelocene, the extra two electrons are in orbitals which are weakly metal–carbon antibonding; this is why it often participates in reactions where the M–C bonds are broken and the electron count of the metal changes to 18. The 20-electron systems TM(CO)8− (TM = Sc, Y) have a cubic (Oh) equilibrium geometry and a singlet (1A1g) electronic ground state. There is one occupied valence MO with a2u symmetry, which is formed only by ligand orbitals without a contribution from the metal AOs. But the adducts TM(CO)8− (TM = Sc, Y) fulfill the 18-electron rule when one considers only those valence electrons which occupy metal–ligand bonding orbitals.
|
18 electron rule
| 0.840413
|
1,217
|
In mathematics and statistics, a quantitative variable may be continuous or discrete, according to whether it is typically obtained by measuring or by counting, respectively. If it can take on two particular real values such that it can also take on all real values between them (even values that are arbitrarily close together), the variable is continuous in that interval. If it can take on a value such that there is a non-infinitesimal gap on each side of it containing no values that the variable can take on, then it is discrete around that value. In some contexts a variable can be discrete in some ranges of the number line and continuous in others.
|
Discrete variables
| 0.840397
|
1,218
|
This is, for the moment, purely theoretical, as no one knows how to build an efficient quantum computer. Quantum complexity theory has been developed to study the complexity classes of problems solved using quantum computers. It is used in post-quantum cryptography, which consists of designing cryptographic protocols that are resistant to attacks by quantum computers.
|
Computational complexity
| 0.840391
|
1,219
|
A quantum computer is a computer whose model of computation is based on quantum mechanics. The Church–Turing thesis applies to quantum computers; that is, every problem that can be solved by a quantum computer can also be solved by a Turing machine. However, some problems may theoretically be solved with a much lower time complexity using a quantum computer rather than a classical computer.
|
Computational complexity
| 0.840391
|
1,220
|
The solution of some problems, typically in computer algebra and computational algebraic geometry, may be very large. In such a case, the complexity is lower bounded by the maximal size of the output, since the output must be written. For example, a system of n polynomial equations of degree d in n indeterminates may have up to $d^{n}$ complex solutions, if the number of solutions is finite (this is Bézout's theorem).
|
Computational complexity
| 0.840391
|
1,221
|
Adaptive-control, Data Mining, Engineering Design, Feature Selection, Function Approximation, Game-Play, Image Classification, Knowledge Handling, Medical Diagnosis, Modeling, Navigation, Optimization, Prediction, Querying, Robotics, Routing, Rule-Induction, Scheduling, Strategy
|
Classifier system
| 0.840372
|
1,222
|
Learning classifier systems, or LCS, are a paradigm of rule-based machine learning methods that combine a discovery component (e.g. typically a genetic algorithm) with a learning component (performing either supervised learning, reinforcement learning, or unsupervised learning). Learning classifier systems seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions (e.g. behavior modeling, classification, data mining, regression, function approximation, or game strategy). This approach allows complex solution spaces to be broken up into smaller, simpler parts. The founding concepts behind learning classifier systems came from attempts to model complex adaptive systems, using rule-based agents to form an artificial cognitive system (i.e. artificial intelligence).
|
Classifier system
| 0.840372
|
1,223
|
The name, "Learning Classifier System (LCS)", is a bit misleading since there are many machine learning algorithms that 'learn to classify' (e.g. decision trees, artificial neural networks), but are not LCSs. The term 'rule-based machine learning (RBML)' is useful, as it more clearly captures the essential 'rule-based' component of these systems, but it also generalizes to methods that are not considered to be LCSs (e.g. association rule learning, or artificial immune systems). More general terms such as, 'genetics-based machine learning', and even 'genetic algorithm' have also been applied to refer to what would be more characteristically defined as a learning classifier system. Due to their similarity to genetic algorithms, Pittsburgh-style learning classifier systems are sometimes generically referred to as 'genetic algorithms'.
|
Classifier system
| 0.840372
|
1,224
|
As a result, the LCS paradigm can be flexibly applied to many problem domains that call for machine learning. The major divisions among LCS implementations are as follows: (1) Michigan-style architecture vs. Pittsburgh-style architecture, (2) reinforcement learning vs. supervised learning, (3) incremental learning vs. batch learning, (4) online learning vs. offline learning, (5) strength-based fitness vs. accuracy-based fitness, and (6) complete action mapping vs best action mapping. These divisions are not necessarily mutually exclusive. For example, XCS, the best known and best studied LCS algorithm, is Michigan-style, was designed for reinforcement learning but can also perform supervised learning, applies incremental learning that can be either online or offline, applies accuracy-based fitness, and seeks to generate a complete action mapping.
|
Classifier system
| 0.840372
|
1,225
|
As a result, LCS algorithms are rarely considered in comparison to other established machine learning approaches. This is likely due to the following factors: (1) LCS is a relatively complicated algorithmic approach; (2) LCS rule-based modeling is a different paradigm of modeling than almost all other machine learning approaches; (3) LCS software implementations are not as common.
|
Classifier system
| 0.840372
|
1,226
|
Typically, most parameters can be left at the community-determined defaults, with the exception of two critical parameters: maximum rule population size and the maximum number of learning iterations. Optimizing these parameters is likely to be very problem dependent. Notoriety: Despite their age, LCS algorithms are still not widely known even in machine learning communities.
|
Classifier system
| 0.840372
|
1,227
|
Limited Software Availability: There are a limited number of open source, accessible LCS implementations, and even fewer that are designed to be user friendly or accessible to machine learning practitioners. Interpretation: While LCS algorithms are certainly more interpretable than some advanced machine learners, users must interpret a set of rules (sometimes large sets of rules) to comprehend the LCS model. Methods for rule compaction and interpretation strategies remain an area of active research. Theory/Convergence Proofs: There is a relatively small body of theoretical work behind LCS algorithms.
|
Classifier system
| 0.840372
|
1,228
|
The discipline of combinatorial topology used combinatorial concepts in topology, and in the early 20th century this turned into the field of algebraic topology. In 1978, the situation was reversed – methods from algebraic topology were used to solve a problem in combinatorics – when László Lovász proved the Kneser conjecture, thus beginning the new study of topological combinatorics. Lovász's proof used the Borsuk-Ulam theorem, and this theorem retains a prominent role in this new field. This theorem has many equivalent versions and analogs and has been used in the study of fair division problems. Topics in this area include Sperner's lemma and regular maps.
|
Discrete geometry
| 0.840366
|
1,229
|
Discrete geometry and combinatorial geometry are branches of geometry that study combinatorial properties and constructive methods of discrete geometric objects. Most questions in discrete geometry involve finite or discrete sets of basic geometric objects, such as points, lines, planes, circles, spheres, polygons, and so forth. The subject focuses on the combinatorial properties of these objects, such as how they intersect one another, or how they may be arranged to cover a larger object. Discrete geometry has a large overlap with convex geometry and computational geometry, and is closely related to subjects such as finite geometry, combinatorial optimization, digital geometry, discrete differential geometry, geometric graph theory, toric geometry, and combinatorial topology.
|
Discrete geometry
| 0.840366
|
1,230
|
Digital geometry deals with discrete sets (usually discrete point sets) considered to be digitized models or images of objects of the 2D or 3D Euclidean space. Simply put, digitizing is replacing an object by a discrete set of its points. The images we see on the TV screen, the raster display of a computer, or in newspapers are in fact digital images. Its main application areas are computer graphics and image analysis.
|
Discrete geometry
| 0.840366
|
1,231
|
Although polyhedra and tessellations had been studied for many years by people such as Kepler and Cauchy, modern discrete geometry has its origins in the late 19th century. Early topics studied were: the density of circle packings by Thue, projective configurations by Reye and Steinitz, the geometry of numbers by Minkowski, and map colourings by Tait, Heawood, and Hadwiger. László Fejes Tóth, H.S.M. Coxeter, and Paul Erdős laid the foundations of discrete geometry.
|
Discrete geometry
| 0.840366
|
1,232
|
A polytope is a geometric object with flat sides, which exists in any general number of dimensions. A polygon is a polytope in two dimensions, a polyhedron in three dimensions, and so on in higher dimensions (such as a 4-polytope in four dimensions). Some theories further generalize the idea to include such objects as unbounded polytopes (apeirotopes and tessellations), and abstract polytopes. The following are some of the aspects of polytopes studied in discrete geometry: Polyhedral combinatorics; Lattice polytopes; Ehrhart polynomials; Pick's theorem; Hirsch conjecture.
|
Discrete geometry
| 0.840366
|
1,233
|
Thus, $S_{c}$ partitions into equivalence classes. Each equivalence class comprises a collection of quadratic irrationalities with each pair equivalent through the action of some matrix. Serret's theorem implies that the regular continued fraction expansions of equivalent quadratic irrationalities are eventually the same, that is, their sequences of partial quotients have the same tail.
|
Quadratic irrationalities
| 0.840321
|
1,234
|
This defines an injection from the quadratic irrationals to quadruples of integers, so their cardinality is at most countable; since on the other hand every square root of a prime number is a distinct quadratic irrational, and there are countably many prime numbers, they are at least countable; hence the quadratic irrationals are a countable set. Quadratic irrationals are used in field theory to construct field extensions of the field of rational numbers Q. Given the square-free integer c, the augmentation of Q by quadratic irrationals using √c produces a quadratic field Q(√c). For example, the inverses of elements of Q(√c) are of the same form as the above algebraic numbers: ${\frac {d}{a+b{\sqrt {c}}}}={\frac {ad-bd{\sqrt {c}}}{a^{2}-b^{2}c}}.$
|
Quadratic irrationalities
| 0.840321
|
1,235
|
In mathematics, a quadratic irrational number (also known as a quadratic irrational or quadratic surd) is an irrational number that is the solution to some quadratic equation with rational coefficients which is irreducible over the rational numbers. Since fractions in the coefficients of a quadratic equation can be cleared by multiplying both sides by their least common denominator, a quadratic irrational is an irrational root of some quadratic equation with integer coefficients. The quadratic irrational numbers, a subset of the complex numbers, are algebraic numbers of degree 2, and can therefore be expressed as ${\frac {a+b{\sqrt {c}}}{d}},$ for integers a, b, c, d; with b, c and d non-zero, and with c square-free. When c is positive, we get real quadratic irrational numbers, while a negative c gives complex quadratic irrational numbers which are not real numbers.
|
Quadratic irrationalities
| 0.840321
|
1,236
|
The Feit–Thompson theorem states that every finite group of odd order is solvable. In particular this implies that if a finite group is simple, it is either a prime cyclic or of even order.
|
Solvable groups
| 0.84029
|
1,237
|
Stoney units is a system of geometrized units in which the Coulomb constant and the elementary charge are included. Hartree atomic units are a system of units used in atomic physics, particularly for describing the properties of electrons. The atomic units have been chosen to use several constants relating to the electron: the electron mass, the elementary charge, the Coulomb constant and the reduced Planck constant. The unit of energy in this system is the total energy of the electron in the Bohr atom and called the Hartree energy. The unit of length is the Bohr radius.
|
Systems of measurement
| 0.840229
|
1,238
|
Natural units are units of measurement defined in terms of universal physical constants in such a manner that selected physical constants take on the numerical value of one when expressed in terms of those units. Natural units are so named because their definition relies on only properties of nature and not on any human construct. Varying systems of natural units are possible, depending on the choice of constants used. Some examples are as follows: Geometrized unit systems are useful in relativistic physics.
|
Systems of measurement
| 0.840229
|
1,239
|
In dynamic covalent chemistry covalent bonds are broken and formed in a reversible reaction under thermodynamic control. While covalent bonds are key to the process, the system is directed by non-covalent forces to form the lowest energy structures.
|
Macromolecular system
| 0.840215
|
1,240
|
A major application of supramolecular chemistry is the design and understanding of catalysts and catalysis. Non-covalent interactions are extremely important in catalysis, binding reactants into conformations suitable for reaction and lowering the transition state energy of the reaction. Template-directed synthesis is a special case of supramolecular catalysis. Encapsulation systems such as micelles, dendrimers, and cavitands are also used in catalysis to create microenvironments suitable for reactions (or steps in reactions) to progress in ways that are not possible on a macroscopic scale.
|
Macromolecular system
| 0.840215
|
1,241
|
Porphyrins and phthalocyanines have highly tunable photochemical and electrochemical activity as well as the potential to form complexes. Photochromic and photoisomerizable groups can change their shapes and properties, including binding properties, upon exposure to light. Tetrathiafulvalene (TTF) and quinones have multiple stable oxidation states, and therefore can be used in redox reactions and electrochemistry. Other units, such as benzidine derivatives, viologens, and fullerenes, are useful in supramolecular electrochemical devices.
|
Macromolecular system
| 0.840215
|
1,242
|
Many supramolecular systems require their components to have suitable spacing and conformations relative to each other, and therefore easily employed structural units are required. Commonly used spacers and connecting groups include polyether chains, biphenyls and triphenyls, and simple alkyl chains. The chemistry for creating and connecting these units is very well understood. Nanoparticles, nanorods, fullerenes and dendrimers offer nanometer-sized structure and encapsulation units.
|
Macromolecular system
| 0.840215
|
1,243
|
A computer experiment or simulation experiment is an experiment used to study a computer simulation, also referred to as an in silico system. This area includes computational physics, computational chemistry, computational biology and other similar disciplines.
|
Computer experiment
| 0.8402
|
1,244
|
Modeling of computer experiments typically uses a Bayesian framework. Bayesian statistics is an interpretation of the field of statistics where all evidence about the true state of the world is explicitly expressed in the form of probabilities. In the realm of computer experiments, the Bayesian interpretation would imply we must form a prior distribution that represents our prior belief about the structure of the computer model. The use of this philosophy for computer experiments started in the 1980s and is nicely summarized by Sacks et al. (1989).
|
Computer experiment
| 0.8402
|
1,245
|
$(\mathrm{d}G)_{T,P}=\left({\frac {\partial G}{\partial \xi }}\right)_{T,P}\,\mathrm{d}\xi.$ If we introduce the stoichiometric coefficient for the i-th component in the reaction, $\nu _{i}=\partial N_{i}/\partial \xi$ (negative for reactants), which tells how many molecules of i are produced or consumed, we obtain an algebraic expression for the partial derivative $\left({\frac {\partial G}{\partial \xi }}\right)_{T,P}=\sum _{i}\mu _{i}\nu _{i}=-\mathbb{A},$ where we introduce a concise and historical name for this quantity, the "affinity", symbolized by A, as introduced by Théophile de Donder in 1923. (De Donder; Prigogine & Defay, p. 69; Guggenheim, pp.
|
Chemical Thermodynamics
| 0.840195
|
1,246
|
In solution chemistry and biochemistry, the Gibbs free energy decrease (∂G/∂ξ, in molar units, denoted cryptically by ΔG) is commonly used as a surrogate for (−T times) the global entropy produced by spontaneous chemical reactions in situations where no work is being done; or at least no "useful" work; i.e., other than perhaps ± P dV. The assertion that all spontaneous reactions have a negative ΔG is merely a restatement of the second law of thermodynamics, giving it the physical dimensions of energy and somewhat obscuring its significance in terms of entropy. When no useful work is being done, it would be less misleading to use the Legendre transforms of the entropy appropriate for constant T, or for constant T and P, the Massieu functions −F/T and −G/T, respectively.
|
Chemical Thermodynamics
| 0.840195
|
1,247
|
Functional genomics is a field of molecular biology that attempts to describe gene (and protein) functions and interactions. Functional genomics makes use of the vast data generated by genomic and transcriptomic projects (such as genome sequencing projects and RNA sequencing). Functional genomics focuses on the dynamic aspects such as gene transcription, translation, regulation of gene expression and protein–protein interactions, as opposed to the static aspects of the genomic information such as DNA sequence or structures. A key characteristic of functional genomics studies is their genome-wide approach to these questions, generally involving high-throughput methods rather than a more traditional "candidate-gene" approach.
|
Functional genomics
| 0.840161
|
1,248
|
The promise of functional genomics is to generate and synthesize genomic and proteomic knowledge into an understanding of the dynamic properties of an organism. This could potentially provide a more complete picture of how the genome specifies function compared to studies of single genes. Integration of functional genomics data is often a part of systems biology approaches.
|
Functional genomics
| 0.840161
|
1,249
|
In seeking to explain atomic spectra, an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry (quantum chemistry) and spectroscopy.
|
Atom physics
| 0.84015
|
1,250
|
The invention of the periodic system of elements by Dmitri Mendeleev was another great step forward. The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics.
|
Atom physics
| 0.84015
|
1,251
|
One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms. It forms a part of the texts written in 6th century BC to 2nd century BC, such as those of Democritus or Vaiśeṣika Sūtra written by Kaṇāda. This theory was later developed in the modern sense of the basic unit of a chemical element by the British chemist and physicist John Dalton in the 18th century. At this stage, it wasn't clear what atoms were, although they could be described and classified by their properties (in bulk).
|
Atom physics
| 0.84015
|
1,252
|
In mathematics, a plane is a two-dimensional space or flat surface that extends indefinitely. A plane is the two-dimensional analogue of a point (zero dimensions), a line (one dimension) and three-dimensional space. When working exclusively in two-dimensional Euclidean space, the definite article is used, so the Euclidean plane refers to the whole space. Many fundamental tasks in mathematics, geometry, trigonometry, graph theory, and graphing are performed in a two-dimensional or planar space.
|
Two-dimensional space
| 0.840143
|
1,253
|
An algorithm that uses geometric invariants to vote for object hypotheses. Similar to pose clustering; however, instead of voting on pose, we are now voting on geometry. A technique originally developed for matching geometric features (uncalibrated affine views of plane models) against a database of such features. Widely used for pattern-matching, CAD/CAM, and medical imaging. It is difficult to choose the size of the buckets, and it is hard to be sure what "enough" means. Therefore, there may be some danger that the table will get clogged.
|
Object classification
| 0.840129
|
1,254
|
Object recognition – technology in the field of computer vision for finding and identifying objects in an image or video sequence. Humans recognize a multitude of objects in images with little effort, despite the fact that the image of the objects may vary somewhat from different viewpoints, in many different sizes and scales, or even when they are translated or rotated. Objects can even be recognized when they are partially obstructed from view. This task is still a challenge for computer vision systems. Many approaches to the task have been implemented over multiple decades.
|
Object classification
| 0.840129
|
1,255
|
For any A, B, and C subgroups of a group with A ≤ C (A subgroup of C) then AB ∩ C = A(B ∩ C); the multiplication here is the product of subgroups. This property has been called the modular property of groups (Aschbacher 2000) or (Dedekind's) modular law (Robinson 1996, Cohn 2000). Since for two normal subgroups the product is actually the smallest subgroup containing the two, the normal subgroups form a modular lattice. The Lattice theorem establishes a Galois connection between the lattice of subgroups of a group and that of its quotients.
|
Lattice of subgroups
| 0.840102
|
1,256
|
Subgroups with certain properties form lattices, but other properties do not. Normal subgroups always form a modular lattice. In fact, the essential property that guarantees that the lattice is modular is that subgroups commute with each other, i.e. that they are quasinormal subgroups. Nilpotent normal subgroups form a lattice, which is (part of) the content of Fitting's theorem.
|
Lattice of subgroups
| 0.840102
|
1,257
|
A common variant of the problem, assumed by several academic authors as the canonical problem, does not make the simplifying assumption that the host must uniformly choose the door to open, but instead that he uses some other strategy. The confusion as to which formalization is authoritative has led to considerable acrimony, particularly because this variant makes proofs more involved without altering the optimality of the always-switch strategy for the player. In this variant, the player can have different probabilities of winning depending on the observed choice of the host, but in any case the probability of winning by switching is at least 1/2 (and can be as high as 1), while the overall probability of winning by switching is still exactly 2/3. The variants are sometimes presented in succession in textbooks and articles intended to teach the basics of probability theory and game theory. A considerable number of other generalizations have also been studied.
|
Monty Hall problem
| 0.840064
|
1,258
|
One discussant (William Bell) considered it a matter of taste whether one explicitly mentions that (under the standard conditions) which door is opened by the host is independent of whether one should want to switch. Among the simple solutions, the "combined doors solution" comes closest to a conditional solution, as we saw in the discussion of approaches using the concept of odds and Bayes' theorem. It is based on the deeply rooted intuition that revealing information that is already known does not affect probabilities.
|
Monty Hall problem
| 0.840064
|
1,259
|
Many probability text books and articles in the field of probability theory derive the conditional probability solution through a formal application of Bayes' theorem; among them books by Gill and Henze. Use of the odds form of Bayes' theorem, often called Bayes' rule, makes such a derivation more transparent. Initially, the car is equally likely to be behind any of the three doors: the odds on door 1, door 2, and door 3 are 1 : 1 : 1. This remains the case after the player has chosen door 1, by independence. According to Bayes' rule, the posterior odds on the location of the car, given that the host opens door 3, are equal to the prior odds multiplied by the Bayes factor or likelihood, which is, by definition, the probability of the new piece of information (host opens door 3) under each of the hypotheses considered (location of the car).
|
Monty Hall problem
| 0.840064
|
1,260
|
The emission spectrum of atomic hydrogen has been divided into a number of spectral series, with wavelengths given by the Rydberg formula. These observed spectral lines are due to the electron making transitions between two energy levels in an atom. The classification of the series by the Rydberg formula was important in the development of quantum mechanics. The spectral series are important in astronomical spectroscopy for detecting the presence of hydrogen and calculating red shifts.
|
Hydrogen spectrum
| 0.840052
|
1,261
|
The Bohr model was later replaced by quantum mechanics in which the electron occupies an atomic orbital rather than an orbit, but the allowed energy levels of the hydrogen atom remained the same as in the earlier theory. Spectral emission occurs when an electron transitions, or jumps, from a higher energy state to a lower energy state. To distinguish the two states, the lower energy state is commonly designated as n′, and the higher energy state is designated as n. The energy of an emitted photon corresponds to the energy difference between the two states.
|
Hydrogen spectrum
| 0.840052
|
1,262
|
In molecular biology, the iron response element or iron-responsive element (IRE) is a short conserved stem-loop which is bound by iron response proteins (IRPs, also named IRE-BP or IRBP). The IRE is found in UTRs (untranslated regions) of various mRNAs whose products are involved in iron metabolism. For example, the mRNA of ferritin (an iron storage protein) contains one IRE in its 5' UTR. When iron concentration is low, IRPs bind the IRE in the ferritin mRNA and cause reduced translation rates. In contrast, binding to multiple IREs in the 3' UTR of the transferrin receptor (involved in iron acquisition) leads to increased mRNA stability.
|
Iron response element
| 0.840002
|
1,263
|
The Unit dummy force method provides a convenient means for computing displacements in structural systems. It is applicable for both linear and non-linear material behaviours as well as for systems subject to environmental effects, and hence more general than Castigliano's second theorem.
|
Unit dummy force method
| 0.839959
|
1,264
|
In April 2013, the company released a new version of the sequencer called the "PacBio RS II" that uses all 150,000 ZMW holes concurrently, doubling the throughput per experiment. The highest throughput mode in November 2013 used P5 binding, C3 chemistry, BluePippin size selection, and a PacBio RS II, officially yielding 350 million bases per SMRT Cell, though a human de novo data set released with the chemistry averaged 500 million bases per SMRT Cell. Throughput varies based on the type of sample being sequenced. With the introduction of P6-C4 chemistry, typical throughput per SMRT Cell increased to 500 million bases to 1 billion bases.
|
SMRT sequencing
| 0.839912
|
1,265
|
Throughput per SMRT cell is around 500 million bases, demonstrated by sequencing results from the CHM1 cell line. On October 15, 2014, PacBio announced the release of new chemistry P6-C4 for the RS II system, which represents the company's 6th generation of polymerase and 4th generation chemistry, further extending the average read length to 10,000 - 15,000 bases, with the longest reads exceeding 40,000 bases. The throughput with the new chemistry was estimated between 500 million to 1 billion bases per SMRT Cell, depending on the sample being sequenced. This was the final version of chemistry released for the RS instrument.
|
SMRT sequencing
| 0.839912
|
1,266
|
The resulting P4 attributes provided higher-quality assemblies using fewer SMRT Cells and with improved variant calling. When coupled with input DNA size selection (using an electrophoresis instrument such as BluePippin), it yields average read lengths of over 7 kilobases. On October 3, 2013, PacBio released a new reagent combination for the PacBio RS II, the P5 DNA polymerase with C3 chemistry (P5-C3). Together, they extend sequencing read lengths to an average of approximately 8,500 bases, with the longest reads exceeding 30,000 bases.
|
SMRT sequencing
| 0.839912
|
1,267
|
At commercialization, read length had a normal distribution with a mean of about 1100 bases. A new chemistry kit released in early 2012 increased the sequencer's read length; an early customer of the chemistry cited mean read lengths of 2500 to 2900 bases. The XL chemistry kit released in late 2012 increased average read length to more than 4300 bases. On August 21, 2013, PacBio released a new DNA polymerase Binding Kit P4. This P4 enzyme has average read lengths of more than 4,300 bases when paired with the C2 sequencing chemistry and more than 5,000 bases when paired with the XL chemistry. The enzyme's accuracy is similar to C2, reaching QV50 between 30X and 40X coverage.
|
SMRT sequencing
| 0.839912
|
1,268
|
Sequencing performance can be measured in read length, accuracy, and total throughput per experiment. PacBio sequencing systems using ZMWs have the advantage of long read lengths, although error rates are on the order of 5-15% and sample throughput is lower than Illumina sequencing platforms. On 19 Sep 2018, Pacific Biosciences released the Sequel 6.0 chemistry, synchronizing the chemistry version with the software version. Performance is contrasted for large-insert libraries with high molecular weight DNA versus shorter-insert libraries below ~15,000 bases in length. For larger templates, average read lengths are up to 30,000 bases. For shorter-insert libraries, average read lengths are up to 100,000 bases while reading the same molecule in a circle several times. The latter shorter-insert libraries then yield up to 50 billion bases from a single SMRT Cell.
|
SMRT sequencing
| 0.839912
|
1,269
|
Motor units within a motor pool are recruited in a stereotypical order, from motor units that produce small amounts of force per spike, to those producing the largest force per spike. The gradient of motor unit force is correlated with a gradient in motor neuron soma size and motor neuron electrical excitability. This relationship was described by Elwood Henneman and is known as Henneman's size principle, a fundamental discovery of neuroscience and an organizing principle of motor control. For tasks requiring small forces, such as continual adjustment of posture, motor units with fewer muscle fibers that are slowly-contracting, but less fatigable, are used. As more force is required, motor units with fast twitch, fast-fatigable muscle fibers are recruited. [Schematic: as the required force increases over time, slow Type I motor units are recruited first, followed by Type IIA and then Type IIB units.]
|
Motor control
| 0.839875
|
1,270
|
As of 2011, some US clinical laboratories nevertheless used assays sold for "research use only". Laboratory processes need to adhere to regulations, such as the Clinical Laboratory Improvement Amendments, Health Insurance Portability and Accountability Act, Good Laboratory Practice, and Food and Drug Administration specifications in the United States. Laboratory Information Management Systems help by tracking these processes. Regulation applies to both staff and supplies. As of 2012, twelve US states require molecular pathologists to be licensed; several boards such as the American Board of Medical Genetics and the American Board of Pathology certify technologists, supervisors, and laboratory directors. Automation and sample barcoding maximise throughput and reduce the possibility of error or contamination during manual handling and results reporting. Single devices to do the assay from beginning to end are now available.
|
Molecular diagnosis
| 0.839866
|
1,271
|
The industrialisation of molecular biology assay tools has made it practical to use them in clinics. Miniaturisation into a single handheld device can bring medical diagnostics into the clinic and into the office or home. The clinical laboratory requires high standards of reliability; diagnostics may require accreditation or fall under medical device regulations.
|
Molecular diagnosis
| 0.839866
|
1,272
|
Molecular diagnostics is a collection of techniques used to analyze biological markers in the genome and proteome, and how their cells express their genes as proteins, applying molecular biology to medical testing. In medicine the technique is used to diagnose and monitor disease, detect risk, and decide which therapies will work best for individual patients, and in agricultural biosecurity similarly to monitor crop- and livestock disease, estimate risk, and decide what quarantine measures must be taken. By analysing the specifics of the patient and their disease, molecular diagnostics offers the prospect of personalised medicine. These tests are useful in a range of medical specialties, including infectious disease, oncology, human leucocyte antigen typing (which investigates and predicts immune function), coagulation, and pharmacogenomics—the genetic prediction of which drugs will work best. They overlap with clinical chemistry (medical tests on bodily fluids).
|
Molecular diagnosis
| 0.839866
|
1,273
|
The age of the Sun cannot be measured directly; one way to estimate it is from the age of the oldest meteorites, and models of the evolution of the Solar System. The composition in the photosphere of the modern-day Sun, by mass, is 74.9% hydrogen and 23.8% helium. All heavier elements, called metals in astronomy, account for less than 2 percent of the mass. The SSM is used to test the validity of stellar evolution theory. In fact, the only way to determine the two free parameters of the stellar evolution model, the helium abundance and the mixing length parameter (used to model convection in the Sun), is to adjust the SSM to "fit" the observed Sun.
|
Standard Solar Model
| 0.83986
|
1,274
|
The standard solar model (SSM) is a mathematical treatment of the Sun as a spherical ball of gas (in varying states of ionisation, with the hydrogen in the deep interior being a completely ionised plasma). This model, technically the spherically symmetric quasi-static model of a star, has stellar structure described by several differential equations derived from basic physical principles. The model is constrained by boundary conditions, namely the luminosity, radius, age and composition of the Sun, which are well determined.
|
Standard Solar Model
| 0.83986
|
1,275
|
The numerical solution of the differential equations of stellar structure requires equations of state for the pressure, opacity and energy generation rate, as described in stellar structure, which relate these variables to the density, temperature and composition.
|
Standard Solar Model
| 0.83986
|
1,276
|
The differential equations of stellar structure, such as the equation of hydrostatic equilibrium, are integrated numerically. The differential equations are approximated by difference equations. The star is imagined to be made up of spherically symmetric shells and the numerical integration carried out in finite steps making use of the equations of state, giving relationships for the pressure, the opacity and the energy generation rate in terms of the density, temperature and composition.
|
Standard Solar Model
| 0.83986
|
1,277
|
The SSM serves two purposes: it provides estimates for the helium abundance and mixing length parameter by forcing the stellar model to have the correct luminosity and radius at the Sun's age, and it provides a way to evaluate more complex models with additional physics, such as rotation, magnetic fields and diffusion, or improvements to the treatment of convection, such as modelling turbulence and convective overshooting. Like the Standard Model of particle physics and the standard cosmology model, the SSM changes over time in response to relevant new theoretical or experimental physics discoveries.
|
Standard Solar Model
| 0.83986
|
1,278
|
Since there are many nuclear species, a computerised reaction network is needed to keep track of how all the abundances vary together. According to the Vogt–Russell theorem, the mass and the composition structure throughout a star uniquely determine its radius, luminosity, and internal structure, as well as its subsequent evolution (though this "theorem" was only intended to apply to the slow, stable phases of stellar evolution and certainly does not apply to the transitions between stages and rapid evolutionary stages). The information about the varying abundances of nuclear species over time, along with the equations of state, is sufficient for a numerical solution by taking sufficiently small time increments and using iteration to find the unique internal structure of the star at each stage.
|
Standard Solar Model
| 0.83986
|
1,279
|
For simplicity, the stellar structure equations are written without explicit time dependence, with the exception of the luminosity gradient equation: Here L is the luminosity, ε is the nuclear energy generation rate per unit mass and εν is the luminosity due to neutrino emission (see below for the other quantities). The slow evolution of the Sun on the main sequence is then determined by the change in the nuclear species (principally hydrogen being consumed and helium being produced). The rates of the various nuclear reactions are estimated from particle physics experiments at high energies, which are extrapolated back to the lower energies of stellar interiors (the Sun burns hydrogen rather slowly).
|
Standard Solar Model
| 0.83986
|
1,280
|
Nuclear reactions in the core of the Sun change its composition, by converting hydrogen nuclei into helium nuclei by the proton–proton chain and (to a lesser extent in the Sun than in more massive stars) the CNO cycle. This increases the mean molecular weight in the core of the Sun, which should lead to a decrease in pressure. This does not happen as instead the core contracts. By the virial theorem half of the gravitational potential energy released by this contraction goes towards raising the temperature of the core, and the other half is radiated away.
|
Standard Solar Model
| 0.83986
|
1,281
|
pgEd is working with Sandra de Castro Buffington and Hollywood, Health & Society at the Norman Lear Center, University of Southern California (USC) Annenberg School for Communication, to advance awareness about personal genetics through television. They have also worked with the Broad Institute on outreach via fiction.
|
Personal Genetics Education Project
| 0.839853
|
1,282
|
pgEd hosts the annual GETed conference, a meeting that brings together experts from across the United States and beyond in education, research, health, entertainment, and policy to develop strategies for accelerating public awareness. Topics covered during these conferences have included reproductive technologies, human behavior and cognition, microbiomes, the intersection of faith and genetics, interplanetary travel, the importance of engaging the political sphere, and the power of entertainment and gaming to reach millions.
|
Personal Genetics Education Project
| 0.839853
|
1,283
|
The Personal Genetics Education Project (pgEd) aims to engage and inform a worldwide audience about the benefits of knowing one's genome as well as the ethical, legal and social issues (ELSI) and dimensions of personal genetics. pgEd was founded in 2006, is housed in the Department of Genetics at Harvard Medical School and is directed by Ting Wu, a professor in that department. It employs a variety of strategies for reaching general audiences, including generating online curricular materials, leading discussions in classrooms, workshops, and conferences, developing a mobile educational game (Map-Ed), holding an annual conference geared toward accelerating awareness (GETed), and working with the world of entertainment to improve accuracy and outreach.
|
Personal Genetics Education Project
| 0.839853
|
1,284
|
pgEd's advisory board includes Sandra de Castro Buffington, Director, Hollywood Health and Society, George M. Church, Professor of Genetics, Harvard Medical School, Juan Enriquez, Managing Director at Excel Venture Management, and Marc Hodosh, Co-Creator of TEDMED.
|
Personal Genetics Education Project
| 0.839853
|
1,285
|
pgEd develops tools for teachers and general audiences that examine the potential benefits and risks of personalized genome analysis. These include freely accessible, interactive lesson plans that tackle issues such as genetic testing of minors, reproductive genetics, complex human traits and genetics, and the history of eugenics. pgEd also engages educators at conferences as well as organizes professional development workshops. All of pgEd's materials are freely available online.
|
Personal Genetics Education Project
| 0.839853
|
1,286
|
In 2013, pgEd created a mobile educational quiz called Map-Ed. Map-Ed invites players to work their way through five questions that address key concepts in genetics and then pin themselves on a world map. Within weeks of its launch, Map-Ed gained over 1,000 pins around the world, spanning across all 7 continents. Translations and new maps linked to questions on topics broadly related to genetics are in development.
|
Personal Genetics Education Project
| 0.839853
|
1,287
|
In physics, circular motion is a movement of an object along the circumference of a circle or rotation along a circular arc. It can be uniform, with a constant rate of rotation and constant tangential speed, or non-uniform with a changing rate of rotation. The rotation around a fixed axis of a three-dimensional body involves the circular motion of its parts. The equations of motion describe the movement of the center of mass of a body, which remains at a constant distance from the axis of rotation.
|
Circular motion
| 0.839847
|
1,288
|
The advent of inexpensive microarray experiments created several specific bioinformatics challenges: the multiple levels of replication in experimental design (Experimental design); the number of platforms and independent groups and data format (Standardization); the statistical treatment of the data (Data analysis); mapping each probe to the mRNA transcript that it measures (Annotation); the sheer volume of data and the ability to share it (Data warehousing).
|
DNA microarray experiment
| 0.839844
|
1,289
|
The puzzle cannot be solved: it is impossible to change the string MI into MU by repeatedly applying the given rules. In other words, MU is not a theorem of the MIU formal system. To prove this, one must step "outside" the formal system itself. In order to prove assertions like this, it is often beneficial to look for an invariant; that is, some quantity or property that doesn't change while applying the rules.
|
MU puzzle
| 0.839839
|
1,290
|
In her textbook, Discrete Mathematics with Applications, Susanna S. Epp uses the MU puzzle to introduce the concept of recursive definitions, and begins the relevant chapter with a quote from GEB.
|
MU puzzle
| 0.839839
|
1,291
|
The MU puzzle is a puzzle stated by Douglas Hofstadter and found in Gödel, Escher, Bach involving a simple formal system called "MIU". Hofstadter's motivation is to contrast reasoning within a formal system (i.e., deriving theorems) against reasoning about the formal system itself. MIU is an example of a Post canonical system and can be reformulated as a string rewriting system.
|
MU puzzle
| 0.839839
|
1,292
|
On this level, the MU puzzle can be seen to be impossible. The inability of the MIU system to express or deduce facts about itself, such as the inability to derive MU, is a consequence of its simplicity. However, more complex formal systems, such as systems of mathematical logic, may possess this ability. This is the key idea behind Gödel's incompleteness theorem.
|
MU puzzle
| 0.839839
|
1,293
|
Under what conditions do smooth solutions exist for the Navier–Stokes equations, which are the equations that describe the flow of a viscous fluid? This problem, for an incompressible fluid in three dimensions, is also one of the Millennium Prize Problems in mathematics. Turbulent flow: Is it possible to make a theoretical model to describe the statistics of a turbulent flow (in particular, its internal structures)?
|
Open problems in physics
| 0.839823
|
1,294
|
What is the relation between BQP and NP? Can quantum algorithms go beyond BQP? Post-quantum cryptography: can we prove that some cryptographic protocols are safe against quantum computers? Quantum capacity: The capacity of a quantum channel is in general not known.
|
Open problems in physics
| 0.839823
|
1,295
|
Temperature: can quantum computing be performed at non-cryogenic temperatures? Can we build room temperature quantum computers? Complexity classes problems: what is the relation of BQP and BPP?
|
Open problems in physics
| 0.839823
|
1,296
|
Threshold theorem: Can we go beyond the noisy intermediate-scale quantum era? Can quantum computers reach fault tolerance? Is it possible to have enough qubit scalability to implement quantum error correction?
|
Open problems in physics
| 0.839823
|
1,297
|
It is unknown whether this is due to unknown physics (such as sterile neutrinos), experimental error in the measurements, or errors in the theoretical flux calculations. Strong CP problem and axions: Why is the strong nuclear interaction invariant to parity and charge conjugation? Is Peccei–Quinn theory the solution to this problem?
|
Open problems in physics
| 0.839823
|
1,298
|
Is there a theory that can explain the masses of particular quarks and leptons in particular generations from first principles (a theory of Yukawa couplings)? Neutrino mass: What is the mass of neutrinos, and do they follow Dirac or Majorana statistics? Is the mass hierarchy normal or inverted?
|
Open problems in physics
| 0.839823
|
1,299
|
Hierarchy problem: Why is gravity such a weak force? It becomes strong for particles only at the Planck scale, around 1019 GeV, much above the electroweak scale (100 GeV, the energy scale dominating physics at low energies). Why are these scales so different from each other? What prevents quantities at the electroweak scale, such as the Higgs boson mass, from getting quantum corrections on the order of the Planck scale?
|
Open problems in physics
| 0.839823
|