| id (int32) | text (string) | source (string) | similarity (float32) |
|---|---|---|---|
2,600 | History of geodesy – history of the scientific discipline that deals with the measurement and representation of the Earth, including its gravitational field, in a three-dimensional time-varying space History of geography – history of the science that studies the lands, features, inhabitants, and phenomena of Earth History of geoinformatics – history of the science and the technology which develops and uses information science infrastructure to address the problems of geography, geosciences and related branches of engineering. History of geology – history of the study of the Earth, with the general exclusion of present-day life, flow within the ocean, and the atmosphere. History of planetary geology – history of the planetary science discipline concerned with the geology of the celestial bodies such as the planets and their moons, asteroids, comets, and meteorites. | Physical Sciences | 0.83369 |
2,601 | History of environmental soil science – history of the study of the interaction of humans with the pedosphere as well as critical aspects of the biosphere, the lithosphere, the hydrosphere, and the atmosphere. History of environmental geology – history of the applied science, akin to hydrogeology, concerned with the practical application of the principles of geology in the solving of environmental problems. History of toxicology – history of the branch of biology, chemistry, and medicine concerned with the study of the adverse effects of chemicals on living organisms. | Physical Sciences | 0.83369 |
2,602 | History of freshwater biology – history of the scientific biological study of freshwater ecosystems, a branch of limnology History of marine biology – history of the scientific study of organisms in the ocean or other marine or brackish bodies of water History of parasitology – history of the study of parasites, their hosts, and the relationship between them. History of population dynamics – history of the branch of life sciences that studies short-term and long-term changes in the size and age composition of populations, and the biological and environmental processes influencing those changes. History of environmental chemistry – history of the scientific study of the chemical and biochemical phenomena that occur in natural places. | Physical Sciences | 0.83369 |
2,603 | History of climatology – history of the study of climate, scientifically defined as weather conditions averaged over a period of time History of coastal geography – history of the study of the dynamic interface between the ocean and the land, incorporating both the physical geography (i.e. coastal geomorphology, geology and oceanography) and the human geography (sociology and history) of the coast. History of environmental science – history of an integrated, quantitative, and interdisciplinary approach to the study of environmental systems. History of ecology – history of the scientific study of the distribution and abundance of living organisms and how the distribution and abundance are affected by interactions between the organisms and their environment. | Physical Sciences | 0.83369 |
2,604 | History of atmospheric sciences – history of the umbrella term for the study of the atmosphere, its processes, the effects other systems have on the atmosphere, and the effects of the atmosphere on these other systems. History of climatology History of meteorology History of atmospheric chemistry History of biogeography – history of the study of the distribution of species (biology), organisms, and ecosystems in geographic space and through geological time. History of cartography – history of the study and practice of making maps or globes. | Physical Sciences | 0.83369 |
2,605 | History of nanotechnology – history of the study of manipulating matter on an atomic and molecular scale History of oenology – history of the science and study of all aspects of wine and winemaking except vine-growing and grape-harvesting, which is a subfield called viticulture. History of spectroscopy – history of the study of the interaction between matter and radiated energy History of surface science – history of the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. History of Earth science – history of the all-embracing term for the sciences related to the planet Earth. Earth science, and all of its branches, are branches of physical science. | Physical Sciences | 0.83369 |
2,606 | History of polymer chemistry – history of the multidisciplinary science that deals with the chemical synthesis and chemical properties of polymers or macromolecules. History of solid-state chemistry – history of the study of the synthesis, structure, and properties of solid phase materials, particularly, but not necessarily exclusively of, non-molecular solids Multidisciplinary fields involving chemistry History of chemical biology – history of the scientific discipline spanning the fields of chemistry and biology that involves the application of chemical techniques and tools, often compounds produced through synthetic chemistry, to the study and manipulation of biological systems. History of chemical engineering – history of the branch of engineering that applies physical science (e.g., chemistry and physics), life sciences (e.g., biology, microbiology and biochemistry), mathematics, and economics to the process of converting raw materials or chemicals into more useful or valuable forms. | Physical Sciences | 0.83369 |
2,607 | History of mathematical chemistry – history of the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena. History of mechanochemistry – history of the coupling of the mechanical and the chemical phenomena on a molecular scale and includes mechanical breakage, chemical behavior of mechanically stressed solids (e.g., stress-corrosion cracking), tribology, polymer degradation under shear, cavitation-related phenomena (e.g., sonochemistry and sonoluminescence), shock wave chemistry and physics, and even the burgeoning field of molecular machines. History of physical organic chemistry – history of the study of the interrelationships between structure and reactivity in organic molecules. | Physical Sciences | 0.83369 |
2,608 | History of organic geochemistry – history of the study of the impacts and processes that organisms have had on Earth History of regional, environmental and exploration geochemistry – history of the study of the spatial variation in the chemical composition of materials at the surface of the Earth History of inorganic chemistry – history of the branch of chemistry concerned with the properties and behavior of inorganic compounds. History of nuclear chemistry – history of the subfield of chemistry dealing with radioactivity, nuclear processes, and nuclear properties. History of radiochemistry – history of the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable). | Physical Sciences | 0.83369 |
2,609 | History of flavor chemistry – history of the use of chemistry to engineer artificial and natural flavors. History of flow chemistry – history of running chemical reactions in a continuously flowing stream rather than in batch production. History of geochemistry – history of the study of the mechanisms behind major geological systems using chemistry History of aqueous geochemistry – history of the study of the role of various elements in watersheds, including copper, sulfur, mercury, and how elemental fluxes are exchanged through atmospheric-terrestrial-aquatic interactions History of isotope geochemistry – history of the study of the relative and absolute concentrations of the elements and their isotopes using chemistry and geology History of ocean chemistry – history of the study of the chemistry of marine environments including the influences of different variables. | Physical Sciences | 0.83369 |
2,610 | History of environmental chemistry – history of the scientific study of the chemical and biochemical phenomena that occur in natural places. History of immunochemistry – history of the branch of chemistry that involves the study of the reactions and components of the immune system. History of medicinal chemistry – history of the discipline at the intersection of chemistry, especially synthetic organic chemistry, and pharmacology and various other biological specialties, involved with the design, chemical synthesis, and development for market of pharmaceutical agents (drugs). | Physical Sciences | 0.83369 |
2,611 | It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology, and other disciplines History of biochemistry – history of the study of chemical processes in living organisms, including, but not limited to, living matter. Biochemistry governs all living organisms and living processes. History of agrochemistry – history of the study of both chemistry and biochemistry which are important in agricultural production, the processing of raw products into foods and beverages, and in environmental monitoring and remediation. | Physical Sciences | 0.83369 |
2,612 | History of vehicle dynamics – history of the dynamics of vehicles, here assumed to be ground vehicles. History of chemistry – history of the physical science of atomic matter (matter that is composed of chemical elements), especially its chemical reactions, but also including its properties, structure, composition, behavior, and changes as they relate to chemical reactions History of analytical chemistry – history of the study of the separation, identification, and quantification of the chemical components of natural and artificial materials. History of astrochemistry – history of the study of the abundance and reactions of chemical elements and molecules in the universe, and their interaction with radiation. History of cosmochemistry – history of the study of the chemical composition of matter in the universe and the processes that led to those compositions History of atmospheric chemistry – history of the branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. | Physical Sciences | 0.83369 |
2,613 | History of quantum physics – history of the branch of physics dealing with physical phenomena where the action is on the order of the Planck constant. History of theory of relativity – History of statics – history of the branch of mechanics concerned with the analysis of loads (force, torque/moment) on physical systems in static equilibrium, that is, in a state where the relative positions of subsystems do not vary over time, or where components and structures are at a constant velocity. History of solid state physics – history of the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography, electromagnetism, and metallurgy. | Physical Sciences | 0.83369 |
2,614 | History of psychophysics – history of the discipline that quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce. History of plasma physics – history of the state of matter similar to gas in which a certain portion of the particles are ionized. History of polymer physics – history of the field of physics that studies polymers, their fluctuations, mechanical properties, as well as the kinetics of reactions involving degradation and polymerization of polymers and monomers respectively. | Physical Sciences | 0.83369 |
2,615 | History of nuclear physics – history of the field of physics that studies the building blocks and interactions of atomic nuclei. History of optics – history of the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it. History of particle physics – history of the branch of physics that studies the existence and interactions of particles that are the constituents of what is usually referred to as matter or radiation. | Physical Sciences | 0.83369 |
2,616 | History of fluid mechanics – history of the study of fluids and the forces on them. History of quantum mechanics – history of the branch of physics dealing with physical phenomena where the action is on the order of the Planck constant. History of thermodynamics – history of the branch of physical science concerned with heat and its relation to other forms of energy and work. | Physical Sciences | 0.83369 |
2,617 | History of geophysics – history of the physics of the Earth and its environment in space; also the study of the Earth using quantitative physical methods History of materials physics – history of the use of physics to describe materials in many different ways such as force, heat, light and mechanics. History of mathematical physics – history of the application of mathematics to problems in physics and the development of mathematical methods for such applications and for the formulation of physical theories. History of mechanics – history of the branch of physics concerned with the behavior of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. | Physical Sciences | 0.83369 |
2,618 | History of condensed matter physics – history of the study of the physical properties of condensed phases of matter. History of cryogenics – history of the study of the production of very low temperatures (below −150 °C, −238 °F, or 123 K) and the behavior of materials at those temperatures. History of dynamics – history of the study of the causes of motion and changes in motion History of econophysics – history of the interdisciplinary research field applying theories and methods originally developed by physicists in order to solve problems in economics History of electromagnetism – history of the branch of science concerned with the forces that occur between electrically charged particles. | Physical Sciences | 0.83369 |
2,619 | History of neurophysics – history of the branch of biophysics dealing with the nervous system. History of chemical physics – history of the branch of physics that studies chemical processes from the point of view of physics. History of computational physics – history of the study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists. | Physical Sciences | 0.83369 |
2,620 | History of physical cosmology – history of the study of the largest-scale structures and dynamics of the universe and is concerned with fundamental questions about its formation and evolution. History of planetary science – history of the scientific study of planets (including Earth), moons, and planetary systems, in particular those of the Solar System and the processes that form them. History of stellar astronomy – history of the natural science that deals with the study of celestial objects (such as stars, planets, comets, nebulae, star clusters, and galaxies) and phenomena that originate outside the atmosphere of Earth (such as cosmic background radiation) History of atmospheric physics – history of the study of the application of physics to the atmosphere History of atomic, molecular, and optical physics – history of the study of how matter and light interact History of biophysics – history of the study of physical processes relating to biology History of medical physics – history of the application of physics concepts, theories and methods to medicine. | Physical Sciences | 0.83369 |
2,621 | History of astrometry – history of the branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies. History of cosmology – history of the discipline that deals with the nature of the Universe as a whole. History of extragalactic astronomy – history of the branch of astronomy concerned with objects outside our own Milky Way Galaxy History of galactic astronomy – history of the study of our own Milky Way galaxy and all its contents. | Physical Sciences | 0.83369 |
2,622 | History of physics – history of the physical science that studies matter and its motion through space-time, and related concepts such as energy and force History of acoustics – history of the study of mechanical waves in solids, liquids, and gases (such as vibration and sound) History of agrophysics – history of the study of physics applied to agroecosystems History of soil physics – history of the study of soil physical properties and processes. History of astrophysics – history of the study of the physical aspects of celestial objects History of astronomy – history of the study of the universe beyond Earth, including its formation and development, and the evolution, physics, chemistry, meteorology, and motion of celestial objects (such as galaxies, planets, etc.) and phenomena that originate outside the atmosphere of Earth (such as the cosmic background radiation). History of astrodynamics – history of the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets and other spacecraft. | Physical Sciences | 0.83369 |
2,623 | History of physical science – history of the branch of natural science that studies non-living systems, in contrast to the life sciences. It in turn has many branches, each referred to as a "physical science", together called the "physical sciences". However, the term "physical" creates an unintended, somewhat arbitrary distinction, since many branches of physical science also study biological phenomena (organic chemistry, for example). The four main branches of physical science are astronomy, physics, chemistry, and the Earth sciences, which include meteorology and geology. | Physical Sciences | 0.83369 |
2,624 | Physics – branch of science that studies matter and its motion through space and time, along with related concepts such as energy and force. Physics is one of the "fundamental sciences" because the other natural sciences (like biology, geology etc.) deal with systems that seem to obey the laws of physics. According to physics, the physical laws of matter, energy and the fundamental forces of nature govern the interactions between particles and physical entities (such as planets, molecules, atoms or the subatomic particles). Some of the basic pursuits of physics, which include some of the most prominent developments in modern science in the last millennium, include: Describing the nature, measuring and quantifying of bodies and their motion, dynamics etc. Newton's laws of motion Mass, force and weight Momentum and conservation of energy Gravity, theories of gravity Energy, work, and their relationship Motion, position, and energy Different forms of Energy, their interconversion and the inevitable loss of energy in the form of heat (Thermodynamics) Energy conservation, conversion, and transfer. Energy source the transfer of energy from one source to work in another. Kinetic molecular theory Phases of matter and phase transitions Temperature and thermometers Energy and heat Heat flow: conduction, convection, and radiation The four laws of thermodynamics The principles of waves and sound The principles of electricity, magnetism, and electromagnetism The principles, sources, and properties of light | Physical Sciences | 0.83369 |
2,625 | Physical science can be described as all of the following: A branch of science (a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe). A branch of natural science – natural science is a major branch of science that tries to explain and predict nature's phenomena, based on empirical evidence. In natural science, hypotheses must be verified scientifically to be regarded as scientific theory. Validity, accuracy, and social mechanisms ensuring quality control, such as peer review and repeatability of findings, are amongst the criteria and methods used for this purpose. Natural science can be broken into two main branches: life science (for example biology) and physical science. Each of these branches, and all of their sub-branches, are referred to as natural sciences. | Physical Sciences | 0.83369 |
2,626 | Jaynes Probability Theory: The Logic of Science. Keynes saw numerical probabilities as special cases of probability, which did not have to be quantifiable or even comparable. Keynes, in chapter 3 of A Treatise on Probability, used the example of taking an umbrella in case of rain to express the idea of uncertainty that he dealt with by the use of interval estimates in chapters 3, 15, 16, and 17 of A Treatise on Probability. Intervals that overlap are not greater than, less than or equal to each other. | A Treatise on Probability | 0.833636 |
2,627 | According to the classical four-vertex theorem, every simple closed planar smooth curve must have at least four vertices. A more general fact is that every simple closed space curve which lies on the boundary of a convex body, or even bounds a locally convex disk, must have four vertices. Every curve of constant width must have at least six vertices. If a planar curve is bilaterally symmetric, it will have a vertex at the point or points where the axis of symmetry crosses the curve. Thus, the notion of a vertex for a curve is closely related to that of an optical vertex, the point where an optical axis crosses a lens surface. | Vertex (curve) | 0.833626 |
2,628 | In the geometry of plane curves, a vertex is a point where the first derivative of curvature is zero. This is typically a local maximum or minimum of curvature, and some authors define a vertex to be more specifically a local extremum of curvature. However, other special cases may occur, for instance when the second derivative is also zero, or when the curvature is constant. For space curves, on the other hand, a vertex is a point where the torsion vanishes. | Vertex (curve) | 0.833626 |
2,629 | BWT has also proved useful for sequence prediction, a common area of study in machine learning and natural-language processing. In particular, Ktistakis et al. proposed a sequence prediction scheme called SuBSeq that exploits the lossless compression of data of the Burrows–Wheeler transform. SuBSeq exploits BWT by extracting the FM-index and then performing a series of operations called backwardSearch, forwardSearch, neighbourExpansion, and getConsequents in order to search for predictions given a suffix. The predictions are then classified based on a weight and put into an array from which the element with the highest weight is given as the prediction from the SuBSeq algorithm. SuBSeq has been shown to outperform state-of-the-art algorithms for sequence prediction both in terms of training time and accuracy. | Block-sorting compression | 0.83362 |
2,630 | A number of optimizations can make these algorithms run more efficiently without changing the output. There is no need to represent the table in either the encoder or decoder. In the encoder, each row of the table can be represented by a single pointer into the strings, and the sort performed using the indices. In the decoder, there is also no need to store the table, and in fact no sort is needed at all. | Block-sorting compression | 0.83362 |
2,631 | Since any rotation of the input string will lead to the same transformed string, the BWT cannot be inverted without adding an EOF marker to the end of the input or doing something equivalent, making it possible to distinguish the input string from all its rotations. Increasing the size of the alphabet (by appending the EOF character) makes later compression steps awkward. There is a bijective version of the transform, by which the transformed string uniquely identifies the original, and the two have the same length and contain exactly the same characters, just in a different order. The bijective transform is computed by factoring the input into a non-increasing sequence of Lyndon words; such a factorization exists and is unique by the Chen–Fox–Lyndon theorem, and may be found in linear time. The algorithm sorts the rotations of all the words; as in the Burrows–Wheeler transform, this produces a sorted sequence of n strings. | Block-sorting compression | 0.83362 |
2,632 | If $\mathfrak{g}$ is a complex semisimple Lie algebra and $\mathfrak{h}$ is a Cartan subalgebra, we can construct a root system as follows. We say that $\alpha \in \mathfrak{h}^{*}$ is a root of $\mathfrak{g}$ relative to $\mathfrak{h}$ if $\alpha \neq 0$ and there exists some $X \neq 0 \in \mathfrak{g}$ such that $[H, X] = \alpha(H)X$ for all $H \in \mathfrak{h}$. One can show that there is an inner product for which the set of roots forms a root system. The root system of $\mathfrak{g}$ is a fundamental tool for analyzing the structure of $\mathfrak{g}$ and classifying its representations. (See the section below on Root systems and Lie theory.) | Positive root | 0.833608 |
2,633 | A vector $\lambda$ in E is called integral if its inner product with each coroot is an integer: $(\lambda, \alpha^{\vee}) \in \mathbb{Z}$ for each coroot $\alpha^{\vee}$. Since the set of $\alpha^{\vee}$ with $\alpha \in \Delta$ forms a base for the dual root system, to verify that $\lambda$ is integral, it suffices to check the above condition for $\alpha \in \Delta$. The set of integral elements is called the weight lattice associated to the given root system. This term comes from the representation theory of semisimple Lie algebras, where the integral elements form the possible weights of finite-dimensional representations. The definition of a root system guarantees that the roots themselves are integral elements. | Positive root | 0.833608 |
2,634 | The concept of a root system was originally introduced by Wilhelm Killing around 1889 (in German, Wurzelsystem). He used them in his attempt to classify all simple Lie algebras over the field of complex numbers. Killing originally made a mistake in the classification, listing two exceptional rank 4 root systems, when in fact there is only one, now known as F4. Cartan later corrected this mistake by showing Killing's two root systems were isomorphic. Killing investigated the structure of a Lie algebra $L$ by considering what is now called a Cartan subalgebra $\mathfrak{h}$. | Positive root | 0.833608 |
2,635 | In mathematics, a root system is a configuration of vectors in a Euclidean space satisfying certain geometrical properties. The concept is fundamental in the theory of Lie groups and Lie algebras, especially the classification and representation theory of semisimple Lie algebras. Since Lie groups (and some analogues such as algebraic groups) and Lie algebras have become important in many parts of mathematics during the twentieth century, the apparently special nature of root systems belies the number of areas in which they are applied. Further, the classification scheme for root systems, by Dynkin diagrams, occurs in parts of mathematics with no overt connection to Lie theory (such as singularity theory). Finally, root systems are important for their own sake, as in spectral graph theory. | Positive root | 0.833608 |
2,636 | In solid-state physics, an electron hole (usually referred to simply as a hole) is the absence of an electron from a full valence band. A hole is essentially a way to conceptualize the interactions of the electrons within a nearly full valence band of a crystal lattice, which is missing a small fraction of its electrons. In some ways, the behavior of a hole within a semiconductor crystal lattice is comparable to that of a bubble in a full bottle of water. The hole concept was pioneered in 1929 by Rudolf Peierls, who analyzed the Hall effect using Bloch's theorem and demonstrated that nearly full and nearly empty Brillouin zones give opposite Hall voltages. The concept of an electron hole in solid-state physics predates the concept of a hole in the Dirac equation, but there is no evidence that it influenced Dirac's thinking. | Electron holes | 0.833599 |
2,637 | Visibility (geometry) Art gallery problem (The museum problem) Visibility graph Watchman route problem Computer graphics applications: Hidden surface determination Hidden line removal Ray casting (not to be confused with ray tracing of computer graphics) | List of combinatorial computational geometry topics | 0.83358 |
2,638 | List of combinatorial computational geometry topics enumerates the topics of computational geometry that state problems in terms of geometric objects as discrete entities; hence the methods of their solution are mostly combinatorial theories and algorithms. See List of numerical computational geometry topics for another flavor of computational geometry, which deals with geometric objects as continuous entities and applies methods and algorithms characteristic of numerical analysis. | List of combinatorial computational geometry topics | 0.83358 |
2,639 | Overall, researchers believe pharmacogenomics will allow physicians to better tailor medicine to the needs of the individual patient. As of November 2016, the FDA has approved 204 drugs with pharmacogenetics information in its labeling. | Personal Genomics | 0.833579 |
2,640 | Since 2001, there has been an almost 550% increase in the number of research papers in PubMed related to the search terms pharmacogenomics and pharmacogenetics. This field allows researchers to better understand how genetic differences will influence the body's response to a drug and inform which medicine is most appropriate for the patient. These treatment plans will be able to prevent or at least minimize the adverse drug reactions which are a "significant cause of hospitalizations and deaths in the United States." | Personal Genomics | 0.833579 |
2,641 | These labels may describe genotype-specific dosing instructions and risk for adverse events amongst other information. Disease risk may be calculated based on genetic markers and genome-wide association studies for common medical conditions, which are multifactorial and include environmental components in the assessment. Diseases which are individually rare (less than 200,000 people affected in the USA) are nevertheless collectively common (affecting roughly 8–10% of the US population). Over 2,500 of these diseases (including a few more common ones) have predictive genetics of sufficiently high clinical impact that they are recommended as medical genetic tests available for single genes (and in whole genome sequencing) and growing at about 200 new genetic diseases per year. | Personal Genomics | 0.833579 |
2,642 | Molecular and Cellular Biochemistry is a peer-reviewed scientific journal covering research in cellular biology and biochemistry. It was a successor to the journal Enzymologia and was established in 1973 to make "it possible to extend the potentialities of the periodical". | Molecular and Cellular Biochemistry | 0.83357 |
2,643 | The Turing Test attempts to define when a machine might be said to possess human intelligence, while Turing's Wager is an argument aiming to demonstrate that characterising the brain mathematically will take over a thousand years. While building an artificial intelligence and mapping the human brain are both difficult endeavours, the former is actually a sub-problem of the latter (Thwaites et al. 2017). | Turing's Wager | 0.83356 |
2,644 | In computer science, conditional statements are used to make binary decisions. A program can perform different computations or actions depending on whether a certain boolean value evaluates to true or false. The if-then-else construct is a control flow statement which runs one of two code blocks depending on the value of a boolean expression, and its structure looks like this: if condition then code block 1 else code block 2 end. The conditional expression is condition, and if it is true, then code block 1 is executed; otherwise code block 2 is executed. It is also possible to combine multiple conditions with the else-if construct: if condition 1 then code block 1 else if condition 2 then code block 2 else code block 3 end. This can be represented by the flow diagram on the right. | Binary decision | 0.833556 |
2,645 | A binary decision is a choice between two alternatives, for instance between taking some specific action or not taking it.Binary decisions are basic to many fields. Examples include: Truth values in mathematical logic, and the corresponding Boolean data type in computer science, representing a value which may be chosen to be either true or false. Conditional statements (if-then or if-then-else) in computer science, binary decisions about which piece of code to execute next. Decision trees and binary decision diagrams, representations for sequences of binary decisions. Binary choice, a statistical model for the outcome of a binary decision. | Binary decision | 0.833556 |
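The if-then-else and else-if constructs described in the two rows above can be sketched concretely in Python; this is a minimal illustration (the `classify` function and its branch labels are hypothetical, chosen for the example):

```python
def classify(n: int) -> str:
    # Each condition is a binary decision: the boolean expression
    # evaluates to True or False, selecting exactly one branch.
    if n < 0:
        return "negative"   # code block 1
    elif n == 0:
        return "zero"       # code block 2 (else-if branch)
    else:
        return "positive"   # code block 3 (else branch)
```

Exactly one of the three blocks runs for any input, mirroring the flow diagram the row describes.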
2,646 | Since its publication, the Game of Life has attracted much interest because of the surprising ways in which the patterns can evolve. It provides an example of emergence and self-organization. A version of Life that incorporates random fluctuations has been used in physics to study phase transitions and nonequilibrium dynamics. | Conway's Game of Life | 0.833555 |
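The evolution rules behind the Game of Life row above fit in a few lines of Python; this sketch (function name and unbounded-grid set representation are my own choices) applies the standard B3/S23 rules to a set of live-cell coordinates:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life on an unbounded grid.

    `cells` is a set of (x, y) live-cell coordinates. A dead cell with
    exactly 3 live neighbours is born; a live cell with 2 or 3 live
    neighbours survives; every other cell is dead next generation.
    """
    # Count, for every cell adjacent to a live cell, its live neighbours.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}
```

A horizontal "blinker" of three cells flips to a vertical one and back, the simplest oscillating pattern.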
2,647 | See (Chandler & Magnus 1982) for a detailed history of combinatorial group theory. A proto-form is found in the 1856 icosian calculus of William Rowan Hamilton, where he studied the icosahedral symmetry group via the edge graph of the dodecahedron. The foundations of combinatorial group theory were laid by Walther von Dyck, student of Felix Klein, in the early 1880s, who gave the first systematic study of groups by generators and relations. | Combinatorial group theory | 0.833535 |
2,648 | In mathematics, combinatorial group theory is the theory of free groups, and the concept of a presentation of a group by generators and relations. It is much used in geometric topology, the fundamental group of a simplicial complex having in a natural and geometric way such a presentation. A very closely related topic is geometric group theory, which today largely subsumes combinatorial group theory, using techniques from outside combinatorics besides. It also comprises a number of algorithmically insoluble problems, most notably the word problem for groups; and the classical Burnside problem. | Combinatorial group theory | 0.833535 |
2,649 | If two contrasts are orthogonal, estimates created by using such contrasts will be uncorrelated. If orthogonal contrasts are available, it is possible to summarize the results of a statistical analysis in the form of a simple analysis of variance table, in such a way that it contains the results for different test statistics relating to different contrasts, each of which are statistically independent. Linear contrasts can be easily converted into sums of squares. | Contrast variable | 0.833517 |
2,650 | Note that we are not interested in one of these scores by itself, but only in the contrast (in this case — the difference). Since this is a linear combination of independent variables, its variance equals the weighted sum of the summands' variances; in this case both weights are one. This "blending" of two variables into one might be useful in many cases such as ANOVA, regression, or even as descriptive statistics in its own right. | Contrast variable | 0.833517 |
2,651 | Let $\theta_1, \ldots, \theta_t$ be a set of variables, either parameters or statistics, and $a_1, \ldots, a_t$ be known constants. The quantity $\sum_{i=1}^{t} a_i \theta_i$ is a linear combination. It is called a contrast if $\sum_{i=1}^{t} a_i = 0$. Furthermore, two contrasts, $\sum_{i=1}^{t} a_i \theta_i$ and $\sum_{i=1}^{t} b_i \theta_i$, are orthogonal if $\sum_{i=1}^{t} a_i b_i = 0$. | Contrast variable | 0.833517 |
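The two defining conditions in the row above (coefficients summing to zero, and orthogonality as a zero dot product of coefficient vectors) can be checked mechanically; a small Python sketch, with illustrative coefficient vectors of my own choosing:

```python
def is_contrast(a, eps=1e-12):
    # sum(a_i * theta_i) is a contrast iff the coefficients sum to zero.
    return abs(sum(a)) < eps

def are_orthogonal(a, b, eps=1e-12):
    # Two contrasts are orthogonal iff sum(a_i * b_i) = 0.
    return abs(sum(x * y for x, y in zip(a, b))) < eps
```

For example, the pairwise-mean contrasts a = [1, -1, 0, 0] and b = [0, 0, 1, -1] are orthogonal, so estimates built from them are uncorrelated, as the earlier row on orthogonal contrasts notes.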
2,652 | One of the most helpful methods in quantum chemistry for visualizing this kind of intermolecular interaction is the non-covalent interaction index, which is based on the electron density of the system; London dispersion forces play a big role here. Concerning electron density topology, methods based on the electron density gradient have emerged recently, notably with the development of IBSI (Intrinsic Bond Strength Index), relying on the IGM (Independent Gradient Model) methodology. | Dipole-dipole interaction | 0.833509 |
2,653 | Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above. Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding, van der Waals force and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. | Dipole-dipole interaction | 0.833509 |
2,654 | The battery of the Geometry E is either a base 33.5 kWh or a longer-range 39.4 kWh lithium iron phosphate battery, providing a NEDC range of 320 and 401 km (199 and 249 mi) respectively. The electric motor is a TZ160XS601 drive motor produced by GLB Intelligent Power Technologies capable of producing 60 kW and 130 Nm of torque, giving it a top speed of 121 km/h. Charge time for the Geometry E from 0-80% is 30 minutes. The interior of the Geometry E features two 10.25-inch infotainment screens and a central control screen as standard. | Geometry E | 0.833507 |
2,655 | The Geometry E is officially the third brand-new model of the Geometry brand, replacing the short-lived Geometry EX3 sold in 2021 alone. It was developed on the same platform as the Geely Vision X3 and the rebadged Geometry EX3 variant, and comes in three trims: Cute Tiger, Linglong Tiger, and Thunder Tiger. Pricing of the Geometry E starts at $12,947 (86,800 yuan) for the base model, while the Linglong Tiger and Thunder Tiger cost around $14,588 and $15,483 respectively. | Geometry E | 0.833507 |
2,656 | The Geometry E is a battery-powered subcompact crossover produced by Chinese auto manufacturer Geely under the Geometry brand. | Geometry E | 0.833507 |
2,657 | In statistics, the studentized range, denoted q, is the difference between the largest and smallest data in a sample normalized by the sample standard deviation. It is named after William Sealy Gosset (who wrote under the pseudonym "Student"), and was introduced by him in 1927. The concept was later discussed by Newman (1939), Keuls (1952), and John Tukey in some unpublished notes. Its statistical distribution is the studentized range distribution, which is used for multiple comparison procedures, such as the single step procedure Tukey's range test, the Newman–Keuls method, and the Duncan's step down procedure, and establishing confidence intervals that are still valid after data snooping has occurred. | Studentized range | 0.833488 |
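The definition in the row above is direct to compute; a sketch using Python's `statistics` module (the function name is mine):

```python
from statistics import stdev

def studentized_range(sample):
    """q = (max - min) / s, where s is the sample standard deviation."""
    return (max(sample) - min(sample)) / stdev(sample)
```

For the sample [2, 4, 6, 8] the range is 6 and the sample standard deviation is sqrt(20/3), giving q ≈ 2.324.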
2,658 | A taut chain can be extended by only about 40%. At this point the force along the chain is sufficient to mechanically rupture the C-C covalent bond. This tensile force limit has been calculated via quantum chemistry simulations and it is approximately 7 nN, about a factor of a thousand greater than the entropic chain forces at low strain. The angles between adjacent backbone C-C bonds in an isoprene unit vary between about 115–120 degrees and the forces associated with maintaining these angles are quite large, so within each unit, the chain backbone always follows a zigzag path, even at bond rupture. This mechanism accounts for the steep upturn in the elastic stress, observed at high strains (Fig. 1). | Rubber elasticity | 0.833486 |
2,659 | The concept of entropy comes to us from the area of mathematical physics called statistical mechanics, which is concerned with the study of large thermal systems, e.g. rubber networks at room temperature. Although the detailed behavior of the constituent chains is random and far too complex to study individually, we can obtain very useful information about their 'average' behavior from a statistical mechanics analysis of a large sample. There are few other examples in everyday experience of entropy changes producing a force. | Rubber elasticity | 0.833486 |
2,660 | Therefore, they are equal in space, and each of them is 1/3 of the overall mean-square end-to-end distance of the chain: $\langle R_{x0}^2 \rangle = \langle R_{y0}^2 \rangle = \langle R_{z0}^2 \rangle = \langle R^2 \rangle / 3$. Plugging into the change of free energy equation above, it is easy to get: The free energy change per volume is just: where $n_s$ is the number of strands in the network, the subscript "def" means "deformation", $v_s = n_s / V$ is the number density per volume of polymer chains, and $\beta = \langle R^2 \rangle / R_0^2$ is the ratio between the mean-square end-to-end distance of the chain and the theoretical distance obeying random-walk statistics. If we assume incompressibility, the product of extension ratios is 1, implying no change in the volume: $\lambda_x \lambda_y \lambda_z = 1$. | Rubber elasticity | 0.833486 |
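The claim in the row above, that by isotropy each Cartesian component carries one third of the mean-square end-to-end distance, can be checked by simulating freely jointed chains; a Monte Carlo sketch (chain counts, step counts, and function names are illustrative assumptions, not from the source):

```python
import random

def component_averages(n_chains=5000, n_steps=30, seed=1):
    """Average squared end-to-end components of freely jointed chains.

    Each chain is a 3D random walk of unit-length bonds in isotropic
    random directions (unit vectors drawn via normalized Gaussians).
    Returns (<Rx^2>, <Ry^2>, <Rz^2>).
    """
    rng = random.Random(seed)
    sx = sy = sz = 0.0
    for _ in range(n_chains):
        x = y = z = 0.0
        for _ in range(n_steps):
            gx, gy, gz = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
            norm = (gx * gx + gy * gy + gz * gz) ** 0.5
            x += gx / norm
            y += gy / norm
            z += gz / norm
        sx += x * x
        sy += y * y
        sz += z * z
    return sx / n_chains, sy / n_chains, sz / n_chains
```

Each component should approach one third of the total, and for unit bonds the total mean-square end-to-end distance approaches the number of steps, as random-walk statistics predicts.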
2,661 | The initial morphology of the network, immediately after chemical cross-linking, is governed by two random processes: (1) The probability for a cross-link to occur at any isoprene unit and, (2) the random walk nature of the chain conformation. The end-to-end distance probability distribution for a fixed chain length, i.e. fixed number of isoprene units, is described by a random walk. It is the joint probability distribution of the network chain lengths and the end-to-end distances between their cross-link nodes that characterizes the network morphology. Because both the molecular physics mechanisms that produce the elastic forces and the complex morphology of the network must be treated simultaneously, simple analytic elasticity models are not possible; an explicit 3-dimensional numerical model is required to simulate the effects of strain on a representative volume element of a network. | Rubber elasticity | 0.833486 |
2,662 | Rubber elasticity is produced by several complex molecular processes and its explanation requires a knowledge of advanced mathematics, chemistry and statistical physics, particularly the concept of entropy. Entropy may be thought of as a measure of the thermal energy that is stored in a molecule. Common rubbers, such as polybutadiene and polyisoprene (also called natural rubber), are produced by a process called polymerization. | Rubber elasticity | 0.833486 |
2,663 | But it is easily seen that the associated skew-symmetric linear operator $F^a{}_b$ has rank 2 in the former case and rank 4 in the latter case. To state the classification theorem, we consider the eigenvalue problem for F, that is, the problem of finding eigenvalues $\lambda$ and eigenvectors $r$ which satisfy the eigenvalue equation $F^a{}_b r^b = \lambda \, r^a$. The skew-symmetry of F implies that either the eigenvector $r$ is a null vector (i.e. $\eta(r,r) = 0$), or the eigenvalue $\lambda$ is zero, or both. A 1-dimensional subspace generated by a null eigenvector is called a principal null direction of the bivector. The classification theorem characterizes the possible principal null directions of a bivector. It states that one of the following must hold for any nonzero bivector: the bivector has one "repeated" principal null direction, in which case the bivector itself is said to be null; or the bivector has two distinct principal null directions, in which case the bivector is called non-null. Furthermore, for any non-null bivector, the two eigenvalues associated with the two distinct principal null directions have the same magnitude but opposite sign, $\lambda = \pm\nu$, so we have three subclasses of non-null bivectors: spacelike ($\nu = 0$); timelike ($\nu \neq 0$ and rank F = 2); non-simple ($\nu \neq 0$ and rank F = 4), where the rank refers to the rank of the linear operator F. | Classification of electromagnetic fields | 0.833486 |
2,664 | A bivector that can be written as F = v ∧ w, where v, w are linearly independent, is called simple. Any nonzero bivector over a 4-dimensional vector space either is simple, or can be written as F = v ∧ w + x ∧ y, where v, w, x, and y are linearly independent; the two cases are mutually exclusive. Stated like this, the dichotomy makes no reference to the metric η, only to exterior algebra. | Classification of electromagnetic fields | 0.833486 |
2,665 | It acts on the tangent space at p by $r^a \mapsto F^a{}_b r^b$. We will use the symbol F to denote either the bivector or the operator, according to context. We mention a dichotomy drawn from exterior algebra. | Classification of electromagnetic fields | 0.833486 |
2,666 | The classification theorem for electromagnetic fields characterizes the bivector F in relation to the Lorentzian metric $\eta = \eta_{ab}$ by defining and examining the so-called "principal null directions". Let us explain this. The bivector $F^{ab}$ yields a skew-symmetric linear operator $F^a{}_b = F^{ac} \eta_{cb}$ defined by lowering one index with the metric. | Classification of electromagnetic fields | 0.833486 |
2,667 | In differential geometry and theoretical physics, the classification of electromagnetic fields is a pointwise classification of bivectors at each point of a Lorentzian manifold. It is used in the study of solutions of Maxwell's equations and has applications in Einstein's theory of relativity. | Classification of electromagnetic fields | 0.833486 |
2,668 | The algebraic classification of bivectors given above has an important application in relativistic physics: the electromagnetic field is represented by a skew-symmetric second rank tensor field (the electromagnetic field tensor) so we immediately obtain an algebraic classification of electromagnetic fields. In a cartesian chart on Minkowski spacetime, the electromagnetic field tensor has components $F_{ab} = \begin{pmatrix} 0 & B_z & -B_y & E_x/c \\ -B_z & 0 & B_x & E_y/c \\ B_y & -B_x & 0 & E_z/c \\ -E_x/c & -E_y/c & -E_z/c & 0 \end{pmatrix}$, where $E_x, E_y, E_z$ and $B_x, B_y, B_z$ denote respectively the components of the electric and magnetic fields, as measured by an inertial observer (at rest in our coordinates). As usual in relativistic physics, we will find it convenient to work with geometrised units in which $c = 1$. In the "index gymnastics" formalism of special relativity, the Minkowski metric $\eta$ is used to raise and lower indices. | Classification of electromagnetic fields | 0.833486 |
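A practical way to apply the null / non-null dichotomy from these rows is through the two Lorentz invariants of the field, which in terms of E and B (with c = 1) are proportional to $B^2 - E^2$ and $E \cdot B$; the field is null exactly when both vanish. A small Python sketch (the function name is mine):

```python
def classify_em_field(E, B, eps=1e-12):
    """Classify an electromagnetic field bivector as null or non-null.

    Uses the two Lorentz invariants (geometrised units, c = 1):
      I1 = |B|^2 - |E|^2   (proportional to F_ab F^ab)
      I2 = E . B           (proportional to the pseudoscalar invariant)
    The field is null iff both invariants vanish, i.e. |E| = |B| and
    E is perpendicular to B (e.g. a plane electromagnetic wave).
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    i1 = dot(B, B) - dot(E, E)
    i2 = dot(E, B)
    return "null" if abs(i1) < eps and abs(i2) < eps else "non-null"
```

A plane wave with equal-magnitude perpendicular E and B is null; a pure electrostatic field is non-null.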
2,669 | Prompt engineering techniques are enabled by in-context learning. In-context learning itself is an emergent property of model scale, meaning breaks in downstream scaling laws occur such that its efficacy increases at a different rate in larger models than in smaller models. In contrast to training and fine-tuning for each specific task, which are not temporary, what has been learnt during in-context learning is of a temporary nature. It does not carry temporary contexts or biases from one conversation to the next, except those already present in the (pre)training dataset. This result of "mesa-optimization" within transformer layers is a form of meta-learning or "learning to learn". | Few-shot learning (natural language processing) | 0.833464 |
2,670 | The algorithm of Christofides and Serdyukov follows a similar outline but combines the minimum spanning tree with a solution of another problem, minimum-weight perfect matching. This gives a TSP tour which is at most 1.5 times the optimal. It was one of the first approximation algorithms, and was in part responsible for drawing attention to approximation algorithms as a practical approach to intractable problems. As a matter of fact, the term "algorithm" was not commonly extended to approximation algorithms until later; the Christofides algorithm was initially referred to as the Christofides heuristic. This algorithm looks at things differently: it uses a result from graph theory to improve on the lower bound of the TSP that comes from doubling the cost of the minimum spanning tree. | Travelling salesman problem | 0.833447 |
2,671 | Even though the problem is computationally difficult, many heuristics and exact algorithms are known, so that some instances with tens of thousands of cities can be solved completely and even problems with millions of cities can be approximated within a small fraction of 1%.The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. The TSP also appears in astronomy, as astronomers observing many sources will want to minimize the time spent moving the telescope between the sources; in such problems, the TSP can be embedded inside an optimal control problem. In many applications, additional constraints such as limited resources or time windows may be imposed. | Travelling salesman problem | 0.833447 |
2,672 | Thus, it is possible that the worst-case running time for any algorithm for the TSP increases superpolynomially (but no more than exponentially) with the number of cities. The problem was first formulated in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. | Travelling salesman problem | 0.833447 |
2,673 | The travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research. The travelling purchaser problem and the vehicle routing problem are both generalizations of TSP. In the theory of computational complexity, the decision version of the TSP (where given a length L, the task is to decide whether the graph has a tour of at most L) belongs to the class of NP-complete problems. | Travelling salesman problem | 0.833447 |
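The contrast these rows draw between exact and heuristic solution of the TSP can be made concrete in a few lines; a sketch comparing brute-force enumeration with a nearest-neighbour heuristic (the function names and the example distance matrix are mine):

```python
from itertools import permutations

def tour_length(tour, dist):
    # Closed tour: includes the edge back to the starting city.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def brute_force_tsp(dist):
    """Exact solution by enumerating all (n-1)! tours starting at city 0."""
    n = len(dist)
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length((0,) + p, dist))
    return (0,) + best

def nearest_neighbour_tsp(dist):
    """Greedy heuristic: always move to the closest unvisited city."""
    tour, unvisited = [0], set(range(1, len(dist)))
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)
```

Brute force is O(n!) and only feasible for tiny instances, which is why the heuristics, bounds, and approximation algorithms discussed in these rows matter in practice.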
2,674 | For points in the Euclidean plane, the optimal solution to the travelling salesman problem forms a simple polygon through all of the points, a polygonalization of the points. Any non-optimal solution with crossings can be made into a shorter solution without crossings by local optimizations. The Euclidean distance obeys the triangle inequality, so the Euclidean TSP forms a special case of metric TSP. However, even when the input points have integer coordinates, their distances generally take the form of square roots, and the length of a tour is a sum of radicals, making it difficult to perform the symbolic computation needed to perform exact comparisons of the lengths of different tours. | Travelling salesman problem | 0.833447 |
2,675 | The TSP, in particular the Euclidean variant of the problem, has attracted the attention of researchers in cognitive psychology. It has been observed that humans are able to produce near-optimal solutions quickly, in a close-to-linear fashion, with performance that ranges from 1% less efficient, for graphs with 10–20 nodes, to 11% less efficient for graphs with 120 nodes. The apparent ease with which humans accurately generate near-optimal solutions to the problem has led researchers to hypothesize that humans use one or more heuristics, with the two most popular theories arguably being the convex-hull hypothesis and the crossing-avoidance heuristic. However, additional evidence suggests that human performance is quite varied, and individual differences as well as graph geometry appear to affect performance in the task. | Travelling salesman problem | 0.833447 |
2,676 | Suppose $X_1, \ldots, X_n$ are $n$ independent random variables with uniform distribution in the square $[0,1]^2$, and let $L_n^*$ be the shortest path length (i.e. TSP solution) for this set of points, according to the usual Euclidean distance. It is known that, almost surely, $L_n^*/\sqrt{n} \to \beta$ when $n \to \infty$, where $\beta$ is a positive constant that is not known explicitly. Since $L_n^* \leq 2\sqrt{n} + 2$ (see below), it follows from the bounded convergence theorem that $\beta = \lim_{n\to\infty} \mathbb{E}[L_n^*]/\sqrt{n}$, hence lower and upper bounds on $\beta$ follow from bounds on $\mathbb{E}[L_n^*]$. The almost sure limit $L_n^*/\sqrt{n} \to \beta$ as $n \to \infty$ may not exist if the independent locations $X_1, \ldots, X_n$ are replaced with observations from a stationary ergodic process with uniform marginals. | Travelling salesman problem | 0.833447 |
2,677 | In most cases, the distance between two nodes in the TSP network is the same in both directions. The case where the distance from A to B is not equal to the distance from B to A is called asymmetric TSP. A practical application of an asymmetric TSP is route optimization using street-level routing (which is made asymmetric by one-way streets, slip-roads, motorways, etc.). | Travelling salesman problem | 0.833447 |
2,678 | PSII also relies on light to drive the formation of proton gradients in chloroplasts; however, PSII utilizes vectorial redox chemistry to achieve this goal. Rather than physically transporting protons through the protein, reactions requiring the binding of protons will occur on the extracellular side while reactions requiring the release of protons will occur on the intracellular side. Absorption of photons of 680 nm wavelength is used to excite two electrons in P680 to a higher energy level. | Proton electromotive force | 0.833422 |
2,679 | The Jurassic Park licensed game Jurassic Park: Trespasser exhibited ragdoll physics in 1998 but received very polarised opinions; most were negative, as the game had a large number of bugs. It was remembered, however, for being a pioneer in video game physics. There are fighting games where the player controls one part of the body of the fighter and the rest follows along, such as Rag Doll Kung Fu, as well as racing games such as the FlatOut series. Recent procedural animation technologies, such as those found in NaturalMotion's Euphoria software, have allowed the development of games that rely heavily on the suspension of disbelief facilitated by realistic whole-body muscle/nervous ragdoll physics as an integral part of the immersive gaming experience, as opposed to the antiquated use of canned-animation techniques. This is seen in Grand Theft Auto IV, Grand Theft Auto V, Red Dead Redemption, Max Payne 3 and Red Dead Redemption 2, as well as titles such as LucasArts' Star Wars: The Force Unleashed and Puppet Army Faction's Kontrol, which feature 2D powered ragdoll locomotion on uneven or moving surfaces. | Ragdoll physics | 0.833409 |
2,680 | This requires both animation processing and physics processing, thus making it even slower than traditional ragdoll alone, though the benefits of the extra realism seem to overshadow the reduction in processing speed. Occasionally the ragdolling player model will appear to stretch out and spin around in multiple directions, as though the character were made of rubber. This erratic behavior has been observed to occur in games that use certain versions of the Havok engine, such as Halo 2 and Fable II. Procedural animation: traditionally used in non-realtime media (film/TV/etc), this technique (used in the Medal of Honor series starting from European Assault onward) employs the use of multi-layered physical models in non-playing characters (bones / muscle / nervous systems), and deformable scenic elements from "simulated materials" in vehicles, etc. By removing the use of pre-made animation, each reaction seen by the player is unique, whilst still deterministic. | Ragdoll physics | 0.833409 |
2,681 | Verlet constraints are much simpler and faster to solve than most of those in a fully modelled rigid body system, resulting in much less CPU consumption for characters. Inverse kinematics post-processing: used in Halo: Combat Evolved and Half-Life, this technique relies on playing a pre-set death animation and then using inverse kinematics to force the character into a possible position after the animation has completed. This means that, during an animation, a character could wind up clipping through world geometry, but after it has come to rest, all of its bones will be in valid space. | Ragdoll physics | 0.833409 |
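The Verlet constraints mentioned in the row above boil down to repeatedly projecting pairs of points back to a fixed separation; a minimal 2D sketch of a single distance constraint (the function is my illustration, not engine code):

```python
def satisfy_distance_constraint(p1, p2, rest_length):
    """One relaxation step of a distance constraint between two points.

    Both endpoints move half of the error along their separation axis;
    for a lone constraint a single step restores the rest length, and
    in a ragdoll many such constraints (one per joint) are relaxed in
    turn each frame.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = (dx * dx + dy * dy) ** 0.5
    if d == 0.0:
        return p1, p2  # coincident points: direction undefined, skip
    corr = 0.5 * (d - rest_length) / d
    return ((p1[0] + dx * corr, p1[1] + dy * corr),
            (p2[0] - dx * corr, p2[1] - dy * corr))
```

Splitting the correction evenly between the two endpoints is what makes each constraint cheap to solve, which is the low-CPU advantage the row describes.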
2,682 | This had the advantage of low CPU utilization, as the data needed to animate a "dying" character was chosen from a set number of pre-drawn frames. In contrast, a ragdoll is a collection of multiple rigid bodies (each of which is ordinarily tied to a bone in the graphics engine's skeletal animation system) tied together by a system of constraints that restrict how the bones may move relative to each other. When the character dies, their body begins to collapse to the ground, honouring these restrictions on each of the joints' motion, which often looks more realistic. The term ragdoll comes from the problem that the articulated systems, due to the limits of the solvers used, tend to have little or zero joint/skeletal muscle stiffness, leading to a character collapsing much like a toy rag doll, often into comically improbable or compromising positions. Modern use of ragdoll physics goes beyond death sequences. | Ragdoll physics | 0.833409 |
2,683 | Ragdoll physics is a type of procedural animation used by physics engines, which is often used as a replacement for traditional static death animations in video games and animated films. Early video games used manually created animations for a character's death sequences. As computers increased in power, it became possible to do limited real-time physical simulations, which made death animations more realistic. | Ragdoll physics | 0.833409 |
2,684 | Lie, Sophus (1973), Gesammelte Abhandlungen. Band 1 (in German), New York: Johnson Reprint Corp., MR 0392459. Mackey, George Whitelaw (1976), The Theory of Unitary Group Representations, University of Chicago Press, MR 0396826 Smith, David Eugene (1906), History of Modern Mathematics, Mathematical Monographs, No. 1. Weyl, Hermann (1950) , The Theory of Groups and Quantum Mechanics, translated by Robertson, H. P., Dover, ISBN 978-0-486-60269-1. Wussing, Hans (2007), The Genesis of the Abstract Group Concept: A Contribution to the History of the Origin of Abstract Group Theory, New York: Dover Publications, ISBN 978-0-486-45868-7. | Mathematical group | 0.833389 |
2,685 | Borel, Armand (2001), Essays in the History of Lie Groups and Algebraic Groups, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0288-5. Cayley, Arthur (1889), The Collected Mathematical Papers of Arthur Cayley, vol. II (1851–1860), Cambridge University Press. O'Connor, John J.; Robertson, Edmund F., "The development of group theory", MacTutor History of Mathematics Archive, University of St Andrews. Curtis, Charles W. | Mathematical group | 0.833389 |
2,686 | Some cyclic groups have an infinite number of elements. In these groups, for every non-zero element $a$, all the powers of $a$ are distinct; despite the name "cyclic group", the powers of the elements do not cycle. An infinite cyclic group is isomorphic to $(\mathbb{Z}, +)$, the group of integers under addition introduced above. As these two prototypes are both abelian, so are all cyclic groups. The study of finitely generated abelian groups is quite mature, including the fundamental theorem of finitely generated abelian groups; and reflecting this state of affairs, many group-related notions, such as center and commutator, describe the extent to which a given group is not abelian. | Mathematical group | 0.833389 |
2,687 | Hence all group axioms are fulfilled. This example is similar to $(\mathbb{Q} \smallsetminus \{0\}, \cdot)$ above: it consists of exactly those elements in the ring $\mathbb{Z}/p\mathbb{Z}$ that have a multiplicative inverse. These groups, denoted $\mathbb{F}_p^{\times}$, are crucial to public-key cryptography. | Mathematical group | 0.833389 |
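The group structure of the nonzero residues mod a prime, described in the row above, can be verified directly for small primes; a Python sketch (function names are mine; the inverse uses Fermat's little theorem, by which a^(p-2) is the inverse of a mod prime p):

```python
def inverse_mod(a, p):
    # For prime p and a not divisible by p, Fermat's little theorem
    # gives a^(p-1) ≡ 1 (mod p), so a^(p-2) is the inverse of a.
    return pow(a, p - 2, p)

def is_group_under_mult_mod(p):
    """Check closure, identity, and inverses for {1, ..., p-1} mod p.

    Associativity is inherited from integer multiplication, so only
    the remaining group axioms need checking.
    """
    elements = set(range(1, p))
    closed = all((a * b) % p in elements for a in elements for b in elements)
    has_inverses = all((a * inverse_mod(a, p)) % p == 1 for a in elements)
    return closed and has_inverses and 1 in elements
```

Primality of p is what guarantees closure: a product of two numbers not divisible by p is itself not divisible by p.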
2,688 | This means that given the input to the problem there exists a unique solution, which depends continuously on the input. Much theoretical work in the field of partial differential equations is devoted to proving that boundary value problems arising from scientific and engineering applications are in fact well-posed. Among the earliest boundary value problems to be studied is the Dirichlet problem of finding the harmonic functions (solutions to Laplace's equation); the solution was given by Dirichlet's principle. | Boundary-value problem | 0.833378 |
2,689 | In the study of differential equations, a boundary-value problem is a differential equation subjected to constraints called boundary conditions. A solution to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions. Boundary value problems arise in several branches of physics as any physical differential equation will have them. Problems involving the wave equation, such as the determination of normal modes, are often stated as boundary value problems. | Boundary-value problem | 0.833378 |
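A standard way to treat such boundary value problems numerically is finite differences; the sketch below (the problem choice and function name are mine) solves $u'' = -1$ on $[0,1]$ with $u(0) = u(1) = 0$, whose exact solution is $u(x) = x(1-x)/2$, using the Thomas algorithm for the resulting tridiagonal system:

```python
def solve_bvp(n=100):
    """Finite-difference solution of u''(x) = -1 on [0, 1] with
    Dirichlet boundary conditions u(0) = u(1) = 0.

    Interior equations: (u[i-1] - 2*u[i] + u[i+1]) / h^2 = -1,
    solved with the Thomas algorithm for tridiagonal systems.
    Returns u at the n+1 grid nodes, including both boundaries.
    """
    h = 1.0 / n
    m = n - 1                     # number of interior unknowns
    # Tridiagonal system: a (sub), b (diag), c (super), d (rhs).
    a = [1.0] * m
    b = [-2.0] * m
    c = [1.0] * m
    d = [-h * h] * m
    # Forward elimination.
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # Back substitution.
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]      # attach boundary values
```

For this particular problem the central difference is exact (the truncation error involves the fourth derivative, which is zero for a quadratic solution), so the discrete solution matches the exact one at every node up to rounding.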
2,690 | For example, see Analytica (Wikipedia), Analytica (SIP page), Oracle Crystal Ball, Frontline Solvers, and Autobox. The first large documented application of SIPs involved the exploration portfolio of Royal Dutch Shell in 2005 as reported by Savage, Scholtes, and Zweidler, who formalized the discipline of probability management in 2006. The topic is also explored at length in. Vectors of simulated realizations of probability distributions have been used to drive stochastic optimization since at least 1991. | Probability management | 0.833377 |
2,691 | This type of sequencing is useful to capture isoforms and splice variants. SMRT sequencing has several applications in reproductive medical genetics research when investigating families with suspected parental gonadal mosaicism. Long reads enable haplotype phasing in patients to investigate parent-of-origin of mutations. Deep sequencing enables determination of allele frequencies in sperm cells, of relevance for estimation of recurrence risk for future affected offspring. | Single Molecule Real Time Sequencing | 0.833375 |
2,692 | Yield per SMRT Cell increased to 10 or 20 billion bases for large-insert or shorter-insert (e.g. amplicon) libraries, respectively. On 19 September 2018, the company announced the Sequel 6.0 chemistry, with average read lengths increased to 100,000 bases for shorter-insert libraries and 30,000 for longer-insert libraries. SMRT Cell yield increased up to 50 billion bases for shorter-insert libraries. | Single Molecule Real Time Sequencing | 0.833375 |
2,693 | In September 2015, the company announced the launch of a new sequencing instrument, the Sequel System, which increased capacity to 1 million ZMW holes. With the Sequel instrument, initial read lengths were comparable to the RS; later chemistry releases then increased read length. On January 23, 2017, the V2 chemistry was released, increasing average read lengths to between 10,000 and 18,000 bases. On March 8, 2018, the 2.1 chemistry was released, increasing average read length to 20,000 bases, with half of all reads longer than 30,000 bases. | Single Molecule Real Time Sequencing | 0.833375 |
2,694 | The first journal dedicated to biomedical data science appeared in 2018 – Annual Review of Biomedical Data Science. “The Annual Review of Biomedical Data Science provides comprehensive expert reviews in biomedical data science, focusing on advanced methods to store, retrieve, analyze, and organize biomedical data and knowledge. The scope of the journal encompasses informatics, computational, and statistical approaches to biomedical data, including the sub-fields of bioinformatics, computational biology, biomedical informatics, clinical and clinical research informatics, biostatistics, and imaging informatics. The mission of the journal is to identify both emerging and established areas of biomedical data science, and the leaders in these fields.” Other journals have a more general scope than biomedical data science, but regularly publish biomedical data science research such as Health Data Science and Nature Machine Intelligence. Data science would not exist without curated datasets and the field has seen the rise of journals that are dedicated to describing and validating such datasets, some of which are useful for biomedical applications, including Scientific Data, Biomedical Data, and Data. | Biomedical data science | 0.833368 |
2,695 | Mount Sinai’s Icahn School of Medicine offers a Master of Science in Biomedical Data Science. Stanford University’s Department of Biomedical Data Science offers multiple biomedical informatics graduate programs (MS, PhD, and MD/PhD). The University of Exeter’s College of Healthcare and Medicine offers an MSc in Health Data Science. | Biomedical data science | 0.833368 |
2,696 | Dartmouth College's Geisel School of Medicine houses the Department of Biomedical Data Science where Quantitative Biomedical Sciences programs are available at the master's and PhD levels. Johns Hopkins University’s Department of Biomedical Engineering offers biomedical data science training at the undergraduate, master's, and PhD levels. Imperial College London’s Faculty of Medicine and Data Science Institute offer an MRes in Biomedical Research (Data Science). | Biomedical data science | 0.833368 |
2,697 | Modern biomedical datasets often have specific features which make their analyses difficult, including: large numbers of features (sometimes billions), typically far larger than the number of samples (typically tens or hundreds); noisy and missing data; privacy concerns (e.g., electronic health record confidentiality); and the requirement of interpretability from decision makers and regulatory bodies. Many biomedical data science projects apply machine learning to such datasets. These characteristics, while also present in many data science applications more generally, make biomedical data science a specific field. Examples of biomedical data science research include: computational genomics, computational imaging, electronic health records data mining, and biomedical network science. | Biomedical data science | 0.833368 |
2,698 | Biomedical data science is a multidisciplinary field which leverages large volumes of data to promote biomedical innovation and discovery. Biomedical data science draws from various fields including biostatistics, biomedical informatics, and machine learning, with the goal of understanding biological and medical data. It can be viewed as the study and application of data science to solve biomedical problems. | Biomedical data science | 0.833368 |
2,699 | The National Library of Medicine of the US National Institutes of Health (NIH) identified key biomedical data scientist attributes in an NIH-wide review: general biomedical subject matter knowledge; programming language expertise; predictive analytics, modeling, and machine learning; team science and communication; and responsible data stewardship. | Biomedical data science | 0.833368 |