id | text | source | similarity |
|---|---|---|---|
2,100 | Based on X-PLOR, it determines the structure of molecules from data obtained in NMR experiments. It solves distance geometry problems with heuristic methods (such as simulated annealing) and local search methods (such as conjugate gradient minimization). TINKER. | Distance geometry problem | 0.835686 |
2,101 | Some software packages for these applications are: DGSOL. Solves large distance geometry problems in macromolecular modeling. Xplor-NIH. | Distance geometry problem | 0.835686 |
2,102 | There are many applications of distance geometry. In telecommunication networks such as GPS, the positions of some sensors are known (which are called anchors) and some of the distances between sensors are also known: the problem is to identify the positions for all sensors. Hyperbolic navigation is one pre-GPS technology that uses distance geometry for locating ships based on the time it takes for signals to reach anchors. There are many applications in chemistry. Techniques such as NMR can measure distances between pairs of atoms of a given molecule, and the problem is to infer the 3-dimensional shape of the molecule from those distances. | Distance geometry problem | 0.835686 |
2,103 | The concepts of distance geometry will first be explained by describing two particular problems. | Distance geometry problem | 0.835686 |
2,104 | In structural biology, a protein subunit is a polypeptide chain or single protein molecule that assembles (or "coassembles") with others to form a protein complex. Large assemblies of proteins such as viruses often use a small number of types of protein subunits as building blocks. A subunit is often named with a Greek or Roman letter, and the number of subunits of this type in a protein is indicated by a subscript. For example, ATP synthase has a type of subunit called α. Three of these are present in the ATP synthase molecule, leading to the designation α3. Larger groups of subunits can also be specified, like the α3β3-hexamer and c-ring. Naturally occurring proteins that have a relatively small number of subunits are referred to as oligomeric. | Protein subunits | 0.835677 |
2,105 | The College Board's recommended preparation was a one-year college preparatory course in physics, a one-year course in algebra and trigonometry, and experience in the laboratory. | SAT Subject Test in Physics | 0.835674 |
2,106 | On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Physics. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. | SAT Subject Test in Physics | 0.835673 |
2,107 | It required critical thinking and test-taking strategies, at which high school freshmen or sophomores may have been inexperienced. The Physics SAT tested more than what normal state requirements were; therefore, many students prepared for the Physics SAT using a preparatory book or by taking an AP course in physics. | SAT Subject Test in Physics | 0.835673 |
2,108 | The SAT Subject Test in Physics, Physics SAT II, or simply the Physics SAT, was a one-hour multiple-choice test on physics administered by the College Board in the United States. A high school student generally chose to take the test to fulfill college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; until January 2005, they were known as SAT IIs; they are still well known by this name. The material tested on the Physics SAT was supposed to be equivalent to that taught in a junior- or senior-level high school physics class. | SAT Subject Test in Physics | 0.835673 |
2,109 | The SAT Subject Test in Physics had 75 questions and consisted of two parts. Part A: the first 12 or 13 questions, arranged in 4 groups of two to four questions each; the questions within any one group all relate to a single situation; five possible answer choices are given before the questions, and an answer choice can be used once, more than once, or not at all in each group. Part B: the last 62 or 63 questions; each question has five possible answer choices with one correct answer, and some questions may be in groups of two or three. | SAT Subject Test in Physics | 0.835673 |
2,110 | Students taking the SAT Subject Test in Physics were prohibited from using any resources during the test, including textbooks, notes, or formula sheets. Although there were mathematics questions including trigonometry, the use of a calculator was not allowed. All scratch work was required to have been done directly in the test booklet. | SAT Subject Test in Physics | 0.835673 |
2,111 | The Nobel Prize in Physiology or Medicine in 2021 was awarded to David Julius (professor at the University of California, San Francisco, USA) and Ardem Patapoutian (neuroscience professor at Scripps Research in La Jolla, California, USA) "for their discovery of receptors for temperature and touch". | Sensible temperature | 0.835672 |
2,112 | Multilocus sequence typing (MLST) is a technique in molecular biology for the typing of multiple loci, using DNA sequences of internal fragments of multiple housekeeping genes to characterize isolates of microbial species. The first MLST scheme to be developed was for Neisseria meningitidis, the causative agent of meningococcal meningitis and septicaemia. Since its introduction for the research of evolutionary history, MLST has been used not only for human pathogens but also for plant pathogens. | Multilocus sequence typing | 0.835671 |
2,113 | MLST is automated and combines advances in high-throughput sequencing and bioinformatics with established population genetics techniques. MLST data can be used to investigate evolutionary relationships among bacteria. MLST provides good discriminatory power to differentiate isolates. The range of applications of MLST is vast, and it provides a resource for the scientific, public health, and veterinary communities as well as the food industry. The following are examples of MLST applications. | Multilocus sequence typing | 0.835671 |
2,114 | Thus, for example in Escherichia coli, identifying strains carrying toxin genes is more important than having a population genetics-based evaluation of prevalent strains. The advent of second-generation sequencing technologies has made it possible to obtain sequence information across the entire bacterial genome at relatively modest cost and effort, and MLST can now be assigned from whole-genome sequence information, rather than sequencing each locus separately as was the practice when MLST was first developed. Whole-genome sequencing provides richer information for differentiating bacterial strains (MLST uses approximately 0.1% of the genomic sequence to assign type while disregarding the rest of the bacterial genome). For example, whole-genome sequencing of numerous isolates has revealed the single MLST lineage ST258 of Klebsiella pneumoniae comprises two distinct genetic clades, providing additional information about the evolution and spread of these multi-drug resistant organisms, and disproving the previous hypothesis of a single clonal origin for ST258. | Multilocus sequence typing | 0.835671 |
2,115 | Population genetics is not the only relevant factor in an epidemic. Virulence factors are also important in causing disease, and population genetic studies struggle to monitor these. This is because the genes involved are often highly recombining and mobile between strains in comparison with the population genetic framework. | Multilocus sequence typing | 0.835671 |
2,116 | The series is currently edited by Anna Pyle (Yale) and David Christianson (Chair of the Department of Chemistry, University of Pennsylvania). Each volume is guest-edited and contributed to by expert researchers in the field (e.g., biochemists, biophysicists, molecular biologists, analytical chemists or physiologists). | Methods in Enzymology | 0.835662 |
2,117 | Historically, each volume has centered on a specific topic of biochemistry, such as DNA repair, yeast genetics, or the biology of nitric oxide. In recent years, however, the range of topics covered has broadened to also include biotechnology-oriented areas of research. Each volume and chapter includes not only background knowledge but also specific research techniques, detailed experimental procedures and methods. Video elements are also present. | Methods in Enzymology | 0.835662 |
2,118 | Methods in Enzymology is a book series of scientific publications, published by Academic Press and created by Sidney P. Colowick and Nathan O. Kaplan, focused primarily on research methods in biochemistry. | Methods in Enzymology | 0.835662 |
2,119 | Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata, lambda calculus or any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory. | Tractable problem | 0.835649 |
2,120 | These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems. The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem. | Tractable problem | 0.835649 |
2,121 | Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also a member of the class NP. The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution. If the answer is yes, many important problems can be shown to have more efficient solutions. | Tractable problem | 0.835649 |
2,122 | From that normal form, the matrix $\Phi$ can be read off directly. Consequently, expressions for the adjoint and the Lie algebras can be obtained using formulas (4) and (5). This is demonstrated below in most of the non-trivial cases. | Classical groups | 0.835648 |
2,123 | The quaternionic special linear group is given by $\mathrm{SL}(n,\mathbb{H}) = \{g \in \mathrm{GL}(n,\mathbb{H}) \mid \det g = 1\} \equiv \mathrm{SU}^*(2n)$, where the determinant is taken on the matrices in $\mathbb{C}^{2n}$. Alternatively, one can define this as the kernel of the Dieudonné determinant $\mathrm{GL}(n,\mathbb{H}) \to \mathbb{H}^*/[\mathbb{H}^*,\mathbb{H}^*] \simeq \mathbb{R}^*_{>0}$. The Lie algebra is $\mathfrak{sl}(n,\mathbb{H}) = \left\{\left(\begin{smallmatrix} X & -\overline{Y} \\ Y & \overline{X} \end{smallmatrix}\right) \,\middle|\, \operatorname{Re}(\operatorname{Tr} X) = 0 \right\} \equiv \mathfrak{su}^*(2n)$. | Classical groups | 0.835648 |
2,124 | Under the identification above, $\mathrm{GL}(n,\mathbb{H}) = \{g \in \mathrm{GL}(2n,\mathbb{C}) \mid Jg = \overline{g}J,\ \det g \neq 0\} \equiv \mathrm{U}^*(2n)$. Its Lie algebra $\mathfrak{gl}(n,\mathbb{H})$ is the set of all matrices in the image of the mapping $M_n(\mathbb{H}) \to M_{2n}(\mathbb{C})$ above, $\mathfrak{gl}(n,\mathbb{H}) = \left\{\left(\begin{smallmatrix} X & -\overline{Y} \\ Y & \overline{X} \end{smallmatrix}\right) \,\middle|\, X, Y \in \mathfrak{gl}(n,\mathbb{C}) \right\} \equiv \mathfrak{u}^*(2n)$. | Classical groups | 0.835648 |
2,125 | Contrasting with the classical Lie groups are the exceptional Lie groups, G2, F4, E6, E7, E8, which share their abstract properties, but not their familiarity. These were only discovered around 1890 in the classification of the simple Lie algebras over the complex numbers by Wilhelm Killing and Γlie Cartan. | Classical groups | 0.835648 |
2,126 | A few examples are the following. The rotation group SO(3) is a symmetry of Euclidean space and all fundamental laws of physics, the Lorentz group O(3,1) is a symmetry group of spacetime of special relativity. The special unitary group SU(3) is the symmetry group of quantum chromodynamics and the symplectic group Sp(m) finds application in Hamiltonian mechanics and quantum mechanical versions of it. | Classical groups | 0.835648 |
2,127 | The term "classical group" was coined by Hermann Weyl, it being the title of his 1939 monograph The Classical Groups. The classical groups form the deepest and most useful part of the subject of linear Lie groups. Most types of classical groups find application in classical and modern physics. | Classical groups | 0.835648 |
2,128 | Thus the Lie algebra can be characterized without reference to a basis, or the adjoint, as $\mathfrak{aut}(\varphi) = \{X \in M_n(V) : \varphi(Xx, y) = -\varphi(x, Xy)\ \forall x, y \in V\}$. The normal form for $\varphi$ will be given for each classical group below. | Classical groups | 0.835648 |
2,129 | $\operatorname{Aut}(\varphi) = \left\{A \in \operatorname{GL}(V) : \Phi^{-1}A^{\mathrm{T}}\Phi A = 1\right\}$. The Lie algebra $\mathfrak{aut}(\varphi)$ of the automorphism group can be written down immediately. Abstractly, $X \in \mathfrak{aut}(\varphi)$ if and only if $(e^{tX})^{\varphi} e^{tX} = 1$ for all $t$, corresponding to the condition in (3) under the exponential mapping of Lie algebras, so that $\mathfrak{aut}(\varphi) = \left\{X \in M_n(V) : X^{\varphi} = -X\right\}$, or in a basis $\Phi^{-1}X^{\mathrm{T}}\Phi = -X$, as is seen using the power series expansion of the exponential mapping and the linearity of the involved operations. | Classical groups | 0.835648 |
2,130 | If q = 0 the notation is U(n). In this case, $\Phi$ takes the form $\Phi = \left(\begin{smallmatrix} 1_p & 0 \\ 0 & -1_q \end{smallmatrix}\right) = I_{p,q}$, and the Lie algebra is given by $\mathfrak{u}(p,q) = \left\{\left(\begin{smallmatrix} X_{p\times p} & Z_{p\times q} \\ \overline{Z_{p\times q}}^{\mathrm{T}} & Y_{q\times q} \end{smallmatrix}\right) \,\middle|\, \overline{X}^{\mathrm{T}} = -X,\ \overline{Y}^{\mathrm{T}} = -Y \right\}$. | Classical groups | 0.835648 |
2,131 | This volume attempts to formulate certain patterns of plausible reasoning. The relation of these patterns to the calculus of probability is also investigated. Their relation to mathematical invention and instruction is also discussed. The following are some of the patterns of plausible inference discussed by Polya. | Mathematics and Plausible Reasoning | 0.835643 |
2,132 | In the next chapter the techniques of generalization, specialization and analogy are presented as possible strategies for plausible reasoning. In the remaining chapters, these ideas are illustrated by discussing the discovery of several results in various fields of mathematics, such as number theory and geometry, and also in the physical sciences. | Mathematics and Plausible Reasoning | 0.835643 |
2,133 | This is the origin of the term linear for describing this type of equation. More generally, the solutions of a linear equation in n variables form a hyperplane (a subspace of dimension n − 1) in the Euclidean space of dimension n. Linear equations occur frequently in all mathematics and their applications in physics and engineering, partly because non-linear systems are often well approximated by linear equations. This article considers the case of a single equation with coefficients from the field of real numbers, for which one studies the real solutions. All of its content applies to complex solutions and, more generally, to linear equations with coefficients and solutions in any field. For the case of several simultaneous linear equations, see system of linear equations. | First degree equation | 0.835634 |
2,134 | These methods use database information regarding structures to match homologous structures to the created protein sequences. These homologous structures are assembled to give compact structures using scoring and optimization procedures, with the goal of achieving the lowest potential energy score. Webservers for fragment information are I-TASSER, ROSETTA, Rosetta@home, FRAGFOLD, CABS fold, PROFESY, CREF, QUARK, UNDERTAKER, HMM, and ANGLOR. | Enzyme engineering | 0.835613 |
2,135 | Generally, current computational de novo and redesign methods do not compare to evolved variants in catalytic performance. Although experimental optimization may be produced using directed evolution, further improvements in the accuracy of structure predictions and greater catalytic ability will be achieved with improvements in design algorithms. Further functional enhancements may be included in future simulations by integrating protein dynamics. Biochemical and biophysical studies, along with fine-tuning of predictive frameworks, will be useful to experimentally evaluate the functional significance of individual design features. | Enzyme engineering | 0.835613 |
2,136 | In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first generation of fundamental particles. The second and third generations contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions, because they all have half-odd integer spin; the electron has spin 1/2. | Electron | 0.835597 |
2,137 | However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms. Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them. Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics. In 1919, the American chemist Irving Langmuir elaborated on Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness". | Electron | 0.835597 |
2,138 | When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location, a probability density. Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. | Electron | 0.835597 |
2,139 | As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment. The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). | Electron | 0.835597 |
2,140 | The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital (so-called paired electrons) cancel each other out. The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics. The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules. | Electron | 0.835597 |
2,141 | Elementary algebra: Left-hand side and right-hand side of an equation – Linear equation – Quadratic equation – Solution point – Arithmetic progression – Recurrence relation – Finite difference – Difference operator – Groups – Group isomorphism – Subgroups – Fermat's little theorem – Cryptography – Faulhaber's formula | Outline of discrete mathematics | 0.835581 |
2,142 | Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics, such as integers, graphs, and statements in logic, do not vary smoothly in this way, but have distinct, separated values. Discrete mathematics, therefore, excludes topics in "continuous mathematics" such as calculus and analysis. Included below are many of the standard terms used routinely in university-level courses and in research papers. This is not, however, intended as a complete list of mathematical terms; just a selection of typical terms of art that may be encountered. | Outline of discrete mathematics | 0.835581 |
2,143 | Decimal – Binary numeral system – Divisor – Division by zero – Indeterminate form – Empty product – Euclidean algorithm – Fundamental theorem of arithmetic – Modular arithmetic – Successor function | Outline of discrete mathematics | 0.835581 |
2,144 | Binary relation – Heterogeneous relation – Reflexive relation – Reflexive property of equality – Symmetric relation – Symmetric property of equality – Antisymmetric relation – Transitivity (mathematics) – Transitive closure – Transitive property of equality – Equivalence and identity: Equivalence relation – Equivalence class – Equality (mathematics) – Inequation – Inequality (mathematics) – Similarity (geometry) – Congruence (geometry) – Equation – Identity (mathematics) – Identity element – Identity function – Substitution property of equality – Graphing equivalence – Extensionality – Uniqueness quantification | Outline of discrete mathematics | 0.835581 |
2,145 | Set (mathematics) – Element (mathematics) – Venn diagram – Empty set – Subset – Union (set theory) – Disjoint union – Intersection (set theory) – Disjoint sets – Complement (set theory) – Symmetric difference – Ordered pair – Cartesian product – Power set – Simple theorems in the algebra of sets – Naive set theory – Multiset | Outline of discrete mathematics | 0.835581 |
2,146 | Logic – a study of reasoning; Modal logic – a type of logic for the study of necessity and probability; Set theory – a study of collections of elements; Number theory – a study of integers and integer-valued functions; Combinatorics – a study of counting; Finite mathematics – a course title; Graph theory – a study of graphs; Digital geometry and digital topology; Algorithmics – a study of methods of calculation; Information theory – a mathematical representation of the conditions and parameters affecting the transmission and processing of information; Computability and complexity theories – deal with theoretical and practical limitations of algorithms; Elementary probability theory and Markov chains; Linear algebra – a study of related linear equations; Functions – an expression, rule, or law that defines a relationship between one variable (the independent variable) and another variable (the dependent variable); Partially ordered set; Probability – concerns numerical descriptions of the chances of occurrence of an event; Proofs; Relation – a collection of ordered pairs containing one object from each set | Outline of discrete mathematics | 0.835581 |
2,147 | For further reading in discrete mathematics, beyond a basic level, see these pages. Many of these disciplines are closely related to computer science. Automata theory – Coding theory – Combinatorics – Computational geometry – Digital geometry – Discrete geometry – Graph theory (a study of graphs) – Mathematical logic – Discrete optimization – Set theory – Number theory – Information theory – Game theory | Outline of discrete mathematics | 0.835581 |
2,148 | Mechanobiology is an emerging field of science at the interface of biology, engineering, chemistry and physics. It focuses on how physical forces and changes in the mechanical properties of cells and tissues contribute to development, cell differentiation, physiology, and disease. Mechanical forces are experienced and may be interpreted to give biological responses in cells. The movement of joints, compressive loads on the cartilage and bone during exercise, and shear pressure on the blood vessel during blood circulation are all examples of mechanical forces in human tissues. | Mechanobiology | 0.83558 |
2,149 | The solid phase is made up of porous ECM. The proteoglycans and interstitial fluids interact to give compressive force to the cartilage through negative electrostatic repulsive forces. The difference between the extracellular and intracellular ion concentrations of chondrocytes results in hydrostatic pressure. During development, the mechanical environment of the joint determines the surface and topology of the joint. In adults, moderate mechanical loading is required to maintain cartilage; immobilization of the joint leads to loss of proteoglycans and cartilage atrophy, while excess mechanical loading results in degeneration of the joint. | Mechanobiology | 0.83558 |
2,150 | A major challenge in the field is understanding mechanotransduction: the molecular mechanisms by which cells sense and respond to mechanical signals. While medicine has typically looked for the genetic and biochemical basis of disease, advances in mechanobiology suggest that changes in cell mechanics, extracellular matrix structure, or mechanotransduction may contribute to the development of many diseases, including atherosclerosis, fibrosis, asthma, osteoporosis, heart failure, and cancer. There is also a strong mechanical basis for many generalized medical disabilities, such as lower back pain, foot and postural injury, deformity, and irritable bowel syndrome. | Mechanobiology | 0.83558 |
2,151 | If $\mathcal{F}$ is the whole power set of $X$ then $\mathcal{C}(\mathbf{X})$ is called a full complex algebra or power algebra. Every (normal) Boolean algebra with operators can be represented as a field of sets on a relational structure in the sense that it is isomorphic to the complex algebra corresponding to the field. (Historically the term complex was first used in the case where the algebraic structure was a group and has its origins in 19th century group theory where a subset of a group was called a complex.) | Set algebra | 0.835571 |
2,152 | The representation of interior algebras by preorder fields can be generalized to a representation theorem for arbitrary (normal) Boolean algebras with operators. For this we consider structures $(X, (R_i)_I, \mathcal{F})$ where $(X, (R_i)_I)$ is a relational structure, i.e. a set with an indexed family of relations defined on it, and $(X, \mathcal{F})$ is a field of sets. The complex algebra (or algebra of complexes) determined by a field of sets $\mathbf{X} = (X, (R_i)_I, \mathcal{F})$ on a relational structure is the Boolean algebra with operators where, for all $i \in I$, if $R_i$ is a relation of arity $n+1$, then $f_i$ is an operator of arity $n$ and, for all $S_1, \ldots, S_n \in \mathcal{F}$, $f_i(S_1, \ldots, S_n) = \{x \in X : R_i(x_1, \ldots, x_n, x) \text{ holds for some } x_1 \in S_1, \ldots, x_n \in S_n\}$. This construction can be generalized to fields of sets on arbitrary algebraic structures having both operators and relations, as operators can be viewed as a special case of relations. | Set algebra | 0.835571 |
2,153 | Given an interior algebra we can form the Stone representation of its underlying Boolean algebra and then extend this to a topological field of sets by taking the topology generated by the complexes corresponding to the open elements of the interior algebra (which form a base for a topology). These complexes are then precisely the open complexes and the construction produces a Stone field representing the interior algebra: the Stone representation. (The topology of the Stone representation is also known as the McKinsey–Tarski Stone topology after the mathematicians who first generalized Stone's result for Boolean algebras to interior algebras, and should not be confused with the Stone topology of the underlying Boolean algebra of the interior algebra, which will be a finer topology). | Set algebra | 0.835571 |
2,154 | A topological field of sets is called algebraic if and only if there is a base for its topology consisting of complexes. If a topological field of sets is both compact and algebraic then its topology is compact and its compact open sets are precisely the open complexes. Moreover, the open complexes form a base for the topology. Topological fields of sets that are separative, compact and algebraic are called Stone fields and provide a generalization of the Stone representation of Boolean algebras. | Set algebra | 0.835571 |
2,155 | A preorder field is a triple $(X, \leq, \mathcal{F})$ where $(X, \leq)$ is a preordered set and $(X, \mathcal{F})$ is a field of sets. Like the topological fields of sets, preorder fields play an important role in the representation theory of interior algebras. Every interior algebra can be represented as a preorder field with its interior and closure operators corresponding to those of the Alexandrov topology induced by the preorder. In other words, for all $S \in \mathcal{F}$, $\mathrm{int}(S) = \{x \in X : y \in S \text{ for all } y \text{ with } x \leq y\}$ and $\mathrm{cl}(S) = \{x \in X : x \leq y \text{ for some } y \in S\}$. Similarly to topological fields of sets, preorder fields arise naturally in modal logic, where the points represent the possible worlds in the Kripke semantics of a theory in the modal logic S4, the preorder represents the accessibility relation on these possible worlds in this semantics, and the complexes represent sets of possible worlds in which individual sentences in the theory hold, providing a representation of the Lindenbaum–Tarski algebra of the theory. They are a special case of the general modal frames, which are fields of sets with an additional accessibility relation providing representations of modal algebras. | Set algebra | 0.835571 |
2,156 | The energy distribution of the emitted electrons is important both for scientific experiments that use the emitted electron energy distribution to probe aspects of the emitter surface physics and for the field emission sources used in electron beam instruments such as electron microscopes. In the latter case, the "width" (in energy) of the distribution influences how finely the beam can be focused. The theoretical explanation here follows the approach of Forbes. If $\varepsilon$ denotes the total electron energy relative to the emitter Fermi level, and $K_p$ denotes the kinetic energy of the electron parallel to the emitter surface, then the electron's normal energy $\varepsilon_n$ (sometimes called its "forwards energy") is defined by $\varepsilon_n = \varepsilon - K_p$. Two types of theoretical energy distribution are recognized: the normal-energy distribution (NED), which shows how the energy $\varepsilon_n$ is distributed immediately after emission (i.e., immediately outside the tunneling barrier); and the total-energy distribution, which shows how the total energy $\varepsilon$ is distributed. | Field emission | 0.835569 |
2,157 | This overall geometry has also been used with carbon nanotubes grown in the void. The other original device type was the "Latham emitter". | Field emission | 0.835569 |
2,158 | This element of incident current density sees a barrier of height $h$ given by $h = \varphi - \varepsilon + K_p$. The corresponding escape probability is $D(h, F)$; this may be expanded (approximately) in the form $D \approx D_F \exp[(\varepsilon - K_p)/d_F]$, where $D_F$ is the escape probability for a barrier of unreduced height equal to the local work-function $\varphi$. Hence, the element $\mathrm{d}\varepsilon\,\mathrm{d}K_p$ makes a contribution $z_S f_{FD} D\,\mathrm{d}\varepsilon\,\mathrm{d}K_p$ to the emission current density, and the total contribution made by incident electrons with energies in the elementary range $\mathrm{d}\varepsilon$ is thus $\mathrm{d}j = z_S f_{FD}(\varepsilon)\,\mathrm{d}\varepsilon \int D\,\mathrm{d}K_p$, where the integral is in principle taken along the strip shown in the diagram, but can in practice be extended to $\infty$ when the decay-width $d_F$ is very much less than the Fermi energy $K_F$ (which is always the case for a metal). The outcome of the integration can be written $\mathrm{d}j = j_F f_{FD}(\varepsilon) e^{\varepsilon/d_F}\,\mathrm{d}\varepsilon$, where $d_F$ and $D_F$ are values appropriate to a barrier of unreduced height $h$ equal to the local work function $\varphi$, and $j_F$ is defined by this equation. For a given emitter, with a given field applied to it, $j_F$ is independent of $\varepsilon$, so this equation shows that the shape of the distribution (as $\varepsilon$ increases from a negative value well below the Fermi level) is a rising exponential, multiplied by the FD distribution function. This generates the familiar distribution shape first predicted by Young. At low temperatures, $f_{FD}(\varepsilon)$ goes sharply from 1 to 0 in the vicinity of the Fermi level, and the FWHM of the distribution is given by $\Delta\varepsilon_{FWHM} = d_F \ln 2$. The fact that experimental CFE total energy distributions have this basic shape is a good experimental confirmation that electrons in metals obey Fermi–Dirac statistics. | Field emission | 0.835569 |
2,159 | In most E. coli K-12 strains (see Escherichia coli (molecular biology) for strain pedigrees) there are 314 UAG stop codons. Consequently, a gargantuan amount of work has gone into the replacement of these. One approach, pioneered by the group of Prof. George Church from Harvard, was dubbed MAGE in CAGE: this relied on a multiplex transformation and subsequent strain recombination to remove all UAG codons; the latter part presented a halting point in a first paper, but was overcome. This resulted in the E. coli strain C321.ΔA, which lacks all UAG codons and RF1. This allowed an experiment to be done with this strain to make it "addicted" to the amino acid biphenylalanine by evolving several key enzymes to require it structurally, therefore putting its expanded genetic code under positive selection. | Genetic code expansion | 0.835562 |
2,160 | Similarly to orthogonal tRNAs and aminoacyl tRNA synthetases (aaRSs), orthogonal ribosomes have been engineered to work in parallel to the natural ribosomes. Orthogonal ribosomes ideally use different mRNA transcripts than their natural counterparts and ultimately should draw on a separate pool of tRNA as well. This should alleviate some of the loss of fitness which currently still arises from techniques such as Amber codon suppression. Additionally, orthogonal ribosomes can be mutated and optimized for particular tasks, like the recognition of quadruplet codons. Such an optimization is not possible, or highly disadvantageous for natural ribosomes. | Genetic code expansion | 0.835562 |
2,161 | A dimensional data element is similar to a categorical variable in statistics. Typically dimensions in a data warehouse are organized internally into one or more hierarchies. "Date" is a common dimension, with several possible hierarchies: "Days (are grouped into) Months (which are grouped into) Years", "Days (are grouped into) Weeks (which are grouped into) Years" "Days (are grouped into) Months (which are grouped into) Quarters (which are grouped into) Years" etc. | Data dimension | 0.835553 |
2,162 | The developers themselves highlight the fact that those doing research should exercise caution when using such microbenchmarks: the JavaScript benchmarks are fleetingly small, and behave in ways that are significantly different than the real applications. We have documented numerous differences in behavior, and we conclude from these measured differences that results based on the benchmarks may mislead JavaScript engine implementers. Furthermore, we observe interesting behaviors in real JavaScript applications that the benchmarks fail to exhibit, suggesting that previously unexplored optimization strategies may be productive in practice. | The Computer Language Benchmarks Game | 0.835518 |
2,163 | The benchmark results have uncovered various compiler issues. Sometimes a given compiler failed to process unusual, but otherwise grammatically valid constructs. At other times, runtime performance was shown to be below expectations, which prompted compiler developers to revise their optimization capabilities. Various research articles have been based on the benchmarks, its results and its methodology. | The Computer Language Benchmarks Game | 0.835518 |
2,164 | In physics and other sciences many thought experiments date from the 19th and especially the 20th century, but examples can be found at least as early as Galileo. In thought experiments, we gain new information by rearranging or reorganizing already known empirical data in a new way and drawing new (a priori) inferences from them or by looking at these data from a different and unusual perspective. In Galileo's thought experiment, for example, the rearrangement of empirical experience consists of the original idea of combining bodies of different weights. Thought experiments have been used in philosophy (especially ethics), physics, and other fields (such as cognitive psychology, history, political science, economics, social psychology, law, organizational studies, marketing, and epidemiology). In law, the synonym "hypothetical" is frequently used for such experiments. Regardless of their intended goal, all thought experiments display a patterned way of thinking that is designed to allow us to explain, predict and control events in a better and more productive way. | Thought Experiment | 0.835512 |
2,165 | Thought experiments have been used in a variety of fields, including philosophy, law, physics, and mathematics. In philosophy they have been used at least since classical antiquity, some pre-dating Socrates. In law, they were well known to Roman lawyers quoted in the Digest. In physics and other sciences, notable thought experiments date from the 19th and especially the 20th century, but examples can be found at least as early as Galileo. | Thought Experiment | 0.835512 |
2,166 | After some decades, it was asserted that feasible experiments could prove the error of the EPR paper. These experiments tested the Bell inequalities, published in 1964 in a purely theoretical paper. The above-mentioned EPR philosophical starting assumptions were considered to be falsified by empirical fact (e.g. by the real optical experiments of Alain Aspect). Thus thought experiments belong to a theoretical discipline, usually to theoretical physics, but often to theoretical philosophy. In any case, a thought experiment must be distinguished from a real experiment, which belongs naturally to the experimental discipline and has "the final decision on true or not true", at least in physics. | Thought Experiment | 0.835512 |
2,167 | The relation to real experiments can be quite complex, as can be seen again from an example going back to Albert Einstein. In 1935, with two coworkers, he published a paper on a newly created subject, later called the EPR effect (EPR paradox). In this paper, starting from certain philosophical assumptions and on the basis of a rigorous analysis of a certain complicated, but assertedly realizable, model, he came to the conclusion that quantum mechanics should be described as "incomplete". Niels Bohr asserted a refutation of Einstein's analysis immediately, and his view prevailed. | Thought Experiment | 0.835512 |
2,168 | With this high influx of new information, there has arisen a higher demand for bioinformatics so scientists can properly analyze the new data. In response, software and other tools have been developed for this purpose. Also, as of 2008, the amount of stored sequences was doubling every 18 months, making urgent the need for better ways to organize data and aid research. In response, many publicly accessible databases and other resources have been created, including the NCBI pathogen detection program, the Pathosystems Resource Integration Center (PATRIC), Pathogenwatch, the Virulence Factor Database (VFDB) of pathogenic bacteria, and the Victors database of virulence factors in human and animal pathogens. As of 2022, the most sequenced pathogens were Salmonella enterica and E. coli/Shigella. The sequencing technologies, the bioinformatics tools, the databases, statistics related to pathogen genomes and the applications in forensics, epidemiology, clinical practice and food safety have been extensively reviewed. | Pathogenomics | 0.835507 |
2,169 | Multiple genetic elements of human-affecting pathogens contribute to the transfer of virulence factors: plasmids, pathogenicity islands, prophages, bacteriophages, transposons, and integrative and conjugative elements. Pathogenicity islands and their detection are the focus of several bioinformatics efforts involved in pathogenomics. It is a common belief that "environmental bacterial strains" lack the capacity to harm or do damage to humans. However, recent studies show that bacteria from aquatic environments have acquired pathogenic strains through evolution. This gives the bacteria a wider range of genetic traits and can pose a potential threat to humans, including greater resistance towards antibiotics. | Pathogenomics | 0.835507 |
2,170 | In the early days of genomics, scientists found it challenging to sequence genetic information. The field began to explode in 1977 when Fred Sanger, along with his colleagues, sequenced the DNA-based genome of a bacteriophage using a method now known as the Sanger method. The Sanger method for sequencing DNA exponentially advanced molecular biology and directly led to the ability to sequence the genomes of other organisms, including the complete human genome. The Haemophilus influenzae genome was one of the first organism genomes sequenced, in 1995, by J. Craig Venter and Hamilton Smith using whole genome shotgun sequencing. Since then, newer and more efficient high-throughput sequencing methods, such as next-generation sequencing (NGS) and single-cell genomic sequencing, have been developed. | Pathogenomics | 0.835507 |
2,171 | The "eco-evo" perspective on pathogen-host interactions emphasizes the influences ecology and the environment on pathogen evolution. The dynamic genomic factors such as gene loss, gene gain and genome rearrangement, are all strongly influenced by changes in the ecological niche where a particular microbial strain resides. Microbes may switch from being pathogenic and non-pathogenic due to changing environments. This was demonstrated during studies of the plague, Yersinia pestis, which apparently evolved from a mild gastrointestinal pathogen to a very highly pathogenic microbe through dynamic genomic events. | Pathogenomics | 0.835507 |
2,172 | Pathogenomics is a field which uses high-throughput screening technology and bioinformatics to study encoded microbe resistance, as well as virulence factors (VFs), which enable a microorganism to infect a host and possibly cause disease. This includes studying genomes of pathogens which cannot be cultured outside of a host. In the past, researchers and medical professionals found it difficult to study and understand pathogenic traits of infectious organisms. With newer technology, pathogen genomes can be identified and sequenced in a much shorter time and at a lower cost, thus improving the ability to diagnose, treat, and even predict and prevent pathogenic infections and disease. It has also allowed researchers to better understand genome evolution events (gene loss, gain, duplication, and rearrangement) and how those events impact pathogen resistance and ability to cause disease. This influx of information has created a need for bioinformatics tools and databases to analyze and make the vast amounts of data accessible to researchers, and it has raised ethical questions about the wisdom of reconstructing previously extinct and deadly pathogens in order to better understand virulence. | Pathogenomics | 0.835507 |
2,173 | One of the key forces driving gene gain is thought to be horizontal (lateral) gene transfer (LGT). It is of particular interest in microbial studies because these mobile genetic elements may introduce virulence factors into a new genome. A comparative study conducted by Gill et al. in 2005 postulated that LGT may have been the cause for pathogen variations between Staphylococcus epidermidis and Staphylococcus aureus. There still, however, remains skepticism about the frequency of LGT, its identification, and its impact. New and improved methodologies have been engaged, especially in the study of phylogenetics, to validate the presence and effect of LGT. Gene gain and gene duplication events are balanced by gene loss, such that despite their dynamic nature, the genome of a bacterial species remains approximately the same size. | Pathogenomics | 0.835507 |
2,174 | "Method 2" is the only solution that fulfills the transformation invariants that are present in certain physical systemsβsuch as in statistical mechanics and gas physicsβin the specific case of Jaynes's proposed experiment of throwing straws from a distance onto a small circle. Nevertheless, one can design other practical experiments that give answers according to the other methods. For example, in order to arrive at the solution of "method 1", the random endpoints method, one can affix a spinner to the center of the circle, and let the results of two independent spins mark the endpoints of the chord. In order to arrive at the solution of "method 3", one could cover the circle with molasses and mark the first point that a fly lands on as the midpoint of the chord. Several observers have designed experiments in order to obtain the different solutions and verified the results empirically. | Bertrand paradox (probability) | 0.835496 |
2,175 | The activity selection problem is a combinatorial optimization problem concerning the selection of non-conflicting activities to perform within a given time frame, given a set of activities each marked by a start time (si) and finish time (fi). The problem is to select the maximum number of activities that can be performed by a single person or machine, assuming that a person can only work on a single activity at a time. The activity selection problem is also known as the Interval scheduling maximization problem (ISMP), which is a special type of the more general Interval Scheduling problem. A classic application of this problem is in scheduling a room for multiple competing events, each having its own time requirements (start and end time), and many more arise within the framework of operations research. (A minimal greedy sketch in Python follows this table.) | Activity selection problem | 0.835495 |
2,176 | In fact, the information lower bound can be generalised to the case where there is a non-zero probability that the algorithm makes an error. In this form, the theorem gives us an upper bound on the probability of success based on the number of tests. For any group-testing algorithm that performs $t$ tests, the probability of success $\mathbb{P}(\textrm{success})$ satisfies $\mathbb{P}(\textrm{success}) \leq t/\log_2 \binom{n}{d}$. This can be strengthened to $\mathbb{P}(\textrm{success}) \leq 2^t/\binom{n}{d}$. | Group testing | 0.835436 |
2,177 | The generality of the theory of group testing lends it to many diverse applications, including clone screening, locating electrical shorts, high-speed computer networks, medical examination, quantity searching, statistics, machine learning, DNA sequencing, cryptography, and data forensics. This section provides a brief overview of a small selection of these applications. | Group testing | 0.835436 |
2,178 | The concept of group testing was first introduced by Robert Dorfman in 1943 in a short report published in the Notes section of Annals of Mathematical Statistics. Dorfman's report, as with all the early work on group testing, focused on the probabilistic problem, and aimed to use the novel idea of group testing to reduce the expected number of tests needed to weed out all syphilitic men in a given pool of soldiers. The method was simple: put the soldiers into groups of a given size, and use individual testing (testing items in groups of size one) on the positive groups to find which were infected. Dorfman tabulated the optimum group sizes for this strategy against the prevalence rate of defectiveness in the population. (A numerical sketch of this calculation follows the table.) | Group testing | 0.835436 |
2,179 | The structure of the scheme of the tests involved in a non-adaptive procedure is known as a pooling design. Group testing has many applications, including statistics, biology, computer science, medicine, engineering and cyber security. Modern interest in these testing schemes has been rekindled by the Human Genome Project. | Group testing | 0.835436 |
2,180 | In statistics and combinatorial mathematics, group testing is any procedure that breaks up the task of identifying certain objects into tests on groups of items, rather than on individual ones. First studied by Robert Dorfman in 1943, group testing is a relatively new field of applied mathematics that can be applied to a wide range of practical applications and is an active area of research today. A familiar example of group testing involves a string of light bulbs connected in series, where exactly one of the bulbs is known to be broken. The objective is to find the broken bulb using the smallest number of tests (where a test is when some of the bulbs are connected to a power supply). | Group testing | 0.835436 |
2,181 | It is hierarchically structured in four levels. For example, one branch of the hierarchy contains: Computing methodologies → Artificial intelligence → Knowledge representation and reasoning → Ontology engineering. | ACM Computing Classification System | 0.835397 |
2,182 | A deterministic model of computation is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state. Historically, the first deterministic models were recursive functions, lambda calculus, and Turing machines. The model of random-access machines (also called RAM-machines) is also widely used, as a closer counterpart to real computers. When the model of computation is not specified, it is generally assumed to be a multitape Turing machine. For most algorithms, the time complexity is the same on multitape Turing machines as on RAM-machines, although some care may be needed in how data is stored in memory to get this equivalence. | Context of computational complexity | 0.835377 |
2,183 | Conditional Inference Trees: a statistics-based approach that uses non-parametric tests as splitting criteria, corrected for multiple testing to avoid overfitting. This approach results in unbiased predictor selection and does not require pruning. ID3 and CART were invented independently at around the same time (between 1970 and 1980), yet follow a similar approach for learning a decision tree from training tuples. | Classification and regression tree | 0.835363 |
2,184 | Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. In this formalism, a classification or regression decision tree is used as a predictive model to draw conclusions about a set of observations. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities, such as categorical sequences. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making). | Classification and regression tree | 0.835363 |
2,185 | Many data mining software packages provide implementations of one or more decision tree algorithms. Examples include Salford Systems CART (which licensed the proprietary code of the original CART authors), IBM SPSS Modeler, RapidMiner, SAS Enterprise Miner, Matlab, R (an open-source software environment for statistical computing, which includes several CART implementations such as the rpart, party and randomForest packages), Weka (a free and open-source data-mining suite that contains many decision tree algorithms), Orange, KNIME, Microsoft SQL Server, and scikit-learn (a free and open-source machine learning library for the Python programming language). (A minimal scikit-learn sketch follows this table.) | Classification and regression tree | 0.835363 |
2,186 | Caspar Henderson, in his book review in The Telegraph, writes that Lane's book "succeeds brilliantly" as good science writing can, expanding the reader's horizons "in ways not previously imagined." Lane explains why the counterintuitive idea "that cross-membrane proton gradients power all living cells" is no mere technical detail: per gram, he notes, the power is 10,000 times denser than the sun, and it is conserved across every form of life, telling us something about how life began and how it was constrained to evolve. Henderson recommends the book as amazing and gripping, only criticising the publisher for the "pedestrian" quality of the design and printing. The founder of Microsoft, Bill Gates, reviewed the book under the heading "This Biology Book Blew Me Away". | The Vital Question | 0.835355 |
2,187 | Tim Requarth, reviewing The Vital Question for The New York Times, finds the book "seductive and often convincing, though speculation far outpaces evidence in many of the book's passages. But perhaps for a biological theory of everything, that's to be expected, even welcomed." Peter Forbes, reviewing The Vital Question in The Guardian, noted that the origin of life was once thought to be "safely consigned to wistful armchair musing", but that in the past 20 years new research in genomics, geology, biochemistry and molecular biology has transformed thinking in the field: "Here is the book that presents all this hard evidence and tightly interlocking theory to a wider audience." | The Vital Question | 0.835355 |
2,188 | In the book, Lane discusses what he considers to be a major gap in biology: why life operates the way that it does, and how it began. In his view as a biochemist, the core question is about energy, as all cells handle energy in the same way, relying on a steep electrochemical gradient across the very small thickness of a membrane in a cell to power all the chemical reactions of life. The electrical energy is transformed into forms that the cell can use by a chain of energy-handling structures including ancient proteins such as cytochromes, ion channels, and the enzyme ATP synthase, all built into the membrane. Once evolved, this chain has been conserved by all living things, showing that it is vital to life. | The Vital Question | 0.835355 |
2,189 | Darwin suggested that life could have originated in some "warm little pond" containing a suitable mixture of chemical compounds. The question has continued to be debated into the 21st century. Nick Lane is a biochemist at University College London; he researches "evolutionary biochemistry and bioenergetics, focusing on the origin of life and the evolution of complex cells." He has become known as a science writer, having written four books about evolutionary biochemistry. | The Vital Question | 0.835355 |
2,190 | Although quasi-polynomially solvable, it has been conjectured that the planted clique problem has no polynomial time solution; this planted clique conjecture has been used as a computational hardness assumption to prove the difficulty of several other problems in computational game theory, property testing, and machine learning. The complexity class QP consists of all problems that have quasi-polynomial time algorithms. It can be defined in terms of DTIME as follows: $\mathrm{QP}=\bigcup_{c\in\mathbb{N}}\mathrm{DTIME}\left(2^{\log^{c}n}\right)$ | Super-polynomial time | 0.83534 |
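A quick worked check of the QP definition above (illustrative, not from the source row; $\log$ is taken base 2 here):

```latex
% c = 1 gives 2^{\log n} = n (linear time); c = 2 gives n^{\log n},
% a canonical quasi-polynomial bound. Since n^k = 2^{k \log n} \le 2^{\log^2 n}
% for sufficiently large n, every polynomial-time problem lies in QP.
\[
  2^{\log^{1} n} = 2^{\log n} = n,
  \qquad
  2^{\log^{2} n} = \left(2^{\log n}\right)^{\log n} = n^{\log n}.
\]
```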
2,191 | In mathematics, variation of parameters, also known as variation of constants, is a general method to solve inhomogeneous linear ordinary differential equations. For first-order inhomogeneous linear differential equations it is usually possible to find solutions via integrating factors or undetermined coefficients with considerably less effort, although those methods leverage heuristics that involve guessing and do not work for all inhomogeneous linear differential equations. Variation of parameters extends to linear partial differential equations as well, specifically to inhomogeneous problems for linear evolution equations like the heat equation, wave equation, and vibrating plate equation. In this setting, the method is more often known as Duhamel's principle, named after Jean-Marie Duhamel (1797–1872) who first applied the method to solve the inhomogeneous heat equation. Sometimes variation of parameters itself is called Duhamel's principle and vice versa. | Variation of parameters | 0.835328 |
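A hedged worked example of the standard second-order recipe (textbook material, not taken from the source row): given homogeneous solutions $y_1, y_2$ with Wronskian $W$, variation of parameters yields a particular solution, illustrated here for $y'' + y = \sec x$.

```latex
% Variation of parameters for y'' + p(x)y' + q(x)y = g(x),
% given homogeneous solutions y_1, y_2 with Wronskian W = y_1 y_2' - y_1' y_2:
\[
  y_p(x) = -\,y_1(x)\int \frac{y_2(x)\,g(x)}{W(x)}\,dx
           \;+\; y_2(x)\int \frac{y_1(x)\,g(x)}{W(x)}\,dx .
\]
% Example: y'' + y = \sec x, with y_1 = \cos x, y_2 = \sin x, W = 1.
% Here y_2 g = \tan x and y_1 g = 1, so:
\[
  y_p = -\cos x \int \tan x \, dx + \sin x \int dx
      = \cos x \,\ln\lvert\cos x\rvert + x\,\sin x .
\]
```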
2,192 | Unlike node degree, which depends on topology alone, percolation centrality takes into account the topological importance of a node as well as its distance from infected nodes in deciding its overall importance. Piraveenan et al. have shown that percolation centrality-based vaccination is particularly effective when the proportion of people already infected is on the same order of magnitude as the number of people who could be vaccinated before the disease spreads much further. If infection spread is in its infancy, then ring-vaccination surrounding the source of infection is most effective, whereas if the proportion of people already infected is much higher than the number of people who could be vaccinated quickly, then vaccination will only help those who are vaccinated and herd immunity cannot be achieved. Percolation centrality-based vaccination is most effective in the critical scenario where the infection has already spread too far to be completely surrounded by ring-vaccination, yet not so wide that it cannot be contained by strategic vaccination. Nevertheless, percolation centrality also needs the full network topology to be computed, and thus is more useful at higher levels of abstraction (for example, networks of townships rather than social networks of individuals), where the corresponding network topology can more readily be obtained. | Targeted immunization strategies | 0.835327 |
2,193 | These nodes are the most highly connected in the network, making them more likely to spread the contagion if infected. Immunizing this segment of the network can drastically reduce the impact of the disease on the network and requires the immunization of far fewer nodes compared to randomly selecting nodes. However, this strategy relies on knowing the global structure of the network, which may not always be practical. A recent centrality measure, percolation centrality, introduced by Piraveenan et al., is particularly useful in identifying nodes for vaccination based on the network topology. | Targeted immunization strategies | 0.835327 |
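A minimal sketch of the degree-based targeted immunization described above, assuming the networkx library; the synthetic scale-free graph and the budget of 50 are illustrative choices, not drawn from the source rows.

```python
# Degree-based targeted immunization: immunize the highest-degree hubs.
import networkx as nx

def targeted_immunization(G: nx.Graph, budget: int) -> set:
    """Return the `budget` highest-degree nodes as immunization targets."""
    ranked = sorted(G.degree, key=lambda pair: pair[1], reverse=True)
    return {node for node, _degree in ranked[:budget]}

# Synthetic scale-free network (hub-dominated, like many contact networks).
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)
immunized = targeted_immunization(G, budget=50)

# Removing the immunized hubs fragments the network, limiting contagion spread.
residual = G.copy()
residual.remove_nodes_from(immunized)
largest = max(nx.connected_components(residual), key=len)
print(f"Largest connected component after immunization: {len(largest)} nodes")
```

Percolation centrality itself additionally weights nodes by their distance from infected nodes, which this degree-only sketch deliberately omits.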
2,194 | No longer satisfied with establishing properties of concrete objects, mathematicians started to turn their attention to general theory. Formal definitions of certain algebraic structures began to emerge in the 19th century. For example, results about various groups of permutations came to be seen as instances of general theorems that concern a general notion of an abstract group. | Abstract Algebra | 0.835271 |
2,195 | The end of the 19th and the beginning of the 20th century saw a shift in the methodology of mathematics. Abstract algebra emerged around the start of the 20th century, under the name modern algebra. Its study was part of the drive for more intellectual rigor in mathematics. Initially, the assumptions in classical algebra, on which the whole of mathematics (and major parts of the natural sciences) depend, took the form of axiomatic systems. | Abstract Algebra | 0.835271 |
2,196 | In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. The first clear definition of an abstract field was due to Heinrich Martin Weber in 1893. It was missing the associative law for multiplication, but covered finite fields and the fields of algebraic number theory and algebraic geometry. In 1910 Steinitz synthesized the knowledge of abstract field theory accumulated so far. He axiomatically defined fields with the modern definition, classified them by their characteristic, and proved many theorems commonly seen today. | Abstract Algebra | 0.835271 |
2,197 | Most theories that are now recognized as parts of abstract algebra started as collections of disparate facts from various branches of mathematics, acquired a common theme that served as a core around which various results were grouped, and finally became unified on a basis of a common set of concepts. This unification occurred in the early decades of the 20th century and resulted in the formal axiomatic definitions of various algebraic structures such as groups, rings, and fields. This historical development is almost the opposite of the treatment found in popular textbooks, such as van der Waerden's Moderne Algebra, which start each chapter with a formal definition of a structure and then follow it with concrete examples. | Abstract Algebra | 0.835271 |
2,198 | Before the nineteenth century, algebra was defined as the study of polynomials. Abstract algebra came into existence during the nineteenth century as more complex problems and solution methods developed. Concrete problems and examples came from number theory, geometry, analysis, and the solutions of algebraic equations. | Abstract Algebra | 0.835271 |
2,199 | Using tools of algebraic number theory, Andrew Wiles proved Fermat's Last Theorem. In physics, groups are used to represent symmetry operations, and the usage of group theory could simplify differential equations. In gauge theory, the requirement of local symmetry can be used to deduce the equations describing a system. The groups that describe those symmetries are Lie groups, and the study of Lie groups and Lie algebras reveals much about the physical system; for instance, the number of force carriers in a theory is equal to the dimension of the Lie algebra, and these bosons interact with the force they mediate if the Lie algebra is nonabelian. | Abstract Algebra | 0.835271 |