Dataset columns: id (int32, values 0 to 100k) · text (string, lengths 21 to 3.54k) · source (string, lengths 1 to 124) · similarity (float32, values 0.78 to 0.88)
3,200
However, this alternative definition includes one exceptional structure of order 2 which fails to satisfy various basic theorems (such as x · 0 = 0 for all x). Thus it is much more convenient, and more usual, to use the axioms in the form given above. The difference is that A4 requires 1 to be an identity for all elements, while A4* requires it only for non-zero elements. The exceptional structure can be defined by taking an additive group of order 2 and defining multiplication by x · y = x for all x and y.
Near-field (mathematics)
0.831728
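The exceptional order-2 structure described above is small enough to check exhaustively. The following sketch (an illustration of mine, not from the article) verifies that right distributivity holds, that 1 acts as an identity on the single non-zero element (A4*), and that x · 0 = 0 fails (so A4 does not hold):

```python
from itertools import product

elems = (0, 1)
add = lambda a, b: (a + b) % 2   # the additive group of order 2
mul = lambda a, b: a             # the exceptional multiplication x . y = x

# Right distributivity (a + b) . c == a . c + b . c holds for every triple.
assert all(mul(add(a, b), c) == add(mul(a, c), mul(b, c))
           for a, b, c in product(elems, repeat=3))

# 1 is an identity on the non-zero element (A4*) ...
assert mul(1, 1) == 1
# ... but x . 0 = 0 fails for x = 1, so 1 is not an identity for all elements (A4).
assert mul(1, 0) != 0
print("exceptional structure checked")
```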
3,201
In mathematics, a near-field is an algebraic structure similar to a division ring, except that it has only one of the two distributive laws. Alternatively, a near-field is a near-ring in which there is a multiplicative identity and every non-zero element has a multiplicative inverse.
Near-field (mathematics)
0.831728
3,202
In plasma astrophysics, the corotation electric field is the electric field due to the rotation of a magnet. For example, the rotation of the Earth results in a corotation electric field.
Corotation electric field
0.831691
3,203
Photography by reflected ultraviolet radiation is useful for medical, scientific, and forensic investigations, in applications as widespread as detecting bruising of skin, alterations of documents, or restoration work on paintings. Photography of the fluorescence produced by ultraviolet illumination uses visible wavelengths of light. In ultraviolet astronomy, measurements are used to discern the chemical composition of the interstellar medium, and the temperature and composition of stars. Because the ozone layer blocks many UV frequencies from reaching telescopes on the surface of the Earth, most UV observations are made from space.
Ultraviolet waves
0.831689
3,204
In software engineering, domain knowledge is knowledge about the environment in which the target system operates, for example, software agents. Domain knowledge usually must be learned from software users in the domain (as domain specialists/experts) rather than from software developers. It may include user workflows, data pipelines, business policies, configurations, and constraints, and it is crucial in the development of a software application. Experts' domain knowledge (frequently informal and ill-structured) is transformed into computer programs and active data, for example into a set of rules in knowledge bases, by knowledge engineers.
Domain knowledge
0.831669
3,205
The importance of prior knowledge in machine learning is suggested by its role in search and optimization. Loosely, the no free lunch theorem states that all search algorithms have the same average performance over all problems, and thus implies that to gain in performance on a certain application one must use a specialized algorithm that includes some prior knowledge about the problem. The different types of prior knowledge encountered in pattern recognition are now regrouped under two main categories: class-invariance and knowledge of the data.
Prior knowledge for pattern recognition
0.83165
3,206
Using the relation between the Cauchy stress and the surface traction, t = n · σ (where n is the unit outward normal to ∂Ω), the variation of the work done by the surface tractions is
\[
\delta W = \int_{\partial \Omega } \mathbf{t} \cdot \delta \mathbf{u}\, dA = \int_{\partial \Omega } (\mathbf{n} \cdot \boldsymbol{\sigma }) \cdot \delta \mathbf{u}\, dA.
\]
Converting the surface integral into a volume integral via the divergence theorem gives
\[
\delta W = \int_{\Omega } \nabla \cdot (\boldsymbol{\sigma } \cdot \delta \mathbf{u})\, dV.
\]
Using the symmetry of the Cauchy stress, the identity ∇ · (σ · δu) = (∇ · σ) · δu + σ : ∇(δu), the definition of strain, δε = ½(∇δu + (∇δu)ᵀ), and the equations of equilibrium, ∇ · σ = 0, we have
\[
\delta W = \int_{\Omega } \boldsymbol{\sigma } : \delta \boldsymbol{\varepsilon }\, dV,
\]
and therefore the variation in the internal energy density is given by δU₀ = σ : δε. An elastic material is defined as one in which the total internal energy is equal to the potential energy of the internal forces (also called the elastic strain energy). Therefore, the internal energy density is a function of the strains, U₀ = U₀(ε), and the variation of the internal energy can be expressed as
\[
\delta U_{0} = \frac{\partial U_{0}}{\partial \boldsymbol{\varepsilon }} : \delta \boldsymbol{\varepsilon }.
\]
Since the variation of strain is arbitrary, the stress–strain relation of an elastic material is given by
\[
\boldsymbol{\sigma } = \frac{\partial U_{0}}{\partial \boldsymbol{\varepsilon }}.
\]
For a linear elastic material, the quantity ∂U₀/∂ε is a linear function of ε, and can therefore be expressed as
\[
\boldsymbol{\sigma } = \mathsf{c} : \boldsymbol{\varepsilon },
\]
where c is a fourth-rank tensor of material constants, also called the stiffness tensor. We can see why c must be a fourth-rank tensor by writing the relation in index notation,
\[
\sigma _{ij} = c_{ijkl}\, \varepsilon _{kl}.
\]
The right-hand side constant requires four indices and is a fourth-rank quantity. We can also see that this quantity must be a tensor because it is a linear transformation that takes the strain tensor to the stress tensor. We can also show that the constant obeys the tensor transformation rules for fourth-rank tensors.
Stress-strain relationship
0.83165
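To make the fourth-rank stiffness tensor concrete, here is a short numpy sketch (my illustration; the isotropic form and the Lamé values are assumptions, not from the article) that builds c_ijkl for an isotropic material and checks σ = c : ε against the closed form σ = λ tr(ε) I + 2με:

```python
import numpy as np

lam, mu = 1.0, 0.5   # assumed Lame parameters (arbitrary example values)
d = np.eye(3)        # Kronecker delta

# Fourth-rank isotropic stiffness tensor c_ijkl.
c = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

eps = np.random.rand(3, 3)
eps = 0.5 * (eps + eps.T)                  # a symmetric small-strain tensor

sigma = np.einsum('ijkl,kl->ij', c, eps)   # sigma = c : eps
assert np.allclose(sigma, lam * np.trace(eps) * d + 2 * mu * eps)
```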
3,207
Pattern recognition is a very active field of research intimately bound to machine learning. Also known as classification or statistical classification, pattern recognition aims at building a classifier that can determine the class of an input pattern. This procedure, known as training, corresponds to learning an unknown decision function based only on a set of input-output pairs (x_i, y_i) that form the training data (or training set). Nonetheless, in real-world applications such as character recognition, a certain amount of information on the problem is usually known beforehand. The incorporation of this prior knowledge into the training is the key element that allows an increase in performance in many applications.
Prior knowledge for pattern recognition
0.83165
3,208
The Spanish National Bioinformatics Institute (INB-ISCIII; Spanish: Instituto Nacional de Bioinformática) is an academic service institution tasked with the coordination, integration and development of bioinformatics resources in Spain. Created in 2003, the INB is, since 2015, the main node through which the Carlos III Health Institute is connected to ELIXIR, a European-wide infrastructure of life science data, coordinating the other Spanish institutions partaking in the initiative, such as the Spanish National Cancer Research Centre (CNIO), the Centre for Genomic Regulation (CRG), the Universitat Pompeu Fabra, the Institute for Research in Biomedicine (IRB) and the Barcelona Supercomputing Center. It consists of 10 distributed nodes, coordinated by a central node, encompassing the scopes of genomics, proteomics, functional genomics, structural biology, population genomics and genome diversity, health informatics, algorithm development and high-performance computing. It is the Spanish participant in the common data platform promoted by the European Union to ensure a rapid and coordinated response to the health crisis caused by COVID-19. Their MareNostrum supercomputer has been used for testing the potential efficacy of compounds against SARS-CoV-2. Alfonso Valencia, former president of the International Society for Computational Biology, is the director.
Spanish National Bioinformatics Institute
0.831645
3,209
In computer science, Monte Carlo tree search (MCTS) is a heuristic search algorithm for some kinds of decision processes, most notably those employed in software that plays board games. In that context MCTS is used to solve the game tree. MCTS was combined with neural networks in 2016 and has been used in multiple board games like Chess, Shogi, Checkers, Backgammon, Contract Bridge, Go, Scrabble, and Clobber as well as in turn-based-strategy video games (such as Total War: Rome II's implementation in the high level campaign AI).
Monte Carlo tree search
0.831643
3,210
Such methods were then explored and successfully applied to heuristic search in the field of automated theorem proving by W. Ertel, J. Schumann and C. Suttner in 1989, thus improving on the exponential search times of uninformed search algorithms such as breadth-first search, depth-first search, or iterative deepening. In 1992, B. Brügmann employed it for the first time in a Go-playing program. In 2002, Chang et al. proposed the idea of "recursive rolling out and backtracking" with "adaptive" sampling choices in their Adaptive Multi-stage Sampling (AMS) algorithm for the model of Markov decision processes. AMS was the first work to explore the idea of UCB-based exploration and exploitation in constructing sampled/simulated (Monte Carlo) trees and was the main seed for UCT (Upper Confidence Trees).
Monte Carlo tree search
0.831643
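As context for the UCB-based exploration mentioned in the snippet above, the standard UCB1-style selection rule used in UCT (added here for reference, not part of the snippet) chooses the child maximizing
\[
\frac{w_{i}}{n_{i}} + c\,\sqrt{\frac{\ln N}{n_{i}}},
\]
where w_i is the number of wins recorded for child i, n_i its visit count, N the visit count of the parent node, and c an exploration constant (often taken as √2).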
3,211
Subjects taught in a prealgebra course may include:
- Review of natural number arithmetic
- Types of numbers such as integers, fractions, decimals and negative numbers
- Ratios and percents
- Factorization of natural numbers
- Properties of operations such as associativity and distributivity
- Simple (integer) roots and powers
- Rules of evaluation of expressions, such as operator precedence and use of parentheses
- Basics of equations, including rules for invariant manipulation of equations
- Understanding of variable manipulation
- Manipulation and plotting in the standard 4-quadrant Cartesian coordinate plane
- Powers in scientific notation (example: 340,000,000 in scientific notation is 3.4 × 10^8)
- Identifying probability
- Solving square roots
- Pythagorean theorem

Prealgebra may include subjects from geometry, especially to further the understanding of algebra in applications to area and volume. Prealgebra may also include subjects from statistics to identify probability and interpret data. Proficiency in prealgebra is an indicator of college success. It can also be taught as a remedial course for college students.
Pre-algebra
0.831621
3,212
Prealgebra is a common name for a course in middle school mathematics in the United States, usually taught in the 7th or 8th grade. Its objective is to prepare students for the study of algebra, which is usually taught in the 8th and 9th grade. As an intermediate stage after arithmetic, prealgebra helps students pass specific conceptual barriers. Students are introduced to the idea that an equals sign, rather than just signaling the answer to a question as in basic arithmetic, means that two sides are equivalent and can be manipulated together. They also learn how numbers, variables, and words can be used in the same ways.
Pre-algebra
0.831621
3,213
A general system of m linear equations with n unknowns and coefficients can be written as
\[
\begin{cases}
a_{11}x_{1}+a_{12}x_{2}+\dots +a_{1n}x_{n}+b_{1}=0\\
a_{21}x_{1}+a_{22}x_{2}+\dots +a_{2n}x_{n}+b_{2}=0\\
\quad\vdots \\
a_{m1}x_{1}+a_{m2}x_{2}+\dots +a_{mn}x_{n}+b_{m}=0,
\end{cases}
\]
where \(x_{1},x_{2},\dots ,x_{n}\) are the unknowns, \(a_{11},a_{12},\dots ,a_{mn}\) are the coefficients of the system, and \(b_{1},b_{2},\dots ,b_{m}\) are the constant terms. Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure.
Homogeneous system of linear equations
0.831621
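As a minimal worked example (the numbers are mine, chosen arbitrarily), the snippet's form A x + b = 0 is equivalent to A x = −b and can be solved directly:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # coefficients a_ij (assumed example values)
b = np.array([-3.0, -5.0])   # constant terms b_i (assumed example values)

x = np.linalg.solve(A, -b)   # the unknowns x_1, ..., x_n
assert np.allclose(A @ x + b, 0.0)
print(x)
```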
3,214
A matroid is a mathematical structure that generalizes the notion of linear independence from vector spaces to arbitrary sets. If an optimization problem has the structure of a matroid, then the appropriate greedy algorithm will solve it optimally.
Greedy algorithm
0.83162
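The claim above can be illustrated with a generic greedy routine. This sketch (mine; `independent` stands in for any matroid independence oracle) sorts elements by weight and keeps each one whose addition preserves independence; for a matroid oracle this returns a maximum-weight basis:

```python
def greedy_max_weight_basis(elements, weight, independent):
    """Generic greedy: assumes `independent` is a matroid independence oracle."""
    chosen = []
    for e in sorted(elements, key=weight, reverse=True):
        if independent(chosen + [e]):
            chosen.append(e)
    return chosen

# Example: the uniform matroid U(2, 4), where any set of size <= 2 is
# independent; greedy then simply picks the two heaviest elements.
w = {'a': 3, 'b': 7, 'c': 5, 'd': 1}
print(greedy_max_weight_basis(w, w.get, lambda s: len(s) <= 2))  # ['b', 'c']
```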
3,215
In mathematics, a set of simultaneous equations, also known as a system of equations or an equation system, is a finite set of equations for which common solutions are sought. An equation system is usually classified in the same manner as single equations, namely as a: System of linear equations, System of nonlinear equations, System of bilinear equations, System of polynomial equations, System of differential equations, or a System of difference equations
Simultaneous equation
0.831612
3,216
Marijn Heule, Oliver Kullmann and Victor W. Marek showed that such a coloring is only possible up to the number 7824. The actual statement of the theorem proved is that the set {1, ..., 7824} can be partitioned into two parts such that neither part contains a Pythagorean triple, while no such partition exists for {1, ..., 7825}. There are 2^7825 ≈ 3.63 × 10^2355 possible coloring combinations for the numbers up to 7825. These possible colorings were logically and algorithmically narrowed down to around a trillion (still highly complex) cases, and those, expressed as Boolean satisfiability problems, were examined using a SAT solver. Creating the proof took about 4 CPU-years of computation over a period of two days on the Stampede supercomputer at the Texas Advanced Computing Center and generated a 200 terabyte propositional proof, which was compressed to 68 gigabytes. The paper describing the proof was published at the SAT 2016 conference, where it won the best paper award. A figure in the original article shows a possible family of colorings for the set {1, ..., 7824} with no monochromatic Pythagorean triple, in which the white squares can be colored either red or blue while still satisfying this condition.
Boolean Pythagorean triples problem
0.83161
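At a toy scale, the combinatorial object in question is easy to state in code. This sketch (mine; the published proof instead encoded the constraints for a SAT solver) enumerates Pythagorean triples up to n and tests a given red/blue coloring for a monochromatic triple:

```python
def pythagorean_triples(n):
    return [(a, b, c) for a in range(1, n + 1)
            for b in range(a, n + 1)
            for c in range(b, n + 1) if a * a + b * b == c * c]

def has_monochromatic_triple(coloring, n):
    # coloring: dict mapping each of 1..n to 'red' or 'blue'
    return any(coloring[a] == coloring[b] == coloring[c]
               for a, b, c in pythagorean_triples(n))

# Arbitrary example coloring (odd = red, even = blue) for n = 15:
n = 15
coloring = {i: 'red' if i % 2 else 'blue' for i in range(1, n + 1)}
print(has_monochromatic_triple(coloring, n))  # True: (6, 8, 10) is all blue
```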
3,217
Maximal lotteries satisfy a strong notion of Pareto efficiency and a weak notion of strategyproofness. In contrast to random dictatorship, maximal lotteries do not satisfy the standard notion of strategyproofness. Also, maximal lotteries are not monotonic in probabilities, i.e., it is possible that the probability of an alternative decreases when this alternative is ranked up. However, the probability of the alternative will remain positive.Maximal lotteries or variants thereof have been rediscovered multiple times by economists, mathematicians, political scientists, philosophers, and computer scientists. In particular, the support of maximal lotteries, which is known as the essential set or the bipartisan set, has been studied in detail.Similar ideas appear also in the study of reinforcement learning and evolutionary biology to explain the multiplicity of co-existing species.
Maximal lotteries
0.831593
3,218
In mathematics, specifically abstract algebra, the opposite of a ring is another ring with the same elements and addition operation, but with the multiplication performed in the reverse order. More explicitly, the opposite of a ring (R, +, ⋅) is the ring (R, +, ∗) whose multiplication ∗ is defined by a ∗ b = b ⋅ a for all a, b in R. The opposite ring can be used to define multimodules, a generalization of bimodules. They also help clarify the relationship between left and right modules (see § Properties). Monoids, groups, rings, and algebras can all be viewed as categories with a single object. The construction of the opposite category generalizes the opposite group, opposite ring, etc.
Opposite algebra
0.831577
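A concrete noncommutative example makes the definition tangible. In this sketch (my illustration, not from the article) the base ring is 2×2 integer matrices, and the opposite multiplication a ∗ b := b · a is checked to be associative and genuinely different from the original product:

```python
import numpy as np

def op_mul(a, b):
    return b @ a   # multiplication in the opposite ring: a * b := b . a

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[1, 1], [0, 1]])

assert np.array_equal(op_mul(A, B), B @ A)       # the definition
assert not np.array_equal(op_mul(A, B), A @ B)   # order matters here
# Associativity carries over from the base ring:
assert np.array_equal(op_mul(op_mul(A, B), C), op_mul(A, op_mul(B, C)))
```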
3,219
The antiisomorphism ι can be defined generally for semigroups, monoids, groups, rings, rngs, and algebras. In the case of rings (and rngs) we obtain the general equivalence.
Opposite algebra
0.831577
3,220
Emission spectroscopy is a spectroscopic technique which examines the wavelengths of photons emitted by atoms or molecules during their transition from an excited state to a lower energy state. Each element emits a characteristic set of discrete wavelengths according to its electronic structure, and by observing these wavelengths the elemental composition of the sample can be determined. Emission spectroscopy developed in the late 19th century and efforts in theoretical explanation of atomic emission spectra eventually led to quantum mechanics.
Atomic spectrum
0.831568
3,221
In the study of special relativity, a perfectly rigid body does not exist; and objects can only be assumed to be rigid if they are not moving near the speed of light. In quantum mechanics, a rigid body is usually thought of as a collection of point masses. For instance, molecules (consisting of the point masses: electrons and nuclei) are often seen as rigid bodies (see classification of molecules as rigid rotors).
Rigid-body kinematics
0.831549
3,222
In physics, a rigid body, also known as a rigid object, is a solid body in which deformation is zero or negligible. The distance between any two given points on a rigid body remains constant in time regardless of external forces or moments exerted on it. A rigid body is usually considered as a continuous distribution of mass.
Rigid-body kinematics
0.831549
3,223
The true Hilbert space of physical states is constructed as a subspace of the original Hilbert space, consisting of the vectors that satisfy
\[
(\nabla \cdot {\vec {E}}(x)-\rho (x))\,|\psi \rangle =0.
\]
In more general theories, the constraint algebra may be a noncommutative algebra.
Constraint algebra
0.831547
3,224
In theoretical physics, a constraint algebra is a linear space of all constraints and all of their polynomial functions or functionals whose action on the physical vectors of the Hilbert space should be equal to zero. For example, in electromagnetism, the equation for Gauss' law, \(\nabla \cdot {\vec {E}}=\rho \), is an equation of motion that does not include any time derivatives. This is why it is counted as a constraint, not a dynamical equation of motion. In quantum electrodynamics, one first constructs a Hilbert space in which Gauss' law does not hold automatically.
Constraint algebra
0.831547
3,225
Scientists frequently explain their choice of field by referring to curves of interest and development, as in "peptide chemistry tapering off ... but now ... this is the future, molecular biology, and I knew that this lab would move faster to this new area" (191). Desire for credit appears to only be a secondary phenomenon; instead a kind of "credibility capital" seems to be the driving motive. In a case study, they show one scientist sequentially choosing a school, a field, a professor to study under, a specialty to get expertise in, and a research institution to work at, by maximizing and reinvesting this credibility (i.e. ability to do science), despite not having received much in the way of credit (e.g. awards, recognition). Four examples: (a) X threatens to fire Ray if his assay fails, (b) a number of scientists flood into a field with theories after a successful experiment then leave when new evidence disproves their theories, (c) Y supports the results of "a big shot in his field" when others question them in order to receive invitations to meetings from the big shot where Y can meet new people, (d) K dismisses some of L's results on the grounds that "good people" won't believe them unless the level of noise is reduced (as opposed to K thinking them unreliable himself).
Laboratory Life
0.831523
3,226
RNA-Seq experiments generate a large volume of raw sequence reads which have to be processed to yield useful information. Data analysis usually requires a combination of bioinformatics software tools (see also List of RNA-Seq bioinformatics tools) that vary according to the experimental design and goals. The process can be broken down into four stages: quality control, alignment, quantification, and differential expression. Most popular RNA-Seq programs are run from a command-line interface, either in a Unix environment or within the R/Bioconductor statistical environment.
Transcriptomics technologies
0.831497
3,227
High-density arrays use a single fluorescent label, and each sample is hybridised and detected individually. High-density arrays were popularised by the Affymetrix GeneChip array, where each transcript is quantified by several short 25-mer probes that together assay one gene. NimbleGen arrays were a high-density array produced by a maskless-photochemistry method, which permitted flexible manufacture of arrays in small or large numbers. These arrays had hundreds of thousands of 45- to 85-mer probes and were hybridised with a one-colour labelled sample for expression analysis. Some designs incorporated up to 12 independent arrays per slide.
Transcriptomics technologies
0.831497
3,228
Investigations was developed between 1990 and 1998. It was just one of a number of reform mathematics curricula initially funded by a National Science Foundation grant. The goals of the project raised opposition to the curriculum from critics (both parents and mathematics teachers) who objected to the emphasis on conceptual learning instead of instruction in more recognized specific methods for basic arithmetic. The goal of the Investigations curriculum is to help all children understand the fundamental ideas of number and arithmetic, geometry, data, measurement and early algebra. Unlike traditional methods, the original edition did not provide student textbooks to describe standard methods or provide solved examples.
Investigations in Numbers, Data, and Space
0.831479
3,229
If A and B are two unital algebras, then an algebra homomorphism F : A → B is said to be unital if it maps the unity of A to the unity of B. Often the words "algebra homomorphism" are actually used to mean "unital algebra homomorphism", in which case non-unital algebra homomorphisms are excluded. A unital algebra homomorphism is a (unital) ring homomorphism.
Algebra homomorphism
0.831477
3,230
In mathematics, an algebra homomorphism is a homomorphism between two algebras. More precisely, if A and B are algebras over a field (or a ring) K, it is a function F : A → B such that, for all k in K and x, y in A, one has
\[
F(kx)=kF(x),\qquad F(x+y)=F(x)+F(y),\qquad F(xy)=F(x)F(y).
\]
The first two conditions say that F is a K-linear map, and the last condition says that F preserves the algebra multiplication. So, if the algebras are associative, F is a rng homomorphism, and, if the algebras are rings and F preserves the identity, it is a ring homomorphism. If F admits an inverse homomorphism, or equivalently if it is bijective, F is said to be an isomorphism between A and B.
Algebra homomorphism
0.831477
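The three conditions can be spot-checked numerically. This sketch (my example, not from the article) uses complex conjugation, which is an R-algebra homomorphism from C to itself:

```python
import random

F = lambda z: z.conjugate()   # candidate R-algebra homomorphism C -> C

for _ in range(1000):
    k = random.uniform(-5, 5)                                  # scalar in K = R
    x = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    y = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(F(k * x) - k * F(x)) < 1e-9       # F(kx) = kF(x)
    assert abs(F(x + y) - (F(x) + F(y))) < 1e-9  # F(x + y) = F(x) + F(y)
    assert abs(F(x * y) - F(x) * F(y)) < 1e-9    # F(xy) = F(x)F(y)
print("conjugation passes all three conditions")
```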
3,231
Despite the fundamental impossibility of directly viewing quantum states, multimedia visualizations are an important tool in education. Interactive media provides an alternative experience beyond everyday personal experience as a tool for understanding quantum mechanics. Among the multimedia sites that have been studied with positive results are QuVis and PhET.
Teaching quantum mechanics
0.831454
3,232
Introducing history as part of the process of teaching quantum mechanics sets up a potential conflict of goals: accurate history or pedagogical clarity. Studies have shown that teaching through history helps students recognize that the counterintuitive issues are fundamental rather than simply something they don't yet understand. Specifically discussing the historical debates on quantum concepts drives home the idea that quantum physics differs from classical physics. Discussing the philosophy of science introduces the idea that language derived from everyday experience limits our ability to describe quantum phenomena.
Teaching quantum mechanics
0.831454
3,233
Students' misconceptions range from fully classical physics thinking, through mixed models, to quasi-quantum ideas. For example, if the concept that quantum mechanics does not describe a path for electrons or photons is misunderstood, students may believe that they follow specific trajectories (classical), or sinusoidal paths (mixed), or are simultaneously waves and particles (quasi-quantum: "in which students understand that quantum objects can behave as both particles and waves, but still have difficulty describing events in a nondeterministic way"). Among the concepts most often misunderstood are: the postulates of quantum mechanics provide no description of trajectories for electrons or photons; the amplitude of a wave is not a measure of energy; most bound states have no corresponding classical orbits; in practice, quantum mechanics gives probabilistic rather than deterministic results; and uncertainty is intrinsic rather than measurement error. Issues also arise from misunderstanding classical concepts related to quantum concepts, such as the difference between light energy and light intensity.
Teaching quantum mechanics
0.831454
3,234
Quantum mechanics is a difficult subject to teach due to its counterintuitive nature. As the subject is now offered by advanced secondary schools, educators have applied scientific methodology to the process of teaching quantum mechanics, in order to identify common misconceptions and ways of improving students' understanding.
Teaching quantum mechanics
0.831454
3,235
Quantum mechanics can be taught with a focus on different interpretations, different models, or via mathematical techniques. Studies have shown that focus on non-mathematical concepts can lead to adequate understanding.
Teaching quantum mechanics
0.831454
3,236
Philipp Blitzenbauer engages students through simple but intrinsically quantum single-photon experiments. The approach avoids the ambiguous classical-versus-quantum character of photons in optical interference experiments such as the double slit. Students exposed to quantum mechanics in this way avoid developing the misconceptions apparent among students in the control group.
Teaching quantum mechanics
0.831454
3,237
N. David Mermin reports that an unconventional strategy based on abstract but simple math concepts is sufficient to teach quantum mechanics to students interested in quantum computing applications rather than physics. Many of the issues that confound students of physics do not apply to this case, and the mathematical background of quantum computing resembles the background already taught in computer science. Mermin develops notation and operations with classical bits, then introduces quantum bits as superpositions of two classical states. He never needs to discuss even Planck's constant, which he suggests is important for quantum computer hardware but not software.
Teaching quantum mechanics
0.831454
3,238
Mohan analyzes two widely used representative quantum mechanics textbooks against the learning challenges reported by Krijtenburg-Lewerissa and others. Both texts adopt language ('waves' and 'particles') familiar to students from other contexts without directly exploring the significant shifts in meaning required by quantum mechanics. Mohan attributes some of the learning challenges to this unexamined application of inappropriate language.
Teaching quantum mechanics
0.831454
3,239
Cost-efficiency as well as absolute speed can be critical, especially in cluster environments, where lower node costs allow purchasing more nodes.

Increasing software speed: some Sort Benchmark entrants use a variation on radix sort for the first phase of sorting: they separate data into one of many "bins" based on the beginning of its value. Sort Benchmark data is random and especially well-suited to this optimization. Compacting the input, intermediate files, and output can reduce time spent on I/O, but is not allowed in the Sort Benchmark. Because the Sort Benchmark sorts long (100-byte) records using short (10-byte) keys, sorting software sometimes rearranges the keys separately from the values to reduce memory I/O volume.
External Sorting
0.831421
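The first-phase binning described above looks roughly like this in miniature (a sketch of mine, far below benchmark scale; real entrants work with 100-byte records and 10-byte keys on disk):

```python
from collections import defaultdict

def bin_then_sort(records, key_len=10):
    bins = defaultdict(list)
    for rec in records:
        bins[rec[0]].append(rec)        # route by the first byte of the key
    out = []
    for first_byte in sorted(bins):     # buckets come out in key order
        out.extend(sorted(bins[first_byte], key=lambda r: r[:key_len]))
    return out

recs = [b'delta00000payload', b'alpha00000payload', b'bravo00000payload']
print(bin_then_sort(recs))
```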
3,240
Proximity problems is a class of problems in computational geometry which involve estimation of distances between geometric objects. A subset of these problems stated in terms of points only are sometimes referred to as closest point problems, although the term "closest point problem" is also used synonymously with nearest neighbor search. A common trait of many of these problems is the possibility of establishing a Θ(n log n) lower bound on their computational complexity by reduction from the element uniqueness problem, based on the observation that if there is an efficient algorithm to compute some kind of minimal distance for a set of objects, it is trivial to check whether this distance equals 0.
Proximity problems
0.831416
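The reduction in the last sentence is one line of glue. In this sketch (mine; the quadratic `min_pairwise_distance` is a placeholder for an efficient closest-pair routine), element uniqueness becomes a zero-test on the minimal distance:

```python
def min_pairwise_distance(xs):
    # placeholder O(n^2) stand-in for an O(n log n) closest-pair algorithm
    return min(abs(p - q) for i, p in enumerate(xs) for q in xs[i + 1:])

def all_elements_unique(xs):
    return min_pairwise_distance(xs) > 0

print(all_elements_unique([3, 1, 4, 1, 5]))  # False: 1 appears twice
print(all_elements_unique([3, 1, 4, 2, 5]))  # True
```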
3,241
While these problems pose no computational complexity challenge, some of them are notable because of their ubiquity in computer applications of geometry. Examples include:
- Distance between a pair of line segments. It cannot be expressed by a single formula, unlike, e.g., the distance from a point to a line; its calculation requires careful enumeration of possible configurations, especially in 3D and higher dimensions.
- Bounding box, the minimal axis-aligned hyperrectangle that contains all geometric data.
Proximity problems
0.831416
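Of the two examples, the bounding box is the simpler: per-axis minima and maxima over the data. A minimal sketch (mine), for points of any fixed dimension:

```python
def bounding_box(points):
    dims = range(len(points[0]))
    lo = tuple(min(p[d] for p in points) for d in dims)
    hi = tuple(max(p[d] for p in points) for d in dims)
    return lo, hi

print(bounding_box([(1, 5), (4, 2), (3, 3)]))  # ((1, 2), (4, 5))
```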
3,242
Near-infrared spectroscopy (NIRS) is a spectroscopic method that uses the near-infrared region of the electromagnetic spectrum (from 780 nm to 2500 nm). Typical applications include medical and physiological diagnostics and research including blood sugar, pulse oximetry, functional neuroimaging, sports medicine, elite sports training, ergonomics, rehabilitation, neonatal research, brain computer interface, urology (bladder contraction), and neurology (neurovascular coupling). There are also applications in other areas as well such as pharmaceutical, food and agrochemical quality control, atmospheric chemistry, combustion research and knowledge
Near-infrared spectrum
0.831402
3,243
The Bertrand paradox is a problem within the classical interpretation of probability theory. Joseph Bertrand introduced it in his work Calcul des probabilités (1889), as an example to show that the principle of indifference may not produce definite, well-defined results for probabilities if it is applied uncritically when the domain of possibilities is infinite.
Bertrand's paradox (probability)
0.831395
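Bertrand's point can be reproduced in a few lines. This Monte Carlo sketch (mine, not from the article) implements two of the classical chord-sampling methods and shows they estimate different probabilities that a random chord of the unit circle is longer than √3, the side of the inscribed equilateral triangle:

```python
import math, random

def random_endpoints():            # method 1: two uniform points on the circle
    t1, t2 = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * abs(math.sin((t1 - t2) / 2))   # resulting chord length

def random_radial_point():         # method 2: uniform point along a radius
    r = random.uniform(0, 1)
    return 2 * math.sqrt(1 - r * r)

n = 100_000
print(sum(random_endpoints() > math.sqrt(3) for _ in range(n)) / n)     # ~1/3
print(sum(random_radial_point() > math.sqrt(3) for _ in range(n)) / n)  # ~1/2
```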
3,244
In compiler optimization, register allocation is the process of assigning local automatic variables and expression results to a limited number of processor registers. Register allocation can happen over a basic block (local register allocation), over a whole function/procedure (global register allocation), or across function boundaries traversed via the call graph (interprocedural register allocation). When done per function/procedure, the calling convention may require insertion of save/restore code around each call site.
Register allocation
0.831387
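For a feel of what a per-function allocator does, here is a toy linear-scan sketch (my own illustration; not any specific production algorithm) that walks precomputed live intervals, reuses registers as intervals expire, and spills when none are free:

```python
def linear_scan(intervals, num_regs):
    allocation = {}                  # name -> register index or 'spill'
    active = []                      # (end, name) of live, allocated intervals
    free = list(range(num_regs))
    for name, start, end in sorted(intervals, key=lambda iv: iv[1]):
        # expire intervals that ended before this one starts
        still_active = []
        for e, n in active:
            if e <= start:
                free.append(allocation[n])   # register becomes free again
            else:
                still_active.append((e, n))
        active = still_active
        if free:
            allocation[name] = free.pop()
            active.append((end, name))
        else:
            allocation[name] = 'spill'       # simplistic: spill the newcomer
    return allocation

print(linear_scan([('a', 0, 4), ('b', 1, 3), ('c', 2, 5)], 2))
# {'a': 1, 'b': 0, 'c': 'spill'}
```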
3,245
Some other register allocation approaches are not limited to one technique for optimizing register use. Cavazos et al., for instance, proposed a solution in which it is possible to use both the linear scan and the graph coloring algorithms. In this approach, the choice between the two solutions is determined dynamically: first, a machine learning algorithm is used "offline", that is to say not at runtime, to build a heuristic function that determines which allocation algorithm should be used. The heuristic function is then used at runtime; in light of the code behavior, the allocator can then choose one of the two available algorithms. Trace register allocation is a more recent approach developed by Eisl et al. This technique handles the allocation locally: it relies on dynamic profiling data to determine which branches will be the most frequently used in a given control flow graph.
Register allocation
0.831387
3,246
In abstract algebra, the fundamental theorem on homomorphisms, also known as the fundamental homomorphism theorem, or the first isomorphism theorem, relates the structure of two objects between which a homomorphism is given, and of the kernel and image of the homomorphism. The homomorphism theorem is used to prove the isomorphism theorems.
Fundamental theorem on homomorphisms
0.83137
3,247
Similar theorems are valid for monoids, vector spaces, modules, and rings.
Fundamental theorem on homomorphisms
0.83137
3,248
The situation is described by a commutative diagram (rendered as a formula below): h is injective if and only if N = ker(f). Therefore, by setting N = ker(f) we immediately get the first isomorphism theorem. We can write the statement of the fundamental theorem on homomorphisms of groups as "every homomorphic image of a group is isomorphic to a quotient group".
Fundamental theorem on homomorphisms
0.83137
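Since the snippet's commutative diagram did not survive extraction, here is the standard statement it depicts, added for reference: given a group homomorphism f : G → H and a normal subgroup N of G with N ⊆ ker(f),
\[
\exists !\, h : G/N \to H \quad \text{such that} \quad f = h \circ \varphi ,
\]
where φ : G → G/N is the canonical projection; h is injective exactly when N = ker(f).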
3,249
The following remarks apply only to finite planes. There are two main kinds of finite plane geometry: affine and projective. In an affine plane, the normal sense of parallel lines applies. In a projective plane, by contrast, any two lines intersect at a unique point, so parallel lines do not exist. Both finite affine plane geometry and finite projective plane geometry may be described by fairly simple axioms.
Finite geometry
0.831364
3,250
For some important differences between finite plane geometry and the geometry of higher-dimensional finite spaces, see axiomatic projective space. For a discussion of higher-dimensional finite spaces in general, see, for instance, the works of J.W.P. Hirschfeld. The study of these higher-dimensional spaces (n ≥ 3) has many important applications in advanced mathematical theories.
Finite geometry
0.831364
3,251
An affine plane geometry is a nonempty set X (whose elements are called "points"), along with a nonempty collection L of subsets of X (whose elements are called "lines"), such that:
- For every two distinct points, there is exactly one line that contains both points.
- Playfair's axiom: given a line ℓ and a point p not on ℓ, there exists exactly one line ℓ′ containing p such that ℓ ∩ ℓ′ = ∅.
- There exists a set of four points, no three of which belong to the same line.

The last axiom ensures that the geometry is not trivial (either empty or too simple to be of interest, such as a single line with an arbitrary number of points on it), while the first two specify the nature of the geometry.
Finite geometry
0.831364
3,252
Consequently, all finite projective spaces of geometric dimension at least three are defined over finite fields. A finite projective space defined over such a finite field has q + 1 points on a line, so the two concepts of order coincide. Such a finite projective space is denoted by PG(n, q), where PG stands for projective geometry, n is the geometric dimension of the geometry and q is the size (order) of the finite field used to construct the geometry. In general, the number of k-dimensional subspaces of PG(n, q) is given by the product
\[
{{n+1} \choose {k+1}}_{q}=\prod _{i=0}^{k}{\frac {q^{n+1-i}-1}{q^{i+1}-1}},
\]
which is a Gaussian binomial coefficient, a q-analogue of a binomial coefficient.
Finite geometry
0.831364
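The product above is easy to evaluate exactly with integer arithmetic, since each prefix of the product is itself a Gaussian binomial and hence an integer, making the running division exact. A short sketch (mine):

```python
def num_subspaces(n, k, q):
    """Number of k-dimensional subspaces of PG(n, q)."""
    result = 1
    for i in range(k + 1):
        result *= q ** (n + 1 - i) - 1
        result //= q ** (i + 1) - 1   # exact: each prefix is an integer
    return result

print(num_subspaces(2, 0, 2))  # 7 points of the Fano plane PG(2, 2)
print(num_subspaces(2, 1, 2))  # 7 lines of the Fano plane
```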
3,253
If D is finite then it must be a finite field GF(q), since by Wedderburn's little theorem all finite division rings are fields. In this case, this construction produces a finite projective space. Furthermore, if the geometric dimension of a projective space is at least three then there is a division ring from which the space can be constructed in this manner.
Finite geometry
0.831364
3,254
A standard algebraic construction yields systems satisfying these axioms. For a division ring D, construct an (n + 1)-dimensional vector space over D (vector space dimension is the number of elements in a basis). Let P be the 1-dimensional (single generator) subspaces and L the 2-dimensional (two independent generators) subspaces (closed under vector addition) of this vector space. Incidence is containment.
Finite geometry
0.831364
3,255
If any of the lines is removed from the plane, along with the points on that line, the resulting geometry is the affine plane of order 2. The Fano plane is called the projective plane of order 2 because it is unique (up to isomorphism). In general, the projective plane of order n has n² + n + 1 points and the same number of lines; each line contains n + 1 points, and each point is on n + 1 lines. A permutation of the Fano plane's seven points that carries collinear points (points on the same line) to collinear points is called a collineation of the plane. The full collineation group is of order 168 and is isomorphic to the group PSL(2,7) ≈ PSL(3,2), which in this special case is also isomorphic to the general linear group GL(3,2) ≈ PGL(3,2).
Finite geometry
0.831364
3,256
The smallest geometry satisfying all three axioms contains seven points. In this simplest of the projective planes, there are also seven lines; each point is on three lines, and each line contains three points. This particular projective plane is sometimes called the Fano plane.
Finite geometry
0.831364
3,257
A projective plane geometry is a nonempty set X (whose elements are called "points"), along with a nonempty collection L of subsets of X (whose elements are called "lines"), such that:
- For every two distinct points, there is exactly one line that contains both points.
- The intersection of any two distinct lines contains exactly one point.
- There exists a set of four points, no three of which belong to the same line.

An examination of the first two axioms shows that they are nearly identical, except that the roles of points and lines have been interchanged. This suggests the principle of duality for projective plane geometries, meaning that any true statement valid in all these geometries remains true if we exchange points for lines and lines for points.
Finite geometry
0.831364
3,258
Planes not derived from finite fields also exist (e.g. for n = 9), but all known examples have order a prime power. The best general result to date is the Bruck–Ryser theorem of 1949, which states: If n is a positive integer of the form 4k + 1 or 4k + 2 and n is not equal to the sum of two integer squares, then n does not occur as the order of a finite plane. The smallest integer that is not a prime power and not covered by the Bruck–Ryser theorem is 10; 10 is of the form 4k + 2, but it is equal to the sum of squares 1² + 3². The non-existence of a finite plane of order 10 was proven in a computer-assisted proof that finished in 1989 – see (Lam 1991) for details. The next smallest number to consider is 12, for which neither a positive nor a negative result has been proved.
Finite geometry
0.831364
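The theorem yields a quick computational screen. This sketch (mine, not from the article) rules out candidate orders of the form 4k + 1 or 4k + 2 that are not sums of two integer squares:

```python
from math import isqrt

def is_sum_of_two_squares(n):
    return any(isqrt(n - a * a) ** 2 == n - a * a for a in range(isqrt(n) + 1))

def ruled_out_by_bruck_ryser(n):
    return n % 4 in (1, 2) and not is_sum_of_two_squares(n)

print(ruled_out_by_bruck_ryser(6))    # True: no projective plane of order 6
print(ruled_out_by_bruck_ryser(10))   # False: 10 = 1^2 + 3^2, theorem is silent
```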
3,259
A finite plane of order n is one such that each line has n points (for an affine plane), or such that each line has n + 1 points (for a projective plane). One major open question in finite geometry is: is the order of a finite plane always a prime power? This is conjectured to be true. Affine and projective planes of order n exist whenever n is a prime power (a prime number raised to a positive integer exponent), by using affine and projective planes over the finite field with n = p^k elements.
Finite geometry
0.831364
3,260
As geometries, these planes are isomorphic to the Fano plane. Every point is contained in 7 lines. Every pair of distinct points is contained in exactly one line, and every pair of distinct planes intersects in exactly one line. In 1892, Gino Fano was the first to consider such a finite geometry.
Finite geometry
0.831364
3,261
These are much harder to classify, as not all of them are isomorphic with a PG(d, q). The Desarguesian planes (those that are isomorphic with a PG(2, q)) satisfy Desargues's theorem and are projective planes over finite fields, but there are many non-Desarguesian planes. Dimension at least 3: Two non-intersecting lines exist. The Veblen–Young theorem states in the finite case that every projective space of geometric dimension n ≥ 3 is isomorphic with a PG(n, q), the n-dimensional projective space over some finite field GF(q).
Finite geometry
0.831364
3,262
Individual examples can be found in the work of Thomas Penyngton Kirkman (1847) and the systematic development of finite projective geometry given by von Staudt (1856). The first axiomatic treatment of finite projective geometry was developed by the Italian mathematician Gino Fano. In his work on proving the independence of the set of axioms for projective n-space that he developed, he considered a finite three-dimensional space with 15 points, 35 lines and 15 planes, in which each line had only three points on it. In 1906 Oswald Veblen and W. H. Bussey described projective geometry using homogeneous coordinates with entries from the Galois field GF(q). When n + 1 coordinates are used, the n-dimensional finite geometry is denoted PG(n, q). It arises in synthetic geometry and has an associated transformation group.
Finite geometry
0.831364
3,263
A concept for consistent simulations of inorganic-organic interfaces, which formed the basis of IFF, was first introduced in 2003. A major obstacle was the poor definition of atomic charges in molecular models, especially for inorganic compounds, due to reliance on quantum chemistry calculations and partitioning methods that may be suitable for field-based but not for the point-based charge distributions necessary in force fields. As a result, uncertainties in quantum-mechanically derived point charges were often 100% or higher, clearly unsuited to quantify chemical bonding or chemical processes in force fields and in molecular simulations. IFF utilizes a method to assign atomic charges that translates chemical bonding accurately into molecular models, including metals, oxides, minerals, and organic molecules.
Interface force field
0.831356
3,264
A database in IFF provides simulation-ready models of crystal structures and crystallographic surfaces of metals and minerals. Often, variable surface chemistry is important, such as in pH-responsive surfaces of silica, hydroxyapatite, and cement minerals. The model options in the database incorporate extensive experimental data, which can be selected and customized by users. For example, models for silica cover the flexible area density of silanol groups and siloxide groups according to data from differential thermal gravimetry, spectroscopy, zeta potentials, surface titration, and pK values. Similarly, hydroxyapatite minerals in bone and teeth display surfaces that differ in dihydrogenphosphate versus monohydrogenphosphate content as a function of pH value. The surface chemistry is often as critical as good interatomic potentials for predicting the dynamics of electrolyte interfaces, molecular recognition, and surface reactions.
Interface force field
0.831356
3,265
IFF includes a physical-chemical interpretation for all parameters as well as a surface model database that covers different cleavage planes and surface chemistry of included compounds. The Interface Force Field is compatible with force fields for the simulation of primarily organic compounds and can be used with common molecular dynamics and Monte Carlo codes. Structures and energies of included chemical elements and compounds are rigorously validated and property predictions are up to a factor of 100 more accurate relative to earlier models.
Interface force field
0.831356
3,266
In the context of chemistry and molecular modelling, the Interface force field (IFF) is a force field for classical molecular simulations of atoms, molecules, and assemblies up to the large nanometer scale, covering compounds from across the periodic table. It employs a consistent classical Hamiltonian energy function for metals, oxides, and organic compounds, linking biomolecular and materials simulation platforms into a single platform. The reliability is often higher than that of density functional theory calculations at more than a million times lower computational cost.
Interface force field
0.831356
3,267
Parallel Processing Letters publishes short papers in the field of parallel processing. This journal has a wide scope; topics covered include:
- theory of parallel computation
- parallel programming languages
- parallel architectures and VLSI circuits
- unconventional computational problems (e.g., time-varying variables, interacting variables, time-varying complexity)
- unconventional computational paradigms (e.g., biomolecular computing, chemical computing, quantum computing)
- parallel programming environments
- design and analysis of parallel and distributed algorithms
Parallel Processing Letters
0.831355
3,268
Wilkie proved that there are statements about the positive integers that cannot be proved using the eleven axioms above and showed what extra information is needed before such statements can be proved. Using Nevanlinna theory it has also been proved that if one restricts the kinds of exponentials one takes, then the above eleven axioms are sufficient to prove every true statement. Another problem stemming from Wilkie's result, which remains open, is the question of what the smallest algebra is for which W(x, y) is not true but the eleven axioms above are. In 1985 an algebra with 59 elements was found that satisfied the axioms but for which W(x, y) was false. Smaller such algebras have since been found, and it is now known that the smallest such one must have either 11 or 12 elements.
Tarski's high school algebra problem
0.831346
3,269
The list of eleven axioms can be found explicitly written down in the works of Richard Dedekind, although they were obviously known and used by mathematicians long before then. Dedekind was the first, though, who seemed to be asking if these axioms were somehow sufficient to tell us everything we could want to know about the integers. The question was put on a firm footing as a problem in logic and model theory sometime in the 1960s by Alfred Tarski, and by the 1980s it had become known as Tarski's high school algebra problem.
Tarski's high school algebra problem
0.831346
3,270
In mathematical logic, Tarski's high school algebra problem was a question posed by Alfred Tarski. It asks whether there are identities involving addition, multiplication, and exponentiation over the positive integers that cannot be proved using eleven axioms about these operations that are taught in high-school-level mathematics. The question was solved in 1980 by Alex Wilkie, who showed that such unprovable identities do exist.
Tarski's high school algebra problem
0.831346
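For reference, the eleven high-school identities usually meant here (the standard list, added for context; it is not part of the snippet) are:
\[
\begin{aligned}
&x+y=y+x, \qquad (x+y)+z=x+(y+z),\\
&x\cdot 1=x, \qquad x\cdot y=y\cdot x, \qquad (x\cdot y)\cdot z=x\cdot (y\cdot z),\\
&x\cdot (y+z)=x\cdot y+x\cdot z,\\
&1^{x}=1, \qquad x^{1}=x, \qquad x^{y+z}=x^{y}\cdot x^{z},\\
&(x\cdot y)^{z}=x^{z}\cdot y^{z}, \qquad (x^{y})^{z}=x^{y\cdot z}.
\end{aligned}
\]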
3,271
When the electric field is removed, the atom returns to its original state. The time required to do so is called the relaxation time; the decay is exponential. This is the essence of the model in physics.
Dielectric polarization
0.83134
3,272
Advanced Placement (AP) Chemistry (also known as AP Chem) is a course and examination offered by the College Board as a part of the Advanced Placement Program to give American and Canadian high school students the opportunity to demonstrate their abilities and earn college-level credit. AP Chemistry has the lowest test participation rate, with around half of AP Chemistry students taking the exam.
AP Chemistry
0.83134
3,273
The annual AP Chemistry examination, which is typically administered in May, is divided into two major sections (multiple-choice questions and free response essays).
AP Chemistry
0.83134
3,274
AP Chemistry is a course geared toward students with interests in chemical biologies, as well as any of the biological sciences. The course aims to prepare students to take the AP Chemistry exam toward the end of the academic year. AP Chemistry covers most introductory general chemistry topics (excluding organic chemistry), including:
- Reactions: chemical equilibrium, chemical kinetics, stoichiometry, thermodynamics, electrochemistry, reaction types
- States of matter: gases (ideal gases and kinetic theory), liquids, solids, solutions
- Structure of matter: atomic theory (including evidence for atomic theory), chemical bonding (including intermolecular forces (IMF)), nuclear chemistry (removed for the May 2014 test), molecular geometry, molecular models, mass spectrometry
- Laboratory and chemical calculations: thermochemistry, chemical kinetics, chemical equilibrium, gas law calculations
AP Chemistry
0.831339
3,275
The College Board recommends successful completion of high school chemistry and algebra 2; however, requirement of this may differ from school to school. AP Chemistry usually requires knowledge of algebra 2; however, some schools allow students to take the course concurrently with this class. The requirement of regular or honors level high school chemistry may also be waived, but usually requires completion of a special assignment or exam, or completion of high school chemistry alongside AP Chemistry.
AP Chemistry
0.831339
3,276
Structure and Matter: 20%
States of Matter: 20%
Reactions: 35–40%
Descriptive Chemistry: 10–15%
Laboratory: 5–10%
AP Chemistry
0.831339
3,277
The 2014 AP Chemistry exam was the first administration of a redesigned test resulting from a redesign of the AP Chemistry course. The exam format is now different from previous years, with 60 multiple-choice questions (now with only four answer choices per question), 3 long free-response questions, and 4 short free-response questions. The new exam focuses on longer, more in-depth, lab-based questions. The penalty for incorrect answers on the multiple-choice section was also removed.
AP Chemistry
0.831339
3,278
To obtain quantitative values for the molecular energy levels, one needs to have molecular orbitals that are such that the configuration interaction (CI) expansion converges fast towards the full CI limit. The most common method to obtain such functions is the Hartree–Fock method, which expresses the molecular orbitals as eigenfunctions of the Fock operator. One usually solves this problem by expanding the molecular orbitals as linear combinations of Gaussian functions centered on the atomic nuclei (see linear combination of atomic orbitals and basis set (chemistry)). The equation for the coefficients of these linear combinations is a generalized eigenvalue equation known as the Roothaan equations, which are in fact a particular representation of the Hartree–Fock equation.
Molecular orbitals
0.831335
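The Roothaan equations the snippet refers to have a compact matrix form, reproduced here for reference (F is the Fock matrix in the chosen basis, S the basis overlap matrix, C the matrix of molecular-orbital coefficients, and ε the diagonal matrix of orbital energies):
\[
\mathbf{F}\,\mathbf{C} = \mathbf{S}\,\mathbf{C}\,\boldsymbol{\varepsilon }.
\]
It is a generalized (rather than ordinary) eigenvalue problem because the Gaussian basis functions are not orthogonal, which is why the overlap matrix S appears on the right-hand side.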
3,279
In chemistry, a molecular orbital is a mathematical function describing the location and wave-like behavior of an electron in a molecule. This function can be used to calculate chemical and physical properties such as the probability of finding an electron in any specific region. The terms atomic orbital and molecular orbital were introduced by Robert S. Mulliken in 1932 to mean one-electron orbital wave functions.
Molecular orbitals
0.831335
3,280
Molecular orbitals were first introduced by Friedrich Hund and Robert S. Mulliken in 1927 and 1928. The linear combination of atomic orbitals or "LCAO" approximation for molecular orbitals was introduced in 1929 by Sir John Lennard-Jones. His ground-breaking paper showed how to derive the electronic structure of the fluorine and oxygen molecules from quantum principles. This qualitative approach to molecular orbital theory is part of the start of modern quantum chemistry.
Molecular orbitals
0.831335
3,281
There are a number of programs in which quantum chemical calculations of MOs can be performed, including Spartan. Simple accounts often suggest that experimental molecular orbital energies can be obtained by the methods of ultraviolet photoelectron spectroscopy for valence orbitals and X-ray photoelectron spectroscopy for core orbitals. This, however, is incorrect, as these experiments measure the ionization energy, the difference in energy between the molecule and one of the ions resulting from the removal of one electron. Ionization energies are linked approximately to orbital energies by Koopmans' theorem. While the agreement between these two values can be close for some molecules, it can be very poor in other cases.
Molecular orbitals
0.831335
3,282
Most present-day methods in computational chemistry begin by calculating the MOs of the system. A molecular orbital describes the behavior of one electron in the electric field generated by the nuclei and some average distribution of the other electrons. In the case of two electrons occupying the same orbital, the Pauli principle demands that they have opposite spin.
Molecular orbitals
0.831335
3,283
Applying a result of MacIntyre on the model theory of \(p\)-adic integers, one deduces again that \(\zeta _{G}(s)\) is a rational function in \(p^{-s}\). Moreover, M. du Sautoy and F. Grunewald showed that the integral can be approximated by Artin L-functions. Using the fact that Artin L-functions are holomorphic in a neighbourhood of the line \(\Re (s)=1\), they showed that for any torsion-free nilpotent group, the function \(\zeta _{G}(s)\) is meromorphic in the domain \(\Re (s)>\alpha -\delta \), where \(\alpha \) is the abscissa of convergence of \(\zeta _{G}(s)\) and \(\delta \) is some positive number, and holomorphic in some neighbourhood of \(\Re (s)=\alpha \). Using a Tauberian theorem this implies
\[
\sum _{n\leq x}s_{n}(G)\sim x^{\alpha }\log ^{k}x
\]
for some real number \(\alpha \) and a non-negative integer \(k\).
Subgroup growth
0.831332
3,284
As of 2018, the Illumina monopoly on high-quality next-generation sequencing reagents has meant that the sequencing reagents alone cost more than FDA-approved syndromic testing panels. Additional direct costs of metagenomics, such as extraction, library preparation, and computational analysis, also have to be considered. In general, metagenomic sequencing is most useful and cost-efficient for pathogen discovery when at least one of the following criteria is met: the identification of the organism is not sufficient (one desires to go beyond discovery to produce data for genomic characterization); a coinfection is suspected; other, simpler assays are ineffective or will take an inordinate amount of time; or environmental samples are to be screened for previously undescribed or divergent pathogens.
Clinical metagenomic next-generation sequencing
0.831325
3,285
In clinical microbiology labs, the quantitation of microbial burden is considered a routine function, as it is associated with the severity and progression of the disease. To achieve good quantitation, a high sensitivity of the technique is needed. Whereas interfering substances represent a common problem in clinical chemistry or in PCR diagnostics, the degree of interference from host nucleic acids (for example, in tissue biopsies) or nonpathogen nucleic acids (for example, in stool) in metagenomics is a new twist. In addition, due to the relative size of the human genome in comparison with microbial genomes, the interference can occur at low levels of contaminating material. Another challenge for clinical metagenomics with regard to sensitivity is the diagnosis of coinfections where high-titer pathogens are present that can generate biased results, as they may disproportionately soak up reads and make it difficult to distinguish the less predominant pathogens. In addition to issues with interfering substances, especially in the diagnostic setting, accurate quantitation and sensitivity are essential, as a confusion in the results can affect a third party, the patient.
Clinical metagenomic next-generation sequencing
0.831325
3,286
For substantial resources, the Illumina NextSeq, NovaSeq, PacBio Sequel, Oxford Nanopore and PromethION are preferred. Moreover, for pathogen sequencing the use of controls is of fundamental importance in ensuring mNGS assay quality and stability over time; PhiX is used as a sequencing control, and the other controls include a positive control, an additional internal control (e.g., spiked DNA or another known pathogen) and a negative control (usually a water sample). Bioinformatic analysis: whereas the sequencing itself has been made widely accessible and more user-friendly, the data analysis and interpretation that follows still requires specialized bioinformatics expertise and appropriate computational resources.
Clinical metagenomic next-generation sequencing
0.831325
3,287
If there is a strong prior suspicion of the pathogen genome composition, and since the amount of pathogen nucleic acid in noisier samples is overwhelmed by the RNA/DNA of other organisms, selecting an extraction kit for only RNA or DNA would be a more specific and convenient approach. Some commercially available kits are, for example, the RNeasy PowerSoil Total RNA kit (Qiagen), RNeasy Minikit (Qiagen), MagMAX Viral Isolation kit (ABI), and Viral RNA Minikit (Qiagen). Optimization strategies for library preparation: because of high levels of background noise in metagenomic sequencing, several target enrichment procedures have been developed that aim to increase the probability of capturing pathogen-derived transcripts and/or genomes.
Clinical metagenomic next-generation sequencing
0.831325
3,288
A typical mNGS workflow consists of the following steps: Sample acquisition: the most commonly used samples for metagenomic sequencing are blood, stool, cerebrospinal fluid (CSF), urine, and nasopharyngeal swabs. Among these, blood and CSF are the cleanest, with the least background noise, while the others are expected to carry a large load of commensal and/or opportunistic organisms and thus more background noise. Samples should be collected with great care, as specimens can be contaminated during handling; for example, surgical biopsies may be contaminated while the tissue is handled, and CSF specimens may be contaminated during the lumbar puncture itself. RNA/DNA extraction: the DNA and RNA of the sample are extracted using an extraction kit.
Clinical metagenomic next-generation sequencing
0.831325
3,289
Most of the metagenomics outcome data generated consist of case reports, which belies the increasing interest in diagnostic metagenomics. Accordingly, there is an overall lack of penetration of this approach into the clinical microbiology laboratory: making a diagnosis with metagenomics is still essentially useful only in the context of case reports, not for true day-to-day diagnostics. As of 2018, cost-effectiveness modelling of metagenomics in the diagnosis of fever of unknown origin concluded that, even after limiting the cost of diagnostic metagenomics to $100–1,000 per test, it would require 2.5–4 times the diagnostic yield of computed tomography of the abdomen and pelvis in order to be cost neutral, and it cautioned against a "widespread rush" to deploy metagenomic testing (the break-even arithmetic is sketched below). Furthermore, in the case of the discovery of potential novel infectious agents, usually only the positive results are published even though the vast majority of sequenced cases are negative, resulting in heavily biased information. Moreover, most of the metagenomics-based discovery work that preceded the diagnostic work also reported the known agents detected while screening unsolved cases for completely novel causes.
Clinical metagenomic next-generation sequencing
0.831325
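The cost-neutrality comparison above can be written down directly. With c the cost per test and y the diagnostic yield (the fraction of tests that produce a diagnosis), equal cost per positive diagnosis requires the following; this is a standard break-even argument, and the symbols are ours rather than the study's:

    % Break-even condition: equal cost per positive diagnosis.
    \frac{c_{\text{mNGS}}}{y_{\text{mNGS}}} \le \frac{c_{\text{CT}}}{y_{\text{CT}}}
    \quad\Longrightarrow\quad
    y_{\text{mNGS}} \ge \frac{c_{\text{mNGS}}}{c_{\text{CT}}}\, y_{\text{CT}} .

So if an mNGS test costs 2.5–4 times as much per test as the imaging study, it must deliver 2.5–4 times the diagnostic yield to break even, which is the multiple quoted above.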
3,290
In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or another 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the body's motion from taking certain directions and pathways. So a full set of Cartesian coordinates is often unneeded: the constraints determine the evolving relations among the coordinates, and these relations can be modeled by equations corresponding to the constraints. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry, reducing the number of coordinates to the minimum needed to model the motion. These are known as generalized coordinates, denoted qi (i = 1, 2, 3, ...); a standard worked example is sketched below.
Theoretical Mechanics
0.831323
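A standard textbook illustration (not drawn from the source text): a planar pendulum of fixed length ℓ. Its two Cartesian coordinates obey one constraint, so a single generalized coordinate, the angle θ, describes the motion completely:

    % One constraint removes one coordinate: (x, y) -> theta.
    x = \ell\sin\theta , \qquad y = -\ell\cos\theta ,
    \qquad x^{2} + y^{2} = \ell^{2} \quad\text{(constraint)} ,

    % The whole dynamics then lives in a single scalar Lagrangian:
    L = T - V = \tfrac{1}{2} m \ell^{2} \dot{\theta}^{2} + m g \ell \cos\theta .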
3,291
In fact the same principles and formalisms can be used in relativistic mechanics and general relativity, and, with some modifications, in quantum mechanics and quantum field theory. Analytical mechanics is used widely, from fundamental physics to applied mathematics, particularly chaos theory. The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinitely many degrees of freedom. The definitions and equations for fields have a close analogy with those of particle mechanics.
Theoretical Mechanics
0.831323
3,292
One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries; a minimal worked instance follows below. Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics. Rather, it is a collection of equivalent formalisms which have broad application.
Theoretical Mechanics
0.831323
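The simplest instance of this connection is the standard cyclic-coordinate case, included here only for illustration: if the Lagrangian is invariant under the translation q → q + ε, i.e. ∂L/∂q = 0, the Euler–Lagrange equation turns that symmetry into a conserved momentum:

    % Euler–Lagrange equation:
    \frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial\dot{q}}
      - \frac{\partial L}{\partial q} = 0 ,
    % Symmetry: L does not depend on q, so the conjugate momentum is conserved:
    \frac{\partial L}{\partial q} = 0
    \quad\Longrightarrow\quad
    p \equiv \frac{\partial L}{\partial\dot{q}} = \text{const} .

Translational invariance thus yields conservation of linear momentum; likewise, rotational invariance yields angular momentum, and time-translation invariance yields energy.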
3,293
In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics, is a collection of closely related alternative formulations of classical mechanics. It was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Since Newtonian mechanics considers vector quantities of motion, particularly the accelerations, momenta, and forces of the constituents of the system, an alternative name for the mechanics governed by Newton's laws and Euler's laws is vectorial mechanics. By contrast, analytical mechanics uses scalar properties of motion representing the system as a whole (usually its total kinetic energy and potential energy) rather than Newton's vectorial forces on individual particles; the two scalar functions in question are written out below.
Theoretical Mechanics
0.831323
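Concretely, the two scalar functions meant here are the Lagrangian and the Hamiltonian. These are the standard definitions; the identity H = T + V holds under the usual assumptions of time-independent constraints and a velocity-independent potential:

    % Scalar functions of the whole system, T kinetic and V potential energy:
    L(q, \dot{q}, t) = T - V , \qquad
    H(q, p, t) = \sum_{i} p_{i} \dot{q}_{i} - L = T + V .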
3,294
Comput., its publisher and contributors frequently use the shorter abbreviation SICOMP. SICOMP typically hosts the special issues of the IEEE Annual Symposium on Foundations of Computer Science (FOCS) and the Annual ACM Symposium on Theory of Computing (STOC), where about 15% of papers published in FOCS and STOC each year are invited to these special issues. For example, Volume 48 contains 11 out of 85 papers published in FOCS 2016.
SIAM Journal on Computing
0.831304
3,295
The SIAM Journal on Computing is a scientific journal focusing on the mathematical and formal aspects of computer science. It is published by the Society for Industrial and Applied Mathematics (SIAM). Although its official ISO abbreviation is SIAM J.
SIAM Journal on Computing
0.831304
3,296
The Phycological Society of America (PSA) is a professional society, founded in 1946, dedicated to the advancement of phycology, the study of algae. The PSA publishes the Journal of Phycology and organizes annual conferences, among other events that aid the advancement of the algal sciences. Membership in the Phycological Society of America is open to anyone from any nation who is concerned with the physiology, taxonomy, molecular biology, experimental biology, cell biology, and developmental biology of algae. As of 2012, membership was approximately 2,000 from 63 countries.
Phycological Society of America
0.8313
3,297
In many computer-vision applications, computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common. Examples of applications of computer vision include systems for: automatic inspection, e.g., in manufacturing applications; assisting humans in identification tasks, e.g., a species identification system; controlling processes, e.g., an industrial robot; detecting events, e.g., for visual surveillance or people counting (such as in the restaurant industry); interaction, e.g., as the input to a device for computer-human interaction; modeling objects or environments, e.g., medical image analysis or topographical modeling; navigation, e.g., by an autonomous vehicle or mobile robot; organizing information, e.g., for indexing databases of images and image sequences; and tracking surfaces or planes in 3D coordinates to enable augmented-reality experiences.
Image classification
0.831297
3,298
Applications range from tasks such as industrial machine vision systems which, say, inspect bottles speeding by on a production line, to research into artificial intelligence and computers or robots that can comprehend the world around them. The computer vision and machine vision fields have significant overlap. Computer vision covers the core technology of automated image analysis which is used in many fields. Machine vision usually refers to a process of combining automated image analysis with other methods and technologies to provide automated inspection and robot guidance in industrial applications.
Image classification
0.831297
3,299
Military applications are probably one of the largest areas of computer vision. The obvious examples are the detection of enemy soldiers or vehicles and missile guidance. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area based on locally acquired image data. Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene that can be used to support strategic decisions. In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability.
Image classification
0.831297