| id | text | source | similarity |
|---|---|---|---|
1,900
|
If G is a finite group, then for any group element a, the elements in the conjugacy class of a are in one-to-one correspondence with cosets of the centralizer C_G(a). This can be seen by observing that any two elements b and c belonging to the same coset (and hence, b = cz for some z in the centralizer C_G(a)) give rise to the same element when conjugating a: b a b⁻¹ = (cz) a (cz)⁻¹ = c (z a z⁻¹) c⁻¹ = c a c⁻¹. This can also be seen from the orbit-stabilizer theorem, when considering the group as acting on itself through conjugation, so that orbits are conjugacy classes and stabilizer subgroups are centralizers. The converse holds as well.
|
Conjugate subgroup
| 0.836865
|
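The orbit-stabilizer relation described above (|conjugacy class of a| · |C_G(a)| = |G|) is easy to check numerically. A minimal sketch in Python, representing S_3 as tuples acting as permutations of {0, 1, 2} (all helper names here are illustrative):

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]: apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))  # the symmetric group S_3, order 6

for a in G:
    conj_class = {compose(compose(g, a), inverse(g)) for g in G}
    centralizer = [g for g in G if compose(g, a) == compose(a, g)]
    # orbit-stabilizer: |class| * |C_G(a)| = |G|
    assert len(conj_class) * len(centralizer) == len(G)
```

The class sizes found this way for S_3 are 1 (identity), 3 (transpositions), and 2 (3-cycles), each dividing the group order as the coset correspondence requires.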
1,901
|
The versatility of polymerase chain reaction (PCR) has led to modifications of the basic protocol being used in a large number of variant techniques designed for various purposes. This article summarizes many of the most common variations currently or formerly used in molecular biology laboratories; familiarity with the fundamental premise by which PCR works and corresponding terms and concepts is necessary for understanding these variant techniques.
|
Variants of PCR
| 0.836862
|
1,902
|
To release the DNA from the cells, the PCR is either started with an extended time at 95 °C (when standard polymerase is used), or with a shortened denaturation step at 100 °C and a special chimeric DNA polymerase. The digital polymerase chain reaction simultaneously amplifies thousands of samples, each in a separate droplet within an emulsion. Suicide PCR is typically used in paleogenetics or other studies where avoiding false positives and ensuring the specificity of the amplified fragment is the highest priority. It was originally described in a study to verify the presence of the microbe Yersinia pestis in dental samples obtained from 14th-century graves of people supposedly killed by plague during the medieval Black Death epidemic.
|
Variants of PCR
| 0.836862
|
1,903
|
A company might analyze its own potential (productivity, market position) by comparing it to that of its competitors (benchmarking). A market can be analyzed to estimate its potential for a certain product. Processes can be structurally analyzed with a view to their optimization.
|
Potential analysis
| 0.836838
|
1,904
|
For small elements X, Y of the Lie algebra, the structure of the Lie group near the identity element is given by exp(X) exp(Y) ≈ exp(X + Y + ½[X, Y]). Note the factor of 1/2. They also appear in explicit expressions for differentials, such as e^{−X} d e^{X}; see Baker–Campbell–Hausdorff formula § Infinitesimal case for details.
|
Structure constants
| 0.836821
|
1,905
|
The structure constants play a role in Lie algebra representations, and in fact, give exactly the matrix elements of the adjoint representation. The Killing form and the Casimir invariant also have a particularly simple form, when written in terms of the structure constants. The structure constants often make an appearance in the approximation to the Baker–Campbell–Hausdorff formula for the product of two elements of a Lie group.
|
Structure constants
| 0.836821
|
1,906
|
The linear expansion of the Lie bracket of pairs of generators then looks like [T_a, T_b] = ∑_c f_{ab}^c T_c. Again, by linear extension, the structure constants completely determine the Lie brackets of all elements of the Lie algebra. All Lie algebras satisfy the Jacobi identity.
|
Structure constants
| 0.836821
|
1,907
|
For a Lie algebra, the basis vectors are termed the generators of the algebra, and the product is instead called the Lie bracket (often the Lie bracket is an additional product operation beyond an already existing product, thus necessitating a separate name). For two vectors A and B in the algebra, the Lie bracket is denoted [A, B]. Again, there is no particular need to distinguish the upper and lower indices; they can be written all up or all down. In physics, it is common to use the notation T_i for the generators, and f_{ab}^c or f^{abc} (ignoring the upper-lower distinction) for the structure constants.
|
Structure constants
| 0.836821
|
1,908
|
Given a set of basis vectors {e_i} for the underlying vector space of the algebra, the product operation is uniquely defined by the products of basis vectors: e_i · e_j = c_{ij}. The structure constants or structure coefficients c_{ij}^k are just the coefficients of c_{ij} in the same basis: e_i · e_j = c_{ij} = ∑_k c_{ij}^k e_k. Put otherwise, they are the coefficients that express c_{ij} as a linear combination of the basis vectors e_k. The upper and lower indices are frequently not distinguished, unless the algebra is endowed with some other structure that would require this (for example, a pseudo-Riemannian metric on the algebra of the indefinite orthogonal group so(p,q)). That is, structure constants are often written with all-upper or all-lower indices. The distinction between upper and lower is then a convention, reminding the reader that lower indices behave like the components of a dual vector, i.e. are covariant under a change of basis, while upper indices are contravariant. The structure constants obviously depend on the chosen basis. For Lie algebras, one frequently used convention for the basis is in terms of the ladder operators defined by the Cartan subalgebra; this is presented further down in the article, after some preliminary examples.
|
Structure constants
| 0.836821
|
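As a concrete illustration of the definition e_i · e_j = ∑_k c_{ij}^k e_k, the cross product makes R³ into an algebra whose structure constants in the standard basis are the Levi-Civita symbol. A minimal sketch with NumPy (variable names are illustrative):

```python
import numpy as np

# The cross product makes R^3 an algebra; with the standard basis e_1, e_2, e_3,
# the structure constants c[i, j, k] satisfy e_i x e_j = sum_k c[i, j, k] e_k.
basis = np.eye(3)

c = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        prod = np.cross(basis[i], basis[j])
        # since the basis is orthonormal, the components of the product
        # are exactly its coefficients in the same basis
        c[i, j] = prod

# For the cross product the structure constants equal the Levi-Civita symbol
assert c[0, 1, 2] == 1 and c[1, 0, 2] == -1 and c[0, 0].sum() == 0
```

Reading off the nonzero entries recovers the familiar relations e_1 × e_2 = e_3, e_2 × e_3 = e_1, e_3 × e_1 = e_2 and their antisymmetric counterparts.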
1,909
|
The algebra su(2) of the special unitary group SU(2) is three-dimensional, with generators given by the Pauli matrices σ_i. The generators of the group SU(2) satisfy the commutation relations [σ_a, σ_b] = 2i ε_{abc} σ_c, where ε_{abc} is the Levi-Civita symbol. In this case, the structure constants are f^{abc} = 2i ε^{abc}. Note that the constant 2i can be absorbed into the definition of the basis vectors; thus, defining t_a = −iσ_a/2, one can equally well write [t_a, t_b] = ε_{abc} t_c. Doing so emphasizes that the Lie algebra su(2) of the Lie group SU(2) is isomorphic to the Lie algebra so(3) of SO(3). This brings the structure constants into line with those of the rotation group SO(3).
|
Structure constants
| 0.836821
|
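The su(2) commutation relations [σ_a, σ_b] = 2i ε_{abc} σ_c can be verified directly from the Pauli matrices. A NumPy sketch (the eps helper is an ad-hoc Levi-Civita implementation for indices 0, 1, 2):

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]

def eps(a, b, c):
    # Levi-Civita symbol for indices in {0, 1, 2}
    return (a - b) * (b - c) * (c - a) / 2

for a in range(3):
    for b in range(3):
        comm = sigma[a] @ sigma[b] - sigma[b] @ sigma[a]
        expected = sum(2j * eps(a, b, c) * sigma[c] for c in range(3))
        assert np.allclose(comm, expected)  # [sigma_a, sigma_b] = 2i eps_abc sigma_c
```

The same loop with t_a = −iσ_a/2 would reproduce the rescaled constants ε_{abc}, matching so(3).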
1,910
|
For a given α, there are as many α_i as there are H_i, and so one may define the vector α = α_i H_i; this vector is termed a root of the algebra. The roots of Lie algebras appear in regular structures (for example, in simple Lie algebras, the roots can have only two different lengths); see root system for details. The structure constants N_{α,β} have the property that they are non-zero only when α + β is a root. In addition, they are antisymmetric: N_{α,β} = −N_{β,α}, and can always be chosen such that N_{α,β} = −N_{−α,−β}. They also obey cocycle conditions: N_{α,β} = N_{β,γ} = N_{γ,α} whenever α + β + γ = 0, and also N_{α,β} N_{γ,δ} + N_{β,γ} N_{α,δ} + N_{γ,α} N_{β,δ} = 0 whenever α + β + γ + δ = 0.
|
Structure constants
| 0.836821
|
1,911
|
The dimension r of this subalgebra is called the rank of the algebra. In the adjoint representation, the matrices ad(H_i) are mutually commuting, and can be simultaneously diagonalized. The matrices ad(H_i) have (simultaneous) eigenvectors; those with a non-zero eigenvalue α are conventionally denoted by E_α.
|
Structure constants
| 0.836821
|
1,912
|
Given a Lie algebra g, the Cartan subalgebra h ⊂ g is the maximal Abelian subalgebra. By definition, it consists of those elements that commute with one another. An orthonormal basis can be freely chosen on h; write this basis as H_1, …, H_r with ⟨H_i, H_j⟩ = δ_{ij}, where ⟨·,·⟩ is the inner product on the vector space.
|
Structure constants
| 0.836821
|
1,913
|
One conventional approach to providing a basis for a Lie algebra is by means of the so-called "ladder operators" appearing as eigenvectors of the Cartan subalgebra. The construction of this basis, using conventional notation, is quickly sketched here. An alternative construction (the Serre construction) can be found in the article semisimple Lie algebra.
|
Structure constants
| 0.836821
|
1,914
|
Primer-BLAST is widely used, and freely accessible from the National Center for Biotechnology Information (NCBI) website. On the other hand, FastPCR, a commercial application, allows simultaneous testing of a single primer or a set of primers designed for multiplex target sequences.
|
In silico PCR
| 0.836798
|
1,915
|
In silico PCR refers to computational tools used to calculate theoretical polymerase chain reaction (PCR) results using a given set of primers (probes) to amplify DNA sequences from a sequenced genome or transcriptome. These tools are used to optimize the design of primers for target DNA or cDNA sequences. Primer optimization has two goals: efficiency and selectivity. Efficiency involves taking into account such factors as GC-content, efficiency of binding, complementarity, secondary structure, and annealing and melting point (Tm). Primer selectivity requires that the primer pairs not fortuitously bind to random sites other than the target of interest, nor should the primer pairs bind to conserved regions of a gene family.
|
In silico PCR
| 0.836798
|
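Two of the primer-efficiency factors mentioned above, GC content and melting temperature, are simple to estimate. A minimal Python sketch; the Wallace rule used here (2 °C per A/T, 4 °C per G/C) is only a rough first approximation valid for short oligonucleotides, and the primer sequence is made up for illustration:

```python
def gc_content(primer: str) -> float:
    # fraction of G and C bases in the primer
    primer = primer.upper()
    return (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer: str) -> float:
    # Wallace rule: Tm ~ 2*(A+T) + 4*(G+C), in degrees Celsius
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

primer = "ATGCGCATTA"  # illustrative sequence, not a real primer
print(gc_content(primer))  # 0.4
print(wallace_tm(primer))  # 28
```

Real in silico PCR tools combine such per-primer heuristics with genome-wide searches for unintended binding sites to address selectivity.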
1,916
|
These graphs have been used to solve a problem in extremal graph theory, of constructing a graph with a given number of edges and vertices whose largest tree induced as a subgraph is as small as possible. All eigenvalues of the adjacency matrix A of a line graph are at least −2. The reason for this is that A can be written as A = JᵀJ − 2I, where J is the signless incidence matrix of the pre-line graph and I is the identity. In particular, A + 2I is the Gramian matrix of a system of vectors: all graphs with this property have been called generalized line graphs.
|
Line graph
| 0.836787
|
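The identity A = JᵀJ − 2I and the resulting eigenvalue bound can be checked on a small example. A NumPy sketch (the example graph, a 4-cycle with a chord, is arbitrary):

```python
import numpy as np
from itertools import combinations

# Example graph: the 4-cycle on vertices 0..3 plus the chord (0, 2)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, m = 4, len(edges)

# Signless incidence matrix J (vertices x edges)
J = np.zeros((n, m))
for e, (u, v) in enumerate(edges):
    J[u, e] = J[v, e] = 1

# Line-graph adjacency: two edges are adjacent iff they share an endpoint
A = np.zeros((m, m))
for e, f in combinations(range(m), 2):
    if set(edges[e]) & set(edges[f]):
        A[e, f] = A[f, e] = 1

assert np.allclose(A, J.T @ J - 2 * np.eye(m))      # A = J^T J - 2I
assert np.linalg.eigvalsh(A).min() >= -2 - 1e-9     # all eigenvalues >= -2
```

Since JᵀJ is a Gram matrix (hence positive semidefinite), subtracting 2I shifts its spectrum down by exactly 2, which is where the −2 bound comes from.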
1,917
|
The concept of the line graph of G may naturally be extended to the case where G is a multigraph. In this case, the characterizations of these graphs can be simplified: the characterization in terms of clique partitions no longer needs to prevent two vertices from belonging to the same two cliques, and the characterization by forbidden graphs has seven forbidden graphs instead of nine. However, for multigraphs, there are larger numbers of pairs of non-isomorphic graphs that have the same line graphs. For instance a complete bipartite graph K1,n has the same line graph as the dipole graph and Shannon multigraph with the same number of edges. Nevertheless, analogues to Whitney's isomorphism theorem can still be derived in this case.
|
Line graph
| 0.836787
|
1,918
|
However, the algorithm of Degiorgi & Simon (1995) uses only Whitney's isomorphism theorem. It is complicated by the need to recognize deletions that cause the remaining graph to become a line graph, but when specialized to the static recognition problem only insertions need to be performed, and the algorithm performs the following steps: Construct the input graph L by adding vertices one at a time, at each step choosing a vertex to add that is adjacent to at least one previously-added vertex. While adding vertices to L, maintain a graph G for which L = L(G); if the algorithm ever fails to find an appropriate graph G, then the input is not a line graph and the algorithm terminates.
|
Line graph
| 0.836787
|
1,919
|
For instance if edges d and e in the graph G are incident at a vertex v with degree k, then in the line graph L(G) the edge connecting the two vertices d and e can be given weight 1/(k − 1). In this way every edge in G (provided neither end is connected to a vertex of degree 1) will have strength 2 in the line graph L(G) corresponding to the two ends that the edge has in G. It is straightforward to extend this definition of a weighted line graph to cases where the original graph G was directed or even weighted. The principle in all cases is to ensure the line graph L(G) reflects the dynamics as well as the topology of the original graph G.
|
Line graph
| 0.836787
|
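The 1/(k − 1) weighting can be checked on a small example. A Python sketch using K4, where every vertex has degree 3, so every edge should acquire strength 2 in the weighted line graph (variable names are illustrative):

```python
from collections import defaultdict
from itertools import combinations

# Example graph: the complete graph K4
edges = [(u, v) for u, v in combinations(range(4), 2)]
deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

# Weighted line graph: edges d, e meeting at vertex v get weight 1/(deg(v) - 1)
weight = defaultdict(float)
for d, e in combinations(range(len(edges)), 2):
    for v in set(edges[d]) & set(edges[e]):
        weight[(d, e)] += 1 / (deg[v] - 1)

# Strength (weighted degree) of each line-graph vertex
strength = defaultdict(float)
for (d, e), w in weight.items():
    strength[d] += w
    strength[e] += w

# Both endpoints of every K4 edge have degree 3 >= 2, so each strength is 2
assert all(abs(strength[d] - 2) < 1e-9 for d in range(len(edges)))
```

Each edge of K4 meets two other edges at each endpoint, each meeting weighted 1/2, giving the strength 4 × 1/2 = 2 predicted above.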
1,920
|
Put another way, the Whitney graph isomorphism theorem guarantees that the line graph almost always encodes the topology of the original graph G faithfully but it does not guarantee that dynamics on these two graphs have a simple relationship. One solution is to construct a weighted line graph, that is, a line graph with weighted edges. There are several natural ways to do this.
|
Line graph
| 0.836787
|
1,921
|
The N-terminal "ring" can be from 7 to 9 amino acids long and is formed by an isopeptide bond between the N-terminal amine of the first amino acid of the peptide and the carboxylate side chain of an aspartate or glutamate residue. The C-terminal "tail" ranges from 7 to 15 amino acids in length. The first amino acid of lasso peptides is almost invariably glycine or cysteine, with mutations at this site not being tolerated by known enzymes. Bioinformatics-based approaches to lasso peptide discovery have thus used this as a constraint. However, some lasso peptides were recently discovered that also contain serine or alanine as their first residue. The threading of the lasso tail is trapped either by disulfide bonds between ring and tail cysteine residues (class I lasso peptides), by steric effects due to bulky residues on the tail (class II lasso peptides), or both (class III lasso peptides). The compact structure makes lasso peptides frequently resistant to proteases or thermal unfolding.
|
Ribosomally synthesized and post-translationally modified peptide
| 0.836777
|
1,922
|
Lasso peptides are short peptides containing an N-terminal macrolactam macrocycle "ring" through which a linear C-terminal "tail" is threaded. Because of this threaded-loop topology, these peptides resemble lassos, giving rise to their name. They are a member of a larger class of amino-acid-based lasso structures. Additionally, lasso peptides are formally rotaxanes.
|
Ribosomally synthesized and post-translationally modified peptide
| 0.836777
|
1,923
|
Commonly, the B protein is referred to as the lasso protease, and the C protein is referred to as the lasso cyclase. Some lasso peptide biosynthetic gene clusters also require an additional protein of unknown function for biosynthesis. Additionally, lasso peptide gene clusters usually include an ABC transporter (D protein) or an isopeptidase, although these are not strictly required for lasso peptide biosynthesis and are sometimes absent. No X-ray crystal structure is yet known for any lasso peptide biosynthetic protein. The biosynthesis of lasso peptides is particularly interesting due to the inaccessibility of the threaded-lasso topology to chemical peptide synthesis.
|
Ribosomally synthesized and post-translationally modified peptide
| 0.836777
|
1,924
|
Instruments such as the astrolabe, the quadrant, and others were used to measure and accurately record the relative positions and movements of planets and other celestial objects. The sextant and other related instruments were essential for navigation at sea. Most instruments are used within the field of geometry, including the ruler, dividers, protractor, set square, compass, ellipsograph, T-square and opisometer. Others are used in arithmetic (for example the abacus, slide rule and calculator) or in algebra (the integraph). In astronomy, many have said the pyramids (along with Stonehenge) were actually instruments used for tracking the stars over long periods or for the annual planting seasons.
|
Mathematical instruments
| 0.836748
|
1,925
|
This is a method to find each digit of the square root in a sequence. This method is based on the binomial theorem and is basically an inverse algorithm solving (x + y)² = x² + 2xy + y². It is slower than the Babylonian method, but it has several advantages: It can be easier for manual calculations. Every digit of the root found is known to be correct, i.e., it does not have to be changed later.
|
Methods of computing square roots
| 0.836744
|
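The digit-by-digit method can be sketched in Python for decimal digits. The function name is illustrative; the code follows the schoolbook procedure of choosing, for each pair of digits, the largest digit d with (20·root + d)·d ≤ remainder:

```python
def digit_by_digit_sqrt(n: int, digits: int) -> str:
    # Decimal expansion of sqrt(n) with `digits` digits after the point,
    # via the schoolbook method based on (x + y)^2 = x^2 + 2xy + y^2.
    s = str(n)
    if len(s) % 2:
        s = "0" + s
    pairs = [int(s[i:i + 2]) for i in range(0, len(s), 2)]
    pairs += [0] * digits  # extra 00-pairs for fractional digits

    root, remainder = 0, 0
    out = []
    for p in pairs:
        remainder = remainder * 100 + p
        # largest digit d with (20 * root + d) * d <= remainder
        d = 0
        while (20 * root + d + 1) * (d + 1) <= remainder:
            d += 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d
        out.append(str(d))

    int_len = len(pairs) - digits
    return "".join(out[:int_len]) + "." + "".join(out[int_len:])

print(digit_by_digit_sqrt(2, 5))  # 1.41421
```

Because each digit is maximal subject to the squared bound, every digit emitted is final, which is the advantage noted above over iterative methods whose early digits may change.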
1,926
|
A method analogous to piece-wise linear approximation but using only arithmetic instead of algebraic equations, uses the multiplication tables in reverse: the square root of a number between 1 and 100 is between 1 and 10, so if we know 25 is a perfect square (5 × 5), and 36 is a perfect square (6 × 6), then the square root of a number greater than or equal to 25 but less than 36, begins with a 5. Similarly for numbers between other squares. This method will yield a correct first digit, but it is not accurate to one digit: the first digit of the square root of 35 for example, is 5, but the square root of 35 is almost 6.
|
Methods of computing square roots
| 0.836744
|
1,927
|
The most common analytical methods are iterative and consist of two steps: finding a suitable starting value, followed by iterative refinement until some termination criterion is met. The starting value can be any number, but fewer iterations will be required the closer it is to the final result. The most familiar such method, most suited for programmatic calculation, is Newton's method, which is based on a property of the derivative in the calculus.
|
Methods of computing square roots
| 0.836744
|
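Applied to f(x) = x² − S, Newton's method gives the familiar update x ← (x + S/x)/2, also known as the Babylonian method. A minimal Python sketch; the stopping criterion and default starting value are illustrative choices:

```python
def newton_sqrt(s: float, x0: float = None, tol: float = 1e-12) -> float:
    # Newton's method on f(x) = x^2 - s: x <- x - f(x)/f'(x) = (x + s/x) / 2.
    # Any positive starting value works; closer guesses need fewer iterations.
    x = x0 if x0 is not None else s / 2 or 1.0
    while abs(x * x - s) > tol * max(s, 1.0):
        x = (x + s / x) / 2
    return x

print(newton_sqrt(2))  # ≈ 1.4142135623730951
```

Each iteration roughly doubles the number of correct digits (quadratic convergence), which is why a good starting value matters mainly for the first few steps.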
1,928
|
Methods of computing square roots are numerical analysis algorithms for approximating the principal, or non-negative, square root (usually denoted √S or S^{1/2}) of a real number. Arithmetically, it means given S, a procedure for finding a number which when multiplied by itself, yields S; algebraically, it means a procedure for finding the non-negative root of the equation x² − S = 0; geometrically, it means given two line segments, a procedure for constructing their geometric mean. Every real number except zero has two square roots. The principal square root of most numbers is an irrational number with an infinite decimal expansion.
|
Methods of computing square roots
| 0.836744
|
1,929
|
Its case study is an investigation into possible gender bias in student admission at the University of California, Berkeley in the 1970s, in which the admission statistics for six separate departments showed a small bias in favor of women in admissions, and yet when grouped together into a single set the same statistics seemed to show a larger bias against women. A closer examination of the data explained that the lower overall admission rate for women was not because of discrimination by any department, but rather because the female applicants aimed higher, to the departments whose overall admission rates were low. The same chapter also brings in a later case of alleged anti-women bias at Berkeley, the lawsuit over the tenure denial of mathematician Jenny Harrison.
|
Math on Trial
| 0.836735
|
1,930
|
Noah Giansiracusa complains that the authors sometimes perpetrate the same fallacies or mistaken calculations that they warn of, that their treatment of legal reasoning can be superficial, and that their accounts of some cases appear to exhibit bias by the authors instead of presenting the cases neutrally. Daniel Ullman also outlines several miscalculations by the authors, while pointing out that they do not affect the overall story told by the book. Michael Finkelstein, a lawyer and scholar of legal statistics, points out an error of fact in Chapter 9 (the book discusses the jury's opinion in a case that had no jury), citing it as evidence of its tendency to aggrandize the role of mathematics in these cases.
|
Math on Trial
| 0.836735
|
1,931
|
Ludwig Paditz writes that it "vividly shows how the desire for scientific certainty can lead even well-meaning courts to commit grave injustice". Paul H. Edelman singles out the wide range of times and places of the cases presented as a particular strength of the book. Several reviewers suggest that, beyond a general audience, the book may also be useful as supplementary material for students of probability and statistics, although reviewer Chris Stapel warns that it often overemphasizes the significance of mathematics in the legal cases presented. As reviewer Iwan Praton writes, in many of these cases, the correct reasoning was also presented, but "it is not enough to be correct—one has to be persuasive, too". However, as well as these positive reviews, the book attracted a significant amount of criticism from its reviewers.
|
Math on Trial
| 0.836735
|
1,932
|
Surface science is closely related to interface and colloid science. Interfacial chemistry and physics are common subjects for both. The methods are different. In addition, interface and colloid science studies macroscopic phenomena that occur in heterogeneous systems due to peculiarities of interfaces.
|
Surface Physics
| 0.836725
|
1,933
|
Surface science is the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. It includes the fields of surface chemistry and surface physics. Some related practical applications are classed as surface engineering. The science encompasses concepts such as heterogeneous catalysis, semiconductor device fabrication, fuel cells, self-assembled monolayers, and adhesives.
|
Surface Physics
| 0.836725
|
1,934
|
Surface physics can be roughly defined as the study of physical interactions that occur at interfaces. It overlaps with surface chemistry. Some of the topics investigated in surface physics include friction, surface states, surface diffusion, surface reconstruction, surface phonons and plasmons, epitaxy, the emission and tunneling of electrons, spintronics, and the self-assembly of nanostructures on surfaces. Techniques to investigate processes at surfaces include surface X-ray scattering, scanning probe microscopy, surface-enhanced Raman spectroscopy, and X-ray photoelectron spectroscopy (XPS).
|
Surface Physics
| 0.836725
|
1,935
|
Surface chemistry can be roughly defined as the study of chemical reactions at interfaces. It is closely related to surface engineering, which aims at modifying the chemical composition of a surface by incorporation of selected elements or functional groups that produce various desired effects or improvements in the properties of the surface or interface. Surface science is of particular importance to the fields of heterogeneous catalysis, electrochemistry, and geochemistry.
|
Surface Physics
| 0.836725
|
1,936
|
The book is sectioned into four parts. The first part, Genetics and the Scientific Method, briefly reviews the history of genetics and the various methods used in genetic study. The second part focuses on Mendelian inheritance, the third part deals with molecular genetics, and the last section deals with quantitative genetics and evolutionary genetics.
|
Principles of genetics
| 0.836719
|
1,937
|
Principles of Genetics is a genetics textbook authored by D. Peter Snustad and Michael J. Simmons, an emeritus professor of biology, published by John Wiley & Sons, Inc. The 6th edition of the book was published in 2012.
|
Principles of genetics
| 0.836719
|
1,938
|
There are two common ways to describe the steepness of a road or railroad. One is by the angle between 0° and 90° (in degrees), and the other is by the slope in a percentage. See also steep grade railway and rack railway. The formulae for converting a slope given as a percentage into an angle in degrees and vice versa are: angle = arctan(slope / 100%) (this is the inverse function of tangent; see trigonometry) and slope = 100% × tan(angle), where angle is in degrees and the trigonometric functions operate in degrees.
|
Slope
| 0.83671
|
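The two conversion formulae can be written directly in Python; math.atan and math.tan work in radians, so the degree conversions are made explicit (function names are illustrative):

```python
import math

def slope_to_angle(slope_percent: float) -> float:
    # angle in degrees from a slope given in percent
    return math.degrees(math.atan(slope_percent / 100))

def angle_to_slope(angle_deg: float) -> float:
    # slope in percent from an angle given in degrees
    return 100 * math.tan(math.radians(angle_deg))

print(slope_to_angle(100))  # ≈ 45.0  (a 100% grade rises 1 unit per unit run)
print(angle_to_slope(45))   # ≈ 100.0
```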
1,939
|
The concept of a slope or gradient is also used as a basis for developing other applications in mathematics: gradient descent, a first-order iterative optimization algorithm for finding the minimum of a function; the gradient theorem, that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve; the gradient method, an algorithm to solve problems with search directions defined by the gradient of the function at the current point; the conjugate gradient method, an algorithm for the numerical solution of particular systems of linear equations; the nonlinear conjugate gradient method, which generalizes the conjugate gradient method to nonlinear optimization; and stochastic gradient descent, an iterative method for optimizing a differentiable objective function.
|
Slope
| 0.83671
|
1,940
|
The full solution thus is (y = x) ∪ (n^{1/(n−1)}, n^{n/(n−1)}) for n > 0, n ≠ 1. Based on the above solution, the derivative dy/dx is 1 for the (x, y) pairs on the line y = x, and for the other (x, y) pairs can be found by (dy/dn)/(dx/dn), which straightforward calculus gives as dy/dx = −n² for n > 0 and n ≠ 1. The following treatment explores some special cases and notes linkages to other mathematical concepts.
|
Equation xy = yx
| 0.836662
|
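The parametric solution above is easy to verify numerically. A minimal Python sketch (the function name is illustrative):

```python
import math

def solution_pair(n: float):
    # the non-trivial parametric solution of x^y = y^x, for n > 0, n != 1
    x = n ** (1 / (n - 1))
    y = n ** (n / (n - 1))
    return x, y

x, y = solution_pair(2)
print(x, y)  # 2.0 4.0, the classic integer solution 2^4 = 4^2
assert math.isclose(x ** y, y ** x)
```

Note that y = n·x along this branch, which is one way to derive the parametrization: substituting y = nx into x^y = y^x and solving for x gives x = n^{1/(n−1)}.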
1,941
|
A similar solution was found by Euler. J. van Hengel pointed out that if r, n are positive integers with r ≥ 3, then r^{r+n} > (r+n)^r; therefore it is enough to consider the possibilities x = 1 and x = 2 in order to find solutions in natural numbers. The problem was discussed in a number of publications. In 1960, the equation was among the questions on the William Lowell Putnam Competition, which prompted Alvin Hausner to extend results to algebraic number fields.
|
Equation xy = yx
| 0.836662
|
1,942
|
In universal algebra, an abstract algebra A is called simple if and only if it has no nontrivial congruence relations, or equivalently, if every homomorphism with domain A is either injective or constant. As congruences on rings are characterized by their ideals, this notion is a straightforward generalization of the notion from ring theory: a ring is simple in the sense that it has no nontrivial ideals if and only if it is simple in the sense of universal algebra. The same remark applies with respect to groups and normal subgroups; hence the universal notion is also a generalization of a simple group (it is a matter of convention whether a one-element algebra should be or should not be considered simple, hence only in this special case the notions might not match). A theorem by Roberto Magari in 1969 asserts that every variety contains a simple algebra.
|
Simple algebra (universal algebra)
| 0.836662
|
1,943
|
Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. The applied methods usually refer to nontrivial mathematical techniques or approaches. Mathematical economics is based on statistics, probability, mathematical programming (as well as other computational methods), operations research, game theory, and some methods from mathematical analysis. In this regard, it resembles (but is distinct from) financial mathematics, another part of applied mathematics. According to the Mathematics Subject Classification (MSC), mathematical economics falls into the Applied mathematics/other classification of category 91: Game theory, economics, social and behavioral sciences, with MSC2010 classifications for 'Game theory' at codes 91Axx and for 'Mathematical economics' at codes 91Bxx.
|
Applied geometry
| 0.836646
|
1,944
|
In mathematics, especially in the area of algebra studying the theory of abelian groups, a pure subgroup is a generalization of direct summand. It has found many uses in abelian group theory and related areas.
|
Pure subgroup
| 0.836628
|
1,945
|
The first ESA was held in 1993 and contained 35 papers. The intended scope was all research in algorithms, theoretical as well as applied, carried out in the fields of computer science and discrete mathematics. An explicit aim was to intensify the exchange between these two research communities.
|
Workshop on Algorithms Engineering
| 0.836618
|
1,946
|
The European Symposium on Algorithms (ESA) is an international conference covering the field of algorithms. It has been held annually since 1993, typically in early autumn in a different European location each year. Like most theoretical computer science conferences its contributions are strongly peer-reviewed; the articles appear in proceedings published in Springer Lecture Notes in Computer Science. The acceptance rate of ESA was 24% in 2012, in both the Design and Analysis and the Engineering and Applications tracks.
|
Workshop on Algorithms Engineering
| 0.836618
|
1,947
|
WAOA, the Workshop on Approximation and Online Algorithms, has been part of ALGO since 2003. ATMOS, the Workshop on Algorithmic Approaches for Transportation Modeling, Optimization and Systems, formerly the Workshop on Algorithmic Methods and Models for Optimization of Railways, was part of ALGO in 2003–2006 and 2008–2009. IPEC, the International Symposium on Parameterized and Exact Computation, founded in 2004 and formerly the International Workshop on Parameterized and Exact Computation (IWPEC), has been part of ALGO since 2011. ATMOS was co-located with the International Colloquium on Automata, Languages and Programming (ICALP) in 2001–2002.
|
Workshop on Algorithms Engineering
| 0.836618
|
1,948
|
Since 2001, ESA is co-located with other algorithms conferences and workshops in a combined meeting called ALGO. This is the largest European event devoted to algorithms, attracting hundreds of researchers. Other events in the ALGO conferences include the following. WABI, the Workshop on Algorithms in Bioinformatics, is part of ALGO in most years.
|
Workshop on Algorithms Engineering
| 0.836618
|
1,949
|
In both the UK and the US, professional societies had long existed for civil and mechanical engineers. The Institution of Electrical Engineers (IEE) was founded in the UK in 1871, and the AIEE in the United States in 1884. These societies contributed to the exchange of electrical knowledge and the development of electrical engineering education. On an international level, the International Electrotechnical Commission (IEC), which was founded in 1906, prepares standards for power engineering, with 20,000 electrotechnical experts from 172 countries developing global specifications based on consensus.
|
Power engineering
| 0.83659
|
1,950
|
Power engineering, also called power systems engineering, is a subfield of electrical engineering that deals with the generation, transmission, distribution, and utilization of electric power, and the electrical apparatus connected to such systems. Although much of the field is concerned with the problems of three-phase AC power – the standard for large-scale power transmission and distribution across the modern world – a significant fraction of the field is concerned with the conversion between AC and DC power and the development of specialized power systems such as those used in aircraft or for electric railway networks. Power engineering draws the majority of its theoretical base from electrical engineering and mechanical engineering.
|
Power engineering
| 0.83659
|
1,951
|
The installation powered a 100 horsepower (75 kW) synchronous motor at Telluride, Colorado with the motor being started by a Tesla induction motor. On the other side of the Atlantic, Oskar von Miller built a 20 kV 176 km three-phase transmission line from Lauffen am Neckar to Frankfurt am Main for the Electrical Engineering Exhibition in Frankfurt. In 1895, after a protracted decision-making process, the Adams No. 1 generating station at Niagara Falls began transmitting three-phase alternating current power to Buffalo at 11 kV. Following completion of the Niagara Falls project, new power systems increasingly chose alternating current as opposed to direct current for electrical transmission.
|
Power engineering
| 0.83659
|
1,952
|
One DNA or RNA molecule differs from another primarily in the sequence of nucleotides. Nucleotide sequences are of great importance in biology since they carry the ultimate instructions that encode all biological molecules, molecular assemblies, subcellular and cellular structures, organs, and organisms, and directly enable cognition, memory, and behavior. Enormous efforts have gone into the development of experimental methods to determine the nucleotide sequence of biological DNA and RNA molecules, and today hundreds of millions of nucleotides are sequenced daily at genome centers and smaller laboratories worldwide. In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI, https://www.ncbi.nlm.nih.gov) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site.
|
Nucleic acids
| 0.836589
|
1,953
|
Let us redesign the original library with extensibility in mind using the ideas from the paper Extensibility for the Masses. We use the same implementation as in the first code example but now add a new interface containing the functions over the type as well as a factory for the algebra. Notice that we now generate the expression in ExampleTwo.AddOneToTwo() using the ExpAlgebra interface instead of directly from the types. We can now add a function by extending the ExpAlgebra interface; we will add functionality to print the expression. Notice that in ExampleThree.Print() we are printing an expression that was already compiled in ExampleTwo; we did not need to modify any existing code. Notice also that this is still strongly typed: we do not need reflection or casting. If we replaced the PrintFactory() with the ExpFactory() in ExampleThree.Print(), we would get a compilation error, since the .Print() method does not exist in that context.
|
Expression problem
| 0.836587
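The row above describes a C#-style object-algebra design (ExpAlgebra, ExpFactory, PrintFactory). The same pattern can be sketched in Python; the code below mirrors the text's names, but the bodies are an illustrative reconstruction, not the paper's actual code:

```python
# Dynamically-typed Python sketch of the object-algebra pattern described
# in the text. ExpFactory and PrintFactory mirror the text's C#-style names;
# the implementations are illustrative assumptions.
class ExpFactory:
    """Algebra whose carrier is zero-argument evaluators."""
    def lit(self, n):
        return lambda: n
    def add(self, left, right):
        return lambda: left() + right()

class PrintFactory:
    """Extended algebra over the same constructors; carrier prints instead."""
    def lit(self, n):
        return lambda: str(n)
    def add(self, left, right):
        return lambda: f"({left()} + {right()})"

def add_one_to_two(alg):
    # Built against the abstract algebra, as in the text's ExampleTwo.
    return alg.add(alg.lit(1), alg.lit(2))

print(add_one_to_two(ExpFactory())())    # 3
print(add_one_to_two(PrintFactory())())  # (1 + 2)
```

The key point survives the translation: `add_one_to_two` is written once against the abstract algebra and gains new interpretations (evaluation, printing) without modifying existing code, though Python's duck typing cannot reproduce the static guarantees the text emphasizes.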
|
1,954
|
There are various solutions to the expression problem. Each solution varies in the amount of code a user must write to implement them, and the language features they require. Multiple dispatch Open classes Coproducts of functors Type classes Tagless-final / Object algebras Polymorphic Variants
|
Expression problem
| 0.836587
|
1,955
|
For PLT, the problem had shown up in the construction of DrScheme, now DrRacket, and they solved it via a rediscovery of mixins. To avoid using a programming language problem in a paper about programming languages, Krishnamurthi et al. used an old geometry programming problem to explain their pattern-oriented solution. In conversations with Felleisen and Krishnamurthi after the ECOOP presentation, Wadler understood the PL-centric nature of the problem and he pointed out that Krishnamurthi's solution used a cast to circumvent Java's type system.
|
Expression problem
| 0.836587
|
1,956
|
Most importantly, he discussed situations in which there was more flexibility than Reynolds considered, including internalization and optimization of methods. At ECOOP '98, Shriram Krishnamurthi et al. presented a design pattern solution to the problem of simultaneously extending an expression-oriented programming language and its tool-set. They dubbed it the "expressivity problem" because they thought programming language designers could use the problem to demonstrate the expressive power of their creations.
|
Expression problem
| 0.836587
|
1,957
|
Philip Wadler formulated the challenge and named it "The Expression Problem" in response to a discussion with Rice University's Programming Languages Team (PLT). He also cited three sources that defined the context for his challenge: The problem was first observed by John Reynolds in 1975. Reynolds discussed two forms of Data Abstraction: User-defined Types, which are now known as Abstract Data Types (ADTs) (not to be confused with Algebraic Data Types), and Procedural Data Structures, which are now understood as a primitive form of Objects with only one method. He argued that they are complementary, in that User-defined Types could be extended with new behaviors, and Procedural Data Structures could be extended with new representations.
|
Expression problem
| 0.836587
|
1,958
|
As of 2009, the majority of primary care physicians did not have adequate training in genetics or genomics. Although medical school curricula typically include medical genetics, fewer than half offer a standalone course, and the emphasis on practical applications is weak. Stanford University was the first medical school in the United States to offer a course teaching the interpretation of genetic data. Students were able to study their own genotypes, determined using commercially available genotyping platforms (23andMe or Navigenics). Although there was skepticism that this would improve educational outcomes, a survey later showed that it had increased students' enthusiasm for the subject. A similar class, launched in 2012, is offered at Mount Sinai School of Medicine, in which students have the option of analyzing their entire genome sequence instead of only their genotype.
|
Education in personalized medicine
| 0.836553
|
1,959
|
Personalized medicine involves medical treatments based on the characteristics of individual patients, including their medical history, family history, and genetics. Although personal genetic information is becoming increasingly important in healthcare, there is a lack of sufficient education in medical genetics among physicians and the general public. For example, pharmacogenomics (genetic factors influencing drug response) is practiced worldwide by only a limited number of pharmacists, although most pharmacy colleges in the United States now include it in their curriculum. It is also increasingly common for genetic testing to be offered directly to consumers, who subsequently seek out educational materials and bring their results to their doctors. Issues involving genetic testing also invariably lead to ethical and legal concerns, such as the potential for inadvertent effects on family members, increased insurance rates, or increased psychological stress.
|
Education in personalized medicine
| 0.836553
|
1,960
|
In the differential equations, the nabla symbol, ∇, denotes the three-dimensional gradient operator, del, the ∇⋅ symbol (pronounced "del dot") denotes the divergence operator, the ∇× symbol (pronounced "del cross") denotes the curl operator.
|
Maxwell equation
| 0.836527
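The divergence and curl operators named above can be approximated numerically. A minimal sketch, assuming the example field F(x, y, z) = (x, y, z), chosen here for illustration because ∇·F = 3 and ∇×F = 0 exactly:

```python
h = 1e-5  # finite-difference step

def F(x, y, z):
    # Example field; its divergence is identically 3 and its curl is 0.
    return (x, y, z)

def divergence(F, x, y, z):
    # ∇·F via central differences in each coordinate
    return ((F(x + h, y, z)[0] - F(x - h, y, z)[0])
            + (F(x, y + h, z)[1] - F(x, y - h, z)[1])
            + (F(x, y, z + h)[2] - F(x, y, z - h)[2])) / (2 * h)

def curl(F, x, y, z):
    # ∇×F via central differences; dFi_dj is the partial derivative of
    # component i with respect to coordinate j.
    def dFi_dj(i, j):
        p, q = [x, y, z], [x, y, z]
        p[j] += h
        q[j] -= h
        return (F(*p)[i] - F(*q)[i]) / (2 * h)
    return (dFi_dj(2, 1) - dFi_dj(1, 2),
            dFi_dj(0, 2) - dFi_dj(2, 0),
            dFi_dj(1, 0) - dFi_dj(0, 1))

print(divergence(F, 1.0, 2.0, 3.0))  # ≈ 3.0
print(curl(F, 1.0, 2.0, 3.0))        # ≈ (0.0, 0.0, 0.0)
```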
|
1,961
|
List of numerical computational geometry topics enumerates the topics of computational geometry that deal with geometric objects as continuous entities and apply methods and algorithms characteristic of numerical analysis. This area is also called "machine geometry", computer-aided geometric design, and geometric modelling. See List of combinatorial computational geometry topics for another flavor of computational geometry that states problems in terms of geometric objects as discrete entities, so that the methods of their solution are mostly theories and algorithms of combinatorial character.
|
List of numerical computational geometry topics
| 0.836513
|
1,962
|
The College Board suggested as preparation for the test four years of mathematics, including two years of algebra, one year of geometry, and one year of either precalculus or trigonometry. While the precalculus or trigonometry course may have been good preparation for this test, students may have needed to buy extra resource materials if they wanted to score beyond 700. The exam covered several years of mathematics, and students were expected to work quickly and efficiently.
|
SAT Subject Test in Mathematics Level 2
| 0.836496
|
1,963
|
Compared to Mathematics 1, Mathematics 2 was more advanced. Whereas the Mathematics 1 test covered Algebra II and basic trigonometry, a pre-calculus class was good preparation for Mathematics 2. On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Mathematics Level 2. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
|
SAT Subject Test in Mathematics Level 2
| 0.836496
|
1,964
|
In the U.S., the SAT Subject Test in Mathematics Level 2 (formerly known as Math II or Math IIC, the "C" representing the permitted use of a calculator) was a one-hour multiple choice test. The questions covered a broad range of topics. Approximately 10-14% of questions focused on numbers and operations, 48-52% focused on algebra and functions, 28-32% focused on geometry (coordinate, three-dimensional, and trigonometric geometry were covered; plane geometry was not directly tested), and 8-12% focused on data analysis, statistics and probability.
|
SAT Subject Test in Mathematics Level 2
| 0.836496
|
1,965
|
MOWChIP-seq is an enhanced, low-input form of ChIP-seq, and thus applies to any molecular biology question that can be probed using ChIP-seq. This includes analysis of histone modifications, RNA pol II binding, and transcription factor binding. Published MOWChIP-seq results include studies of various histone marks (H3K4me3, H3K27ac, H3K27me3, H3K9me3, H3K36me3, and H3K79me2).
|
MOWChIP-seq
| 0.836481
|
1,966
|
MOWChIP-seq (Microfluidic Oscillatory Washing–based Chromatin ImmunoPrecipitation followed by sequencing) is a microfluidic technology used in molecular biology for profiling genome-wide histone modifications and other molecular bindings using as few as 30-100 cells per assay. MOWChIP-seq is a special type of ChIP-seq assay designed for low-input and high-throughput assays. The overall process of MOWChIP-seq is similar to that of conventional ChIP-seq assay except that the chromatin immunoprecipitation (ChIP) and washing steps occur in a small microfluidic chamber. MOWChIP-seq takes advantage of the capability of microfluidics for manipulating micrometer-sized beads.
|
MOWChIP-seq
| 0.836481
|
1,967
|
Despite the obvious power of the approach, eDNA metabarcoding is affected by precision and accuracy challenges distributed throughout the workflow in the field, in the laboratory and at the keyboard. As set out in the diagram at the right, following the initial study design (hypothesis/question, targeted taxonomic group, etc.) the current eDNA workflow consists of three components: field, laboratory and bioinformatics. The field component consists of sample collection (e.g., water, sediment, air) that is preserved or frozen prior to DNA extraction. The laboratory component has four basic steps: (i) DNA is concentrated (if not performed in the field) and purified, (ii) PCR is used to amplify a target gene or region, (iii) unique nucleotide sequences called "indexes" (also referred to as "barcodes") are incorporated using PCR or are ligated (bound) onto different PCR products, creating a "library" whereby multiple samples can be pooled together, and (iv) pooled libraries are then sequenced on a high-throughput machine. The final step after laboratory processing of samples is to computationally process the output files from the sequencer using a robust bioinformatics pipeline.
|
DNA metabarcoding
| 0.836453
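Step (iii) of the laboratory workflow above, index-based pooling, can be illustrated with a toy demultiplexer. The index sequences, sample names, and reads below are all hypothetical:

```python
# Toy demultiplexer: reads carry a sample-specific index ("barcode") prefix,
# used here to assign each pooled read back to its sample of origin.
INDEX_TO_SAMPLE = {"ACGT": "sample_A", "TGCA": "sample_B"}  # hypothetical indexes

def demultiplex(reads, index_len=4):
    bins = {name: [] for name in INDEX_TO_SAMPLE.values()}
    bins["unassigned"] = []
    for read in reads:
        sample = INDEX_TO_SAMPLE.get(read[:index_len], "unassigned")
        bins[sample].append(read[index_len:])  # strip the index, keep the insert
    return bins

pooled = ["ACGTTTGGAA", "TGCACCGGTT", "NNNNAAAA"]
print(demultiplex(pooled))
```

Real demultiplexers additionally tolerate sequencing errors in the index (e.g., one-mismatch matching); this sketch uses exact prefixes only.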
|
1,968
|
In microbiology, genes can move freely even between distantly related bacteria, possibly extending to the whole bacterial domain. As a rule of thumb, microbiologists have assumed that kinds of Bacteria or Archaea with 16S ribosomal RNA gene sequences more similar than 97% to each other need to be checked by DNA-DNA hybridisation to decide if they belong to the same species or not. This concept was narrowed in 2006 to a similarity of 98.7%. DNA-DNA hybridisation is outdated, and results have sometimes led to misleading conclusions about species, as with the pomarine skua and great skua. Modern approaches compare sequence similarity using computational methods.
|
DNA metabarcoding
| 0.836453
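The similarity thresholds quoted above (97%, 98.7%) refer to pairwise sequence identity. A minimal sketch of that computation, assuming two already-aligned, gap-free toy fragments far shorter than a real 16S gene:

```python
# Pairwise percent identity between two aligned, gap-free sequences.
def percent_identity(a, b):
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

s1 = "ACGTACGTAC"
s2 = "ACGTACGTAA"   # one mismatch in ten positions
print(percent_identity(s1, s2))  # 90.0
```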
|
1,969
|
Ecosystem-wide applications of eDNA metabarcoding have the potential to not only describe communities and biodiversity, but also to detect interactions and functional ecology over large spatial scales, though it may be limited by false readings due to contamination or other errors. Altogether, eDNA metabarcoding increases speed, accuracy, and identification over traditional barcoding and decreases cost, but needs to be standardized and unified, integrating taxonomy and molecular methods for full ecological study. eDNA metabarcoding has applications to diversity monitoring across all habitats and taxonomic groups, ancient ecosystem reconstruction, plant-pollinator interactions, diet analysis, invasive species detection, pollution responses, and air quality monitoring. eDNA metabarcoding is a unique method still in development and will likely remain in flux for some time as technology advances and procedures become standardized. However, as metabarcoding is optimized and its use becomes more widespread, it is likely to become an essential tool for ecological monitoring and global conservation study.
|
DNA metabarcoding
| 0.836453
|
1,970
|
eDNA production is dependent on biomass, age and feeding activity of the organism as well as physiology, life history, and space use. By 2019, methods in eDNA research had been expanded to be able to assess whole communities from a single sample. This process involves metabarcoding, which can be precisely defined as the use of general or universal polymerase chain reaction (PCR) primers on mixed DNA samples from any origin followed by high-throughput next-generation sequencing (NGS) to determine the species composition of the sample. This method has been common in microbiology for years, but, as of 2020, it is only just finding its footing in the assessment of macroorganisms.
|
DNA metabarcoding
| 0.836453
|
1,971
|
Since the inception of high-throughput sequencing (HTS), the use of metabarcoding as a biodiversity detection tool has drawn immense interest. However, there has yet to be clarity regarding what source material is used to conduct metabarcoding analyses (e.g., environmental DNA versus community DNA). Without clarity between these two source materials, differences in sampling, as well as differences in laboratory procedures, can impact subsequent bioinformatics pipelines used for data processing, and complicate the interpretation of spatial and temporal biodiversity patterns. Here, we seek to clearly differentiate among the prevailing source materials used and their effect on downstream analysis and interpretation for environmental DNA metabarcoding of animals and plants compared to that of community DNA metabarcoding. With community DNA metabarcoding of animals and plants, the targeted groups are most often collected in bulk (e.g., soil, malaise trap or net), and individuals are removed from other sample debris and pooled together prior to bulk DNA extraction.
|
DNA metabarcoding
| 0.836453
|
1,972
|
This requires a complete description of the geometry of the member, its constraints, the loads applied to the member and the properties of the material of which the member is composed. The applied loads may be axial (tensile or compressive), or rotational (strength shear). With a complete description of the loading and the geometry of the member, the state of stress and state of strain at any point within the member can be calculated.
|
Mechanical strength
| 0.836418
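The axial-load case described above can be sketched numerically; the load, section area, and Young's modulus below are illustrative values, not from the text:

```python
# Axial stress and strain in a prismatic bar under a tensile load.
def axial_stress(force_n, area_m2):
    return force_n / area_m2               # sigma = F / A, in Pa

def axial_strain(stress_pa, youngs_modulus_pa):
    return stress_pa / youngs_modulus_pa   # epsilon = sigma / E (Hooke's law)

sigma = axial_stress(10_000.0, 0.001)   # 10 kN over a 10 cm^2 section
eps = axial_strain(sigma, 200e9)        # steel-like E = 200 GPa
print(sigma, eps)  # sigma = 1.0e7 Pa, eps = 5.0e-5
```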
|
1,973
|
The field of strength of materials (also called mechanics of materials) typically refers to various methods of calculating the stresses and strains in structural members, such as beams, columns, and shafts. The methods employed to predict the response of a structure under loading and its susceptibility to various failure modes takes into account the properties of the materials such as its yield strength, ultimate strength, Young's modulus, and Poisson's ratio. In addition, the mechanical element's macroscopic properties (geometric properties) such as its length, width, thickness, boundary constraints and abrupt changes in geometry such as holes are considered. The theory began with the consideration of the behavior of one and two dimensional members of structures, whose states of stress can be approximated as two dimensional, and was then generalized to three dimensions to develop a more complete theory of the elastic and plastic behavior of materials. An important founding pioneer in mechanics of materials was Stephen Timoshenko.
|
Mechanical strength
| 0.836418
|
1,974
|
Support vector machines are based upon the idea of maximizing the margin i.e. maximizing the minimum distance from the separating hyperplane to the nearest example. The basic SVM supports only binary classification, but extensions have been proposed to handle the multiclass classification case as well. In these extensions, additional parameters and constraints are added to the optimization problem to handle the separation of the different classes.
|
Multiclass classification
| 0.83641
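The one-vs-rest idea behind those multiclass extensions can be sketched without a full SVM optimizer. Here each per-class binary classifier is a simple centroid-bisector rule (a simplifying assumption; a real SVM would maximize the margin instead), and the class with the highest score wins:

```python
# One-vs-rest on toy 2-D data: one binary linear scorer per class.
def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def train_ovr(points, labels):
    models = {}
    for c in set(labels):
        pos = [p for p, y in zip(points, labels) if y == c]
        neg = [p for p, y in zip(points, labels) if y != c]
        mc, mr = centroid(pos), centroid(neg)
        w = (mc[0] - mr[0], mc[1] - mr[1])   # normal of the bisecting hyperplane
        b = -(w[0] * (mc[0] + mr[0]) + w[1] * (mc[1] + mr[1])) / 2
        models[c] = (w, b)
    return models

def predict(models, p):
    # Highest linear score across the per-class binary models wins.
    return max(models, key=lambda c: models[c][0][0] * p[0]
                                     + models[c][0][1] * p[1] + models[c][1])

pts = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0), (10, 1)]
labs = ["a", "a", "b", "b", "c", "c"]
models = train_ovr(pts, labs)
print([predict(models, p) for p in pts])  # ['a', 'a', 'b', 'b', 'c', 'c']
```

Swapping in a real margin-maximizing binary classifier for `train_ovr`'s inner rule recovers the standard one-vs-rest SVM construction.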
|
1,975
|
SequenceBase, a privately held company, is an international patent sequence information provider with headquarters located in Edison, NJ, USA. SequenceBase develops and markets the SequenceBase Research Portal to the biotechnology, legal, pharmaceutical, scientific, technical and academic bioinformatics communities. Clarivate Analytics acquired SequenceBase on 9 September 2019. USGENE provides searchable access to all available peptide and nucleotide sequences from the published applications and issued patents of the United States Patent and Trademark Office (USPTO). USGENE can be searched directly via the SequenceBase Research Portal or via STN International by FIZ Karlsruhe. The SequenceBase Research Portal offers BLAST+ as a sequence searching method.
|
SequenceBase
| 0.836382
|
1,976
|
In mathematics, in algebra, in the realm of group theory, a subgroup H of a finite group G is said to be semipermutable if H commutes with every subgroup K whose order is relatively prime to that of H. Clearly, every permutable subgroup of a finite group is semipermutable. The converse, however, is not necessarily true.
|
Semipermutable subgroup
| 0.83638
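The defining condition, that H permutes with every subgroup K of coprime order (HK = KH as sets), can be checked concretely. A minimal sketch in S3, with permutations written as tuples where p[i] is the image of i:

```python
# Checking HK = KH in S3 for subgroups of coprime order.
def compose(p, q):
    # Apply q first, then p.
    return tuple(p[q[i]] for i in range(len(q)))

def product_set(A, B):
    # The set product AB = {ab : a in A, b in B}.
    return {compose(a, b) for a in A for b in B}

e = (0, 1, 2)
H = {e, (1, 2, 0), (2, 0, 1)}   # the subgroup A3, order 3
K = {e, (1, 0, 2)}              # a subgroup of order 2; gcd(3, 2) = 1
print(product_set(H, K) == product_set(K, H))  # True: HK = KH (both equal S3)
```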
|
1,977
|
In computational geometry, a standard technique to build a structure like a convex hull or Delaunay triangulation is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be bounded from above. This technique is known as randomized incremental construction.
|
Randomized complexity
| 0.83637
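The expected-cost argument above can be illustrated with a toy stand-in for the real geometric structures: track a running minimum under a random insertion order. By the backward-analysis argument, the i-th insertion changes the minimum with probability 1/i, so the expected total number of changes is the harmonic number H_n ≈ ln n rather than n:

```python
import math
import random

# Toy stand-in for randomized incremental construction: the maintained
# "structure" is just the running minimum, and an insertion that lowers it
# counts as a structural change.
def count_updates(values, rng):
    order = values[:]
    rng.shuffle(order)            # the random permutation step
    updates, best = 0, float("inf")
    for v in order:
        if v < best:              # this insertion forces a change
            best = v
            updates += 1
    return updates

rng = random.Random(0)            # fixed seed for reproducibility
n = 1000
avg = sum(count_updates(list(range(n)), rng) for _ in range(200)) / 200
print(avg, math.log(n))           # average is near H_1000 ≈ 7.49
```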
|
1,978
|
Prior to the popularization of randomized algorithms in computer science, Paul Erdős popularized the use of randomized constructions as a mathematical technique for establishing the existence of mathematical objects. This technique has become known as the probabilistic method. Erdős gave his first application of the probabilistic method in 1947, when he used a simple randomized construction to establish the existence of Ramsey graphs. He famously used a much more sophisticated randomized algorithm in 1959 to establish the existence of graphs with high girth and chromatic number.
|
Randomized complexity
| 0.83637
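Erdős's 1947 argument is itself a short computation: if C(n, k) · 2^(1−C(k,2)) < 1, the expected number of monochromatic copies of K_k in a uniformly random 2-coloring of K_n is below one, so some coloring has none and therefore R(k, k) > n. A sketch:

```python
import math

# Erdős's probabilistic bound for Ramsey graphs: C(n, k) counts the k-sets,
# and 2^(1 - C(k, 2)) is the probability a given k-set is monochromatic.
def ramsey_bound_holds(n, k):
    expected_monochromatic = math.comb(n, k) * 2 ** (1 - math.comb(k, 2))
    return expected_monochromatic < 1

print(ramsey_bound_holds(32, 10))      # True: certifies R(10, 10) > 32
print(ramsey_bound_holds(10_000, 10))  # False: the bound says nothing here
```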
|
1,979
|
The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-meter (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry. Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge.
|
Electric dipole
| 0.836366
|
1,980
|
Some authors may split d in half and use s = d/2 since this quantity is the distance between either charge and the center of the dipole, leading to a factor of two in the definition. A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form where d is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector p also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from positive charge to negative charge then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for the dipole, from the positive charge to the negative charge, is used in chemistry.An idealization of this two-charge system is the electrical point dipole consisting of two (infinite) charges only infinitesimally separated, but with a finite p. This quantity is used in the definition of polarization density.
|
Electric dipole
| 0.836366
|
1,981
|
Often in physics the dimensions of a massive object can be ignored and the object can be treated as a pointlike object, i.e. a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge +q and the other one with charge −q separated by a distance d, constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has magnitude p = qd and is directed from the negative charge to the positive one.
|
Electric dipole
| 0.836366
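The definitions above can be checked numerically: for a net-neutral system, p = Σ q_i r_i, and for the two-charge dipole its magnitude is qd, pointing from −q to +q. The charge and separation values below are illustrative:

```python
# Dipole moment p = sum of q_i * r_i for point charges; this sum is
# origin-independent when the net charge is zero.
def dipole_moment(charges):
    # charges: iterable of (q, (x, y, z)) in coulombs and meters
    return tuple(sum(q * r[i] for q, r in charges) for i in range(3))

q = 1.0e-9   # 1 nC
d = 1.0e-3   # 1 mm separation along z
pair = [(+q, (0.0, 0.0, +d / 2)),   # positive charge at +z
        (-q, (0.0, 0.0, -d / 2))]   # negative charge at -z
p = dipole_moment(pair)
print(p)  # z-component q*d = 1e-12 C·m, pointing from -q toward +q
```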
|
1,982
|
Phase Genomics is an American biotechnology company based in Seattle, Washington. The company develops proximity ligation kits and Hi-C sequencing technology used to analyze chromosomes. Phase Genomics sells proximity ligation kits, scientific services, and computational analyses.
|
Phase Genomics
| 0.836362
|
1,983
|
This result was shown by Seinosuke Toda in 1989 and is known as Toda's theorem. This is evidence of how hard it is to solve problems in PP. The class #P is in some sense about as hard, since P^#P = P^PP and therefore P^#P includes PH as well.
|
PP (complexity class)
| 0.83636
|
1,984
|
If H is a normal subgroup of G, then the quotient group G/H becomes a topological group when given the quotient topology. It is Hausdorff if and only if H is closed in G. For example, the quotient group R/Z is isomorphic to the circle group S^1. In any topological group, the identity component (i.e., the connected component containing the identity element) is a closed normal subgroup. If C is the identity component and a is any point of G, then the left coset aC is the component of G containing a. So the collection of all left cosets (or right cosets) of C in G is equal to the collection of all components of G. It follows that the quotient group G/C is totally disconnected.
|
Closed subgroup
| 0.83636
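The example above, R/Z isomorphic to the circle group, can be illustrated numerically via the map f(x) = e^(2πix), which sends addition modulo 1 to multiplication on the unit circle:

```python
import cmath

# f(x) = exp(2*pi*i*x) realizes R/Z as the unit circle S^1.
def f(x):
    return cmath.exp(2j * cmath.pi * x)

x, y = 0.3, 0.9
lhs = f((x + y) % 1.0)   # image of the coset representative of x + y
rhs = f(x) * f(y)
print(abs(lhs - rhs) < 1e-9)          # True: addition mod 1 maps to multiplication
print(abs(abs(f(x)) - 1.0) < 1e-12)   # True: values lie on the unit circle
```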
|
1,985
|
If H is a subgroup of G, the set of left cosets G/H with the quotient topology is called a homogeneous space for G. The quotient map q : G → G/H is always open. For example, for a positive integer n, the sphere S^n is a homogeneous space for the rotation group SO(n+1) in R^(n+1), with S^n = SO(n+1)/SO(n). A homogeneous space G/H is Hausdorff if and only if H is closed in G. Partly for this reason, it is natural to concentrate on closed subgroups when studying topological groups.
|
Closed subgroup
| 0.83636
|
1,986
|
S is a complete uniform space (under the point-set topology definition of "complete uniform space") when S is endowed with the uniformity induced on it by the canonical uniformity of X. A subset S is called a sequentially complete subset if every Cauchy sequence in S (or equivalently, every elementary Cauchy filter/prefilter on S) converges to at least one point of S. Importantly, convergence outside of S is allowed: if X is not Hausdorff and if every Cauchy prefilter on S converges to some point of S, then S will be complete even if some or all Cauchy prefilters on S also converge to point(s) in the complement X ∖ S.
|
Closed subgroup
| 0.83636
|
1,987
|
In mathematics, topological groups are the combination of groups and topological spaces, i.e. they are groups and topological spaces at the same time, such that the continuity condition for the group operations connects these two structures together and consequently they are not independent from each other. Topological groups have been studied extensively in the period of 1925 to 1940. Haar and Weil (respectively in 1933 and 1940) showed that the integrals and Fourier series are special cases of a very wide class of topological groups. Topological groups, along with continuous group actions, are used to study continuous symmetries, which have many applications, for example, in physics. In functional analysis, every topological vector space is an additive topological group with the additional property that scalar multiplication is continuous; consequently, many results from the theory of topological groups can be applied to functional analysis.
|
Closed subgroup
| 0.836359
|
1,988
|
Examples include all finite-dimensional alternative algebras, and the algebra of real 2-by-2 matrices. Up to isomorphism the only alternative, quadratic real algebras without divisors of zero are the reals, complexes, quaternions, and octonions. The Cayley–Dickson algebras (where K is R), which begin with: C (a commutative and associative algebra); the quaternions H (an associative algebra); the octonions (an alternative algebra); the sedenions, and the infinite sequence of Cayley-Dickson algebras (power-associative algebras).
|
Quadratic representation
| 0.836343
|
1,989
|
These include most of the algebras of interest to multilinear algebra, such as the tensor algebra, symmetric algebra, and exterior algebra over a given vector space. Graded algebras can be generalized to filtered algebras.
|
Quadratic representation
| 0.836343
|
1,990
|
Power-associative algebras are those algebras satisfying the power-associative identity. Examples include all associative algebras, all alternative algebras, Jordan algebras over a field other than GF(2) (see previous section), and the sedenions. The hyperbolic quaternion algebra over R was an experimental algebra before the adoption of Minkowski space for special relativity. More classes of algebras: Graded algebras.
|
Quadratic representation
| 0.836343
|
1,991
|
The most important examples of alternative algebras are the octonions (an algebra over the reals), and generalizations of the octonions over other fields. All associative algebras are alternative. Up to isomorphism, the only finite-dimensional real alternative division algebras (see below) are the reals, complexes, quaternions and octonions.
|
Quadratic representation
| 0.836343
|
1,992
|
In contrast to the Lie algebra case, not every Jordan algebra can be constructed this way. Those that can are called special. Alternative algebras are algebras satisfying the alternative property.
|
Quadratic representation
| 0.836343
|
1,993
|
Every associative algebra gives rise to a Lie algebra by using the commutator as Lie bracket. In fact every Lie algebra can either be constructed this way, or is a subalgebra of a Lie algebra so constructed. Every associative algebra over a field of characteristic other than 2 gives rise to a Jordan algebra by defining a new multiplication x*y = (xy+yx)/2.
|
Quadratic representation
| 0.836343
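Both constructions above can be verified concretely, assuming 2×2 real matrices as the associative algebra: the commutator bracket satisfies the Jacobi identity, and the Jordan product x*y = (xy + yx)/2 is commutative:

```python
# 2x2 matrices as the ambient associative algebra (a standing assumption
# of this sketch).
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def scale(a, s):
    return [[s * a[i][j] for j in range(2)] for i in range(2)]

def bracket(a, b):
    return add(mul(a, b), scale(mul(b, a), -1))   # [a, b] = ab - ba

def jordan(a, b):
    return scale(add(mul(a, b), mul(b, a)), 0.5)  # (ab + ba) / 2

x, y, z = [[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 0], [5, -1]]

jacobi = add(add(bracket(x, bracket(y, z)),
                 bracket(y, bracket(z, x))),
             bracket(z, bracket(x, y)))
print(jacobi)                        # [[0, 0], [0, 0]]: Jacobi identity holds
print(jordan(x, y) == jordan(y, x))  # True: the Jordan product is commutative
```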
|
1,994
|
Euclidean space R^3 with multiplication given by the vector cross product is an example of an algebra which is anticommutative and not associative. The cross product also satisfies the Jacobi identity. Lie algebras are algebras satisfying anticommutativity and the Jacobi identity. Algebras of vector fields on a differentiable manifold (if K is R or the complex numbers C) or an algebraic variety (for general K); Jordan algebras are algebras which satisfy the commutative law and the Jordan identity.
|
Quadratic representation
| 0.836343
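The claims about the cross product (anticommutative, non-associative, Jacobi identity) can be checked directly on sample vectors, chosen arbitrarily here:

```python
# Direct checks in R^3 with the cross product.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def add3(*vs):
    return tuple(sum(v[i] for v in vs) for i in range(3))

x, y, z = (1, 2, 3), (4, 5, 6), (7, 8, 10)

assert cross(x, y) == tuple(-c for c in cross(y, x))    # anticommutative
assert cross(cross(x, y), z) != cross(x, cross(y, z))   # not associative
assert add3(cross(x, cross(y, z)),
            cross(y, cross(z, x)),
            cross(z, cross(x, y))) == (0, 0, 0)         # Jacobi identity
print("all identities verified")
```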
|
1,995
|
Power associative: the subalgebra generated by any element is associative, i.e., nth power associative for all n ≥ 2. nth power commutative with n ≥ 2: x^(n−k) x^k = x^k x^(n−k) for all integers k so that 0 < k < n. Third power commutative: x^2 x = x x^2. Fourth power commutative: x^3 x = x x^3 (compare with fourth power associative above). Power commutative: the subalgebra generated by any element is commutative, i.e., nth power commutative for all n ≥ 2. Nilpotent of index n ≥ 2: the product of any n elements, in any association, vanishes, but not for some n−1 elements: x_1 x_2 … x_n = 0 and there exist n−1 elements so that y_1 y_2 … y_(n−1) ≠ 0 for a specific association. Nil of index n ≥ 2: power associative and x^n = 0 and there exists an element y so that y^(n−1) ≠ 0.
|
Quadratic representation
| 0.836343
|
1,996
|
Let x, y and z denote arbitrary elements of the algebra A over the field K. Let powers to positive (non-zero) integers be recursively defined by x^1 ≝ x and either x^(n+1) ≝ x^n x (right powers) or x^(n+1) ≝ x x^n (left powers), depending on authors. Unital: there exists an element e so that ex = x = xe; in that case we can define x^0 ≝ e. Associative: (xy)z = x(yz). Commutative: xy = yx. Anticommutative: xy = −yx.
|
Quadratic representation
| 0.836343
|
1,997
|
They carry two multiplications, turning them into commutative algebras and Lie algebras in different ways. Genetic algebras are non-associative algebras used in mathematical genetics. Triple systems
|
Quadratic representation
| 0.836343
|
1,998
|
Hypercomplex algebras are all finite-dimensional unital R-algebras, they thus include Cayley-Dickson algebras and many more. The Poisson algebras are considered in geometric quantization.
|
Quadratic representation
| 0.836343
|
1,999
|
The quaternions and octonions are not commutative. Of these algebras, all are associative except for the octonions. Quadratic algebras, which require that xx = re + sx, for some elements r and s in the ground field, and e a unit for the algebra.
|
Quadratic representation
| 0.836343
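The quadratic condition xx = re + sx can be instanced concretely: over R, every complex number x satisfies x^2 = −|x|^2 · 1 + (2 Re x) · x, since x^2 − (x + conj(x)) x + x·conj(x) = 0. A quick numeric check:

```python
# The complex numbers as a quadratic R-algebra: x^2 = r*1 + s*x with
# r = -x*conj(x) = -|x|^2 and s = x + conj(x) = 2*Re(x).
def quadratic_identity_holds(x, tol=1e-12):
    r = -(abs(x) ** 2)
    s = 2 * x.real
    return abs(x * x - (r + s * x)) < tol

print(all(quadratic_identity_holds(z) for z in (1 + 2j, -0.5 + 0.25j, 3j)))  # True
```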
|