Column      Type     Values
id          int32    0 – 100k
text        string   lengths 21 – 3.54k
source      string   lengths 1 – 124
similarity  float32  0.78 – 0.88
1,000
Conditioning (probability) Conditional expectation Conditional probability distribution Regular conditional probability Disintegration theorem Bayes' theorem de Finetti's theorem Exchangeable random variables Rule of succession Conditional independence Conditional event algebra Goodman–Nguyen–van Fraassen algebra
List of probability topics
0.841859
1,001
This is a list of probability topics. It overlaps with the (alphabetical) list of statistical topics. There are also the outline of probability and catalog of articles in probability theory. For distributions, see List of probability distributions. For journals, see list of probability journals. For contributors to the field, see list of mathematical probabilists and list of statisticians.
List of probability topics
0.841859
1,002
Discrete random variable Probability mass function Constant random variable Expected value Jensen's inequality Variance Standard deviation Geometric standard deviation Multivariate random variable Joint probability distribution Marginal distribution Kirkwood approximation Independent identically-distributed random variables Independent and identically-distributed random variables Statistical independence Conditional independence Pairwise independence Covariance Covariance matrix De Finetti's theorem Correlation Uncorrelated Correlation function Canonical correlation Convergence of random variables Weak convergence of measures Helly–Bray theorem Slutsky's theorem Skorokhod's representation theorem Lévy's continuity theorem Uniform integrability Markov's inequality Chebyshev's inequality = Chernoff bound Chernoff's inequality Bernstein inequalities (probability theory) Hoeffding's inequality Kolmogorov's inequality Etemadi's inequality Chung–Erdős inequality Khintchine inequality Paley–Zygmund inequality Laws of large numbers Asymptotic equipartition property Typical set Law of large numbers Kolmogorov's two-series theorem Random field Conditional random field Borel–Cantelli lemma Wick product
List of probability topics
0.841859
1,003
Central limit theorem Illustration of the central limit theorem Concrete illustration of the central limit theorem Berry–Esséen theorem Berry–Esséen theorem De Moivre–Laplace theorem Lyapunov's central limit theorem Martingale central limit theorem Infinite divisibility (probability) Method of moments (probability theory) Stability (probability) Stein's lemma Characteristic function (probability theory) Lévy continuity theorem Darmois–Skitovich theorem Edgeworth series Helly–Bray theorem Kac–Bernstein theorem Location parameter Maxwell's theorem Moment-generating function Factorial moment generating function Negative probability Probability-generating function Vysochanskiï–Petunin inequality Mutual information Kullback–Leibler divergence Normally distributed and uncorrelated does not imply independent Le Cam's theorem Large deviations theory Contraction principle (large deviations theory) Varadhan's lemma Tilted large deviation principle Rate function Laplace principle (large deviations theory) Exponentially equivalent measures Cramér's theorem (second part)
List of probability topics
0.841859
1,004
Buffon's needle Integral geometry Hadwiger's theorem Wendel's theorem
List of probability topics
0.841859
1,005
Adapted process Basic affine jump diffusion Bernoulli process Bernoulli scheme Branching process Point process Chapman–Kolmogorov equation Chinese restaurant process Coupling (probability) Ergodic theory Maximal ergodic theorem Ergodic (adjective) Galton–Watson process Gauss–Markov process Gaussian process Gaussian random field Gaussian isoperimetric inequality Large deviations of Gaussian random functions Girsanov's theorem Hawkes process Increasing process Itô's lemma Jump diffusion Law of the iterated logarithm Lévy flight Lévy process Loop-erased random walk Markov chain Examples of Markov chains Detailed balance Markov property Hidden Markov model Maximum-entropy Markov model Markov chain mixing time Markov partition Markov process Continuous-time Markov process Piecewise-deterministic Markov process Martingale Doob martingale Optional stopping theorem Martingale representation theorem Azuma's inequality Wald's equation Poisson process Poisson random measure Population process Process with independent increments Progressively measurable process Queueing theory Erlang unit Random walk Random walk Monte Carlo Renewal theory Skorokhod's embedding theorem Stationary process Stochastic calculus Itô calculus Malliavin calculus Stratonovich integral Time series analysis Autoregressive model Moving average model Autoregressive moving average model Autoregressive integrated moving average model Anomaly time series Voter model Wiener process Brownian motion Geometric Brownian motion Donsker's theorem Empirical process Wiener equation Wiener sausage
List of probability topics
0.841859
1,006
Punnett square Hardy–Weinberg principle Ewens's sampling formula Population genetics
List of probability topics
0.841859
1,007
Probability theory Probability space Sample space Standard probability space Random element Random compact set Dynkin system Probability axioms Normalizing constant Event (probability theory) Complementary event Elementary event Mutually exclusive Boole's inequality Probability density function Cumulative distribution function Law of total cumulance Law of total expectation Law of total probability Law of total variance Almost surely Cox's theorem Bayesianism Prior probability Posterior probability Borel's paradox Bertrand's paradox Coherence (philosophical gambling strategy) Dutch book Algebra of random variables Belief propagation Transferable belief model Dempster–Shafer theory Possibility theory
List of probability topics
0.841859
1,008
Probability distribution Probability distribution function Probability density function Probability mass function Cumulative distribution function Quantile Moment (mathematics) Moment about the mean Standardized moment Skewness Kurtosis Locality Cumulant Factorial moment Expected value Law of the unconscious statistician Second moment method Variance Coefficient of variation Variance-to-mean ratio Covariance function An inequality on location and scale parameters Taylor expansions for the moments of functions of random variables Moment problem Hamburger moment problem Carleman's condition Hausdorff moment problem Trigonometric moment problem Stieltjes moment problem Prior probability distribution Total variation distance Hellinger distance Wasserstein metric Lévy–Prokhorov metric Lévy metric Continuity correction Heavy-tailed distribution Truncated distribution Infinite divisibility Stability (probability) Indecomposable distribution Power law Anderson's theorem Probability bounds analysis Probability box
List of probability topics
0.841859
1,009
Probability Randomness, Pseudorandomness, Quasirandomness Randomization, hardware random number generator Random number generation Random sequence Uncertainty Statistical dispersion Observational error Equiprobable Equipossible Average Probability interpretations Markovian Statistical regularity Central tendency Bean machine Relative frequency Frequency probability Maximum likelihood Bayesian probability Principle of indifference Credal set Cox's theorem Principle of maximum entropy Information entropy Urn problems Extractor Free probability Exotic probability Schrödinger method Empirical measure Glivenko–Cantelli theorem Zero–one law Kolmogorov's zero–one law Hewitt–Savage zero–one law Law of truly large numbers Littlewood's law Infinite monkey theorem Littlewood–Offord problem Inclusion–exclusion principle Impossible event Information geometry Talagrand's concentration inequality
List of probability topics
0.841859
1,010
Magnetic charges have not been seen in laboratory experiments, but would be present for theories including magnetic monopoles. In supersymmetry: The supercharge refers to the generator that rotates the fermions into bosons, and vice versa, in the supersymmetry. In conformal field theory: The central charge of the Virasoro algebra, sometimes referred to as the conformal central charge or the conformal anomaly. Here, the term 'central' is used in the sense of the center in group theory: it is an operator that commutes with all the other operators in the algebra. The central charge is the eigenvalue of the central generator of the algebra; here, it is the energy–momentum tensor of the two-dimensional conformal field theory. In gravitation: Eigenvalues of the energy–momentum tensor correspond to physical mass.
Charge (physics)
0.841802
1,011
Various charge quantum numbers have been introduced by theories of particle physics. These include the charges of the Standard Model: The color charge of quarks. The color charge generates the SU(3) color symmetry of quantum chromodynamics. The weak isospin quantum numbers of the electroweak interaction.
Charge (physics)
0.841802
1,012
The electric charge for electromagnetic interactions. In mathematics texts, this is sometimes referred to as the $u_1$-charge of a Lie algebra module. Note that these charge quantum numbers appear in the Lagrangian via the gauge covariant derivative of the Standard Model. Charges of approximate symmetries: The strong isospin charges.
Charge (physics)
0.841802
1,013
So, for example, when the symmetry group is a Lie group, then the charge operators correspond to the simple roots of the root system of the Lie algebra; the discreteness of the root system accounting for the quantization of the charge. The simple roots are used, as all the other roots can be obtained as linear combinations of these. The general roots are often called raising and lowering operators, or ladder operators. The charge quantum numbers then correspond to the weights of the highest-weight modules of a given representation of the Lie algebra. So, for example, when a particle in a quantum field theory belongs to a symmetry, then it transforms according to a particular representation of that symmetry; the charge quantum number is then the weight of the representation.
Charge (physics)
0.841802
1,014
Abstractly, a charge is any generator of a continuous symmetry of the physical system under study. When a physical system has a symmetry of some sort, Noether's theorem implies the existence of a conserved current. The thing that "flows" in the current is the "charge", and the charge is the generator of the (local) symmetry group. This charge is sometimes called the Noether charge.
Charge (physics)
0.841802
1,015
In physics, a charge is any of many different quantities, such as the electric charge in electromagnetism or the color charge in quantum chromodynamics. Charges correspond to the time-invariant generators of a symmetry group, and specifically, to the generators that commute with the Hamiltonian. Charges are often denoted by the letter Q, and so the invariance of the charge corresponds to the vanishing commutator $[Q, H] = 0$, where H is the Hamiltonian. Thus, charges are associated with conserved quantum numbers; these are the eigenvalues q of the generator Q.
Charge (physics)
0.841802
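As a brief clarifying aside (not part of the source record): for an operator Q with no explicit time dependence, the standard Heisenberg-picture equation of motion ties the vanishing commutator directly to conservation,
$$\frac{dQ}{dt} = \frac{i}{\hbar}[H, Q] = 0,$$
so the eigenvalues q of Q are conserved quantum numbers.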
1,016
The decomposition of such products of representations into direct sums of irreducible representations can in general be written as $\Lambda \otimes \Lambda' = \bigoplus_i \mathcal{L}_i \Lambda_i$ for representations $\Lambda$. The dimensions of the representations obey the "dimension sum rule": $d_\Lambda \cdot d_{\Lambda'} = \sum_i \mathcal{L}_i d_{\Lambda_i}$. Here, $d_\Lambda$ is the dimension of the representation $\Lambda$, and the integers $\mathcal{L}_i$ are the Littlewood–Richardson coefficients. The decomposition of the representations is again given by the Clebsch–Gordan coefficients, this time in the general Lie-algebra setting.
Charge (physics)
0.841802
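A quick worked check of the dimension sum rule quoted above, using the sl(2,C) decomposition $2 \otimes \overline{2} = 3 \oplus 1$ that appears in a nearby record (both Littlewood–Richardson coefficients equal 1 here):
$$d_2 \cdot d_{\overline{2}} = 2 \cdot 2 = 4 = 3 + 1 = \mathcal{L}_3\, d_3 + \mathcal{L}_1\, d_1 .$$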
1,017
$2 \otimes \overline{2} = 3 \oplus 1.$ That is, the product of two (Lorentz) spinors is a (Lorentz) vector and a (Lorentz) scalar. Note that the complex Lie algebra sl(2,C) has a compact real form su(2) (in fact, all Lie algebras have a unique compact real form). The same decomposition holds for the compact form as well: the product of two spinors in su(2) being a vector in the rotation group O(3) and a singlet.
Charge (physics)
0.841802
1,018
Nucleic acid secondary structure is the set of base-pairing interactions within a single nucleic acid polymer or between two polymers. It can be represented as a list of bases which are paired in a nucleic acid molecule. The secondary structures of biological DNAs and RNAs tend to be different: biological DNA mostly exists as fully base-paired double helices, while biological RNA is single stranded and often forms complex and intricate base-pairing interactions due to its increased ability to form hydrogen bonds stemming from the extra hydroxyl group in the ribose sugar. In a non-biological context, secondary structure is a vital consideration in the nucleic acid design of nucleic acid structures for DNA nanotechnology and DNA computing, since the pattern of base pairing ultimately determines the overall structure of the molecules.
Nucleic acid secondary structure
0.841791
1,019
In molecular biology, two nucleotides on opposite complementary DNA or RNA strands that are connected via hydrogen bonds are called a base pair (often abbreviated bp). In the canonical Watson-Crick base pairing, adenine (A) forms a base pair with thymine (T) and guanine (G) forms one with cytosine (C) in DNA. In RNA, thymine is replaced by uracil (U). Alternate hydrogen bonding patterns, such as the wobble base pair and Hoogsteen base pair, also occur—particularly in RNA—giving rise to complex and functional tertiary structures.
Nucleic acid secondary structure
0.841791
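A minimal Python sketch of the canonical Watson–Crick pairing described in the record above (an editorial illustration; only A–T/A–U and G–C are modelled, not wobble or Hoogsteen pairs):

# Canonical complement tables for DNA and RNA (wobble/Hoogsteen pairing not modelled).
DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str, rna: bool = False) -> str:
    """Return the reverse complement of a sequence under canonical base pairing."""
    table = RNA_COMPLEMENT if rna else DNA_COMPLEMENT
    return "".join(table[base] for base in reversed(seq.upper()))

print(reverse_complement("GATTACA"))            # DNA: TGTAATC
print(reverse_complement("GAUUACA", rna=True))  # RNA: UGUAAUC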
1,020
Scores on this exam are sometimes required for entrance to chemistry Ph.D. programs in the United States. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 940 (corresponding to the 99th percentile) and 460 (the 1st percentile) respectively.
GRE Chemistry Test
0.841765
1,021
The GRE subject test in chemistry is a standardized test in the United States created by the Educational Testing Service, and is designed to assess a candidate's potential for graduate or post-graduate study in the field of chemistry. It contains questions from many fields of chemistry: 15% of the questions come from analytical chemistry, 25% from inorganic chemistry, 30% from organic chemistry and 30% from physical chemistry. This exam, like all the GRE subject tests, is paper-based, as opposed to the GRE general test, which is usually computer-based. It contains 130 questions, which are to be answered within 2 hours and 50 minutes.
GRE Chemistry Test
0.841765
1,022
1452–1519: Leonardo da Vinci made many contributions. 1638: Galileo Galilei published the book "Two New Sciences", in which he examined the failure of simple structures. 1660: Hooke's law by Robert Hooke. 1687: Isaac Newton published "Philosophiae Naturalis Principia Mathematica", which contains Newton's laws of motion. 1750: Euler–Bernoulli beam equation. 1700–1782: Daniel Bernoulli introduced the principle of virtual work. 1707–1783: Leonhard Euler developed the theory of buckling of columns. 1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures. 1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as a partial derivative of the strain energy; this theorem includes the method of least work as a special case. 1874: Otto Mohr formalized the idea of a statically indeterminate structure. 1922: Timoshenko corrects the Euler–Bernoulli beam equation. 1936: Hardy Cross' publication of the moment distribution method, an important innovation in the design of continuous frames. 1941: Alexander Hrennikoff solved the discretization of plane elasticity problems using a lattice framework. 1942: R. Courant divided a domain into finite subregions. 1956: J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp's paper on the "Stiffness and Deflection of Complex Structures" introduces the name "finite-element method" and is widely recognized as the first comprehensive treatment of the method as it is known today.
Theory of elasticity
0.841744
1,023
Solid mechanics (also known as mechanics of solids) is the branch of continuum mechanics that studies the behavior of solid materials, especially their motion and deformation under the action of forces, temperature changes, phase changes, and other external or internal agents. Solid mechanics is fundamental for civil, aerospace, nuclear, biomedical and mechanical engineering, for geology, and for many branches of physics and chemistry such as materials science. It has specific applications in many other areas, such as understanding the anatomy of living beings, and the design of dental prostheses and surgical implants.
Theory of elasticity
0.841744
1,024
The word problem on free lattices and more generally free bounded lattices has a decidable solution. Bounded lattices are algebraic structures with the two binary operations ∨ and ∧ and the two constants (nullary operations) 0 and 1. The set of all well-formed expressions that can be formulated using these operations on elements from a given set of generators X will be called W(X).
Word problem (mathematics)
0.841725
1,025
The most direct solution to a word problem takes the form of a normal form theorem and algorithm which maps every element in an equivalence class of expressions to a single encoding known as the normal form; the word problem is then solved by comparing these normal forms via syntactic equality. For example, one might decide that $x \cdot y \cdot z^{-1}$ is the normal form of $(x \cdot y)/z$, $(x/z) \cdot y$, and $(y/z) \cdot x$, and devise a transformation system to rewrite those expressions to that form, in the process proving that all equivalent expressions will be rewritten to the same normal form. But not all solutions to the word problem use a normal form theorem; there are algebraic properties which indirectly imply the existence of an algorithm. While the word problem asks whether two terms containing constants are equal, a proper extension of the word problem known as the unification problem asks whether two terms $t_1, t_2$ containing variables have instances that are equal, or in other words whether the equation $t_1 = t_2$ has any solutions.
Word problem (mathematics)
0.841725
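A small Python sketch of the normal-form idea described above, under the simplifying assumption of a commutative group on formal generators (which is what makes $(y/z)\cdot x$ equivalent to $x\cdot y\cdot z^{-1}$ in the quoted example): rewrite division as multiplication by an inverse, collect exponents, and compare the results syntactically.

from collections import Counter

def exponents(expr, sign=1, acc=None):
    """Accumulate generator exponents of an expression tree.

    Trees are nested tuples ('mul', l, r), ('div', l, r), ('inv', e),
    or a generator name such as 'x'.
    """
    if acc is None:
        acc = Counter()
    if isinstance(expr, str):                # a generator
        acc[expr] += sign
    else:
        op = expr[0]
        if op == 'mul':
            exponents(expr[1], sign, acc)
            exponents(expr[2], sign, acc)
        elif op == 'div':                    # a / b  ==  a * b^(-1)
            exponents(expr[1], sign, acc)
            exponents(expr[2], -sign, acc)
        elif op == 'inv':
            exponents(expr[1], -sign, acc)
    return acc

def normal_form(expr):
    """Canonical encoding: sorted (generator, nonzero exponent) pairs."""
    return tuple(sorted((g, e) for g, e in exponents(expr).items() if e != 0))

e1 = ('div', ('mul', 'x', 'y'), 'z')         # (x*y)/z
e2 = ('mul', ('div', 'x', 'z'), 'y')         # (x/z)*y
e3 = ('mul', ('div', 'y', 'z'), 'x')         # (y/z)*x
print(normal_form(e1) == normal_form(e2) == normal_form(e3))   # True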
1,026
In computer algebra one often wishes to encode mathematical expressions using an expression tree. But there are often multiple equivalent expression trees. The question naturally arises of whether there is an algorithm which, given as input two expressions, decides whether they represent the same element.
Word problem (mathematics)
0.841725
1,027
One of the most deeply studied cases of the word problem is in the theory of semigroups and groups. A timeline of papers relevant to the Novikov–Boone theorem is as follows: 1910: Axel Thue poses a general problem of term rewriting on tree-like structures. He states "A solution of this problem in the most general case may perhaps be connected with unsurmountable difficulties". 1911: Max Dehn poses the word problem for finitely presented groups.
Word problem (mathematics)
0.841725
1,028
One of the earliest proofs that a word problem is undecidable was for combinatory logic: when are two strings of combinators equivalent? Because combinators encode all possible Turing machines, and the equivalence of two Turing machines is undecidable, it follows that the equivalence of two strings of combinators is undecidable. Alonzo Church observed this in 1936. Likewise, one has essentially the same problem in (untyped) lambda calculus: given two distinct lambda expressions, there is no algorithm which can discern whether they are equivalent or not; equivalence is undecidable. For several typed variants of the lambda calculus, equivalence is decidable by comparison of normal forms.
Word problem (mathematics)
0.841725
1,029
1958–1959: Boone publishes a simplified version of his construction. 1961: Graham Higman characterises the subgroups of finitely presented groups with Higman's embedding theorem, connecting recursion theory with group theory in an unexpected way and giving a very different proof of the unsolvability of the word problem. 1961–1963: Britton presents a greatly simplified version of Boone's 1959 proof that the word problem for groups is unsolvable.
Word problem (mathematics)
0.841725
1,030
In universal algebra one studies algebraic structures consisting of a generating set A, a collection of operations on A of finite arity, and a finite set of identities that these operations must satisfy. The word problem for an algebra is then to determine, given two expressions (words) involving the generators and operations, whether they represent the same element of the algebra modulo the identities. The word problems for groups and semigroups can be phrased as word problems for algebras. The word problem on free Heyting algebras is difficult. The only known results are that the free Heyting algebra on one generator is infinite, and that the free complete Heyting algebra on one generator exists (and has one more element than the free Heyting algebra).
Word problem (mathematics)
0.841725
1,031
For example, $x^5 - 3x + 1 = 0$ is a univariate algebraic (polynomial) equation with integer coefficients and $y^4 + \frac{xy}{2} = \frac{x^3}{3} - xy^2 + y^2 - \frac{1}{7}$ is a multivariate polynomial equation over the rational numbers. Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., they can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates. A large amount of research has been devoted to efficiently computing accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
Equation
0.841651
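A short numerical illustration of the point above (an editorial sketch, not from the source): the quintic $x^5 - 3x + 1 = 0$ has no general solution in radicals, but its roots are easily approximated, here with NumPy's companion-matrix root finder.

import numpy as np

# Coefficients of x^5 + 0x^4 + 0x^3 + 0x^2 - 3x + 1, highest degree first.
coeffs = [1, 0, 0, 0, -3, 1]
roots = np.roots(coeffs)
for r in roots:
    # The residual p(r) should be numerically close to zero for each root.
    print(r, "residual:", np.polyval(coeffs, r))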
1,032
In general, an algebraic equation or polynomial equation is an equation of the form P = 0 {\displaystyle P=0} , or P = Q {\displaystyle P=Q} where P and Q are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate (multiple variables, x, y, z, etc.).
Equation
0.841651
1,033
The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians. Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra.
Equation
0.841651
1,034
In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form $ax + by + cz + d = 0$, where $a, b, c$ and $d$ are real numbers and $x, y, z$ are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values $a, b, c$ are the coordinates of a vector perpendicular to the plane defined by the equation.
Equation
0.841651
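A minimal NumPy sketch of the correspondence described above, using three hypothetical points to build the plane equation $ax + by + cz + d = 0$ and confirming that $(a, b, c)$ is a normal vector:

import numpy as np

# Three (hypothetical) points that determine a plane.
p0, p1, p2 = np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])
n = np.cross(p1 - p0, p2 - p0)       # (a, b, c): perpendicular to the plane
d = -np.dot(n, p0)                   # offset chosen so that p0 satisfies the equation
a, b, c = n
print(a, b, c, d)                    # 1.0 1.0 1.0 -1.0  ->  x + y + z - 1 = 0
print(np.isclose(np.dot(n, p2) + d, 0.0))   # p2 also lies on the plane: True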
1,035
For functions of one variable, such an equation differs from a differential equation primarily through a change of variable substituting the function by its derivative; however, this is not the case when the integral is taken over an open surface. An integro-differential equation is a functional equation involving both the derivatives and the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from integral and differential equations through a similar change of variable. A functional differential equation or delay differential equation is a functional equation involving derivatives of the unknown functions, evaluated at multiple points, such as $f'(x) = f(x-2)$. A difference equation is an equation where the unknown is a function f that occurs in the equation through f(x), f(x−1), ..., f(x−k), for some integer k called the order of the equation. If x is restricted to be an integer, a difference equation is the same as a recurrence relation. A stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process.
Equation
0.841651
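A brief Python sketch (an editorial illustration with an assumed constant history, not something stated in the record) integrating the quoted delay differential equation $f'(x) = f(x-2)$ with a simple Euler scheme; the delayed argument is the only difference from an ordinary differential equation.

import numpy as np

def solve_delay(x_max=6.0, h=0.001, delay=2.0):
    """Euler integration of f'(x) = f(x - delay), with assumed history f(x) = 1 for x <= 0."""
    n = int(x_max / h)
    lag = int(delay / h)
    f = np.ones(n + 1)                       # f[i] approximates f(i * h); f(0) = 1
    for i in range(n):
        # Value of f at x - delay; fall back to the assumed history while x - delay < 0.
        delayed = f[i - lag] if i >= lag else 1.0
        f[i + 1] = f[i] + h * delayed
    return f

f = solve_delay()
print(f[-1])   # approximate value of f(6)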
1,036
Equations can be classified according to the types of operations and quantities involved. Important types include: An algebraic equation or polynomial equation is an equation in which both sides are polynomials (see also system of polynomial equations); these are further classified by degree: linear equation for degree one, quadratic equation for degree two, cubic equation for degree three, quartic equation for degree four, quintic equation for degree five, sextic equation for degree six, septic equation for degree seven, octic equation for degree eight. A Diophantine equation is an equation where the unknowns are required to be integers. A transcendental equation is an equation involving a transcendental function of its unknowns. A parametric equation is an equation in which the solutions for the variables are expressed as functions of some other variables, called parameters, appearing in the equations. A functional equation is an equation in which the unknowns are functions rather than simple quantities. Equations involving derivatives, integrals and finite differences: A differential equation is a functional equation involving derivatives of the unknown functions, where the function and its derivatives are evaluated at the same point, such as $f'(x) = x^2$. Differential equations are subdivided into ordinary differential equations for functions of a single variable and partial differential equations for functions of multiple variables. An integral equation is a functional equation involving the antiderivatives of the unknown functions.
Equation
0.841651
1,037
These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.
Equation
0.841651
1,038
A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model. PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics.
Equation
0.841651
1,039
In this operation, sometimes called rotate no carry, the bits are "rotated" as if the left and right ends of the register were joined. The value that is shifted into the right during a left-shift is whatever value was shifted out on the left, and vice versa for a right-shift operation. This is useful if it is necessary to retain all the existing bits, and is frequently used in digital cryptography.
Binary and
0.841636
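A minimal Python sketch of the rotate-no-carry operation described above, assuming a hypothetical 8-bit register width (Python integers are unbounded, so the width has to be imposed with a mask):

WIDTH = 8                     # assumed register width in bits
MASK = (1 << WIDTH) - 1       # keeps results within the register

def rotl(value: int, count: int) -> int:
    """Rotate left: bits shifted out on the left re-enter on the right."""
    count %= WIDTH
    return ((value << count) | (value >> (WIDTH - count))) & MASK

def rotr(value: int, count: int) -> int:
    """Rotate right: bits shifted out on the right re-enter on the left."""
    count %= WIDTH
    return ((value >> count) | (value << (WIDTH - count))) & MASK

x = 0b10110001
print(format(rotl(x, 3), "08b"))            # 10001101
print(format(rotr(rotl(x, 3), 3), "08b"))   # 10110001: rotating back restores the value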
1,040
The molecule approaches this capture region aided by Brownian motion and any attraction it might have to the surface of the membrane. Once inside the nanopore, the molecule translocates through via a combination of electrophoretic, electro-osmotic and sometimes thermophoretic forces. Inside the pore the molecule occupies a volume that partially restricts the flow of ions, observed as an ionic current drop. Based on various factors such as geometry, size and chemical composition, the change in magnitude of the ionic current and the duration of the translocation will vary. Different molecules can then be sensed and potentially identified based on this modulation in ionic current.
Nanopore sequencing
0.841628
1,041
A mathematical chess problem is a mathematical problem which is formulated using a chessboard and chess pieces. These problems belong to recreational mathematics. The most well-known problems of this kind are the eight queens puzzle and the knight's tour problem, which have connections to graph theory and combinatorics. Many famous mathematicians have studied mathematical chess problems, such as Thabit, Euler, Legendre and Gauss. Besides finding a solution to a particular problem, mathematicians are usually interested in counting the total number of possible solutions, finding solutions with certain properties, as well as generalization of the problems to N×N or M×N boards.
Mathematical chess problem
0.841584
1,042
For rooks, eight are required; the solution is to place them all on one file or rank. The solutions for other pieces are given below. Domination by queens on the main diagonal of a chessboard of any size can be shown equivalent to a problem in number theory of finding a Salem–Spencer set, a set of numbers in which none of the numbers is the average of two others. The optimal placement of queens is obtained by leaving vacant a set of squares that all have the same parity (all are in even positions or all in odd positions along the diagonal) and that form a Salem–Spencer set.
Mathematical chess problem
0.841584
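A small Python sketch of the Salem–Spencer condition mentioned above (an editorial illustration): check that no element of a set is the average of two other elements.

def is_salem_spencer(nums):
    """Return True if no element is the exact average of two other distinct elements."""
    s = set(nums)
    vals = sorted(s)
    for i, a in enumerate(vals):
        for b in vals[i + 1:]:
            total = a + b
            # Since a < b, (a + b) / 2 lies strictly between them, so a hit means a third element.
            if total % 2 == 0 and total // 2 in s:
                return False
    return True

print(is_salem_spencer([1, 2, 4, 5]))   # True
print(is_salem_spencer([1, 2, 3]))      # False: 2 is the average of 1 and 3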
1,043
Although enzymes can consist of hundreds of amino acids, it is usually only a small fraction of the residues that come in contact with the substrate, and an even smaller fraction—three to four residues on average—that are directly involved in catalysis. The region of the enzyme that binds the substrate and contains the catalytic residues is known as the active site. Dirigent proteins are members of a class of proteins that dictate the stereochemistry of a compound synthesized by other enzymes.
Protein
0.841574
1,044
Because DNA nanoballs remain confined to their spots on the patterned array, there are no optical duplicates to contend with during bioinformatics analysis of sequencing reads. It is suggested to run Picard MarkDuplicates as follows: java -jar picard.jar MarkDuplicates I=input.bam O=marked_duplicates.bam M=marked_dup_metrics.txt READ_NAME_REGEX=null A test with Picard-friendly, reformatted read names demonstrates the absence of this class of duplicate read: the single read marked as an optical duplicate is most assuredly artefactual. In any case, the effect on the estimated library size is negligible.
DNA nanoball sequencing
0.841562
1,045
Massively parallel next generation sequencing platforms like DNA nanoball sequencing may contribute to the diagnosis and treatment of many genetic diseases. The cost of sequencing an entire human genome has fallen from about one million dollars in 2008, to $4400 in 2010 with the DNA nanoball technology. By sequencing the entire genomes of patients with heritable diseases or cancer, mutations associated with these diseases have been identified, opening up strategies such as targeted therapeutics for at-risk people and genetic counseling. As the price of sequencing an entire human genome approaches the $1000 mark, genomic sequencing of every individual may become feasible as part of normal preventative medicine.
DNA nanoball sequencing
0.841562
1,046
DNA nanoball sequencing has been used in recent studies. Lee et al. used this technology to find mutations that were present in a lung cancer and compared them to normal lung tissue. They were able to identify over 50,000 single nucleotide variants. Roach et al. used DNA nanoball sequencing to sequence the genomes of a family of four relatives and were able to identify SNPs that may be responsible for a Mendelian disorder, and were able to estimate the inter-generation mutation rate. The Institute for Systems Biology has used this technology to sequence 615 complete human genome samples as part of a survey studying neurodegenerative diseases, and the National Cancer Institute is using DNA nanoball sequencing to sequence 50 tumours and matched normal tissues from pediatric cancers.
DNA nanoball sequencing
0.841562
1,047
The nanoballs are then adsorbed onto a sequencing flow cell. The color of the fluorescence at each interrogated position is recorded through a high-resolution camera. Bioinformatics are used to analyze the fluorescence data and make a base call, and for mapping or quantifying the 50bp, 100bp, or 150bp single- or paired-end reads.
DNA nanoball sequencing
0.841562
1,048
Ry receptors are homotetrameric complexes with each subunit exhibiting a molecular size of over 500,000 daltons (about 5,000 amino acyl residues). They possess C-terminal domains with six putative transmembrane α-helical spanners (TMSs). Putative pore-forming sequences occur between the fifth and sixth TMSs as suggested for members of the VIC family. Recently an 8 TMS topology with four hairpin loops has been suggested.
Ryanodine-Inositol 1,4,5-triphosphate receptor calcium channels
0.841549
1,049
The Protein Information Resource (PIR), located at Georgetown University Medical Center, is an integrated public bioinformatics resource to support genomic and proteomic research and scientific studies. It contains protein sequence databases.
Protein ontology
0.841547
1,050
PIR was established in 1984 by the National Biomedical Research Foundation as a resource to assist researchers and customers in the identification and interpretation of protein sequence information. Prior to that, the foundation compiled the first comprehensive collection of macromolecular sequences in the Atlas of Protein Sequence and Structure, published from 1964 to 1974 under the editorship of Margaret Dayhoff. Dayhoff and her research group pioneered the development of computer methods for the comparison of protein sequences, for the detection of distantly related sequences and duplications within sequences, and for the inference of evolutionary histories from alignments of protein sequences. Winona Barker and Robert Ledley assumed leadership of the project after the death of Dayhoff in 1983. In 1999, Cathy H. Wu joined the National Biomedical Research Foundation, and later on Georgetown University Medical Center, to head the bioinformatics efforts of PIR, and has served first as Principal Investigator and, since 2001, as Director. For four decades, PIR has provided many protein databases and analysis tools freely accessible to the scientific community, including the Protein Sequence Database, the first international database (see PIR-International), which grew out of the Atlas of Protein Sequence and Structure. In 2002, PIR – along with its international partners, the European Bioinformatics Institute and the Swiss Institute of Bioinformatics – was awarded a grant from NIH to create UniProt, a single worldwide database of protein sequence and function, by unifying the Protein Information Resource-Protein Sequence Database, Swiss-Prot, and TrEMBL databases.
Protein ontology
0.841547
1,051
In physics and electrical engineering, the universal dielectric response, or UDR, refers to the observed emergent behaviour of the dielectric properties exhibited by diverse solid state systems. In particular this widely observed response involves power law scaling of dielectric properties with frequency under conditions of alternating current, AC. First defined in a landmark article by A. K. Jonscher in Nature published in 1977, the origins of the UDR were attributed to the dominance of many-body interactions in systems, and their analogous RC network equivalence. The universal dielectric response manifests in the variation of AC conductivity with frequency and is most often observed in complex systems consisting of multiple phases of similar or dissimilar materials.
Universal dielectric response
0.841542
1,052
Intuitively, an index $e$ falls into this set if and only if for every $m$ "there is an $s$ such that the Turing machine with index $e$ halts on input $m$ after $s$ steps". A complete proof would show that the property displayed in quotes in the previous sentence is definable in the language of Peano arithmetic by a $\Sigma_1^0$ formula. Every $\Sigma_1^0$ subset of Baire space or Cantor space is an open set in the usual topology on the space.
Arithmetical hierarchy
0.841536
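Written out explicitly (editorial notation, not the excerpt's): with $T(e,m,s)$ abbreviating the decidable statement "machine $e$ halts on input $m$ within $s$ steps", the quoted property is the $\Sigma_1^0$ formula $\exists s\, T(e,m,s)$, and the set described above would be
$$S = \{\, e : \forall m\, \exists s\, T(e, m, s) \,\},$$
whose $\forall\exists$ quantifier prefix places it at the $\Pi_2^0$ level of the hierarchy (here $S$ is just a placeholder name for "this set").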
1,053
The halting problem for a $\Delta_n^{0,Y}$ oracle in fact sits in $\Sigma_{n+1}^{0,Y}$. Post's theorem establishes a close connection between the arithmetical hierarchy of sets of natural numbers and the Turing degrees. In particular, it establishes the following facts for all n ≥ 1: The set $\emptyset^{(n)}$ (the nth Turing jump of the empty set) is many-one complete in $\Sigma_n^0$.
Arithmetical hierarchy
0.841536
1,054
Since the only Hausdorff topology on a finite set is the discrete one, a finite Hausdorff topological group must necessarily be discrete. It follows that every finite subgroup of a Hausdorff group is discrete. A discrete subgroup H of G is cocompact if there is a compact subset K of G such that HK = G. Discrete normal subgroups play an important role in the theory of covering groups and locally isomorphic groups.
Discrete group theory
0.841495
1,055
which is algebraically identical to the formula derived in the previous section. The quantity $T_{\rm E} = \varepsilon/k$ has the dimensions of temperature and is a characteristic property of a crystal. It is known as the Einstein temperature. Hence, the Einstein crystal model predicts that the energy and heat capacities of a crystal are universal functions of the dimensionless ratio $T/T_{\rm E}$. Similarly, the Debye model predicts a universal function of the ratio $T/T_{\rm D}$, where $T_{\rm D}$ is the Debye temperature.
Einstein temperature
0.841488
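For reference (the formula is standard for the Einstein model but is not itself quoted in this excerpt), the heat capacity is exactly such a universal function of $T/T_{\rm E}$:
$$C_V = 3Nk\left(\frac{T_{\rm E}}{T}\right)^{2}\frac{e^{T_{\rm E}/T}}{\left(e^{T_{\rm E}/T}-1\right)^{2}},$$
which approaches the classical Dulong–Petit value $3Nk$ as $T/T_{\rm E}\to\infty$.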
1,056
The Einstein solid is a model of a crystalline solid that contains a large number of independent three-dimensional quantum harmonic oscillators of the same frequency. The independence assumption is relaxed in the Debye model. While the model provides qualitative agreement with experimental data, especially for the high-temperature limit, these oscillations are in fact phonons, or collective modes involving many atoms. Albert Einstein was aware that getting the frequency of the actual oscillations would be difficult, but he nevertheless proposed this theory because it was a particularly clear demonstration that quantum mechanics could solve the specific heat problem in classical mechanics.
Einstein temperature
0.841488
1,057
An epigenetic clock is a biochemical test that can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using more than 6,000 blood samples. The clock uses information from 1000 CpG sites and predicts that people with certain conditions (IBD, frontotemporal dementia, ovarian cancer, obesity) are older than healthy controls. The aging clock was planned to be released for public use in 2021 by an Insilico Medicine spinoff company, Deep Longevity.
Deep Learning
0.841442
1,058
For this purpose Facebook introduced the feature that once a user is automatically recognized in an image, they receive a notification. They can choose whether or not they would like to be publicly labeled on the image, or tell Facebook that it is not them in the picture. This user interface is a mechanism to generate "a constant stream of verification data" to further train the network in real-time. As Mühlhoff argues, involvement of human users to generate training and verification data is so typical for most commercial end-user applications of Deep Learning that such systems may be referred to as "human-aided artificial intelligence".
Deep Learning
0.841442
1,059
Most Deep Learning systems rely on training and verification data that is generated and/or annotated by humans. It has been argued in media philosophy that not only low-paid clickwork (e.g. on Amazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of human microwork that are often not recognized as such. The philosopher Rainer Mühlhoff distinguishes five types of "machinic capture" of human microwork to generate training data: (1) gamification (the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g. CAPTCHAs for image recognition or click-tracking on Google search results pages), (3) exploitation of social motivations (e.g. tagging faces on Facebook to obtain labeled facial images), (4) information mining (e.g. by leveraging quantified-self devices such as activity trackers) and (5) clickwork. Mühlhoff argues that in most commercial end-user applications of Deep Learning such as Facebook's face recognition system, the need for training data does not stop once an ANN is trained. Rather, there is a continued demand for human-generated verification data to constantly calibrate and update the ANN.
Deep Learning
0.841442
1,060
In 2006, Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it with connectionist temporal classification (CTC) in stacks of LSTM RNNs. In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which they made available through Google Voice Search. The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun. Industrial applications of deep learning to large-scale speech recognition started around 2010.
Deep Learning
0.841442
1,061
The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of deep autoencoder on the "raw" spectrogram or linear filter-bank features in the late 1990s, showing its superiority over the Mel-Cepstral features that contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results. Speech recognition was taken over by LSTM. In 2003, LSTM started to become competitive with traditional speech recognizers on certain tasks.
Deep Learning
0.841442
1,062
Funded by the US government's NSA and DARPA, SRI studied deep neural networks in speech and speaker recognition. The speaker recognition team led by Larry Heck reported significant success with deep neural networks in speech processing in the 1998 National Institute of Standards and Technology Speaker Recognition evaluation. The SRI deep neural network was then deployed in the Nuance Verifier, representing the first major industrial application of deep learning.
Deep Learning
0.841442
1,063
Each layer in the feature extraction module extracted features with growing complexity regarding the previous layer. In 1995, Brendan Frey demonstrated that it was possible to train (over two days) a network containing six fully connected layers and several hundred hidden units using the wake-sleep algorithm, co-developed with Peter Dayan and Hinton. Since 1997, Sven Behnke extended the feed-forward hierarchical convolutional approach in the Neural Abstraction Pyramid by lateral and backward connections in order to flexibly incorporate context into decisions and iteratively resolve local ambiguities. Simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) were a popular choice in the 1990s and 2000s, because of artificial neural networks' (ANN) computational cost and a lack of understanding of how the brain wires its biological networks. Both shallow and deep learning (e.g., recurrent nets) of ANNs for speech recognition have been explored for many years.
Deep Learning
0.841442
1,064
In 2015, Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used LSTM principles to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks. Seven months later, Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun won the ImageNet 2015 competition with an open-gated or gateless Highway network variant called the Residual neural network. This has become the most cited neural network of the 21st century. In 1994, André de Carvalho, together with Mike Fairhurst and David Bisset, published experimental results of a multi-layer boolean neural network, also known as a weightless neural network, composed of a 3-layer self-organising feature extraction neural network module (SOFT) followed by a multi-layer classification neural network module (GSN), which were independently trained.
Deep Learning
0.841442
1,065
It combines this with a softmax operator and a projection matrix. Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use it.
Deep Learning
0.841442
1,066
In 1993, a chunker solved a deep learning task whose depth exceeded 1000. In 1992, Jürgen Schmidhuber also published an alternative to RNNs which is now called a linear Transformer or a Transformer with linearized self-attention (save for a normalization operator). It learns internal spotlights of attention: a slow feedforward neural network learns by gradient descent to control the fast weights of another neural network through outer products of self-generated activation patterns FROM and TO (which are now called key and value for self-attention). This fast weight attention mapping is applied to a query pattern.
Deep Learning
0.841442
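A compact NumPy sketch of the fast-weight mechanism described above (an editorial paraphrase of the idea, not the 1992 formulation verbatim): the outer product of each value ("TO") and key ("FROM") pattern is added to a fast weight matrix, which is then applied to the query pattern, i.e. unnormalised linear self-attention.

import numpy as np

rng = np.random.default_rng(0)
d, T = 4, 5                                  # feature size, sequence length
keys    = rng.standard_normal((T, d))        # FROM patterns
values  = rng.standard_normal((T, d))        # TO patterns
queries = rng.standard_normal((T, d))

W = np.zeros((d, d))                         # fast weights, updated at every step
outputs = []
for k, v, q in zip(keys, values, queries):
    W += np.outer(v, k)                      # write: rank-one update from (key, value)
    outputs.append(W @ q)                    # read: apply fast weights to the query
print(np.stack(outputs).shape)               # (5, 4)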
1,067
It uses predictive coding to learn internal representations at multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN, by distilling a higher level chunker network into a lower level automatizer network.
Deep Learning
0.841442
1,068
LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images. In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, Jürgen Schmidhuber (1992) proposed a hierarchy of RNNs pre-trained one level at a time by self-supervised learning.
Deep Learning
0.841442
1,069
The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986, and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons. In 1988, Wei Zhang et al. applied the backpropagation algorithm to a convolutional neural network (a simplified Neocognitron with convolutional interconnections between the image feature layers and the last fully connected layer) for alphabet recognition. They also proposed an implementation of the CNN with an optical computing system. In 1989, Yann LeCun et al. applied backpropagation to a CNN with the purpose of recognizing handwritten ZIP codes on mail.
Deep Learning
0.841442
1,070
If so, how fast? What is it approximating?) Deep learning methods are often looked at as a black box, with most confirmations done empirically, rather than theoretically. Others point out that deep learning should be looked at as a step towards realizing strong AI, not as an all-encompassing solution.
Deep Learning
0.841442
1,071
In particular, GPUs are well-suited for the matrix/vector computations involved in machine learning. GPUs speed up training algorithms by orders of magnitude, reducing running times from weeks to days. Further, specialized hardware and algorithm optimizations can be used for efficient processing of deep learning models.
Deep Learning
0.841442
1,072
Advances in hardware have driven renewed interest in deep learning. In 2009, Nvidia was involved in what was called the "big bang" of deep learning, "as deep-learning neural networks were trained with Nvidia graphics processing units (GPUs)". That year, Andrew Ng determined that GPUs could increase the speed of deep-learning systems by about 100 times.
Deep Learning
0.841442
1,073
Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as a range of large-vocabulary speech recognition tasks, have steadily improved. Convolutional neural networks (CNNs) were superseded for ASR by CTC for LSTM, but are more successful in computer vision.
Deep Learning
0.841442
1,074
Analysis around 2009–2010, contrasting the GMM (and other generative speech models) vs. DNN models, stimulated early industrial investment in deep learning for speech recognition. That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models. In 2010, researchers extended deep learning from TIMIT to large vocabulary speech recognition, by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees. Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR).
Deep Learning
0.841442
1,075
It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than then-state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) and also than more-advanced generative model-based systems. The nature of the recognition errors produced by the two types of systems was characteristically different, offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems.
Deep Learning
0.841442
1,076
In 2006, publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero and Teh showed how a many-layered feedforward neural network could be effectively pre-trained one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then fine-tuning it using supervised backpropagation. The papers referred to learning for deep belief nets. The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that, given more capable hardware and large-scale data sets, deep neural nets (DNN) might become practical.
Deep Learning
0.841442
1,077
LSTM recurrent neural networks can learn "very deep learning" tasks with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. The "vanilla LSTM" with forget gate was introduced in 1999 by Felix Gers, Schmidhuber and Fred Cummins. LSTM has become the most cited neural network of the 20th century.
Deep Learning
0.841442
1,078
It not only tested the neural history compressor, but also identified and analyzed the vanishing gradient problem. Hochreiter proposed recurrent residual connections to solve this problem. This led to the deep learning method called long short-term memory (LSTM), published in 1997.
Deep Learning
0.841442
1,079
Excellent image quality is achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al. Here the GAN generator is grown from small to large scale in a pyramidal fashion. Sepp Hochreiter's diploma thesis (1991) was called "one of the most important documents in the history of machine learning" by his supervisor Schmidhuber.
Deep Learning
0.841442
1,080
Transformers are also increasingly being used in computer vision. In 1991, Jürgen Schmidhuber also published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns.
Deep Learning
0.841442
1,081
In 1969, he also introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for CNNs and deep learning in general. CNNs have become an essential tool for computer vision.
Deep Learning
0.841442
1,082
The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation already in 1960 in the context of control theory. In 1982, Paul Werbos applied backpropagation to MLPs in the way that has become standard. In 1985, David E. Rumelhart et al. published an experimental analysis of the technique.Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers began with the Neocognitron introduced by Kunihiko Fukushima in 1980.
Deep Learning
0.841442
1,083
However, since only the output layer had learning connections, this was not yet deep learning. It was what later was called an extreme learning machine. The first general, working learning algorithm for supervised, deep, feedforward, multilayer perceptrons was published by Alexey Ivakhnenko and Lapa in 1967. A 1971 paper described a deep network with eight layers trained by the group method of data handling. The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari.
Deep Learning
0.841442
1,084
His learning RNN was popularised by John Hopfield in 1982. RNNs have become central for speech recognition and language processing. Charles Tappert writes that Frank Rosenblatt developed and explored all of the basic ingredients of the deep learning systems of today, referring to Rosenblatt's 1962 book which introduced a multilayer perceptron (MLP) with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer.
Deep Learning
0.841442
1,085
There are two types of neural networks: feedforward neural networks (FNNs) and recurrent neural networks (RNNs). RNNs have cycles in their connectivity structure, while FNNs do not. In the 1920s, Wilhelm Lenz and Ernst Ising created and analyzed the Ising model, which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive.
Deep Learning
0.841442
1,086
Deep learning is being successfully applied to financial fraud detection, tax evasion detection, and anti-money laundering.
Deep Learning
0.841442
1,087
Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits or letters or faces. From another angle, deep learning refers to "computer-simulate" or "automate" human learning processes from a source (e.g., an image of dogs) to a learned object (dogs). Therefore, a notion coined as "deeper" learning or "deepest" learning makes sense. The deepest learning refers to the fully automatic learning from a source to a final learned object. A deeper learning thus refers to a mixed learning process: a human learning process from a source to a learned semi-object, followed by a computer learning process from the human-learned semi-object to a final learned object.
Deep Learning
0.841442
1,088
Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, inpainting, and film colorization. These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration", which trains on an image dataset, and Deep Image Prior, which trains on the single image that needs restoration.
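A heavily simplified sketch of the Deep Image Prior idea, fitting a network to one corrupted image and stopping early so that the network's structure acts as the prior, assuming PyTorch; the published method uses a much larger encoder-decoder and careful stopping criteria, and the "image" below is a random placeholder.

import torch
import torch.nn as nn

noisy = torch.rand(1, 1, 64, 64)   # stand-in for the image to restore
z = torch.randn(1, 8, 64, 64)      # fixed random input code

net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(300):            # early stopping is the key trick:
    out = net(z)                   # the net fits structure before noise
    loss = ((out - noisy) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

restored = net(z).detach()         # take the output as the restoration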
Deep Learning
0.841442
1,089
The large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because such architectures are well suited to the matrix and vector computations involved. Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It doesn't require learning rates or randomized initial weights. The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.
Deep Learning
0.841442
1,090
The most powerful A.I. systems, like Watson (...) use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning. In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20–30 layer) neural networks attempting to discern within essentially random data the images on which they were trained demonstrates a visual appeal: the original research notice received well over 1,000 comments, and was the subject of what was for a time the most frequently accessed article on The Guardian's website. While deep learning consists of dozens and even hundreds of layers, that architecture doesn't seem to resemble the structure of the brain. Simulations on shallow networks, which are closer to brain dynamics, indicate performance similar to that of deep learning, with lower complexity.
Deep Learning
0.841442
1,091
Despite the power of deep learning methods, they still lack much of the functionality needed for realizing this goal entirely. Research psychologist Gary Marcus noted: Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (...) have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.
Deep Learning
0.841442
1,092
A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects. Research has explored use of deep learning to predict the biomolecular targets, off-targets, and toxic effects of environmental chemicals in nutrients, household products and drugs. AtomNet is a deep learning system for structure-based rational drug design. AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis. In 2017 graph neural networks were used for the first time to predict various properties of molecules in a large toxicology data set. In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice.
Deep Learning
0.841442
1,093
Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains. The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.
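A minimal sketch of using a neural network to produce latent factors for content-based recommendation, assuming PyTorch: user IDs get embeddings, item content vectors are encoded by a small MLP, and their dot product scores the match. The dimensions and the random interaction data are placeholders, not a real recommender.

import torch
import torch.nn as nn

n_users, content_dim, latent_dim = 100, 20, 16

user_factors = nn.Embedding(n_users, latent_dim)  # collaborative part
item_encoder = nn.Sequential(                     # content-based part
    nn.Linear(content_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim)
)
opt = torch.optim.Adam(
    list(user_factors.parameters()) + list(item_encoder.parameters()), lr=1e-3
)

# Placeholder interactions: (user id, item content vector, liked-or-not).
users = torch.randint(0, n_users, (256,))
items = torch.randn(256, content_dim)
liked = torch.randint(0, 2, (256,)).float()

for _ in range(200):
    score = (user_factors(users) * item_encoder(items)).sum(dim=1)
    loss = nn.functional.binary_cross_entropy_with_logits(score, liked)
    opt.zero_grad(); loss.backward(); opt.step()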
Deep Learning
0.841442
1,094
Physics-informed neural networks (PINNs) have been used to solve partial differential equations in both forward and inverse problems in a data-driven manner. One example is reconstructing fluid flow governed by the Navier–Stokes equations. Using physics-informed neural networks does not require the often expensive mesh generation that conventional CFD methods rely on.
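Reconstructing Navier–Stokes flow is too large to show here, but the same physics-informed idea can be sketched on a 1-D Poisson problem, u''(x) = -pi^2 sin(pi x) with u(0) = u(1) = 0 (exact solution sin(pi x)): the network is trained so that its automatic-differentiation derivatives satisfy the equation at random collocation points, with no mesh involved. This is an illustrative sketch in PyTorch, not the published PINN code.

import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.rand(128, 1, requires_grad=True)   # random collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + math.pi ** 2 * torch.sin(math.pi * x)  # PDE residual
    bc = net(torch.tensor([[0.0], [1.0]]))                  # boundary values
    loss = (residual ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(net(torch.tensor([[0.5]])).item())  # should approach sin(pi/2) = 1.0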
Deep Learning
0.841442
1,095
This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. The error rates listed below, including these early results and measured as percent phone error rates (PER), have been summarized since 1991. The debut of DNNs for speaker recognition in the late 1990s and speech recognition around 2009–2011, and of LSTM around 2003–2007, accelerated progress in eight major areas: (1) scale-up/out and accelerated DNN training and decoding; (2) sequence-discriminative training; (3) feature processing by deep models with solid understanding of the underlying mechanisms; (4) adaptation of DNNs and related deep models; (5) multi-task and transfer learning by DNNs and related deep models; (6) CNNs and how to design them to best exploit domain knowledge of speech; (7) RNNs and their rich LSTM variants; and (8) other types of deep models, including tensor-based models and integrated deep generative/discriminative models. All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products, etc.) are based on deep learning.
Deep Learning
0.841442
1,096
Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates is competitive with traditional speech recognizers on certain tasks. The initial success in speech recognition was based on small-scale recognition tasks based on TIMIT.
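A minimal illustration of an LSTM (with its forget gates, as provided by torch.nn.LSTM) consuming a long sequence of acoustic-style frames at roughly 10 ms per step; the dimensions and random data are placeholders, not a real recognizer.

import torch
import torch.nn as nn

# ~3 seconds of audio at ~10 ms per frame -> 300 time steps of 40-dim features.
frames = torch.randn(1, 300, 40)  # (batch, time, features)

lstm = nn.LSTM(input_size=40, hidden_size=128, batch_first=True)
classifier = nn.Linear(128, 50)   # e.g. 50 phone classes per frame

outputs, _ = lstm(frames)         # hidden state carried across all 300 steps
phone_logits = classifier(outputs)
print(phone_logits.shape)         # torch.Size([1, 300, 50])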
Deep Learning
0.841442
1,097
Program comprehension (also program understanding or code comprehension) is a domain of computer science concerned with the ways software engineers maintain existing source code. The cognitive and other processes involved are identified and studied. The results are used to develop tools and training. Software maintenance tasks have five categories: adaptive maintenance, corrective maintenance, perfective maintenance, code reuse, and code leverage.
Program comprehension
0.841421
1,098
Titles of works on program comprehension include "Using a behavioral theory of program comprehension in software engineering", "The concept assignment problem in program understanding", and "Program Comprehension During Software Maintenance and Evolution". Computer scientists pioneering program comprehension include Ruven Brooks, Ted J. Biggerstaff, and Anneliese von Mayrhauser.
Program comprehension
0.841421
1,099
Assess the quality of the clustering by adding up the variation within each cluster. Repeat the process with different values of k. Pick the best value of k by finding the "elbow" in the plot of total within-cluster variance against k, i.e., the point beyond which increasing k yields only small reductions in variance. One example of this in biology is the 3D mapping of a genome. Information on the HIST1 region of mouse chromosome 13 is gathered from the Gene Expression Omnibus. This information contains data on which nuclear profiles show up in certain genomic regions. With this information, the Jaccard distance can be used to find a normalized distance between all the loci.
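A sketch of the procedure just described, assuming scikit-learn and SciPy and synthetic binary locus profiles in place of the real Gene Expression Omnibus data: k-means is run for several k, the total within-cluster variation (inertia) is inspected to locate the elbow, and Jaccard distances between loci are computed from their nuclear-profile indicator vectors.

import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
# Synthetic stand-in: 60 loci x 30 nuclear profiles, 1 = locus seen in profile.
profiles = rng.integers(0, 2, size=(60, 30))

# Jaccard distance between all pairs of loci (a normalized dissimilarity).
jaccard_dist = squareform(pdist(profiles.astype(bool), metric="jaccard"))

# Elbow method: total within-cluster variation (inertia) for several k.
inertias = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(profiles)
    inertias.append(km.inertia_)

# The "elbow" is where the inertia curve stops dropping sharply.
for k, inertia in zip(range(1, 11), inertias):
    print(k, round(inertia, 1))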
Computational biologist
0.841404