column      type           min    max
id          int32          0      100k
text        stringlengths  21     3.54k
source      stringlengths  1      124
similarity  float32        0.78   0.88
600
Fraud detection deals with the identification of bank fraud, such as money laundering, credit card fraud and telecommunication fraud, which are vast domains of research and application for machine learning. Because ensemble learning improves the robustness of normal-behaviour modelling, it has been proposed as an efficient technique for detecting such fraudulent cases and activities in banking and credit card systems.
Ensembles of classifiers
0.846393
601
Land cover mapping is one of the major applications of Earth observation satellite sensors, using remote sensing and geospatial data to identify the materials and objects located on the surface of target areas. Generally, the classes of target materials include roads, buildings, rivers, lakes, and vegetation. Several ensemble learning approaches, based on artificial neural networks, kernel principal component analysis (KPCA), decision trees with boosting, random forests, and automatic design of multiple classifier systems, have been proposed to efficiently identify land cover objects.
Ensembles of classifiers
0.846393
602
In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Unlike a statistical ensemble in statistical mechanics, which is usually infinite, a machine learning ensemble consists of only a concrete finite set of alternative models, but typically allows for much more flexible structure to exist among those alternatives.
Ensembles of classifiers
0.846393
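To make the "concrete finite set of alternative models" tangible, below is a minimal, illustrative sketch (assuming scikit-learn is available; the dataset, base learners, and hyperparameters are our own choices, not taken from the text) of three heterogeneous classifiers combined by hard majority voting:

```python
# A minimal sketch of a finite ensemble: three heterogeneous base
# learners combined by hard majority voting (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(max_depth=5)),
        ("nb", GaussianNB()),
    ],
    voting="hard",  # each model casts one vote; the majority class wins
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```

With voting="hard", each base model casts one vote per example and the majority label wins; voting="soft" would instead average predicted class probabilities.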
603
In molecular biology, an amplicon is a piece of DNA or RNA that is the source and/or product of amplification or replication events. It can be formed artificially, using various methods including polymerase chain reactions (PCR) or ligase chain reactions (LCR), or naturally through gene duplication. In this context, amplification refers to the production of one or more copies of a genetic fragment or target sequence, specifically the amplicon.
Amplicon sequencing
0.846376
604
The bound surface charge is the charge piled up at the surface of the dielectric, given by the dipole moment perpendicular to the surface: $q_b = \mathbf{d}\cdot\hat{\mathbf{n}}/|\mathbf{s}|$, where $\mathbf{s}$ is the separation between the point charges constituting the dipole, $\mathbf{d}$ is the electric dipole moment, and $\hat{\mathbf{n}}$ is the unit normal vector to the surface. Taking infinitesimals, $dq_b = d\mathbf{d}\cdot\hat{\mathbf{n}}/|\mathbf{s}|$, and dividing by the differential surface element $dS$ gives the bound surface charge density: $\sigma_b = dq_b/dS = (d\mathbf{d}/dV)\cdot\hat{\mathbf{n}} = \mathbf{P}\cdot\hat{\mathbf{n}}$, where $\mathbf{P}$ is the polarization density, i.e. the density of electric dipole moments within the material, and $dV = |\mathbf{s}|\,dS$ is the differential volume element. Using the divergence theorem, the bound volume charge density within the material is hence $\rho_b = -\nabla\cdot\mathbf{P}$. The negative sign arises from the opposite signs of the charges in the dipoles: one end is within the volume of the object, the other at the surface. A more rigorous derivation is given below.
Charge density
0.846187
605
In quantum mechanics, the charge density $\rho_q$ is related to the wavefunction $\psi(\mathbf{r})$ by the equation $\rho_q(\mathbf{r}) = q|\psi(\mathbf{r})|^2$, where $q$ is the charge of the particle and $|\psi(\mathbf{r})|^2 = \psi^*(\mathbf{r})\psi(\mathbf{r})$ is the probability density function, i.e. the probability per unit volume of the particle being located at $\mathbf{r}$. When the wavefunction is normalized, the average charge in the region $\mathbf{r} \in R$ is $Q = q\int_R |\psi(\mathbf{r})|^2 \, d^3\mathbf{r}$, where $d^3\mathbf{r}$ is the integration measure over 3d position space.
Charge density
0.846187
606
For example, in the factoring problem, the instances are the integers n, and solutions are prime numbers p that are the nontrivial prime factors of n. Computational problems are one of the main objects of study in theoretical computer science. The field of computational complexity theory attempts to determine the amount of resources (the computational complexity) that solving a given problem will require, and to explain why some problems are intractable or undecidable. Computational problems belong to complexity classes that define broadly the resources (e.g. time, space/memory, energy, circuit depth) it takes to compute (solve) them with various abstract machines.
Computational problem
0.846174
607
In theoretical computer science, a computational problem is a problem that may be solved by an algorithm. For example, the problem of factoring, "Given a positive integer n, find a nontrivial prime factor of n", is a computational problem. A computational problem can be viewed as a set of instances or cases together with a (possibly empty) set of solutions for every instance/case.
Computational problem
0.846174
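As an illustration of the factoring problem as stated above, here is a small, hypothetical trial-division sketch in Python (the function name and example inputs are ours):

```python
# Illustrative sketch of the factoring problem: given a positive
# integer n, return a nontrivial prime factor of n, or None when n
# is 1 or prime (instances whose solution set is empty).
def nontrivial_prime_factor(n: int):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d  # the smallest nontrivial divisor is always prime
        d += 1
    return None  # n is 1 or prime: no nontrivial prime factor exists

print(nontrivial_prime_factor(91))  # 7, since 91 = 7 * 13
print(nontrivial_prime_factor(13))  # None: 13 is prime
```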
608
The first theoretical treatment of electrostatic screening, due to Peter Debye and Erich Hückel, dealt with a stationary point charge embedded in a fluid. Consider a fluid of electrons in a background of heavy, positively charged ions. For simplicity, we ignore the motion and spatial distribution of the ions, approximating them as a uniform background charge. This simplification is permissible since the electrons are lighter and more mobile than the ions, provided we consider distances much larger than the ionic separation. In condensed matter physics, this model is referred to as jellium.
Electric-field screening
0.846145
609
In reality, these long-range effects are suppressed by the flow of particles in response to electric fields. This flow reduces the effective interaction between particles to a short-range "screened" Coulomb interaction. This system corresponds to the simplest example of a renormalized interaction. In solid-state physics, especially for metals and semiconductors, the screening effect describes the electrostatic field and Coulomb potential of an ion inside the solid. Just as the electric field of the nucleus is reduced inside an atom or ion due to the shielding effect, the electric fields of ions in conducting solids are further reduced by the cloud of conduction electrons.
Electric-field screening
0.846145
610
In physics, screening is the damping of electric fields caused by the presence of mobile charge carriers. It is an important part of the behavior of charge-carrying fluids, such as ionized gases (classical plasmas), electrolytes, and charge carriers in electronic conductors (semiconductors, metals). In a fluid, with a given permittivity $\varepsilon$, composed of electrically charged constituent particles, each pair of particles (with charges $q_1$ and $q_2$) interacts through the Coulomb force, $\mathbf{F} = \frac{q_1 q_2}{4\pi\varepsilon |\mathbf{r}|^2}\,\hat{\mathbf{r}}$, where the vector $\mathbf{r}$ is the relative position between the charges. This interaction complicates the theoretical treatment of the fluid.
Electric-field screening
0.846145
611
Publicly available information from biomedical documents is readily accessible through the internet and is becoming a powerful resource for collecting known protein–protein interactions (PPIs), for PPI prediction, and for protein docking. Text mining is much less costly and time-consuming compared to other high-throughput techniques. Currently, text mining methods generally detect binary relations between interacting proteins from individual sentences using rule/pattern-based information extraction and machine learning approaches. A wide variety of text mining applications for PPI extraction and/or prediction are available for public use, as well as repositories which often store manually validated and/or computationally predicted PPIs.
Protein interaction
0.846125
612
Text mining can be implemented in two stages: information retrieval, where texts containing the names of either or both interacting proteins are retrieved, and information extraction, where targeted information (interacting proteins, implicated residues, interaction types, etc.) is extracted. There are also studies using phylogenetic profiling, based on the theory that proteins involved in common pathways co-evolve in a correlated fashion across species. Some more complex text mining methodologies use advanced Natural Language Processing (NLP) techniques and build knowledge networks (for example, considering gene names as nodes and verbs as edges). Other developments involve kernel methods to predict protein interactions.
Protein interaction
0.846125
613
In physics, the electromagnetic dual concept is based on the idea that, in the static case, electromagnetism has two separate facets: electric fields and magnetic fields. Expressions in one of these will have a directly analogous, or dual, expression in the other. The reason for this can ultimately be traced to special relativity, where applying the Lorentz transformation to the electric field will transform it into a magnetic field. These are special cases of duality in mathematics.
Duality (electricity and magnetism)
0.846023
614
Some of the more contemporary periodical publications specializing in the field are MATCH Communications in Mathematical and in Computer Chemistry, first published in 1975, and the Journal of Mathematical Chemistry, first published in 1987. In 1986, the series of annual MATH/CHEM/COMP conferences held in Dubrovnik was initiated by the late Ante Graovac. The basic models for mathematical chemistry are the molecular graph and the topological index. In 2005 the International Academy of Mathematical Chemistry (IAMC) was founded in Dubrovnik (Croatia) by Milan Randić. The Academy has 82 members (2009) from all over the world, including six scientists awarded a Nobel Prize.
Mathematical chemistry
0.846004
615
Another important area is molecular knot theory and circuit topology, which describe the topology of folded linear molecules such as proteins and nucleic acids. The history of the approach may be traced back to the 19th century: Georg Helm published a treatise titled "The Principles of Mathematical Chemistry: The Energetics of Chemical Phenomena" in 1894.
Mathematical chemistry
0.846004
616
Mathematical chemistry is the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena. Mathematical chemistry has also sometimes been called computer chemistry, but should not be confused with computational chemistry. Major areas of research in mathematical chemistry include chemical graph theory, which deals with topology such as the mathematical study of isomerism and the development of topological descriptors or indices which find application in quantitative structure-property relationships; and chemical aspects of group theory, which finds applications in stereochemistry and quantum chemistry.
Mathematical chemistry
0.846004
617
In molecular physics/nanotechnology, electrostatic deflection is the deformation of a beam-like structure/element bent by an electric field. It can be due to interaction between electrostatic fields and net charge or electric polarization effects. The beam-like structure/element is generally cantilevered (fix at one of its ends). In nanomaterials, carbon nanotubes (CNTs) are typical ones for electrostatic deflections.
Electrostatic deflection (molecular physics/nanotechnology)
0.845941
618
Classical electromagnetism or classical electrodynamics is a branch of theoretical physics that studies the interactions between electric charges and currents using an extension of the classical Newtonian model; it is, therefore, a classical field theory. The theory provides a description of electromagnetic phenomena whenever the relevant length scales and field strengths are large enough that quantum mechanical effects are negligible. For small distances and low field strengths, such interactions are better described by quantum electrodynamics, which is a quantum field theory. Fundamental physical aspects of classical electrodynamics are presented in many texts, such as those by Richard Feynman, Robert B. Leighton and Matthew Sands, David J. Griffiths, Wolfgang K. H. Panofsky and Melba Phillips, and John David Jackson.
Classical electromagnetism
0.845929
619
A changing electromagnetic field propagates away from its origin in the form of a wave. These waves travel in vacuum at the speed of light and exist in a wide spectrum of wavelengths. Examples of the dynamic fields of electromagnetic radiation (in order of increasing frequency): radio waves, microwaves, light (infrared, visible light and ultraviolet), x-rays and gamma rays. In the field of particle physics this electromagnetic radiation is the manifestation of the electromagnetic interaction between charged particles.
Classical electromagnetism
0.845929
620
Though streaming algorithms had already been studied by Munro and Paterson as early as 1978, as well as by Philippe Flajolet and G. Nigel Martin in 1982/83, the field of streaming algorithms was first formalized and popularized in a 1996 paper by Noga Alon, Yossi Matias, and Mario Szegedy. For this paper, the authors later won the Gödel Prize in 2005 "for their foundational contribution to streaming algorithms." There has since been a large body of work centered around data streaming algorithms that spans a diverse spectrum of computer science fields such as theory, databases, networking, and natural language processing. Semi-streaming algorithms were introduced in 2005 as a relaxation of streaming algorithms for graphs, in which the space allowed is linear in the number of vertices $n$, but only logarithmic in the number of edges $m$. This relaxation is still meaningful for dense graphs, and can solve interesting problems (such as connectivity) that are insoluble in $o(n)$ space.
Streaming algorithms
0.845917
621
In computer science, streaming algorithms are algorithms for processing data streams in which the input is presented as a sequence of items and can be examined in only a few passes, typically just one. These algorithms are designed to operate with limited memory, generally logarithmic in the size of the stream and/or in the maximum value in the stream, and may also have limited processing time per item. As a result of these constraints, streaming algorithms often produce approximate answers based on a summary or "sketch" of the data stream.
Streaming algorithms
0.845917
622
Much of the streaming literature is concerned with computing statistics on frequency distributions that are too large to be stored. For this class of problems, there is a vector $\mathbf{a} = (a_1, \dots, a_n)$ (initialized to the zero vector $\mathbf{0}$) that has updates presented to it in a stream. The goal of these algorithms is to compute functions of $\mathbf{a}$ using considerably less space than it would take to represent $\mathbf{a}$ precisely. There are two common models for updating such streams, called the "cash register" and "turnstile" models. In the cash register model, each update is of the form $\langle i, c\rangle$, so that $a_i$ is incremented by some positive integer $c$.
Streaming algorithms
0.845917
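As one concrete instance of a small-space algorithm in the cash register model, here is a sketch of the Misra-Gries frequent-items summary (our illustrative choice, not an algorithm named in the text), processing updates with c = 1:

```python
# Misra-Gries frequent-items summary: a one-pass streaming algorithm
# using at most k-1 counters. Any item occurring more than
# len(stream)/k times is guaranteed to survive in the summary.
def misra_gries(stream, k):
    counters = {}
    for item in stream:                 # cash-register update <item, 1>
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # decrement every counter; drop counters that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = list("abracadabra")
print(misra_gries(stream, k=3))  # 'a' dominates the stream and survives
```

The summary uses O(k) space regardless of stream length, at the cost of giving only approximate counts, which matches the "sketch" trade-off described above.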
623
Solar physics is the branch of astrophysics that specializes in the study of the Sun. It deals with detailed measurements that are possible only for our closest star. It intersects with many disciplines of pure physics, astrophysics, and computer science, including fluid dynamics, plasma physics including magnetohydrodynamics, seismology, particle physics, atomic physics, nuclear physics, stellar evolution, space physics, spectroscopy, radiative transfer, applied optics, signal processing, computer vision, computational physics, stellar physics and solar astronomy. Because the Sun is uniquely situated for close-range observing (other stars cannot be resolved with anything like the spatial or temporal resolution that the Sun can), there is a split between the related discipline of observational astrophysics (of distant stars) and observational solar physics. The study of solar physics is also important as it provides a "physical laboratory" for the study of plasma physics.
Solar physicist
0.845885
624
In astronomy, the Renaissance period started with the work of Nicolaus Copernicus. He proposed that planets revolve around the Sun and not around the Earth, as was believed at the time. This model is known as the heliocentric model.
Solar physicist
0.845885
625
Modern-day solar physics is focused on understanding the many phenomena observed with the help of modern telescopes and satellites. Of particular interest are the structure of the solar photosphere, the coronal heating problem, and sunspots.
Solar physicist
0.845885
626
The scoring algorithm, also known as Fisher's scoring, is a form of Newton's method used in statistics to solve maximum likelihood equations numerically; it is named after Ronald Fisher.
Scoring algorithm
0.845874
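A minimal numpy sketch of Fisher scoring, here applied to logistic regression (our illustrative choice; for the canonical logit link the expected and observed information coincide, so the iteration matches Newton-Raphson):

```python
# Fisher scoring for logistic regression: iterate
#   beta <- beta + I(beta)^{-1} U(beta)
# where U is the score and I the Fisher information (illustrative sketch).
import numpy as np

def fisher_scoring(X, y, tol=1e-8, max_iter=25):
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        score = X.T @ (y - p)                 # score function U(beta)
        W = p * (1.0 - p)                     # per-observation weights
        info = X.T @ (X * W[:, None])         # Fisher information I(beta)
        step = np.linalg.solve(info, score)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
true_beta = np.array([-0.5, 2.0])
y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
print(fisher_scoring(X, y))  # estimates should be near [-0.5, 2.0]
```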
627
Protein Science is a peer-reviewed scientific journal covering research on the structure, function, and biochemical significance of proteins, their role in molecular and cell biology, genetics, and evolution, and their regulation and mechanisms of action. It is published by Wiley-Blackwell on behalf of The Protein Society. The 2021 impact factor of the journal is 6.725.
Protein Sci
0.845707
628
STEM subjects are taught in Pakistan as part of electives taken in the 9th and 10th grade, culminating in Matriculation exams. These electives are: pure sciences (Physics, Chemistry, Biology), mathematics (Physics, Chemistry, Maths) and computer science (Physics, Chemistry, Computer Science). STEM subjects are also offered as electives taken in the 11th and 12th grade, more commonly referred to as first and second year, culminating in Intermediate exams. These electives are: FSc pre-medical (Physics, Chemistry, Biology), FSc pre-engineering (Physics, Chemistry, Maths) and ICS (Physics/Statistics, Computer Science, Maths).
Science, technology, engineering, and mathematics
0.845636
629
People identifying within the group LGBTQ+ have faced discrimination in STEM fields throughout history. Few were openly queer in STEM; however, two well-known examples are Alan Turing, the father of computer science, and Sara Josephine Baker, American physician and public-health leader. Despite recent changes in attitudes towards LGBTQ+ people, discrimination still permeates STEM fields. A recent study has shown that gay men are less likely to have completed a bachelor's degree in a STEM field and to work in a STEM occupation. Along with this, those of sexual minorities overall have been shown to be less likely to remain in STEM majors throughout college.
Science, technology, engineering, and mathematics
0.845636
630
In November 2012 the White House announcement before the congressional vote on the STEM Jobs Act put President Obama in opposition to many of the Silicon Valley firms and executives who bankrolled his re-election campaign. The Department of Labor identified 14 sectors that are "projected to add substantial numbers of new jobs to the economy or affect the growth of other industries or are being transformed by technology and innovation requiring new sets of skills for workers." The identified sectors were as follows: advanced manufacturing, automotive, construction, financial services, geospatial technology, homeland security, information technology, transportation, aerospace, biotechnology, energy, healthcare, hospitality, and retail. The Department of Commerce notes STEM fields careers are some of the best-paying and have the greatest potential for job growth in the early 21st century.
Science, technology, engineering, and mathematics
0.845636
631
See protein folding. A third approach that structural biologists take to understanding structure is bioinformatics, which looks for patterns among the diverse sequences that give rise to particular shapes. Researchers often can deduce aspects of the structure of integral membrane proteins based on the membrane topology predicted by hydrophobicity analysis. See protein structure prediction.
Structural biologist
0.845533
632
With the development of these three techniques, the field of structural biology expanded and also became a branch of molecular biology, biochemistry, and biophysics concerned with the molecular structure of biological macromolecules (especially proteins, made up of amino acids, RNA or DNA, made up of nucleotides, and membranes, made up of lipids), how they acquire the structures they have, and how alterations in their structures affect their function. This subject is of great interest to biologists because macromolecules carry out most of the functions of cells, and it is only by coiling into specific three-dimensional shapes that they are able to perform these functions. This architecture, the "tertiary structure" of molecules, depends in a complicated way on each molecule's basic composition, or "primary structure."
Structural biologist
0.845533
633
Through the discovery of X-rays and their application to protein crystals, structural biology was revolutionized, as scientists could now obtain the three-dimensional structures of biological molecules in atomic detail. Likewise, NMR spectroscopy allowed information about protein structure and dynamics to be obtained. Finally, in the 21st century, electron microscopy also saw a drastic revolution with the development of more coherent electron sources, aberration correction for electron microscopes, and reconstruction software that enabled the successful implementation of high resolution cryo-electron microscopy, thereby permitting the study of individual proteins and molecular complexes in three dimensions at angstrom resolution.
Structural biologist
0.845533
634
Structural biology is a field many centuries old which, as defined by the Journal of Structural Biology, deals with the structural analysis of living material (formed, composed of, and/or maintained and refined by living cells) at every level of organization. Early structural biologists throughout the 19th and early 20th centuries were able to study structures only to the limit of the naked eye's visual acuity and through magnifying glasses and light microscopes. In the 20th century, a variety of experimental techniques were developed to examine the 3D structures of biological molecules. The most prominent techniques are X-ray crystallography, nuclear magnetic resonance, and electron microscopy.
Structural biologist
0.845533
635
Sequencing is used in molecular biology to study genomes and the proteins they encode. Information obtained using sequencing allows researchers to identify changes in genes and noncoding DNA (including regulatory sequences), associations with diseases and phenotypes, and identify potential drug targets.
DNA sequence
0.845532
636
DNA sequencing is the process of determining the nucleic acid sequence, that is, the order of nucleotides in DNA. It includes any method or technology that is used to determine the order of the four bases: adenine, guanine, cytosine, and thymine. The advent of rapid DNA sequencing methods has greatly accelerated biological and medical research and discovery. Knowledge of DNA sequences has become indispensable for basic biological research, DNA Genographic Projects, and in numerous applied fields such as medical diagnosis, biotechnology, forensic biology, virology and biological systematics. Comparing healthy and mutated DNA sequences can diagnose different diseases, including various cancers, characterize the antibody repertoire, and can be used to guide patient treatment.
DNA sequence
0.845532
637
If P ≠ NP, then NP-hard problems cannot be solved in polynomial time. Some NP-hard optimization problems can be polynomial-time approximated up to some constant approximation ratio (in particular, those in APX) or even up to any approximation ratio (those in PTAS or FPTAS).
NP-hardness
0.845479
638
All NP-complete problems are also NP-hard (see List of NP-complete problems). For example, the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph—commonly known as the travelling salesman problem—is NP-hard. The subset sum problem is another example: given a set of integers, does any non-empty subset of them add up to zero?
NP-hardness
0.845479
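The subset sum question above can be checked directly by brute force over all non-empty subsets; this exponential-time sketch (the function name and example inputs are ours) illustrates why the problem is easy to state yet hard to solve at scale:

```python
# Brute-force check of the subset sum question: does any non-empty
# subset of the given integers sum to zero? Enumerating all 2^n
# subsets is exponential, consistent with the problem's NP-hardness.
from itertools import combinations

def has_zero_subset(nums):
    for r in range(1, len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == 0:
                return True
    return False

print(has_zero_subset([3, -9, 8, 4, 5, 1]))  # True: -9 + 8 + 1 == 0
print(has_zero_subset([1, 2, 3]))            # False
```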
639
A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time many-one reduction from L to H. An equivalent definition is to require that every problem L in NP can be solved in polynomial time by an oracle machine with an oracle for H. Informally, an algorithm that calls such an oracle machine as a subroutine for solving H solves L in polynomial time, if the subroutine call takes only one step to compute. Another definition is to require that there be a polynomial-time reduction from an NP-complete problem G to H. As any problem L in NP reduces in polynomial time to G, L in turn reduces to H in polynomial time, so this new definition implies the previous one. It does not restrict the class NP-hard to decision problems, and it also includes search problems and optimization problems.
NP-hardness
0.845479
640
Within each interatomic surface, the electron density is a maximum at the corresponding internuclear saddle point, which also lies at the minimum of the ridge between the corresponding pair of nuclei, the ridge being defined by the pair of gradient trajectories (bond path) originating at the saddle point and terminating at the nuclei. Because QTAIM atoms are always bounded by surfaces having zero flux in the gradient vector field of the electron density, they have some unique quantum mechanical properties compared to other subsystem definitions, including a unique electronic kinetic energy, the satisfaction of an electronic virial theorem analogous to the molecular electronic virial theorem, and some interesting variational properties. QTAIM has gradually become a method for addressing possible questions regarding chemical systems, in a variety of situations hardly handled before by any other model or theory in chemistry.
Atoms in molecules
0.845433
641
In addition to bonding, QTAIM allows the calculation of certain physical properties on a per-atom basis, by dividing space up into atomic volumes, each containing exactly one nucleus, which acts as a local attractor of the electron density. In QTAIM an atom is defined as a proper open system, i.e. a system that can share energy and electron density that is localized in 3D space. The mathematical study of these features is usually referred to in the literature as charge density topology.
Atoms in molecules
0.845433
642
The development of QTAIM was driven by the assumption that, since the concepts of atoms and bonds have been and continue to be so ubiquitously useful in interpreting, classifying, predicting and communicating chemistry, they should have a well-defined physical basis. QTAIM recovers the central operational concepts of the molecular structure hypothesis, that of a functional grouping of atoms with an additive and characteristic set of properties, together with a definition of the bonds that link the atoms and impart the structure. QTAIM defines chemical bonding and structure of a chemical system based on the topology of the electron density.
Atoms in molecules
0.845433
643
In quantum chemistry, the quantum theory of atoms in molecules (QTAIM), sometimes referred to as atoms in molecules (AIM), is a model of molecular and condensed matter electronic systems (such as crystals) in which the principal objects of molecular structure, atoms and bonds, are natural expressions of a system's observable electron density distribution function. An electron density distribution of a molecule is a probability distribution that describes the average manner in which the electronic charge is distributed throughout real space in the attractive field exerted by the nuclei. According to QTAIM, molecular structure is revealed by the stationary points of the electron density together with the gradient paths of the electron density that originate and terminate at these points. QTAIM was primarily developed by Professor Richard Bader and his research group at McMaster University over the course of decades, beginning with analyses of theoretically calculated electron densities of simple molecules in the early 1960s and culminating with analyses of both theoretically and experimentally measured electron densities of crystals in the 1990s.
Atoms in molecules
0.845433
644
A tesseract is an example of a four-dimensional object. Whereas outside mathematics the term "dimension" is used as in "A tesseract has four dimensions", mathematicians usually express this as "The tesseract has dimension 4", "The dimension of the tesseract is 4", or "4D". Although the notion of higher dimensions goes back to René Descartes, substantial development of a higher-dimensional geometry began only in the 19th century, via the work of Arthur Cayley, William Rowan Hamilton, Ludwig Schläfli and Bernhard Riemann. Riemann's 1854 Habilitationsschrift, Schläfli's 1852 Theorie der vielfachen Kontinuität, Hamilton's discovery of the quaternions, and John T. Graves' discovery of the octonions in 1843 marked the beginning of higher-dimensional geometry. The rest of this section examines some of the more important mathematical definitions of dimension.
Multidimensional geometry
0.845365
645
The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates x and y satisfy the equation x2 + y2 = 4.
Mathematical equation
0.845326
646
In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics. One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines).
Mathematical equation
0.845326
647
An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable. Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
Mathematical equation
0.845326
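As a small illustration of numerically approximating an ODE alongside its exact closed-form solution (the library choice and example equation are ours, assuming scipy is available):

```python
# Numerically approximate dy/dt = -2y, y(0) = 1, whose exact
# closed-form solution is y(t) = exp(-2t), and compare the two.
import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, y: -2 * y, t_span=(0.0, 2.0), y0=[1.0],
                t_eval=np.linspace(0.0, 2.0, 5))
for t, y in zip(sol.t, sol.y[0]):
    print(f"t={t:.2f}  numeric={y:.5f}  exact={np.exp(-2 * t):.5f}")
```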
648
In algebra, an example of an identity is the difference of two squares, $x^2 - y^2 = (x+y)(x-y)$, which is true for all $x$ and $y$. Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are $\sin^2(\theta) + \cos^2(\theta) = 1$ and $\sin(2\theta) = 2\sin(\theta)\cos(\theta)$, which are both true for all values of $\theta$. For example, to solve for the value of $\theta$ that satisfies the equation $3\sin(\theta)\cos(\theta) = 1$, where $\theta$ is limited to between 0 and 45 degrees, one may use the above identity for the product to give $\frac{3}{2}\sin(2\theta) = 1$, yielding the following solution for $\theta$: $\theta = \frac{1}{2}\arcsin\left(\frac{2}{3}\right) \approx 20.9^\circ$. Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on $\theta$. In this example, restricting $\theta$ to be between 0 and 45 degrees restricts the solution to a single number.
Mathematical equation
0.845326
649
An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable.
Mathematical equation
0.845326
650
Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations.
Mathematical equation
0.845326
651
Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry. The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations.
Mathematical equation
0.845326
652
Algebra also studies Diophantine equations, where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to establish the existence or absence of a solution and, if solutions exist, to count their number.
Mathematical equation
0.845326
653
Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis.
Mathematical equation
0.845326
654
In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
Mathematical equation
0.845326
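A short sketch of solving a linear system numerically (the 2x2 system is our own illustrative example, assuming numpy is available):

```python
# Solve the linear system
#   2x +  y = 5
#    x + 3y = 10
# with a direct (LU-based) solver.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = np.linalg.solve(A, b)
print(x)  # [1. 3.], i.e. x = 1, y = 3
```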
655
Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it. The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
Mathematical equation
0.845326
656
In physics, the energy spectrum of a particle is the number of particles or intensity of a particle beam as a function of particle energy. Examples of techniques that produce an energy spectrum are alpha-particle spectroscopy, electron energy loss spectroscopy, and mass-analyzed ion-kinetic-energy spectrometry.
Spectrum (physical sciences)
0.845244
657
Wedderburn proved these results in 1907 in his doctoral thesis, On hypercomplex numbers, which appeared in the Proceedings of the London Mathematical Society. His thesis classified finite-dimensional simple and also semisimple algebras over fields. Simple algebras are building blocks of semisimple algebras: any finite-dimensional semisimple algebra is a Cartesian product, in the sense of algebras, of finite-dimensional simple algebras.
Simple algebra
0.84517
658
Also, for any $n \geq 1$, the algebra of $n \times n$ matrices with entries in a division ring is simple. Joseph Wedderburn proved that if a ring $R$ is a finite-dimensional simple algebra over a field $k$, it is isomorphic to a matrix algebra over some division algebra over $k$. In particular, the only simple rings that are finite-dimensional algebras over the real numbers are rings of matrices over either the real numbers, the complex numbers, or the quaternions.
Simple algebra
0.84517
659
It is then called a simple algebra over this field. Several references (e.g., Lang (2002) or Bourbaki (2012)) require in addition that a simple ring be left or right Artinian (or equivalently semi-simple).
Simple algebra
0.84517
660
The Weyl algebra also gives an example of a simple algebra that is not a matrix algebra over a division algebra over its center: the Weyl algebra is infinite-dimensional, so Wedderburn's theorem does not apply. Wedderburn's result was later generalized to semisimple rings in the Wedderburn-Artin theorem: this says that every semisimple ring is a finite product of matrix rings over division rings. As a consequence of this generalization, every simple ring that is left or right artinian is a matrix ring over a division ring.
Simple algebra
0.84517
661
One must be careful of the terminology: not every simple ring is a semisimple ring, and not every simple algebra is a semisimple algebra! However, every finite-dimensional simple algebra is a semisimple algebra, and every simple ring that is left or right artinian is a semisimple ring. An example of a simple ring that is not semisimple is the Weyl algebra.
Simple algebra
0.84517
662
Every finite-dimensional central simple algebra over a finite field is isomorphic to a matrix ring over that field. The algebra of all linear transformations of an infinite-dimensional vector space over a field $k$ is a simple ring that is not a semisimple ring. It is also a simple algebra over $k$ that is not a semisimple algebra.
Simple algebra
0.84517
663
These results follow from the Frobenius theorem. Every finite-dimensional simple algebra over $\mathbb{C}$ is a central simple algebra, and is isomorphic to a matrix ring over $\mathbb{C}$.
Simple algebra
0.84517
664
In abstract algebra, a branch of mathematics, a simple ring is a non-zero ring that has no two-sided ideal besides the zero ideal and itself. In particular, a commutative ring is a simple ring if and only if it is a field. The center of a simple ring is necessarily a field. It follows that a simple ring is an associative algebra over this field.
Simple algebra
0.84517
665
Let $\mathbb{R}$ be the field of real numbers, $\mathbb{C}$ the field of complex numbers, and $\mathbb{H}$ the quaternions. A central simple algebra (sometimes called a Brauer algebra) is a simple finite-dimensional algebra over a field $F$ whose center is $F$. Every finite-dimensional simple algebra over $\mathbb{R}$ is isomorphic to an algebra of $n \times n$ matrices with entries in $\mathbb{R}$, $\mathbb{C}$, or $\mathbb{H}$. Every central simple algebra over $\mathbb{R}$ is isomorphic to an algebra of $n \times n$ matrices with entries in $\mathbb{R}$ or $\mathbb{H}$.
Simple algebra
0.84517
666
Using the bijection F: SX → SY constructed from a bijection f: X → Y, one defines: f is an isomorphism between (X,U) and (Y,V) if F(U) = V. This general notion of isomorphism generalizes many less general notions listed below. For algebraic structures: isomorphism is a bijective homomorphism. In particular, for vector spaces: a linear bijection.
Equivalent definitions of mathematical structures
0.845065
667
However, not all fixed points of this action correspond to species of structures. Given two species, Bourbaki defines the notion of a "procedure of deduction" (of a structure of the second species from a structure of the first species). A pair of mutually inverse procedures of deduction leads to the notion of "equivalent species". Example: the structure of a topological space may be defined as an open set topology or, alternatively, as a closed set topology.
Equivalent definitions of mathematical structures
0.845065
668
(This notion, defined for all structures, may be thought of as a generalization of the signature defined only for algebraic structures.) Let Set* denote the groupoid of sets and bijections, that is, the category whose objects are (all) sets and whose morphisms are (all) bijections. Proposition.
Equivalent definitions of mathematical structures
0.845065
669
In mathematics, equivalent definitions are used in two somewhat different ways. First, within a particular mathematical theory (for example, Euclidean geometry), a notion (for example, ellipse or minimal surface) may have more than one definition. These definitions are equivalent in the context of a given mathematical structure (Euclidean space, in this case). Second, a mathematical structure may have more than one definition (for example, topological space has at least seven definitions; ordered field has at least two definitions).
Equivalent definitions of mathematical structures
0.845065
670
Thus, in practice a topology on a set is treated like an abstract data type that provides all needed notions (and constructors) but hides the distinction between "primary" and "secondary" notions. The same applies to other kinds of mathematical structures. "Interestingly, the formalization of structures in set theory is a similar task as the formalization of structures for computers."
Equivalent definitions of mathematical structures
0.845065
671
These are second-order structures. More complicated non-algebraic structures combine an algebraic component and a non-algebraic component. For example, the structure of a topological group consists of a topology and the structure of a group. Thus it belongs to the product of P(P(X)) and another ("algebraic") set in the scale; this product is again a set in the scale.
Equivalent definitions of mathematical structures
0.845065
672
A triple (+, ·, ≤) consisting of two binary functions N × N → N and one binary relation on N belongs to P(N × N × N) × P(N × N × N) × P(N × N). Similarly, every algebraic structure on a set belongs to the corresponding set in the scale of sets on X. Non-algebraic structures on a set X often involve sets of subsets of X (that is, subsets of P(X), in other words, elements of P(P(X))). For example, the structure of a topological space, called a topology on X, treated as the set of "open" sets; or the structure of a measurable space, treated as the σ-algebra of "measurable" sets; both are elements of P(P(X)).
Equivalent definitions of mathematical structures
0.845065
673
Speed breeding was introduced by Watson et al. 2018. Classical (human-performed) phenotyping during speed breeding is also possible, using a procedure developed by Richard et al. 2015. As of 2020 it is highly anticipated that speed breeding (SB) and automated phenotyping, combined, will produce greatly improved outcomes; see § Phenotyping and artificial intelligence above.
Crop breeding
0.845056
674
Thus axonemal microtubules, which have a long half-life, carry a "signature acetylation," which is absent from cytosolic microtubules that have a shorter half-life. In the field of epigenetics, histone acetylation (and deacetylation) have been shown to be important mechanisms in the regulation of gene transcription. Histones, however, are not the only proteins regulated by posttranslational acetylation. The following are examples of various other proteins with roles in regulating signal transduction, whose activities are also affected by acetylation and deacetylation.
Protein acetylation
0.845048
675
A drug that depends on such metabolic transformations in order to act is termed a prodrug. Acetylation is an important modification of proteins in cell biology; and proteomics studies have identified thousands of acetylated mammalian proteins. Acetylation occurs as a co-translational and post-translational modification of proteins, for example, histones, p53, and tubulins. Among these proteins, chromatin proteins and metabolic enzymes are highly represented, indicating that acetylation has a considerable impact on gene expression and metabolism. In bacteria, 90% of proteins involved in central metabolism of Salmonella enterica are acetylated.
Protein acetylation
0.845048
676
In numerical analysis, the minimum degree algorithm is an algorithm used to permute the rows and columns of a symmetric sparse matrix before applying the Cholesky decomposition, to reduce the number of non-zeros in the Cholesky factor. This results in reduced storage requirements and means that the Cholesky factor can be applied with fewer arithmetic operations. (Sometimes it may also pertain to an incomplete Cholesky factor used as a preconditioner—for example, in the preconditioned conjugate gradient algorithm.) Minimum degree algorithms are often used in the finite element method where the reordering of nodes can be carried out depending only on the topology of the mesh, rather than on the coefficients in the partial differential equation, resulting in efficiency savings when the same mesh is used for a variety of coefficient values.
Minimum degree algorithm
0.845036
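To illustrate the idea, here is a toy sketch of the greedy minimum degree heuristic operating on the sparsity graph of a symmetric matrix; production implementations (e.g. approximate minimum degree variants) use far more sophisticated data structures, so this is illustrative only:

```python
# Greedy minimum degree ordering: repeatedly eliminate a vertex of
# minimum degree from the sparsity graph, adding "fill" edges between
# its former neighbors, and record the elimination order.
def minimum_degree_order(adj):
    """adj: dict mapping vertex -> set of neighbors (symmetric graph)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # vertex of minimum degree
        neighbors = adj.pop(v)
        for u in neighbors:
            adj[u].discard(v)
        # connect the former neighbors pairwise: these are the fill edges
        for u in neighbors:
            for w in neighbors:
                if u != w:
                    adj[u].add(w)
        order.append(v)
    return order

# Sparsity pattern of a 5x5 symmetric matrix, viewed as a graph
graph = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
print(minimum_degree_order(graph))
```

The returned permutation is then applied to the matrix's rows and columns before factorization, with the aim of keeping the Cholesky factor sparse.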
677
In natural language processing, dependency-based parsing can be formulated as an ASP problem. The following code parses the Latin sentence "Puella pulchra in villa linguam latinam discit", "the pretty girl is learning Latin in the villa". The syntax tree is expressed by the arc predicates which represent the dependencies between the words of the sentence. The computed structure is a linearly ordered rooted tree.
Answer-set programming
0.845003
678
By the implicit function theorem, then, $x^*(q)$ may be viewed locally as a continuously differentiable function, and the local response of $x^*(q)$ to small changes in $q$ is given by $D_q x^*(q) = -[D_x f(x^*(q);q)]^{-1} D_q f(x^*(q);q)$. Applying the chain rule and the first order condition, $D_q p(x^*(q),q) = D_q p(x;q)\big|_{x=x^*(q)}$ (see Envelope theorem).
Comparative statics
0.844914
679
Suppose $p(x;q)$ is a smooth and strictly concave objective function where $x$ is a vector of $n$ endogenous variables and $q$ is a vector of $m$ exogenous parameters. Consider the unconstrained optimization problem $x^*(q) = \arg\max p(x;q)$. Let $f(x;q) = D_x p(x;q)$, the vector of first partial derivatives of $p(x;q)$ with respect to its first $n$ arguments $x_1, \dots, x_n$. The maximizer $x^*(q)$ is defined by the $n \times 1$ first order condition $f(x^*(q);q) = 0$.
Comparative statics
0.844914
680
One limitation of comparative statics using the implicit function theorem is that results are valid only in a (potentially very small) neighborhood of the optimum, that is, only for very small changes in the exogenous variables. Another limitation is the potentially overly restrictive nature of the assumptions conventionally used to justify comparative statics procedures. For example, John Nachbar discovered in one of his case studies that using comparative statics in general equilibrium analysis works best with very small, individual-level data rather than at an aggregate level. Paul Milgrom and Chris Shannon pointed out in 1994 that the assumptions conventionally used to justify the use of comparative statics on optimization problems are not actually necessary, specifically the assumptions of convexity of preferred sets or constraint sets, smoothness of their boundaries, first and second derivative conditions, and linearity of budget sets or objective functions. In fact, sometimes a problem meeting these conditions can be monotonically transformed to give a problem with identical comparative statics but violating some or all of these conditions; hence these conditions are not necessary to justify the comparative statics.
Comparative statics
0.844914
681
Comparative statics results are usually derived by using the implicit function theorem to calculate a linear approximation to the system of equations that defines the equilibrium, under the assumption that the equilibrium is stable. That is, if we consider a sufficiently small change in some exogenous parameter, we can calculate how each endogenous variable changes using only the first derivatives of the terms that appear in the equilibrium equations. For example, suppose the equilibrium value of some endogenous variable $x$ is determined by the equation $f(x,a) = 0$, where $a$ is an exogenous parameter. Then, to a first-order approximation, the change in $x$ caused by a small change in $a$ must satisfy $B\,dx + C\,da = 0$.
Comparative statics
0.844913
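Completing the last displayed relation, a standard step shown here in LaTeX for concreteness (assuming, as is conventional, that $B$ and $C$ denote the partial derivatives of $f$ with respect to $x$ and $a$ evaluated at the equilibrium, with $B \neq 0$):

```latex
% Solving B\,dx + C\,da = 0 for the comparative-statics derivative:
\[
  \frac{dx}{da} \;=\; -\frac{C}{B}
  \;=\; -\left(\frac{\partial f}{\partial x}\right)^{-1}
         \frac{\partial f}{\partial a}.
\]
```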
682
A generalization of the above method allows the optimization problem to include a set of constraints. This leads to the general envelope theorem. Applications include determining changes in Marshallian demand in response to changes in price or wage.
Comparative statics
0.844913
683
Daniel Dennett has called the hard problem a "hunch", and maintains that conscious experience, as it is usually understood, is merely a complex cognitive illusion. Patricia Churchland, also an eliminative materialist, maintains that philosophers ought to be more patient: neuroscience is still in its early stages, so Chalmers's hard problem is premature. Clarity will come from learning more about the brain, not from metaphysical speculation.
Combination problem
0.844881
684
Just as mass is energy, Strawson believes that consciousness "just is" matter. Max Tegmark, theoretical physicist and creator of the mathematical universe hypothesis, disagrees with these conclusions. By his account, the universe is not just describable by math but is math; comparing physics to economics or population dynamics is a disanalogy. While population dynamics may be grounded in individual people, those people are grounded in "purely mathematical objects" such as energy and charge. The universe is, in a fundamental sense, made of nothing.
Combination problem
0.844881
685
The conscious mind, Russell argued, is one such structure. Proponents of panpsychism who use this line of reasoning include Chalmers, Annaka Harris, and Galen Strawson. Chalmers has argued that the extrinsic properties of physics must have corresponding intrinsic properties; otherwise the universe would be "a giant causal flux" with nothing for "causation to relate", which he deems a logical impossibility. He sees consciousness as a promising candidate for that role.
Combination problem
0.844881
686
This led Alfred North Whitehead to conclude that intrinsic properties are "intrinsically unknowable." (3) Consciousness has many similarities to these intrinsic properties of physics. It, too, cannot be directly observed from an outside perspective.
Combination problem
0.844881
687
In other words, physics describes matter's extrinsic properties, but not the intrinsic properties that ground them. (2) Russell argued that physics is mathematical because "it is only mathematical properties we can discover." This is true almost by definition: if only extrinsic properties are outwardly observable, then they will be the only ones discovered.
Combination problem
0.844881
688
The objects that ground physics, however, can be described only through more mathematics. In Russell's words, physics describes "certain equations giving abstract properties of their changes." When it comes to describing "what it is that changes, and what it changes from and to—as to this, physics is silent."
Combination problem
0.844881
689
(1) Like many sciences, physics describes the world through mathematics. Unlike other sciences, physics cannot describe what Schopenhauer called the "object that grounds" mathematics. Economics is grounded in resources being allocated, and population dynamics is grounded in individual people within that population.
Combination problem
0.844881
690
Physics is mathematical, not because we know so much about the physical world, but because we know so little: it is only its mathematical properties that we can discover. For the rest our knowledge is negative. Rather than solely trying to solve the problem of consciousness, Russell also attempted to solve the problem of substance, which is arguably a form of the problem of infinite regress.
Combination problem
0.844881
691
According to Plato: This world is indeed a living being endowed with a soul and intelligence ... a single visible living entity containing all other living entities, which by their nature are all related. Stoicism developed a cosmology that held that the natural world is infused with the divine fiery essence pneuma, directed by the universal intelligence logos. The relationship between beings' individual logos and the universal logos was a central concern of the Roman Stoic Marcus Aurelius. The metaphysics of Stoicism finds connections with Hellenistic philosophies such as Neoplatonism. Gnosticism also made use of the Platonic idea of anima mundi.
Combination problem
0.844881
692
This notion has taken on a wide variety of forms. Some historical and non-Western panpsychists ascribe attributes such as life or spirits to all entities (animism). Contemporary academic proponents, however, hold that sentience or subjective experience is ubiquitous, while distinguishing these qualities from more complex human mental attributes. They therefore ascribe a primitive form of mentality to entities at the fundamental level of physics but do not ascribe mentality to most aggregate things, such as rocks or buildings.
Combination problem
0.844881
693
In general, it seems that data is most useful to us when it is abstracted from its original structure and repackaged in a way that is easier to understand, even if this comes at the cost of accuracy. Hoffman offers the "fitness beats truth theorem" as mathematical proof that perceptions of reality bear little resemblance to reality's true nature. From this he concludes that our senses do not faithfully represent the external world.
Combination problem
0.844881
694
Panpsychist interpretations of quantum mechanics have been put forward by such philosophers as Whitehead, Shan Gao, Michael Lockwood, and Hoffman, who is a cognitive scientist. Protopanpsychist interpretations have been put forward by Bohm and Pylkkänen. Quantum theories of consciousness have yet to gain mainstream attention. Tegmark has formally calculated the "decoherence rates" of neurons, finding that the brain is a "classical rather than a quantum system" and that quantum mechanics does not relate "to consciousness in any fundamental way." In 2007, Steven Pinker criticized explanations of consciousness invoking quantum physics, saying: "to my ear, this amounts to the feeling that quantum mechanics sure is weird, and consciousness sure is weird, so maybe quantum mechanics can explain consciousness," a view echoed by physicist Stephen Hawking. In 2017, Penrose rejected these characterizations, stating that disagreements are about the nature of quantum mechanics.
Combination problem
0.844881
695
Leaning toward the many-worlds interpretation due to its mathematical parsimony, he believes his variety of panpsychist property dualism may be the theory Penrose is seeking. Chalmers believes that information will play an integral role in any theory of consciousness because the mind and brain have corresponding informational structures. He considers the computational nature of physics further evidence of information's central role, and suggests that information that is physically realised is simultaneously phenomenally realised; both regularities in nature and conscious experience are expressions of information's underlying character.
Combination problem
0.844881
696
The many-worlds interpretation of quantum mechanics does not take observation as central to the wave-function collapse, because it denies that the collapse happens. On the many-worlds interpretation, just as the cat is both dead and alive, the observer both sees a dead cat and sees a living cat. Even though observation does not play a central role in this case, questions about observation are still relevant to the discussion.
Combination problem
0.844881
697
Though not referring specifically to quantum mechanics, Chalmers has written that if a theory of everything is ever discovered, it will be a set of "psychophysical laws", rather than simply a set of physical laws. With Chalmers as their inspiration, Bohm and Pylkkänen set out to do just that in their panprotopsychism. Chalmers, who is critical of the Copenhagen interpretation and most quantum theories of consciousness, has coined this "the Law of the Minimisation of Mystery."
Combination problem
0.844881
698
This has raised questions about, in John S. Bell's words, "where the observer begins and ends." The measurement problem has largely been characterised as the clash of classical physics and quantum mechanics. Bohm argued that it is rather a clash of classical physics, quantum mechanics, and phenomenology; all three levels of description seem to be difficult to reconcile, or even contradictory.
Combination problem
0.844881
699
According to the Copenhagen interpretation of quantum mechanics, one of the oldest interpretations and the most widely taught, it is the act of observation that collapses the wave-function. Erwin Schrödinger famously articulated the Copenhagen interpretation's unusual implications in the thought experiment now known as Schrödinger's cat. He imagines a box that contains a cat, a flask of poison, radioactive material, and a Geiger counter.
Combination problem
0.844881