Columns: id (int32, range 0–100k) · text (string, lengths 21–3.54k) · source (string, lengths 1–124) · similarity (float32, range 0.78–0.88)
500
Also, as for second cousins, parents not related to the common ancestor are indicated by numerals. Here, the prime equation is fY = ft = fP1,P2 = (1/4)[…]. After working through the appropriate algebra, this becomes ft = (1/4)[…], which is the iteration version.
Quantitative genetics
0.847725
501
After working through the appropriate algebra, this becomes ft = (1/4)[…], which is the iteration version. A "final" version is ft = (1/64)[…]. To visualize the pattern in full cousin equations, start the series with the full-sib equation re-written in iteration form: ft = (1/4)[…].
Quantitative genetics
0.847725
502
There are two major approaches to defining and partitioning genotypic variance. One is based on the gene-model effects, while the other is based on the genotype-substitution effects. They are algebraically inter-convertible with each other. In this section, the basic random-fertilization derivation is considered, with the effects of inbreeding and dispersion set aside; these are dealt with later to arrive at a more general solution. Until this mono-genic treatment is replaced by a multi-genic one, and until epistasis is resolved in the light of the findings of epigenetics, the genotypic variance has only the components considered here.
Quantitative genetics
0.847725
503
The previous sections treated dispersion as an "assistant" to selection, and it became apparent that the two work well together. In quantitative genetics, selection is usually examined in this "biometrical" fashion, but the changes in the means (as monitored by ΔG) reflect the changes in allele and genotype frequencies beneath this surface. Recalling the section on "Genetic drift", drift also effects changes in allele and genotype frequencies, and in the associated means; it is the companion aspect to the dispersion considered here ("the other side of the same coin"). However, these two forces of frequency change are seldom in concert, and may often act contrary to each other.
Quantitative genetics
0.847725
504
Formal definitions of these effects recognize this phenotypic focus. Epistasis has been approached statistically as interaction (i.e., inconsistencies), but epigenetics suggests a new approach may be needed. The situation where d > a was known as "over-dominance". Mendel's pea attribute "length of stem" provides us with a good example.
Quantitative genetics
0.847725
505
In diploid organisms, the average genotypic "value" (locus value) may be defined by the allele "effect" together with a dominance effect, and also by how genes interact with genes at other loci (epistasis). The founder of quantitative genetics, Sir Ronald Fisher, perceived much of this when he proposed the first mathematics of this branch of genetics. Being a statistician, he defined the gene effects as deviations from a central value, enabling the use of statistical concepts such as the mean and variance. The central value he chose for the gene was the midpoint between the two opposing homozygotes at the one locus.
Quantitative genetics
0.847725
506
The number of gametes involved in fertilization varies from sample to sample, and is given as 2Nk. The total number of gametes sampled overall is Σ 2Nk (summed over the s samples). Because each sample has its own size, weights are needed to obtain averages (and other statistics) when obtaining the overall results. These are ωk = 2Nk / Σ 2Nk, and are given at white label "4" in the diagram.
Quantitative genetics
0.847725
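The weighting step described above can be sketched numerically. The sample sizes below are invented for illustration; the only thing taken from the text is the rule that each sample of Nk zygotes contributes 2Nk gametes and the weights are each sample's share of the gamete total:

```python
# Hypothetical sample sizes N_k (zygotes per sample); each sample
# contributes 2*N_k gametes to the overall pool.
N = [10, 25, 15]                      # assumed sample sizes, for illustration
gametes = [2 * n for n in N]          # 2N_k gametes per sample
total = sum(gametes)                  # overall number of gametes sampled
w = [g / total for g in gametes]      # weights w_k = 2N_k / sum(2N_k)
print(w)                              # [0.2, 0.5, 0.3]
```

The weights necessarily sum to one, so any weighted average computed with them is a proper average over the pooled gametes.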
507
The narrow-sense heritability (h2) is usually used, thereby linking to the genic variance (σ2A). However, if appropriate, use of the broad-sense heritability (H2) would connect to the genotypic variance (σ2G); and even an allelic heritability might be contemplated, connecting to (σ2a). To apply these concepts before selection actually takes place, and so predict the outcome of alternatives (such as choice of selection threshold, for example), these phenotypic statistics are re-considered against the properties of the normal distribution, especially those concerning truncation of its superior tail.
Quantitative genetics
0.847725
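Truncating the superior tail of a normal distribution leads to the standard selection-intensity calculation: if the truncation point is x (in standard-deviation units), the selected proportion is p = 1 − Φ(x) and the mean of the selected tail is i = φ(x)/p. The sketch below uses the common Falconer-style prediction ΔG = h²·i·σP; the threshold, heritability, and SD values are assumptions for illustration, not taken from this text:

```python
import math

def phi(x):            # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):            # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def selection_intensity(x):
    """Mean of the superior tail beyond truncation point x (SD units)."""
    p = 1 - Phi(x)     # proportion of the population selected
    return phi(x) / p

# Selecting roughly the top 10% of a normally distributed trait:
x = 1.2816                    # truncation point giving p ~ 0.10
i = selection_intensity(x)    # ~ 1.755 for this threshold
h2, sigma_p = 0.4, 10.0       # assumed heritability and phenotypic SD
delta_G = h2 * i * sigma_p    # predicted response to selection
print(i, delta_G)
```

Sharper truncation (smaller p) raises the intensity i, which is why the choice of selection threshold can be evaluated before any selection is carried out.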
508
Further gathering of terms leads to (1/2)D + (1/2)F′ + (1/2)H3 + (1/4)H2, where (1/2)H3 = (q − p)²(1/2)H1 = (q − p)²·2pqd². This is useful later in Diallel analysis, which is an experimental design for estimating these genetical statistics. If, following the last-given rearrangements, the first three terms are amalgamated together, rearranged further and simplified, the result is the variance of the Fisherian substitution expectation. That is: σ2A = σ2a + covad + σ2d. Notice particularly that σ2A is not σ2a.
Quantitative genetics
0.847725
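The amalgamation can be checked numerically. Writing the average substitution effect as α = a + (q − p)d (Falconer's convention, assumed here as the notation behind the "Fisherian substitution expectation"), the variance 2pqα² expands term-by-term into an allelic part, a covariance part, and a (q − p)²·2pqd² part, matching the H3 expression above:

```python
# Numeric check of the three-term partition of the substitution-
# expectation variance. Allele frequencies and effects are arbitrary
# illustrative values, not taken from this text.
p, q = 0.3, 0.7          # allele frequencies (p + q = 1)
a, d = 1.0, 0.4          # homozygote effect and dominance deviation

alpha = a + (q - p) * d                   # average substitution effect
var_A = 2 * p * q * alpha**2              # sigma^2_A

var_a  = 2 * p * q * a**2                 # allelic term
cov_ad = 4 * p * q * (q - p) * a * d      # covariance term
var_d  = 2 * p * q * (q - p)**2 * d**2    # (q - p)^2 * 2pq d^2 term

assert abs(var_A - (var_a + cov_ad + var_d)) < 1e-12
```

The check also makes the closing remark concrete: σ2A (here var_A) and σ2a (var_a) differ whenever d ≠ 0 and p ≠ q.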
509
The covad and substitution-deviation variances are simply artifacts of this shift. The allelic and dominance variances are genuine genetical partitions of the original gene-model, and are the only eu-genetical components. Even then, the algebraic formula for the allelic variance is affected by the presence of G: it is only the dominance variance (i.e. σ2d) which is unaffected by the shift from mp to G. These insights are commonly not appreciated.
Quantitative genetics
0.847725
510
Most of these calculi can be formalized as abstract relation algebras, such that reasoning can be carried out at a symbolic level. For computing solutions of a constraint network, the path-consistency algorithm is an important tool.
Spatial reasoning
0.847585
511
There are many concepts and theories in continuous mathematics which have discrete versions, such as discrete calculus, discrete Fourier transforms, discrete geometry, discrete logarithms, discrete differential geometry, discrete exterior calculus, discrete Morse theory, discrete optimization, discrete probability theory, discrete probability distribution, difference equations, discrete dynamical systems, and discrete vector measures.
Discrete Mathematics
0.847502
512
Automata theory and formal language theory are closely related to computability. Petri nets and process algebras are used to model computer systems, and methods from discrete mathematics are used in analyzing VLSI electronic circuits. Computational geometry applies algorithms to geometrical problems and representations of geometrical objects, while computer image analysis applies them to representations of images. Theoretical computer science also includes the study of various continuous computational topics.
Discrete Mathematics
0.847502
513
Theoretical computer science includes areas of discrete mathematics relevant to computing. It draws heavily on graph theory and mathematical logic. Included within theoretical computer science is the study of algorithms and data structures. Computability studies what can be computed in principle, and has close ties to logic, while complexity studies the time, space, and other resources taken by computations.
Discrete Mathematics
0.847502
514
The telecommunication industry has also motivated advances in discrete mathematics, particularly in graph theory and information theory. Formal verification of statements in logic has been necessary for software development of safety-critical systems, and advances in automated theorem proving have been driven by this need. Computational geometry has been an important part of the computer graphics incorporated into modern video games and computer-aided design tools. Several fields of discrete mathematics, particularly theoretical computer science, graph theory, and combinatorics, are important in addressing the challenging bioinformatics problems associated with understanding the tree of life. Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a US$1 million prize for the first correct proof, along with prizes for six other mathematical problems.
Discrete Mathematics
0.847502
515
In 1970, Yuri Matiyasevich proved that this could not be done. The need to break German codes in World War II led to advances in cryptography and theoretical computer science, with the first programmable digital electronic computer being developed at England's Bletchley Park with the guidance of Alan Turing and his seminal work, On Computable Numbers. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades.
Discrete Mathematics
0.847502
516
The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852, but not proved until 1976 (by Kenneth Appel and Wolfgang Haken, using substantial computer assistance). In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible – at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution.
Discrete Mathematics
0.847502
517
Topological combinatorics concerns the use of techniques from topology and algebraic topology/combinatorial topology in combinatorics. Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties. Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, partition theory is now considered a part of combinatorics or an independent field. Order theory is the study of partially ordered sets, both finite and infinite.
Discrete Mathematics
0.847502
518
Combinatorics studies the way in which discrete structures can be combined or arranged. Enumerative combinatorics concentrates on counting the number of certain combinatorial objects - e.g. the twelvefold way provides a unified framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration (i.e., determining the number) of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.
Discrete Mathematics
0.847502
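Three of the counting problems unified by the twelvefold way can be illustrated directly: ordered selections (permutations), unordered selections (combinations), and integer partitions. The sketch below uses small illustrative values; the partition counter is a simple dynamic program, one of several standard approaches:

```python
from math import comb, factorial

n, k = 5, 3

# Ordered selections of k out of n distinct items:
permutations = factorial(n) // factorial(n - k)   # 5*4*3 = 60

# Unordered k-subsets of an n-set:
combinations = comb(n, k)                         # 10

# Partitions of the integer n (order of parts irrelevant), counted by
# accumulating one allowed part size at a time:
def partitions(n):
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            ways[total] += ways[total - part]
    return ways[n]

print(permutations, combinations, partitions(5))  # 60 10 7
```

Here partitions(5) = 7 corresponds to 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, and 1+1+1+1+1.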
519
Algebraic structures occur as both discrete examples and continuous examples. Discrete algebras include: boolean algebra used in logic gates and programming; relational algebra used in databases; discrete and finite versions of groups, rings and fields are important in algebraic coding theory; discrete semigroups and monoids appear in the theory of formal languages.
Discrete Mathematics
0.847502
520
Set theory is the branch of mathematics that studies sets, which are collections of objects, such as {blue, white, red} or the (infinite) set of all prime numbers. Partially ordered sets and sets with other relations have applications in several areas. In discrete mathematics, countable sets (including finite sets) are the main focus. The beginning of set theory as a branch of mathematics is usually marked by Georg Cantor's work distinguishing between different kinds of infinite set, motivated by the study of trigonometric series, and further development of the theory of infinite sets is outside the scope of discrete mathematics. Indeed, contemporary work in descriptive set theory makes extensive use of traditional continuous mathematics.
Discrete Mathematics
0.847502
521
The Solar Physics Division of the American Astronomical Society boasts 555 members (as of May 2007), compared to several thousand in the parent organization. A major thrust of current (2009) effort in the field of solar physics is integrated understanding of the entire Solar System including the Sun and its effects throughout interplanetary space within the heliosphere and on planets and planetary atmospheres. Studies of phenomena that affect multiple systems in the heliosphere, or that are considered to fit within a heliospheric context, are called heliophysics, a new coinage that entered usage in the early years of the current millennium.
Solar Physics
0.847364
522
Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular the following laws are common to both kinds of algebra: The following laws hold in Boolean algebra, but not in ordinary algebra: Taking x = 2 in the third law above shows that it is not an ordinary algebra law, since 2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in Absorption Law 1, the left hand side would be 1(1 + 1) = 2, while the right hand side would be 1 (and so on).
Boolean problem
0.847361
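The contrast above can be verified by brute force. Matching ∨ with | and ∧ with & on the values {0, 1}, the absorption and idempotence laws hold for every Boolean assignment, while the corresponding numeric calculations fail exactly as described (1·(1 + 1) = 2 and 2 × 2 = 4):

```python
from itertools import product

def meet(x, y): return x & y   # ∧, matched with multiplication
def join(x, y): return x | y   # ∨, matched with addition

# Absorption x ∧ (x ∨ y) = x holds for all Boolean assignments:
assert all(meet(x, join(x, y)) == x for x, y in product([0, 1], repeat=2))

# ...but fails in ordinary algebra with x = y = 1: 1*(1 + 1) = 2, not 1.
x = y = 1
assert x * (x + y) != x

# Idempotence x ∧ x = x holds in Boolean algebra, but 2 * 2 = 4 ≠ 2:
assert all(meet(x, x) == x for x in [0, 1])
assert 2 * 2 != 2
```

Exhaustive checking over {0, 1} is a complete proof method here, since Boolean laws quantify over only finitely many assignments.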
523
Principle: If {X, R} is a partially ordered set, then {X, R(inverse)} is also a partially ordered set. There is nothing magical about the choice of symbols for the values of Boolean algebra. We could rename 0 and 1 to say α and β, and as long as we did so consistently throughout it would still be Boolean algebra, albeit with some obvious cosmetic differences. But suppose we rename 0 and 1 to 1 and 0 respectively.
Boolean problem
0.847361
524
In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits. Today, all modern general purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in ferromagnetic storage devices, as holes in punched cards or paper tape, and so on.
Boolean problem
0.847361
525
Thus 0 and 1 are dual, and ∧ and ∨ are dual. The Duality Principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged. One change we did not need to make as part of this interchange was to complement.
Boolean problem
0.847361
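The duality principle can be checked concretely on the De Morgan laws themselves: swapping ∧ with ∨ in one law yields the other, and both hold for every assignment. Complement needs no swapping because interchanging 0 and 1 maps its truth table to itself:

```python
from itertools import product

# De Morgan's laws are dual to each other under the ∧/∨, 0/1 interchange:
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))

# Complement is self-dual: flipping inputs and outputs of NOT's truth
# table gives back the same table.
table = {a: (not a) for a in (False, True)}
dual_table = {(not a): (not v) for a, v in table.items()}
assert table == dual_table
```

Because every law pairs with its dual in this way, any theorem proved in Boolean algebra immediately yields a second theorem for free.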
526
Then it would still be Boolean algebra, and moreover operating on the same values. However it would not be identical to our original Boolean algebra because now we find ∨ behaving the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that we've been fiddling with the notation, despite the fact that we're still using 0s and 1s.
Boolean problem
0.847361
527
In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second.
Boolean problem
0.847361
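The carry distinction can be made concrete: XOR is addition without carries, AND picks out the positions that generate a carry, and iterating the two recovers full binary addition. This is a standard identity, shown here as a small sketch:

```python
def add_via_logic(a, b):
    """Add two non-negative integers using only bitwise operations."""
    while b:
        carry = (a & b) << 1   # bits where both operands are 1, shifted left
        a = a ^ b              # bitwise sum ignoring carries
        b = carry              # feed the carries back in
    return a

print(add_via_logic(13, 29))   # 42
```

The purely logical operations compare bits position-by-position with no interaction between columns; it is the carry loop that turns them into numeric arithmetic.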
528
Of the twenty-four propositions, the first three are quoted without proof from Euclid's Elements of Conics (a lost work by Euclid on conic sections). Propositions 4 and 5 establish elementary properties of the parabola. Propositions 6–17 give the mechanical proof of the main theorem; propositions 18–24 present the geometric proof.
The Quadrature of the Parabola
0.84731
529
When the center of gravity of the triangle is known, the equilibrium of the lever yields the area of the parabola in terms of the area of the triangle which has the same base and equal height. Archimedes here deviates from the procedure found in On the Equilibrium of Planes in that he has the centers of gravity at a level below that of the balance. The second and more famous proof uses pure geometry, particularly the sum of a geometric series.
The Quadrature of the Parabola
0.84731
530
Conic sections such as the parabola were already well known in Archimedes' time thanks to Menaechmus a century earlier. However, before the advent of the differential and integral calculus, there were no easy means to find the area of a conic section. Archimedes provides the first attested solution to this problem by focusing specifically on the area bounded by a parabola and a chord. Archimedes gives two proofs of the main theorem: one using abstract mechanics and the other one by pure geometry. In the first proof, Archimedes considers a lever in equilibrium under the action of gravity, with weighted segments of a parabola and a triangle suspended along the arms of a lever at specific distances from the fulcrum.
The Quadrature of the Parabola
0.84731
531
Quadrature of the Parabola (Greek: Τετραγωνισμὸς παραβολῆς) is a treatise on geometry, written by Archimedes in the 3rd century BC and addressed to his Alexandrian acquaintance Dositheus. It contains 24 propositions regarding parabolas, culminating in two proofs showing that the area of a parabolic segment (the region enclosed by a parabola and a line) is 4/3 that of a certain inscribed triangle. It is one of the best-known works of Archimedes, in particular for its ingenious use of the method of exhaustion and, in the second part, of a geometric series. Archimedes dissects the area into infinitely many triangles whose areas form a geometric progression. He then computes the sum of the resulting geometric series, and proves that this is the area of the parabolic segment. This represents one of the most sophisticated uses of a reductio ad absurdum argument in ancient Greek mathematics, and Archimedes' solution remained unsurpassed until the development of integral calculus in the 17th century, being succeeded by Cavalieri's quadrature formula.
The Quadrature of the Parabola
0.84731
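The geometric progression behind the 4/3 result can be summed directly: each round of the dissection contributes triangles totalling one quarter of the previous round's area, so the total area is T·(1 + 1/4 + 1/16 + …) = (4/3)T, where T is the area of the inscribed triangle:

```python
# Partial sums of Archimedes' series T * (1/4)^n converge to (4/3) * T.
T = 1.0   # area of the inscribed triangle, normalized to 1
total = sum(T * (1 / 4) ** n for n in range(60))
print(total)   # numerically indistinguishable from 4/3
```

Archimedes of course had no limit concept; his exhaustion argument shows instead that the segment's area can be neither more nor less than (4/3)T.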
532
Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML), is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI. It contrasts with the "black box" concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision. By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation.
Learning algorithms
0.847127
533
Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction.
Learning algorithms
0.847127
534
Their main success came in the mid-1980s with the reinvention of backpropagation. Machine learning (ML), reorganized and recognized as its own field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics, fuzzy logic, and probability theory.
Learning algorithms
0.847127
535
Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval. Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including Hopfield, Rumelhart, and Hinton.
Learning algorithms
0.847127
536
However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favor.
Learning algorithms
0.847127
537
As a scientific endeavor, machine learning grew out of the quest for artificial intelligence (AI). In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics. Probabilistic reasoning was also employed, especially in automated medical diagnosis.
Learning algorithms
0.847127
538
Analytical and computational techniques derived from deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics.
Learning algorithms
0.847127
539
AAAI Conference on Artificial Intelligence
Association for Computational Linguistics (ACL)
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)
International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB)
International Conference on Machine Learning (ICML)
International Conference on Learning Representations (ICLR)
International Conference on Intelligent Robots and Systems (IROS)
Conference on Knowledge Discovery and Data Mining (KDD)
Conference on Neural Information Processing Systems (NeurIPS)
Learning algorithms
0.847127
540
Software suites containing a variety of machine learning algorithms include the following:
Learning algorithms
0.847127
541
Robot learning is inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning, and finally meta-learning (e.g. MAML).
Learning algorithms
0.847127
542
Unsupervised learning algorithms take a set of data that contains only inputs, and find structure in the data, like grouping or clustering of data points. The algorithms, therefore, learn from test data that has not been labeled, classified or categorized. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. A central application of unsupervised learning is in the field of density estimation in statistics, such as finding the probability density function.
Learning algorithms
0.847127
543
Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting. Shapiro built their first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples. The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set.
Learning algorithms
0.847127
544
Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as functional programs. Inductive logic programming is particularly useful in bioinformatics and natural language processing.
Learning algorithms
0.847127
545
In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions. Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions. Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses.
Learning algorithms
0.847127
546
For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics.
Learning algorithms
0.847127
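The two standard "interestingness" measures for such a rule, support and confidence, can be computed directly from transaction data. The toy basket data below is invented for illustration:

```python
# Toy market-basket data (invented): compute support and confidence for
# the rule {onions, potatoes} => {burger}.
transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"onions", "potatoes"},
    {"milk", "bread"},
    {"potatoes", "burger"},
]

antecedent = {"onions", "potatoes"}
consequent = {"burger"}
both = antecedent | consequent

n_both = sum(both <= t for t in transactions)        # baskets with all items
n_ante = sum(antecedent <= t for t in transactions)  # baskets with antecedent

support = n_both / len(transactions)   # fraction of all baskets: 2/5
confidence = n_both / n_ante           # P(consequent | antecedent): 2/3
print(support, confidence)
```

A rule is typically reported only when both measures clear user-chosen thresholds, which is what "strong rules" refers to above.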
547
Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems. Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets.
Learning algorithms
0.847127
548
A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.
Learning algorithms
0.847127
549
Machine learning (ML) is an umbrella term for solving problems for which the development of algorithms by human programmers would be cost-prohibitive; instead, the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithm. Recently, generative artificial neural networks have been able to surpass the results of many previous approaches. Machine-learning approaches have been applied to large language models, computer vision, speech recognition, email filtering, agriculture, and medicine, where it is too costly to develop algorithms to perform the needed tasks. The mathematical foundations of ML are provided by mathematical optimization (mathematical programming) methods. Data mining is a related (parallel) field of study, focusing on exploratory data analysis through unsupervised learning. ML is known in its application across business problems under the name predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods.
Learning algorithms
0.847127
550
The rules of quantum tic-tac-toe attempt to capture three phenomena of quantum systems: superposition, the ability of quantum objects to be in two places at once; entanglement, the phenomenon where distant parts of a quantum system display correlations that cannot be explained by either timelike causality or common cause; and collapse, the phenomenon where the quantum states of a system are reduced to classical states. Collapses occur when a measurement happens, but the mathematics of the current formulation of quantum mechanics is silent on the measurement process. Many of the interpretations of quantum mechanics derive from different efforts to deal with the measurement problem.
Quantum tic-tac-toe
0.847064
551
The researchers who invented quantum tic-tac-toe were studying abstract quantum systems, formal systems whose axiomatic foundation included only a few of the axioms of quantum mechanics. Quantum tic-tac-toe became the most thoroughly studied abstract quantum system and offered insights that spawned new research. It also turned out to be a fun and engaging game, a game which also provides good pedagogy in the classroom.
Quantum tic-tac-toe
0.847064
552
How the universe can be like this is rather counterintuitive. There is a disconnect between the mathematics and our mental images of reality, a disconnect that is absent in classical physics. This is why quantum mechanics supports multiple "interpretations".
Quantum tic-tac-toe
0.847064
553
The motivation to invent quantum tic-tac-toe was to explore what it means to be in two places at once. In classical physics, a single object cannot be in two places at once. In quantum physics, however, the mathematics used to describe quantum systems seems to imply that before being subjected to quantum measurement (or "observed") certain quantum particles can be in multiple places at once. (The textbook example of this is the double-slit experiment.)
Quantum tic-tac-toe
0.847064
554
Quantum tic-tac-toe is a "quantum generalization" of tic-tac-toe in which the players' moves are "superpositions" of plays in the classical game. The game was invented by Allan Goff of Novatia Labs, who describes it as "a way of introducing quantum physics without mathematics", and offering "a conceptual foundation for understanding the meaning of quantum mechanics".
Quantum tic-tac-toe
0.847064
555
Roland E. Larson & Robert P. Hostetler (1989) Precalculus, second edition, D.C. Heath and Company ISBN 0-669-16277-9
Margaret L. Lial & Charles D. Miller (1988) Precalculus, Scott Foresman ISBN 0-673-15872-1
Jerome E. Kaufmann (1988) Precalculus, PWS-Kent Publishing Company (Wadsworth)
Karl J. Smith (1990) Precalculus Mathematics: a functional approach, fourth edition, Brooks/Cole ISBN 0-534-11922-0
Michael Sullivan (1993) Precalculus, third edition, Dellen imprint of Macmillan Publishers ISBN 0-02-418421-7
Precalculus
0.846997
556
Another difference in the modern text is avoidance of complex numbers, except as they may arise as roots of a quadratic equation with a negative discriminant, or in Euler's formula as application of trigonometry. Euler used not only complex numbers but also infinite series in his precalculus. Today's course may cover arithmetic and geometric sequences and series, but not the application by Saint-Vincent to gain his hyperbolic logarithm, which Euler used to finesse his precalculus.
Precalculus
0.846997
557
This part of precalculus prepares the student for integration of the monomial x^p in the instance of p = −1. Today's precalculus text computes e as the limit e = lim_{n→∞} (1 + 1/n)^n. An exposition on compound interest in financial mathematics may motivate this limit.
Precalculus
0.846997
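The limit above is easy to check numerically; a minimal Python sketch (not part of any precalculus text cited here) showing (1 + 1/n)^n approaching e as n grows:

```python
import math

# Approximate e by the limit definition e = lim_{n→∞} (1 + 1/n)^n.
for n in (10, 1000, 100000):
    print(n, (1 + 1 / n) ** n)

# The limit value, for comparison:
print(math.e)  # 2.718281828459045
```

The approximation error shrinks roughly like e/(2n), which is why the convergence looks slow term by term.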
558
The general logarithm, to an arbitrary positive base, Euler presents as the inverse of an exponential function. Then the natural logarithm is obtained by taking as base "the number for which the hyperbolic logarithm is one", sometimes called Euler's number, and written e. This appropriation of the significant number from Gregoire de Saint-Vincent’s calculus suffices to establish the natural logarithm.
Precalculus
0.846997
559
For students to succeed at finding the derivatives and antiderivatives with calculus, they will need facility with algebraic expressions, particularly in modification and transformation of such expressions. Leonhard Euler wrote the first precalculus book in 1748 called Introductio in analysin infinitorum (Latin: Introduction to the Analysis of the Infinite), which "was meant as a survey of concepts and methods in analysis and analytic geometry preliminary to the study of differential and integral calculus." He began with the fundamental concepts of variables and functions. His innovation is noted for its use of exponentiation to introduce the transcendental functions.
Precalculus
0.846997
560
Algebraic skills are exercised with trigonometric functions and trigonometric identities. The binomial theorem, polar coordinates, parametric equations, and the limits of sequences and series are other common topics of precalculus. Sometimes the mathematical induction method of proof for propositions dependent upon a natural number may be demonstrated, but generally coursework involves exercises rather than theory.
Precalculus
0.846997
561
Precalculus prepares students for calculus somewhat differently from the way that pre-algebra prepares students for algebra. While pre-algebra often has extensive coverage of basic algebraic concepts, precalculus courses might include only small amounts of calculus concepts, if any, and often cover algebraic topics that might not have been given attention in earlier algebra courses. Some precalculus courses might differ from others in terms of content. For example, an honors-level course might spend more time on conic sections, Euclidean vectors, and other topics needed for calculus, used in fields such as medicine or engineering.
Precalculus
0.846997
562
Jay Abramson and others (2014) Precalculus from OpenStax David Lippman & Melonie Rasmussen (2017) Precalculus: an investigation of functions Carl Stitz & Jeff Zeager (2013) Precalculus (pdf)
Precalculus
0.846997
563
In mathematics education, precalculus is a course, or a set of courses, that includes algebra and trigonometry at a level which is designed to prepare students for the study of calculus, thus the name precalculus. Schools often distinguish between algebra and trigonometry as two separate parts of the coursework.
Precalculus
0.846997
564
Thus "x − y" is an example of a partially computable function. Examples of computable functions include: proper subtraction x ∸ y (as defined above); the identity function — for each i, a function U_i^n = Ψ_i^n(x1, ..., xn) exists that plucks x_i out of the set of arguments (x1, ..., xn); and multiplication. Boolos–Burgess–Jeffrey (2002) give prose descriptions of Turing machines for doubling (2p), parity, addition, and multiplication. With regard to the counter machine, an abstract machine model equivalent to the Turing machine, examples computable by abacus machine (cf. Boolos–Burgess–Jeffrey (2002)) include addition, multiplication, and exponentiation (given as a flow-chart/block-diagram description of the algorithm). Demonstrations of computability by abacus machine (Boolos–Burgess–Jeffrey (2002)) and by counter machine (Minsky 1967) cover the six recursive function operators: the zero function, the successor function, the identity function, the composition function, primitive recursion (induction), and minimization. The fact that the abacus/counter-machine models can simulate the recursive functions provides the proof that: if a function is "machine computable" then it is "hand-calculable by partial recursion". Kleene's Theorem XXIX: "Every computable partial function φ is partial recursive..." (italics in original, p. 374). The converse appears as his Theorem XXVIII; together these form the proof of their equivalence, Kleene's Theorem XXX.
Algorithm characterization
0.846977
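Proper subtraction (often written x ∸ y, "monus") mentioned above can be sketched in the primitive-recursive style, built from a predecessor function exactly as the recursion schema prescribes. A hypothetical Python illustration — the names pred and monus are chosen here for clarity, not taken from the sources:

```python
def pred(x: int) -> int:
    """Predecessor: pred(0) = 0, pred(n+1) = n."""
    return x - 1 if x > 0 else 0

def monus(x: int, y: int) -> int:
    """Proper subtraction x ∸ y, by primitive recursion on y:
    x ∸ 0 = x;  x ∸ (y+1) = pred(x ∸ y).
    The result never goes below zero."""
    result = x
    for _ in range(y):
        result = pred(result)
    return result

print(monus(7, 3))  # 4
print(monus(3, 7))  # 0
```

The loop plays the role of the primitive-recursion operator: each pass applies pred once, mirroring the step equation x ∸ (y+1) = pred(x ∸ y).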
565
100, The Undecidable). It would appear from this, and the following, that as far as Gödel was concerned, the Turing machine was sufficient and the lambda calculus was "much less suitable." He goes on to make the point that, with regards to limitations on human reason, the jury is still out: ("Note that the question of whether there exist finite non-mechanical procedures** not equivalent with any algorithm, has nothing whatsoever to do with the adequacy of the definition of "formal system" and of "mechanical procedure.") (p.
Algorithm characterization
0.846977
566
J. Math., vol. 58 (1936) ).Church's definitions encompass so-called "recursion" and the "lambda calculus" (i.e. the λ-definable functions).
Algorithm characterization
0.846977
567
due to "A. M. Turing's work a precise and unquestionably adequate definition of the general notion of formal system can now be given a completely general version of Theorems VI and XI is now possible." (p. 616).
Algorithm characterization
0.846977
568
In calculus, constants are treated in several different ways depending on the operation. For example, the derivative (rate of change) of a constant function is zero: because constants, by definition, do not change, their rate of change is zero.
Constant (mathematics)
0.846968
569
The context-dependent nature of the concept of "constant" can be seen in this example from elementary calculus: d/dx 2^x = lim_{h→0} (2^{x+h} − 2^x)/h = lim_{h→0} 2^x (2^h − 1)/h = 2^x · lim_{h→0} (2^h − 1)/h (since x is constant, i.e. does not depend on h) = 2^x · constant, where "constant" means not depending on x. "Constant" means not depending on some variable; not changing as that variable changes. In the first case above, it means not depending on h; in the second, it means not depending on x. A constant in a narrower context could be regarded as a variable in a broader context.
Constant (mathematics)
0.846968
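The "constant" factor lim_{h→0} (2^h − 1)/h in the derivation above can be probed numerically; it approaches ln 2 (this is the standard closed form, since d/dx 2^x = 2^x ln 2). A minimal Python sketch:

```python
import math

# The difference quotient (2^h - 1)/h approaches ln 2 as h -> 0.
# That limit is the "constant" in d/dx 2^x = 2^x * constant:
# constant in x, even though it is the limit of an expression in h.
for h in (1e-2, 1e-4, 1e-6):
    print(h, (2**h - 1) / h)

print(math.log(2))  # 0.6931471805599453
```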
570
Some values occur frequently in mathematics and are conventionally denoted by a specific symbol. These standard symbols and their values are called mathematical constants. Examples include: 0 (zero); 1 (one), the natural number after zero; π (pi), the constant representing the ratio of a circle's circumference to its diameter, approximately equal to 3.141592653589793238462643; e, approximately equal to 2.718281828459045235360287; i, the imaginary unit such that i² = −1; √2 (the square root of 2), the length of the diagonal of a square with unit sides, approximately equal to 1.414213562373095048801688; φ (the golden ratio), approximately equal to 1.618033988749894848204586, or algebraically, (1 + √5)/2.
Constant (mathematics)
0.846968
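The decimal expansions quoted above can be checked (to double precision) against Python's standard math module; a small sketch, where the dictionary layout is purely illustrative:

```python
import math

# Each entry pairs a computed value with the quoted decimal
# expansion, truncated to double precision.
constants = {
    "pi": (math.pi, 3.141592653589793),
    "e": (math.e, 2.718281828459045),
    "sqrt(2)": (math.sqrt(2), 1.4142135623730951),
    "golden ratio": ((1 + math.sqrt(5)) / 2, 1.618033988749895),
}
for name, (computed, quoted) in constants.items():
    assert abs(computed - quoted) < 1e-12
    print(name, computed)

# The imaginary unit satisfies i^2 = -1:
assert 1j * 1j == -1
```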
571
In fact, it turns out that ker φ is the smallest normal subgroup of ⟨r, f⟩ containing these three elements; in other words, all relations are consequences of these three. The quotient of the free group by this normal subgroup is denoted ⟨r, f | r⁴ = f² = (r·f)² = 1⟩. This is called a presentation of D₄ by generators and relations, because the first isomorphism theorem for φ yields an isomorphism ⟨r, f | r⁴ = f² = (r·f)² = 1⟩ → D₄. A presentation of a group can be used to construct the Cayley graph, a graphical depiction of a discrete group.
Elementary group theory
0.846963
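The three defining relations r⁴ = f² = (r·f)² = 1 can be verified concretely by modelling r and f as permutations of the square's four corners; a small Python sketch, where the corner labelling 0–3 and the choice of reflection axis are assumptions made for illustration:

```python
def compose(p, q):
    """Permutation composition: (p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

identity = (0, 1, 2, 3)   # the four corners of the square
r = (1, 2, 3, 0)          # rotation by 90 degrees
f = (0, 3, 2, 1)          # reflection fixing corners 0 and 2

# Check the defining relations of the presentation of D4:
r2 = compose(r, r)
r4 = compose(r2, r2)
f2 = compose(f, f)
rf = compose(r, f)
rf2 = compose(rf, rf)
assert r4 == identity and f2 == identity and rf2 == identity
print("r^4 = f^2 = (r*f)^2 = 1 holds")
```

Since every relation in D₄ is a consequence of these three, checking them on one faithful permutation model is enough to confirm the permutations really realize the presented group.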
572
Similar examples can be formed from any other topological field, such as the field of complex numbers or the field of p-adic numbers. These examples are locally compact, so they have Haar measures and can be studied via harmonic analysis. Other locally compact topological groups include the group of points of an algebraic group over a local field or adele ring; these are basic to number theory. Galois groups of infinite algebraic field extensions are equipped with the Krull topology, which plays a role in infinite Galois theory. A generalization used in algebraic geometry is the étale fundamental group.
Elementary group theory
0.846963
573
Some topological spaces may be endowed with a group law. In order for the group law and the topology to interweave well, the group operations must be continuous functions; informally, g·h and g⁻¹ must not vary wildly if g and h vary only a little. Such groups are called topological groups, and they are the group objects in the category of topological spaces. The most basic examples are the group of real numbers under addition and the group of nonzero real numbers under multiplication.
Elementary group theory
0.846963
574
Adjoining inverses of all elements of the monoid (ℤ ∖ {0}, ·) produces a group (ℚ ∖ {0}, ·), and likewise adjoining inverses to any (abelian) monoid M produces a group known as the Grothendieck group of M. A group can be thought of as a small category with one object x in which every morphism is an isomorphism: given such a category, the set Hom(x, x) is a group; conversely, given a group G, one can build a small category with one object x in which Hom(x, x) ≅ G. More generally, a groupoid is any small category in which every morphism is an isomorphism. In a groupoid, the set of all morphisms in the category is usually not a group, because the composition is only partially defined: fg is defined only when the source of f matches the target of g. Groupoids arise in topology (for instance, the fundamental groupoid) and in the theory of stacks. Finally, it is possible to generalize any of these concepts by replacing the binary operation with an n-ary operation (i.e., an operation taking n arguments, for some nonnegative integer n). With the proper generalization of the group axioms, this gives a notion of n-ary group.
Elementary group theory
0.846963
575
More general structures may be defined by relaxing some of the axioms defining a group. The table gives a list of several structures generalizing groups. For example, if the requirement that every element has an inverse is eliminated, the resulting algebraic structure is called a monoid. The natural numbers ℕ (including zero) under addition form a monoid, as do the nonzero integers under multiplication (ℤ ∖ {0}, ·).
Elementary group theory
0.846963
576
Many number systems, such as the integers and the rationals, enjoy a naturally given group structure. In some cases, such as with the rationals, both addition and multiplication operations give rise to group structures. Such number systems are predecessors to more general algebraic structures known as rings and fields. Further abstract algebraic concepts such as modules, vector spaces and algebras also form groups.
Elementary group theory
0.846963
577
After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups.
Elementary group theory
0.846963
578
Point groups describe symmetry in molecular chemistry. The concept of a group arose in the study of polynomial equations, starting with Évariste Galois in the 1830s, who introduced the term group (French: groupe) for the symmetry group of the roots of an equation, now called a Galois group.
Elementary group theory
0.846963
579
Because the concept of groups is ubiquitous in numerous areas both within and outside mathematics, some authors consider it as a central organizing principle of contemporary mathematics. In geometry, groups arise naturally in the study of symmetries and geometric transformations: the symmetries of an object form a group, called the symmetry group of the object, and the transformations of a given type form a general group. Lie groups appear as symmetry groups in geometry, and also in the Standard Model of particle physics. The Poincaré group is a Lie group consisting of the symmetries of spacetime in special relativity.
Elementary group theory
0.846963
580
When a group G has a normal subgroup N other than {1} and G itself, questions about G can sometimes be reduced to questions about N and G/N. A nontrivial group is called simple if it has no such normal subgroup. Finite simple groups are to finite groups as prime numbers are to positive integers: they serve as building blocks, in a sense made precise by the Jordan–Hölder theorem.
Elementary group theory
0.846963
581
The institute awards numerous prizes to acknowledge contributions to physics research, education and application.
Physics Web
0.846707
582
In 1960, the Physical Society and the Institute of Physics merged, creating a single organization with the name The Institute of Physics and the Physical Society, with John Cockcroft elected as its first president. The new society combined the learned society tradition of the Physical Society with the professional body tradition of the Institute of Physics. Under the leadership of Thomas E. Nevin, an Irish branch of the Institute of Physics was formed in 1964. Upon being granted a royal charter in 1970, the organization was renamed as the Institute of Physics.
Physics Web
0.846707
583
As with the Physical Society, dissemination of knowledge was fundamental to the institute, which began publication of the Journal of Scientific Instruments in 1922. The annual Reports on Progress in Physics began in 1934 and is still published today. In 1952, the institute began the "Graduateship" course and examination, which ran until 1984, when the expansion of access to universities removed demand. In 1932, the Physical Society of London merged with the Optical Society to create the Physical Society.
Physics Web
0.846707
584
In the early part of the 20th century, the profession of "physicist" emerged, partly as a result of the increased demand for scientists during the First World War. In 1917, following discussions between William Eccles and William Duddell, the Council of the Physical Society, along with the Faraday Society, the Optical Society, and the Roentgen Society, started to explore ways of improving the professional status of physicists, and in 1918, the Institute of Physics was created at a meeting of the four societies held at King's College London. In 1919, Sir Richard Glazebrook was elected first president of the institute, and the inaugural meeting of the Institute took place in 1921.
Physics Web
0.846707
585
The Institute of Physics was formed in 1960 from the merger of the Physical Society, founded as the Physical Society of London in 1874, and the Institute of Physics, founded in 1918. The Physical Society of London had been officially formed on 14 February 1874 by Frederick Guthrie, following the canvassing of opinion of Fellows of the Royal Society by the physicist and parapsychologist Sir William Barrett at the British Association for the Advancement of Science meeting in Bradford in 1873, with John Hall Gladstone as its first president. From its beginning, the society held open meetings and demonstrations and published Proceedings of the Physical Society. Meetings were held every two weeks, mainly at Imperial College London. The first Guthrie lecture, now known as the Faraday Medal and Prize, was delivered in 1914.
Physics Web
0.846707
586
The Institute of Physics (IOP) is a UK-based learned society and professional body that works to advance physics education, research and application. It was founded in 1874 and has a worldwide membership of over 20,000. The IOP is the Physical Society for the UK and Ireland and supports physics in education, research and industry. In addition to this, the IOP provides services to its members including careers advice and professional development and grants the professional qualification of Chartered Physicist (CPhys), as well as Chartered Engineer (CEng) as a nominated body of the Engineering Council. The IOP's publishing company, IOP Publishing, publishes 85 academic titles.
Physics Web
0.846707
587
In 2015, the membership of the Institute of Physics was 86% male at MInstP and 91% male at FInstP; 85% of Honorary Fellows were male. The institute grants academic dress to the various grades of membership. Those who have passed the institute's graduateship examination (offered 1952–1984) are entitled to a violet damask Oxford burgon-shaped hood.
Physics Web
0.846707
588
The IOP has 23,000 members split across four grades of membership: Associate Member (AMInstP), Member (entitled to use the postnominals MInstP), Fellow (entitled to use the postnominals FInstP) and Honorary Fellow (entitled to use the postnominals Hon.FInstP). Undergraduates, apprentices and trainees can become Associate Members, and qualification for MInstP is normally by completion of an undergraduate degree that is "recognised" by the institute – this covers almost all UK physics degrees. An MInstP can become an FInstP by making "an outstanding contribution to the profession." These four grades of membership replaced the previous seven grades in January 2018; these changes removed affiliate memberships for undergraduates (they are now Associate Members), removed the post-nominal letters AMInstP, and made Associate Members voting members.
Physics Web
0.846707
589
Sponsorship is provided by EDF Energy, with support from the British Science Association. IOP runs the Stimulating Physics Network, aimed at increasing the uptake of physics at A-level, and administers teacher-training scholarships funded by the Department for Education. In March 2019, the Institute of Physics launched the Bell Burnell Graduate Scholarship Fund with the goal of helping female and black students to become physics researchers. The program is funded by Jocelyn Bell Burnell and provides aid to low-income students as well as those who qualify for refugee status. Bell won the Special Breakthrough Prize in Fundamental Physics in 2018 and donated the entire £2.3 million prize money to launch the fund. The institute is also interested in the ethical impact of physics, as is witnessed through the Physics and Ethics Education Project.
Physics Web
0.846707
590
The IOP provides an important educational service for secondary schools in the UK. This is the Lab in a Lorry, a mobile laboratory in a large articulated truck. This has three small laboratories where schoolchildren can try out various hands-on experiments, using physics equipment not usually available in the average school laboratory.
Physics Web
0.846707
591
The IOP accredits undergraduate degrees (BSc/BA and MSci/MPhys) in physics in British and Irish universities. At post-16 level, the IOP developed the 'Advancing Physics' A-level course, in conjunction with the OCR examining board, which is accredited by the Qualifications and Curriculum Authority. Advancing Physics was sold to Oxford University Press in January 2011. The IOP also developed the Integrated Sciences degree, which is run at four universities in England.
Physics Web
0.846707
592
Since its formation, the institute has had its headquarters in London. The early meetings of the Physical Society of London were hosted in South Kensington, until a permanent base was found in Burlington House in 1894. In 1927, the Institute of Physics acquired, rent-free, 1 Lowther Gardens; it was joined there by the Physical Society in 1939. During the Second World War, the institute moved temporarily to the University of Reading.
Physics Web
0.846707
593
IOP Publishing is a wholly owned subsidiary of the IOP that publishes 85 academic titles. Any profits generated by the publishing company are used to fund the charitable activities of the IOP. It won the Queen's Award for Export Achievement in 1990, 1995 and 2000 and publishes a large number of journals, websites and magazines, such as the Physics World membership magazine of the Institute of Physics, which was launched in 1988.
Physics Web
0.846707
594
It is more common to use the convention that a clockwise bending moment to the left of the point under consideration is taken as positive. This then corresponds to the second derivative of a function which, when positive, indicates a curvature that is 'lower at the centre' i.e. sagging. When defining moments and curvatures in this way calculus can be more readily used to find slopes and deflections.
Bending Moment
0.846557
595
It is therefore clear that a point of zero bending moment within a beam is a point of contraflexure—that is, the point of transition from hogging to sagging or vice versa. Moments and torques are measured as a force multiplied by a distance, so their unit is the newton-metre (N·m) or pound-foot (lb·ft). The concept of bending moment is very important in engineering (particularly in civil and mechanical engineering) and physics.
Bending Moment
0.846557
596
In spectroscopy and quantum chemistry, the multiplicity of an energy level is defined as 2S+1, where S is the total spin angular momentum. States with multiplicity 1, 2, 3, 4, 5 are respectively called singlets, doublets, triplets, quartets and quintets. In the ground state of an atom or molecule, the unpaired electrons usually all have parallel spin. In this case the multiplicity is also equal to the number of unpaired electrons plus one.
Multiplicity (chemistry)
0.846538
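The rule "multiplicity = number of unpaired electrons + 1" follows from 2S+1 when each unpaired electron contributes spin 1/2 with all spins parallel; a small Python sketch, where the function name is illustrative:

```python
def multiplicity(unpaired_electrons: int) -> int:
    """Spin multiplicity 2S + 1, assuming all unpaired electrons
    have parallel spin (each contributes S = 1/2)."""
    s = unpaired_electrons * 0.5
    return int(2 * s + 1)

names = {1: "singlet", 2: "doublet", 3: "triplet",
         4: "quartet", 5: "quintet"}
for n in range(5):
    print(n, "unpaired electrons ->", names[multiplicity(n)])
```

With no unpaired electrons (S = 0) this gives a singlet; with two parallel unpaired electrons (S = 1) it gives a triplet, matching the singlet/triplet carbene distinction in the neighbouring passage.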
597
In organic chemistry, carbenes are molecules which have carbon atoms with only six electrons in their valence shells and therefore disobey the octet rule. Carbenes generally split into singlet carbenes and triplet carbenes, named for their spin multiplicities. Both have two non-bonding electrons; in singlet carbenes these exist as a lone pair and have opposite spins so that there is no net spin, while in triplet carbenes these electrons have parallel spins.
Multiplicity (chemistry)
0.846538
598
Light detectors, such as photographic plates or CCDs, measure only the intensity of the light that hits them. This measurement is incomplete (even when neglecting other degrees of freedom such as polarization and angle of incidence) because a light wave has not only an amplitude (related to the intensity), but also a phase (related to the direction), and polarization which are systematically lost in a measurement. In diffraction or microscopy experiments, the phase part of the wave often contains valuable information on the studied specimen. The phase problem constitutes a fundamental limitation ultimately related to the nature of measurement in quantum mechanics.
Phase problem
0.846504
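The loss of phase in an intensity measurement can be illustrated with complex amplitudes: a detector that records only |z|² cannot distinguish a wave z from z·e^{iθ} for any phase θ. A minimal Python sketch:

```python
import cmath

# A detector records only the intensity |z|^2, so any overall
# phase factor e^{i*theta} is invisible to it.
z = 2 + 1j
for theta in (0.0, 0.7, 3.1):
    w = z * cmath.exp(1j * theta)  # same wave, shifted in phase
    print(theta, abs(w) ** 2)      # same intensity every time
```

This is the simplest face of the phase problem: the measured numbers determine |z| but leave the phase of z entirely undetermined, which is why diffraction data alone do not fix a structure.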
599
In physics, the phase problem is the problem of loss of information concerning the phase that can occur when making a physical measurement. The name comes from the field of X-ray crystallography, where the phase problem has to be solved for the determination of a structure from diffraction data. The phase problem is also met in the fields of imaging and signal processing. Various approaches of phase retrieval have been developed over the years.
Phase problem
0.846504