Dataset columns: id (int32, range 0 to 100k); text (string, lengths 21 to 3.54k); source (string, lengths 1 to 124); similarity (float32, range 0.78 to 0.88).
3,000
Probability theory or probability calculus is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event.
Theory of probabilities
0.832141
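The set of axioms mentioned in this entry is standard and can be stated compactly. The following is the usual Kolmogorov formulation for a probability space (Ω, ℱ, P); it is supplied here as background, not quoted from the excerpt:

P(E) \ge 0 \ \text{for every event } E \in \mathcal{F}, \qquad
P(\Omega) = 1, \qquad
P\Big(\bigcup_{i=1}^{\infty} E_i\Big) = \sum_{i=1}^{\infty} P(E_i) \ \text{for pairwise disjoint } E_1, E_2, \ldots \in \mathcal{F}.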
3,001
Eventually, analytical considerations compelled the incorporation of continuous variables into the theory. This culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, and measure theory and presented his axiom system for probability theory in 1933. This became the mostly undisputed axiomatic basis for modern probability theory; but alternatives exist, such as the adoption of finite rather than countable additivity by Bruno de Finetti.
Theory of probabilities
0.832141
3,002
The modern mathematical theory of probability has its roots in attempts to analyze games of chance by Gerolamo Cardano in the sixteenth century, and by Pierre de Fermat and Blaise Pascal in the seventeenth century (for example the "problem of points"). Christiaan Huygens published a book on the subject in 1657. In the 19th century, what is considered the classical definition of probability was completed by Pierre Laplace. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial.
Theory of probabilities
0.832141
3,003
The advantage of two-phase electrical power over single-phase was that it allowed for simple, self-starting electric motors. In the early days of electrical engineering, it was easier to analyze and design two-phase systems where the phases were completely separated. It was not until the invention of the method of symmetrical components in 1918 that polyphase power systems had a convenient mathematical tool for describing unbalanced load cases.
Two-phase electric power
0.832138
3,004
The order condition is necessary but not sufficient for identification. The rank condition is a necessary and sufficient condition for identification. In the case of only exclusion restrictions, it must "be possible to form at least one nonvanishing determinant of order M − 1 from the columns of A corresponding to the variables excluded a priori from that equation" (Fisher 1966, p. 40), where A is the matrix of coefficients of the equations. This is the generalization in matrix algebra of the requirement "while it does enter the other equation" mentioned above (in the line above the formulas).
Parameter identification problem
0.832115
3,005
The neurobiological view sees plants as information-processing organisms with rather complex processes of communication occurring throughout the individual plant. It studies how environmental information is gathered, processed, integrated, and shared (sensory plant biology) to enable these adaptive and coordinated responses (plant behaviour); and how sensory perceptions and behavioural events are 'remembered' in order to allow predictions of future activities upon the basis of past experiences. Plants, it is claimed by some plant physiologists, are as sophisticated in behaviour as animals, but this sophistication has been masked by the time scales of plants' responses to stimuli, which are typically many orders of magnitude slower than those of animals. It has been argued that although plants are capable of adaptation, it should not be called intelligence per se, as plant neurobiologists rely primarily on metaphors and analogies to argue that complex responses in plants can only be produced by intelligence.
Plant perception (physiology)
0.832113
3,006
The common occurrence of plasmodesmata in plants "poses a problem for signaling from an electrophysiological point of view", since extensive electrical coupling would preclude the need for any cell-to-cell transport of 'neurotransmitter-like' compounds. The authors call for an end to "superficial analogies and questionable extrapolations" if the concept of "plant neurobiology" is to benefit the research community. Several responses to this criticism have attempted to clarify that the term "plant neurobiology" is a metaphor and that metaphors have proved useful on previous occasions. Plant ecophysiology describes this phenomenon.
Plant perception (physiology)
0.832112
3,007
The breadth of fields of plant science represented by these researchers reflects the fact that the vast majority of the plant science research community rejects plant neurobiology as a legitimate notion. Their main arguments are that: "Plant neurobiology does not add to our understanding of plant physiology, plant cell biology or signaling". "There is no evidence for structures such as neurons, synapses or a brain in plants".
Plant perception (physiology)
0.832112
3,008
Plant sensory and response systems have been compared to the neurobiological processes of animals. Plant neurobiology concerns mostly the sensory adaptive behaviour of plants and plant electrophysiology. Indian scientist J. C. Bose is credited as the first person to research and talk about the neurobiology of plants. Many plant scientists and neuroscientists, however, view the term "plant neurobiology" as a misnomer, because plants do not have neurons. The ideas behind plant neurobiology were criticised in a 2007 article published in Trends in Plant Science by Amedeo Alpi and 35 other scientists, including such eminent plant biologists as Gerd Jürgens, Ben Scheres, and Chris Sommerville.
Plant perception (physiology)
0.832112
3,009
Plant perception is the ability of plants to sense and respond to the environment by adjusting their morphology and physiology. Botanical research has revealed that plants are capable of reacting to a broad range of stimuli, including chemicals, gravity, light, moisture, infections, temperature, oxygen and carbon dioxide concentrations, parasite infestation, disease, physical disruption, sound, and touch. The scientific study of plant perception is informed by numerous disciplines, such as plant physiology, ecology, and molecular biology.
Plant perception (physiology)
0.832112
3,010
A certificate for a root is a computational proof of the correctness of a candidate solution. For instance, a certificate may consist of an approximate solution x, a region R containing x, and a proof that R contains exactly one solution to the system of equations. In this context, an a priori numerical certificate is a certificate in the sense of correctness in computer science. On the other hand, an a posteriori numerical certificate operates only on solutions, regardless of how they are computed. Hence, a posteriori certification is different from algorithmic correctness – for an extreme example, an algorithm could randomly generate candidates and attempt to certify them as approximate roots using a posteriori certification.
Numerical certification
0.832108
3,011
Suppose G : ℝⁿ → ℝⁿ is a function whose fixed points correspond to the roots of F. For example, the Newton operator has this property. Suppose that I is a region. Then: If G maps I into itself, i.e., G(I) ⊆ I, then by the Brouwer fixed-point theorem, G has at least one fixed point in I, and hence F has at least one root in I. If G is contractive in a region containing I, then there is at most one root in I. There are versions of the following methods over the complex numbers, but both the interval arithmetic and conditions must be adjusted to reflect this case.
Numerical certification
0.832108
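A minimal Python sketch of the fixed-point test described in this entry, for a one-dimensional F. The function F(x) = x² − 2, the interval [1, 2], and all names are illustrative assumptions, not part of any certification package; the point is only to show the G(I) ⊆ I check that the passage explains.

# Minimal sketch (illustrative assumptions): certify a root of F(x) = x**2 - 2
# inside an interval J by checking that an interval Newton operator maps J into itself.

def F(x):
    return x * x - 2.0

def dF_enclosure(lo, hi):
    # Interval enclosure of F'(x) = 2x on [lo, hi] (valid because 2x is monotone).
    return (2.0 * lo, 2.0 * hi)

def interval_newton(lo, hi):
    # N(J) = m - F(m) / F'(J), evaluated with crude interval arithmetic.
    m = 0.5 * (lo + hi)
    dlo, dhi = dF_enclosure(lo, hi)
    if dlo <= 0.0 <= dhi:
        raise ValueError("derivative enclosure contains zero; operator undefined")
    candidates = [m - F(m) / d for d in (dlo, dhi)]
    return min(candidates), max(candidates)

J = (1.0, 2.0)                      # region suspected to contain sqrt(2)
N = interval_newton(*J)
if J[0] <= N[0] and N[1] <= J[1]:   # G(I) ⊆ I: Brouwer guarantees a fixed point
    print("certified: exactly one root of F in", J)
else:
    print("inconclusive for", J)

Because the derivative enclosure excludes zero, F is strictly monotone on J, so the containment yields not just existence but uniqueness of the root in J.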
3,012
Numerical certification is the process of verifying the correctness of a candidate solution to a system of equations. In (numerical) computational mathematics, such as numerical algebraic geometry, candidate solutions are computed algorithmically, but there is the possibility that errors have corrupted the candidates. For instance, in addition to the inexactness of input data and candidate solutions, numerical errors or errors in the discretization of the problem may result in corrupted candidate solutions. The goal of numerical certification is to provide a certificate which proves which of these candidates are, indeed, approximate solutions.
Numerical certification
0.832108
3,013
Numerical algebraic geometry solves polynomial systems using homotopy continuation and path tracking methods. By monitoring the condition number for a tracked homotopy at every step, and ensuring that no two solution paths ever intersect, one can compute a numerical certificate along with a solution. This scheme is called a priori path tracking. Non-certified numerical path tracking relies on heuristic methods for controlling time step size and precision. In contrast, a priori certified path tracking goes beyond heuristics to provide step size control that guarantees that for every step along the path, the current point is within the domain of quadratic convergence for the current path.
Numerical certification
0.832108
3,014
In practice, any interval containing F′(J) can be used in this computation. If x is a root of F, then by the mean value theorem, there is some c ∈ J such that F(m(J)) − F′(c)(m(J) − x) = F(x) = 0.
Numerical certification
0.832108
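Rearranging the identity in this entry gives the interval Newton step commonly used in certification; the operator name N(J) below is notation introduced here, not taken from the excerpt:

F(m(J)) - F'(c)\,(m(J) - x) = 0
\;\Longrightarrow\;
x = m(J) - \frac{F(m(J))}{F'(c)} \;\in\; N(J) := m(J) - \frac{F(m(J))}{F'(J)},

so every root of F contained in J also lies in N(J), and the intersection J ∩ N(J) gives a (usually smaller) enclosure of the root.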
3,015
The aim is to pack 27 cuboids with side lengths A, B, C into a box of side length A + B + C, subject to two constraints: 1) A, B, C must not be equal; 2) the smallest of A, B, C must be larger than (A + B + C)/4. One possibility would be A = 18, B = 20, C = 22 – the box would then have to have the dimensions 60×60×60. Modern tools such as laser cutters allow the creation of complex two-dimensional puzzles made of wood or acrylic plastic. In recent times this has become predominant and puzzles of extraordinarily decorative geometry have been designed.
Mechanical puzzles
0.832085
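The constraints in the packing puzzle above are simple enough to verify mechanically; a short illustrative Python check for the A = 18, B = 20, C = 22 example (the function name is made up for this sketch):

# Illustrative check of the packing constraints for the 18/20/22 example.
def valid_sides(a, b, c):
    distinct = len({a, b, c}) == 3                 # 1) A, B, C must not be equal
    big_enough = min(a, b, c) > (a + b + c) / 4    # 2) smallest side > (A+B+C)/4
    return distinct and big_enough

a, b, c = 18, 20, 22
print(valid_sides(a, b, c))   # True: 18 > 60/4 = 15 and all sides differ
print(a + b + c)              # 60, so the box is 60 x 60 x 60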
3,016
For puzzles of this kind, the goal is to disentangle a metal or string loop from an object. Topology plays an important role with these puzzles. The image shows a version of the derringer puzzle.
Mechanical puzzles
0.832085
3,017
Tertiary is a term used in organic chemistry to classify various types of compounds (e.g. alcohols, alkyl halides, amines) or reactive intermediates (e.g. alkyl radicals, carbocations).
Tertiary (chemistry)
0.83207
3,018
BIOSIS Previews Biochemistry & Biophysics Citation Index Science Citation Index Current Contents/Physical, Chemical & Earth Sciences Chemical Abstracts Service Advanced Polymers Abstracts BIOBASE Biotechnology & Bioengineering Abstracts Compendex Embase Scopus Ceramic Abstracts Civil Engineering Abstracts Earthquake Engineering Abstracts Engineered Materials Abstracts International Aerospace Abstracts & Database MEDLINE/PubMed Polymer Library
Macromolecular Bioscience
0.832068
3,019
Macromolecular Bioscience is a monthly peer-reviewed scientific journal covering polymer science. It publishes Reviews, Feature Articles, Communications, and Full Papers at the intersection of polymer and materials sciences with life science and medicine. The editorial office is in Weinheim, Germany. The editor-in-chief is Anne Pfisterer. According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.979.
Macromolecular Bioscience
0.832068
3,020
AI researcher Fei-Fei Li began working on the idea for ImageNet in 2006. At a time when most AI research focused on models and algorithms, Li wanted to expand and improve the data available to train AI algorithms. In 2007, Li met with Princeton professor Christiane Fellbaum, one of the creators of WordNet, to discuss the project. As a result of this meeting, Li went on to build ImageNet starting from the word database of WordNet and using many of its features. As an assistant professor at Princeton, Li assembled a team of researchers to work on the ImageNet project. They used Amazon Mechanical Turk to help with the classification of images. They presented their database for the first time as a poster at the 2009 Conference on Computer Vision and Pattern Recognition (CVPR) in Florida.
ImageNet challenge
0.83206
3,021
On 30 September 2012, a convolutional neural network (CNN) called AlexNet achieved a top-5 error of 15.3% in the ImageNet 2012 Challenge, more than 10.8 percentage points lower than that of the runner up. This was made feasible due to the use of graphics processing units (GPUs) during training, an essential ingredient of the deep learning revolution. According to The Economist, "Suddenly people started to pay attention, not just within the AI community but across the technology industry as a whole." In 2015, AlexNet was outperformed by Microsoft's very deep CNN with over 100 layers, which won the ImageNet 2015 contest.
ImageNet challenge
0.83206
3,022
Stimulus–response models are applied in international relations, psychology, risk assessment, neuroscience, neurally-inspired system design, and many other fields. Pharmacological dose–response relationships are an application of stimulus–response models. Another field this model can be applied to is psychological problems/disorders such as Tourette syndrome. Research shows Gilles de la Tourette syndrome (GTS) can be characterized by enhanced cognitive functions related to creating, modifying and maintaining connections between stimuli and responses (S–R links).
Stimulus–response model
0.832046
3,023
Many characterizations/definitions of mechanisms in the philosophy of science/biology have been provided in the past decades. For example, one influential characterization of neuro- and molecular biological mechanisms by Peter K. Machamer, Lindley Darden and Carl Craver is as follows: mechanisms are entities and activities organized such that they are productive of regular changes from start to termination conditions. Other characterizations have been proposed by Stuart Glennan (1996, 2002), who articulates an interactionist account of mechanisms, and William Bechtel (1993, 2006), who emphasizes parts and operations. The characterization by Machamer et al. is as follows: mechanisms are entities and activities organized such that they are productive of changes from start conditions to termination conditions. There are three distinguishable aspects of this characterization: Ontic aspect The ontic constituency of biological mechanisms includes entities and activities.
Mechanism (biology)
0.832028
3,024
Mechanisms in science/biology have reappeared as a subject of philosophical analysis and discussion in the last several decades because of a variety of factors, many of which relate to metascientific issues such as explanation and causation. For example, the decline of Covering Law (CL) models of explanation, e.g., Hempel's deductive-nomological model, has stimulated interest in how mechanisms might play an explanatory role in certain domains of science, especially higher-level disciplines such as biology (i.e., neurobiology, molecular biology, neuroscience, and so on). This is not just because of the philosophical problem of giving some account of what "laws of nature" are, which CL models encounter, but also the incontrovertible fact that most biological phenomena are not characterizable in nomological terms (i.e., in terms of lawful relationships). For example, protein biosynthesis does not occur according to any law, and therefore, on the DN model, no explanation for the biosynthesis phenomenon could be given.
Mechanism (biology)
0.832028
3,025
In the science of biology, a mechanism is a system of causally interacting parts and processes that produce one or more effects. Scientists explain phenomena by describing mechanisms that could produce the phenomena. For example, natural selection is a mechanism of biological evolution; other mechanisms of evolution include genetic drift, mutation, and gene flow. In ecology, mechanisms such as predation and host-parasite interactions produce change in ecological systems. In practice, no description of a mechanism is ever complete because not all details of the parts and processes of a mechanism are fully known. For example, natural selection is a mechanism of evolution that includes countless, inter-individual interactions with other individuals, components, and processes of the environment in which natural selection operates.
Mechanism (biology)
0.832028
3,026
Protein production is the biotechnological process of generating a specific protein. It is typically achieved by the manipulation of gene expression in an organism such that it expresses large amounts of a recombinant gene. This includes the transcription of the recombinant DNA to messenger RNA (mRNA), the translation of mRNA into polypeptide chains, which are ultimately folded into functional proteins and may be targeted to specific subcellular or extracellular locations. Protein production systems (also known as expression systems) are used in the life sciences, biotechnology, and medicine. Molecular biology research uses numerous proteins and enzymes, many of which are from expression systems; particularly DNA polymerase for PCR, reverse transcriptase for RNA analysis, restriction endonucleases for cloning, and to make proteins that are screened in drug discovery as biological targets or as potential drugs themselves. There are also significant applications for expression systems in industrial fermentation, notably the production of biopharmaceuticals such as human insulin to treat diabetes, and to manufacture enzymes.
Protein production (biotechnology)
0.832006
3,027
Dynamic programming applied to each resulting matrix determines a series of optimal local alignments which are then summed into a "summary" matrix to which dynamic programming is applied again to determine the overall structural alignment. SSAP originally produced only pairwise alignments but has since been extended to multiple alignments as well. It has been applied in an all-to-all fashion to produce a hierarchical fold classification scheme known as CATH (Class, Architecture, Topology, Homology), which has been used to construct the CATH Protein Structure Classification database.
Protein structural alignment
0.832001
3,028
MAMMOTH approaches the alignment problem from a different objective than almost all other methods. Rather than trying to find an alignment that maximally superimposes the largest number of residues, it seeks the subset of the structural alignment least likely to occur by chance. To do this it marks a local motif alignment with flags to indicate which residues simultaneously satisfy more stringent criteria: 1) Local structure overlap 2) regular secondary structure 3) 3D-superposition 4) same ordering in primary sequence. It converts the statistics of the number of residues with high-confidence matches and the size of the protein to compute an Expectation value for the outcome by chance.
Protein structural alignment
0.832001
3,029
In number theory, the integer complexity of an integer is the smallest number of ones that can be used to represent it using ones and any number of additions, multiplications, and parentheses. It is always within a constant factor of the logarithm of the given integer.
Integer complexity
0.831993
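A small, unoptimized dynamic-programming sketch in Python that computes integer complexity directly from the definition in this entry (the implementation details are an assumption for illustration, not a reference algorithm):

# Integer complexity: fewest 1s needed to build n using +, * and parentheses.
def integer_complexity(limit):
    cost = [0, 1] + [float("inf")] * (limit - 1)   # cost[1] = 1
    for n in range(2, limit + 1):
        # best way to write n as a sum a + (n - a)
        for a in range(1, n // 2 + 1):
            cost[n] = min(cost[n], cost[a] + cost[n - a])
        # best way to write n as a product d * (n // d)
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                cost[n] = min(cost[n], cost[d] + cost[n // d])
    return cost

c = integer_complexity(10)
print(c[6])    # 5, e.g. 6 = (1+1)*(1+1+1)
print(c[10])   # 7, e.g. 10 = (1+1)*(1+1+1+1+1)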
3,030
Randomized algorithms that solve the problem in linear time are known, in Euclidean spaces whose dimension is treated as a constant for the purposes of asymptotic analysis. This is significantly faster than the O(n²) time (expressed here in big O notation) that would be obtained by a naive algorithm of finding distances between all pairs of points and selecting the smallest. It is also possible to solve the problem without randomization, in random-access machine models of computation with unlimited memory that allow the use of the floor function, in near-linear O(n log log n) time. In even more restricted models of computation, such as the algebraic decision tree, the problem can be solved in the somewhat slower O(n log n) time bound, and this is optimal for this model, by a reduction from the element uniqueness problem. Both sweep line algorithms and divide-and-conquer algorithms with this slower time bound are commonly taught as examples of these algorithm design techniques.
Closest pair of points problem
0.831987
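For comparison with the faster methods described in this entry, the naive quadratic-time algorithm it mentions is just a scan over all pairs; a brief Python sketch (the point data is invented for the example):

# Naive O(n^2) closest-pair baseline: compare every pair and keep the minimum.
from itertools import combinations
from math import dist, inf

def closest_pair_naive(points):
    best, best_pair = inf, None
    for p, q in combinations(points, 2):
        d = dist(p, q)
        if d < best:
            best, best_pair = d, (p, q)
    return best, best_pair

pts = [(0, 0), (5, 4), (1, 1), (9, 9), (1, 2)]
print(closest_pair_naive(pts))   # (1.0, ((1, 1), (1, 2)))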
3,031
Every left or right coset of H has the same number of elements (or cardinality in the case of an infinite H) as H itself. Furthermore, the number of left cosets is equal to the number of right cosets and is known as the index of H in G, written as [G : H]. Lagrange's theorem allows us to compute the index in the case where G and H are finite: [G : H] = |G| / |H|. This equation also holds in the case where the groups are infinite, although the meaning may be less clear.
Right coset
0.831981
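A tiny worked example of the statements in this entry, using the additive group of integers modulo 6 and the subgroup H = {0, 3} (the particular group is an illustrative choice):

# Cosets of H = {0, 3} in Z/6Z under addition mod 6.
G = set(range(6))
H = {0, 3}

left_cosets = {frozenset((g + h) % 6 for h in H) for g in G}
print(sorted(map(sorted, left_cosets)))   # [[0, 3], [1, 4], [2, 5]]

index = len(G) // len(H)                  # Lagrange: [G : H] = |G| / |H|
print(len(left_cosets) == index == 3)     # True: every coset has |H| = 2 elements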
3,032
Cosets of Q in R are used in the construction of Vitali sets, a type of non-measurable set. Cosets are central in the definition of the transfer. Cosets are important in computational group theory. For example, Thistlethwaite's algorithm for solving Rubik's Cube relies heavily on cosets. In geometry, a Clifford–Klein form is a double coset space Γ\G/H, where G is a reductive Lie group, H is a closed subgroup, and Γ is a discrete subgroup (of G) that acts properly discontinuously on the homogeneous space G/H.
Right coset
0.831981
3,033
The number of left cosets of H in G is equal to the number of right cosets of H in G. This common value is called the index of H in G and is usually denoted by [G : H]. Cosets are a basic tool in the study of groups; for example, they play a central role in Lagrange's theorem that states that for any finite group G, the number of elements of every subgroup H of G divides the number of elements of G. Cosets of a particular type of subgroup (a normal subgroup) can be used as the elements of another group called a quotient group or factor group. Cosets also appear in other areas of mathematics such as vector spaces and error-correcting codes.
Right coset
0.831981
3,034
Double bonds are formed by sharing a face between two cubic atoms. This results in sharing four electrons: Triple bonds could not be accounted for by the cubical atom model, because there is no way of having two cubes share three parallel edges. Lewis suggested that the electron pairs in atomic bonds have a special attraction, which results in a tetrahedral structure, as in the figure below (the new location of the electrons is represented by the dotted circles in the middle of the thick edges). This allows the formation of a single bond by sharing a corner, a double bond by sharing an edge, and a triple bond by sharing a face. It also accounts for the free rotation around single bonds and for the tetrahedral geometry of methane.
Cubic atoms
0.831978
3,035
The first nucleotide sequence database was created, previously known as the European Molecular Biology Laboratory (EMBL) Nucleotide Sequence Data Library (now known as the European Nucleotide Archive). The Human Genome Project began in 1988. The project's goal was to sequence and map all the genes in a human, which required the capability to create and utilize a large sequence database.
Sequence database
0.831952
3,036
The National Biomedical Research Foundation (NBRF) was on the cutting edge of utilizing computers for medicine and biology at this time. Dayhoff and her team made use of their facilities for determining amino acid sequences of protein molecules in mainframe computers. The number of discovered sequences continued to grow allowing for a deeper comparative analysis of proteins than ever before. This led to many developments such as, probabilistic models of amino acid substitutions, sequence aligning and phylogenetic trees of evolutionary relationships of proteins.
Sequence database
0.831952
3,037
In the field of bioinformatics, a sequence database is a type of biological database that is composed of a large collection of computerized ("digital") nucleic acid sequences, protein sequences, or other polymer sequences stored on a computer. The UniProt database is an example of a protein sequence database. As of 2013 it contained over 40 million sequences and is growing at an exponential rate. Historically, sequences were published in paper form, but as the number of sequences grew, this storage method became unsustainable.
Sequence database
0.831952
3,038
It is important to note, however, that the accuracy and usability of results will depend greatly on the level of data analysis and the quality of assumptions. Predictive analytics is often defined as predicting at a more detailed level of granularity, i.e., generating predictive scores (probabilities) for each individual organizational element. This distinguishes it from forecasting. For example, "Predictive analytics—Technology that learns from experience (data) to predict the future behavior of individuals in order to drive better decisions." In future industrial systems, the value of predictive analytics will be to predict and prevent potential issues to achieve near-zero break-down and further be integrated into prescriptive analytics for decision optimization.
Predictive classification
0.831943
3,039
Predictive analytics is a set of business intelligence (BI) technologies that uncovers relationships and patterns within large volumes of data that can be used to predict behavior and events. Unlike other BI technologies, predictive analytics is forward-looking, using past events to anticipate the future. Predictive analytics statistical techniques include data modeling, machine learning, AI, deep learning algorithms and data mining.
Predictive classification
0.831943
3,040
Often the focus of analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example, a retailer might be interested in predicting store-level demand for inventory management purposes. Or the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using time series techniques (see below). They can also be addressed via machine learning approaches which transform the original time series into a feature vector space, where the learning algorithm finds patterns that have predictive power.
Predictive classification
0.831943
3,041
In the field of machine learning and specifically the problem of statistical classification, a confusion matrix, also known as error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one; in unsupervised learning it is usually called a matching matrix. Each row of the matrix represents the instances in an actual class while each column represents the instances in a predicted class, or vice versa – both variants are found in the literature. The name stems from the fact that it makes it easy to see whether the system is confusing two classes (i.e. commonly mislabeling one as another). It is a special kind of contingency table, with two dimensions ("actual" and "predicted"), and identical sets of "classes" in both dimensions (each combination of dimension and class is a variable in the contingency table).
Confusion matrix
0.831937
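As a concrete illustration of the row/column layout described in this entry, the following Python sketch tallies a two-class confusion matrix by hand (the labels and predictions are invented for the example):

# Build a confusion matrix: rows = actual class, columns = predicted class.
from collections import Counter

actual    = ["cat", "cat", "dog", "dog", "dog", "cat"]
predicted = ["cat", "dog", "dog", "dog", "cat", "cat"]

classes = sorted(set(actual) | set(predicted))
counts = Counter(zip(actual, predicted))
matrix = [[counts[(a, p)] for p in classes] for a in classes]

print(classes)   # ['cat', 'dog']
for cls, row in zip(classes, matrix):
    print(cls, row)   # cat [2, 1] / dog [1, 2] -> off-diagonal cells are confusions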
3,042
Rotational transitions are important in physics due to the unique spectral lines that result. Because there is a net gain or loss of energy during a transition, electromagnetic radiation of a particular frequency must be absorbed or emitted. This forms spectral lines at that frequency which can be detected with a spectrometer, as in rotational spectroscopy or Raman spectroscopy.
Rotational transition
0.831929
3,043
In quantum mechanics, a rotational transition is an abrupt change in angular momentum. Like all other properties of a quantum particle, angular momentum is quantized, meaning it can only equal certain discrete values, which correspond to different rotational energy states. When a particle loses angular momentum, it is said to have transitioned to a lower rotational energy state. Likewise, when a particle gains angular momentum, a positive rotational transition is said to have occurred.
Rotational transition
0.831929
3,044
The larger a time constant is, the slower the rise or fall of the potential of a neuron. A long time constant can result in temporal summation, or the algebraic summation of repeated potentials. A short time constant rather produces a coincidence detector through spatial summation.
Thermal time constant
0.831916
3,045
In physics and engineering, the time constant, usually denoted by the Greek letter τ (tau), is the parameter characterizing the response to a step input of a first-order, linear time-invariant (LTI) system. The time constant is the main characteristic unit of a first-order LTI system. In the time domain, the usual choice to explore the time response is through the step response to a step input, or the impulse response to a Dirac delta function input.
Thermal time constant
0.831916
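For a first-order LTI system of the kind described above, the step response has a standard closed form; writing the final value as y_∞ (notation introduced here, not taken from the excerpt):

y(t) = y_\infty \left(1 - e^{-t/\tau}\right), \qquad y(\tau) = (1 - e^{-1})\, y_\infty \approx 0.632\, y_\infty ,

so after one time constant the output has covered roughly 63% of its final value, and after about 5τ it is within roughly 1%.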
3,046
Microbiomes are among the main targets of single cell genomics due to the difficulty of culturing the majority of microorganisms in most environments. Single-cell genomics is a powerful way to obtain microbial genome sequences without cultivation. This approach has been widely applied on marine, soil, subsurface, organismal, and other types of microbiomes in order to address a wide array of questions related to microbial ecology, evolution, public health and biotechnology potential. Cancer sequencing is also an emerging application of scDNAseq.
Single-cell sequencing
0.831915
3,047
Yet he struck a pragmatic note by adding that the traditional rule for margin proportions cannot be followed as a doctrine: for example, wide margins for pocket books would be counter-productive. Similarly, he refuted the notion that the type area must have the same proportions as the page: he preferred to trust visual judgment in assessing the placement of the type area on the page, instead of following a pre-determined doctrine. Bringhurst describes a book page as a tangible proportion, which together with the textblock produce an antiphonal geometry, which has the capability to bind the reader to the book, or conversely put the reader's nerves on edge or drive the reader away.
Canons of page construction
0.8319
3,048
Moreover, the theorem of invariance of domain asserts that a subset of a Euclidean space is open (for the subspace topology) if and only if it is homeomorphic to an open subset of a Euclidean space of the same dimension. Euclidean spaces are complete and locally compact. That is, a closed subset of a Euclidean space is compact if it is bounded (that is, contained in a ball). In particular, closed balls are compact.
Euclidean n-space
0.831892
3,049
In other words, open balls form a base of the topology. The topological dimension of a Euclidean space equals its dimension. This implies that Euclidean spaces of different dimensions are not homeomorphic.
Euclidean n-space
0.831892
3,050
The Euclidean distance makes a Euclidean space a metric space, and thus a topological space. This topology is called the Euclidean topology. In the case of R n , {\displaystyle \mathbb {R} ^{n},} this topology is also the product topology. The open sets are the subsets that contains an open ball around each of their points.
Euclidean n-space
0.831892
3,051
They acknowledge that this model is only an approximation, but they use it because such a model is easy to estimate and apply, even when little is known about the process. Statistical approaches such as RSM can be employed to maximize the production of a special substance by optimization of operational factors. Of late, for formulation optimization, the RSM, using proper design of experiments (DoE), has become extensively used. In contrast to conventional methods, the interaction among process variables can be determined by statistical techniques.
Response-surface methodology
0.831882
3,052
In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables. The method was introduced by George E. P. Box and K. B. Wilson in 1951. The main idea of RSM is to use a sequence of designed experiments to obtain an optimal response. Box and Wilson suggest using a second-degree polynomial model to do this.
Response-surface methodology
0.831882
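The second-degree polynomial model that Box and Wilson suggested is conventionally written with linear, pure-quadratic, and interaction terms; in standard RSM notation (symbols as usually defined in the literature, not quoted from this excerpt):

y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^{2} + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon ,

where the x_i are the coded explanatory variables, the β coefficients are estimated from the designed experiments, and ε is the error term.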
3,053
This is a list of topics that are included in high school physics curricula or textbooks.
List of physics concepts in primary and secondary education curricula
0.831878
3,054
SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry
List of physics concepts in primary and secondary education curricula
0.831878
3,055
Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram
List of physics concepts in primary and secondary education curricula
0.831878
3,056
The Disquisitiones continued to exert influence in the 20th century. For example, in section V, article 303, Gauss summarized his calculations of class numbers of proper primitive binary quadratic forms, and conjectured that he had found all of them with class numbers 1, 2, and 3. This was later interpreted as the determination of imaginary quadratic number fields with even discriminant and class number 1, 2, and 3, and extended to the case of odd discriminant. Sometimes called the class number problem, this more general question was eventually confirmed in 1986 (the specific question Gauss asked was confirmed by Landau in 1902 for class number one). In section VII, article 358, Gauss proved what can be interpreted as the first nontrivial case of the Riemann hypothesis for curves over finite fields (the Hasse–Weil theorem).
Disquisitiones Arithmeticae
0.831877
3,057
Before the Disquisitiones was published, number theory consisted of a collection of isolated theorems and conjectures. Gauss brought the work of his predecessors together with his own original work into a systematic framework, filled in gaps, corrected unsound proofs, and extended the subject in numerous ways. The logical structure of the Disquisitiones (theorem statement followed by proof, followed by corollaries) set a standard for later texts. While recognising the primary importance of logical proof, Gauss also illustrates many theorems with numerical examples.
Disquisitiones Arithmeticae
0.831877
3,058
The book is divided into seven sections: Congruent Numbers in General Congruences of the First Degree Residues of Powers Congruences of the Second Degree Forms and Indeterminate Equations of the Second Degree Various Applications of the Preceding Discussions Equations Defining Sections of a Circle These sections are subdivided into 366 numbered items, which state a theorem with proof or otherwise develop a remark or thought. Sections I to III are essentially a review of previous results, including Fermat's little theorem, Wilson's theorem and the existence of primitive roots. Although few of the results in these sections are original, Gauss was the first mathematician to bring this material together in a systematic way. He also realized the importance of the property of unique factorization (assured by the fundamental theorem of arithmetic, first studied by Euclid), which he restates and proves using modern tools.
Disquisitiones Arithmeticae
0.831877
3,059
The Disquisitiones covers both elementary number theory and parts of the area of mathematics now called algebraic number theory. Gauss did not explicitly recognize the concept of a group, which is central to modern algebra, so he did not use this term. His own title for his subject was Higher Arithmetic.
Disquisitiones Arithmeticae
0.831877
3,060
The Disquisitiones Arithmeticae (Latin for "Arithmetical Investigations") is a textbook of number theory written in Latin by Carl Friedrich Gauss in 1798 when Gauss was 21 and first published in 1801 when he was 24. It is notable for having had a revolutionary impact on the field of number theory as it not only made the field truly rigorous and systematic but also paved the path for modern number theory. In this book Gauss brought together and reconciled results in number theory obtained by mathematicians such as Fermat, Euler, Lagrange, and Legendre and added many profound and original results of his own.
Disquisitiones Arithmeticae
0.831877
3,061
Examples of common and historical third-generation programming languages are ALGOL, BASIC, C, COBOL, Fortran, Java, and Pascal. top-down and bottom-up design tree A widely used abstract data type (ADT) that simulates a hierarchical tree structure, with a root value and subtrees of children with a parent node, represented as a set of linked nodes. type theory In mathematics, logic, and computer science, a type theory is any of a class of formal systems, some of which can serve as alternatives to set theory as a foundation for all mathematics. In type theory, every "term" has a "type" and operations are restricted to terms of a certain type.
Glossary of computer science
0.831867
3,062
state In information technology and computer science, a system is described as stateful if it is designed to remember preceding events or user interactions; the remembered information is called the state of the system. statement In computer programming, a statement is a syntactic unit of an imperative programming language that expresses some action to be carried out. A program written in such a language is formed by a sequence of one or more statements.
Glossary of computer science
0.831867
3,063
software prototyping Is the activity of creating prototypes of software applications, i.e., incomplete versions of the software program being developed. It is an activity that can occur in software development and is comparable to prototyping as known from other fields, such as mechanical engineering or manufacturing. A prototype typically simulates only a few aspects of, and may be completely different from, the final product.
Glossary of computer science
0.831867
3,064
software engineering Is the systematic application of engineering approaches to the development of software. Software engineering is a computing discipline. software maintenance In software engineering is the modification of a software product after delivery to correct faults, to improve performance or other attributes.
Glossary of computer science
0.831867
3,065
Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products. software development process In software engineering, a software development process is the process of dividing software development work into distinct phases to improve design, product management, and project management. It is also known as a software development life cycle (SDLC).
Glossary of computer science
0.831867
3,066
Software design may refer to either "all the activity involved in conceptualizing, framing, implementing, commissioning, and ultimately modifying complex systems" or "the activity following requirements specification and before programming, as ... a stylized software engineering process." software development Is the process of conceiving, specifying, designing, programming, documenting, testing, and bug fixing involved in creating and maintaining applications, frameworks, or other software components. Software development is a process of writing and maintaining the source code, but in a broader sense, it includes all that is involved between the conception of the desired software through to the final manifestation of the software, sometimes in a planned and structured process.
Glossary of computer science
0.831867
3,067
It is linked to all the other software engineering disciplines, most strongly to software design and software testing. software deployment Is all of the activities that make a software system available for use. software design Is the process by which an agent creates a specification of a software artifact, intended to accomplish goals, using a set of primitive components and subject to constraints.
Glossary of computer science
0.831867
3,068
Software agents interacting with people (e.g. chatbots, human-robot interaction environments) may possess human-like qualities such as natural language understanding and speech, personality or embody humanoid form (see Asimo). software construction Is a software engineering discipline. It is the detailed creation of working meaningful software through a combination of coding, verification, unit testing, integration testing, and debugging.
Glossary of computer science
0.831867
3,069
This is in contrast to physical hardware, from which the system is built and actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media.
Glossary of computer science
0.831867
3,070
quantum computing The use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is used to perform such computation, which can be implemented theoretically or physically. queue A collection in which the entities in the collection are kept in order and the principal (or only) operations on the collection are the addition of entities to the rear terminal position, known as enqueue, and removal of entities from the front terminal position, known as dequeue. quicksort Also partition-exchange sort. An efficient sorting algorithm which serves as a systematic method for placing the elements of a random access file or an array in order.
Glossary of computer science
0.831867
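To accompany the quicksort entry above, here is a compact illustrative partition-exchange sketch in Python (a simple out-of-place variant; production implementations usually partition in place):

# Illustrative quicksort: pick a pivot, partition, and sort the parts recursively.
def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]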
3,071
Prolog Is a logic programming language associated with artificial intelligence and computational linguistics. Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.
Glossary of computer science
0.831867
3,072
programming language theory (PLT) is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and of their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, linguistics and even cognitive science. It has become a well-recognized branch of computer science, and an active research area, with results published in numerous journals dedicated to PLT, as well as in general computer science and engineering publications.
Glossary of computer science
0.831867
3,073
object An object can be a variable, a data structure, a function, or a method, and as such, is a value in memory referenced by an identifier. In the class-based object-oriented programming paradigm, object refers to a particular instance of a class, where the object can be a combination of variables, functions, and data structures. In relational database management, an object can be a table or column, or an association between data and a database entity (such as relating a person's age to a specific person). object code Also object module.
Glossary of computer science
0.831867
3,074
Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers. number theory A branch of pure mathematics devoted primarily to the study of the integers and integer-valued functions.
Glossary of computer science
0.831867
3,075
natural language processing (NLP) A subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation. node Is a basic unit of a data structure, such as a linked list or tree data structure.
Glossary of computer science
0.831867
3,076
The data and behavior comprise an interface, which specifies how the object may be utilized by any of various consumers of the object. methodology In software engineering, a software development process is the process of dividing software development work into distinct phases to improve design, product management, and project management. It is also known as a software development life cycle (SDLC).
Glossary of computer science
0.831867
3,077
The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments such as security and vehicle guidance. mathematical logic A subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science.
Glossary of computer science
0.831867
3,078
Machine vision refers to many technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real world problems.
Glossary of computer science
0.831867
3,079
machine learning (ML) The scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. machine vision (MV) The technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry.
Glossary of computer science
0.831867
3,080
iteration Is the repetition of a process in order to generate an outcome. The sequence will approach some end point or end value. Each repetition of the process is a single iteration, and the outcome of each iteration is then the starting point of the next iteration. In mathematics and computer science, iteration (along with the related technique of recursion) is a standard element of algorithms.
Glossary of computer science
0.831867
3,081
The most well-known types are copyrights, patents, trademarks, and trade secrets. intelligent agent In artificial intelligence, an intelligent agent (IA) refers to an autonomous entity which acts, directing its activity towards achieving goals (i.e. it is an agent), upon an environment using observation through sensors and consequent actuators (i.e. it is intelligent). Intelligent agents may also learn or use knowledge to achieve their goals.
Glossary of computer science
0.831867
3,082
Heapsort can be thought of as an improved selection sort: like that algorithm, it divides its input into a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting the largest element and moving that to the sorted region. The improvement consists of the use of a heap data structure rather than a linear-time search to find the maximum. human-computer interaction (HCI) Researches the design and use of computer technology, focused on the interfaces between people (users) and computers. Researchers in the field of HCI both observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways. As a field of research, human–computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study.
Glossary of computer science
0.831867
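The heapsort description above (an improved selection sort that uses a heap instead of a linear scan) can be made concrete with the standard-library heap; this minimal sketch extracts the smallest element each round rather than the largest and uses an auxiliary list, but the idea is the same:

# Heapsort via the standard-library binary heap: repeatedly extract the minimum.
import heapq

def heapsort(items):
    heap = list(items)
    heapq.heapify(heap)                 # O(n) heap construction
    return [heapq.heappop(heap) for _ in range(len(heap))]   # n extractions, O(log n) each

print(heapsort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]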
3,083
In compiled languages, global variables are generally static variables, whose extent (lifetime) is the entire runtime of the program, though in interpreted languages (including command-line interpreters), global variables are generally dynamically allocated when declared, since they are not known ahead of time. graph theory In mathematics, the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called links or lines). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically.
Glossary of computer science
0.831867
3,084
game theory The study of mathematical models of strategic interaction between rational decision-makers. It has applications in all fields of social science, as well as in logic and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers.
Glossary of computer science
0.831867
3,085
The exact interpretation depends upon the use - while "instructions" is traditionally taken to mean machine code instructions for a physical CPU, in some contexts a file containing bytecode or scripting language instructions may also be considered executable. executable module execution In computer and software engineering is the process by which a computer or virtual machine executes the instructions of a computer program. Each instruction of a program is a description of a particular action which to be carried out in order for a specific problem to be solved; as instructions of a program and therefore the actions they describe are being carried out by an executing machine, specific effects are produced in accordance to the semantics of the instructions being executed.
Glossary of computer science
0.831867
3,086
In technical terms, they are a family of population-based trial-and-error problem-solvers with a metaheuristic or stochastic optimization character. executable Also executable code, executable file, executable program, or simply executable. Causes a computer "to perform indicated tasks according to encoded instructions," as opposed to a data file that must be parsed by a program to be meaningful.
Glossary of computer science
0.831867
3,087
Event-driven programming is the dominant paradigm used in graphical user interfaces and other applications (e.g. JavaScript web applications) that are centered on performing certain actions in response to user input. This is also true of programming for device drivers (e.g. P in USB device driver stacks). evolutionary computing A family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms.
Glossary of computer science
0.831867
3,088
An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users. Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often utilized in military messaging.
Glossary of computer science
0.831867
3,089
It is a term used in software engineering. Formally it represents the target subject of a specific programming project, whether narrowly or broadly defined. Domain Name System (DNS) A hierarchical and decentralized naming system for computers, services, or other resources connected to the Internet or to a private network.
Glossary of computer science
0.831867
3,090
Notable types are the hard disk drive (HDD) containing a non-removable disk, the floppy disk drive (FDD) and its removable floppy disk, and various optical disc drives (ODD) and associated optical disc media. distributed computing A field of computer science that studies distributed systems. A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another.
Glossary of computer science
0.831867
3,091
Data science is a "concept to unify statistics, data analysis, machine learning and their related methods" in order to "understand and analyze actual phenomena" with data. It employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science. data structure A data organization, management, and storage format that enables efficient access and modification.
Glossary of computer science
0.831867
3,092
Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. data science An interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from data in various forms, both structured and unstructured, similar to data mining.
Glossary of computer science
0.831867
3,093
Where databases are more complex, they are often developed using formal design and modeling techniques. data mining Is a process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use.
Glossary of computer science
0.831867
3,094
Applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications. CSV See comma-separated values.
Glossary of computer science
0.831867
3,095
cryptography Or cryptology, is the practice and study of techniques for secure communication in the presence of third parties called adversaries. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages; various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, electrical engineering, communication science, and physics.
Glossary of computer science
0.831867
3,096
It has scientific, engineering, mathematical, technological and social aspects. Major computing fields include computer engineering, computer science, cybersecurity, data science, information systems, information technology and software engineering. concatenation In formal language theory and computer programming, string concatenation is the operation of joining character strings end-to-end.
Glossary of computer science
0.831867
3,097
computer security Also cybersecurity or information technology security (IT security). The protection of computer systems from theft or damage to their hardware, software, or electronic data, as well as from disruption or misdirection of the services they provide. computer vision An interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos.
Glossary of computer science
0.831867
3,098
It involves the study of algorithms that process, store, and communicate digital information. A computer scientist specializes in the theory of computation and the design of computational systems. computer scientist A person who has acquired the knowledge of computer science, the study of the theoretical foundations of information and computation and their application.
Glossary of computer science
0.831867
3,099
The purpose of programming is to find a sequence of instructions that will automate the performance of a task for solving a given problem. The process of programming thus often requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. computer science The theory, experimentation, and engineering that form the basis for the design and use of computers.
Glossary of computer science
0.831867