Dataset columns: id (int32, range 0–100k); text (string, lengths 21–3.54k); source (string, lengths 1–124); similarity (float32, range 0.78–0.88)
800
A truth table is a mathematical table used in logic—specifically in connection with Boolean algebra, Boolean functions, and propositional calculus—which sets out the functional values of logical expressions on each of their functional arguments, that is, for each combination of values taken by their logical variables. In particular, truth tables can be used to show whether a propositional expression is true for all legitimate input values, that is, logically valid. A truth table has one column for each input variable (for example, A and B), and one final column showing all of the possible results of the logical operation that the table represents (for example, A XOR B). Each row of the truth table contains one possible configuration of the input variables (for instance, A=true, B=false), and the result of the operation for those values.
Truth table
0.843607
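The row structure described above—one row per input combination, with the operation's result in the final column—can be sketched in C++. This is an illustrative helper (the name xor_table is not from the source):

```cpp
#include <cassert>
#include <tuple>
#include <vector>

// Enumerate the truth table for A XOR B: one row per combination of the
// input variables, with the final element holding the operation's result.
std::vector<std::tuple<bool, bool, bool>> xor_table() {
    std::vector<std::tuple<bool, bool, bool>> rows;
    for (bool a : {false, true})
        for (bool b : {false, true})
            rows.emplace_back(a, b, a != b);  // != on bool is XOR
    return rows;
}
```

The four rows produced correspond exactly to the four configurations of (A, B).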
801
Irving Anellis's research shows that C.S. Peirce appears to be the earliest logician (in 1883) to devise a truth table matrix. From the summary of Peirce's paper: In 1997, John Shosky discovered truth table matrices on the verso of a page of the typed transcript of Bertrand Russell's 1912 lecture on "The Philosophy of Logical Atomism". The matrix for negation is Russell's, alongside which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce, identified as having been composed in 1883–84 in connection with the composition of Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885, includes an example of an indirect truth table for the conditional.
Truth table
0.843607
802
Topology of a protein describes the entanglement of the backbone and the arrangement of contacts within the folded chain. Two theoretical frameworks of knot theory and Circuit topology have been applied to characterise protein topology. Being able to describe protein topology opens up new pathways for protein engineering and pharmaceutical development, and adds to our understanding of protein misfolding diseases such as neuromuscular disorders and cancer.
Structural protein
0.843583
803
NCBI Entrez Protein database; NCBI Protein Structure database; Human Protein Reference Database; Human Proteinpedia; Folding@Home (Stanford University); Protein Databank in Europe (see also PDBeQuips, short articles and tutorials on interesting PDB structures); Research Collaboratory for Structural Bioinformatics (see also Molecule of the Month, presenting short accounts on selected proteins from the PDB); Proteopedia – Life in 3D: rotatable, zoomable 3D model with wiki annotations for every known protein molecular structure; UniProt, the Universal Protein Resource
Structural protein
0.843583
804
The ITS region is the most widely sequenced DNA region in molecular ecology of fungi and has been recommended as the universal fungal barcode sequence. It has typically been most useful for molecular systematics at the species to genus level, and even within species (e.g., to identify geographic races). Because of its higher degree of variation than other genic regions of rDNA (for example, small- and large-subunit rRNA), variation among individual rDNA repeats can sometimes be observed within both the ITS and IGS regions. In addition to the universal ITS1+ITS4 primers used by many labs, several taxon-specific primers have been described that allow selective amplification of fungal sequences (e.g., see the Gardes & Bruns 1993 paper describing amplification of basidiomycete ITS sequences from mycorrhiza samples). Although shotgun sequencing methods are increasingly used in microbial sequencing, the low biomass of fungi in clinical samples makes amplification of the ITS region an area of ongoing research.
ITS sequencing
0.843583
805
Another variant of the puzzle appears in the book The Man Who Counted, a mathematical puzzle book originally published in Portuguese by Júlio César de Mello e Souza in 1938. This version starts with 35 camels, to be divided in the same proportions as in the 17-camel version. After the hero of the story lends a camel, and the 36 camels are divided among the three brothers, two are left over: one to be returned to the hero, and another given to him as a reward for his cleverness. The endnotes to the English translation of the book attribute the 17-camel version of the problem to the works of Fourrey and Gaston Boucheny (1939). Beyond recreational mathematics, the story has been used as the basis for school mathematics lessons, as a parable with varied morals in religion, law, economics, and politics, and even as a lay explanation for catalysis in chemistry.
17-animal inheritance puzzle
0.843565
806
Instead, memory safety properties must either be guaranteed by the compiler via static program analysis and automated theorem proving or carefully managed by the programmer at runtime. For example, the Rust programming language implements a borrow checker to ensure memory safety, while C and C++ provide no memory safety guarantees.
Memory safety
0.843505
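The "carefully managed by the programmer at runtime" side of this contrast can be sketched in C++: the compiler enforces nothing, but a discipline such as RAII ownership via std::unique_ptr keeps allocation and deallocation paired. This is a minimal illustrative sketch (the function name answer is hypothetical):

```cpp
#include <cassert>
#include <memory>
#include <utility>

// C++ provides no compile-time memory-safety guarantee; instead the
// programmer adopts a runtime discipline such as RAII ownership.
int answer() {
    auto p = std::make_unique<int>(41);
    *p += 1;                // safe: p uniquely owns the allocation
    auto q = std::move(p);  // explicit ownership transfer; p becomes empty
    return *q;              // the int is freed automatically when q leaves scope
}
```

Nothing stops a programmer from writing a use-after-free with raw pointers here; by contrast, Rust's borrow checker would reject such code at compile time.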
807
Around the same time that MALDI was popularized, John Bennett Fenn was recognized for the development of electrospray ionization. Koichi Tanaka received the 2002 Nobel Prize in Chemistry alongside John Fenn and Kurt Wüthrich "for the development of methods for identification and structure analyses of biological macromolecules." These ionization methods have greatly facilitated the study of proteins by mass spectrometry. Consequently, protein mass spectrometry now plays a leading role in protein characterization.
Protein mass spectrometry
0.843454
808
In addition to the product, the coproduct and the antipode of a Hopf algebra can be expressed in terms of structure constants. The connecting axiom, which defines a consistency condition on the Hopf algebra, can be expressed as a relation between these various structure constants.
Structure constant
0.843406
809
A Lie group is abelian exactly when all structure constants are 0. A Lie group is real exactly when its structure constants are real. The structure constants are completely anti-symmetric in all indices if and only if the Lie algebra is a direct sum of simple compact Lie algebras. A nilpotent Lie group admits a lattice if and only if its Lie algebra admits a basis with rational structure constants: this is Malcev's criterion.
Structure constant
0.843406
810
The Hall polynomials are the structure constants of the Hall algebra.
Structure constant
0.843406
811
Given the structure constants, the resulting product is obtained by bilinearity and can be uniquely extended to all vectors in the vector space, thus uniquely determining the product for the algebra. Structure constants are used whenever an explicit form for the algebra must be given. Thus, they are frequently used when discussing Lie algebras in physics, as the basis vectors indicate specific directions in physical space, or correspond to specific particles (recall that Lie algebras are algebras over a field, with the bilinear product being given by the Lie bracket, usually defined via the commutator).
Structure constant
0.843406
812
In mathematics, the structure constants or structure coefficients of an algebra over a field are the coefficients of the basis expansion (into a linear combination of basis vectors) of the products of basis vectors. Because the product operation in the algebra is bilinear, knowing the product of basis vectors allows one to compute, by linearity, the product of any elements (just as a matrix allows one to compute the action of a linear operator on any vector by providing the action of the operator on basis vectors). Therefore, the structure constants can be used to specify the product operation of the algebra (just as a matrix defines a linear operator).
Structure constant
0.843406
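In symbols, for a basis $\{e_i\}$ the definition just given reads as follows (standard notation, not quoted from the source):

```latex
% Expansion of a product of basis vectors back in the basis;
% the coefficients c_{ij}^k are the structure constants:
e_i \, e_j = \sum_k c_{ij}^{\,k} \, e_k

% For a Lie algebra the product is the Lie bracket, so
[e_i, e_j] = \sum_k f_{ij}^{\,k} \, e_k ,

% and bilinearity extends the product to arbitrary vectors:
\Big[ \sum_i a_i e_i ,\; \sum_j b_j e_j \Big]
  = \sum_{i,j,k} a_i \, b_j \, f_{ij}^{\,k} \, e_k .
```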
813
The polymerase chain reaction (PCR) is a scientific technique that is used to replicate a piece of a DNA molecule by several orders of magnitude. PCR uses repeated cycles of heating and cooling, known as thermal cycling, along with the addition of DNA primers and DNA polymerases to selectively replicate the DNA fragment of interest. The technique was developed by Kary Mullis in 1983 while working for the Cetus Corporation. Mullis would go on to win the Nobel Prize in Chemistry in 1993 as a result of the impact that PCR had in many areas such as DNA cloning, DNA sequencing, and gene analysis.
Biomolecular engineering
0.8434
814
Although first defined as research, biomolecular engineering has since become an academic discipline and a field of engineering practice. Herceptin, a humanized monoclonal antibody for breast cancer treatment, became the first drug designed by a biomolecular engineering approach and was approved by the U.S. FDA. Biomolecular Engineering was also a former name of the journal New Biotechnology.
Biomolecular engineering
0.8434
815
During World War II, the need for large quantities of penicillin of acceptable quality brought together chemical engineers and microbiologists to focus on penicillin production. This created the right conditions for a chain of developments that led to the creation of the field of biomolecular engineering. Biomolecular engineering was first defined in 1992 by the U.S. National Institutes of Health as "research at the interface of chemical engineering and biology with an emphasis at the molecular level".
Biomolecular engineering
0.8434
816
Chemical engineering is the processing of raw materials into chemical products. It involves the preparation of raw materials to produce reactants, the chemical reaction of these reactants under controlled conditions, the separation of products, the recycling of byproducts, and the disposal of wastes. Each step involves certain basic building blocks called "unit operations," such as extraction, filtration, and distillation. These unit operations are found in all chemical processes. Biomolecular engineering is a subset of chemical engineering that applies these same principles to the processing of chemical substances made by living organisms.
Biomolecular engineering
0.8434
817
Biomolecular engineering is the application of engineering principles and practices to the purposeful manipulation of molecules of biological origin. Biomolecular engineers integrate knowledge of biological processes with the core knowledge of chemical engineering in order to focus on molecular-level solutions to issues and problems in the life sciences related to the environment, agriculture, energy, industry, food production, biotechnology and medicine. Biomolecular engineers purposefully manipulate carbohydrates, proteins, nucleic acids and lipids within the framework of the relation between their structure (see: nucleic acid structure, carbohydrate chemistry, protein structure), function (see: protein function) and properties, and in relation to applicability to such areas as environmental remediation, crop and livestock production, biofuel cells and biomolecular diagnostics. The thermodynamics and kinetics of molecular recognition in enzymes, antibodies, DNA hybridization, bio-conjugation/bio-immobilization and bioseparations are studied. Attention is also given to the rudiments of engineered biomolecules in cell signaling, cell growth kinetics, biochemical pathway engineering and bioreactor engineering.
Biomolecular engineering
0.8434
818
In this way, it encompasses many of the industrial applications of the biomolecular engineering discipline. Examination of the biotech industry shows that its principal leader is the United States, followed by France and Spain. The focus of the biotechnology industry and the application of biomolecular engineering is primarily clinical and medical: people are willing to pay for good health, so most of the money directed toward the biotech industry stays in health-related ventures.
Biomolecular engineering
0.8434
819
Biomolecular engineering is an extensive discipline with applications in many different industries and fields. As such, it is difficult to pinpoint a general perspective on the Biomolecular engineering profession. The biotechnology industry, however, provides an adequate representation. The biotechnology industry, or biotech industry, encompasses all firms that use biotechnology to produce goods or services or to perform biotechnology research and development.
Biomolecular engineering
0.8434
820
Biomedical engineering is a subcategory of bioengineering that uses many of the same principles but focuses more on the medical applications of the various engineering developments. Some applications of biomedical engineering include: Biomaterials – design of new materials for implantation in the human body and analysis of their effect on the body; Cellular engineering – design of new cells using recombinant DNA and development of procedures to allow normal cells to adhere to artificial implanted biomaterials; Tissue engineering – design of new tissues from the basic biological building blocks; Artificial organs – application of tissue engineering to whole organs; Medical imaging – imaging of tissues using CAT scan, MRI, ultrasound, X-ray or other technologies; Medical optics and lasers – application of lasers to medical diagnosis and treatment; Rehabilitation engineering – design of devices and systems used to aid disabled people; Man-machine interfacing – control of surgical robots and remote diagnostic and therapeutic systems using eye tracking, voice recognition and muscle and brain-wave controls; Human factors and ergonomics – design of systems to improve human performance in a wide range of applications.
Biomolecular engineering
0.8434
821
Bioelectrical engineering involves the electrical fields generated by living cells or organisms. Examples include the electric potential developed between muscles or nerves of the body. This discipline requires knowledge in the fields of electricity and biology to understand and utilize these concepts to improve current bioprocesses and technology. Bioelectrochemistry – chemistry concerned with electron/proton transport throughout the cell; Bioelectronics – a field of research coupling biology and electronics.
Biomolecular engineering
0.8434
822
Biochemistry is the study of chemical processes within and relating to living organisms. Biochemical processes govern all living organisms and living processes, and the field of biochemistry seeks to understand and manipulate these processes.
Biomolecular engineering
0.8434
823
Biocatalysis – Chemical transformations using enzymes. Bioseparations – Separation of biologically active molecules. Thermodynamics and kinetics – Analysis of reactions involving cell growth and biochemicals. Bioreactor design and analysis – Design of reactors for performing biochemical transformations.
Biomolecular engineering
0.8434
824
Bio-inspired technologies of the future can help explain biomolecular engineering. Extrapolating from Moore's law, quantum and biology-based processors are predicted to be among the "big" technologies of the future. With the use of biomolecular engineering, the way our processors work could be manipulated to function in the same sense that a biological cell works. Biomolecular engineering has the potential to become one of the most important scientific disciplines because of its advances in the analysis of gene expression patterns as well as the purposeful manipulation of many important biomolecules to improve functionality.
Biomolecular engineering
0.8434
825
Newly developed undergraduate programs across the United States, often coupled to chemical engineering programs, allow students to achieve a B.S. degree. According to ABET (Accreditation Board for Engineering and Technology), biomolecular engineering curricula "must provide thorough grounding in the basic sciences including chemistry, physics, and biology, with some content at an advanced level… engineering application of these basic sciences to design, analysis, and control, of chemical, physical, and/or biological processes." Common curricula consist of major engineering courses including transport, thermodynamics, separations, and kinetics, with additional life sciences courses such as biology and biochemistry, and specialized biomolecular courses focusing on cell biology, nano- and biotechnology, biopolymers, etc.
Biomolecular engineering
0.8434
826
A broad term encompassing all engineering applied to the life sciences. This field of study utilizes the principles of biology along with engineering principles to create marketable products. Some bioengineering applications include: Biomimetics - The study and development of synthetic systems that mimic the form and function of natural biologically produced substances and processes. Bioprocess engineering - The study and development of process equipment and optimization that aids in the production of many products such as food and pharmaceuticals. Industrial microbiology - The implementation of microorganisms in the production of industrial products such as food and antibiotics. Another common application of industrial microbiology is the treatment of wastewater in chemical plants via utilization of certain microorganisms.
Biomolecular engineering
0.8434
827
In practice, the Rasch model has at least two principal advantages in comparison to the IRT approach. The first advantage is the primacy of Rasch's specific requirements, which (when met) provide fundamental person-free measurement (where persons and items can be mapped onto the same invariant scale). Another advantage of the Rasch approach is that estimation of parameters is more straightforward in Rasch models due to the presence of sufficient statistics, which in this application means a one-to-one mapping of raw number-correct scores to Rasch θ {\displaystyle {\theta }} estimates.
Item Response Theory
0.843288
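The Rasch model referenced above is, in its standard dichotomous form:

```latex
% Probability that person n answers item i correctly, given person
% ability \theta_n and item difficulty b_i:
P(X_{ni} = 1 \mid \theta_n, b_i)
  = \frac{e^{\theta_n - b_i}}{1 + e^{\theta_n - b_i}}

% The raw number-correct score r_n = \sum_i x_{ni} is a sufficient
% statistic for \theta_n, which is what yields the one-to-one mapping
% of raw scores to \theta estimates.
```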
828
There are several methods for assessing fit, such as a Chi-square statistic, or a standardized version of it. Two and three-parameter IRT models adjust item discrimination, ensuring improved data-model fit, so fit statistics lack the confirmatory diagnostic value found in one-parameter models, where the idealized model is specified in advance.
Item Response Theory
0.843288
829
The rotation group of a bounded object is equal to its full symmetry group if and only if the object is chiral. The point groups that are generated purely by a finite set of reflection mirror planes passing through the same point are the finite Coxeter groups, represented by Coxeter notation. The point groups in three dimensions are heavily used in chemistry, especially to describe the symmetries of a molecule and of molecular orbitals forming covalent bonds, and in this context they are also called molecular point groups.
Binary polyhedral group
0.843251
830
In geometry, a point group in three dimensions is an isometry group in three dimensions that leaves the origin fixed, or correspondingly, an isometry group of a sphere. It is a subgroup of the orthogonal group O(3), the group of all isometries that leave the origin fixed, or correspondingly, the group of orthogonal matrices. O(3) itself is a subgroup of the Euclidean group E(3) of all isometries. Symmetry groups of geometric objects are isometry groups.
Binary polyhedral group
0.843251
831
The seven remaining point groups, which have multiple 3-or-more-fold rotation axes; these groups can also be characterized as point groups having multiple 3-fold rotation axes. The possible combinations are: four 3-fold axes (the three tetrahedral symmetries T, Th, and Td); four 3-fold axes and three 4-fold axes (octahedral symmetries O and Oh); ten 3-fold axes and six 5-fold axes (icosahedral symmetries I and Ih). According to the crystallographic restriction theorem, only a limited number of point groups are compatible with discrete translational symmetry: 27 from the 7 infinite series, and 5 of the 7 others. Together, these make up the 32 so-called crystallographic point groups.
Binary polyhedral group
0.843251
832
Karp's 21 problems are shown below, many with their original names. The nesting indicates the direction of the reductions used. For example, Knapsack was shown to be NP-complete by reducing Exact cover to Knapsack. Satisfiability: the boolean satisfiability problem for formulas in conjunctive normal form (often referred to as SAT) 0–1 integer programming (A variation in which only the restrictions must be satisfied, with no optimization) Clique (see also independent set problem) Set packing Vertex cover Set covering Feedback node set Feedback arc set Directed Hamilton circuit (Karp's name, now usually called Directed Hamiltonian cycle) Undirected Hamilton circuit (Karp's name, now usually called Undirected Hamiltonian cycle) Satisfiability with at most 3 literals per clause (equivalent to 3-SAT) Chromatic number (also called the Graph Coloring Problem) Clique cover Exact cover Hitting set Steiner tree 3-dimensional matching Knapsack (Karp's definition of Knapsack is closer to Subset sum) Job sequencing Partition Max cut
Reducibility among combinatorial problems
0.843224
833
To give some counter-examples: $z \notin T(X)$, since $z$ is neither an admitted variable symbol nor an admitted constant symbol; $3 \notin T(X)$, for the same reason; $+1 \notin T(X)$, since $+$ is a 2-ary function symbol but is used here with only one argument term (viz. $1$). Now that the term set $T(X)$ is established, we consider the term algebra $\mathcal{T}(X)$ of type $\tau$ over $X$. This algebra uses $T(X)$ as its domain, on which addition and multiplication need to be defined. The addition function $+^{\mathcal{T}(X)}$ takes two terms $p$ and $q$ and returns the term $+\,p\,q$; similarly, the multiplication function $*^{\mathcal{T}(X)}$ maps given terms $p$ and $q$ to the term $*\,p\,q$.
Term algebra
0.843176
834
As an example, a type inspired by integer arithmetic can be defined by $\tau_0 = \{0, 1\}$, $\tau_1 = \{\}$, $\tau_2 = \{+, *\}$, and $\tau_i = \{\}$ for each $i > 2$. The best-known algebra of type $\tau$ has the natural numbers as its domain and interprets $0$, $1$, $+$, and $*$ in the usual way; we refer to it as $\mathcal{A}_{nat}$. For the example variable set $X = \{x, y\}$, we are going to investigate the term algebra $\mathcal{T}(X)$ of type $\tau$ over $X$. First, the set $T(X)$ of terms of type $\tau$ over $X$ is considered.
Term algebra
0.843176
835
$f(t_1, \dots, t_n)$. A term algebra is called absolutely free because for any algebra $\mathcal{A}$ of type $\tau$, and for any function $g \colon X \to \mathcal{A}$, $g$ extends to a unique homomorphism $g^{\ast} \colon \mathcal{T}(X) \to \mathcal{A}$, which simply evaluates each term $t \in T(X)$ to its corresponding value $g^{\ast}(t) \in \mathcal{A}$. Formally, for each $t \in T(X)$: if $t \in X$, then $g^{\ast}(t) = g(t)$; if $t = f \in \tau_0$, then $g^{\ast}(t) = f^{\mathcal{A}}()$.
Term algebra
0.843176
836
$t_1, \dots, t_n$, the application of an $n$-ary function symbol $f$ to them represents again a term. The term algebra $\mathcal{T}(X)$ of type $\tau$ over $X$ is, in summary, the algebra of type $\tau$ that maps each expression to its string representation. Formally, $\mathcal{T}(X)$ is defined as follows: the domain of $\mathcal{T}(X)$ is $T(X)$; for each nullary function $f$ in $\tau_0$, $f^{\mathcal{T}(X)}()$ is defined as the string $f$.
Term algebra
0.843176
837
Term algebras can be shown decidable using quantifier elimination. The complexity of the decision problem is in NONELEMENTARY because binary constructors are injective and thus pairing functions.
Term algebra
0.843176
838
The Herbrand base is the set of all ground atoms that can be formed from predicate symbols in the original set of clauses and terms in its Herbrand universe. These two concepts are named after Jacques Herbrand. Term algebras also play a role in the semantics of abstract data types, where an abstract data type declaration provides the signature of a multi-sorted algebraic structure and the term algebra is a concrete model of the abstract declaration.
Term algebra
0.843176
839
In universal algebra and mathematical logic, a term algebra is a freely generated algebraic structure over a given signature. For example, in a signature consisting of a single binary operation, the term algebra over a set X of variables is exactly the free magma generated by X. Other synonyms for the notion include absolutely free algebra and anarchic algebra. From a category theory perspective, a term algebra is the initial object for the category of all X-generated algebras of the same signature, and this object, unique up to isomorphism, is called an initial algebra; it generates by homomorphic projection all algebras in the category. A similar notion is that of a Herbrand universe in logic, usually used under this name in logic programming, which is (absolutely freely) defined starting from the set of constants and function symbols in a set of clauses. That is, the Herbrand universe consists of all ground terms: terms that have no variables in them. An atomic formula or atom is commonly defined as a predicate applied to a tuple of terms; a ground atom is then a predicate in which only ground terms appear.
Term algebra
0.843176
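Terms over a signature are naturally represented as trees: a symbol applied to a list of subterms, where variables and constants are symbols with no arguments. A minimal C++ sketch (the Term struct and show helper are illustrative, not from the source):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// A term over a signature: a symbol applied to zero or more subterms.
// Variables and constants are the leaf case; ground terms (the Herbrand
// universe) are exactly the terms built without any variable symbols.
struct Term {
    std::string symbol;
    std::vector<Term> args;
};

// Render a term in prefix notation, e.g. "+(x, *(y, 1))".
std::string show(const Term& t) {
    if (t.args.empty()) return t.symbol;
    std::string s = t.symbol + "(";
    for (std::size_t i = 0; i < t.args.size(); ++i)
        s += (i ? ", " : "") + show(t.args[i]);
    return s + ")";
}
```

Because terms are just built and never identified with one another, this data structure is "absolutely free": distinct constructions are distinct values.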
840
A powerful feature of C++'s templates is template specialization. This allows alternative implementations to be provided based on certain characteristics of the parameterized type that is being instantiated. Template specialization has two purposes: to allow certain forms of optimization, and to reduce code bloat. For example, consider a sort() template function.
Generic algorithm
0.843167
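The passage's sort() example is not spelled out; the same specialization mechanism can be sketched with a simpler hypothetical max_of template, where a full specialization for C strings avoids comparing pointer addresses:

```cpp
#include <cassert>
#include <cstring>

// Primary template: compares with operator<, correct for most types.
template <typename T>
T max_of(T a, T b) { return (a < b) ? b : a; }

// Full specialization for C strings: applying < to two char* values would
// compare addresses, so the specialized version compares contents instead.
template <>
const char* max_of<const char*>(const char* a, const char* b) {
    return (std::strcmp(a, b) < 0) ? b : a;
}
```

This illustrates the optimization purpose mentioned above: the caller writes max_of uniformly, and the compiler picks the implementation appropriate to the instantiated type.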
841
A bioinformatics workflow management system is a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a bioinformatics application. Such systems are designed to provide an easy-to-use environment for individual application scientists to create their own workflows, provide interactive tools enabling them to execute their workflows and view their results in real time, simplify the process of sharing and reusing workflows between scientists, and enable scientists to track the provenance of the workflow execution results and the workflow creation steps. Some of the platforms offering this service: Galaxy, Kepler, Taverna, UGENE, Anduril, HIVE.
Bioinformatics
0.843135
842
Software tools for bioinformatics include simple command-line tools, more complex graphical programs, and standalone web-services. They are made by bioinformatics companies or by public institutions.
Bioinformatics
0.843135
843
SOAP- and REST-based interfaces have been developed to allow client computers to use algorithms, data and computing resources from servers in other parts of the world. The main advantage is that end users do not have to deal with software and database maintenance overheads. Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single web-based interface, to integrative, distributed and extensible bioinformatics workflow management systems.
Bioinformatics
0.843135
844
Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces similar problems as with microarrays targeted at mRNA, the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples when multiple incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays.
Bioinformatics
0.843135
845
Another aspect of structural bioinformatics is the use of protein structures for virtual screening models such as quantitative structure-activity relationship models and proteochemometric models (PCM). Furthermore, a protein's crystal structure can be used in the simulation of, for example, ligand-binding studies and in silico mutagenesis studies. AlphaFold, a deep-learning-based software developed by Google's DeepMind and released in 2021, greatly outperforms all other prediction software methods, and has released predicted structures for hundreds of millions of proteins in the AlphaFold protein structure database.
Bioinformatics
0.843135
846
Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near identical purposes and shared ancestor. Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling.
Bioinformatics
0.843135
847
In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. Homology modeling is used to predict the structure of an unknown protein from existing homologous proteins. One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily.
Bioinformatics
0.843135
848
First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations which need to be distinguished from passengers. Further improvements in bioinformatics could allow for classifying types of cancer by analysis of driver mutations in the genome. Furthermore, tracking patients as the disease progresses may be possible in the future by sequencing cancer samples. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors.
Bioinformatics
0.843135
849
Gene regulation is a complex process where a signal, such as an extracellular signal such as a hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process. For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the protein-coding region of a gene.
Bioinformatics
0.843135
850
Many free and open-source software tools have existed and continued to grow since the 1980s. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases has created opportunities for research groups to contribute to bioinformatics regardless of funding. The open-source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide de facto standards and shared object models for assisting with the challenge of bioinformation integration. Open-source bioinformatics software includes Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD. The non-profit Open Bioinformatics Foundation and the annual Bioinformatics Open Source Conference promote open-source bioinformatics software.
Bioinformatics
0.843135
851
Network analysis seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both. Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.
Bioinformatics
0.843135
852
There are a few exceptions with only one electron (or zero for palladium) in the ns orbital in favor of completing a half or a whole d shell. The usual explanation in chemistry textbooks is that half-filled or completely filled subshells are particularly stable arrangements of electrons.
D electron count
0.843035
853
For free atoms, electron configurations have been determined by atomic spectroscopy. Lists of atomic energy levels and their electron configurations have been published by the National Institute of Standards and Technology (NIST) for both neutral and ionized atoms. For neutral atoms of all elements, the ground-state electron configurations are listed in general chemistry and inorganic chemistry textbooks. The ground-state configurations are often explained using two principles: the Aufbau principle that subshells are filled in order of increasing energy, and the Madelung rule that this order corresponds to the order of increasing values of (n + l), where n is the principal quantum number and l is the azimuthal quantum number. This rule predicts for example that the 4s orbital (n = 4, l = 0, n + l = 4) is filled before the 3d orbital (n = 3, l = 2, n + l = 5), as in titanium with configuration 4s²3d².
D electron count
0.843035
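The Madelung rule described above lends itself to a short sketch. A minimal illustration (not from the source; the helper names are made up): generate subshells (n, l), sort by increasing n + l with ties broken by smaller n, and confirm that 4s precedes 3d.

```python
# Sketch of the Madelung (n + l) filling order.
def madelung_order(max_n):
    """All subshells (n, l) with n <= max_n, sorted by (n + l), then by n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def label(n, l):
    """Spectroscopic label, e.g. (3, 2) -> '3d'."""
    return f"{n}{'spdf'[l]}"

order = [label(n, l) for n, l in madelung_order(4)]
# 4s (n + l = 4) is filled before 3d (n + l = 5), as in titanium.
```

Running this gives the familiar sequence 1s, 2s, 2p, 3s, 3p, 4s, 3d, … matching the textbook filling order.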
854
Counting d electrons is a formalism. Often it is difficult or impossible to assign electrons and charge to the metal center or a ligand. For a high-oxidation-state metal center with a +4 charge or greater it is understood that the true charge separation is much smaller. But referring to the formal oxidation state and d electron count can still be useful when trying to understand the chemistry.
D electron count
0.843035
855
In this situation the complex geometry is octahedral, which means two of the d orbitals have the proper geometry to be involved in bonding. The other three d orbitals in the basic model do not have significant interactions with the ligands and remain as three degenerate non-bonding orbitals. The two orbitals that are involved in bonding form a linear combination with two ligand orbitals with the proper symmetry.
D electron count
0.843035
856
That leaves the (n − 1)d orbitals to be involved in some portion of the bonding and in the process also describes the metal complex's valence electrons. The final description of the valence is highly dependent on the complex's geometry, in turn highly dependent on the d electron count and character of the associated ligands. For example, in the MO diagram provided for the 3+ the ns orbital – which is placed above (n − 1)d in the representation of atomic orbitals (AOs) – is used in a linear combination with the ligand orbitals, forming a very stable bonding orbital with significant ligand character as well as an unoccupied high energy antibonding orbital which is not shown.
D electron count
0.843035
857
According to Ligand Field Theory, the ns orbital is involved in bonding to the ligands and forms a strongly bonding orbital which has predominantly ligand character and the correspondingly strong anti-bonding orbital which is unfilled and usually well above the lowest unoccupied molecular orbital (LUMO). Since the orbitals resulting from the ns orbital are either buried in bonding or elevated well above the valence, the ns orbitals are not relevant to describing the valence. Depending on the geometry of the final complex, either all three of the np orbitals or portions of them are involved in bonding, similar to the ns orbitals. Any np orbitals that remain non-bonding still exceed the valence of the complex.
D electron count
0.843035
858
Each of the ten possible d electron counts has an associated Tanabe–Sugano diagram describing gradations of possible ligand field environments a metal center could experience in an octahedral geometry. From a small amount of information, the Tanabe–Sugano diagram accurately predicts absorptions in the UV and visible electromagnetic spectrum resulting from d-to-d orbital electron transitions. It is these d–d transitions, ligand-to-metal charge transfers (LMCT), or metal-to-ligand charge transfers (MLCT) that generally give metal complexes their vibrant colors.
D electron count
0.843035
859
The d electron count or number of d electrons is a chemistry formalism used to describe the electron configuration of the valence electrons of a transition metal center in a coordination complex. The d electron count is an effective way to understand the geometry and reactivity of transition metal complexes. The formalism has been incorporated into the two major models used to describe coordination complexes: crystal field theory and ligand field theory, which is a more advanced version based on molecular orbital theory. However, the d electron count of an atom in a complex is often different from the d electron count of a free atom or a free ion of the same element.
D electron count
0.843035
860
In classical wave-physics, this effect is known as evanescent wave coupling. The likelihood that the particle will pass through the barrier is given by the transmission coefficient, whereas the likelihood that it is reflected is given by the reflection coefficient. Schrödinger's wave-equation allows these coefficients to be calculated.
Rectangular potential barrier
0.843021
861
In quantum mechanics, the rectangular (or, at times, square) potential barrier is a standard one-dimensional problem that demonstrates the phenomena of wave-mechanical tunneling (also called "quantum tunneling") and wave-mechanical reflection. The problem consists of solving the one-dimensional time-independent Schrödinger equation for a particle encountering a rectangular potential energy barrier. It is usually assumed, as here, that a free particle impinges on the barrier from the left. Although classically a particle behaving as a point mass would be reflected if its energy is less than V₀, a particle actually behaving as a matter wave has a non-zero probability of penetrating the barrier and continuing its travel as a wave on the other side.
Rectangular potential barrier
0.843021
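The non-zero tunneling probability mentioned above can be made concrete with the standard closed-form transmission coefficient for E < V₀ from the plane-wave matching solution. A sketch (natural units ħ = m = 1 and the sample parameter values are assumptions for illustration):

```python
import math

def transmission(E, V0, a, m=1.0, hbar=1.0):
    """Transmission coefficient T for a particle of energy E < V0 hitting a
    rectangular barrier of height V0 and width a (plane-wave matching result)."""
    kappa = math.sqrt(2 * m * (V0 - E)) / hbar   # decay constant inside the barrier
    s2 = math.sinh(kappa * a) ** 2
    return 1.0 / (1.0 + V0**2 * s2 / (4 * E * (V0 - E)))

T = transmission(E=1.0, V0=2.0, a=1.0)
# Classically T would be 0 for E < V0; here 0 < T < 1, and T shrinks
# rapidly as the barrier gets wider or taller.
```

Evaluating `transmission` for wider or taller barriers shows the expected exponential suppression of tunneling.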
862
Electronic structure calculations rank among the most computationally intensive tasks in all scientific calculations. For this reason, quantum chemistry calculations take up significant shares on many scientific supercomputer facilities. A number of methods to obtain electronic structures exist, and their applicability varies from case to case.
Electronic structure of atom
0.842872
863
Along with nuclear dynamics, the electronic structure problem is one of the two steps in studying the quantum mechanical motion of a molecular system. Except for a small number of simple problems such as hydrogen-like atoms, the solution of electronic structure problems requires modern computers. Electronic structure problems are routinely solved with quantum chemistry computer programs.
Electronic structure of atom
0.842872
864
In physics, electronic structure is the state of motion of electrons in an electrostatic field created by stationary nuclei. The term encompasses both the wave functions of the electrons and the energies associated with them. Electronic structure is obtained by solving quantum mechanical equations for the aforementioned clamped-nuclei problem. Electronic structure problems arise from the Born–Oppenheimer approximation.
Electronic structure of atom
0.842872
865
Basic studies include identification of genes and inherited disorders. This research has been conducted for centuries on both a large-scale physical observation basis and on a more microscopic scale. Genetic analysis can be used generally to describe methods both used in and resulting from the sciences of genetics and molecular biology, or to applications resulting from this research. Genetic analysis may be done to identify genetic/inherited disorders and also to make a differential diagnosis in certain somatic diseases such as cancer. Genetic analyses of cancer include detection of mutations, fusion genes, and DNA copy number changes.
Genetic analysis
0.842853
866
Genetic analysis is the overall process of studying and researching in fields of science that involve genetics and molecular biology. There are a number of applications that are developed from this research, and these are also considered parts of the process. The base system of analysis revolves around general genetics.
Genetic analysis
0.842853
867
Cytogenetics is a branch of genetics that is concerned with the study of the structure and function of the cell, especially the chromosomes. Polymerase chain reaction studies the amplification of DNA. Because of the close analysis of chromosomes in cytogenetics, abnormalities are more readily seen and diagnosed.
Genetic analysis
0.842853
868
Numerous practical advancements have been made in the field of genetics and molecular biology through the processes of genetic analysis. One of the most prevalent advancements during the late 20th and early 21st centuries is a greater understanding of cancer's link to genetics. By identifying which genes in the cancer cells are working abnormally, doctors can better diagnose and treat cancers.
Genetic analysis
0.842853
869
Modern genetic analysis began in the mid-1800s with research conducted by Gregor Mendel. Mendel, who is known as the "father of modern genetics", was inspired to study variation in plants. Between 1856 and 1863, Mendel cultivated and tested some 29,000 pea plants (i.e., Pisum sativum). This study showed that one in four pea plants had purebred recessive alleles, two out of four were hybrid and one out of four were purebred dominant.
Genetic analysis
0.842853
870
This research has been able to identify the concepts of genetic mutations, fusion genes and changes in DNA copy numbers, and advances are made in the field every day. Much of these applications have led to new types of sciences that use the foundations of genetic analysis. Reverse genetics uses the methods to determine what is missing in a genetic code or what can be added to change that code. Genetic linkage studies analyze the spatial arrangements of genes and chromosomes.
Genetic analysis
0.842853
871
The polymerase chain reaction (PCR) is a biochemical technology in molecular biology to amplify a single or a few copies of a piece of DNA across several orders of magnitude, generating thousands to millions of copies of a particular DNA sequence. PCR is now a common and often indispensable technique used in medical and biological research labs for a variety of applications. These include DNA cloning for sequencing, DNA-based phylogeny, or functional analysis of genes; the diagnosis of hereditary diseases; the identification of genetic fingerprints (used in forensic sciences and paternity testing); and the detection and diagnosis of infectious diseases.
Genetic analysis
0.842853
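The "several orders of magnitude" claim for PCR is simple arithmetic: ideal PCR doubles the target each cycle. A toy sketch (the efficiency parameter is an illustrative assumption; real reactions are less than perfectly efficient):

```python
def copies_after(cycles, start=1, efficiency=1.0):
    """Copies of the target after `cycles` rounds of PCR, where each cycle
    multiplies the count by (1 + efficiency); efficiency = 1.0 is ideal doubling."""
    return start * (1 + efficiency) ** cycles

ideal = copies_after(30)  # 2**30, roughly a billion copies from one template
```

Thirty ideal cycles turn a single template into about 10⁹ copies, which is why PCR can amplify "a single or a few copies" into the thousands-to-millions range well before that.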
872
In 1998, the European Union's Directive 98/44/EC clarified that patents on DNA sequences were allowable. In 2010 in the US, AMP sued Myriad Genetics to challenge the latter's patents on two genes, BRCA1 and BRCA2, which are associated with breast cancer. In 2013, the U.S. Supreme Court partially agreed, ruling that a naturally occurring gene sequence could not be patented.
Molecular diagnostic
0.842852
873
The field of molecular biology grew in the late twentieth century, as did its clinical application. In 1980, Yuet Wai Kan et al. suggested a prenatal genetic test for thalassemia that did not rely upon DNA sequencing—then in its infancy—but on restriction enzymes that cut DNA where they recognised specific short sequences, creating different lengths of DNA strand depending on which allele (genetic variation) the fetus possessed. In the 1980s, the phrase was used in the names of companies such as Molecular Diagnostics Incorporated and Bethesda Research Laboratories Molecular Diagnostics. During the 1990s, the identification of newly discovered genes and new techniques for DNA sequencing led to the appearance of a distinct field of molecular and genomic laboratory medicine; in 1995, the Association for Molecular Pathology (AMP) was formed to give it structure. In 1999, the AMP co-founded The Journal of Molecular Diagnostics.
Molecular diagnostic
0.842852
874
For example, the BRCA1/2 test by Myriad Genetics assesses women for lifetime risk of breast cancer. Also, some cancers do not always present with clear symptoms. Testing people before they show obvious symptoms can detect cancer at early stages.
Molecular diagnostic
0.842852
875
Some of a patient's single nucleotide polymorphisms—slight differences in their DNA—can help predict how quickly they will metabolise particular drugs; this is called pharmacogenomics. For example, the enzyme CYP2C19 metabolises several drugs, such as the anti-clotting agent Clopidogrel, into their active forms. Some patients possess polymorphisms in specific places on the 2C19 gene that make them poor metabolisers of those drugs; physicians can test for these polymorphisms and find out whether the drugs will be fully effective for that patient. Advances in molecular biology have helped show that some syndromes that were previously classed as a single disease are actually multiple subtypes with entirely different causes and treatments. Molecular diagnostics can help diagnose the subtype—for example of infections and cancers—or the genetic analysis of a disease with an inherited component, such as Silver-Russell syndrome.
Molecular diagnostic
0.842852
876
Suppose our economy consists of two assets, a stock and a risk-free bond, and that we use the Black–Scholes model. In the model the evolution of the stock price can be described by geometric Brownian motion: dS_t = μS_t dt + σS_t dW_t, where W_t is a standard Brownian motion with respect to the physical measure. If we define W̃_t = W_t + ((μ − r)/σ)t, Girsanov's theorem states that there exists a measure Q under which W̃_t is a Brownian motion. The quantity (μ − r)/σ is known as the market price of risk.
Physical measure
0.842834
877
Note that if we used the actual real-world probabilities, every security would require a different adjustment (as they differ in riskiness). The absence of arbitrage is crucial for the existence of a risk-neutral measure. In fact, by the fundamental theorem of asset pricing, the condition of no-arbitrage is equivalent to the existence of a risk-neutral measure.
Physical measure
0.842834
878
The discounted payoff process of a derivative on the stock, H_t = E_Q(H_T | F_t), is a martingale under Q. Notice the drift of the SDE is r, the risk-free interest rate, implying risk neutrality. Since S̃ and H are Q-martingales, we can invoke the martingale representation theorem to find a replicating strategy – a portfolio of stocks and bonds that pays off H_t at all times t ≤ T.
Physical measure
0.842834
879
Utilizing rules within Itô calculus, one may informally differentiate with respect to t and rearrange the above expression to derive the SDE dW_t = dW̃_t − ((μ − r)/σ) dt. Put this back in the original equation: dS_t = rS_t dt + σS_t dW̃_t. Let S̃_t be the discounted stock price given by S̃_t = e^(−rt)S_t; then by Itô's lemma we get the SDE: dS̃_t = σS̃_t dW̃_t.
Physical measure
0.842834
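The martingale property of the discounted stock price under Q can be checked numerically. A Monte Carlo sketch (the parameter values are assumptions for illustration): simulate S_T under Q, where the drift is the risk-free rate r, discount it, and compare the sample average to S_0.

```python
import math
import random

random.seed(0)
S0, r, sigma, T = 100.0, 0.05, 0.2, 1.0  # illustrative parameters
n = 200_000

total = 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    # Exact GBM solution under Q: the drift is the risk-free rate r.
    s_T = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    total += math.exp(-r * T) * s_T

estimate = total / n  # Monte Carlo estimate of E_Q[e^{-rT} S_T]; close to S0
```

With 200,000 paths the estimate lands within a fraction of a percent of S_0 = 100, consistent with e^(−rt)S_t being a Q-martingale.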
880
Mathematics and Mechanics of Solids is an international journal which publishes original research in solid mechanics and materials science. The journal’s aim is to publish original, self-contained research that focuses on the mechanical behaviour of solids with particular emphasis on mathematical principles.
Mathematics & Mechanics of Solids
0.842803
881
Mathematics & Mechanics of Solids is abstracted and indexed in, among other databases, SCOPUS and the Social Sciences Citation Index. According to the Journal Citation Reports, its 2016 impact factor is 2.953, ranking it 72nd out of 275 journals in the category ‘Materials Science, Multidisciplinary’, 11th out of 100 journals in ‘Mathematics, Interdisciplinary Applications’, and 13th out of 133 journals in ‘Mechanics’.
Mathematics & Mechanics of Solids
0.842803
882
The dimension of a vector space is the number of vectors in any basis for the space, i.e. the number of coordinates necessary to specify any vector. This notion of dimension (the cardinality of a basis) is often referred to as the Hamel dimension or algebraic dimension to distinguish it from other notions of dimension. For the non-free case, this generalizes to the notion of the length of a module.
Multi-dimensional space
0.842797
883
The net current into a volume is I = −∮_S J · dS, where S = ∂V is the boundary of V oriented by outward-pointing normals, and dS is shorthand for N dS, the outward-pointing normal of the boundary ∂V. Here J is the current density (charge per unit area per unit time) at the surface of the volume. The vector points in the direction of the current. From the Divergence theorem this can be written I = −∫_V (∇ · J) dV. (1) Charge conservation requires that the net current into a volume must necessarily equal the net change in charge within the volume. The total charge q in volume V is the integral (sum) of the charge density in V: q = ∫_V ρ dV. So, by the Leibniz integral rule, dq/dt = ∫_V (∂ρ/∂t) dV. (2) Equating (1) and (2) gives 0 = ∫_V (∂ρ/∂t + ∇ · J) dV. Since this is true for every volume, we have in general ∂ρ/∂t + ∇ · J = 0.
Charge conservation
0.842757
884
The full statement of gauge invariance is that the physics of an electromagnetic field are unchanged when the scalar and vector potential are shifted by the gradient of an arbitrary scalar field χ: ϕ′ = ϕ − ∂χ/∂t, A′ = A + ∇χ. In quantum mechanics the scalar field is equivalent to a phase shift in the wavefunction of the charged particle: ψ′ = e^(iqχ)ψ, so gauge invariance is equivalent to the well-known fact that changes in the phase of a wavefunction are unobservable, and only changes in the magnitude of the wavefunction result in changes to the probability function |ψ|².
Charge conservation
0.842757
885
Charge conservation can also be understood as a consequence of symmetry through Noether's theorem, a central result in theoretical physics that asserts that each conservation law is associated with a symmetry of the underlying physics. The symmetry that is associated with charge conservation is the global gauge invariance of the electromagnetic field. This is related to the fact that the electric and magnetic fields are not changed by different choices of the value representing the zero point of electrostatic potential ϕ. However the full symmetry is more complicated, and also involves the vector potential A.
Charge conservation
0.842757
886
This deduction could be derived directly from the continuity equation, since at steady state ∂Q/∂t = 0 holds, and implies Q̇_IN(t) = Q̇_OUT(t). In electromagnetic field theory, vector calculus can be used to express the law in terms of charge density ρ (in coulombs per cubic meter) and electric current density J (in amperes per square meter). This is called the charge density continuity equation: ∂ρ/∂t + ∇ · J = 0. The term on the left is the rate of change of the charge density ρ at a point.
Charge conservation
0.842756
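The continuity equation ∂ρ/∂t + ∇ · J = 0 can be verified for a concrete pair of fields. A sketch with a hand-picked 1-D example (the specific ρ and J are assumptions chosen so that charge is conserved), checking the residual by central differences:

```python
import math

# Hand-picked 1-D example: rho(x, t) = exp(-t) sin(x) and
# J(x, t) = -exp(-t) cos(x) satisfy d(rho)/dt + dJ/dx = 0.
def rho(x, t):
    return math.exp(-t) * math.sin(x)

def J(x, t):
    return -math.exp(-t) * math.cos(x)

def residual(x, t, h=1e-5):
    """Central-difference estimate of d(rho)/dt + dJ/dx at (x, t)."""
    drho_dt = (rho(x, t + h) - rho(x, t - h)) / (2 * h)
    dJ_dx = (J(x + h, t) - J(x - h, t)) / (2 * h)
    return drho_dt + dJ_dx

max_residual = max(abs(residual(x, t)) for x in (0.3, 1.1, 2.5) for t in (0.0, 0.7))
# max_residual is at the level of floating-point noise, confirming conservation.
```

Analytically, ∂ρ/∂t = −e^(−t) sin x and ∂J/∂x = e^(−t) sin x, so the sum vanishes exactly; the numerical residual only reflects differencing error.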
887
These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory. During the 1990s, the first peak was measured with increasing sensitivity and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the universe is flat.
Microwave background
0.842704
888
An international team has since worked on writing a formal proof; it was finished (and verified) in 2015. Once written formally, a proof can be verified using a program called a proof assistant. These programs are useful in situations where one is uncertain about a proof's correctness. A major open problem in theoretical computer science is P versus NP. It is one of the seven Millennium Prize Problems.
Fields of mathematics
0.84266
889
Discrete mathematics is useful in many areas of computer science, such as complexity theory, information theory, graph theory, and so on. In return, computing has also become essential for obtaining new results. This is a group of techniques known as experimental mathematics, which is the use of experimentation to discover mathematical insights. The most well-known example is the four-color theorem, which was proven in 1976 with the help of a computer.
Fields of mathematics
0.84266
890
The rise of technology in the 20th century opened the way to a new science: computing. This field is closely related to mathematics in several ways. Theoretical computer science is essentially mathematical in nature. Communication technologies apply branches of mathematics that may be very old (e.g., arithmetic), especially with respect to transmission security, in cryptography and coding theory.
Fields of mathematics
0.84266
891
The field came to full fruition with the contributions of Adrien-Marie Legendre and Carl Friedrich Gauss. Many easily stated number problems have solutions that require sophisticated methods, often from across mathematics. A prominent example is Fermat's Last Theorem. This conjecture was stated in 1637 by Pierre de Fermat, but it was proved only in 1994 by Andrew Wiles, who used tools including scheme theory from algebraic geometry, category theory, and homological algebra. Another example is Goldbach's conjecture, which asserts that every even integer greater than 2 is the sum of two prime numbers. Stated in 1742 by Christian Goldbach, it remains unproven despite considerable effort. Number theory includes several subareas, including analytic number theory, algebraic number theory, geometry of numbers (method oriented), diophantine equations, and transcendence theory (problem oriented).
Fields of mathematics
0.84266
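Goldbach's conjecture, stated in the snippet above, is easy to probe by brute force. A toy sketch (checking small cases is evidence, of course, not a proof; the bound 2000 is arbitrary):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def goldbach_pair(n):
    """Return (p, q) with p + q = n and both prime, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even integer in [4, 2000] has at least one prime decomposition.
all_hold = all(goldbach_pair(n) is not None for n in range(4, 2001, 2))
```

For instance, `goldbach_pair(28)` returns `(5, 23)`; such computations have confirmed the conjecture far beyond this range without settling it.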
892
Number theory dates back to ancient Babylon and probably China. Two prominent early number theorists were Euclid of ancient Greece and Diophantus of Alexandria. The modern study of number theory in its abstract form is largely attributed to Pierre de Fermat and Leonhard Euler.
Fields of mathematics
0.84266
893
The method of demonstrating rigorous proof was enhanced in the sixteenth century through the use of symbolic notation. In the 18th century, social transition led to mathematicians earning their keep through teaching, which led to more careful thinking about the underlying concepts of mathematics. This produced more rigorous approaches, while transitioning from geometric methods to algebraic and then arithmetic proofs. At the end of the 19th century, it appeared that the definitions of the basic concepts of mathematics were not accurate enough to avoid paradoxes (non-Euclidean geometries and the Weierstrass function) and contradictions (Russell's paradox).
Fields of mathematics
0.84266
894
The emergence of computer-assisted proofs has allowed proof lengths to further expand, such as the 255-page Feit–Thompson theorem. The result of this trend is a philosophy of the quasi-empiricist proof, which cannot be considered infallible but has a probability attached to it. The concept of rigor in mathematics dates back to ancient Greece, where society encouraged logical, deductive reasoning. However, this rigorous approach would tend to discourage exploration of new approaches, such as irrational numbers and concepts of infinity.
Fields of mathematics
0.84266
895
Number theory began with the manipulation of numbers, that is, natural numbers (ℕ), and later expanded to integers (ℤ) and rational numbers (ℚ). Number theory was once called arithmetic, but nowadays this term is mostly used for numerical calculations.
Fields of mathematics
0.84266
896
Mathematics and physics have influenced each other over their modern history. Modern physics uses mathematics abundantly, and is also the motivation of major mathematical developments.
Fields of mathematics
0.84266
897
In mathematical logic, an algebraic sentence is one that can be stated using only equations between terms with free variables. Inequalities and quantifiers are specifically disallowed. Sentential logic is the subset of first-order logic involving only algebraic sentences. Saying that a sentence is algebraic is a stronger condition than saying it is elementary.
Algebraic sentence
0.842659
898
The above metatheorem does not hold if we consider the validity of more general first-order logic formulas instead of only atomic positive equalities. As an example consider the formula (x = 0) ∨ (x = 1). This formula is always true in a two-element Boolean algebra. In a four-element Boolean algebra whose domain is the powerset of {0, 1}, this formula corresponds to the statement (x = ∅) ∨ (x = {0, 1}) and is false when x is {1}. The decidability of the first-order theory of many classes of Boolean algebras can still be shown, using quantifier elimination or the small model property (with the domain size computed as a function of the formula and generally larger than 2).
Two-element Boolean algebra
0.842631
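The counterexample described above can be checked by enumeration. A sketch (illustration only; only the equality structure of the formula is modeled): evaluate (x = 0) ∨ (x = 1) over the two-element algebra and over the four-element powerset algebra on {0, 1}, where 0 is ∅ and 1 is {0, 1}.

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def formula(x, bottom, top):
    """The formula (x = 0) or (x = 1), where 0 is the least element `bottom`
    and 1 is the greatest element `top` of the algebra."""
    return x == bottom or x == top

holds_in_two = all(formula(x, 0, 1) for x in (0, 1))

four = powerset({0, 1})  # domain: the empty set, {0}, {1}, {0, 1}
holds_in_four = all(formula(x, frozenset(), frozenset({0, 1})) for x in four)
# holds_in_two is True; holds_in_four is False, with x = {1} as a counterexample.
```

The enumeration confirms the text: the formula is valid in the two-element algebra but fails in the four-element one, exactly at x = {1}.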
899
Hence all identities of Boolean algebra are captured by 2. This theorem is useful because any equation in 2 can be verified by a decision procedure. Logicians refer to this fact as "2 is decidable".
Two-element Boolean algebra
0.842631