| id | text | source | similarity |
|---|---|---|---|
400
|
In China, from ancient times counting rods were used to represent numbers, and arithmetic was accomplished with rod calculus and later the suanpan. The Book on Numbers and Computation and the Nine Chapters on the Mathematical Art include exercises that are exemplars of linear algebra. In about 980 Al-Sijzi wrote his Ways of Making Easy the Derivation of Geometrical Figures, which was translated and published by Jan Hogendijk in 1996. An Arabic-language collection of exercises was given a Spanish translation as Compendio de Algebra de Abenbéder and reviewed in Nature. Robert Recorde first published The Ground of Arts in 1543. At first it was almost all exposition with very few exercises; the latter came into prominence in the eighteenth and nineteenth centuries. As a comparison we might look at another best seller, namely Walkingame's Tutor's Assistant, first published in 1751, 70 per cent of which was devoted to exercises, as opposed to about 1 per cent by Recorde.
|
Mathematical exercise
| 0.84963
|
401
|
In such courses emphasis was on learning by doing, without an attempt to teach specific heuristics: the students worked lots of problems because (according to the implicit instructional model behind such courses) that's how one gets good at mathematics. Such exercise collections may be proprietary to the instructor and his institution. As an example of the value of exercise sets, consider the accomplishment of Toru Kumon and his Kumon method. In his program, a student does not proceed before mastering each level of exercise. At the Russian School of Mathematics, students begin multi-step problems as early as the first grade, learning to build on previous results to progress towards the solution. In the 1960s, collections of mathematical exercises were translated from Russian and published by W. H. Freeman and Company: The USSR Olympiad Problem Book (1962), Problems in Higher Algebra (1965), and Problems in Differential Equations (1963).
|
Mathematical exercise
| 0.84963
|
402
|
... Supplementary exercises at the end of each chapter expand the other exercise sets and provide cumulative exercises that require skills from earlier chapters. This text includes "Functions and Graphs in Applications" (Ch 0.6), which is fourteen pages of preparation for word problems. Authors of a book on finite fields chose their exercises freely: In order to enhance the attractiveness of this book as a textbook, we have included worked-out examples at appropriate points in the text and have included lists of exercises for Chapters 1–9. These exercises range from routine problems to alternative proofs of key theorems, but also contain material going beyond what is covered in the text. J.
|
Mathematical exercise
| 0.84963
|
403
|
These are short stories of adventure and industry with the end omitted and, though betraying a strong family resemblance, are not without a certain element of romance. A distinction between an exercise and a mathematical problem was made by Alan H. Schoenfeld: Students must master the relevant subject matter, and exercises are appropriate for that. But if rote exercises are the only kinds of problems that students see in their classes, we are doing the students a grave disservice. He advocated setting challenges: By "real problems" ... I mean mathematical tasks that pose an honest challenge to the student and that the student needs to work at in order to obtain a solution. A similar sentiment was expressed by Marvin Bittinger when he prepared the second edition of his textbook: In response to comments from users, the authors have added exercises that require something of the student other than an understanding of the immediate objectives of the lesson at hand, yet are not necessarily highly challenging. The zone of proximal development for each student, or cohort of students, sets exercises at a level of difficulty that challenges but does not frustrate them. Some comments in the preface of a calculus textbook show the central place of exercises in the book: The exercises comprise about one-quarter of the text – the most important part of the text in our opinion.
|
Mathematical exercise
| 0.84963
|
404
|
In primary school, students start with single-digit arithmetic exercises. Later, most exercises involve at least two digits. A common exercise in elementary algebra calls for the factorization of polynomials.
|
Mathematical exercise
| 0.84963
|
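The factorization exercise mentioned above can be illustrated with a toy sketch: factoring a monic quadratic with integer roots by searching divisor pairs. The helper function and its name are hypothetical, written only for this illustration.

```python
# Toy sketch: factor x^2 + bx + c over the integers, the kind of
# elementary-algebra exercise described in the text above.

def factor_monic_quadratic(b, c):
    """Return (p, q) such that x^2 + bx + c = (x + p)(x + q), or None."""
    # We need p + q = b and p * q = c; search a bounded integer range.
    for p in range(-abs(c) - 1, abs(c) + 2):
        q = b - p
        if p * q == c:
            return (p, q)
    return None

print(factor_monic_quadratic(-5, 6))  # x^2 - 5x + 6 = (x - 2)(x - 3)
```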
405
|
The connection densities, or neighbourhood densities, of memory arrangements help distinguish which elements are a part of, or related to, the target memory. As the density of neural networks increases, the number of retrieval cues (associated nodes) also increases, which may allow for enhanced memory of the event. However, too many connections can inhibit memory in two ways. First, as described under the sub-section Spreading Activation, the total activation being spread from node 1 to connecting nodes is divided by the number of connections.
|
Memory errors
| 0.849573
|
406
|
A disadvantage is that many of these structures are of proteins of unknown function and do not have corresponding publications. This requires new ways of communicating this structural information to the broader research community. The Bioinformatics core of the Joint Center for Structural Genomics (JCSG) has recently developed a wiki-based approach, namely the Open Protein Structure Annotation Network (TOPSAN), for annotating protein structures emerging from high-throughput structural genomics centers.
|
Structural proteomics
| 0.849213
|
407
|
As opposed to traditional structural biology, the determination of a protein structure through a structural genomics effort often (but not always) comes before anything is known regarding the protein function. This raises new challenges in structural bioinformatics, i.e. determining protein function from its 3D structure. Structural genomics emphasizes high throughput determination of protein structures.
|
Structural proteomics
| 0.849213
|
408
|
In physics, a pair potential is a function that describes the potential energy of two interacting objects solely as a function of the distance between them. Some interactions, like Coulomb's law in electrodynamics or Newton's law of universal gravitation in mechanics, naturally have this form for simple spherical objects. For other types of more complex interactions or objects it is useful and common to approximate the interaction by a pair potential, for example interatomic potentials in physics and computational chemistry that use approximations like the Lennard-Jones and Morse potentials.
|
Pair potential
| 0.849171
|
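The Lennard-Jones potential named above is a standard pair-potential form, V(r) = 4ε[(σ/r)¹² − (σ/r)⁶]. A minimal sketch follows; the parameter values are illustrative (reduced units), not taken from the source.

```python
# Minimal sketch of a pair potential: the Lennard-Jones form.
# eps (well depth) and sigma (zero-crossing distance) are illustrative.

def lennard_jones(r, eps=1.0, sigma=1.0):
    """Potential energy of two particles separated by distance r."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# The minimum sits at r = 2**(1/6) * sigma with depth -eps.
r_min = 2 ** (1 / 6)
print(lennard_jones(r_min))  # -1.0 (well depth, in units of eps)
```

Because the energy depends only on the scalar distance r, summing this function over all pairs gives the total potential energy of a many-particle system, which is exactly the approximation the text describes.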
409
|
Pair potentials are very common in physics and computational chemistry and biology; exceptions are very rare. An example of a potential energy function that is not a pair potential is the three-body Axilrod-Teller potential. Another example is the Stillinger-Weber potential for silicon, which includes the angle in a triangle of silicon atoms as an input parameter.
|
Pair potential
| 0.84917
|
410
|
In physics, the electric displacement field (denoted by D) or electric induction is a vector field that appears in Maxwell's equations. It accounts for the electromagnetic effects of polarization and that of an electric field, combining the two in an auxiliary field. It plays a major role in topics such as the capacitance of a material, as well as the response of dielectrics to an electric field, and how shapes can change due to electric fields in piezoelectricity or flexoelectricity, as well as the creation of voltages and charge transfer due to elastic strains. In any material, if there is an inversion center then the charge at, for instance, +x and −x is the same.
|
Electric displacement field
| 0.849134
|
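The excerpt says D combines polarization and the electric field in one auxiliary field; the textbook-standard constitutive relation D = ε₀E + P (standard knowledge, not quoted in the excerpt itself) can be sketched as follows, treating the fields as one-dimensional magnitudes for simplicity.

```python
# Sketch of the standard constitutive relation D = eps0 * E + P.
# One-dimensional magnitudes only; a full treatment uses vector fields.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def displacement(E, P):
    """Electric displacement (C/m^2) from field E (V/m) and polarization P (C/m^2)."""
    return EPS0 * E + P

# In vacuum (P = 0), D reduces to eps0 * E:
print(displacement(1000.0, 0.0))
```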
411
|
In the fields of bioinformatics and computational biology, genome survey sequences (GSS) are nucleotide sequences similar to expressed sequence tags (ESTs), the only difference being that most of them are genomic in origin rather than mRNA. Genome survey sequences are typically generated and submitted to NCBI by labs performing genome sequencing and are used, amongst other things, as a framework for the mapping and sequencing of genome-size pieces included in the standard GenBank divisions.
|
Genome survey sequence
| 0.849115
|
412
|
In most mathematical work beyond practical geometry, angles are typically measured in radians rather than degrees. This is for a variety of reasons; for example, the trigonometric functions have simpler and more "natural" properties when their arguments are expressed in radians. These considerations outweigh the convenient divisibility of the number 360. One complete turn (360°) is equal to 2π radians, so 180° is equal to π radians, or equivalently, 1° = π⁄180 radians.
|
Degree (geometry)
| 0.8491
|
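The identity 180° = π rad above is exactly the conversion factor used in practice; a short sketch, using the standard library's own conversion helpers for comparison:

```python
import math

# Degree-radian conversion follows directly from 180 deg = pi rad:
def to_radians(deg):
    return deg * math.pi / 180.0

# The standard library exposes the same conversions:
print(to_radians(90.0), math.radians(90.0))   # both ~pi/2
print(math.degrees(math.pi))                  # ~180.0
```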
413
|
It is possible to combine dimensional universal physical constants to define fixed quantities of any desired dimension, and this property has been used to construct various systems of natural units of measurement. Depending on the choice and arrangement of constants used, the resulting natural units may be convenient to an area of study. For example, Planck units, constructed from c, G, ħ, and kB give conveniently sized measurement units for use in studies of quantum gravity, and Hartree atomic units, constructed from ħ, me, e and 4πε0 give convenient units in atomic physics. The choice of constants used leads to widely varying quantities.
|
Physical constant
| 0.848953
|
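The Planck units mentioned above can be built by combining the constants exactly as described: for instance, the Planck length is √(ħG/c³). A sketch with CODATA-style values (the specific digits are illustrative):

```python
import math

# Combining c, G, and hbar into Planck units, as described above.
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant, J s

l_planck = math.sqrt(hbar * G / c**3)  # Planck length, m
t_planck = l_planck / c                # Planck time, s

print(f"{l_planck:.3e} m, {t_planck:.3e} s")  # ~1.616e-35 m, ~5.391e-44 s
```

Dimensional analysis confirms the combination: ħG/c³ has units of m², so its square root is a length, illustrating how "fixed quantities of any desired dimension" arise from the constants.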
414
|
However, while its value is not known to great precision, the possibility of observing type Ia supernovae which happened in the universe's remote past, paired with the assumption that the physics involved in these events is universal, allows for an upper bound of less than 10−10 per year for the gravitational constant over the last nine billion years. Similarly, an upper bound of the change in the proton-to-electron mass ratio has been placed at 10−7 over a period of 7 billion years (or 10−16 per year) in a 2012 study based on the observation of methanol in a distant galaxy. It is problematic to discuss the proposed rate of change (or lack thereof) of a single dimensional physical constant in isolation. The reason for this is that the choice of units is arbitrary, making the question of whether a constant is undergoing change an artefact of the choice (and definition) of the units. For example, in SI units, the speed of light was given a defined value in 1983. Thus, it was meaningful to experimentally measure the speed of light in SI units prior to 1983, but it is not so now.
|
Physical constant
| 0.848953
|
415
|
Some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals (including welding, brazing, and soldering). Emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials (semiconductors) and surface engineering. Many applications, practices, and devices associated or involved in metallurgy were established in ancient China, such as the innovation of the blast furnace, cast iron, hydraulic-powered trip hammers, and double acting piston bellows.
|
Metal physics
| 0.848915
|
416
|
Subjects of study in chemical metallurgy include mineral processing, the extraction of metals, thermodynamics, electrochemistry, and chemical degradation (corrosion). In contrast, physical metallurgy focuses on the mechanical properties of metals, the physical properties of metals, and the physical performance of metals. Topics studied in physical metallurgy include crystallography, material characterization, mechanical metallurgy, phase transformations, and failure mechanisms. Historically, metallurgy has predominantly focused on the production of metals.
|
Metal physics
| 0.848915
|
417
|
Metallurgy is a domain of materials science and engineering that studies the physical and chemical behavior of metallic elements, their inter-metallic compounds, and their mixtures, which are known as alloys. Metallurgy encompasses both the science and the technology of metals; that is, the way in which science is applied to the production of metals, and the engineering of metal components used in products for both consumers and manufacturers. Metallurgy is distinct from the craft of metalworking. Metalworking relies on metallurgy in a similar manner to how medicine relies on medical science for technical advancement.
|
Metal physics
| 0.848915
|
418
|
ρ = (σ ⊗ τ) ∘ Δ. Such a homomorphism Δ is called a comultiplication if it satisfies certain axioms. The resulting structure is called a bialgebra. To be consistent with the definitions of the associative algebra, the coalgebra must be co-associative, and, if the algebra is unital, then the co-algebra must be co-unital as well. A Hopf algebra is a bialgebra with an additional piece of structure (the so-called antipode), which allows one not only to define the tensor product of two representations, but also the Hom module of two representations (again, similarly to how it is done in the representation theory of groups).
|
Commutative algebra (structure)
| 0.848855
|
419
|
Consider, for example, two representations σ : A → End(V) and τ : A → End(W). One might try to form a tensor product representation ρ : x ↦ σ(x) ⊗ τ(x) according to how it acts on the product vector space, so that ρ(x)(v ⊗ w) = (σ(x)(v)) ⊗ (τ(x)(w)). However, such a map would not be linear, since one would have ρ(kx) = σ(kx) ⊗ τ(kx) = kσ(x) ⊗ kτ(x) = k²(σ(x) ⊗ τ(x)) = k²ρ(x) for k ∈ K. One can rescue this attempt and restore linearity by imposing additional structure, by defining an algebra homomorphism Δ : A → A ⊗ A, and defining the tensor product representation as ρ = (σ ⊗ τ) ∘ Δ.
|
Commutative algebra (structure)
| 0.848855
|
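The linearity failure described above, ρ(kx) = k²ρ(x), can be checked concretely with a small Kronecker-product computation. This is a plain-Python sketch with a hypothetical helper; for matrix representations, σ(x) ⊗ τ(x) is exactly the Kronecker product of the two matrices.

```python
# Demonstrating that the naive tensor product is quadratic, not linear:
# scaling the input by k scales the output by k^2.

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [
        [a * b for a in arow for b in brow]
        for arow in A for brow in B
    ]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
k = 3

kA = [[k * x for x in row] for row in A]
kB = [[k * x for x in row] for row in B]

lhs = kron(kA, kB)                                  # rho(k x)
rhs = [[k * k * x for x in row] for row in kron(A, B)]  # k^2 rho(x)
assert lhs == rhs
print("rho(kx) == k^2 * rho(x):", lhs == rhs)
```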
420
|
Indeed, this reinterpretation allows one to avoid making an explicit reference to elements of an algebra A. For example, the associativity can be expressed as follows. By the universal property of a tensor product of modules, the multiplication (the R-bilinear map) corresponds to a unique R-linear map m : A ⊗_R A → A. The associativity then refers to the identity m ∘ (id ⊗ m) = m ∘ (m ⊗ id).
|
Commutative algebra (structure)
| 0.848855
|
421
|
The definition is equivalent to saying that a unital associative R-algebra is a monoid object in R-Mod (the monoidal category of R-modules). By definition, a ring is a monoid object in the category of abelian groups; thus, the notion of an associative algebra is obtained by replacing the category of abelian groups with the category of modules. Pushing this idea further, some authors have introduced a "generalized ring" as a monoid object in some other category that behaves like the category of modules.
|
Commutative algebra (structure)
| 0.848855
|
422
|
The Clifford algebras, which are useful in geometry and physics. Incidence algebras of locally finite partially ordered sets are associative algebras considered in combinatorics. The partition algebra and its subalgebras, including the Brauer algebra and the Temperley-Lieb algebra. A differential graded algebra is an associative algebra together with a grading and a differential. For example, the de Rham algebra Ω(M) = ⨁_{p=0}^{n} Ω^p(M), where Ω^p(M) consists of differential p-forms on a manifold M, is a differential graded algebra.
|
Commutative algebra (structure)
| 0.848855
|
423
|
Let R be a Noetherian integral domain with field of fractions K (for example, they can be ℤ, ℚ). A lattice L in a finite-dimensional K-vector space V is a finitely generated R-submodule of V that spans V; in other words, L ⊗_R K = V. Let A_K be a finite-dimensional K-algebra. An order in A_K is an R-subalgebra that is a lattice. In general, there are a lot fewer orders than lattices; e.g., (1/2)ℤ is a lattice in ℚ but not an order (since it is not an algebra). A maximal order is an order that is maximal among all the orders.
|
Commutative algebra (structure)
| 0.848855
|
424
|
The most basic example is a ring itself; it is an algebra over its center or any subring lying in the center. In particular, any commutative ring is an algebra over any of its subrings. Other examples abound both from algebra and other fields of mathematics.
|
Commutative algebra (structure)
| 0.848855
|
425
|
Let A be an algebra over a commutative ring R. Then the algebra A is a right module over Aᵉ := A^op ⊗_R A with the action x · (a ⊗ b) = axb. Then, by definition, A is said to be separable if the multiplication map A ⊗_R A → A, x ⊗ y ↦ xy, splits as an Aᵉ-linear map, where A ⊗ A is an Aᵉ-module by (x ⊗ y) · (a ⊗ b) = ax ⊗ yb. Equivalently, A is separable if it is a projective module over Aᵉ; thus, the Aᵉ-projective dimension of A, sometimes called the bidimension of A, measures the failure of separability.
|
Commutative algebra (structure)
| 0.848855
|
426
|
Solexa, now part of Illumina, was founded by Shankar Balasubramanian and David Klenerman in 1998, and developed a sequencing method based on reversible dye-terminator technology and engineered polymerases. The reversible terminated chemistry concept was invented by Bruno Canard and Simon Sarfati at the Pasteur Institute in Paris. It was developed internally at Solexa by those named on the relevant patents. In 2004, Solexa acquired the company Manteia Predictive Medicine in order to gain a massively parallel sequencing technology invented in 1997 by Pascal Mayer and Laurent Farinelli.
|
High throughput sequencing
| 0.848831
|
427
|
The polony sequencing method, developed in the laboratory of George M. Church at Harvard, was among the first high-throughput sequencing systems and was used to sequence a full E. coli genome in 2005. It combined an in vitro paired-tag library with emulsion PCR, an automated microscope, and ligation-based sequencing chemistry to sequence an E. coli genome at an accuracy of >99.9999% and a cost approximately 1/9 that of Sanger sequencing. The technology was licensed to Agencourt Biosciences, subsequently spun out into Agencourt Personal Genomics, and eventually incorporated into the Applied Biosystems SOLiD platform. Applied Biosystems was later acquired by Life Technologies, now part of Thermo Fisher Scientific.
|
High throughput sequencing
| 0.848831
|
428
|
Computer algebra system, Cryptography, Discrete logarithm, Triple DES, Caesar cipher, Exponentiating by squaring, Knapsack problem, Shor's algorithm, Standard Model, Symmetry in physics
|
List of group theory topics
| 0.848667
|
429
|
Algebraic geometry, Algebraic topology, Discrete space, Fundamental group, Geometry, Homology, Minkowski's theorem, Topological group
|
List of group theory topics
| 0.848667
|
430
|
Affine representation, Character theory, Great orthogonality theorem, Maschke's theorem, Monstrous moonshine, Projective representation, Representation theory, Schur's lemma
|
List of group theory topics
| 0.848667
|
431
|
Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.
|
List of group theory topics
| 0.848667
|
432
|
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right.
|
List of group theory topics
| 0.848667
|
433
|
A computer science educator stated in Times Higher Education that the examples are clear and accessible. In contrast, The Economist agreed Domingos "does a good job" but complained that he "constantly invents metaphors that grate or confuse". Kirkus Reviews praised the book, stating that "Readers unfamiliar with logic and computer theory will have a difficult time, but those who persist will discover fascinating insights." A New Scientist review called it "compelling but rather unquestioning".
|
The Master Algorithm
| 0.848663
|
434
|
The book outlines five approaches to machine learning: inductive reasoning, connectionism, evolutionary computation, Bayes' theorem, and analogical modelling. The author explains these tribes to the reader by referring to more understandable processes of logic, connections made in the brain, natural selection, probability, and similarity judgments. Throughout the book, it is suggested that each different tribe has the potential to contribute to a unifying "master algorithm". Towards the end of the book the author pictures a "master algorithm" in the near future, where machine learning algorithms asymptotically grow to a perfect understanding of how the world and the people in it work. Although the algorithm doesn't yet exist, he briefly reviews his own invention of the Markov logic network.
|
The Master Algorithm
| 0.848663
|
435
|
The general theory of algebraic structures has been formalized in universal algebra. Category theory is another formalization that includes also other mathematical structures and functions between structures of the same type (homomorphisms). In universal algebra, an algebraic structure is called an algebra; this term may be ambiguous, since, in other contexts, an algebra is an algebraic structure that is a vector space over a field or a module over a commutative ring. The collection of all structures of a given type (same operations and same laws) is called a variety in universal algebra; this term is also used with a completely different meaning in algebraic geometry, as an abbreviation of algebraic variety. In category theory, the collection of all structures of a given type and homomorphisms between them form a concrete category.
|
Structure (algebraic)
| 0.848647
|
436
|
In mathematics, an algebraic structure consists of a nonempty set A (called the underlying set, carrier set or domain), a collection of operations on A (typically binary operations such as addition and multiplication), and a finite set of identities, known as axioms, that these operations must satisfy. An algebraic structure may be based on other algebraic structures with operations and axioms involving several structures. For instance, a vector space involves a second structure called a field, and an operation called scalar multiplication between elements of the field (called scalars), and elements of the vector space (called vectors). Abstract algebra is the name that is commonly given to the study of algebraic structures.
|
Structure (algebraic)
| 0.848647
|
437
|
Gene sharing is related to, but distinct from, several concepts in genetics, evolution, and molecular biology. Gene sharing entails multiple effects from the same gene, but unlike pleiotropy, it necessarily involves separate functions at the molecular level. A gene could exhibit pleiotropy when single enzyme function affects multiple phenotypic traits; mutations of a shared gene could potentially affect only a single trait.
|
Protein moonlighting
| 0.848627
|
438
|
These expression levels may signify that the protein is performing a different function than previously known. The structure of a protein can also help determine its functions. Protein structure in turn may be elucidated with various techniques including X-ray crystallography or NMR. Dual-polarization interferometry may be used to measure changes in protein structure, which may also give hints to the protein's function. Finally, application of systems biology approaches such as interactomics gives clues to a protein's function based on what it interacts with.
|
Protein moonlighting
| 0.848627
|
439
|
For example, the tissue, cellular, or subcellular distribution of a protein may provide hints as to the function. Real-time PCR is used to quantify mRNA and hence infer the presence or absence of a particular protein which is encoded by the mRNA within different cell types. Alternatively immunohistochemistry or mass spectrometry can be used to directly detect the presence of proteins and determine in which subcellular locations, cell types, and tissues a particular protein is expressed.
|
Protein moonlighting
| 0.848627
|
440
|
Gene set enrichment determines if the overlap between two gene sets is statistically significant, in this case the overlap between differentially expressed genes and gene sets from known pathways/databases (e.g., Gene Ontology, KEGG, Human Phenotype Ontology) or from complementary analyses in the same data (like co-expression networks). Common tools for gene set enrichment include web interfaces (e.g., ENRICHR, g:profiler, WEBGESTALT) and software packages. When evaluating enrichment results, one heuristic is to first look for enrichment of known biology as a sanity check and then expand the scope to look for novel biology.
|
RNA seq
| 0.84862
|
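The "statistically significant overlap" test described above is commonly a hypergeometric (one-sided Fisher) test; a self-contained sketch follows. The gene counts are made up for illustration, and real tools such as those named in the text add multiple-testing corrections on top.

```python
from math import comb

# Hypergeometric upper tail: with N background genes, K in a gene set,
# n differentially expressed, and k in the overlap, this is the
# probability of seeing an overlap of at least k by chance.

def enrichment_pvalue(N, K, n, k):
    total = comb(N, n)
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / total

# Illustrative numbers: expected overlap is 500*100/20000 = 2.5 genes,
# so observing 10 is strong enrichment.
p = enrichment_pvalue(N=20000, K=100, n=500, k=10)
print(p)  # small p-value: overlap unlikely by chance
```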
441
|
Methods: Most tools use regression or non-parametric statistics to identify differentially expressed genes, and are either based on read counts mapped to a reference genome (DESeq2, limma, edgeR) or based on read counts derived from alignment-free quantification (sleuth, Cuffdiff, Ballgown). Following regression, most tools employ either familywise error rate (FWER) or false discovery rate (FDR) p-value adjustments to account for multiple hypotheses (in human studies, ~20,000 protein-coding genes or ~50,000 biotypes). Outputs: A typical output consists of rows corresponding to genes and at least three columns: each gene's log fold change (the log-transform of the ratio in expression between conditions, a measure of effect size), its p-value, and its p-value adjusted for multiple comparisons.
|
RNA seq
| 0.84862
|
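Two of the computations named above, log fold change and an FDR adjustment, can be sketched in a few lines. This is a simplified Benjamini-Hochberg procedure with illustrative values, not the implementation used by any of the tools listed.

```python
import math

# Sketch: log2 fold change and Benjamini-Hochberg adjusted p-values.

def log2_fold_change(mean_treated, mean_control):
    """Effect size: log2 of the expression ratio between conditions."""
    return math.log2(mean_treated / mean_control)

def bh_adjust(pvalues):
    """Benjamini-Hochberg FDR-adjusted p-values (simplified sketch)."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    prev = 1.0
    for step, i in enumerate(reversed(order)):
        rank = m - step                          # 1-based rank of p-value i
        prev = min(prev, pvalues[i] * m / rank)  # enforce monotonicity
        adjusted[i] = prev
    return adjusted

print(log2_fold_change(200, 50))            # 2.0 (a 4x up-regulation)
print(bh_adjust([0.01, 0.04, 0.03, 0.20]))  # adjusted values >= raw values
```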
442
|
Other covariates (also referred to as factors, features, labels, or parameters) can include batch effects, known artifacts, and any metadata that might confound or mediate gene expression. In addition to known covariates, unknown covariates can also be estimated through unsupervised machine learning approaches including principal component, surrogate variable, and PEER analyses. Hidden variable analyses are often employed for human tissue RNA-Seq data, which typically have additional artifacts not captured in the metadata (e.g., ischemic time, sourcing from multiple institutions, underlying clinical traits, collecting data across many years with many personnel).
|
RNA seq
| 0.84862
|
443
|
RNA-Seq has the potential to identify new disease biology, profile biomarkers for clinical indications, infer druggable pathways, and make genetic diagnoses. These results could be further personalized for subgroups or even individual patients, potentially highlighting more effective prevention, diagnostics, and therapy. The feasibility of this approach is in part dictated by costs in money and time; a related limitation is the required team of specialists (bioinformaticians, physicians/clinicians, basic researchers, technicians) to fully interpret the huge amount of data generated by this analysis.
|
RNA seq
| 0.84862
|
444
|
Fluid Phase Equilibria is a peer-reviewed scientific journal on physical chemistry and thermodynamics that is published by Elsevier. The articles deal with experimental, theoretical and applied research related to properties of pure components and mixtures, especially phase equilibria, caloric and transport properties of fluid and solid phases. It has an impact factor of 2.775 (2020).
|
Fluid Phase Equilibria
| 0.848502
|
445
|
The current editors are: Clare McCabe (Editor-in-Chief), Vanderbilt University, Department of Chemical and Biomolecular Engineering, Nashville, Tennessee, United States; Ioannis Economou, Texas A&M University at Qatar, Education City, PO Box 23874, Doha, Qatar; Yoshio Iwai, Kyushu University, Faculty of Engineering, Graduate School of Engineering, Department of Chemical Engineering, 744 Motooka, 819-0395 Fukuoka, Japan; Georgios Kontogeorgis, Technical University of Denmark, Department of Chemical and Biochemical Engineering, Søltofts Plads, Building 229, DK-2800 Kgs. Lyngby, Denmark; Ana Soto, University of Santiago de Compostela, School of Engineering, Rúa Lope Gómez de Marzoa s/n, 15782 Santiago de Compostela, Spain.
|
Fluid Phase Equilibria
| 0.848502
|
446
|
A parabolic segment is the region bounded by a parabola and a line. To find the area of a parabolic segment, Archimedes considers a certain inscribed triangle. The base of this triangle is the given chord of the parabola, and the third vertex is the point on the parabola such that the tangent to the parabola at that point is parallel to the chord. Proposition 1 of the work states that a line from the third vertex drawn parallel to the axis divides the chord into equal segments. The main theorem claims that the area of the parabolic segment is 4⁄3 that of the inscribed triangle.
|
Quadrature of the Parabola
| 0.848491
|
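Archimedes' 4⁄3 result above can be checked exactly for the parabola y = x² with rational arithmetic. The chord endpoints are chosen arbitrarily; the third vertex follows the construction in the text (the tangent there, of slope 2x, is parallel to the chord).

```python
from fractions import Fraction

# Exact check of the quadrature result for the parabola y = x^2.
a, b = Fraction(-1), Fraction(2)          # chord endpoints (arbitrary)
chord_slope = (b**2 - a**2) / (b - a)     # equals a + b
m = chord_slope / 2                       # tangent slope 2m matches the chord

def shoelace(p, q, r):
    """Area of the triangle with vertices p, q, r."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

triangle = shoelace((a, a**2), (b, b**2), (m, m**2))

# Exact segment area: integral of (chord - parabola) over [a, b] = (b-a)^3 / 6.
segment = (b - a) ** 3 / 6

print(segment / triangle)  # 4/3
```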
447
|
S. cerevisiae, a model organism in biology, has a genome of only around 12 million nucleotide pairs, and was the first unicellular eukaryote to have its whole genome sequenced. The first multicellular eukaryote, and animal, to have its whole genome sequenced was the nematode worm Caenorhabditis elegans in 1998. Eukaryotic genomes are sequenced by several methods, including shotgun sequencing of short DNA fragments and sequencing of larger DNA clones from DNA libraries such as bacterial artificial chromosomes (BACs) and yeast artificial chromosomes (YACs). In 1999, the entire DNA sequence of human chromosome 22, the shortest human autosome, was published.
|
Whole-genome sequencing
| 0.848442
|
448
|
Advanced Placement (AP) Physics 2 is a year-long introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester algebra-based university course in fluid mechanics, thermodynamics, electromagnetism, optics, and modern physics. Along with AP Physics 1, the first AP Physics 2 exam was administered in 2015. The content of AP Physics 2 overlaps with that of AP Physics C: Electricity and Magnetism, but Physics 2 is algebra-based, while Physics C is calculus-based.
|
AP Physics 2
| 0.848425
|
449
|
AP Physics 2 is an algebra-based, introductory college-level physics course in which students explore fluid statics and dynamics; thermodynamics with kinetic theory; PV diagrams and probability; electrostatics; electrical circuits with capacitors; magnetic fields; electromagnetism; physical and geometric optics; and quantum, atomic, and nuclear physics. Through inquiry-based learning, students develop scientific critical thinking and reasoning skills. The College Board has released a "Curriculum Framework" which includes the 7 principles on which the new AP Physics courses will be based as well as smaller "Enduring Understanding" concepts.
|
AP Physics 2
| 0.848425
|
450
|
In February 2014, the official course description and sample curriculum resources were posted to the College Board website, with two practice exams being posted the next month. As of September 2014, face-to-face workshops are dedicated solely to AP Physics 1 and AP Physics 2.
|
AP Physics 2
| 0.848425
|
451
|
The AP Physics 2 classes began in the fall of 2014, with the first AP exams administered in May 2015. The courses were formed through collaboration between current Advanced Placement teachers and the College Board, with guidance from the National Research Council and the National Science Foundation. As of August 2013, AP summer institutes, the College Board's professional development courses for Advanced Placement and Pre-AP teachers, dedicated 20% of the total to preparing AP Physics B educators for the new AP physics courses. Face-to-face workshops sponsored by the College Board focused 20% of their content on the course in September 2013.
|
AP Physics 2
| 0.848425
|
452
|
The compression of amino acid sequences is a comparatively challenging task. The compression ratios achieved by existing specialized amino acid sequence compressors are low compared with those of DNA sequence compressors, mainly because of the characteristics of the data. For example, modeling inversions is harder because of the information lost in the reverse mapping (from amino acids back to the DNA sequence). The current lossless data compressor that provides the highest compression is AC2. AC2 mixes various context models using neural networks and encodes the data using arithmetic encoding.
|
Protein sequence
| 0.848401
|
453
|
A physical quantity (or simply quantity) is a property of a material or system that can be quantified by measurement. A physical quantity can be expressed as a value, which is the algebraic multiplication of a numerical value and a unit of measurement. For example, the physical quantity mass, symbol m, can be quantified as m=n kg, where n is the numerical value and kg is the unit symbol (for kilogram).
|
Physical quantities
| 0.848381
|
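The value-times-unit decomposition described above (m = n kg) can be sketched in a few lines. The Quantity class below is a hypothetical illustration, not a real units library:

```python
# Minimal sketch of "physical quantity = numerical value x unit symbol".
# The Quantity class is illustrative only; real unit libraries do much more
# (dimensional analysis, unit conversion, etc.).

class Quantity:
    def __init__(self, value, unit):
        self.value = value  # the numerical value n
        self.unit = unit    # the unit symbol, e.g. "kg"

    def __mul__(self, scalar):
        # Scaling the numerical value leaves the unit symbol unchanged.
        return Quantity(self.value * scalar, self.unit)

    def __repr__(self):
        return f"{self.value} {self.unit}"

m = Quantity(75, "kg")
print(m)      # 75 kg
print(m * 2)  # 150 kg
```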
454
|
Depending on the context, solving an equation may consist of finding either any solution (finding a single solution is enough), all solutions, or a solution that satisfies further properties, such as belonging to a given interval. When the task is to find the solution that is best under some criterion, this is an optimization problem. Solving an optimization problem is generally not referred to as "equation solving", as solving methods typically start from a particular solution and repeatedly find a better one, until eventually the best solution is reached.
|
Solution (equation)
| 0.848351
|
455
|
Polynomial equations of degree up to four can be solved exactly using algebraic methods, of which the quadratic formula is the simplest example. Polynomial equations with a degree of five or higher require in general numerical methods (see below) or special functions such as Bring radicals, although some specific cases may be solvable algebraically, for example 4x^5 − x^3 − 3 = 0 (by using the rational root theorem), and x^6 − 5x^3 + 6 = 0 (by using the substitution x = z^(1/3), which simplifies this to a quadratic equation in z).
|
Solution (equation)
| 0.848351
|
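The substitution mentioned for x^6 − 5x^3 + 6 = 0 can be worked through directly: setting z = x^3 turns the sextic into the quadratic z^2 − 5z + 6 = 0, whose roots give real roots x = z^(1/3). A short check:

```python
# Solve x^6 - 5x^3 + 6 = 0 via the substitution z = x^3, as described above.

import math

# Quadratic formula on z^2 - 5z + 6 = 0.
a, b, c = 1, -5, 6
disc = math.sqrt(b * b - 4 * a * c)
z_roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]  # 3.0 and 2.0

# Each positive real z-root yields a real x-root x = z^(1/3).
x_roots = [z ** (1.0 / 3.0) for z in z_roots]

for x in x_roots:
    residual = x**6 - 5 * x**3 + 6
    print(x, residual)  # residuals are ~0 up to floating-point error
```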
456
|
During the latter half of the 20th century, the fields of genetics and molecular biology matured greatly, significantly increasing understanding of biological heredity. As with other complex and evolving fields of knowledge, the public awareness of these advances has primarily been through the mass media, and a number of common misunderstandings of genetics have arisen.
|
Common misunderstandings of genetics
| 0.848341
|
457
|
In the early years of genetics it was suggested that there might be "a gene for" a wide range of particular characteristics. This was partly because the examples studied from Mendel onwards inevitably focused on genes whose effects could be readily identified; partly because it was easier to teach science that way; and partly because the mathematics of evolutionary dynamics is simpler if there is a simple mapping between genes and phenotypic characteristics. These factors have led to the general perception that there is "a gene for" arbitrary traits, leading to controversy in particular cases such as the purported "gay gene". However, in light of the known complexities of gene expression networks (and phenomena such as epigenetics), it is clear that instances where a single gene "codes for" a single, discernible phenotypic effect are rare, and that media presentations of "a gene for X" grossly oversimplify the vast majority of situations.
|
Common misunderstandings of genetics
| 0.848341
|
458
|
While the central dogma of molecular biology describes how information cannot be passed from protein back to heritable genetic information, the other causal arrows in this chain can be bidirectional, with complex feedbacks ultimately regulating gene expression. The relationship between genotype and phenotype is therefore not a simple linear mapping and is not straightforward to decode. Rather than describing genetic information as a blueprint, some have suggested that a more appropriate analogy is that of a recipe for cooking, where a collection of ingredients is combined via a set of instructions to form an emergent structure, such as a cake, that is not described explicitly in the recipe itself.
|
Common misunderstandings of genetics
| 0.848341
|
459
|
It is widely believed that genes provide a "blueprint" for the body in much the same way that architectural or mechanical engineering blueprints describe buildings or machines. At a superficial level, genes and conventional blueprints share the common property of being low dimensional (genes are organised as a one-dimensional string of nucleotides; blueprints are typically two-dimensional drawings on paper) but containing information about fully three-dimensional structures. However, this view ignores the fundamental differences between genes and blueprints in the nature of the mapping from low order information to the high order object. In the case of biological systems, a long and complicated chain of interactions separates genetic information from macroscopic structures and functions.
|
Common misunderstandings of genetics
| 0.848341
|
460
|
Steroid isolation, depending on context, is either the isolation of chemical matter required for chemical structure elucidation, derivatization or degradation chemistry, biological testing, and other research needs (generally milligrams to grams, but often more), or the isolation of "analytical quantities" of the substance of interest, where the focus is on identifying and quantifying the substance (for example, in biological tissue or fluid) and the amount isolated depends on the analytical method but is generally less than one microgram. The methods of isolation to achieve the two scales of product are distinct, but include extraction, precipitation, adsorption, chromatography, and crystallization. In both cases, the isolated substance is purified to chemical homogeneity; combined separation and analytical methods, such as LC-MS, are chosen to be "orthogonal" (achieving their separations based on distinct modes of interaction between substance and isolating matrix) to detect a single species in the pure sample. Structure determination refers to the methods used to determine the chemical structure of an isolated pure steroid, using an evolving array of chemical and physical methods that have included NMR and small-molecule crystallography. Methods of analysis overlap both of the above areas, emphasizing analytical methods for determining whether a steroid is present in a mixture and determining its quantity.
|
Steroid metabolism
| 0.848198
|
461
|
In particle physics, charge conservation means that in reactions that create charged particles, equal numbers of positive and negative particles are always created, keeping the net amount of charge unchanged. Similarly, when particles are destroyed, equal numbers of positive and negative charges are destroyed. This property is supported without exception by all empirical observations so far.Although conservation of charge requires that the total quantity of charge in the universe is constant, it leaves open the question of what that quantity is. Most evidence indicates that the net charge in the universe is zero; that is, there are equal quantities of positive and negative charge.
|
Conservation of electric charge
| 0.848075
|
462
|
In physics, charge conservation is the principle that the total electric charge in an isolated system never changes. The net quantity of electric charge, the amount of positive charge minus the amount of negative charge in the universe, is always conserved. Charge conservation, considered as a physical conservation law, implies that the change in the amount of electric charge in any volume of space is exactly equal to the amount of charge flowing into the volume minus the amount of charge flowing out of the volume. In essence, charge conservation is an accounting relationship between the amount of charge in a region and the flow of charge into and out of that region, given by a continuity equation between the charge density ρ(x) and the current density J(x).
|
Conservation of electric charge
| 0.848075
|
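The accounting relationship described above can be sketched as a discrete update: the charge in a fixed region changes only by the net current through its boundary, dQ/dt = I_in − I_out. The numbers below are illustrative:

```python
# Discrete sketch of charge accounting in a fixed volume: the charge changes
# only by net current through the boundary.  All values are illustrative.

def update_charge(q, current_in, current_out, dt):
    """Advance the charge in the region by one time step of length dt."""
    return q + (current_in - current_out) * dt

q = 5.0  # coulombs initially inside the region
for _ in range(100):
    # Net current of -1 A for a total of 100 * 0.01 = 1 second.
    q = update_charge(q, current_in=2.0, current_out=3.0, dt=0.01)

print(q)  # 5 + (2 - 3) * 1.0 = 4.0
```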
463
|
In mathematical optimization, a feasible region, feasible set, search space, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints. This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down. For example, consider the problem of minimizing the function x^2 + y^4 with respect to the variables x and y, subject to 1 ≤ x ≤ 10 and 5 ≤ y ≤ 12. Here the feasible set is the set of pairs (x, y) in which the value of x is at least 1 and at most 10 and the value of y is at least 5 and at most 12.
|
Solution space
| 0.848056
|
464
|
Sentences are then built up out of atomic formulas by applying connectives and quantifiers. A set of sentences is called a theory; thus, individual sentences may be called theorems.
|
Sentence (mathematical logic)
| 0.847994
|
465
|
Massive parallel sequencing or massively parallel sequencing is any of several high-throughput approaches to DNA sequencing using the concept of massively parallel processing; it is also called next-generation sequencing (NGS) or second-generation sequencing. Some of these technologies emerged between 1993 and 1998 and have been commercially available since 2005. These technologies use miniaturized and parallelized platforms for sequencing of 1 million to 43 billion short reads (50 to 400 bases each) per instrument run. Many NGS platforms differ in engineering configurations and sequencing chemistry.
|
Massive parallel sequencing
| 0.847978
|
466
|
(The adjective genetic, derived from the Greek word genesis—γένεσις, "origin", predates the noun and was first used in a biological sense in 1860.) Bateson both acted as a mentor and was aided significantly by the work of other scientists from Newnham College at Cambridge, specifically the work of Becky Saunders, Nora Darwin Barlow, and Muriel Wheldale Onslow. Bateson popularized the usage of the word genetics to describe the study of inheritance in his inaugural address to the Third International Conference on Plant Hybridization in London in 1906.After the rediscovery of Mendel's work, scientists tried to determine which molecules in the cell were responsible for inheritance.
|
Genetics
| 0.847976
|
467
|
He recognized recessive traits and inherent variation by postulating that traits of past generations could reappear later, and organisms could produce progeny with different attributes. These observations represent an important prelude to Mendel's theory of particulate inheritance insofar as it features a transition of heredity from its status as myth to that of a scientific discipline, by providing a fundamental theoretical basis for genetics in the twentieth century. Other theories of inheritance preceded Mendel's work.
|
Genetics
| 0.847976
|
468
|
The observation that living things inherit traits from their parents has been used since prehistoric times to improve crop plants and animals through selective breeding. The modern science of genetics, seeking to understand this process, began with the work of the Augustinian friar Gregor Mendel in the mid-19th century. Prior to Mendel, Imre Festetics, a Hungarian noble who lived in Kőszeg, was the first to use the word "genetic" in a hereditarian context. He described several rules of biological inheritance in his work The genetic laws of the Nature (Die genetischen Gesetze der Natur, 1819). His second law is the same as what Mendel published.
|
Genetics
| 0.847976
|
469
|
This messenger RNA molecule then serves to produce a corresponding amino acid sequence through a process called translation. Each group of three nucleotides in the sequence, called a codon, corresponds either to one of the twenty possible amino acids in a protein or an instruction to end the amino acid sequence; this correspondence is called the genetic code. The flow of information is unidirectional: information is transferred from nucleotide sequences into the amino acid sequence of proteins, but it never transfers from protein back into the sequence of DNA—a phenomenon Francis Crick called the central dogma of molecular biology.The specific sequence of amino acids results in a unique three-dimensional structure for that protein, and the three-dimensional structures of proteins are related to their functions.
|
Genetics
| 0.847976
|
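The codon-to-amino-acid mapping described above can be sketched as a lookup table. Only a handful of codons of the genetic code are included here for illustration, and the function name is a hypothetical choice:

```python
# Toy translation step: read the mRNA one codon (three nucleotides) at a
# time and map each codon to an amino acid or a stop instruction.  Only a
# few entries of the genetic code are included, for illustration.

CODON_TABLE = {
    "AUG": "M",   # methionine (also the start codon)
    "UUU": "F",   # phenylalanine
    "GGC": "G",   # glycine
    "UAA": None,  # stop codon: end of the amino acid sequence
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa is None:      # stop instruction ends translation
            break
        protein.append(aa)
    return "".join(protein)

print(translate("AUGUUUGGCUAA"))  # MFG
```

The one-way flow noted in the text (the central dogma) shows up here too: the table maps codons to amino acids, and distinct codons can map to the same amino acid, so the mapping cannot be uniquely inverted.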
470
|
Population genetics studies the distribution of genetic differences within populations and how these distributions change over time. Changes in the frequency of an allele in a population are mainly influenced by natural selection, where a given allele provides a selective or reproductive advantage to the organism, as well as other factors such as mutation, genetic drift, genetic hitchhiking, artificial selection and migration.Over many generations, the genomes of organisms can change significantly, resulting in evolution. In the process called adaptation, selection for beneficial mutations can cause a species to evolve into forms better able to survive in their environment.
|
Genetics
| 0.847976
|
471
|
Modern genetics started with Mendel's studies of the nature of inheritance in plants. In his paper "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), presented in 1865 to the Naturforschender Verein (Society for Research in Nature) in Brünn, Mendel traced the inheritance patterns of certain traits in pea plants and described them mathematically. Although this pattern of inheritance could only be observed for a few traits, Mendel's work suggested that heredity was particulate, not acquired, and that the inheritance patterns of many traits could be explained through simple rules and ratios.The importance of Mendel's work did not gain wide understanding until 1900, after his death, when Hugo de Vries and other scientists rediscovered his research. William Bateson, a proponent of Mendel's work, coined the word genetics in 1905.
|
Genetics
| 0.847976
|
472
|
The adjective quadratic comes from the Latin word quadrātum ("square"). A term raised to the second power like x^2 is called a square in algebra because it is the area of a square with side x.
|
Quadratic function
| 0.847954
|
473
|
Here C is the field of complex numbers and Z is the ring of integers. A theorem of Artin and Schreier asserts that (essentially) these are all the possibilities for finite absolute Galois groups. Artin–Schreier theorem: Let K be a field whose absolute Galois group G is finite. Then either K is separably closed and G is trivial, or K is real closed and G = Z/2Z.
|
Field Arithmetic
| 0.847913
|
474
|
Let K be a field and let G = Gal(K) be its absolute Galois group. If K is algebraically closed, then G = 1. If K = R is the field of real numbers, then G = Gal(C/R) = Z/2Z.
|
Field Arithmetic
| 0.847913
|
475
|
In mathematics, field arithmetic is a subject that studies the interrelations between arithmetic properties of a field and its absolute Galois group. It is an interdisciplinary subject as it uses tools from algebraic number theory, arithmetic geometry, algebraic geometry, model theory, the theory of finite groups and of profinite groups.
|
Field Arithmetic
| 0.847913
|
476
|
Then K is Hilbertian if and only if K is ω-free. Peter Roquette proved the right-to-left direction of this theorem and conjectured the opposite direction. Michael Fried and Helmut Völklein applied algebraic topology and complex analysis to establish Roquette's conjecture in characteristic zero. Later Pop proved the Theorem for arbitrary characteristic by developing "rigid patching".
|
Field Arithmetic
| 0.847913
|
477
|
A nice theorem in this spirit connects Hilbertian fields with ω-free fields (K is ω-free if any embedding problem for K is properly solvable). Theorem. Let K be a PAC field.
|
Field Arithmetic
| 0.847913
|
478
|
A pseudo algebraically closed field (in short PAC) K is a field satisfying the following geometric property. Each absolutely irreducible algebraic variety V defined over K has a K-rational point. Over PAC fields there is a firm link between arithmetic properties of the field and group theoretic properties of its absolute Galois group.
|
Field Arithmetic
| 0.847913
|
479
|
Then with probability 1 the absolute Galois group Gal(Ns) is free of countable rank. (This result is due to Moshe Jarden.) In contrast to the above examples, if the fields in question are finitely generated over Q, Florian Pop proved that an isomorphism of the absolute Galois groups yields an isomorphism of the fields: Theorem: Let K, L be finitely generated fields over Q and let a: Gal(K) → Gal(L) be an isomorphism. Then there exists a unique isomorphism of the algebraic closures, b: Kalg → Lalg, that induces a. This generalizes earlier work of Jürgen Neukirch and Koji Uchida on number fields.
|
Field Arithmetic
| 0.847913
|
480
|
Let C be an algebraically closed field and x a variable. Then Gal(C(x)) is free of rank equal to the cardinality of C. (This result is due to Adrien Douady for characteristic 0 and has its origins in Riemann's existence theorem.)
|
Field Arithmetic
| 0.847913
|
481
|
The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the modified Ampere's law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law yields the continuity equation. By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary: d/dt Q_Ω = d/dt ∭_Ω ρ dV = −∮_{∂Ω} J · dS = −I_{∂Ω}. In particular, in an isolated system the total charge is conserved.
|
Maxwell's Equations
| 0.84789
|
482
|
The topological condition is again that the second real cohomology group is 'trivial' (meaning that its form follows from a definition). By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact.Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. Historically, a quaternionic formulation was used.
|
Maxwell's Equations
| 0.84789
|
483
|
For this reason the relativistic invariant equations are usually called the Maxwell equations as well. Each table below describes one formalism. In the tensor calculus formulation, the electromagnetic tensor Fαβ is an antisymmetric covariant order-2 tensor; the four-potential, Aα, is a covariant vector; the current, Jα, is a vector; the square brackets, [ ], denote antisymmetrization of indices; ∂α is the partial derivative with respect to the coordinate xα.
|
Maxwell's Equations
| 0.84789
|
484
|
In fact the Maxwell equations in the space + time formulation are not Galileo invariant and have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables.
|
Maxwell's Equations
| 0.84789
|
485
|
A Pappian projective space is a projective space in which Pappus's hexagon theorem holds. The following result, due to Francis Buekenhout, is an astonishing statement for finite projective spaces. Theorem: Let P_n be a finite projective space of dimension n ≥ 3 and Q a non-degenerate quadratic set that contains lines. Then: P_n is Pappian and Q is a quadric with index ≥ 2.
|
Quadratic set
| 0.847874
|
486
|
(g is called an exterior, tangent, or secant line if |g ∩ O| = 0, |g ∩ O| = 1, or |g ∩ O| = 2, respectively.) (O2) For any point P ∈ O the union O_P of all tangent lines through P is a hyperplane (the tangent plane at P). Example: a) Any sphere (quadric of index 1) is an ovoid. b) In the case of real projective spaces one can construct ovoids by combining halves of suitable ellipsoids such that they are not quadrics. For finite projective spaces of dimension n over a field K we have: Theorem: a) In case |K| < ∞ an ovoid in P_n(K) exists only if n = 2 or n = 3. b) In case |K| < ∞ and char K ≠ 2, an ovoid in P_n(K) is a quadric. Counterexamples (Tits–Suzuki ovoids) show that in general statement b) of the theorem above is not true for char K = 2:
|
Quadratic set
| 0.847873
|
487
|
According to this theorem of Beniamino Segre, for Pappian projective planes of odd order the ovals are just conics: Theorem: Let P be a Pappian projective plane of odd order. Any oval in P is an oval conic (non-degenerate quadric). Definition (ovoid): A non-empty point set O of a projective space is called an ovoid if the following properties are fulfilled: (O1) Any line meets O in at most two points.
|
Quadratic set
| 0.847873
|
488
|
For finite planes the following theorem provides a simpler definition. Theorem (oval in a finite plane): Let P be a projective plane of order n. A set o of points is an oval if |o| = n + 1 and no three points of o are collinear.
|
Quadratic set
| 0.847873
|
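The finite-plane oval condition above can be verified computationally in a small case. This sketch (an illustrative choice, not from the source) takes the conic yz = x^2 in the projective plane over GF(5), i.e. a plane of order n = 5: its points are (t : t^2 : 1) for t in GF(5) together with (0 : 1 : 0), giving n + 1 = 6 points with no three collinear.

```python
# Check the oval condition |o| = n + 1 and "no three collinear" for the
# conic y*z = x^2 over GF(5).  Collinearity of three projective points is
# equivalent to a vanishing 3x3 determinant mod 5.

from itertools import combinations

q = 5
points = [(t, (t * t) % q, 1) for t in range(q)] + [(0, 1, 0)]

def det3(p, r, s):
    """3x3 determinant mod q; zero iff the three points are collinear."""
    (a, b, c), (d, e, f), (g, h, i) = p, r, s
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % q

assert len(points) == q + 1
assert all(det3(*triple) != 0 for triple in combinations(points, 3))
print("oval check passed:", len(points), "points, no three collinear")
```

This is consistent with Segre's theorem above: GF(5) has odd order, and the oval found is indeed a conic.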
489
|
The earliest result may be found directly from elementary probability theory. Suppose we model the above process taking L and G as the fragment length and target length, respectively. The probability of "covering" any given location on the target with one particular fragment is then L/G. (This presumes L ≪ G, which is often valid, but not for all real-world cases.)
|
DNA sequencing theory
| 0.847829
|
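The elementary model above extends naturally: with N fragments, the chance a fixed base is missed by all of them is (1 − L/G)^N, and for L ≪ G this is close to the Poisson form exp(−NL/G) used in Lander–Waterman-style calculations. The numbers below are illustrative:

```python
# Coverage probability for a fixed base under the elementary model above:
# exact binomial form vs. the Poisson approximation.  Parameter values are
# illustrative (c = N*L/G = 10-fold redundancy).

import math

def p_covered_exact(L, G, N):
    return 1.0 - (1.0 - L / G) ** N

def p_covered_poisson(L, G, N):
    return 1.0 - math.exp(-N * L / G)  # redundancy (coverage) c = N*L/G

L, G, N = 500, 3_000_000, 60_000
print(p_covered_exact(L, G, N), p_covered_poisson(L, G, N))  # both ~0.99995
```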
490
|
The permanent archive of work is primarily mathematical, although numerical calculations are often conducted for particular problems too. DNA sequencing theory addresses physical processes related to sequencing DNA and should not be confused with theories of analyzing resultant DNA sequences, e.g. sequence alignment. Publications sometimes do not make a careful distinction, but the latter are primarily concerned with algorithmic issues. Sequencing theory is based on elements of mathematics, biology, and systems engineering, so it is highly interdisciplinary. The subject may be studied within the context of computational biology.
|
DNA sequencing theory
| 0.847829
|
491
|
DNA sequencing theory is the broad body of work that attempts to lay analytical foundations for determining the order of specific nucleotides in a sequence of DNA, otherwise known as DNA sequencing. The practical aspects revolve around designing and optimizing sequencing projects (known as "strategic genomics"), predicting project performance, troubleshooting experimental results, characterizing factors such as sequence bias and the effects of software processing algorithms, and comparing various sequencing methods to one another. In this sense, it could be considered a branch of systems engineering or operations research.
|
DNA sequencing theory
| 0.847829
|
492
|
For example, in the so-called "discordant read pairs method", DNA insertions can be inferred if the distance between read pairs is larger than expected. Calculations show that around 50-fold redundancy is needed to avoid false-positive errors at a 1% threshold. The advent of next-generation sequencing has also made large-scale population sequencing feasible, for example the 1000 Genomes Project to characterize variation in human population groups. While common variation is easily captured, rare variation poses a design challenge: too few samples with significant sequence redundancy risk not having a variant in the sample group, while large samples with light redundancy risk not capturing a variant in the read set that is actually in the sample group. Wendl and Wilson report a simple set of optimization rules that maximize the probability of discovery for a given set of parameters. For example, for observing a rare allele at least twice (to eliminate the possibility that it is unique to an individual), a little less than 4-fold redundancy should be used, regardless of the sample size.
|
DNA sequencing theory
| 0.847829
|
493
|
Antibodies to particular proteins, or to their modified forms, have been used in biochemistry and cell biology studies. These are among the most common tools used by molecular biologists today. There are several specific techniques and protocols that use antibodies for protein detection. The enzyme-linked immunosorbent assay (ELISA) has been used for decades to detect and quantitatively measure proteins in samples.
|
Protein analysis
| 0.847728
|
494
|
Now, through bioinformatics, there are computer programs that can in some cases predict and model the structure of proteins. These programs use the chemical properties of amino acids and structural properties of known proteins to predict the 3D model of sample proteins. This also allows scientists to model protein interactions on a larger scale. In addition, biomedical engineers are developing methods to factor in the flexibility of protein structures to make comparisons and predictions.
|
Protein analysis
| 0.847727
|
495
|
Although early large-scale shotgun proteomics analyses showed considerable variability between laboratories, presumably due in part to technical and experimental differences between laboratories, reproducibility has been improved in more recent mass spectrometry analysis, particularly on the protein level. Notably, targeted proteomics shows increased reproducibility and repeatability compared with shotgun methods, although at the expense of data density and effectiveness.Data quality. Proteomic analysis is highly amenable to automation and large data sets are created, which are processed by software algorithms. Filter parameters are used to reduce the number of false hits, but they cannot be completely eliminated. Scientists have expressed the need for awareness that proteomics experiments should adhere to the criteria of analytical chemistry (sufficient data quality, sanity check, validation).
|
Protein analysis
| 0.847727
|
496
|
One example of the use of bioinformatics and the use of computational methods is the study of protein biomarkers. Computational predictive models have shown that extensive and diverse feto-maternal protein trafficking occurs during pregnancy and can be readily detected non-invasively in maternal whole blood. This computational approach circumvented a major limitation, the abundance of maternal proteins interfering with the detection of fetal proteins, to fetal proteomic analysis of maternal blood. Computational models can use fetal gene transcripts previously identified in maternal whole blood to create a comprehensive proteomic network of the term neonate.
|
Protein analysis
| 0.847727
|
497
|
Recent advancements in bioorthogonal chemistry have revealed applications in protein analysis. The extension of using organic molecules to observe their reaction with proteins reveals extensive methods to tag them. Unnatural amino acids and various functional groups represent new growing technologies in proteomics. Specific biomolecules that are capable of being metabolized in cells or tissues are inserted into proteins or glycans.
|
Protein analysis
| 0.847727
|
498
|
Other methods include surface plasmon resonance (SPR), protein microarrays, dual polarisation interferometry, microscale thermophoresis, kinetic exclusion assay, and experimental methods such as phage display and in silico computational methods. Knowledge of protein-protein interactions is especially useful in regard to biological networks and systems biology, for example in cell signaling cascades and gene regulatory networks (GRNs, where knowledge of protein-DNA interactions is also informative). Proteome-wide analysis of protein interactions, and integration of these interaction patterns into larger biological networks, is crucial towards understanding systems-level biology.
|
Protein analysis
| 0.847727
|
499
|
The most useful application here for genetical statistics is the correlation between half-sibs. Recall that the correlation coefficient (r) is the ratio of the covariance to the variance. Therefore, r_HS = cov(HS) / s^2(all HS together) = cov(HS) / s^2_P = ¼ H^2. The correlation between full-sibs is of little utility, being r_FS = cov(FS) / s^2(all FS together) = cov(FS) / s^2_P.
|
Quantitative genetics
| 0.847725
|