id: int32 (0 to 100k)
text: string (lengths 21 to 3.54k)
source: string (lengths 1 to 124)
similarity: float32 (0.78 to 0.88)
700
In a 2018 interview, Chalmers called quantum mechanics "a magnet for anyone who wants to find room for crazy properties of the mind," though not one entirely without warrant. The relationship between observation (and, by extension, consciousness) and the wave-function collapse is known as the measurement problem. It seems that atoms, photons, etc. are in quantum superposition (which is to say, in many seemingly contradictory states or locations simultaneously) until measured in some way; this transition out of superposition upon measurement is known as wave-function collapse.
Combination problem
0.844881
701
In The Conscious Mind (1996), Chalmers attempts to pinpoint why the hard problem is so hard. He concludes that consciousness is irreducible to lower-level physical facts, just as the fundamental laws of physics are irreducible to lower-level physical facts. Therefore, consciousness should be taken as fundamental in its own right and studied as such. Just as fundamental properties of reality are ubiquitous (even small objects have mass), consciousness may also be, though he considers that an open question. In Mortal Questions (1979), Thomas Nagel argues that panpsychism follows from four premises (p. 181): P1: There is no spiritual plane or disembodied soul; everything that exists is material.
Combination problem
0.844881
702
Bertrand Russell's neutral monist views tended toward panpsychism. The physicist Arthur Eddington also defended a form of panpsychism. The psychologists Gerard Heymans, James Ward and Charles Augustus Strong also endorsed variants of panpsychism (p. 158). In 1990, the physicist David Bohm published "A new theory of the relationship of mind and matter," a paper based on his interpretation of quantum mechanics.
Combination problem
0.844881
703
In the philosophy of mind, panpsychism () is the view that the mind or a mindlike aspect is a fundamental and ubiquitous feature of reality. It is also described as a theory that "the mind is a fundamental feature of the world which exists throughout the universe." It is one of the oldest philosophical theories, and has been ascribed to philosophers including Thales, Plato, Spinoza, Leibniz, William James, Alfred North Whitehead, Bertrand Russell, and Galen Strawson. In the 19th century, panpsychism was the default philosophy of mind in Western thought, but it saw a decline in the mid-20th century with the rise of logical positivism. Recent interest in the hard problem of consciousness and developments in the fields of neuroscience, psychology, and quantum physics have revived interest in panpsychism in the 21st century.
Combination problem
0.844881
704
The Coulomb potential admits continuum states (with E > 0), describing electron-proton scattering, as well as discrete bound states, representing the hydrogen atom. It can also be derived within the non-relativistic limit between two charged particles, as follows. Under the Born approximation, in non-relativistic quantum mechanics, the scattering amplitude 𝒜(|p⟩ → |p′⟩) is proportional to the Fourier transform of the potential. This is compared with the field-theoretic calculation, where we look at the (connected) S-matrix entry for two electrons scattering off each other, treating one with "fixed" momentum as the source of the potential, and the other scattering off that potential. Using the Feynman rules to compute the S-matrix element in the non-relativistic limit with m₀ ≫ |p|, and discarding a factor of (2m)² that arises only from the differing normalizations of momentum eigenstates in QFT compared to QM, the two expressions can be equated. Fourier transforming both sides, solving the integral and taking ε → 0 at the end then yields the Coulomb potential. However, the equivalent results of the classical Born derivations for the Coulomb problem are thought to be strictly accidental. The Coulomb potential, and its derivation, can be seen as a special case of the Yukawa potential, which is the case where the exchanged boson – the photon – has no rest mass. (A brief sketch of the final Fourier-transform step follows this row.)
Electric force
0.844834
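Supplementing the row above: a minimal sketch of the final step it alludes to, namely Fourier transforming the momentum-space amplitude of a massive exchanged boson and then sending the mass (which plays the role of the regulator ε) to zero. This is standard textbook material rather than a quotation of the row's source, and the overall normalization depends on the unit convention.

```latex
% Fourier transform of the massive (Yukawa-type) propagator factor:
\[
\int \frac{d^{3}q}{(2\pi)^{3}}\,
     \frac{e^{\,i\mathbf{q}\cdot\mathbf{r}}}{|\mathbf{q}|^{2}+m^{2}}
  \;=\; \frac{e^{-m r}}{4\pi r}
  \;\xrightarrow{\;m\to 0\;}\; \frac{1}{4\pi r}.
\]
% Hence a momentum-space amplitude proportional to e^2/(|q|^2 + m^2) corresponds to a
% screened (Yukawa) potential proportional to e^2 e^{-mr}/(4 pi r), which reduces to the
% Coulomb form proportional to e^2/(4 pi r) when the exchanged boson is massless.
```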
705
Intuitively, passing through five points in general linear position specifies five independent linear constraints on the (projective) linear space of conics, and hence specifies a unique conic, though this brief statement ignores subtleties. More precisely, this is seen as follows: conics correspond to points in the five-dimensional projective space P⁵; requiring a conic to pass through a point imposes a linear condition on the coordinates: for a fixed (x, y), the equation Ax² + Bxy + Cy² + Dx + Ey + F = 0 is a linear equation in (A, B, C, D, E, F); by dimension counting, five constraints (that the curve passes through five points) are necessary to specify a conic, as each constraint cuts the dimension of possibilities by 1, and one starts with 5 dimensions; in 5 dimensions, the intersection of 5 (independent) hyperplanes is a single point (formally, by Bézout's theorem); general linear position of the points means that the constraints are independent, and thus do specify a unique conic; the resulting conic is non-degenerate because it is a curve (since it has more than 1 point), and does not contain a line (else it would split as two lines, at least one of which must contain 3 of the 5 points, by the pigeonhole principle), so it is irreducible. The two subtleties in the above analysis are that the resulting point is a quadratic equation (not a linear equation), and that the constraints are independent. The first is simple: if A, B, and C all vanish, then the equation Dx + Ey + F = 0 defines a line, and any 3 points on this (indeed any number of points) lie on a line – thus general linear position ensures a conic. (A short numerical check of the linear-system argument follows this row.)
Five points determine a conic
0.844728
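As a concrete illustration of the dimension count in the row above, the sketch below sets up the 5×6 homogeneous linear system for a conic through five points and recovers its one-dimensional null space numerically. The five sample points are illustrative, not taken from the source text.

```python
# Recover the unique conic A x^2 + B xy + C y^2 + D x + E y + F = 0
# through five points in general linear position (no three collinear).
import numpy as np

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 2.0), (3.0, 1.0)]

# Each point contributes one linear constraint on (A, B, C, D, E, F).
M = np.array([[x * x, x * y, y * y, x, y, 1.0] for x, y in points])

# The coefficient vector spans the 1-dimensional null space of M.
_, _, vt = np.linalg.svd(M)
A, B, C, D, E, F = vt[-1]

# Sanity check: every input point satisfies the conic equation (up to rounding).
for x, y in points:
    assert abs(A * x * x + B * x * y + C * y * y + D * x + E * y + F) < 1e-9
print("conic coefficients:", A, B, C, D, E, F)
```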
706
Instead of passing through points, a different condition on a curve is being tangent to a given line. Being tangent to five given lines also determines a conic, by projective duality, but from the algebraic point of view tangency to a line is a quadratic constraint, so naive dimension counting yields 2⁵ = 32 conics tangent to five given lines, of which 31 must be ascribed to degenerate conics, as described in fudge factors in enumerative geometry; formalizing this intuition requires significant further development to justify. Another classic problem in enumerative geometry, of similar vintage to conics, is the Problem of Apollonius: a circle that is tangent to three circles in general determines eight circles, as each of these is a quadratic condition and 2³ = 8. As a question in real geometry, a full analysis involves many special cases, and the actual number of circles may be any number between 0 and 8, except for 7.
Five points determine a conic
0.844728
707
As is well known, three non-collinear points determine a circle in Euclidean geometry and two distinct points determine a pencil of circles such as the Apollonian circles. These results seem to run counter to the general result, since circles are special cases of conics. However, in a pappian projective plane a conic is a circle only if it passes through two specific points on the line at infinity, so a circle is determined by five non-collinear points, three in the affine plane and these two special points. Similar considerations explain the smaller than expected number of points needed to define pencils of circles.
Five points determine a conic
0.844728
708
While five points determine a conic, sets of six or more points on a conic are not in general position, that is, they are constrained as is demonstrated in Pascal's theorem. Similarly, while nine points determine a cubic, if the nine points lie on more than one cubic—i.e., they are the intersection of two cubics—then they are not in general position, and indeed satisfy an additional constraint, as stated in the Cayley–Bacharach theorem. Four points do not determine a conic, but rather a pencil, the 1-dimensional linear system of conics which all pass through the four points (formally, have the four points as base locus). Similarly, three points determine a 2-dimensional linear system (net), two points determine a 3-dimensional linear system (web), one point determines a 4-dimensional linear system, and zero points place no constraints on the 5-dimensional linear system of all conics.
Five points determine a conic
0.844728
709
The natural generalization is to ask for what value of k a configuration of k points (in general position) in n-space determines a variety of degree d and dimension m, which is a fundamental question in enumerative geometry. A simple case of this is for a hypersurface (a codimension 1 subvariety, the zeros of a single polynomial, the case m = n − 1), of which plane curves are an example. In the case of a hypersurface, the answer is given in terms of the multiset coefficient, more familiarly the binomial coefficient, or more elegantly the rising factorial, as k = C(n + d, d) − 1 = (d + 1)(d + 2)⋯(d + n)/n! − 1. (A short numerical check of this formula follows this row.)
Five points determine a conic
0.844728
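A quick numerical check of the count discussed above, under the assumption that the reconstructed closed form k = C(n + d, d) − 1 is the intended one: plane conics (n = 2, d = 2) require 5 points and plane cubics (n = 2, d = 3) require 9.

```python
# k = C(n + d, d) - 1: points in general position needed to determine
# a degree-d hypersurface in n-space (conics: n = 2, d = 2).
from math import comb

def points_needed(n: int, d: int) -> int:
    return comb(n + d, d) - 1

assert points_needed(2, 2) == 5   # plane conics
assert points_needed(2, 3) == 9   # plane cubics
print(points_needed(3, 2))        # quadric surfaces in 3-space: prints 9
```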
710
In Euclidean and projective geometry, five points determine a conic (a degree-2 plane curve), just as two (distinct) points determine a line (a degree-1 plane curve). There are additional subtleties for conics that do not exist for lines, and thus the statement and its proof for conics are both more technical than for lines. Formally, given any five points in the plane in general linear position, meaning no three collinear, there is a unique conic passing through them, which will be non-degenerate; this is true over both the Euclidean plane and any pappian projective plane. Indeed, given any five points there is a conic passing through them, but if three of the points are collinear the conic will be degenerate (reducible, because it contains a line), and may not be unique; see further discussion.
Five points determine a conic
0.844728
711
Synthetically, the conic can be constructed by the Braikenridge–Maclaurin construction, by applying the Braikenridge–Maclaurin theorem, which is the converse of Pascal's theorem. Pascal's theorem states that given 6 points on a conic (a hexagon), the lines defined by opposite sides intersect in three collinear points. This can be reversed to construct the possible locations for a 6th point, given 5 existing ones.
Five points determine a conic
0.844728
712
Now given five points X, Y, A, B, C, the three lines XA, XB, XC can be taken to the three lines YA, YB, YC by a unique projective transform, since projective transforms are simply 3-transitive on lines (they are simply 3-transitive on points, hence by projective duality they are 3-transitive on lines). Under this map X maps to Y, since these are the unique intersection points of these lines, and thus they satisfy the hypothesis of Steiner's theorem. The resulting conic thus contains all five points, and is the unique such conic, as desired.
Five points determine a conic
0.844728
713
That five points determine a conic can be proven by synthetic geometry—i.e., in terms of lines and points in the plane—in addition to the analytic (algebraic) proof given above. Such a proof can be given using a theorem of Jakob Steiner, which states: Given a projective transformation f between the pencil of lines passing through a point X and the pencil of lines passing through a point Y, the set C of intersection points between a line x and its image f(x) forms a conic. Note that X and Y are on this conic by considering the preimage and image of the line XY (which is respectively a line through X and a line through Y). This can be shown by taking the points X and Y to the standard points [1 : 0 : 0] and [0 : 1 : 0] by a projective transformation, in which case the pencils of lines correspond to the horizontal and vertical lines in the plane, and the intersections of corresponding lines to the graph of a function, which (it must be shown) is a hyperbola, hence a conic, hence the original curve C is a conic.
Five points determine a conic
0.844728
714
SolveSpace is shipped with the following basic features: 2D Sketch Modeling: SolveSpace supports parametric 2D drawing of lines, circles, arcs, cubic Bézier curves, etc.; datum points and lines are also supported for general, reference-based modeling. 3D Solid Modeling: Drawing, extrusion, rotation and revolution along a helix are supported in both modes. In 3D it is possible to use basic Boolean operations (union, difference, intersection), though as of version 3.0, SolveSpace had limitations on the order of application of these operations. Mechanical design and analysis: By using the built-in constraint solver it is possible to visualize planar or spatial linkages with pin, ball, or slide joints, trace their movements, and export the data in CSV format. Assembly: SolveSpace allows solids to be imported in a special mode that does not allow modeling. These imported solids can then be constrained to ensure that the designed model's dimensions meet necessary requirements. Plane and solid geometry: Replace hand-solved trigonometry and spreadsheets with a live dimensioned drawing.
SolveSpace
0.844719
715
SolveSpace is a free and open-source 2D/3D constraint-based parametric computer-aided design (CAD) software that supports basic 2D and 3D constructive solid geometry modeling. It is a constraint-based parametric modeler with simple mechanical simulation capabilities. Version 2.1 and onward runs on Windows, Linux and macOS.
SolveSpace
0.844719
716
In 2017 Tran et al. proposed DeepNovo, the first deep-learning-based de novo sequencing software. The benchmark analysis in the original publication demonstrated that DeepNovo outperformed previous methods, including PEAKS, Novor and PepNovo, by a significant margin. DeepNovo is implemented in Python with the TensorFlow framework.
De novo peptide sequencing
0.844551
717
PEAKS and NovoHMM had the best sensitivity in both the QSTAR and LCQ data as well. However, no evaluated algorithm exceeded 50% exact identification for both data sets. Recent progress in mass spectrometers has made it possible to generate mass spectra of ultra-high resolution. The improved accuracy, together with the increased amount of mass spectrometry data being generated, has drawn interest in applying deep learning techniques to de novo peptide sequencing.
De novo peptide sequencing
0.844551
718
This method can be helpful for manual de novo peptide sequencing, but does not work under high-throughput conditions. The fourth method, which is considered to be successful, is based on graph theory. Applying graph theory to de novo peptide sequencing was first mentioned by Bartels.
De novo peptide sequencing
0.844551
719
More recently, deep learning techniques have been applied to solve the de novo peptide sequencing problem. The first breakthrough was DeepNovo, which adopted the convolutional neural network structure, achieved major improvements in sequence accuracy, and enabled complete protein sequence assembly without assisting databases. Subsequently, additional network structures, such as PointNet (PointNovo), have been adopted to extract features from a raw spectrum. The de novo peptide sequencing problem is then framed as a sequence prediction problem.
De novo peptide sequencing
0.844551
720
Given a previously predicted partial peptide sequence, neural-network-based de novo peptide sequencing models repeatedly generate the most probable next amino acid until the predicted peptide's mass matches the precursor mass. At inference time, search strategies such as beam search can be adopted to explore a larger search space while keeping the computational cost low. Compared with previous methods, neural-network-based models have demonstrated significantly better accuracy and sensitivity. Moreover, with a careful model design, deep-learning-based de novo peptide sequencing algorithms can also be fast enough to achieve real-time peptide de novo sequencing. PEAKS software incorporates this neural network learning in its de novo sequencing algorithms. (A toy sketch of a mass-constrained beam search follows this row.)
De novo peptide sequencing
0.844551
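A toy, self-contained sketch of the precursor-mass-constrained beam search described in the row above. The scoring function is a deterministic stand-in for a trained neural network (no real model or library API is implied), and only a subset of standard monoisotopic residue masses is included for brevity.

```python
# Beam search over amino acid residues, keeping only sequences whose total
# mass (residues + water) can still reach the precursor mass within tolerance.
import heapq

RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
}
WATER = 18.01056  # mass added by the terminal H and OH of the peptide

def toy_score(prefix: str, residue: str) -> float:
    # Placeholder for the model's log-probability of the next residue.
    return -((len(prefix) * 7 + ord(residue)) % 10) / 10.0

def beam_search(precursor_mass: float, beam_width: int = 10, tol: float = 0.02):
    beams = [(0.0, "", 0.0)]          # (cumulative score, sequence, residue mass)
    finished = []
    while beams:
        candidates = []
        for score, seq, mass in beams:
            for res, m in RESIDUE_MASS.items():
                new_mass = mass + m
                if new_mass + WATER > precursor_mass + tol:
                    continue          # would overshoot the precursor mass
                state = (score + toy_score(seq, res), seq + res, new_mass)
                if abs(new_mass + WATER - precursor_mass) <= tol:
                    finished.append(state)   # mass matches: complete peptide
                else:
                    candidates.append(state)
        beams = heapq.nlargest(beam_width, candidates, key=lambda b: b[0])
    return sorted(finished, key=lambda b: b[0], reverse=True)

# Example: candidate peptides for a hypothetical precursor of 189.08 Da (e.g. "GN").
for score, seq, mass in beam_search(189.08)[:3]:
    print(seq, round(mass + WATER, 4), round(score, 2))
```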
721
Once an interactome has been created, there are numerous ways to analyze its properties. However, there are two important goals of such analyses. First, scientists try to elucidate the systems properties of interactomes, e.g. the topology of its interactions. Second, studies may focus on individual proteins and their role in the network. Such analyses are mainly carried out using bioinformatics methods and include the following, among many others:
Molecular interaction
0.844526
722
Interactomics is a discipline at the intersection of bioinformatics and biology that deals with studying both the interactions and the consequences of those interactions between and among proteins, and other molecules within a cell. Interactomics thus aims to compare such networks of interactions (i.e., interactomes) between and within species in order to find how the traits of such networks are either preserved or varied. Interactomics is an example of "top-down" systems biology, which takes an overhead view of a biosystem or organism.
Molecular interaction
0.844526
723
Interaction networks can be analyzed using the tools of graph theory. Network properties include the degree distribution, clustering coefficients, betweenness centrality, and many others. The distribution of properties among the proteins of an interactome has revealed that the interactome networks often have scale-free topology where functional modules within a network indicate specialized subnetworks. Such modules can be functional, as in a signaling pathway, or structural, as in a protein complex. In fact, it is a formidable task to identify protein complexes in an interactome, given that a network on its own does not directly reveal the presence of a stable complex.
Molecular interaction
0.844526
724
Some efforts have been made to systematically extract interaction networks directly from the scientific literature. Such approaches range in complexity from simple co-occurrence statistics of entities that are mentioned together in the same context (e.g. sentence) to sophisticated natural language processing and machine learning methods for detecting interaction relationships.
Molecular interaction
0.844526
725
Using these definitions and conventions, colloquially "in Gaussian units", the Maxwell equations take their Gaussian-units form. The equations simplify slightly when a system of quantities is chosen in which the speed of light, c, is used for nondimensionalization, so that, for example, seconds and light-seconds are interchangeable, and c = 1. Further changes are possible by absorbing factors of 4π. This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units, used mainly in particle physics).
Maxwell's differential equations
0.844521
726
The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of ε0 and μ0 into the units of calculation, by convention. With a corresponding change in convention for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor: the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension (p. vii). Such modified definitions are conventionally used with the Gaussian (CGS) units.
Maxwell's differential equations
0.844521
727
In organic chemistry, a peptide bond is an amide type of covalent chemical bond linking two consecutive alpha-amino acids from C1 (carbon number one) of one alpha-amino acid and N2 (nitrogen number two) of another, along a peptide or protein chain. It can also be called a eupeptide bond to distinguish it from an isopeptide bond, which is another type of amide bond between two amino acids.
Protein backbone
0.844517
728
Often the more easily accessible atmospheric parameter is the mixing ratio w. Through expansion upon the definition of vapor pressure in the law of partial pressures as presented above and the definition of mixing ratio, e/p = w/(w + ε), which allows T_v = T (w + ε)/(ε(1 + w)). Algebraic expansion of that equation, ignoring higher orders of w due to its typical order in Earth's atmosphere of 10⁻³, and substituting ε with its constant value yields the linear approximation T_v ≈ T(1 + 0.608 w). (A numerical comparison of the exact and linearized forms follows this row.)
Virtual temperature
0.844475
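A quick numerical comparison of the exact and linearized expressions above. The constant ε ≈ 0.622 (ratio of the molar masses of water vapour and dry air) and the sample values are standard or illustrative, not taken from the row itself.

```python
# Virtual temperature: exact form vs. the linear approximation T_v ≈ T (1 + 0.608 w).
EPSILON = 0.622  # M_water / M_dry_air

def tv_exact(T: float, w: float) -> float:
    """T_v = T (w + eps) / (eps (1 + w)); T in kelvin, mixing ratio w in kg/kg."""
    return T * (w + EPSILON) / (EPSILON * (1.0 + w))

def tv_linear(T: float, w: float) -> float:
    return T * (1.0 + 0.608 * w)

T, w = 300.0, 0.01  # 300 K and a 10 g/kg mixing ratio
print(tv_exact(T, w), tv_linear(T, w))  # agree to within a few hundredths of a kelvin
```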
729
The elements that make up biological molecules (C, H, N, O, P, S) are too light (low atomic number, Z) to be clearly visualized as individual atoms by transmission electron microscopy. To circumvent this problem, the DNA bases can be labeled with heavier atoms (higher Z). Each nucleotide is tagged with a characteristic heavy label, so that they can be distinguished in the transmission electron micrograph. ZS Genetics proposes using three heavy labels: bromine (Z=35), iodine (Z=53), and trichloromethane (total Z=63).
Transmission electron microscopy DNA sequencing
0.844445
730
Sequencing technologies that generate long reads, including transmission electron microscopy DNA sequencing, can capture entire haploblocks in a single read. That is, haplotypes are not broken up among multiple reads, and the genetically linked alleles remain together in the sequencing data. Therefore, long reads make haplotyping easier and more accurate, which is beneficial to the field of population genetics.
Transmission electron microscopy DNA sequencing
0.844445
731
In 2010 Krivanek and colleagues reported several technical improvements to the HAADF method, including a combination of aberration-corrected electron optics and low accelerating voltage. The latter is crucial for imaging biological objects, as it reduces damage by the beam and increases the image contrast for light atoms. As a result, single atom substitutions in a boron nitride monolayer could be imaged. Despite the invention of a multitude of chemical and fluorescent sequencing technologies, electron microscopy is still being explored as a means of performing single-molecule DNA sequencing. For example, in 2012 a collaboration between scientists at Harvard University, the University of New Hampshire and ZS Genetics demonstrated the ability to read long sequences of DNA using the technique; however, transmission electron microscopy DNA sequencing technology is still far from being commercially available.
Transmission electron microscopy DNA sequencing
0.844445
732
Longer read lengths: ZS Genetics has estimated potential read lengths of transmission electron microscopy DNA sequencing to be 10,000 to 20,000 base pairs with a rate of 1.7 billion base pairs per day. Such long read lengths would allow easier de novo genome assembly and direct detection of haplotypes, among other applications. Lower cost: Transmission electron microscopy DNA sequencing is estimated to cost just US$5,000-US$10,000 per human genome, compared to the more expensive second-generation DNA sequencing alternatives. No dephasing: Dephasing of the DNA strands due to loss in synchronicity during synthesis is a major problem of second-generation sequencing technologies.
Transmission electron microscopy DNA sequencing
0.844445
733
It satisfies the Poisson equation in the sense of distributions. Moreover, when the measure is positive, the Newtonian potential is subharmonic on R^d. If f is a compactly supported continuous function (or, more generally, a finite measure) that is rotationally invariant, then the convolution of f with Γ satisfies (Γ ∗ f)(x) = λ Γ(x) for x outside the support of f, where λ is the total mass of f. In dimension d = 3, this reduces to Newton's theorem that the potential energy of a small mass outside a much larger spherically symmetric mass distribution is the same as if all of the mass of the larger object were concentrated at its center.
Single layer potential
0.844427
734
In mathematics, the Newtonian potential or Newton potential is an operator in vector calculus that acts as the inverse to the negative Laplacian, on functions that are smooth and decay rapidly enough at infinity. As such, it is a fundamental object of study in potential theory. In its general nature, it is a singular integral operator, defined by convolution with a function having a mathematical singularity at the origin, the Newtonian kernel Γ which is the fundamental solution of the Laplace equation. It is named for Isaac Newton, who first discovered it and proved that it was a harmonic function in the special case of three variables, where it served as the fundamental gravitational potential in Newton's law of universal gravitation.
Single layer potential
0.844427
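For concreteness alongside the two rows above, the Newtonian kernel and potential can be written out explicitly. These are standard formulas rather than quotations from the rows, and sign and normalization conventions vary between references; the convention below takes ΔΓ = δ, consistent with the subharmonicity remark above.

```latex
% Newtonian kernel (fundamental solution, \Delta\Gamma = \delta) and Newtonian potential.
\[
\Gamma(x) =
\begin{cases}
  \dfrac{1}{2\pi}\,\log|x|, & d = 2,\\[8pt]
  -\dfrac{1}{(d-2)\,\omega_{d-1}}\,|x|^{2-d}, & d \ge 3,
\end{cases}
\qquad
u(x) = (\Gamma * f)(x) = \int_{\mathbb{R}^{d}} \Gamma(x-y)\,f(y)\,dy,
\]
% where \omega_{d-1} is the surface area of the unit sphere in R^d
% (for d = 3, \omega_2 = 4\pi and \Gamma(x) = -1/(4\pi|x|)), so that \Delta u = f.
```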
735
The systems approach is a method used to study myogenesis, which combines a number of different techniques, such as high-throughput screening technologies, genome-wide cell-based assays, and bioinformatics, to identify different factors of a system. It has been specifically used in the investigation of skeletal muscle development and the identification of its regulatory network. The systems approach using high-throughput sequencing and ChIP-chip analysis has been essential in elucidating the targets of myogenic regulatory factors like MyoD and myogenin, their inter-related targets, and how MyoD acts to alter the epigenome in myoblasts and myotubes. It has also revealed the significance of PAX3 in myogenesis, and that it ensures the survival of myogenic progenitors. This approach, using a cell-based high-throughput transfection assay and whole-mount in situ hybridization, was used in identifying the myogenetic regulator RP58, and the tendon differentiation gene, Mohawk homeobox.
Myogenesis
0.844404
736
A Bayesian Nash equilibrium (BNE) is defined as a strategy profile that maximizes the expected payoff for each player given their beliefs and given the strategies played by the other players. That is, a strategy profile σ is a Bayesian Nash equilibrium if and only if for every player i, keeping the strategies of every other player fixed, strategy σ_i maximizes the expected payoff of player i according to that player's beliefs. For finite Bayesian games, i.e., where both the action and the type space are finite, there are two equivalent representations. The first is called the agent-form game (see Theorem 9.51 of the Game Theory book), which expands the number of players from |N| to Σ_{i=1}^{|N|} |Θ_i|, i.e., every type of each player becomes a player.
Bayesian game
0.844399
737
Asymmetric (public key) encryption: ElGamal Elliptic curve cryptography MAE1 NTRUEncrypt RSA Digital signatures (asymmetric authentication): DSA, and its variants: ECDSA and Deterministic ECDSA EdDSA (Ed25519) RSA Cryptographic hash functions (see also the section on message authentication codes): BLAKE MD5 – Note that there is now a method of generating collisions for MD5 RIPEMD-160 SHA-1 – Note that there is now a method of generating collisions for SHA-1 SHA-2 (SHA-224, SHA-256, SHA-384, SHA-512) SHA-3 (SHA3-224, SHA3-256, SHA3-384, SHA3-512, SHAKE128, SHAKE256) Tiger (TTH), usually used in Tiger tree hashes WHIRLPOOL Cryptographically secure pseudo-random number generators Blum Blum Shub – based on the hardness of factorization Fortuna, intended as an improvement on Yarrow algorithm Linear-feedback shift register (note: many LFSR-based algorithms are weak or have been broken) Yarrow algorithm Key exchange Diffie–Hellman key exchange Elliptic-curve Diffie–Hellman (ECDH) Key derivation functions, often used for password hashing and key stretching bcrypt PBKDF2 scrypt Argon2 Message authentication codes (symmetric authentication algorithms, which take a key as a parameter): HMAC: keyed-hash message authentication Poly1305 SipHash Secret sharing, Secret Splitting, Key Splitting, M of N algorithms Blakey's Scheme Shamir's Scheme Symmetric (secret key) encryption: Advanced Encryption Standard (AES), winner of NIST competition, also known as Rijndael Blowfish Twofish Threefish Data Encryption Standard (DES), sometimes DE Algorithm, winner of NBS selection competition, replaced by AES for most purposes IDEA RC4 (cipher) Tiny Encryption Algorithm (TEA) Salsa20, and its updated variant ChaCha20 Post-quantum cryptography Proof-of-work algorithms
Graph algorithms
0.844393
738
Linear search: locates an item in an unsorted sequence Selection algorithm: finds the kth largest item in a sequence Ternary search: a technique for finding the minimum or maximum of a function that is either strictly increasing and then strictly decreasing or vice versa Sorted lists Binary search algorithm: locates an item in a sorted sequence Fibonacci search technique: search a sorted sequence using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers Jump search (or block search): linear search on a smaller subset of the sequence Predictive search: binary-like search which factors in magnitude of search term versus the high and low values in the search. Sometimes called dictionary search or interpolated search. Uniform binary search: an optimization of the classic binary search algorithm Eytzinger binary search: cache friendly binary search algorithm
Graph algorithms
0.844393
739
Clipping Line clipping Cohen–Sutherland Cyrus–Beck Fast-clipping Liang–Barsky Nicholl–Lee–Nicholl Polygon clipping Sutherland–Hodgman Vatti Weiler–Atherton Contour lines and Isosurfaces Marching cubes: extract a polygonal mesh of an isosurface from a three-dimensional scalar field (sometimes called voxels) Marching squares: generates contour lines for a two-dimensional scalar field Marching tetrahedrons: an alternative to Marching cubes Discrete Green's Theorem: is an algorithm for computing double integral over a generalized rectangular domain in constant time. It is a natural extension to the summed area table algorithm Flood fill: fills a connected region of a multi-dimensional array with a specified symbol Global illumination algorithms: Considers direct illumination and reflection from other objects. Ambient occlusion Beam tracing Cone tracing Image-based lighting Metropolis light transport Path tracing Photon mapping Radiosity Ray tracing Hidden-surface removal or Visual surface determination Newell's algorithm: eliminate polygon cycles in the depth sorting required in hidden-surface removal Painter's algorithm: detects visible parts of a 3-dimensional scenery Scanline rendering: constructs an image by moving an imaginary line over the image Warnock algorithm Line Drawing: graphical algorithm for approximating a line segment on discrete graphical media. Bresenham's line algorithm: plots points of a 2-dimensional array to form a straight line between 2 specified points (uses decision variables) DDA line algorithm: plots points of a 2-dimensional array to form a straight line between 2 specified points (uses floating-point math) Xiaolin Wu's line algorithm: algorithm for line antialiasing. Midpoint circle algorithm: an algorithm used to determine the points needed for drawing a circle Ramer–Douglas–Peucker algorithm: Given a 'curve' composed of line segments to find a curve not too dissimilar but that has fewer points Shading Gouraud shading: an algorithm to simulate the differing effects of light and colour across the surface of an object in 3D computer graphics Phong shading: an algorithm to interpolate surface normal-vectors for surface shading in 3D computer graphics Slerp (spherical linear interpolation): quaternion interpolation for the purpose of animating 3D rotation Summed area table (also known as an integral image): an algorithm for computing the sum of values in a rectangular subset of a grid in constant time
Graph algorithms
0.844393
740
The charge of an isolated system should be a multiple of the elementary charge e, even if at large scales charge seems to behave as a continuous quantity. In some contexts it is meaningful to speak of fractions of an elementary charge; for example, in the fractional quantum Hall effect. The unit faraday is sometimes used in electrochemistry. One faraday is the magnitude of the charge of one mole of elementary charges, i.e. 9.648533212...×10⁴ C.
Electric charge
0.844239
741
This science comprises three main sub-disciplines: Polymer chemistry or macromolecular chemistry is concerned with the chemical synthesis and chemical properties of polymers. Polymer physics is concerned with the physical properties of polymer materials and engineering applications. Specifically, it seeks to present the mechanical, thermal, electronic and optical properties of polymers with respect to the underlying physics governing a polymer microstructure. Despite originating as an application of statistical physics to chain structures, polymer physics has now evolved into a discipline in its own right. Polymer characterization is concerned with the analysis of chemical structure, morphology, and the determination of physical properties in relation to compositional and structural parameters.
Macromolecular science
0.844238
742
2005 (Chemistry) Robert Grubbs, Richard Schrock, Yves Chauvin for olefin metathesis. 2002 (Chemistry) John Bennett Fenn, Koichi Tanaka, and Kurt Wüthrich for the development of methods for identification and structure analyses of biological macromolecules. 2000 (Chemistry) Alan G. MacDiarmid, Alan J. Heeger, and Hideki Shirakawa for work on conductive polymers, contributing to the advent of molecular electronics. 1991 (Physics) Pierre-Gilles de Gennes for developing a generalized theory of phase transitions with particular applications to describing ordering and phase transitions in polymers. 1974 (Chemistry) Paul J. Flory for contributions to theoretical polymer chemistry. 1963 (Chemistry) Giulio Natta and Karl Ziegler for contributions in polymer synthesis (Ziegler–Natta catalysis). 1953 (Chemistry) Hermann Staudinger for contributions to the understanding of macromolecular chemistry.
Macromolecular science
0.844238
743
Polymer science or macromolecular science is a subfield of materials science concerned with polymers, primarily synthetic polymers such as plastics and elastomers. The field of polymer science includes researchers in multiple disciplines including chemistry, physics, and engineering.
Macromolecular science
0.844238
744
Mark is also recognized as a pioneer in establishing curriculum and pedagogy for the field of polymer science. In 1950, the POLY division of the American Chemical Society was formed, and has since grown to the second-largest division in this association with nearly 8,000 members. Fred W. Billmeyer, Jr., a Professor of Analytical Chemistry had once said that "although the scarcity of education in polymer science is slowly diminishing but it is still evident in many areas. What is most unfortunate is that it appears to exist, not because of a lack of awareness but, rather, a lack of interest."
Macromolecular science
0.844238
745
Multicellularity has evolved independently at least 25 times in eukaryotes, and also in some prokaryotes, like cyanobacteria, myxobacteria, actinomycetes, Magnetoglobus multicellularis or Methanosarcina. However, complex multicellular organisms evolved only in six eukaryotic groups: animals, symbiomycotan fungi, brown algae, red algae, green algae, and land plants. It evolved repeatedly for Chloroplastida (green algae and land plants), once for animals, once for brown algae, three times in the fungi (chytrids, ascomycetes, and basidiomycetes) and perhaps several times for slime molds and red algae. The first evidence of multicellular organization, which is when unicellular organisms coordinate behaviors and may be an evolutionary precursor to true multicellularity, is from cyanobacteria-like organisms that lived 3.0–3.5 billion years ago. To reproduce, true multicellular organisms must solve the problem of regenerating a whole organism from germ cells (i.e., sperm and egg cells), an issue that is studied in evolutionary developmental biology. Animals have evolved a considerable diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants and fungi.
Multicellular organism
0.844223
746
In physics, a charged particle is a particle with an electric charge. It may be an ion, such as a molecule or atom with a surplus or deficit of electrons relative to protons. It can also be an electron or a proton, or another elementary particle, which are all believed to have the same charge (except antimatter). Another charged particle may be an atomic nucleus devoid of electrons, such as an alpha particle.
Charged Particle
0.844179
747
Cell biology Metabolism Genetics Microbiology/immunology
GRE Biology Test
0.844116
748
Since many students who apply to graduate programs in biology do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biology curriculum. A sampling of test item content is given below:
GRE Biology Test
0.844116
749
Ecosystems Behavioral ecology Evolutionary processes History of life
GRE Biology Test
0.844116
750
The GRE subject test in biology was a standardized test in the United States created by the Educational Testing Service, designed to assess a candidate's potential for graduate or post-graduate study in the field of biology. The test was comprehensive and covered—in equal proportions—molecular biology, organismal biology, and ecology and evolution. This exam, like all the GRE subject tests, was paper-based, as opposed to the GRE general test, which is usually computer-based. It contained 194 questions, which were to be answered within 2 hours and 50 minutes. Scores on this exam were required for entrance to some biology Ph.D. programs.
GRE Biology Test
0.844116
751
While DNA, RNA and proteins are all encoded at the genetic level, glycans (sugar polymers) are not encoded directly from the genome and fewer tools are available for their study. Glycobiology is therefore an area of active research for chemical biologists. For example, cells can be supplied with synthetic variants of natural sugars to probe their function. Carolyn Bertozzi's research group has developed methods for site-specifically reacting molecules at the surface of cells via synthetic sugars.
Chemical Biology
0.844071
752
Native chemical ligation involves the coupling of a C-terminal thioester and an N-terminal cysteine residue, ultimately resulting in formation of a "native" amide bond. Other strategies that have been used for the ligation of peptide fragments using the acyl transfer chemistry first introduced with native chemical ligation include expressed protein ligation, sulfurization/desulfurization techniques, and use of removable thiol auxiliaries. Expressed protein ligation allows for the biotechnological installation of a C-terminal thioester using inteins, thereby allowing the appendage of a synthetic N-terminal peptide to the recombinantly-produced C-terminal portion. Both sulfurization/desulfurization techniques and the use of removable thiol auxiliaries involve the installation of a synthetic thiol moiety to carry out the standard native chemical ligation chemistry, followed by removal of the auxiliary/thiol.
Chemical Biology
0.844071
753
Chemical synthesis of proteins is a valuable tool in chemical biology as it allows for the introduction of non-natural amino acids as well as residue specific incorporation of "posttranslational modifications" such as phosphorylation, glycosylation, acetylation, and even ubiquitination. These capabilities are valuable for chemical biologists as non-natural amino acids can be used to probe and alter the functionality of proteins, while post translational modifications are widely known to regulate the structure and activity of proteins. Although strictly biological techniques have been developed to achieve these ends, the chemical synthesis of peptides often has a lower technical and practical barrier to obtaining small amounts of the desired protein. In order to make protein-sized polypeptide chains via the small peptide fragments made by synthesis, chemical biologists use the process of native chemical ligation.
Chemical Biology
0.844071
754
Chemical biology is a scientific discipline between the fields of chemistry and biology. The discipline involves the application of chemical techniques, analysis, and often small molecules produced through synthetic chemistry, to the study and manipulation of biological systems. In contrast to biochemistry, which involves the study of the chemistry of biomolecules and regulation of biochemical pathways within and between cells, chemical biology deals with chemistry applied to biology (synthesis of biomolecules, the simulation of biological systems, etc.).
Chemical Biology
0.844071
755
Chemical biologists work to improve proteomics through the development of enrichment strategies, chemical affinity tags, and new probes. Samples for proteomics often contain many peptide sequences and the sequence of interest may be highly represented or of low abundance, which creates a barrier for their detection. Chemical biology methods can reduce sample complexity by selective enrichment using affinity chromatography. This involves targeting a peptide with a distinguishing feature like a biotin label or a post translational modification. Methods have been developed that include the use of antibodies, lectins to capture glycoproteins, and immobilized metal ions to capture phosphorylated peptides and enzyme substrates to capture select enzymes.
Chemical Biology
0.844071
756
Chemical biologists used automated synthesis of diverse small molecule libraries in order to perform high-throughput analysis of biological processes. Such experiments may lead to discovery of small molecules with antibiotic or chemotherapeutic properties. These combinatorial chemistry approaches are identical to those employed in the discipline of pharmacology.
Chemical Biology
0.844071
757
Some forms of chemical biology attempt to answer biological questions by studying biological systems at the chemical level. In contrast to research using biochemistry, genetics, or molecular biology, where mutagenesis can provide a new version of the organism, cell, or biomolecule of interest, chemical biology probes systems in vitro and in vivo with small molecules that have been designed for a specific purpose or identified on the basis of biochemical or cell-based screening (see chemical genetics). Chemical biology is one of several interdisciplinary sciences that tend to differ from older, reductionist fields and whose goals are to achieve a description of scientific holism. Chemical biology has scientific, historical and philosophical roots in medicinal chemistry, supramolecular chemistry, bioorganic chemistry, pharmacology, genetics, biochemistry, and metabolic engineering.
Chemical Biology
0.844071
758
Expressed protein ligation has proven to be a successful technique for synthetically producing proteins that contain phosphomimetic molecules at either terminus. In addition, researchers have used unnatural amino acid mutagenesis at targeted sites within a peptide sequence. Advances in chemical biology have also improved upon classical techniques for imaging kinase action. For example, the development of peptide biosensors (peptides containing incorporated fluorophores) improved the temporal resolution of in vitro binding assays.
Chemical Biology
0.844071
759
Many research programs are also focused on employing natural biomolecules to perform biological tasks or to support a new chemical method. In this regard, chemical biology researchers have shown that DNA can serve as a template for synthetic chemistry, self-assembling proteins can serve as a structural scaffold for new materials, and RNA can be evolved in vitro to produce new catalytic function. Additionally, heterobifunctional (two-sided) synthetic small molecules such as dimerizers or PROTACs bring two proteins together inside cells, which can synthetically induce important new biological functions such as targeted protein degradation.
Chemical Biology
0.844071
760
Journal of the Royal Society Interface – A cross-disciplinary publication promoting research at the interface between the physical and life sciences. Molecular BioSystems – Chemical biology journal with a particular focus on the interface between chemistry and the -omic sciences and systems biology. Nature Chemical Biology – A monthly multidisciplinary journal providing an international forum for the timely publication of significant new research at the interface between chemistry and biology. Wiley Encyclopedia of Chemical Biology
Chemical Biology
0.844071
761
ACS Chemical Biology – The new Chemical Biology journal from the American Chemical Society. Bioorganic & Medicinal Chemistry – The Tetrahedron Journal for Research at the Interface of Chemistry and Biology ChemBioChem – A European Journal of Chemical Biology Chemical Biology – A point of access to chemical biology news and research from across RSC Publishing Cell Chemical Biology – An interdisciplinary journal that publishes papers of exceptional interest in all areas at the interface between chemistry and biology. chembiol.com Journal of Chemical Biology – A new journal publishing novel work and reviews at the interface between biology and the physical sciences, published by Springer.
Chemical Biology
0.844071
762
Among the most widely used are subjecting DNA to UV radiation or chemical mutagens, error-prone PCR, degenerate codons, or recombination. Once a large library of variants is created, selection or screening techniques are used to find mutants with a desired attribute. Common selection/screening techniques include FACS, mRNA display, phage display, and in vitro compartmentalization. Once useful variants are found, their DNA sequence is amplified and subjected to further rounds of diversification and selection. The development of directed evolution methods was honored in 2018 with the awarding of the Nobel Prize in Chemistry to Frances Arnold for evolution of enzymes, and George Smith and Gregory Winter for phage display.
Chemical Biology
0.844071
763
Click chemistry is well suited to fill this niche, since click reactions are rapid, spontaneous, selective, and high-yielding. Unfortunately, the most famous "click reaction," a cycloaddition between an azide and an acyclic alkyne, is copper-catalyzed, posing a serious problem for use in vivo due to copper's toxicity.
Chemical Biology
0.844071
764
Water- and redox-sensitive reactions would not proceed, reagents prone to nucleophilic attack would offer no chemospecificity, and any reactions with large kinetic barriers would not find enough energy in the relatively low-heat environment of a living cell. Thus, chemists have recently developed a panel of bioorthogonal chemistry that proceeds chemospecifically, despite the milieu of distracting reactive materials in vivo. The coupling of a probe to a molecule of interest must occur within a reasonably short time frame; therefore, the kinetics of the coupling reaction should be highly favorable.
Chemical Biology
0.844071
765
This definition of codimension in terms of the number of functions needed to cut out a subspace extends to situations in which both the ambient space and subspace are infinite dimensional. In other language, which is basic for any kind of intersection theory, we are taking the union of a certain number of constraints. We have two phenomena to look out for: the two sets of constraints may not be independent; the two sets of constraints may not be compatible. The first of these is often expressed as the principle of counting constraints: if we have a number N of parameters to adjust (i.e. we have N degrees of freedom), and a constraint means we have to 'consume' a parameter to satisfy it, then the codimension of the solution set is at most the number of constraints. We do not expect to be able to find a solution if the predicted codimension, i.e. the number of independent constraints, exceeds N (in the linear algebra case, there is always a trivial, null vector solution, which is therefore discounted). The second is a matter of geometry, on the model of parallel lines; it is something that can be discussed for linear problems by methods of linear algebra, and for non-linear problems in projective space, over the complex number field.
Counting the constants
0.844048
766
Height functions allow mathematicians to count objects, such as rational points, that are otherwise infinite in quantity. For instance, the set of rational numbers of naive height (the maximum of the numerator and denominator when expressed in lowest terms) below any given constant is finite despite the set of rational numbers being infinite. In this sense, height functions can be used to prove asymptotic results such as Baker's theorem in transcendental number theory which was proved by Alan Baker (1966, 1967a, 1967b).
Height function
0.844012
767
Classical or naive height is defined in terms of ordinary absolute value on homogeneous coordinates. It is typically a logarithmic scale and therefore can be viewed as being proportional to the "algebraic complexity" or number of bits needed to store a point. It is typically defined to be the logarithm of the maximum absolute value of the vector of coprime integers obtained by multiplying through by a lowest common denominator. This may be used to define height on a point in projective space over Q, or of a polynomial, regarded as a vector of coefficients, or of an algebraic number, from the height of its minimal polynomial. The naive height of a rational number x = p/q (in lowest terms) is given by the multiplicative height H(p/q) = max{|p|, |q|} and the logarithmic height h(p/q) = log H(p/q). Therefore, the naive multiplicative and logarithmic heights of 4/10 are 5 and log(5), for example. The naive height H of an elliptic curve E given by y² = x³ + Ax + B is defined to be H(E) = log max(4|A|³, 27|B|²). (A short computational check of the naive height follows this row.)
Height function
0.844012
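A short computational check of the naive-height examples in the row above, using Python's exact rational arithmetic.

```python
# Naive multiplicative and logarithmic heights of a rational number p/q in lowest terms.
from fractions import Fraction
from math import log

def multiplicative_height(x: Fraction) -> int:
    return max(abs(x.numerator), abs(x.denominator))

def logarithmic_height(x: Fraction) -> float:
    return log(multiplicative_height(x))

x = Fraction(4, 10)                    # automatically reduced to 2/5
assert multiplicative_height(x) == 5
print(logarithmic_height(x))           # log(5) ≈ 1.609
```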
768
In other cases, height functions can distinguish some objects based on their complexity. For instance, the subspace theorem proved by Wolfgang M. Schmidt (1972) demonstrates that points of small height (i.e. small complexity) in projective space lie in a finite number of hyperplanes and generalizes Siegel's theorem on integral points and solution of the S-unit equation. Height functions were crucial to the proofs of the Mordell–Weil theorem and Faltings's theorem by Weil (1929) and Faltings (1983) respectively. Several outstanding unsolved problems about the heights of rational points on algebraic varieties, such as the Manin conjecture and Vojta's conjecture, have far-reaching implications for problems in Diophantine approximation, Diophantine equations, arithmetic geometry, and mathematical logic.
Height function
0.844012
769
A height function is a function that quantifies the complexity of mathematical objects. In Diophantine geometry, height functions quantify the size of solutions to Diophantine equations and are typically functions from a set of points on algebraic varieties (or a set of algebraic varieties) to the real numbers. For instance, the classical or naive height over the rational numbers is typically defined to be the maximum of the numerators and denominators of the coordinates (e.g. 7 for the coordinates (3/7, 1/2)), but in a logarithmic scale.
Height function
0.844012
770
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
GRE Biochemistry, Cell and Molecular Biology Test
0.843967
771
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam, and there were no computer-based versions of it. ETS administered this exam three times per year: once in April, once in October and once in November.
GRE Biochemistry, Cell and Molecular Biology Test
0.843967
772
A. Genetic Foundations Mendelian and non-Mendelian inheritance Transformation, transduction and conjugation Recombination and complementation Mutational analysis Genetic mapping and linkage analysis B. Chromatin and Chromosomes Karyotypes Translocations, inversions, deletions and duplications Aneuploidy and polyploidy Structure Epigenetics C. Genomics Genome structure Physical mapping Repeated DNA and gene families Gene identification Transposable elements Bioinformatics Proteomics Molecular evolution D. Genome Maintenance DNA replication DNA damage and repair DNA modification DNA recombination and gene conversion E. Gene Expression/Recombinant DNA Technology The genetic code Transcription/transcriptional profiling RNA processing Translation F. Gene Regulation Positive and negative control of the operon Promoter recognition by RNA polymerases Attenuation and antitermination Cis-acting regulatory elements Trans-acting regulatory factors Gene rearrangements and amplifications Small non-coding RNA (e.g., siRNA, microRNA) G. Viruses Genome replication and regulation Virus assembly Virus-host interactions H. Methods Restriction maps and PCR Nucleic acid blotting and hybridization DNA cloning in prokaryotes and eukaryotes Sequencing and analysis Protein-nucleic acid interaction Transgenic organisms Microarrays
GRE Biochemistry, Cell and Molecular Biology Test
0.843967
773
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
GRE Biochemistry, Cell and Molecular Biology Test
0.843967
774
Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
GRE Biochemistry, Cell and Molecular Biology Test
0.843967
775
A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g., membranes, ribosomes and multienzyme complexes) C Catalysis and Binding Enzyme reaction mechanisms and kinetics Ligand-protein interaction (e.g., hormone receptors, substrates and effectors, transport proteins and antigen-antibody interactions) D Major Metabolic Pathways Carbon, nitrogen and sulfur assimilation Anabolism Catabolism Synthesis and degradation of macromolecules E Bioenergetics (including respiration and photosynthesis) Energy transformations at the substrate level Electron transport Proton and chemical gradients Energy coupling (e.g., phosphorylation and transport) F Regulation and Integration of Metabolism Covalent modification of enzymes Allosteric regulation Compartmentalization Hormones G Methods Biophysical approaches (e.g., spectroscopy, x-ray crystallography, mass spectroscopy) Isotopes Separation techniques (e.g., centrifugation, chromatography and electrophoresis) Immunotechniques
GRE Biochemistry, Cell and Molecular Biology Test
0.843967
776
Methods of importance to cellular biology, such as fluorescence probes (e.g., FRAP, FRET and GFP) and imaging, will be covered as appropriate within the context of the content below.
A. Cellular Compartments of Prokaryotes and Eukaryotes (Organization, Dynamics and Functions): Cellular membrane systems (e.g., structure and transport across membranes); Nucleus (e.g., envelope and matrix); Mitochondria and chloroplasts (e.g., biogenesis and evolution)
B. Cell Surface and Communication: Extracellular matrix (including cell walls); Cell adhesion and junctions; Signal transduction; Receptor function; Excitable membrane systems
C. Cytoskeleton, Motility and Shape: Regulation of assembly and disassembly of filament systems; Motor function, regulation and diversity
D. Protein Processing, Targeting and Turnover: Translocation across membranes; Posttranslational modification; Intracellular trafficking; Secretion and endocytosis; Protein turnover (e.g., proteasomes, lysosomes, damaged-protein response)
E. Cell Division, Differentiation and Development: Cell cycle, mitosis and cytokinesis; Meiosis and gametogenesis; Fertilization and early embryonic development (including positional information, homeotic genes, tissue-specific expression, nuclear and cytoplasmic interactions, growth factors and induction, environment, stem cells and polarity)
GRE Biochemistry, Cell and Molecular Biology Test
0.843967
777
According to the (purely mathematical) Gauss divergence theorem, the electric flux through the boundary surface ∂Ω can be rewritten as

{\displaystyle \oiint _{\partial \Omega }\mathbf {E} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {E} \,\mathrm {d} V.}

The integral version of Gauss's equation can thus be rewritten as

{\displaystyle \iiint _{\Omega }\left(\nabla \cdot \mathbf {E} -{\frac {\rho }{\varepsilon _{0}}}\right)\mathrm {d} V=0.}

Since Ω is arbitrary (e.g. an arbitrarily small ball with an arbitrary center), this is satisfied if and only if the integrand is zero everywhere, i.e. ∇ ⋅ E = ρ/ε₀. This is the differential formulation of Gauss's equation, up to a trivial rearrangement. Similarly, rewriting the magnetic flux in Gauss's law for magnetism in integral form gives

{\displaystyle \oiint _{\partial \Omega }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {B} \,\mathrm {d} V=0,}

which is satisfied for all Ω if and only if ∇ ⋅ B = 0 everywhere.
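As an illustrative aside (not part of the source text), the following minimal sympy sketch checks the integral Gauss's law directly for a point-charge field: the flux of E = q/(4πε₀R²) r̂ through a sphere of radius R evaluates to q/ε₀, independent of R. The symbol names are arbitrary choices.

```python
# Minimal sympy check (illustrative only): the flux of a point-charge field
# through a sphere of radius R equals q/eps0 for any R, as Gauss's law requires.
import sympy as sp

q, eps0, R, theta, phi = sp.symbols('q epsilon_0 R theta phi', positive=True)

E_r = q / (4 * sp.pi * eps0 * R**2)      # radial field magnitude on the sphere
dS = R**2 * sp.sin(theta)                # surface element factor (times dtheta dphi)
flux = sp.integrate(E_r * dS, (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))
print(sp.simplify(flux))                 # -> q/epsilon_0, with no R dependence
```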
Maxwells equations
0.843913
778
Maxwell's equations in curved spacetime, commonly used in high-energy and gravitational physics, are compatible with general relativity. In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences. The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation. Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics.
Maxwells equations
0.843913
779
The term "Maxwell's equations" is often also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic scalar potentials are preferred for explicitly solving the equations as a boundary value problem, analytical mechanics, or for use in quantum mechanics. The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest.
Maxwells equations
0.843913
780
Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, and electric circuits. The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon.
Maxwells equations
0.843913
781
The equivalence of the differential and integral formulations is a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem.
Maxwells equations
0.843913
782
"Secondary" is a general term used in chemistry that can be applied to many molecules, even more than the ones listed here; the principles seen in these examples can be further applied to other functional group containing molecules. The ones shown above are common molecules seen in many organic reactions. By classifying a molecule as secondary it then be compared with a molecule of primary or tertiary nature to determine the relative reactivity.
Secondary (chemistry)
0.843851
783
Secondary is a term used in organic chemistry to classify various types of compounds (e.g. alcohols, alkyl halides, amines) or reactive intermediates (e.g. alkyl radicals, carbocations). An atom is considered secondary if it has two 'R' groups attached to it. An 'R' group is a carbon-containing group such as a methyl group (CH3). A secondary compound is most often classified on an alpha carbon (middle carbon) or a nitrogen.
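As a concrete illustration of the counting rule above, here is a minimal Python sketch (not from the source; the helper name and the hand-written neighbour lists are illustrative assumptions) that classifies a centre as primary, secondary or tertiary by counting the carbon-containing 'R' groups bonded to it.

```python
# Illustrative sketch: classify a carbon centre by counting attached R groups
# (carbon-containing substituents). The neighbour lists below are hypothetical
# hand-written examples, not data from the source text.
def classify_center(neighbors):
    r_groups = sum(1 for atom in neighbors if atom == 'C')
    names = {0: 'methyl (no R groups)', 1: 'primary', 2: 'secondary', 3: 'tertiary'}
    return names.get(r_groups, 'quaternary')

# Alpha carbon of 2-propanol (CH3-CH(OH)-CH3): bonded to two carbons -> secondary.
print(classify_center(['C', 'C', 'O', 'H']))   # secondary
# Alpha carbon of ethanol (CH3-CH2-OH): bonded to one carbon -> primary.
print(classify_center(['C', 'O', 'H', 'H']))   # primary
```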
Secondary (chemistry)
0.843851
784
It is standard practice in physics to perform blinded data analysis. After data analysis is complete, one is allowed to unblind the data. A prior agreement to publish the data regardless of the results of the analysis may be made to prevent publication bias.
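A minimal sketch of one common blinding scheme, assuming the usual hidden-offset approach; the function names, offset range and numbers are illustrative, not taken from any particular experiment.

```python
# Illustrative blinding sketch: add a secret offset to the measured values so the
# analyst cannot see the true result until the analysis choices are frozen.
import random

def blind(values, seed=12345):
    rng = random.Random(seed)             # the seed/offset is kept hidden from the analyst
    offset = rng.uniform(-1.0, 1.0)
    return [v + offset for v in values], offset

def unblind(blinded_values, offset):
    return [v - offset for v in blinded_values]

measurements = [0.98, 1.02, 1.01, 0.99]
blinded, secret_offset = blind(measurements)
# ... the analysis pipeline is developed and frozen using `blinded` only ...
print(unblind(blinded, secret_offset))
```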
Blinded study
0.843695
785
The unit is most effective at accelerating particle systems, with only a small performance improvement measured for rigid body physics. The Ageia PPU is documented in depth in US patent application #20050075849. Nvidia/Ageia no longer produces dedicated PPUs for hardware-accelerated physics processing, although physics acceleration is now supported on some of their graphics processing units.
Academic PPU research projects
Physics card
0.843695
786
An early academic PPU research project named SPARTA (Simulation of Physics on A Real-Time Architecture) was carried out at Penn State and the University of Georgia. This was a simple FPGA-based PPU limited to two dimensions. The project was later extended into a considerably more advanced ASIC-based system named HELLAS. February 2006 saw the release of the first dedicated PPU, PhysX, from Ageia (which was later acquired by Nvidia).
Physics card
0.843695
787
Being a DSP, however, VU0 is much more dependent on the CPU to do useful work in a game engine, and it is not capable of implementing a full physics API, so it cannot be classed as a PPU. VU0 can also provide additional vertex processing power, though this is more a property of the pathways in the system than of the unit itself. This usage is similar to Havok FX or GPU physics in that an auxiliary unit's general-purpose floating-point power is used to complement the CPU in either graphics or physics roles.
Physics card
0.843695
788
One could argue that the PlayStation 2's VU0, although very different from the PhysX, is an early, limited implementation of a PPU. Conversely, one could describe a PPU to a PS2 programmer as an evolved replacement for VU0. Its feature set and placement within the system are geared toward accelerating game-update tasks, including physics and AI; it can offload such calculations, working off its own instruction stream, whilst the CPU is operating on something else.
Physics card
0.843695
789
The drive toward GPGPU has made GPUs more suitable for the job of a PPU: DX10 added integer data types, a unified shader architecture, and a geometry shader stage, which allows a broader range of algorithms to be implemented. Modern GPUs support compute shaders, which run across an indexed space and don't require any graphical resources, just general-purpose data buffers. Nvidia CUDA provides a little more in the way of inter-thread communication and scratchpad-style workspace associated with the threads. Nonetheless, GPUs are built around a larger number of slower, longer-latency threads, are designed around texture and framebuffer data paths, and have poor branching performance; this distinguishes them from PPUs and Cell as being less well optimized for taking over game-world simulation tasks. The Codeplay Sieve compiler supports the PPU, indicating that the Ageia PhysX chip would be suitable for GPGPU-type tasks. However, Ageia seems unlikely to pursue this market.
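To illustrate the compute-shader execution model described above (a kernel invoked across an indexed space over plain data buffers), here is a small pure-Python sketch; the dispatch helper and the brute-force sphere-overlap workload are illustrative assumptions, not actual GPU, CUDA or PhysX code.

```python
# Illustrative sketch of the compute-shader model: a kernel run once per index
# over flat data buffers. The sphere-overlap test is an arbitrary stand-in for a
# physics broad-phase workload.
import math

def dispatch(kernel, count, *buffers):
    for idx in range(count):              # on a GPU these invocations run in parallel
        kernel(idx, *buffers)

def overlap_kernel(i, centers, radii, flags):
    # mark sphere i (and j) if sphere i overlaps any later sphere j
    for j in range(i + 1, len(centers)):
        if math.dist(centers[i], centers[j]) < radii[i] + radii[j]:
            flags[i] = flags[j] = True

centers = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
radii   = [0.75, 0.75, 0.5]
flags   = [False] * len(centers)
dispatch(overlap_kernel, len(centers), centers, radii, flags)
print(flags)    # [True, True, False]
```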
Physics card
0.843695
790
PCs with the cards already installed were available from system builders such as Alienware, Dell, and Falcon Northwest. In February 2008, after Nvidia bought Ageia Technologies and eventually cut off the ability to process PhysX on the AGEIA PPU and NVIDIA GPUs in systems with active ATi/AMD GPUs, it seemed that PhysX had gone 100% to Nvidia. But in March 2008, Nvidia announced that it would make PhysX an open standard for everyone, so the main graphics-processor manufacturers would have PhysX support in their next-generation graphics cards. Nvidia also announced that PhysX would be available for some of its already-released graphics cards simply by downloading new drivers. See physics engine for a discussion of academic research PPU projects.
Physics card
0.843694
791
ASUS and BFG Technologies bought licenses to manufacture alternate versions of AGEIA's PPU, the PhysX P1 with 128 MB GDDR3:
Multi-core device based on the MIPS architecture with integrated physics acceleration hardware and memory subsystem, with "tons of cores"
125 million transistors
182 mm2 die size
Fabrication process: 130 nm
Peak power consumption: 30 W
Memory: 128 MB GDDR3 RAM with 128-bit interface
32-bit PCI 3.0 (ASUS also made a PCI Express version card)
Sphere collision tests: 530 million per second (maximum capability)
Convex collision tests: 530,000 per second (maximum capability)
Peak instruction bandwidth: 20 billion per second
Physics card
0.843694
792
In vector calculus, a vector potential is a vector field whose curl is a given vector field. This is analogous to a scalar potential, which is a scalar field whose gradient is a given vector field. Formally, given a vector field v, a vector potential is a C² vector field A such that

{\displaystyle \mathbf {v} =\nabla \times \mathbf {A} .}
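As an illustrative check (not from the source), the following sympy sketch computes the curl of a hypothetical vector potential A and confirms that the resulting field is divergence-free, as any curl field must be; the particular A is an arbitrary example.

```python
# Minimal sympy check of the vector-potential relation v = curl(A); the chosen A
# is a hypothetical example, not a field from the text.
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# Hypothetical vector potential A (chosen only for illustration).
A = (x*y*z)*N.i + (y**2*z)*N.j + (x + z**3)*N.k

v = curl(A)              # the field whose vector potential is A
print(v)
print(divergence(v))     # -> 0: a curl field is always divergence-free
```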
Vector potential
0.843667
793
200 BC – the Sieve of Eratosthenes
263 AD – Gaussian elimination described by Liu Hui
628 – Chakravala method described by Brahmagupta
c. 820 – Al-Khawarizmi described algorithms for solving linear equations and quadratic equations in his Algebra; the word algorithm comes from his name
825 – Al-Khawarizmi described the algorism, algorithms for using the Hindu–Arabic numeral system, in his treatise On the Calculation with Hindu Numerals, which was translated into Latin as Algoritmi de numero Indorum, where "Algoritmi", the translator's rendition of the author's name, gave rise to the word algorithm (Latin algorithmus) with a meaning "calculation method"
c. 850 – cryptanalysis and frequency analysis algorithms developed by Al-Kindi (Alkindus) in A Manuscript on Deciphering Cryptographic Messages, which contains algorithms on breaking encryptions and ciphers
c. 1025 – Ibn al-Haytham (Alhazen) was the first mathematician to derive the formula for the sum of the fourth powers, and in turn he developed an algorithm for determining the general formula for the sum of any integral powers, which was fundamental to the development of integral calculus
c. 1400 – Ahmad al-Qalqashandi gives a list of ciphers in his Subh al-a'sha which include both substitution and transposition, and for the first time, a cipher with multiple substitutions for each plaintext letter; he also gives an exposition on and worked example of cryptanalysis, including the use of tables of letter frequencies and sets of letters which cannot occur together in one word
Timeline of algorithms
0.843641
794
1540 – Lodovico Ferrari discovered a method to find the roots of a quartic polynomial
1545 – Gerolamo Cardano published Cardano's method for finding the roots of a cubic polynomial
1614 – John Napier develops a method for performing calculations using logarithms
1671 – Newton–Raphson method developed by Isaac Newton
1690 – Newton–Raphson method independently developed by Joseph Raphson
1706 – John Machin develops a quickly converging inverse-tangent series for π and computes π to 100 decimal places
1768 – Leonhard Euler publishes his method for numerical integration of ordinary differential equations in problem 85 of Institutiones calculi integralis
1789 – Jurij Vega improves Machin's formula and computes π to 140 decimal places
1805 – FFT-like algorithm known by Carl Friedrich Gauss
1842 – Ada Lovelace writes the first algorithm for a computing engine
1903 – A fast Fourier transform algorithm presented by Carle David Tolmé Runge
1918 – Soundex
1926 – Borůvka's algorithm
1926 – Primary decomposition algorithm presented by Grete Hermann
1927 – Hartree–Fock method developed for simulating a quantum many-body system in a stationary state
1934 – Delaunay triangulation developed by Boris Delaunay
1936 – Turing machine, an abstract machine developed by Alan Turing, which together with other work established the modern notion of the algorithm
Timeline of algorithms
0.843641
795
1970 – Dinic's algorithm for computing maximum flow in a flow network by Yefim (Chaim) A. Dinitz
1970 – Knuth–Bendix completion algorithm developed by Donald Knuth and Peter B. Bendix
1970 – BFGS method of the quasi-Newton class
1970 – Needleman–Wunsch algorithm published by Saul B. Needleman and Christian D. Wunsch
1972 – Edmonds–Karp algorithm published by Jack Edmonds and Richard Karp, essentially identical to Dinic's algorithm from 1970
1972 – Graham scan developed by Ronald Graham
1972 – Red–black trees and B-trees discovered
1973 – RSA encryption algorithm discovered by Clifford Cocks
1973 – Jarvis march algorithm developed by R. A. Jarvis
1973 – Hopcroft–Karp algorithm developed by John Hopcroft and Richard Karp
1974 – Pollard's p − 1 algorithm developed by John Pollard
1974 – Quadtree developed by Raphael Finkel and J.L. Bentley
1975 – Genetic algorithms popularized by John Holland
1975 – Pollard's rho algorithm developed by John Pollard
1975 – Aho–Corasick string matching algorithm developed by Alfred V. Aho and Margaret J. Corasick
1975 – Cylindrical algebraic decomposition developed by George E. Collins
1976 – Salamin–Brent algorithm independently discovered by Eugene Salamin and Richard Brent
1976 – Knuth–Morris–Pratt algorithm developed by Donald Knuth and Vaughan Pratt and independently by J. H. Morris
1977 – Boyer–Moore string-search algorithm for searching the occurrence of a string within another string
1977 – RSA encryption algorithm rediscovered by Ron Rivest, Adi Shamir, and Len Adleman
1977 – LZ77 algorithm developed by Abraham Lempel and Jacob Ziv
1977 – multigrid methods developed independently by Achi Brandt and Wolfgang Hackbusch
1978 – LZ78 algorithm developed from LZ77 by Abraham Lempel and Jacob Ziv
1978 – Bruun's algorithm proposed for powers of two by Georg Bruun
1979 – Khachiyan's ellipsoid method developed by Leonid Khachiyan
1979 – ID3 decision tree algorithm developed by Ross Quinlan
Timeline of algorithms
0.843641
796
The constructive dilemma rule may be written in sequent notation:

{\displaystyle (P\to Q),(R\to S),(P\lor R)\vdash (Q\lor S)}

where ⊢ is a metalogical symbol meaning that Q ∨ S is a syntactic consequence of P → Q, R → S, and P ∨ R in some logical system; and expressed as a truth-functional tautology or theorem of propositional logic:

{\displaystyle (((P\to Q)\land (R\to S))\land (P\lor R))\to (Q\lor S)}

where P, Q, R and S are propositions expressed in some formal system.
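As an illustrative check (not part of the source), a brute-force truth-table evaluation in Python confirms that the schema above is a tautology; the helper names are arbitrary.

```python
# Brute-force verification that constructive dilemma is a truth-functional
# tautology: ((P -> Q) and (R -> S) and (P or R)) -> (Q or S) holds for every
# assignment of truth values to P, Q, R, S.
from itertools import product

def implies(a, b):
    return (not a) or b

assert all(
    implies(implies(p, q) and implies(r, s) and (p or r), q or s)
    for p, q, r, s in product([False, True], repeat=4)
)
print("constructive dilemma is a tautology")
```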
Constructive dilemma
0.843625
797
In 3D rendering, access patterns for texture mapping and rasterization of small primitives (with arbitrary distortions of complex surfaces) are far from linear, but can still exhibit spatial locality (e.g., in screen space or texture space). This can be turned into good memory locality via some combination of Morton order and tiling for texture maps and frame buffer data (mapping spatial regions onto cache lines), or by sorting primitives via tile-based deferred rendering. It can also be advantageous to store matrices in Morton order in linear algebra libraries.
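For illustration (not from the source), here is a minimal Python sketch of 2D Morton (Z-order) encoding by bit interleaving, the mapping of spatial regions onto nearby linear addresses that the text refers to; the function name and bit width are arbitrary.

```python
# Minimal sketch of 2D Morton-order (Z-order) encoding: interleave the bits of
# x and y so that nearby (x, y) cells tend to land on nearby linear addresses.
def morton_encode_2d(x: int, y: int, bits: int = 16) -> int:
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # even bit positions come from x
        code |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions come from y
    return code

# Example: linearize a small 4x4 tile of texture coordinates.
print([morton_encode_2d(x, y, bits=4) for y in range(4) for x in range(4)])
```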
Memory access pattern
0.84361
798
Strided or simple 2D or 3D access patterns (e.g., stepping through multi-dimensional arrays) are similarly easy to predict, and are found in implementations of linear algebra algorithms and image processing. Loop tiling is an effective approach. Some systems with DMA provided a strided mode for transferring data between subtiles of larger 2D arrays and scratchpad memory.
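As an illustration of loop tiling (a sketch, not from the source), the following Python function traverses a 2D array in fixed-size tiles, here applied to a blocked transpose; the block size and helper name are arbitrary choices.

```python
# Loop-tiling sketch: traverse a 2D array in BLOCK x BLOCK tiles so each tile
# stays cache-resident, illustrated with a blocked transpose. numpy only holds
# the data; the tiling is in the explicit loops.
import numpy as np

def transpose_tiled(a: np.ndarray, block: int = 32) -> np.ndarray:
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i0 in range(0, n, block):               # iterate over tile origins
        for j0 in range(0, m, block):
            i1, j1 = min(i0 + block, n), min(j0 + block, m)
            out[j0:j1, i0:i1] = a[i0:i1, j0:j1].T   # copy one tile at a time
    return out

a = np.arange(12).reshape(3, 4)
assert np.array_equal(transpose_tiled(a, block=2), a.T)
```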
Memory access pattern
0.84361
799
Nearest neighbor memory access patterns appear in simulation, and are related to sequential or strided patterns. An algorithm may traverse a data structure using information from the nearest neighbors of a data element (in one or more dimensions) to perform a calculation. These are common in physics simulations operating on grids. Nearest neighbor can also refer to inter-node communication in a cluster; physics simulations which rely on such local access patterns can be parallelized with the data partitioned into cluster nodes, with purely nearest-neighbor communication between them, which may have advantages for latency and communication bandwidth. This use case maps well onto torus network topology.
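For illustration (not from the source), here is a minimal numpy sketch of a nearest-neighbour stencil update on a 2D grid, the kind of access pattern described above; the grid size and boundary values are arbitrary.

```python
# Nearest-neighbour access pattern sketch: one Jacobi-style update on a 2D grid,
# where each interior cell reads only its four immediate neighbours -- the kind
# of stencil common in grid-based physics simulations.
import numpy as np

def jacobi_step(grid: np.ndarray) -> np.ndarray:
    new = grid.copy()
    # interior cells: average of the north, south, west and east neighbours
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

g = np.zeros((6, 6))
g[0, :] = 1.0            # an arbitrary fixed hot boundary row
print(jacobi_step(g))
```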
Memory access pattern
0.84361