The standard SST model requires calculating assets' and liabilities' first- and second-order sensitivities towards market factors. These are known as Deltas and Gammas; they need to be calculated for each asset and liability (of a replicating portfolio) individually and aggregated according to their weights in the balance sheet. Computationally this is quite a demanding task: there are typically more than 1000 assets, and for the $82$ defined SST factors we have to attach to each asset and liability a vector of $82$ elements and a matrix of $82\times 82$ elements. This page describes the basic idea of our approach to handling a large dataset that is relevant for SST calculation and balance sheet optimisation.

For each bond we need to estimate a yield curve that incorporates the bond's credit risk (we use Nelson-Siegel, Svensson, spline and polynomial fitting according to an algorithm described in our white paper). We then calculate key rate durations (Deltas): for each single maturity from $1$ to $5$ years we calculate the first-order factor sensitivity, taking into account the SST logic, i.e. how shocks (absolute and logarithmic) are defined per factor. For a corporate bond we calculate each Delta element as illustrated on the right, with the second element as an example. For details on the calculation of Deltas and Gammas see our Technical Paper on the implementation of the SST standard market model.

Let us assume we want to analyse all constituents of the SBI with respect to their contributions to an insurer's SST ratio. The figure on the right shows the calculations needed as input for the SST standard market model. Calculation of assets' and liabilities' sensitivities is therefore a task that needs to be handled efficiently. Note that each element of the Delta vector requires calling the pricing function $2$ times, while Gamma calculations typically require pricing a bond $4$ times. For the calculation of Gammas alone, the bond pricing functions need to be called over $150$ thousand times.

Our object-oriented system stores the Deltas and Gammas of all loaded assets and of the liabilities modelled by a replicating portfolio. This allows for high flexibility with respect to various analyses and optimisation methods. See the example below: all calculations were done and stored for each element of the balance sheet (balance_sheet object). Now assets' and liabilities' Deltas (and of course the joint result too) can be called and plotted with a single line of code. The same applies for Gammas, Scenarios and other features relevant under the SST's standard market model.
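To make the finite-difference logic concrete, here is a minimal Python sketch (not the production system described above; the function names, the flat toy yield curve and the bump size are illustrative assumptions). It computes key-rate Deltas and the diagonal of the Gamma matrix for a single bond by central differences, using two pricing calls per Delta element and reusing them, together with the base price, for the diagonal Gamma; the mixed second derivatives would need the additional bumped pricings mentioned above.

```python
import numpy as np

def bond_price(zero_rates, cash_flow_times, cash_flows):
    """Price a bond by discounting its cash flows with the given zero rates.
    zero_rates[i] is the continuously compounded rate for cash_flow_times[i]."""
    discount_factors = np.exp(-zero_rates * cash_flow_times)
    return float(np.dot(cash_flows, discount_factors))

def delta_gamma(zero_rates, cash_flow_times, cash_flows, bump=1e-4):
    """Central finite differences: each Delta element needs 2 pricing calls;
    the diagonal Gamma reuses them plus the base price.  Off-diagonal (mixed)
    Gammas would need 4 additional bumped pricings per pair of factors."""
    n = len(zero_rates)
    base = bond_price(zero_rates, cash_flow_times, cash_flows)
    delta = np.zeros(n)
    gamma_diag = np.zeros(n)
    for k in range(n):
        up, down = zero_rates.copy(), zero_rates.copy()
        up[k] += bump
        down[k] -= bump
        p_up = bond_price(up, cash_flow_times, cash_flows)
        p_down = bond_price(down, cash_flow_times, cash_flows)
        delta[k] = (p_up - p_down) / (2 * bump)
        gamma_diag[k] = (p_up - 2 * base + p_down) / bump ** 2
    return delta, gamma_diag

# toy 5-year corporate bond paying a 3% annual coupon, flat 2% curve
times = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
flows = np.array([3.0, 3.0, 3.0, 3.0, 103.0])
rates = np.full(5, 0.02)
d, g = delta_gamma(rates, times, flows)
print("Deltas:", d)
print("Gamma diagonal:", g)
```

Looping this over a thousand instruments and all factor pairs gives an idea of why the pricing function ends up being called hundreds of thousands of times.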
CommonCrawl
What does the leading turnstile operator mean? I know that different authors use different notation to represent programming language semantics. As a matter of fact, Guy Steele addresses this problem in an interesting video. Can someone help me understand? Thanks.

On the left of the turnstile you find the local context, a finite list of assumptions on the types of the variables at hand. Above, $n$ can be zero, resulting in $\vdash e:T$. This means that no assumptions on variables are made. Usually, this means that $e$ is a closed term (without any free variables) having type $T$. Often, the rule you mention is written in a more general form, where there can be more hypotheses than the one mentioned in the question. Here, $\Gamma$ represents any context, and $\Gamma, x:T_1$ represents its extension obtained by appending the additional hypothesis $x:T_1$ to the list $\Gamma$. It is common to require that $x$ did not appear in $\Gamma$, so that the extension does not "conflict" with a previous assumption.

As a complement to the other answers, note that there are three levels of "implication" in typing derivations. The same remark holds for logical derivations, since there is actually a correspondence between the two (called the Curry-Howard correspondence). The first implication is the arrow that appears in formulas, and it corresponds to logical implication in a formula (or a function type for the $\lambda$-calculus). The second implication is materialized by the turnstile symbol, and means "assuming every formula on the left, the formula on the right holds". In particular, the rule you give tells how one should prove an implication: to prove $A \Rightarrow B$, one must prove $B$ under the assumption that $A$ holds. In terms of the $\lambda$-calculus, to prove that $\lambda x.t$ has type $A \to B$, one must show that $t$ has type $B$, assuming that $x$ is a variable of type $A$ (see the correspondence?). The third level of implication is materialized by the horizontal bar, and means "if every premise (the elements at the top) holds, then the conclusion (the element at the bottom) holds". You can link that to the interpretation of the typing rule for $\lambda$-abstraction given in the previous paragraph.

In type checking systems, the turnstile ($\vdash$) represents the ternary relation over type environments, expressions and types: ${\vdash} \subseteq \texttt{Env} \times \texttt{Exp} \times \texttt{Typ}$. Note that the operator keeps this meaning regardless of where it appears, whether in the premise or in the conclusion of a rule.

In every situation that I've seen, $X\vdash Y$ means that there is a proof of $Y$ assuming that $X$ holds. If $X$ is empty, that means that $Y$ is a tautology: it has a proof without needing any assumptions.
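To see the three pieces of a judgment — a context $\Gamma$, a term and a type — working together, here is a minimal Python sketch of a checker for the simply typed $\lambda$-calculus (all class and function names are illustrative, not from any particular library). The `Lam` case implements exactly the rule discussed above: extend $\Gamma$ with $x:T_1$, check the body, and return the arrow type.

```python
from dataclasses import dataclass

# Types: either a base type or an arrow (function) type T1 -> T2
@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arrow:
    arg: object
    res: object

# Terms: variables, lambda-abstractions (with annotated argument type), applications
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    var_type: object
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def typecheck(ctx, term):
    """ctx plays the role of Gamma: a dict of assumptions x : T.
    Returns T such that ctx |- term : T, or raises TypeError."""
    if isinstance(term, Var):
        return ctx[term.name]                      # look the assumption up in Gamma
    if isinstance(term, Lam):
        # Gamma, x:T1 |- body : T2   gives   Gamma |- \x.body : T1 -> T2
        extended = {**ctx, term.var: term.var_type}
        return Arrow(term.var_type, typecheck(extended, term.body))
    if isinstance(term, App):
        fun_type = typecheck(ctx, term.fun)
        if isinstance(fun_type, Arrow) and typecheck(ctx, term.arg) == fun_type.arg:
            return fun_type.res
        raise TypeError("ill-typed application")
    raise TypeError("unknown term")

# |- \x:B. x  :  B -> B   (empty context: a closed term)
print(typecheck({}, Lam("x", Base("B"), Var("x"))))
```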
CommonCrawl
We define an orientation on the edges of a noncrossing tree induced by the labels: for a noncrossing tree (i.e., the edges do not cross) with vertices $1,2,\ldots,n$ arranged on a circle in this order, all edges are oriented towards the vertex whose label is higher. The main purpose of this paper is to study the distribution of noncrossing trees with respect to the indegree and outdegree sequence determined by this orientation. In particular, an explicit formula for the number of noncrossing trees with given indegree and outdegree sequence is proved and several corollaries are deduced from it. Sources (vertices of indegree $0$) and sinks (vertices of outdegree $0$) play a special role in this context. In particular, it turns out that noncrossing trees with a given number of sources and sinks correspond bijectively to ternary trees with a given number of middle- and right-edges, and an explicit bijection is provided for this fact. Finally, the in- and outdegree distribution of a single vertex is considered and explicit counting formulas are provided again.
CommonCrawl
This paper deals with an interpolation problem in the open unit disc $\mathbb D$ of the complex plane. We characterize the sequences in a Stolz angle of $\mathbb D$ on which every bounded sequence of values can be interpolated by a certain class of unbounded holomorphic functions on $\mathbb D$ that are, nevertheless, very close to the bounded ones. We prove that these interpolating sequences are also uniformly separated, as in the case of interpolation by bounded holomorphic functions.
CommonCrawl
We report our recent lattice calculation of the hadronic light-by-light contribution to the muon $g-2$ using our recently developed moment method. The connected diagrams and the leading disconnected diagrams are included. The calculation is performed on a $48^3 \times 96$ lattice with physical pion mass and a 5.5 fm box size. We expect sizable finite-volume and finite-lattice-spacing corrections to the results of these calculations, which will be estimated in calculations to be carried out over the next 1-2 years.
CommonCrawl
In quantum computing, and specifically the quantum circuit model of computation, a quantum logic gate (or simply quantum gate) is a basic quantum circuit operating on a small number of qubits. Quantum gates are the building blocks of quantum circuits, like classical logic gates are for conventional digital circuits. Unlike many classical logic gates, quantum logic gates are reversible. However, it is possible to perform classical computing using only reversible gates. For example, the reversible Toffoli gate can implement all Boolean functions, often at the cost of having to use ancillary bits. The Toffoli gate has a direct quantum equivalent, showing that quantum circuits can perform all operations performed by classical circuits.

The state of $n$ qubits is a unit vector in a space of $2^n$ complex dimensions. The base vectors are the possible outcomes if measured, and a quantum state is a linear combination of these outcomes. The most common quantum gates operate on spaces of one or two qubits, just like the common classical logic gates operate on one or two bits.

The Hadamard gate is the one-qubit version of the quantum Fourier transform. Since $HH^\dagger = I$, where $I$ is the identity matrix, $H$ is indeed a unitary matrix. The square root of NOT gate satisfies $(\sqrt{\mathrm{NOT}})^2 = \mathrm{NOT}$, so this gate is a square root of the NOT gate. The sqrt(swap) gate performs half of a two-qubit swap. It is universal in the sense that any many-qubit gate can be constructed from only sqrt(swap) and single-qubit gates. The sqrt(swap) gate is not, however, maximally entangling; more than one application of it is required to produce a Bell state from product states. Given a gate $U$ that operates on a single qubit, the controlled-U gate is a gate that operates on two qubits in such a way that the first qubit serves as a control. It maps the basis states as follows. The CNOT gate is generally used in quantum computing to generate entangled states. The Fredkin gate (also CSWAP or cS gate), named after Edward Fredkin, is a 3-bit gate that performs a controlled swap. It is universal for classical computation. It has the useful property that the numbers of 0s and 1s are conserved throughout, which in the billiard ball model means the same number of balls are output as input. Unfortunately, a working Deutsch gate (a three-qubit generalization of the Toffoli gate) has remained out of reach, due to lack of a protocol. However, a method was proposed to realize such a Deutsch gate with dipole-dipole interaction in neutral atoms.

Certain two-qubit gates are universal and can be transformed into each other. Informally, a set of universal quantum gates is any set of gates to which any operation possible on a quantum computer can be reduced, that is, any other unitary operation can be expressed as a finite sequence of gates from the set. Technically, this is impossible, since the number of possible quantum gates is uncountable, whereas the number of finite sequences from a finite set is countable. To solve this problem, we only require that any quantum operation can be approximated by a sequence of gates from this finite set. Moreover, for unitaries on a constant number of qubits, the Solovay–Kitaev theorem guarantees that this can be done efficiently. In particular, the Toffoli gate can be built from gates in such a universal set, thus showing that all classical logic operations can be performed on a universal quantum computer.

(Circuit representation of measurement: the double line on the right-hand side represents a classical bit, the single line on the left-hand side represents a qubit.) Measurement is irreversible and therefore not a quantum gate, because it assigns the observed variable to a single definite value.
Measurement takes a quantum state and projects it onto one of the base vectors, with a likelihood equal to the square of the magnitude of the state's component (its amplitude) along that base vector. This is a non-reversible operation, as it sets the quantum state equal to the base vector that represents the measured state (the state "collapses" to a definite singular value). Why and how this is so is called the measurement problem.

If two different quantum registers are entangled (they cannot be expressed as a tensor product), measurement of one register affects or reveals the state of the other register by partially or entirely collapsing its state too. An example of such a linearly inseparable state is the EPR pair, which can be constructed with the CNOT and the Hadamard gates (described above). This effect is used in many algorithms: if two variables A and B are maximally entangled (the Bell state is the simplest example of this), a function F is applied to A such that A is updated to the value of F(A), followed by measurement of A; then B will, when measured, be a value such that F(B) = A. This way, measurement of one register can be used to assign properties to other registers. This effect of assignment is used in Shor's algorithm. The algorithm uses two measurements on two registers with entangled copies of a single value that is in a superposition; the first measurement is used to obtain a modular exponentiation and to eliminate all values that do not correspond to this modular exponentiation in the other register. This other register is then fed through a quantum Fourier transform and then measured to reveal the period, which concludes the algorithm. The order in which the measurements are performed can be reversed, or they can be concurrently interleaved, without affecting the result, since the measurement's assignment of one register will limit the value-space of the other entangled register. This type of value-assignment in theory occurs instantaneously over any distance, and this has, as of 2018, been experimentally verified for distances of up to 1200 kilometers. That the phenomenon appears to violate the speed of light is called the EPR paradox, and it is an open question in physics how to resolve this. Originally it was addressed by giving up the assumption of local realism, but other interpretations have also emerged. For more information see the Bell test experiments.

If two or more qubits are viewed as a single quantum state, this combined state is equal to the tensor product of the constituent qubits. An entangled state is any state that cannot be tensor-factorized (the state cannot be separated into its constituent qubits). The CNOT, Ising and Toffoli gates are examples of gates that act on states constructed of multiple qubits. The identity gate is a representation of the gate that maps every state to itself (i.e., does nothing at all). In a circuit diagram the identity gate or matrix will appear as just a wire. Because the matrices involved have dimension $2^n \times 2^n$, where $n$ is the number of qubits the gates act on, it is believed to be intractable to simulate large quantum systems using classical computers.

Why Do Multiple Hadamard Gates Return to Base State?
Reading through IBM's intro to quantum computing and gates, I'm confused about the Hadamard gate. When you use an H gate it appears to rotate the qubit about the X axis by pi/4, putting it in a superposition. If my base state was |0> and I pass it through an H gate it will be put into |>. Another H gate will put it back into |0>. If my base state was |1> then the same happens, where two H gates will return to |1>. I'd expect two H gates to be the same as a NOT gate and flip |0> to |1>. I see what is happening but I don't know why. Why don't two H gates turn |0> into |1>?

What do you mean by "|>"? If by rotations you refer to the Bloch sphere representation, the Hadamard gate is a rotation of $\pi$ about the vector $\hat{x}+\hat{z}$, not $\pi/4$ about the vector $\hat{x}$ as you stated. In a nutshell, there are multiple ways of being in a superposition of |0> and |1>. In the graphic representation, all states "reflect" across the same dotted line. When reflected twice, they go back where they came from. You can see that |0> and |1> are reflected to different places, but that both places have equal probability of measuring in each state.

I'm trying to teach myself about quantum computing, and I have a decent-ish understanding of linear algebra. I got through the NOT gate, which wasn't too bad, but then I got to the Hadamard gate. And I got stuck. Mainly because while I "understand" the manipulations, I don't understand what they really do or why you'd want to do them, if that makes sense. For example, when the Hadamard gate takes in $|0\rangle$ it gives $\frac{|0\rangle + |1\rangle}{\sqrt2}$. What does this mean? For the NOT gate, it takes in $|0\rangle$ and gives $|1\rangle$. Nothing unclear about that; it gives the "opposite" of the bit (for a superposition, it takes in $\alpha|0\rangle+\beta|1\rangle$ and gives $\beta|0\rangle + \alpha|1\rangle$), and I understand why that is useful, for the same reasons (basically) that it is useful in a classical computer. But what (for example) is the Hadamard gate doing geometrically to a vector $\begin{bmatrix}\alpha \\ \beta \end{bmatrix}$? And why is this useful?
The Hadamard gate might be your first encounter with superposition creation. When you say you can relate the usefulness of the Pauli $X$ gate (a.k.a. NOT) to its classical counterpart -- well, Hadamard is exactly where you leave the realm of classical analogues, then. It is useful for exactly the same reason, however, namely that it is often used to form a universal set of gates (like classical AND with NOT and fan-out, or NOR with fan-out alone). Apply a Hadamard gate to each of $n$ qubits prepared in $|0\rangle$ and you obtain an equal superposition of all $2^n$ basis states. Voilà, we can now evaluate functions on $2^n$ different inputs in parallel! This is, for example, the first step in Grover's algorithm. Similarly, a Hadamard gate followed by CNOT gates turns $|000\rangle$ into $(|000\rangle + |111\rangle)/\sqrt{2}$, which is known as the GHZ state, also immensely useful. Last but not least, it's a quite useful basis transform that is self-reversible. So another Hadamard gate undoes, in a sense, what a previous application did ($H^2 = I$). You can experiment with what happens if you use it to "sandwich" other operations, for example put one on the target qubit of a CNOT gate and another after it. Or on both of the qubits (for a total of 4 Hadamards). Try it yourself and you'll certainly learn a lot about quantum computation!

Re "what is the Hadamard gate doing geometrically to a vector": read up on the Bloch sphere, you're going to hear about it everywhere. In this representation, a Hadamard gate does a 180° rotation about a certain slanted axis. The Pauli gates (NOT being one out of three) also do 180° rotations, but only about $x$ or $y$ or $z$. Because such geometrical operations are quite restricted, these gates alone can't really do much. (Indeed, if you restrict yourself to those and a CNOT in your quantum computer, you just build a very expensive and ineffective classical device.) Rotating about something tilted is important, and one more ingredient you usually need is rotating by a smaller fraction of the angle, like 45° (as in the phase shift gate).

The Hadamard gate operates on a single qubit. The state of a single qubit can be described as $\alpha \left|0\right\rangle + \beta \left|1\right\rangle$, where $|\alpha|^2 + |\beta|^2 = 1$. If you measure the qubit, the output is $0$ with probability $|\alpha|^2$, and $1$ with probability $|\beta|^2$. From a linear-algebraic perspective, the state of a qubit is just a unit-norm vector of length two over the complex numbers. The two vectors $\left|0\right\rangle,\left|1\right\rangle$ span a vector space of dimension two (over the complex numbers), and every unit-norm vector in that vector space can be the state of a qubit. Since the state always has unit norm, the only linear operators possible on qubits are those that preserve norms. From linear algebra, we know that these are exactly the unitary operators. To describe an operator, it suffices to describe its effect on a basis. For example, the value of your operator on the vector $\left| 0 \right\rangle$ is $\frac{\left|0\right\rangle + \left|1\right\rangle}{\sqrt2}$. According to Wikipedia, the Hadamard gate is used to form a "random input". If applied to a constant qubit (i.e., $\left| 0 \right\rangle$, $\left| 1 \right\rangle$, or a rotation of these by a unit-norm complex number), the Hadamard gate forms a "uniformly random" qubit, which when measured behaves like a fair coin toss. This is the kind of behavior we want when "trying all possibilities in parallel". I suggest you continue your reading on quantum computation; when you get to quantum algorithms (like Grover's and Shor's), you will understand what all these quantum gates are useful for.
"unit norm vector of length two" was confusing to me because I'm used to using norm and length interchangeably. Not the answer you're looking for? Browse other questions tagged logic quantum-computing or ask your own question . How to apply a 1-qubit gate to a single qubit from an entangled pair? Does a Hadamard Gate have uses outside of pure and evenly mixed states? What is the name of this quantum gate? Why can't a qbit be both entangled and in a pure state? Dispute grade penalty for reading in class? Why is Carlsen being praised for his tie-break play, when Caruana made several game-losing moves? Is Earth an inertial reference frame? How do I know if I'm abstracting graphics APIs too tightly? Why can't we see images reflected on a piece of paper? How did Skylab's electrographic camera work? Why do immortals not use any neck armour in Highlander (1986)? Why don't commercial airplanes carry Earth-observing instruments? How do I prevent "s from turning into ß with babel?
CommonCrawl
If you pick $n$ points uniformly at random from the surface of a $d$-dimensional sphere of radius $r>1$ with center at the origin, what is the probability that the convex hull of these points contains the unit ball (of dimension $d$) centered at the origin?

...for $\ell_2^d$ to well-embed into $\ell_\infty^m$ the dimension $m$ must be exponential in $d$... $m$ is at least $\exp(d/(64r^2))$... Saying that the convex symmetric hull of $n$ points can contain the unit ball is the same as saying that $\ell_2^d$ embeds into $\ell_\infty^m$ with distortion at most $r$.

These two statements somehow seem a little conflicting. The first statement says that you need exponentially many points in $d$ to make the convex hull include an $\ell_2$ ball, while the second says only polynomially many would make the convex hull $K_m$ include an $\ell_2$ ball with high probability. My understanding is that, if we write $\Gamma$ for the embedding, in the first statement the row vectors of $\Gamma$ are picked on a sphere, which is bounded, so it is harder to contain an $\ell_2$ ball, while in the second statement we sample rows according to a Gaussian, which is unbounded, so it is easier to contain a ball. Note that in both statements we can control how large the included ball is. If I understand correctly, Theorem 1.1 of "Asymptotic shape of a random polytope in a convex body" says that if each row of $\Gamma$ is picked uniformly in a ball, and if $m$ is only polynomial in $d$, then the best we can say is that with high probability $K_m$ contains an $\ell_2$ ball (the paper gives inclusion of the $L_q$ centroid body $Z_q$, but a scaled $\ell_2$ ball is included in $Z_q$ if $q>2$), but we can't say how large that ball is unless we make $m$ exponential in $d$, as stated by the first statement.
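For small dimensions one can experiment with this probability directly. Below is a minimal Python sketch (an illustration of the setup, not of the embedding argument; the radius, sample sizes and seed are arbitrary choices) that samples points uniformly on a sphere of radius $r$ and tests containment of the unit ball exactly, using the facet description returned by Qhull: the hull contains the unit ball iff every facet hyperplane lies at distance at least $1$ from the origin. The cost of the hull computation grows quickly with $d$, so this is only practical for small $d$.

```python
import numpy as np
from scipy.spatial import ConvexHull

def contains_unit_ball(points):
    """True iff the convex hull of `points` contains the unit ball centred at 0.
    Qhull returns each facet as (outward unit normal, offset) with n.x + offset <= 0
    for interior points, so the hull contains the unit ball iff every facet plane is
    at distance >= 1 from the origin, i.e. offset <= -1 for every facet."""
    hull = ConvexHull(points)
    return bool(np.all(hull.equations[:, -1] <= -1.0))

def sample_sphere(n, d, r, rng):
    """n points uniform on the sphere of radius r in R^d."""
    x = rng.standard_normal((n, d))
    return r * x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
d, r, n, trials = 3, 2.0, 40, 200
hits = sum(contains_unit_ball(sample_sphere(n, d, r, rng)) for _ in range(trials))
print(f"estimated probability: {hits / trials:.2f}")
```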
CommonCrawl
Group Schemes with $\mathbb F_q$-Action (Feb 07 2015). We prove an equivalence of categories, generalizing the equivalence between commutative flat group schemes in characteristic $p$ with trivial Verschiebung and Dieudonn\'e $\mathbb F_p$-modules to group schemes with an action of $\mathbb F_q$.

Solutions of the Einstein-Dirac Equation on Riemannian 3-Manifolds with Constant Scalar Curvature (Feb 22 2000). This paper contains a classification of all 3-dimensional manifolds with constant scalar curvature $S \not= 0$ that carry a non-trivial solution of the Einstein-Dirac equation.

A new property of the Alexander polynomial (Mar 10 2006; revised Jun 21 2006). This paper has been withdrawn because the result turns out to be trivial.

Models for the n-Swiss Cheese operads (Jun 23 2015). We describe combinatorial Hopf (co-)operadic models for the Swiss Cheese operads built from Feynman diagrams. This extends previous work of Kontsevich and Lambrechts-Voli\'c for the little disks operads.

Parshin's conjecture revisited (Jan 23 2007; revised May 21 2008). We show that Parshin's conjecture on the vanishing of rational higher K-groups of smooth, projective varieties over finite fields can be thought of as a combination of three weaker conjectures.

Inclusive Decays of Heavy Flavours (Mar 08 1995). Recent progress in the theoretical description of inclusive heavy flavour decays is reviewed. After an outline of the theoretical methods, applications to total decay rates and semileptonic decay spectra are presented.
CommonCrawl
I am an econ master's student and I need a bit of help understanding an issue for my thesis. I have two related questions.

If the elasticity of supply is infinity (competitive market), there is no wedge between MP and wages. If the elasticity is below infinity (for example 2), there is a positive wedge (the term is 1.5), so workers are paid LESS than their competitive wage. This is counterintuitive. Why can the firm pay LESS than the competitive wage, if workers are the ones with the power? This is question 1. It is also strange that if supply is perfectly inelastic ($\eta=0$), the term is infinity, which means the competitive wage is zero? I am lost.

Now to question 2. Here I am very, very lost. In the case of competitive markets ($\epsilon=-\infty$), workers are paid their MP. If the elasticity is between negative infinity and -1, the wedge is positive. However, if the elasticity is $\epsilon=-1$, the wedge is infinity. What does this mean? What is so special about $\epsilon=-1$? Even worse, if the elasticity is between 0 and -1, the wedge is negative!! What the heck does this mean? Thanks for taking the time to read this question. If you have any further questions, let me know.

You are right that for $|\epsilon|\leq1$ you would have issues with the wedge. However, this cannot occur in equilibrium, so it is not a problem. Both your questions stem from the same issue. I will illustrate it for monopolists, but it is analogous for monopsonists.

First, note that elasticity is not constant (except in the rare case of a constant-elasticity demand function). The elasticity of demand varies across the points on the demand function. In some regions $|\epsilon|>1$ and in others $|\epsilon|\leq1$. Second, note that the monopolist chooses the point on the demand curve at which she operates, unlike under perfect competition. It is a well-known result that monopolists optimally choose to operate only in the elastic region of demand. This is because, if they are in an inelastic region, then in response to a price increase, demand is reduced proportionally less than the price increase. In such a case, the monopolist can increase its profits, so that cannot be an optimal price. The profit-maximizing price chosen by the monopolist is always in the elastic region of the demand curve. This means that $|\epsilon|>1$ always holds for a profit-maximizing monopolist and the wedge is always positive.
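A quick numerical sketch may help with the limiting cases. It assumes the textbook first-order conditions (these formulas are my assumption about the model behind the question, not taken from the question itself): for a monopsonist, $MRP = w\,(1 + 1/\eta)$, so the wedge term is $1 + 1/\eta$; for a monopolist, $p = MC/(1 + 1/\epsilon)$. Evaluating these for a few elasticities shows why $\eta \to 0$ and $\epsilon \to -1$ blow up, and why $|\epsilon| \le 1$ never arises at an optimum.

```python
import numpy as np

def monopsony_factor(eta):
    """Markdown factor under the textbook monopsony condition MRP = w * (1 + 1/eta):
    the wage equals MRP / (1 + 1/eta)."""
    return 1.0 + 1.0 / eta

def monopoly_factor(eps):
    """Markup factor under the textbook monopoly rule p = MC / (1 + 1/eps)."""
    return 1.0 / (1.0 + 1.0 / eps)

for eta in [np.inf, 10.0, 2.0, 0.5, 1e-9]:
    print(f"supply elasticity {eta:>8}: MRP/w = {monopsony_factor(eta):.3f}")
# eta = inf  -> 1.000 (competitive: wage equals MRP)
# eta -> 0   -> the factor blows up: the condition only makes sense for eta > 0

for eps in [-np.inf, -5.0, -2.0, -1.000001, -0.5]:
    print(f"demand elasticity {eps:>10}: p/MC = {monopoly_factor(eps):.3f}")
# eps = -inf -> 1.000 (perfect competition)
# eps -> -1  -> the markup explodes; for -1 < eps < 0 it turns negative,
#               which is exactly why |eps| <= 1 cannot hold at the optimum
```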
CommonCrawl
Gestalt principle of common fate [34, 47]: pixels that move together tend to belong together. The ability to parse static scenes improves over time, suggesting that while motion-based grouping appears early, static grouping is acquired later, possibly bootstrapped by motion cues. A CNN is used to segment the object: pixels belonging to the object are set to 1, all others to 0. It takes a $w \times w$ image as input and outputs an $s \times s$ mask. The key idea behind motion segmentation is that if there is a single object moving with respect to the background through the entire video, then pixels on the object will move differently from pixels on the background. Per-pixel saliency is averaged over superpixels. A nearest-neighbor graph is computed over the superpixels in the video, using location and appearance as features.
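A minimal PyTorch sketch of such a patch-to-mask network is given below. It is an illustration of the input/output shapes described in these notes, not the architecture used in the paper; the layer sizes, patch size $w=64$, mask size $s=32$ and the random placeholder labels are all assumptions.

```python
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    """Minimal CNN that takes a w x w image patch and predicts an s x s mask
    (1 = object pixel, 0 = background), trained with per-pixel binary
    cross-entropy against motion-derived pseudo-labels."""
    def __init__(self, w=64, s=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(128 * (w // 8) * (w // 8), s * s)
        self.s = s

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h).view(-1, 1, self.s, self.s)  # logits for the mask

net = MaskNet(w=64, s=32)
patch = torch.randn(8, 3, 64, 64)                         # a batch of image patches
logits = net(patch)
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.randint(0, 2, logits.shape).float())    # placeholder labels
print(logits.shape, loss.item())
```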
CommonCrawl
As you know, operational amplifiers can be used in a vast array of circuit configurations, and one of the simplest configurations to use is the inverting amplifier. The amplifier only requires the operational amplifier IC and a few other small components. Inverting amplifiers are also used as summing amplifiers, which sum the voltages present on multiple inputs and combine them into a single output voltage. Some examples are summing several signals with the same gain in an audio mixer, or converting binary numbers to a voltage in a digital-to-analog converter (DAC).

The figure below, Fig 1.1, illustrates the amplifier's inverting configuration. There are an operational amplifier and two resistors, R1 and R2, inside the inverting configuration. The second resistor, R2, is connected from terminal 3 (the output terminal) back to terminal 1, which is the inverting (-) input terminal. With R2 connected in this manner, we can apply negative feedback, which is the process of "feeding back" a small portion of the output signal into the input terminal. However, in order to make the feedback negative, the output signal must be fed into the inverting terminal of the operational amplifier. If R2 were instead connected between the output terminal 3 and terminal 2 (the noninverting input), there would be positive feedback: the output signal would be fed back into the noninverting (+) input, creating positive feedback in the operational amplifier. If you look at the diagram closely, R2 also closes the loop around the operational amplifier.

In addition to R2 in this configuration, terminal 2 has been grounded, and resistor R1 is connected between terminal 1 and the input signal source of voltage v1. The output is taken between terminal 3 and ground, a point at which the output impedance is ideally zero. Because of this, the output voltage v0 does not depend on the current that is supplied to a load impedance connected between terminal 3 and ground.

At the inverting input terminal, the voltage v1 is given by v1 = v2. This is so because, as the gain A approaches infinity, v1 approaches v2 and the two are ideally equal. Having found v1, Ohm's law can now be applied to find the current i1 through R1. We need to know where this current will flow. Since the operational amplifier has an ideally infinite input impedance, meaning it draws zero current, the current cannot flow into the amplifier; instead, i1 will flow through R2 into terminal 3. From this, Ohm's law can be applied at R2 to find the value of v0.

Figure 1.2 Analysis of the inverting configuration.

This is the closed-loop gain. This gain is just the ratio of the two resistor values R2 and R1. There is a minus sign on the right side because the closed-loop inverting amplifier provides a signal inversion. For example, if the ratio R2/R1 = 10 and a sine wave signal of 2 V (peak-to-peak) is applied at the input terminal (v1), there will be a sine wave signal of 20 V (peak-to-peak) at the output, phase-shifted exactly 180 degrees with respect to v1. Because of the minus sign incorporated in the closed-loop gain, the configuration is called the inverting configuration. A significant point worth mentioning is that this closed-loop gain depends solely on external passive components (R1 and R2). This allows the closed-loop gain to be made as accurate as needed by selecting passive components, such as resistors, capacitors, or inductors, of appropriate value.
Also, because of this dependency on the passive components alone, the closed-loop gain is ideally independent of the operational amplifier's gain. To summarize: the amplifier started out having a large gain A, and through applying negative feedback a closed-loop gain R2/R1 has been obtained that is much smaller than the open-loop gain, but is now stable and predictable. That is, gain is being traded for accuracy.

Figure 1.3 Analysis of the inverting configuration with a finite open-loop gain of the operational amplifier.

Note that as A approaches $\infty$, G approaches the ideal value of -R2/R1. Looking at Fig 1.3, it can be seen that as A approaches $\infty$, the voltage at the inverting input terminal approaches zero. If you recall, this was the assumption used earlier to analyze the operational amplifier when it was assumed to be ideal. Finally, we note that Eq. 1.1 shows that, in order to minimize the dependency of the closed-loop gain G on the value of the open-loop gain A, the closed-loop gain needs to be much less than the open-loop gain.

To conclude this article, the inverting configuration has been discussed and explained. I hope that you have gained a better understanding of the purpose of this amplifier as well as how it is designed. Whether it is an audio mixer or a digital-to-analog converter, you'll find an inverting amplifier within it. Two things are to be remembered when talking about inverting amplifiers: no current flows into either input terminal, and the differential input voltage is zero (v1=v2=0). From these two rules, we derived an equation to calculate the closed-loop gain of the inverting amplifier. If you have any questions or comments, please leave them below!
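As a numerical illustration of the trade-off just described, the short Python sketch below evaluates the ideal closed-loop gain -R2/R1 together with the standard finite-gain expression $G = -(R_2/R_1)\,/\,\bigl(1 + (1 + R_2/R_1)/A\bigr)$ (presumably the relation referred to above as Eq. 1.1, whose displayed form did not survive in the text). The component values are just the 10x example from the article.

```python
def closed_loop_gain(R1, R2, A=float("inf")):
    """Closed-loop gain of the inverting amplifier.  With an ideal op amp
    (A -> infinity) this reduces to -R2/R1; for a finite open-loop gain A the
    standard expression is  G = -(R2/R1) / (1 + (1 + R2/R1)/A)."""
    ideal = -R2 / R1
    if A == float("inf"):
        return ideal
    return ideal / (1.0 + (1.0 + R2 / R1) / A)

R1, R2 = 1e3, 10e3          # the 10x example from the text (values in ohms)
for A in [1e2, 1e3, 1e5, float("inf")]:
    print(f"A = {A:>8}: G = {closed_loop_gain(R1, R2, A):.4f}")
# As A grows, G approaches the ideal value -10 and the dependence of G on A
# fades -- gain is traded for accuracy, as discussed above.
```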
CommonCrawl
The decomposition of linking number of the edges of a ribbon into the sum of twist and writhe is well known in the applied knot theory of closed polymer chains and DNA. A straightforward proof follows from each projection of the ribbon link: the crossings of the resulting link diagram are naturally partitioned into "ribbon crossings" (equalling 2 $\times$ planar writhe of an edge) and ribbon twists; averaging these over all projection directions gives the familiar writhe and twist respectively. This can be further visualized as a certain decomposition of the secant manifold of the curve, Gauss mapped to the direction sphere, closed with a framing manifold equivalent to the ribbon framing. Vassiliev invariants are knot (and link) invariants which can be considered as $n$-point generalizations of the linking number with certain combinatorial constraints. Indeed, the first Vassiliev invariant for links is the linking number. In this talk, I will discuss work in progress in understanding the standard '$X+Y$' decomposition of the second-order Vassiliev invariant as a sum of a certain 2-point generalized writhe and twist, and interpret this in terms of the knot's secant manifold under the Gauss map. This is joint work with Alexander Taylor. This research was supported by a Leverhulme Research Programme Grant: RP2013-K-009, "SPOCK: Scientific Properties of Complex Knots".
CommonCrawl
While not all pictures may be worth a thousand words, many geometers and other mathematicians find pictures inspire mathematical thoughts. And perhaps some in the general public might enjoy seeing images that inspire mathematics rather than complex equations that are often used to carry out mathematical thought and practices. In honor of Mathematics and Statistics Awareness Month I will try to show how pictures of equations of algebraic expressions inspired and continue to inspire, and hopefully make these algebraic equations less mysterious. Remarkably, the dramatic developments in geometry done in ancient Greece and reflected in the Elements of Euclid were unified and reinforced with the development of analytical geometry by Fermat (1607-1665) and Descartes (1596-1650). This wedding between geometry and algebra via the subject called analytical geometry has been to the great benefit of both of these mathematical domains, not to mention all other mathematical domains. Thus, someone might draw a picture of a circle or an ellipse but now one also has a way of using an "algebraic formula/equation" to represent the picture. Mathematicians can write down equations and look at what they represent as well as seek out equations for "pretty" curves in the hope that the equations will show some pattern that will lead to additional mathematical insights. Some of the curves we will explore have associations with famous mathematicians and you can find examples of such associations in this bibliography of curves. Sometimes these curves are defined by "verbal" descriptions that allow mathematicians to derive equations for the graphs of the algebraic expressions which represent these graphs. For example, an ellipse results from looking for those points in the Euclidean plane, the sum of whose distances from two fixed points (called the foci of the ellipse) is a constant. However, ellipses also arise from cutting right circular cones with a plane in a particular way. Those familiar with Cartesian coordinates and lines can move to the next segment, Conic sections. Let me briefly review the framework for converting algebraic expressions to pretty pictures. One can admire the pictures below, however, without these "details." Pretty pictures of graphs of equations rely on the notation that one can represent a point in two dimensional space (a flat piece of paper or a computer screen) by using two numbers to represent a point. In d-dimensional space one would need d numbers to represent a point, but beyond two-dimensions things are much more complicated because one can't represent what one draws with relative ease on a flat piece of paper. In modern times, one begins with two lines which meet at right angles called coordinate axes. Traditionally (Figure 4 ) the vertical line is called the y-axis and the horizontal line is called the x-axis. Each axis can be thought of as in two halves. The y-axis has a half above the point where the lines intersect (called the origin); one lays out along that part a number scale with positive whole number values (equally spaced from each other) and along the other half, negative whole numbers (equally spaced). The positive values are in the up direction on the line and the negative numbers in the down direction from the origin. For the x-axis one lays out positive numbers to the right and negative numbers to the left of the origin. Each point is represented by an ordered pair of numbers $(x,y)$. The pair $(x,y)$ are called the coordinates of the point. 
Thus, (2, -3) represents a point with x-coordinate 2 and y-coordinate -3. The numbers in a pair can be any real numbers, but in the diagram below the coordinates are usually integers, either positive or negative whole numbers. The point where the two axes meet is assigned the pair (0, 0). If one wants to locate where the point (2, -3) should be placed, one imagines oneself at (0, 0) and treats the pair (2, -3) as "motion directions." Since the x value of the pair is positive 2, walk two units to the right (now one is at the point (2, 0)), and then walk down (since the second coordinate is negative) 3 units. Other examples of points can be similarly plotted, as in Figure 4.

If one has an equation involving x and y, one can put a point in a diagram such as that in Figure 4 at each point whose coordinates $(x,y)$ satisfy the equation. Thus, the equation $x = 0$ is satisfied by the points (0, -11), (0, -2), (0, 0), (0, 7), as well as infinitely many other ordered pairs whose first coordinate is zero. Thus, $x = 0$ is the equation of the points that lie on the vertical line we are calling the y-axis. The x-axis consists of all points which satisfy the equation $y = 0$. More generally, horizontal lines have equations like $y = 2$ or $y = -6$, while vertical lines have equations like $x = -3$ or $x = 3/4$. The simplest equations in x and y are those that involve only the first power of x and y, rather than terms like $x^2$ or $y^2$. The most general form of such equations is $ax + by + c = 0$, where the letters a, b, and c are "fixed" constants but can take on values independent of each other. We have already seen that the special cases of this equation where a or b (but not both) is zero lead to lines that coincide with or are parallel to the coordinate axes. However, it is not hard to see that in general the graph of $ax + by + c = 0$ (a and b not both zero) is a straight line. When $c = 0$ these lines go through the point (0,0), since for any choice of a and b we can compute using the equation that $a(0) + b(0) + 0$ evaluates to 0. So (0, 0) is on every line of the family where $c = 0$. In what follows we will always think of lines (e.g. $ax + by + c = 0$), or graphs of other equations that we might write down, as being drawn with respect to a set of coordinate axes such as the ones shown in Figure 4. When drawing a line that does not pass through (0, 0) it is helpful to find the "intercepts" of the line, the points where the line intersects the x-axis (a point of the form $(u, 0)$) and the y-axis (a point of the form $(0,v)$). Remember that two points determine a unique line.

Consider now, as an example, the equation $y = x^2 - 5x + 6$, which I will refer to as (*). It is not hard to see that the graph of this equation is the same as the graph of $y = (x-3)(x-2)$. We have used the tool of "factoring" the expression on the right-hand side of (*). From the factored form it is easier to see that the graph of (*) passes through the point (3, 0) and the point (2, 0). When x is zero one sees that y must be 6, so the graph also passes through (0, 6). It turns out that this equation represents a particular example of a parabola. Once one sees the magic of analytical geometry, much of what is done in algebra makes a lot more sense for many people, and it is certainly very useful for connecting algebra to geometry via analytical geometry. Many very interesting curves can be obtained from equations that use trigonometric, logarithmic, or exponential functions, in addition to the more familiar algebraic functions. Here I will only consider examples involving polynomial equations, similar to the equation in (*).
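As a quick check of the claims made about (*), here is a short Python/SymPy sketch (any computer algebra system would do; it is merely an illustration, not part of the original column) that factors the right-hand side and recovers the intercepts named above.

```python
import sympy as sp

x = sp.symbols('x')
expr = x**2 - 5*x + 6               # the right-hand side of equation (*)

print(sp.factor(expr))               # (x - 3)*(x - 2)
print(sp.solve(sp.Eq(expr, 0), x))   # [2, 3] -> x-intercepts (2, 0) and (3, 0)
print(expr.subs(x, 0))               # 6      -> y-intercept (0, 6)
```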
The coordinate system used above is often called a Cartesian or rectangular coordinate system. As a matter of historical interest, Descartes did not actually use a system with four quadrants, but used only what today would be called the first quadrant, where the values of x and y are both positive. One could also use what are called polar coordinates to write down equations and get interesting pictures, too. Polar coordinates rely on ideas from trigonometry. Another type of coordinate system would be to use homogeneous coordinates for points and work in what is sometimes called the real projective plane. In this geometry a point is represented by a triple $(x, y, z)$ with not all three of the coordinates zero, and where two triples of coordinates represent the same point when one is a non-zero multiple of the other. Thus, in this coordinate system (1, 2, 1) and (2, 4, 2) would be the same point. One of many advantages of this approach is that the asymmetry between (1, 2) and (2, 4) being different points while $x + 2y = 0$ and $2x + 4y = 0$ are the same line (Euclidean case) does not occur. In this coordinate system a line has the "homogeneous equation" $ax + by + cz = 0$ rather than the equation $ax + by + c = 0$. For a very brief primer on these ideas look here.

A non-vertical line can also be written in the form $y = mx + b$. The constant m is known as the slope of the line. Vertical lines, lines with equations of the form $x = k$, have undefined slope. Part of the reason for interest in the concept of slope is that when Newton and Leibniz developed the ideas that became what today is called Calculus, they used the notion of the slope of a line and the concept of the tangent line to a curve as part of how to conceptualize derivatives. The derivative helps one understand the important concept of rate of change. Rates of change come up in physics, chemistry, biology and economics but, though Calculus has many applications, it also has internal questions of interest in its own right. In addition to understanding the "shape" of lines, analytical geometry is also concerned with understanding the "essence" of the properties that make a circle a "circle" or that make an ellipse a shape different from a "hyperbola."

An interesting type of graph which came to be studied in ancient times is known as the hyperbola. Apollonius wrote a book about the conics over 2000 years ago! Hyperbolas belong to a special and interesting class of curves that arise as "conic sections." Figure 5 shows the two nappes of a "right circular" cone, where the vertical "axis" of the cone is perpendicular to the "horizontal" plane determined by the intersection of the lines formed by the two rays in the middle of the diagram. When this cone is cut by various planes one gets points that lie in the intersection of the cone and the plane. Depending on how the plane cuts the cone, one can get a circle, an ellipse, a parabola or the two pieces of a single hyperbola (one on the upper part of the cone and one on the lower part). I will postpone until later the fact that one can cut a cone in a very special way to get a single point or a single line, which are often called degenerate conic sections. I will say more about them below. Figure 7 helps clarify how some of the different kinds of conic sections arise from plane sections of one nappe of a cone when the plane is at different angles to the axis and generating lines of the cone.
All Euclidean (round) circles of the same radius are congruent; that is, if they are moved without "distortion," any circle of radius r can be superimposed on any other circle of radius r. However, it may seem mysterious that the same circle has different equations depending on where the coordinate system is placed with regard to the circle. A circle of radius r referred to a pair of orthogonal (meeting at right angles) axes, with its center at the point (0,0), has the equation

$x^2 + y^2 = r^2$ (1)

The center of the circle is at (0, 0), the origin. If the coordinate axes are instead placed so that the center of the same circle sits at the point (5, -2), the circle has the equation

$(x-5)^2 + (y+2)^2 = r^2$ (2)

An equation of the same form but with a negative number on the right-hand side, call it (3), has a very similar appearance to that of the circle of radius r in (2). However, for real numbers, when one multiplies such a number by itself (e.g. 3 or -7) the result is always positive ($3 \times 3 = 9; (-7)(-7) = 49$). Furthermore, adding positive numbers can only yield a positive number. So there are no real numbers x and y which satisfy (3). There are "complex numbers" (numbers of the form u + vi where i is the square root of -1) which satisfy (3), but here we will not take on the interesting question of "graphing" the solutions of equations which involve complex numbers. By the way, it is worth noting that when the right-hand side of (2) is zero, the only solution of the equation shown is (5, -2), which can be thought of as a degenerate circle with radius 0.

One can now try to study, by way of example, a particular curve, say the hyperbola. Just as circles can differ because they have different radii, hyperbolas can be "congruent" to each other (if one moves one curve without "distorting" it, one can superimpose it on another curve in a different location) but they can also differ in shape. In Figure 8 the different colored curves are not congruent, though they all "look" like hyperbolas. Differently shaped curves in the same family will have different equations. The graphs of $y = 4/x$ and $y = 20/x$ are not congruent but are each hyperbolas, and the change of constant turns one hyperbola of the family into another. But those of you who have studied a bit of analytical geometry (or Calculus) may not recognize equations like $y = A/x$ as graphs of hyperbolas. We can rewrite the equation $y = A/x$ as $xy = A$. For each point on the hyperbola, the tangent line (a line that locally meets the curve in exactly one point) cuts off a triangle for which the x-intercept and the y-intercept, when multiplied, yield a constant value. This fact holds both when the coordinate axes meet at right angles and when they meet at other angles. A nifty application of that fact is to examine those points in the interior of a triangle through which there is a line that bisects the area of the triangle. It turns out that pieces of three hyperbolas are involved in understanding the nature of this set of area bisectors. A similar fact about parabolas enables one to look at the set of points in a triangle through which a perimeter bisector of the triangle passes.

In addition to ellipses and hyperbolas, one can obtain circles, parabolas, and intersecting lines by cutting the nappes of a cone. The intersection of a cone and a plane which results in two intersecting lines is considered to be a "degenerate" conic because the geometric object is the product of two 1-dimensional geometric objects rather than a 2-dimensional object per se. And, if one considers a cylinder as a "degenerate" cone, one can also get two coincident lines (see below).
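Returning to the tangent-line fact for the hyperbola $xy = A$ mentioned above, a short SymPy computation (a verification sketch, not part of the original column) confirms it: the tangent at the point $(t, A/t)$ meets the axes at $x = 2t$ and $y = 2A/t$, so the product of the intercepts is the constant $4A$, and the triangle cut off has constant area $2A$.

```python
import sympy as sp

x, t, A = sp.symbols('x t A', positive=True)
f = A / x                                   # the hyperbola y = A/x, i.e. xy = A

slope = sp.diff(f, x).subs(x, t)            # slope of the tangent at (t, A/t)
tangent = f.subs(x, t) + slope * (x - t)    # the tangent line as a function of x

x_intercept = sp.solve(sp.Eq(tangent, 0), x)[0]   # where the tangent meets y = 0
y_intercept = tangent.subs(x, 0)                  # where it meets x = 0

print(sp.simplify(x_intercept))                    # 2*t
print(sp.simplify(y_intercept))                    # 2*A/t
print(sp.simplify(x_intercept * y_intercept))      # 4*A  -- independent of t
print(sp.simplify(x_intercept * y_intercept / 2))  # 2*A  -- area of the triangle
```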
The complex world of conic sections gets captured by the general second-degree equation

$ax^2 + bxy + cy^2 + dx + ey + f = 0$ (6)

where the coefficients are real numbers and not all of a, b, and c are zero. If one takes the equations of two lines, multiplies them together and sets the product equal to zero (call the result (7)), then drawing a graph of the points that satisfy this equation will yield a "picture" of two intersecting lines. The two lines one "multiplies" to get a quadratic equation as in (7) can be identical lines or different lines, and the different lines can intersect or they can be parallel. But how can one cut the cone in Figure 5 to get two parallel lines? Notice that when the surface usually called a cylinder is cut by a plane, just as when a cone is cut by a plane, one can get "cylinder" sections that are circles or ellipses. Figure 9 shows a part of a cylinder which, when cut by a plane, yields an ellipse. However, one can regard a cylinder as a "cone" whose apex, the point where the generating lines meet, is very far away, at "infinity." One can see that this "type of cone" can be cut to yield two parallel lines.

You may recall that one can take a quadratic equation in a single variable, $ax^2 + bx + c = 0$, and determine if it has real or complex roots by looking at the quantity $b^2-4ac$. This quantity is known as an "invariant." Mathematicians using "invariants" were able, based on the coefficients in (6), to determine the type of conic one was looking at, including distinguishing conics from "degenerate" conics.

Curves can not only be pretty but they can also be useful! The ancient Greeks and other civilizations (India, China, etc.) were curious about the paths along which the planets, heavenly bodies which they realized were different from the stars, moved. Ptolemy developed a "cosmology" based on the Earth being at the center of the Universe and heavenly bodies moving in circular orbits. However, by the time of Copernicus and Kepler, Ptolemy's ideas were discarded because they did not easily fit observable facts. Kepler developed "laws" for the planets which suggested that the orbits were ellipses. And there was great speculation about the curves that comets took. Furthermore, a consequence of Newton's universal law of gravitation is that heavenly bodies such as planets in orbit around the sun, and comets, travel along curves which are conic sections--parabolas, ellipses or hyperbolas. In fact, to some extent Newton the mathematician developed the Calculus to help Newton the physicist understand the laws of motion of the planets, and, therefore, provide a theoretical approach to Kepler's "empirical" laws.
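The invariant mentioned above lends itself to a tiny classifier. The sketch below (an illustration, not from the original column) looks only at the second-degree coefficients of equation (6); as noted in the text, separating genuine conics from degenerate ones requires a further invariant, which is omitted here.

```python
def classify_conic(a, b, c):
    """Classify a x^2 + b x y + c y^2 + d x + e y + f = 0 by the sign of the
    invariant b^2 - 4ac (assuming the conic is non-degenerate)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return "ellipse (a circle if a == c and b == 0)"
    if disc == 0:
        return "parabola"
    return "hyperbola"

print(classify_conic(1, 0, 1))    # x^2 + y^2 = 1  -> ellipse (circle)
print(classify_conic(0, 0, 1))    # y^2 = x        -> parabola
print(classify_conic(1, 0, -1))   # x^2 - y^2 = 1  -> hyperbola
                                  # (x^2 - y^2 = 0 is the degenerate pair of lines)
```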
The hyperbolas in Figure 8 visually show the origin as a point symmetry. The family of circles defined in (1), has both point symmetry and line symmetry. Given an equation, if replacing x by -x and y by -y results in the same equation, the graph of the equation is symmetric in the point (0, 0). Given an equation, if replacing x by -x results in the same equation for the graph, then the y-axis is a line of mirror symmetry. Given an equation if replacing y by -y results in the same equation for the graph, then the x-axis is a line of mirror symmetry. Note that the hyperbola equations in Figure 8 don't pass the test for having an axis as a line of mirror symmetry, but these hyperbolas do have the lines $y = x$ and $ y = -x$ as lines of mirror symmetry! One of the most useful concepts that has developed in mathematics is that of a function. While functions are of interest for mathematical reasons only, they are "motivated" naturally by many applications. If one is working outside quantum physics, given a system that is being described by the variable time, the system is in only one state at a specific time--given an "input" there is exactly one output. When one inputs a particular x, say $x = u$ from the "domain" of $y = f(x)$, one gets one output from the function, denoted $f(u)$. When one has a function $y = f(x)$ and draws a graph of the pairs $(x,y)$ that satisfy this function using coordinate axes as in Figure 1, then any vertical line will intersect the graph of the function in no more than one point. One can look and admire the pictures at an art gallery without knowing anything about the artist, when the picture was created, or the title of the artwork. However, for pictures in an art museum the comments of the museum curators and/or the artist about his/her work can add to one's appreciation. Below is some "art" work--images of graphs of equations, mostly without comment. There is controversy about the names associated with some of these equations and sometimes the equation associated with a particular name! One way of generating curves is to look at what happens when one shape rolls or slides along another shape. The graphs below are rendered using Wolfram/Alpha. Note: This graph and the prior one have certain qualitative similarities. Can you use your knowledge of algebra to understand some ways the curves are different? Note: This curve is qualitatively similar to the Folium shown above. Is there some "stronger" relation between these two graphs? We have looked at various pretty curves but for the most part we have drawn the graph of a single equation, though in equations (1) and (2) we looked at families of circles centered at the origin of radius r and circles centered at (5, -2) of radius r. However, sometimes when an equation has "parameters" (constant values that can be changed, thereby generating a family for different values of the parameters) the curves that we see as the parameters change are sometimes dramatically different. Such is the case with the limacon, which I will consider as a family of curves that changes as we change the value of the constant that appears in the equation of the family. Sometimes being associated with a particular intriguing equation has given a person visibility even if they are not the first to have actually studied it. Many of the eye-catching pictures one might find appealing are associated with famous mathematicians. One such family of curves is associated with Maria Gaetana Agnesi (1718-1799). Agnesi lived, for her times, a long life. 
She was also a fascinating figure in the history of mathematics, being credited with having been the first person to give an integrated treatment of differential and integral calculus in one book. I hope you are convinced that much is to be learned from intriguing pictures and the interplay of geometry and algebra. The continuation of this article about pretty pictures will discuss striking graphs in the dot/line sense of the word graph. Here is a sneak preview of a graph that William Tutte (1917-2002) discovered. Although it is not the smallest such example, it was the first example of a plane 3-connected 3-valent graph that had no Hamiltonian circuit (a simple closed curve that visits each vertex of the graph once and only once). Prior to Tutte's discovery of this beautiful example, there was the chance that the 4-color conjecture (now theorem) that plane graphs can be face-colored with 4 or fewer colors might be provable by showing that all plane 3-connected 3-valent graphs HAD Hamiltonian circuits. It turned out not to be easy to prove the 4-color theorem, but Kenneth Appel and Wolfgang Haken did so in 1976. Besant, W., Roulettes and Glissettes, Deighton, Bell, London, 1870. Bix, R., Conics and Cubics, Springer, New York, 1998. Coolidge, J., A Treatise on Algebraic Plane Curves, Dover, New York, 1959. Hilton, H., Plane Algebraic Curves, Oxford U. Press, London, 1932. Lawrence, J., A Catalog of Special Plane Curves, Dover, New York, 1972. Lockwood, E., A Book of Curves, Cambridge U. Press, Cambridge, 1967. McCarthy, J., The cissoid of Diocles, Math. Gazette 26 (1942) 12-15. Walker, R., Algebraic Curves, Dover, New York, 1950. Yates, R., A Handbook on Curves and Their Properties, 2nd ed., Edwards, Ann Arbor, 1942. Zwikker, C., The Advanced Geometry of Plane Curves and Their Applications, Dover, New York, 1963. Those who can access JSTOR can find some of the papers mentioned above there. For those with access, the American Mathematical Society's MathSciNet can be used to get additional bibliographic information and reviews of some of these materials. Some of the items above can be found via the ACM Portal, which also provides bibliographic services.
CommonCrawl
In this talk, we will review the theoretical framework for quantifying the market risk of financial institutions in terms of coherent risk measures, which was laid out in a seminal paper of Artzner et al. (1999). For a coherent risk measure $\rho$ on $L^\infty$, Delbaen (2002) proved that $\rho$ can be represented as the worst expectation over a class of probability measures whenever it has the Fatou property. More recently, it has been asked whether Delbaen's representation theorem holds on more general model spaces containing unbounded positions. We will present a comprehensive investigation of this problem. We characterize the Orlicz spaces over which the representation holds. We also show that the representation holds on general Orlicz spaces if the risk measure possesses additional properties, e.g., law invariance or surplus invariance.
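To make the "worst expectation over a class of probabilities" representation concrete, here is a small numerical sketch (my own illustration, not part of the talk) for Expected Shortfall, the standard example of a coherent risk measure. On a finite sample space with equally likely scenarios, $\mathrm{ES}_\alpha(X)$ equals the average loss over the worst $\alpha$-fraction of scenarios, and it also equals $\sup\{E_Q[-X] : dQ/dP \le 1/\alpha\}$; the supremum is attained by the measure $Q$ that concentrates the maximal allowed density on the worst outcomes. The Python below checks that the two computations agree; all names and parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, k = 100_000, 5_000
alpha = k / n                     # ES level: the worst 5% of scenarios
X = rng.normal(0.0, 1.0, n)       # position value under P (uniform scenarios)

# Primal form: average loss over the worst alpha-fraction of scenarios.
order = np.argsort(X)             # worst outcomes first
es_direct = -X[order[:k]].mean()

# Dual form: sup over probabilities Q with density dQ/dP <= 1/alpha of E_Q[-X].
# The supremum is attained by putting density 1/alpha on the worst scenarios.
density = np.zeros(n)
density[order[:k]] = 1.0 / alpha
q = density / n                   # Q-probabilities; they sum to 1
es_dual = float(np.sum(q * (-X)))

print(f"ES via tail average        : {es_direct:.6f}")
print(f"ES via worst expectation Q : {es_dual:.6f}")   # same value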
CommonCrawl
Citation: Eftimiadi G, Vinai P, Eftimiadi C (2017) Staphylococcal Protein A as a Pharmacological Treatment for Autoimmune Disorders. J Autoimmune Disord Vol 3:40. Copyright: © 2017 Eftimiadi G, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Staphylococcal protein A (SpA) is a protective antigen expressed by Staphylococcus aureus that allows the bacterium to manipulate innate and adaptive host immune responses [1–6]. S. aureus is a commensal organism that forms part of the microbiome in healthy humans; however, under certain circumstances it can behave as an invasive pathogen and cause life-threatening infections. SpA, an immunoglobulin-binding protein, is expressed on the bacterial surface and is secreted freely into the extracellular environment as the bacterium grows . It is expressed by all S. aureus strains. SpA binds to the Fc portion of human and animal immunoglobulins, a defense mechanism that provides protection from opsono-phagocytic killing. Furthermore, SpA associates with the Fab portion of VH3-type IgM B cell receptors , mediating their cross-linking and leading to activation and clonal expansion of B lymphocytes and their subsequent apoptotic collapse (Figure 1). Recombinant SpA (purified from Escherichia coli) does not induce B cell clonal expansion; rather, it induces collapse of VH3 clonal B cells directly [10,11]. It is worth noting that B lymphocytes that express VH3-encoded immunoglobulins play specific roles in various autoimmune diseases ; therefore, they may constitute effective pharmacological targets for the treatment of these diseases. Figure 1: Pharmacological effects of SpA binding to the Fab region of B cell receptors in ITP. B cells and plasma cells are involved in the pathogenesis of ITP. They are abnormally regulated and produce autoreactive antibodies, which bind platelets and megakaryocytes, inducing their impairment and/or phagocytic degradation in the spleen and liver. SpA binding to the Fab regions of the B-cell receptor promotes B-cells apoptosis and leads finally to inhibition of autoreactive antibody production and platelet degradation. SpA has high affinity for the Fc portion of IgG. IgG antibodies and IgG-containing circulating immune complexes can be selectively removed by extracorporeal exposure of a patient's plasma to protein A immobilized on a matrix . In the 1990s, the US Food and Drug Administration approved a medical device containing SpA covalently linked to silica beads (PROSORBA®, Cypress Bioscience, Inc., San Diego, CA, USA) for plasma-adsorption treatment of patients with refractory rheumatoid arthritis (RA) or refractory immune thrombocytopenia (ITP). However, the "Guidelines on the Use of Therapeutic Apheresis in Clinical Practice" state the following: "Improvement in ITP may result indirectly from in vivo immunomodulation by the release of protein A into the patient, which can induce targeted B cell depletion". Indeed, leakage of protein A from the matrix, probably due to the activity of serum protease, is thought to occur [13,15]. SpA has been tested in animal models and has proved to be a successful treatment for several autoimmune pathologies; for example, SpA alleviates antibody-induced nephritis and renal failure associated with systemic lupus erythematosus in mice . 
In addition, the efficacy of SpA as a therapeutic agent was evaluated in a murine model of collagen-induced arthritis (CIA), which mimics RA in humans. SpA can co-opt circulating IgG molecules and form small, defined hexameric complexes that interact with monocytes, macrophages, and pre-osteoclasts. Formation of these complexes results in Fcγ receptor type I-dependent polarization of macrophages to a regulatory phenotype (Figure 2), thereby rendering them unresponsive to activators such as interferon-γ. The anti-inflammatory complexes can also directly inhibit differentiation of human pre-osteoclasts into osteoclasts "in vitro" (Figure 2). Moreover, administration of SpA during the early stages of disease alleviates the clinical and histologic erosive features of CIA in mice. Figure 2: Pharmacological effects of SpA binding to the Fc region of IgG in RA: the anti-inflammatory role of the immune complexes formed. Monocytes can differentiate into either macrophages or osteoclasts depending on their response to specific biological signals. These cells are a primary source of the inflammatory environment that produces synovial and erosive lesions. SpA binds to the Fc portion of circulating IgG and generates small hexameric immunoglobulin complexes (IgG2SPA)2 that interact with monocytes, macrophages, and pre-osteoclasts. Formation of these complexes results in Fcγ receptor type I–dependent polarization of macrophages to an anti-inflammatory regulatory phenotype and inhibits pre-osteoclast differentiation into osteoclasts. Ultrapure SpA has been used to successfully treat a murine model of ITP (PRTX-100, Protalex, Inc., Florham Park, NJ, USA). ITP is an autoimmune bleeding disorder in which autoantibodies or immune complexes bind to platelet surface antigens; autoreactive T cells then target and destroy platelets and megakaryocytes in the spleen and bone marrow. Platelet counts in mice treated with PRTX-100 increase to normal levels within 1–2 weeks, and none of the mice die during the experiments. Toxicology studies have also been performed in monkeys. The monkey is considered to be the best predictive animal model due to its similarity to humans with respect to SpA binding to IgG, B cells, and monocytes/macrophages. Weekly intravenous doses of SpA (up to 100 μg/kg) are well-tolerated and essentially non-toxic. The majority of treated monkeys develop antibodies against SpA. However, no evidence of a hypersensitivity response is observed. Two single-dose Phase I studies examined the safety, pharmacokinetics, immunogenicity, and pharmacodynamic activity of highly purified SpA in human volunteers. The majority of subjects developed detectable anti-protein A antibodies after dosing, with no evidence of a hypersensitivity response. A notable pharmacodynamic effect is a transient post-dose reduction in circulating lymphocytes. SpA dosing increases transcription of multiple genes regulated by type-1 interferons in peripheral blood mononuclear cells; up-regulation of several such genes correlates with the degree of lymphopenia observed 24 h after dosing. This study demonstrates for the first time that small intravenous doses of SpA (0.3–0.45 μg/kg) are safe and well-tolerated in humans. Following this first toxicological study, a Phase Ib randomized, double-blind, placebo-controlled, dose-escalation study of ultrapure SpA (PRTX-100) and methotrexate was conducted in patients with active RA. 
The most common treatment-related adverse events are nausea, muscle spasms, dizziness, flushing, fatigue, worsening of RA, and headache. However, most cases of drug-related RA flares are followed by prolonged reductions in RA activity, along with improved symptoms and a reduction in swollen joint counts. No serious adverse events are related directly to SpA (PRTX-100), and none occur in the group receiving the highest dose. As shown in the previous study, the majority of subjects develop detectable anti-protein A antibodies, with no evidence of a hypersensitivity response. Although this study did not determine the highest dose of PRTX-100 that could be administered to RA patients on a weekly basis with acceptable toxicity, the results suggest that, at least at the two highest doses tested, PRTX-100 has a positive effect on disease activity. These findings warrant further Phase 2/3 clinical trials to confirm the positive results and to verify whether the reduction of RA activity is temporary or permanent. The promising results obtained in the mouse model of ITP, and the preclinical data indicating that the drug has the potential to treat ITP by reducing immune-mediated destruction of platelets, support further investigations to evaluate the safety and efficacy of SpA in ITP patients. Data from initial cohorts in two dose-escalation trials (PRTX-100 at a dose of 1 μg/kg or 3 μg/kg) demonstrate an acceptable safety profile, and support continued enrolment of higher-dose cohorts. In one of the trials, a platelet response is observed in one of six patients treated with the lowest dose. Clinical trials examining higher-dose cohorts are underway, and updated data from patients treated in Phase 1/2 and European Phase 1b studies will be released in the future. Our proposal to extend experimentation with SpA to the pharmacological treatment of autoimmune movement disorders arose from a clinical case involving a girl with Tourette syndrome. This case presented with characteristics that are similar to those of cases of PANDAS (Pediatric Autoimmune Neuropsychiatric Disorders Associated with Group A Streptococcal Infections), a clinical condition in which tics and obsessive compulsive disorders follow acute Streptococcus pyogenes infections [25,26]. The child presented with high titers of anti-streptolysin O (ASO) and anti-strepto DNase (DNase-B) antibodies and showed a positive reaction to four autoimmunity tests (out of a panel of five) that detect the presence of autoantibodies against brain antigens (Moleculera Labs, Oklahoma City, OK, USA). The assays measure the titers of antibodies against dopamine D1 and D2 receptors, lysoganglioside-GM1, and beta-tubulin, in addition to antibodies that activate calcium/calmodulin-dependent protein kinase type II (CaM kinase II) by binding to receptors on neural cell lines. Microbiological monitoring indicated that the child was an intermittent nasopharyngeal carrier of S. aureus, and that a significant improvement in motor tics occurred during the S. aureus colonization phase. The nostril is the main ecological niche in which S. aureus resides, although the genetic and environmental determinants of carrier status are not fully understood. At any moment in time, about 20% of the general population carries S. aureus persistently, while ~30% are transient carriers and ~50% are non-carriers [28,29]. A complex immunological equilibrium exists between host defense mechanisms and the differential expression and roles of S. 
aureus virulence determinants during colonization and disease . This clinical case was of much interest to us because of the observed "see-saw effect" between the host immune response and tic expression. A significant improvement in motor tics occurs during the S. aureus colonization phase (nasopharyngeal-, oropharyngeal-, and gut-positive bacterial cultures). Furthermore, the colonization phase is associated with downregulated production of antibodies against Streptococcus pyogenes (the etiological agent of PANDAS) and, most importantly, of autoantibody production against D1 and D2 dopamine receptors. Dopamine is a crucial neurotransmitter required for motor control; autoimmune reactions against its neuronal receptors may alter central dopamine pathways and lead to movement and neuropsychiatric disorders, especially in childhood. After decolonization, clinical conditions revert to the poor scores previously observed with a parallel increase of antistreptococcal antibody production. This result was consistent with data from animal models showing that a proinflammatory Th17 cell-associated immune response is required for S. aureus nasal decolonization . Ultimately, the colonization phase triggers an immunomodulatory response, whereas the clearing process triggers a pro-inflammatory response. A sequential "uncoupling" of the anti-inflammatory and pro-inflammatory phenomena occurs. These results confirm data from other authors indicating that the pro-inflammatory and anti-inflammatory properties of S. aureus are uncoupled and can be expressed separately . Several components of the S. aureus cell wall exert anti-inflammatory effects by mediating IL-10 production in macrophages and by downregulating pro-inflammatory cytokine responses, thereby circumventing Th1/Th17 adaptive immune responses during infection . However, the S. aureus virulence determinants expressed during colonization and infection are different . SpA is a virulence factor released extracellularly at an early stage to promote both colonization and immune evasion . The beneficial downregulation of antibody production observed during the S. aureus colonization phase suggests, albeit indirectly, possible involvement of SpA in the process. The safety, tolerability, and pharmacokinetics of SpA in animal models, and of ultrapure SpA (PRTX-100) in human studies, together with encouraging preclinical data, suggest that this protein could soon be utilized as an effective treatment for selective autoimmune disorders such as RA and ITP. Clinical trials are ongoing. The improvement of motor tics accompanying reduced production of autoantibodies against D1 and D2 dopamine receptors supports our proposal to include SpA in new clinical trials aimed at identifying innovative pharmacological strategies for the treatment and management of autoimmune neuropsychiatric and movement disorders. Falugi F, Kim HK, Missiakas DM, Schneewind O (2013) Role of Protein A in the Evasion of Host Adaptive Immune Responses by Staphylococcus aureus. MBio 4: e00575-13. Kim HK, Kim HY, Schneewind O, Missiakas D (2011) Identifying protective antigens of Staphylococcus aureus, a pathogen that suppresses host immune responses. FASEB J 25: 3605–3612. Hong X, Qin J, Li T, Dai Y, Wang Y, et al. (2016) Staphylococcal Protein A Promotes Colonization and Immune Evasion of the Epidemic Healthcare-Associated MRSA ST239. Front Microbiol 7: 951. Kobayashi SD, DeLeo FR (2013) Staphylococcus aureus Protein A Promotes Immune Suppression. MBio 4: e00764-13. 
Thammavongsa V, Kim HK, Missiakas D, Schneewind O (2015) Staphylococcal manipulation of host immune responses. Nat Rev Microbiol 13: 529-543. Kim HK, Falugi F, Thomer L, Missiakas DM, Schneewind O (2015) Protein A Suppresses Immune Responses during Staphylococcus aureus Bloodstream Infection in Guinea Pigs. MBio 6: e02369-14. Becker S, Frankel MB, Schneewind O, Missiakas D (2014) Release of protein A from the cell wall of Staphylococcus aureus. Proc Natl Acad Sci U S A 111: 1574–1579. Roben PW, Salem AN, Silverman GJ (1995) VH3 family antibodies bind domain D of staphylococcal protein A. J Immunol 154: 6437–6445. Kim HK, Falugi F, Missiakas DM, Schneewind O (2016) Peptidoglycan-linked protein A promotes T cell-dependent antibody expansion during Staphylococcus aureus infection. Proc Natl Acad Sci U S A 113: 5718–5723. Goodyear CS, Silverman GJ (2003) Death by a B cell superantigen: In vivo VH-targeted apoptotic supraclonal B cell deletion by a Staphylococcal Toxin. J Exp Med 197: 1125–1139. Silverman GJ, and Goodyear CS (2006) Confounding B-cell defences: lessons from a staphylococcal superantigen. Nat Rev Immunol 6: 465–475. Foreman AL, Van de Water J, Gougeon ML, Gershwin ME (2007) B cells in autoimmune diseases: Insights from analyses of immunoglobulin variable (Ig V) gene usage. Autoimmun Rev 6: 387–401. Silverman GJ, Goodyear CS, Siegel DL (2005) On the mechanism of staphylococcal protein A immunomodulation. Transfusion 45: 274–280. Szczepiorkowski ZM, Bandarenko N, Kim HC, Linenberger ML, Marques MB, et al. (2007) Guidelines on the use of therapeutic apheresis in clinical practice: evidence-based approach from the Apheresis Applications Committee of the American Society for Apheresis. J Clin Apher 22: 134. Balint JP, Jones FR (1995) Evidence for proteolytic cleavage of covalently bound protein a from a silica based extracorporeal immunoadsorbent and lack of relationship to treatment effects. Transfusion Science 16: 85–94. Viau M, Zouali M (2005) Effect of the B cell superantigen protein A from S. aureus on the early lupus disease of (NZB$\times$ NZW) F 1 mice. Mol Immunol 42: 849–855. MacLellan LM, Montgomery J, Sugiyama F, Kitson SM, Thümmler K, et al. (2011) Co-Opting Endogenous Immunoglobulin for the Regulation of Inflammation and Osteoclastogenesis in Humans and Mice. Arthritis Rheum 63: 3897–3907. Semple JW, Speck ER, Aslam R, Kim M, Zufferey A, et al. (2015) Successful Treatment of Thrombocytopenia with Staphylococcal Protein A (PRTX-100) in a Murine Model of Immune Thrombocytopenia (ITP). Blood 126: 1045. Bernton E, Haughey D (2014) Studies of the safety, pharmacokinetics and immunogenicity of repeated doses of intravenous staphylococcal protein A in cynomolgus monkeys. Basic Clin Pharmacol Toxicol 115: 448–55. Ballow C, Leh A, Slentz-Kesler K, Yan J, Haughey D, et al. (2013) Safety, pharmacokinetic, immunogenicity, and pharmacodynamic responses in healthy volunteers following a single intravenous injection of purified staphylococcal protein A. J Clin Pharmacol 53: 909–918. Bernton E, Gannon W, Kramer W, Kranz E (2014) PRTX-100 and methotrexate in patients with active rheumatoid arthritis: A Phase Ib randomized, double-blind, placebo-controlled, dose-escalation study. Clin Pharmacol Drug Dev 3: 477–486. Bussel JB, Kuter DJ, Audia S, Francovitch RJ, Michel M (2016) Safety and Efficacy of PRTX-100, a Highly Purified Form of Staphylococcal Protein A, in Patients with Immune Thrombocytopenia (ITP). Blood 128: 4929. Protalex, Inc. 
(2017) Open-Label, Dose Escalation Study of PRTX-100 in Adults With Persistent/Chronic Immune Thrombocytopenia. ClinicalTrials.gov. Eftimiadi C, Eftimiadi G, Vinai P (2016) Staphylococcus aureus Colonization Modulates Tic Expression and the Host Immune Response in a Girl with Tourette Syndrome. Front Psychiatry 7. Swedo SE, Leonard HL, Garvey M, Mittleman B, Allen AJP, et al. (1998) Pediatric Autoimmune Neuropsychiatric Disorders Associated With Streptococcal Infections: Clinical Description of the First 50 Cases. AJP 155: 264–271. Swedo SE, Grant PJ (2005) Annotation: PANDAS: a model for human autoimmune disease. J Child Psychol Psychiatry 46: 227–234. Cunningham MW, Cox CJ (2016) Autoimmunity against dopamine receptors in neuropsychiatric and movement disorders: a review of Sydenham chorea and beyond. Acta Physiol 216: 90–100. van Belkum A, Verkaik NJ, de Vogel CP, Boelens HA, Verveer J, et al. (2009) Reclassification of Staphylococcus aureus Nasal Carriage Types. J Infect Dis 199: 1820–1826. Sollid JUE, Furberg AS, Hanssen AM, Johannessen M (2014) Staphylococcus aureus: determinants of human carriage. Infect Genet Evol 21: 531–541. Brown AF, Leech JM, Rogers TR, McLoughlin RM (2014) Staphylococcus aureus Colonization: Modulation of Host Immune Response and Impact on Human Vaccine Design. Front Immunol 4. Archer NK, Harro JM, Shirtliff ME (2013) Clearance of Staphylococcus aureus Nasal Carriage Is T Cell Dependent and Mediated through Interleukin-17A Expression and Neutrophil Influx. Infect Immun 81: 2070–2075. Peres AG, Stegen C, Li J, Xu AQ, Levast B, et al. (2015) Uncoupling of pro- and anti-inflammatory properties of Staphylococcus aureus. Infect Immun 83: 1587–1597. Frodermann V, Chau TA, Sayedyahossein S, Toth JM, Heinrichs DE, et al. (2011) A modulatory interleukin-10 response to staphylococcal peptidoglycan prevents Th1/Th17 adaptive immunity to Staphylococcus aureus. J Infect Dis 204: 253–262. Jenkins A, Diep BA, Mai TT, Vo NH, Warrener P, et al. (2015) Differential Expression and Roles of Staphylococcus aureus Virulence Determinants during Colonization and Disease. MBio 6: e02272-14.
CommonCrawl
Let $G$ be a finite simple group and let $S$ be a normal subset of $G$. We determine the diameter of the Cayley graph $\Gamma(G,S)$ associated with $G$ and $S$, up to a multiplicative constant. Many applications follow. For example, we deduce that there is a constant $c$ such that every element of $G$ is a product of $c$ involutions (and we generalize this to elements of arbitrary order). We also show that for any word $w=w(x_1,\ldots,x_d)$, there is a constant $c = c(w)$ such that for any simple group $G$ on which $w$ does not vanish, every element of $G$ is a product of $c$ values of $w$. From this we deduce that every verbal subgroup of a semisimple profinite group is closed. Other applications concern covering numbers, expanders, and random walks on finite simple groups.
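As a toy illustration of the statement about products of involutions (my own check, not from the paper), one can compute the diameter of such a Cayley graph directly for a small simple group. The sketch below builds $A_5$ as the even permutations of five points, takes $S$ to be the set of involutions (a single conjugacy class, hence a normal subset), and finds by breadth-first search that the diameter is 2, i.e. every element of $A_5$ is a product of at most two involutions.

from itertools import permutations
from collections import deque

n = 5

def parity(p):
    # number of inversions mod 2
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j]) % 2

A5 = [p for p in permutations(range(n)) if parity(p) == 0]    # 60 elements
compose = lambda a, b: tuple(a[b[i]] for i in range(n))       # (a*b)(i) = a(b(i))
identity = tuple(range(n))

S = [g for g in A5 if g != identity and compose(g, g) == identity]
assert len(S) == 15          # the involutions: products of two disjoint 2-cycles

# Breadth-first search on the Cayley graph Gamma(A5, S).  The graph is
# vertex-transitive, so the eccentricity of the identity equals the diameter.
dist = {identity: 0}
queue = deque([identity])
while queue:
    g = queue.popleft()
    for s in S:
        h = compose(g, s)
        if h not in dist:
            dist[h] = dist[g] + 1
            queue.append(h)

assert len(dist) == 60       # S generates A5
print("diameter =", max(dist.values()))    # prints 2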
CommonCrawl
To show the relationship between two sets, you can start by making a table of values. After doing so, you represent the first column by values along the \(x\)-axis and the second column by values along the \(y\)-axis. The result is a set of points in a Cartesian coordinate system. In order to plot points from a table using GeoGebra, the spreadsheet is used. The most common usage of spreadsheets is to use numbers in the spreadsheet cells. In GeoGebra, however, you can also use regular GeoGebra objects within the spreadsheet. On the page GeoGebra Tutorial - Spreadsheet, it is shown how to use the spreadsheet to make relative copies of any object in GeoGebra. Make a new GeoGebra worksheet. Choose View->Spreadsheet. Enter the numbers from the picture above into the spreadsheet. The numbers in column A are the x-values and the numbers in column B the y-values. Each number is written in a so-called cell. The cells are named after column and row. The number -2 is written in cell A1 and the number -100 in cell B3. Select the six cells by dragging the mouse. Do not drag the small blue square at the lower right corner. The cells become blue. Choose the tool Create List of Points. Note that while the spreadsheet is selected, a tool bar for the spreadsheet is shown. If you want to see the regular tool bar, click on the drawing pad. To see the points created, you must either change the scale of the axes or zoom out. You zoom in or out by using the mouse wheel; the drawing pad must be selected in order to zoom. To change the scale, use the tool Move the Graphics View. Then hover the mouse over an axis until the cursor changes to an arrow. When the cursor is an arrow you can drag the axis. You can also rescale the axes by holding down the Shift key. Show the input bar belonging to the spreadsheet. The little blue square in the lower right corner of a cell is used for making relative copies. Try to enter 1 in A1 and =A1+1 in A2, then drag the small blue square along column A. Enter the same formula in B1 and make relative copies in row 1. Select B5 to see the expression that has been copied. Generate column A using relative copies. Enter the numbers in column B by hand. Use the formula in C1 to generate column C using relative copies. Column B in the picture above can also be generated using relative copies. Delete the content of the cells B1 to B10. Enter 1 in B2 and find a spreadsheet formula for generating B3 to B10 using relative copies. By writing $A1, you make an absolute reference to A1 when dragging along a row. By writing A$1, you make an absolute reference to A1 when dragging along a column. By writing $A$1, you make an absolute reference to A1 when dragging along both a row and a column. Use both relative and absolute copies to generate a multiplication table. Use the formula \(y=100+10x\) to make a table of values when \(x=0,1,\ldots 10\). Then generate the corresponding points in the graphics view.
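For readers who want to check the numbers outside GeoGebra, here is a small Python sketch (not part of the GeoGebra worksheet) that produces the same table of values for \(y=100+10x\) with \(x=0,1,\ldots,10\) and a 10-by-10 multiplication table; the nested loop plays the role of a spreadsheet formula copied with relative and absolute references, and the example formula in the comment is only illustrative.

# Table of values for y = 100 + 10x, x = 0..10 (columns A and B in the sheet).
table = [(x, 100 + 10 * x) for x in range(11)]
for x, y in table:
    print(f"{x:3d} {y:5d}")

# 10-by-10 multiplication table: the entry in a given row and column is row*col.
# In a spreadsheet this is a single formula such as =$A2*B$1 copied everywhere;
# here the two loop variables play the roles of the fixed column and fixed row.
size = 10
for row in range(1, size + 1):
    print(" ".join(f"{row * col:4d}" for col in range(1, size + 1)))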
CommonCrawl
Abstract. We give a general reduction scheme for the study of the quantum propagator of molecular Schr\"odinger operators with smooth potentials. This reduction is made up to infinitely (resp. exponentially) small error terms with respect to the inverse square root of the mass of the nuclei, depending on the $C^\infty$ (resp. analytic) smoothness of the interactions. Then we apply this result to the case when an electronic level is isolated from the rest of the spectrum of the electronic Hamiltonian.
CommonCrawl
Precision investigation of the hyperfine structure (HFS) of the energy spectrum of light muonic atoms is an important task nowadays. It allows us to check the Standard Model and obtain more precise values of fundamental physical constants. The relevance of such research is connected with the experiments carried out by the CREMA Collaboration. In these experiments the Lamb shift between the 2S and 2P states and the HFS of muonic hydrogen and muonic deuterium were obtained by means of laser spectroscopy. To calculate the HFS of muonic ions we use the quasipotential method in quantum electrodynamics, where the bound state of a muon and a nucleus can be described by means of the Schrödinger equation and the potential is constructed with the use of the off-shell scattering amplitude. The main contribution to the interaction operator of the two particles is given by the well-known Breit Hamiltonian. In perturbation theory the infinite series for the interaction operator of the particles includes contributions from different kinds of interactions, primarily electromagnetic. Relativistic corrections of order $\alpha^6$ and also the contribution of the anomalous magnetic moment of the muon are known in analytical form. One- and two-loop vacuum polarization effects of order $\alpha^5$ and $\alpha^6$ in first- and second-order perturbation theory were obtained in integral form and evaluated numerically. One of the leading contributions to the HFS is given by the nuclear structure effect. Such effects can be described by means of two-photon exchange amplitudes. To calculate these amplitudes, and also amplitudes of higher order in $\alpha$, the projection operator approach was used. We also calculate more complicated corrections that involve combined effects of vacuum polarization, relativistic effects and nuclear structure of order $\alpha^6$. Furthermore, the dependence of the calculated contributions on the charge $Z$ of the nucleus was studied and final numerical values of the HFS were obtained.
CommonCrawl
Local solutions and existence of optimal transport maps for the $$W_\infty$$ Wasserstein distance. Extensions to more classical Monge problems.
CommonCrawl
Dee Siduous is a botanist who specializes in trees. A lot of her research has to do with the formation of tree rings, and what they say about the growing conditions over the tree's lifetime. She has a certain theory and wants to run some simulations to see if it holds up to the evidence gathered in the field. One thing that needs to be done is to determine the expected number of rings given the outline of a tree. Dee has decided to model a cross section of a tree on a two-dimensional grid, with the interior of the tree represented by a closed polygon of grid squares. Given this set of squares, she assigns rings from the outer parts of the tree to the inner as follows: calling the non-tree grid squares "ring $0$", each ring $n$ is made up of all those grid squares that have at least one ring $(n-1)$ square as a neighbor (where neighboring squares are those that share an edge). An example of this is shown in the figure below. Most of Dee's models have been drawn on graph paper, and she has come to you to write a program to do this automatically for her. This way she'll use less paper and save some $\ldots$ well, you know. The input will start with a line containing two positive integers $n$ and $m$ specifying the number of rows and columns in the tree grid, where $n, m \leq 100$. After this will be $n$ rows containing $m$ characters each. These characters will be either 'T' indicating a tree grid square, or '.'. Output a grid with the ring numbers. If the number of rings is less than 10, use two characters for each grid square; otherwise use three characters for each grid square. Right justify all ring numbers in the grid squares, and use '.' to fill in the remaining characters. If a row or column does not contain a ring number it should still be output, filled entirely with '.'s.
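A natural way to compute the ring numbers is a multi-source breadth-first search: pad the grid with a border of non-tree squares so that the outside counts as ring $0$, start the search from every non-tree square at distance $0$, and read off each tree square's ring number as its search distance. Below is a sketch along these lines (my own illustration, not an official solution), including the two-or-three-character output format described above.

import sys
from collections import deque

def solve(text):
    tokens = text.split()
    n, m = int(tokens[0]), int(tokens[1])
    rows = tokens[2:2 + n]

    # Pad with one layer of non-tree squares so the outside is part of ring 0.
    H, W = n + 2, m + 2
    grid = ['.' * W] + ['.' + row + '.' for row in rows] + ['.' * W]

    # Multi-source BFS: every non-tree square starts at ring 0; a tree square's
    # ring number is its distance from the nearest ring-0 square.
    ring = [[-1] * W for _ in range(H)]
    queue = deque()
    for r in range(H):
        for c in range(W):
            if grid[r][c] == '.':
                ring[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W and ring[rr][cc] == -1:
                ring[rr][cc] = ring[r][c] + 1
                queue.append((rr, cc))

    # Two characters per square if there are fewer than 10 rings, else three;
    # ring numbers right-justified, padded with '.', non-tree squares all dots.
    max_ring = max(ring[r][c] for r in range(1, H - 1) for c in range(1, W - 1))
    width = 2 if max_ring < 10 else 3
    lines = []
    for r in range(1, H - 1):
        cells = [str(ring[r][c]).rjust(width, '.') if grid[r][c] == 'T'
                 else '.' * width for c in range(1, W - 1)]
        lines.append(''.join(cells))
    return '\n'.join(lines)

if __name__ == '__main__':
    print(solve(sys.stdin.read()))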
CommonCrawl
Lemma 14.30.3. Let $f : X \to Y$ be a trivial Kan fibration of simplicial sets. Let $Y' \to Y$ be a morphism of simplicial sets. Then $X \times _ Y Y' \to Y'$ is a trivial Kan fibration.
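For readers who want the idea behind the lemma, here is a short proof sketch (my own paraphrase, not the Stacks project's proof text). A map is a trivial Kan fibration if and only if it has the right lifting property with respect to the boundary inclusions $\partial \Delta[n] \to \Delta[n]$, $n \geq 0$. Given a commutative square with $\partial \Delta[n] \to X \times_Y Y'$ on top and $\Delta[n] \to Y'$ on the bottom, compose the top map with the projection $X \times_Y Y' \to X$ and the bottom map with $Y' \to Y$ to obtain a lifting problem for $f : X \to Y$; since $f$ is a trivial Kan fibration, there is a lift $\Delta[n] \to X$. This lift and the given map $\Delta[n] \to Y'$ agree after composing to $Y$, so by the universal property of the fibre product they induce a map $\Delta[n] \to X \times_Y Y'$ solving the original lifting problem. Hence $X \times_Y Y' \to Y'$ has the right lifting property with respect to all boundary inclusions, i.e. it is a trivial Kan fibration. $\square$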
CommonCrawl
"Guess my number," said Tom, "I'll only answer yes or no." "OK. Is it less than $1000$?" "No." "Is it less than $2000$?" "Yes." "So it's an odd number between $1000$ and $2000$?" "Yes, but you knew that already. It's a wasted question!" "Is it divisible by $3$?" "No." "Is it divisible by $5$?" "No." "Is it divisible by $7$?" "Yes." "If I divide it by $7$ do I get a number that is less than $100$?" "No." "Do I get a number that is less than $200$?" "Yes." "Ah! $100 \times 7$ = $700$ and $200 \times 7 = 1400$. So your number is less than $1400$?" "Yes, but that's another wasted question." "Ha! $14$ squared is $196$. So your number is $7$ times the product of two prime numbers less than $14$?" "I am sure I know your number and this question is just checking. Is your number palendromic? Does it read the same in both directions?" Multiplication & division. Interactivities. Factors and multiples. Trial and improvement. Working systematically. Properties of numbers. Addition & subtraction. Divisibility. Prime numbers. Odd and even numbers.
CommonCrawl
It is natural to come up with cross-validation (CV) when the dataset is relatively small. The basic idea of cross-validation is to train a new model on a subset of data, and validate the trained model on the remaining data. Repeating the process multiple times and averaging the validation error, we get an estimate of the generalization performance of the model. Since the test data is untouched during each training, we kind of use the whole dataset to estimate the generalization error, which will reduce the bias. However, since it trains multiple models instead of one, the drawback of CV is that it is quite computationally expensive. It is necessary to make clear that the aim of CV is not to get one or more trained models for inference, but to estimate an unbiased generalization performance. This may be quite confusing at the beginning, since the outcome of the common train/validate/test split approach is a trained model with tuned hyperparameters (on the train/validate sets), plus an estimate of generalization performance (on the test set). Why bother to have a validation set? Why not just use the test set for the two tasks of hyperparameter tuning (model selection) and estimation at once? The problem is, if we use the test set multiple times for different trained models during our selection of the optimal model, the test set actually "leaks" information into the selection and is thus no longer pristine. When we later apply the model to real-world data, the model will probably have a larger error than on the test set. In other words, when using the test set for both model selection and estimation, we tend to overfit the test data, and the resulting estimate has an optimistic bias. Doing one round of CV to evaluate the performance of different models, and selecting the best model based on the CV results, is similar to the above case of using the test set for both model selection and estimation. Thus, when we want to perform model selection and generalization error estimation, we have to separate the two tasks by using a separate held-out set for each task. That's why we have both a validation and a test set, and the corresponding version of CV is called nested or two-round cross-validation. Nested CV has an inner CV loop nested in an outer CV loop. The inner loop is responsible for model selection/hyperparameter tuning (similar to the validation set), while the outer loop is for error estimation (the test set).
1. Divide the dataset into $K$ cross-validation folds at random.
2. For each outer fold $k = 1, \ldots, K$:
2.1 Set fold $k$ aside as the test set and use the remaining $K-1$ folds as trainval.
2.2 Divide trainval into $L$ inner cross-validation folds at random.
2.3 For each hyperparameter setting and each inner fold $l$, train a model on the other $L-1$ inner folds.
2.4 Evaluate that model on inner fold $l$ and record the metric score.
2.5 For each hyperparameter setting, calculate the average metric score over the $L$ folds, and choose the best hyperparameter setting.
2.6 Train a model with the best hyperparameters on trainval. Evaluate its performance on test and save the score for fold $k$.
3. Calculate the mean score over all $K$ folds, and report it as the generalization error.
As for the implementation, the scikit-learn documentation points out that the inner loop can call scikit-learn's GridSearchCV to perform the grid search over hyperparameters, evaluated on the inner validation folds, and the outer loop can call cross_val_score to obtain the generalization error (a minimal sketch of this setup is appended at the end of this post). Can I apply the best hyperparameters selected in the first outer fold to build models for the remaining $K-1$ outer folds? That is, can I skip the roughly $(K-1) \times L \times M$ remaining model fits needed to search for the best hyperparameters (where $M$ is the number of hyperparameter combinations, if grid search is used)? I think the answer is no. The reason is that, in this way, the test sets in the following outer folds are not "untouched" by the hyperparameter selection process. 
For example, in outer fold #$2$, the test set used for evaluating model performance was already used in outer fold #$1$ for selecting the hyperparameters, so some data would be used both for hyperparameter tuning and for performance evaluation. This will cause overfitting. What if the $K$ outer folds yield distinct hyperparameters? How can I use nested CV to build the best model? As I stated at the beginning, CV is not a method for obtaining one or more trained models for inference, but only a tool for estimating an unbiased generalization performance. Nested CV will generate multiple models across the outer folds, but we can hardly estimate the performance of each individual model, since the test set in each outer fold is small. However, if the model is stable (it does not change much when the training data is perturbed), the hyperparameters found in each outer fold may be the same (using grid search) or similar to each other (using random search). A more in-depth explanation can be found here. That's all I would like to share about nested CV. This post reflects my current understanding of cross-validation. Please correct me if you identify any problems. Thanks! T. Hastie, J. Friedman, and R. Tibshirani, "Model Assessment and Selection," in The Elements of Statistical Learning: Data Mining, Inference, and Prediction, T. Hastie, J. Friedman, and R. Tibshirani, Eds. New York, NY: Springer New York, 2001, pp. 193–224. G. C. Cawley and N. L. C. Talbot, "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation," Journal of Machine Learning Research, vol. 11, no. Jul, pp. 2079–2107, 2010.
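As promised above, here is a minimal sketch of nested CV using scikit-learn's GridSearchCV for the inner loop and cross_val_score for the outer loop. The dataset, estimator, and parameter grid are placeholders chosen only for illustration.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Inner loop (L folds): hyperparameter search by grid search.
param_grid = {"C": [0.1, 1, 10], "gamma": [1e-3, 1e-4]}
inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)
search = GridSearchCV(SVC(), param_grid, cv=inner_cv)

# Outer loop (K folds): each outer fold fits the whole GridSearchCV on trainval
# (re-running the inner search from scratch) and scores it on the held-out fold.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(search, X, y, cv=outer_cv)

print("outer-fold scores:", scores)
print("estimated generalization accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))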
CommonCrawl
We study operators of multiplication by $z^k$ in Dirichlet-type spaces $D_\alpha$. We establish the existence of $k$ and $\alpha$ for which some $z^k$-invariant subspaces of $D_\alpha$ do not satisfy the wandering property. As a consequence of the proof, any Dirichlet-type space admits an equivalent norm under which the wandering property fails for some invariant subspace of the operator of multiplication by $z^k$, for any $k \geq 6$.
CommonCrawl
On Thursday, 23 November 2017, at 12:30 in Aula 2AB40, Andrea Marchesi (Università di Zurigo) will give a seminar titled "Branched transportation: stability and new models". Models involving branched structures are employed to describe several supply-demand systems such as the structure of the nerves of a leaf, the system of roots of a tree, or the nervous and the cardiovascular systems. Given a flow that transports a given mass distribution $\mu^-$ onto a target distribution $\mu^+$, along a 1-dimensional network, the transportation cost per unit length is proportional to a concave power $\alpha \in (0,1)$ of the intensity of the flow. This favors grouping of the particles' trajectories and prevents diffusion. After a general description of the main features of the model, I will discuss in particular the issue of stability of minimizers under variations of the given mass distributions $\mu^-$ and $\mu^+$. I will also explain how one can exploit currents with coefficients in a normed group to describe a "multi-commodity" version of the problem, where the interaction between different types of transported goods is taken into account. Based on joint works with M. Colombo, A. De Rosa, A. Massaccesi, and R. Tione.
CommonCrawl
We know that the totient function is multiplicative, which means that when $p$ and $q$ are relatively prime, $\varphi(p q)$ is equal to $\varphi(p) \varphi(q)$. My question is: are only prime numbers used in RSA, or can they also just be coprime, e.g. 11 and 16? I am asking this because I understand why RSA is multiplicative when $p$ and $q$ are prime but not when they are relatively prime. Are only prime numbers used in RSA? That depends on what is meant by "numbers used in RSA". I'll restrict to a reading of the question where these are $p$ and $q$, with $N=p\,q$ the public modulus in the RSA public key. That also depends on the definition of RSA. In the original definition, or when we see $N=p\,q$ in an applied RSA cryptography context, we think of $p$ and $q$ as primes, large and randomly seeded, and perhaps deliberately distinct. In this case, $p$ and $q$ are primes. They are distinct either deliberately or with overwhelming probability, and distinct primes are coprime. In some modern definitions including PKCS#1v2.2, RSA is extended to $N$ the product of $u\ge2$ distinct (odd) primes. In this case, when $u>2$ and if we nevertheless still write $N=p\,q$ (which is highly unusual), then at least one of $p$ or $q$ is not prime, but still $p$ and $q$ are coprime. On the other hand, from a mathematical standpoint, we can define RSA for any positive $N$: if $\gcd(e,\varphi(N))=1$ then the function $x\mapsto x^e\bmod N$ is a bijection of $\Bbb Z_N^*$, that is, of the subset of $x$ in $\Bbb Z_N$ with $\gcd(x,N)=1$. That function is also a bijection on $\Bbb Z_N$ if and only if $N$ is square-free (equivalently, if and only if $p$ and $q$ are coprime in any factorization of $N$ as $N=p\,q$). Whenever pulling a factor out of $N$ is hard, overwhelmingly most elements of $\Bbb Z_N$ belong to $\Bbb Z_N^*$, thus the distinction between $N$ square-free or not is immaterial for random $x$ (as used in good practice). In that mathematical definition of RSA, we might have $N=p\,q$ with $p$ and $q$ prime or not, coprime or not. For example, for $N=11\cdot16$ we can write $N=p\,q$ with $p=8$ and $q=22$; neither is prime, and they are not coprime. Yet $(N,e=3)$ and $(N,d=7)$ are a valid RSA key pair when we restrict to odd integers in $[0,N)$ for message and ciphertext (we can, but need not, exclude multiples of $11$). I understand why RSA is multiplicative when $p$ and $q$ are prime but not when they are relatively prime. RSA is multiplicative no matter what, in the sense that for all $x_0$ and $x_1$ it holds that $E(x_0\cdot x_1\bmod N)=E(x_0)\cdot E(x_1)\bmod N$ for $E$ the function $x\mapsto x^e\bmod N$. That's because $(x_0\cdot x_1)^e\equiv x_0^e\cdot x_1^e\pmod N$, and that holds for all positive $e$ and $N$, without consideration of whether $N$ is the product of coprime integers, and regardless of the set from which $x_0$ and $x_1$ are taken. There is a Multi-Prime RSA standard in which the modulus $n$ of the public key $(n,e)$ can be a product of distinct odd primes, where the number of distinct odd primes is $\geq 2$. When you have formed your modulus $n = pq$ with your coprime numbers $p,q$, then you need to check that $n$ is a square-free modulus. If it is not square-free then the functionality (decryption) of RSA will fail. See this answer with examples. In your example, $$n = 11 \cdot 16 = 11 \cdot 2^4$$ which is not a square-free number. Therefore your modulus will not work.
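To see both answers in action on the toy modulus, here is a small sketch (my own check, not from either answer) for $N = 11\cdot16 = 176$ with $e=3$ and $d=7$. Since $\operatorname{lcm}(\lambda(16),\lambda(11)) = \operatorname{lcm}(4,10) = 20$ and $e\,d = 21 \equiv 1 \pmod{20}$, encrypt-then-decrypt returns every odd message, while the repeated factor $2^4$ in $N$ (i.e. $N$ not being square-free) makes the round trip fail for some even messages.

N, e, d = 11 * 16, 3, 7             # N = 176 is not square-free (16 = 2**4)

def round_trip(x):
    return pow(pow(x, e, N), d, N)  # decrypt(encrypt(x))

odd_ok = all(round_trip(x) == x for x in range(1, N, 2))
even_failures = [x for x in range(0, N, 2) if round_trip(x) != x]

print("every odd message decrypts correctly:", odd_ok)          # True
print("even messages whose round trip fails:", len(even_failures))
print("example:", even_failures[0], "->", round_trip(even_failures[0]))   # 2 -> 112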
CommonCrawl
The Leggett-Garg (LG) inequalities were introduced, as a temporal parallel of the Bell inequalities, to test macroscopic realism -- the view that a macroscopic system evolving in time possesses definite properties which can be determined without disturbing the future or past state. The talk will begin with a review of the LG framework. Unlike the Bell inequalities, the original LG inequalities are only a necessary condition for macrorealism, and are therefore not a decisive test. We study correlations in fermionic systems with long-range interactions in thermal equilibrium. We prove an upper bound on the correlation decay between anti-commuting operators based on long-range Lieb-Robinson type bounds. Our result shows that correlations between such operators in fermionic long-range systems of spatial dimension $D$ with at most two-site interactions decaying algebraically with the distance with an exponent $\alpha \geq 2\,D$, decay at least algebraically with an exponent arbitrarily close to $\alpha$. How does classical chaos affect the generation of quantum entanglement? What signatures of chaos exist at the quantum level and how can they be quantified? These questions have puzzled physicists for a couple of decades now. We answer these questions in spin systems by analytically establishing a connection between entanglement generation and a measure of delocalization of a quantum state in such systems. While delocalization is a generic feature of quantum chaotic systems, it is more nuanced in regular systems. The spectral gap problem consists in deciding, given a local interaction, whether the corresponding translationally invariant Hamiltonian on a lattice has a spectral gap independent of the system size or not. In the simplest case of nearest-neighbour frustration-free qubit interactions, there is a complete classification. On the other extreme, for two (or higher) dimensional models with nearest-neighbour interactions this problem can be reduced to the Halting Problem, and it is therefore undecidable. Ubiquitous in the behavior of physical systems is the competition between an energy term E and an entropy term S of their free energy F = E - beta S. These concepts are also relevant for error correction, where the 'energy' E is the number of qubits afflicted by an error, the 'entropy' S(E) is the logarithm of the number of energy-E failing errors, and beta relates to the probability of each qubit's error. Error-correction schemes with larger minimum free energy have better performance.
CommonCrawl
Abstract: In this paper we show that an automorphism $\alpha$ of an abelian group $A$ can be extended to an automorphism of any abelian group which contains $A$ as a subgroup iff $(\alpha-id_A)(A)$ or $(\alpha+id_A)(A)$ is a divisible subgroup of $A$. Then if $A$ is reduced, $id_A$ and $-id_A$ are the only automorphisms of $A$ which have the extension property. Electronic version published on: 9 Feb 2006. This page was last modified: 27 Nov 2007.
CommonCrawl
Abstract: We study the conformal bootstrap for 3D CFTs with O(N) global symmetry. We obtain rigorous upper bounds on the scaling dimensions of the first O(N) singlet and symmetric tensor operators appearing in the $\phi_i \times \phi_j$ OPE, where $\phi_i$ is a fundamental of O(N). Comparing these bounds to previous determinations of critical exponents in the O(N) vector models, we find strong numerical evidence that the O(N) vector models saturate the bootstrap constraints at all values of N. We also compute general lower bounds on the central charge, giving numerical predictions for the values realized in the O(N) vector models. We compare our predictions to previous computations in the 1/N expansion, finding precise agreement at large values of N.
CommonCrawl
We have weekly research meetings of people working on automated algorithm design/configuration/selection right after the talk. In some weeks, we don't have a talk but only the research meeting. This reading group focuses on empirical aspects of computer science. Every computer science student takes a lot of math and theory courses, but typically not even a single course about the design of meaningful experiments and the analysis of their results. In this group, we discuss research papers with a focus on empirical issues; the papers typically concern solving hard computational problems (SAT, TSP, scheduling, etc) and computer aided algorithm design. Since this is a joint BETA/LCI reading group we often discuss AI papers, but we are open to other areas and also have a theoretician amongst us. Our meeting format is presentations with lots of audience interaction, so you can come without having read the paper (though you may get more out of it if you read it). If you are planning to attend our meetings, you should sign up to the mailing list (ea-rg), where the papers and schedules will be announced: just send an email to majordomo@cs.ubc.ca with the command "subscribe ea-rg" in the body of your message. April 30, 2014 14:00 - 15:00 Sepp Hartung Parameterized Algorithmics Parameterized algorithmics, structural parameterization and applications to algorithm design. Analysis of Portfolio-Style Parallel SAT Solving on Current Multi-Core Architectures. Fine-Tuning Algorithm Parameters Using the Design of Experiments Approach. A Framework for Optimizing Paper Matching. ICAPS 2012: Learning Portfolios of Automatically Tuned Planners. CoCoMile 2012: Algorithm Configuration for Portfolio-based Parallel SAT-Solving. IJCAI-09: Predicting Learnt Clauses Quality in Modern SAT Solvers. October 3, 2012 12:30-13:30 Lin Xu Algorithm selection Serdar Kadioglu, Yuri Malitsky, and Meinolf Sellmann. AAAI-12: Non-Model-Based Search Guidance for Set Partitioning Problems. September 26, 2012 12:30-13:30 Alexandre Fréchette Robust network design Presentation of M.Sc. work. August 22, 2012 12:30-13:30 Torsten Schaub Answer-set programming Martin Gebser, Benjamin Kaufmann, and Torsten Schaub. August 8, 2012 12:30-13:30 Lin Xu SAT solving Shaowei Cai and Kaile Su. AAAI-12: Configuration Checking with Aspiration in Local Search for SAT. July 11, 2012 12:30-13:30 Frank Hutter Bias/variance tradeoffs Trevor Hastie, Robert Tibshirani, Jerome Friedman. June 20, 2012 12:30-13:30 Frank Hutter Bias/variance tradeoffs Trevor Hastie, Robert Tibshirani, Jerome Friedman. May 16, 2012 12:30-13:30 Chris Thornton Hyper-parameter optimization James Bergstra, Rémi Bardenet, Yoshua Bengio, Balázs Kégl. NIPS-11: Algorithms for Hyper-Parameter Optimization. April 25, 2012 12:30-13:30 Lin Xu Algorithm selection Continuation of 3S. March 21, 2012 12:30-13:30 Chris Thornton Hyper-parameter optimization James Bergstra and Yoshua Bengio. JMLR: Random Search for Hyper-Parameter Optimization. Feb 15, 2012 12:30-13:30 Everyone Adding papers and topics to the queue. Feb 1, 2012 12:30-13:30 Lin Runtime prediction Ling Huang, Jinzhu Jia, Bin Yu, Byung-Gon Chun, Petros Maniatis, Mayur Naik. Jan 11, 2012 12:30-13:30 James Algorithm configuration James Styles, Holger Hoos, Martin Mueller. required and there will be demos and music. Nov 30, 2011 14:00-15:00 James P, NP, etc Russel Impagliazzo: "A Personal View of Average-Case Complexity" Nov 23, 2011 14:00-15:00 Chris T Robot soccer software Using genetic algorithms to optimize robot soccer players. 
Amine Bourki, Matthieu Coulm, Philippe Rolet, Olivier Teytaud, Paul Vayssiere, ININCO 2010. Alejandro Arbelaez and Youssef Hamadi, LION 2011. March 16, 2011 13:30-14:30 Lin Per-Instance Algorithm Configuration Instance-Based Selection of Policies for SAT Solvers. Mladen Nikolic, Filip Maric and Predrag Janicic. SAT 2009. February 10, 2011 10:30-11:30 Dave T Random Instance Generation for SAT Towards Industrial-Like Random SAT Instances. Carlos Ansotegui, Jordi Levy, and Maria Luisa Bonet. IJCAI-09. Changhao Jiang and Marc Snir. Jan 13, 2011 1pm-3pm Chris N. HAL: A Framework for the Automated Analysis and Design of High-Performance Algorithms. Christopher Nell, Chris Fawcett, Holger H. Hoos, and Kevin Leyton-Brown. Sequential Model-Based Optimization for General Algorithm Configuration. Dec 23, 2010 1pm-2pm Chris N Metaalgorithmics Practice talk for LION: HAL: A Framework for the Automated Analysis and Design of High-Performance Algorithms. Journal on Satisfiability, Boolean Modeling and Computation 7 (2010) 77–82. Frank Hutter, Holger H. Hoos and Kevin Leyton-Brown. Accepted at CP-AI-OR-10. Lin Xu, Holger Hoos, and Kevin Leyton-Brown. To appear in AAAI-10. Dave Tompkins and Holger H. Hoos. Accepted to SAT-10. Mar 9, 2010 4:15-5:15pm Frank Parameter Tuning Tuning Local Search by Average-Reward Reinforcement Learning. Sept 16, 2009 1:30-2:30pm Frank Algorithm configuration PhD thesis defence practice talk. Thesis defence: 12:30pm on Friday, Sept 18, room 200, Graduate Student Centre. Sept 23, 2009 1:30-2:30pm Massih Khorvash Maximum Cliques MSc thesis defence. Thesis title: On uniform sampling of maximum cliques. T. Russell, A.M. Malik, M. Chase, and P. van Beek. Statistical analysis of experiments. Not based on a paper. February 13, 2008 12:00-1:30pm Lin Statistics Continued: statistical analysis of experiments. Not based on a paper. Xiaofei: Ozun, Alper and Cifter, Atilla (2007): Nonlinear Combination of Financial Forecast with Genetic Algorithm. August 13, 2008 12-1:30pm Chris (practice talk) Timetabling Marco Chiarandini, Chris Fawcett, and Holger H. Hoos. Dec 13, 2007 11:30am-1pm Hoyt Clustering Hoyt A. Koepke and Bertrand Clarke. To appear in the International Symposium on AI & MATH '08. Nov 30, 2007 11:30am-1pm Ashique Structure in SAT R. Ostrowski, E. Gregoire, B. Mazure and L. Sais. Congestion games are games where the cost of a player from using a certain resource depends on the total number of players that are using the same resource. In this talk, we present new results on the complexity of computing and approximating pure Nash equilibria (PNE) in congestion game. We start by giving an overview over the complexity class PLS that contains many interesting local search problems. It is known that computing PNE in congestion games is PLS-hard. A natural and convincing notion of approximation for equilibria in congestion games assumes that agents are ambivalent between strategies whose delay differs by less than a factor of $\alpha$, for some $\alpha$ > 1. In this talk, we show that computing an $\alpha$-approximate equilibrium is PLS-complete. Thus finding an approximate Nash equilibrium is as hard as finding an exact Nash equilibrium or solving any other problem in PLS. Nov 9, 2007 EARG cancelled due to talk by Avi Pfeffer, Harvard University, 11am-12:30. Why do agents (people or computers) do things in strategic situations? 
Answering this question will impact how we build computer systems to assist, represent or interact with people in interactions with other agents such as negotiations and resource allocation. We identify four reasoning patterns that agents might use: choosing an action for its direct effect on the agent's utility, attempting to manipulate another agent, signalling information to another agent that the first agent knows, or revealing or hiding information from another agent that the first agent itself does not know. We present criteria that characterize each reasoning pattern as a pattern of paths in a multi-agent influence diagram, a graphical representation of games. We define a class of strategies in which agents do not make unmotivated distinctions, and show that if we assume all agents play these kinds of strategies, our categorization of reasoning patterns is complete and captures all situations in which an agent has reason to make a decision. We then study how people use two reasoning patterns in a particular negotiation game. We use machine learning to learn models of people's play, and embed our learned models in computer negotiators. We find that negotiators that use our learned model outperform classical game-theoretic agents and also outperform people. Finally, we learn models of the way people's behavior changes in ongoing interactions with the same agent, particularly the degree to which retrospective (rewarding or punishing past behavior) and prospective (attempting to induce future good behavior) reasoning play a role. Thomas Bartz–Beielstein and Sandor Markon. On the stack - please feel free to pick one of these if you are looking for a paper to present. Other papers are of course also welcome. Journal of Global Optimization 21: 345–383, 2001. Brigham S. Anderson, Andrew W. Moore, and David Cohn. In Proceedings of the Fourteenth Int. Conf. on Machine Learning (ICML 1997), pp. 30--38. Lin Xu, Frank Hutter, Holger Hoos, Kevin Leyton-Brown. Michail G. Lagoudakis and Michael L. Littman. L. Mercier and P. Van Hentenryck. Matteo Gagliolo and Juergen M. Schmidhuber. In Proceedings of the 8th Workshop on Algorithm Engineering and Experiments and the Third Workshop on Analytic Algorithmics and Combinatorics, pages 119-128, 2006. Yuhong Wu, Hakon Tjelmeland and Mike West. Bayesian CART - Prior Specification and Posterior Simulation. To appear at CP 2006. I'll see Matteo at CP - if you have comments or questions please let me know. Abstract: Proteins form the very basis of life. If we were to open up any living cell, we would find, apart from DNA and RNA molecules whose primary role is to store genetic information, a large number of different proteins that comprise the cell itself (for example the cell membrane and organelles), as well as a diverse set of enzymes that catalyze various metabolic reactions. If enzymes were absent, the cell would not be able to function, since a number of metabolic reactions would not be possible. Functions of proteins are the consequences of their functional 3D shape. Therefore, to control these versatile properties, we need to be able to predict the 3D shape of proteins; in other words, solve the protein folding problem. The prediction of a protein's conformation from its amino-acid sequence is currently one of the most prominent problems in molecular biology, biochemistry and bioinformatics. In this thesis, we address the protein folding problem and the closely-related problem of identifying folding pathways. 
The leading research objective for this work was to design efficient heuristic search algorithms for these problems, to empirically study these new methods and to compare them with existing algorithms. This thesis makes the following contributions: (1) we show that biologically inspired approaches based on the notion of stigmergy – where a collection of agents modifies the environment, and those changes in turn affect the decision process of each agent (particularly artificial colonies of ants that give rise to such properties as self-organization and cooperation also observed in proteins) – are a promising field of study for the protein folding problem; (2) we develop a novel adaptive search framework that is used to identify and to bin promising candidate solutions and to adaptively retrieve solutions when the search progress is unsatisfactory; (3) we develop a new method that efficiently explores large search neighbourhoods by performing biased iterated solution construction for identifying folding pathways; and (4) we show that our algorithms efficiently search the vast search landscapes encountered and are able to capture important aspects of the process of protein folding for some widely accepted computational models. Here are my slides. There was a live demonstration of their CALIBRA software: I ran CALIBRA on the branin test function with a wrapper outputting which parameter settings are being run into a file. Then I used Matlab to visualize that trajectory. I packed all this into a zip file. The CALIBRA system combines fractional experimental design with a local search mechanism. Being limited to at most five free parameters, it starts off by evaluating each parameter configuration in a full factorial design. It then iteratively homes in on good regions in parameter space by employing fractional experimental designs that evaluate nine configurations around the best-performing configuration found so far. The grid for the experimental design becomes finer in each iteration, which provides a nice solution to the automatic discretization of continuous parameters. Once a local optimum is found and cannot be refined anymore, the search is restarted (with a coarser grid) by combining some of the best configurations found so far, but also introducing some noise for the sake of diversification. The experiments reported in the paper show great promise in that CALIBRA was able to find parameter settings for six independent algorithms that matched or beat the respective originally proposed parameter configurations. Carla P. Gomes, Ashish Sabharwal, Bart Selman AAAI-06. Donald R. Jones, Matthias Schonlau, William J. Welch. Journal of Global Optimization (?), 1998. This is very well written and the basis of section 6.3.5 of the DACE book. Feb 9, 2006 16:00-17:30 Kevin M Active learning Chapter 6 of the DACE book. Please see Frank for a copy. Nov 21, 2005 17:30-19:00 Holger Space-filling designs for computer experiments Chapter 5 of the DACE book. Please see Frank for a copy. Nov 14, 2005 13:30-15:00 Lin Additional topics in prediction methodology Chapter 4 of the DACE book, continued. Please see Frank for a copy. Lin's slides. Nov 7, 2005 13:30-15:00 Lin Additional topics in prediction methodology Chapter 4 of the DACE book. Please see Frank for a copy. Lin's slides. Oct 24, 2005 13:30-15:00 Hendrik Gaussian processes Section 2.3 of the DACE book. Please see Frank for a copy. Oct 3, 2005 17:30-19:00 Frank Introduction to DACE Chapter 1 and sections 2.1-2.2 of the DACE book. Please see Frank for a copy.
My slides. Feb 2, 2005 14:30-16:00 Holger Experimental design Statistical tests, chapters 2-3 in the book - continued and finished. Talk by and meeting with David Johnson. This paper concerns complete search. Original site creation and maintenance: Frank Hutter. Currently maintained by: Chris Fawcett.
CommonCrawl
[SOLVED] How big is the lattice of all functions? [SOLVED] Is the top interval of a finite distributive lattice, a boolean lattice? [SOLVED] Can any finite lattice be realized as an intermediate subgroups lattice? [SOLVED] What are the rank 3 boolean intervals [H,G], with G simple group? Does $[H_i , G_i]$ distributive imply $[H_1 \times H_2, G_1 \times G_2]$ modular? [SOLVED] Why do the projections in the Calkin algebra not form a lattice? [SOLVED] How many subspaces are generated by three or more subspaces in a Hilbert space? Can infinite bounded distibutive lattices be "arbitrarily wide"? [SOLVED] Does the collection of coverings on a set $X$ form a lattice when ordered by refinement?
CommonCrawl
This is your chance to follow some numbers and see where they go! A simple rule is all you need. My first suggestion is to add the digits together then multiply (times) by $2$. The first number that I chose happened to be $56$. Adding its digits gives $5 + 6 = 11$. We multiply the $11$ by $2$, $2 \times 11 = 22$, and that's the first part of the journey. Adding the digits of $22$ gives $2 + 2 = 4$. We multiply the $4$ by $2$, $4 \times 2 = 8$, and that's the second part of the journey. Now, $8 + 0 = 8$ and $8 \times 2 = 16$ and that was the third part. Carrying on in the same way, $16$ leads to $14$ and $14$ leads to $10$. $10$ leads to $2$, $2$ leads to $4$, $4$ leads to $8$ and we are back to where we got to in the second part of the journey. Oh! So we are on the same bit as before, a circular bit that goes $2, 4, 8, 16, 14, 10$ and then back to the $2$ again. Now a new starting place, $96$. This time $9 + 6 = 15$ and $15 \times 2 = 30$, then $3 + 0 = 3$ and $3 \times 2 = 6$, and from $6$ we get $12$ and then back to $6$ again. Oh! So we now have a smaller circular bit of the journey that goes $6, 12$ then back to the $6$. I explored further trying to start with each number from $1$ to $99$. Then I tried similar, but different rules. There are $99$ starting points to try and I've only shown you $8$ on each of the two above so there are lots more to explore! Decide on the rules you will use and investigate what happens with different starting points. You might invent your own way of recording your findings. We'd love to hear how you got on.
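If you would like to explore many more starting points quickly, a computer can follow the journeys for you. Here is a small Python sketch (purely illustrative; the function names are just made up for this page) that applies the "add the digits together then multiply by $2$" rule and follows a journey until it reaches a number it has already visited:

```python
def next_number(n):
    """Apply the rule: add the digits of n together, then multiply by 2."""
    return sum(int(d) for d in str(n)) * 2

def journey(start):
    """Follow the journey from start until we reach a number we have seen before."""
    visited = []
    n = start
    while n not in visited:
        visited.append(n)
        n = next_number(n)
    return visited

print(journey(56))  # [56, 22, 8, 16, 14, 10, 2, 4] - joins the 2, 4, 8, 16, 14, 10 circle
print(journey(96))  # [96, 30, 6, 12] - joins the smaller 6, 12 circle
```

Trying every starting point from $1$ to $99$ is then just a loop, and you could record which circular bit each journey ends up on.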
CommonCrawl
Is a manifold-with-boundary with given interior and non-empty boundary essentially unique? Let $M$ be a compact connected manifold-with-boundary such that $\circ M \neq \emptyset$, where $\circ M$ is the boundary of $M$. Let $N$ be a compact connected manifold-with-boundary such that $\circ N \neq \emptyset$ and $\bullet M \approx \bullet N$, where $\bullet M$ denotes the interior of $M$ and $\approx$ denotes homeomorphic. Does it necessarily hold that $N \approx M$? No, there are examples detected by Whitehead torsion. If $P$ is a compact connected $(n-1)$-manifold with empty boundary, then (assuming $n\ge 6$) for every element $\tau$ of the Whitehead group of $\pi_1(P)$ there is an $h$-cobordism $M$ on $P$ such that $\tau$ is the Whitehead torsion of the pair $(M,P)$. The interior of $M$ will be isomorphic to $P\times\mathbb R$, but if $\tau$ is nontrivial then $M$ will not be isomorphic to $P\times I$.
CommonCrawl
Quantum computers are devices which, exploiting intrinsically quantum mechanical phenomena, are believed to be able to perform certain operations more efficiently. While the basic unit of information that is manipulated by classical computers is the bit, quantum computers manipulate quantum states, often in the form two-level quantum systems typically referred to as qubits. Several models for quantum computation have been proposed and are actively researched. Being the field of quantum computation still not yet fully mature, no computational model is indisputably superior to the others. Arguably the most known one is the circuit model. Among the others, there are measurement-based quantum computation, quantum annealing, and continuous variable quantum computation. In gate-based quantum computation, gates are represented by matrices, and include types such as the Pauli X (also termed "NOT"), Y, and Z (pronounced "zed") gates, which are single-qubit gates, multiple-qubit gates like the controlled-NOT or CNOT gate and Toffoli gate, and others. The set of single-qubit gates plus the CNOT gate forms a set of universal gates. Where $\alpha$ through $\zeta$ (and there can be more beyond this) represent the amplitudes of the state, and determine the probability of a particular state resulting upon collapse of the wavefunction upon measurement. Each of the items between the $|\,\rangle$ represents a particular possible state that can occur upon measurement. When measurement occurs, the qubits become normal, classical bits, which is part of what makes writing algorithms for quantum computers so difficult. The advantage in a quantum computer lies in the fact that the whole system can be, and in fact must be, represented by a single vector. This means that all the qubits share information, and further, any one gate, even if a single-qubit gate, has repercussions on the whole system. There are many different physical realizations of the quantum computer. There are optical quantum computers, which use photons as qubits, and things like Fabry-Perot cavities, mirrors, beamsplitters, phase shifters, and so forth for gates. There are superconducting quantum computers, which use Josephson Junctions. There are ion-trap quantum computers, which use ions for qubits and hold those still with strong magnetic fields, and then manipulate the state of the ions with lasers. A list of realizations can be found here under "Quantum Computer Science" and "Physical Implementations". Nielsen and Chuang's Quantum Computing and Quantum Information is the standard textbook for the field. Michael Nielsen has a series of videos on YouTube called Quantum Computing for the Determined. It is recommended that you have a base understanding of linear algebra in particular if you wish to learn this subject. Some understanding of quantum mechanics and computer science will be highly useful and something you will at minimum have to learn upon the way.
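To make the gate-as-matrix picture above concrete, here is a short Python/NumPy sketch (purely illustrative, and not tied to any particular physical realization or software framework mentioned above). It writes a two-qubit state as a vector of four amplitudes, applies a Hadamard and a CNOT, and reads off measurement probabilities as the squared magnitudes of the amplitudes:

```python
import numpy as np

# Common single-qubit gates as 2x2 matrices.
X = np.array([[0, 1],
              [1, 0]])                 # Pauli X ("NOT") gate
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate

# The two-qubit CNOT gate as a 4x4 matrix (control = first qubit).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# X flips |0> into |1>.
ket0 = np.array([1, 0], dtype=complex)
print(X @ ket0)                        # [0, 1], i.e. |1>

# A two-qubit state is a length-4 vector of amplitudes over |00>, |01>, |10>, |11>.
ket00 = np.array([1, 0, 0, 0], dtype=complex)

# Put the first qubit into superposition with H, then entangle the pair with CNOT.
state = CNOT @ np.kron(H, np.eye(2)) @ ket00

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(state)                           # amplitude 1/sqrt(2) on |00> and on |11>
print(np.abs(state) ** 2)              # [0.5, 0.0, 0.0, 0.5]
```

Note how the single CNOT acts on the whole four-element vector at once, which is the matrix version of the statement above that any one gate has repercussions on the whole system.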
CommonCrawl
In another paper, one of us argued that emergence and reduction are compatible, and presented four examples illustrating both. The main purpose of this paper is to develop this position for the example of phase transitions. We take it that emergence involves behaviour that is novel compared with what is expected: often, what is expected from a theory of the system's microscopic constituents. We take reduction as deduction, aided by appropriate definitions. Then the main idea of our reconciliation of emergence and reduction is that one makes the deduction after taking a limit of an appropriate parameter $N$. Thus our first main claim will be that in some situations, one can deduce a novel behaviour, by taking a limit $N\to\infty$. Our main illustration of this will be Lee-Yang theory. But on the other hand, this does not show that the $N=\infty$ limit is physically real. For our second main claim will be that in such situations, there is a logically weaker, yet still vivid, novel behaviour that occurs before the limit, i.e. for finite $N$. And it is this weaker behaviour which is physically real. Our main illustration of this will be the renormalization group description of cross-over phenomena. The Structure of Science.Ernest Nagel - 1961 - Les Etudes Philosophiques 17 (2):275-275. Critical Phenomena and Breaking Drops: Infinite Idealizations in Physics.Robert Batterman - 2004 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 36 (2):225-244. Understanding Thermodynamic Singularities: Phase Transitions, Data, and Phenomena.Sorin Bangu - 2009 - Philosophy of Science 76 (4):488-505. Explaining Quantum Spontaneous Symmetry Breaking.Chuang Liu & Gerard G. Emch - 2004 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 36 (1):137-163. Infinite Systems in SM Explanations: Thermodynamic Limit, Renormalization (Semi-) Groups, and Irreversibility.Chuang Liu - 2001 - Proceedings of the Philosophy of Science Association 2001 (3):S325-. On Emergence in Gauge Theories at the 'T Hooft Limit'.Nazim Bouatta & Jeremy Butterfield - 2015 - European Journal for Philosophy of Science 5 (1):55-87. Decoupling Emergence and Reduction in Physics.Karen Crowther - 2015 - European Journal for Philosophy of Science 5 (3):419-445. Less is Different: Emergence and Reduction Reconciled. [REVIEW]Jeremy Butterfield - 2011 - Foundations of Physics 41 (6):1065-1135. Reduction, Emergence and Renormalization.Jeremy Butterfield - 2014 - Journal of Philosophy 111 (1):5-49. Emergence and Reduction.Shaun Le Boutillier - 2013 - Journal for the Theory of Social Behaviour 43 (2):205-225. Ontology, Reduction, Emergence: A General Frame.C. Ulises Moulines - 2006 - Synthese 151 (3):313-323. Local Reduction in Physics.Joshua Rosaler - 2015 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 50:54-69.
CommonCrawl
Julia is an ambitious language. It aims to be as fast as C, as easy to use as Python, and as statistically inclined as R (to name a few goals). Read more about the language and why the creators are working so hard on it here: Why We Created Julia. These claims have lead to a few heated discussions (and a few flame wars) around benchmarking along the line of "Is Julia really faster than [insert your favorite language/package here]?" I don't think I'm the person to add to that particular conversation, but what I will say is this: Julia is fun. A few weekends ago, I made the decision to casually brush up on my neural networks. Why? Well, for starters neural networks are super interesting. Additionally, I was keen to revisit the topic given all the activity around "deep learning" in the Twittersphere. "There has been a great deal of hype surrounding neural networks, making them seem magical and mysterious. As we make clear in this section, they are just nonlinear statistical models" Not magic, just lots of interesting (or boring depending on perspective) math. The remainder of this post outlines a neural network package I made in Julia using only Julia's standard libraries. I've resisted throwing the actual code in this post and focused primarily on showing off some cool Julia stuff and discussion of the machine learning. If you're keen to see the code inside the neural network package, you can find that on github. Of course, first things first. I'll be using the UC Irvine Machine Learning Repository's wine dataset. It's a nice tabular, numeric csv file that saves us the hassle of having to acquire and clean new example data. Julia, like R and Python's pandas, has a data frame library specifically for these kinds of problems. I won't be showing off the more advanced features of the library, simply to reduce headaches from reading csv's and isolating columns. Note: To install Julia packages run Pkg.add("PACKAGENAME") within the Julia interpreter. For plots, I'm using Julia's Gadfly lib, a ggplot2-inspired, d3-powered package chock full of awesome. Let's look at the distributions of quality across our wines. As shown, the dataset is comprised of six unique classes w/ most observations falling into class 5 or class 6. It is fairly standard procedure in machine learning to split training data into separate train and test groups. Train/test splits simply exclude a portion of the data (the test set) during training, and predict on the holdout dataset to produce an unbiased evaluation. So let's create our train/test split! This code will look very familiar to those acquainted with pandas or R data.frame (for those less familiar, checkout some of our other awesome blog posts). Let's consider a toy example--modeling a company's value based on 2 variables only (# of employees and revenue). Clearly, these fields will contain numbers. But the magnitude of each will be very different (e.g. the former might have tens or hundreds while the latter may be a lot larger). This can present problems for machine learning algorithms. Some models handle these scale differences well while others do not. Unfortunately, it's hard to say for certain how well a particular algorithm will deal with these scenarios without understanding of what's going on under the hood. That said, some make it easier than others. k-nearest-neighbors clearly cares if one feature is arbitrarily larger than another since it will have a disproportionate effect on distance. But other cases aren't so easy. Word to the wise. 
If you're even a little unsure, scale your features. $^*$ Technical note: I think this has to do with how weights and biases are initialized before backpropagation. The scaling strategy I'll be implementing is fairly basic. For each column, subtract out the mean and divide by the standard deviation. This won't make my classifier immune to outliers or fat tails, but it will go a long way in making the model behave better. # for loops are fast again! Unlike in NumPy or R, when you multiply two arrays in Julia, you get array multiplication, not element-wise multiplication. Why does that code fail? You can't multiply an $N \times 1$ matrix by another $N \times 1$ matrix because the dimensions don't align. Element-wise operations (broadcasters) are specified with their own syntax; a period before the operator: .*, ./, .^. You can also transpose a matrix with a "tick" symbol. This has some interesting properties when combined with a broadcaster. I can use that trick to further vectorize the transformation, making the column-wise operations much more concise (and faster). Okay, let's look at the end result of scaling. I'll fit the scaler on the training data then transform the test. # what do columns look like before scaling? # ... and after scaling? Great. Rather than having a wild distribution of magnitudes, we've normalized the features to similar ranges. On to the predictions! Artificial neural networks are an attractive model-type because of their ability to model non-linear relationships. Neural networks are comprised of layers, each of which is a set of linear models. To initialize a neural network we only have to specify the size of one layer, the so-called "hidden" layer. I'll explain later what that means, but for now we'll say that the size of the hidden layer influences how precisely the model learns the training set. The larger the hidden layer, the more likely the model is to overfit. The smaller the hidden layer, the more likely the model is to overgeneralize. I've designed the neural network module to use similar syntax to scikit-learn. The first two parameters, epochs and alpha, have to do with, respectively, the length and rate of training. lambda scales a penalty which attempts to control over-fitting; larger values mean a more generalized model. Of course averages can be deceiving, and as we saw previously, the class distributions are skewed. Let's look at a confusion matrix. The columns represent the true classes, the rows represent the classes predicted by the model. In this case, the predictions aren't perfect but the results are significantly better than random guessing. A neural network is a combination of linear models. It takes some set of input observations ($x_1, x_2, \dots, x_n$) and produces class probabilities for that observation. In the diagram below, the output layer will create estimates for three classes ($P(c_1),P(c_2),P(c_3)$). The real magic comes from producing the intermediate results ($a_1, a_2, \dots, a_m$). Let's consider a single value produced by the "hidden layer" (the light blue ones). $a_1$ is calculated by feeding the input fields ($x_1, x_2, \dots, x_n$) into a linear classifier known as an artificial neuron. The same is true for $a_2$, $a_3$ and all the way up to $a_m$. Each intermediate result is the product of a unique artificial neuron, so each layer of the neural network is really just an amalgamation of linear models. So what does an artificial neuron look like? An artificial neuron looks surprisingly like any other linear model.
Take an input value, multiply it by some weights and add a bias. # What are the outputs of these values? Where artificial neurons differ from a least-squares regression is in the activation function. Though there are many functions used for neural networks, a common choice is the sigmoidal function $\sigma(z) = 1 / (1 + e^{-z})$. This takes the output of the linear model and squashes the value between 0 and 1. Because the values approach 0 and 1 so quickly, this activation function allows us to think of an individual neuron as "active" (close to 1) or "inactive" (close to 0). By aggregating many linear models, neural networks are able to consider nonlinear relationships in an attempt to generalize predictions. Neural networks, and strategies involving them such as deep learning, have become a major part of complex signal recognition such as machine vision. @cjordansquire afaik all challenges that involve signal data (sound or vision) are now won by deep CNNs with ReLU + dropout. The algorithm excels at systematically evaluating features at a speed humans just can't. For problems that necessitate high-dimensional and complex feature spaces, neural networks are an immensely valuable tool. But there is a downside. The same strength of modeling interwoven relationships curtails an important property: the ability to open up the black box and understand what's going on. Often, being able to know which features are important is as critical as the actual prediction (why do customers churn vs. will a customer churn). Neural networks aren't magic, and they're not the end-all be-all. Julia, on the other hand, is totally magic.
CommonCrawl
We present a theory of thermal fluctuations which melt a commensurate $p \times p$ density wave ordered state on the square lattice. A phase diagram is constructed which will act as a springboard to a variety of interesting phases and phase transitions. The commensurate lock-in solid can in general melt to either an incommensurate floating solid or by a second order phase transition to an anisotropic (striped) floating state with $p$-periodic order along one direction and incommensurate quasi long range order in the other direction. In either case, this transition will be accompanied by the proliferation of domain walls, with the adjacent state being distinguished by the sign of the domain wall interaction energy. The fully disordered high temperature state can be reached from the floating solid by a second order transition mediated by dislocations. For $p = 4$, and at special commensurate densities, the $p \times p$ commensurate state can melt directly into the disordered state via a self-dual critical point with non-universal exponents.
CommonCrawl
(3) Figures to the right indicated full marks. Q1(a) Compare Intramodal Dispersion and Intermodal Dispersion. Q1(b) Define Critical Angle, Acceptance Angle, Fresnel Reflection and External Reflection. Q1(c) Compare LED and LASER Sources. Q1(d) Differentiate DWDM and WDM Techniques. Q2(a) Explain OTDR working principle in detail. Q2(b) Derive an expression for Time Delay in Intermodal Dispersion. Q2(c) Calculate the number of modes at 1.3$\mu$m wavelength in GIF having index profile $\alpha$=2, core radius 25$\mu$m, core refractive index 1.48 and cladding refractive index 1.46. Q3(a) Sketch the Refractive Index Profile of SIF and GIF. Derive an expression for Numerical Aperture and Number of Modes in SIF. Q3(b) Explain any one Fiber Fabrication Technique. Q3(c) Compare Isolators and Circulators. Q4(b) Derive an expression for Responsivity of PIN photodiode. Differentiate PIN and RAPD photodiodes. Q4(c) Explain Front End Amplifiers in optical communication. Q5(a) Explain OTDM in detail. Q5(b) Describe SONET/SDH in detail.
CommonCrawl
Abstract: In this paper we study the stability of the self-similar solutions of the binormal flow, which is a model for the dynamics of vortex filaments in fluids and super-fluids. These particular solutions $\chi_a(t,x)$ form a family of evolving regular curves of $\mathbb R^3$ that develop a singularity in finite time, indexed by a parameter $a>0$. We consider curves that are small regular perturbations of $\chi_a(t_0,x)$ for a fixed time $t_0$. In particular, their curvature is not vanishing at infinity, so we are not in the context of known results of local existence for the binormal flow. Nevertheless, we construct in this article solutions of the binormal flow with these initial data. Moreover, these solutions become also singular in finite time. Our approach uses the Hasimoto transform what leads us to study the long-time behavior of a 1D cubic NLS equation with time-depending coefficients and small regular perturbations of the constant solution as initial data. We prove asymptotic completeness for this equation in appropriate function spaces.
CommonCrawl
A Sudoku square consists of a $9\times 9$ grid with entries such that each row, column and each of the 9 non-overlapping $3\times 3$ tiles contains the numbers 1—9 once only. The following program verifies that a provided grid is a valid Sudoku square. """ Return True if grid is a valid Sudoku square, otherwise False. """ Here we use the fact that an array of length 9 contains 9 unique elements if the set formed from these elements has cardinality 9. No check is made that the elements themselves are actually the numbers 1—9.
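Only the program's docstring survives in the text above; the body appears to have been lost in extraction. A minimal reconstruction consistent with the description (each row, each column and each of the 9 non-overlapping $3\times 3$ tiles is checked for 9 distinct entries by forming a set) might look like the following. Whether the original used NumPy or plain nested lists is not recoverable from the text, so this sketch assumes a NumPy array:

```python
import numpy as np

def check_sudoku(grid):
    """ Return True if grid is a valid Sudoku square, otherwise False. """
    grid = np.asarray(grid)
    # Each row and each column must contain 9 distinct entries.
    for i in range(9):
        if len(set(grid[i, :])) != 9 or len(set(grid[:, i])) != 9:
            return False
    # Each of the 9 non-overlapping 3x3 tiles must contain 9 distinct entries.
    for i in (0, 3, 6):
        for j in (0, 3, 6):
            if len(set(grid[i:i+3, j:j+3].flatten())) != 9:
                return False
    return True
```

As in the original description, no check is made that the entries are actually the numbers 1 to 9, only that each row, column and tile holds 9 distinct values.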
CommonCrawl
how can I numerically calculate all eigenvectors of a $n \times n$ complex tridiagonal matrix? Ground state eigenvector different for different eigen solvers (differs by negative sign in the elements). Does it matter? What does "Counting algebraic multiplicity" mean? How to parallelize the computation of eigenvalues of a sparse symmetric matrix in MATLAB? I have a similarity matrix which is symmetric and sparse. How can I parallelize the computation of the eigenvalues of this matrix in MATLAB?
CommonCrawl
[Update (8/26): Inspired by the great responses to my last Physics StackExchange question, I just asked a new one—also about the possibilities for gravitational decoherence, but now focused on Gambini et al.'s "Montevideo interpretation" of quantum mechanics. I'm in Anaheim, CA for a great conference celebrating the 80th birthday of the physicist Yakir Aharonov. I'll be happy to discuss the conference in the comments if people are interested. In the meantime, though, since my flight here was delayed 4 hours, I decided to (1) pass the time, (2) distract myself from the inanities blaring on CNN at the airport gate, (3) honor Yakir's half-century of work on the foundations of quantum mechanics, and (4) honor the commenters who wanted me to stop ranting and get back to quantum stuff, by sharing some thoughts about a topic that, unlike gun control or the Olympics, is completely uncontroversial: the Many-Worlds Interpretation of quantum mechanics. Proponents of MWI, such as David Deutsch, often argue that MWI is a lot like Copernican astronomy: an exhilarating expansion in our picture of the universe, which follows straightforwardly from Occam's Razor applied to certain observed facts (the motions of the planets in one case, the double-slit experiment in the other). Yes, many holdouts stubbornly refuse to accept the new picture, but their skepticism says more about sociology than science. If you want, you can describe all the quantum-mechanical experiments anyone has ever done, or will do for the foreseeable future, by treating "measurement" as an unanalyzed primitive and never invoking parallel universes. But you can also describe all astronomical observations using a reference frame that places the earth at the center of the universe. In both cases, say the MWIers, the problem with your choice is its unmotivated perversity: you mangle the theory's mathematical simplicity, for no better reason than a narrow parochial urge to place yourself and your own experiences at the center of creation. The observed motions of the planets clearly want a sun-centered model. In the same way, Schrödinger's equation clearly wants measurement to be just another special case of unitary evolution—one that happens to cause your own brain and measuring apparatus to get entangled with the system you're measuring, thereby "splitting" the world into decoherent branches that will never again meet. History has never been kind to people who put what they want over what the equations want, and it won't be kind to the MWI-deniers either. This is an important argument, which demands a response by anyone who isn't 100% on-board with MWI. Unlike some people, I happily accept this argument's framing of the issue: no, MWI is not some crazy speculative idea that runs afoul of Occam's razor. On the contrary, MWI really is just the "obvious, straightforward" reading of quantum mechanics itself, if you take quantum mechanics literally as a description of the whole universe, and assume nothing new will ever be discovered that changes the picture. Nevertheless, I claim that the analogy between MWI and Copernican astronomy fails in two major respects. The first is simply that the inference, from interference experiments to the reality of many-worlds, strikes me as much more "brittle" than the inference from astronomical observations to the Copernican system, and in particular, too brittle to bear the weight that the MWIers place on it. 
Once you know anything about the dynamics of the solar system, it's hard to imagine what could possibly be discovered in the future, that would ever again make it reasonable to put the earth at the "center." By contrast, we do more-or-less know what could be discovered that would make it reasonable to privilege "our" world over the other MWI branches. Namely, any kind of "dynamical collapse" process, any source of fundamentally-irreversible decoherence between the microscopic realm and that of experience, any physical account of the origin of the Born rule, would do the trick. Admittedly, like most quantum folks, I used to dismiss the notion of "dynamical collapse" as so contrived and ugly as not to be worth bothering with. But while I remain unimpressed by the specific models on the table (like the GRW theory), I'm now agnostic about the possibility itself. Yes, the linearity of quantum mechanics does indeed seem incredibly hard to tinker with. But as Roger Penrose never tires of pointing out, there's at least one phenomenon—gravity—that we understand how to combine with quantum-mechanical linearity only in various special cases (like 2+1 dimensions, or supersymmetric anti-deSitter space), and whose reconciliation with quantum mechanics seems to raise fundamental problems (i.e., what does it even mean to have a superposition over different causal structures, with different Hilbert spaces potentially associated to them?). As you might remember, I wagered $100,000 that scalable quantum computing will indeed turn out to be compatible with the laws of physics. Some people considered that foolhardy, and they might be right—but I think the evidence seems pretty compelling that quantum mechanics can be extrapolated at least that far. (We can already make condensed-matter states involving entanglement among millions of particles; for that to be possible but not quantum computing would seem to require a nasty conspiracy.) On the other hand, when it comes to extending quantum-mechanical linearity all the way up to the scale of everyday life, or to the gravitational metric of the entire universe—as is needed for MWI—even my nerve falters. Maybe quantum mechanics does go that far up; or maybe, as has happened several times in physics when exploring a new scale, we have something profoundly new to learn. I wouldn't give much more informative odds than 50/50. The second way I'd say the MWI/Copernicus analogy breaks down arises from a closer examination of one of the MWIers' favorite notions: that of "parochial-ness." Why, exactly, do people say that putting the earth at the center of creation is "parochial"—given that relativity assures us that we can put it there, if we want, with perfect mathematical consistency? I think the answer is: because once you understand the Copernican system, it's obvious that the only thing that could possibly make it natural to place the earth at the center, is the accident of happening to live on the earth. If you could fly a spaceship far above the plane of the solar system, and watch the tiny earth circling the sun alongside Mercury, Venus, and the sun's other tiny satellites, the geocentric theory would seem as arbitrary to you as holding Cheez-Its to be the sole aim and purpose of human civilization. Now, as a practical matter, you'll probably never fly that spaceship beyond the solar system. 
But that's irrelevant: firstly, because you can very easily imagine flying the spaceship, and secondly, because there's no in-principle obstacle to your descendants doing it for real. Now let's compare to the situation with MWI. Consider the belief that "our" universe is more real than all the other MWI branches. If you want to describe that belief as "parochial," then from which standpoint is it parochial? The standpoint of some hypothetical godlike being who sees the entire wavefunction of the universe? The problem is that, unlike with my solar system story, it's not at all obvious that such an observer can even exist, or that the concept of such an observer makes sense. You can't "look in on the multiverse from the outside" in the same way you can look in on the solar system from the outside, without violating the quantum-mechanical linearity on which the multiverse picture depends in the first place. The closest you could come, probably, is to perform a Wigner's friend experiment, wherein you'd verify via an interference experiment that some other person was placed into a superposition of two different brain states. But I'm not willing to say with confidence that the Wigner's friend experiment can even be done, in principle, on a conscious being: what if irreversible decoherence is somehow a necessary condition for consciousness? (We know that increase in entropy, of which decoherence is one example, seems intertwined with and possibly responsible for our subjective sense of the passage of time.) In any case, it seems clear that we can't talk about Wigner's-friend-type experiments without also talking, at least implicitly, about consciousness and the mind/body problem—and that that fact ought to make us exceedingly reluctant to declare that the right answer is obvious and that anyone who doesn't see it is an idiot. In the case of Copernicanism, the "flying outside the solar system" thought experiment isn't similarly entangled with any of the mysteries of personal identity. There's a reason why Nobel Prizes are regularly awarded for confirmations of effects that were predicted decades earlier by theorists, and that therefore surprised almost no one when they were finally found. Were we smart enough, it's possible that we could deduce almost everything interesting about the world a priori. Alas, history has shown that we're usually not smart enough: that even in theoretical physics, our tendencies to introduce hidden premises and to handwave across gaps in argument are so overwhelming that we rarely get far without constant sanity checks from nature. This entry was posted on Saturday, August 18th, 2012 at 1:56 pm and is filed under Metaphysical Spouting, Quantum. You can follow any responses to this entry through the RSS 2.0 feed. Both comments and pings are currently closed. So you support MWI but reject one of the main arguments for it? Deutsch argues that a quantum computer will prove that MWI is correct. Do you agree with that? I'd prefer to say that I understand the case for MWI and agree with part of it: yes, MWI really is just the "obvious" story you would tell if you wanted to apply quantum mechanics to the entire universe. (The zillions of other worlds aren't "added" per se; rather, they seem unavoidable once you accept that the Schrödinger equation applies always and everywhere.) Furthermore, all of the concrete alternatives to MWI on the market today are contrived and unsatisfactory in various ways. 
On the other hand, for the reasons explained in the post, I reject the further step some people take these days: of asserting that MWI is as obviously true as the Copernican system, and that anyone who refuses to see that is an idiot. I've got about 1.5 hours to watch talks from the conference; can you please recommend three that you found particularly enlightening? Thanks! If physics is computable, we can always imagine some "alien god" building a computer to simulate physics and look at the universe from outside the simulation – even if the computer only simulates a small subset of the universe. In fact, we'll be able to do it ourselves. Is this really a necessary aspect of all many-worlds type theories? Subsequent worlds in branches can be identical to each other, right? Can a photon arrive at one place having traveled different paths to get there, with the paths being different lengths, meaning that, since the speed of light is constant, and the photon arrived at one time, the photon left at different times? If so, how does this happen without worlds combining? Is worlds combining not a plausible ingredient in an explanation of Born's rule? Great post! But I wonder just what the implications would be if the Bouwmeester et al. experiment shows no interference. It certainly gives an opportunity to falsify Penrose's theory of gravitationally induced collapse, and a no-interference result would make that theory much more credible. But couldn't the mirror also undergo something much like conventional environmental decoherence in a version of quantum gravity where the gravitational field is a thermal state of Planck-scale degrees of freedom? I'm not sure that any of those theories are sufficiently well defined to predict the outcome of this experiment, but as a wild guess I'd expect them to produce environmental decoherence under the same kind of conditions as Penrose's theory produces a collapse. Brian #6: No, it's not a necessary aspect of MWI that the branches never again meet—in fact they will meet in the thermodynamic limit. All I meant was that, on the timescales relevant to us (and on the conventional picture), their chance of meeting is so absurdly small that it can safely be neglected. Scott, thank you for this fine essay! Please let me commend to everyone the first chapter of David Deutsch's book The Fabric of Reality (1997), not for the arguments that Deutsch develops in favor of MWI (arguments that are — literally — debatable), but rather for the wonderful arguments that Deutsch develops for the proposition that science is all about understanding, relative to which predictive power is merely supplementary. The final sentence quoted above is my favorite expression from all of Deutsch's Fabric, because it is such a wonderfully sharp, wonderfully double-edged sword, which we will call "Deutsch's Universal Double-Edged Sword" (DUDES). As a concrete and realistic example of quantum-relevant DUDES, let us suppose — as many researchers are discovering — that we come to a collective appreciation, as a humbly empirical finding, that the introduction of small quantities of high-order noise can greatly increase the efficiency of computational simulation, not for all classes of dynamical system, but for a class sufficiently broad as to encompass all naturally occurring systems, as well as all laboratory experiments conducted to date.
The principle of DUDES affirms that this empirical finding must "illuminate all subjects", and in so particular, it must illuminate the reality (or not) of MWI. Scott, what do you think of combining Wigner's friend experiment with Peres's delayed choice for entanglement swapping? Suppose Alice, Bob, ad Victor (Figure 1 of Ma et al. 2012) become conscious of their measurement (basis chosen at random) at space-like distance. Then don't you think each individual would need to consider the others two as superposed consciousness? If this experiment was conducted, would you agree Penrose would have been proven wrong on this particular topic? If yes, do you actually need a space-like separation to get convinced? Jiav #10: Yes, if a Wigner's friend experiment can be done, then Penrose is wrong (and I think Penrose would readily agree!). No need for delayed-choice or anything like that. Scott #11: Sorry I was unclear. Wigner's original proposal could hardly be done in near future (except maybe using conscious computer, but then Penrose could argue these computers are not truly conscious). To the contrary, Peres' experiment has already be conducted, with Alice, Bob and Victor being instruments rather than conscious being. So, if Victor, Alice and Bob were conscious of their own measurement, would you agree Peres's proposal would be in essence a valid variation of Wigner's friend experiment? Jiav #13: Thanks for clarifying! Alas, no, I don't think an experiment where someone else measures a quantum state at spacelike separation from me says anything about these issues. For I'd only describe the other person as being in superposition if I already accepted MWI! If instead I believed in some dynamical-collapse theory, then I'd say the other person was in a definite configuration just like I was. And nothing in an experiment like this is able to distinguish the two possibilities. The only way I know to distinguish them is to look for interference between different mental states of a superposed observer, and that's something that we're obviously an extremely long way from doing. 1. There is a consensus on the MWI. 2. The people in this universe, by their belief in MWI, guess the existence of a universe like ours – i.e., a universe whose inhabitants believe it to be the only one. Then, from the point of view of those people, our view would seem parochial. Well, one of the spacelike-separated people could consciously choose a polarized photon to send to the other person, based on the result of their measurement. If this photon is in a superposition presumably so is the person who hand-crafted it. When you run your program will there be only one class of object instantiated (the wave class) or are there two different types of objects (of wave class and particle class) ? It should be obvious that the many worlds interpretation has much greater simplicity and clarity and that all other interpretations are in fact a return of dualism in disguise (with all the associated problem thereof). It is for that reason that many worlds wins hands down. John Sidles – how goes your search for the best polynomial-complexity approximation to QM? I think this was a nice write up, but I think that you should have written some about the techical problems that Everett faces. The Born Rule is seemingly impossible to derive in MWI and if that ends up being the case, then it's obviously wrong. The issue of preferred basis. Forms a CD or quantum well? Does the notion of superdeterminism remove the need for MWI? 
And if so, is there a good reason why is this not an explanation that receives more attention? To my limited knowledge, Gerard 't Hooft seems to be the only one whose work on this topic seems to be taken somewhat seriously. Also, if the concept of "consciousness" is integral to certain interpretations of quantum mechanics, does this not raise alarm bells? From what I've seen, there seems to be a consensus amongst the neuroscience community that "free will" is an illusion (see Sam Harris's recent short text for references). I suppose you might have "consciousness" without "free will", but that seems problematic, even at the level of then defining these terms. Sorry, while that argument sounds superficially plausible, the rules of QM prevent it from working! If the photon is entangled with the person who sent it, then I won't see interference when I measure the photon (i.e., the photon will be in a mixed state). The only way for the photon to be in a pure-state superposition is if it's unentangled with the sender. But in that case, the photon could just as well have been sent by a person in a definite configuration as by a superposed sender. So in neither case can I conclude anything at all, by measuring only the photon, about whether the person who sent it was in superposition. mjgeddes #17: While I love your coding analogy, the irony is that I draw the opposite conclusion from it than you do! I'd say: whether your program should have "wave" objects only, or both "wave" and "particle" objects, strongly depends on what kind of functionality you think the program has to support. It's as extreme a violation of the Occam razor principle (entities should not be multiplied without necessity) as possible. The trouble is that "entities should not be multiplied without necessity" is an archaic formulation of Occam's Razor, which matches neither how it is invoked in science, nor how it should be invoked according to modern statistical theories. To illustrate, imagine a theory that says that atoms don't exist except when we look at them through a microscope, or at least when we observe some phenomenon whose explanation requires atomic theory. Otherwise, if you just stare at (say) your table and don't try to take it apart, then it really is just a solid, continuous wood-substance with no internal structure. Now, notice that this theory can achieve an incredible reduction in "the number of entities in the universe," while matching (because we insist it do so) the predictions of conventional physics. Even so, the theory is stupid, and no one (including you, I'm sure) seriously advocates it. What's going on here is that this theory reduced the number of "entities," only at the expense of a staggering increase in the complexity of the laws. For example, how does the universe "know" whether anyone is looking at the table or not, so that it knows whether it needs to render the table down to the level of atoms? This is the reason why modern versions of Occam's Razor talk about the simplicity of theories themselves (either the number of bits needed to specify the theory in some way, or some other looser criterion), rather than the number of "entities" postulated by the theories. And thus, calling MWI "the most extreme violation of Occam's Razor possible" gets it ironically backwards. MWI is one of the most extreme possible applications of Occam's Razor—so extreme, in fact, that debate over whether the Razor should be taken that far seems entirely legitimate to me (see my comment #25). 
Mitchell, before answering, please let me first disclaim (a) any implication that the quest to find good QM approximation algorithms is solely mine, and/or (b) any ambition to encompass the entirety of QM with PTIME approximations. The reasons are pure common-sense: (a) many eminent researchers for many decades have sought (and found!) a multitude of ingenious PTIME approximations to quantum dynamical systems — of which many (perhaps even the majority) methods resort to non-Hilbert state-spaces — and there is no particular reason to think this progress will stop any time soon, and (b) as Scott often notes, there are excellent complexity-theoretic reasons to expect that scalable quantum computing architectures on perfectly flat state-spaces will forever be beyond the reach of PTIME simulation. That said, this summer our Quantum Systems Engineering group is mainly interested to explore (both experimentally and theoretically) the question "How does Onsager/Lindblad-style transport theory work on non-flat Hamiltonian/Kählerian state-spaces?", to which the satisfying answer — that in retrospect is mathematically unsurprising — is simply this: naturally and universally. Here the notion of of "transport" phenomena is understood pretty broadly, e.g. the subtle and intricate phenomena that are associated to single-photon emission-and-detection in low-loss optical cavities are regarded as the instances of the dynamical transport of conserved quantities from sources/emitters to the sinks/detectors. Thus viewed broadly, transport theory is associated to a class of phenomena that (obviously) very many physicists have studied for (obviously) very many decades using (obviously) very many physical insights and mathematical toolsets. What is *not* obvious — per David Deutsch's DUDES principle discussed above — is all the ways that these transport-related ideas can be arrayed so as to mutually illuminate one another! It will be quite awhile (IMHO) before we achieve a reasonably natural and unified understanding even of the transport-related ideas that are already extant in the theoretical and experimental literature … to say nothing of fascinating new classes of experiments that quantum information theorists (like Scott and Alex Arkhipov) are proposing. Some of Yuri Manin's students have won the Field's medal in case he is an unknown in TCS. Aaron #28: yes, sorry, fixed! 1ly1 #30: I've certainly heard of Manin, but I confess that I don't understand that passage. With dynamical-collapse proposals, even if the details aren't worked out, at least it's clear what kind of thing we have in mind. By contrast, if someone tells me that we should switch from Hilbert space to a more "sophisticated" representation, or that "our world is adelic," then I'm not even sure what we're talking about until there's some more concrete idea on the table. All my nerd love is for you right now! I could elaborate more but honestly, there's no need to, suffice it to say that all my nerd love is for you. I was about to write a long essay about why we should not believe in the many-worlds interpretation given our present understanding of quantum theory, but I watched The Avengers last night and I am afraid that it might make me turn into the hulk. On the other hand, thanks very much for the link to the Aharonov conference. I will watch a few talks, even though I have a feeling that I might find some of them hulk-inducing as well. 
This copernicanism thing remind me of another alleged "copernican revolution" but in philosophy, which might be relevant here. Kant claimed to have provided such a revolution by stating that our knowledge cannot be about the world as it is in itself, but about the world shaped by our understanding (thus providing a synthesis of empiricism and rationalism). I tend to agree that there is a kind of copernican turn with quantum physics, but I think it is much more akin to Kant's copernican turn than the advocates of MWI might think. It is something like this: our best physical theories are about relations between particulars (including relations to the observers), not about particulars themselves as existing independently of any observer. Obviously a single particular cannot be an object of knowledge if not in virtue of its relations to something else (a category, …), or if unrelated to an observer… MWI fails for taking too seriously the content of our theories as refering to particulars instead of relations, and for thinking that our theories somehow exist "outside our knowledge". In my opinion, this kind of relational interpretations suits as well Ockam's razor (and in a sense even more : you don't have to postulate an absolute reality-in-itself!) without the problematic "extravaganza" of MWI. I just want to comment that, both for your summary of the Deutsch position and for your explanation of your skepticism, this is one of the best blog posts on any topic that I have ever read. One reason I'm uncomfortable with MWI is the large number of unobservable 'unused' universes that end up just sort-of lying around. My question is whether this large number is infinite: Specifically, does MWI require admitting infinite quantities into physical theories? Huh, you're right! Dang, I really thought that would work. To make the discussion more concrete, consider the proposed experiment of Bouwmeester et al., which seeks to test (loosely) whether one can have a coherent superposition over two states of the gravitational field that differ by a single Planck length or more. I am a grad student currently working in the Bouwmeester group on this project. A minor technical point: it seeks a superposition of two coherent states of an object separated by more than the width of its wavepacket. For our proposal and related proposals, in practical terms this will be in the femtometer or picometer range. Still pretty small, but much larger than a Planck length! It seems that m.w.i is a wish and once it is found out how to understand the Born probabilities it will become an interpretation. The authors emphasize what they call 'entanglement relativity', e.g. in a H-atom the e and p particle are entangled, but if one uses center of mass plus relative coordinates the wave function separates. They show that something similar can be done for a particle coupled to a heat bath of harmonic oscillators (a toy model for decoherence) and 'entanglement relativity' now means that one is dealing with two different decoherence processes. They claim that an inconsistency follows for the original (and also modern) Everett interpretation, about when/how the world splits. #33… from what I remember from a class a long while back(very primitive) adeles are "vectors" with coordinates corresponding to every valuation possible (the first coordinate being real which is p-adic at p = infinity and remaining coordinates being done at all p). Under suitable definitions on product they form a ring. 
I believe what he is saying is that, due to our inherent bias, we may very well look at only the first coordinate, leaving out all the other valuations. This may provide data that may not be new but very well be complementary. OK, your counterexample of an "atoms are only there when you look" theory is valid against the quantitative interpretation of Occam's razor, but not against the qualitative one, as it introduces more concepts than the original theory – in addition to all the usual atoms it also has fundamental surfaces (or whatever atoms turn into when not observed precisely). But MWI violates both interpretations since not only does it postulate infinite additional universes, it also introduces some gargantuan hyperuniverse which contains ours and all the other branches, plus it adds new ill-defined concepts like splitting of universes. Also our universe is certainly not equivalent to other branches since unlike them it is observable, so there is something that picks it out from the ensemble – in other words, why did my consciousness follow this branch and not the other? Also Occam's razor certainly doesn't favor many worlds even if you interpret it using some sort of simplicity or information content measure. The major error I see here is to assume that mathematics is all that defines a physical theory. That is simply false. Mathematics is only a part of it, and not the most important part (all physical theories can in principle be restated without math); by far the most important part of any physical theory is its interpretation, which tells us how its concepts and its mathematics relate to the real world. An equation stating that a = db/dc is not Newton's law; it only turns into one with a proper interpretation which relates the symbols a, b, and c to the real world. Now the problem with MWI is that the math may be simpler without having to postulate measurement, but the interpretation gets way more complex and ill-defined, completely outweighing any benefit. So even if we go with your interpretation of Occam's razor as one which favors simplicity, the ensemble interpretation still obliterates MWI. To see that, imagine that you have to write 2 papers explaining both the ensemble interpretation and the MWI interpretation and all the concepts behind them to someone who never had any contact with QM. Which of the two papers would be shorter, which could be encoded in fewer bits? The ensemble interpretation is concise and straightforward, while MWI leads to a mess of ill-defined concepts. What does a parallel universe even mean? How can universes split and conserve energy? How can separate universes interfere? How can you get probability out of it? Why did my consciousness follow this branch during the split and not the other? etc. etc.
The key point is that naturalizing and universalizing our present understanding of quantum noise dynamics is a broad research path upon which any researcher can set forth immediately, in the reasonable hope that it will lead to illumination of one form or another … if not fundamental insights, then perhaps practical engineering applications. This ease-of-starting stands in marked contrast to quantum research paths that require genius-level inspiration even to identify the trail-head (e.g., Manin's "our world is neither real nor p-adic; it is adelic"). Genius-level quantum insights are a great starting point, but (obviously) as a general rule they are not available. 1) Just as Ockham's razor, properly interpreted, argues for dispensing with measurement as an unanalyzed primitive, it also (it seems to me) argues for dispensing with physical reality (as something distinct from mathematical reality) as an unanalyzed primitive. On this view, the many branches of MWI are just a tiny tip of the iceberg of reality. Mathematical structures exist; some mathematical structures are sufficiently complex in the right sorts of ways that they contain self-conscious substructures; our Universe, with its many branches, is one of many such structures. To single out our universe from all the other mathematical structures, and to declare it uniquely "real", strikes me as (to adapt your words) an unmotivated perversity that mangles the simplicity of mathematics for no better reason than a parochial urge to place our own experiences at the center of reality. 2) One might adopt your anti-MWI arguments for service here, asking, e.g., "Who is the observer who in principle can take a bird's-eye view and see that our own many-branched universe occupies no particular privileged place among the many other mathematical universes that we know it's possible to describe?" The answer, of course, is the mathematical physicist, who has a lifetime of experience writing down different models of the universe and therefore sees something of the entire landscape. It's true that all of the math-physicists' models are far too primitive to include anything like self-conscious beings, but it's also true that your theoretical Copernican observer flies too far above the earth to see the details. Our math physicist, like your Copernican observer, glimpses a highly undetailed picture of something that is nevertheless undoubtedly there. And there is absolutely nothing in the mathematics to suggest that, e.g., the Einstein-de Sitter universe is any more or less "real" (whatever that means) than the Gödel Universe or any of a hundred other universes that we see in the literature, many of which look like extremely rough sketches of the universe we happen to inhabit. Bottom line: Ockham's razor tells me not to add unnecessary primitives, especially (as you point out) when it's easy to imagine an observer to whom those unnecessary primitives would seem entirely arbitrary. That's an argument for the reality of the MWI branches, but it's an even better argument for ditching the unnecessary primitive of physical reality, because in that case we know who the key observer is. And this has the nice side effect of relieving us from having to ask why the Universe exists in the first place. Mathematical objects exist; the Universe is a mathematical object; therefore the Universe exists. Of course this leaves the question of why mathematical objects exist, but I think I'll stop here for now.
Any thoughts on this presentation http://www.youtube.com/watch?v=dEaecUuEqfc with corresponding paper http://www.flownet.com/ron/QM.pdf ? Ron's 'Zero-worlds' interpretation seems more parsimonious and elegant when compared with the MWI, while retaining most of the MWI's explanatory characteristics which make it so compelling. Could you please tell us precisely what it was that led to _your_ nerves faltering: (a) the fact that it's a _linear_ theory or (b) the part that it's the _quantum_ theory (with all its peculiarities included in it)? IMO, the reason no one should take MWI as a serious hypothesis of physics (let alone as a serious theory) has, in a way, little to do with Occam's razor. The fallacy involved here is, IMO, much more crude. Concepts of physics (i.e. certain mental objects which together form the contents of physics as a knowledge-discipline) are supposed to stand for something in the perceivable reality. Sense-perception is the basis of all knowledge. When a mental object is posited to stand for something that can in principle never be perceived, it ceases to have any valid epistemological status, and must not be taken seriously. Historical examples of such mental objects include things such as angels and fairies pushing the planets. In comparison, the late-20th-century version of the error posits not just a few imaginary conscious objects but an infinity of in-principle imaginary universes. It's supernaturalism taken to its logical extreme. And supernaturalism it is, even if it appears in a non-religious way—there evidently are secular ways to be a mystic, too! (i) Every man always wins … in some or the other universe. (ii) Every man is immortal … in some or the other universe. (iii) Hitler is busy in a forever loop, packing chocolates and gifts he sends to Jewish children … in some or the other universe. (a) One is to say that when a bullet aimed at you arrives near you, somehow the EM field and the gravity field and the wind pressure and whatnot kinds of physical conditions so combine together that the bullet disintegrates into a harmless gas of its constituent atoms and molecules at just the safe distance from you (and these atoms and molecules also disperse rapidly, even while obeying the c limit). Such an outcome is extremely improbable, but, hey, with MWI, it _necessarily_ and _actually_ happens in some or the other universe. Notice: the bullet-disintegration and all is a purely physical effect; it's purely inanimate behaviour; consciousness has nothing to do with it. (b) Another possibility is to say that for every conscious choice, both the choice-path actually taken and all the alternatives to it begin to exist in the correspondingly spun-out parallel universes. Thus, every act of free will also spins off another _physical_ universe. Subtly implicit here is a physically active and causal role that consciousness plays in splitting the universes. The loving, jolly, Santa Claus-competing (and in fact Santa Claus-bettering) version of Hitler requires (b)—it involves his free-will choice. However, for the Jews (and, logically, everyone) to survive Hitler's army (and, logically, all armies) despite bullets and nukes hurled at them, it is merely sufficient that (a) holds. My question is: Do the MWI advocates mean version (a) or (b) of their supernaturalistic outpouring aka "interpretation" or "theory"? Nex #44: While at some level I share your skepticism about MWI, there are two important things I found missing from your comment.
The first is the number of times in the history of physics that enormous discoveries have been made, simply by people trying to keep their equations as clean and simple as possible, and then taking completely literally what the equations said. Some famous examples include: the discovery of quantum mechanics in the first place, Dirac's discovery of the relativistic wave equation and his prediction of antiparticles, the prediction of black holes and an expanding universe from GR, Gell-Mann's discovery of quarks… Rightly or wrongly, the MWI proponents consider themselves in that same tradition. Second, I'm not sure exactly what you mean by "the ensemble interpretation." Do you mean Copenhagen? If so, I'd consider it less an "interpretation" than a decision to treat quantum mechanics instrumentally, and simply not to ask certain questions. Foremost among those questions is the following: what happens if you try to put a human being such as yourself in a coherent superposition state, as the laws of unitary evolution seem to allow? If you say that nothing much happens—i.e., quantum mechanics continues to hold, another person could observe the interference pattern revealing that you were, indeed, in a superposition of two different mental states, etc.—then I'd say that MWI, or something like it, has basically been vindicated by experiment. Or in other words, any remaining debates at that point are semantic: I'm willing to say that the main substantive question at issue would have been resolved in favor of MWI. This then leads to the core of my position: that in order to kill MWI (which would, of course, be a great scientific advance!), it's necessary and sufficient to explain what's wrong about the idea of putting a conscious being into superposition. I suspect that this position makes some people uncomfortable, since it reminds them of something they'd rather ignore: that the real reason why the quantum interpretation debate is so contentious, why it induces such a feeling of vertigo and of little or no progress being made, is that the even bigger debate over consciousness and the mind/body problem is always lurking in the background. Scott – Are you omitting the (purported) problems about the intelligibility of having probabilities without uncertainty or frequencies because you think there's no problem, or because you think they're not an argument against MWI but only against the completeness of its current state? Scott, during your talk [very interesting, but beware of white socks :-)] you proposed a "Turing test" for free will. Namely, you proposed the revised question "Is it physically possible to build a machine that, given some human's environmental stimuli as an input feed, predicts that human's future choices to any desired accuracy and arbitrarily far into the future – at least in the probabilistic sense (…)". Have you considered a diagonalization argument? If this theoretical machine were computable, then humans could simulate it and base their decisions on defeating the machine; thus the machine cannot exist. And no, I don't find it likely that future "development" of MWI is going to clarify this issue: it is what it is! In particular, if we agree to talk in this way about a probability distribution over various copies of yourself in the different Everett branches, then lots of well-known arguments (Gleason's theorem, Zurek's "envariance" argument, etc.)
make a strong case that, given the framework of Hilbert spaces and unitary evolution, the distribution must be given by the standard Born rule and not some other rule. So the question "why the Born rule?" is not one that keeps me up at night; and at any rate, it certainly isn't a special problem for MWI (one could just as well ask it in any interpretation). As I see it, the real questions are: do we want to accept the view on offer, involving indexical uncertainty over different copies of yourself in different branches of the wavefunction? Are we forced by the evidence to accept that view? Is there any sensible alternative? And I think those questions, in turn, ultimately hinge on the empirical question of whether it is, in fact, possible to place a conscious observer into a coherent superposition of mental states, or whether there's a not-yet-discovered obstruction to that (see my comment #50). Scott #51, if I understand correctly, are you saying that there are no problems with the introduction of probabilities and the Born rule in MWI? For example, why not use the amplitude to the fourth power, or, better, a linear combination of the second and fourth powers with a small parameter in front of the fourth-power term? Please do not cite the Gleason theorem; it simply says that a world with such a law would be a terrible place. Jiav #52: Sorry about my unfashionable attire; I had to pack in a hurry! Furthermore, even if you can "merely" be predicted by a black box to which you don't have access—the predictions becoming invalid if and when you do get access—that already seems to me to raise all the bizarre consequences for personal identity that I mentioned in the talk. For example, provided only that you don't open the box, such a box would suffice for making backup copies of yourself or for faxing yourself to Mars! And actually, even if you did open the box, my definition of "prediction" allows the predictor to get access to whatever you're seeing, hearing, touching, etc. as a continuous input feed. Because of this, the predictor wouldn't need to predict its own future behavior, which would indeed generate a contradiction. Instead, it would "merely" need to predict your behavior as a function of whatever you learn on opening the box. What does a parallel universe even mean? How can universes split and conserve energy? How can separate universes interfere? And it seems very natural to describe such an evolution in terms of a "splitting" into "parallel worlds." But if we don't know exactly what that means, then in some sense that's our problem, not the theory's! Because MWI really just means "unitary evolution and nothing more," it follows that MWI can't fail for some simple technical reason like violating the conservation of energy. The laws of physics are as happy as a clam in this picture! 🙂 If there's any obvious problem, it's that—much like with turning yourself into a clam, in fact!—we've arguably bought "happiness" only at the cost of erasing our actual experiences, of ignoring the "we" who are happy or sad and who came to know about quantum mechanics in the first place. I was referring to the particular implication of quantum-mechanical linearity that an observer (me, for example) could exist in a coherent superposition of different mental states, which states could then interfere with each other. That's about as radical a revision to my concept of personal identity as I can imagine, and it's not a step that I personally want to take unless I'm forced to take it.
Hence my comments above (and my remarks in the OP) about the possibility of a yet-undiscovered obstruction to carrying the requisite experiment out.
Predictor: Hi Romney, I predict you'll behave as a jerk.
Human: What?! That's rude! And how could you know??
Predictor: Well, have a look at this simulation.
Human: Dude! That's unfortunate but you're right, that's obvious from the math. I can't argue, but… wait a minute, why do you say it to me? Don't your computations stipulate I should not know this result?
Predictor: Well, that's the problem. I'm a bit ill at ease that my prediction is somewhat restricted. But look at this new simulation! Now that you know your determinism, it is clear you'll in the future act as a decent being. And, cherry on the sundae, this prediction now doesn't change when you know it. So my prediction is now perfect in any case: you'll be kind and have no free will!
Human: Let me have a look… yeah, true, again one can't argue with an equation, and the solution is stable upon me knowing it. It seems being kind and having no free will is what I'll choose from now on. Thank you so much!
Predictor: No need to say that. I predicted it, you know.
if I understand correctly, are you saying that there are no problems with the introduction of probabilities and the Born rule in MWI? (2) leaving aside the interpretation debate, the problem isn't nearly as bad as many people seem to think. For we really do have detailed arguments explaining why you can't change the Born rule even a little, without turning QM into a pile of crap (allowing superluminal signalling, instantaneous solution of NP-complete problems, etc). How much more "justification" for a fundamental rule of physics do you want, or can you reasonably expect? For more about this, see my paper Is Quantum Mechanics An Island in Theoryspace?, or Quantum Computing Since Democritus Lecture 9. Great post, Scott! Here is something I truly don't understand in your approach. Why does an obstruction to having an observer like you existing in a coherent superposition of two mental states necessarily violate the linearity of quantum mechanics? And, more generally, why isn't it possible that such an obstruction be discovered within the framework of quantum mechanics? Thank you for the explanation, Scott, but I think that in some other interpretations we may simply introduce the Born rule either as an axiom or as experimental evidence. Scott: how does the fact that we can't change the Born Rule give any plus points to MWI? My skepticism of the MWI(s) having any scientific content whatever led to Chris Fuchs buying me lunch at the most recent APS March meeting. As a PhD student, I find that a standard of success that is difficult to dismiss. … and if we want to argue parochialism, well, if one takes the Schrödinger Equation as describing the actual dynamics of observer-independent, ontic stuff — of ψ-flavoured goop — then saying that the same dynamical rule has to hold for the 96% of the universe we can't experiment upon because it's dark matter or dark energy or His Dark Materials — I find that astonishingly parochial. Hey Scott, I just want to remind you that a year has passed since you last held a 24-hour-ask-Scott-anything event. Have you scheduled this year's question day? And will you commit now to answering questions continuously for the entire 24 hours, rather than wimping out like last year and going to bed for a while?
The ensemble/statistical interpretation is the minimal interpretation: it states that the wavefunction is simply not applicable to individual systems. Ok, so we have a nice simple equation, but the problem is that it doesn't illuminate anything; on the contrary, it obscures the problem with needless metaphysics. To see why, let's say we do a trivial experiment: I have a timer linked to a detector, and two unstable atoms; the first decay starts the timer, the second decay stops the timer. After running this experiment I have some measured value for the time between the two decays – real, solid information, stored in bits on my computer – and I want to know where this information came from. Can MWI tell me anything intelligible about this? I don't think so; although it supposedly does away with the measurement problem, to me the problem is still there, only now it's not "why does the wavefunction collapse to this value" but "why is my consciousness entangled with this value / why did it choose to be on this branch of the hyperuniverse." I certainly don't see this as an improvement. What benefit does said unitarity offer us here? I can't see any. However we dress the question, the truth is we simply don't know the answer; we don't know where this information comes from. Invoking MWI doesn't make the problem any better; on the contrary, it entangles it with another famous thorny problem – that of consciousness – which is precisely the last thing we should want if we were serious about trying to solve it. Nex #67: OK, then I'd call the "ensemble interpretation" not so much an interpretation as just a restatement of the problem, or better, an attempt to divert attention from it. If "the wavefunction isn't applicable to individual systems," then how should we describe the state of an individual system? Even more importantly, what is it that comes in and stops the Schrödinger equation from applying at the level of measuring devices or brains? Personally, I tend toward the agnosticism that you yourself seem to express in your last paragraph: I'd say there's something deep about measurement that we have yet to understand. If I have to account for (or rather, explain away) measurement using existing concepts, then I think MWI is, as Steven Weinberg once said, "like democracy: terrible except for all the alternatives!" But what I really hope for is some new discovery that will change the outlook entirely. Hey Scott, I just want to remind you that a year has passed since you last held a 24-hour-ask-Scott-anything event. Have you scheduled this year's question day? Alright, alright, I'll do it this weekend if logistics permit! And will you commit now to answering questions continuously for the entire 24 hours, rather than wimping out like last year and going to bed for a while? how does the fact that we can't change the Born Rule give any plus points to MWI? It doesn't. I was simply pointing out that the "why the Born Rule and not some other rule?" question can be asked in any interpretation, so that to whatever extent MWI doesn't answer it, that doesn't seem to me like a special problem for MWI. That was a wonderful comment; sorry for taking so long to respond to it! Thanks so much for the informative comment, and sorry for the delay responding! I wasn't talking about the separation of the actual object's wavepacket, but about the separation that would theoretically be induced by that wavepacket separation in the state of the gravitational metric.
As you probably know, Penrose believes in a gravitational "objective reduction" process, and as a criterion for when the "objective reduction" takes place, he's proposed that it happens when the mass/energy distributions in two components of a superposition become different enough that (loosely speaking) "their gravitational fields differ by one Planck length or more." And as I understand it, testing that and related ideas was Bouwmeester's original motivation for the experiments you're working on—though I'm sure you could tell me more about how far you are from starting to test Penrose's quantitative conjectures! Wait, I thought you had an argument against MWI using QC, namely the fact that it cannot be used to provide easy solutions to hard problems. I'd consider [Copenhagen] less an "interpretation" than a decision to treat quantum mechanics instrumentally, and simply not to ask certain questions. That's rewriting history. Everyone in the 20s, pro or con (except Bohr), agreed that what Bohr said was that consciousness causes collapse. Douglas Knight #75, could you please source this affirmation? Furthermore, while a scientific theory can't strictly be held responsible for its misinterpretations, if the many-worlds imagery consistently leads people to grievously-wrong ideas about how a quantum computer would work, then to my mind, that's a strong argument against using many-worlds imagery when popularizing quantum mechanics. On the other hand, I've also said before that perhaps the strongest argument in favor of MWI is that taking it seriously led David Deutsch to think of quantum computing. Could you give me some sources for that? Does the notion of superdeterminism remove the need for MWI? And if so, is there a good reason why this is not an explanation that receives more attention? Why does an obstruction to having an observer like you existing in a coherent superposition of two mental states necessarily violate the linearity of quantum mechanics? And, more generally, why isn't it possible that such an obstruction be discovered within the framework of quantum mechanics? We've had this discussion many times in the related context of quantum computing! In standard QM, there seems to be no principled obstruction to creating states like α|0⟩|You_0⟩ + β|1⟩|You_1⟩. Hence, if there is such an obstruction, then something new has to be discovered that would radically change our understanding of QM. As explained in comment #12, I admit the possibility that the new discovery might be one that would somehow leave QM's unitarity and reversibility "formally on the books," while also giving a fundamental reason why the reversibility could never be realized in certain experiments like the Wigner's friend one. But if so, then I'd personally regard that as a revolution in our understanding of QM every bit as far-reaching as a flat-out dynamical collapse mechanism. So now one can ask: what's it like to be run backward in time? Does it feel similar to being run forward in time, except that now your subjective experience goes "the other direction"? This has a similar flavor to various "classical" conundrums: "what's it like to have an exact duplicate of you created on Mars, while the original remains on earth? when you wake up from the brain scan, should you expect to find yourself on earth? On Mars? On one or the other with 50/50 odds? Somehow on both?"
Or: "can 'you' be brought into being by a colony of termites simulating all the neurons in your brain by their wiggling motions—taking, let's say, a thousand years to simulate each millisecond of neural activity?" Of course, the similarity to these old chestnuts hardly makes the Wigner's-friend one less perplexing! Sorry for the delay! It's a good question. If the Hilbert space is finite-dimensional, then certainly the number of "universes" will also be finite. And there are indications today—specifically, from the dark energy together with the holographic principle—that the total Hilbert space accessible to any one observer has a "mere" ~e10^123 dimensions. So as long as you only care what happens inside our causal horizon, you should be able to get away with only finitely many universes! If you also want to include what happens outside our causal horizon, then the answer would depend on whether the universe is spatially infinite or not, which no one knows (or possibly will ever know). But notice that, if the universe is spatially infinite, then you already have infinity even without MWI! In that case, MWI would "merely" take you up from Aleph0 to 2Aleph0. Ockam's razor is a useful principle for choosing our best mathemtical model of reality, and to that extent, no doubt that a model without any 'collapse' or 'measurement' primitive is better. Now come the problem of interpreting the model (what exactly do we mean by 'exist', what is a viewpoint, etc.). Unfortunatly this problem cannot be express in mathematical terms and I doubt that Ockam's razor is of any help here. You seem to be commited to a specific metaphysical interpretation akin to mathematical platonism, which is defensible, but not necessarily the sole one. Relational interpretations use the exact same mathematical model, but with a different interpretation of it. Oh I see. Now I see what you meant there. Ok. Anyway, thanks for clarifying the viewpoint you had while writing that line of yours. In this context, a quick Google search yielded this nice discussion: http://physics.stackexchange.com/questions/29740/does-the-hilbert-space-of-the-universe-have-to-be-infinite-dimensional-to-make-s . I mean, the first paragraph of Neumaier's answer. Two main points: (i) the relativistic horizon is not the only consideration here. (You call it the "causal" horizon.) (ii) The "spatial" space i.e. the real Eucledian space R^n may be finite dimensional with n being a finite integer, say <= 3, and yet the Hilbert space be infinite dimensional, e.g., the space L^2( R^n) defined by a square-integrable complex-valued function defined on that R^n. I don't think "introducing infinite quantities" is something to be worried about in the first place. Firstly, math can handle infinite quantities just fine. Sure, usually when you encounter an infinity in physics this indicates you messed up, but there's no a priori reason the universe has to work that way. Secondly, if you're trying to count the worlds in MWI, you're focusing on the wrong thing. Thirdly, infinite stuff seems much less philosophically worrying than infinitary stuff (e.g. real numbers), but physics already has plenty of that and hardly anyone complains. Scott, most (*) of your comments on this thread were even more insightful than the original, wonderful post. Let me humbly suggest collecting them into their own follow-up post for increased visibility. #80: Running backward in time feels exactly like running forward in time. 
The only problem is, if you run backward in time, then your input and output systems will have a hard time properly interacting with an outside world running forward in time. Are you saying you suspend judgement on MWI because it's falsifiable? About those termites, what's so absurd about it? Can we not say that's what our neurons are? As for the scope of time… This reminds me of how Newton thought gravity's action over vast distances seemed absurd. But distance is relative. Is the action of forces between atoms any more or less absurd? Brian Pepper #40, regarding Penrose-style decoherence, please let me commend to your attention the IBM/MRFM group's quantum spin-driven cantilever experiment that is described in Rugar, Budakian, Mamin and Chui, "Single spin detection by magnetic resonance force microscopy" (Nature 2004). It's pretty easy to calculate concretely (full details below) that the spin-up versus spin-down trajectories of the IBM group's 92-picogram cantilever were sufficiently separated in space that a naive Penrose-style gravitational decoherence mechanism would have reduced the IBM experiment's signal amplitude by a factor of 1/3, and the signal-to-noise ratio by a factor of 1/3^4, relative to the observed experimental values. This constitutes (AFAICT) rather strong experimental disconfirmation of at least some Penrose-style decoherence mechanisms! Among quantum spin microscopists it is widely appreciated that the IBM/MRFM cantilever is (by far) the largest mass ever observed to be in a quantum superposition of states. I happen to be personally conversant with these figures, because the IBM group asked me to compute them *prior* to committing to the experiment, for the reason that Penrose-style decoherence (if it were present) would have rendered the cantilever dynamics classical, and thus the experimental SNR unobservably low — yikes! — and so the IBM group desired some measure of theoretical assurance that Penrose-style decoherence would *not* be observed. Conclusion: Penrose-style quantum decoherence would have had substantial adverse consequences for some varieties of quantum spin imaging applications, and so it is fortunate (from an engineering point-of-view) that this quantum decoherence is *not* observed in practical quantum spin imaging contexts. Caveat: The above analysis, and the above conclusion, are both over-simplified pretty dramatically, in order to fit within the confines of an entertaining and thought-provoking web-log post. A more circumspect conclusion is that the IBM data verify that a mass-spin system can preserve internal quantum coherence over space-time separations that a naive Penrose analysis would suggest should be decoherent, and this observation has significant practical implications for the capabilities of quantum spin microscopy (for example). I'm always un-nerved by the failure to distinguish between QM and QFT. Yes, QM correctly describes a handful of particles that have been isolated from their environment. But decoherence requires full QFT to correctly describe it: it's an "MBI" – a "Many-Body Interaction". And we already know that the mathematics of MBI is full of surprises. Let me provide a pro-dynamical-collapse argument, by analogy. It's based on the asymptotic equipartition property (http://en.wikipedia.org/wiki/Asymptotic_equipartition_property). So: if you flip a coin N times, there are 2^N possible outcomes, right? If you take the limit N → ∞, you get how many states? Ahem, 2^(NH), where H ≤ 1 is the entropy.
(H = 1 for a perfectly fair coin, but H < 1 in general.) A somewhat counter-intuitive result. What happened to all those other possibilities? Their probability shrank to zero in the limit. So, by analogy: replace "coin" by "quantum trajectory" and replace "2^N" by "many interactions". Perhaps many of the possibilities simply shrink to zero probability. So, writing the analogous expression for QFT, we have the partition function Z = ∫ Dφ exp(iS/ℏ) (with S the action), and we know that we experience the classical world located at the peak of the stationary phase above, which is the classical action. And we know that there are other trajectories all around us, at a distance ℏ away from us. That's what we know for sure. From what I can tell, MWI argues that there are other trajectories, very very far away, that contribute to Z. Really? We don't ever seem to need to take these into account when computing things with Z. No textbook ever says "gee, this is where the classical action bifurcated into two, and one must be careful to sum over both branches." Because I think that most people envision MWI as saying "there are two worlds, and the cat lives in one and dies in the other", which means that the CLASSICAL action had to bifurcate into two, to follow these two possibilities. But none of our practical calculations ever have to treat this case. Why not? Perhaps because it never happens? Because I think that most people envision MWI as saying "there are two worlds, and the cat lives in one and dies in the other", which means that the CLASSICAL action had to bifurcate into two, to follow these two possibilities. But none of our practical calculations ever have to treat this case. Why not? Perhaps because it never happens? Interesting, but I don't see how your suggestion can be made coherent (no pun intended), unless QFT can somehow allow an actual dynamical-collapse mechanism, or some other fundamental source of irreversibility. For in the usual view of both standard QM and QFT, "the reason none of our practical calculations ever have to treat this case" is extremely simple: it's because none of our practical experiments ever involve recoherence between different basis states of a macroscopic object like a cat! If we could do such experiments—and nothing in the known laws of physics seems to prohibit our eventually doing so—then our practical QFT calculations would need to include the macroscopic interference term. To reiterate, I agree that it would be wonderful if something killed off the macroscopic interference terms, but whatever that something was, it would have to be profoundly new; it can't arise just from pushing around existing QFT machinery. Thanks for the reply about superdeterminism. Another quick question: Do you believe that MWI allows for so-called "nightmare states" to exist? That is, states where extremely low-probability (and potentially awful) phenomena are occurring. Anon #91: If you believe MWI, then it's straightforwardly true that there would exist such "nightmare states" … as well as utopian dream states and everything in between. From an MWI standpoint, the question is not about the "existence" of these states but only about their relative probabilities. Linas Vepstas #89, please let me both appreciate your thought-provoking post, and thank you for it.
(1) The physical reality of vacuum field fluctuations is well-established.
(2) The dynamical validity of fluctuation-dissipation relations is well-established.
(3) From fluctuation-dissipation relations directly follow fluctuation-dissipation-entanglement relations that are relevant to quantum computational dynamics.
(4) The role of fluctuation-dissipation-entanglement relations in quantum computation is *not* well-analyzed (at present).
From a purely practical point-of-view, fluctuation-dissipation-entanglement relations enter significantly in quantum spin microscopy … for example, our own group's "The Classical and Quantum Theory of Thermal Magnetic Noise, with Applications in Spintronics and Quantum Microscopy" (Proc. IEEE, 2003) is yet another example of the ubiquity of Deutsch's DUDES duality between applied and fundamental research. More broadly, as quantum computing experiments become more sophisticated, the field-theoretic renormalization effects that enter prominently in cavity QED are moving to center-stage. This is grounds for theoretical humility, because even after very many decades of work by very many, very ingenious, quantum researchers, we are far from possessing an understanding of quantum renormalization effects in field theory that is simple, natural, and universal. And it is good news for young researchers too, in the sense that naturalizing and unifying the quantum renormalization-related ideas that are already extant in the theoretical and experimental literature is a reasonable path for launching a substantive research program, one that does *not* require startling/transformational insights to begin. Thank you for a thought-provoking post, Linas Vepstas! Scott, since you are a computer scientist, I was wondering what your thoughts are on the paper I linked to earlier? From an MWI standpoint, the question is not about the "existence" of these states but only about their relative probabilities. It is important to note that these worlds are in every sense of the word inaccessible to us. From an MWI standpoint they exist, but when dealing with issues of ethics and such, it is better to treat them as nonexistent. If someone's moral decisions depend on the existence or nonexistence of such worlds, then one most probably commits a category mistake. Hi Scott, thanks for the answer (#80) and also for the reference to #12. I also think that understanding decoherence, and how fundamental it is, is probably at the crux of matters, and I appreciate your point of view that a theory in which decoherence is fundamentally irreversible, even if it keeps all the unitaries of QM intact, would be as surprising to you, or almost as surprising, as a theory that violates QM. Bottom line: His basic observation, about the gap in the usual argument from quantum computing to the existence of parallel universes, is one that I agree with and in fact made independently in Why Philosophers Should Care About Computational Complexity (see the section on quantum computing). However, I wouldn't go nearly as far as him in saying that MWI should be rejected "tout court." I think MWI is a perfectly legitimate way to think about quantum computation if you want to. In the QC context, the question is not so much whether MWI is "true" or "false", as whether it's more or less useful as a mental aid! In particular, I see no problem of principle in describing measurement-based quantum computation in MWI terms—indeed, an MWI true-believer like Deutsch would have no difficulty arguing why the "parallel universes" are no less essential for MBQC than any other kind of QC.
What MBQC does is simply to provide a way of implementing QC that makes its "exponential parallelism" aspect less noticeable and salient—which I suppose has both good and bad aspects. About those termites, what's so absurd about it? Can we not say that's what our neurons are? One place where I think it gets difficult is where you start asking what exactly a simulation of your brain needs to do in order to count as a simulation. For example, consider an astronomically-sized lookup table, which caches, for every possible question you could be asked of (say) 1000 bits or fewer, the answer you would give to that question. Does that count as a simulation of you, sufficient to "conjure your conscious experience into being" or whatever? If so, then does it matter if anyone actually consults the lookup table, or if it just sits there inert? Gil, what you say is reasonable, and yet the magnitude of this undertaking is daunting. For if by "decoherence" we understand more specifically "entropy-increasing dynamics", and if by "entropy-increasing dynamics" we understand yet more specifically "entropically spontaneous transport processes", then our quest for fundamental understanding leads to a very recent and eminently practical arxiv preprint by Jacquod, Whitney, Meair and Markus Büttiker, titled "Onsager Relations in Coupled Electric, Thermoelectric and Spin Transport: The Ten-Fold Way" (http://arxiv.org/abs/1207.1629, 6 Jul 2012). This preprint's "Ten-Fold Way" analysis systematically catalogs a reasonably comprehensive suite of fluctuation-dissipation relations, of which the fluctuations are (obviously) of fundamental relevance in scalable quantum computing error-correction, and their Onsager-associated dissipative currents are (obviously) of practical relevance to a broad class of sensing, separation, and storage technologies. Here we encounter yet another (vivid) example of Deutsch's DUDES duality between fundamental and applied quantum research. Scott #98: an old Chinese proverb says "all that matters for us is that the lookup table would be finite, by the assumption that there is a finite upper bound on the conversation length (…) From these simple considerations, we conclude that if there is a fundamental obstacle to computers passing the Turing Test, then it is not to be found in computability theory." It still seems hard to dismiss this very simple and powerful point! But you're not actually wondering about intelligence and the Turing test. You're wondering about whether the Turing test is enough to prove consciousness. That is a hard problem, the Chinese say. This has to be some of the most brilliant writing from Scott ever; the clarity and abundance of insights in the original post and the comments are amazing. The mention of a relation between entropy and the perception of time really piqued my curiosity; the only idea about this I was aware of is the fact that the direction in which time is perceived to advance is the direction in which entropy increases. Indeed, were there some interesting insights, ideas, speculations, or discoveries that you can tell us about? Scott #70, maybe I misunderstand Black Stacey #64, but Pullman indeed very briefly touched on a possible relation between dark matter and MWI in his fantasy novels.
Yet, when one of my colleagues had to make an intensive bibliographic search of the scientific literature (because of a grant they got a few years ago), he claimed that he did not see any article related to a link between MWI and dark matter (and venomously suggested that I write the first one). So to caricature your thought process – and that of many MWI opponents – the *reason* that you dislike MWI is that it gives you a distressing philosophical problem to do with personal identity, but you hope that some as-yet-undiscovered physical mechanism to do with quantum gravity will come along and defeat MWI just enough that existing objects like lasers and quantum computers still work for basically the same reasons we think they do at the moment, but that your brain is saved from being put into a superposition. Or the possibility that perhaps there *is* some length-scale-based cutoff above which superposition doesn't occur, but that the human brain comes in below the cutoff? This may sound unlikely, and forgive me if I am being stupid, but wasn't there an experiment not long ago which put a millimetre-sized object into a superposition? Reality has no obligation to be convenient for us humans. My knowledge of the beliefs of the pioneers of quantum mechanics is largely based on this article which used to have an ungated copy here. In that case, MWI would "merely" take you up from Aleph0 to 2^Aleph0. It doesn't even do that. Just because QM exponentiates finite numbers doesn't mean it exponentiates infinite cardinals. I don't think there is any sense in which continuum QM has more than countably many parameters. The Hilbert spaces have countable (Hilbert) bases. There may, in some sense, be uncountably many worlds, but they are linearly dependent. An automatic "Interpretation of Quantum Mechanics" paper generator. Or maybe a new Turing Test, call it the "quantum-Turing Test". Scott #90: "it can't arise just from pushing around existing QFT machinery." But why? This is an assertion, and it's exactly the assertion that I'm attacking. The avenue of attack is to claim that quantum measurement is inherently a many-body interaction. So: for a quantum system with just a few degrees of freedom, a few bra-kets here and there, one is forever forced to believe in MWI (or to argue about it). And also: for an isolated quantum system, it is valid to factor it into a quantum piece and a rest-of-the-universe piece. However, once the system starts interacting, then the factorization cannot be properly done, and doing so invariably leads to a certain hand-waving. As long as one thinks one can factorize, then, yes, believing in MWI seems inescapable. John Sidles: Comment #93: thanks. I never actually think of fluctuation-dissipation; it has a certain air of semi-classical approximation which is dangerous to flirt with when arguing such a contentious topic. 🙂 I really am always going back to the QFT partition function as the fundamentally correct description: the trick is how to calculate a many-body (N → ∞) interaction with it. Actually, I don't think that's the thought process of many MWI opponents. Most of them say that MWI is absurd pseudoscience, it violates Occam's Razor and/or conservation of energy, it can't explain where probabilities come from (and this is a special problem for MWI), etc.
Or, if they're like many high-energy physicists (Tom Banks and Lubos Motl are two good examples), they seem to agree with MWI on every substantive question, yet hate the language of MWI—considering it a superfluous and wrongheaded attempt to translate the perfectly-clear mathematical formalism of QM into florid sci-fi imagery. As they see it, we should accept that everything in the world is quantum-mechanical, and even that we ourselves could be placed in coherent superposition, but not use phrases like "parallel universes" that have nothing to do with the calculation of scattering amplitudes. Seriously, I'd give the following argument: we know that something enormous has to give when quantum mechanics is brought together with GR, since we don't yet really know what it means to have superpositions over different spacetime geometries. And while you might disagree with me here, I'd say it's also clear that there's something huge we don't understand about "self-locating belief," personal identity, and yes, free will and consciousness. For example, if you want to say that the mind is just a computer program running on meat hardware, that's fine … but then what happens when the program gets copied? Assuming you're "merely" self-interested, should you agree to a perfect computer simulation of yourself undergoing terrible simulated tortures, in exchange for millions of real dollars? Either possible answer can be made to sound bizarre from a traditional scientific rationalist standpoint, but presumably one of the answers is right! Furthermore, thought experiments like Wigner's friend (#80) seem to indicate that there's some connection between these two large sources of confusion in our existing scientific worldview. While I disagree with Penrose's proposed solutions to the problem, I do now agree with him (as I didn't use to) about the existence of a problem! It looks to me like we have something profoundly new to learn, just as philosophically important as (say) quantum mechanics or evolution were. I don't quite understand your position—but in any case, the reason why you can't "decisively" suppress macroscopic interference within the framework of QM is extremely simple, and extremely independent of details. Basically, it's that QM is a reversible theory. If a unitary transformation U represents a possible evolution of a physical system, then U^-1 also represents a possible evolution. And, therefore, if decoherence can take place, then recoherence can also take place, by simply inverting whatever unitary transformation caused the decoherence. The only reason why we don't see that happen in practice is the Second Law of Thermodynamics: the same reason why we don't see omelettes unscrambling themselves, ash unburning back into wood, etc. But all those things are just statistical consequences of the initial conditions, and have nothing to do with the dynamical laws themselves. With fine enough control, you could in principle unscramble an omelette, and you could recohere a decohered macroscopic system. I think that any argument to the contrary, as you and John Sidles seem to want to make, would need to confront the reversibility of quantum mechanics directly—either by denying it outright, or else by explaining why other aspects of fundamental physics can prevent the reversibility from being "realized," not just in practice but in principle. I don't see any "sneaky" way to avoid talking about this!
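To make the reversibility point concrete, here is a minimal toy sketch (my own illustration, not anything proposed in the thread; the two-qubit setup and the CNOT standing in for the "measurement" interaction are assumptions chosen purely for simplicity). "Decoherence" is modeled as a CNOT copying which-path information into an environment qubit, so the system's reduced density matrix loses its off-diagonal (interference) terms; applying the inverse unitary brings them back.

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)            # system qubit in a superposition
env = ket0                                   # environment/pointer qubit starts in |0>

psi = np.kron(plus, env)                     # joint state, ordering: system (x) environment

# CNOT with the system as control and the environment as target:
# it copies the system's basis-state label into the environment (a "which-path" record).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def reduced_system(psi):
    """Trace out the environment, returning the system's 2x2 density matrix."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices (s, e, s', e')
    return np.trace(rho, axis1=1, axis2=3)                # sum over e = e'

print(reduced_system(psi))                   # pure |+><+|: every entry 0.5
psi_dec = CNOT @ psi                         # "decoherence": record created in the environment
print(reduced_system(psi_dec))               # maximally mixed: off-diagonals are 0
psi_rec = np.linalg.inv(CNOT) @ psi_dec      # "recoherence": run the same unitary backwards
print(reduced_system(psi_rec))               # the off-diagonal (interference) terms are back

Of course, for a real measuring device the "environment" has something like 10^23 degrees of freedom, and nobody can actually apply U^-1 to all of them; that practical impossibility is exactly the Second-Law point made above, not a failure of reversibility in the dynamics.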
Our ground rule will be that it's OK to modify (standard) quantum dynamics, but it's *not* OK to tamper with (microscopic) reversibility. After all, we are all of us so fond of thermodynamics that we refuse to modify the First and Second Laws, which are founded upon microscopic reversibility. To say it more formally, the Laws require that dynamical systems are Hamiltonian flows. To say it geometrically, dynamical flows are symplectomorphisms. Once this thermodynamic constraint is in place, there's not a lot we can do (AFAICT) to modify quantum dynamics on flat state-spaces. But fortunately, there's *plenty* that we can do to pull back quantum mechanics onto non-flat state-spaces. We first apply the engineering edge of DUDES. Surely mathematicians/physicists/chemists/engineers must *already* be working on non-flat complex state-spaces? This proves to be true … and not just true, but ubiquitously & ridiculously true. The generic challenge is to craft a pullback state-space that preserves as much as possible of the "quantum / symplectic goodness" of flat Hilbert space, while enabling efficient trajectory integration on non-flat spaces that are propitious by virtue of lower dimension and/or favorable algebraic structures. Because the literature already holds (literally) thousands of examples of this method, the great challenge is to naturalize and universalize our understanding of these extant methods. Now let's apply the fundamental edge of DUDES. Surely pullback methods must be useful for discovering new physical laws (as contrasted with efficient simulation of known Hamiltonian physics)? This research is way above my pay-grade, and (seemingly) orthogonal to my technical objectives, and yet it is clear that Faddeev-Popov/BRST field quantization (as a much-studied example) preserves the congenial fiction of a flat Hilbert space only by introducing an intricate set of nonlinear interactions among a non-physical family of particles. As for the natural state-space of String/M theory, heck, don't ask me! There's enough mathematics and physics already extant to make it reasonable for physicists to be agnostic regarding Hilbert space as the fundamental state-space of Nature (or not).
1. Schrödinger's equation describes the MWI multiverse.
2. Schrödinger's equation applies at all scales, with no "collapse".
3. The MWI multiverse has physical existence.
1′. Kleene's equation describes an equivalent DFA M (which might be exponentially bigger).
2′. Kleene's equation applies at all scales, even NFAs with 10^123 states.
—but this need not entail M having anything more than mathematical existence, nothing that you "experience". Indeed in applications such as "grep" you don't want to deal with M at all—the simulation that gets actuated uses N as the data structure. My analogy might be more realistic with a quantum finite automaton in place of N—noting that various QFA models covered in this paper by Rusins Freivalds still accept only regular languages. The reason criticisms of MWI revolve around personal identity is because MWI has not yielded any objective concept of a "world". MWI believers just say "all that exists is the wavefunction, it's all in there somewhere", but are unwilling and unable to say exactly where.
So in an attempt to get MWI advocates to focus on the need to say *exactly* how their theory relates to empirical reality, MWI critics talk about conscious experiences and so forth, this always being the last resort when one tries to communicate with someone who either denies reality entirely (quantum "antirealists") or who is only willing to talk about their favorite theoretical construct (MWI believers). If MWI had a clear and objective and universally agreed answer to the question, "what is a 'world', in your theory of 'many worlds'?", this debate about personal identity would be a lot more in the background. Now anyone who knows QM knows that the theory in its essence doesn't give you a preferred basis or a natural factorization of all states, so looking for well-defined parts in the universal wavefunction is problematic. MWI advocates, when you take this line of thought, you are abandoning your responsibility for connecting your theory to reality. The theory is vague, therefore reality is vague? Reality can't be vague. Vagueness is only in concepts, not in reality. So my syllogism is: the theory is vague, reality is not vague, therefore the theory has a problem. The Michael Mensky e-print mentioned by vince #108 is indeed about the things I myself usually associate with "links between MWI and dark matter" and that caused the venomous remark of my colleague a few years ago. Indeed I have read the e-print only now – it was cross-referenced neither to quant-ph nor to gr-qc, and I can consider that as indirect confirmation of the "hard-printability" of such ideas even for people with a very strong reputation like Mensky. For me, submission of a preprint about such kind of ideas to quant-ph quite possibly could cause rejection, inclusion in a black list, etc. Ken, there is considerable literature to indicate that this postulate is attended by risks, e.g. Ford and O'Connell, "There is No Quantum Regression Theorem" (Phys. Rev. Lett. 1996). Here the point is that the algebraic properties of flat-space Schrödinger physics are so enticing as to encourage us to ignore a very considerable body of difficulties and intricacies that arise when we attempt to describe open-system dynamics within this framework. In both cases, mathematical methods were evolved that successfully swept awkward dynamical questions under the carpet. Such sweeping is a blessing in the sense that it permits practical calculations to move forward, and yet that same sweeping is regrettable in that clues to conceiving more general dynamical frameworks can be cloaked by it. That is why it is prudent to study these dynamical difficulties with two objectives in view: first, to evolve techniques that evade the difficulties in practical calculations; second, to conceive alternative dynamical frameworks that naturally exclude the difficulties (these being the twin edges of David Deutsch's sword). Can you explain this "self-locating belief" problem in more detail? Yes, to me the mind is just a program running on meat hardware (free will an illusion), and I don't see any problems with this view. What happens when the program gets copied? Same as with regular software – you end up with two programs. They will have shared memory and personality but will diverge after the split. Now if it were possible to place both copies in perfectly identical conditions they would act exactly the same, but of course in the real world there are sources of randomness which cannot be controlled.
As to the other question – assuming you're "merely" self-interested, should you agree to a perfect computer simulation of yourself undergoing terrible simulated tortures, in exchange for millions of real dollars? If you were to completely discard ethics then yes: you certainly won't feel any of those tortures. Of course the other copy would also say the same thing if asked the same question, but I don't see this as a problem. But the question is mostly about ethics, as it's analogous to subjecting someone else to such tortures for money. Only in this case that someone else is very similar to you. So what is bizarre here? And where does this interpretation of mind run into problems? Doesn't the outcome of the two-slit experiment constitute something more than mere mathematical existence? Isn't it something that we "experience"? Perhaps not, but then I would like to understand the "explanation" for the outcome — not simply that QM "predicts" the outcome, but rather an explanation of what physically in the world of experience is causing the outcome. Related to what are 'reasonable' visualizations of quantum mechanics: isn't the Schrödinger cat experiment very off-base, because interference only works for linear interactions, and dying is about as non-linear as you can possibly get, since it destroys information? If you die a second time you're still dead. Ad John #119: Thanks. I found the paper, but need hand-holding to see the connection. However, I do frequently ask people when it is clear that the entire system described by an instance of Schrödinger's equation is within our "branch", versus when not—as per descriptions of the two-slit experiment. Ad Mike #122: I did nod to what you say in my first comment #116, but I'm not convinced the "other" photon—or what is perhaps more properly described as "the" photon regarded as a "multiversal object"—has physical existence. Before getting there, my Q is whether its physical existence must follow from accepting that QM linearity holds at all scales. Ad Mitchell #117: I move from "world" to "history"—as in "consistent histories" (whose relation to MWI can be another topic)—and then try to analogize a history to a path through states in an NFA. The states of the NFA represent configurations of matter-energy, understanding that a 10^123-ish size limit applies somewhere. Ad Scott #124, you meant "isn't nearly as macabre…"? Ken #125: Yes, thanks, fixed! I guess my question is: if one assumes that the "other" photon isn't "real," what then is the explanation for the single-photon interference outcome? And, I suppose a related question is that, absent any good evidence [yet ;)] that QM linearity doesn't hold, why not prefer the MWI? The reason criticisms of MWI revolve around personal identity is because MWI has not yielded any objective concept of a "world" … So in an attempt to get MWI advocates to focus on the need to say *exactly* how their theory relates to empirical reality, MWI critics talk about conscious experiences and so forth, this always being the last resort when one tries to communicate with someone who either denies reality entirely (quantum "antirealists") or who is only willing to talk about their favorite theoretical construct (MWI believers). So, that's the reason to bring in consciousness: because if I know nothing else, at least I know that my consciousness isn't a theoretical construct (as you presumably know yours isn't!). Sure!
"Self-locating belief" refers to the issue that, even after all the "objective" facts about the physical world have been specified, you still need certain additional facts before you can make useful predictions. In particular, you need what the philosophers call "indexical" facts: among all the possible agents contained in your objective world-description, which agent are YOU? This is obviously a huge issue for MWI folks, but it's also an issue for reasons having nothing to do with quantum mechanics. For example, self-locating questions are always in the background when people argue about the anthropic principle, the Doomsday Argument, or the likelihood of extraterrestrial intelligence. The best introduction to self-locating belief I've seen is Nick Bostrom's wonderful book Anthropic Bias: Observation Selection Effects in Science and Philosophy, which fortunately is available free on the web. Personally, I don't claim that I know the right way to think about these issues: my only real goal, when this subject comes up, is to convince people who claim not to be confused that they should be confused! In my example from comment #113, where a simulation of you undergoes terrible simulated tortures, suppose instead that it's a thousand simulations of you being tortured in a thousand computers. Then even if you're completely self-interested (i.e., if we leave ethics out of it), why on earth should you agree to this? As you deliberate over this decision, shouldn't you reason that you yourself are almost certainly one of the simulations (since by assumption, they all have exactly the same mental state as the "real" you), and therefore it's overwhelmingly likely that you yourself will be tortured once the deal is agreed to? Mike #127: Well, I could try one of the theories at the bottom of Scott's notes here. Admittedly, my own objections to MWI are only "m(etaph)or(ic)al", on top of what Scott says in his philosophy paper about MWI as an explanation of quantum computing (esp. his second issue). QM is a reversible theory. If a unitary transformation U represents a possible evolution of a physical system, then U^-1 also represents a possible evolution. And, therefore, if decoherence can take place, then recoherence can also take place, by simply inverting whatever unitary transformation caused the decoherence. The only reason why we don't see that happen in practice is the Second Law of Thermodynamics:… . But all those things are just statistical consequences of the initial conditions, and have nothing to do with the dynamical laws themselves. So even if the U^-1 evolution were to "happen" (whatever that would mean), all of the observers further along in that direction would only have memories of things even further in that direction (i.e. further "pastward"). The only observers left to "remember" t = 0 are those in states resulting from the (increasing-entropy) evolution in the U direction. Admittedly, many scientists seem to adopt a methodological rule, according to which you have to avoid saying the word "consciousness" for as long as possible, instead using various proxies for it. But do you need consciousness for any of this, or merely the ability to experience qualia ( Some would argue – do argue – that you don't even need that)? Or do you mean the two to be roughly equivalent? I think there's an important difference myself. ScentOfViolets #133, I'm extremely curious: what, in your view, is the relevant difference between "consciousness" and "qualia"? 
I confess that I've always regarded those—together with "mind," "sentience," "first-person experience," and other frequent bedfellows—as totally interchangeable for these discussions. Or rather, from my standpoint, there's one mysterious thing that all these terms are trying to get at (the most precise designation I could think of was, "that which David Chalmers writes about"). So the differences between the terms merely arise from the different ways in which they can be misunderstood to mean something other than that thing. Tangential, but I really think the word "sentience" should be deprecated. It always seems to end up with people conflating consciousness with intelligence. I don't see how (assuming that were possible) them having exactly the same mental state would imply that one of them is me. The real me is the one that raises a hand when I raise my hand. Each piece of software occupies its own meat hardware; trashing the hardware of all the other copies won't do anything to me. Exactly the same as with regular software and hardware. How do you explain partial traces in MWI? The preferred basis problem? What is the coarse graining resolution for the branchings? What about branches merging due to "forgetting"? If there really are parallel worlds, how come we can only extract information from them obtainable from coherent constructive and destructive interference, and not any more? Regarding whether consciousness causes wavefunction collapse, can't we learn a bit more just within the double-slit experiment? Namely, let Alice observe which slit the particle goes through (so she will never see the interference pattern) while Bob will be watching the wall from afar, so he should have no information to collapse the particle wavefunction (assuming we isolated him from Alice and her observing device) and should see the interference pattern. But is it really possible that one conscious being sees the pattern and another one does not? Or is one conscious observer (even an isolated one) enough for the collapse? Have any experiments like this been done before? Stas #140: In the situation you describe, neither one will see an interference pattern. Alice observing which slit the photon goes through decoheres the photon, turning it from a pure state to a mixed state. Note that exactly the same would happen if "Alice" were a mechanical recording device or even just environmental noise. From the lack of an interference pattern, Bob can't conclude anything at all about whether a conscious being learned which path the photon went through—all he knows is that the "which-path" information became entangled with something in the external environment. This is all just standard QM, and yes, it's been done. Thanks, Scott, I got it! I forgot that any entanglement is enough to destroy the interference. I haven't read all the comments in detail, so this might just be a repetition of stuff that has already been discussed above. I was imagining today that there existed some kind of pan-dimensional being that could record the wave function of our universe and play it back at will. We could all very well be part of one of these playbacks. Knowing that's possible doesn't seem to preclude us being conscious, even though we have no choice but to repeat everything, including what I am writing right now, each and every time the recording is "played" again. I'm extremely curious: what, in your view, is the relevant difference between "consciousness" and "qualia"?
I confess that I've always regarded those—together with "mind," "sentience," "first-person experience," and other frequent bedfellows—as totally interchangeable for these discussions. This is all just blue-skying, of course. Given what we know of consciousness already – namely that it's an epiphenomenon that strips away vast quantities of data in preference of a pared-down inner form or narrative or whatever other descriptor you prefer – it may turn out that consciousness is in part defined by the fact it only takes notice of small parts of the wave function, those decohered 'branches' if you will. Certainly any agent cognizant of multiple branches of the wave function would not experience any form of consciousness as we know it. Qualia, otoh . . . well, that to my mind (and this is just imho, mind you) seems to be more fundamental in some sense. No, it's not a necessary aspect of MWI that the branches never again meet—in fact they will meet in the thermodynamic limit. I'm wondering what you mean here. I would think (naively, of course; I'm an algebraic geometer) that living in a cosmos dominated by dark energy would ensure this would never happen! Am I wrong in thinking that dark energy does not imply some sort of exponential increase in accessible states? All other things being equal, of course. ScentOfViolets #145: That's an excellent point. With dark energy in the picture, it's entirely possible that the branches would never again meet; thanks! But a major difficulty here is that no one really understands how to apply quantum mechanics in a cosmological context—e.g., should our Hilbert space include only the ~10^122 qubits inside our causal horizon (in which case, the dynamics on that Hilbert space wouldn't be perfectly unitary, but would include an external "bath")? Or should we include "the whole shebang," even if we have no possible way of knowing how large it is, and can't even in principle construct observables involving the stuff outside our causal region? 1) I'm an ordinary physics-monkey. I say "Ooopsie, mistakes have been made," and I toss out what I've done so far. 2) I'm a Bayesian. I say "No mistakes have been made, my subjective knowledge of the system has changed," and I toss out what I've done so far. 3) I'm an MWI-er. I say "No mistakes have been made, nothing has changed, I am simply not in the universe I thought I was in," and I toss out what I've done so far. I have a vague impression that many years ago David Deutsch explained to me that branches do not meet again just for thermodynamic reasons… Yet the definition of "branch" is rather a classical explanation of a quantum picture, and it is not clear how to define that in some complex cases. The structural aspect of the deductive logic you have is right. However, I was disturbed by the "observe" thing in there. The two are unrelated. Consciousness isn't necessary in a basic physical description/theory (e.g. a differential equation), though it _can_ participate as a causal agent in a physical system, but only in an auxiliary way (e.g. as a part of well-posed auxiliary conditions). When you swing a bat to impart motion to an initially motionless ball, there _is_ a conscious action involved in it as a cause. However, such causal participation of consciousness does not alter the basic physics of the force imparted to the ball; F = dp/dt still holds.
Moral: Your best bet to discover physics always is to use the context of the inanimate objects; it simplifies things to such a degree that physical analysis at all becomes possible. If the physics you get this way is right, then the use of consciousness in the physics theory will automatically take care of itself. Think about it. The last time you walked through a door, you _were_ diffracted in the process. Would you therefore stay indoors? … Come to think of it, even entertaining such a possibility is meaningless. You would be diffracted during any movement, any motion whatsoever. In fact, you would be diffracted even if you hypothetically were to remain perfectly still and some one thing—just one thing—in the rest of the universe were to move relative to you… All in all, flying through a diffraction grating isn't all that terrifying. So you are saying Rugar et al. produced a coherent wavefunction representing a total mass of 92 pg with a width of roughly 45 pm which was demonstrated to obey linear quantum mechanics for 760 ms? That would indeed falsify Penrose's conjecture as well as certain other theories predicting gravitationally-induced nonlinearities. If so, your analysis seems deserving of its own peer-reviewed paper. Jim, there are multiple ways (of course) to unravel the quantum trajectories of the IBM experiment, all of which predict the same ensemble of experimental records (of course!) and only *some* of these unravelings invalidate Penrose's conjecture (of course!!). Closing all of the loopholes that are naturally associated with this question will take more than one generation of physicists (if the history of Bell inequalities is any guide). This latter is the question that I mostly think about, because (1) it has greater practical applications, and (2) its philosophical baggage is lighter-weight, and (3) DUDES duality suggests that in the long run these two questions will be answered within a common framework, and so it may not matter much which gets worked on. (1) I think the experiment mentioned in #47 is closely related to the question. (3) the question itself is about a theory of "proper quantum gravity" that we still do not have. @Alexander: Can you elaborate on (2)? Scott #154: it is the resulting impression after multi-year discussions with people in the Friedmann lab. Just try yourself to explain precisely what you mean by that term. Scott, maybe I am wrong about your idea on SE, but it seems to me that you are looking for an example in which inversion of the evolution is prohibited for some reason, and in such a case gravitation is indeed overkill.
And you also see that it's not the "negative time direction" in which entropy increases but the "closer to initial" direction. Even more interestingly, there is no observational distinction that can tell you whether you're in the "increasing time" or "decreasing" time direction — it's purely a matter of labels whether you say that "the large ball moving left" is the positive time or negative time direction. Certainly, people can have false memories of past events — indeed, that's what dreams are — but for them to stably persist (i.e. be "true" memories), they have to be the result of an increasing-entropy process. Sorry if I have offended you in the above comment by addressing you as @Silas. I had no idea of the naming system you seem to follow. I have not read Drescher, nor, going by the Wiki page about him (http://en.wikipedia.org/wiki/Gary_Drescher), do I intend to even browse his book(s). Feel free to briefly point out if the description in the Wiki page about him is inaccurate (by Scott's permission, here, or otherwise, direct to me (no spaces etc): a j 1 7 5 t p at yahoo=dot * co#dot+in). And, no, your above explanation was not orthogonal to consciousness—not inasmuch as you specifically said memories, and not, say, photographs. However, I won't press or debate this point too much. Apart from the broad aside I noted above, it doesn't interest me much. I don't think the foil for false memories is the stably persisting ones. Stably persisting memories could easily be false ones. However, yes, I would agree that, speaking in broader terms, life is a process in which the entropy of the universe increases. Yet, I have no particular opinion (nor any particular interest) whether only the memory-forming sub-part of the life processes, taken by themselves, would necessarily increase the entropy of the universe or not. Scott #98: Hi Scott, I've thought about your question. This is off-topic, but I hope it's okay for me to comment as I've been thinking about it for a few days. In addition to passing the Turing test in the sense of straightforward synchronous input-output, I think an entity would also have to pass an "asynchronous" Turing test. That is, it would have to demonstrate its own behaviour that is not directly linked to its previous input. In human terms I am referring to creativity in all of its forms. When an author writes a novel, he or she isn't doing so in response to one specific input. That seems important to me. What do you think? (1) We may not find "which-way" information by measuring the field of a single electron, but we may resolve the problem by measuring the "static" gravitational field of a macroscopic body near us even with present-day technology. I think the Preskill model is rather related to the fascinating prospect of measuring gravitational waves from far objects.
In keeping with this tradition, the strongest fundamental physics postulates that (IMHO) might be true are these: (1) No bosonic emission into the vacuum (gravitational or otherwise) can be scalably quenched with arbitrarily low residual decoherence, and in consequence (2) the state-space of Nature is effectively not a (flat) Hilbert space, and hence (3) it is credible that the state-space of Nature is in fact not a Hilbert space, however (4) non-flat quantum state-spaces are difficult to observe because they mimic experimental decoherence so near-perfectly. A joke that complements the one about the Rabbi who told everybody "you are right" is about a mathematician. A young mathematician comes to present to a famous mathematician his conjecture and ideas. "You are absolutely wrong," the famous mathematician dismissed the young one. Next enters another young mathematician and presents precisely the opposite conjecture. "You are absolutely wrong," replies the famous mathematician. The famous mathematician's wife interferes. "How could you tell both of them that they are wrong," she says. "They have made completely opposite claims, one of them must be right!" "You are also wrong," replied the famous mathematician. John #164, it seems to me Physics.SE is not well suited for such discussions, for purely technical (software) and other reasons. Joe Fitzsimons et al tried to organize some "physicsoverflow" first inside SE (but the TP.SE was closed after about 200+ days of work) and outside (http://discussion.tpqa.org/), but I have not seen any motion there during a long period of time. _If_ there has to be a particle of gravity, why does it have to be quantum mechanical in nature, i.e. carrying a wave-particle dual nature, undergoing the wave-collapse/decoherence, etc.? Why can't it be a simple classical particle? Since there is no experimental evidence anyway, the only issue that can now be raised is: What kind of aesthetics leads to that kind of an assumption? Is the aesthetics in question completely inspired in reference to the fact that one has already invested a lot of years mastering the usual QM and the same/similar mathematical machinery can then be easily put to good reuse (good in the sense of allowing the easiest route to enhance one's publication record)? Or is there a more robust, physical, consideration behind the choice of that assumption? Historically, atoms and molecules initially were simple particles without any quantum nature. That's how Boltzmann thought of them. The spectral density graph of the cavity radiation, the photoelectric effect, and the discrete atomic spectral lines together thrust the quantum nature onto the theorists. (There is one more effect that the introductory texts invariably mention: the Compton effect. However, though discovered by an American, it was a piece of evidence that actually came only later.) In contrast, Boltzmann could still explain thermodynamical properties of gases assuming a simple, pre-quantum (or classical) molecule. If the Poisson-Laplace equation is all that is (locally) to be explained in reference to a particles-based model, why not keep it a classical particle? Where precisely, then, do we hurt the attempts at GR+QM integration—if at all we do? Since no experimental observation yet exists to suggest any diffraction/interference effects due to gravity, why take a giant leap of faith of assuming a quantum nature for gravity (say, in disregard of the famous Occam's razor (which could be, and still is, invoked to reject the idea of aether, anyway))?
I of course know in advance that I am right. However, here, I am looking for succinct, direct and preferably conceptual answers to these questions, esp. the last one. It's an excellent question, but I think there's an excellent answer to it, which was provided by Feynman at a 1957 conference. See this article by Zeh for details. The short version: if there's something that can be put in superposition, then the principles of quantum mechanics (provided you accept them) imply that anything else can also be put in superposition, since what happens when you do an experiment that entangles the first thing with the second thing? The principle of superposition is like a "universal acid": once you introduce it anywhere, it tolerates nothing in the universe that fails to abide by it! So no, this isn't a question of "saving effort" by reusing the same mathematical machinery: it's a simple question of logical consistency. Either gravity has to be quantum-mechanical, or else the principles of quantum mechanics themselves have to be overthrown (surviving only as approximations). But no one knows how to revise QM even slightly and get a sensible theory—that's exactly what we've been discussing here! So, instead they think about how to quantize gravity, and (if they're string theorists 🙂 ) claim things like AdS/CFT as large if incomplete successes in that program. At any rate, I don't see any way to escape the choice between quantizing gravity or changing QM. Cool! The paper to which you refer seems neat. However, I might take some time before returning to this one. (Am thinking of writing something for the latest FQXi contest. Haven't started yet, but maybe will start doing so tomorrow afternoon or so… Which means I will go through Zeh's paper on September 1st or so, at the earliest). Yet, let me slip in something here right away, for the time being. Superposition is a part of a mathematical (or even a physics-theoretical) method; it's not a physical quantity. In fact, even if you stick only to the mainstream QM, it's not even a QM "observable." Gravitational force, on the other hand, is (inasmuch as momentum is, anyway). Though I have yet to go through the paper, here is my _hunch_: Feynman (or others like him), logically speaking, _must_ be first assuming that a proper unification of EM (or QM i.e. QED) and gravity is in principle possible. Now, note, that is just a hypothesis. Historical evidence supports it (e.g. mechanical theory of heat; E+M = EM; Optics as EM; QM theory of matter; etc.). But there is no fundamental reason why Mother Nature must always oblige us in supporting the projections into the future of our historical trends. _If_ you accept that hypothesis (that unification is possible), and then, as you yourself rightly pointed out, if you also further suppose that the present mainstream understanding of QM is the final word on those matters (I mean, the observations and experimental findings regarding the QM phenomena), then, sure, for consistency reasons, I would immediately agree that gravity would have to be quantum in nature. FYI, the link to the Thurston video isn't correct. Feynman (or others like him), logically speaking, _must_ be first assuming that a proper unification of EM (or QM i.e. QED) and gravity is in principle possible. The degrees of freedom of the other seem like they have to obey the superposition principle as well. Mitch #170: Sorry, should be fixed now! 1. Is the thought a physical reality?
We have measurements of the thought in the form of scientific papers, working devices. 2. If it is physical reality, is it described by classical physics, quantum mechanics, or neither? 4. What is consciousness? Even in operational form. What are the measurable consequences of consciousness? Do you mean there is an entanglement conservation law? Let me make one last attempt from my side at this issue. Consider an inclined black box with one central hole on the top surface and 'n' # of holes on the bottom surface. You release initially motionless balls in the top hole, one at a time, and, as the ball exits from the bottom surface after a while, it falls into one of n # of collection bins below each hole. The system obviously has two basically different mechanisms governing the components of motion along the two axes (i.e., it has two sets of forces). (i) Just because the DOFs along the x-axis are discrete, does it mean our theory must discretize the motion along the y-axis, too? Why does the "universal acid" fail to dissolve the continuous nature of the y-axis DOFs here? (ii) Now, assume that the black box has some mechanism other than the one given above. Suppose that it involves some QM-like superpositions. (Note, the word is: QM-like, not the QMechanical.) The ball, once released in the top hole, goes into a state of that QM-like superposition. Assume further that the collapse occurs for some unknown reasons at the time of the ball's exit from one of the 'n' holes at the bottom. Note, the fact of superpositions affects only the horizontal component of the motion. The question: Does the existence in theory of superpositions now go on to change your answer to question (i)? Why? Why not? What if the question is generalized to any aspect of the dynamics (i.e. not just to the continuous/discrete nature of the DOFs)? Would the existence of a superposition mechanism, operating strictly only in the horizontal direction to select one of the n bins, imply that the vertical motion must also obey some superposition? As this example shows, the average time of a ball to exit the box (in fact even the statistical distribution of the time to exit) would be completely independent of superpositions. What went wrong? Fine as an expression of some mild exasperation. However, if meant by any chance more seriously than that, then let me ask: What would it take for you to be convinced that I do? BTW, I can always leave commenting on any blog, including yours. It really is easier to actually do it than it sounds on a first reading. The collectivist sort of "connectors" have culturally made the idea of an easily un-connecting kind of a man difficult to digest. But, that's not reality. It is pretty easy. (I recently did it for an IITians group at LinkedIn, too.) So, feel free to indicate (or drop me a line by email) if I should do that. A young blog commenter tells famous Prof Scott Aaronson his conjecture and ideas. "You are absolutely wrong," the famous professor dismissed the young one. Next enters another young blog commenter and presents precisely the opposite conjecture. "You are absolutely wrong," replies the famous professor. The famous professor's wife interferes. "How could you tell both of them that they are wrong," she says. "They have made completely opposite claims, one of them must be right!" "You are also wrong," replied the famous professor. rrtucci #175: You are totally, completely wrong. Scott #147, I have actually done that experiment.
I sent all 435 members of the US House of Representatives through the double slit and found that only 17% of them were conscious, and that consciousness was highly correlated with subcommittee assignments in a most surprising way. Before I could publish my findings, I was kidnapped by CIA operatives and my entire lab was relocated to Gitmo. It is only a quantum doppelganger of me posting here now, and similarly for the ghosts now occupying those seats in Congress. Thus, MWI is confirmed ;-). You are not coming back to my questions in my comment #174. Why? Ok. Let me jot down something, anyway. No, I don't think I had to invent that blackbox. However, conceptually, it's a powerful toy, and it does help ground arguments while talking with folks who are given to taking flights into the abstract and the symbolic far too easily. (i) You first say that G and EM are _totally_ different forces, but then you also go on to add that they _interact_. The expression "interacting _forces_" has no meaning unless one first assumes the existence of a more basic, unifying, force. In which case, G & EM cease to be _totally_ different; they simply become two different contextual manifestations of the same underlying force. Which precisely is what I had suggested. Tch… Do you at least now realize that all that you have actually succeeded in doing is to lend support to my position? (ii) A different, second point. Here, you (and also many many others, including those at MIT/Berkeley/Cambridge etc., those winning Nobels etc., and Indians, esp. IITians, revering all such aforementioned) may not agree. However, it's something I believe in. BTW, it's a direct consequence of Ayn Rand's philosophy. Strictly speaking, there cannot be any interaction between forces. Interactions happen between objects (entities), not between their attributes, characteristics, actions, etc. A force is nothing but a kind of an action taken by an entity. Actions (e.g. motions) do not exist independent of the entities which act. To suggest that two "totally different" forces can interact follows the same basic pattern as suggesting that two unrelated attributes interact. For instance, that size and texture interact. That bigness can interact with surface roughness. So long as one is willing to drop the context and blank out necessary facts, one can always come up with a lot of argumentation, perhaps an intelligent one, perhaps a socially satisfying one, perhaps one that leads to money, perks and career advancements, etc. Perhaps. But, always, ultimately, in rebellion against reality. If I were in your place, rather than writing a new blog post involving interaction, I would have come back and on my own clarified this point—viz., that I had smartly, almost cheatingly, slipped in that interaction thingie in that argument above. And, I would have agreed that all the G + QM programs are at best only tentative in nature. Tch. Berkeley/MIT/etc. folks. Remarkably like IITians. No point expecting such things from them. Scott @171: I follow the argument that gravity must obey the superposition principle, but does this necessarily imply that gravity must be *quantized*, strictly speaking? I.e., must it have a minimum non-zero energy, or could it be "classical" in that sense? Jon Lennox #181: I don't know! For me, "being quantum" means obeying the superposition principle. Maybe someone else can explain what, if any, is the argument from first principles that there must be a "quantum of gravity" (i.e., a graviton).
The link to the YouTube video actually links to the discussion on Physics StackExchange. It is terribly late but here are several questions related to the post that I am curious about. 1) Is MWI the only interpretation which supports QM going "all the way"? E.g., why not think of QM as a mathematical theory of noncommutative probability, as (if I understood him correctly) Steve Landsburg suggests in #46. 2) Cat states are fairly simple. Why regard macroscopic cat states as harder to get than the very entangled states we see in quantum algorithms or quantum error correction? 3) Isn't it more likely to think that the distinction between microscopic/macroscopic systems emerges from the physics rather than that there will be different a priori principles for microscopic and macroscopic systems? 4) The way I look at it, a theory of decoherence (or noise) is simply a theory of approximation of large quantum systems when you neglect some (or many) degrees of freedom. Viewed this way, many computational methods in quantum physics can be seen as such approximation recipes. Are such approximation methods expected (or even known) to follow "from first principles" from the basic framework of QM, or rather to supplement it? 5) A related question: Should we regard QM as a mathematical language that allows us to express every law of physics, or rather, more strongly, as a theory that allows us to derive every law of physics? 6) Of course, the case of thermodynamics in the context of question 5 is especially interesting. What about thermodynamics? The beauty of MWI is that everything emerges from just taking the Schrödinger equation (and its siblings) literally. But MWI has issues, like the dependence of the branch counting on the level of coarse graining or the problems with the emergence of the Born rule (without additional postulates). So it's not obvious at all that MWI is the final answer, even if the approach is highly favorable.
Technically, an $N$-dimensional vector is just an $N \times 1$ matrix, but in practice we tend to make a distinction. An $N \times 1$ matrix is just a matrix that could be $N \times 2$ tomorrow; declaring something to be a vector is documentation that it is definitively one column of numbers. If we apply a function to every row of a vector, that function should take in a scalar; if we apply a function to every row of a matrix, that function should take in a vector. If you think we can get by in our data analysis systems with only a matrix structure, I wouldn't really disagree. But the rest of this post, about my efforts to accommodate both structures gracefully, won't be very interesting to you. As noted a few episodes ago, I have a coworker or two whose greatest hate for R is how it will cast an $N\times 1$ matrix as a vector, which is enough to break many operations. So the auto-guessing route is out. We could have two structs, the way the GSL does, which is pretty clear, but you'll often have to explicitly cast back and forth between one and the other. Every time we write a new function or structure, we'll have to decide whether to build around a vector or a matrix. For example, should the parameters of a model be a vector or a matrix? I went with the committee solution: the apop_data set includes both a vector and a matrix. First, this means that it's easy to write functions that take either a vector or a matrix, like the apop_dot function, which can calculate the dot product of matrix $\cdot$ matrix, vector $\cdot$ matrix, matrix $\cdot$ vector, or vector $\cdot$ vector. In C terms, we have a union of vector and matrix; in OO terms, any function that takes in an apop_data set is in a sense overloaded to accept both vector type and matrix type. Those are cases where the apop_data struct is holding only a vector or only a matrix. Regression-style models have a single dependent variable in the vector and a matrix of independent variables. This is such a common use case that it is by itself an argument for a vector+matrix pairing. A set of Eigenvalues in the vector and the Eigenvectors in the matrix. Linear constraints are of the form $C \leq \beta_0 X_0 + \beta_1 X_1 + …$. Stack several of these together and you have a vector of $C$ values and a matrix of $\beta$s. See apop_linear_constraint. More generally, systems of equations are typically expressed as a vector plus matrix. All sorts of other little ad hoc uses come up all the time. Toying around with the Logits from the longest single blog post I have ever written, I made myself a data set with a pointer to the data matrix, and put the predicted probability, apop_dot(data, model->parameters), in the vector. Did you notice that that's all we had to do to calculate $X\beta$? It's surprisingly difficult in some systems, because they hide the constant column. Apophenia's linear models do what I like to call `the OLS shuffle': given a matrix of data, like the one produced by the query in the MVN example above, copy the first column of data to the vector, and fill the now-redundant first column with a column of ones. You now have an explicit representation of both sides of the equation with a minimum of disruption, and the above dot product works. If you have a linear regression that is linear (meaning not affine, meaning no ones column), then put the dependent variable in the vector, and the regression model's prep routine will know to not further modify your data.
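To make the regression layout concrete, here is a minimal C sketch of the `OLS shuffle' described above. It leans on plain GSL calls plus the vector and matrix members of the apop_data struct discussed in this post; the helper name ols_shuffle is made up for the example, and member names and conventions may differ slightly between Apophenia versions, so treat it as an illustration rather than the library's own implementation.

#include <apop.h>

//A sketch of the `OLS shuffle': move the dependent variable (column 0 of the
//matrix, as produced by a query) into the vector, then replace that column
//with ones so the matrix holds the right-hand side of the equation.
void ols_shuffle(apop_data *d){
    int N = d->matrix->size1;
    if (!d->vector) d->vector = gsl_vector_alloc(N);
    for (int i = 0; i < N; i++){
        gsl_vector_set(d->vector, i, gsl_matrix_get(d->matrix, i, 0)); //dependent variable
        gsl_matrix_set(d->matrix, i, 0, 1);                            //constant column
    }
    //Now apop_dot(d, parameters) gives the X*beta side of the equation, while
    //the observed dependent variable sits in d->vector.
}

Arranged this way, both sides of the regression equation live in a single apop_data set, which is exactly the pairing the post argues for.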
This is treading on do-what-I-mean territory, but it does the same thing every time, so once you know what it's doing it doesn't do anything surprising. If you don't specify an argument to any of these functions, it defaults to being zero. So if you know all the data is in row zero, don't bother specifying the row. The vector is column -1. If you ask for column zero of the matrix, but there is no matrix, I give you the vector, column -1. This is a more useful result than throwing an error. Having the vector count as column -1 means that a simple for loop can span one or both of the vector and the matrix (assuming that, if both vector and matrix are present, they're the same height); a sketch of such a loop is given below. So the front-end functions, apop_data_(get|set|ptr), do an OK job of binding together a vector or matrix, or allowing called functions to choose between the two. We get this operator overloading without auto-casting or metadata manipulation.
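Here is a hedged sketch of that loop, spanning the vector and the matrix in one pass. It assumes apop_data_get takes the data set, a row, and a column index (with -1 meaning the vector), as described above; the helper name row_sum is invented for the example and exact signatures may vary across versions.

//Sum everything in row i of an apop_data set: the vector entry (column -1)
//followed by each matrix column. Assumes that if both parts are present they
//have the same height, as noted above.
double row_sum(apop_data *d, int i){
    double total = 0;
    int first = d->vector ? -1 : 0;                     //start at the vector, if present
    int last  = d->matrix ? (int)d->matrix->size2 : 0;  //stop after the last matrix column
    for (int col = first; col < last; col++)
        total += apop_data_get(d, i, col);              //col == -1 reads from the vector
    return total;
}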
Abstract: Antimicrobial susceptibility tests were determined in 100 isolates of E. coli from different pathologies in cattle, horses, dogs and cats, according to the Clinical and Laboratory Standards Institute. Multiresistant isolates were detected in this assay. The antibiotics selected were amikacin, ampicillin/sulbactam, cefotaxime, ciprofloxacin, chloramphenicol, colistin, gentamicin, nitrofurantoin, streptomycin, tetracycline, and trimethoprim/sulfamethoxazole. The antibiotic with the highest resistance was tetracycline (34% in cats and 75% in dogs). In strains isolated from dogs and cats, considerable percentages of resistance were also found to ampicillin/sulbactam (27% in dogs and 53% in cats) and to ciprofloxacin (30% in dogs and 67% in cats). These isolates also showed the highest percentage of multiresistance (29% in dogs and 67% in cats).
Selective pressure originating from the inadequate use of antibiotics can be responsible for the appearance of resistance, even though it is not the only factor. It may be possible that E. coli could transmit genes of resistance to antimicrobial agents, although it is unknown whether the presence of these genes is permanent or transitory. Abstract: I review the state of the art of the investigation on structure formation in $f(R)$-gravity based on the Covariant and Gauge Invariant approach to perturbations. A critical analysis of the results, in particular the presence of characteristic signatures of these models, together with their meaning and their implications, is given. Abstract: We propose a new dynamical system formalism for the analysis of f(R) cosmologies. The new approach eliminates the need for cumbersome inversions to close the dynamical system and allows the analysis of the phase space of f(R)-gravity models which cannot be investigated using the standard technique. Differently from previously proposed similar techniques, the new method is constructed in such a way as to associate to the fixed points scale factors, which contain four integration constants (i.e. solutions of fourth order differential equations). In this way new light is shed on the physical meaning of the fixed points. We apply this technique to some f(R) Lagrangians relevant for inflationary and dark energy models. Abstract: In this paper we show how the covariant gauge invariant equations for the evolution of scalar, vector and tensor perturbations for a generic $f(R)$-gravity theory can be recast in order to exploit the power of dynamical system methodology. In this way, recent results describing the dynamics of the background FRW model can be easily combined with these equations to reveal important details pertaining to the evolution of cosmological models in fourth order gravity. Relationship between seed yield and its component characters in Cenchrus spp. Abstract: Cenchrus setigerus, C. sp., eleven obligate apomictic cultivars and a sexual line of Cenchrus ciliaris L. were studied to determine the relationship between seed production and its component characters, through principal component analysis, path correlation analysis and analysis of variance. A completely randomized field design was used. Ten vegetative and reproductive morphological characters were measured. Seed production was influenced directly by panicle weight and indirectly by panicle length, 1000 seed weight, length and width of flag leaf lamina and length of flag leaf sheath. Panicle weight showed high heritability and variability among genotypes. Hence, panicle weight can be considered a selection criterion to obtain increased seed production in Cenchrus. The cultivar Lucero INTA PEMAN exhibited the highest panicle weight and, therefore, greatest seed production, which makes it suitable for selection as a parental cultivar to obtain new germplasm in Cenchrus with high seed yield. Background: Incorporation of HPV tests into cervical cancer screening programs may be advantageous over conventional cytology, especially in developing nations, where the largest burden of cervical cancer is observed. Objectives: To conduct an evaluation of commercially available molecular HPV tests in Brazilian women. Study design: Two groups were recruited: group A was composed of 511 women referred to the clinics because of a previous abnormal Pap test, while group B consisted of 2464 subjects under routine screening.
Cervical samples were collected using the SurePath liquid cytology (LBC) device, and split into aliquots which were submitted to molecular testing by Hybrid Capture and cobas HPV. Colposcopy and biopsies were performed according to the standard guidelines, directed by cytological diagnosis. Results: Prevalence of HSIL was 5.97% and 0.7% in Group A and B respectively. High-Risk HPV DNA was found in about 9% of group B women, while in group A this frequency was 24%. Having CIN3+ as the study end-point, the negative predictive values for molecular methods were above 99.8%. All "in-situ" and invasive cervical carcinomas were detected by both HPV nucleic acid assays. Conclusion: Use of HPV DNA testing was feasible and highly sensitive in cancer screening settings of Brazil. Abstract: An improved QED Parton Shower algorithm to calculate photonic radiative corrections to QED processes at flavour factories is described. We consider the possibility of performing photon generation in order to take into account also the effects due to interference between initial and final state radiation. Comparisons with exact order $\alpha$ results are shown and commented upon. Abstract: We revisit the relativistic restricted two-body problem with spin employing a perturbation scheme based on Lie series. Starting from a post-Newtonian expansion of the field equations, we develop a first-order secular theory that reproduces well-known relativistic effects such as the precession of the pericentre and the Lense-Thirring and geodetic effects. Additionally, our theory takes into full account the complex interplay between the various relativistic effects, and provides a new explicit solution of the averaged equations of motion in terms of elliptic functions. Our analysis reveals the presence of particular configurations for which non-periodical behaviour can arise. The application of our results to real astrodynamical systems (such as Mercury-like and pulsar planets) highlights the contribution of relativistic effects to the long-term evolution of the spin and orbit of the secondary body. Abstract: We discuss a mechanism that induces a time-dependent vacuum energy on cosmological scales. It is based on the instability-induced renormalization triggered by the low energy quantum fluctuations in a Universe with a positive cosmological constant. We employ the dynamical systems approach to study the qualitative behavior of Friedmann-Robertson-Walker cosmologies where the cosmological constant is dynamically evolving according to this nonperturbative scaling at low energies. It will be shown that it is possible to realize a "two regimes" dark energy phase, where an unstable early phase of power-law evolution of the scale factor is followed by an accelerated expansion era at late times.
$\tau$ is a continuous map on a compact metric space $X$. For a continuous function $\phi:X\to\mathbb R$ we consider a 1-dimensional map $T$ (possibly multi-valued) which sends a local $\phi$-maximum on a $\tau$-trajectory to the next one: the consecutive maxima map. The idea originated with Lorenz's famous paper on the strange attractor. We prove that if $T$ has a horseshoe disjoint from fixed points, then $\tau$ is in some sense chaotic, i.e., it has a turbulent trajectory and thus a continuous invariant measure. Boyarsky, Abraham and Eslami, Peyman and Góra, Paweł and Li, Zhenyang and Meddaugh, Jonathan and Raines, Brian E.
Chandran, Sunil L and Francis, Mathew C and Mathew, Rogers (2008) Finding a Box Representation for a Graph in $O(n^2\Delta^2\ln n)$ time. In: ICIT '08 Proceedings of the 2008 International Conference on Information Technology, 17-20 Dec. 2008, Washington, DC. An axis-parallel box in $b$-dimensional space is a Cartesian product $R_1 \times R_2 \times \cdots \times R_b$ where $R_i$ (for $1 \leq i \leq b$) is a closed interval of the form $[a_i, b_i]$ on the real line. For a graph $G$, its boxicity is the minimum dimension $b$ such that $G$ is representable as the intersection graph of (axis-parallel) boxes in $b$-dimensional space. The concept of boxicity finds application in various areas of research like ecology, operations research, etc. Chandran, Francis and Sivadasan gave an $O(\Delta n^2 \ln^2 n)$ randomized algorithm to construct a box representation for any graph $G$ on $n$ vertices in $\lceil (\Delta + 2)\ln n \rceil$ dimensions, where $\Delta$ is the maximum degree of the graph. They also came up with a deterministic algorithm that runs in $O(n^4 \Delta )$ time. Here, we present an $O(n^2 \Delta^2 \ln n)$ deterministic algorithm that constructs the box representation for any graph in $\lceil (\Delta + 2)\ln n \rceil$ dimensions.
Deliver the $50$ letters to location $-10$ (travel $2 \cdot 10$), the first $100$ letters to location $10$ (travel $2 \cdot 10$), the remaining $75$ letters to location $10$ while on the way to delivering the $20$ to location $25$ (travel $2 \cdot 25$). The total round-trip distance traveled is $90$. The first line contains two integers, $N$ and $K$, where $3 \leq N \leq 1000$ is the number of delivery addresses on the route, and $1 \leq K \leq 10\,000$ is the carrying capacity of the postal truck. Each of the following $N$ lines will contain two integers $x_j$ and $t_j$, the location of a delivery and the number of letters to deliver there, where $-1500 \leq x_1 < x_2 < \cdots < x_N \leq 1500$ and $1 \leq t_j \leq 800$ for all $j$. All delivery locations are nonzero (that is, none are at the post office). Output the minimum total travel distance needed to deliver all the letters and return to the post office.
Abstract: I review the subjects of non-solar cosmic rays (CRs) and long-duration gamma-ray bursts (GRBs). Of the various interpretations of these phenomena, the one best supported by the data is the following. Accreting compact objects, such as black holes, are seen to emit relativistic puffs of plasma: `cannonballs' (CBs). The inner domain of a rotating star whose core has collapsed resembles such an accreting system. This suggests that core-collapse supernovae (SNe) emit CBs, as SN1987A did. The fate of a CB as it exits a SN and travels in space can be studied as a function of the CB's mass and energy, and of `ambient' properties: the encountered matter- and light- distributions, the composition of the former, and the location of intelligent observers. The latter may conclude that the interactions of CBs with ambient matter and light generate CRs and GRBs, all of whose properties can be described by this `CB model' with few parameters and simple physics. GRB data are still being taken in unscrutinized domains of energy and timing. They agree accurately with the model's predictions. CR data are centenary. Their precision will improve, but new striking predictions are unlikely. Yet, a one-free-parameter description of all CR data works very well. This is a bit as if one discovered QED today and only needed to fit $\alpha$.
Lemma 15.50.4. Let $R$ be a Noetherian ring. Assume $P$ satisfies (C) and (D). Then $R$ is a $P$-ring if and only if the formal fibres of $R_\mathfrak m$ have $P$ for every maximal ideal $\mathfrak m$ of $R$. This is tag 0BIU.
This isn't homework or anything, just something I stumbled across. Did you type the equations correctly? They don't specify a recurrence relation. The second equation implies f(1) = 1 + 1/f(1), which isn't true if f(1) = 1. Is f(x) defined only when x is a positive integer? How did you "stumble across" this? For very large values of $n$, $x_n$ is approximately $1/2 + \sqrt{n}$.
Start by selecting the types of variables to use in the simulation from the Select types dropdown in the Simulate tab. Available types include Binomial, Constant, Discrete, Log normal, Normal, Uniform, Data, Grid search, and Sequence. Add random variables with a binomial distribution using the Binomial variables inputs. Start by specifying a Name (crash), the number of trials (n) (e.g., 20) and the probability (p) of a success (.01). Then press the icon. Alternatively, enter (or remove) input directly in the text area (e.g., crash 20 .01). List the constants to include in the analysis in the Constant variables input. You can either enter names and values directly into the text area (e.g., cost 3) or enter a name (cost) and a value (5) in the Name and Value inputs respectively and then press the icon. Press the icon to remove an entry. Note that only variables listed in the (larger) text-input boxes will be included in the simulation. Define random variables with a discrete distribution using the Discrete variables inputs. Start by specifying a Name (price), the values (6 8), and their associated probabilities (.3 .7). Then press the icon. Alternatively, enter (or remove) input directly in the text area (e.g., price 6 8 .3 .7). Note that the probabilities must sum to 1. If not, a message will be displayed and the simulation cannot be run. To include (log) normally distributed random variables in the analysis select Normal from the Select types dropdown and use the Normal variables inputs. For example, enter a Name (demand), the Mean (1000) and the standard deviation (St.dev., 100). Then press the icon. Alternatively, enter (or remove) input directly in the text area (e.g., demand 1000 100). To include uniformly distributed random variables in the analysis select Uniform from the Select types dropdown. Provide parameters in the Uniform variables inputs. For example, enter a Name (cost), the Min (10) and the Max (15) value. Then press the icon. Alternatively, enter (or remove) input directly in the text area (e.g., cost 10 15). To include a set of equally spaced values select Grid search from the Select types dropdown and provide the minimum value, the maximum value, and the step-size in the Grid search variables inputs. Note that if Grid search has been selected the number of values generated will override the number of simulations or repetitions specified in # sims or # reps. If this is not what you want use Sequence. Then press the icon. Alternatively, enter (or remove) input directly in the text area (e.g., price 4 10 0.01). To include a sequence of values select Sequence from the Select types dropdown. Provide the minimum and maximum values in the Sequence variables inputs. For example, enter a Name (trend), the Min (1) and the Max (1000) value. Note that the number of 'steps' is determined by the number of simulations. Then press the icon. Alternatively, enter (or remove) input directly in the text area (e.g., trend 1 1000). To perform a calculation using the generated variables, create a formula in the Simulation formulas input box in the main panel (e.g., profit = demand * (price - cost)). Formulas are used to add (calculated) variables to the simulation or to update existing variables. You must specify the name of the new variable to the left of a = sign. Variable names can contain letters, numbers, and _ but no other characters or spaces. You can enter multiple formulas. If, for example, you would also like to calculate the margin in each simulation press return after the first formula and type margin = price - cost. Many of the same functions used with Create in the Data > Transform tab and in Filters in Data > View can also be included in formulas. You can use > and < signs and combine them.
For example, x > 3 & y == 2 would evaluate to TRUE when the variable x has values larger than 3 AND y has values equal to 2. Recall that in R, and most other programming languages, = is used to assign a value and == to evaluate if the value of a variable is exactly equal to some other value. In contrast, != is used to determine if a variable is unequal to some value. You can also use expressions that have an OR condition. For example, to determine when Salary is smaller than $100,000 OR larger than $20,000 use Salary > 20000 | Salary < 100000. | is the symbol for OR and & is the symbol for AND (see also the help file for Data > View). ifelse statements can be used to create more complex (numeric) variables as well. In the example below, z will take on the value 0 if x is smaller than 60. If x is larger than 100, z is set equal to 1. Finally, when x is 60, 100, or between 60 and 100, z is set to 2.

z = ifelse(x < 60, 0, ifelse(x > 100, 1, 2))

Note: make sure to include the appropriate number of opening ( and closing ) brackets! To find the value for price that maximizes profit use the find_max command. In this example price could be a random or Sequence variable. There is also a find_min command. Other commonly used functions are ln for the natural logarithm (e.g., ln(x)), sqrt for the square-root of x (e.g., sqrt(x)) and square to calculate the square of a variable (e.g., square(x)). To return a single value from a calculation use functions such as min, max, mean, sd, etc. A special function useful for portfolio optimization is sdw. It takes weights and variables as inputs and returns the standard deviation of the weighted sum of the variables. For example, to calculate the standard deviation for a portfolio of three stocks (e.g., Boeing, GM, and Exxon) you could use the equation below in the Simulation formulas input. f and g could be values (e.g., 0.2 and 0.8) or vectors of different weights specified in a Grid search input (see above). Boeing, GM, and Exxon are names of variables in a data-set included in the simulation using a Data input (see above). The value shown in the # sims input determines the number of simulation draws. To redo a simulation with the same randomly generated values, specify a number in the Set random seed input (e.g., 1234). To save the simulated data for further analysis, specify a name in the Simulated data input box. You can then investigate the simulated data by choosing the data with the specified name from the Datasets dropdown in any of the Data tabs (e.g., Data > View, Data > Visualize, or Data > Explore). When all required inputs have been specified press the Simulate button to run the simulation. In the screen shot below var_cost and fixed_cost are specified as constants. E is normally distributed with a mean of 0 and a standard deviation of 100. price is a discrete random variable that is set to $6 (30% probability) or $8 (70% probability). There are three formulas in the Simulation formulas text-input. The first establishes the dependence of demand on the simulated variable price. The second formula specifies the profit function. The final formula is used to determine the number (and proportion) of cases where profit is below 100. The result is assigned to a new variable profit_small. In the output under Simulation summary we first see details on the specification of the simulation (e.g., the number of simulations). The section Constants lists the value of variables that do not vary across simulations. The sections Random variables and Logicals list the outcomes of the simulation.
We see that average demand in the simulation is 627.94 with a standard deviation of 109.32. Other characteristics of the simulated data are also provided (e.g., the maximum profit is 1758.77). Finally, we see that the probability of profits below 100 is equal to 0.32 (i.e., profits were below $100 in 315 out of the 1,000 simulations). To view histograms of the random variables as well as the variables created using Simulation formulas ensure Show plots is checked. Because we specified a name in the Simulated data box the data are available as simdat within Radiant (see screen shots below). To use the data in Excel click the download icon on the top-right of the screen in the Data > View tab or go to the Data > Manage tab and save the data to a csv file (or use the clipboard feature). For more information see the help file for the Data > Manage tab. Suppose the simulation discussed above was used to get a better understanding of daily profits. To develop insights into annual profits we could re-run the simulation 365 times. This can be done easily by using the functionality available in the Repeat tab. First, select the Variables to re-simulate, here E and price. Then select the variable(s) of interest in the Output variables box (e.g., profit). Set # reps to 365. Next, we need to determine how to summarize the data. If we select Simulation in Group by the data will be summarized for each draw in the simulation across 365 repeated simulations resulting in 1,000 values. If we select Repeat in Group by the data will be summarized for each repetition across 1,000 simulations resulting in 365 values. If you imagine the full set of repeated simulated data as a table with 1,000 rows and 365 columns, grouping by Simulation will create a summary statistic for each row and grouping by Repeat will create a summary statistic for each column. In this example we want to determine the sum of simulated daily profits across 365 repetitions so we select Simulation in the Group by box and sum in the Apply function box. To determine the probability that annual profits are below $36,500 we enter the formula below into the Repeated simulation formula text input. When you are done with the input values click the Repeat button. Because we specified a name for Repeat data a new data set will be created. repdat will contain the summarized data grouped per simulation (i.e., 1,000 rows). To store all 365 x 1,000 simulations/repetitions select none from the Apply function dropdown. Descriptive statistics for the repeated simulation are shown in the main panel under Repeated simulation summary. We see that the annual expected profit (i.e., the mean of sum of profit) for the company is 172,311.84 with a standard deviation of 10,772.29. Although we found above that daily profits can be below $100, the chance that profits are below \(365 \times 100\) for the entire year is slim to none (i.e., the proportion of repeated simulations with annual profits below $36,500 is equal to 0). If Show plots is checked a histogram of annual profits (sum of profit) is shown under Repeated simulation plots. There is no plot for profit_365 because it has only one value (i.e., FALSE). Note that the Repeat tab also has the option to use a Grid search input to repeat a simulation by replacing one or more Constants specified in the Simulation tab in an iterative fashion. This input option is shown only when Group by is set to Repeat. Provide the minimum and maximum values as well as the step-size in the Grid search inputs.
For example, enter a Name (price), the Min (4), Max (10), and Step (0.01) value. If multiple variables are specified in Grid search all possible value combinations will be created and evaluated in the simulation. Note that if Grid search has been selected the number of values generated will override the number of repetitions specified in # reps. Then press the icon. Alternatively, enter (or remove) input directly in the text area (e.g., price 4 10 0.01).
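As a rough illustration of the Simulate-then-Repeat workflow described above, the following Python/NumPy sketch mimics summing 365 repeated daily-profit simulations per simulation row and estimating the probability that annual profit falls below $36,500. The demand equation and the constants are assumptions for illustration only, so the numbers will not match the Radiant output quoted above.

```python
import numpy as np

rng = np.random.default_rng(1234)
n_sims, n_reps = 1000, 365

# Hypothetical stand-in for the daily profit simulation (constants assumed)
price = rng.choice([6, 8], size=(n_sims, n_reps), p=[0.3, 0.7])
E = rng.normal(0, 100, size=(n_sims, n_reps))
demand = 1000 - 50 * price + E                 # assumed demand equation
var_cost, fixed_cost = 3, 1000                 # assumed constants
profit = demand * (price - var_cost) - fixed_cost

# "Group by Simulation" with Apply function "sum": one annual value per row
annual_profit = profit.sum(axis=1)             # 1,000 values

# Repeated simulation formula: is annual profit below 365 * 100?
profit_365 = annual_profit < 365 * 100
print(annual_profit.mean(), annual_profit.std(), profit_365.mean())
```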
CommonCrawl
We report a vibration-like social phenomenon (mass behavior of human subjects) in experiments from experimental economics. Vibration is a long-established concept and a widely observed phenomenon in both natural and social settings. Usually, the cause of a social phenomenon is unclear, but recently developed laboratory experiments in experimental economics make it possible to investigate. During a game in a laboratory experiment, the human subjects interact with each other through the strategies of the game, and, as a phenomenon, the society keeps evolving. In strategy space, variables such as the velocity, acceleration and speed of the subjects can be measured from the detailed records of the human subjects in the experiments. Data from human subjects experiments, namely a Public Goods Game (Chen and Tang, Journal of Political Economy, 1998), $2\times 2$ Games (Hyndman et al., Experimental Economics, 2009) and a Prisoner's Dilemma (Duffy and Ochs, Games and Economic Behavior, 2009), are analyzed, and harmonic-vibration-like phenomena are shown. This might be helpful for understanding the dynamical process of the adaptation of a society of human subjects.
CommonCrawl
Has anyone used MathJax on an miPage? to modify the Config/Resources.xml file to have MathJax scripts added to <head>. scripts in the <body>, it all worked. I'm glad you got this working. I don't have any experience with MathJax, but MiServer was designed to make it easy to add your own resources. We discourage people from modifying the files in the MiServer folder structure because it may cause issues down the road if you update your MiServer installation. "use MathJax" in my mipage.. Maybe, that was the problem. I will try your solution. adding the MathJax scripts to the <body> of my mipage. I add the MathJax virtual directory in the site's virtual.xml. Tried your prescription. I get a msg indicating "unable to load from a local directory. I have to look up the setting for allowing the server to access a local directory. I downloaded MathJax and started to experiment and discovered that I hadn't provided support for resources with query strings. I've just updated MiServer 3.0 to allow them. You can grab it from the "master" branch of the MiServer GitHub repository. I created a small MiSite to test MathJax locally. Notice that I defined the resource for using AsciiMath, I could define other resources if I wanted to use different math markup technologies. I wrote a very simple MiPage to test. Finally, I load the MiServer workspace, type Start 'c:\bb\mathjax', navigate to the page, and it seems to display properly. Tex/Latex. For Tex/Latex I would like the additional configuration of the tex2jax processor. I believe the tex2jax processor makes a pass over the page after the html gets generated. $..$ or $$..$$ for inline and display math. model it seems the MathJax directory must be at ServerRoot. I'm glad you're getting things to work. Yeah, my example indicated that you should use %SiteRoot% to point to MathJax. Similarly, you're adding a <script> to a <p> to a <div> to your f1 <form>. If you're likely to build multiple pages using MathJax, or you simply like modularity, MiServer allows to build your own templates and components. I tinkered a bit and built a simple TeX/LaTeX component and template. First, I added a resource in /Config/Resources.xml so I can use TexLaTeX. Then I created a Tex/Latex template called TeXPage, based on the MiPage class. Templates need to be located in your MiSite's /Code/Templates/ folder. The first thing the template does is Use 'MathJax-TeX/LaTeX', so you don't need to do it in each of your pages. Then we insert the configuration script - we use Insert instead of Add in order to force the configuration script to be loaded first. You'll also note that here's one case where we explicitly use Head.Insert to make sure the script is loaded in the <head> element AND before MathJax.js. The next thing I did was to write a TeX component based on #._html.script (_.script). The default mode is inline. I defined two parameters, inline and displayed where either can be set to indicate that the equation is a displayed (not inline) equation by setting inline to 0 or displayed to 1. So now, whenever I want to use TeX/LaTeX, I just use Add #.TeX. As far as accepting input from a form and returned the equation, that shouldn't be very difficult - it could be done entirely on the client side, or with a callback to MiServer. When you say you'd like to turn it into a RESTful service, what do you envision? How would it be invoked, what would the returned data be? I've uploaded my MathJax MiSite to the MiSites GitHub repository.
CommonCrawl
The characteristic properties of the scattering data for the Schrödinger operator on the axis with a triangular $2\times 2$ matrix potential are obtained for the cases where simple or multiple virtual levels may be present. In the case of a multiple virtual level, the reflection coefficient may have a pole at $k=0$. For this case, the modified Parseval equality is constructed.
CommonCrawl
In the chapter on grammars, we have seen how to use grammars for very effective and efficient testing. In this chapter, we refine the previous string-based algorithm into a tree-based algorithm, which is much faster and allows for much more control over the production of fuzz inputs. The algorithm in this chapter serves as a foundation for several more techniques; this chapter thus is a "hub" in the book. You should know how grammar-based fuzzing works, e.g. from the chapter on grammars. Here, any choice except for (expr) increases the number of symbols, even if only temporary. Since we place a hard limit on the number of symbols to expand, the only choice left for expanding <factor> is (<expr>), which leads to an infinite addition of parentheses. It is inefficient. With each iteration, this fuzzer would go search the string produced so far for symbols to expand. This becomes inefficient as the production string grows. It is hard to control. Even while limiting the number of symbols, it is still possible to obtain very long strings – and even infinitely long ones, as discussed above. Let us illustrate both problems by plotting the time required for strings of different lengths. We see that (1) the effort increases quadratically over time, and (2) we can easily produce outputs that are tens of thousands of characters long. To address these problems, we need a smarter algorithm – one that is more efficient, that gets us better control over expansions, and that is able to foresee in expr_grammar that the (expr) alternative yields a potentially infinite expansion, in contrast to the other two. To both obtain a more efficient algorithm and exercise better control over expansions, we will use a special representation for the strings that our grammar produces. The general idea is to use a tree structure that will be subsequently expanded – a so-called derivation tree. This representation allows us to always keep track of our expansion status – answering questions such as which elements have been expanded into which others, and which symbols still need to be expanded. Furthermore, adding new elements to a tree is far more efficient than replacing strings again and again. Like other trees used in programming, a derivation tree (also known as parse tree or concrete syntax tree) consists of nodes which have other nodes (called child nodes) as their children. The tree starts with one node that has no parent; this is called the root node; a node without children is called a leaf. The grammar expansion process with derivation trees is illustrated in the following steps, using the arithmetic grammar from the chapter on grammars. We start with a single node as root of the tree, representing the start symbol – in our case <start>. To expand the tree, we traverse it, searching for a nonterminal symbol $S$ without children. $S$ thus is a symbol that still has to be expanded. We then chose an expansion for $S$ from the grammar. Then, we add the expansion as a new child of $S$. For our start symbol <start>, the only expansion is <expr>, so we add it as a child. To construct the produced string from a derivation tree, we traverse the tree in order and collect the symbols at the leaves of the tree. In the case above, we obtain the string "<expr>". To further expand the tree, we choose another symbol to expand, and add its expansion as new children. This would get us the <expr> symbol, which gets expanded into <expr> + <term>, adding three children. We now have a representation for the string 2 + 2. 
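As a concrete sketch, the intermediate step <expr> + <term> can be written down using the tuple format introduced in the next paragraphs: each node is a pair of a symbol name and a list of children, with None marking nonterminals that still await expansion and an empty list marking terminals.

```python
# Derivation tree for the intermediate step "<expr> + <term>"
derivation_tree = ("<start>",
                   [("<expr>",
                     [("<expr>", None),
                      (" + ", []),
                      ("<term>", None)])])
```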
In contrast to the string alone, though, the derivation tree records the entire structure (and production history, or derivation history) of the produced string. It also allows for simple comparison and manipulation – say, replacing one subtree (substructure) against another. where SYMBOL_NAME is a string representing the node (i.e. "<start>" or "+") and CHILDREN is a list of children nodes. None as a placeholder for future expansion. This means that the node is a nonterminal symbol that should be expanded further. (i.e., the empty list) to indicate no children. This means that the node is a terminal symbol that can no longer be expanded. Let us take a very simple derivation tree, representing the intermediate step <expr> + <term>, above. """Return s in a form suitable for dot""" assert dot_escape("<hello>, world") == "\\<hello\\>\\, world" While we are interested at present in visualizing a derivation_tree, it is in our interest to generalize the visualization procedure. In particular, it would be helpful if our method display_tree() can display any tree like data structure. To enable this, we define a helper method extract_node() that extract the current symbol and children from a given data structure. The default implementation simply extracts the symbol, children, and annotation from any derivation_tree node. While visualizing a tree, it is often useful to display certain nodes differently. For example, it is sometimes useful to distinguish between non-processed nodes and processed nodes. We define a helper procedure default_node_attr() that provides the basic display, which can be customized by the user. Similar to nodes, the edges may also require modifications. We define default_edge_attr() as a helper procedure that can be customized by the user. While visualizing a tree, one may sometimes wish to change the appearance of the tree. For example, it is sometimes easier to view the tree if it was laid out left to right rather than top to bottom. We define another helper procedure default_graph_attr() for that. Finally, we define a method display_tree() that accepts these four functions extract_node(), default_edge_attr(), default_node_attr() and default_graph_attr() and uses them to display the tree. Say we want to customize our output, where we want to annotate certain nodes and edges. Here is a method display_annotated_tree() that displays an annotated tree structure, and lays the graph out left to right. The all_terminals() function returns the string representation of all leaf nodes. However, some of these leaf nodes may be due to nonterminals deriving empty strings. For these, we want to return the empty string. Hence, we define a new function tree_to_string() to retrieve the original string back from a tree like structure. Let us now develop an algorithm that takes a tree with unexpanded symbols (say, derivation_tree, above), and expands all these symbols one after the other. As with earlier fuzzers, we create a special subclass of Fuzzer – in this case, GrammarFuzzer. A GrammarFuzzer gets a grammar and a start symbol; the other parameters will be used later to further control creation and to support debugging. Just like nonterminals() in the chapter on Grammars, we provide for future extensions, allowing the expansion to be a tuple with extra data (which will be ignored). The generic expand_node() method can later be used to select different expansion strategies; as of now, it only uses expand_node_randomly(). 
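The following is a condensed, self-contained sketch of how expansion_to_children() and expand_node_randomly() can be implemented; the version in the chapter differs in details (for instance, it handles tuple-valued expansions with extra data and empty expansions), so treat this as an approximation of the idea rather than the chapter's exact code.

```python
import random
import re

RE_NONTERMINAL = re.compile(r'(<[^<> ]*>)')

def expansion_to_children(expansion):
    # Split an expansion string into terminal and nonterminal parts;
    # nonterminals get None (to be expanded later), terminals get [].
    parts = [p for p in RE_NONTERMINAL.split(expansion) if p]
    return [(p, None if RE_NONTERMINAL.fullmatch(p) else []) for p in parts]

def expand_node_randomly(grammar, node):
    # Pick a random expansion for an unexpanded node (symbol, None).
    symbol, children = node
    assert children is None
    expansion = random.choice(grammar[symbol])
    return (symbol, expansion_to_children(expansion))

# Example: expanding "<expr>" in the arithmetic grammar might yield
# ("<expr>", [("<term>", None), (" + ", []), ("<expr>", None)])
```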
The helper function process_chosen_children() does nothing; it can be overloaded by subclasses to process the children once chosen. The method any_possible_expansions() returns True if the tree has any unexpanded nodes. Here comes expand_tree_once(), the core method of our tree expansion algorithm. It first checks whether it is currently being applied on a nonterminal symbol without expansion; if so, it invokes expand_node() on it, as discussed above. If the node is already expanded (i.e. has children), it checks the subset of children which still have unexpanded symbols, randomly selects one of them, and applies itself recursively on that child. The expand_tree_once() method replaces the child in place, meaning that it actually mutates the tree being passed as an argument rather than returning a new tree. This in-place mutation is what makes this function particularly efficient. Again, we use a helper method (choose_tree_expansion()) to return the chosen index from a list of children that can be expanded. """Return index of subtree in `children` to be selected for expansion. Defaults to random.""" """Choose an unexpanded symbol in tree; expand it. Can be overloaded in subclasses.""" Let's put it to use, expanding our derivation tree from above twice. We see that with each step, one more symbol is expanded. Now all it takes is to apply this again and again, expanding the tree further and further. With expand_tree_once(), we can keep on expanding the tree – but how do we actually stop? The key idea here, introduced by Luke in [Luke et al, 2000.], is that after inflating the derivation tree to some maximum size, we only want to apply expansions that increase the size of the tree by a minimum. For <factor>, for instance, we would prefer an expansion into <integer>, as this will not introduce further recursion (and potential size inflation); for <integer>, likewise, an expansion into <digit> is preferred, as it will less increase tree size than <digit><integer>. symbol_cost() returns the minimum cost of all expansions of a symbol, using expansion_cost() to compute the cost for each expansion. expansion_cost() returns the sum of all expansions in expansions. If a nonterminal is encountered again during traversal, the cost of the expansion is $\infty$, indicating (potentially infinite) recursion. Here's two examples: The minimum cost of expanding a digit is 1, since we have to choose from one of its expansions. Here's now a variant of expand_node() that takes the above cost into account. It determines the minimum cost cost across all children and then chooses a child from the list using the choose function, which by default is the minimum cost. If multiple children all have the same minimum cost, it chooses randomly between these. The shortcut expand_node_min_cost() passes min() as the choose function, which makes it expand nodes at minimum cost. We can now apply this function to close the expansion of our derivation tree, using expand_tree_once() with the above expand_node_min_cost() as expansion function. We keep on expanding until all nonterminals are expanded. We see that in each step, expand_node_min_cost() chooses an expansion that does not increase the number of symbols, eventually closing all open expansions. We see that with each step, the number of nonterminals increases. Obviously, we have to put a limit on this number. Max cost expansion. Expand the tree using expansions with maximum cost until we have at least min_nonterminals nonterminals. 
This phase can be easily skipped by setting min_nonterminals to zero. Random expansion. Keep on expanding the tree randomly until we reach max_nonterminals nonterminals. Min cost expansion. Close the expansion with minimum cost. We implement these three phases by having expand_node reference the expansion method to apply. This is controlled by setting expand_node (the method reference) to first expand_node_max_cost (i.e., calling expand_node() invokes expand_node_max_cost()), then expand_node_randomly, and finally expand_node_min_cost. In the first two phases, we also set a maximum limit of min_nonterminals and max_nonterminals, respectively. """Output a tree if self.log is set; if self.display is also set, show the tree structure""" until the number of possible expansions reaches `limit`.""" """Expand `tree` in a three-phase strategy until all expansions are complete.""" Let us try this out on our example. Based on this, we can now define a function fuzz() that – like simple_grammar_fuzzer() – simply takes a grammar and produces a string from it. It thus no longer exposes the complexity of derivation trees. Let us try out the grammar fuzzer (and its trees) on other grammar formats. How do we stack up against simple_grammar_fuzzer()? Our test generation is much faster, but also our inputs are much smaller. We see that with derivation trees, we can get much better control over grammar production. With GrammarFuzzer, we now have a solid foundation on which to build further fuzzers and illustrate more exciting concepts from the world of generating software tests. Many of these do not even require writing a grammar – instead, they infer a grammar from the domain at hand, and thus allow to use grammar-based fuzzing even without writing a grammar. Stay tuned! effectively avoids running into infinite expansions. Congratulations! You have reached one of the central "hubs" of the book. From here, there is a wide range of techniques that build on grammar fuzzing. Assigning constraints to individual expansions allows to express semantic constraints on individual rules. Derivation trees (then frequently called parse trees) are a standard data structure into which parsers decompose inputs. The Dragon Book (also known as Compilers: Principles, Techniques, and Tools) [Aho et al, 2006.] discusses parsing into derivation trees as part of compiling programs. We also use derivation trees when parsing and recombining inputs. The key idea in this chapter, namely expanding until a limit of symbols is reached, and then always choosing the shortest path, stems from Luke [Luke et al, 2000.]. Tracking GrammarFuzzer reveals that some methods are called again and again, always with the same values. Set up a class FasterGrammarFuzzer with a cache that checks whether the method has been called before, and if so, return the previously computed "memoized" value. Do this for expansion_to_children(). Compare the number of invocations before and after the optimization. Important: For expansion_to_children(), make sure that each list returned is an individual copy. If you return the same (cached) list, this will interfere with the in-place modification of GrammarFuzzer. Use the Python copy.deepcopy() function for this purpose. Some methods such as symbol_cost() or expansion_cost() return a value that is dependent on the grammar only. Set up a class EvenFasterGrammarFuzzer() that pre-computes these values once upon initialization, such that later invocations of symbol_cost() or expansion_cost() need only look up these values. 
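A stripped-down sketch of the two cost functions described above (the ones the exercise suggests precomputing once per grammar) could look as follows. It assumes plain string expansions and omits the chapter's class structure, so it is an approximation rather than the book's exact implementation; a memoizing subclass would simply call these once for every symbol and expansion at initialization and store the results in dictionaries.

```python
import re

def nonterminals(expansion):
    # Nonterminals are written <like-this> in the grammars used here.
    return re.findall(r'<[^<> ]*>', expansion)

def symbol_cost(grammar, symbol, seen=frozenset()):
    # Minimum cost over all expansions of `symbol`.
    return min(expansion_cost(grammar, e, seen | {symbol})
               for e in grammar[symbol])

def expansion_cost(grammar, expansion, seen=frozenset()):
    # One unit for the expansion itself plus the cost of each nonterminal in it;
    # re-encountering a symbol signals (potentially infinite) recursion.
    symbols = nonterminals(expansion)
    if any(s in seen for s in symbols):
        return float('inf')
    return sum(symbol_cost(grammar, s, seen) for s in symbols) + 1
```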
In expand_tree_once(), the algorithm traverses the tree again and again to find nonterminals that still can be extended. Speed up the process by keeping a list of nonterminal symbols in the tree that still can be expanded. What is the difference between the original implementation and this alternative? Andreas Zeller, Rahul Gopinath, Marcel Böhme, Gordon Fraser, and Christian Holler: "Efficient Grammar Fuzzing". In Andreas Zeller, Rahul Gopinath, Marcel Böhme, Gordon Fraser, and Christian Holler (eds.), "Generating Software Tests", https://www.fuzzingbook.org/html/GrammarFuzzer.html. Retrieved 2019-02-19 10:09:48+00:00.
CommonCrawl
We characterize Neeman's well generated triangulated categories and discuss some of their basic properties. 2000 Mathematics Subject Classification: 18E30 (55P42, 55U35). Keywords and Phrases: Triangulated category, well generated category, $\alpha$-compact object, $\alpha$-small object.
CommonCrawl
Cas13a (previously known as C2c2) is a Class 2 type VI-A CRISPR-Cas effector that specifically targets RNA (O. O. Abudayyeh et al. 2016). Due to the potential use of Cas13a as a broad range detection tool we attempted to turn the protein and its required guide sequence into a BioBrick. The construct was delivered in three fragments that had to be assembled with the backbone into a plasmid. In parallel, we expressed Cas13a from a plasmid (pC013) containing Leptotrichia wadei Cas13a (LwCas13a), received from the authors of Gootenberg et al. 2017. In this section, the synthesis of a BioBrick encoding for Cas13a is explained. Firstly, the sequence for LwCas13a was extracted from Gootenberg et al. 2017. This DNA construct (7.7 kb) was ordered in three separate parts, denoted part 1 (2.5 kb), part 2 (1.6 kb) and part 3 (0.3 kb). See Figure 1 for an overview of how the assembled plasmid should look (see the Cas13a notebook for the entire process of assembling this plasmid). After Gibson Assembly (GA), a PCR was run on the resulting GA mixture and the product was loaded on a gel. This showed that part 2 and part 3 were successfully assembled. Figure 1: Schematic overview of our multipurpose Cas13a construct. The insert was delivered in three parts: part 1 (green), part 2 (blue) and part 3 (red). The backbone (pSB4A5) is depicted in yellow. The ligated part 2 and part 3 fragment was PCR-amplified from the GA mix (see Figure 2). Subsequently, another Gibson Assembly was done with part 1, the ligated part 2 and part 3 and the backbone (pSB4A5). We transformed the GA mixture several times unsuccessfully (see the Cas13a notebook). When amplifying the ligation mixture we obtained a band corresponding to the size of 7.7 kb, which indicated the presence of the fully assembled plasmid (Figure 3), at least in linear form. The plasmid was linearly amplified by PCR and blunt-end ligation was performed. The ligation was then transformed into BL21 (DE3) cells. Colonies of this transformation were screened with colony PCR, which gave a positive result for colony 1 (Figure 4). Plasmid DNA from colony 1 was purified and sent for sequencing, see Figure 5 for the results. Figure 2: PCR amplification of the ligated part 2 (P2) and part 3 (P3). This was achieved using primers IG0030 and IG0023. The sequence should be around 1800 bp, which is in line with the picture. Figure 3: PCR amplification of the whole plasmid (7.7 kb). Primers IG0055 and IG0056 were used. The Gibson assembly mix was used in different dilutions and in duplo. The dilutions 1:10 have bands of the right size. Figure 4: Colony PCR of colony 1. Primers IG0007 and IG0004 (1.5 kb) were used. Colony 1 seemed to contain plasmids with an insert of the right size. Figure 5: The alignment of the sequencing results. This image shows the alignment of the sequencing results of plasmids from colony 1 with the original insert design in SnapGene. The blue rectangles indicate deletion sites. The construct was successfully assembled; however, two deletions were present. These deletions, present in the coding sequence of the Cas13a gene, caused a frameshift, resulting in corrupted translation of the gene. For further experiments, the Cas13a was purified from a plasmid received from the authors of Gootenberg et al. 2017. This protein is structurally similar to our envisioned BioBrick, except that our BioBrick is codon optimized for bacterial expression instead of mammalian expression and that the CRISPR array lies downstream of the Cas13a coding sequence.
Please see the details of our construct on the Cas13a design page. The CRISPR array with the interchangeable spacer was made a composite and basic part (BBa_K2306013 and BBa_K2306015). This array can serve as the template for future Cas13a BioBricks. Thanks to the two BsaI restriction sites at each end of the spacer, any guide can be easily be made by restriction ligation of a new spacer. See Figure 6 for a schematic overview of BBa_K2306013. Figure 6: Schematic overview of the basic part BBa_K2306013. It consists of a changeable spacer (blue) flanked by two direct repeats (DR) (orange) and a constitutive promoter (J23119)(white). The purification protocol we used to purify Cas13a is based on the protocol described in Gootenberg et al. 2017 (find the Cas13a purification protocol here). During purification, SDS-PAGE analysis was done to check for presence and purity of Cas13a (Figure 7). Figure 7: The difference in expression between the bacteria cultures before and after induction with IPTG (BI and AI). The intensity of the bands around 140 kDa show a clear difference before and after induction, which mean that the induction was successful and that the size of the translated protein is right. B: The presence of proteins in the different purification samples. This gels shows the presence of proteins in the supernatant (SN), the first two washing steps (W1 and W2) and the five elution steps (E1-5). It can be seen that clear bands around 140 kDa are present in the elution steps. Also, some fainter bands at different heights can be seen in the elution steps. From these results we can conclude that Cas13a is expressed in significantly higher quantities after induction with IPTG (although there is some leaky expression). Furthermore we can conclude that purity of the product is increased using the different purification steps that were done. To assess the collateral cleavage activity of the purified Cas13a, we first made use of the fluorescence assay as described by Gootenberg et al. 2017. For a description of the RNase alert assay (Thermo Fischer Scientific, 2017), see the Cas13a design page. Reactions with Cas13a were performed with and without target and crRNA (Figure 8). Figure 8: The fluorescence of RNase alert over time. Fluorescence indicates for Cas13a activity. It is shown here that there is only activity when both crRNA and target RNA is added to the solution. Values reported here are the measured values of which the blank (only RNase alert) is subtracted. From this figure we can conclude that Cas13a indeed gets activated by finding its target and shows collateral cleavage. There is relatively more fluorescence when Cas13a is present with crRNA and target, than without them. We employed our crRNA design to design 5 crRNAs that have been successfully synthesized following the crRNA synthesis protocol. The synthesis of crRNA is achieved by annealing two complementary DNA oligos (oligos IG0057 - IG0063) that serve as templates. In vitro transcription was conducted on these DNA oligos. As indicated in Figure 9, we observe bands of the expected length on a gel, confirming crRNA production. To test the functioning of these crRNAs, similar fluorescent assays as for testing the functioning of Cas13a were repeated for different crRNAs. The results in Figure 10 show that all tested crRNAs result in Cas13a activation as expected. However, the efficiency of binding the target differs between the crRNAs, with crRNA 3 exhibiting the best target binding. Figure 9: crRNA production. 
Electrophoresis of 10% PAGE stained with SYBR Gold. A Low Range ssRNA Ladder (LRL) is used. This gel contains the different designed crRNAs synthesized by in vitro transcription with (1-5) and without hairpin (1H-5H). The presence of bands of the expected length in all lanes indicates successful crRNA production. Figure 10: Testing the crRNAs. This graph shows the activity of Cas13a in combination with different crRNAs for the same target designed with the crRNA model. The difference in activity demonstrates the difference in efficiency of the different crRNAs. Although we did not manage to create our own composite BioBrick part containing Cas13a and the spacer sequence, we were still able to to construct a BioBrick composite part comprising the spacer sequence and a constitutive promoter that can be used in combination with other BioBricks encoding for Cas13a. Furthermore, the Cas13a obtained from Gootenberg et al. 2017 was successfully expressed and purified following the Cas13a protein purification protocol. We confirmed presence of the protein by running on SDS-PAGE. An RNase alert fluorescence assay from Gootenberg et al. 2017 was used to show the functionality of Cas13a. Additionally, the fluorescence assay enabled us to test the efficiency of different guides (different crRNAs) designed using our novel crRNA design tool. Cas13a needs an RNA target to detect. Therefore, the bacterial DNA present in a milk sample first needs to be isolated, amplified and transcribed to RNA. DNA can be isolated by either boiling the cells or by the use of a microwave described by Dashti et al. 2009. We decided to try out both with consecutive amplification and transcription. This included testing the sensitivity of our methods and testing the target in a cleavage assay using Cas13a. We also put effort into constructing a centrifuge ourselves. Lastly, we used our 'motif finder' to design primers that could be used to amplify resistance genes from mastitis pathogens (blaZ genes). To see if we were able to use the easy DNA extraction method described by Dashti et al. 2009, we repeated the control steps of the paper. We tried the boiling- and microwave- method on bacteria of the KEIO-strain (JW0729). We used PCR primers that anneal to the kanamycin resistance gene to test our DNA extraction method. We used the microwave and boiling method described in the DNA isolation protocol. Next, we amplified a specific DNA-sequence of the KEIO-strain with primers IG0014 and IG0015 ( GoTaq PCR ). If the primers annealed, a band of size ~750 bp was expected. To be sure that the amplified fragment was the right one, we did a digestion with BsrBI, to obtain DNA-fragments of sizes approximately 250 bp and 500 bp (Digestion). We checked the sample on a gel (DNA electrophoresis), as shown in Figure 11. Figure 11: Digestion of PCR product on DNA extracted by DNA isolation methods. The number of cells are indicated in each lane. The microwave extraction entails using the microwave for 10 seconds, while the longer microwave extraction was for 20 seconds. We can conclude that both methods are suitable to isolate DNA from gram-negative bacteria. However, the boiling method was more user-friendly. This is because the settings of different microwaves are not always the same. For example, when we tried our protocol with another microwave, the tubes containing the samples burst open. Amplification of DNA, which follows the isolation, is possible by Recombinase Polymerase Amplification (RPA). 
This isothermal alternative to a PCR reaction can be applied at a constant temperature. In this way, no thermocycler is required. The RPA reaction was tested on pSB1T3, a plasmid with a tetracycline resistance gene, following the RPA protocol. We used the designed primers IG0016 and IG0017, with an expected amplification product of 167 bp. After purification of the samples (PCR product purification), the samples were checked on an RNA gel (RNA electrophoresis). These results can be found in Figure 12. Figure 12: Target DNA made from pSB1T3. The target DNA was made by RPA. We successfully amplified a part of the tetracycline resistance gene from pSB1T3. RPA is thus applicable for amplifying DNA at a constant temperature, even though pipetting and mixing all compounds together might require some practice. During the RPA reaction, a T7-polymerase binding site can be added to the target DNA by specially designed primers. By adding T7-polymerase the DNA can be transcribed into RNA simultaneously. The combined reaction of RPA and T7 polymerase was tested to amplify and transcribe a part of the tetracycline resistance gene present on pSB1T3. First, the DNA was amplified and transcribed into RNA following the RPA+in vitro transcription protocol. We used the primers IG0016 and IG0017, designed with a T7 promoter tail. The expected RNA product was 145 bp. After purification of the sample (RNA isolation), the sample was checked on an RNA gel (RNA electrophoresis), as can be seen in Figure 13. Figure 13: Target RNA made from pSB1T3. The target RNA was made by RPA and in vitro transcription at 37 °C. DNA amplification and in vitro transcription can be combined to obtain the required target RNA. Again, pipetting and mixing all compounds together requires some training, meaning that we need to come up with a solution to reduce the number of pipetting steps. The next step was to combine the DNA isolation step with the consecutive amplification and transcription step. Therefore, pSB1T3 was transformed into DH5-α and the cells were diluted in milk. To assess the sensitivity of the boiling method on gram-negative bacteria in milk, we prepared a dilution range of 10^-3 up to and including 10^-9 in milk. The DNA was isolated following the boiling method protocol. After amplification and in vitro transcription of the isolated DNA, the RNA was purified and loaded on a gel to check the result (making use of the RPA+in vitro transcription protocol, the RNA isolation protocol and the RNA electrophoresis protocol). The results can be found in Figure 14. Figure 14: Target RNA made by RPA and in vitro transcription from tetracycline resistant bacteria diluted in milk. Both the isolation method and the RPA and in vitro transcription reaction were tested on pSB1T3 in DH5-α at 37 °C. To assess the sensitivity of the method, different dilutions of DH5-α transformed with pSB1T3 were prepared in milk. The dilutions are indicated per lane. With a teardrop assay, we estimated the number of cells in the sample we used. There were approximately 3 x 10^8 cells. We were able to isolate, amplify and transcribe DNA up to a dilution of 10^-8 in milk, indicating that we could detect our target in the range of 1-10 cells per sample. However, one should keep in mind that the cells contained a high copy number plasmid, which means that our sample contained more DNA per cell than an average biological sample.
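A quick back-of-the-envelope check of this sensitivity claim, assuming the teardrop-assay estimate of roughly 3 x 10^8 cells applies to the undiluted sample (an assumption, since the count was only estimated):

```python
cells_undiluted = 3e8                      # teardrop-assay estimate (approximate)
for k in range(3, 10):
    cells = cells_undiluted * 10**-k
    print(f"10^-{k} dilution: about {cells:g} cells per sample")
# The 10^-8 dilution then corresponds to about 3 cells, consistent with the
# 1-10 cells per sample range stated above.
```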
To make our device more user-friendly, we tried the amplification and in vitro transcription at 22 °C (room temperature) for 3 hours, instead of at 37 °C. The results are displayed in Figure 15. These results indicate that the RPA is able to work at room temperature. The T7 RNA polymerase we used on the other hand, has an optimal temperature of 37 °C, and is not functional at room temperature. This means that for our device, we need to come up with a way to regulate the incubation temperature for 3 hours. Figure 15: Target RNA made by RPA and in vitro transcription at 22°C from DH5-α transformed with pSB1T3, diluted in milk. Both the isolation method and the RPA and in vitro transcription reaction were tested on pSB1T3 in DH5-α at 22 °C. To assess the sensitivity of the method, different dilutions of DH5-α transformed with pSB1T3 were prepared in milk. The dilutions are indicated per lane. The sample preparation is required to obtain the target for Cas13a. Therefore, It is important to find out if, with the current sample preparation, Cas13a is able to recognise the target and will engage in collateral cleavage. To check whether or not Cas13a gets activated by our prepared sample, we used one of our purified RNA samples in the Cas13a activity assay. As can be seen in Figure 16, our Cas13a was activated by our target. This demonstrates that we successfully prepared the sample in such a way that the target became available for Cas13a to detect. Figure 16: Cas13a assay with target RNA extracted with our boiling DNA isolation method. (blue) Cas13a with crRNA and target isolated from artificially contaminated milk, containing bacteria with the tetracycline resistance gene. (yellow) Cas13a with crRNA and target isolated from milk without bacteria containing the tetracycline resistance gene. (red) Cas13a without crRNA and without target. We wanted to test if our sample preparation could be applied on a biological sample, without knowing the sequence beforehand. We decided to validate our sample preparation on mastitis. While talking to veterinarian experts in our Integrated Human Practices, we discovered that detecting multi resistance genes such as blaZ and mecA would be most relevant. We used our 'motif finder' to find all the conserved regions of blaZ. On these regions, we designed primers for RPA. We wanted to test if our motif finder would work in practice; ie. that the designed primers would bind to the unknown genome of any pathogen containing the blaZ-gene. We were happy to receive three isolates from the Wageningen Bioveterinary Research in Lelystad; one was confirmed to contain the blaZ-gene, the other two isolates most likely contained the blaZ-gene as well. The exact sequences of the strains were not provided. First of all, we wanted to test if our primers would anneal to the unknown genome of the isolates, which most likely contained a variant of the blaZ-gene. Three different DNA isolation methods were used: the boiling method (DNA isolation boiling) and two commercial kits (Plasmid Isolation (Promega PureYield™ Plasmid Miniprep Kit and Milk Bacterial DNA Isolation Kit). To test the sensitivity of our method, we decided to prepare a dilution range in milk of 10-3 up to and including 10-9 of S. aureus. Next, we used the boiling method to isolate the DNA. (DNA isolation boiling). 
After DNA isolation, the samples were amplified, transcribed into RNA, purified and then loaded on a gel (RPA+in vitro transcription protocol, of the RNA isolation protocol and the RNA electrophoresis protocol). The samples prepared in LB-medium all gave a positive output. However, all the dilutions prepared in milk did not show a band. We wanted to test if the samples were too diluted already, so we prepared the 10-1 and 10-2 dilutions both in milk and in LB. The results can be found in Figure 17. Figure 17: Target RNA of blaZ-gene made by RPA and in vitro transcription. (A) To test if the primers we designed on the conserved regions of blaZ work, we extracted DNA from S.aureus (1.0), Coagulase Negative Staphylococcus 1 (2.0) and Coagulase Negative Staphylococcus 2 (3.0) prepared in Tryptic Soy Broth in 3 different ways, namely: the boiling method (0.1), the commercial promega miniprep plasmid isolation kit (0.2) and the norgen milk isolation kit (0.3). (B) To assess the sensitivity of the boiling method, different dilutions of S.aureus were prepared in milk. These dilutions are on indicated per lane. We prepared the 10-1 and 10-2 dilutions of S.aureus both in milk and in LB and performed RPA and in vitro transcription. We used DH5-α transformed with pSB1T3;(Tet) as negative control. The samples in milk again did not show a band, in contrast to the samples prepared in LB. For the samples prepared in LB, the boiling was long enough to break open the cells. This indicates that the boiling method can be used for gram-positive bacteria in LB. For milk on the other hand, 10 minutes in boiling water was not sufficient. We dove into literature again and found out that milk interferes with DNA isolation methods. We did not experience this when we boiled gram-negative bacteria in milk. However, gram-positive bacteria have a thick peptidoglycan layer in the cell wall, making it harder to lyse the cells. Therefore, for future experiments, we would recommend to boil for at least 15 minutes and immediately put samples on ice for 15 minutes before centrifuging as described by Ribeiro Junior et al. 2016. Another possible method was explained by Parayre et al. 2009, which involves the resuspension of the pellet in lysis buffer and the consecutive incubation with proteinase K. This means however, that an extra hour of incubation time needs to be considered and extra pipetting steps. For the DNA isolation, a centrifuge step is required to spin down the cell debris. We tried to build a centrifuge out of a hard drive, by making it spin on its own. Additionally, we made a hand-powered centrifuge out of a string and around-shaped hard plastic. The hard drive has a default speed which cannot easily be overridden, so we wanted to find out whether the default speed exceeded our ideal speed. For this simple speed test, we broke open the hard drive and marked a certain part of the disk. We filmed the spinning and analysed the video with VLC media player. We subtracted the frame rate (which was 24 Hz) and tried to determine in how many frames the mark would make one round. Unfortunately the hard drive spun so fast that the mark already made a whole round in one frame. This means that the hard drive would spin with a speed of at least 24 Hz $\times$ 60 = 1440 rpm, exceeding our ideal speed of 1000 rpm. To be able to control the speed of the hard drive, we need a sensor less brushless motor driver (ESC) to drive the special motor of the hard drive and control this speed with an Arduino. 
We decided to discard this idea and we continued with our hand-driven centrifuge. We started with a preliminary test, to check whether or not we could spin down our suspension of cells by using our centrifuge. By centrifuging for less than a minute, we already made a cell pellet. This process can be seen in figure 18. We aimed to make all our steps as easy as possible, as the end user (a farmer) does not have access to specialized lab facilities and equipment. It is very important that the end-user is provided with all the necessary materials and protocols. To make our kit accessible and user-friendly, we decided to buy all the materials in the local shopping centre instead of buying it from brands specialized in lab facilities. Products in stores are already developed to be user-friendly and usable by everyone. Based on the materials provided in the kit we worked out a protocol, in easy accessible language and steps. To validate our work we visited Paul at his farm, where he tried out whether or not he could handle the provided materials in our protocol. From our work we gained two insights we were able to implement in the protocol: (1) we need to reduce the number of pipetting steps in the protocol for the farmer and (2) in order to let RPA and in vitro transcription work we need to regulate temperature for 3 hours. To reduce the amount of pipetting, we envision in our protocol to provide all the enzymes, primers and NTPs dried in a tube. In this way only the rehydration buffer (with the activation reagent) and the DNA isolated from the milk sample need to be added. Furthermore, we came up with a creative way to regulate the temperature, by using the materials we already had in our toolbox thermos flask. By adding a meat thermometer that can measure up to 120 °C, we can bring the water to the right temperature and keep it around the optimum temperature for RPA an in vitro transcription. For future research on DNA isolation from gram-positive bacteria in milk we advise to boil for at least 15 minutes and immediately put samples on ice for 15 minutes before centrifuging. We recommend testing the sensitivity of the whole tool by also testing the RNA prepared from all the different dilutions DNA with Cas13a. Lastly, it would be interesting to test an isothermal T7 RNA polymerase. In this way, not only the RPA-reaction works at room temperature, but the whole RPA and in vitro transcription can be done at this temperature, greatly increasing the user-friendliness of our tool. We experimentally validated that boiling is an easy and effective method to isolate DNA from gram-negative bacteria in milk and gram-positive bacteria in LB. Also, that the combined RPA and in vitro transcription reaction can be performed at a constant temperature. Furthermore, we mapped out and tested all the sample preparation steps to get from gram-negative bacteria with our target gene to our target RNA, which activated Cas13a. We can state that we successfully prepared a biological sample for detection via Cas13a. We translated what we learned into a user-friendly protocol that can be followed by farmers to perform sample preparation on their farm. The goal of the detection module was to demonstrate our newly invented coacervation method, named Coacervate Inducing Nucleotide Detection of Your Sequence (CINDY Seq). CINDY Seq allows naked-eye detection of target recognition by Cas13a, exploiting the physical phenomenon called "coacervation". 
This is the phenomenon that mutually attracting polymers phase-separate into polymer-rich regions (known as coacervates) and polymer-poor regions if the polymers are long enough and the conditions are right. More elaborate description of this method is provided on the design page, and a theoretical model describing the coacervation process is provided on the modelling page. We started forming coacervates following Aumiller et al. 2016. They use "long" polymers of uracil (polyU) in combination with spermine, a small positively charged polyamine, to form coacervates. PolyU/spermine coacervates were formed with 0.1 wt% polyU and various spermine concentrations. The absorbance of 500 nm light was measured by UV/Vis spectrophotometry. The results are shown in Figure 19. An interesting observation that follows from these measurements is that there is an optimum of spermine in between 0.1 and 1.0 wt%. This is counterintuitive, as it is hard to imagine that addition of more spermine actually decreases the amount of coacervates. However, this is perfectly in line with theoretical considerations as we explain in the coacervate modelling page. Figure 19: Absorbance of polyU/spermine coacervate solutions. Barplot shows the absorbance (500 nm light) of coacervate solutions containing varying amounts of spermine and a fixed (0.1 wt%) polyU concentrations. The result indicates that there is an optimum and that addition of more spermine after 1 wt% in fact decreases the overall turbidity. This result, although counterintuitive, is predicted by theoretical models of coacervates. Error bars indicate standard deviation from the mean of three measurements. With coacervates formed, the next step was to test the coacervation method for an unspecific RNase to give a proof of principle. This RNase would eventually be substituted by Cas13a. Besides the proof of principle, the experiments with non-specific RNase (RNase A) would already give us information on timescales, concentrations and methods of measurement. In Figure 20 two tubes are shown, both containing polyU and spermine. To the left tube, RNase A is added. There is clear difference in turbidity between the two tubes, proving that coacervates do not form in the presence of RNase A. Now that we were able to prove that RNase can inhibit the formation of coacervates, we did an RNase A titration to determine from which concentration of RNase A the formation of coacervates is inhibited. The results are shown in Figure 21. From these results we can conclude that the formation of coacervates is inhibited above an RNase A concentration of 10-4 wt%. Figure 20: PolyU/spermine coacervates with (left) and without (right) RNase A. As is directly visible to the naked eye, coacervate solutions of 0.1 wt% polyU and 1.0 wt% spermine are cleared upon addition of 0.05 wt% RNase A. This serves as a first proof of principle that coacervates can be used to indicate RNase activity. Figure 21: Titration of RNase A. Various amounts of RNase A were added to polyU/spermine coacervate solutions and after 5 minutes of equilibrating to 27 °C, the absorbance of 500 nm was measured over the course of 30 minutes. In the case where no or only 10-5 wt% of RNase was added, significant absorbance could be measured throughout the full period of time. For higher concentrations, the absorbance fell to that of the background within the equilibration time. In order to transfer the results obtained with RNase A, we tested the effect of spermine on Cas13a. 
A fluorescence assay with RNase Alert (as described on the Cas13a design page) was done with the addition of 1.0 wt% spermine, demonstrating that Cas13a did not show collateral cleavage with this condition. The effect of spermine was tested with and without the presence of the target RNA. The control consisted of a regular Cas13a assay with and without target RNA. Figure 22 shows the results. We can conclude that spermine inhibits the activity of Cas13a, whereas it does not inhibit RNase A activity. For our purpose, this means that spermine always has to be added after sufficient time has given for Cas13a to collaterally cleave enough RNA. Figure 22: Collateral cleavage reaction is not compatible with 1.0 wt% spermine. The fluorescence assay with and without active Cas13a, and with and without 1.0 wt% spermine indicates that Cas13a does not show collateral cleavage activity in the presence of spermine. Finally, the functional Cas13a had to be integrated with CINDY Seq, to achieve a full proof of principle. First tests revealed that polyU/spermine coacervates form in the reaction buffer of Cas13a (40 mM Tris-HCl, 60 mM NaCl, 6 mM MgCl2, pH 7.3), but as said, Cas13a does not show activity in presence of 1.0 wt% spermine. Therefore, the experiment was separated into two parts. First, Cas13a with and without target and crRNA was incubated with polyU for varying amounts of time, and subsequently spermine was added to a final concentration of 1.0 wt%. It was observed that after 60 minutes of incubation and subsequent spermine addition, no coacervates were formed in the tube containing activated Cas13a. On the other hand, if spermine was added to the tube with inactive Cas13a, coacervates did form (see Figure 23). We can conclude that it is possible to prove the presence or absence of a certain RNA target with the naked eye using the CINDY Seq method. This conclusion can be drawn after at least 45 minutes of incubation, but preferably after 60 minutes (see Figure 24). Figure 23: CINDY Seq gives a result within the hour. This figure shows the difference in turbidity between solution with and without RNA target after 0 minutes and after 60 minutes of incubation. It is directly visible to the naked eye that when activated Cas13a is given 60 minutes, it is able to cleave the polyU to such an extent that it no longer forms coacervates. As a control, also inactive Cas13a was incubated with polyU and did not show cleavage of polyU to the extent where coacervation no longer occurred. The tubes in which spermine was added after 0 minutes serves as a control. Figure 24: Animated GIF showing the readout of CINDY seq at different timepoints. Mixes with active and inactive Cas13a and 0.1 wt% polyU were kept at 37 °C and supplemented with spermine (to final concentration of 1.0 wt%) at various timepoints. Previous experiments have shown that presence of 1.0 wt% spermine does not allow Cas13a to become active. At the first three timepoints (0, 15 and 30 min), coacervation is still visible in both tubes, indicating that there is still sufficient long polyU to coacervate. However, after 45 min and beyond, a clear solution can be observed in the mix containing active Cas13a, meaning that enough polyU has been cleaved. Here we have proven that the ability of Cas13a to collaterally cleave RNA can be combined with the formation of coacervates of long polymers into an easy and cheap detection method. 
By incubating Cas13a for one hour at 37 °C in solution with 0.1 wt% polyU and subsequently adding spermine to a final concentration of 1.0 wt% the presence or absence of the RNA target can be proven. To preserve activity of Cas13a and provide our device with a longer shelf life, we investigated the use of Tardigrade intrinsically Disordered Proteins (TDPs), which were found to preserve protein activity after desiccation and subsequent rehydration (Boothby et al. 2017). We picked four different TDPs, two cytosolic abundant heat soluble (CAHS) and two secretory abundant heat soluble (SAHS) proteins, further referred to as CAHS 94205, CAHS 106094, SAHS 33020 and SAHS 68234. The first step in using TDPs to preserve Cas13a activity was assembling the construct from the gBlocks synthesized by IDT. Since the first stage in the TDP module involved protein purification, we chose the T7 promoter. This strong promoter results in overexpression of the produced proteins. To assemble the parts, a Gibson Assembly (Gibson et al. 2009) was carried out following the Gibson assembly protocol, and cloned into a TOP10 strain. The resulting colonies were then screened for the inserts after extraction of their DNA and subsequent PCR amplification of the insert. The colonies chosen for sequencing, included in Figure 25, were purified using Miniprep and subsequently sequenced. Figure 25: Gel electrophoresis of colony PCR products of the colonies resulting from the transformation of the assembled plasmids.The gels feature ladders (L), negative controls (x1) and the different inserts (1 to 6, respectively CAHS94205_2, CAHS94205, CAHS). Bands with the expected size for the insert are marked with. The primers used for the PCR were chosen to anneal upstream the prefix and downstream the suffix of the backbone pSB1C3 to screen for the insert. The expected size of the PCR product without insert was 261 bp, while the inserts carrying the CAHS proteins (1.x, and 3.x) and SAHS (5.x and 6.x) had an expected size of ~1000 bp and ~850 bp, respectively. As shown in Figure 25, bands of approximately 1000 bp were found in the colonies 1.5, 3.2, 3.5, 4.6, 4.8 and 4.9 while bands of ~850 bp were visible in the colonies 5.1, 6.1, 6.2, 6.5, 6.6, 6.7 and 6.8. Therefore, the assembly was concluded to have yielded properly assembled plasmids for all inserts, except insert 2. However, both assembly of insert 2 and insert 4 were discarded due to time constraints. Finally, the colonies 1.5, 3.5, 5.1 and 6.2 were picked for further sequencing, where all of them were confirmed to contain the exact desired sequence. After the sequencing results had corroborated a successful cloning, the next step was to evaluate whether the resulting plasmids would produce the expected proteins. For protein purification, the assembled plasmids were transformed into the protein expression strain BL21 (DE3). Subsequently, the TDP purification protocol was executed and the final protein concentration was measured with a Bradford assay. The concentrations of the resulting protein solutions ranged from 1 to 3 g/L. We ran SDS-PAGE gels for all four proteins. Additionally, we included a protein solution resulting from the purification of the bacterial strain featuring the same backbone as our BioBricks. As depicted in Figure 26, the samples from the purified TDPs led to a series of additional bands absent in the control, indicating that we successfully produced and isolated all four tardigrade-specific proteins. 
Figure 26: SDS-PAGE gels of TDPs after protein purification. Gel a features the ladder (L), CAHS 94205 (#1), SAHS33025 (#2) and the control with the backbone pSB1C3 (Blank), while gel b includes CAHS 106094 (#3), the ladder (L) and SAHS 68234 (#4). In addition to SDS-PAGE of the purified protein solutions, the identity of the proteins was also confirmed by mass spectrometry. To do this, a peptide sequence unique to each TDP's amino acid sequence was chosen and screened for its presence in the proteome of the expression strain, BL21 (DE3). Subsequently, four cell cultures, each expressing a different TDP, were used to screen for all chosen sequences by mass spectrometry (MS) (see samples preparation), with an additional culture only featuring the same backbone (pSB1C3) as a control. The peak areas of the resulting mass spectra, shown in Figure 27, reflect the occurrence of a given sequence in each sample. Figure 27: Bar chart including the peak areas of the TDP samples analysed by mass spectrometry after purification. For the control, the expression strain only contained the same backbone. The unique peptides that were screened for were only present in the TDP expected to contain the sequence. The differences in peak height in Figure 27 can be attributed to the different lengths of the target peptides, which can influence the readout due to the variation of the mass/charge ratios. Hence, it can be concluded that the results were positive and the identity of the proteins could be verified by mass spectrometry. As the production of TDPs by our cloned BioBricks had been validated by different methods, we moved on to evaluate the protective capabilities of our proteins of interest. We performed a series of preliminary activity assays with the enzyme lactate dehydrogenase (LDH) to determine whether the TDPs conferred desiccation tolerance to non-native proteins, following the LDH assay protocol. In this assay, LDH was dried with and without TDPs and rehydrated before measurement, so that the protection TDPs offer can be established as the difference between the initial and final enzyme activity. Subsequently, the feasibility of long-term storage with TDPs was studied by measuring the activity of LDH+TDP samples dried and stored at room temperature for different amounts of time, up to a maximum of 18 days. All experiments were carried out in triplicate. To assess the protective effect of the TDPs, all four proteins were tested as buffer excipients (in concentrations ranging from 0.1 to 1 g/L) for the enzyme lactate dehydrogenase (LDH) in an enzymatic activity assay based on that described by Boothby et al. (2017). Figure 28: LDH activity after drying and subsequent rehydration for the different TDPs. As depicted in Figure 28, the LDH activity is least preserved at the lowest TDP concentration for all TDPs, and a maximum of activity is preserved at the highest concentrations we assessed, with the exception of CAHS 94205, though the error bars should be considered. This indicates that the higher the protein concentration in the solution, the better the activity is preserved; and thus, the better the protection of the enzyme. Such a trend was also observed in our model and by Boothby et al. (2017), hence substantiating the validity of our results. Furthermore, the preserved enzymatic activity at the highest concentration studied (1 mg/mL) did not differ significantly among the four proteins. 
It is however notable that, unlike the rest of the proteins, CAHS 106094 did not show a significant preservation of activity at a concentration of 0.1 mg/mL and yielded a much lower preservation at a concentration of 0.5 g/L in comparison to the other TDPs. In view of the previous results, we adapted the LDH assay previously carried out to test the protective capability of CAHS 94205 and SAHS 33020 for 18 days and thus evaluate their use for long-term storage at room temperature. Samples similar to those of the previous procedure were prepared and stored dried at room temperature for different amounts of time before their resuspension and measurement. Figure 29: LDH activity after freeze-drying and drying with TDPs. LDH activity after freeze-drying (FD)/ drying with different concentrations for CAHS 94205 (a) and SAHS 33020 (b) and rehydrating as a function of time. Experiments were performed at room temperature. As shown in Figure 29, LDH maintains its activity when stored dried with the cytosolic and secretory TDPs, while its activity was not preserved when freeze-dried or dried in the absence of TDPs. Furthermore, it can be seen in Figure 29 that for TDP concentrations of 0.1 mg/mL only SAHS 33020 could maintain a significant amount of activity after a day. Although both proteins successfully maintained LDH activity at higher TDP concentrations, the activity decrease over time was substantially lower for a concentration of 1 mg/mL compared to that observed for 0.5 mg/mL. Moreover, for a concentration of 1 mg/mL, it can be concluded that the variations in activity over time for SAHS 33020 are lower than those for CAHS 94205, suggesting that the mechanism by which the SAHS proteins confer their protection is more effective for long periods of time. Our assays reveal that LDH in combination with SAHS 33020 is still active after being stored dried for 18 days at room temperature. Nevertheless, the activity drops in the data discussed above could also be due to other factors than just the limitations of the proteins. It should be noted that after drying the samples no additional measures to ensure the dryness of the samples were taken, as these were kept in a drawer instead of in a desiccator. Moreover, interactions with any air components should not be neglected either, as the samples were not purged with an inert gas prior to their storage. In future experiments the long-term evolution of the Cas13a activity should be assessed for SAHS 33020 and other tardigrade-specific SAHS proteins as, in view of the long-term results for the TDPs with LDH, these proteins hold great potential in simplifying the storage, usage, and shipment of the many fragile chemicals and biological materials. As described by (Boothby et al. 2017), the production of tardigrade-specific proteins by heterologous organisms increases their desiccation tolerance significantly. Therefore, to the end of further characterizing our BioBricks, iGEM Wageningen UR was asked to carry out experiments with our plasmids to test the increase in bacterial desiccation tolerance by CAHS94205, CAHS106094 and SAHS33020. Furthermore, for the sake of gaining a better understanding of the proteins' function, Wageningen performed additional experiments by washing with PBS buffer. This was a procedure that had not been done before. The results are displayed in Figure 30. Figure 30: bacteria desiccation tolerance with TDPs. 
Number of colonies remaining after desiccation and rehydration for the empty vector and TU Delft's BioBricks for the production of CAHS 94205, CAHS 106094 and SAHS 33020 in BL21(DE3). Experiments performed by iGEM Wageningen UR. As seen in Figure 30, the number of colonies found after drying and rehydrating the bacteria (BL21(DE3)) with the TDPs is considerably larger than that for the empty vector of the same bacterial strain (1 vs. $10^3$-$10^4$). This proves that the survival of bacteria is greatly enhanced by the presence of the three TDPs in the cells and complies with the results from Boothby et al. 2017. However, a closer look at the differences in trend between the samples washed with water and PBS reveals a modest increase in the survival of bacteria with plasmids for the production of cytosolic abundant proteins (CAHS) and a decrease for the secretory abundant protein SAHS 33020, which shows that either the proteins or the mechanism by which the protection is conferred is affected by the medium used for resuspension. After evaluating the influence of TDPs on the desiccation tolerance of LDH, an established procedure, we wanted to assess their protective performance with Cas13a. Our lattice model indicated that it was best to use CAHS 94205 and SAHS 33020 to preserve Cas13a activity at a concentration of 1 g/L or higher. Furthermore, it was observed previously that these two TDPs preserved LDH activity after drying and subsequent rehydration, even after several weeks. Both CAHS 94205 and SAHS 33020 were dried with Cas13a at a concentration of 1 g/L, as the protein purification yielded low concentrations for SAHS 33020. Subsequently, Cas13a was rehydrated and its activity measured via an RNase Alert fluorescence assay. All measurements were performed in duplicate. Figure 31: Cas13a activity with CAHS 94205 (a) and SAHS 33020 (b) in the RNase Alert assay after drying and rehydration. SAHS 33020 and CAHS 94205 were dried with Cas13a and then combined with the target. Additionally, the fluorescence activity of the same sample without target, as well as a Cas13a sample with target but dried without TDPs, was measured. Cas13a with the respective TDP and the target without previous drying were used as a control. As shown in Figure 31a and b, the fluorescence intensity triggered by the RNase-like activity of Cas13a dried without any TDPs is very low in comparison to that of the Cas13a undried and stored frozen. However, while the fluorescence intensity and thus Cas13a activity was higher when Cas13a was dried with CAHS 94205, it also seemed to lose its specificity: the fluorescence intensity after one hour was the same with crRNA and target as without (Figure 31a). However, when the same experiments were repeated with SAHS 33020 instead (Figure 31b), Cas13a remained active after being dried and rehydrated, and the fluorescence intensity observed in the presence of the target and crRNA was clearly higher than in its absence. This indicates that SAHS 33020 can preserve Cas13a activity after desiccation and rehydration, thus keeping the protein functional and opening the possibility of the storage of Cas13a with TDPs. In view of the unexpected results observed for CAHS 94205 and the fact that iGEM Munich 2017 was also working with Cas13a and similar fluorescence assays, we sent them a newly purified batch of CAHS 94205 to determine whether the same phenomenon would be observed in experiments under the same conditions (see Figure 32). 
Figure 32: Cas13a activity with CAHS 94205 in RNase Alert Assay by iGEM Munich 2017. Fluorescence intensities over time triggered by RNase activity of Cas13a after drying with the TDP CAHS 94205, with and without crRNA and target RNA. Data from iGEM Munich 2017. The results of iGEM Munich 2017 also showed the unexpected RNase activity of Cas13a. We therefore hypothesize, that the conformational change in Cas13a, normally caused by the binding of the crRNA with the target, is induced as a result of an interaction between CAHS 94205 and Cas13a upon drying and resuspension. Consequently, CAHS 94205 was ruled out as a potential desiccation tolerance mediator for Cas13a. After receiving gene fragments for the four chosen TDPs and promoters, we assembled the fragments into a pSB1C3 backbone with the T7 promoter and transformed the resulting plasmids into the protein expression strain BL21(DE3). Their successful production was confirmed by both SDS-PAGE and mass spectrometry. From the different drying assays we performed, it was concluded that LDH activity was remarkably well preserved after 18 days in the samples containing LDH dried with SAHS 33020. Furthermore, we discovered that while both CAHS 94205 and SAHS 33020 preserved Cas13a's RNase-like activity after drying, CAHS 94205 could not preserve its specificity; Cas13a dried with the latter CAHS showed RNase activity even in the absence of the target RNA. This was corroborated by iGEM Munich 2017, when the same trend was observed under similar experimental conditions with a Cas13a protein originating from a different strain. Nonetheless, the results were positive for the tardigrade-specific protein SAHS 33020, as both the activity and specificity of Cas13a could be considerably preserved after drying and rehydrating. We therefore believe that the SAHS proteins hold great potential in simplifying the storage, usage and shipment of the many fragile chemicals and biological materials. As indicated on the design page, we wanted to let our bacteria produce all our necessary proteins at once and package them in Outer Membrane Vesicles (OMVs). For this purpose, we tried to establish and characterize hypervesiculation in a strain with a knock-out of an important membrane envelope protein, TolA. Furthermore, we conducted experiments in which another membrane protein, TolR, was overexpressed. It had been suggested that this would further enhance vesiculation. We received the plasmid with the mutation in the TolR gene from UNSW Australia 2016. In order to confirm the sequence, we transformed the plasmid into a TOP10 strain. Some colonies were picked and screened with colony PCR (with primers IG0028 and IG0027). Simultaneously these colonies were grown and miniprepped, prior to performing a digestion with NdeI and SphI. Colonies 3.1 and 3.2 were picked for sequencing (with primer IG0028), where the sequence of both plasmids was further confirmed. Figure 33: Gel electrophoresis of the colony PCR products and restriction assay. (A) The colony PCR products from different transformations. (B) The restriction assay, including non-digested samples as control. We performed a series of tests to determine if we successfully induced hypervesiculation by the deletion of TolA in the E.coli BW25133 strain from the Keio collection (KEIO) and the overexpression of TolR (Baba et al. 2006). The following combinations of plasmid and cells were used (Figure 34): pET-Duet with and without the insert of TolR, both in the E. 
coli BW25133 strain with (KEIO) and without (WT) the deletion of TolA. Figure 34: Scheme of hypervesiculation. Scheme of plasmids and strains used in characterization of hypervesiculation. By measuring the size distribution of the vesicles with Dynamic Light Scattering (DLS), we wanted to confirm vesicle production and determine vesicle size. DLS measurements were performed following the DLS protocol. Vesicle size distributions in the KEIO strain and WT strain with either TolR or pET-Duet were compared at different time points after induction: three, four, five hours and overnight (approximately 20 hours). In Figure 35, the raw data of both TolR and pET-Duet in the KEIO strain clearly show a distribution of larger particles compared to the WT strain, demonstrating that the WT strain produces very few to no vesicles. Therefore, we decided to focus only on analysing the DLS data of the KEIO strain. Figure 35: Raw data of the DLS experiment. Raw data of the size distributions of pET-Duet and TolR in the KEIO (a and b) and WT (c and d) strain, measured with Dynamic Light Scattering (DLS). The samples are plotted against the size in nm and the colour represents the size distribution in percentages. In Figure 36, the analysed data of the KEIO strain is shown. In this graph the mean and width of the size distribution are plotted per time point for TolR and pET-Duet. Overall, neither time nor the presence of TolR shifted the size distribution of the vesicles. The only exception is the time point after 4 hours, which might possibly be a statistical outlier. Therefore, the KEIO strain is mostly responsible for vesicle production. Pertaining to size, the average we found was around 18 nm in diameter (d.nm), which differs from the 80-100 d.nm sized vesicles found by the 2016 iGEM team of University of New South Wales (UNSW) Australia. Further, they showed that the size distribution shifts to larger vesicles when TolR is overexpressed in the KEIO strain, which was not evident in our data. A possible reason could be the low amount of IPTG we added. However, due to time constraints, we were not able to test a range of different IPTG concentrations. Figure 36: Analysed data of the DLS experiment. Vesicle size after induction of TolR (orange) and pET-Duet (blue) in the KEIO strain obtained by DLS measurements. The mean for each measurement is represented with dots and the width of the distribution with bars. The time points in hours are set against the size in nm. As shown in previous experiments, both pET-Duet and TolR in KEIO seem to produce vesicles. To confirm that the objects measured by DLS are vesicles and not, for example, parts of the cell, we made negative stain Transmission Electron Microscopy (TEM) images following the TEM protocol. We expected that vesicles have a different shape than the cell debris, namely spherical. Furthermore, when vesicles are big enough you should be able to see the lipid bilayer of the membrane. Figure 37 shows pET-Duet in KEIO (a) and TolR in KEIO (b). Vesicles were identified in the images, indicating that the measured objects were not simply cell debris. As shown in the raw DLS data in Figure 35, the range of vesicle sizes goes up to approximately 70 d.nm. Due to the resolution limitations of the TEM, we could only visualize vesicles above the average size of 18 d.nm. Figure 37: Transmission Electron Microscopy (TEM) images of TolR (left) and pET-Duet (right) in the KEIO strain. The red arrows point to vesicles. 
The DLS and TEM experiments do not provide any information about the concentration of produced vesicles. Therefore, vesicle concentration was determined by staining the DLS samples with the membrane dye FM4-64 and subsequent fluorescence measurements in a plate reader. The experiment followed the membrane staining protocol. In order to calculate the concentration of vesicles from the measured intensity, a calibration curve of synthetic liposomes was made. Figure 38 shows the following linear function which was fitted through the measured data: $I = 4.7865 \cdot C + 950.0159$. In this formula, $I$ is the intensity of the fluorescence in arbitrary units and $C$ the concentration of the liposomes in mg/µL. In Figure 39 the concentration of vesicles is represented. It can be seen that little to no vesicles are produced in the wild type strain compared to the KEIO strain. Furthermore, the concentration of vesicles after growing overnight is much higher than 3 hours after induction, thus showing that more vesicles are produced over time. Also, the concentration of vesicles with and without induction differs by only around 2 mg/µL, which is not that significant considering the error bars. The reason for this could be the use of a high copy plasmid in combination with a leaky promoter. Besides this, it is confirmed that the presence of the TolR plasmid does not result in large differences in vesicle production. Figure 38: Liposome calibration curve. Calibration curve of liposomes. The intensity (a.u.) is plotted against the concentration of liposomes. The liposomes were stained with the membrane dye FM4-64, which only fluoresces when bound to the membrane. A linear fit is made, with the formula $I = 4.7865 \cdot C + 950.0159$. Figure 39: Concentration of vesicles. The concentration of vesicles present in the purified samples of TolR and pET-Duet in the KEIO strain and pET-Duet in the WT strain. The concentration, in mg/µL, is plotted for the different samples at 3 hours and 20 hours after induction. We ordered our TorA-GFP from IDT and assembled the part on pSB1C3 by digestion with EcoRI and PstI and consecutive ligation. We transformed the construct into a TOP10 strain and screened some colonies with colony PCR (with primers IG0006 and IG0013). Figure 40: Gel electrophoresis of colony PCR products resulting from the transformation with the construct. The gel includes a ladder (L), negative controls (C), a blank (B) and the different inserts (1 to 4) from different ligations. The indicated bands have the expected size of ~1500 bp. Colonies 1 and 2 from ligation 1 were picked for sequencing (with primer IG0013), where the sequence of both plasmids was further confirmed. Introducing vesicle production is only the first step in transporting proteins into vesicles. It also requires translocation of the proteins to the periplasm. To achieve this we placed a TorA export tag in front of the protein, and this fusion protein was transformed into the KEIO and WT strain; see the overview in Figure 41. Widefield microscope images were made to visualize whether the addition of the transport tag impaired the GFP structure and therefore its fluorescence. Additionally, an osmo-shock was performed to harvest the periplasmic fraction of the cell. After that, the plate-reader was used to determine the GFP levels in the periplasm and cytoplasm. This will tell us whether the construct is functional, translocating the GFP into the periplasm. 
Figure 41: Scheme of translocation. Scheme of plasmids and strains used in characterization of TorA-GFP. We modeled the dynamics of the system to know at what time the concentration of GFP in the periplasm is maximal. For this modeling, we needed to know the OD curve of both the WT and KEIO strain. Because we did not know the effect of TorA-GFP on the growth rate, we transformed the plasmid into the strains and measured the OD in the plate-reader. The experiment was performed with the growth rate protocol. In Figure 42, it is shown that the maximal OD is lower for the KEIO strain compared to the WT strain. Also, the growth rate of the KEIO strain in its exponential phase was smaller. With modeling we determined the maximal growth rate to be 1.4 per hour for WT and 0.9 per hour for KEIO. Figure 42: OD curves. Measured OD curves of the (a) WT and (b) KEIO strain with TorA-GFP, measured in the plate-reader. The OD600 is plotted against time (hours). To verify that the addition of the TorA tag to GFP did not impair the functionality of GFP, widefield images were taken with the widefield protocol. As shown in Figure 43, GFP fluorescence was observed in both the KEIO and WT strain. It can also be seen that more GFP is present in the WT strain. Furthermore, you can see that the shape of the KEIO strain cells is elongated. This is probably caused by the deletion of TolA, which destabilizes the membrane (Baker et al. 2014). Figure 43: Widefield images. Widefield images of GFP-TA in the KEIO (right) and WT (left) strain. For GFP to be able to be transported into vesicles, it needs to be present in the periplasm. Therefore, an osmoshock was performed using the osmoshock protocol to check the periplasmic fraction of the KEIO and WT strain, both with and without TorA-GFP. After the osmoshock the fluorescence of the GFP in the periplasm and cytoplasm was measured in the plate-reader. In Figure 44, the fluorescence intensity in the periplasm in WT is significantly higher than in the cytoplasm. However, this difference in intensity is much less pronounced in the KEIO strain. A possible explanation for this is the previously demonstrated vesicle production. Previously, it was shown in Figure 39 that the KEIO strain produces a large amount of vesicles, while the WT strain produces none. This result suggests that in the KEIO strain GFP is transported into the vesicles, which reduces the amount of GFP in the periplasm. Another possibility could be that both the growth and the protein production are greatly impaired in the KEIO strain, due to its mutation. This means that the overall GFP production is lower, leading to a lower concentration in the periplasm as well. All in all, we see that GFP-TA is transported into the periplasm. Figure 44: GFP fluorescence in the periplasm and cytoplasm. Intensity of GFP in the periplasm and cytoplasm. The fluorescence intensity is measured for the samples TorA-GFP in the WT and KEIO strain and the WT and KEIO strain without plasmid. In Figure 45, the analysed data is shown. Evidently, GFP is present in the purified sample of GFP-TA in WT, even though previous experiments showed that WT produces little to no vesicles, suggesting that those values are background from the protocol used, due to cell destruction during the centrifugation step. To test this hypothesis, the protocol could be adjusted to see whether GFP is still present at lower centrifugation speeds. 
Regardless of these possible changes, the fluorescence intensity of both GFP-TA with and without TolR in KEIO is significantly higher than that of GFP-TA in WT. Taken together, we can conclude that GFP is transported into the vesicles. Figure 45: GFP fluorescence in vesicles. Intensity of GFP in vesicles. The fluorescence intensity of the samples TorA-GFP with TolR in the KEIO strain, TorA-GFP in the KEIO and WT strain and the empty backbone in the WT strain. The goal of this module was to transport proteins into vesicles. One of the things we have shown through Dynamic Light Scattering (DLS) experiments is that hypervesiculation occurs in E. coli BW25113 of the Keio strain with a TolA deletion. Produced vesicles have a size of around 18 d.nm. Additionally, by measuring fluorescence intensity in the cytoplasm and periplasm, we have demonstrated that GFP with the TorA tag is transported to the periplasm. To determine if GFP was transported into vesicles, we combined the two plasmids and measured the intensity of GFP in vesicles. Modeling determined that 25 minutes post induction the concentration of GFP in the vesicles reached its maximum and did not increase any further. On top of this, we had shown that the amount of vesicles produced was higher after 20 hours. Combining these results, we harvested the vesicles 20 hours post induction. The same purification method as in the DLS experiments was used, after which fluorescence was measured in the plate reader. With these results, we can conclude that the fusion of TorA and GFP is transported into the periplasm. Abudayyeh, O.O. et al., 2016. C2c2 is a single-component programmable RNA-guided RNA-targeting CRISPR effector. Science (New York, N.Y.), 353(6299), p.aaf5573. Aumiller, W.M. et al., 2016. RNA-Based Coacervates as a Model for Membraneless Organelles: Formation, Properties, and Interfacial Liposome Assembly. Langmuir, 32(39), pp.10042–10053. Baba, T., et al., 2006. Construction of Escherichia coli K-12 in-frame, single-gene knockout mutants: the Keio collection. Mol Syst Biol. 2. Boothby, T.C. et al., 2017. Tardigrades Use Intrinsically Disordered Proteins to Survive Desiccation. Molecular Cell, 65(6), p.975–984.e5. Carlos, J., Junior, R., Tamanini, R., Soares, B. F., Oliveira, A. M. De, Silva, F. D. G., … Beloti, V. 2016. Efficiency of boiling and four other methods for genomic DNA extraction of deteriorating spore-forming bacteria from milk. Semina Ciencias Agrarias 37, 3069–3078. Dashti, A. A., Jadaon, M. M., Abdulsamad, A. M., and Dashti, H. M., 2009. Heat Treatment of Bacteria: A Simple Method of DNA Extraction for Molecular Techniques. Kuwait Medical Journal 41, 117–122. Gibson, D.G. et al., 2009. Enzymatic assembly of DNA molecules up to several hundred kilobases. Nature Methods, 6(5), pp.343–345. Parayre, S., Falentin, H., Madec, M-N., Sivieri, K., Le Dizes, A-S., Sohier, D., and Lortal, S., 2007. Easy DNA extraction method and optimization of PCR-Temporal Temperature Gel Electrophoresis to identify the predominant high and low GC-content bacteria from dairy products. Journal of Microbiological Methods 69, 431-441.
CommonCrawl
You are given a tree that consists of $n$ nodes. Your task is to process $q$ queries of the form: what is the distance between nodes $a$ and $b$? The first input line contains two integers $n$ and $q$: the number of nodes and queries. The nodes are numbered $1,2,\ldots,n$. Then there are $n-1$ lines that describe the edges. Each line contains two integers $a$ and $b$: there is an edge between nodes $a$ and $b$. Finally, there are $q$ lines that describe the queries. Each line contains two integers $a$ and $b$: what is the distance between nodes $a$ and $b$? Print $q$ integers: the answer to each query.
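The statement above does not prescribe an algorithm, but one standard approach (shown here only as an illustrative sketch, not as part of the original task) is to root the tree, precompute depths and binary-lifting ancestor tables, and answer each query as dist(a, b) = depth[a] + depth[b] - 2·depth[lca(a, b)]. Input limits are not given in the statement, so the sketch simply assumes the tree fits in memory.

```python
import sys
from collections import deque

def solve():
    data = sys.stdin.buffer.read().split()
    idx = 0
    n, q = int(data[idx]), int(data[idx + 1]); idx += 2
    adj = [[] for _ in range(n + 1)]
    for _ in range(n - 1):
        a, b = int(data[idx]), int(data[idx + 1]); idx += 2
        adj[a].append(b); adj[b].append(a)

    LOG = max(1, n.bit_length())
    depth = [0] * (n + 1)
    up = [[0] * (n + 1) for _ in range(LOG)]   # up[k][v] = 2^k-th ancestor of v (0 = none)
    seen = [False] * (n + 1)
    dq = deque([1]); seen[1] = True            # root the tree at node 1
    while dq:
        v = dq.popleft()
        for w in adj[v]:
            if not seen[w]:
                seen[w] = True
                depth[w] = depth[v] + 1
                up[0][w] = v
                dq.append(w)
    for k in range(1, LOG):
        for v in range(1, n + 1):
            up[k][v] = up[k - 1][up[k - 1][v]]

    def lca(a, b):
        if depth[a] < depth[b]:
            a, b = b, a
        diff = depth[a] - depth[b]
        for k in range(LOG):                   # lift a up to b's depth
            if (diff >> k) & 1:
                a = up[k][a]
        if a == b:
            return a
        for k in range(LOG - 1, -1, -1):       # lift both just below the LCA
            if up[k][a] != up[k][b]:
                a, b = up[k][a], up[k][b]
        return up[0][a]

    out = []
    for _ in range(q):
        a, b = int(data[idx]), int(data[idx + 1]); idx += 2
        c = lca(a, b)
        out.append(depth[a] + depth[b] - 2 * depth[c])
    print("\n".join(map(str, out)))

solve()
```

With this layout each query costs O(log n) after O(n log n) preprocessing.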
CommonCrawl
A polyomino is a polyform with the square as its base form. It is a connected shape formed as the union of one or more identical squares in distinct locations on the plane, taken from the regular square tiling, such that every square can be connected to every other square through a sequence of shared edges (i.e., shapes connected only through shared corners of squares are not permitted). The most well-known polyominoes are the seven tetrominoes made out of four squares (see figure), famous from the Tetris® game, and of course the single domino consisting of two squares from the game with the same name. Some polyominoes can be obtained by gluing several copies of the same smaller polyomino translated (but not rotated or mirrored) to different locations in the plane. We call such polyominoes powers. One line with two positive integers $h, w \leq 10$. Next follows an $h \times w$ matrix of characters '.' or 'X', the 'X's describing a polyomino and '.' space. If the polyomino is a $k$-power with $2 \leq k \leq 5$ copies of a smaller polyomino, output an $h\times w$ matrix in the same format as the input with the 'X's replaced by the numbers $1$ through $k$ in any order identifying the factor pieces. If multiple solutions exist, any will do. Otherwise, output "No solution".
CommonCrawl
Is it possible to express the solution of the equation $x=e^x$ for $x$ in terms of elementary functions? Any ideas? If the equation $x=e^x$ has a solution, it is simply a number $x_0$, not a function. Rewriting the equation as $-xe^{-x}=-1$ and applying the Lambert $W$ function gives $-x=W(-1)$, so $x=e^x$ is equivalent to $x=-W(-1)$. Note that inasmuch as $e^x > x$ for all real $x$, the solution $x=-W(-1)$ is not a real number.
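As a quick numerical sanity check (not part of the original question or answers), SciPy's lambertw can evaluate the principal branch; each branch $k$ of $W$ yields one complex solution of $x=e^x$:

```python
import numpy as np
from scipy.special import lambertw

x = -lambertw(-1, k=0)        # principal-branch solution of x = e^x
print(x)                      # approximately 0.3181 - 1.3372j
print(abs(x - np.exp(x)))     # residual is numerically zero
```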
CommonCrawl
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1243-1251, 2018. Work on approximate linear algebra has led to efficient distributed and streaming algorithms for problems such as approximate matrix multiplication, low rank approximation, and regression, primarily for the Euclidean norm $\ell_2$. We study other $\ell_p$ norms, which are more robust for $p < 2$, and can be used to find outliers for $p > 2$. Unlike previous algorithms for such norms, we give algorithms that are (1) deterministic, (2) work simultaneously for every $p \geq 1$, including $p = \infty$, and (3) can be implemented in both distributed and streaming environments. We study $\ell_p$-regression, entrywise $\ell_p$-low rank approximation, and versions of approximate matrix multiplication.
CommonCrawl
For your trip to Beijing, you have brought plenty of puzzle books, many of them containing challenges like the following: how many triangles can be found in Figure 1? While these puzzles keep your interest for a while, you quickly get bored with them and instead start thinking about how you might solve them algorithmically. Who knows, maybe a problem like that will actually be used in this year's contest. Well, guess what? Today is your lucky day! The first line of input contains two integers $r$ and $c$ ($1 \leq r \le 3\, 000$, $1 \le c \leq 6\, 000$), specifying the picture size, where $r$ is the number of rows of vertices and $c$ is the number of columns. Following this are $2r-1$ lines, each of them having at most $2c-1$ characters. Odd lines contain grid vertices (represented as lowercase x characters) and zero or more horizontal edges, while even lines contain zero or more diagonal edges. Specifically, picture lines with numbers $4k+1$ have vertices in positions $1, 5, 9, 13, \ldots $ while lines with numbers $4k+3$ have vertices in positions $3, 7, 11, 15, \ldots $ . All possible vertices are represented in the input (for example, see how Figure 1 is represented in Sample Input 2). Horizontal edges connecting neighboring vertices are represented by three dashes. Diagonal edges are represented by a single forward slash ('/') or backslash ('\') character. The edge characters will be placed exactly between the corresponding vertices. All other characters will be space characters. Note that if any input line could contain trailing whitespace, that whitespace may be omitted. Display the number of triangles (of any size) formed by grid edges in the input picture.
CommonCrawl
Understand the intuition behind MDPs leading to Reinforcement Learning and the Q-learning algorithm. Evaluating the Bellman equations from data: we need to estimate Q from transitions $\langle s, a, r, s' \rangle$. We are in a state, we take an action, we get the reward and we end up in the next state. Intuition: imagine we have an estimate of the Q function. We update it for the visited state-action pair by moving it a bit toward a target built from the reward plus the discounted maximum utility of the next state: $Q(s,a) \leftarrow Q(s,a) + \alpha_t \left( r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right)$. Here $r + \gamma \max_{a'} Q(s',a')$ represents the utility of taking $a$ in $s$, and $\max_{a'} Q(s',a')$ represents the utility of the next state. $\alpha_t$ is the learning rate. For a generic update of the form $V \leftarrow (1-\alpha)V + \alpha X$: when $\alpha = 0$, you would have no learning at all, where your new value is your original value ($V = V$); when $\alpha = 1$, you would have absolute learning, where you totally forget your previous value, leaving your new value $V = X$. Convergence requires, among other conditions, that every pair $(s, a)$ is visited infinitely often. We have to note that we face an exploration-exploitation dilemma here, which is a fundamental tradeoff in reinforcement learning.
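A minimal tabular sketch of the update described above (the state/action encoding and hyperparameter values here are placeholders, not something specified in the notes):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step for a transition <s, a, r, s_next>."""
    target = r + gamma * np.max(Q[s_next])    # reward + discounted utility of the next state
    Q[s, a] += alpha * (target - Q[s, a])     # move the old estimate a bit toward the target
    return Q

# Toy usage: 3 states, 2 actions, one observed transition.
Q = np.zeros((3, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q)
```

Pairing this update with an epsilon-greedy behaviour policy is one common way of handling the exploration-exploitation tradeoff mentioned above.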
CommonCrawl
We present the matrix models that are the generating functions for branched covers of the complex projective line ramified over 0, 1, and $\infty$ (Grothendieck's dessins d'enfants) of fixed genus, degree, and the ramification profile at infinity. For general ramifications at other points, the model is the two-logarithm matrix model with the external field studied previously by one of the authors (L.Ch.) and K. Palamarchuk. It lies in the class of the generalised Kontsevich models (GKM), thus being the Kadomtsev–Petviashvili (KP) hierarchy tau function and, upon the shift of times, this model is equivalent to a Hermitian one-matrix model with a general potential whose coefficients are related to the KP times by a Miwa-type transformation. The original model therefore enjoys a topological recursion and can be solved in terms of shifted moments of the standard Hermitian one-matrix model at all genera of the topological expansion. We also derive the matrix model for clean Belyi morphisms, which turns out to be the Kontsevich–Penner model introduced by the authors and Yu. Makeenko. Its partition function is also a KP hierarchy tau function, and this model is in turn equivalent to a Hermitian one-matrix model with a general potential. Finally we prove that the generating function for general two-profile Belyi morphisms is a GKM, thus proving that it is also a KP hierarchy tau function in proper times.
CommonCrawl
Wang, Y.F.;Yu, M.;Liu, B.;Fan, B.;Wang, H.;Zhu, M.J.;Li, K. PSMC5 subunit, which belongs to the 26S proteasomal subunit family, plays an important role in the antigen presentation mediated by MHC class I molecular. Full-length cDNA of porcine PSMC5 was isolated using the in silico cloning and rapid amplification of cDNA ends (RACE). Amino acid was deduced and the primary structure was analyzed. Results revealed that the porcine PSMC5 gene shares the high degree of sequence similarity with its mammalian counterparts at both the nucleotide level and the amino acid level. The RT-PCR was performed to detect the porcine PSMC5 expression pattern in seven tissues and the result showed that high express level was observed in spleen, lung, marrow and liver while the low express level was in muscle. The full-length genomic DNA sequence of porcine PSMC5 gene was amplified by PCR and the genomic structure revealed that this gene was comprised by 12 exons and 11 introns. Best alignment of the cDNA and genomic exon DNA sequence presents 4 mismatches and this information potentially bears further study in gene polymorphisms. Tsukamoto, T., S. Miura, T. Nakai, S. Yokota, N. Shimozawa, Y. Suzuki, T. Orii, Y. Fujiki, F. Sakai, A. Bogaki, H. Yasumo and T. Osumi. 1995. Peroxisome assemble factor-2, a putative ATPase cloned by functional complementation on a peroxisome-deficient mammalian cell mutant. Nat. Genet. 11:395-401. Hoyle, J., K. H. Tan and E. M. C. Fisher. 1997. Localization of genes encoding two human one-domain members of the AAA family: PSMC5 (the thyroid receptor-interacting protein, TRIP1) and PSMC3 (the tat-binding protein, TBP1). Hum. Genet. 99:285-288. Rubin, D. M., M. H. Glickman, C. N. Larsen, S. Dhruvakumar and D. Finley. 1998. Active site mutants in the six regulatory particle ATPases reveal multiple roles for ATP in the proteasome. EMBO. J. 17:4909-4919. Thompson, J. D., D. G. Higgins and T. J. Gibson. 1994. CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, positions-specific gap penalties and weight matrix choice. Nucleic. Acids. Res. 22:4673-4680. Walker, J. E., M. Saraste, M. J. Runswick and N. J. Gay. 1982. Distantly related sequences in the $\alpha$- and $\beta$-subunits of ATP synthase, myosin, kinases and other ATP-requiring enzymes and a common nucleotide binding fold. EMBO. J. 1:945-951. Tanahashi, N., M. Suruki, T. Fujiwara, E. Takahashi, N. Shimbara, C. H. Chung and K. Tanaka. 1998. Chromosomal localization and immunological analysis of a family of human 26S proteasomal ATPase. Biochem. Biophys. Res. Commun. 243:229-232. Pan, P W., S. H. Zhao, M. Yu, B. Liu, T. A. Xiong and K. Li. 2003. Identification of differentially expressed genes in the longissimus dorsi muscle tissue between duroc and erhualian pigs by mRNA differential display. Asian-Aust. J. Anim. Sci. 16:1066-1070. Li, Y. H., J. Chambers, J. Pang, K. Ngo, P. A. Peterson, W. P. Leung and Y. Yang. 1999. Characterization of the mouse proteasome regulator PA28b gene. Immunogenetics 49:149-157. Park, B. W., D. M. O'Rourke, Q. Wang, J. G. Davis, A. Post, X. L. Qian and M. I. Greene. 1999. Induction of the tat-binding protein 1 gene accompanies the disabling of oncogenic erbB receptor tyrosine kinases. Proc. Natl. Acad. Sci. 96:6434-6438. Makino, Y., S. Yogosawa, M. Kanemaki, T. Yoshida, K. Tamano, T. Kishimoto, V. Moncollin, J. M. Egly, M. Muramatsu and T. Tamura. 1996. 
Structures of the rat proteasomal ATPases:determination of highly conserved structural motifs and rules for their spacing. Biochem. Biophys. Res. Commun. 220:1049-1054. Lee, J. W., H. S. Choi, J. Gyuris, R. Brent and D. D. Moore. 1995. Two classes of proteins dependent on either the presence or absence of thyroid hormone for interaction with the thyroid hormone receptor. Molec. Endocr. 9:243-254. DeMartino, G. N., C. R. Moomaw, O. P. Zagnitko, R. J. Proske, M. Chu-Ping, S. J. Afendis, J. C. Swaffield and C. A. Slaughter. 1994. PA700, an ATP-dependent activator of the 20S proteasome, is an ATPase containing multiple members of a nucleotide-binding protein family. J. Biol. Chem. 269:20878-20884. Liu, B., G. L. Jin, S. H. Zhao, M. Yu, T. A. Xiong, Z. Z. Peng and K. Li. 2002. Preparation and analysis of spermatocyte meiotic pachytene bivalents of pigs for gene mapping. Cell. Res. 12(5-6):401-405. Hershko, A. and A. Ciechanover. 1998. The ubiquitin system. Annu. Rev. Biochem. 67:425-479. Ma, C. P., C. A. Slaughter and G. N. Demartino. 1992. Identification, purification and characterization of a protein activator (PA28) of the 20S proteasome Macropain). J. Biol. Chem. 267:10515-10523. Ma, C. P., J. H. Vo, R. J. Proske, C. A. Slaughter and G. N. Demartino. 1994. Identification, purification, and characterization of a high-molecular weight, ATP-dependent activator (PA700) of the 20S proteasome. J. Biol. Chem. 269:3539-3547. Wang, Y. F., M. Yu, M. Yerle, B. Liu, S. H. Zhao, T. A. Xiong, B. Fan and K. Li. 2003. Mapping of genes encoding four ATPase and three non-ATPase components of the pig 26S proteasome. Anim. Genet. 34:393-395. Yu, M., S. H. Zhao, B. Liu, T. A. Xiong, P. W. Pan and K. Li. 2001. Isolation and regional localization of the porcine glial fibrillary acidic protein (GFAP) gene. J. Anim. Sci. 79:2754.
CommonCrawl
Trajectory planning for point to point motion maps position as function of time between specified points. Velocity and acceleration along the trajectory can be computed by differentiating position with respect to time, and for a smooth path, velocity cannot have any discontinuities or the specified trajectory would require infinite acceleration. With a given set of via points and constraints, a smooth trajectory ($q(t)$) can be specified using several methods. Cubic Polynomial Trajectories defines the path between two points ($q(t_0)$ and $q(t_f)$) with a cubic polynomial. Differentiating Equation 1, velocity and acceleration can be calculated. Equation 3 shows that acceleration linearly varies with time and is continuous; thus, the trajectory does not require infinite accelerations. Four constraints must be specified to solve the unknowns: $a_0$, $a_1$, $a_2$, and $a_3$. Obviously, two constraints are the start and end positions, and the final two are the initial and end velocities. Equations 4-7 can be combined into one matrix equation. Figure 1: Cubic Polynomial Trajectory with: $t_0= 0$, $t_f= 0$, $q_0= 3$, $q_f= 20$, $v_0= 1$, and $v_f= 2$. Note that the accelerations cannot be specified at each point using the Cubic Polynomial Trajectory method, and for a set of points, acceleration will be discontinuous at each point. This discontinuity of acceleration makes the derivative of acceleration (jerk) infinite at each via point, and causes an impulsive jerk in the motion of the robot. To avoid this, three constraints at each point must be specified: position, velocity, and acceleration. Therefore, a fifth order polynomial is required to define the trajectory between two via points. Therefore, the constraints at points $q(t_0)$ and $q(t_f)$ can be defined as follows. Equations 10-15 can be put into matrix form. Figure 2: Quintic Polynomial Trajectory with: $t_0= 0$, $t_f= 0$, $q_0= 3$, $q_f= 20$, $v_0= 1$, $v_f= 2$, $\alpha_0= 0$, and $\alpha_f= 0$. Now, the trajectory for the final deceleration region must be derived. Knowing $\dot q(t_f) =0$ and $q(t_f) = q_f$, the following holds true for $(t_f-t_b) < t \leq t_f$. Finally, the derived equations can be substituted into Equation 17. Figure 3: LSPB Trajectory with: $V= 4$, $t_f= 6$, $t_b= 1.75$, and $\alpha = 2.29$. Previously, a few methods were outlined for planning a trajectory between two points. Now, consider a set of via points and desired velocities at each point. The Cubic Polynomial Trajectory method can be used to map a trajectory to meet specified constraints. Note that for n via points, there will be n-1 cubic polynomials and 2n constraints to describe the desired path. At every point, time ($t$), position ($q$), and velocity ($v$) must be known. Figure 4: Cubic Polynomial Trajectory with 4 via points.
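A small numerical sketch of the cubic case is given below: it builds the 4×4 constraint matrix from Equations 4-7 and solves for the coefficients with NumPy. The boundary values are illustrative only (in particular, t_f is chosen arbitrarily, since the figure captions appear to misprint it as 0), and the quintic case extends the same idea to a 6×6 system.

```python
import numpy as np

def cubic_coeffs(t0, tf, q0, qf, v0, vf):
    """Solve for a0..a3 of q(t) = a0 + a1*t + a2*t**2 + a3*t**3
    from the position/velocity constraints at t0 and tf (Equations 4-7)."""
    M = np.array([
        [1.0, t0, t0**2,    t0**3],    # q(t0)  = q0
        [0.0, 1.0, 2*t0, 3*t0**2],     # q'(t0) = v0
        [1.0, tf, tf**2,    tf**3],    # q(tf)  = qf
        [0.0, 1.0, 2*tf, 3*tf**2],     # q'(tf) = vf
    ])
    b = np.array([q0, v0, qf, vf], dtype=float)
    return np.linalg.solve(M, b)

# Illustrative boundary conditions (t_f picked arbitrarily for the sketch).
a = cubic_coeffs(t0=0.0, tf=5.0, q0=3.0, qf=20.0, v0=1.0, vf=2.0)
t = np.linspace(0.0, 5.0, 6)
q = a[0] + a[1]*t + a[2]*t**2 + a[3]*t**3
print(a)
print(q)   # trajectory starts at 3 and ends at 20
```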
CommonCrawl
In this article, using the example of C. Camci, we reconfirm a necessary and sufficient condition for a slant curve. Next, we find some necessary and sufficient conditions for a slant curve in a Sasakian 3-manifold to have: (i) a $C$-parallel mean curvature vector field; (ii) a $C$-proper mean curvature vector field (in the normal bundle). D. E. Blair, Riemannian Geometry of Contact and Symplectic Manifolds, Progress in Math., 203, Birkhauser, Boston, Basel, Berlin, 2002. D. E. Blair, F. Dillen, L. Verstraelen and L. Vrancken, Calabi curves as holomorphic Legendre curves and Chen's inequality, Kyungpook Math. J., 35(1996), 407-416. R. Caddeo, C. Oniciuc and P. Piu, Explicit formulas for biharmonic non-geodesic curves of the Heisenberg group, Rend. Sem. Mat. Univ. e Politec. Torino, 62(3)(2004), 265-278. C. Camci, Extended cross product in a 3-dimensional almost contact metric manifold with applications to curve theory, Turk. J. Math., 35(2011), 1-14. B. Y. Chen, Some open problems and conjectures on submanifolds of finite type, Soochow J. Math., 17(1991), no. 2, 169-188. J.-E. Lee, On Legendre curves in contact pseudo-Hermitian 3-manifolds, Bull. Aust. Math. Soc. 78(2010), no. 3, 383-396. C. Ozgur, M. M. Tripathi, On Legendre curves in $\alpha$-Sasakian manifolds, Bull. Malays. Math. Sci. Soc., 31(2)(2008), no. 1, 91-96. S. Tanno, Sur une variete de K-contact metrique de dimension 3, C. R. Acad. Sci. Paris Ser. A-B, 263(1966), A317-A319.
CommonCrawl
In this talk I would like to present the problem which is at the basis of my PhD project: toroidal compactifications of the moduli space of polarised K3 surfaces. The search for compactifications for moduli spaces is a central problem in algebraic geometry that has been widely studied and toroidal constructions provide a tool to compactify varieties arising as quotients of hermitian symmetric domains by the action of arithmetic groups. For the moduli space of principally polarised abelian varieties $\mathcal{A}_g$ some specific toroidal compactifications have been studied in detail, while for the case of K3 surfaces most of the work so far has been done thanks to the abstract knowledge of such toroidal compactifications. The goal of my research is to find and describe a specific toroidal compactification; the main idea at its basis is to mimic the construction of the perfect cone compactification for $\mathcal{A}_g$ and to apply that to the moduli space of $K3$ surfaces. Time permitting, I will add something regarding compactifications of other moduli spaces which involve lower rank lattices and therefore rely on more feasible computations.
CommonCrawl
I very confused about the behavior of mathematical community. I have worked 15 years in chaos theory and I have published more than 80 papers and 8 books in several international publishers and in each time I seek comments from other specialist, they answred my positively and they never refuse to read may papers. For this time I send a request for comments (for my paper: http://vixra.org/pdf/1210.0176v7.pdf) to more than 800 mathematician arround the world and I receive only 7 replies. The main objective is to see the opinions of experts before sending the paper to a journal. Is the mathematical community behaves like the logistic equation (chaotic). More than this, some of them attack me personaly with very bold words in despite they do not know me. Do you have anything specifically about MathOverflow that you want to say? This site is mainly for discussing MathOverflow. Sadly, this is starting to look like spam. >> For this time I send a request for comments (for my paper: http://vixra.org/pdf/1210.0176v6.pdf) to more than 800 mathematician arround the world and I receive only 7 replies.<< Here is what I can say. The chances that you have a valid proof there are 0.00000.....000001%. I just was proofreading an attempt to solve Navier-Stokes on 12 pages last week by a mathematician with much higher reputation than yours. The mistake was a trivial miscomputation in some Lemma in the middle. My advice: proofread your paper yourself doing one page per day, reading it aloud, and carrying out every "trivial" computation in honest. If after 8 days of such proofreading you still claim it to be correct, I'll read it. However, if I find a mistake or a part that makes no sense, you'll not try to patch it immediately or move it to some other place, or anything else like that. You'll just accept the verdict and wait for 3 months before trying to show anything like that to anyone again. Deal? @fedja: Ok and thank you very much. I will spent 8 dyas to verify all possible errors and I will send it again to you. Dear zeraoulia: I do not understand why you sent me an email asking me to check your preprint about the Riemann Hypothesis. If you check my web page, you will see quite clearly that I have never done any work on zeta functions or the distribution of primes. In particular, it is unlikely that I could verify a correct proof of RH without investing months of time learning the necessary background. Do you think it is reasonable to expect me to set aside the research I am currently doing for such a long time? If people like me are counted among your 800, you should not be surprised that you are getting few replies. Is your time so precious that you can't be bothered to find out if you are emailing a person in the correct field? Requests for verification of preprints are off-topic on MathOverflow. If you ask such a question, it will be closed. Multiple requests for verification are typically marked as spam. People who make multiple such requests are considered unwelcome. I see that you have already asked more than 20 questions that were not appropriate. Please refrain from doing so again. If you are not sure about what is acceptable, please re-read the FAQ. @Scott Carnahan: Thank you for your reply. I find in MO that you have answred a closely related question to RH. Sorry for that problem. Please let me know that you receive this message. OK, I'll stick to my words. However, one thing needs to be clarified before I say anything else. 
The main theorem in the paper says something about the description of roots with real part 1/2 but it tells ABSOLUTELY NOTHING about the case when the real part is not 1/2. How can anything like that help to establish the RH? Either the general logic here is hopelessly flawed, or I just misunderstand the formulation. What exactly should the statement of the theorem be (forget the proof for now) and how does RH follow from it? @fedja: Theorem 1 proves that the Riemann hypothesis is true for the function η(s) if and only if θ(s)≠0 (mod2π), x(s)=u(s) and y(s)=-v(s) for every root s∈D of the equation η(s)=0. so don't expect a quick response but be sure that I'll come back eventually). :). OK, the next question (I'm a slow reader, you know...) The conditions of the theorem are $x=u$, $y=-v$. However, just a few lines below you talk about the fixed point of $B$, which would require $x=u$, $y=v$ to the best of my understanding. This little minus is either misplaced or forgotten. Am I missing something again? The fixed points of B do not require $x=u$, $y=-v$ . The condition $x=u$, $y=-v$ implies that alpha=0.5. Now I don't understand you at all. Where does the statement that (u,v) is a fixed point come from then? The only thing you do is to say that (x,y)-vector is the (normalized) image of the (u,v)-vector. If no relation between (x,y) and (u,v) is used in the proof of $s$ being a root, then you effectively claim that the condition $\theta(s)\ne 0\mod 2\pi$ alone implies that $s$ is a root of $\eta$, which is absurd. So, here is the question again: we have the assumptions $x=u$, $y=-v$, $\theta(s)\ne 0\mod 2\pi$ and nothing else. How does it follow that $s$ is a root? This is claimed on the top of page 5 already. Assuming that you didn't intend to derive it immediately from the fixed point argument but rather from (8), I see no reason why (8) implies anything of this sort. All you say there is that if $\alpha=\frac 12$, then SOMETHING, after which the claim is made that $\alpha=\frac 12$ is the only possible option. Equation (8) is certainly correct by itself, but the short paragraph between it and the proof of the reverse implication seems to make no sense either as a demonstration that $\eta(s)=0$, or as a demonstration that $\alpha=\frac 12$. Note also that $\theta$ doesn't enter (8) in any way but is mentioned in this paragraph, thus making me think again that the fact that $s$ is a root was claimed BEFORE we arrived at (8) and is merely recalled now. Whether it all makes sense or not, the presentation is extremely confusing. So, please clarify what is the last line of the proof that $s$ is a root (forget about the part proving $\alpha=\frac 12$ for a while). @fedja: "Where does the statement that (u,v) is a fixed point come from then? " Rotations have single fixed point with respect to (u,v). This is a geometrical result. I have used this fact to determine that if $\theta(s)\ne 0\mod 2\pi$, then s is a root. "This is claimed on the top of page 5 already. Assuming that you didn't intend to derive it immediately from the fixed point argument but rather from (8), I see no reason why (8) implies anything of this sort". Since, we speak about a single and isolated root s, and you have (8), then by intuition you obtain that if alpha=0.5 then ro=1 and (8) holds for one beta since again the root is isolated. "thus making me think again that the fact that $s$ is a root was claimed BEFORE we arrived at (8) and is merely recalled now". 
No, if you read the proof, firstly, I assume that s is in D and then I collect the two conditions to claim that s is a root with alpha=0.5. "the presentation is extremely confusing." You have two implications see item (1) for the second one and item (2) for the first one. OK, so how do you show that $(u,v)$ is a fixed point of B? There seems to be no equation anywhere that would imply it. Of course, if it is, many things start making sense and I do not disagree with that. However, to the best of my understanding, the only thing that could be used for showing that $(u,v)$ is a fixed point of B is (7) and that would require a different sign in the relation y=-v, as I claimed above. By the way, I can easily write one line whose meaning nobody will ever understand. "Short+structured" doesn't always make "clear" :). "OK, so how do you show that $(u,v)$ is a fixed point of B? " The fixed point of the rotation in (7) must satisfies (I₂-B(s))(u(s),v(s))=0 where I₂ is the 2×2 unit matrix (this is a linear system of algebraic equation). The determinant of the matrix (I₂-B(s)) is -2(cosθ(s)-1) and it is not zero since θ(s)≠0 (mod2π), this means that s is a solution of η(s)=0 by using (3). " However, to the best of my understanding, the only thing that could be used for showing that $(u,v)$ is a fixed point of B is (7) and that would require a different sign in the relation y=-v, as I claimed above." No, relations x=u and y=-v are used to show that alpha=0.5 in the first part of theorem 1. "By the way, I can easily write one line whose meaning nobody will ever understand. "Short+structured" doesn't always make "clear" :). " Can you specify the matter. Yes, the fixed point of the rotation in (7) must be (0,0). However, what makes you think that (u(s),v(s)) is a fixed point of that rotation? I see no proof of this anywhere. The objective is not the fixed point. The objective is to find conditions to get a zero s for eta. I made this approach to find possible conditions to get zeros of the eta function. The only condition is when $\theta(s)\ne 0\mod 2\pi$. So, do you claim that $\theta(s)\ne 0\mod 2\pi$ alone implies that $\eta(s)=0$? As I said, this is just absurd. "Now I don't understand you at all. Where does the statement that (u,v) is a fixed point come from then? The only thing you do is to say that (x,y)-vector is the (normalized) image of the (u,v)-vector. If no relation between (x,y) and (u,v) is used in the proof of $s$ being a root, then you effectively claim that the condition $\theta(s)\ne 0\mod 2\pi$ alone implies that $s$ is a root of $\eta$, which is absurd. So, here is the question again: we have the assumptions $x=u$, $y=-v$, $\theta(s)\ne 0\mod 2\pi$ and nothing else. " You know that all non-trivial rotations have a single and unique fixed point. If we apply this result to the rotation in (7) we obtain only the condition $\theta(s)\ne 0\mod 2\pi$. The condition $x=u$, $y=-v$, $ is used only to deduce that aplha=0.5 for a single beta. I write simply: It is well known that a non trivial rotation must have a unique fixed point, its rotocenter. The rotation in (5) is non trivial if ϕ(s)≠1 (here we assumed that θ(s)≠0 (mod2π) in the second part of Theorem 1). The reason is that the trivial rotation corresponding to the identity matrix, in which no rotation takes place. The fixed point of the rotation in (7) must satisfies (I₂-B(s))(u(s),v(s))=0 where I₂ is the 2×2 unit matrix. 
The determinant of the matrix (I₂-B(s)) is -2(cosθ(s)-1) and it is not zero since θ(s)≠0 (mod2π), this means that s is a solution of η(1-s)=0 and by using (3) we obtain that η(s)=0 . Um, excuse me for interrupting, but why is this discussion being held on meta? I thought meta was for discussions about the operation of MathOverflow, not about mathematics. This is a very nice service to the community by Fedja. Let's not interupt and welcome it on this thread. In my opinion, it sets a bad precedent: that purported solutions to RH and discussion thereof are now welcomed on meta. But I am happy to have this be my last comment on the matter, and let the moderators decide. I agree with Gil that it is a nice thing Fedja is doing, but I'm also a little worried about creating the appearance that if you want mathematicians to evaluate an unconventional proof of a famous conjecture, you can do this by asking on meta.MO (or that this is a particularly interesting or promising approach to RH). I don't want to get caught up in a long discussion of the proof myself, but it has fundamental issues even aside from what Fedja and Zeraoulia are currently discussing. For example, page 9 of the current version makes no sense to me, even taking for granted the earlier assertions about characterizing roots of eta on the critical line. The first half of page 9 describes a trivial equivalence relation on points with theta not equal to an integral multiple of 2\pi (it simply defines them all to be equivalent). Then the paper exhibits one root of eta with theta not a multiple of 2\pi and concludes using the equivalence relation that all the roots of eta have this property. This is an elaboration on page 7 of the previous version, which simply made the same assertion without the equivalence relation explanation, but it's not a proof. My conclusion is that this paper does not give a proof of the Riemann hypothesis, and I can't imagine any way of correcting it. No, the eta function has infinitely many roots. The equivalence relation is only a caracterisation of them. If We follow your opinion, then the set of multiples of an interger of the form n-m has a single element and the half of algebra must be omitted. I agree with Todd Trimble's sentiments. Yes, this thread's amusement value wore out quickly. Moderators, please close. I think, by now Fedja fulfilled his civic duty reading the paper (going far beyond of what most of us would do in this situation), while Henry nailed down the main logical flaw in the paper. Thus, I second Andres' request for this thread to be closed. Dear Elhadj (Zeraoulia): If you still think that your proof is valid, why don't you just send it to the journal of your choice and wait for referee's report, since MO (including meta) is not the right place for requesting opinions on your papers. @fedja: But I am still not convainced that using "equivalent relation" would affect that proof. The Henry claim must have strong evidences and it makes no sense to me. Here he just said "My conclusion is that this paper does not give a proof of the Riemann hypothesis, and I can't imagine any way of correcting it" without giving any argument on his claim. This is not acceptable unless he give strong arguments! @zeraoulia: You may put pupils of a class in an equivalence relation by being in the same schoolclass and prove that one of them has black hair in a class. Does this prove that all of them in the same class have black hair? @abatkai: Yes it does, with some pretty high probability. 
Explanation: given the worldwide distribution of various hair colors, often concentrated within countries, etc., having someone in a class with black hair proves at about 70-80% or so that all the other pupils in the class have black hair too!

@Fedja and Henry: OK, and thank you very much for the valuable discussions. I will repair the paper.

It seems to me that you are treading on very slippery ground. A long time ago I thought I had proved a major conjecture (nothing like the Riemann hypothesis, but still a big deal). I wrote a paper, but before making it public I showed it to a friend, an extremely competent mathematician, who thought it was ok. Fortunately I decided to think about it some more before distributing it, because I found a fatal error after a couple of days. I still remember the heady feeling when I thought I had a proof, and the huge letdown when I found the mistake. The heady feeling can be addictive; it's very hard to give it up. I have seen very good mathematicians fall prey to it. The fact that you take it for granted that you can repair the paper makes me think that you might be falling into this pit.
CommonCrawl
Talk at Sub-Riemannian Geometry and Celestial Mechanics: A conference to celebrate the 60th birthday of Richard Montgomery. This work was supported by JSPS KAKENHI Grant Number 23540249. Published: J. Phys. A: Math. Theor. 48 (2015) 265205. Published: J. Phys. A: Math. Theor. 45 (2012) 345202. Talk at the 2011 Winter Meeting on Dynamical Systems ("2011年度冬の力学系研究集会"), Karuizawa, Nagano, Japan, on Jan. 7, 2011. Published: J. Phys. A: Math. Gen. 37 (2004) 10571-10584. "Synchronized Triangles in the Figure-Eight Solution under $1/r^2$ Potential" You need the "Keynote" application (by Apple, not free) to read this beautiful document. Shape of the figure-eight orbit for $\alpha$. Talk at RIMS (Research Institute for Mathematical Sciences) on Nov. 13, 2003. Talk at the Department of Mathematics, Kyoto University, on Nov. 14, 2003. This is the original format. Lots of figs and movies are included. You need the "Keynote" application (by Apple, not free) to read this beautiful document. Exported from the original format. Lots of figs and movies are included. You need the "PowerPoint" application (by Microsoft, not free) or a compatible one to read this document. Sorry, this is not beautiful and the file size is quite big. Our paper: "N-body Choreography on the Lemniscate (Developments and Applications of Dynamical Systems Theory)" Acknowledgment: The authors would like to thank AIM/ARCC for funding a workshop in celestial mechanics where the authors met. Workshop "Variational methods in Celestial mechanics"
CommonCrawl
For anyone who doesn't know, LMC (Little Man Computer) is a simple model of a computer which is mainly used for teaching students about von Neumann architecture. It only has 10 instructions, 100 memory addresses and only supports direct addressing. These features make it quite limited; however, they provide just enough utility to perform many computational tasks. In this post I will explore a couple of more advanced tasks and how I solved them. You can find the most commonly used LMC emulator here.

A stack is a simple data structure that allows you to add (push) and remove (pop) pieces of data. Many architectures such as x86 have push and pop instructions built in; however, LMC doesn't have such a convenience. This means we will have to implement it ourselves. Functions are commonly implemented in low-level languages by simply placing the arguments in predetermined registers and branching to the start of the procedure. This is exactly what we will aim to do: the push function should expect the value to add to the stack in the accumulator and the return address in the memory location labeled ret; the pop instruction should expect the return address in ret and should return the value taken off the stack in the accumulator. There is one major caveat to our scheme, however: as previously mentioned, LMC does not support indirect addressing. This means that to dynamically return from the functions and to change where on the stack we are storing to / retrieving from, we will need to modify the future instructions in the program. Luckily enough, LMC is based on the von Neumann architecture, meaning that machine code is data and data is machine code, which allows us to simply use STA to modify the program.

Here we set up the data locations we will need. Note the store, load and branch constants; these are the base machine code for the STA, LDA, and BRA instructions respectively. Also note stack-p DAT stack-p, which is a handy way of getting the stack pointer to point to itself (the bottom of the program, from where the stack will grow downwards).

Next it is important to work out how we are going to return to the address specified in ret. This is the first instance where we need to modify the machine code that is about to be executed. First we store the current value of the accumulator in stack-t as a measure to make sure we don't lose the contents while doing our memory manipulation. Next we load ret into the accumulator and add our branch constant (600) to it, which essentially builds the instruction to return to the address specified in ret. Now we store this instruction in memory location stack-i and load stack-t back into the accumulator. Finally we execute the instruction in stack-i, thus branching to the value of ret.

The time to implement the pop function is upon us. First we load stack-p, which holds the address of the most recently added value on the stack, and add load (500). This builds the instruction to load the value at the top of the stack into the accumulator. Now we store this instruction in pop-i and decrement stack-p by one. Finally pop-i is executed and we branch to stack-r to return to ret.

Our push procedure is much like pop but in reverse, so I won't explain it in depth. We increment stack-p by one, build the instruction to store the value of the accumulator at the top of the stack, execute the instruction and branch to stack-r.
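Here is a minimal sketch of what these routines might look like, reconstructed from the description above (an illustration, not the author's actual listing). It assumes the standard LMC mnemonics, with opcode bases STA = 300, LDA = 500 and BRA = 600; the label names follow the post, although some emulators may not accept hyphens in labels, and the // comments may need stripping depending on the emulator.

        LDA ret-p       // example caller: tell push where to come back to
        STA ret
        LDA value       // the value we want to push
        BRA push
ret-l   HLT             // execution resumes here after the push

stack-r STA stack-t     // shared return routine: save the accumulator
        LDA ret
        ADD branch      // build "BRA <ret>"
        STA stack-i
        LDA stack-t     // restore the accumulator
stack-i DAT 0           // overwritten with "BRA <ret>", then executed

pop     LDA stack-p     // address of the most recently pushed value
        ADD load        // build "LDA <top of stack>"
        STA pop-i
        LDA stack-p
        SUB one
        STA stack-p     // shrink the stack
pop-i   DAT 0           // overwritten with "LDA <top>", then executed
        BRA stack-r

push    STA stack-t     // remember the value being pushed
        LDA stack-p
        ADD one
        STA stack-p     // grow the stack
        ADD store       // build "STA <new top>"
        STA push-i
        LDA stack-t     // restore the value being pushed
push-i  DAT 0           // overwritten with "STA <new top>", then executed
        BRA stack-r

ret-p   DAT ret-l       // data kept at the bottom so the stack can grow past stack-p
value   DAT 42
ret     DAT 0
store   DAT 300         // base opcode of STA
load    DAT 500         // base opcode of LDA
branch  DAT 600         // base opcode of BRA
one     DAT 1
stack-t DAT 0
stack-p DAT stack-p     // stack pointer, initially pointing at itself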
The completed program is about 30 lines long, which is over a quarter of the available LMC memory, and it doesn't implement any sort of underflow protection, meaning it is perhaps not the most practical data type for LMC programming. You can see the completed program below. Whilst calling push or pop is as simple as BRA push or BRA pop, it can be tough to get the arguments in the correct registers. Below I will show how I would go about pushing a value to the stack. First we load ret-p, which points to our return location (ret-l), and store it in ret. Next we load the value we want to push into the accumulator. Finally we branch to push.

In maths, $x!=x(x-1)\ldots(2)(1)$. E.g. $5!=5\times4\times3\times2\times1=120$. This is especially difficult in LMC as there is no multiplication instruction, meaning we will have to implement that ourselves through repeated addition. To make things clearer, we will implement multiplication as if it were a function. We will say that in memory location a it will expect the first operand, in location b it will expect the second, and it will return the result in c. We only need to return to one location so we can just directly branch to ret without having to worry about modifying memory. This is a fairly simple implementation of multiplication so I won't explain it in depth. We first set c to 0. Now we add a to c and subtract 1 from b, and continue to do this until b reaches 0, which means we will have added a to c b times, resulting in c becoming a times b. Finally we return to memory location ret.

Accept user input and store it in a; this will be the number we will work out the factorial of. Subtract 1 and store that in d and b. This sets up our first multiplication, i.e. input times (input - 1). Branch to mul to perform the multiplication. Load the value of d, which acts as our counter to keep track of what we are multiplying next. Subtract 1 from d and end the program (outputting the result) if it is 0. Store d in b and c in a to set up the next multiplication (the previous result times the counter). Branch to mul to perform the next multiplication. Below I will show the full code with a couple of additions to deal with $1!$ and $0!$ as well as a couple of hacks to reduce the length of the code.
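As with the stack code, here is a minimal sketch of just the core loop described above (an illustration, not the author's full listing: it omits the extra handling of $1!$ and $0!$ and the length-saving hacks, and the result overflows LMC's three-digit mailboxes for inputs above 6). The same caveats about mnemonics, hyphenated labels and // comments apply.

start   INP             // read n
        STA a
        SUB one
        STA b
        STA d           // first product is n * (n-1); d counts the factors left
        BRA mul
ret     LDA d           // mul comes back here with the product in c
        SUB one
        STA d
        BRZ done        // counter exhausted: print the result
        STA b           // the next factor is the counter itself
        LDA c
        STA a           // carry the running product forward
        BRA mul
done    LDA c
        OUT
        HLT
mul     LDA zero        // c := 0
        STA c
mloop   LDA b
        BRZ ret         // a has been added to c "b" times, so c = a * b
        SUB one
        STA b
        LDA c
        ADD a
        STA c           // repeated addition
        BRA mloop
a       DAT 0
b       DAT 0
c       DAT 0
d       DAT 0
zero    DAT 0
one     DAT 1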
CommonCrawl
Let $H$ be a Hilbert space. Question 1: Are all rank one operators from $H$ to $H$ of the form $$T:H\rightarrow H, \quad x \mapsto \langle x,u\rangle v$$ for some $u,v \in H$? Question 2: Suppose $I \subseteq L(H)$ is an ideal that contains all the rank one operators; how do we show it contains all the finite rank operators? These two statements seem to be true, but I could not find any reference.

Operators of the form $$x \mapsto \langle x,v \rangle w$$ are rank one if $v \not=0, w \not= 0$. Combining the above two, $T$ is rank one if and only if it is of the form $x \mapsto \langle x,v \rangle w$. Any finite rank operator must again be of the form $\sum_j \langle x, v_j \rangle w_j$ (finite sum). These are generated by the rank one operators. I would be happy if anyone could point out possible pitfalls / mistakes in my proof.

I don't really see how you combine your 1 and 2 to get that $T$ is of the desired form when it is rank-one, so I cannot comment on that. I also don't see how you reason on 4. When $T$ is finite-rank, you can repeat the above but, instead of a single $y$, you will now have an orthonormal basis $y_1,\ldots,y_n$ and bounded linear functionals $\lambda_j$.

Not the answer you're looking for? Browse other questions tagged functional-analysis operator-theory hilbert-spaces operator-algebras compact-operators or ask your own question. Proving a linear operator is compact: understanding the statement "norm limit of a sequence of finite rank operators".
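Coming back to Question 2, one standard way to finish it explicitly (a sketch along the lines hinted at above, not necessarily the intended argument): let $T\in L(H)$ have finite rank and let $w_1,\dots,w_n$ be an orthonormal basis of $\operatorname{ran} T$. Then for every $x\in H$
$$Tx=\sum_{j=1}^{n}\langle Tx,w_j\rangle w_j=\sum_{j=1}^{n}\langle x,T^{*}w_j\rangle w_j,$$
so $T=\sum_{j=1}^{n}R_j$ where $R_j x=\langle x,T^{*}w_j\rangle w_j$ has rank at most one. Since the ideal $I$ contains all rank one operators (and the zero operator) and is closed under addition, it follows that $T\in I$.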
CommonCrawl
With the rapid expansion of computer networks, communication and storage of medical information have entered a new era. Teleclinical practice and computer digital storage are two important medical information technologies that have made the issue of data compression of crucial importance. Efficiently compressing data is the key to making teleclinical practice feasible, since the bandwidth provided by computer media is too limited for the huge amount of medical data that must be transmitted. Because of the high compressibility of medical images, data compression is also desirable for digital storage despite the availability of inexpensive hardware for mass storage. This chapter addresses the family of progressive image compression algorithms. The progressive property is preferred because it allows users to gradually recover images from low to high quality and to stop at any desired bit rate, including lossless recovery. A progressive transmission algorithm with automatic security filtering features for on-line medical image distribution using Daubechies' wavelets has been developed and is discussed in this chapter. The system is practical for real-world applications, processing and coding each 12-bit image of size $512\times512$ within 2 seconds on a Pentium Pro. Besides its exceptional speed, the security filter has demonstrated a remarkable accuracy in detecting sensitive textual information within current or digitized previous medical images. Copyright 1998 Kluwer. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from Kluwer.
CommonCrawl
It's an unassuming little letter, $x$. In fact, that's the reason we use it to represent something we don't know. But how do you write it down? When Vijay Krishnan tweeted a link to an American college professor's page on mathematical handwriting, I was shocked to learn that he thought adding a hook to a simple cross was sufficient to differentiate letter-$x$ from times-$\times$. So I asked our Twitter followers how they write $x$. The Cambrian explosion of diversity in answers I received was eye-opening – I'm glad I asked!

Lots of people have blogs where they talk about maths. Lots of these people just use plain text for mathematical notation which, while it gets the point across, isn't as easy to read or as visually appealing as it could be. MathJax lets you write LaTeX and get beautifully typeset mathematical notation. And it's really really easy to set up: you just need to paste some code into the header of your blog's theme. To make it really really really easy, I've written some very detailed instructions on what to do for each big blogging service. (If you're reading this after I wrote it, which you definitely are, beware that the interfaces I describe may have changed, so the advice below might be inaccurate.)
CommonCrawl
Abstract: Iterated admissibility (IA) can be seen as exhibiting a minimal criterion of rationality in games. In order to make this intuition more precise, the epistemic characterization of this game-theoretic solution has been actively investigated in recent times: it has been shown that strategies surviving m+1 rounds of iterated admissibility may be identified as those that are obtained under a condition called rationality and m assumption of rationality (RmAR) in complete lexicographic type structures. On the other hand, it has been shown that its limit condition, $R\infty AR$, might not be satisfied by any state in the epistemic structure, if the class of types is complete and the types are continuous. In this paper we introduce a weaker notion of completeness which is nonetheless sufficient to characterize IA in a highly general way as the class of strategies that indeed satisfy $R\infty AR$. The key methodological innovation involves defining a new notion of generic types and employing these in conjunction with Cohen's technique of forcing.
CommonCrawl
While often our data can be well represented by a homogeneous array of values, sometimes this is not the case. This section demonstrates the use of NumPy's structured arrays and record arrays, which provide efficient storage for compound, heterogeneous data. While the patterns shown here are useful for simple operations, scenarios like this often lend themselves to the use of Pandas Dataframes, which we'll explore in Chapter 3. But this is a bit clumsy. There's nothing here that tells us that the three arrays are related; it would be more natural if we could use a single structure to store all of this data. NumPy can handle this through structured arrays, which are arrays with compound data types. Here 'U10' translates to "Unicode string of maximum length 10," 'i4' translates to "4-byte (i.e., 32 bit) integer," and 'f8' translates to "8-byte (i.e., 64 bit) float." We'll discuss other options for these type codes in the following section. As we had hoped, the data is now arranged together in one convenient block of memory. Note that if you'd like to do any operations that are any more complicated than these, you should probably consider the Pandas package, covered in the next chapter. As we'll see, Pandas provides a Dataframe object, which is a structure built on NumPy arrays that offers a variety of useful data manipulation functionality similar to what we've shown here, as well as much, much more. The shortened string format codes may seem confusing, but they are built on simple principles. The first (optional) character is < or >, which means "little endian" or "big endian," respectively, and specifies the ordering convention for significant bits. The next character specifies the type of data: characters, bytes, ints, floating points, and so on (see the table below). The last character or characters represents the size of the object in bytes. Now each element in the X array consists of an id and a $3\times 3$ matrix. Why would you use this rather than a simple multidimensional array, or perhaps a Python dictionary? The reason is that this NumPy dtype directly maps onto a C structure definition, so the buffer containing the array content can be accessed directly within an appropriately written C program. If you find yourself writing a Python interface to a legacy C or Fortran library that manipulates structured data, you'll probably find structured arrays quite useful! Whether the more convenient notation is worth the additional overhead will depend on your own application. This section on structured and record arrays is purposely at the end of this chapter, because it leads so well into the next package we will cover: Pandas. Structured arrays like the ones discussed here are good to know about for certain situations, especially in case you're using NumPy arrays to map onto binary data formats in C, Fortran, or another language. For day-to-day use of structured data, the Pandas package is a much better choice, and we'll dive into a full discussion of it in the chapter that follows.
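Here is a small sketch pulling together the constructions described above (the variable names and sample values are placeholders of mine, not the original notebook's):

import numpy as np

# Three separate, related arrays...
name = ['Alice', 'Bob', 'Cathy', 'Doug']
age = [25, 45, 37, 19]
weight = [55.0, 85.5, 68.0, 61.5]

# ...combined into one structured array with a compound dtype.
# 'U10' = Unicode string of max length 10, 'i4' = 4-byte int, 'f8' = 8-byte float
data = np.zeros(4, dtype={'names': ('name', 'age', 'weight'),
                          'formats': ('U10', 'i4', 'f8')})
data['name'] = name
data['age'] = age
data['weight'] = weight

print(data['name'])                      # one field for every record
print(data[0])                           # one record with all of its fields
print(data[data['age'] < 30]['name'])    # boolean masking still works

# A compound type pairing an integer id with a 3x3 matrix per element
tp = np.dtype([('id', 'i8'), ('mat', 'f8', (3, 3))])
X = np.zeros(1, dtype=tp)
print(X['mat'][0])                       # the 3x3 block of the first element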
CommonCrawl
Combinational logic circuits can be constructed from programmable logic devices (PLDs). The general idea is illustrated in Figure 7.2.1 for two input variables and two output functions of these variables. Figure 7.2.1. Simplified circuit for a programmable logic array. Each of the input variables, both in its uncomplemented and complemented form, are inputs to AND gates through fuses. (The S-shaped lines in the circuit diagram represent fuses.) The fuses can be "blown" or left in place in order to program each AND gate to output a product. Since every input, plus its complement, is input to each AND gate, any of the AND gates can be programmed to output a minterm. The products produced by the array of AND gates are all connected to OR gates, also through fuses. Thus, depending on which OR-gate fuses are left in place, the output of each OR gate is a sum of products. There may be additional logic circuitry to select between the different outputs. We have already seen that any Boolean function can be expressed as a sum of products, so this logic device can be programmed by "blowing" the fuses to implement any Boolean function. PLDs come in many configurations. Some are pre-programmed at the time of manufacture. Others are programmed by the manufacturer. And there are types that can be programmed by a user. Some can even be erased and reprogrammed. Programming technologies range from specifying the manufacturing mask (for the pre-programmed devices) to inexpensive electronic programming systems. Some devices use "antifuses" instead of fuses. They are normally open. Programming such devices consists of completing the connection instead of removing it. Both the AND gate plane and the OR gate plane are programmable. Only the OR gate plane is programmable. Only the AND gate plane is programmable. A programmable logic array is typically larger than the one shown in Figure 7.2.1, which is already complicated to draw. To simplify the drawing, it is typical to use a diagram as shown in Figure 7.2.2 to specify the design. This diagram deserves some explanation. Note in Figure 7.2.1 that each input variable and its complement is connected to the inputs of all the AND gates through a fuse. The AND gates have multiple inputs—one for each variable and its complement. In Figure 7.2.2 we use one horizontal line leading to the input of each AND gate to represent multiple wires, one for each variable and its complement. So each AND gate in Figure 7.2.2 has eight inputs even though we draw only one line. Figure 7.2.2. The horizontal lines to the AND gate inputs represent multiple wires—one for each input variable and its complement. The vertical lines to the OR gate inputs also represent multiple wires—one for each AND gate output. The dots represent connections. Read only memory can be implemented as a programmable logic device where only the OR plane can be programmed. The AND gate plane is wired to provide all the minterms. Thus, the inputs to the ROM can be thought of as addresses. Then the OR gate plane is programmed to provide the bit pattern at each address. For example, the ROM diagrammed in Figure 7.2.3 has two inputs, $a_1$ and $a_0$. Figure 7.2.3. Eight-byte Read Only Memory (ROM). The "\(\times\)" connections represent permanent connections. Each AND gate can be thought of as producing an address. The eight OR gates produce one byte. The connections (dots) in the OR plane represent the bit pattern stored at the address. Figure 7.2.4. Two-function Programmable Array Logic (PAL). 
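To make the AND-plane / OR-plane picture concrete, the following small simulation (an illustration, not part of the textbook) programs a PLA by listing which connections survive: each AND gate keeps the literals whose fuses are left in place, and each OR gate keeps the product terms it remains connected to.

def pla(and_plane, or_plane, inputs):
    # and_plane: one dict per AND gate, mapping input name -> required value
    # or_plane: one set per OR gate, naming the AND gates it stays connected to
    # inputs: dict mapping input name -> 0 or 1
    products = [int(all(inputs[v] == bit for v, bit in term.items()))
                for term in and_plane]
    return [int(any(products[i] for i in gate)) for gate in or_plane]

# Two inputs a, b; outputs f0 = a b' + a' b (exclusive OR) and f1 = a b
and_plane = [{'a': 1, 'b': 0},   # product term a b'
             {'a': 0, 'b': 1},   # product term a' b
             {'a': 1, 'b': 1}]   # product term a b
or_plane = [{0, 1},              # f0 = a b' + a' b
            {2}]                 # f1 = a b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, pla(and_plane, or_plane, {'a': a, 'b': b}))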
The "\(\times\)" connections represent permanent connections. Each AND gate can be thought of as producing an address. The eight OR gates produce one byte. The connections (dots) in the OR plane represent the bit pattern stored at the address.
CommonCrawl
Abstract. We consider discrete one-dimensional Schr\"odinger operators with quasi-Sturmian potentials. We present a new approach to the trace map dynamical system which is independent of the initial conditions and establish a characterization of the spectrum in terms of bounded trace map orbits. Using this, it is shown that the operators have purely singular continuous spectrum and their spectrum is a Cantor set of Lebesgue measure zero. We also exhibit a subclass having purely $\alpha$-continuous spectrum. All these results hold uniformly on the hull generated by a given potential.
CommonCrawl
Is it possible to represent an improper fraction as a finite sum of unique unit fractions (Egyptian fractions)? Of course $n$ exists because the infinite Harmonic series diverges. It follows that $\alpha - H_n<\frac 1n$ so none of the fractions in the standard Egyptian decomposition of $\alpha - H_n$ can appear in $H_n$. Not the answer you're looking for? Browse other questions tagged number-theory fractions egyptian-fractions or ask your own question. Algebraic structure of a set of Egyptian fractions of a positive rational? Does there exist an operation which partitions any fraction into the sum of the minimum number of unit fractions? Any 'odd fraction' can be represented as the finite sum of different 'odd unit fractions'? Any 'odd unit fraction' whose denominator is not $1$ can be represented as the sum of three different 'odd unit fractions'? How do I prove that any unit fraction can be represented as the sum of two other distinct unit fractions? Is a number divided by $0$ an improper fraction?
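Coming back to the question at the top, a concrete illustration (my own example, not part of the thread): the improper fraction $2$ can be written with distinct unit fractions as
$$2=\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\frac{1}{6},$$
and the construction sketched above generalises this by first removing a harmonic partial sum $H_n$ and then decomposing the small remainder $\alpha-H_n<\frac1n$ into unit fractions, none of which can collide with those already used.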
CommonCrawl
The first part of the graph is the parabolic function $4-x^2$ until the point (1,3), which cuts the x-axis at (-2,0). The second part of the graph is the parabolic function $x^2 +2x$ from the point (1,3) on through (2,8).

First part, value at $x=1$:
$F(1) = 4 - 1^2 = 4 - 1 = 3$

The first part will cut the x-axis where:
$4-x^2 = 0$
$x^2 = 4$
$x = \sqrt4$ or $x = -\sqrt4$
$x = 2$ or $x = -2$

So the first part of the graph will be the parabolic function $4-x^2$ until the point (1,3), which cuts the x-axis at (-2,0).

Second part, values at $x=1$ and $x=2$:
$F(1) = 1^2 + 2 \times 1 = 1 + 2 = 3$
$F(2) = 2^2 + 2 \times 2 = 4 + 4 = 8$

So the second part of the graph will be the parabolic function $x^2 +2x$ from the point (1,3) on through (2,8).
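In other words, the graph described is that of the piecewise function (the original problem statement is not shown, so this is inferred from the working above):
$$F(x)=\begin{cases}4-x^2, & x\le 1,\\ x^2+2x, & x>1,\end{cases}$$
which is continuous at $x=1$ because both pieces give $F(1)=3$.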
CommonCrawl
Wang L., Yan J.-R., Zhang J.-G., Liu Z.-R. Chinese Physics 16, 2498, 2007. Many real-world networks have the ability to adapt themselves in response to the state of their nodes. This paper studies controlling disease spread on a network with a feedback mechanism, where the susceptible nodes are able to avoid contact with the infected ones by cutting their connections with some probability when the density of infected nodes reaches a certain value in the network. Such a feedback mechanism takes into account the network's own adaptivity and the cost of immunization. The dynamical equations for immunization with the feedback mechanism are solved, and the theoretical predictions are in agreement with the results of large-scale simulations. It is shown that when the lethality $\alpha$ increases, the prevalence decreases more sharply for the same immunization $g$. That is, with the same cost, a better control result can be obtained. This approach offers an effective and practical policy to control disease spread, and may also be relevant to other similar networks. This paper appeared in Chinese Physics.
CommonCrawl
Are there any good nonconstructive "existential metatheorems"? Which functions have all derivatives everywhere positive? How many three dimensional real Lie algebras are there? What is the current status of Agrawal's conjecture? In an arbitrary abelian category, does chain complex homology commute with coproduct? Why are universal introduction and existential elimination valid inference rules? Why is a matrix pencil called a pencil? Why and how are moduli spaces of (semi)stable vector bundles well-behaved? Research trends in geometry of numbers? Are there Ricci-flat riemannian manifolds with generic holonomy? Notions of degree for maps $S^n \to S^n$? is the tensor product of projective modules again projective? When is a finite cw-complex a compact topological manifold? Why are optimization problems often called "programs"? When is a submanifold of $\mathbf R^n$ given by global equations? Why is so much work done on numerical verification of the Riemann Hypothesis? What do you do if you find a typo in an equation of a paper? How are infinite-dimensional manifolds most commonly treated? How should I visualise RP^n? What's about "quantum modular forms"? Non-commutative geometry from von Neumann algebras? Mumford conjecture: Heuristic reasons? Generalizations? … Algebraic geometry approaches?
CommonCrawl
Lieou-Kui, E., Kanazawa A., Behr J‐B., & Py S. (2018). Ring‐Junction‐Substituted Polyhydroxylated Pyrrolizidines and Indolizidines from Ketonitrone Cycloadditions. Eur. J. Org. Chem.. 2018, 2178–2192. Tangara, S., Kanazawa A., & Py S. (2017). The Baldwin Rearrangement: Synthesis of 2-Acylaziridines.. Eur. J. Org. Chem.. 6357–6364. Tangara, S., Aupic C., Kanazawa A., Poisson J-F., & Py S. (2017). Aziridination of Cyclic Nitrones Targeting Constrained Iminosugars.. Org. Lett.. 19, 4842–4845. Da Cruz, A. Vieira, Kanazawa A., Poisson J-F., Behr J-B., & Py S. (2017). Polyhydroxylated Quinolizidine Iminosugars as Nanomolar Selective Inhibitors of α-Glucosidases.. J. Org. Chem.. 82, 9866–9872. Lieou-Kui, E., Kanazawa A., Philouze C., Poisson J-F., & Py S. (2017). Exploring the Metal-Catalyzed Reactions of α-Diazo-β-hydroxyamino Esters: Conversion of Cyclic Aldonitrones into Ketonitrones.. European J. Org. Chem.. 2017, 363–372. Lieou-Kui, E., Kanazawa A., Philouze C., Poisson J-F., & Py S. (2017). Exploring the Metal‐Catalyzed Reactions of α‐Diazo‐β‐hydroxyamino Esters: Conversion of Cyclic Aldonitrones into Ketonitrones. Eur. J. Org. Chem.. 363-372. Racine, E., Burchak O. N., & Py S. (2016). Synthesis of α-Acyloxynitrones and Reactivity towards Samarium Diiodide.. Eur. J. Org. Chem.. 2016, 4003–4012. Boisson, J., Thomasset A., Racine E., Cividino P., Sainte-Luce T. Banchelin, Poisson J-F., et al. (2015). Hydroxymethyl-Branched Polyhydroxylated Indolizidines: Novel Selective α-Glucosidase Inhibitors.. Org. Lett.. 17, 3662–3665. Lieou-Kui, E., Kanazawa A., Poisson J-F., & Py S. (2014). Transition-Metal-Catalyzed Ring Expansion of Diazocarbonylated Cyclic N-Hydroxylamines: A New Approach to Cyclic Ketonitrones.. Org. Lett.. 16, 4484–4487. Selim, K. B., Martel A., Laurent M. Y., Lhoste J., Py S., & Dujardin G.. (2014). Enantioselective Ruthenium-Catalyzed 1,3-Dipolar Cycloadditions between C-Carboalkoxy Ketonitrones and Methacrolein: Solvent Effect on Reaction Selectivity and Its Rationale.. J. Org. Chem.. 79, 3414–3426. Zhang, X., Cividino P., Poisson J-F., Shpak-Kraievskyi P., Laurent M. Y., Martel A., et al. (2014). Asymmetric Synthesis of α,α-Disubstituted Amino Acids by Cycloaddition of (E)-Ketonitrones with Vinyl Ethers.. Org. Lett.. 16, 1936–1939. Ben Ayed, K., Beauchard A., Poisson J-F., Py S., Laurent M. Y., Martel A., et al. (2014). Asymmetric access to α-substituted functional aspartic acid derivatives by a [3+2] strategy employing a chiral dienophile.. Eur. J. Org. Chem.. 2014, 2924–2932. Lieou-Kui, E., Kanazawa A., Poisson J-F., & Py S. (2013). Unprecedented base-promoted nucleophilic addition of diazoesters to nitrones.. Tetrahedron Lett.. 54, 5103–5105. Xu, C-P., Huang P-Q., & Py S. (2012). SmI2-Mediated Coupling of Nitrones and tert-Butanesulfinyl Imines with Allenoates: Synthesis of β-Methylenyl-gamma-lactams and Tetramic Acids.. Org. Lett.. 14, 2034–2037. Gilles, P., & Py S. (2012). SmI2-Mediated Cross-Coupling of Nitrones with β-Silyl Acrylates: Synthesis of (+)-Australine.. Org. Lett.. 14, 1042–1045. Prikhod'ko, A., Walter O., Zevaco T. A., Garcia-Rodriguez J., Mouhtady O., & Py S. (2012). Synthesis of α-amino acids through samarium(II) iodide promoted reductive coupling of nitrones with CO2.. Eur. J. Org. Chem.. 2012, 3742–3746, S3742/1–S3742/33. Wolan, A., Soueidan M., Chiaroni A., Retailleau P., Py S., & Six Y.. (2011). Tactics for the asymmetric preparation of 2-azabicyclo[3.1.0]hexane and 2-azabicyclo[4.1.0]heptane scaffolds.. Tetrahedron Lett.. 52, 2501–2504. Burchak, O. 
N., Masson G., & Py S. (2010). SmI2-mediated reductive cross-coupling reactions of $\alpha$-cyclopropyl nitrones.. Synlett. 1623–1626. Chavarot, M., Rivard M., Chamiot B., Hahn F., Rose-Munch F., Rose E., et al. (2010). Synthesis and Structural Characterization of Planar Chiral Cr(CO)3-Complexed Aromatic Nitrones - Valuable Substrates for Asymmetric SmI2-Induced Coupling Reactions.. European J. Org. Chem.. 944–958, S944/1–S944/23. Cividino, P., Dheu-Andries M-L., Ou J., Milet A., Py S., & Toy P. H. (2009). Mechanistic investigations of the phosphine-mediated nitrone deoxygenation reaction and its application in cyclic imine synthesis.. Tetrahedron Lett.. 50, 7038–7042. Racine, E., & Py S. (2009). Tandem SmI2-induced nitrone $\beta$-elimination/aldol-type reaction.. Org. Biomol. Chem.. 7, 3385–3387. Burchak, O. N., & Py S. (2009). Reductive cross-coupling reactions (RCCR) between CN and CO for $\beta$-amino alcohol synthesis.. Tetrahedron. 65, 7333–7356. Racine, E., Philouze C., & Py S. (2009). Synthesis and X-ray Structure of (2R,3R,4R,5R)-3,4,5-Tris-Benzyloxy-2-Benzyloxymethyl-Piperidin-1-ol, the N-Hydroxy-Analog of 2,3,4,6-Tetra-O-Benzyl-1-Deoxymannojirimycin.. J. Chem. Crystallogr.. 39, 494–499. Racine, E., Bello C., Gerber-Lemaire S., Vogel P., & Py S. (2009). A Short and Convenient Synthesis of 1-Deoxymannojirimycin and N-Oxy Analogues from D-Fructose.. J. Org. Chem.. 74, 1766–1769. Desvergnes, S., Vallée Y., & Py S. (2008). Novel Polyhydroxylated Cyclic Nitrones and N-Hydroxypyrrolidines through BCl3-Mediated Deprotection.. Org. Lett.. 10, 2967–2970. Cividino, P., Py S., Delair P., & Greene A. E. (2007). 1-(2,4,6-Triisopropylphenyl)ethylamine: A New Chiral Auxiliary for the Asymmetric Synthesis of γ-Amino Acid Derivatives. J. Org. Chem.. 72, 485–493.
CommonCrawl
Abstract: Halevi, Lindell, and Pinkas (CRYPTO 2011) recently proposed a model for secure computation that captures communication patterns that arise in many practical settings, such as secure computation on the web. In their model, each party interacts only once, with a single centralized server. Parties do not interact with each other; in fact, the parties need not even be online simultaneously. In this work we present a suite of new, simple and efficient protocols for secure computation in this "one-pass" model. We give protocols that obtain optimal privacy for the following general tasks: -- Evaluating any multivariate polynomial $F(x_1, \ldots ,x_n)$ (modulo a large RSA modulus N), where the parties each hold an input $x_i$. -- Evaluating any read once branching program over the parties' inputs. As a special case, these function classes include all previous functions for which an optimally private, one-pass computation was known, as well as many new functions, including variance and other statistical functions, string matching, second-price auctions, classification algorithms and some classes of finite automata and decision trees.
CommonCrawl
A 3-D Microwave Imaging Reflectometry (MIR) instrument is being designed for the National Spherical Tokamak Experiment (NSTX). Reflections from multiple, extended plasma cutoff surfaces are imaged onto a 2-D mixer array (8$\times $2 or 8$\times $4 elements, depending upon the size of the viewing window). Through the simultaneous launch and collection of 8 probe frequencies spanning a frequency range of 38-52 GHz (extendable to 70 GHz), the result is a 3-D visualization (up to 8$\times $4$\times $8 or 256 channels) of plasma density fluctuations associated with MHD and microturbulence. Each probe frequency may be independently controlled for radial correlation studies, or scanned to collect localized fluctuation data over a large plasma volume. The 2-D nature of the mixer array allows the magnetic pitch angle to be determined through correlation studies of toroidally and poloidally separated channels. Technical details regarding the MIR system design will be presented together with that of an innovative adaptive optics approach under development at UC Davis which can match the curvature of the illumination beam to that of the target plasma in real-time. *Work supported by U.S. DoE Grants DE-FG02-99ER54518 and DE-AC02-76-CH03073.
CommonCrawl
Abstract : A novel low-density parity-check decoder architecture is presented that can achieve a high data throughput while retaining the flexibility to decode a wide range of quasi-cyclic codes. The proposed architecture allows to combine multiple message-update schedules, providing an additional degree of freedom to jointly optimize the code and decoder architecture. Protograph-based code constructions are introduced that exploit this added degree of freedom in order to maximize data throughput, and that are also optimized to reduce the complexity of the required parallel data accesses.For some examples and under an ideal pipeline speedup assumption, the proposed architecture and code designs reduce decoding latency by a factor of $3.2\times$ compared to a decoder using a strict sequential schedule.
CommonCrawl
The zero/low intermediate frequency (IF) receiver (RX) architecture has enabled full CMOS integration. As the technology scales and wireless standards become ever more challenging, the issues related to time-varying dc offsets, the second-order nonlinearity, and flicker noise become more critical. In this paper, we propose a new architecture of a superheterodyne RX that attempts to avoid such issues. By exploiting discrete-time (DT) operation and using only switches, capacitors, and inverter-based gm-stages as building blocks, the architecture becomes amenable to further scaling. Full integration is achieved by employing a cascade of four complex-valued passive switched-cap-based bandpass filters sampled at $4\times $ of the local oscillator rate that perform IF image rejection. Channel selection is achieved through an equivalent of the seventh-order filtering. A new twofold noise-canceling low-noise transconductance amplifier is proposed. Frequency domain analysis of the RX is presented by the proposed DT model. The RX is wideband and covers 0.4–2.9 GHz with a noise figure of 2.9–4 dB. It is implemented in 65-nm CMOS and consumes 48–79 mW.
CommonCrawl
Global solvability and asymptotics of semilinear parabolic Cauchy problems in $\mathbb R^n$ are considered. Following the approach of A. Mielke these problems are investigated in weighted Sobolev spaces. The paper provides also a theory of second order elliptic operators in such spaces considered over $\mathbb R^n$, $n\in \mathbb N$. In particular, the generation of analytic semigroups and the embeddings for the domains of fractional powers of elliptic operators are discussed. S. M. Nikolsky: A Course of Mathematical Analysis. Mir Publishers, Moscow, 1987.
CommonCrawl
Will the Universe actually end in a big rip? by Matthew Wright. Published on 3 July 2015.

Yesterday, the Guardian published an article with the headline "Not with a bang, but with a Big Rip: how the world will end". In it they discuss a recently published paper by some physicists in Tennessee, claiming "scientists have concluded that we could be heading for an equally dramatic cosmic finale: the Big Rip". Now part of my PhD is on this exact topic of cosmology, and I find it really interesting to see how the mainstream media report on my own field. So I thought I'd write a blog post putting the article in context, and explaining what the Guardian got wrong.

Firstly, some maths. You might think that modelling the universe as a whole would be quite a complicated problem, but actually if we make a few reasonable assumptions then it is not too difficult. We start with the assumption that on large scales the universe is roughly the same everywhere. Mathematically we say that space is homogeneous and isotropic. Then we define a function called the scale factor of the universe, $a(t)$, which tells us the rate at which the universe is expanding. The actual value of $a$ is not important; all we care about is how it changes over time. So if $a(t)$ is constant, this tells us the universe is static; if $a(t)$ is increasing then the universe is expanding, and if $a(t)$ is decreasing then the universe is contracting. In order to find an equation for this scale factor, we use Einstein's equation from general relativity, which relates the scale factor to the energy content of the universe. where $a_0$, $H_0$ and $t_0$ are just constants. What is remarkable about this simple equation is that it tells us that the entire fate of the universe depends on just the value of this one parameter $w$! If $w>-1$, then $a(t)$ will increase forever and the universe will end in what cosmologists call a big freeze. However if $w<-1$, then the scale factor will actually become infinite in a finite amount of time, occurring at $t=t_0$. This is because the exponent $2/(3(w+1))$ is negative, so $a(t)=a_0/0=\infty$ when $t=t_0$. At this time all points in the universe will be an infinite distance apart: this is what is meant by a big rip.

The accelerated expansion of the universe was first discovered by observing distant supernovae. Source: wikimedia commons.

Astronomers can measure the equation of state by observing the acceleration rate of the universe. It was observed in the late 1990s that the universe was not only expanding, as had been discovered by Hubble in the 1930s, but that this rate of expansion was accelerating. This was a remarkable and unexpected discovery, and left cosmologists baffled, as some unknown energy source in the universe was required to explain it. So they invented the term dark energy to describe this mysterious energy. For the universe to be accelerating, we require an equation of state $w<-1/3$. In fact, very high precision measurements indicate an equation of state parameter roughly equal to $-1$, which is exactly on the borderline between the scenario of a big freeze and a big rip. In order to explain dark energy, scientists have suggested thousands of potential theories, all of which predict a different effective equation of state parameter $w$. In the particular paper the Guardian reported on, they take into account a property of fluids called bulk viscosity, and claim that this leads to a big rip, in particular an equation of state with $w<-1$.
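To spell out the scale-factor solution being described in the maths section above (a standard reconstruction under the usual assumptions — a spatially flat universe containing a single fluid with constant equation-of-state parameter $w\neq-1$, i.e. pressure $=w\times$ energy density — the article's displayed equation may have differed in detail): the Friedmann equation gives the power law
$$a(t)=a_0\left[1+\tfrac{3}{2}(1+w)\,H_0\,(t-t_{\rm now})\right]^{\frac{2}{3(1+w)}}.$$
For $w<-1$ the bracket reaches zero after the finite time $\tfrac{2}{3|1+w|H_0}$, and because the exponent is then negative the scale factor diverges there; in the article's notation that moment is $t_0$, and this is precisely the big rip.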
It's an interesting idea, but I find it kind of bizarre that the Guardian picked up on this particular paper. This is just another model amongst a multitude of theories, some of which also predict a universe with a big rip; many others do not. In fact, I recently published a paper on a different model in the same journal which also predicts a big rip. For the moment, the proposed model is certainly not a hot topic of research (the paper has yet to receive any citations), so it is odd the Guardian chose to pick up on it, and scientists have certainly not "concluded" anything of the sort. Having said this, I think it's great that the Guardian has found a hook to engage the public and explain these exciting ideas about dark energy. But, as is often the case in how journalists report science, the way the article is phrased misleads the public about how science research actually works. The Guardian is certainly far from the worst culprit in this regard. Journalism and science are not always an easy mix: science does not necessarily fit into a daily news agenda, which must portray everything as a ground-breaking new discovery. These are extremely rare in science: progress requires many years of gradual improvement by a large collaborative community, all working together to uncover the truth behind Nature's grand mysteries. And finally, to answer the question I posed in the title of this blog: the answer, as is often the case, is maybe.

Matt is a PhD student at UCL, working in the fields of general relativity and cosmology. Tennis maths: how long should a deuce point last? Using Markov chains to calculate some interesting tennis stats!
CommonCrawl
The above equation is the duality equation between F-theory and M-theory on a vanishing 2-torus. What's the explanation for this equation? Is there anything similar to this equation with M-theory and type IIB theory, and how is it explained?

M-theory compactified on a 2-torus is the same as M-theory compactified on a circle and then compactified on another circle, because $T^2=S^1\times S^1$. M-theory compactified on a circle is type IIA string theory, with $g_s$ being an increasing power of the radius of the compactified dimension. And if type IIA is compactified on a circle of a small radius, we get type IIB string theory via T-duality. When we connect the M-theory/IIA duality and the IIA/IIB T-duality, we get the $F=MA$ relationship between M-theory and type IIB you mentioned. One may avoid the type IIA intermediate step, too. M-theory on a two-torus is naively a 9-dimensional theory (the supergravity approximation would lead us to this belief). However, M-theory contains M2-branes, two-dimensional objects, and both of their spatial dimensions may be wrapped on the 2-torus. This produces point-like objects in the remaining 9 large dimensions. These objects are light when $A\to 0$ and they also have bound states of $N$ objects. So one obtains a continuum of new states in the $A\to 0$ limit and they may be reinterpreted as the momentum modes with respect to a new, "emergent", 10th spacetime dimension of the resulting type IIB string theory.

So, F-theory compactified on a vanishing (axion-dilaton?) torus gives IIB theory, while M-theory compactified on a vanishing torus gives IIA theory compactified on a small radius, that is, IIB theory compactified on a large radius (by T-duality). Is that correct?

Trimok: heuristically yes. However, one must be careful about the type of compactification you need for F-theory - the signature of the 2-torus isn't really well-defined and there isn't any decompactified 12-dimensional F-theory to start with.
CommonCrawl
We give a practical tool to control the $L^\infty$-norm of the Steklov eigenfunctions in a Lipschitz domain in terms of the norm of the $BV$-trace operator. The norm of this operator has the advantage to be characterized by purely geometric quantities. As a consequence, we give a spectral stability result for the Steklov eigenproblem under geometric domain perturbations and several examples where stability occurs. In particular we deal with geometric domains which are not equi-Lipschitz, like vanishing holes, merging sets, approximations of inner peaks.
CommonCrawl
What are the pros and cons of Pedersen commitments vs hash-based commitments? Obviously, it's possible to create a commitment scheme comm(r, S) by using a hash function H and computing H(S||r). This scheme is secure under the assumption that H is collision and preimage resistant, which (IMO) is a lighter cryptographic assumption than the discrete log assumption. So I guess my question is: why are commitment schemes like Pedersen commitments used, which do require the latter assumption? What efficiency or security benefits do they bring? And are there still any benefits to using hash commitments?

The hash-based commitment scheme you are sketching is in fact not secure under collision resistance and preimage resistance of the hash function alone. For hiding, you need to assume that the hash function you are using behaves like a random oracle (i.e., whenever queried on a new value it returns a uniformly random value from the output domain of the hash function, and for every repeated query it answers consistently). The random oracle assumption is an idealizing assumption which is considered to be a rather strong one compared to the discrete log assumption.

Not the answer you're looking for? Browse other questions tagged hash commitments or ask your own question. Why is the Pedersen commitment computationally binding? Why is the Pedersen commitment perfectly hiding? RIPEMD versus SHA-x, what are the main pros and cons? What are the pros/cons of using symmetric crypto vs. hash in a commitment scheme? Taking $n-1$ bits from hash function will also be hash function? Pedersen commitments, what happens if I choose $H$ such that $H = a\times G$? trapdoor commitment from lattice-based assumptions?
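Returning to the question at the top, here is a toy sketch contrasting the two schemes (illustrative only: the parameters are tiny and insecure, and the group choices and function names are mine, not from any particular library):

import hashlib
import secrets

# Hash-based commitment: comm(r, S) = H(S || r)
def hash_commit(value: bytes):
    r = secrets.token_bytes(16)                   # random opening value
    return hashlib.sha256(value + r).digest(), r  # publish the digest, keep (value, r)

def hash_verify(c, value: bytes, r: bytes) -> bool:
    return hashlib.sha256(value + r).digest() == c

# Pedersen commitment: comm(m, r) = g^m * h^r mod p.
# Toy parameters: p is a small safe prime, q = (p-1)//2, and g, h generate the
# order-q subgroup. In real use, h must be chosen so nobody knows log_g(h).
p = 1019
q = (p - 1) // 2
g = 4          # 2^2 mod p, a generator of the order-q subgroup
h = 9          # 3^2 mod p, an independent-looking generator (toy choice)

def pedersen_commit(m: int):
    r = secrets.randbelow(q)
    return (pow(g, m % q, p) * pow(h, r, p)) % p, r

def pedersen_verify(c, m: int, r: int) -> bool:
    return c == (pow(g, m % q, p) * pow(h, r, p)) % p

c1, r1 = hash_commit(b"my bid: 42")
print(hash_verify(c1, b"my bid: 42", r1))                 # True

c2, r2 = pedersen_commit(42)
print(pedersen_verify(c2, 42, r2))                        # True
# Pedersen commitments are additively homomorphic:
c3, r3 = pedersen_commit(8)
print(pedersen_verify((c2 * c3) % p, 50, (r2 + r3) % q))  # True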
CommonCrawl
The retailer should charge 44 dollars for the book.

Let s represent the selling price. The basic relationship is: selling price equals cost plus profit.
Selling price = Cost + Profit
Selling price = s
Cost = 22
Profit = 50% of the selling price = 50% $\times$ s = 0.5s

So s = 22 + 0.5s, which gives 0.5s = 22. Divide both sides by 0.5 to get s = 44.

The retailer should charge 44 dollars for the book.
CommonCrawl
Experiments conducted on a linear helicon plasma (HelCat) device show evidence of drift wave instability fluctuations, which are suppressed when an increased positive DC electric potential is applied perpendicular to the magnetic field. Simultaneously, a new K-H instability appears, and deterministic chaos can also be observed during this transition. Measurements show that both the axial and azimuthal flow speeds, as well as the Reynolds stress, change under the effect of the E$\times$B flow shear caused by this external disturbance. When the neutral gas pressure is increased during the process, the suppression requires a higher DC bias, and the K-H transition is not observed. From axial flow measurements, a possible mechanism is suggested by the reduced flow speed, which may be caused by increased collisions between charged particles and neutrals. Two simple models are presented to predict the neutral change with increased gas pressure.
CommonCrawl
You are given a rooted tree that consists of $n$ nodes. The nodes are numbered $1,2,\ldots,n$, and node $1$ is the root. Each node has a color. Your task is to determine for each node the number of distinct colors in the subtree of the node. The next line consists of $n$ integers $c_1,c_2,\ldots,c_n$: the color of each node. Print $n$ integers: for each node $1,2,\ldots,n$, the number of distinct colors.
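One standard way to solve this within the limits is small-to-large merging of colour sets; the sketch below assumes the usual CSES input format (n on the first line, the colours on the next line, then n-1 edge lines a b), and Python may need the usual constant-factor tuning to pass the strictest time limits.

import sys

def subtree_distinct_colors(n, colors, edges):
    # Each node keeps a set of the colours in its subtree; when combining
    # children, the smaller set is merged into the larger one, which gives
    # O(n log n) set insertions overall.
    adj = [[] for _ in range(n + 1)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    answer = [0] * (n + 1)
    color_sets = [None] * (n + 1)

    # Iterative post-order DFS from the root (node 1) to avoid recursion limits.
    stack = [(1, 0, False)]
    while stack:
        node, parent, processed = stack.pop()
        if not processed:
            stack.append((node, parent, True))
            for child in adj[node]:
                if child != parent:
                    stack.append((child, node, False))
        else:
            merged = {colors[node - 1]}
            for child in adj[node]:
                if child != parent:
                    child_set = color_sets[child]
                    if len(child_set) > len(merged):
                        merged, child_set = child_set, merged
                    merged |= child_set
                    color_sets[child] = None    # release the smaller set
            color_sets[node] = merged
            answer[node] = len(merged)
    return answer[1:]

if __name__ == "__main__":
    data = sys.stdin.read().split()
    n = int(data[0])
    colors = [int(x) for x in data[1:1 + n]]
    flat = [int(x) for x in data[1 + n:]]
    edges = list(zip(flat[0::2], flat[1::2]))
    print(*subtree_distinct_colors(n, colors, edges))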
CommonCrawl
Abstract: Computed tomography (CT) is widely used in screening, diagnosis, and image-guided therapy for both clinical and research purposes. Since CT involves ionizing radiation, an overarching thrust of related technical research is development of novel methods enabling ultrahigh quality imaging with fine structural details while reducing the X-ray radiation. In this paper, we present a semi-supervised deep learning approach to accurately recover high-resolution (HR) CT images from low-resolution (LR) counterparts. Specifically, with the generative adversarial network (GAN) as the building block, we enforce the cycle-consistency in terms of the Wasserstein distance to establish a nonlinear end-to-end mapping from noisy LR input images to denoised and deblurred HR outputs. We also include the joint constraints in the loss function to facilitate structural preservation. In this deep imaging process, we incorporate deep convolutional neural network (CNN), residual learning, and network in network techniques for feature extraction and restoration. In contrast to the current trend of increasing network depth and complexity to boost the CT imaging performance, which limit its real-world applications by imposing considerable computational and memory overheads, we apply a parallel $1\times1$ CNN to compress the output of the hidden layer and optimize the number of layers and the number of filters for each convolutional layer. Quantitative and qualitative evaluations demonstrate that our proposed model is accurate, efficient and robust for super-resolution (SR) image restoration from noisy LR input images. In particular, we validate our composite SR networks on three large-scale CT datasets, and obtain promising results as compared to the other state-of-the-art methods.
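As a side note on the $1\times1$ CNN compression mentioned in the abstract, the snippet below (a generic illustration, not the authors' network) shows how a pointwise convolution maps each pixel's channel vector through one shared linear layer, shrinking the channel count without touching the spatial resolution:

import numpy as np

def conv1x1(x, w, b=None):
    # x: feature map of shape (H, W, C_in); w: weights of shape (C_out, C_in).
    # Every pixel's channel vector goes through the same linear map.
    y = np.tensordot(x, w, axes=([2], [1]))   # -> (H, W, C_out)
    if b is not None:
        y = y + b
    return y

# Compress 64 feature channels down to 16 at every pixel of a 32x32 map.
rng = np.random.default_rng(0)
features = rng.normal(size=(32, 32, 64))
weights = rng.normal(size=(16, 64)) / np.sqrt(64)
compressed = conv1x1(features, weights)
print(compressed.shape)   # (32, 32, 16)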
CommonCrawl
How much of the sky can the JWST see? To turn and point at different objects in space, Webb uses six reaction wheels to rotate the observatory. The reaction wheels are basically flywheels, which store angular momentum. The effect of angular momentum is familiar in bicycle riding. It is much easier to stay up on the bike when it is moving than when it is standing still, and the bicycle will tend to go straight in 'no hands' mode thanks to the angular momentum of the spinning wheels. Slowing down or speeding up one or more of the Webb's reaction wheels alters the total angular momentum of the whole observatory and consequently the observatory turns to conserve angular momentum. Hubble uses reaction wheels also to turn to point at different objects. The reaction wheels work in combination with three star trackers and six gyroscopes that provide feedback on where the observatory is pointing and how fast it is turning. This enables coarse pointing sufficient to keep the solar array pointed at the Sun and the high-gain antenna pointed at the Earth. To take images and spectra of astronomical targets (i.e., galaxy, star, planet, etc.) finer pointing is needed. Additional information for finer pointing from the Fine Guidance Sensor in Webb's integrated science instrument module (ISIM) is used to move the telescope's fine steering mirror (FSM) to steady the beam of light coming from the telescope and going into the science instruments. Webb's reaction wheels, star trackers, gyroscopes, Fine Guidance Sensor, and fine steering mirror work together in the observatory's attitude control system (ACS) to precisely point and stare at targets so that the science instruments can see them and see them clearly. The system works much the same way your body uses multiple methods of differing precision ("your inner ears and eyes and nervous system and muscles") to catch a baseball in the outfield. To me, this implies the JWST's mirrors are rigidly attached to the base of the spacecraft, so they're in a fixed orientation. The telescope always looks in a plane roughly parallel to the sunshield. Because the sunshield has to be normal to the Sun (more or less), that limits the field of view available at any one time to a circular strip of the sky. If you wanted to look at a particular star, you'd have to wait for up to 6 months for it to become available. That seems too long for many interesting transient phenomena. The key here is the 'more or less' part: how far out of the normal can you tilt the JWST without losing the sunshield's effectiveness? How much of the sky can the JWST observe from any given position in its orbit? At any particular location JWST will always be able to tilt sideways clock or counter-clockwise. It however cannot tilt backwards, due to the constraints of the sunshield facing the sun. As the field-of-view is essentially all-sky on average JWST will be used as follow-up for many surveys, in particular to characterize non-transient phenomena in detail. Transient phenomena like Gamma-ray bursts or transiting Exoplanets will be hard to discover by JWST if they're not luckily in the viewing horizon at any given moment. A last piece of knowledge that is necessary to understand this map, is that the map only represents the possibility of looking at things for the given amount of time. JWST's instruments have fields of view of $2'\times 2'$ and smaller, so the time allocation committee has additionally to actually decide to look at an object when being in the viewing horizon. 
This again emphasizes JWST's character as follow-up rather than discovery-machine: The coolant will run out after approx. 5 years with no means of refuelling at L2, so in any particular moment the viewing horizon observation time is expected to be heavily overbooked with known objects, leaving little time for 'just poking around in the darkness'. The field of regard (FOR) is the region of the sky in which observations can be conducted safely at any time. For JWST, the FOR is a large annulus that is centered on the position of the Sun. The FOR, as is shown in Figure 4-14, allows one to observe targets between 85° and 135° off the Sun line (MR-103, MR-104, MR-105). Most astronomical targets are observable for two periods separated by 6 months during each year. The length of the observing window varies with ecliptic latitude, and targets within 5 ° of the ecliptic poles are visible continuously (MR-106). This continuous viewing zone is important both for some science programs that involve monitoring throughout the year and will also be useful for calibration purposes. The sunshield for JWST will provide a 39.7% celestial field-of- regard (FOR) that is greater than the sky coverage requirement of 35% (MR-104). This large FOR is required to provide the scheduling flexibility to allow JWST to conduct an efficient scientific program and simplifies orbit station-keeping design since it permits a wide range of Sun orientations for thruster firing. Not the answer you're looking for? Browse other questions tagged james-webb-telescope or ask your own question. The JWST - What happens if/when it breaks? Does the JWST have a camera to monitor its deployment progress? How will JWST be serviced? How can the 6.5 m primary mirror of the JWST fit inside the 5.4 m fairing of Ariane 5? Can the James Webb Space Telescope basically manage its own orbit if necessary? Why isn't the JWST mirror bigger? How much would the James Webb Telescope have cost of they made two or ten of them instead for example? Can/Will the James Webb Telescope maintain its position passively?
CommonCrawl
When the equation $a(a^2-1)=2b^2$ is submitted to Alpha, it unexpectedly forgets the integer solution $a=1,b=0$. What could explain this? I have observed that the anomaly disappears when the resolution is explicitly requested over the integers (the qualifier Diophantine also works). So it seems that the flaw would be in the logic for the "Integer solutions" section that comes along with an unspecified domain (presumably $\mathbb C$ by default). As suggested by @chiphurst, this could be because a general solver is used and might fail to find the exact integers. Not the answer you're looking for? Browse other questions tagged wolfram-alpha-queries diophantine-equations or ask your own question. Nonsense Data with Wolfram Alpha + Mathematica? How can I get all the results from a Wolfram | Alpha query? Second derivative implicit differentiation using Wolfram Alpha input? Why does this work with Wolfram|Alpha but not Mathematica? I use WolframAlpha within Mathematica; would there be any advantage to buy Wolfram|Alpha PRO?
CommonCrawl
I am a graduate student in the Harvard Department of Mathematics, working with Mike Hopkins. My interests are in homotopy theory, category theory, and algebraic geometry (and the intersection of these). My recent projects have focused on developing a theory of localization of types in homotopy type theory (analogous to that for spaces, see "Research" below) and understanding certain aspects of weak equivalences in the Joyal model structure on simplicial sets (ongoing). I did my undergraduate at the University of Massachusetts Amherst, where I wrote an honors thesis in algebraic geometry with Jenia Tevelev. Before UMass, I was at Cape Cod Community College, where I was into physics and student journalism. After UMass, I did Part III at Cambridge. Lucy Yang and I co-organize monthly women in math lunches at Harvard, which provide informal opportunities for women in the community to get to know each other. Please e-mail me if you would like to be added to our Google group! I am an organizer for the MIT Talbot Workshop, along with Calista Bernard, Yajit Jain, and Sean Poherence. If you have questions or suggestions about Talbot, please feel free to reach out. Localization in Homotopy Type Theory, with J. Daniel Christensen, Egbert Rijke, and Luis Scoccola. Accepted. Effective divisors on moduli spaces of rational curves with marked points. Michigan Math. J. 65 (2016), no. 2, 251--285. My undergrad thesis; I also created a related database of spherical hypertree divisors generated using Macaulay2. In summer 2018 I taught a tutorial on Category Theory. In spring 2018 I taught Math 21b (Linear Algebra and Differential Equations) at Harvard. A more expository slide talk on Homotopy Type Theory and the main results of Localization in homotopy type theory. Given at the Women in Homotopy Theory and AG conference in Berlin (March 2019). May 27-June 2, 2018: Talbot 2018: Model-independent theory of $\infty$-categories (organizer). May 16-20, 2018: Chromatic Homotopy Theory: Journey to the Frontier (participant). January 2018: Joint Mathematics Meetings, special session on Homotopy Type Theory (invited participant). November 2017: Women in Topology workshop, Berkeley, CA (participant). Summer 2017: Mathematical Research Community (MRC) in Homotopy Type Theory. Snowbird, Utah (invited participant). Spring 2016: Algebraic Geometry New England Series (AGNES), Brown RI (participant). Summer 2015: AMS Summer Institue in Algebraic Geometry. Conference and graduate bootcamp (participant). Spring 2015: University of Warwick Algebraic Geometry Seminar (invited speaker). February 2014: Midwest Algebraic Geometry Graduate Conference, UIUC (invited speaker). January 2014: Joint Mathematics Meetings (poster presenter, undergraduate poster session). October 2013: Algebraic Geometry New England Series (AGNES), Stony Brook (poster presenter). Summer 2013: Young Mathematicians' Conference, OSU (poster presenter). Summer 2013: UCLA Logic Summer School (participant). My CV, last updated fall 2018.
CommonCrawl