## 117b – Undecidability and Incompleteness – Lecture 4

February 1, 2007

We showed that $a=n!$ is Diophantine. This implies that the prime numbers are Diophantine. Therefore, there is a polynomial in several variables with integer coefficients whose range, when intersected with the natural numbers, coincides with the set of primes. Amusingly, no nonconstant polynomial with integer coefficients can take only prime values. We proved the bounded quantifier lemma, the last technical component of the proof of the undecidability of Hilbert’s tenth problem. It implies that the class of relations definable by a $\Sigma^0_1$ formula in the structure $({\mathbb N},+,\times,<,0,1)$ coincides with the class of $\Sigma_1$ relations (i.e., the Diophantine ones). To complete the proof of the undecidability of the tenth problem, we will show that any c.e. relation is Diophantine. For this, recall that a set is c.e. iff it is the domain of the partial function computed by some Turing machine. We proceeded to code the behavior of Turing machines by means of a Diophantine representation. This involves coding configurations of Turing machines, and it remains to show how to code the way one configuration changes into another one.
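A quick sketch of the standard argument behind the remark about polynomials and primes: if $f$ is a nonconstant polynomial with integer coefficients and $f(a)=p$ is prime, then $f(a+kp) \equiv f(a) \equiv 0 \pmod{p}$ for every integer $k$; since $f$ is nonconstant, $|f(a+kp)|$ eventually exceeds $p$, so these values are multiples of $p$ other than $\pm p$, and hence not prime.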
### Madhyamik Question Paper Solution of Year 2016 Question No: 1.(iii) What is the G.C.D. of $4p^2qr^3$ and $6p^3q^2r^4$?
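Worked solution (the factorizations follow directly from the two given monomials): $4p^2qr^3 = 2^2 \cdot p^2 \cdot q \cdot r^3$ and $6p^3q^2r^4 = 2 \cdot 3 \cdot p^3 \cdot q^2 \cdot r^4$. Taking the lowest power of each common factor gives $\text{G.C.D.} = 2p^2qr^3$.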
## 13. Kernels and Operators

The goal of this section is to study a type of mathematical object that arises naturally in the context of conditional expected value and parametric distributions, and is of fundamental importance in the study of stochastic processes, particularly Markov processes. In a sense, the main object of study in this section is a generalization of a matrix, and the operations are generalizations of matrix operations. If you keep this in mind, this section may seem less abstract.

### Basic Theory

#### Definitions and Properties

Recall that a measurable space $$(S, \mathscr{S})$$ consists of a set $$S$$ and a $$\sigma$$-algebra $$\mathscr{S}$$ of subsets of $$S$$. If $$S$$ is countable, then $$\mathscr{S}$$ is usually the power set of $$S$$, the collection of all subsets of $$S$$. If $$S$$ is uncountable, then typically $$S$$ is a topological space (a subset of a Euclidean space, for example), and $$\mathscr{S}$$ is the Borel $$\sigma$$-algebra, the $$\sigma$$-algebra generated by the open subsets of $$S$$. A nice set of assumptions that we will use in this section is that $$S$$ has a topology that is locally compact, Hausdorff, and has a countable base (LCCB), and that $$\mathscr{S}$$ is the Borel $$\sigma$$-algebra. (See the section on topology to review the definitions.) These assumptions are general enough to encompass most measurable spaces that occur in probability and yet are restrictive enough to allow a nice mathematical theory. In particular, a countable set $$S$$ with the discrete topology satisfies the assumptions, and the corresponding Borel $$\sigma$$-algebra $$\mathscr{S}$$ is the power set. In this case, every function from $$S$$ to another measurable space is measurable, and every function from $$S$$ to another topological space is continuous. For $$k \in \N_+$$, the Euclidean space $$\R^k$$ also satisfies the assumptions, and in this case the Borel $$\sigma$$-algebra is the usual collection of measurable sets.

Let $$\mathscr{B}(S)$$ denote the collection of bounded, measurable functions $$f: S \to \R$$. Under the usual operations of pointwise addition and scalar multiplication, $$\mathscr{B}(S)$$ is a vector space, and the natural norm on this space is the supremum norm $\| f \| = \sup\{\left|f(x)\right|: x \in S\}, \quad f \in \mathscr{B}(S)$ This vector space plays an important role.

In this section, it is sometimes more natural to write integrals with respect to a positive measure with the differential before the integrand, rather than after. However, rest assured that this is mere notation; the meaning of the integral is the same. Thus, if $$\mu$$ is a positive measure on $$(S, \mathscr{S})$$ and $$f: S \to \R$$ is a measurable, real-valued function, we may write the integral of $$f$$ with respect to $$\mu$$ in operator notation as $\mu f = \int_S \mu(dx) f(x)$ assuming, as usual, that the integral exists. If $$\mu$$ is a probability measure on $$S$$, then we can think of $$f$$ as a real-valued random variable, in which case our new notation is not too far from our traditional $$\E(f)$$.

Our main definition comes next. Suppose that $$(S, \mathscr{S})$$ and $$(T, \mathscr{T})$$ are measurable spaces. A kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$ is a function $$K: S \times \mathscr{T} \to [0, \infty]$$ such that
1. $$x \mapsto K(x, A)$$ is measurable for each $$A \in \mathscr{T}$$.
2. $$A \mapsto K(x, A)$$ is a positive measure on $$\mathscr{T}$$ for each $$x \in S$$.
If $$(T, \mathscr{T}) = (S, \mathscr{S})$$, then $$K$$ is said to be a kernel on $$(S, \mathscr{S})$$.
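A concrete illustration of the definition may help (this example is not part of the text above, but it matches the normal kernels used in the examples below). Let $$\mathscr{R}$$ denote the Borel $$\sigma$$-algebra of $$\R$$ and define $P(x, A) = \int_A \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2}(y - x)^2} \, dy, \quad x \in \R, \, A \in \mathscr{R}$ For each $$x \in \R$$, $$A \mapsto P(x, A)$$ is the normal probability measure with mean $$x$$ and variance 1, and for each $$A \in \mathscr{R}$$, $$x \mapsto P(x, A)$$ is measurable (in fact continuous, by the dominated convergence theorem). So $$P$$ is a probability kernel on $$(\R, \mathscr{R})$$.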
There are several classes of kernels that deserve special names. Suppose that $$K$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$. Then
1. $$K$$ is finite if $$K(x, T) \lt \infty$$ for every $$x \in S$$.
2. $$K$$ is bounded if $$K(x, T)$$ is bounded in $$x \in S$$, and in this case we define $$\|K\| = \sup\{K(x, T): x \in S\}$$.
3. $$K$$ is a probability kernel if $$K(x, T) = 1$$ for every $$x \in S$$.
So a probability kernel is bounded, and a bounded kernel is finite. The terms stochastic kernel and Markov kernel are also used for probability kernels, and for a probability kernel $$\|K\| = 1$$ of course. The terms are consistent with the terms used for measures: $$K$$ is a finite kernel if and only if $$K(x, \cdot)$$ is a finite measure for each $$x \in S$$, and $$K$$ is a probability kernel if and only if $$K(x, \cdot)$$ is a probability measure for each $$x \in S$$.

A kernel defines two natural integral operators, by operating on the left with measures, and by operating on the right with functions. As usual, we are often a bit casual with the question of existence. Basically in this section, we assume that any integrals mentioned exist.

Suppose that $$K$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$.
1. If $$\mu$$ is a positive measure on $$(S, \mathscr{S})$$, then $$\mu K$$ defined as follows is a positive measure on $$(T, \mathscr{T})$$: $\mu K(A) = \int_S \mu(dx) K(x, A), \quad A \in \mathscr{T}$
2. If $$f: T \to \R$$ is measurable, then $$K f: S \to \R$$ defined as follows is measurable: $K f(x) = \int_T K(x, dy) f(y), \quad x \in S$

Proof:
1. Clearly $$\mu K(A) \ge 0$$ for $$A \in \mathscr{T}$$. Suppose that $$\{A_j: j \in J\}$$ is a countable collection of disjoint sets in $$\mathscr{T}$$ and $$A = \bigcup_{j \in J} A_j$$. Then \begin{align*} \mu K(A) & = \int_S \mu(dx) K(x, A) = \int_S \mu(dx) \left(\sum_{j \in J} K(x, A_j) \right) \\ & = \sum_{j \in J} \int_S \mu(dx) K(x, A_j) = \sum_{j \in J} \mu K(A_j) \end{align*} The interchange of sum and integral is justified since the terms are nonnegative.
2. The measurability of $$K f$$ follows from the measurability of $$f$$ and of $$x \mapsto K(x, A)$$ for $$A \in \mathscr{T}$$, and from basic properties of the integral.

Thus, a kernel transforms measures on $$(S, \mathscr{S})$$ into measures on $$(T, \mathscr{T})$$, and transforms measurable functions from $$T$$ to $$\R$$ into measurable functions from $$S$$ to $$\R$$. Part (b) assumes of course that $$(K f)(x)$$ exists for $$x \in S$$. This will be the case if $$f$$ is nonnegative (although the integral may be infinite) or if $$f$$ is integrable with respect to the measure $$K(x, \cdot)$$ for every $$x \in S$$. In particular, the last statement will hold in the following important special case:

Suppose again that $$K$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$ and that $$f \in \mathscr{B}(T)$$.
1. If $$K$$ is finite, then $$K f$$ exists and $$\left|Kf(x)\right| \le \|f\| K(x, T)$$.
2. If $$K$$ is bounded, then $$K f \in \mathscr{B}(S)$$ and $$\|K f\| \le \|K\| \|f\|$$.

Proof:
1. Note that $$K \left|f\right|(x) = \int_T K(x, dy) \left|f(y)\right| \le \int_T K(x, dy) \|f\| = \|f\| K(x, T) \lt \infty$$ for $$x \in S$$. Hence $$f$$ is integrable with respect to $$K(x, \cdot)$$ for each $$x \in S$$ and the inequality holds.
2. From (a), we now have $$\left|K f(x)\right| \le \|K\| \|f\|$$ for $$x \in S$$, so $$\|K f\| \le \|K\| \|f\|$$. Moreover equality holds when $$f = \bs{1}_T$$, the constant function 1 on $$T$$.
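As a quick numerical sanity check of the two operators (a small made-up example, not from the text): let $$S = T = \{1, 2\}$$ with the power set $$\sigma$$-algebras, and let $$K$$ be the kernel with $$K(1, \{1\}) = 1$$, $$K(1, \{2\}) = 2$$, $$K(2, \{1\}) = 0$$, $$K(2, \{2\}) = 1$$. If $$\mu\{1\} = 1$$, $$\mu\{2\} = 3$$, $$f(1) = 2$$, and $$f(2) = -1$$, then \begin{align*} \mu K\{1\} & = 1 \cdot 1 + 3 \cdot 0 = 1, \quad \mu K\{2\} = 1 \cdot 2 + 3 \cdot 1 = 5 \\ K f(1) & = 1 \cdot 2 + 2 \cdot (-1) = 0, \quad K f(2) = 0 \cdot 2 + 1 \cdot (-1) = -1 \end{align*} Here $$\|K\| = K(1, T) = 3$$ and $$\|f\| = 2$$, so $$\|K f\| = 1 \le \|K\| \|f\| = 6$$, consistent with the result above.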
The identity kernel $$I$$ on the measurable space $$(S, \mathscr{S})$$ is defined by $$I(x, A) = \bs{1}(x \in A)$$ for $$x \in S$$ and $$A \in \mathscr{S}$$. Thus, $$I(x, A) = 1$$ if $$x \in A$$ and $$I(x, A) = 0$$ if $$x \notin A$$. So $$x \mapsto I(x, A)$$ is the indicator function of $$A \in \mathscr{S}$$, while $$A \mapsto I(x, A)$$ is point mass at $$x \in S$$. Clearly the identity kernel is a probability kernel. If we need to indicate the dependence on the particular space, we will add a subscript and write $$I_S$$. The following result justifies the name.

Let $$I$$ denote the identity kernel on $$(S, \mathscr{S})$$.
1. If $$\mu$$ is a positive measure on $$(S, \mathscr{S})$$ then $$\mu I = \mu$$.
2. If $$f: S \to \R$$ is measurable, then $$I f = f$$.

We can create a new kernel from two given kernels, by the usual operations of addition and scalar multiplication.

Suppose that $$K$$ and $$L$$ are kernels from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$, and that $$c \in [0, \infty)$$. Then $$c K$$ and $$K + L$$ defined below are also kernels from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$.
1. $$(c K)(x, A) = c K(x, A)$$ for $$x \in S$$ and $$A \in \mathscr{T}$$.
2. $$(K + L)(x, A) = K(x, A) + L(x, A)$$ for $$x \in S$$ and $$A \in \mathscr{T}$$.

Proof: These results are simple.
1. Since $$x \mapsto K(x, A)$$ is measurable for $$A \in \mathscr{T}$$, so is $$x \mapsto c K(x, A)$$. Since $$A \mapsto K(x, A)$$ is a positive measure on $$(T, \mathscr{T})$$ for $$x \in S$$, so is $$A \mapsto c K(x, A)$$ since $$c \ge 0$$.
2. Since $$x \mapsto K(x, A)$$ and $$x \mapsto L(x, A)$$ are measurable for $$A \in \mathscr{T}$$, so is $$x \mapsto K(x, A) + L(x, A)$$. Since $$A \mapsto K(x, A)$$ and $$A \mapsto L(x, A)$$ are positive measures on $$(T, \mathscr{T})$$ for $$x \in S$$, so is $$A \mapsto K(x, A) + L(x, A)$$.

A more interesting and important way to form a new kernel from two given kernels is via a multiplication operation.

Suppose that $$K$$ is a kernel from $$(R, \mathscr{R})$$ to $$(S, \mathscr{S})$$ and that $$L$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$. Then $$K L$$ defined as follows is a kernel from $$(R, \mathscr{R})$$ to $$(T, \mathscr{T})$$: $K L(x, A) = \int_S K(x, dy) L(y, A), \quad x \in R, \, A \in \mathscr{T}$

Proof: The measurability of $$x \mapsto (K L)(x, A)$$ for $$A \in \mathscr{T}$$ follows from basic properties of the integral. For the second property, fix $$x \in R$$. Clearly $$K L(x, A) \ge 0$$ for $$A \in \mathscr{T}$$. Suppose that $$\{A_j: j \in J\}$$ is a countable collection of disjoint sets in $$\mathscr{T}$$ and $$A = \bigcup_{j \in J} A_j$$. Then \begin{align*} K L(x, A) & = \int_S K(x, dy) L(y, A) = \int_S K(x, dy) \left(\sum_{j \in J} L(y, A_j)\right) \\ & = \sum_{j \in J} \int_S K(x, dy) L(y, A_j) = \sum_{j \in J} K L(x, A_j) \end{align*} The interchange of sum and integral is justified since the terms are nonnegative.

Once again, the identity kernel lives up to its name:

Suppose that $$K$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$. Then
1. $$I_S K = K$$
2. $$K I_T = K$$

The next several results show that the operations are associative whenever they make sense.

Suppose that $$K$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$ and that $$\mu$$ is a positive measure on $$\mathscr{S}$$, $$c \in [0, \infty)$$, and $$f: T \to \R$$ is measurable. Then
1. $$c (\mu K) = (c \mu) K$$
2. $$c (K f) = (c K) f$$
3. $$(\mu K) f = \mu (K f)$$

Proof: These results follow easily from the definitions.
1. The common measure on $$\mathscr{T}$$ is $$c \mu K(A) = c \int_S \mu(dx) K(x, A)$$ for $$A \in \mathscr{T}$$.
2. The common function from $$S$$ to $$\R$$ is $$c K f(x) = c \int_T K(x, dy) f(y)$$ for $$x \in S$$.
3. The common real number is $$\mu K f = \int_S \mu(dx) \int_T K(x, dy) f(y)$$.

Suppose that $$K$$ is a kernel from $$(R, \mathscr{R})$$ to $$(S, \mathscr{S})$$ and $$L$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$. Suppose also that $$\mu$$ is a positive measure on $$(R, \mathscr{R})$$, $$f: T \to \R$$ is measurable, and $$c \in [0, \infty)$$. Then
1. $$(\mu K) L = \mu (K L)$$
2. $$K ( L f) = (K L) f$$
3. $$c (K L) = (c K) L$$

Proof: These results follow easily from the definitions.
1. The common measure on $$(T, \mathscr{T})$$ is $$\mu K L(A) = \int_R \mu(dx) \int_S K(x, dy) L(y, A)$$ for $$A \in \mathscr{T}$$.
2. The common measurable function from $$R$$ to $$\R$$ is $$K L f(x) = \int_S K(x, dy) \int_T L(y, dz) f(z)$$ for $$x \in R$$.
3. The common kernel from $$(R, \mathscr{R})$$ to $$(T, \mathscr{T})$$ is $$c K L(x, A) = c \int_S K(x, dy) L(y, A)$$ for $$x \in R$$ and $$A \in \mathscr{T}$$.

Suppose that $$K$$ is a kernel from $$(R, \mathscr{R})$$ to $$(S, \mathscr{S})$$, $$L$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$, and $$M$$ is a kernel from $$(T, \mathscr{T})$$ to $$(U, \mathscr{U})$$. Then $$(K L) M = K (L M)$$.

Proof: This result follows easily from the definitions. The common kernel from $$(R, \mathscr{R})$$ to $$(U, \mathscr{U})$$ is $K L M(x, A) = \int_S K(x, dy) \int_T L(y, dz) M(z, A), \quad x \in R, \, A \in \mathscr{U}$

The next several results show that the distributive property holds whenever the operations make sense.

Suppose that $$K$$ and $$L$$ are kernels from $$(R, \mathscr{R})$$ to $$(S, \mathscr{S})$$ and that $$M$$ and $$N$$ are kernels from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$. Suppose also that $$\mu$$ is a positive measure on $$(R, \mathscr{R})$$ and that $$f: S \to \R$$ is measurable. Then
1. $$(K + L) M = K M + L M$$
2. $$K (M + N) = K M + K N$$
3. $$\mu (K + L) = \mu K + \mu L$$
4. $$(K + L) f = K f + L f$$

Suppose that $$K$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$, and that $$\mu$$ and $$\nu$$ are positive measures on $$(S, \mathscr{S})$$, and that $$f$$ and $$g$$ are measurable functions from $$T$$ to $$\R$$. Then
1. $$(\mu + \nu) K = \mu K + \nu K$$
2. $$K(f + g) = K f + K g$$
3. $$\mu(f + g) = \mu f + \mu g$$
4. $$(\mu + \nu) f = \mu f + \nu f$$

In particular, note that if $$K$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$, then the transformation $$\mu \mapsto \mu K$$ defined for positive measures on $$(S, \mathscr{S})$$, and the transformation $$f \mapsto K f$$ defined for measurable functions $$f: T \to \R$$ (for which $$K f$$ exists), are both linear operators. If $$\mu$$ is a positive measure on $$(S, \mathscr{S})$$, then the integral operator $$f \mapsto \mu f$$ defined for measurable $$f: S \to \R$$ (for which $$\mu f$$ exists) is also linear, but of course, we already knew that. Finally, note that the operator $$f \mapsto K f$$ is positive: if $$f \ge 0$$ then $$K f \ge 0$$. Here is the important summary of our results when the kernel is bounded.

If $$K$$ is a bounded kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$, then $$f \mapsto K f$$ is a bounded, linear transformation from $$\mathscr{B}(T)$$ to $$\mathscr{B}(S)$$ and $$\|K\|$$ is the norm of the transformation.

The commutative property for the product of kernels does not hold in general.
If $$K$$ and $$L$$ are kernels, then depending on the measurable spaces, $$K L$$ may be well defined, but not $$L K$$. Even if both products are defined, they may be kernels between different pairs of measurable spaces. Even if both are defined between the same measurable spaces, it may well happen that $$K L \neq L K$$. Examples are given in the computational exercises below.

If $$K$$ is a kernel on $$(S, \mathscr{S})$$ and $$n \in \N$$, we let $$K^n = K K \cdots K$$, the $$n$$-fold power of $$K$$. By convention, $$K^0 = I$$, the identity kernel on $$S$$. Fixed points of the operators associated with a kernel turn out to be very important.

Suppose that $$K$$ is a kernel on $$(S, \mathscr{S})$$.
1. A positive measure $$\mu$$ on $$(S, \mathscr{S})$$ such that $$\mu K = \mu$$ is said to be invariant for $$K$$.
2. A measurable function $$f: S \to \R$$ such that $$K f = f$$ is said to be invariant for $$K$$.

So in the language of linear algebra (or functional analysis), an invariant measure is a left eigenvector of the kernel, while an invariant function is a right eigenvector of the kernel, both corresponding to the eigenvalue 1. By our results above, if $$\mu$$ and $$\nu$$ are invariant measures and $$c \in [0, \infty)$$, then $$\mu + \nu$$ and $$c \mu$$ are also invariant. Similarly, if $$f$$ and $$g$$ are invariant functions and $$c \in \R$$, then $$f + g$$ and $$c f$$ are also invariant. Of course we are particularly interested in probability kernels.

Suppose that $$P$$ is a probability kernel from $$(R, \mathscr{R})$$ to $$(S, \mathscr{S})$$ and that $$Q$$ is a probability kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$. Suppose also that $$\mu$$ is a probability measure on $$(R, \mathscr{R})$$. Then
1. $$P Q$$ is a probability kernel from $$(R, \mathscr{R})$$ to $$(T, \mathscr{T})$$.
2. $$\mu P$$ is a probability measure on $$(S, \mathscr{S})$$.

Proof:
1. We know that $$P Q$$ is a kernel from $$(R, \mathscr{R})$$ to $$(T, \mathscr{T})$$. So we just need to note that $P Q(x, T) = \int_S P(x, dy) Q(y, T) = \int_S P(x, dy) = P(x, S) = 1, \quad x \in R$
2. We know that $$\mu P$$ is a positive measure on $$(S, \mathscr{S})$$. So we just need to note that $\mu P(S) = \int_R \mu(dx) P(x, S) = \int_R \mu(dx) = \mu(R) = 1$

As a corollary, it follows that if $$P$$ is a probability kernel on $$(S, \mathscr{S})$$, then so is $$P^n$$ for $$n \in \N$$. The operators associated with a kernel are of fundamental importance, and we can easily recover the kernel from the operators. Suppose that $$K$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$, and let $$x \in S$$ and $$A \in \mathscr{T}$$. Then trivially, $$K \bs{1}_A(x) = K(x, A)$$ where as usual, $$\bs{1}_A$$ is the indicator function of $$A$$. Trivially also $$\delta_x K(A) = K(x, A)$$ where $$\delta_x$$ is point mass at $$x$$.

#### Kernel Functions

Usually our measurable spaces are in fact measure spaces, with natural measures associated with the spaces (counting measure for a countable set and Lebesgue measure for a subset of a Euclidean space, for example). In such cases, kernels are usually constructed from density functions in much the same way that positive measures are defined from density functions. In the discussion that follows, we assume as usual that integrals that are written exist. Suppose that $$(S, \mathscr{S}, \lambda)$$ and $$(T, \mathscr{T}, \mu)$$ are measure spaces. As usual, $$S \times T$$ is given the product $$\sigma$$-algebra $$\mathscr{S} \otimes \mathscr{T}$$.
If $$k: S \times T \to [0, \infty)$$ is measurable, then the function $$K$$ defined as follows is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$: $K(x, A) = \int_A k(x, y) \mu(dy), \quad x \in S, \, A \in \mathscr{T}$

Proof: The measurability of $$x \mapsto K(x, A) = \int_A k(x, y) \mu(dy)$$ for $$A \in \mathscr{T}$$ follows from a basic property of the integral. The fact that $$A \mapsto K(x, A) = \int_A k(x, y) \mu(dy)$$ is a positive measure on $$\mathscr{T}$$ for $$x \in S$$ also follows from a basic property of the integral. In fact, $$y \mapsto k(x, y)$$ is the density of this measure with respect to $$\mu$$.

Clearly the kernel $$K$$ depends on the positive measure $$\mu$$ on $$(T, \mathscr{T})$$ as well as the function $$k$$, while the measure $$\lambda$$ on $$(S, \mathscr{S})$$ plays no role (and so is not even necessary). But again, our point of view is that the spaces have fixed, natural measures. Appropriately enough, the function $$k$$ is called a kernel density function (with respect to $$\mu$$), or simply a kernel function.

Suppose again that $$(S, \mathscr{S}, \lambda)$$ and $$(T, \mathscr{T}, \mu)$$ are measure spaces. Suppose also that $$K$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$ with kernel function $$k$$. If $$f: T \to \R$$ is measurable, then $K f(x) = \int_T k(x, y) f(y) \mu(dy), \quad x \in S$

Proof: This follows since the function $$y \mapsto k(x, y)$$ is the density of the measure $$A \mapsto K(x, A)$$ with respect to $$\mu$$: $K f(x) = \int_T K(x, dy) f(y) = \int_T k(x, y) f(y) \mu(dy), \quad x \in S$

A kernel function defines an operator on the left with functions on $$S$$ in a completely analogous way to the operator on the right above with functions on $$T$$.

Suppose again that $$(S, \mathscr{S}, \lambda)$$ and $$(T, \mathscr{T}, \mu)$$ are measure spaces and that $$k: S \times T \to [0, \infty)$$ is measurable. If $$f: S \to \R$$ is measurable, then the function $$f K: T \to \R$$ defined as follows is also measurable: $f K(y) = \int_S \lambda(dx) f(x) k(x, y), \quad y \in T$

The operator defined above depends on the measure $$\lambda$$ on $$(S, \mathscr{S})$$ as well as the kernel function $$k$$, while the measure $$\mu$$ on $$(T, \mathscr{T})$$ plays no role (and so is not even necessary). But again, our point of view is that the spaces have fixed, natural measures. Here is how our new operation on the left with functions relates to our old operation on the left with measures.

Suppose again that $$(S, \mathscr{S}, \lambda)$$ and $$(T, \mathscr{T}, \mu)$$ are measure spaces and that $$k: S \times T \to [0, \infty)$$ is measurable. Suppose also that $$f: S \to [0, \infty)$$ is measurable, and let $$\rho$$ denote the measure on $$(S, \mathscr{S})$$ that has density $$f$$ with respect to $$\lambda$$. Then $$f K$$ is the density of the measure $$\rho K$$ with respect to $$\mu$$.

Proof: The main tool, as usual, is an interchange of integrals. For $$B \in \mathscr{T}$$, \begin{align*} \rho K(B) & = \int_S \rho(dx) K(x, B) = \int_S f(x) K(x, B) \lambda(dx) = \int_S f(x) \left[\int_B k(x, y) \mu(dy)\right] \lambda(dx) \\ & = \int_B \left[\int_S f(x) k(x, y) \lambda(dx)\right] \mu(dy) = \int_B f K(y) \mu(dy) \end{align*}

As always, we are particularly interested in stochastic kernels. With a kernel function, we can have doubly stochastic kernels.

Suppose again that $$(S, \mathscr{S}, \lambda)$$ and $$(T, \mathscr{T}, \mu)$$ are measure spaces and that $$k: S \times T \to [0, \infty)$$ is measurable.
Then $$k$$ is a doubly stochastic kernel function if
1. $$\int_T k(x, y) \mu(dy) = 1$$ for $$x \in S$$
2. $$\int_S \lambda(dx) k(x, y) = 1$$ for $$y \in T$$
Of course, condition (a) simply means that the kernel associated with $$k$$ is a stochastic kernel according to our original definition.

The most common and important special case is when the two spaces are the same. Thus, if $$(S, \mathscr{S}, \lambda)$$ is a measure space and $$k : S \times S \to [0, \infty)$$ is measurable, then we have an operator $$K$$ that operates on the left and on the right with measurable functions $$f: S \to \R$$: \begin{align*} f K(y) & = \int_S \lambda(dx) f(x) k(x, y), \quad y \in S \\ K f(x) & = \int_S k(x, y) f(y) \lambda(d y), \quad x \in S \end{align*} If $$f$$ is nonnegative and $$\mu$$ is the measure on $$(S, \mathscr{S})$$ with density function $$f$$, then $$f K$$ is the density function of the measure $$\mu K$$ (both with respect to $$\lambda$$).

Suppose again that $$(S, \mathscr{S}, \lambda)$$ is a measure space and $$k : S \times S \to [0, \infty)$$ is measurable. Then $$k$$ is symmetric if $$k(x, y) = k(y, x)$$ for all $$(x, y) \in S^2$$. Of course, if $$k$$ is a symmetric, stochastic kernel function on $$(S, \mathscr{S}, \lambda)$$ then $$k$$ is doubly stochastic, but the converse is not true.

Suppose that $$(R, \mathscr{R}, \lambda)$$, $$(S, \mathscr{S}, \mu)$$, and $$(T, \mathscr{T}, \rho)$$ are measure spaces. Suppose also that $$K$$ is a kernel from $$(R, \mathscr{R})$$ to $$(S, \mathscr{S})$$ with kernel function $$k$$, and that $$L$$ is a kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$ with kernel function $$l$$. Then the kernel $$K L$$ from $$(R, \mathscr{R})$$ to $$(T, \mathscr{T})$$ has density $$k l$$ given by $k l(x, z) = \int_S k(x, y) l(y, z) \mu(dy), \quad (x, z) \in R \times T$

Proof: Once again, the main tool is an interchange of integrals via Fubini's theorem. Let $$x \in R$$ and $$B \in \mathscr{T}$$. Then \begin{align*} K L(x, B) & = \int_S K(x, dy) L(y, B) = \int_S k(x, y) L(y, B) \mu(dy) \\ & = \int_S k(x, y) \left[\int_B l(y, z) \rho(dz) \right] \mu(dy) = \int_B \left[\int_S k(x, y) l(y, z) \mu(dy) \right] \rho(dz) = \int_B k l(x, z) \rho(dz) \end{align*}

### Examples and Special Cases

#### The Discrete Case

In this subsection, a countable set is given the power set as the $$\sigma$$-algebra (as usual). Thus all subsets of the set are measurable, and any function defined on the set is measurable. We also use counting measure $$\#$$ as the natural measure on the set, so integrals become sums. Suppose now that $$K$$ is a kernel from a countable set $$S$$ to a countable set $$T$$. For $$x \in S$$ and $$y \in T$$, let $$K(x, y) = K(x, \{y\})$$. Then more generally, $K(x, A) = \sum_{y \in A} K(x, y), \quad x \in S, \, A \subseteq T$ The function $$(x, y) \mapsto K(x, y)$$ is simply the kernel function of the kernel $$K$$, as defined above, but in this case we usually don't bother with using a different symbol for the function as opposed to the kernel. The function $$K$$ can be thought of as a matrix, with rows indexed by $$S$$ and columns indexed by $$T$$ (and so an infinite matrix if $$S$$ or $$T$$ is countably infinite). With this interpretation, all of the operations defined above can be thought of as matrix operations.
If $$f: T \to \R$$ and $$f$$ is thought of as a column vector indexed by $$T$$, then $$K f$$ defined by $K f(x) = \sum_{y \in T} K(x, y) f(y), \quad x \in S$ is simply the ordinary product of the matrix $$K$$ and the vector $$f$$; the product is a column vector indexed by $$S$$. Similarly, if $$f: S \to \R$$ and $$f$$ is thought of as a row vector indexed by $$S$$, then $$f K$$ defined by $f K(y) = \sum_{x \in S} f(x) K(x, y), \quad y \in T$ is simply the ordinary product of the vector $$f$$ and the matrix $$K$$; the product is a row vector indexed by $$T$$. If $$L$$ is another kernel from $$T$$ to another countable set $$U$$, then as functions, $$K L$$ defined by $K L(x, z) = \sum_{y \in T} K(x, y) L(y, z), \quad (x, z) \in S \times U$ is simply the matrix product of $$K$$ and $$L$$.

Let $$S = \{1, 2, 3\}$$ and $$T = \{1, 2, 3, 4\}$$. Define the kernel $$K$$ from $$S$$ to $$T$$ by $$K(x, y) = x + y$$ for $$(x, y) \in S \times T$$. Define the function $$f$$ on $$S$$ by $$f(x) = x!$$ for $$x \in S$$, and define the function $$g$$ on $$T$$ by $$g(y) = y^2$$ for $$y \in T$$. Compute each of the following using matrix algebra:
1. $$f K$$
2. $$K g$$
In matrix form, $K = \left[\begin{matrix} 2 & 3 & 4 & 5 \\ 3 & 4 & 5 & 6 \\ 4 & 5 & 6 & 7 \end{matrix} \right], \quad f = \left[\begin{matrix} 1 & 2 & 6 \end{matrix} \right], \quad g = \left[\begin{matrix} 1 \\ 4 \\ 9 \\ 16 \end{matrix} \right]$
1. As a row vector indexed by $$T$$, the product is $$f K = \left[\begin{matrix} 32 & 41 & 50 & 59\end{matrix}\right]$$
2. As a column vector indexed by $$S$$, $K g = \left[\begin{matrix} 130 \\ 160 \\ 190 \end{matrix}\right]$

Let $$R = \{0, 1\}$$, $$S = \{a, b\}$$, and $$T = \{1, 2, 3\}$$. Define the kernel $$K$$ from $$R$$ to $$S$$, the kernel $$L$$ from $$S$$ to $$S$$ and the kernel $$M$$ from $$S$$ to $$T$$ in matrix form as follows: $K = \left[\begin{matrix} 1 & 4 \\ 2 & 3\end{matrix}\right], \; L = \left[\begin{matrix} 2 & 2 \\ 1 & 5 \end{matrix}\right], \; M = \left[\begin{matrix} 1 & 0 & 2 \\ 0 & 3 & 1 \end{matrix} \right]$ Compute each of the following kernels, or explain why the operation does not make sense:
1. $$K L$$
2. $$L K$$
3. $$K^2$$
4. $$L^2$$
5. $$K M$$
6. $$L M$$
Answers: Note that these are not just abstract matrices, but rather have rows and columns indexed by the appropriate spaces. So the products make sense only when the spaces match appropriately; it's not just a matter of the number of rows and columns.
1. $$K L$$ is the kernel from $$R$$ to $$S$$ given by $K L = \left[\begin{matrix} 6 & 22 \\ 7 & 19 \end{matrix} \right]$
2. $$L K$$ is not defined since the column space $$S$$ of $$L$$ is not the same as the row space $$R$$ of $$K$$.
3. $$K^2$$ is not defined since the row space $$R$$ of $$K$$ is not the same as its column space $$S$$.
4. $$L^2$$ is the kernel from $$S$$ to $$S$$ given by $L^2 = \left[\begin{matrix} 6 & 14 \\ 7 & 27 \end{matrix}\right]$
5. $$K M$$ is the kernel from $$R$$ to $$T$$ given by $K M = \left[\begin{matrix} 1 & 12 & 6 \\ 2 & 9 & 7 \end{matrix} \right]$
6. $$L M$$ is the kernel from $$S$$ to $$T$$ given by $L M = \left[\begin{matrix} 2 & 6 & 6 \\ 1 & 15 & 7 \end{matrix}\right]$

#### Conditional Probability

An important class of probability kernels arises from the distribution of one random variable, conditioned on the value of another random variable. In this subsection, suppose that $$(\Omega, \mathscr{F}, \P)$$ is a probability space, and that $$(S, \mathscr{S})$$ and $$(T, \mathscr{T})$$ are measurable spaces.
Further, suppose that $$X$$ and $$Y$$ are random variables defined on the probability space, with $$X$$ taking values in $$S$$ and $$Y$$ taking values in $$T$$. Informally, $$X$$ and $$Y$$ are random variables defined on the same underlying random experiment.

The function $$P$$ defined as follows is a probability kernel from $$(S, \mathscr{S})$$ to $$(T, \mathscr{T})$$: $P(x, A) = \P(Y \in A \mid X = x), \quad x \in S, \, A \in \mathscr{T}$

Proof: Recall that for $$A \in \mathscr{T}$$, the conditional probability $$\P(Y \in A \mid X)$$ is itself a random variable, and is measurable with respect to $$\sigma(X)$$. That is, $$\P(Y \in A \mid X) = P(X, A)$$ for some measurable function $$x \mapsto P(x, A)$$ from $$S$$ to $$[0, 1]$$. Then, by definition, $$\P(Y \in A \mid X = x) = P(x, A)$$. Trivially, of course, $$A \mapsto P(x, A)$$ is a probability measure on $$(T, \mathscr{T})$$ for $$x \in S$$.

The operators associated with this kernel have natural interpretations.

Let $$P$$ be the conditional probability kernel of $$Y$$ given $$X$$ as defined in the last result.
1. If $$f: T \to \R$$ is measurable, then $$Pf(x) = \E[f(Y) \mid X = x]$$ for $$x \in S$$ (assuming as usual that the expected value exists).
2. If $$\mu$$ is the probability distribution of $$X$$ then $$\mu P$$ is the probability distribution of $$Y$$.

Proof: These are basic results that we have already studied, dressed up in new notation.
1. Since $$A \mapsto P(x, A)$$ is the conditional distribution of $$Y$$ given $$X = x$$, $\E[f(Y) \mid X = x] = \int_T P(x, dy) f(y) = P f(x)$
2. Let $$A \in \mathscr{T}$$. Conditioning on $$X$$ gives $\P(Y \in A) = \E[\P(Y \in A \mid X)] = \int_S \mu(dx) \P(Y \in A \mid X = x) = \int_S \mu(dx) P(x, A) = \mu P(A)$

As in the general discussion above, the measurable spaces $$(S, \mathscr{S})$$ and $$(T, \mathscr{T})$$ are usually measure spaces with natural measures attached. So the conditional probability distributions are often given via conditional probability density functions, which then play the role of kernel functions.

Suppose that $$X$$ and $$Y$$ are random variables for an experiment, taking values in $$\R$$. For $$x \in \R$$, the conditional distribution of $$Y$$ given $$X = x$$ is normal with mean $$x$$ and standard deviation 1. Use the notation and operations of this section for the following problems:
1. Give the probability density function for the conditional distribution of $$Y$$ given $$X = x$$.
2. Find $$\E\left(Y^2 \bigm| X = x\right)$$.
3. Suppose that $$X$$ has the standard normal distribution. Find the probability density function of $$Y$$.
Answers:
1. The kernel function (with respect to Lebesgue measure, of course) is $p(x, y) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} (y - x)^2}, \quad x, \, y \in \R$
2. Let $$g(y) = y^2$$ for $$y \in \R$$. Then $$\E\left(Y^2 \bigm| X = x\right) = P g(x) = 1 + x^2$$ for $$x \in \R$$.
3. The standard normal PDF $$f$$ is given by $$f(x) = \frac{1}{\sqrt{2 \pi}} e^{-x^2/2}$$ for $$x \in \R$$. Thus $$Y$$ has PDF $$f P$$: $f P(y) = \int_{-\infty}^\infty f(x) p(x, y) dx = \frac{1}{2 \sqrt{\pi}} e^{-\frac{1}{4} y^2}, \quad y \in \R$ This is the PDF of the normal distribution with mean 0 and variance 2.

Suppose that $$X$$ and $$Y$$ are random variables for an experiment, with $$X$$ taking values in $$\{a, b, c\}$$ and $$Y$$ taking values in $$\{1, 2, 3, 4\}$$. The conditional density function of $$Y$$ given $$X$$ is as follows: $$P(a, y) = 1/4$$, $$P(b, y) = y / 10$$, and $$P(c, y) = y^2/30$$, each for $$y \in \{1, 2, 3, 4\}$$.
1. Give the kernel $$P$$ in matrix form and verify that it is a probability kernel.
2. Find $$f P$$ where $$f(a) = f(b) = f(c) = 1/3$$. The result is the density function of $$Y$$ given that $$X$$ is uniformly distributed.
3. Find $$P g$$ where $$g(y) = y$$ for $$y \in \{1, 2, 3, 4\}$$. The resulting function is $$\E(Y \mid X = x)$$ for $$x \in \{a, b, c\}$$.
Answers:
1. In matrix form, $P = \left[\begin{matrix} \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} \\ \frac{1}{10} & \frac{2}{10} & \frac{3}{10} & \frac{4}{10} \\ \frac{1}{30} & \frac{4}{30} & \frac{9}{30} & \frac{16}{30} \end{matrix} \right]$ Note that the row sums are 1.
2. In matrix form, $$f = \left[\begin{matrix} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \end{matrix} \right]$$ and $$f P = \left[\begin{matrix} \frac{23}{180} & \frac{35}{180} & \frac{51}{180} & \frac{71}{180} \end{matrix} \right]$$.
3. In matrix form, $g = \left[\begin{matrix} 1 \\ 2 \\ 3 \\ 4 \end{matrix} \right], \quad P g = \left[\begin{matrix} \frac{5}{2} \\ 3 \\ \frac{10}{3} \end{matrix} \right]$

#### Parametric Distributions

A parametric probability distribution also defines a probability kernel in a natural way, with the parameter playing the role of the kernel variable, and the distribution playing the role of the measure. Such distributions are usually defined in terms of a parametric density function which then defines a kernel function, again with the parameter playing the role of the first argument and the variable the role of the second argument. If the parameter is thought of as a given value of another random variable, as in Bayesian analysis, then there is considerable overlap with the previous subsection. In most cases, the spaces involved are either subsets of Euclidean spaces which naturally have Lebesgue measure, or countable spaces which naturally have counting measure.

Consider the parametric family of exponential distributions. Let $$f$$ denote the identity function on $$(0, \infty)$$.
1. Give the probability density function as a probability kernel function $$p$$ on $$(0, \infty)$$.
2. Find $$P f$$.
3. Find $$f P$$.
4. Find $$p^2$$, the kernel function corresponding to the product kernel $$P^2$$.
Answers:
1. $$p(r, x) = r e^{-r x}$$ for $$r, \, x \in (0, \infty)$$.
2. For $$r \in (0, \infty)$$, $P f(r) = \int_0^\infty p(r, x) f(x) \, dx = \int_0^\infty x r e^{-r x} dx = \frac{1}{r}$ This is the mean of the exponential distribution.
3. For $$x \in (0, \infty)$$, $f P(x) = \int_0^\infty f(r) p(r, x) \, dr = \int_0^\infty r^2 e^{-r x} dr = \frac{2}{x^3}$
4. For $$r, \, y \in (0, \infty)$$, $p^2(r, y) = \int_0^\infty p(r, x) p(x, y) \, dx = \int_0^\infty r x e^{-(r + y) x} dx = \frac{r}{(r + y)^2}$

Consider the parametric family of Poisson distributions. Let $$f$$ be the identity function on $$\N$$ and let $$g$$ be the identity function on $$(0, \infty)$$.
1. Give the probability density function $$p$$ as a probability kernel function from $$(0, \infty)$$ to $$\N$$.
2. Show that $$P f = g$$.
3. Show that $$g P(n) = n + 1$$ for $$n \in \N$$.
Answers:
1. $$p(r, n) = e^{-r} \frac{r^n}{n!}$$ for $$r \in (0, \infty)$$ and $$n \in \N$$.
2. For $$r \in (0, \infty)$$, $P f(r) = \sum_{n=0}^\infty p(r, n) f(n) = \sum_{n=0}^\infty n e^{-r} \frac{r^n}{n!} = r$ This is the mean of the Poisson distribution.
3. For $$n \in \N$$, $g P(n) = \int_0^\infty g(r) p(r, n) \, dr = \int_0^\infty e^{-r} \frac{r^{n+1}}{n!} dr = \frac{(n+1)!}{n!} = n + 1$
Clearly the Poisson distribution has some very special and elegant properties. The next family of distributions also has some very special properties.
Compare this exercise with the corresponding one above. Consider the family of normal distributions, parameterized by the mean and with variance 1.
1. Give the probability density function as a probability kernel function $$p$$ on $$\R$$.
2. Show that $$p$$ is symmetric.
3. Let $$f$$ be the identity function on $$\R$$. Show that $$P f = f$$ and $$f P = f$$.
4. For $$n \in \N_+$$, find $$p^n$$, the kernel function for the operator $$P^n$$.
Answers:
1. For $$\mu, \, x \in \R$$, $p(\mu, x) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2}(x - \mu)^2}$ That is, $$x \mapsto p(\mu, x)$$ is the normal probability density function with mean $$\mu$$ and variance 1.
2. Note that $$p(\mu, x) = p(x, \mu)$$ for $$\mu, \, x \in \R$$. So $$\mu \mapsto p(\mu, x)$$ is the normal probability density function with mean $$x$$ and variance 1.
3. Since $$f(x) = x$$ for $$x \in \R$$, this follows from the previous two parts: $$P f(\mu) = \mu$$ for $$\mu \in \R$$ and $$f P(x) = x$$ for $$x \in \R$$.
4. For $$\mu, \, x \in \R$$, $p^2(\mu, x) = \int_{-\infty}^\infty p(\mu, t) p(t, x) \, dt = \frac{1}{\sqrt{4 \pi}} e^{-\frac{1}{4}(x - \mu)^2}$ so that $$x \mapsto p^2(\mu, x)$$ is the normal PDF with mean $$\mu$$ and variance 2. By induction, $p^n(\mu, x) = \frac{1}{\sqrt{2 \pi n}} e^{-\frac{1}{2 n}(x - \mu)^2}$ for $$n \in \N_+$$ and $$\mu, \, x \in \R$$. Thus $$x \mapsto p^n(\mu, x)$$ is the normal PDF with mean $$\mu$$ and variance $$n$$.

For each of the following special distributions, express the probability density function as a probability kernel function. Be sure to specify the parameter spaces.
1. The general normal distribution on $$\R$$.
2. The beta distribution on $$(0, 1)$$.
3. The negative binomial distribution on $$\N$$.
Answers:
1. The normal distribution with mean $$\mu$$ and standard deviation $$\sigma$$ defines a kernel function $$p$$ from $$\R \times (0, \infty)$$ to $$\R$$ given by $p[(\mu, \sigma), x] = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right]$
2. The beta distribution with left parameter $$a$$ and right parameter $$b$$ defines a kernel function $$p$$ from $$(0, \infty)^2$$ to $$(0, 1)$$ given by $p[(a, b), x] = \frac{1}{B(a, b)} x^{a - 1} (1 - x)^{b - 1}$ where $$B$$ is the beta function.
3. The negative binomial distribution with stopping parameter $$k$$ and success parameter $$\alpha$$ defines a kernel function $$p$$ from $$(0, \infty) \times (0, 1)$$ to $$\N$$ given by $p[(k, \alpha), n] = \binom{n + k - 1}{n} \alpha^k (1 - \alpha)^n$
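As a quick check that the last kernel function is indeed stochastic (this computation is not part of the original exercise): by the negative binomial series $$\sum_{n=0}^\infty \binom{n + k - 1}{n} x^n = (1 - x)^{-k}$$ for $$\left|x\right| \lt 1$$ (valid for real $$k \gt 0$$ with generalized binomial coefficients), $\sum_{n=0}^\infty p[(k, \alpha), n] = \alpha^k \sum_{n=0}^\infty \binom{n + k - 1}{n} (1 - \alpha)^n = \alpha^k \left[1 - (1 - \alpha)\right]^{-k} = 1$ so $$A \mapsto \sum_{n \in A} p[(k, \alpha), n]$$ is a probability measure on $$\N$$ for each parameter pair $$(k, \alpha)$$.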
## Algebra 1 The Pythagorean Theorem states that for a right triangle, $a^{2} + b^{2} = c^{2}$. Thus, we plug in the values given for a, b, or c, and then we use these values to find the missing value: $a^{2} + 3.5^{2} = 3.7^{2} \\ a^{2} + 12.25 = 13.69 \\ a^{2} = 1.44 \\ a = 1.2$
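As a quick check, $1.2^{2} + 3.5^{2} = 1.44 + 12.25 = 13.69 = 3.7^{2}$, so the computed value of $a$ is consistent with the Pythagorean Theorem.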
# Christian Brauner

## Runtimes And the Curse of the Privileged Container

#### Introduction (CVE-2019-5736)

Today, Monday, 2019-02-11, 14:00:00 CET, CVE-2019-5736 was released:

> The vulnerability allows a malicious container to (with minimal user interaction) overwrite the host runc binary and thus gain root-level code execution on the host. The level of user interaction is being able to run any command (it doesn't matter if the command is not attacker-controlled) as root within a container in either of these contexts:
> • Creating a new container using an attacker-controlled image.
> • Attaching (docker exec) into an existing container which the attacker had previous write access to.

I've been working on a fix for this issue over the last couple of weeks together with Aleksa, a friend of mine and maintainer of runC. When he notified me about the issue in runC we tried to come up with an exploit for LXC as well and, though harder, it is doable. I was interested in the issue for technical reasons and figuring out how to reliably fix it was quite fun (with a proper dose of pure hatred). It also caused me to finally write down some personal thoughts I had for a long time about how we are running containers.

#### What are Privileged Containers?

At a first glance this is a question that is probably trivial to anyone who has a decent low-level understanding of containers. Maybe even most users by now will know what a privileged container is. A first pass at defining it would be to say that a privileged container is a container that is owned by root. Looking closer this seems an insufficient definition. What about containers using user namespaces that are started as root? It seems we need to distinguish between what ids a container is running with. So we could say a privileged container is a container that is running as root. However, this is still wrong. Because “running as root” can either be seen as meaning “running as root as seen from the outside” or “running as root from the inside” where “outside” means “as seen from a task outside the container” and “inside” means “as seen from a task inside the container”.

What we really mean by a privileged container is a container where the semantics for id 0 are the same inside and outside of the container ceteris paribus. I say “ceteris paribus” because using LSMs, seccomp or any other security mechanism will not cause a change in the meaning of id 0 inside and outside the container. For example, a breakout caused by a bug in the runtime implementation will give you root access on the host.

An unprivileged container then simply is any container in which the semantics for id 0 inside the container are different from id 0 outside the container. For example, a breakout caused by a bug in the runtime implementation will not give you root access on the host by default. This should only be possible if the kernel's user namespace implementation has a bug.

The reason why I like to define privileged containers this way is that it also lets us handle edge cases. Specifically, the case where a container is using a user namespace but a hole is punched into the idmapping at id 0 aka where id 0 is mapped through. Consider a container that uses the following idmappings:

```
id: 0 100000 100000
```

This instructs the kernel to setup the following mapping:

```
container_id(0)     -> host_id(100000)
container_id(1)     -> host_id(100001)
container_id(2)     -> host_id(100002)
.
.
.
container_id(99999) -> host_id(199999)
```

With this mapping it's evident that container_id(0) != host_id(0).
But now consider the following mapping:

```
id: 0 0 1
id: 1 100001 99999
```

This instructs the kernel to setup the following mapping:

```
container_id(0)     -> host_id(0)
container_id(1)     -> host_id(100001)
container_id(2)     -> host_id(100002)
.
.
.
container_id(99999) -> host_id(199999)
```

In contrast to the first example this has the consequence that container_id(0) == host_id(0). I would argue that any container that at least punches a hole for id 0 into its idmapping up to specifying an identity mapping is to be considered a privileged container.

As a sidenote, Docker containers run as privileged containers by default. There is usually some confusion where people think because they do not use the --privileged flag that Docker containers run unprivileged. This is wrong. What the --privileged flag does is to give you even more permissions by e.g. not dropping (specific or even any) capabilities. One could say that such containers are almost “super-privileged”.

#### The Trouble with Privileged Containers

The problem I see with privileged containers is essentially captured by LXC's and LXD's upstream security position which we have held since at least 2015 but probably even earlier. I'm quoting from our notes about privileged containers:

> Privileged containers are defined as any container where the container uid 0 is mapped to the host's uid 0. In such containers, protection of the host and prevention of escape is entirely done through Mandatory Access Control (apparmor, selinux), seccomp filters, dropping of capabilities and namespaces.
>
> Those technologies combined will typically prevent any accidental damage of the host, where damage is defined as things like reconfiguring host hardware, reconfiguring the host kernel or accessing the host filesystem.
>
> LXC upstream's position is that those containers aren't and cannot be root-safe. They are still valuable in an environment where you are running trusted workloads or where no untrusted task is running as root in the container.
>
> We are aware of a number of exploits which will let you escape such containers and get full root privileges on the host. Some of those exploits can be trivially blocked and so we do update our different policies once made aware of them. Some others aren't blockable as they would require blocking so many core features that the average container would become completely unusable.
>
> [...]
>
> As privileged containers are considered unsafe, we typically will not consider new container escape exploits to be security issues worthy of a CVE and quick fix. We will however try to mitigate those issues so that accidental damage to the host is prevented.

LXC's upstream position for a long time has been that privileged containers are not and cannot be root safe. For something to be considered root safe it should be safe to hand root access to third parties or tasks.

#### Running Untrusted Workloads in Privileged Containers

is insane. That's about everything that this paragraph should contain. The fact that the semantics for id 0 inside and outside the container are identical entails that any meaningful container escape will have the attacker gain root on the host.

#### CVE-2019-5736 Is a Very Very Very Bad Privilege Escalation to Host Root

CVE-2019-5736 is an excellent illustration of such an attack. Think about it: a process running inside a privileged container can rather trivially corrupt the binary that is used to attach to the container. This allows an attacker to create a custom ELF binary on the host.
That binary could do anything it wants:

• could just be a binary that calls poweroff
• could be a binary that spawns a root shell
• could be a binary that kills other containers when called again to attach
• could be suid cat
• .
• .
• .

The attack vector is actually slightly worse for runC due to its architecture. Since runC exits after spawning the container it can also be attacked through a malicious container image. Which is super bad given that a lot of container workload workflows rely on downloading images from the web.

LXC cannot be attacked through a malicious image since the monitor process (a singleton per-container) never exits during the container's life cycle. Since the kernel does not allow modifications to running binaries it is not possible for the attacker to corrupt it. When the container is shut down or killed the attacking task will be killed before it can do any harm. Only when the last process running inside the container has exited will the monitor itself exit. This has the consequence that if you run privileged OCI containers via our oci template with LXC you are not vulnerable to malicious images. Only the vector through the attaching binary still applies.

#### The Lie that Privileged Containers can be safe

Aside from mostly working on the kernel I'm also a maintainer of LXC and LXD alongside Stéphane Graber. We are responsible for LXC – the low-level container runtime – and LXD – the container management daemon using LXC. We have made a very conscious decision to consider privileged containers not root safe. Two main corollaries follow from this:
1. Privileged containers should never be used to run untrusted workloads.
2. Breakouts from privileged containers are not considered CVEs by our security policy.

It still seems a common belief that if we all just try hard enough, using privileged containers for untrusted workloads is safe. This is not a promise that can be made good upon. A privileged container is not a security boundary. The reason for this is simply what we looked at above: container_id(0) == host_id(0). It is therefore deeply troubling that this industry is happy to let users believe that they are safe and secure using privileged containers.

#### Unprivileged Containers as Default

As upstream for LXC and LXD we have been advocating the use of unprivileged containers by default for years, way ahead of anyone else. Our low-level library LXC has supported unprivileged containers since 2013 when user namespaces were merged into the kernel. With LXD we have taken it one step further and made unprivileged containers the default and privileged containers opt-in, for that very reason: privileged containers aren't safe. We even allow you to have per-container idmappings to make sure that not just each container is isolated from the host but also all containers from each other.

For years we have been advocating for unprivileged containers on conferences, in blogposts, and whenever we have spoken to people but somehow this whole industry has chosen to rely on privileged containers. The good news is that we are seeing changes as people become more familiar with the perils of privileged containers. Let this recent CVE be another reminder that unprivileged containers need to be the default.

#### Are LXC and LXD affected?

• Unprivileged LXC and LXD containers are not affected.
• Any privileged LXC and LXD container running on a read-only rootfs is not affected.
• Privileged LXC containers in the definition provided above are affected.
The attack is, however, more difficult than for runC. The reason for this is that the lxc-attach binary does not exit before the program in the container has finished executing. This means an attacker would need to open an O_PATH file descriptor to /proc/self/exe, fork() itself into the background and re-open the O_PATH file descriptor through /proc/self/fd/<O_PATH-nr> in a loop as O_WRONLY and keep trying to write to the binary until such time as lxc-attach exits. Before that it will not succeed since the kernel will not allow modification of a running binary.

• Privileged LXD containers are only affected if the daemon is restarted other than for upgrade reasons. This should basically never happen. The LXD daemon never exits so any write will fail because the kernel does not allow modification of a running binary. If the LXD daemon is restarted because of an upgrade the binary will be swapped out and the file descriptor used for the attack will write to the old in-memory binary and not to the new binary.

#### Chromebooks with Crostini using LXD are not affected

Chromebooks, which use LXD as their default container runtime, are not affected. First of all, all binaries reside on a read-only filesystem and second, LXD does not allow running privileged containers on Chromebooks through the LXD_UNPRIVILEGED_ONLY flag. For more details see this link.

#### Fixing CVE-2019-5736

To prevent this attack, LXC has been patched to create a temporary copy of the calling binary itself when it attaches to containers (cf. 6400238d08cdf1ca20d49bafb85f4e224348bf9d). To do this LXC can be instructed to create an anonymous, in-memory file using the memfd_create() system call and to copy itself into the temporary in-memory file, which is then sealed to prevent further modifications. LXC then executes this sealed, in-memory file instead of the original on-disk binary. Any compromising write operations from a privileged container to the host LXC binary will then write to the temporary in-memory binary and not to the host binary on-disk, preserving the integrity of the host LXC binary. Also, as the temporary, in-memory LXC binary is sealed, writes to it will fail as well. To not break downstream users of the shared library this is opt-in by setting LXC_MEMFD_REXEC in the environment. For our lxc-attach binary which is the only attack vector this is now done by default. Workloads that place the LXC binaries on a read-only filesystem or prevent running privileged containers can disable this feature by passing --disable-memfd-rexec during the configure stage when compiling LXC.

## Android Binderfs

#### Introduction

Android Binder is an inter-process communication (IPC) mechanism. It is heavily used in all Android devices. The binder kernel driver has been present in the upstream Linux kernel for quite a while now. Binder has been a controversial patchset (see this lwn article as an example). Its design was considered wrong and to violate certain core kernel design principles (e.g. a task should never touch another task's file descriptor table). Most kernel developers were not a fan of binder. Recently, the upstream binder code has fortunately been reworked significantly (e.g. it does not touch another task's file descriptor table anymore, the locking is very fine-grained now, etc.). With Android being one of the major operating systems (OS) for a vast number of devices there is simply no way around binder.

#### The Android Service Manager

The binder IPC mechanism is accessible from userspace through device nodes located at /dev.
A modern Android system will allocate three device nodes:

• /dev/binder
• /dev/hwbinder
• /dev/vndbinder

serving different purposes. However, the logic is the same for all three of them. A process can call open(2) on those device nodes to receive an fd which it can then use to issue requests via ioctl(2)s. Android has a service manager which is used to translate addresses to bus names and only the address of the service manager itself is well-known. The service manager is registered through an ioctl(2) and there can only be a single service manager. This means once a service manager has grabbed hold of binder devices they cannot be (easily) reused by a second service manager.

#### Running Android in Containers

This matters as soon as multiple instances of Android are supposed to be run, since they will all need their own private binder devices. This is a use-case that arises pretty naturally when running Android in system containers. People have been doing this for a long time with LXC. A project that has set out to make running Android in LXC containers very easy is Anbox. Anbox makes it possible to run hundreds of Android containers. To properly run Android in a container it is necessary that each container has a set of private binder devices.

#### Statically Allocating binder Devices

Binder devices are currently statically allocated at compile time. Before compiling a kernel the CONFIG_ANDROID_BINDER_DEVICES option needs to be set in the kernel config (Kconfig), containing the names of the binder devices to allocate at boot. By default it is set as:

```
CONFIG_ANDROID_BINDER_DEVICES="binder,hwbinder,vndbinder"
```

To allocate additional binder devices the user needs to specify them with this Kconfig option. This is problematic since users need to know how many containers they will run at maximum and then to calculate the number of devices they need so they can specify them in the Kconfig. When the maximum number of needed binder devices changes after kernel compilation the only way to get additional devices is to recompile the kernel.

#### Problem 1: Using the misc major Device Number

This situation is aggravated by the fact that binder devices use the misc major number in the kernel. Each device node in the Linux kernel is identified by a major and minor number. A device can request its own major number. If it does it will have an exclusive range of minor numbers it doesn't share with anything else and is free to hand out. Or it can use the misc major number. The misc major number is shared amongst different devices. However, that also means the number of minor devices that can be handed out is limited by all users of misc major. So if a user requests a very large number of binder devices in their Kconfig they might make it impossible for anyone else to allocate minor numbers. Or there simply might not be enough to allocate for itself.

#### Problem 2: Containers and IPC namespaces

All of those binder devices requested in the Kconfig via CONFIG_ANDROID_BINDER_DEVICES will be allocated at boot and be placed in the host's devtmpfs mount usually located at /dev, or will be created via mknod(2) by udev(7) at boot, depending on the udev(7) implementation. That means all of those devices initially belong to the host IPC namespace. However, containers usually run in their own IPC namespace separate from the host's. But when binder devices located in /dev are handed to containers (e.g.
with a bind-mount) the kernel driver will not know that these devices are now used in a different IPC namespace since the driver is not IPC namespace aware. This is not a serious technical issue but a serious conceptual one. There should be a way to have per-IPC namespace binder devices.

#### Enter binderfs

To solve both problems we came up with a solution that I presented at the Linux Plumbers Conference in Vancouver this year. There's a video of that presentation available on YouTube. Android binderfs is a tiny filesystem that allows users to dynamically allocate binder devices, i.e. it allows to add and remove binder devices at runtime. Which means it solves problem 1. Additionally, binder devices located in a new binderfs instance are independent of binder devices located in another binderfs instance. All binder devices in binderfs instances are also independent of the binder devices allocated during boot specified in CONFIG_ANDROID_BINDER_DEVICES. This means binderfs solves problem 2. Android binderfs can be mounted via:

```
mount -t binder binder /dev/binderfs
```

at which point a new instance of binderfs will show up at /dev/binderfs. In a fresh instance of binderfs no binder devices will be present. There will only be a binder-control device which serves as the request handler for binderfs:

```
root@edfu:~# ls -al /dev/binderfs/
total 0
drwxr-xr-x  2 root root      0 Jan 10 15:07 .
drwxr-xr-x 20 root root   4260 Jan 10 15:07 ..
crw-------  1 root root 242, 6 Jan 10 15:07 binder-control
```

#### binderfs: Dynamically Allocating a New binder Device

To allocate a new binder device in a binderfs instance a request needs to be sent through the binder-control device node. A request is sent in the form of an ioctl(2). Here's an example program:

```
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <linux/android/binder.h>
#include <linux/android/binderfs.h>

int main(int argc, char *argv[])
{
        int fd, ret, saved_errno;
        size_t len;
        struct binderfs_device device = { 0 };

        if (argc != 3)
                exit(EXIT_FAILURE);

        len = strlen(argv[2]);
        if (len > BINDERFS_MAX_NAME)
                exit(EXIT_FAILURE);

        memcpy(device.name, argv[2], len);

        fd = open(argv[1], O_RDONLY | O_CLOEXEC);
        if (fd < 0) {
                printf("%s - Failed to open binder-control device\n",
                       strerror(errno));
                exit(EXIT_FAILURE);
        }

        /* Ask binderfs to allocate a new binder device with the given name. */
        ret = ioctl(fd, BINDER_CTL_ADD, &device);
        saved_errno = errno;
        close(fd);
        errno = saved_errno;
        if (ret < 0) {
                printf("%s - Failed to allocate new binder device\n",
                       strerror(errno));
                exit(EXIT_FAILURE);
        }

        printf("Allocated new binder device with major %d, minor %d, "
               "and name %s\n", device.major, device.minor, device.name);

        exit(EXIT_SUCCESS);
}
```

What this program does is open the binder-control device node and send a BINDER_CTL_ADD request to the kernel. Users of binderfs need to tell the kernel which name the new binder device should get. By default a name can only contain up to 256 chars including the terminating zero byte. The struct which is used is:

```
/**
 * struct binderfs_device - retrieve information about a new binder device
 * @name:  the name to use for the new binderfs binder device
 * @major: major number allocated for binderfs binder devices
 * @minor: minor number allocated for the new binderfs binder device
 *
 */
struct binderfs_device {
        char name[BINDERFS_MAX_NAME + 1];
        __u32 major;
        __u32 minor;
};
```

and is defined in linux/android/binderfs.h.
Once the request is made via an ioctl(2) passing a struct binderfs_device with the name to the kernel, it will allocate a new binder device and return the major and minor number of the new device in the struct (this is necessary because binderfs allocates its major device number dynamically at boot). After the ioctl(2) returns there will be a new binder device located under /dev/binderfs with the chosen name:

root@edfu:~# ls -al /dev/binderfs/
total 0
drwxr-xr-x  2 root root    0 Jan 10 15:19 .
drwxr-xr-x 20 root root 4260 Jan 10 15:07 ..
crw------- 1 root root 242, 0 Jan 10 15:19 binder-control
crw------- 1 root root 242, 1 Jan 10 15:19 my-binder
crw------- 1 root root 242, 2 Jan 10 15:19 my-binder1

#### binderfs: Deleting a binder Device

Deleting binder devices does not involve issuing another ioctl(2) request through binder-control. They can be deleted via unlink(2). This means that the rm(1) tool can be used to delete them:

root@edfu:~# rm /dev/binderfs/my-binder1
root@edfu:~# ls -al /dev/binderfs/
total 0
drwxr-xr-x  2 root root    0 Jan 10 15:19 .
drwxr-xr-x 20 root root 4260 Jan 10 15:07 ..
crw------- 1 root root 242, 0 Jan 10 15:19 binder-control
crw------- 1 root root 242, 1 Jan 10 15:19 my-binder

Note that the binder-control device cannot be deleted since this would make the binderfs instance unusable. The binder-control device will be deleted when the binderfs instance is unmounted and all references to it have been dropped.

#### binderfs: Mounting Multiple Instances

Mounting another binderfs instance at a different location will create a new and separate instance from all other binderfs mounts. This is identical to the behavior of devpts, tmpfs, and also (even though never merged in the kernel) kdbusfs:

root@edfu:~# mkdir binderfs1
root@edfu:~# mount -t binder binder binderfs1
root@edfu:~# ls -al binderfs1/
total 4
drwxr-xr-x  2 root   root      0 Jan 10 15:23 .
drwxr-xr-x 72 ubuntu ubuntu 4096 Jan 10 15:23 ..
crw------- 1 root root 242, 2 Jan 10 15:23 binder-control

There is no my-binder device in this new binderfs instance since its devices are not related to those in the binderfs instance at /dev/binderfs. This means users can easily get their private set of binder devices.

#### binderfs: Mounting binderfs in User Namespaces

The Android binderfs filesystem can be mounted and used to allocate new binder devices in user namespaces. This has the advantage that binderfs can be used in unprivileged containers or any user-namespace-based sandboxing solution. The "bfs" binary used below is the compiled example program from above:

ubuntu@edfu:~$ unshare --user --map-root --mount
root@edfu:~# mkdir binderfs-userns
root@edfu:~# mount -t binder binder binderfs-userns/
root@edfu:~# ./bfs binderfs-userns/binder-control my-user-binder
Allocated new binder device with major 242, minor 4, and name my-user-binder
root@edfu:~# ls -al binderfs-userns/
total 4
drwxr-xr-x  2 root root    0 Jan 10 15:34 .
drwxr-xr-x 73 root root 4096 Jan 10 15:32 ..
crw------- 1 root root 242, 3 Jan 10 15:34 binder-control
crw------- 1 root root 242, 4 Jan 10 15:36 my-user-binder

#### Kernel Patchsets

The binderfs patchset is merged upstream and will be available when Linux 5.0 gets released. There are a few outstanding patches that are currently waiting in Greg's tree (cf. "binderfs: remove wrong kern_mount() call" and "binderfs: make each binderfs mount a new instance" in char-misc-linus) and some others are queued for the 5.1 merge window. But overall it seems to be in decent shape.
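As a companion to the allocation program above, here is a minimal sketch of the deletion path described in the "Deleting a binder Device" section. It relies only on what the text already states: removal is done with plain unlink(2), which is exactly what rm(1) does. The device path passed on the command line (e.g. /dev/binderfs/my-binder) is just an example name; nothing else about the system is assumed.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        /* Expect exactly one argument: the binder device node to remove,
         * e.g. /dev/binderfs/my-binder (example name, not mandated by binderfs). */
        if (argc != 2) {
                fprintf(stderr, "usage: %s <path-to-binder-device>\n", argv[0]);
                exit(EXIT_FAILURE);
        }

        /* unlink(2) removes the device node; binderfs then tears the device down,
         * just like the rm(1) example above. */
        if (unlink(argv[1]) < 0) {
                fprintf(stderr, "%s - Failed to delete binder device %s\n",
                        strerror(errno), argv[1]);
                exit(EXIT_FAILURE);
        }

        printf("Deleted binder device %s\n", argv[1]);
        exit(EXIT_SUCCESS);
}

Note that, as stated above, this will fail for binder-control itself, which only goes away when the binderfs instance is unmounted.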
• Sagan - "The symmetric group: representations, combinatorial algorithms, and symmetric functions"; (the first two chapters here at least are representation theory) OR James & Kerber - "Representation Theory of the Symmetric Group" (this one includes some modular representations of $S_n$)
# Direction in Lorentz transformation The transformation, as known, is the following: \begin{align} x'&=\gamma(x-vt) \\ y'&=y \\ z'&=z \\ t'&=\gamma\left(t-\frac{vx}{c^2}\right) \end{align} Is there a meaning to the direction of $$v$$? If object A is at rest, and an object B is moving with speed $$v$$ towards him while an object C is moving with speed $$v$$ away from him. Will the transformation of time for example, be the same in both cases? And what about place ($$x$$)? My misunderstanding comes from seeing that the direction DOES matter while calculating the Doppler effect. Previous posts didn't help me, will the lorentz transformations in the case above be exactly the same? • You can use MathJax to typeset equations on this site. Mar 3, 2020 at 14:27 • Does this answer your question? Is the velocity a scalar or a vector in one dimensional Lorentz transformations? Please see the answer by G. Smith. Mar 3, 2020 at 14:30 • In case it's not completely clear from that linked question, yes the direction of the velocity does matter. The equations you list in your question are for the simplest case of the Lorentz transformation. The axes of the 2 frames are aligned with each other, they coincide at t=0, and the primed frame is moving in the +X direction with speed v relative to the unprimed frame. Wikipedia shows the transformations for more general cases. Mar 3, 2020 at 14:39 If we consider A's frame then B is moving to the right with velocity $$\vec v=(v,0,0)$$. The coordinates of B are related to A's by \begin{align} x'&=x-v\,t\tag{1}\\ t'&=t \end{align} Where B has the primed coordinates. Remember that the blue gridlines are all the points which have a constant $$x$$ or $$t$$ coordinate for observer A and the red gridlines have constant $$x$$ or $$t$$ coordinates for observer B. For example the red gridline in the middle has $$x'=0$$ (I didn't draw the time gridlines for observer B). So to answer your question: the direction of $$\vec v$$ does matter. Before $$t=0$$ observer B is always moving towards A and after $$t=0$$ B is always moving away from A. The sign of $$v$$ can switch in which direction B is moving though. If $$v<0$$ then B is moving to the left. When you are considering left moving observers you have to be careful. You can define the left moving transformation in two ways. Either define $$v$$ to always be positive, so the transformation for B moving to the left becomes \begin{align} x'&=x+v\,t\tag{2}\\ t'&=t \end{align} or allow $$v$$ to be negative, which means you can still use equation (1). Note that for the inverse transformation you always get an additional minus sign because A will be moving in the direction $$-\vec v$$ according to B's frame. So \begin{align} x&=x'+v\,t'\tag{3}\\ t&=t' \end{align}
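Written with the full transformation quoted at the top of the question, the sign of $$v$$ enters as follows (a restatement for reference, not a new result). For a frame moving in the $$+x$$ direction with speed $$v > 0$$:
\begin{align}
x'&=\gamma(x-vt), & t'&=\gamma\left(t-\frac{vx}{c^2}\right), & \gamma&=\frac{1}{\sqrt{1-v^2/c^2}},
\end{align}
while for motion in the $$-x$$ direction one replaces $$v$$ by $$-v$$:
\begin{align}
x'&=\gamma(x+vt), & t'&=\gamma\left(t+\frac{vx}{c^2}\right).
\end{align}
Since $$\gamma$$ depends only on $$v^2$$, the time-dilation factor is the same for an approaching and a receding observer, but the $$vx/c^2$$ term flips sign, so in general the transformed time and position are not the same in the two cases (they coincide only at $$x=0$$).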
# Thread: Average Rate of Change

1. ## Average Rate of Change

For the function given below, answer (a) through (c).

(a) For each function below, find the average rate of change of f from 1 to x: [f(x) - f(1)]/(x - 1), where x cannot = 1.
==============================
(b) Use the result from part (a) to compute the average rate of change from x = 1 to x = 2. Be sure to simplify.
==============================
(c) Find an equation of the secant line containing the points (1, f(1)) and (2, f(2)).

HERE IS THE FUNCTION: f(x) = x^3 + x

2. Originally Posted by magentarita
For the function given below, answer (a) through (c).
(a) For each function below, find the average rate of change of f from 1 to x: [f(x) - f(1)]/(x - 1), where x cannot = 1.

With f(x)= $x^3+ x$, what is f(1)? Put that into (f(x)- f(1))/(x-1). Once you've done that, you should be able to factor the numerator and cancel.
==============================
(b) Use the result from part (a) to compute the average rate of change from x = 1 to x = 2. Be sure to simplify.

You are given the formula: use it! x = 2 here. What is f(2)? What is f(1)? What is (f(2)- f(1))/(2- 1)? Or just use the formula you got in (a)!
==============================
(c) Find an equation of the secant line containing the points (1, f(1)) and (2, f(2)).

Presumably by this time you have already calculated f(1) and f(2), so you know the two points. In (b) you found the slope of this line! How do you find the equation of the line?

HERE IS THE FUNCTION: f(x) = x^3 + x

3. ## ok..............

Originally Posted by HallsofIvy
With f(x)= $x^3+ x$, what is f(1)? Put that into (f(x)- f(1))/(x-1). Once you've done that, you should be able to factor the numerator and cancel. You are given the formula: use it! x = 2 here. What is f(2)? What is f(1)? What is (f(2)- f(1))/(2- 1)? Or just use the formula you got in (a)! Presumably by this time you have already calculated f(1) and f(2), so you know the two points. In (b) you found the slope of this line! How do you find the equation of the line?
HERE IS THE FUNCTION: f(x) = x^3 + x

I'll play around with this question and hope to understand your steps enough to find the answer.
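For reference, here is one way the algebra hinted at above works out for $f(x) = x^3 + x$; this is a worked sketch added for checking, not part of the original thread.

$f(1) = 1^3 + 1 = 2$

$\frac{f(x)-f(1)}{x-1} = \frac{x^3+x-2}{x-1} = \frac{(x-1)(x^2+x+2)}{x-1} = x^2+x+2 \quad (x \ne 1)$

(b): at $x = 2$ this gives $2^2+2+2 = 8$.

(c): the secant line has slope $8$ and passes through $(1, 2)$, so $y - 2 = 8(x-1)$, i.e. $y = 8x - 6$, which indeed also passes through $(2, f(2)) = (2, 10)$.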
Learning-to-Translate Based on the S-SSTC ## Abstract We present the S-SSTC framework for machine translation (MT), introduced in 2002 and developed since as a set of working MT systems (SiSTeC-ebmt for English—Malay and English—Chinese). Our approach is example-based, but differs from other Example-based Machine Translation (EBMT) in that it uses alignments of string-tree alignments, and that supervised learning is an integral part of the approach. In this presentation, we would like to stress a particular aspect, namely that this approach is better capable of modeling the translation knowledge of human translators than other example-based approaches. Because the translation knowledge is represented as alignments (synchronizations) between string-tree alignments (SSTCs, or structured string-tree correspondences), it is more natural to translators (and post-editors) than direct word-word, string-string or chunk-chunk correspondences used in classical Statistical Machine Translation (SMT) and EBMT models. It is also totally static, hence more understandable than procedural knowledge embedded in almost all Rule-based Machine Translation (RBMT) approaches. The learning process which is an integral part of the development of SiSTeC-ebmt MT systems is just like any other machine learning task; it is concerned with modeling and understanding learning phenomena with respect to the ‘world’ — a central aspect of cognition. Traditional theories of Machine Translation systems, however, have assumed that such cognition can be studied separately from learning. It is assumed that the knowledge is given to the system, stored in some representation language with a well-defined meaning, and that there is some mechanism which can be used to determine what source language text can be translated with respect to the given knowledge; the question of how this knowledge might be acquired and whether this should influence how the performance of the machine translation system is measured is not considered. We prove the usefulness of the ‘learning-to-translate’ approach by showing that through interaction with the world, the developed EBMT truly gains additional translating power, over what is possible in more traditional settings. Bilingual parallel texts which encode the correspondences between source and target sentences have been used extensively in implementing the so called example-based machine translation systems. In order to enhance the quality of example-based systems, sentences of a parallel corpus are normally annotated with their constituent or dependency structures, which in turn allows correspondences between source and target sentences to be established at the structural level. Here, we annotate parallel texts based on the Structured String-Tree Correspondence (SSTC). The SSTC is a general structure that can associate, to strings in a language, arbitrary tree structures as desired by the annotator to be the interpretation structures of the strings, and more importantly is the facility to specify the correspondence between the string and the associated tree which can be interpreted for both analysis and synthesis in the machine translation process. These features are very much desired in the design of an annotation scheme, in particular for the treatment of certain non-standard linguistic phenomena, such as unprojectivity or inversion of dominance. 
In this presentation, we will demonstrate how to use the good properties of the SSTC annotation scheme for S-SSTC-based MT, using the example of the SiSTeC-ebmt English—Malay Machine Translation system. We have chosen dependency structures as linguistic representations in the SSTCs, since they provide a natural way of annotating both the tree associated to a string as well as the mapping between the two. We also give a simple means to denote the translation elements between the corresponding source (English) and target (Malay) SSTCs. The dependency structure used here is in fact quite analogous to the use of abstract syntax tree in most of the compiler implementation. However, we note that the SSTCs can easily be extended to keep multiple levels of linguistic representation (e.g. syntagmatic1, functional and logical structures) if that is considered important to enhance the results of the machine translation system. Naturally, the more information annotated in an SSTC, the more difficult is the annotation work; that is why one should try to keep only the annotations contributing most to the task at hand. In the general case, let $S$ be a string (usually a sentence) and $T$ a tree (its linguistic representation). Instead of simply write $(S,T)$, we want to decompose that ‘large’ correspondence into smaller ones $(S_1, T_1)\ldots(S_n, T_n)$ in a hierarchical fashion; hence the adjective ‘structured’ in ‘SSTC’. If $T$ is an abstract representation of $S$, some nodes may represent discontinuous words or constituents (e.g. He gives the money back to her), or some words are not directly represented (e.g. auxiliaries, articles), or some words omitted (elided) in $S$ may have been restored in $T$. In the SSTC diagrams presented here, any tree node $N$ bears a pair $X/Y$ where $X = \text{SNODE}(N)$ and $Y = STREE(N)$. $X$ and $Y$ are generalized (not necessarily connex) substrings of the string $S$, and are written as minimal2 left-to-right lists of usual intervals, like 1_3+4_5). $\text{SNODE}(N)$ denotes the substring that corresponds to the lexical information contained in node3, while $\text{STREE}(N)$ denotes the (again possibly discontinuous) substring that corresponds to the whole subtree rooted at node $N$. As for the correspondences between the source (English) and target (Malay) SSTCs, the translation elements between phrases and words are coded in terms of STREE pairs and SNODE pairs, respectively. To illustrate this, we show in Figure 1 a pair of source (English) and target (Malay) SSTCs and the corresponding translation elements. In the example SSTCs given, an interval is assigned to each word in the sentence, i.e. 0_1 to “if”, 1_2 to “the”, etc. The node “not” has $\text{SNODE} =$5_6, meaning that its lexeme corresponds to the word “not” in the sentence. Similarly, the node bearing “is” has $\text{STREE} =$1_5+6_10, meaning that the subtree it dominates corresponds to the discontinuous substring “the oil level is” + “at the ADD mark”. ## Implementation notes The main purpose of the project described here is to build a general software package that provides an integrated environment for the construction of S-SSTC-based EBMT systems. In this project, we put emphasis on the development of an English->Malay MT system. However, the same methodology can be adapted to develop MT systems for any other language pairs. The current SiSTeC-ebmt platform consists of four major subcomponents, namely 1. 
the preparation of an annotated bilingual parallel texts to be used for the initial learning process, 2. a set of acquisition tools used to construct the initial bilingual knowledge bank, 3. a general MT system to translate new input sentences (using the bilingual knowledge bank) into the target language, together with all the related annotation, 4. the post-editing process to make corrections (if any) on the translation as well as on the annotations, which in turn will be used by the learning tools to confirm the well translated parts and adjust the translation elements of the BKB corresponding to the corrected parts. An English -> Malay MT system with 100,000 translation examples annotated in the S-SSTC has been constructed based on the implementation frame as described above. To provide an overview of the performance of this system, a quick comparison of the MT results produced by Google Translate and our SiSTeC-ebmt is given in the table below. Sample English Text Translation to Malay by Google Translate Translation to Malay by SiSTeC-ebmt (100,000 S-SSTCs) The main purpose of the project described in this paper is to build a general software package that provides an integrated environment for the construction of S-SSTC based EBMT systems. In this project, we put emphasis on the development of an English->Malay MT system in the domain of computer science texts. However, the same methodology can be adapted to develop MT systems for any other typology of texts, and naturally also for any other language pairs. Tujuan utama projek yang dihuraikan dalam kertas kerja ini adalah untuk membina satu pakej perisian umum yang menyediakan persekitaran bersepadu bagi pembinaan sistem S-SSTC EBMT berasaskan. Dalam projek ini, kami meletakkan penekanan kepada pembangunan bahasa Inggeris> MT sistem bahasa Melayu dalam domain teks sains komputer. Walau bagaimanapun, kaedah yang sama boleh disesuaikan untuk membangunkan sistem MT bagi mana-mana tipologi teks lain, dan secara semulajadi juga untuk mana-mana pasangan bahasa lain. Tujuan utama daripada projek itu digambarkan di dalam kertas ini untuk membina perisian umumnya pakej yang menyediakan mengintegrasikan S-SSTC persekitaran bagi pembinaan berdasarkan EBMT sistem. Dalam projek ini, kami meletakkan teks sains sistem menekankan pembangunan English->Malay MT di domain komputer. Walau bagaimanapun, metodologi sama boleh disesuaikan mengikut merangka sistem Tm untuk tipologi lain teks, dan secara semula jadi juga untuk sebarang pasang bahasa-bahasa lain. We provide also in the following table a comparison of the results produced by our SiSTeC-ebmt system with different size of its bilingual knowledge bank. Translation to Malay by SiSTeC-ebmt (1,500 S-SSTCs) Translation to Malay by SiSTeC-ebmt (25,000 S-SSTCs) Translation to Malay by SiSTeC-ebmt (100,000 S-SSTCs) Tujuan utama projek itu memerikan dengan kertas ini untuk membina bungkusan perisian jeneral yang memberikan mengintegrasikan persekitaran untuk pembinaan S-SSTC menempatkan sistem EBMT. Dalam projek ini, kami menyimpan penekanan terhadap perkembangan-perkembangan English->Malay MT sistem dalam kawasan kekuasaan komputer teks sains. Walau bagaimanapun, metodologi sama boleh menjadi disadur untuk berkembang MT sistem untuk sebarang typology yang lain (-lain) teks, dan semula jadinya juga untuk sebarang pasangan bahasa yang lain (-lain). 
Tujuan sesalur projek itu dikatakan dengan kertas ini untuk membina perisian jeneral bungkusan memberikan yang mengintegrasikan persekitaran untuk senibina S-SSTC berasaskan sistem EBMT. Dalam projek ini, kami meletakkan penekanan terhadap perkembangan English->Malay MT sistem dalam domain komputer teks sains. Walau bagaimanapun, perkaedahan yang sama boleh menjadi disesuaikan memajukan sistem MT untuk typology yang lain (-lain) teks, dan semula jadinya juga untuk bahasa yang lain (-lain) pasang. Tujuan utama daripada projek itu digambarkan di dalam kertas ini untuk membina perisian umumnya pakej yang menyediakan mengintegrasikan S-SSTC persekitaran bagi pembinaan berdasarkan EBMT sistem. Dalam projek ini, kami meletakkan teks sains sistem menekankan pembangunan English->Malay MT di domain komputer. Walau bagaimanapun, metodologi sama boleh disesuaikan mengikut merangka sistem Tm untuk tipologi lain teks, dan secara semula jadi juga untuk sebarang pasang bahasa-bahasa lain. page revision: 3, last edited: 05 Jun 2012 09:18
# A class consists of 10 boys and 20 girls. Exactly half of the boys and half of the girls have brown eyes.

A class consists of 10 boys and 20 girls. Exactly half of the boys and half of the girls have brown eyes. Determine the probability that a randomly selected student will be a boy or a student with brown eyes.

(A) $$\frac{1}{3}$$
(B) $$\frac{1}{2}$$
(C) $$\frac{1}{5}$$
(D) $$\frac{2}{3}$$
(E) $$\frac{1}{4}$$

Reply: Number of students having brown eyes = 15. Number of boys having brown eyes = 5. Probability = (10+15-5)÷30 = 2/3.
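The reply's count is just inclusion-exclusion; written out with the same numbers as a check of the arithmetic:

$$P(\text{boy} \cup \text{brown eyes}) = \frac{10}{30} + \frac{15}{30} - \frac{5}{30} = \frac{20}{30} = \frac{2}{3},$$

which matches answer choice (D).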
# scheme-theoretic description of abelian schemes Let $S$ be a locally noetherian scheme, $C$ the category of proper smooth $S$-schemes with geometrical connected fibres and $C_*$ the category of pointed objects of $S$, i.e. objects of $C$ together with a morphism $S \to C$. Also denote $A$ the category of abelian schemes over $S$. There is a well-known rigidity result stating that a pointed morphism between $X,Y \in A$ is already a group morphism. In other words, the inclusion functor $A \to C_*$ is fully faithful. Is there a nice description for the image? In other words, which purely scheme-theoretic properties do abelian schemes have and are there enough to characterize them? For example, $X \in A$ is "homogeneous". - The purely scheme-theoretic properties that Abelian schemes have which characterise the image are: (insert definition of an abelian scheme here, i.e. smooth proper, group structure, geom conn fibres). What more are you asking for? –  Kevin Buzzard Apr 28 '10 at 18:12 I'm asking for a char. which does not involve a group multiplication. For example, how can I decide whether $\mathbb{P}^1_S$ is an abelian scheme? This is just an example. –  Martin Brandenburg Apr 28 '10 at 18:35 How about smooth proper morphisms $X \to S$ with connected fibers, a section $S \to X$, such that the sheaf of Kähler differentials $\Omega_{X/S}$ is a pullback from $S$, and such that the group scheme $\underline{\rm Aut}_S X$ acts transitively on the fibers? The essential point is that the hypothesis on the differentials insures that no geometric fiber can contain a rational curve, so no affine algebraic group can act non-trivially. The result should follow from Chevalley's structure theorem for algebraic groups, with some fairly standard arguments (I haven't checked the details, though). - Triviality of $\Omega_{X/S}$ disposes of $\mathbb P^1$, at least! –  Mariano Suárez-Alvarez Apr 28 '10 at 19:50 Nice! By Theorem 6.14 in GIT (for which the projective hypothesis relaxes to properness by using Artin's results on Hilbert and Hom functors via algebraic spaces instead of schemes), the abelian scheme structure exists (uniquely) if it does so on geometric fibers. Thus, the proposed criterion above reduces to case when $S$ is spectrum of alg. closed field, in which case it follows by the suggested argument (using rational curves and Chevalley's theorem) provided we modify the hypothesis on Aut's to involve the identity component of the Aut-scheme on fibers (still geometric criterion). –  BCnrd Apr 28 '10 at 20:15 By a transitive action I mean that the morphism $\underline{\rm Aut}_S X \to X$ coming from the section is scheme-theoretically surjective. Since the fibers are connected, this implies that the connected component of the identity must dominate each fiber, and this is enough to conclude. –  Angelo Apr 28 '10 at 21:05 If the generic points of S have residue characteristic 0 then perhaps the condition on Aut_S X is unnecessary since any connected smooth proper variety over a field of characteristic 0 with trivial tangent bundle (and a rational point) is an abelian variety. This is not true in positive characteristics, but does suggest that some weaker condition might suffice. –  ulrich Apr 29 '10 at 11:09 @unknown: Could you please give an example of failure in positive characteristic, i.e. an example of a connected, smooth, proper scheme $X/k$ with trivial tangent bundle and a $k$-point, where $k$ is a field of positive characteristic, such that $X/k$ doesn't have a group law? –  Thanos D. 
Papaïoannou Apr 29 '10 at 22:23
## Solid state devices

For an ideal silicon p-n junction with $N_a = 10^{17}\ \mathrm{cm}^{-3}$ and $N_d = 10^{15}\ \mathrm{cm}^{-3}$, find the depletion layer width and the maximum field at zero bias for $T = 300\ \mathrm{K}$.
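A worked sketch of the standard depletion-approximation answer follows. The silicon material constants are not given in the problem, so the values assumed here ($n_i \approx 1.5 \times 10^{10}\ \mathrm{cm}^{-3}$, $\varepsilon_r \approx 11.7$, $kT/q \approx 0.0259\ \mathrm{V}$ at 300 K) are typical textbook figures; the final numbers shift slightly with a different $n_i$.

$V_{bi} = \frac{kT}{q}\ln\frac{N_a N_d}{n_i^2} \approx 0.0259\,\ln\!\frac{10^{17}\cdot 10^{15}}{(1.5\times 10^{10})^2} \approx 0.70\ \mathrm{V}$

$W = \sqrt{\frac{2\varepsilon_s V_{bi}}{q}\left(\frac{1}{N_a}+\frac{1}{N_d}\right)} \approx \sqrt{\frac{2\,(1.04\times 10^{-12}\ \mathrm{F/cm})(0.70\ \mathrm{V})}{1.6\times 10^{-19}\ \mathrm{C}}\cdot 10^{-15}\ \mathrm{cm}^{3}} \approx 0.95\ \mu\mathrm{m}$

$E_{\max} = \frac{2 V_{bi}}{W} \approx \frac{2\,(0.70)}{9.5\times 10^{-5}\ \mathrm{cm}} \approx 1.5\times 10^{4}\ \mathrm{V/cm}$

Here the one-sided approximation $1/N_a + 1/N_d \approx 1/N_d$ is justified because $N_a \gg N_d$, so nearly the whole depletion layer sits on the lightly doped n-side.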
Skip to content # Math 672/673 ### Theory of Probability #### Winter/Spring 2023. MWF 10-11am. Catalog description: Measure and integration, probability spaces, laws of large numbers, central-limit theory, conditioning, martingales, random walks. Grades #### Homework (30%) There will be weekly homework, due Friday of each week on Canvas. Please submit as PDF, ideally using LaTeX or some similar typesetting program. Each problem should be written in “Theorem, Proof” form, where you state what you are proving as a theorem (you may rephrase the assigned questions to express it as a theorem if necessary) and then provide a proof in its own clearly marked section. You will also be asked to review assignments of other students as a “referee”. This will require you to read and provide constructive input on the proofs of your peers. In general, for each problem you should either “accept” the proved theorem as correct with a well-written proof, “revise and resubmit” if there are minor errors that can be fixed easily or the writing lacks clarity, or “reject”. In this situation, we will be more gentle than a flat rejection, and ask for a wholly new solution, and perhaps provide suggestions of where to begin. #### Midterm (30%) and Final (40%) There will be two exams each quarter, a midterm in week 6 of each quarter and a final exam in week 11 (finals week) of each quarter. We will discuss the format of the exams. In general the level of difficulty will reflect the difficulty of problems on the qualifying exam. We will cover most of Probability and Stochastics by Erhan Çinlar. I anticipate we will cover the first four chapters in Winter quarter, and the remaining chapters (perhaps focusing on particular applications/examples as we move deeper) in Spring. This schedule may be adjusted as necessary. ### Math 672 (Winter) • Measure and Integral: $\sigma$-algebras, monotone classes, kernels. • Probability spaces: Random variables, expectations, moments, existence and independence. • Convergence: Almost sure convergence, convergence in probability, $L^p$ convergence, convergence in distribution. Law of Large Numbers, Central Limits. • Conditioning: Conditional expectations, probabilities, distributions and independence. Construction of probability spaces. ### Math 673 (Spring) • Martingales and Stochastics • Poisson Random Measures • Levy Processes • Brownian Motion • Markov Processes
# greatest common divisor

Printable View

• Mar 25th 2008, 07:43 AM
deniselim17
greatest common divisor
Can anyone please help me with this proof?
"If a|bc, then show that a|[gcd⁡(a,b)∙gcd⁡(a,c)]"
Any suggestions will be very appreciated.

• Mar 25th 2008, 09:36 AM
TheEmptySet
Quote:

Originally Posted by deniselim17
Can anyone please help me with this proof?
"If a|bc, then show that a|[gcd⁡(a,b)∙gcd⁡(a,c)]"
Any suggestions will be very appreciated.

Since the gcd of any two numbers can be written as a linear combination, we can rewrite each of the above as follows:

$\exists\, s,t \in \mathbb{Z}$ such that $\gcd(a,b) = as + bt$

and

$\exists\, m,n \in \mathbb{Z}$ such that $\gcd(a,c) = am + cn$

Also, since $a \mid bc$, we have $bc = aq$ for some $q \in \mathbb{Z}$.

So

$\gcd(a,b) \cdot \gcd(a,c) = (as+bt)(am+cn) = a^2sm + acsn + abtm + bctn$

Grouping (and substituting in from above) we get

$= a(asm + csn + btm) + bctn = a(asm + csn + btm) + (aq)tn = a(asm + csn + btm + qtn)$

So finally $\gcd(a,b) \cdot \gcd(a,c) = a(asm + csn + btm + qtn)$, and therefore $a \mid \gcd(a,b) \cdot \gcd(a,c)$.

$QED$
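A quick numerical sanity check of the statement, added alongside the proof:

$a = 12,\; b = 8,\; c = 9: \qquad bc = 72 = 12 \cdot 6 \;\Rightarrow\; a \mid bc, \qquad \gcd(12,8)\cdot\gcd(12,9) = 4 \cdot 3 = 12, \qquad 12 \mid 12.$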
# qiskit.aqua.algorithms.VQE.supports_aux_operators

classmethod VQE.supports_aux_operators() [source]

Whether computing the expectation value of auxiliary operators is supported. If the minimum eigensolver computes an eigenstate of the main operator then it can compute the expectation value of the aux_operators for that state. Otherwise they will be ignored.

Returns: True if aux_operator expectations can be evaluated, False otherwise

Return type: bool
Should one really write $\text{det}A$, instead of $detA$, similarly $\text{Aut}G$ instead of $AutG$, or is $det$, $Aut$ acceptable? - ## migrated from math.stackexchange.comFeb 1 '13 at 6:16 This question came from our site for people studying math at any level and professionals in related fields. I would use \operatorname{det}(A), as that's what it is. –  gnometorule Feb 1 '13 at 4:51 Welcome to TeX.sx! Your post was migrated here from Math.SE. Please register on this site, too, and make sure that both accounts are associated with each other (by using the same OpenID), otherwise you won't be able to comment on or accept answers or edit your question. –  Werner Feb 1 '13 at 6:19 Note that $det(A)$ simply looks like it might mean $d\cdot e\cdot t(A)$. With respect to correct spacing in context, \operatorname is definitely the way to go. - Better note that \det is predefined in LaTeX, so $\det(A)$ will give the correct result out of the box. –  egreg Feb 1 '13 at 9:52 @egreg Yes, \operatorname is rather for the generic case (such as Aut in the OP). –  Hagen von Eitzen Feb 1 '13 at 13:40 1. $det A$ 2. $\text{det}A$ 3. $\mathrm{det}A$ 4. $\operatorname{det}A$ Personally, I prefer the second or third option or fourth option, as without italicization, it distinguishes the "operator" from the operand (matrix $A$ in this case), and/or fm other variables, which are italicized. And the advantage of the last option is the slight increase in spacing between the operator ($\operatorname{det}$) and the operand. See Deven's suggestion in the comment below: if you plan to use the "operator" a lot in a given post, you can write a "preamble" defining a "new command", which helps economize the effort involved in formulating a long post. - \text should generally not be used: it uses the font of the surrounding text, which might be italic or small caps... It should be reserved for words that are part of the text. –  Mariano Suárez-Alvarez Feb 1 '13 at 6:12 As for defining operators, there is a standard \DeclareMathOperator command in amslatex.sty. –  Mariano Suárez-Alvarez Feb 1 '13 at 6:13 \operatorname not only increases the spacing when it is needed, it also removed the space when the next thing is a delimiter (like parentheses) and has a couple of other technical niceties that make it the correct option. –  Mariano Suárez-Alvarez Feb 1 '13 at 6:15 There already is \det, no need to define it yourself. –  morbusg Feb 1 '13 at 6:35 For things that are operators \DeclareMathOperator or \operatorname is the way to go, for example $\mathrm{sin}\theta$ just looks wrong. On the other hand, one might argue that things that are just names for sets, could just be written as \mathrm{Mat}(R,n)` (obviously I'd make a \Mat macro), because it would not be used in a context where something could be seen as multiplication. \text should never be used to get something appear in upright, as it will be italic in an italic context. –  daleif Feb 1 '13 at 9:11
# $\mathbf{U} = \mathbf{C}^{1/2}$ and Its Invariants in Terms of $\mathbf{C}$ and Its Invariants

@article{Scott2020U,
  title={U = C^{1/2} and Its Invariants in Terms of C and Its Invariants},
  author={N. H. Scott},
  journal={Journal of Elasticity},
  year={2020},
  volume={141},
  pages={363-379}
}

• N. Scott • Published 22 April 2020 • Mathematics • Journal of Elasticity

We consider $N \times N$ tensors for $N = 3, 4, 5, 6$. In the case $N = 3$, it is desired to find the three principal invariants $i_1, i_2, i_3$ of $\mathbf{U}$ in terms of the three principal invariants $I_1, I_2, I_3$ of $\mathbf{C} = \mathbf{U}^2$. Equations connecting the $i_\alpha$ and $I_\alpha$ are obtained by taking determinants of the factorisation $\lambda^2 \mathbf{I} - \mathbf{C} = (\lambda \mathbf{I} - \mathbf{U})(\lambda \mathbf{I} + \mathbf{U})$…
# Op risk capital: why US should adopt SMA today ## No reason to delay roll-out of standardised approach, says TCH’s Greg Baer Attempts to measure operational risk capital have not gone well. The advanced measurement approach (AMA) by all accounts has proved a failure. The Basel Committee’s attempt to introduce a model-driven process for calculating operational risk capital was criticised by the industry and then formally rejected by regulators. In its place, Basel has developed a new, simplified methodology: the standardised measurement approach, or SMA. Part of the final package of revisions to Basel III, the SMA has
# Volkswagen Lupo Volkswagen Lupo Overview Manufacturer Volkswagen Also called Seat Arosa Production 1998–2005[1][2] Assembly Wolfsburg, Germany[nb 1] Brussels, Belgium[nb 2] Body and chassis Class City car (A) Body style 3-door hatchback Layout Front-engine, front-wheel-drive Platform Volkswagen Group A00 platform Related SEAT Arosa Powertrain Engine 1.0 L I4 (petrol) 1.4 L I4 (petrol) 1.6 L I4 (petrol) 1.2 L I3 (diesel) 1.4 L I3 (diesel) 1.7 L I4 (diesel) Transmission 5-speed manual 6-speed manual 5-speed semi-automatic 4-speed automatic Dimensions Wheelbase 2,318 mm (91.3 in) Length 3,524 mm (138.7 in) Width 1,640 mm (64.6 in) Height 1,457 mm (57.4 in) Curb weight 975 kg (2,150 lb) Chronology Successor Volkswagen Fox The Volkswagen Lupo is a city car produced by the German car manufacturer Volkswagen from 1998 to 2005.[2] ## Model history The Lupo was introduced in 1998 to fill a gap at the bottom of the Volkswagen model range caused by the increasing size and weight of the Polo. The 1998 Lupo was a badge-engineered version of the stablemate 1997 SEAT Arosa. Both use the A00 platform which is a shortened version of the Polo/Ibiza A0 platform. Initially only available in two trim variants, the budget E trim and the upgraded S trim; the range later expanded to include a Sport and GTI variant. Petrol engines ranged from 1.0 to 1.4 (1.6 for the GTI) with diesels from 1.2 to 1.7. The differences between the E and S trim included painted door mirrors, door handles and strip, central locking, electric windows, double folding seats and opening rear windows. Production of the Lupo was discontinued in 2005,[2] and was replaced by the Fox. The Lupo name is Latin, meaning wolf, and is named after its home town of Wolfsburg.[3] ## Specifications • Length 3,530 mm (139.0 in) • Width 1,803 mm (71.0 in) (with mirrors) • Height 1,447 mm (57.0 in) • Luggage capacity (rear seats up) 130 litres, (rear seats down) 833 litres • Weight 1,015 kg ## Engines Name Volume Type Output Torque 0–100 km/h Top speed Years Petrol engines 1.0 8v 997 cc (1 L; 61 cu in) 4 cyl. 50 PS (37 kW; 49 hp) at 5000 rpm 84 N·m (62 lb·ft) at 2750 rpm 18.0 s 152 km/h (94 mph) 1998–2000 1.0 8v 999 cc (1 L; 61 cu in) 4 cyl 50 PS (37 kW; 49 hp) at 5000 rpm 86 N·m (63 lb·ft) at 3000–3600 rpm 17.7 s 152 km/h (94 mph) 1998–2005 1.4 8v 1,390 cc (1 L; 85 cu in) 4 cyl. 60 PS (44 kW; 59 hp) at 4700 rpm 116 N·m (86 lb·ft) at 3000 rpm 14.3 s 160 km/h (99 mph) 2000–2005 1.4 16v 1,390 cc (1 L; 85 cu in) 4 cyl. 75 PS (55 kW; 74 hp) at 5000 rpm 126 N·m (93 lb·ft) at 3800 rpm 12.0 s 172 km/h (107 mph) 1998–2005 1.4 16v Sport 1,390 cc (1 L; 85 cu in) 4 cyl. 100 PS (74 kW; 99 hp) at 6000 rpm 126 N·m (93 lb·ft) at 4400 rpm 10.0 s 188 km/h (117 mph) 1999–2005 1.4 16v FSI 1,390 cc (1 L; 85 cu in) 4 cyl. 105 PS (77 kW; 104 hp) at 6200 rpm 130 N·m (96 lb·ft) at 4250 rpm 11.8 s 199 km/h (124 mph) 2000–2003 1.6 16v GTI 1,598 cc (2 L; 98 cu in) 4 cyl. 125 PS (92 kW; 123 hp) at 6500 rpm 152 N·m (112 lb·ft) at 3000 rpm 7.8 s 205 km/h (127 mph) 2000–2005 Diesel engines 1.2 TDI 3L 1,191 cc (1 L; 73 cu in) 3 cyl. 61 PS (45 kW; 60 hp) at 4000 rpm 140 N·m (103 lb·ft) at 1800–2400 rpm 14.5 s 165 km/h (103 mph) 1999–2005 1.4 TDI 1,422 cc (1 L; 87 cu in) 3 cyl. 75 PS (55 kW; 74 hp) at 4000 rpm 195 N·m (144 lb·ft) at 2200 rpm 12.3 s 170 km/h (106 mph) 1999–2005 1.7 SDI 1,716 cc (2 L; 105 cu in) 4 cyl. 
60 PS (44 kW; 59 hp) at 4200 rpm 115 N·m (85 lb·ft) at 2200–3000 rpm 16.8 s 157 km/h (98 mph) 1998–2005 ## Versions ### Lupo 3L Volkswagen Lupo 3L The Lupo 3L was a special-edition made with the intent of being the world's first car in series production consuming as little as 3 litres of fuel per 100 kilometres (78 miles per US gallon or 94 miles per Imperial gallon)[citation needed]. To achieve this, the 3L was significantly changed from the standard Lupo to include: • 1.2 litre three cylinder diesel engine with turbocharger and direct injection (61 hp, 140 Nm) • Use of light-weight aluminum and magnesium alloys for doors, bonnet (hood), rear hatch, seat frames, engine block, wheels, suspension system etc. to achieve a weight of only 830 kg (1,830 lb) • Tiptronic gearbox • Engine start/stop automatic to avoid long idling periods • Low rolling resistance tires • Automated gearbox and clutch, to optimise fuel consumption, with a Tiptronic mode for the gearbox • Changed aerodynamics, so a ${\displaystyle {\mathbf {c}}_{\mathrm {w} }\,}$ value of 0.29 was achieved The 3L, along with the GTI and FSI, had a completely different steel body to other Lupos, using thinner but stronger steel sheet. The car had an automated electro hydraulic manual transmission with a Tiptronic mode on the selector and an automated electro hydraulic clutch. The car also had an ECO mode. When engaged it limited the power to 41 bhp (31 kW; 42 PS) (excluding kick down) and programmed the transmission to change up at the most economical point. ECO mode also activated the start/stop function, a feature that was new to European cars at the time. To restart, the driver simply takes his foot off the brake and presses the accelerator. In ECO mode, the clutch was disengaged when the accelerator pedal was released for maximum economy, so the car freewheels as much as possible, with the clutch re-engaging as soon as the accelerator pedal or brake pedal is touched. The 3L also has only four wheel bolts and alloy brake drums at the rear, along with many aluminium suspension components. Initially, there were very few options on the 3L, as options added weight which affected fuel consumption. Those available initially were electrically heated and electrically controlled mirrors, fog lights and different paint colours. In order to increase sales, other options were offered including all electric steering, electric windows and air conditioning. These options however, increased fuel consumption slightly. In 2001, a Japanese economy driver, Dr Miyano, used it to set a new world record for the most frugal circumnavigation of Britain in a standard diesel production car, with an average fuel economy figure of 119.48 mpg or 2.36 l/100 km. In November 2003, Gerhard Plattner covered a distance of 2,910 miles through 20 European countries in a standard Lupo 3L TDI. He achieved his aim of completing this journey, which started in Oslo, Norway and finished in The Hague in The Netherlands - with just 100 euros worth of fuel. In fact, all he required was 90.94 euros, which corresponds to an average consumption of 2.78 litres per 100 km (101.6 mpg).[4] According to the Lupo 3L instruction manual, the 3L engine also runs on Rapeseed Methyl Ester (RME) without any changes to the engine. 
During the period of series production of the Lupo 3L, Volkswagen also presented the 1L Concept, a prototype made with the objective of proving the capability of producing a roadworthy vehicle consuming only 1 litre of fuel per 100 kilometres (235 miles per US gallon). The Lupo 3L shared its engine and special gearbox with the Audi A2 1.2 TDI 3L. As a result of this and other changes, this Audi A2 is also capable of reaching the same results as the Lupo 3L. ### Lupo FSi The Lupo FSi was the first direct injection petrol powered production vehicle Volkswagen produced. A 5L/100 km 1.4 16v petrol version of the Lupo 3L with an average consumption of 4.9L/100 km. This direct injection engine next to a conventional engine with similar power uses around 30% less fuel. It had a similar automated gearbox to the 3L but with different gear ratios. Outwardly, it was almost identical to a 3L but with a different front grill, slightly wider wheels with a different design and lacked the magnesium steering wheel and rear bumper of the 3L. The early 3L's and FSi's had aluminium tailgates which were lighter and more aerodynamic than their standard lupo counterparts, the early had FSi has a unique spoiler and the later ones without the aluminium tailgates were fitted with the same spoiler as the Lupo GTI. The FSi was only sold in Germany, Austria and Switzerland. ### Lupo GTI Volkswagen Lupo GTI The 1.6 L Lupo GTI has been labelled a true successor to the Volkswagen Golf Mk1, one of the first true hot hatches.[citation needed] The GTI can be identified by its fully body coloured bumpers and twin central exhausts. In 2002, a six speed gearbox was added, together with improved throttle response, and was suggested as a competitor to the Mini Cooper or the larger Volkswagen Polo GTI.[5] The GTI features much more standard equipment which was not available on any other in the Lupo range, including bi xenon headlights, 15 inch Bathurst alloy wheels and an off black interior. With a DOHC sixteen valve four cylinder engine producing 125 PS (123 hp), the GTI had a top speed of 127 mph (204 km/h) and could accelerate 0 to 60 mph in 7.8 seconds. ## Literature • Hans-Rüdiger Etzold (2012). So wird's gemacht: VW Lupo/SEAT Arosa 1997–2005 (in German) (7th ed.). Delius Klasing Verlag. ISBN 978-3-7688-1182-8. ## Notes 1. ^ Between 1998 and 2006; from 2001, the 3L, GTI models only. 2. ^ Between 2001 and 2006; except 3L, GTI models. ## References 1. ^ "VW Lupo". autobild.de. Retrieved 23 May 2015. 2. ^ a b c * Bernd Wiersch (2012). Volkswagen Typenkunde 1994 bis 2005 (in German). Delius Klasing Verlag. p. 121. ISBN 978-3-7688-3421-6. Als einziges Lupo-Modell wurde der FSI in diesem Jahr (2003, editor) eingestellt. Die Produktion der übrigen Modelle lief bis 2005 weiter. 3. ^ "Auto Express February 2003". Autoexpress.co.uk. 2003-02-04. Retrieved 2011-09-05. 4. ^ Jamie Vondruska (2 December 2003). "Lupo 3L Once Again Enters Guiness Book of World Records". vwvortex.com. Retrieved 4 October 2016. 5. ^ "Evo March 2002". Evo.co.uk. 2002-03-07. Retrieved 2011-09-05.
# A projectile is shot at an angle of pi/3 and a velocity of 8 m/s. How far away will the projectile land? ${x}_{\max} = x = 5.655676106 \text{ }$meters #### Explanation: Solve for the time first $y = {v}_{0} \sin \theta \cdot t + \frac{1}{2} \cdot g \cdot {t}^{2}$ Assuming level ground then $y = 0$. Going up then going down then $y = 0$ $0 = 8 \sin \left(\frac{\pi}{3}\right) t + \frac{1}{2} \cdot \left(- 9.8\right) \cdot {t}^{2}$ $8 \cdot \frac{\sqrt{3}}{2} \cdot t - 4.9 \cdot {t}^{2} = 0$ $t \left(4 \sqrt{3} - 4.9 \cdot t\right) = 0$ ${t}_{1} = 0 \text{ }$seconds and ${t}_{2} = \frac{4 \sqrt{3}}{4.9} = 1.413919027 \text{ }$seconds We can solve for the range $x$ now $x = {v}_{0} \cos \theta \cdot {t}_{2}$ $x = 8 \cdot \cos \left(\frac{\pi}{3}\right) \cdot 1.413919027$ $x = 5.655676106 \text{ }$meters God bless....I hope the explanation is useful.
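As a cross-check (not in the original answer), the level-ground range formula gives the same distance directly, assuming the same $g = 9.8\ \mathrm{m/s^2}$:

$R = \frac{v_0^2 \sin\left(2\theta\right)}{g} = \frac{8^2 \sin\left(\frac{2\pi}{3}\right)}{9.8} = \frac{64 \cdot \frac{\sqrt{3}}{2}}{9.8} \approx 5.656$ meters,

consistent with the value obtained above.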
# Circle A has a center at (2, 7) and an area of 81 pi. Circle B has a center at (4, 3) and an area of 36 pi. Do the circles overlap? If not, what is the shortest distance between them?

Apr 24, 2018

$\textcolor{blue}{\text{Circles intersect}}$

#### Explanation:

First we find the radii of A and B. Area of a circle is $\pi {r}^{2}$.

Circle A: $\pi {r}^{2} = 81 \pi \implies {r}^{2} = 81 \implies r = 9$

Circle B: $\pi {r}^{2} = 36 \pi \implies {r}^{2} = 36 \implies r = 6$

Now that we know the radii of each, we can test whether they intersect, touch in one place, or do not touch.

If the sum of the radii is equal to the distance between the centres, then the circles touch in one place only.

If the sum of the radii is less than the distance between centres, then the circles do not touch.

If the sum of the radii is greater than the distance between centres, then the circles intersect.

We find the distance between centres using the distance formula:

$d = \sqrt{{\left({x}_{2} - {x}_{1}\right)}^{2} + {\left({y}_{2} - {y}_{1}\right)}^{2}}$

$d = \sqrt{{\left(2 - 4\right)}^{2} + {\left(7 - 3\right)}^{2}} = \sqrt{4 + 16} = \sqrt{20} = 2 \sqrt{5}$

$9 + 6 = 15$

$15 > 2 \sqrt{5}$, so the circles intersect.
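For completeness (this check is not in the original answer), comparing the centre distance with the difference of the radii as well as their sum shows that the circles cross at two points rather than one lying inside the other:

$\left|9 - 6\right| = 3 < 2 \sqrt{5} \approx 4.47 < 15 = 9 + 6$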
crch (version 1.0-4) tt: The Truncated Student-t Distribution Description Density, distribution function, quantile function, and random generation for the left and/or right truncated student-t distribution with df degrees of freedom. Usage dtt(x, location = 0, scale = 1, df, left = -Inf, right = Inf, log = FALSE)ptt(q, location = 0, scale = 1, df, left = -Inf, right = Inf, lower.tail = TRUE, log.p = FALSE)rtt(n, location = 0, scale = 1, df, left = -Inf, right = Inf)qtt(p, location = 0, scale = 1, df, left = -Inf, right = Inf, lower.tail = TRUE, log.p = FALSE) Arguments x, q vector of quantiles. p vector of probabilities. n number of observations. If length(n) > 1, the length is taken to be the number required. location location parameter. scale scale parameter. df degrees of freedom (> 0, maybe non-integer). df = Inf is allowed. left left censoring point. right right censoring point. log, log.p logical; if TRUE, probabilities p are given as log(p). lower.tail logical; if TRUE (default), probabilities are P[X <= x] otherwise, P[X > x]. Value dtt gives the density, ptt gives the distribution function, qtt gives the quantile function, and rtt generates random deviates. Details If location or scale are not specified they assume the default values of 0 and 1, respectively. left and right have the defaults -Inf and Inf respectively. The truncated student-t distribution has density $$f(x) = 1/\sigma \tau((x - \mu)/\sigma) / (T((right - \mu)/\sigma) - T((left - \mu)/\sigma))$$ for $$left \le x \le right$$, and 0 otherwise. where $$T$$ and $$\tau$$ are the cumulative distribution function and probability density function of the student-t distribution with df degrees of freedom respectively, $$\mu$$ is the location of the distribution, and $$\sigma$$ the scale. dt
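For orientation, the distribution function that ptt evaluates follows directly from the density quoted above; this is a sketch in the same notation, assuming lower.tail = TRUE and no log scaling:

$$P(X \le q) = \frac{T\left(\frac{q - \mu}{\sigma}\right) - T\left(\frac{left - \mu}{\sigma}\right)}{T\left(\frac{right - \mu}{\sigma}\right) - T\left(\frac{left - \mu}{\sigma}\right)}, \qquad left \le q \le right,$$

with the value 0 below $$left$$ and 1 above $$right$$; the quantile function qtt inverts this relation.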
# Tajima's D: Wikis Note: Many of our articles have direct quotes from sources you can cite, within the Wikipedia article! This article doesn't yet, but we're working on it! See more info or our list of citable articles. # Encyclopedia Tajima's D is a statistical test created by and named after the Japanese researcher Fumio Tajima. The purpose of the test is to distinguish between a DNA sequence evolving randomly ("neutrally") and one evolving under a non-random process, including directional selection or balancing selection, demographic expansion or contraction, genetic hitchhiking, or introgression. A randomly evolving DNA sequence contains mutations with no effect on the fitness and survival of an organism. The randomly evolving mutations are called "neutral", while mutations under selection are "non-neutral". For example, you would expect to find that a mutation which causes prenatal death or severe disease to be under selection. According to Motoo Kimura's neutral theory of molecular evolution, the majority of mutations in the human genome are neutral, ie have no effect on fitness and survival. When looking at the human population as a whole, we say that the population frequency of a neutral mutation fluctuates randomly (ie the percentage of people in the population with the mutation changes from one generation to the next, and this percentage is equally likely to go up or down), through genetic drift. The strength of genetic drift depends on the population size. If a population is at a constant size with constant mutation rate, the population will reach an equilibrium of gene frequencies. This equilibrium has important properties, including the number of segregating sites S / , and the number of nucleotide differences between pairs sampled (these are called pairwise differences). To standardize the pairwise differences, the mean or 'average' number of pairwise differences is used. This is simply the sum of the pairwise differences divided by the number of pairs, and is signified by π. The purpose of Tajima's test is to identify sequences which do not fit the neutral theory model at equilibrium between mutation and genetic drift. In order to perform the test on a DNA sequence or gene, you need to sequence homologous DNA for at least 3 individuals. Tajima's statistic computes a standardized measure of the total number of segregating sites (these are DNA sites that are polymorphic) in the sampled DNA and the average number of mutations between pairs in the sample. The two quantities whose values are compared are both method of moments estimates of the population genetic parameter theta, and so are expected to equal the same value. If these two numbers only differ by as much as one could reasonably expect by chance, then the null hypothesis of neutrality cannot be rejected. Otherwise, the null hypothesis of neutrality is rejected. ## Hypothetical example Lets say that you are a genetic researcher who finds two mutations, a mutation in a gene which causes pre-natal death and a mutation in DNA which has no effect on human health or survival. You publish your findings in a scientific journal, identifying the first mutation as "under negative selection" and the second as "neutral". The neutral mutation gets passed on from one generation to the next, while the mutation under negative selection disappears, since anyone with the mutation cannot reproduce and pass it on to the next generation. 
In order to back your discovery with more scientific evidence, you gather DNA samples from 100 people and determine the exact DNA sequence for the gene in each of these 100. Using all 100 DNA samples as input, you determine Tajima's D on both the detrimental mutation and the 'neutral' DNA. If your hypothesis was correct, then Tajima's test will output "neutral" for the neutral mutation and "non-neutral" for the gene carrying the detrimental mutation.

## Scientific explanation

Under the neutral theory model, for a population at constant size at equilibrium:

$E[\pi]=\theta=E\left[\frac{S}{\sum_{i=1}^{n-1} \frac{1}{i}}\right]=4N\mu$

for diploid DNA, and

$E[\pi]=\theta=E\left[\frac{S}{\sum_{i=1}^{n-1} \frac{1}{i}}\right]=2N\mu$

for haploid. In the above formulas, S is the number of segregating sites, n is the number of samples, and i is the index of summation. But selection, demographic fluctuations and other violations of the neutral model (including rate heterogeneity and introgression) will change the expected values of S and π, so that they are no longer expected to be equal. The difference in the expectations for these two variables (which can be positive or negative) is the crux of Tajima's D test statistic.

$D$ is calculated by taking the difference between the two estimates of the population genetics parameter $\theta$. This difference is called $d$, and D is calculated by dividing $d$ by the square root of its variance $\sqrt{\hat{V}(d)}$ (its standard deviation, by definition).

$D=\frac {d} {\sqrt {\hat{V}(d)} }$

Fumio Tajima demonstrated by computer simulation that the $D$ statistic described above could be modeled using a beta distribution. If the $D$ value for a sample of sequences is outside the confidence interval, then one can reject the null hypothesis of neutral mutation for the sequence in question.

## Statistical test

When performing a statistical test such as Tajima's D, the critical question is whether the value calculated for the statistic is unexpected under a null process. For Tajima's D, the magnitude of the statistic is expected to increase the more the history of the population deviates from a history expected under neutrality. In the example below, we show the calculation of this statistic for some data, and find that it is unusual. In Tajima's test, the null hypothesis is neutral evolution.

## Mathematical details

$D=\frac {d} {\sqrt {\hat{V}(d)} } = \frac {\hat{k} - \frac{S}{a_1} } {\sqrt {[e_1S+e_2S(S-1)]} }$

where

$e_1 = \frac {c_1}{a_1}$

$e_2 = \frac{c_2}{a_1^2+a_2}$

$c_1 = b_1 - \frac {1}{a_1}$

$c_2 = b_2 - \frac{n+2}{a_{1}n} +\frac{a_2}{a_{1}^{2}}$

$b_1 = \frac {n+1}{3(n-1)}$

$b_2 = \frac{2(n^{2}+n+3)}{9n(n-1)}$

$a_1 = \sum_{i=1}^{n-1} \frac{1}{i}$

$a_2 = \sum_{i=1}^{n-1} \frac{1}{i^2}$

$\hat{k}$ and $\frac{S}{a_1}$ are two estimates of the expected number of single nucleotide polymorphisms (SNPs) between two DNA sequences under the neutral mutation model, in a sample of size $n$ from an effective population of size $N$.

The first estimate is the average number of SNPs found in the (n choose 2) pairwise comparisons of sequences $(i,j)$ in the sample:

$\hat{k} = \frac{\sum\sum_{i<j} k_{ij}}{\binom{n}{2}}$

where $k_{ij}$ is the number of SNPs observed between sequences $i$ and $j$.

The second estimate is derived from the expected value of $S$, the total number of polymorphisms in the sample:

$E(S)=a_1M$

Tajima defines $M=4N\mu$, whereas Hartl & Clark use a different symbol to define the same parameter, $\theta=4N\mu$.

## Historical example

The genetic mutation which causes sickle-cell anemia is non-neutral because it affects survival and fitness.
People homozygous for the mutation have the sickle-cell disease, while those without the mutation (homozygous for the wild-type allele) do not have the disease. People with one copy of the mutated allele (heterozygous) do not have the disease, but instead are resistant to malaria. Thus in Africa, where there is a prevalence of the malaria parasite Plasmodium falciparum that is transmitted through mosquitos Anopheles, there is a selective advantage for heterozygous individuals. Meanwhile, in countries such as the USA where the risk of malaria infection is low, the population frequency of the mutation is lower. ## Example Suppose you are a geneticist studying an unknown gene. As part of your research you get DNA samples from four random people (plus yourself). For simplicity, you label your sequence as a string of zeroes, and for the other four people you put a zero when their DNA is the same as yours and a one when it is different. (For this example, the specific type of difference is not important.) Position 12345 67890 12345 67890 Person Y 00000 00000 00000 00000 Person A 00100 00000 00100 00010 Person B 00000 00000 00100 00010 Person C 00000 01000 00000 00010 Person D 00000 01000 00100 00010 Notice the four polymorphic sites (positions where someone differs from you, at 3, 7, 13 and 19 above). Now compare each pair of sequences and get the average number of polymorphisms between two sequences. There are "five choose two" (ten) comparisons that need to be done. Person Y is you! You vs A Person Y 00000 00000 00000 00000 Person A 00100 00000 00100 00010 3 polymorphisms You vs B Person Y 00000 00000 00000 00000 Person B 00000 00000 00100 00010 2 polymorphisms You vs C Person Y 00000 00000 00000 00000 Person C 00000 01000 00000 00010 2 polymorphisms You vs D Person Y 00000 00000 00000 00000 Person D 00000 01000 00100 00010 3 polymorphisms A vs B Person A 00100 00000 00100 00010 Person B 00000 00000 00100 00010 1 polymorphism A vs C Person A 00100 00000 00100 00010 Person C 00000 01000 00000 00010 3 polymorphisms A vs D Person A 00100 00000 00100 00010 Person D 00000 01000 00100 00010 2 polymorphisms B vs C Person B 00000 00000 00100 00010 Person C 00000 01000 00000 00010 2 polymorphisms B vs D Person B 00000 00000 00100 00010 Person D 00000 01000 00100 00010 1 polymorphism C vs D Person C 00000 01000 00000 00010 Person D 00000 01000 00100 00010 1 polymorphism The average number of polymorphisms is ${3 + 2 + 2 + 3 + 1 + 3 + 2 + 2 + 1 + 1\over 10} = 2$. The lower-case d described above is the difference between these two numbers—the average number of polymorphisms found in pairwise comparison (2) and the total number of polymorphic sites (4). Thus d = 2 − 4 = − 2. Since this is a statistical test, you need to assess the significance of this value. A discussion of how to do this is provided below. ## Significance A negative Tajima's D signifies an excess of low frequency polymorphisms, indicating population size expansion and/or positive selection. A positive Tajima's D signifies low levels of both low and high frequency polymorphisms, indicating a decrease in population size and/or balancing selection. However, calculating a conventional "p-value" associated with any Tajima's D value that is obtained from a sample is impossible. Briefly, this is because there is no way to describe the distribution of the statistic that is independent of the true, and unknown, theta parameter (no pivot quantity exists). To circumvent this issue, several options have been proposed. 
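To carry the toy example through to the statistic itself using the formulas from the Mathematical details section (a sketch with rounded intermediate values; the $-2$ above is the raw difference $\hat{k} - S$, whereas the statistic scales $S$ by $a_1$ first): with $n = 5$, $S = 4$ and $\hat{k} = 2$,

$a_1 = 1 + \tfrac{1}{2} + \tfrac{1}{3} + \tfrac{1}{4} = \tfrac{25}{12} \approx 2.083, \qquad \frac{S}{a_1} = 1.92, \qquad d = \hat{k} - \frac{S}{a_1} = 0.08,$

$e_1 \approx 0.0096, \qquad e_2 \approx 0.0039, \qquad \hat{V}(d) = e_1 S + e_2 S(S-1) \approx 0.086,$

$D = \frac{d}{\sqrt{\hat{V}(d)}} \approx \frac{0.08}{0.29} \approx 0.27,$

which is well inside the rough $\pm 2$ rule of thumb mentioned below, as one would expect for such a small sample.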
Tajima (1989) found an empirical similarity between the distribution of the test statistic and a beta distribution with mean zero and variance one. He estimated theta by taking Watterson's estimator and dividing it by the number of samples. Simulations have shown this distribution to be conservative (Fu and Li, 1991), and now that computing power is more readily available this approximation is not used as frequently.

A more nuanced approach was presented in a paper by Simonsen et al. These authors advocated constructing a confidence interval for the true theta value, and then performing a grid search over this interval to obtain the critical values at which the statistic is significant below a particular alpha value. An alternative approach is for the investigator to perform the grid search over the values of theta which they believe to be plausible based on their knowledge of the organism under study. Bayesian approaches are a natural extension of this method.

A very rough rule of thumb for significance is that values greater than +2 or less than -2 are likely to be significant. This rule is based on an appeal to asymptotic properties of some statistics, and thus +/- 2 does not actually represent a critical value for a significance test.

Finally, genome-wide scans of Tajima's D in sliding windows along a chromosomal segment are often performed. With this approach, those regions that have a value of D that greatly deviates from the bulk of the empirical distribution of all such windows are reported as significant. This method does not assess significance in the traditional statistical sense, but it is quite powerful given a large genomic region and is unlikely to falsely identify interesting regions of a chromosome if only the greatest outliers are reported.

## References

[1] Tajima, F. (1989). Statistical Method for Testing the Neutral Mutation Hypothesis by DNA Polymorphism. Genetics, 123: 585-595.
[2] Hartl, D. L. & Clark, A. G. (2007). Principles of Population Genetics, 4th ed. Sinauer Associates, Inc.
[3] Simonsen, K. L., Churchill, G. A. & Aquadro, C. F. (1995). Properties of Statistical Tests of Neutrality for DNA Polymorphism Data. Genetics.

## Computational tools for Tajima's D test

• [4] DnaSP (Windows)
• [5] VariScan (Mac OS X, Linux, Windows)
• [6] Arlequin (Windows)
• [7] Online view of Tajima's D values in the human genome
• [8] Online computation of Tajima's D
• MEGA4
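To make the formulas in the Mathematical details section concrete, here is a minimal Python sketch, written for this summary rather than taken from the references or the tools above, that computes D directly from a set of aligned 0/1 sequences. The input below is the five example sequences (Persons Y, A, B, C, D) from the worked example.

```python
from itertools import combinations

def tajimas_d(sequences):
    """Tajima's D for aligned, equal-length sequences (here, 0/1 strings).

    Implements D = (k_hat - S/a1) / sqrt(e1*S + e2*S*(S-1)) with the
    constants a1, a2, b1, b2, c1, c2, e1, e2 as defined in the text.
    """
    n = len(sequences)
    sites = len(sequences[0])

    # k_hat: average number of pairwise differences over the C(n,2) pairs
    pairs = list(combinations(sequences, 2))
    k_hat = sum(sum(x != y for x, y in zip(s, t)) for s, t in pairs) / len(pairs)

    # S: number of segregating (polymorphic) sites
    S = sum(len({s[i] for s in sequences}) > 1 for i in range(sites))

    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 ** 2 + a2)

    d = k_hat - S / a1
    return d / (e1 * S + e2 * S * (S - 1)) ** 0.5

# The five sequences of the worked example (Person Y, A, B, C, D).
example = [
    "00000000000000000000",
    "00100000000010000010",
    "00000000000010000010",
    "00000010000000000010",
    "00000010000010000010",
]
print(tajimas_d(example))  # here k_hat = 2, S = 4, a1 = 25/12, so d = 2 - 48/25
```

Note that the statistic is undefined when S = 0 (division by zero), and real packages also handle missing data, multiple alleles per site, and so on; this sketch ignores all of that.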
# How can I change the keyboard shortcut for switching the active window?

The default keyboard shortcut (on Windows at least) for switching focus to the next window is Ctrl+F6, and for switching to the previous window it's Shift+Ctrl+F6. How can I change this to Ctrl+Tab and Shift+Ctrl+Tab respectively (or some other pair of combinations which is not used by default)?

-

You'll want to see this... – J. M. Feb 18 '12 at 1:24
@J.M. this command doesn't appear to be in KeyEventTranslations.tr -- interesting. – Mr.Wizard Feb 18 '12 at 6:26
@Mr. Wizard: it's definitely not in Linux (that's why I couldn't post an answer); have you checked Windows by any chance? – J. M. Feb 18 '12 at 6:29
@J.M. yes, I am on Windows 7 and at least the command is not obvious if it is there; searching for "F6" reveals nothing. – Mr.Wizard Feb 18 '12 at 6:35
This discussion on Mathgroup might be of use here. – István Zachar Feb 18 '12 at 13:30

You need to add the following to KeyEventTranslations.tr:

Item[KeyEvent["Tab", Modifiers -> {Control}],
 FrontEndExecute[FrontEndToken["CycleNotebooksForward"]]],
Item[KeyEvent["Tab", Modifiers -> {Shift, Control}],
 FrontEndExecute[FrontEndToken["CycleNotebooksBackward"]]],

This will map Control-Tab and Control-Shift-Tab to cycling between notebooks. For some reason, using the Tab key sometimes fails, but any alternative shortcut could be used (for example Ctrl-).

On Windows, KeyEventTranslations.tr is located in $InstallationDirectory\SystemFiles\FrontEnd\TextResources\Windows

-

For example, Item[KeyEvent["", Modifiers -> {Control}], FrontEndExecute[FrontEndToken["CycleNotebooksForward"]]], works, too. – Andrew MacFie Feb 18 '12 at 14:46
Perhaps Ctrl+Tab fails sometimes because it is already mapped? – Andrew MacFie Feb 26 '12 at 16:31
This solution doesn't seem to work when I want to switch away from a minimized window. – Andrew MacFie Feb 26 '12 at 16:32

Apparently Ctrl+F6 and Ctrl+Shift+F6 are default Windows keyboard shortcuts, although I was only aware of the Tab variants. Because of this, these commands are not (apparently) configurable from within Mathematica. Further, Mathematica does not recognize the Tab commands. It may be possible to rig something using SetSelectedNotebook, but so far I have failed to do this within the confines of KeyEventTranslations.tr and MenuSetup.tr. Perhaps an EventHandler within a Palette could be made to work, but I am tired of this problem.

-

Aha -- they're Windows's, not Mathematica's. – Andrew MacFie Feb 18 '12 at 13:38
FWIW, on a Mac it's CMD + `, and this does work within Mathematica by default. – zentient Jan 21 '14 at 12:32
# Is there a link between training and test errors based on k-fold CV and not doing CV?

I am using Matlab to train a feedforward NN using the cross validation (CV) approach. My understanding of the CV approach is the following. (Please correct me where wrong.)

1. Let X be the entire dataset with Y as the label set. Split X in a 90/10 ratio to get [Xtrain, Xtest] using the holdout approach, by calling cvpartition(Y,'Holdout',0.1,'Stratify',true).
2. Apply CV on Xtrain: for every fold I calculate the accuracy. At the end of the CV loop I have an average accuracy score, which I denote by accCV. Inside the CV loop, xtrain is further split into [xtrain_cv, xtrain_val].
3. After the CV loop, I reinitialize the weights and re-train a new model using Xtrain. Then I get a training accuracy, which I denote by accTrain.
4. Using the model obtained in Step 3, I evaluate on the held-out set Xtest and consider this to be the generalization performance, that is, the performance on unseen future data. I call this accuracy accTest.

Question 1: Is it possible that accCV will be less than the accuracy over the train set Xtrain when not using the CV approach? That is, if I train the NN model over Xtrain only once and record its accuracy as accTrain, is it possible that accCV ~ accTrain? Intuitively, accCV should be close to the accuracy obtained without CV, since the dataset is the same (Xtrain). If this is the case, then why use CV when outside the CV we do not reuse the model that was created inside the CV? What does it tell us?

Question 2: If accCV < accTest, but the accuracy on the entire dataset Xtrain without using CV is close to that of accTest (accTrain ~ accTest), are we doing something wrong? What is the best case scenario? Is it accCV ~ accTest?

It is expected that accCV < accTrain: the former is the accuracy on the test folds (averaged over all the splits), so it represents the models' scores on data they have not seen. Similarly, you would expect accTrain > accTest.

There are two main reasons to evaluate a model, whether by k-fold cross-validation or a simple train/test split: for hyperparameter optimization / model selection, or to estimate future performance. (N.B., k-fold should generally give better estimates than a simple train/test split.) If you make any decisions based on the scores, then they no longer represent unbiased estimates of future performance.

If, in your setup, you make no decisions based on step 2, then you should expect accCV ~ accTest, and there's no real reason to include that step. If you do make decisions based on step 2, you may expect accCV > accTest, though the gap is probably substantially smaller than the gap in accTrain > accTest.

You may see discrepancies here, due to natural variation in the datasets, or perhaps data leakage.

• Thank you for your answer; however, a few points are unclear. Could you please clarify? (1) I could not understand what you meant by "If you make any decisions based on the scores, then they no longer represent unbiased estimates of future performance." Did you mean to have separate unseen data representing future data that should not be used in the CV loop, and to use this dataset for performance evaluation? (2) If accCV ~ accTest, then is that a good or a bad thing? (3) Is my creation of the test set (future unseen data), which is never used in the CV loop, correct? I called the accuracy on this test set accTest.
– Sm1 Jan 27 '20 at 19:56
• (1,3) Your setup, with k-fold CV on the train set and a separate test set, is most commonly used when you want to do model selection: you fit many models with CV on the train set, choose the model pipeline with the best accCV, refit that pipeline to the entire training set, and finally score on the test set, getting accTest. If you are not doing model selection, then you can get rid of either your step 2 or 4. Read some of the Related questions. (2) accCV ~ accTest is good; if not, then your test set is not iid with the train set, or you have some data leakage, or... – Ben Reiniger Jan 27 '20 at 20:17
• Just to confirm: by accTrain you meant the training accuracy on the 90% split that is fed to the CV procedure, not the training data within a CV fold. Also, if we do training by CV then there is no need for a separate test set for generalization purposes, since the test fold will serve that purpose. Is my understanding correct? Thank you very much for your help and clarifications. – Sm1 Jan 28 '20 at 18:55
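The question is about Matlab, but the workflow being discussed (holdout split, CV on the training portion only, refit, then a single test-set score) is perhaps easiest to see in a short scikit-learn sketch; the dataset, network size, and fold count below are arbitrary illustrative choices, not part of the original question.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, random_state=0)

# Step 1: stratified 90/10 holdout split (Xtrain / Xtest)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)

# Step 2: k-fold CV on the training set only -> accCV
accCV = cross_val_score(clf, X_tr, y_tr, cv=5, scoring="accuracy").mean()

# Step 3: refit on the full training set -> accTrain (resubstitution accuracy)
clf.fit(X_tr, y_tr)
accTrain = clf.score(X_tr, y_tr)

# Step 4: score once on the untouched test set -> accTest
accTest = clf.score(X_te, y_te)

print(f"accCV={accCV:.3f}  accTrain={accTrain:.3f}  accTest={accTest:.3f}")
# Typically accTrain >= accCV, with accCV close to accTest, as described in the answer.
```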
# Base classes for Matrix Groups¶ sage: G = GL(2,5); G General Linear Group of degree 2 over Finite Field of size 5 sage: TestSuite(G).run() sage: g = G.1; g [4 1] [4 0] sage: TestSuite(g).run() We test that trac ticket #9437 is fixed: sage: len(list(SL(2, Zmod(4)))) 48 AUTHORS: • William Stein: initial version • David Joyner (2006-03-15): degree, base_ring, _contains_, list, random, order methods; examples • William Stein (2006-12): rewrite • David Joyner (2007-12): Added invariant_generators (with Martin Albrecht and Simon King) • David Joyner (2008-08): Added module_composition_factors (interface to GAP’s MeatAxe implementation) and as_permutation_group (returns isomorphic PermutationGroup). • Simon King (2010-05): Improve invariant_generators by using GAP for the construction of the Reynolds operator in Singular. • Sebastian Oehms (2018-07): Add subgroup() and ambient() see trac ticket #25894 class sage.groups.matrix_gps.matrix_group.MatrixGroup_base Base class for all matrix groups. This base class just holds the base ring, but not the degree. So it can be a base for affine groups where the natural matrix is larger than the degree of the affine group. Makes no assumption about the group except that its elements have a matrix() method. ambient() Return the ambient group of a subgroup. OUTPUT: A group containing self. If self has not been defined as a subgroup, we just return self. EXAMPLES: sage: G = GL(2,QQ) sage: m = matrix(QQ, 2,2, [[3, 0],[~5,1]]) sage: S = G.subgroup([m]) sage: S.ambient() is G True as_matrix_group() Return a new matrix group from the generators. This will throw away any extra structure (encoded in a derived class) that a group of special matrices has. EXAMPLES: sage: G = SU(4,GF(5)) sage: G.as_matrix_group() Matrix group over Finite Field in a of size 5^2 with 2 generators ( [ a 0 0 0] [ 1 0 4*a + 3 0] [ 0 2*a + 3 0 0] [ 1 0 0 0] [ 0 0 4*a + 1 0] [ 0 2*a + 4 0 1] [ 0 0 0 3*a], [ 0 3*a + 1 0 0] ) sage: G = GO(3,GF(5)) sage: G.as_matrix_group() Matrix group over Finite Field of size 5 with 2 generators ( [2 0 0] [0 1 0] [0 3 0] [1 4 4] [0 0 1], [0 2 1] ) subgroup(generators, check=True) Return the subgroup generated by the given generators. INPUT: • generators – a list/tuple/iterable of group elements of self • check – boolean (optional, default: True). Whether to check that each matrix is invertible. OUTPUT: The subgroup generated by generators as an instance of FinitelyGeneratedMatrixGroup_gap EAMPLES: sage: UCF = UniversalCyclotomicField() sage: G = GL(3, UCF) sage: e3 = UCF.gen(3); e5 =UCF.gen(5) sage: m = matrix(UCF, 3,3, [[e3, 1, 0], [0, e5, 7],[4, 3, 2]]) sage: S = G.subgroup([m]); S Subgroup with 1 generators ( [E(3) 1 0] [ 0 E(5) 7] [ 4 3 2] ) of General Linear Group of degree 3 over Universal Cyclotomic Field sage: CF3 = CyclotomicField(3) sage: G = GL(3, CF3) sage: e3 = CF3.gen() sage: m = matrix(CF3, 3,3, [[e3, 1, 0], [0, ~e3, 7],[4, 3, 2]]) sage: S = G.subgroup([m]); S Subgroup with 1 generators ( [ zeta3 1 0] [ 0 -zeta3 - 1 7] [ 4 3 2] ) of General Linear Group of degree 3 over Cyclotomic Field of order 3 and degree 2 class sage.groups.matrix_gps.matrix_group.MatrixGroup_gap(degree, base_ring, libgap_group, ambient=None, category=None) Base class for matrix groups that implements GAP interface. INPUT: • degree – integer. The degree (matrix size) of the matrix group. • base_ring – ring. The base ring of the matrices. • libgap_group – the defining libgap group. • ambient – A derived class of ParentLibGAP or None (default). 
The ambient class if libgap_group has been defined as a subgroup. Element structure_description(G, latex=False) Return a string that tries to describe the structure of G. This methods wraps GAP’s StructureDescription method. For full details, including the form of the returned string and the algorithm to build it, see GAP’s documentation. INPUT: • latex – a boolean (default: False). If True return a LaTeX formatted string. OUTPUT: • string Warning From GAP’s documentation: The string returned by StructureDescription is not an isomorphism invariant: non-isomorphic groups can have the same string value, and two isomorphic groups in different representations can produce different strings. EXAMPLES: sage: G = CyclicPermutationGroup(6) sage: G.structure_description() 'C6' sage: G.structure_description(latex=True) 'C_{6}' sage: G2 = G.direct_product(G, maps=False) sage: LatexExpr(G2.structure_description(latex=True)) C_{6} \times C_{6} This method is mainly intended for small groups or groups with few normal subgroups. Even then there are some surprises: sage: D3 = DihedralGroup(3) sage: D3.structure_description() 'S3' We use the Sage notation for the degree of dihedral groups: sage: D4 = DihedralGroup(4) sage: D4.structure_description() 'D4' Works for finitely presented groups (trac ticket #17573): sage: F.<x, y> = FreeGroup() sage: G=F / [x^2*y^-1, x^3*y^2, x*y*x^-1*y^-1] sage: G.structure_description() 'C7' And matrix groups (trac ticket #17573): sage: groups.matrix.GL(4,2).structure_description() 'A8' class sage.groups.matrix_gps.matrix_group.MatrixGroup_generic(degree, base_ring, category=None) Base class for matrix groups over generic base rings You should not use this class directly. Instead, use one of the more specialized derived classes. INPUT: • degree – integer. The degree (matrix size) of the matrix group. • base_ring – ring. The base ring of the matrices. Element degree() Return the degree of this matrix group. OUTPUT: Integer. The size (number of rows equals number of columns) of the matrices. EXAMPLES: sage: SU(5,5).degree() 5 matrix_space() Return the matrix space corresponding to this matrix group. This is a matrix space over the field of definition of this matrix group. EXAMPLES: sage: F = GF(5); MS = MatrixSpace(F,2,2) sage: G = MatrixGroup([MS(1), MS([1,2,3,4])]) sage: G.matrix_space() Full MatrixSpace of 2 by 2 dense matrices over Finite Field of size 5 sage: G.matrix_space() is MS True sage.groups.matrix_gps.matrix_group.is_MatrixGroup(x) Test whether x is a matrix group. EXAMPLES: sage: from sage.groups.matrix_gps.matrix_group import is_MatrixGroup sage: is_MatrixGroup(MatrixSpace(QQ,3)) False sage: is_MatrixGroup(Mat(QQ,3)) False sage: is_MatrixGroup(GL(2,ZZ)) True sage: is_MatrixGroup(MatrixGroup([matrix(2,[1,1,0,1])])) True
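As a quick orientation, the following short sketch ties together several of the methods documented above (subgroup(), ambient(), degree(), matrix_space(), structure_description(), is_MatrixGroup()). It is meant to be run inside a Sage session; the particular group and matrix are arbitrary choices for illustration, and printed forms may vary between Sage versions.

```python
# Run inside a Sage session; GL, GF and matrix are Sage globals.
from sage.groups.matrix_gps.matrix_group import is_MatrixGroup

G = GL(2, GF(7))                           # an ambient matrix group
g = matrix(GF(7), 2, 2, [[1, 1], [0, 1]])  # a unipotent matrix of order 7
S = G.subgroup([g])                        # subgroup() as documented above

print(S.ambient() is G)                    # True: ambient() recovers the defining group
print(G.degree())                          # 2: the size of the matrices
print(G.matrix_space())                    # the underlying 2x2 matrix space over GF(7)
print(is_MatrixGroup(S))                   # True
print(S.structure_description())           # GAP's StructureDescription; 'C7' expected here
```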
Oh No, Not You Again!

thammond – 2007 October 02

Oh dear. Yesterday's post "Using ISO URNs" was way off the mark. I don't know. I thought that walk after lunch had cleared my mind. But apparently not. I guess I was fixing on eyeballing the result in RDF/N3 rather than the logic to arrive at that result. (Continues.)

There are three namespace cases (and I was only wrong in two out of the three, I think):

1. "pdf:"

I was originally going to suggest the use of "data:" for the PDF information dictionary terms here, but then lunged at using an HTTP URI (the URI of the page for the PDF Reference manual on the Adobe site) for regular orthodox conformancy and good churchgoing:

@prefix pdf: <http://www.adobe.com/devnet/pdf/pdf_reference.html> .

This was wrong on two counts:

a) Afaik no such use for this URI as a namespace has ever been made by Adobe. And it is in the gift of the DNS tenant (elsewhere called "owner") to mint URIs under that namespace and to ascribe meanings to those URIs.

b) Also, the URI is not best suited to a role as namespace URI, since RDF namespaces typically end in "/" or "#" to make the division between namespace and term clearer. (In XML it doesn't make a blind bit of difference, as XML namespaces are just a scoping mechanism.) So a property URI such as

http://www.adobe.com/devnet/pdf/pdf_reference.htmlAuthor

does the job but looks pretty rough and, more importantly, precludes (or at least complicates) the possibility of dereferencing the URI to return a page with human- or machine-readable semantics. Better in RDF terms would be a URI with a clear separator before the term, e.g.

http://www.adobe.com/devnet/pdf/pdf_reference/Author

In the absence of any published namespace from Adobe for these terms, I think it would have been more prudent to fall back on "data:" URIs. So

@prefix pdf: <data:,> .

data:,Author
data:,CreationDate
data:,Creator
etc.

This is correct (afaict) and merely provides a URI representation for bare strings. Had we wanted to relate those terms to the PDF Reference we might have tried something like:

data:,PDF%20Reference:Author
data:,PDF%20Reference:CreationDate
data:,PDF%20Reference:Creator
etc.

And if we had wanted to make those truly secondary RDF resources related to a primary RDF resource for the "namespace" we could have attempted something like:

data:,PDF%20Reference#Author
data:,PDF%20Reference#CreationDate
data:,PDF%20Reference#Creator
etc.

Note though that the "data:" specification is not clear about the implications of using "#". (Is it allowed, or isn't it?) We must suspect that it is not allowed, but see this mail from Chris Lilley (W3C) which is most insightful.

2. "pdfx:"

The example was just for demo purposes, but (as per 1a above) it is incumbent on the namespace authority (here ISO) to publish a URI for the term to be used. Anyhow, the namespace URI I cited

@prefix pdfx: <urn:iso:std:iso-iec:15930:-1:2001> .

would not have been correct and would have led to these mangled URIs:

urn:iso:std:iso-iec:15930:-1:2001GTS_PDFXVersion
urn:iso:std:iso-iec:15930:-1:2001GTS_PDFXConformance

It should have been something closer to

@prefix pdfx: <urn:iso:std:iso-iec:15930:-1:2001:> .

urn:iso:std:iso-iec:15930:-1:2001:GTS_PDFXVersion
urn:iso:std:iso-iec:15930:-1:2001:GTS_PDFXConformance

3. "_usr:"

This was the one correct call in yesterday's post.

@prefix _usr: <data:,> .

The only problem here would be to differentiate these terms from the terms listed in the PDF Reference manual, although the PDF information dictionary makes no such distinction itself.
To sum up, perhaps the best way of rendering the PDF information dictionary keys in RDF would be to use "data:" URIs for all (i.e. a methodology for URI-ifying strings) and to bear in mind that at some point ISO might publish URNs for the PDF/X mandated keys: 'GTS_PDFXVersion' and 'GTS_PDFXConformance'. So,

# document infodict (object 58: 476983):
@prefix pdfx: <data:,> .
@prefix pdf: <data:,> .
@prefix _usr: <data:,> .

<> _usr:Apag_PDFX_Checkup "1.3";
   pdf:Author "Scott B. Tully";
   pdf:CreationDate "D:20020320135641Z";
   pdf:Creator "Unknown";
   pdfx:GTS_PDFXConformance "PDF/X-1a:2001";
   pdfx:GTS_PDFXVersion "PDF/X-1:2001";
   pdf:Keywords "PDF/X-1";
   pdf:ModDate "D:20041014121049+10'00'";
   pdf:Producer "Acrobat Distiller 4.05 for Macintosh";
   pdf:Subject "A document from our PDF archive. ";
   pdf:Title "Tully Talk November 2001";
   pdf:Trapped "False" .
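For what it's worth, the same modelling can be reproduced with rdflib in a few lines. This is only my illustration of the idea (the subject and literal values are taken from the infodict above, and the choice of rdflib is mine, not the original post's):

```python
from rdflib import Graph, Literal, Namespace, URIRef

# "data:," as a namespace simply URI-ifies the bare info-dictionary key names;
# pdfx: could later be repointed at an ISO URN ending in ":" if one is published.
PDF = Namespace("data:,")
PDFX = Namespace("data:,")

g = Graph()
g.bind("pdf", PDF)

doc = URIRef("")  # stands in for <>, the document itself
g.add((doc, PDF.Author, Literal("Scott B. Tully")))
g.add((doc, PDF.CreationDate, Literal("D:20020320135641Z")))
g.add((doc, PDFX.GTS_PDFXVersion, Literal("PDF/X-1:2001")))

print(g.serialize(format="n3"))  # rdflib >= 6 returns a str here
```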
Discussion Forum

Que. An object can have which of the following multiplicities?
a. Zero
b. One
c. More than one
d. All of the above.

Answer: All of the above.
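Multiplicity just says how many objects can sit at one end of an association. A tiny Python illustration (the class and attribute names are invented for this example):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Wheel:
    size: int

@dataclass
class Car:
    # multiplicity 0..1: a car may or may not have a current driver
    driver: Optional[str] = None
    # multiplicity exactly 1: every car has one engine identifier
    engine_id: str = "E-0001"
    # multiplicity 0..*: a car is associated with any number of wheel objects
    wheels: List[Wheel] = field(default_factory=list)

c = Car(wheels=[Wheel(16) for _ in range(4)])
print(c.driver, c.engine_id, len(c.wheels))  # None E-0001 4
```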
# Normal vector artifacts with NVMeshMender

## Recommended Posts

Posted (edited)

I'm working on a mesh converter (from OBJ to a custom binary format). I've been trying to get tangent vector calculation to work properly, but there were some issues with my code, so I switched to NVMeshMender (though without the "d3d9" dependency; I changed the source slightly to use glm). Problem is, it's still not working, not even without the tangents. Whether I instruct NVMeshMender to (re)calculate the normals or not, it introduces artifacts in the raw normals, which I did not have before.

I load the OBJ file line-by-line, collect unique vertices (which have equal V, N, UV indices), then run MeshMender on this data. I store the normals and tangents packed in GL_INT_2_10_10_10_REV format.

Strange thing is, only a couple of normals are messed up, and it seems to be random. Here's a screenshot (note it might seem too dark but that's because I forgot to turn off the tone mapping pass). The Iron Man model does not seem to have artifacts, but nearly every other model seems to.

By posting this I'm hoping that somebody will recognize this kind of artifact and can point me in the right direction. Any help is greatly appreciated!

EDIT: When I also export the normals from Blender in the OBJ, the results are slightly different but the artifacts are still present (e.g. the monkey's eyes have more artifacts horizontally)...

Edited by bodigy

##### Share on other sites

Do all parts of the mesh have unique uv coordinates? To me it looks a bit like a normal map that has been generated from a mesh that has overlapping uv coordinates.

##### Share on other sites

Posted (edited)

This shouldn't have anything to do with UVs, if I'm not mistaken. These are ONLY the "raw" view-space normal vectors on the screenshots; they simply come from the vertex attribute array. Just to be sure, I tried with the same mesh but all UVs removed, and I'm seeing the same thing. :( ... :)

Edited by bodigy

##### Share on other sites

Ok, do you normalize the normals before storing and using them? I use this code to generate the normal, tangent and binormal; perhaps you can make use of it and see if the result is different:

void CalculateNormalTangentBinormal(D3DXVECTOR3* P0, D3DXVECTOR3* P1, D3DXVECTOR3* P2,
                                    D3DXVECTOR2* UV0, D3DXVECTOR2* UV1, D3DXVECTOR2* UV2,
                                    D3DXVECTOR3* normal, D3DXVECTOR3* tangent, D3DXVECTOR3* binormal)
{
    D3DXVECTOR3 P = *P1 - *P0;
    D3DXVECTOR3 Q = *P2 - *P0;

    // Cross product
    normal->x = P.y * Q.z - P.z * Q.y;
    normal->y = P.z * Q.x - P.x * Q.z;
    normal->z = P.x * Q.y - P.y * Q.x;

    float s1 = UV1->x - UV0->x;
    float t1 = UV1->y - UV0->y;
    float s2 = UV2->x - UV0->x;
    float t2 = UV2->y - UV0->y;

    float tmp = 0.0f;
    if (fabsf(s1 * t2 - s2 * t1) <= 0.0001f)
        tmp = 1.0f;
    else
        tmp = 1.0f / (s1 * t2 - s2 * t1);

    tangent->x = (t2 * P.x - t1 * Q.x) * tmp;
    tangent->y = (t2 * P.y - t1 * Q.y) * tmp;
    tangent->z = (t2 * P.z - t1 * Q.z) * tmp;

    binormal->x = (s1 * Q.x - s2 * P.x) * tmp;
    binormal->y = (s1 * Q.y - s2 * P.y) * tmp;
    binormal->z = (s1 * Q.z - s2 * P.z) * tmp;

    D3DXVec3Normalize(normal, normal);
    D3DXVec3Normalize(tangent, tangent);
    D3DXVec3Normalize(binormal, binormal);
}

##### Share on other sites

Posted (edited)

I used to have similar code which I replaced with the MeshMender, so now I tried reverting back (and telling the mender not to recalculate normals) and to my surprise it had the same artifacts! I then realized there's something else I changed at the same time.... Which is the GL_INT_2_10_10_10_REV pack logic...
With my old version, it works fine -- unfortunately I wrote that a long time ago and now I realize it relies on undefined behavior. :(

http://stackoverflow.com/questions/3784996/why-does-left-shift-operation-invoke-undefined-behaviour-when-the-left-side-oper

return U32 (
    (((I32)(w) & 3) << 30) |
    (((I32)(z * 511.0f) & 1023) << 20) |
    (((I32)(y * 511.0f) & 1023) << 10) |
    ((I32)(x * 511.0f) & 1023)
);

Now I'm trying to come up with something which isn't undefined... Stay tuned :) Or if anybody has some code I'd be grateful!

Edited by bodigy

##### Share on other sites

Posted (edited)

Partial success - I managed to come up with the below, which works with normals :) But now I see issues with tangents, probably because of some other issue I have.

U32 packNormalizedFloat_2_10_10_10_REV(float x, float y, float z, float w)
{
    const U32 xs = x < 0;
    const U32 ys = y < 0;
    const U32 zs = z < 0;
    const U32 ws = w < 0;
    return U32
    (
        ws << 31 | ((U32)(w       + (ws << 1)) &   1) << 30 |
        zs << 29 | ((U32)(z * 511 + (zs << 9)) & 511) << 20 |
        ys << 19 | ((U32)(y * 511 + (ys << 9)) & 511) << 10 |
        xs <<  9 | ((U32)(x * 511 + (xs << 9)) & 511)
    );
}

Edited by bodigy

##### Share on other sites

Here are the tangent vector artifacts:

##### Share on other sites

Posted (edited)

This is now resolved. The problem wasn't with NVMeshMender of course... :) I had several mistakes all working together against me. Hope someone finds my GL_INT_2_10_10_10_REV pack function useful - since I personally couldn't find anything with Google!

Edited by bodigy
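As an independent sanity check on the layout used above (2 bits for w at the top, then 10 bits each for z, y and x in the low bits), here is a small Python sketch, not taken from the thread, that packs a signed-normalized vector into the same two's-complement fields and unpacks it again:

```python
def pack_2_10_10_10_rev(x, y, z, w):
    """Pack floats in [-1, 1] (w effectively -1, 0 or 1) into the
    GL_INT_2_10_10_10_REV signed-normalized layout."""
    def to_snorm(v, bits):
        hi = (1 << (bits - 1)) - 1            # e.g. 511 for 10 bits
        i = max(-hi, min(hi, int(round(v * hi))))
        return i & ((1 << bits) - 1)          # store as a two's-complement field
    return (to_snorm(w, 2) << 30) | (to_snorm(z, 10) << 20) | \
           (to_snorm(y, 10) << 10) | to_snorm(x, 10)

def unpack_2_10_10_10_rev(packed):
    def from_field(raw, bits):
        if raw & (1 << (bits - 1)):           # sign-extend the field
            raw -= 1 << bits
        return raw / float((1 << (bits - 1)) - 1)
    return tuple(from_field((packed >> s) & ((1 << b) - 1), b)
                 for s, b in ((0, 10), (10, 10), (20, 10), (30, 2)))

v = (0.267, -0.534, 0.802, 1.0)
print(unpack_2_10_10_10_rev(pack_2_10_10_10_rev(*v)))  # approximately (x, y, z, w)
```

A round trip like this makes it easy to spot sign-extension or masking mistakes, which tend to show up as a few seemingly random flipped normals of the sort described in the thread.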
# Let $G$ be a finite abelian group with elements $a_1,a_2,\dots,a_n$. If $G$ has more than one element of order $2$ then $a_1a_2\dots a_n=1$. Let $G$ be a finite abelian group with elements $a_1,a_2,\dots,a_n$. If $G$ has more than one element of order $2$ then $a_1a_2\dots a_n=1$. Attempt Clearly, if $a_i$ is not of order 2, the inverse of $a_i$ must be in the product. So the elements left are all of order $2$ or identity, say $b_1,b_2,\dots,b_m$ Since $b_i^2=1$ for $i=1,\dots,m$. Let $H=\{1,b_1,\dots,b_m\}$ Then $H \cong C_2\times C_2 \times\dots \times C_2$. For $C_2\times C_2$, it is indeed $V$-group $\{1,a,b,ab\}$. Clearly, $1abab=1$. Assume the result holds for direct product of less than $k$ cyclic groups of order $2$. Let $H$ be a direct product of $k$ cyclic groups of order $2$. Write $H=\langle b_1\rangle \times \dots \times \langle b_k\rangle$. Consider $K=\langle b_1\rangle \times \dots\times \langle b_{k-1} \rangle$. Then the result holds for $K$. Now I need to relate this result to $H$. • The solution is fine except for the last sentence. I do not see what your argument is there. – Tobias Kildetoft Nov 11 '16 at 10:46 • @TobiasKildetoft Ouh I realized my mistake. Then I think my solution is not complete – Alan Wang Nov 11 '16 at 10:47 • I agree it is not complete, but you are on the right track by writing the relevant elements as a direct product. I would consider each entry in such a tuple separately in the product and do a bit of counting. – Tobias Kildetoft Nov 11 '16 at 10:51 • @TobiasKildetoft I continue my attempt by using induction but have no idea how to relate the induction hypothesis to complete the induction. – Alan Wang Nov 11 '16 at 11:00 To finish your inductive proof, first look at the elements in $H$ that doesn't contain a factor $b_k$ (we'll include $1$ here for counting purposes). This is just the product of all elements of $K$, so by the inductive hypothesis, they all multiply to $1$. Now look at the elements of $H$ that do contain a factor $b_k$. This is the same as above, except that each element now carries an additional factor $b_k$ (remember to include the lone $b_k$ as well). The product of all those elements is therefore $b_k^{|K|}$ times the product of all elements of $K$, which simplifies to just $b_k^{|K|}$ by the inductive hypothesis. Lastly, note how many elements there are in $K$, and you see that $b_k^{|K|} = 1$. One could also try to appeal to some sort of symmetry moral with this problem (this is not a valid proof, by any measure, but I personally like to think along these lines when pondering the eternal question "yeah... but why?"). Note that $(a_1a_2\cdots a_n)^2 = 1$, so $a_1a_2\cdots a_n$ is either the identity or some degree-2 element. If there is only one degree $2$ element in the group, then there's nothing wrong with $a_1a_2\cdots a_n$ being that one element. But if there are more than one, how would the group know which one to pick? The group being abelian means that there is no algebraic property that distinguishes any of the order $2$ elements, but "being the product of all the elements in the group" is a pretty distinguishing feature. The most (/ only?) consistent choice for $a_1a_2\cdots a_n$ is therefore $1$. • Is it $|K|=2^{k-1}$, so $b_k^{2^{k-1}}=1$? – Alan Wang Nov 11 '16 at 11:27 • A bit confusion here. Isn't it $|K|=|b_1|\dots |b_{k-1}|=2^{k-1}$? – Alan Wang Nov 11 '16 at 11:29 • @AlanWang Exactly (I misread your exponent with my previous comment). 
So the product of all elements in $H$ that don't have a $b_k$ in them is $1$, and the product of all elements in $H$ that do have $b_k$ in them is $b_k^{2^{k-1}} = 1$. Therefore the product of all the elements in $H$ is also $1$. – Arthur Nov 11 '16 at 11:30

Consider $H=\{g\in G:g^2=1\}$; then $H$ is a subgroup of $G$. Moreover, the product of the elements not in $H$ is $1$, because each element outside $H$ is paired there with its inverse (which is distinct from it). Thus you can assume, without loss of generality, that $H=G$. Therefore $H\cong C_2^n$. If $n>1$, there is an automorphism that swaps any two components. Since the product of all elements is invariant under automorphisms…
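The statement (and the role played by the number of order-2 elements) is easy to sanity-check by brute force on small abelian groups written additively as products of cyclic groups. This is only an illustration of the result, not part of the proof:

```python
from itertools import product

def order_two_count_and_total(orders):
    """For Z/n1 x ... x Z/nk (written additively), return the number of
    elements of order exactly 2 and the sum of all group elements."""
    elements = list(product(*(range(n) for n in orders)))
    total = tuple(sum(e[i] for e in elements) % orders[i] for i in range(len(orders)))
    order_two = sum(1 for e in elements
                    if any(e) and all(2 * ei % ni == 0 for ei, ni in zip(e, orders)))
    return order_two, total

# Claim: if there is more than one element of order 2, the sum is the identity;
# with exactly one such element, the sum is that element.
for orders in [(2, 2), (2, 4), (2, 2, 2), (4, 4), (8, 2), (6,), (12,)]:
    n2, total = order_two_count_and_total(orders)
    print(orders, "order-2 elements:", n2, "sum of all elements:", total)
```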
# How does this sequence grow Let $a(n)$ be the number of solutions of the equation $a^2+b^2\equiv -1 \pmod {p_n}$, where $p_n$ is the n-th prime and $0\le a \le b \le \frac{p_n-1}2$. Is the sequence $a(1),a(2),a(3),\dots$ non-decreasing? Data for the first thousand values of the sequence supports this conjecture. Here is an example for $n=5$: The fifth prime is 11. The equation $a^2+b^2 \equiv -1 \pmod {11}$ has just two solutions with the required conditions on $a$ and $b$, namely: $1^2+3^2=10$ and $4^2+4^2=32$. Here are the first fifty values of $a(n)$: 0,1,1,1,2,2,3,3,3,4,4,5,6,6,6,7,8,8,9,9,10,10,11,12,13,13,13,14,14,15,16,17,18,18,19,19,20,21,21,22,23,23,24,25,25,25,27,28,29,29 - The answer is yes, and the number of solutions with a prime $p$ is $\lfloor \frac{p+5}{8} \rfloor$ when $p \not\equiv 1 \pmod{8}$ and is $\lfloor \frac{p+5}{8} \rfloor + 1$ when $p \equiv 1 \pmod{8}$. The equation $a^{2} + b^{2} + c^{2} = 0$ defines a conic in $\mathbb{P}^{2}/\mathbb{F}_{p}$. If $p > 2$ this conic has a point on it (by the standard pigeonhole argument that there is a solution to $a^{2} + b^{2} \equiv -1 \pmod{p}$), and so it is isomorphic to $\mathbb{P}^{1}$. Hence, there are $p+1$ points on this conic in $\mathbb{P}^{2}$. Every such point has the form $(a : b : 0)$ or $(a : b : 1)$. If $p \equiv 3 \pmod{4}$, there are no points of the form $(a : b : 0)$, while if $p \equiv 1 \pmod{4}$, then there is a solution to $x^{2} \equiv -1 \pmod{p}$ and $(1 : \pm x : 0)$ give two such points. Hence the number of solutions to $a^{2} + b^{2} \equiv -1 \pmod{p}$ with $0 \leq a \leq p-1$, $0 \leq b \leq p-1$ is $p+1$ or $p-1$ depending on what $p$ is mod $4$. Now it takes a bit more thought and some careful keeping track of solutions with $a$ or $b$ equal to zero, or $a = b$ to derive the formula. @David - Your formula and mine are the same if $p > 2$. Note that Ceiling[2/8] = 1. –  Jeremy Rouse Jun 12 '14 at 10:36
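Both the non-decreasing behaviour of the listed data and the closed form given in the answer are easy to check numerically. A small brute-force sketch (illustration only):

```python
def primes(count):
    """First `count` primes by trial division."""
    ps, n = [], 2
    while len(ps) < count:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

def a(p):
    """Count pairs 0 <= a <= b <= (p-1)/2 with a^2 + b^2 = -1 (mod p)."""
    half = (p - 1) // 2
    return sum((x * x + y * y + 1) % p == 0
               for y in range(half + 1) for x in range(y + 1))

def closed_form(p):
    """Formula from the answer, valid for odd primes p."""
    return (p + 5) // 8 + (1 if p % 8 == 1 else 0)

for i, p in enumerate(primes(15), start=1):
    print(i, p, a(p), closed_form(p) if p > 2 else a(p))
```

For the primes covered, the brute-force count reproduces the values listed in the question and agrees with the closed form, which (being non-decreasing in p) settles the monotonicity question.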
# Coping with Math Anxiety Multiplication is vexation, The Rule of Three perplexes me, —Old Rhyme ### What Is Math Anxiety? A famous stage actress was once asked if she had ever suffered from stage-fright, and if so how she had gotten over it. She laughed at the interviewer’s naive assumption that, since she was an accomplished actress now, she must not feel that kind of anxiety. She assured him that she had always had stage fright, and that she had never gotten over it. Instead, she had learned to walk on stage and perform—in spite of it. Like stage fright, math anxiety can be a disabling condition, causing humiliation, resentment, and even panic. Consider these testimonials from a questionnaire we have given to students in the past several years: • When I look at a math problem, my mind goes completely blank. I feel stupid, and I can’t remember how to do even the simplest things. • I've hated math ever since I was nine years old, when my father grounded me for a week because I couldn’t learn my multiplication tables. • In math there’s always one right answer, and if you can’t find it you've failed. That makes me crazy. • Math exams terrify me. My palms get sweaty, I breathe too fast, and often I can't even make my eyes focus on the paper. It’s worse if I look around, because I’d see everybody else working, and know that I’m the only one who can’t do it. • I've never been successful in any math class I've ever taken. I never understand what the teacher is saying, so my mind just wanders. • Some people can do math—not me! What all of these students are expressing is math anxiety, a feeling of intense frustration or helplessness about one's ability to do math. What they did not realize is that their feelings about math are common to all of us to some degree. Even the best mathematicians, like the actress mentioned above, are prone to anxiety—even about the very thing they do best and love most. In this essay we will take a constructive look at math anxiety, its causes, its effects, and at how you as a student can learn to manage this anxiety so that it no longer hinders your study of mathematics. Lastly, we will examine special strategies for studying mathematics, doing homework, and taking exams. Let us begin by examining some social attitudes towards mathematics that are especially relevant. ### Social and Educational Roots Imagine that you are at a dinner party, seated with many people at a large table. In the course of conversation the person sitting across from you laughingly remarks, “of course, I’m illiterate…!” What would you say? Would you laugh along with him or her and confess that you never really learned to read either? Would you expect other people at the table to do so? Now imagine the same scene, only this time the guest across from you says, “of course, I’ve never been any good at math…!” What happens this time? Naturally, you can expect other people at the table to chime in cheerfully with their own claims to having “never been good at math”—the implicit message being that no ordinary person ever is. Poor teaching leads to the inevitable idea that the subject (mathematics) is only adapted to peculiar minds, when it is the one universal science, and the one whose ground rules are taught us almost in infancy and reappear in the motions of the universe. —H.J.S. Smith The fact is that mathematics has a tarnished reputation in our society. 
It is commonly accepted that math is difficult, obscure, and of interest only to “certain people,” i.e., nerds and geeks—not a flattering characterization. The consequence in many English-speaking countries, and especially in the United States, is that the study of math carries with it a stigma, and people who are talented at math or profess enjoyment of it are often treated as though they are not quite normal. Alarmingly, many school teachers—even those whose job it is to teach mathematics—communicate this attitude to their students directly or indirectly, so that young people are invariably exposed to an anti-math bias at an impressionable age. It comes as a surprise to many people to learn that this attitude is not shared by other societies. In Russian or German culture, for example, mathematics is viewed as an essential part of literacy, and an educated person would be chagrined to confess ignorance of basic mathematics. (It is no accident that both of these countries enjoy a centuries-long tradition of leadership in mathematics.) Students must learn that mathematics is the most human of endeavors. Flesh and blood representatives of their own species engaged in a centuries long creative struggle to uncover and to erect this magnificent edifice. And the struggle goes on today. On the very campuses where mathematics is presented and received as an inhuman discipline, cold and dead, new mathematics is created. As sure as the tides. —J.D. Phillips Our jaundiced attitude towards mathematics has been greatly exacerbated by the way in which it has been taught since early in this century. For nearly seventy years, teaching methods have relied on a behaviorist model of learning, a paradigm which emphasizes learning-by-rote; that is, memorization and repetition. In mathematics, this meant that a particular type of problem was presented, together with a technique of solution, and these were practiced until sufficiently mastered. The student was then hustled along to the next type of problem, with its technique of solution, and so on. The ideas and concepts which lay behind these techniques were treated as a sideshow, or most often omitted altogether. Someone once described this method of teaching mathematics as inviting students to the most wonderful restaurant in the world—and then forcing them to eat the menu! Little wonder that the learning of mathematics seems to most people a dull and unrewarding enterprise, when the very meat of the subject is boiled down to the gristle before it is served. The mind is not a vessel to be filled. It is a fire to be kindled. —Plutarch This horror story of mathematics education may yet have a happy ending. Reform efforts in the teaching of mathematics have been under way for several years, and many—if not all—teachers of mathematics have conscientiously set about replacing the behaviorist paradigm with methods based on constructivist or other progressive models of learning. As yet, however, there remains no widely accepted teaching methodology for implementing these reform efforts, and it may well be that another generation will pass before all students in the primary and secondary grades are empowered to discover the range and beauty of mathematical ideas, free of the stigmas engendered by social and educational bias. Finally, young women continue to face an additional barrier to success in mathematics. 
Remarkably, even at the start of the 21st century, school-age girls are still discouraged by parents, peers, and teachers with the admonition that mathematics "just isn't something girls do." Before we became teachers, we would have assumed that such attitudes died out a generation ago, but now we know better. Countless of our female students have told us how friends, family members, and even their junior and senior high school instructors impressed upon them the undesirability of pursuing the study of mathematics. My own wife (a mathematician) recalls approaching her junior high school geometry teacher after class with a question about what the class was studying. He actually patted her on the head, and explained that she "didn't need to know about that stuff." (And, needless to say, he didn't answer her question.)

Rank sexism such as this is only part of the problem. For all adolescents, but especially for girls, there is concern about how one is viewed by members of the opposite sex, and being a "geek" is not seen as the best strategy. Peer pressure is the mortar in that wall. And parents, often even without knowing it, can facilitate this anxiety and help to discourage their daughters from maintaining an open mind and a natural curiosity towards the study of science and math.

Together these social and educational factors lay the groundwork for many widely believed myths and misconceptions about the study of mathematics. To an examination of these we now turn.

### Math Myths

A host of common but erroneous ideas about mathematics are available to the student who suffers math anxiety. These have the effect of justifying or rationalizing the fear and frustration he or she feels, and when these myths are challenged a student may feel defensive. This is quite natural. However, it must be recognized that loathing of mathematics is an emotional response, and the first step in overcoming it is to appraise one's opinions about math in a spirit of detachment. Consider the five most prevalent math myths, and see what you make of them:

#### Myth #1: Aptitude for math is inborn.

This belief is the most natural in the world. After all, some people just are more talented at some things (music and athletics come to mind), and to some degree it seems that these talents must be inborn. Indeed, as in any other field of human endeavor, mathematics has had its share of prodigies. Carl Friedrich Gauss helped his father with bookkeeping as a small child, and the Indian mathematician Ramanujan discovered deep results in mathematics with little formal training. It is easy for students to believe that doing math requires a "math brain," one in particular which they have not got.

But consider: to generalize from "three spoons, three rocks, three flowers" to the number "three" is an extraordinary feat of abstraction, yet every one of us accomplished this when we were mere toddlers! Mathematics is indeed inborn, but it is inborn in all of us. It is a human trait, shared by the entire race. Reasoning with abstract ideas is the province of every child, every woman, every man. Having a special genetic make-up is no more necessary for doing mathematics than it is for carrying a tune. Ask your math teacher or professor if he or she became a mathematician in consequence of having a special brain. (Be sure to keep a straight face when you do this.)
Almost certainly, after the laughter has subsided, it will turn out that a parent or teacher was responsible for helping your instructor discover the beauty in mathematics, and the rewards it holds for the student—and decidedly not a special brain. (If you ask my wife, on the other hand, she will tell you it was orneriness; she got sick of being told she couldn’t do it.) #### Myth #2: To be good at math you have to be good at calculating Some people count on their fingers. Invariably, they feel somewhat ashamed about it, and try to do it furtively. But this is ridiculous. Why shouldn't you count on your fingers? What else is a Chinese abacus, but a sophisticated version of counting on your fingers? Yet people accomplished at using the abacus can out-perform anyone who calculates figures mentally. Math Fingers Modern mathematics is a science of ideas, not an exercise in calculation. It is a standing joke that mathematicians can’t do arithmetic reliably, and I often admonish my students to check my calculations on the chalkboard because I'm sure to get them wrong if they don’t. There is a serious message in this: being a wiz at figures is not the mark of success in mathematics. This bears emphasis: a pocket calculator has no knowledge, no insight, no understanding—yet it is better at addition and subtraction than any human will ever be. And who would prefer being a pocket calculator to being human? This myth is largely due to the methods of teaching discussed above, which emphasize finding solutions by rote. Indeed, many people suppose that a professional mathematician’s research involves something like doing long division to more and more decimal places, an image that makes mathematicians smile sadly. New mathematical ideas—the object of research—are precisely that. Ideas. And ideas are something we can all relate to. That’s what makes us people to begin with. #### Myth #3: Math requires logic, not creativity. The grain of truth in this myth is that, of course, math does require logic. But what does this mean? It means that we want things to make sense. We don't want our equations to assert that 1 is equal to 2. Logic is the anatomy of thought. —John Locke This is no different from any other field of human endeavor, in which we want our results and propositions to be meaningful—and they can’t be meaningful if they do not jive with the principles of logic that are common to all mankind. Mathematics is somewhat unique in that it has elevated ordinary logic almost to the level of an artform, but this is because logic itself is a kind of structure—an idea—and mathematics is concerned with precisely that sort of thing. The moving power of mathematics is not reasoning but imagination. —Augustus De Morgan But it is simply a mistake to suppose that logic is what mathematics is about, or that being a mathematician means being uncreative or unintuitive, for exactly the opposite is the case. The great mathematicians, indeed, are poets in their soul. How can we best illustrate this? Consider the ancient Greeks, such as Pythagoras, who first brought mathematics to the level of an abstract study of ideas. They noticed something truly astounding: that the musical tones most pleasing to the ear are those achieved by dividing a plucked string into ratios of integers. For instance, the musical interval of a “fifth” is achieved by plucking a taut string whilst pressing the finger against it at a distance exactly three-fourths along its total length. 
From such insights, the Pythagoreans developed an elaborate and beautiful theory of the nature of physical reality, one based on number. And to them we owe an immense debt, for to whom does not music bring joy? Yet no one could argue that music is a cold, unfeeling enterprise of mere logic and calculation. If you remain unconvinced, take a stroll through the Mathematical Art of M.C. Escher. Here is the creative legacy of an artist with no advanced training in math, but whose works consciously celebrate mathematical ideas, in a way that slips them across the transom of our self-conscious anxiety, presenting them afresh to our wondering eyes. #### Myth #4: In math what's counts is getting the right answer. If you are building a bridge, getting the right answer counts for a lot, no doubt. Nobody wants a bridge that tumbles down during rush hour because someone forgot to carry the 2 in the 10’s place! But are you building bridges, or studying mathematics? Even if you are studying math so that you can build bridges, what matters right now is understanding the concepts that allow bridges to hang magically in the air—not whether you always remember to carry the 2. That you be methodical and complete in your work is important to your math instructor, and it should be important to you as well. This is just a matter of doing what you are doing as well as you can do it—good mental and moral hygiene for any activity. But if any instructor has given you the notion that “the right answer” is what counts most, put it out of your head at once. Nobody overly fussy about how his or her bootlace is tied will ever stroll at ease through Platonic Realms. #### Myth #5: Men are better than women at math. If there is even a ghost of a remnant of a suspicion in your mind about gender making a whit’s difference in students’ mathematics aptitude, slay the beast at once. Special vigilance is required when it comes to this myth, because it can find insidious ways to affect one’s attitude without ever drawing attention to itself. For instance, I’ve had female students confide to me that—although of course they do not believe in a gender gap when it comes to ability—still it seems to them a little unfeminine to be good at math. There is no basis for such a belief, and in fact a sociological study several years ago found that female mathematicians are, on average, slightly more feminine than their non-mathematician counterparts. Sadly, the legacy of generations of gender bias, like our legacy of racial bias, continues to shade many people’s outlooks, often without their even being aware of it. It is every student’s, parent’s, and educator’s duty to be on the lookout for this error of thought, and to combat it with reason and understanding wherever and however it may surface. Hypatia of Alexandria Across the centuries, from Hypatia to Amalie Nöther to thousands of contemporary women in school and university math departments around the globe, female mathematicians have been and remain full partners in creating the rich tapestry of mathematics. A web search for "women in mathematics" will turn up many outstanding sites with information about historical and contemporary women in mathematics. You may also like to check out Platonic Realms' own inspirational poster on Great Women of Mathematics in the Math Store. 
### Taking Possession of Math Anxiety Even though all of us suffer from math anxiety to some degree—just as anyone feels at least a little nervous when speaking to an audience—for some of us it is a serious problem, a burden that interferes with our lives, preventing us from achieving our goals. The first step, and the one without which no further progress is possible, is to recognize that math anxiety is an emotional response. (In fact, severe math anxiety is a learned emotional response.) As with any strong emotional reaction, there are constructive and unconstructive ways to manage math anxiety. Unconstructive (and even damaging) ways include rationalization, suppression, and denial. By “rationalization,” we mean finding reasons why it is okay and perhaps even inevitable – and therefore justified – for you to have this reaction. The myths discussed above are examples of rationalizations, and while they may make you feel better (or at least less bad) about having math anxiety, they will do nothing to lessen it or to help you get it under control. Therefore, rationalization is unconstructive. By “suppression” is meant having awareness of the anxiety – but trying very, very hard not to feel it. I have found that this is very commonly attempted by students, and it is usually accompanied by some pretty severe self-criticism. Students feel that they shouldn’t feel this anxiety, that it’s a weakness which they should overcome, by brute force if necessary. When this effort doesn’t succeed (as invariably it doesn’t) the self-criticism becomes ever harsher, leading to a deep sense of frustration and often a severe loss of self-esteem – particularly if the stakes for a student are high, as when his or her career or personal goals are riding on a successful outcome in a math class, or when parental disapproval is a factor. Consequently, suppression of math anxiety is not only unconstructive, but can actually be damaging. Finally, there is denial. People using this approach probably aren’t likely to see this essay, much less read it, for they carefully construct their lives so as to avoid all mathematics as much as possible. They choose college majors, and later careers, that don’t require any math, and let the bank or their spouse balance the checkbook. This approach has the advantage that feelings of frustration and anxiety about math are mostly avoided. However, their lives are drastically constrained, for in our society fewer than 25% of all careers are, so-to-speak, “math-free,” and thus their choices of personal and professional goals are severely limited. (Most of these math-free jobs, incidentally, are low-status and low-pay.) The Universe is a grand book which cannot be read until one first learns to comprehend the language and become familiar with the characters in which it is composed. It is written in the language of mathematics. —Galileo People in denial about mathematics miss out on something else too, for the student of mathematics learns to see aspects of the structure and beauty of our world that can be seen in no other way, and to which the “innumerate” necessarily remain forever blind. It would be a lot like never hearing music, or never seeing colors. (Of course some people have these disabilities, but innumeracy is something we can do something about.) Okay, so what is the constructive way to manage math anxiety? 
I call it “taking possession.” It involves making as conscious as possible the sources of math anxiety in one’s own life, accepting those feelings without self-criticism, and then learning strategies for disarming math anxiety's influence on one’s future study of mathematics. (These strategies are explored in depth in the next section.) Begin by understanding that your feelings of math anxiety are not uncommon, and that they definitely do not indicate that there is anything wrong with you or inferior about your ability to learn math. For some this can be hard to accept, but it is worth trying to accept—since after all it happens to be true. This can be made easier by exploring your own “math-history.” Think back across your career as a math student, and identify those experiences which have contributed most to your feelings of frustration about math. For some this will be a memory of a humiliating experience in school, such as being made to stand at the blackboard and embarrassed in front of one’s peers. For others it may involve interaction with a parent. Whatever the principle episodes are, recall them as vividly as you are able to. Then, write them down. This is important. After you have written the episode on a sheet(s) of paper, write down your reaction to the episode, both at the time and how it makes you feel to recall it now. (Do this for each episode if there is more than one.) After you have completed this exercise, take a fresh sheet of paper and try to sum up in a few words what your feelings about math are at this point in your life, together with the reason or reasons you wish to succeed at math. This too is important. Not until after we lay out for ourselves in a conscious and deliberate way what our feelings and desires are towards mathematics, will it become possible to take possession of our feelings of math anxiety and become free to implement strategies for coping with those feelings. At this point it can be enormously helpful to share your memories, feelings, and goals with others. In a math class I teach for arts majors, I hand out a questionnaire early in the semester asking students to do exactly what is described above. After they have spent about twenty minutes writing down their recollections and goals, I lead them in a classroom discussion on math anxiety. This process of dialogue and sharing—though it may seem just a bit on the goopy side—invariably brings out of each student his or her own barriers to math, often helping these students become completely conscious of these barriers for the first time. Just as important, it helps all my students understand that the negative experiences they have had, and their reactions to them, are shared one way or another by almost everyone else in the room. If you do not have the opportunity to engage in a group discussion in a classroom setting, find friends or relatives whom you trust to respect your feelings, and induce them to talk about their own experiences of math anxiety and to listen to yours. Once you have taken possession of your math anxiety in this way, you will be ready to implement the strategies outlined in the next section. ### Strategies for Success Mathematics, as a field of study, has features that set it apart from almost any other scholastic discipline. On the one hand, correctly manipulating the notation to calculate solutions is a skill, and as with any skill mastery is achieved through practice. 
On the other hand, such skills are really only the surface of mathematics, for they are only marginally useful without an understanding of the concepts which underlie them. Consequently, the contemplation and comprehension of mathematical ideas must be our ultimate goal. Ideally, these two aspects of studying mathematics should be woven together at every point, complementing and enhancing one another, and in this respect studying mathematics is much more like studying, say, music or painting than it is like studying history or biology. The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver. —I.N. Herstein In view of mathematics’ unique character, the successful student must devise a special set of strategies for accomplishing his or her goals, including strategies for lecture taking, homework, and exams. We will examine each of these in turn. Keep in mind that these strategies are suggestions, not laws handed down from the mountain. Each student must find for him or herself the best way to implement these ideas, fitting them to his or her own unique learning styles. As the Greek said, know thyself! #### Taking Lectures Math teachers are a mixed bag, no question, and it’s easy to criticize, especially when the criticism is justified. If your own math teacher really connects with you, really helps you understand, terrific—and be sure to let him or her know. But if not, there are a couple of things you will want to keep in mind. To begin with, think what the teacher’s job entails. First, a textbook must be chosen, a syllabus prepared, and the material being taught (which your teacher may or may not have worked with in some time) completely mastered. This is before you ever step into class on that first day. Second, for every lecture the teacher gives, there is at least an hour’s preparation, writing down lecture notes, thinking about how best to present the material, and so on. This is on top of the time spent grading student work—which itself can be done only after the instructor works the exercises for him or herself. Finally, think about the anxiety you feel about speaking to an audience, and about your own math anxiety, and then imagine what a math teacher must do: manage both kinds of anxiety simultaneously. It would be wonderful if every instructor were a brilliant lecturer. But even the least brilliant deserves consideration for the difficulty of the job. The second thing to keep in mind is that getting the most out of a lecture is your job. Many students suppose that writing furiously to get down everything the instructor puts on the board is the best they can do. Unfortunately, you cannot both write the details and focus on the ideas at the same time. Consequently, you will have to find a balance. Particularly if the instructor is lecturing from a set text, it may be that almost everything he or she puts on the board is in the text, so in effect it’s written down for you already. In this case, make some note of the instructor’s ideas and commentary and methods, but make understanding the lecture your primary focus. One of the best things you can do to enhance the value of a lecture is to review the relevent parts of the textbook before the lecture. Then your notes, instead of becoming yet another copy of information you paid for when you bought the book, can be an adjunct set of insights and commentary that will help you when it comes time to study on your own. 
Finally, remember that your success is your instructor’s success too. He or she wants you to achieve your goals. So develop a rapport with the instructor, letting him or her know when you are feeling lost and requesting help. Don’t wait until after the lecture—raise your hand or your voice the minute the instructor begins to discuss an idea or procedure that you are unable to follow. Use any help labs or office hours that are available. If you are determined to succeed and your instructor knows it, then he or she will be just as determined to help you. #### Self-Study and Homework There you are, just you and the textbook and maybe some lecture notes, alone in the glare of your desk lamp. It’s a tense moment. Like most students, you turn to the exercises and see what happens. Pretty soon you are slogging away, turning frequently to the solutions in the back of the book to check whether you have a clue. If you’re lucky, it goes mostly smoothly, and you mark the problems that won’t come right so that you can ask about them in class. If you’re not so lucky, you get bogged down, stuck on this problem or that, while the hours slide by like agonized glaciers, and you miss your favorite TV show, and you think of all the homework for your other classes that you haven’t got to yet, and you begin to visualize burning your textbook…except that the stupid thing cost you 80 bucks…. Let’s start over. Many instructors (but not all) encourage their students to work together on homework problems. Modern learning theories emphasize the value of doing this, and I find that students who collaborate can develop a synergy among themselves which supports their learning, helping them to learn more, more quickly, and more lastingly. Find out how your instructor feels about this, and if it is permitted find others in class who are interested in studying together. You will still want to put in plenty of time for self-study, but a couple of hours a week spent studying with others may be very valuable to you. #### Working Problems Most problem sets are designed so that the first few problems are rote, and look just like the examples in the book. Gradually, they begin to stretch you a bit, testing your comprehension and your ability to synthesize ideas. Take them one at a time. If you get completely stuck on one, skip it for now. But come back to it. Give yourself time, for your subconscious mind will gradually formulate ideas about how to work the exercise, and it will present these notions to your conscious mind when it is ready. As an experienced math instructor, it is my sad duty to report that about a third of the students in any given class, on any given assignment, will look the exercises over, and conclude that they don’t know how to do it. They then tell themselves, “I can’t do something I don’t understand,” and close the book. Consequence: no homework gets done. About another third will look the exercises over, decide that they pretty much get it, and tell themselves, “I don’t need to do the homework, because I already understand it,” and close the book. Consequence: no homework gets done. I keep the subject constantly before me and wait till the first dawnings open little by little into the full light. —Isaac Newton Don’t let this be you. If you’ve pretty much already got it, great. Now turn to the hard exercises (whether they were assigned or not), and test how thorough your understanding really is. 
If you are unable to do them with ease, then you need to go back to the more routine exercises and work on your skills. On the other hand, if you feel you cannot do the homework because you don’t understand it, then go back in the textbook to where you do understand, and work forward from there. Pick the easiest exercises, and work at them. Compare them to the examples. Work through the examples. Try doing the exercises the same way the examples were done. In short, work at it. You will learn mathematics this way—and in no other way. #### Story Problems Everybody complains about story problems, sometimes even the instructor. One is tempted to feel that math is hard enough without some sadist turning it into wordy, dense, hard-to-understand story problems. But again, ask yourself: “Why am I studying math? Is it so that I'll always know how to factor a quadratic equation?” Hardly. The study of math is meant to give you power over the real world. And the real world doesn’t present you with textbook equations, it presents you with story problems. Your boss doesn’t tell you to solve for x, he tells you, “We need a new supplier for flapdoodles. Bob’s Flapdoodle Emporium wholesales them at $129 per gross, but charges $1.25 per ton per mile for shipping. Sally’s Flapdoodle Express wholesales them at $143 per gross, but ships at a flat rate of $85 per ton. Figure out how each of these will impact our marginal cost, and report to me this afternoon.” The real world. Personally, I love story problems—because if you can work a story problem, you know you really understand the math. It helps to have a strategy, so you might want to check out the Solving Story Problems article in the Platonic Realms Encyclopedia sometime soon. #### Exams For many students, this is the very crucible of math anxiety. Math exams represent a do-or-die challenge that can inflame all one’s doubts and frustrations. It is frankly not possible to eliminate all the anxiety you may feel about exams, but here are some techniques and strategies that will dramatically improve your test-taking experience. Don’t cram. The brain is in many ways just like a muscle. It must be exercised regularly to be strong, and if you place too much stress on it then it won’t function at its peak until it has had time to rest and recover. You wouldn’t prepare for a big race by staying up and running all night. Instead, you would probably do a light work-out, permit yourself some recreation such as seeing a movie or reading a book, and turn in early. The same principle applies here. If you have been studying regularly, you already know what you need to know, and if you have put off studying until now it is too late to do much about it. There is nothing you will gain in the few hours before the exam, desperately trying to absorb the material, that will make up for not being fresh and alert at exam time. On exam day, have breakfast. The brain consumes a surprisingly large number of calories, and if you haven’t made available the nutrients it needs it will not work at full capacity. Get up early enough so that you can eat a proper meal (but not a huge one) at least two hours before the exam. This will ensure that your stomach has finished with the meal before your brain makes a demand on the blood supply. When you get the exam, look it over thoroughly. Read each question, noting whether it has several parts and its overall weight in the exam. Begin working only after you have read every question. This way you will always have a sense of the exam as a whole.
(Remember to look on the backs of pages.) If there are some questions that you feel you know immediately how to do, then do these first. (Some students have told me they save the easiest ones for last because they are sure they can do them. This is a mistake. Save the hardest ones for last.) It is extremely common to get the exam, look at the questions, and feel that you can’t work a single problem. Panic sets in. You see everyone else working, and become certain you are doomed. Some students will sit for an hour in this condition, ashamed to turn in a blank exam and leave early, but unable to calm down and begin thinking about the questions. This initial panic is so common (believe it or not, most of the other students taking the exam are having the same experience), that it’s just as well to assume ahead of time that this is what is going to happen. This gives you the same advantage as when the dentist alerts you that “this may hurt a little.” Since you've been warned, there's far less tendency to have an uncontrollable panic reaction when it happens. So say to yourself, “Well, I may as well relax because I expected this.” Take a deep breath, let it out slowly. Do this a couple of times. Look for the question on the exam that most resembles what you know how to do, and begin poking it and prodding it and thinking about it to see what it is made of. Don’t bother about the other students in the room—they’ve got their own problems. Before long your brain (remember, it’s a muscle) will begin to unclench a bit, and some things will occur to you. You’re on your way. Math exams are usually timed—but remember, it’s not a race! You don’t want to dally, but don’t rush yourself either. Work efficiently, being methodical and complete in your solutions. Box, circle, or underline your answers where appropriate. If you don’t take time to make your work neat and ordered, then not only will the grader have trouble understanding what you’ve done, but you can actually confuse yourself—with disastrous results. If you get stuck on a problem, don’t entangle yourself with it to the detriment of your overall score. After a few minutes, move on to the rest of the exam and come back to this one if you have time. And regardless of whether you have answered every question, give yourself at least two or three minutes at the end of the exam period to review your answers. The “oops” mistakes you find this way will surprise you, and fixing them is worth more to your score than trying to bang out something for that last, troublesome question. In math, having the right answer is nice—but it doesn’t pay the bills. SHOW YOUR WORK. Finally, place things in perspective. Fear of the exam will make it seem like a much bigger deal than it really is, so remind yourself what it does not represent. It is not a test of your overall intelligence, of your worth as a person, or of your prospects for success in life. Your future happiness will not be determined by it. It is only a math test—it tests nothing about you except whether you understand certain concepts and possess the skills to implement them. You can’t demonstrate your understanding and skills to their best advantage if you panic through making more of it than it is. When you get the exam back, don’t bury it or burn it or treat it like it doesn’t exist—use it. Discover your mistakes and understand them thoroughly. After all, if you don’t learn from your mistakes, you are likely to make them again. 
#### * * * * * Math anxiety affects all of us at one time or another, but for all of us it is a barrier we can overcome. In this article we have examined the social and educational roots of math anxiety, some common math myths associated with it, and several techniques and strategies for managing it. Other things could be said, and other strategies are available which may help you with your own struggle with math. Talk to your instructor and to other students. With determination and a positive outlook—and a little help—you will accomplish things you once thought impossible. The harmony of the world is made manifest in Form and Number, and the heart and soul and all the poetry of Natural Philosophy are embodied in the concept of mathematical beauty. —D’Arcy Wentworth Thompson
Contributors
• Wendy Hageman Smith, author
• B. Sidney Smith, author
Citation Info
• [MLA] Hageman Smith, Wendy, B. Sidney Smith. "Coping With Math Anxiety." Platonic Realms Interactive Mathematics Encyclopedia. Platonic Realms, 14 Feb 2014. Web. 14 Feb 2014. <http://platonicrealms.com/>
• [APA] Hageman Smith, Wendy, B. Sidney Smith (14 Feb 2014). Coping With Math Anxiety. Retrieved 14 Feb 2014 from Platonic Realms Minitexts: http://platonicrealms.com/minitexts/Coping-With-Math-Anxiety/
# Unsupervised Question Decomposition for Question Answering #### Ethan Perez, Patrick Lewis, Wen-tau Yih, Kyunghyun Cho, Douwe Kiela (EMNLP 2020) This paper tackles a problem that seems fundamentally quite important to me. Given a question, how can we decompose it into easier questions? This is clearly related to how humans answer questions. Suppose you don’t know Alice and Bob, and I ask you: “who is older, Alice or Bob?”; that naturally maps to at least three subquestions: “how old is Alice?”, “how old is Bob?”, and “which number is greater?”. It turns out we have Question Answering models that are good at answering each of these simpler questions. But how do we break down a complex question? The paper proposes to use retrieval as a way to form subquestions. In particular, it assumes a large set of questions (without labels or answers). In the paper, they come from CommonCrawl, i.e. collected from the Web. Then, given a question, they find a pair of questions that maximally diverge from each other but are still related to the complex question. Then, using back-translation and several denoising objectives, they train a model for question decomposition, and along with it a model that, given a decomposed question and the answers that a simple QA model gives to each subquestion, produces an answer for the complex question. There are a number of things in the paper that I find not elegant, solution-wise. For example, they always decompose a question into two subquestions. If you give their model a simple question that their base QA model can answer, it would still decompose it. Also, it can’t decompose a question into more than 2 subquestions, and from their objective it does seem computationally hard to extend it. Also, the model is quite complicated, with an amalgamation of different unsupervised objectives, which has been common in unsupervised NLP. Finally, the idea of maximally diverging subquestions does not appear sound to me. For example, “how old is Alice?” and “how old is Bob?” are very similar, yet they are the right questions to ask. Their model seems to produce paraphrases (e.g. “how many years ago was Bob born?”) to get around that, which doesn't feel like what should happen. I think you want questions that provide you different (complementary) useful bits of information, not necessarily questions that are as divergent as possible. For instance, you don't want to ask “how old is Alice?” and “what year was Alice born in?”, even if they’re both quite different. However, it is to their merit that they find a way to do everything unsupervised, which is new, and the problem seems quite important to me. That I like, for sure. Looking forward to further work on this.
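A plausible way to make the selection criterion described above concrete (this is my reading of the post, not necessarily the paper's exact objective): given a complex question $q$ and a large pool $S$ of mined simple questions, pick

$$(s_1^{*}, s_2^{*}) = \arg\max_{s_1, s_2 \in S} \big[ \mathrm{sim}(q, s_1) + \mathrm{sim}(q, s_2) - \mathrm{sim}(s_1, s_2) \big],$$

where $\mathrm{sim}$ is some embedding-space similarity. The last term is the “maximally diverging” part, and it is exactly the term the critique above is aimed at: for the Alice/Bob example the two right subquestions are near-paraphrases of each other, so penalizing their similarity pushes the retrieval toward awkward rewordings.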
Algebra Level 4 $f(x)=\ln(\sqrt{x^{2}-5x-24}-x-2)$ What is the domain of the definition of the function?
# Solve the inequality $\sqrt x+\sqrt{x+1}>\sqrt 3$. Solve the inequality $\sqrt x+\sqrt{x+1}>\sqrt 3$. I want to make sure my method is correct: The condition is that $x\geq 0$ $x+2\sqrt x\times \sqrt{x+1} +x+1>3$ $2\sqrt{x(x+1)}>2-2x$ $4x(x+1)>4-8x+4x^{2}$ $4x^{2}+4x>4-8x+4x^{2}$ $12x>4$ $x>\frac{1}{3}$ $x\in (\frac{1}{3},\infty)$ I know my final solution is fine, but is everything written properly? Should I put $\iff$ at the beginning of each row? • A $\iff$ would not be correct as you are squaring the equation, and $y=x^2$ is a many-one function. – GoodDeeds Nov 1 '16 at 18:19 • In step three how do you know that 2-2x >= 0 or that 2-2x < 0 but |2-2x| < 2\sqrt{x(x+1)? – fleablood Nov 1 '16 at 18:48 • It might just be me, but I think worry about $\iff$. Sometimes the conclusions will only go one way. You will have to worry about addding extraneous information, especially when you square. The results of which will certainly not be an if and only if. – fleablood Nov 1 '16 at 18:55 Over its domain (the set of non-negative real numbers) the function $f(x)=\sqrt{x}+\sqrt{x+1}$ is an increasing function, since it is the sum of two non-negative increasing functions. It follows that $f(x)>\sqrt{3}$ holds as soon as $x>x_0$, where $x_0$ is the only positive number such that $$\sqrt{x_0}+\sqrt{x_0+1} = \sqrt{3}.\tag{1}$$ $x_0=\frac{1}{3}$ is clearly a solution of $(1)$, hence the given inequality holds for $\color{red}{\large x>\frac{1}{3}}$. • I guess you meant $\sqrt{3}$ and not $3$ . – Sylvain Julien Nov 1 '16 at 19:46 • @SylvainJulien: sure, thanks. Now fixed. – Jack D'Aurizio Nov 1 '16 at 19:50 We have $x\geq 0$ . then $\sqrt{x}+\sqrt{x+1}>\sqrt{3} \implies$ $\sqrt{x+1}>\sqrt{3}-\sqrt{x} \implies$ $x+1>3+x-2\sqrt{3x} \implies$ $\sqrt{3x}>1 \implies x>\frac{1}{3}$ In the other direction, $x>\frac{1}{3} \implies$ $\sqrt{x+1}+\sqrt{x}>\sqrt{\frac{4}{3}}+\sqrt{\frac{1}{3}} =\sqrt{3}$ Qed. • But what if $\sqrt{3} - \sqrt{x}$ is negative? Then we can't square both sides of the inequality, let alone use the $\iff$ sign. – Stefan4024 Nov 1 '16 at 18:27 • If it is negative, the inequality is satisfied since $\sqrt{x+1}\geq 0$. – hamam_Abdallah Nov 1 '16 at 18:29 • Yeah, but you have to explicitly mentioned that. Otherwise you're just squaring, which might not be true after all. – Stefan4024 Nov 1 '16 at 18:31 • What @Stefan4024 is getting at is that 1 > -2, but squaring both sides doesn't preserve the inequality. – Turambar Nov 1 '16 at 18:34 Your answer is fine and correct, and you can write it more concise with key steps. In the end, you can simply write $x > 1/3$ without re-write it as $x \in (1/3, \infty)$. You've made a mistake by assuming that $2 - 2x \ge 0$, when squaring in the second row. To fix this consider the two cases when $x < 1$ and $x \ge 1$. This will enable you to add the $\iff$ signs, which you're required, as otherwise you have proven that $\sqrt{x} + \sqrt{x+1} > \sqrt{3} \implies x > \frac 13$ instead of $x > \frac 13 \implies \sqrt{x} + \sqrt{x+1} > \sqrt{3}$ Assume that $x > 1$. Then we have that $2-2x < 0 < 2\sqrt{x(x+1)}$, so the inequality is true for any $x \ge 1$. On the other side when $x < 1$ you can continue in your way and you will get that $x \in \left(\frac 13,1\right)$. Now combining the answers you will get that the solution set is $\left(\frac 13,\infty\right)$ • if $x\geq 1$ then I have $4x(x+1) < 4-8x+4x^{2}$. I change the inequality sign? 
– lmc Nov 1 '16 at 18:32 • @Now_now_Draco_play_nicely You can quickly discard the case $x\ge 1$, as the RHS is negative, but the LHS is positive. It's a trivial thing, but you have to be careful about it. – Stefan4024 Nov 1 '16 at 18:37 • I'm really confused now. Can you write out how you would solve this problem step by step? – lmc Nov 1 '16 at 18:41 • @Now_now_Draco_play_nicely You can check now – Stefan4024 Nov 1 '16 at 18:44 • Let me ask you just one more thing if I had for example $2\sqrt{x(x+1)}<2-2x$ then I wouldn't need to check for $x\geq 1$ since on the RHS I would have a non-negative expression? – lmc Nov 1 '16 at 19:15 Note that $\sqrt{\frac13}+\sqrt{\frac13+1}=\sqrt{3}$ . The inequality follows trivially. • Also, the same argument is essentially given by Jack D'Aurizio, but in an answer of much higher quality. – wythagoras Nov 1 '16 at 20:54 • @wythagoras it doesn't. I am comfortable with your edit. – Jacob Wakem Nov 1 '16 at 20:54 • @wythagoras I was not aware of his solution. I am not comfortable with an intersubjective standard of quality. – Jacob Wakem Nov 1 '16 at 20:56
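(For completeness, a quick arithmetic check of the value $x_0=\frac13$ that the answers above rely on: $\sqrt{\tfrac13}+\sqrt{\tfrac13+1}=\tfrac{1}{\sqrt3}+\tfrac{2}{\sqrt3}=\tfrac{3}{\sqrt3}=\sqrt3$.)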
# Forcing the best fit line to pass through a certain point (Octave) • MATLAB Gold Member I have the following code in Octave: Matlab: h = [29.3 25.3 19.7 16.0 10.9]; v = [0.53 0.47 0.37 0.29 0.21]; plot(h,v,'obk') hold on p = polyfit(h,v,1); y = polyval(p,h); plot(h,y,'-bk') And I get a good graph: I can extrapolate the best fit line using the following code: Matlab: x = -1:0.01:11; >> y = polyval(p,x); >> plot(x,y,'--r') and if I zoom on the graph, I get this: Evidently, the line doesn't pass through (0,0). But I have to make it pass through the Origin. In that case, it will no longer be the best fit line, but nevertheless it will serve my purpose. Any idea on how to do this? Related MATLAB, Maple, Mathematica, LaTeX News on Phys.org marcusl Gold Member Find the angle of the line through origin that minimizes mean squared error. RPinPA Homework Helper Evidently, the line doesn't pass through (0,0). But I have to make it pass through the Origin. In that case, it will no longer be the best fit line, but nevertheless it will serve my purpose. Any idea on how to do this? Yes. You're fitting a different model. Instead of ##y = mx + b## you want to fit ##y = mx##. I don't know if you can do it with polyfit(), but the math is pretty simple. Minimize the square error E ##E = \sum_i (y - y_i)^2 = \sum_i (mx_i - y_i)^2 = \sum_i m^2 x_i^2 - 2 \sum_i mx_i y_i + \sum_i y_i^2## ##dE/dm = 0 \\ \Rightarrow 2m \sum_i x_i^2 - 2\sum_i x_i y_i = 0 \\ \Rightarrow m = (\sum_i x_i y_i )/ (\sum_i x_i^2)## That is the best fit value of ##m## for the model ##y = mx##, and you should interpolate / extrapolate using that model. Wrichik Basu FactChecker Gold Member In the MATLAB function fitlm, you can specify the desired model that does not have a constant term using a modelspec like 'Y ~ A + B + C – 1' . See https://www.mathworks.com/help/stats/fitlm.html#bt0ck7o-modelspec I believe that their other linear regression tools have similar capabilities. I don't know about Octave. Last edited: Gold Member Octave says that fitlm has not yet been implemented. Gold Member @RPinPA Thanks, that works fine. I plotted the function using fplot, and I am getting the desired results. Good idea. FactChecker Gold Member That would draw the line toward (0,0). You may need to add it many times to get it as close as you want and then all the statistical calculations would be messed up. Wrichik Basu and jedishrfu jedishrfu Mentor That would draw the line toward (0,0). You may need to add it many times to get it as close as you want and then all the statistical calculations would be messed up. Sometimes cheap solutions work but not as well as one would like that’s why they’re cheap. Octave apparently doesn’t have the fitlm() function but does have some linear regression methods. https://octave.sourceforge.io/optim/function/LinearRegression.html Gold Member Sometimes cheap solutions work but not as well as one would like that’s why they’re cheap. Absolutely, I don't expect any better. How much can one provide for free? There are some major differences between Matlab and Octave. For example, for symbolic math, Octave depends on SymPy, while Matlab was created much before Python. jedishrfu jedishrfu Mentor I sometimes use freemat when I need to compute a quick plot. It has much of the core Matlab functionality and is easy to install. More recently, Julia from MIT has come online to challenge Matlab in performance. 
Much of its syntax is similar to Matlab with notable differences in how arrays are referenced ie parens in Matlab vs square brackets in Julia. Many folks are extending the Julia ecosystem with new packages on github everyday. It’s main weakness is its IDE which is cobbled together using Juno or using Jupyter notebooks. The notebooks are preferred over the IDE but Matlab users have a great IDE that’s hard to give up. Wrichik Basu
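To tie the thread together, here is a minimal Octave sketch (untested) of the through-origin fit derived by RPinPA above, applied to the h and v data from the first post; the only change from the ordinary polyfit approach is that the slope is computed as sum(x·y)/sum(x²):
Matlab:
h = [29.3 25.3 19.7 16.0 10.9];
v = [0.53 0.47 0.37 0.29 0.21];
m = sum(h.*v) / sum(h.^2);   % least-squares slope for the model v = m*h
x = 0:0.01:30;
plot(h, v, 'obk')            % data points
hold on
plot(x, m*x, '--r')          % constrained fit; passes through (0,0) by construction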
# Definition:Octal Notation ## Definition Octal is another word for base $8$. That is, every number $x \in \R$ is expressed in the form: $\ds x = \sum_{j \mathop \in \Z} r_j 8^j$ where: $\forall j \in \Z: r_j \in \set {0, 1, 2, 3, 4, 5, 6, 7}$ ## Also known as Octal notation is also known as octonary or octenary. ## Examples ### Example: $371 \cdotp 24$ The number expressed in octal as $371 \cdotp 24$ is expressed in decimal to $2$ decimal places as $249 \cdotp 31$. ## Also see • Results about octal notation can be found here. ## Historical Note Octal notation was advocated by Emanuel Swedenborg. Octal notation used to be important in the field of computer science, but is less so nowadays, as hexadecimal has proved itself more convenient in general.
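Spelling out the example above (my own arithmetic, rounded to $2$ decimal places): $371 \cdotp 24_8 = 3 \times 8^2 + 7 \times 8 + 1 + 2 \times 8^{-1} + 4 \times 8^{-2} = 192 + 56 + 1 + 0 \cdotp 25 + 0 \cdotp 0625 = 249 \cdotp 3125 \approx 249 \cdotp 31$.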
# Hexagonal pyramid
• Question 1: 1 pts A hexagonal pyramid has 7 faces, 6 of which are triangles and one which is a hexagon.
• Question 2: 1 pts How can we calculate the base area of a hexagonal pyramid? $A=3\cdot \dfrac{a^{2}\sqrt{3}}{4}$ $A= \dfrac{a^{2}\sqrt{3}}{2}$ $A=6\cdot \dfrac{a^{2}\sqrt{3}}{2}$ $A=3\cdot \dfrac{a^{2}\sqrt{3}}{2}$
• Question 3: 1 pts Find the length of the lateral height of the regular hexagonal pyramid shown on the picture. $9\sqrt{5}cm$ $4\sqrt{15}cm$ $5\sqrt{17}cm$ $4\sqrt{3}cm$
• Question 4: 1 pts Find the length of the height of the regular hexagonal pyramid shown on the picture. $9\sqrt{3}cm$ $4\sqrt{5}cm$ $3\sqrt{3}cm$ $4\sqrt{3}cm$
• Question 5: 2 pts The base edge of the regular hexagonal pyramid is 6 cm, and the height of the pyramid is equal to the shorter diagonal of the base. Find the volume of the pyramid. *shorter diagonal $=a\sqrt{3}$ $\dfrac{1}{4}\cdot 6\cdot \dfrac{6^{2}\sqrt{3}}{2}\cdot 6\sqrt{3}$ $\dfrac{1}{3}\cdot 6\cdot \dfrac{6^{2}\sqrt{3}}{4}\cdot 6\sqrt{3}$ $\dfrac{1}{3}\cdot 6\cdot \dfrac{6\sqrt{3}}{4}\cdot 6\sqrt{3}$ $\dfrac{1}{3}\cdot 6\cdot \dfrac{6^{2}}{4}\cdot 12$
• Question 6: 2 pts The slant edge of a right regular hexagonal pyramid is $10 cm$ and the height is $8cm.$ Find the area of the base. Area of the base$=$
• Question 7: 2 pts The slant edge of a right regular hexagonal pyramid is $b=3\sqrt{5}cm$ and the base edge is $6cm.$ Find the volume of that pyramid. $48\sqrt{3}cm^{3}$ $49\sqrt{3}cm^{3}$ $54\sqrt{3}cm^{3}$
• Question 8: 2 pts A regular hexagonal pyramid has the perimeter of its base $24cm$ and its altitude is $15cm.$ Find its volume. $81\sqrt{3}cm^{3}$ $96\sqrt{3}cm^{3}$ $108\sqrt{3}cm^{3}$ $120\sqrt{3}cm^{3}$
• Question 9: 3 pts Find the surface area of a regular hexagonal pyramid whose height is $6cm$ and the radius of a circle inscribed in the base is $2\sqrt{3}cm.$ Surface area $=$
• Question 10: 3 pts The lateral surface area of a regular hexagonal pyramid is $108cm^{2},$ and the area of its base is $54\sqrt{3}cm^{2}.$ Find the volume of that pyramid. $V=54\sqrt{3}cm^{3}$ $V=108\sqrt{3}cm^{3}$ $V=162\sqrt{3}cm^{3}$
• Question 11: 3 pts The base of a right pyramid is a regular hexagon of side $8cm$ and its slant surfaces are inclined to the horizontal at an angle of $60^{\circ}$. Find the surface area. Surface area $=$
• Question 12: 3 pts Find the surface area of two pyramids with their bases stuck together. Surface area $=$
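For reference, the base-area formula tested in Question 2 comes from splitting the regular hexagon into six equilateral triangles of side $a$ (my own one-line derivation): $A = 6\cdot\dfrac{a^{2}\sqrt{3}}{4} = 3\cdot\dfrac{a^{2}\sqrt{3}}{2}.$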
# How rich are you? 1. Aug 17, 2007 ### Lisa! How rich are you? (It takes less than 1 minute to find out!) 2. Aug 17, 2007 I put an income of $50k us because thats the starting pay for engineers and im at 98%. This is lame Lisa. I hate you. 3. Aug 17, 2007 ### Lisa! That's 0.98 % , dear! You need to see your doctor! 4. Aug 17, 2007 ### Kurdt Staff Emeritus How come I earn nothing and I'm the richest person in the world? This is insane :tongue: 5. Aug 17, 2007 ### Lisa! Those who're satisfied with what they have are the richest people in the world! Note to mention that they dont get anywhere in their lives.:tongue: 6. Aug 17, 2007 ### Kurdt Staff Emeritus I think it was picking on me because it recognised I was from the UK, and has been poorly designed so that even though I earn less than the bottom 0.8% it still thinks I'm rich. 7. Aug 17, 2007 ### Jimmy Snyder If I didn't have a dime, I'd still be the richest person in the world. The site equates income with wealth, even though these are two different concepts. For example: Last edited: Aug 17, 2007 8. Aug 17, 2007 ### humanino I cannot access the link of the OP. However, being asked "How rich I am", I would say that I am happily rich. 9. Aug 17, 2007 ### chroot Staff Emeritus "You're in the TOP 0.49% richest people in the world!" Still kind of a pointless argument, though. I'm aware that my money would have greater purchasing power in an Angolan village. Unfortunately, I don't live in an Angolan village, and food costs$10 a meal here. - Warren 10. Aug 17, 2007 ### Staff: Mentor It's not working for me. You are the 107,565 richest person in the world! You're in the TOP 0.001% richest people in the world! Then I entered my last year's income which was a lot higher than this year and got the same result. 11. Aug 17, 2007 Good point. John, a friend of mine mustered out of the Air Force in Australia during the Viet Nam war. The terms of his separation entitled him to free air passage on Air Force planes on an available-space basis, as long as he was headed back to the states, and he showed up in Nepal with $500 in his pocket and stayed for a year. He told me that apart from the local rug-merchant, he was the richest man in town. 12. Aug 17, 2007 ### chroot Staff Emeritus We already know you make an obscene amount of money, Evo. You're probably making the program have round-off errors. :rofl: - Warren 13. Aug 17, 2007 ### Cyrus Its like superman III, only the guys in real life got caught and went to jail. 14. Aug 17, 2007 ### Evo ### Staff: Mentor I notice they don't ask what you're expenses are. 15. Aug 17, 2007 ### chroot Staff Emeritus Isn't that sort of the point? You can control expenses even more easily than you can control income. It would be disingenous for you (or any other wealthy person) to claim that they somehow can't avoid those expenses. - Warren 16. Aug 17, 2007 ### russ_watters ### Staff: Mentor I have no idea what you are talking about, warren. I can't live without$65 surf and turf once a week. 17. Aug 17, 2007 ### turbo That's weekly groceries for my wife and me. 18. Aug 17, 2007 ### chroot Staff Emeritus - Warren 19. Aug 17, 2007 ### Cyrus He grows most of his own food or buys it from farmers market. Im sure it tastes better and costs less. 20. Aug 17, 2007 ### arunma Heh, I put in my TA salary, which is about 17.5k before taxes when combined with my summer RA salary. Apparently I'm in the top 11.69%. Makes sense, since I'm making more money than I ever have before. 21. 
Aug 17, 2007 ### Staff: Mentor No, I'm a non-vegetarian. I only eat meat. I have a small heard of cattle and a bunch of calves chained-up in my basement. I eat about one a week.... With no family, though, a lot of it just goes to waste. 22. Aug 17, 2007 ### Chi Meson 53,884,514 people are richer than I am, and they want ME to donate money? Fine, I'll get in line. I'll be the 53,884,515th person. 23. Aug 17, 2007 ### turbo Sorry, reality alert, with poorly-developed sense of foolishness. 24. Aug 17, 2007 ### Mallignamius It doesn't seem to refresh on its own. Maybe try reloading the entire page for each query? 25. Aug 18, 2007 ### Jimmy Snyder I believe the technical term is meatetarian. For myself, I am a humanitarian.
# Pushdown# Trino can push down the processing of queries, or parts of queries, into the connected data source. This means that a specific predicate, aggregation function, or other operation, is passed through to the underlying database or storage system for processing. The results of this pushdown can include the following benefits:
• Improved overall query performance
• Reduced network traffic between Trino and the data source
• Reduced load on the remote data source
These benefits often result in significant cost reduction. Support for pushdown is specific to each connector and the relevant underlying database or storage system. ## Predicate pushdown# Predicate pushdown optimizes row-based filtering. It uses the inferred filter, typically resulting from a condition in a WHERE clause, to omit unnecessary rows. The processing is pushed down to the data source by the connector and then processed by the data source. If predicate pushdown for a specific clause is successful, the EXPLAIN plan for the query does not include a ScanFilterProject operation for that clause. ## Projection pushdown# Projection pushdown optimizes column-based filtering. It uses the columns specified in the SELECT clause and other parts of the query to limit access to these columns. The processing is pushed down to the data source by the connector and then the data source only reads and returns the necessary columns. If projection pushdown is successful, the EXPLAIN plan for the query only accesses the relevant columns in the Layout of the TableScan operation. ## Dereference pushdown# Projection pushdown and dereference pushdown limit access to relevant columns, except dereference pushdown is more selective. It limits access to only read the specified fields within a top level or nested ROW data type. For example, consider a table in the Hive connector that has a ROW type column with several fields. If a query only accesses one field, dereference pushdown allows the file reader to read only that single field within the row. The same applies to fields of a row nested within the top level row. This can result in significant savings in the amount of data read from the storage system. ## Aggregation pushdown# Aggregation pushdown can take place provided the following conditions are satisfied:
• If aggregation pushdown is generally supported by the connector.
• If pushdown of the specific function or functions is supported by the connector.
• If the query structure allows pushdown to take place.
You can check if pushdown for a specific query is performed by looking at the EXPLAIN plan of the query. If an aggregate function is successfully pushed down to the connector, the explain plan does not show that Aggregate operator. The explain plan only shows the operations that are performed by Trino. As an example, we loaded the TPCH data set into a PostgreSQL database and then queried it using the PostgreSQL connector:
SELECT regionkey, count(*) FROM nation GROUP BY regionkey;
You can get the explain plan by prepending the above query with EXPLAIN:
EXPLAIN SELECT regionkey, count(*) FROM nation GROUP BY regionkey;
The explain plan for this query does not show any Aggregate operator with the count function, as this operation is now performed by the connector. You can see the count(*) function as part of the PostgreSQL TableScan operator. This shows you that the pushdown was successful.
Fragment 0 [SINGLE] Output layout: [regionkey_0, _generated_1] Output partitioning: SINGLE [] Output[regionkey, _col1] │ Layout: [regionkey_0:bigint, _generated_1:bigint] │ Estimates: {rows: ? (?), cpu: ?, memory: 0B, network: ?} │ regionkey := regionkey_0 │ _col1 := _generated_1 └─ RemoteSource[1] Layout: [regionkey_0:bigint, _generated_1:bigint] Fragment 1 [SOURCE] Output layout: [regionkey_0, _generated_1] Output partitioning: SINGLE [] TableScan[postgresql:tpch.nation tpch.nation columns=[regionkey:bigint:int8, count(*):_generated_1:bigint:bigint] groupingSets=[[regionkey:bigint:int8]], gro Layout: [regionkey_0:bigint, _generated_1:bigint] Estimates: {rows: ? (?), cpu: ?, memory: 0B, network: 0B} _generated_1 := count(*):_generated_1:bigint:bigint regionkey_0 := regionkey:bigint:int8 A number of factors can prevent a push down: • adding a condition to the query • using a different aggregate function that cannot be pushed down into the connector • using a connector without pushdown support for the specific function As a result, the explain plan shows the Aggregate operation being performed by Trino. This is a clear sign that now pushdown to the remote data source is not performed, and instead Trino performs the aggregate processing. Fragment 0 [SINGLE] Output layout: [regionkey, count] Output partitioning: SINGLE [] Output[regionkey, _col1] │ Layout: [regionkey:bigint, count:bigint] │ Estimates: {rows: ? (?), cpu: ?, memory: ?, network: ?} │ _col1 := count └─ RemoteSource[1] Layout: [regionkey:bigint, count:bigint] Fragment 1 [HASH] Output layout: [regionkey, count] Output partitioning: SINGLE [] Aggregate(FINAL)[regionkey] │ Layout: [regionkey:bigint, count:bigint] │ Estimates: {rows: ? (?), cpu: ?, memory: ?, network: ?} │ count := count("count_0") └─ LocalExchange[HASH][$hashvalue] ("regionkey") │ Layout: [regionkey:bigint, count_0:bigint,$hashvalue:bigint] │ Estimates: {rows: ? (?), cpu: ?, memory: ?, network: ?} └─ RemoteSource[2] Layout: [regionkey:bigint, count_0:bigint, $hashvalue_1:bigint] Fragment 2 [SOURCE] Output layout: [regionkey, count_0,$hashvalue_2] Output partitioning: HASH [regionkey][$hashvalue_2] Project[] │ Layout: [regionkey:bigint, count_0:bigint,$hashvalue_2:bigint] │ Estimates: {rows: ? (?), cpu: ?, memory: ?, network: ?} │ $hashvalue_2 := combine_hash(bigint '0', COALESCE("$operator\$hash_code"("regionkey"), 0)) └─ Aggregate(PARTIAL)[regionkey] │ Layout: [regionkey:bigint, count_0:bigint] │ count_0 := count(*) └─ TableScan[tpch:nation:sf0.01, grouped = false] Layout: [regionkey:bigint] Estimates: {rows: 25 (225B), cpu: 225, memory: 0B, network: 0B} regionkey := tpch:regionkey ### Limitations# Aggregation pushdown does not support a number of more complex statements: • complex grouping operations such as ROLLUP, CUBE, or GROUPING SETS • expressions inside the aggregation function call: sum(a * b) • coercions: sum(integer_column) • aggregations with ordering • aggregations with filter ## Join pushdown# Join pushdown allows the connector to delegate the table join operation to the underlying data source. This can result in performance gains, and allows Trino to perform the remaining query processing on a smaller amount of data. The specifics for the supported pushdown of table joins varies for each data source, and therefore for each connector. 
However, there are some generic conditions that must be met in order for a join to be pushed down: • all predicates that are part of the join must be possible to be pushed down • the tables in the join must be from the same catalog You can verify if pushdown for a specific join is performed by looking at the EXPLAIN plan of the query. The explain plan does not show a Join operator, if the join is pushed down to the data source by the connector: EXPLAIN SELECT c.custkey, o.orderkey FROM orders o JOIN customer c ON c.custkey = o.custkey; The following plan results from the PostgreSQL connector querying TPCH data in a PostgreSQL database. It does not show any Join operator as a result of the successful join push down. Fragment 0 [SINGLE] Output layout: [custkey, orderkey] Output partitioning: SINGLE [] Output[custkey, orderkey] │ Layout: [custkey:bigint, orderkey:bigint] │ Estimates: {rows: ? (?), cpu: ?, memory: 0B, network: ?} └─ RemoteSource[1] Layout: [orderkey:bigint, custkey:bigint] Fragment 1 [SOURCE] Output layout: [orderkey, custkey] Output partitioning: SINGLE [] TableScan[postgres:Query[SELECT l."orderkey" AS "orderkey_0", l."custkey" AS "custkey_1", r."custkey" AS "custkey_2" FROM (SELECT "orderkey", "custkey" FROM "tpch"."orders") l INNER JOIN (SELECT "custkey" FROM "tpch"."customer") r O Layout: [orderkey:bigint, custkey:bigint] Estimates: {rows: ? (?), cpu: ?, memory: 0B, network: 0B} orderkey := orderkey_0:bigint:int8 custkey := custkey_1:bigint:int8 It is typically beneficial to push down a join. Pushing down a join can also increase the row count compared to the size of the input to the join. This may impact performance. ## Limit pushdown# A LIMIT or FETCH FIRST clause reduces the number of returned records for a statement. Limit pushdown enables a connector to push processing of such queries of unsorted record to the underlying data source. A pushdown of this clause can improve the performance of the query and significantly reduce the amount of data transferred from the data source to Trino. Queries include sections such as LIMIT N or FETCH FIRST N ROWS. Implementation and support is connector-specific since different data sources have varying capabilities. ## Top-N pushdown# The combination of a LIMIT or FETCH FIRST clause with an ORDER BY clause creates a small set of records to return out of a large sorted dataset. It relies on the order to determine which records need to be returned, and is therefore quite different to optimize compared to a Limit pushdown. The pushdown for such a query is called a Top-N pushdown, since the operation is returning the top N rows. It enables a connector to push processing of such queries to the underlying data source, and therefore significantly reduces the amount of data transferred to and processed by Trino. Queries include sections such as ORDER BY ... LIMIT N or ORDER BY ... FETCH FIRST N ROWS. Implementation and support is connector-specific since different data sources support different SQL syntax and processing.
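As an illustration (my own example, reusing the TPCH nation table from the aggregation pushdown section), a query that is a candidate for Top-N pushdown looks like the following:

SELECT name, regionkey
FROM nation
ORDER BY name
LIMIT 5;

As with the other pushdown types, you can prepend EXPLAIN to the query and check whether the sorting and limiting still show up as operations performed by Trino; if the connector pushed the Top-N processing down, they are handled by the underlying data source instead.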
Why is $q(f,g) = (f-g,0)$ not adjointable? Let $$A= C([0,1])$$ and $$J= \{f \in A: f(0) = 0\}$$. Consider the Hilbert $$C^*$$-module $$E:= A \oplus J$$ (with the obvious right $$A$$-action and inner product). I want to prove that $$q: E \to E: (f,g) \mapsto (f-g, 0)$$ is not adjointable. This is claimed in Lance's book on Hilbert $$C^*$$-modules, p22. Here is what I tried. Assume to the contrary that $$q$$ is adjointable. Then there is $$q^*: E \to E$$ such that $$(\overline{f-g})s= \langle q(f,g) , (s,t)\rangle = \langle (f,g), q^*(s,t)\rangle.$$ In particular, $$q^*(s,t)$$ does not depend on $$t$$ so we have $$q^*(s,t) = q^*(s,0)$$. Then I'm stuck.

This is just a calculation. Continuing your argument, $$q^*(s,t) = q^*(s,0) = (s_1,-s_2)$$ for some $$s_1\in A, s_2\in J$$ (I add the minus sign for convenience later). Then $$\overline{f} s - \overline{g}s = \langle (f,g), (s_1,-s_2) \rangle = \overline{f} s_1 - \overline{g}s_2,$$ for all $$f\in A, g\in J$$. Set $$f=1,g=0$$ to see that $$s = s_1$$; set $$f=0$$ to see that $$\overline{g} s = \overline{g} s_2,$$ for all $$g\in J$$. Letting $$g$$ run through an approximate identity for $$J$$ (so a net $$(g_i)$$ with $$g_i(x)\rightarrow 1$$ for each $$x>0$$) we conclude that $$s(x) = s_2(x)$$ for all $$x>0$$. If for example $$s=1\in A\setminus J$$ this shows that $$s_2(x)=1$$ for all $$x>0$$, contradicting that $$s_2\in J$$.
# Rational Expressions and Radical Expressions Show 40 post(s) from this thread on one page Page 1 of 2 12 Last • July 9th 2009, 07:33 PM illeatyourxface I just want to check my answers with someone because im not sure i am correct... 1. 12u cubed/7v to the fifth * 14v squared/3u = 8 u squared/ v cubed 2. 9x * 4/x = 36? 3. x squared -3x +2/x squared -4x +4 divided by x squared -2x +1/ 3x squared -12 = 3(x+2)/ (x-1) 4. 15 divided by 5x/x+1 = 3(x+1) / x 5. 6x/x-2 divided by 3x = 2/x-2 • July 9th 2009, 07:43 PM yeongil You really should learn to type in LaTex. I can barely read your post. Quote: 1. 12u cubed/7v to the fifth * 14v squared/3u = 8 u squared/ v cubed You mean this? $\frac{12u^3}{7v^5} \cdot \frac{14v^2}{3u}$ If so, then $\frac{12u^3}{7v^5} \cdot \frac{14v^2}{3u} = \frac{4u^2}{v^3} \cdot \frac{2}{1} = \frac{8u^2}{v^3}$ It's correct. Quote: 2. 9x * 4/x = 36? You mean this? $9x \cdot \frac{4}{x}$ If so, and assuming that x isn't 0, then 36 is right. Quote: 3. x squared -3x +2/x squared -4x +4 divided by x squared -2x +1/ 3x squared -12 = 3(x+2)/ (x-1) I'm not even touching this one. (Headbang) Quote: 4. 15 divided by 5x/x+1 = 3(x+1) / x You mean this? $15 \div \frac{5x}{x + 1}$ $= 15 \cdot \frac{x + 1}{5x} = \frac{3(x + 1)}{x}$ Quote: 5. 6x/x-2 divided by 3x = 2/x-2 You mean this? $\frac{6x}{x - 2} \div 3x$ $= \frac{6x}{x - 2} \cdot \frac{1}{3x} = \frac{2}{x - 2}$ 01 • July 9th 2009, 07:48 PM illeatyourxface Thanks again, how would you solve problem involving addition??? ex) x/ (x squared-9) + 3/(x squared-9) • July 9th 2009, 07:49 PM pickslides Quote: Originally Posted by illeatyourxface I just want to check my answers with someone because im not sure i am correct... 1. 12u cubed/7v to the fifth * 14v squared/3u = 8 u squared/ v cubed $\frac{12u^3}{7v^5}\times\frac{14v^2}{3u}$ $=\frac{12u^3\times 14v^2}{7v^5\times 3u}$ $=\frac{12u^3\times 14v^2}{7v^5\times 3u}$ $=\frac{8u^3\times v^2}{v^5\times u}$ $=8u^2\times v^{-3}$ $=\frac{8u^2}{ v^{-3}}$ • July 9th 2009, 07:51 PM yeongil Quote: Originally Posted by illeatyourxface Thanks again, how would you solve problem involving addition??? ex) x/ (x squared-9) + 3/(x squared-9) If the denominators are the same, just add the numerators up, like this: $\frac{x}{x^2 - 9} + \frac{3}{x^2 - 9} = \frac{x + 3}{x^2 - 9}$ But we're not done, because the denominator can be factored: $\frac{x + 3}{x^2 - 9} = \frac{x + 3}{(x + 3)(x - 3)}$ Cancel out the x + 3, and the answer is $\frac{1}{x - 3}$ 01 • July 9th 2009, 07:53 PM illeatyourxface Quote: Originally Posted by pickslides $\frac{12u^3}{7v^5}\times\frac{14v^2}{3u}$ $=\frac{12u^3\times 14v^2}{7v^5\times 3u}$ $=\frac{12u^3\times 14v^2}{7v^5\times 3u}$ $=\frac{8u^3\times v^2}{v^5\times u}$ $=8u^2\times v^{-3}$ $=\frac{8u^2}{ v^{-3}}$ I thought negative exponents were used only if you moved it from the top or bottom...?? • July 9th 2009, 07:55 PM Stroodle Quote: Originally Posted by illeatyourxface 3. x squared -3x +2/x squared -4x +4 divided by x squared -2x +1/ 3x squared -12 = Do you mean: $\frac{x^2-3x+2}{x^2-4x+4}\div\frac{x^2-2x+1}{3x^2-12}$ $=\frac{(x-2)(x-1)}{(x-2)^2}\div\frac{(x-1)^2}{3(x+2)(x-2)}$ $=\frac{3(x-2)^2(x+2)(x-1)}{(x-2)^2(x-1)^2}$ $=\frac{3(x+2)}{x-1}$ • July 9th 2009, 07:57 PM illeatyourxface okayyy how would you add or subtract if the demoninator was different? 
ex) 4/x cubed y + 7/x y squared • July 9th 2009, 07:58 PM illeatyourxface and im really sorry if this is confusing i dont know how to type in LAtex, im sorrry • July 9th 2009, 08:06 PM Stroodle $\frac{12u^3}{7v^5}\times\frac{14v^2}{3u}$ $=\frac{168u^3v^2}{21v^5u}$ $=\frac{8u^2}{v^3}$ edit* Oops I didn't notice that Yeongil already answered this one correctly... • July 9th 2009, 08:13 PM pickslides Quote: Originally Posted by illeatyourxface I thought negative exponents were used only if you moved it from the top or bottom...?? yep my bad, forgot to remove the "-" sign • July 9th 2009, 08:34 PM illeatyourxface to solve it.... would it be like 4/ x cubed y + 7/ x y squared = (4)(y) + (7) (x squared) / (x cubed y)(x y squared) ???? • July 9th 2009, 08:44 PM Stroodle Do you mean: $\frac{4}{x^3y}+\frac{7}{xy^2}$ $=\frac{4y}{x^3y^2}+\frac{7x^2}{x^3y^2}$ $=\frac{4y+7x^2}{x^3y^2}$ • July 9th 2009, 09:11 PM illeatyourxface okay soo if you are subtracting... 11/3x - 5/6x = 33x -5 / 6x ? • July 9th 2009, 09:16 PM Stroodle $\frac{11}{3x}-\frac{5}{6x}$ To make the denominator the same for both terms you multiply the top and bottom of the first term by 2 $=\frac{22}{6x}-\frac{5}{6x}$ $=\frac{22-5}{6x}$ $=\frac{17}{6x}$ Show 40 post(s) from this thread on one page Page 1 of 2 12 Last
MOAB: Mesh Oriented datABase  (version 5.1.1) 1.Introduction 2.MOAB Data Model 2.1. MOAB Interface 2.2. Mesh Entities 2.3. Entity Sets 2.4. Tags 3.MOAB API Design Philosophy and Summary 4.Related Mesh Services 4.1. Visualization 4.2. Parallel Decomposition 4.3. Skinner 4.4. Tree Decompositions 4.6. File Readers/Writers Packaged With MOAB 4.7. AHF Representation 4.8. Uniform Mesh Refinement 5.Parallel Mesh Representation and Query 5.1. Nomenclature & Representation 5.2. Parallel Mesh Initialization 5.3. Parallel Mesh Query Functions 5.4. Parallel Mesh Communication 6.Building MOAB-Based Applications 7.iMesh (ITAPS Mesh Interface) Implementation in MOAB 8.Python Interface (PyMOAB) 9.Structured Mesh Representation 10.Spectral Element Meshes 10.1. Representations 10.3. MOAB Representation 11.Performance and Using MOAB Efficiently from Applications 12.Error Handling 13.Conclusions and Future Plans 14.References ## 1.Introduction In scientific computing, systems of partial differential equations (PDEs) are solved on computers. One of the most widely used methods to solve PDEs numerically is to solve over discrete neighborhoods or “elements” of the domain. Popular discretization methods include Finite Difference (FD), Finite Element (FE), and Finite Volume (FV). These methods require the decomposition of the domain into a discretized representation, which is referred to as a “mesh”. The mesh is one of the fundamental types of data linking the various tools in the analysis process (mesh generation, analysis, visualization, etc.). Thus, the representation of mesh data and operations on those data play a very important role in PDE-based simulations. MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element “zoo”, along with polygons and polyhedra. The functional interface to MOAB is simple, consisting of only four fundamental data types. This data is quite powerful, allowing the representation of most types of metadata commonly found on the mesh. Internally MOAB uses array-based storage for fine-grained data, which in many cases provides more efficient access, especially for large portions of mesh and associated data. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access. The MOAB data model consists of the following four fundamental types: mesh interface instance, mesh entities (vertex, edge, tri, etc.), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed graph provided by set parent/child relationships is useful for embedding graphs whose nodes include collections of mesh entities; this approach has been used to represent a wide variety of application-specific data, including geometric model topology, processor partitions, and various types of search trees. Tags are named data which can be assigned to the mesh as a whole, individual entities, or sets. 
Tags are a mechanism for attaching data to individual entities, and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application-specific data. Various mesh-related tools are provided with MOAB or can be used directly with MOAB. These tools can be used for mesh format translation (mbconvert), mesh skinning (Skinner class), solution transfer between meshes (MBCoupler tool), ray tracing and other geometric searches (OrientedBoxTreeTool, AdaptiveKDTree), visualization (vtkMOABReader tool), and relation between mesh and geometric models (the separately-packed Lasso tool). These tools are described later in this document. MOAB is written in the C++ programming language, with applications interacting with MOAB mostly through its moab::Interface class. All of the MOAB functions and classes are isolated in and accessed through the moab namespace1. The remainder of this report gives class and function names without the “moab::” namespace qualification; unless otherwise noted, the namespace qualifier should be added to all class and function names referenced here. MOAB also implements the iMesh interface, which is specified in C but can be called directly from other languages. Almost all of the functionality in MOAB can be accessed through the iMesh interface. MOAB is developed and supported on the Linux and MacOS operating systems, as well as various HPC operating systems. MOAB can be used on parallel computing systems as well, including both clusters and high-end parallel systems like IBM BG/P and Cray systems. MOAB is released under a standard LGPL open source software license. MOAB is used in several ways in various applications. MOAB serves as the underlying mesh data representation in several scientific computing applications [1]. MOAB can also be used as a mesh format translator, using readers and writers included in MOAB. MOAB has also been used as a bridge to couple results in multi-physics analysis and to link these applications with other mesh services [2]. The remainder of this report is organized as follows. Section 2, “Getting Started”, provides a few simple examples of using MOAB to perform simple tasks on a mesh. Section 3 discusses the MOAB data model in more detail, including some aspects of the implementation. Section 4 summarizes the MOAB function API. Section 5 describes some of the tools included with MOAB, and the implementation of mesh readers/writers for MOAB. Section 6 describes how to build MOAB-based applications. Section 7 contains a brief description of MOAB’s relation to the iMesh mesh interface. Sections 8 and 9 discuss MOAB's representations of structured and spectral element meshes, respectively. Section 10 gives helpful hints for accessing MOAB in an efficient manner from applications. Section 11 gives a conclusion and future plans for MOAB development. Section 12 gives references cited in this report. Several other sources of information about MOAB may also be of interest to readers. Meta-data conventions define how sets and /or tags are used together to represent various commonly-used simulation constructs; conventions used by MOAB are described in Ref [4], which is also included in the MOAB source distribution. This document is maintained separately from this document, since it is expected to change over time. The MOAB project maintains a wiki [5], which links to most MOAB-related information. 
MOAB also uses several mailing lists [6],[7] for MOAB-related discussions. Potential users are encouraged to interact with the MOAB team using these mailing lists. 1 Non-namespaced names are also provided for backward compatibility, with the “MB” prefix added to the class or variable name. ## 2.MOAB Data Model The MOAB data model describes the basic types used in MOAB and the language used to communicate that data to applications. This chapter describes that data model, along with some of the reasons for some of the design choices in MOAB. ### 2.1. MOAB Interface MOAB is written in C++. The primary interface with applications is through member functions of the abstract base class Interface. The MOAB library is created by instantiating Core, which implements the Interface API. Multiple instances of MOAB can exist concurrently in the same application; mesh entities are not shared between these instances2. MOAB is most easily viewed as a database of mesh objects accessed through the instance. No other assumptions are explicitly made about the nature of the mesh stored there; for example, there is no fundamental requirement that elements fill space or do not overlap each other geometrically. 2 One exception to this statement is when the parallel interface to MOAB is used; in this case, entity sharing between instances is handled explicitly using message passing. This is described in more detail in Section 5 of this document. ### 2.2. Mesh Entities MOAB represents the following topological mesh entities: vertex, edge, triangle, quadrilateral, polygon, tetrahedron, pyramid, prism, knife, hexahedron, polyhedron. MOAB uses the EntityType enumeration to refer to these entity types (see Table 1). This enumeration has several special characteristics, chosen intentionally: the types begin with vertex; entity types are grouped by topological dimension, with lower-dimensional entities appearing before higher dimensions; the enumeration includes an entity type for sets (described in the next section); and MBMAXTYPE is included at the end of this enumeration, and can be used to terminate loops over type. In addition to these defined values, an increment operator (++) is defined such that variables of type EntityType can be used as iterators in loops. MOAB refers to entities using “handles”. Handles are implemented as long integer data types, with the four highest-order bits used to store the entity type (mesh vertex, edge, tri, etc.) and the remaining bits storing the entity id. This scheme is convenient for applications because:
• Handles sort lexicographically by type and dimension; this can be useful for grouping and iterating over entities by type.
• The type of an entity is indicated by the handle itself, without needing to call a function.
• Entities allocated in sequence will typically have contiguous handles; this characteristic can be used to efficiently store and operate on large lists of handles.
This handle implementation is exposed to applications intentionally, because of optimizations that it enables, and is unlikely to change in future versions. ### Table 1: Values defined for the EntityType enumerated type, in enumeration order. MBVERTEX = 0, MBEDGE, MBTRI, MBQUAD, MBPOLYGON, MBTET, MBPYRAMID, MBPRISM, MBKNIFE, MBHEX, MBPOLYHEDRON, MBENTITYSET, MBMAXTYPE MOAB defines a special class for storing lists of entity handles, named Range. This class stores handles as a series of (start_handle, end_handle) subrange tuples. If a list of handles has large contiguous ranges, it can be represented in almost constant size using Range.
Since entities are typically created in groups, e.g. during mesh generation or file import, a high degree of contiguity in handle space is typical. Range provides an interface similar to C++ STL containers like std::vector, containing iterator data types and functions for initializing and iterating over entity handles stored in the range. Range also provides functions for efficient Boolean operations like subtraction and intersection. Most API functions in MOAB come in both range-based and vector-based variants. By definition, a list of entities stored in a Range is always sorted, and can contain a given entity handle only once. Range cannot store the handle 0 (zero). Typical usage of a Range object would look like:

```cpp
using namespace moab;
int my_function(Range &from_range) {
  int num_in_range = from_range.size();
  Range to_range;
  Range::iterator rit;
  for (rit = from_range.begin(); rit != from_range.end(); ++rit) {
    EntityHandle this_ent = *rit;
    to_range.insert(this_ent);
  }
  return num_in_range;
}
```

Here, the range is iterated similarly to how a std::vector is iterated.

The term adjacencies is used to refer to those entities topologically connected to a given entity, e.g. the faces bounded by a given edge or the vertices bounding a given region. The same term is used for both higher-dimensional (or bounded) and lower-dimensional (or bounding) adjacent entities. MOAB provides functions for querying adjacent entities by target dimension, using the same functions for higher- and lower-dimension adjacencies. By default, MOAB stores the minimum data necessary to recover adjacencies between entities. When a mesh is initially loaded into MOAB, only entity-vertex (i.e. “downward”) adjacencies are stored, in the form of entity connectivity. When “upward” adjacencies are requested for the first time, e.g. from vertices to regions, MOAB stores all vertex-entity adjacencies explicitly, for all entities in the mesh. Non-vertex entity to entity adjacencies are never stored, unless explicitly requested by the application.

In its most fundamental form, a mesh need only be represented by its vertices and the entities of maximal topological dimension. For example, a hexahedral mesh can be represented as the connectivity of the hex elements and the vertices forming the hexes. Edges and faces in a 3D mesh need not be explicitly represented. We refer to such entities as “AEntities”, where ‘A’ refers to “Auxiliary”, “Ancillary”, and a number of other words mostly beginning with ‘A’. Individual AEntities are created only when requested by applications, either using mesh modification functions or by requesting adjacencies with a special “create if missing” flag passed as “true”. This reduces the overall memory usage when representing large meshes. Note that entities must be explicitly represented before they can be assigned tag values or added to entity sets (described in the following sections).

### 2.3. Entity Sets

Entity sets are also known as "mesh sets" or, when the context is clear (they should not be confused with std::set), simply "sets". Entity sets are used to store arbitrary collections of entities and other sets. Sets are used for a variety of things in mesh-based applications, from the set of entities discretizing a given geometric model entity to the entities partitioned to a specific processor in a parallel finite element application. MOAB entity sets can also store parent/child relations with other entity sets, with these relations distinct from contains relations.
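A small sketch of how the two kinds of relations might be set up (assuming a valid `Interface *moab` and two already-populated Ranges of faces and regions; the set roles are illustrative):

```cpp
using namespace moab;

// Two sets: one holding interface faces, one holding a block of regions.
EntityHandle face_set, region_set;
ErrorCode rval = moab->create_meshset(MESHSET_SET, face_set);
rval = moab->create_meshset(MESHSET_SET, region_set);

// "Contains" relations: each set contains mesh entities.
rval = moab->add_entities(face_set, interface_faces);   // Range of faces
rval = moab->add_entities(region_set, block_regions);   // Range of regions

// Parent/child relation between the sets themselves, e.g. to record that
// the face set bounds the region set; this is stored separately from the
// contains relations above.
rval = moab->add_parent_child(region_set, face_set);
```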
Parent/child relations are useful for building directed graphs with graph nodes representing collections of mesh entities; this construct can be used, for example, to represent an interface of mesh faces shared by two distinct collections of mesh regions. MOAB also defines one special set, the “root set”, which is the interface itself; all entities are part of this set by definition. Defining a root set allows the use of a single set of MOAB API functions to query entities in the overall mesh as well as in its subsets.

MOAB entity sets can be one of two distinct types: list-type entity sets preserve the order in which entities are added to the set, and can store a given entity handle multiple times in the same set; set-type sets are always ordered by handle, regardless of the order of addition to the set, and can store a given entity handle only once. This characteristic is assigned when the set is created, and cannot be changed during the set’s lifetime.

MOAB provides the option to track or not track entities in a set. When entities (and sets) are deleted by other operations in MOAB, they will also be removed from containing sets for which tracking has been enabled. This behavior is assigned when the set is created, and cannot be changed during the set’s lifetime. The cost of turning tracking on for a given set is sizeof(EntityHandle) for each entity added to the set; MOAB stores containing sets in the same list which stores adjacencies to other entities.

Using an entity set looks like the following:

```cpp
using namespace moab;
// load a file using MOAB, putting the loaded mesh into a file set
EntityHandle file_set;
ErrorCode rval = moab->create_meshset(MESHSET_SET, file_set);
rval = moab->load_file("input_mesh.h5m", &file_set);  // file name is illustrative
// get all the 3D entities in the set
Range set_ents;
rval = moab->get_entities_by_dimension(file_set, 3, set_ents);
```

Entity sets are often used in conjunction with tags (described in the next section), and provide a powerful mechanism to store a variety of meta-data with meshes.

### 2.4. Tags

Applications of a mesh database often need to attach data to mesh entities. The types of attached data are often not known at compile time, and can vary across individual entities and entity types. MOAB refers to this attached data as a “tag”. Tags can be thought of loosely as a variable, which can be given a distinct value for individual entities, entity sets, or for the interface itself. A tag is referenced using a handle, similarly to how entities are referenced in MOAB. Each MOAB tag has the following characteristics, which can be queried through the MOAB interface:
• Name
• Size (in bytes)
• Storage type
• Data type (integer, double, opaque, entity handle)
• Handle

The storage type determines how tag values are stored on entities.
• Dense: Dense tag values are stored in arrays which match arrays of contiguous entity handles. Dense tags are more efficient in both storage and memory if large numbers of entities are assigned the same tag. Storage for a given dense tag is not allocated until a tag value is set on an entity; memory for a given dense tag is allocated for all entities in a given sequence at the same time.
• Sparse: Sparse tags are stored as a list of (entity handle, tag value) tuples, one list per sparse tag, sorted by entity handle.
• Bit: Bit tags are stored similarly to dense tags, but with special handling to allow allocation in bit-size amounts per entity.
MOAB also supports variable-length tags, which can have a different length for each entity they are assigned to. Variable-length tags are stored similarly to sparse tags.
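The characteristics listed above can also be read back from a tag handle; a minimal sketch (assuming `my_tag` is a valid Tag handle obtained earlier, and that the query functions shown here are available in the MOAB version in use):

```cpp
using namespace moab;
#include <string>

// Query basic characteristics of an existing tag handle `my_tag`.
std::string tag_name;
ErrorCode rval = moab->tag_get_name(my_tag, tag_name);

DataType tag_type;
rval = moab->tag_get_data_type(my_tag, tag_type);  // e.g. MB_TYPE_DOUBLE
```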
The data type of a tag can be either one understood at compile time (integer, double, entity handle), in which case the tag value can be saved and restored properly to/from files and between computers of different architecture (MOAB provides a native HDF5-based save/restore format for this purpose; see Section 4.6), or opaque. The opaque data type is used for character strings, or for allocating “raw memory” for use by applications (e.g. for storing application-defined structures or other abstract data types). These tags are saved and restored as raw memory, with no special handling for endian or precision differences.

An application would use the following code to attach a double-precision tag to vertices in a mesh, e.g. to assign a temperature field to those vertices:

```cpp
using namespace moab;
// load a file using MOAB and get the vertices
Range verts;
ErrorCode rval = moab->get_entities_by_dimension(0, 0, verts);
// create a tag called "TEMPERATURE"
Tag temperature;
double def_val = -1.0e-300, new_val = 273.0;
rval = moab->tag_create("TEMPERATURE", sizeof(double), MB_TAG_DENSE,
                        MB_TYPE_DOUBLE, temperature, &def_val);
// assign a value to vertices
for (Range::iterator vit = verts.begin(); vit != verts.end(); ++vit)
  rval = moab->tag_set_data(temperature, &(*vit), 1, &new_val);
```

The semantic meaning of a tag is determined by applications using it. However, to promote interoperability between applications, there are a number of tag names reserved by MOAB which are intended to be used by convention. Mesh readers and writers in MOAB use these tag conventions, and applications can use them as well to access the same data. Ref. [4] maintains an up-to-date list of conventions for meta-data usage in MOAB.

## 3.MOAB API Design Philosophy and Summary

This section describes the design philosophy behind MOAB, and summarizes the functions, data types and enumerated variables in the MOAB API. A complete description of the MOAB API is available in online documentation in the MOAB distribution [8].

MOAB is designed to operate efficiently on collections of entities. Entities are often created or referenced in groups (e.g. the mesh faces discretizing a given geometric face, the 3D elements read from a file), with those groups having some form of temporal or spatial locality. The interface provides special mechanisms for reading data directly into the native storage used in MOAB, and for writing large collections of entities directly from that storage, to avoid data copies. MOAB applications structured to take advantage of that locality will typically operate more efficiently.

MOAB has been designed to maximize the flexibility of mesh data which can be represented. There is no explicit constraint on the geometric structure of meshes represented in MOAB, or on the connectivity between elements. In particular, MOAB allows the representation of multiple entities with the same exact connectivity; however, in these cases, explicit adjacencies must be used to distinguish adjacencies with AEntities bounding such entities.

The number of vertices used to represent a given topological entity can vary, depending on analysis needs; this is often the case in FEA. For example, applications often use “quadratic” or 10-vertex tetrahedra, with vertices at edge midpoints as well as corners. MOAB does not distinguish these variants by entity type, referring to all variants as “tetrahedra”.
The number of vertices for a given entity is used to distinguish the variants, with canonical numbering conventions used to determine placement of the vertices [9]. This is similar to how such variations are represented in the Exodus [10] and Patran [11] file formats. In practice, we find that this simplifies coding in applications, since in many cases the handling of entities depends only on the number of corner vertices in the element. Some MOAB API functions provide a flag which determines whether corner or all vertices are requested.

The MOAB API is designed to balance complexity and ease of use. This balance is evident in the following general design characteristics:
• Entity lists: Lists of entities are passed to and from MOAB in a variety of forms. Lists output from MOAB are passed as either STL vector or Range data types. Either of these constructs may be more efficient in both time and memory, depending on the semantics of the data being requested. Input lists are passed as either Ranges, or as a pointer to EntityHandle and a size. The latter allows the same function to be used when passing individual entities, without requiring construction of an otherwise unneeded STL vector.
• Entity sets: Most query functions accept an entity set as input. Applications can pass zero to indicate a request for the whole interface. Note that this convention applies only to query functions; attempts to add or subtract entities to/from the interface using set-based modification functions, or to add parents or children to the interface set, will fail. Allowing specification of the interface set in this manner avoids the need for a separate set of API functions to query the database as a whole.
• Implicit Booleans in output lists: A number of query functions in MOAB allow specification of a Boolean operation (Interface::INTERSECT or Interface::UNION). This operation is applied to the results of the query, often eliminating code the application would otherwise need to implement. For example, to find the set of vertices shared by a collection of quadrilaterals, the application would pass that list of quadrilaterals to a request for vertex adjacencies, with Interface::INTERSECT passed for the Boolean flag. The list of vertices returned would be the same as if the application called that function for each individual entity, and computed the intersection of the results over all the quadrilaterals. Applications may also input non-empty lists to store the results, in which case the intersection is also performed with entities already in the list. In many cases, this allows optimizations in both time and memory inside the MOAB implementation.

Since these objectives are at odds with each other, tradeoffs had to be made between them. Some specific issues that came up are:
• Using ranges: Where possible, entities can be referenced using either ranges (which allow efficient storage of long lists) or STL vectors (which allow list order to be preserved), in both input and output arguments.
• Entities in sets: Accessing the entities in a set is done using the same functions which access entities in the entire mesh. The whole mesh is referenced by specifying a set handle of zero3.
• Entity vectors on input: Functions which could normally take a single entity as input are specified to take a vector of handles instead. Single entities are specified by taking the address of that entity handle and specifying a list length of one.
This minimizes the number of functions, while preserving the ability to input single entities.4

Table 2 lists basic data types and enumerated variables defined and used by MOAB. Values of the ErrorCode enumeration are returned from most MOAB functions, and can be compared to those listed in Appendix [ref-appendix].

MOAB uses several pre-defined tag names to define data commonly found in various mesh-based analyses. Ref. [4] describes these meta-data conventions in more detail. These conventions will be added to as new conventions emerge for using sets and tags in MOAB applications.

### Table 2: Basic data types and enums defined in MOAB.

| Enum / Type | Description |
|---|---|
| ErrorCode | Specific error codes returned from MOAB |
| EntityHandle | Type used to represent entity handles |
| Tag | Type used to represent tag handles |
| TagType | Type used to represent tag storage type |
| DataType | Type used to represent tag data type |

Table 3 lists the various groups of functions that comprise the MOAB API. This is listed here strictly as a reference to the various types of functionality supported by MOAB; for a more detailed description of the scope and syntax of the MOAB API, see the online documentation [7].

### Table 3: Groups of functions in MOAB API. See Ref. [7] for more details.

| Function group | Examples | Description |
|---|---|---|
| Constructor, destructor, interface | Interface, ~Core, query_interface | Construct/destroy interface; get pointer to read/write interface |
| Entity query | get_entities_by_dimension, get_entities_by_handle | Get entities by dimension, type, etc. |
| Vertex coordinates | get_coords, set_coords | Get/set vertex coordinates |
| Connectivity | get_connectivity, set_connectivity | Get/set connectivity of non-vertex entities |
| Tags | tag_get_data, tag_create | Create, read, write tag data |
| Handles | type_from_handle, id_from_handle | Go between handles and types/ids |
| Geometric dimension | get_dimension, set_dimension | Get/set geometric dimension of mesh |
| Mesh modification | create_vertex, delete_entity | Create or delete mesh entities |
| Information | list_entities, get_last_error | Get or print certain information |
| High-order nodes | high_order_node | Get information on high-order nodes |
| Canonical numbering | side_number | Get canonical numbering information |

3In iMesh, the whole mesh is specified by a special entity set handle, referred to as the “root set”.
4Note that STL vectors of entity handles can be input in this manner by using &vector[0] and vector.size() for the 1d vector address and size, respectively.

## 4.Related Mesh Services

A number of mesh-based services are often used in conjunction with a mesh library. For example, parallel applications often need to visualize the mesh and associated data. Other services, like spatial interpolation or finding the faces on the “skin” of a 3D mesh, can be implemented more efficiently using knowledge of specific data structures in MOAB. Several of these services provided with MOAB are described in this chapter.

### 4.1. Visualization

Visualization is one of the most common needs associated with meshes. The primary tool used to visualize MOAB meshes is VisIt [11]. Users can download a VisIt version that has the MOAB plugin compiled, then a file in hdf5 MOAB format (default extension h5m) can be read directly. There are capabilities in VisIt for viewing and manipulation of tag data and some types of entity sets. Dense tag data is visualized using the same mechanisms used to view other field data in VisIt, e.g.
using a pseudocolor plot; material sets, Neumann and Dirichlet sets, and parallel partition sets can be visualized using the subset capability. Figure 2 shows a vertex-based radiation temperature field computed by the Cooper rad-hydro code [1] for a subset of geometric volumes in a mesh.

### 4.2. Parallel Decomposition

To support parallel simulation, applications often need to partition a mesh into parts, designed to balance the load and minimize communication between sets. MOAB includes the mbpart tool for this purpose, constructed on the well-known Zoltan partitioning library [12] and Metis [13]. After computing the partition using Zoltan or Metis, MOAB stores the partition as either tags on individual entities in the partition, or as tagged sets, one set per part. Since a partition often exhibits locality similar to how the entities were created, storing it as sets (based on Ranges) is often more memory-efficient than an entity tag-based representation. The figure below shows a couple of partitioned meshes computed with mbpart (with the -z option for Zoltan) and visualized in VisIt.

### 4.3. Skinner

An operation commonly applied to a mesh is to compute the outermost “skin” bounding a contiguous block of elements. This skin consists of elements of one fewer topological dimension, arranged in one or more topological balls on the boundary of the elements. The Skinner tool computes the skin of a mesh in a memory-efficient manner. Skinner uses knowledge about whether vertex-entity adjacencies and AEntities exist to minimize memory requirements and searching time required during the skinning process. This skin can be provided as a single collection of entities, or as sets of entities distinguished by forward and reverse orientation with respect to higher-dimensional entities in the set being skinned.

The following code fragment shows how Skinner can be used to compute the skin of a range of hex elements:

```cpp
using namespace moab;
Range hexes, faces;
ErrorCode rval = moab->get_entities_by_dimension(0, 3, hexes);
Skinner myskinner(moab);
bool verts_too = false;
rval = myskinner.find_skin(hexes, verts_too, faces);
```

Skinner can also skin a mesh based on geometric topology groupings imported with the mesh. The geometric topology groupings contain information about the mesh “owned” by each of the entities in the geometric model, e.g. the model vertices, edges, etc. Links between the mesh sets corresponding to those entities can be inferred directly from the mesh. Skinning a mesh this way will typically be much faster than doing so on the actual mesh elements, because there is no need to create and destroy interior faces on the mesh.

### 4.4. Tree Decompositions

MOAB provides several mechanisms for spatial decomposition and searching in a mesh:
• AdaptiveKDTree: Adaptive KD tree, a space-filling decomposition with axis-aligned splitting planes, enabling fast searching.
• BSPTree: Binary Space Partition tree, with non-axis-aligned partitions, for fast spatial searches with slightly better memory efficiency than KD trees.
• OrientedBoxTreeTool: Oriented Bounding Box tree hierarchy, useful for fast ray-tracing on collections of mesh facets.
These trees have various space and time searching efficiencies. All are implemented based on entity sets and parent/child relations between those sets, allowing storage of a tree decomposition using MOAB’s native file storage mechanism (see Section 4.6.1). MOAB’s entity set implementation is specialized for memory efficiency when representing binary trees.
Tree decompositions in MOAB have been used to implement fast ray tracing to support radiation transport [14], solution coupling between meshes [2], and embedded boundary mesh generation [15]. MOAB also includes the DAGMC tool, supporting Monte Carlo radiation transport.

The following code fragment shows very basic use of AdaptiveKDTree. A range of entities is put in the tree; the leaf containing a given point is found, and the entities in that leaf are returned.

```cpp
using namespace moab;
// (assumes myTree is an AdaptiveKDTree instance, tets a Range of tets,
//  xyz a double[3] point, and treeiter an AdaptiveKDTreeIter)

// create the adaptive kd tree from a range of tets
EntityHandle tree_root;
ErrorCode rval = myTree.build_tree(tets, tree_root);

// get the overall bounding box corners
double boxmax[3], boxmin[3];
rval = myTree.get_tree_box(tree_root, boxmax, boxmin);

// get the tree leaf containing point xyz, and the tets in that leaf
rval = myTree.leaf_containing_point(tree_root, xyz, treeiter);
Range leaf_tets;
rval = moab->get_entities_by_dimension(treeiter.handle(), 3, leaf_tets, false);
```

More detailed examples of using the various tree decompositions in MOAB can be found in [ref-treeexamples].

### 4.5. File Reader/Writer Interfaces

Mesh readers and writers communicate mesh into/out of MOAB from/to disk files. Reading a mesh often involves importing large sets of data, for example coordinates of all the nodes in the mesh. Normally, this process would involve reading data from the file into a temporary data buffer, then copying data from there into its destination in MOAB. To avoid the expense of copying data, MOAB has implemented a reader/writer interface that provides direct access to blocks of memory used to represent mesh.

The reader interface, declared in ReadUtilIface, is used to request blocks of memory for storing coordinate positions and element connectivity. The pointers returned from these functions point to the actual memory used to represent those data in MOAB. Once data is written to that memory, no further copying is done. This not only saves time, but it also eliminates the need to allocate a large memory buffer for intermediate storage of these data.

MOAB allocates memory for nodes and elements (and their corresponding dense tags) in chunks, to avoid frequent allocation/de-allocation of small chunks of memory. The chunk size used depends on where the mesh is being created from, and can strongly affect the performance (and memory layout) of MOAB. Since dense tags are allocated at the chunk size, this can also affect overall memory usage in cases where the mesh size is small but the number of dense tags or dense tag size is large. When creating vertices and elements through the normal MOAB API, default chunk sizes defined in the SequenceManager class are used. However, most of the file readers in MOAB allocate the exact amount of space necessary to represent the mesh being read. There are also a few exceptions to this:
• When compiled in parallel, this space is increased by a factor of 1.5, to allow subsequent creation of ghost vertices/elements in the same chunk as the original mesh.
• The .cub file reader, which creates nodes and elements for individual geometric entities in separate calls, allocates using the default vertex/element sequence sizes, which are defined in the SequenceManager class in MOAB.
Applications calling the reader interface functions directly can specify the allocation chunk size as an optional parameter.
The reader interface consists of the following functions:
• get_node_coords: Given the number of vertices requested, the number of geometric dimensions, and a requested start id, allocates a block of vertex handles and returns pointers to coordinate arrays in memory, along with the actual start handle for that block of vertices.
• get_element_connect: Given the number of elements requested, the number of vertices per element, the element type and the requested start id, allocates the block of elements, and returns a pointer to the connectivity array for those elements and the actual start handle for that block. The number of vertices per element is necessary because those elements may include higher-order nodes, and MOAB stores these as part of the normal connectivity array.
• update_adjacencies: This function takes the start handle for a block of elements and the connectivity of those elements, and updates adjacencies for those elements. Which adjacencies are updated depends on the options set in AEntityFactory.

The following code fragment illustrates the use of ReadUtilIface to read a mesh directly into MOAB’s native representation. This code assumes that connectivity is specified in terms of vertex indices, with vertex indices starting from 1.

```cpp
// get the read iface from moab
ReadUtilIface *iface;
ErrorCode rval = moab->query_interface(iface);

// allocate a block of vertex handles and read xyz’s into them
std::vector<double*> arrays;
EntityHandle startv, *starth;
rval = iface->get_node_coords(3, num_nodes, 0, startv, arrays);
for (int i = 0; i < num_nodes; i++)
  infile >> arrays[0][i] >> arrays[1][i] >> arrays[2][i];

// allocate block of hex handles and read connectivity into them
rval = iface->get_element_connect(num_hexes, 8, MBHEX, 0, starth);
for (int i = 0; i < 8*num_hexes; i++)
  infile >> starth[i];

// change connectivity indices to vertex handles
for (int i = 0; i < 8*num_hexes; i++)
  starth[i] += startv-1;
```

The writer interface, declared in WriteUtilIface, provides functions that support writing vertex coordinates and element connectivity to storage locations input by the application. Assembling these data is a common task for writing mesh, and can be non-trivial when exporting only subsets of a mesh. The writer interface declares the following functions:
• get_node_coords: Given already-allocated memory and the number of vertices and dimensions, and a range of vertices, this function writes vertex coordinates to that memory. If a tag is input, that tag is also written with integer vertex ids, starting with 1, corresponding to the order the vertices appear in that sequence (these ids are used to write the connectivity array in the form of vertex indices).
• get_element_connect: Given a range of elements and the tag holding vertex ids, and a pointer to memory, the connectivity of the specified elements is written to that memory, in terms of the indices referenced by the specified tag. Again, the number of vertices per element is input, to allow the direct output of higher-order vertices.
• gather_nodes_from_elements: Given a range of elements, this function returns the range of vertices used by those elements. If a bit-type tag is input, vertices returned are also marked with 0x1 using that tag. If no tag is input, the implementation of this function uses its own bit tag for marking, to avoid using an O(n²) algorithm for gathering vertices.
• reorder: Given a permutation vector, this function reorders the connectivity for entities with specified type and number of vertices per entity to match that permutation. This function is needed for writing connectivity into numbering systems other than that used internally in MOAB.

The following code fragment shows how to use WriteUtilIface to write the vertex coordinates and connectivity indices for a subset of entities.

```cpp
using namespace moab;
// get the write iface from moab
WriteUtilIface *iface;
ErrorCode rval = moab->query_interface(iface);

// get all hexes in the model, and choose the first 10 of those
Range tmp_hexes, hexes, verts;
rval = moab->get_entities_by_type(0, MBHEX, tmp_hexes);
for (int i = 0; i < 10; i++)
  hexes.insert(tmp_hexes[i]);
rval = iface->gather_nodes_from_elements(hexes, 0, verts);

// assign vertex ids
iface->assign_ids(verts, 0, 1);

// allocate space for coordinates & write them
std::vector<double*> arrays(3);
for (int i = 0; i < 3; i++)
  arrays[i] = new double[verts.size()];
iface->get_node_coords(3, verts.size(), verts, 0, 1, arrays);

// put connect’y in array, in the form of indices into vertex array
std::vector<int> conn(8*hexes.size());
iface->get_element_connect(hexes.size(), 8, 0, hexes, 0, 1, &conn[0]);
```

### 4.6. File Readers/Writers Packaged With MOAB

MOAB has been designed to efficiently represent data and metadata commonly found in finite element mesh files. Readers and writers are included with MOAB which import/export specific types of metadata in terms of MOAB sets and tags, as described earlier in this document. The number of readers and writers in MOAB will probably grow over time, and so they are not enumerated here. See the src/io/README file in the MOAB source distribution for a current list of supported formats.

Because of its generic support for readers and writers, described in the previous section, MOAB is also a good environment for constructing new mesh readers and writers. The ReadTemplate and WriteTemplate classes in src/io are useful starting points for constructing new file readers/writers; applications are encouraged to submit their own readers/writers for inclusion in MOAB’s contrib/io directory in the MOAB source.

The usefulness of a file reader/writer is determined not only by its ability to read and write nodes and elements, but also by its ability to store the various types of meta-data included with the typical mesh. MOAB readers and writers are distinguished by their ability to preserve meta-data in meshes that they read and write. For example, MOAB’s CUB reader imports not only the mesh saved from CUBIT, but also the grouping of mesh entities into sets which reflect the geometric topology of the model used to generate the mesh. See [4] for a more detailed description of meta-data conventions used in MOAB’s file readers and writers, and the individual file reader/writer header files in src/io for details about the specific readers and writers.

Three specific file readers in MOAB bear further discussion: MOAB’s native HDF5-based file reader/writer; the CUB reader, used to import mesh and meta-data represented in CUBIT; and the CGM reader, which imports geometric models. These are described next.

A mesh database must be able to save and restore the data stored in its data model, at least to the extent to which it understands the semantics of that data. MOAB defines an HDF5-based file format that can store data embedded in MOAB. By convention, these files are given an “.h5m” file extension.
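For example, the current contents of a MOAB instance can be written to this format with a single call; a minimal sketch (the file name is illustrative, and options such as parallel output are omitted):

```cpp
using namespace moab;
// Write everything in the MOAB instance to MOAB's native HDF5-based format;
// the format is selected from the ".h5m" extension of the file name.
ErrorCode rval = moab->write_file("output_mesh.h5m");
```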
When reading or writing large amounts of data, it is recommended to use this file format, as it is the most complete and also the most efficient of the file readers/writers in MOAB.

CUBIT is a toolkit for generating tetrahedral and hexahedral finite element meshes from solid model geometry [16]. This tool saves and restores data in a custom “.cub” file, which stores both mesh and geometry (and data relating the two). The CUB reader in MOAB can import and interpret much of the meta-data information saved in .cub files. Ref. [4] describes the conventions used to store this meta-data in the MOAB data model. The information read from .cub files, and stored in the MOAB data model, includes:
• Geometric model entities and topology
• Model entity names and ids
• Groups, element blocks, nodesets, and sidesets, including model entities stored in them
• Mesh scheme and interval size information assigned to model entities
Note that although information about model entities is recovered, MOAB by default does not depend on a solid modeling engine; this information is stored in the form of entity sets and parent/child relations between them. See Ref. [4] for more information.

The Common Geometry Module (CGM) [17] is a library for representing solid model and other types of solid geometry data. The CUBIT mesh generation toolkit uses CGM for its geometric modeling support, and CGM can restore geometric models in the exact state in which they were represented in CUBIT. MOAB contains a CGM reader, which can be enabled with a configure option. Using this reader, MOAB can read geometric models, and represent their model topology using entity sets linked by parent/child relations. The mesh in these models comes directly from the modeling engine faceting routines; these are the same facets used to visualize solid models in other graphics engines. When used in conjunction with the VisIt visualization tool (see Section 4.1), this provides a solution for visualizing geometric models. The figure below shows a model imported using MOAB’s CGM reader and visualized with VisIt.

### 4.7. AHF Representation

Currently, the upward (vertex-to-entity) adjacencies are created and stored the first time a query requiring them is performed. Any non-vertex entity-to-entity adjacencies are computed using Boolean operations on vertex-entity adjacencies. Because of this approach, such adjacency queries can become expensive with increasing dimension. We have added an alternative approach for obtaining adjacencies using the Array-based Half-Facet (AHF) representation [23]. AHF uses sibling half-facets as a core abstraction; these are generalizations of the opposite half-edge and half-face data structures for 2D and 3D manifold meshes. The AHF data structure consists of two essential maps: 1) the mapping between all sibling half-facets (sibhfs) and 2) the mapping from each vertex to some incident half-facet (v2hf). The entire range of adjacencies (higher-, same- and lower-dimension) is computed using these two maps.

In the current release, AHF-based adjacency calls do not support the following cases:
• polygon/polyhedral meshes,
• mixed entity type meshes,
• meshsets,
• create_if_missing option set to true, and
• modified meshes.
Support for these cases will be added gradually in future releases. In these cases, any adjacency call will revert to MOAB's native adjacency-list-based queries.
If, for some reason, the user does not want to configure MOAB with AHF but would still like to use AHF-based adjacencies for certain queries, they can use the interface functions provided in the HalfFacetRep class, which implements the AHF maps and adjacency queries:
• initialize: This function creates all the necessary AHF maps for the input mesh and hence should be called before any adjacency calls are made.
• deinitialize: This function deletes all the AHF maps and should be called after all AHF-based adjacency calls have been performed.

TODO: Other features to be added
• obtain ring neighborhoods with support for half-rings
• efficient extraction of boundaries

### 4.8. Uniform Mesh Refinement

Many applications require a hierarchy of successively refined meshes for purposes such as convergence studies, multilevel methods like multigrid, or generating large meshes in parallel to keep pace with increasing numbers of processors. Uniform mesh refinement provides a simple and efficient way to generate such hierarchies via successive refinement of the mesh at a previous level. It also provides a natural hierarchy via parent-and-child relationships between entities of meshes at different levels. Generally, the standard nested refinement patterns used are the 1-to-4 subdivision schemes for 2D entity types (triangles, quads) and 1-to-8 for 3D entity types (tets, hexes). However, many applications might require degree 3 or more for p-refinements. MOAB supports generation of a mesh hierarchy, i.e., a sequence of meshes with user-specified degrees of refinement for each level, from an initial unstructured mesh, including higher degrees of refinement (the supported degrees are listed later). Thus MOAB supports multi-degree and multi-level mesh generation via uniform refinement. The following figure shows the initial and most refined mesh for four simple meshes to illustrate the multi-degree capability.

Uniform Refinement of 2D and 3D meshes

Applications using mesh hierarchies require two types of mesh access: intralevel and interlevel. Intralevel access involves working with the mesh at a particular level, whereas interlevel access involves querying across different levels. In order to achieve data locality with reduced cache misses for efficient intralevel mesh access, old vertices in the previous (i.e., immediate parent) mesh are duplicated in the current level. All the entities created for the current level use the new entity handles of old vertices along with the handles of the new vertices. This design makes the mesh at each level of the hierarchy independent of those at previous levels. For each mesh in the hierarchy, a MESHSET is created and all entities of the mesh are added to this MESHSET. Thus the meshes of the hierarchy are accessible via these level-wise MESHSET handles.

For interlevel queries, separate interface functions are defined to allow queries across different levels. These queries mainly allow obtaining the parent at some specified parent level for a child from later levels, and vice versa. The child-parent queries are not restricted to a level difference of one, as the internal array-based layout of the memory allows traversing between different levels via index relations. The hierarchy generation capability is implemented in the NestedRefine class. In Table 4, the user interface functions are briefly described.
The main hierarchy-generating function takes as input a sequence of refinement degrees to be used for generating the mesh at each level from the previous level, along with the total number of levels in the hierarchy. It returns EntityHandles for the meshsets created for the mesh at each level. The number of levels in the hierarchy is fixed up front by the user and cannot change during hierarchy generation. An example of how to generate a hierarchy can be found under examples/UniformRefinement.cpp.

The next three functions, for getting the coordinates, connectivity and adjacencies, are standard operations to access and query a mesh. The coordinates and connectivity functions, which are similar to their counterparts in the MOAB Interface class, allow one to query the mesh at a specific level via its level index. The reason to provide such similar functions is to increase computational efficiency by utilizing direct access to the memory pointers of the EntitySequences created during hierarchy generation under the NestedRefine class, instead of calling the standard interfaces, where a series of calls has to be made before the EntitySequence of the requested entity is located. It is important to note that any calls to the standard interfaces available under the Interface class should still work.

The underlying mesh data structure used for uniform refinement is the AHF data structure implemented in the HalfFacetRep class. This direct dependence on the HalfFacetRep class removes the necessity to configure MOAB with the --enable-ahf flag in order to use the uniform refinement capability. During the creation of a NestedRefine object, all the relevant AHF maps for the mesh in memory are initialized internally. During mesh hierarchy generation, after the creation of the mesh at each level, all the relevant AHF maps are updated to allow queries over it.

For interlevel queries, currently three kinds of queries are supported. The child-to-parent function allows querying for a parent in any of the previous levels (including the initial mesh). The parent-to-child function, on the other hand, returns all of a parent’s children from a requested child level. These two types of queries are supported only for entities, not vertices. A separate vertex-to-entities function is provided, which returns entities from the previous level that are either incident on or contain this vertex.

### Table 4: User interface functions in the NestedRefine class.

| Function group | Function | Description |
|---|---|---|
| Hierarchy generation | generate_mesh_hierarchy | Generate a mesh hierarchy with a given sequence of refinement degrees for each level. |
| Vertex coordinates | get_coordinates | Get vertex coordinates |
| Connectivity | get_connectivity | Get connectivity of non-vertex entities |
| Interlevel queries | child_to_parent | Get the parent entity of a child from a specified parent level |
| Interlevel queries | parent_to_child | Get all children of the parent from a specified child level |
| Interlevel queries | vertex_to_entities | Get all the entities in the previous level incident on or containing the vertex |

### Table 5: The refinement degrees currently supported.

| Mesh Dimension | EntityTypes | Degree | Number of children |
|---|---|---|---|
| 1 | MBEDGE | 2, 3, 5 | 2, 3, 5 |
| 2 | MBTRI, MBQUAD | 2, 3, 5 | 4, 9, 25 |
| 3 | MBTET, MBHEX | 2, 3 | 8, 27 |

In Table 5, the currently supported degrees of refinement for each dimension are listed, along with the number of children created for each such degree for a single entity.
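As a rough sketch of how hierarchy generation might look (the argument order follows the description above and Table 4 but is assumed rather than verified; consult examples/UniformRefinement.cpp for the exact signature and setup):

```cpp
using namespace moab;

// assume `mb` is a Core instance with an initial unstructured 3D mesh loaded
NestedRefine uref(&mb);

// two refinement levels: degree 2 from level 0 to 1, degree 3 from 1 to 2
int level_degrees[2] = {2, 3};
std::vector<EntityHandle> level_sets;   // meshset handle returned for each level

ErrorCode rval = uref.generate_mesh_hierarchy(2, level_degrees, level_sets);

// entities of the finest level can then be queried through its meshset
Range finest_cells;
rval = mb.get_entities_by_dimension(level_sets.back(), 3, finest_cells);
```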
The following figure shows the CPU times (serial runs) for generating hierarchies with various degrees of refinement for each dimension, and can be used to guide the choice of refinement degrees for the hierarchy. For example, if a multilevel hierarchy is required, degree 2 refinement per level gives a gradually increasing mesh size over a larger number of levels. If a very refined mesh is desired quickly, then a small hierarchy with high-degree refinement should be generated.

Mesh sizes Vs. Time

Current support:
• Single dimension (i.e., no curves in surface meshes or curves/surfaces in volume meshes)
• Linear point projection
• Serial

TODO:
• Mixed-dimensional meshes
• Mixed entity types
• High-order point projection
• Parallel

## 5.Parallel Mesh Representation and Query

A parallel mesh representation must strike a careful balance between providing an interface to the mesh similar to that of a serial mesh and allowing the discovery of parallel aspects of the mesh and the efficient performance of parallel mesh-based operations. MOAB supports a spatial domain-decomposed view of a parallel mesh, where each subdomain is assigned to a processor, lower-dimensional entities on interfaces between subdomains are shared between processors, and ghost entities can be exchanged with neighboring processors. Locally, each subdomain, along with any locally-represented ghost entities, is accessed through a local MOAB instance. Parallel aspects of the mesh, e.g. whether entities are shared, on an interface, or ghost entities, are embedded in the same data model (entities, sets, tags, interface) used in the rest of MOAB. MOAB provides a suite of parallel functions for initializing and communicating with a parallel mesh, along with functions to query the parallel aspects of the mesh.

### 5.1. Nomenclature & Representation

Before discussing how to access parallel aspects of a mesh, several terms need to be defined:

Shared entity: An entity shared by one or several other processors.

Multi-shared entity: An entity shared by more than two processors.

Owning Processor: Each shared entity is owned by exactly one processor. This processor has the right to set tag values on the entity and have those values propagated to any sharing processors.

Part: The collection of entities assigned to a given processor. When reading mesh in parallel, the entities in a Part, along with any lower-dimensional entities adjacent to those, will be read onto the assigned processor.

Partition: A collection of Parts which take part in parallel collective communication, usually associated with an MPI communicator.

Interface: A collection of mesh entities bounding entities in multiple parts. Interface entities are owned by a single processor, but are represented on all parts/processors they bound.

Ghost entity: A shared, non-interface, non-owned entity.

Parallel status: A characteristic of entities and sets represented in parallel. The parallel status, or “pstatus”, is represented by a bit field stored in an unsigned character, with bit values as described in Table 6.

### Table 6: Bits representing various parallel characteristics of a mesh. Also listed are enumerated values that can be used in bitmask expressions; these enumerated variables are declared in MBParallelConventions.h.
| Bit | Name | Represents |
|---|---|---|
| 0x1 | PSTATUS_NOT_OWNED | Not owned by the local processor |
| 0x2 | PSTATUS_SHARED | Shared by exactly two processors |
| 0x4 | PSTATUS_MULTISHARED | Shared by three or more processors |
| 0x8 | PSTATUS_INTERFACE | Part of lower-dimensional interface shared by multiple processors |
| 0x10 | PSTATUS_GHOST | Non-owned, non-interface entities represented locally |

Parallel functionality is described in the following sections. First, methods to load a mesh into a parallel representation are described; next, functions for accessing parallel aspects of a mesh are described; finally, functions for communicating mesh and tag data are described.

### 5.2. Parallel Mesh Initialization

Parallel mesh is initialized in MOAB in several steps:
1. Establish a local mesh on each processor, either by reading the mesh into that representation from disk, or by creating mesh locally through the normal MOAB interface.
2. Find vertices, then other entities, shared between processors, based on a globally-consistent vertex numbering stored in the GLOBAL_ID tag.
3. Exchange ghost or halo elements within a certain depth of processor interfaces with neighboring processors.

These steps can be executed by a single call to MOAB’s load_file function, using the procedure described in Section 5.2.1. Or, they can be executed in smaller increments by calling specific functions in MOAB’s ParallelComm class, as described in Section 5.2.2. Closely related to the latter method is the handling of communicators, described in more detail in Section 5.2.3.

### 5.2.1. Parallel Mesh Initialization by Loading a File

In the file reading approach, a mesh must contain some definition of the partition (the assignment of mesh, usually regions, to processors). Partitions can be derived from other set structures already on the mesh, or can be computed explicitly for that purpose by tools like mbpart (see Section 4.2). For example, geometric volumes used to generate the mesh, and region-based material type assignments, are both acceptable partitions (see Ref. [4] for information about this and other meta-data often accompanying mesh). In addition to defining the groupings of regions into parts, the assignment of specific parts to processors can be done implicitly or using additional data stored with the partition.

MOAB implements several specific methods for loading mesh into a parallel representation:
• READ_PART: each processor reads only the mesh used by its part(s).
• READ_DELETE: each processor reads the entire mesh, then deletes the mesh not used by its part(s).
• BCAST_DELETE: the root processor reads and broadcasts the mesh; each processor then deletes the mesh not used by its part(s).
The READ_DELETE and BCAST_DELETE methods are supported for all file types MOAB is able to read, while READ_PART is only implemented for MOAB’s native HDF5-based file format and for netcdf and pnetcdf files from climate applications.

Various other options control the selection of part sets or other details of the parallel read process. For example, the application can specify the tags, and optionally tag values, which identify parts, and whether those parts should be distributed according to tag value or using round-robin assignment. The options used to specify the loading method, the data used to identify parts, and other parameters controlling the parallel read process are shown in Table 7.

### Table 7: Options passed to MOAB’s load_file function identifying the partition and other parameters controlling the parallel read of mesh data. Options and values should appear in the option string as “option=val”, with a delimiter (usually “;”) between options.
| Option | Value | Description |
|---|---|---|
| PARTITION | <tag_name> | Sets with the specified tag name should be used as part sets |
| PARTITION_VAL | <val1, val2-val3, ...> | Integer values to be combined with tag name, with ranges input using val1, val2-val3. Not meaningful unless PARTITION option is also given. |
| PARTITION_DISTRIBUTE | (none) | If present, or if values are not input using PARTITION_VAL, sets with the tag indicated in the PARTITION option are partitioned across processors in round-robin fashion. |
| PARALLEL_RESOLVE_SHARED_ENTS | <pd.sd> | Resolve entities shared between processors, where the partition is made up of pd-dimensional entities, and entities of dimension sd and lower should be resolved. |
| PARALLEL_GHOSTS | <gd.bd.nl[.ad]> | Exchange ghost elements at shared inter-processor interfaces. Ghost elements of dimension gd will be exchanged. Ghost elements are chosen going through bd-dimensional interface entities. The number of layers of ghost elements is specified in nl. If ad is present, lower-dimensional entities bounding exchanged ghost entities will also be exchanged; allowed values for ad are 1 (exchange bounding edges), 2 (faces), or 3 (edges and faces). |
| PARALLEL_COMM | <id> | Use the ParallelComm with index <id>. The index for a ParallelComm object can be checked with ParallelComm::get_id(), and a ParallelComm with a given index can be retrieved using ParallelComm::get_pcomm(id). |
| CPUTIME | (none) | Print cpu time required for each step of parallel read & initialization. |
| MPI_IO_RANK | <r> | If the read method requires reading mesh onto a single processor, the processor with rank r is used to do that read. |

Several example option strings controlling parallel reading and initialization are:

“PARALLEL=READ_DELETE; PARTITION=MATERIAL_SET; PARTITION_VAL=100, 200, 600-700”: The whole mesh is read by every processor; this processor keeps mesh in sets assigned the tag whose name is “MATERIAL_SET” and whose value is any one of 100, 200, and 600-700 inclusive.

“PARALLEL=READ_PART; PARTITION=PARALLEL_PARTITION, PARTITION_VAL=2”: Each processor reads only its mesh; this processor, whose rank is 2, is responsible for elements in a set with the PARALLEL_PARTITION tag whose value is 2. This would be typical input for a mesh which had already been partitioned with e.g. Zoltan or Parmetis.

“PARALLEL=BCAST_DELETE; PARTITION=GEOM_DIMENSION, PARTITION_VAL=3, PARTITION_DISTRIBUTE”: The root processor reads the file and broadcasts the whole mesh to all processors. If a list is constructed with entity sets whose GEOM_DIMENSION tag is 3, i.e. sets corresponding to geometric volumes in the original geometric model, this processor is responsible for all sets in that list with index R+iP, i >= 0, where R is this processor’s rank and P is the number of processors (i.e. a round-robin distribution).

### 5.2.2. Parallel Mesh Initialization Using Functions

After creating the local mesh on each processor, an application can call the following functions in ParallelComm to establish information on shared mesh entities. See the [example](http://ftp.mcs.anl.gov/pub/fathom/moab-docs/HelloParMOAB_8cpp-example.html) in the MOAB source tree for a complete example of how this is done from an application.

• ParallelComm::resolve_shared_entities (collective): Resolves shared entities between processors, based by default on GLOBAL_ID tag values of vertices. For meshes with more than 2 billion vertices, an opaque tag of size 8 should be used, in which a 64-bit integer is cast to that opaque tag. Various forms are available, based on entities to be evaluated and the maximum dimension for which entity sharing should be found.
• ParallelComm::exchange_ghost_cells (collective): Exchange ghost entities with processors sharing an interface with this processor, based on a specified ghost dimension (dimension of the ghost entities exchanged), bridge dimension, number of layers, and type of adjacencies to ghost entities. An entity is sent as a ghost if it is within that number of layers, across entities of the bridge dimension, of entities owned by the receiving processor, or if it is a lower-dimensional entity adjacent to a ghost entity and that option is requested.

### 5.2.3. Communicator Handling

The ParallelComm constructor takes arguments for an MPI communicator and a MOAB instance. The ParallelComm instance stores the MPI communicator, and registers itself with the MOAB instance. Applications can specify the ParallelComm index to be used for a given file operation, thereby specifying the MPI communicator for that parallel operation. For example:

```cpp
using namespace moab;
// pass a communicator to the constructor, getting back the index
MPI_Comm my_mpicomm;
int pcomm_index;
ParallelComm my_pcomm(moab, my_mpicomm, &pcomm_index);

// write the pcomm index into a string option
char options[32];
sprintf(options, "PARALLEL_COMM=%d", pcomm_index);

// specify that option in a parallel read operation
// (file_name is assumed to be declared elsewhere)
ErrorCode rval = moab->load_file(file_name, 0, options);
```

In the above code fragment, the ParallelComm instance with index pcomm_index will be used in the parallel file read, so that the operation executes over the specified MPI communicator. If no ParallelComm instance is specified for a parallel file operation, a default instance will be defined, using MPI_COMM_WORLD.

Applications needing to retrieve a ParallelComm instance created previously and stored with the MOAB instance, e.g. by a different code component, can do so using a static function on ParallelComm:

```cpp
ParallelComm *my_pcomm = ParallelComm::get_pcomm(moab, pcomm_index);
```

ParallelComm also provides the ParallelComm::get_all_pcomm function, for retrieving all ParallelComm instances stored with a MOAB instance. For syntax and usage of this function, see the MOAB online documentation for ParallelComm.hpp [8].

### 5.3. Parallel Mesh Query Functions

Various functions are commonly used in parallel mesh-based applications. Functions marked as being collective must be called collectively for all processors that are members of the communicator associated with the ParallelComm instance used for the call.

ParallelComm::get_pstatus: Get the parallel status for the entity.

ParallelComm::get_pstatus_entities: Get all entities whose pstatus satisfies (pstatus & val).

ParallelComm::get_owner: Get the rank of the owning processor for the specified entity.

ParallelComm::get_owner_handle: Get the rank of the owning processor for the specified entity, and the entity’s handle on the owning processor.

ParallelComm::get_sharing_data: Get the sharing processor(s) and handle(s) for an entity or entities. Various overloaded versions are available, some with an optional “operation” argument, where Interface::INTERSECT or Interface::UNION can be specified. This is similar to the operation arguments to Interface::get_adjacencies.

ParallelComm::get_shared_entities: Get entities shared with the given processor, or with all processors. This function has optional arguments for specifying dimension, whether interface entities are requested, and whether to return just owned entities.

ParallelComm::get_interface_procs: Return all processors with whom this processor shares an interface.

ParallelComm::get_comm_procs: Return all processors with whom this processor communicates.

### 5.4. Parallel Mesh Communication
Once a parallel mesh has been initialized, applications can call the ParallelComm::exchange_tags function to exchange tag values between processors. This function causes the owning processor to send the specified tag values for all shared, owned entities to other processors sharing those entities. Asynchronous communication is used to hide latency, and only point-to-point communication is used in these calls.

## 6.Building MOAB-Based Applications

There are two primary mechanisms supported by MOAB for building applications, one based on MOAB-defined make variables, and the other based on the use of libtool and autoconf. Both assume the use of a “make”-based build system.

The easiest way to incorporate MOAB into an application’s build process is to include the “moab.make” file into the application’s Makefile, adding the make variables MOAB_INCLUDES and MOAB_LIBS_LINK to the application’s compile and link commands, respectively. MOAB_INCLUDES contains compiler options specifying the location of MOAB include files, and any preprocessor definitions required by MOAB. MOAB_LIBS_LINK contains both the options telling where libraries can be found, and the link options which incorporate those libraries into the application. Any libraries depended on by the particular configuration of MOAB are included in that definition, e.g. the HDF5 library. Using this method to incorporate MOAB is the most straightforward; for example, the following Makefile is used to build one of the example problems packaged with the MOAB source:

```make
include ${MOAB_LIB_DIR}/moab.make

GetEntities : GetEntities.o
	${CXX} $< ${MOAB_LIBS_LINK} -o $@

.cpp.o :
	${CXX} ${MOAB_INCLUDES} -c $<
```

Here, the MOAB_LIB_DIR environment variable or make argument definition specifies where the MOAB library is installed; this is also the location of the moab.make file. Once that file has been included, MOAB_INCLUDES and MOAB_LIBS_LINK can be used, as shown.

Other make variables are defined in the moab.make file which simplify building applications:
• MOAB_LIBDIR, MOAB_INCLUDEDIR: the directories into which MOAB libraries and include files will be installed, respectively. Note that some include files are put in a subdirectory named “moab” below that directory, to reflect namespace naming conventions used in MOAB.
• MOAB_CXXFLAGS, MOAB_CFLAGS, MOAB_LDFLAGS: Options passed to the C++ and C compilers and the linker, respectively.
• MOAB_CXX, MOAB_CC, MOAB_FC: C++, C, and Fortran compilers specified to MOAB at configure time, respectively.

The second method for incorporating MOAB into an application’s build system is to use autoconf and libtool. MOAB is configured using these tools, and generates the “.la” files that hold information on library dependencies that can be used in application build systems also based on autoconf and libtool. Further information on this subject is beyond the scope of this User’s Guide; see the “.la” files as installed by MOAB, and contact the MOAB developer’s mailing list [6] for more details.

## 7.iMesh (ITAPS Mesh Interface) Implementation in MOAB

iMesh is a common API to mesh data developed as part of the Interoperable Tools for Advanced Petascale Simulations (ITAPS) project [19]. Applications using the iMesh interface can operate on any implementation of that interface, including MOAB. MOAB-based applications can take advantage of other services implemented on top of iMesh, including the MESQUITE mesh improvement toolkit [20].
MOAB's native interface is accessed through the Interface abstract C++ base class. Wrappers are not provided in other languages; rather, applications wanting to access MOAB from those languages should do so through iMesh. In most cases, the data models and functionality available through MOAB and iMesh are identical. However, there are a few differences, subtle and not-so-subtle, between the two:

SPARSE tags used by default: MOAB's iMesh implementation creates SPARSE tags by default, because of semantic requirements of other tag-related functions in iMesh. To create DENSE tags through iMesh, use the iMesh_createTagWithOptions extension function (see below).

Higher-order elements: ITAPS currently handles higher-order elements (e.g. a 10-node tetrahedron) using a separate interface [21]5. As described in [sec-entities], MOAB supports higher-order entities by allowing various numbers of vertices to define topological entities like quadrilaterals or tetrahedra. Applications can specify flags to the connectivity and adjacency functions indicating whether corner or all vertices are requested.

Self-adjacencies: In MOAB's native interface, entities are always self-adjacent6; that is, adjacencies of equal dimension requested from an entity will always include that entity, while those requested through iMesh will not include that entity.

Option strings: The iMesh specification requires that options in the options string passed to various functions (e.g. iMesh_load) be prepended with the implementation name required to parse them, and delimited with spaces. Thus, a MOAB-targeted option would appear as "moab:PARALLEL=READ_PART moab:PARTITION=MATERIAL_SET" (see the sketch after the list below).

To provide complete MOAB support from other languages through iMesh, a collection of iMesh extension functions is also available. A general description of these extensions appears below; for a complete description, see the online documentation for iMesh-extensions.h [8].

• Recursive get_entities functions: There are many cases where sets include other sets (see [4] for more information). MOAB provides iMesh_getEntitiesRec, and other recursive-supporting functions, to get all non-set entities of a given type or topology accessible from the input set(s). Similar functions are available for the number of entities of a given type/topology.
• Get entities by tag, and optionally tag value: It is common to search for entities with a given tag, and possibly tag value(s); functions like iMesh_getEntitiesByTag are provided for this purpose.
• Options to createTag: To provide more control over the tag type, the iMesh_createTagWithOptions extension is provided; the storage type is controlled with an option in its options string.
• MBCNType: Canonical numbering evaluations are commonly needed by applications, e.g. to apply boundary conditions locally. The MBCN package provides these evaluations in terms of entity types defined in MOAB [9]; the getMBCNType function is required to translate between iMesh_Topology and the MBCN type.
• Iterator step: Step an iterator over a specified number of entities; this allows advancement of an iterator without needing to allocate memory to hold the entity handles stepped over.
• Direct access to tag storage: The Interface::tag_iterate function allows an application to get a pointer to the memory used to store a given tag. For dense tags on contiguous ranges of entities, this provides more efficient access to tags. The iMesh function iMesh_tagIterate provides access to this functionality. See examples/TagIterateC.c and examples/TagIterateF.F for examples of how to use this from C and Fortran, respectively.
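To illustrate the option-string convention described above, the fragment below sketches a parallel load through the iMesh C API with MOAB-targeted options. It is a sketch only: the file name is a placeholder, and the argument list shown (including the trailing string-length arguments used by the ITAPS C bindings) should be checked against the iMesh.h header shipped with your installation.

```cpp
#include <cstring>
#include "iMesh.h"

void load_in_parallel(iMesh_Instance mesh, iBase_EntitySetHandle root_set)
{
  // Each MOAB-specific option is prefixed with "moab:" and the options
  // are separated by spaces, as required by the iMesh specification.
  const char *fname   = "mesh.h5m";   // placeholder file name
  const char *options = "moab:PARALLEL=READ_PART moab:PARTITION=MATERIAL_SET";
  int err;
  iMesh_load(mesh, root_set, fname, options, &err,
             (int)std::strlen(fname), (int)std::strlen(options));
  // check err against iBase_SUCCESS before using the mesh
}
```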
As required by the iMesh specification, MOAB generates the "iMesh-Defs.inc" file and installs it with the iMesh and MOAB libraries. This file defines make variables which can be used to build iMesh-based applications. The method used here is quite similar to that used for MOAB itself (see Section 6). In particular, the IMESH_INCLUDES and IMESH_LIBS make variables can be used with application compile and link commands, respectively, with other make variables similar to those provided in moab.make also available.

Note that using the iMesh interface from Fortran-based applications requires a compiler that supports Cray pointers, along with the pass-by-value (VAL) extension. Almost all compilers support those extensions; however, if using the gcc series of compilers, you must use gfortran 4.3 or later.

5 There are currently no implementations of this interface.

6 iMesh and MOAB both define adjacencies using the topological concept of closure. Since the closure of an entity includes the entity itself, the d-dimensional entities on the closure of a given entity should include the entity itself.

## 8. Python Interface (PyMOAB)

A Python interface to MOAB's essential core functionality and a few other tools has been added as of Version 5.0. The pymoab module can be used to interactively interrogate existing mesh files or prototype MOAB-based algorithms. It can also be connected to other Python applications or modules for generation, manipulation, and visualization of a MOAB mesh and mesh data. Examples of this can be found in the laplaciansmoother.py and yt2moab.py files. Interaction with the PyMOAB interface is intended to be somewhat analogous to interaction with the MOAB C++ API. A simple example of file loading and mesh interrogation can be found in interrogate_mesh.py.

PyMOAB uses NumPy internally to represent data and vertex coordinates, though other properly formed data structures can be used to create vertices, set data, etc. Data and vertex coordinates will always be returned in the form of NumPy arrays, however. EntityHandles are represented by Python long integers. Arrays of these values are commonly returned from calls as MOAB Ranges.

MOAB ErrorCodes - Errors are automatically checked internally by the PyMOAB instance, raising various exceptions depending on the error that occurs. These exceptions can be handled as raised from the functions, or acceptable error values can be specified via the exceptions parameter common to all PyMOAB functions, in which case no exception will be raised.

Documentation for PyMOAB functions is provided as part of this User's Guide, but can also be accessed in the Python interpreter by calling help(<function_or_method>). Examples of PyMOAB usage can be found in the /examples/python/ directory.

## 9. Structured Mesh Representation

A structured mesh is defined as a D-dimensional mesh whose interior vertices have 2D connected edges (that is, twice the dimension). Structured mesh can be stored without connectivity, if certain information is kept about the parametric space of each structured block of mesh. MOAB can represent structured mesh with implicit connectivity, saving approximately 57% of the storage cost compared to an unstructured representation7. Since connectivity must be computed on the fly, these queries execute a bit more slowly than those for unstructured mesh. More information on the theory behind MOAB's structured mesh representation can be found in "MOAB-SD: Integrated structured and unstructured mesh representation" [18].
Currently, MOAB's structured mesh representation can only be used by creating structured mesh at runtime; that is, structured mesh is saved and restored in an unstructured format in MOAB's HDF5-based native save format. For more details on how to use MOAB's structured mesh representation, see the scdseq_test.cpp source file in the test/ directory.

7 This assumes vertex coordinates are represented explicitly, and that there are approximately the same number of vertices and hexahedra in a structured hex mesh.

## 10. Spectral Element Meshes

The Spectral Element Method (SEM) is a high-order method, using a polynomial Legendre interpolation basis with Gauss-Lobatto quadrature points, in contrast to the Lagrange basis used in (linear) finite elements [20]. SEM obtains exponential convergence with decreasing mesh characteristic sizes, and codes implementing this method typically have high floating-point intensity, making the method highly efficient on modern CPUs. Most Nth-order SEM codes require tensor product cuboid (quad/hex) meshes, with each d-dimensional element containing (N+1)^d degrees of freedom (DOFs). There are various methods for representing SEM meshes and solution fields on them; this document discusses these methods and the tradeoffs between them. The mesh parts of this discussion are given in terms of the iMesh mesh interface and its implementation by the MOAB mesh library [21].

The figure above shows a two-dimensional 3rd-order SEM mesh consisting of four quadrilaterals. For this mesh, each quadrilateral has (N+1)^2 = 16 DOFs, with corner and edge degrees of freedom shared between neighboring quadrilaterals.

### 10.1. Representations

There are various representations of this mesh in a mesh database like MOAB, depending on how DOFs are related to mesh entities and tags on those entities. We mention several possible representations:

1. Corner vertices, element-based DOFs: Each quadrilateral is defined by four vertices, ordered in the CCW order typical of FE meshes. DOFs are stored as tags on quadrilaterals, with size (N+1)^2 values, ordered lexicographically (i.e. as a 2D array tag(i,j) with i varying faster than j). In the figure above, the connectivity for face 1 would be (1, 4, 16, 13), and DOFs would be ordered (1..16). Note that in this representation, tag values for DOFs shared by neighboring elements must be set multiple times, since there are as many copies of these DOFs as elements sharing them.

2. High-order FE-like elements: Each DOF is represented by a mesh vertex. Quadrilaterals each have (N+1)^2 vertices, ordered as they would be for high-order finite elements (corner vertices first, then mid-edge and mid-face vertices; see [22]). Mid-face, -edge, and -region vertices for a given edge/face/region would be ordered lexicographically, according to the positive direction in a corresponding reference element. In the figure above, the connectivity array for face 1 would be (1, 4, 16, 13, 2, 3, 8, 12, 14, 15, 5, 9, 6, 7, 10, 11). DOF values are stored as tags on vertices. Since DOFs are uniquely associated with vertices and vertices are shared by neighboring elements, tag values only need to be set once. Full vertex-quadrilateral adjacencies are available, for all vertices.

3. Linear FE-like elements, one vertex per DOF, array with DOF vertices: Each quadrilateral is defined by four (corner) vertices, with additional vertices representing mid-edge and mid-face DOFs.
An additional "DOF array" tag is assigned to each quadrilateral, storing the array of vertices representing the (N+1)^2 DOFs for the quadrilateral, ordered lexicographically. For the figure above, the connectivity array for face 1 would be (1, 4, 16, 13), and the DOF array would be (1..16), assuming that vertex handles are integers as shown in the figure. DOF values are stored as tags on vertices, and lexicographically-ordered arrays of DOFs can be retrieved using the DOF array tag as input to the tag_get_data function in MOAB. Adjacency functions would only be meaningful for corner vertices, but tag values would only need to be set once per DOF.

4. High-order FE-like elements, array with DOF vertices: This is a combination of options 2 and 3. The advantage would be full vertex-quad adjacency support and direct availability of lexicographically-ordered vertex arrays, at the expense of more memory.

5. Convert to linear mesh: Since a spectral element is a cuboid with higher-order vertices, it can always be converted to N^2 linear cuboids using the high-order vertices as corners of the finer quads/hexes. This is how readers in ParaView and VisIt typically import spectral meshes (CAM-SE also exports connectivity in this form).

As a convenience for applications, functions could also be provided for important tasks, like assembling the vertex handles for an entity in lexicographic order (useful for option 2 above), and getting an array of tag values in lexicographic order (for option 3 above).

There are various competing tradeoffs among the representation types. These include:

• Adjacencies: being able to retrieve the element(s) using a given (corner or higher-order) vertex.
• Connectivity list: being able to retrieve the connectivity of a given element, consisting of all (corner + higher-order) vertices in the element, usually in lexicographical order. This is closely linked with being able to access the connectivity list as a const*, i.e. using the list straight from memory without needing to copy it.
• Memory vs. time: There is a memory vs. execution time tradeoff between duplicating interface vertex solution/tag variables in neighboring elements (more memory, but more time-efficient and allows direct access to tag storage by applications) versus using vertex-based tags (less memory, but requires assembly of variables into lexicographically-ordered arrays, and prevents direct access from applications). The lower-memory option (storing variables on vertices and assembling into lexicographically-ordered arrays for application use) usually ends up costing more in memory anyway, since applications must allocate their own storage for these arrays. On the other hand, certain applications will always choose to do that, instead of sharing storage with MOAB for these variables. In the case where applications do share memory with MOAB, other tools would need to interpret the lexicographically-ordered field arrays specially, instead of simply treating the vertex tags as a point-based field.

### 10.3. MOAB Representation

In choosing the right MOAB representation for spectral meshes, we are trying to balance a) minimal memory usage, b) access to properly-ordered and -aligned tag storage, and c) maximal compatibility with tools likely to use MOAB. The solution we propose is to use a representation most like option 2) above, with a few optional behaviors based on application requirements.
In brief, we propose to represent elements using the linear, FE-ordered connectivity list (containing only corner vertices from the spectral element), with field variables written to either vertices, lexicographically-ordered arrays on elements, or both, and with a lexicographically-ordered array (stored on the tag SPECTRAL_VERTICES) of all (corner + higher-order) vertices stored on elements. In the either/or case, the choice will be evident from the tag size and the entities on which the tag is set. In the both case, the tag name will have a "-LEX" suffix for the element tags, and the size of the element tag will be (N+1)^2 times that of the vertex-based tag. Finally, the file set containing the spectral elements (or the root set, if no file set was input to the read) will contain a "SPECTRAL_ORDER" tag whose value is N. These conventions are described in the "Metadata Information" document distributed with the MOAB source code.

## 11. Performance and Using MOAB Efficiently from Applications

MOAB is designed to operate efficiently on groups of entities and for large meshes. Applications will be most efficient when they operate on entities in groups, especially groups which are close in their order of creation. The MOAB API is structured to encourage operations on groups of entities. Conversely, MOAB will not perform as well as other libraries if there is frequent deletion and creation of entities. For those types of applications, a mesh library using a C++ object-based representation is more appropriate.

In this section, the performance of MOAB when executing a variety of tasks is described and compared to that of other representations. Of course, these metrics are based on the particular models and environments where they are run, and may or may not be representative of other application types.

One useful measure of MOAB performance is in the representation and query of a large mesh. MOAB includes a performance test, located in the test/perf directory, in which a single rectangular region of hexahedral elements is created and then queried; the following steps are performed:

• Create the vertices and hexes in the mesh
• For each vertex, get the set of connected hexahedra
• For each hex, get the connected vertices, their coordinates, average them, and assign them as a tag on the hexes

This test can be run on your system to determine the runtime and memory performance for these queries in MOAB.

## 12. Error Handling

Errors are handled through the routine MBError(). This routine calls MBTraceBackErrorHandler(), the default error handler, which tries to print a traceback. The arguments to MBTraceBackErrorHandler() are the line number where the error occurred, the function where the error was detected, the file in which the error was detected, the corresponding directory, the error message, and the error type.

A small set of macros is used to make the error handling lightweight. These macros are used throughout the MOAB libraries and can be employed by the application programmer as well.
When an error is first detected, one should set it by calling

```
MB_SET_ERR(err_code, err_msg);
```

Note that err_msg can be a string literal, or a C++-style output stream with << operators, so that the error message string is formatted, like

```
MB_SET_ERR(MB_FAILURE, "Failed " << n << " times");
```

The user should check the return codes for all MOAB routines (and possibly user-defined routines as well) with

```
ErrorCode rval = MOABRoutine(...);
MB_CHK_ERR(rval);
```

To pass back a new error message (if rval is not MB_SUCCESS), use

```
ErrorCode rval = MOABRoutine(...);
MB_CHK_SET_ERR(rval, "User specified error message string (or stream)");
```

If this procedure is followed throughout all of the user's libraries and codes, any error will by default generate a clean traceback of the location of the error.

In addition to the basic macros mentioned above, there are some variations; for more information, refer to src/moab/ErrorHandler.hpp.

The error handling mechanism is enabled by default if a MOAB Core instance is created. Otherwise, the user needs to call MBErrorHandler_Init() and MBErrorHandler_Finalize() at the application level in the main function. For example code on error handling, please refer to examples/TestErrorHandling.cpp, examples/TestErrorHandlingPar.cpp and examples/ErrorHandlingSimulation.cpp.

## 13. Conclusions and Future Plans

MOAB, a Mesh-Oriented datABase, provides a simple but powerful data abstraction for structured and unstructured mesh, and makes that abstraction available through a function API. MOAB provides the mesh representation for the VERDE mesh verification tool, which demonstrates some of the powerful mesh metadata representation capabilities in MOAB. MOAB includes modules that import mesh in the ExodusII, CUBIT .cub and Vtk file formats, as well as the capability to write mesh to ExodusII, all without the licensing restrictions normally found in ExodusII-based applications. MOAB also has the capability to represent and query structured mesh in a way that optimizes storage space using the parametric space of a structured mesh; see Ref. [17] for details.

Initial results have demonstrated that the data abstraction provided by MOAB is powerful enough to represent many different kinds of mesh data found in real applications, including geometric topology groupings and relations, boundary condition groupings, and inter-processor interface representation. Our future plans are to further explore how these abstractions can be used in the design-through-analysis process.

## 14. References

[2] T.J. Tautges and A. Caceres, "Scalable parallel solution coupling for multiphysics reactor simulation," Journal of Physics: Conference Series, vol. 180, 2009.
[3] T.J. Tautges, MOAB Meta-Data Information, 2010.
[4] T.J. Tautges, "MOAB - ITAPS - SIGMA," http://sigma.mcs.anl.gov/
[5] "MOAB Developers Email List," moab-dev@mcs.anl.gov.
[6] "MOAB Users Email List," moab@mcs.anl.gov.
[7] "MOAB online documentation," http://ftp.mcs.anl.gov/pub/fathom/moab-docs/index.html
[8] T.J. Tautges, "Canonical numbering systems for finite-element codes," Communications in Numerical Methods in Engineering, vol. Online, Mar. 2009.
[9] L.A. Schoof and V.R. Yarberry, EXODUS II: A Finite Element Data Model, Albuquerque, NM: Sandia National Laboratories, 1994.
[10] M. PATRAN, "PATRAN User's Manual," 2005.
[11] VisIt User's Guide.
[12] K. Devine, E. Boman, R. Heaphy, B. Hendrickson, and C. Vaughan, "Zoltan Data Management Services for Parallel Dynamic Applications," Computing in Science and Engineering, vol. 4, 2002, pp. 90-97.
[13] METIS - Serial Graph Partitioning and Fill-reducing Matrix Ordering, http://glaros.dtc.umn.edu/gkhome/views/metis
[14] T.J. Tautges, P.P.H. Wilson, J. Kraftcheck, B.F. Smith, and D.L. Henderson, "Acceleration Techniques for Direct Use of CAD-Based Geometries in Monte Carlo Radiation Transport," International Conference on Mathematics, Computational Methods & Reactor Physics (M&C 2009), Saratoga Springs, NY: American Nuclear Society, 2009.
[15] H. Kim and T. Tautges, "EBMesh: An Embedded Boundary Meshing Tool," in preparation.
[16] G.D. Sjaardema, T.J. Tautges, T.J. Wilson, S.J. Owen, T.D. Blacker, W.J. Bohnhoff, T.L. Edwards, J.R. Hipp, R.R. Lober, and S.A. Mitchell, CUBIT Mesh Generation Environment, Volume 1: Users Manual, Sandia National Laboratories, May 1994.
[17] T.J. Tautges, "CGM: A geometry interface for mesh generation, analysis and other applications," Engineering with Computers, vol. 17, 2001, pp. 299-314.
[18] T.J. Tautges, "MOAB-SD: Integrated structured and unstructured mesh representation," Engineering with Computers, vol. 20, no. 3, 2004, pp. 286-293.
[19] "Interoperable Technologies for Advanced Petascale Simulations (ITAPS)."
[20] P. Knupp, "Mesh quality improvement for SciDAC applications," Journal of Physics: Conference Series, vol. 46, 2006, pp. 458-462.
[21] M.O. Deville, P.F. Fischer, and E.H. Mund, High-Order Methods for Incompressible Fluid Flow, Cambridge, UK; New York: Cambridge University Press, 2002.
[22] T.J. Tautges, "MOAB Wiki." [Online]. Available: http://sigma.mcs.anl.gov/moab-library [Accessed: 20-Oct-2016].
[23] T.J. Tautges, "Canonical numbering systems for finite-element codes," International Journal for Numerical Methods in Biomedical Engineering, vol. 26, no. 12, 2010, pp. 1559-1572.
[24] V. Dyedov, N. Ray, D. Einstein, X. Jiao, and T. Tautges, "AHF: Array-based half-facet data structure for mixed-dimensional and non-manifold meshes," in Proceedings of the 22nd International Meshing Roundtable, Orlando, Florida, October 2013.
# If f(x)=√(x^2−2x+1), what is f(9)?

**Bunuel (Math Expert)** - 15 Apr 2018

If $f(x)=\sqrt{x^2−2x+1}$, what is $f(9)$?

A. -8
B. 5
C. 8
D. 9
E. 82

Question stats: 96% (00:31) correct, 4% (00:20) wrong, based on 94 sessions.

**examPAL Representative** - 15 Apr 2018

We'll show two approaches.

The Precise approach involves straight-up calculation: $9^2 - 2\cdot 9 + 1 = 81 - 18 + 1 = 64$, the square root of which is 8. Note that the square root function always returns a non-negative number, which is why (A) is incorrect.

We could also have estimated the answer, an Alternative approach. We know that the square root of $x^2$ is $x$ (for non-negative $x$). Since we're asked for the square root of something smaller than $x^2$ (namely $x^2 - 2x + 1$), we know the answer must be smaller than $x$. Only C and B make sense, and we can estimate (without calculating) that (B) is too small: $5^2 = 25$, while $81 - 18 + 1$ is about 60, much larger than that. Note that in this case estimation is not necessary, as the calculation is simple, but it can be very useful for difficult calculations.

**Intern** - 15 Apr 2018

$f(x) = \sqrt{(x-1)^2} = x - 1$ for $x \ge 1$, so $f(9) = 9 - 1 = 8$.

**Intern** - 18 Apr 2018

The answer is C, because a square root can never be negative.
# A lovely one

Geometry Level 4

Inside a big circle, exactly 36 small circles, each of radius 5, can be drawn in such a way that each small circle touches the big circle and also touches both its adjacent small circles. If the radius of the big circle is $a$, find $[a]$.

Clarification -

• The image is just shown to help understand the given situation; here all the small circles have the same radius, and each one touches its adjacent circles and the circle in which it is inscribed.
• [.] - Greatest integer function

Note - If you can give a general formula for the radius of the big circle such that there are $n \ge 3$ small circles inscribed in it, you will be appreciated (one such derivation is sketched below).
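Here is one standard derivation of the general formula requested in the note, assuming (as in the configuration described) $n \ge 3$ equal circles of radius $r$ with equally spaced centres, each touching its neighbours and the big circle of radius $R$; the problem's case is $n = 36$, $r = 5$.

```latex
% Centres of adjacent small circles are 2r apart and lie at distance R - r
% from the big circle's centre, subtending an angle 2\pi/n there, so:
\[
\sin\frac{\pi}{n} \;=\; \frac{r}{R-r}
\qquad\Longrightarrow\qquad
R \;=\; r\left(1 + \frac{1}{\sin(\pi/n)}\right).
\]
% For n = 36, r = 5:
\[
R \;=\; 5\left(1 + \frac{1}{\sin 5^\circ}\right) \;\approx\; 5\,(1 + 11.474) \;\approx\; 62.4,
\qquad\text{so } [a] = 62 .
\]
```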
# Definition:Subdivision (Real Analysis) ## Definition Let $\closedint a b$ be a closed interval of the set $\R$ of real numbers. ### Finite Let $x_0, x_1, x_2, \ldots, x_{n - 1}, x_n$ be points of $\R$ such that: $a = x_0 < x_1 < x_2 < \cdots < x_{n - 1} < x_n = b$ Then $\set {x_0, x_1, x_2, \ldots, x_{n - 1}, x_n}$ form a finite subdivision of $\closedint a b$. ### Infinite Let $x_0, x_1, x_2, \ldots$ be an infinite number of points of $\R$ such that: $a = x_0 < x_1 < x_2 < \cdots < x_{n - 1} < \ldots \le b$ Then $\set {x_0, x_1, x_2, \ldots}$ forms an infinite subdivision of $\closedint a b$. ## Normal Subdivision $P$ is a normal subdivision of $\closedint a b$ if and only if: the length of every interval of the form $\closedint {x_i} {x_{i + 1} }$ is the same as every other. That is, if and only if: $\exists c \in \R_{> 0}: \forall i \in \N_{< n}: x_{i + 1} - x_i = c$ ## Also known as Some sources use the term partition for the concept of a subdivision. However, the latter term has a different and more general definition, so its use is deprecated on $\mathsf{Pr} \infty \mathsf{fWiki}$.
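As a concrete illustration of the normal subdivision defined above (written with plain interval notation $[a, b]$ rather than the site's $\closedint a b$ macro), equally spaced points always work:

```latex
\[
x_i = a + i\,\frac{b-a}{n}, \qquad i = 0, 1, \ldots, n ,
\]
\[
x_{i+1} - x_i = \frac{b-a}{n} = c \quad\text{for every } i ,
\]
% so {x_0, x_1, ..., x_n} is a finite normal subdivision of [a, b].
```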
MathSciNet bibliographic data MR2279281 (2008d:11060) 11G25 (11S40 11S70 14G15 19D06 19F27)

Deitmar, Anton. Remarks on zeta functions and $K$-theory over ${\bf F}_1$. Proc. Japan Acad. Ser. A Math. Sci. 82 (2006), no. 8, 141–146.
# Divergence or Convergence based on rhat using different datasets

##### stan file

```
stanmodelcode="data {
  int<lower=1> N; // Number of points from 2 to track length-1 of the i'th for a cluster.
  int<lower=1> K; // number of states
  int<lower=1> J; // number of cells in the cluster.
  vector[N] angular_change_cluster; // an array containning the angular change of all trajectories.
  vector[N] radius_cluster; // an array containning the radius (second-to-the-end) of all trajectories.
  vector[N+J] number_of_neighbor_cluster; // we will use it as a covariate to determine two transition matrices.
  vector<lower=0>[K] alpha_prior; // dirichlet distribution parameters
  vector[K] alpha_P_1[K]; // dirichlet distribution parameters for P_1
  vector[K] alpha_P_2[K]; // dirichlet distribution parameters for P_2
  int<lower=1> counter_n[J]; // where to start the markovian process
  int<lower=1> b_final_f[J]; // where to end the markovian process
  int<lower=1> a_initial_f[J]; // first position of each trajectory
}
parameters {
  simplex[K] prior_pi; // k is the number of observed models we consider.
  simplex[K] P[2, K]; // transition matrix
  real<lower=0, upper=4> alpha_per_angle;
  real<lower=0, upper=10> beta_per_angle;
  real<lower=0, upper=10> alpha_non_per_angle;
  real<lower=0, upper=4> beta_non_per_angle;
  real<lower=0, upper=10> alpha_hesi_angle;
}
model {
  matrix[K, K] P_transformed[K]; //saving P as a matrix format.
  vector[N+J] prior_pi_alternative[K];
  vector[N] prior_pi_modeling[K];
  row_vector[N] models[K]; // coordinates
  row_vector[K] a;
  row_vector[K] b;
  int t;
  vector[K] transfer;
  for(k in 1:K){
    target+= dirichlet_lpdf(P[1, k]| alpha_P_1[k]);
  }
  for(k in 1:K){
    target+= dirichlet_lpdf(P[2, k]| alpha_P_2[k]);
  }
  for(l in 1:2){
    for(m in 1:K){
      for(n in 1:K){
        P_transformed[l, m, n] = P[l, m, n];
      }
    }
  }
  target+= dirichlet_lpdf(prior_pi| alpha_prior);
  for(j in 1:J){
    # target+= normal_lpdf(alpha_r[j]| b_0, sigma_alpha);
    for(k in 1:K){
      prior_pi_alternative[k, a_initial_f[j]] = prior_pi[k];
    }
    for(l in counter_n[j]: b_final_f[j]){
      for(k in 1:K){
        b[k] = prior_pi_alternative[k, (l-1)];
      }
      if(number_of_neighbor_cluster[l-1]<=0){
        a = b * P_transformed[1];
      }
      if(number_of_neighbor_cluster[l-1]>0){
        a = b * P_transformed[2];
      }
      for(k in 1:K){
        prior_pi_alternative[k, l] = a[k];
      }
    }
  }
  t = 1;
  for(j in 1:J){
    for(l in counter_n[j]: b_final_f[j]){
      for(k in 1:K){
        prior_pi_modeling[k, t] = prior_pi_alternative[k, l];
      }
      t = t+1;
    }
  }
  ## Introducing no priors means we consider flat prior for the parameter. Note that the domain of the flat prior comes from the bound we have already defined for parameters.
  for(n in 1:N){
    target += log_sum_exp( transfer );
  }
}"
```

Hello STAN group,

Thanks in advance for your support. I have some sets of data to run the above program with. I already use the following command to run the program:

```
fit <- stan(model_code = stanmodelcode, model_name = "example", data = dat.cluster.1,
            iter = 200000, chains = 8, verbose = TRUE,
            control = list(adapt_delta = 0.9999, stepsize = 0.001, max_treedepth = 15), cores = 8)
```

My problem is that the program converges for some of the data sets, but diverges for others, where I get large rhat for them. My question is whether increasing the number of iterations can help those data sets converge? Please note that it takes 10 days to get the program to run.

It might be worth mentioning that I got no error while running the program. Also, I did not play with some parameters like adapt_delta, stepsize, max_treedepth.
Regards, Elaheh

[edit: escaped program, but didn't clean up spacing]

> iter = 200000, chains = 8

Without knowing anything else about the model, this is almost definitely way too many iterations. Running for more iterations probably isn't going to help you. The defaults (2000 iterations, 4 chains) should give you something interesting.

If it's taking this long to get something out of the model, there's almost certainly something wrong with the model itself. It's true that a model can behave quite differently on different data, but that probably means that you need to use different models for the two bits of data.

Is there a small version of this model you can start with? It's almost always best to build up these models gradually, checking everything (posterior pairplots, posterior predictives if you have them, n_eff/Rhat) along the way.

1 Like

Also these settings:

> control = list(adapt_delta = 0.9999, stepsize = 0.001, max_treedepth = 15)

Making the adapt_delta really close to one like that and making the treedepth very deep are things you do when you have no other options. The first step is to try all the reparameterizations you can think of.

Thanks for your prompt reply. I have already checked the model many times… It seems correct to me… I also reparametrized it a couple of times… The data I work with are from the same phenomenon, which means they have the same format. Please note that the data is big… Can it make a difference?

Regards, Elaheh

So when I'm saying the model has a problem, I'm not talking about a typo or a mistake coding. Usually, if Stan doesn't sample a model efficiently, there's some underlying mathematical problem in the model, and it's worth figuring out. Honestly, if someone just gave you this model, there's very little reason to trust it – even if it's published somewhere or is what everyone uses. A lot of models are just bad. One of Stan's ways of diagnosing a model is that it takes forever to run it :D.

The most common thing to look for are unidentifiabilities in parameters. An example of this is if you had a problem like, "1 = a + b, solve for a and b". There's a whole line of solutions, not just one. It's very easy to accidentally code these into your models. Look at your model in ShinyStan or use the R pairplots. Look for where parameters are tightly correlated in the samples. Anything like this is bad – it's difficult for Stan to explore these places.

Really, all you can do is start with the smallest version of the model you have with a small bit of data, and build up from there. The adapt_delta and max_treedepth are about the only knobs you get with NUTS, and they're kind of last-resort things.

The volume of data can make a difference. You might have to make prior assumptions for small data to avoid parameters going crazy that aren't necessary if you just jam a ton of data in your model. However, big data might reveal misspecified pieces of your model you wouldn't have noticed with only a little data. But this is 2nd-order stuff. First is to get the model happily sampling in Stan.

4 Likes

Also, just because data is formatted the same way and goes into the same models doesn't mean the posteriors look the same! It's hard to separate any of these things. It's all very tangled haha. That's part of the fun.

2 Likes

Thanks for the time you spent on this. The problem might be because of the prior I already implemented. I made some changes following your comments to allow a wider range for the parameters. The other possibility is the model I use.
Anyway, it seems I need to spend some time on it… Regards, Elaheh

I would look at your priors—those interval-constrained priors can cause probability mass to pile up at the boundaries, which go to plus or minus infinity on the unconstrained scale where Stan samples. They will also push the estimated posterior means away from the boundary (creating bias if the broader prior was more accurate).

Additionally, the posterior has to be proper. So flat priors without upper and lower bounds have to be combined with data to ensure that the posterior is proper.

You can also vectorize most of these loops, e.g., with

```
for(k in 1:K){
  prior_pi_alternative[k, a_initial_f[j]] = prior_pi[k];
}
```

being

```
prior_pi_alternative[, a_initial_f[j]] = prior_pi;
```

Won't help with efficiency, but it makes the code easier to read and shorter.

1 Like

Hi Bob,

Thanks for your comments. I implemented some changes and sent the programs to run on the server. Please find below the new model.

##### stan file

```
stanmodelcode="data {
  int<lower=1> N; // Number of points from 2 to track length-1 of the i'th for a cluster.
  int<lower=1> K; // number of states
  int<lower=1> J; // number of cells in the cluster.
  vector[N] angular_change_cluster; // an array containning the angular change of all trajectories.
  vector[N] radius_cluster; // an array containning the radius (second-to-the-end) of all trajectories.
  vector[N+J] number_of_neighbor_cluster; // we will use it as a covariate to determine two transition matrices.
  vector<lower=0>[K] alpha_prior; // dirichlet distribution parameters
  vector[K] alpha_P_1[K]; // dirichlet distribution parameters for P_1
  vector[K] alpha_P_2[K]; // dirichlet distribution parameters for P_2
  int<lower=1> counter_n[J]; // where to start the markovian process
  int<lower=1> b_final_f[J]; // where to end the markovian process
  int<lower=1> a_initial_f[J]; // first position of each trajectory
}
parameters {
  simplex[K] prior_pi; // k is the number of observed models we consider.
  simplex[K] P[2, K]; // transition matrix
  real<lower=0, upper=4> alpha_per_angle;
  real<lower=0, upper=10> beta_per_angle;
  real<lower=0, upper=10> alpha_non_per_angle;
  real<lower=0, upper=4> beta_non_per_angle;
}
model {
  matrix[K, K] P_transformed[K]; //saving P as a matrix format.
  vector[N+J] prior_pi_alternative[K];
  vector[N] prior_pi_modeling[K];
  row_vector[N] models[K]; // coordinates
  row_vector[K] a;
  row_vector[K] b;
  int t;
  vector[K] transfer;
  for(k in 1:K){
    target+= dirichlet_lpdf(P[1, k]| alpha_P_1[k]);
  }
  for(k in 1:K){
    target+= dirichlet_lpdf(P[2, k]| alpha_P_2[k]);
  }
  for(l in 1:2){
    for(m in 1:K){
      for(n in 1:K){
        P_transformed[l, m, n] = P[l, m, n];
      }
    }
  }
  target+= dirichlet_lpdf(prior_pi| alpha_prior);
  for(j in 1:J){
    # target+= normal_lpdf(alpha_r[j]| b_0, sigma_alpha);
    for(k in 1:K){
      prior_pi_alternative[k, a_initial_f[j]] = prior_pi[k];
    }
    for(l in counter_n[j]: b_final_f[j]){
      for(k in 1:K){
        b[k] = prior_pi_alternative[k, (l-1)];
      }
      if(number_of_neighbor_cluster[l-1]<=0){
        a = b * P_transformed[1];
      }
      if(number_of_neighbor_cluster[l-1]>0){
        a = b * P_transformed[2];
      }
      for(k in 1:K){
        prior_pi_alternative[k, l] = a[k];
      }
    }
  }
  t = 1;
  for(j in 1:J){
    for(l in counter_n[j]: b_final_f[j]){
      for(k in 1:K){
        prior_pi_modeling[k, t] = prior_pi_alternative[k, l];
      }
      t = t+1;
    }
  }
  ## Introducing no priors means we consider flat prior for the parameter. Note that the domain of the flat prior comes from the bound we have already defined for parameters.
  for(n in 1:N){
    transfer[1] = log(prior_pi_modeling[1, n]) + beta_lpdf( (angular_change_cluster[n]/pi())| alpha_non_per_angle, beta_non_per_angle);
    transfer[2] = log(prior_pi_modeling[2, n]) + beta_lpdf((angular_change_cluster[n]/pi())| alpha_per_angle, beta_per_angle);
    target += log_sum_exp( transfer );
    if(max(transfer)==transfer[1]){
    }
    if(max(transfer)==transfer[2]){
    }
  }
}"
```

I manually separated the radius part. By using

```
fit <- stan(model_code = stanmodelcode, model_name = "example", data = dat.cluster.4,
            iter = 1000, chains = 8, verbose = TRUE,
            control = list(adapt_delta = 0.9999, stepsize = 0.001, max_treedepth = 15), cores = 8)
```

I got the following result:

```
> print(fit_ss$summary)
                              mean      se_mean           sd          2.5%
prior_pi[1]              0.3764711   0.10041494    0.2009592     0.1376445
prior_pi[2]              0.6235289   0.10041494    0.2009592     0.2179815
P[1,1,1]                 0.3801863   0.09655334    0.1932178     0.1398530
P[1,1,2]                 0.6198137   0.09655334    0.1932178     0.2368528
P[1,2,1]                 0.4522426   0.09892335    0.1979552     0.1815026
P[1,2,2]                 0.5477574   0.09892335    0.1979552     0.2061507
P[2,1,1]                 0.4436479   0.09343571    0.1869700     0.2586955
P[2,1,2]                 0.5563521   0.09343571    0.1869700     0.2294627
P[2,2,1]                 0.4068376   0.13352512    0.2672146     0.1265628
P[2,2,2]                 0.5931624   0.13352512    0.2672146     0.1298532
alpha_per_angle          1.9637384   0.20160161    0.4034314     1.3729530
beta_per_angle           5.7979664   0.66569337    1.3320983     4.0125342
alpha_non_per_angle      3.5532393   1.11268094    2.2265631     1.3269409
beta_non_per_angle       2.2606035   0.21655704    0.4333837     1.6565645
lp__                 -2881.7569818 413.73155873  827.8903103 -4860.2304270
                             25%           50%           75%         97.5%
prior_pi[1]            0.2555593  3.027808e-01     0.5134481     0.7820185
prior_pi[2]            0.4865519  6.972192e-01     0.7444407     0.8623555
P[1,1,1]               0.2477330  3.397177e-01     0.4498614     0.7631472
P[1,1,2]               0.5501386  6.602823e-01     0.7522670     0.8601470
P[1,2,1]               0.2850488  4.624272e-01     0.5867090     0.7938493
P[1,2,2]               0.4132910  5.375728e-01     0.7149512     0.8184974
P[2,1,1]               0.2991540  3.646874e-01     0.5324278     0.7705373
P[2,1,2]               0.4675722  6.353126e-01     0.7008460     0.7413045
P[2,2,1]               0.2143347  2.616189e-01     0.5840389     0.8701468
P[2,2,2]               0.4159611  7.383811e-01     0.7856653     0.8734372
alpha_per_angle        1.7081805  1.866690e+00     2.1497405     2.7507069
beta_per_angle         4.6163355  5.778767e+00     6.9176130     7.8860717
alpha_non_per_angle    1.8658726  3.094800e+00     3.8579596     8.6777145
beta_non_per_angle     1.9498927  2.252867e+00     2.4857999     3.0782765
lp__               -3051.6708411 -2.404105e+03 -2357.9987077 -2335.8918311
                        n_eff      Rhat
prior_pi[1]          4.005154  93.38408
prior_pi[2]          4.005154  93.38408
P[1,1,1]             4.004605 159.48504
P[1,1,2]             4.004605 159.48504
P[1,2,1]             4.004389 146.56405
P[1,2,2]             4.004389 146.56405
P[2,1,1]             4.004223 189.73539
P[2,1,2]             4.004223 189.73539
P[2,2,1]             4.004924 134.33409
P[2,2,2]             4.004924 134.33409
alpha_per_angle      4.004529 143.81990
beta_per_angle       4.004277 195.69684
alpha_non_per_angle  4.004319 153.92392
beta_non_per_angle   4.004983  88.77896
lp__                 4.004131 411.07480
```

Definitely, the model did not converge. I have some questions and I appreciate your time answering them.

1- How many more iterations do I need to get the model to converge? Considering that this result was obtained after three and a half days, I need a number of iterations that makes sense time-wise as well.

2- Is my new result an indicator of non-convergence for all parameters equally? In other words, based on what we see as rhat, do you expect the same convergence speed for all parameters?
3- Does it make sense to you to rewrite the last part of the code as follows?

```
for(n in 1:N){
  # transfer[1] = log(prior_pi_modeling[1, n]) + beta_lpdf( (angular_change_cluster[n]/pi())| alpha_non_per_angle, beta_non_per_angle);
  # transfer[2] = log(prior_pi_modeling[2, n]) + beta_lpdf((angular_change_cluster[n]/pi())| alpha_per_angle, beta_per_angle);
  if(max(prior_pi_modeling[, n])==prior_pi_modeling[1, n]){
    target += beta_lpdf( (angular_change_cluster[n]/pi())| alpha_non_per_angle, beta_non_per_angle);
  }
  if(max(prior_pi_modeling[, n])==prior_pi_modeling[2, n]){
    target += beta_lpdf((angular_change_cluster[n]/pi())| alpha_per_angle, beta_per_angle);
  }
```

Do you think it affects the convergence speed?

Regards, Elaheh

[edit: escaped code, but it's still unreadable with the spacing]

What specifically are you talking about? That's a rather large file.

If you have a mixture model, your R-hats are going to be bad simply due to identifiability. Michael Betancourt wrote a case study on mixture models on our web site that may be helpful, and there's some basic info in the manual chapter on problematic posteriors.

Sorry, missed the questions for all the code the first time.

It's not going to converge if that's where you're at now. When things are this bad, all is lost. I'm still not sure what all of this is doing, but you have a bunch of problematic constructs in your model, including interval-constrained priors and conditionals that seem to be conditioned on parameters. These will break continuous differentiability unless you're very careful (e.g., doing smooth interpolations).

Hi Bob, Thanks for the time you spent on my questions. I will implement them and let you know what happens next. Regards, Elaheh

I have a similar problem where I'm fitting a model to multiple (24) datasets. My model converges well for most data sets but not for others, I think due to poor parameter identifiability. Originally my 12 parameters had uniform priors. I think adding informative priors is probably the way to go to assist with identifiability.

With 24 data sets, can you use a hierarchical model?

1 Like

I don't know what a hierarchical model is! Possibly. This would mean there are parameters that apply to several data sets? I am fitting a simple hydrological simulation model to streamflow and chemistry data from 8 different catchments x 3 different time periods. I compare the results from the 3 time periods to assess the identifiability/stability of the catchment parameters.

OK, so that's time series, too.

With hierarchical models, the idea is to let each condition (catchment, time period) get its own parameter (as you do when you fit them all together), but then you give it a prior you also fit at the same time. For example, if alpha is a parameter in every model, you would have alpha[1], ..., alpha[24] and set up a prior like

```
alpha ~ normal(mu, sigma);
```

where mu and sigma are themselves parameters to estimate. Then mu gives you the mean among all conditions, sigma the scale of deviation across conditions.

There's a tutorial in the Stan case studies using radon and another using repeated binary trials that are gentle intros to hierarchical models.

1 Like

Yeah exactly. Although then you would think of it as one larger data set that can be sliced up to give you your separate datasets. For example, if you think the sites are not independent then it may make sense to model the site-specific parameters as having common dependencies on parameters that affect all possible sites.
Like how in the eight schools example in the RStan vignette and the Stan manual (and BDA3), the school-specific parameters $\theta_1, \ldots, \theta_8$ each depend on the so-called population or global parameters $\mu$ and $\tau$.

1 Like
## Overview

You are using a WattNode Pulse and the indicated power or energy is roughly double the expected values. You've checked the scale factors and everything appears correct.

## Possible Problems

• Your data logger or pulse counting device is counting every transition or change of state (low-to-high and high-to-low) as a pulse. We define a pulse as a full cycle including two transitions. You can solve this either by reconfiguring your data logger, or by changing the scale factors by a factor of 2.0 to correct. For example, if you've computed a scale factor of 1.2 watthours per pulse, you could change this to 0.6 watthours per pulse to adjust.
• Your data logger or pulse counting device is AC coupled (this is rare) and so treats every transition as a pulse. As above, change the scale factors by a factor of 2.0 to correct.
• The WattNode is measuring incorrectly. This is unlikely: we've never seen a measurement error anywhere near this large, and most problems result in low readings. There is a small chance that you've applied 240 VAC to a WattNode designed for 120 VAC; if this is the case, you should see the LEDs rapidly flashing red-green-red-green.
• The current transformers are measuring incorrectly. We are not aware of any type of load or installation scenario that can cause a CT to report two times the actual current. There are rare cases where CTs are mislabeled, so this could be the problem. The best way to check for this is to compare the actual (or estimated) current to the output voltage (millivolts AC) of the current transformer (a worked example is given at the end of this note):

$mVAC_{CT} = \dfrac{1000 \cdot A_{Est}}{3 \cdot A_{CT}}$

• $mVAC_{CT}$ – AC millivolts measured on the CT outputs (between the white and black wires). The normal full-scale value is 333 mVAC.
• $A_{Est}$ – Estimated (or measured) actual current through the CT
• $A_{CT}$ – Rated current of the CT
• You may have an error in your scale factor computations. Contact our technical support and we can check the values for you.

Keywords: two times
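As a worked example of the CT check above, with illustrative numbers that are not from any particular installation: suppose a 100 A CT is carrying an estimated 20 A. Then

```latex
\[
mVAC_{CT} \;=\; \frac{1000 \cdot 20}{3 \cdot 100} \;\approx\; 66.7\ \text{mVAC},
\]
```

that is, about 20% of the 333 mVAC full-scale output. A measured value near double that would point toward a mislabeled CT or a doubled pulse count.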
## Theorem 31: the non-zero integers (mod p) form a group under multiplication

I did a couple of sessions with groups of 15 and 16 year olds last week.  I wanted to work with them on ideas involving squares (quadratic residues) in modular arithmetic, but that meant understanding what modular arithmetic was about, so I started by introducing that idea.  I'd like to write about part of what I did in the sessions, because I felt it was quite a successful activity and I hope you might find it interesting too.  I encourage you to try these things for yourself, rather than just reading!

So, modular arithmetic.  We started by noticing that 3248 cannot be a square, because no square ends in an 8.  What was important to us was just the last digit, and we can record that using the notation of modular arithmetic.  Here are four equivalent statements:

• 3248 leaves remainder 8 when divided by 10;
• $3248 \equiv 8$ (mod 10);
• 3248 and 8 leave the same remainder when divided by 10; and
• 10 divides 3248 – 8.

They are all useful ways of thinking about the same idea.

We moved on to see how addition works: we saw that $3248 + 73 \equiv 8 + 3 \equiv 11 \equiv 1$ (mod 10).  We can simply focus on the last digits, because that's all that's important in the mod 10 world.

Then we drew up multiplication tables in the mod 5 world, the mod 3 world and the mod 7 world, and looked for interesting patterns that we might be able to explain.  Here's the grid for the mod 5 version; the completed tables are over the break (but I encourage you to draw them up for yourself!).  I've filled in the diagonal, so that you can check you've got the right idea.

| x (mod 5) | 0 | 1 | 2 | 3 | 4 |
|-----------|---|---|---|---|---|
| 0 | 0 |   |   |   |   |
| 1 |   | 1 |   |   |   |
| 2 |   |   | 4 |   |   |
| 3 |   |   |   | 4 |   |
| 4 |   |   |   |   | 1 |

OK, here's my version of the completed tables for 3, 5 and 7.

| x (mod 3) | 0 | 1 | 2 |
|-----------|---|---|---|
| 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 2 |
| 2 | 0 | 2 | 1 |

| x (mod 5) | 0 | 1 | 2 | 3 | 4 |
|-----------|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 2 | 3 | 4 |
| 2 | 0 | 2 | 4 | 1 | 3 |
| 3 | 0 | 3 | 1 | 4 | 2 |
| 4 | 0 | 4 | 3 | 2 | 1 |

| x (mod 7) | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|-----------|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
| 2 | 0 | 2 | 4 | 6 | 1 | 3 | 5 |
| 3 | 0 | 3 | 6 | 2 | 5 | 1 | 4 |
| 4 | 0 | 4 | 1 | 5 | 2 | 6 | 3 |
| 5 | 0 | 5 | 3 | 1 | 6 | 4 | 2 |
| 6 | 0 | 6 | 5 | 4 | 3 | 2 | 1 |

See any interesting patterns?  The students with whom I was working came up with three or four patterns, which I'll describe below, but I think there are other interesting things to notice (and explain).

1. Perhaps the most obvious thing is that the 0 row and 0 column of each table are filled with 0s.  This does not come as a surprise: $0 \times a = 0$ in ordinary arithmetic, so $0 \times a \equiv 0$ (mod $n$) for any $n$.  So we can be confident that the same will happen in other multiplication tables too.
2. There's a symmetry about the diagonal going from top left to bottom right.  A little thought reveals that this is because $a \times b \equiv b \times a$ (mod $n$) — multiplication (mod $n$) is commutative.  Again, a pattern that occurs in any of these multiplication tables.
3. There's a symmetry about a diagonal going from top right to bottom left (being slightly careful about which diagonal we choose).  To make sense of this one, it's helpful to use negative numbers in the mod arithmetic world: $5 \equiv -2$ (mod 7), for example.  The symmetry then says that $(-a) \times (-b) \equiv a \times b$ (mod $n$); once again, a pattern that works for any modulus.
4. Ignoring the 0 row and 0 column, each row and column contains each possible number exactly once.  The numbers form a Latin square (think Sudoku, if you prefer!).  Does this work for all moduli?  The rest of this post is really going to be making sense of this pattern; I encourage you to consider it further before reading on.

Does it work for all moduli?
Perhaps we should try some more.  Let's try an even modulus: let's try 4.

| x (mod 4) | 0 | 1 | 2 | 3 |
|-----------|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 2 | 3 |
| 2 | 0 | 2 | 0 | 2 |
| 3 | 0 | 3 | 2 | 1 |

Ah.  So we don't have a Latin square here.  Why?

A little thought reveals that the problem somehow occurs when we multiply a number that is not coprime to the modulus.  You could test this by trying other moduli (e.g. 6 or 9 or 10).

So for the rest of this post I'd like to concentrate on prime moduli, because it makes our lives a little simpler.  Similar arguments can be adapted to deal with composite moduli in a suitable way — this would be a good exercise if you're interested.

So let's think about the multiplication table (mod $p$), where $p$ is a prime.

One strategy is to think of the row corresponding to $a$ as being made by repeatedly adding $a$ (mod $p$), and then argue that it takes $p$ steps to get back to the beginning because $a$ and $p$ are coprime.

Another is to use Bézout's theorem.  That tells us that since $a$ and $p$ are coprime, there are integers $m$ and $n$ such that $am + np = 1$.  Interpreting that equation as a congruence (mod $p$), we see that $am \equiv 1$ (mod $p$).  We can think of $m$ as the multiplicative inverse of $a$ (mod $p$): it's the number by which we multiply $a$ to get 1 (in the mod $p$ world).  The fact that $a$ has a multiplicative inverse (mod $p$) at all is really useful (it's not obvious), and an added bonus is that Bézout's theorem gives us an explicit way to compute it.

How does this help?  Well, there are $p-1$ possible values to go in the row corresponding to $a$ (not including 0, remember), because there are $p-1$ non-zero values in the mod $p$ world.  If we could show that all the $p-1$ multiples of $a$ we're looking at ($a$, $2a$, …, $(p-1)a$) are different, then we'd be in business: they'd have to be $1$, $2$, …, $p-1$ in some order.

So our job now is to show that if $ka \equiv la$ (mod $p$), then $k \equiv l$ (mod $p$) (if two multiples of $a$ are the same, then they're the same multiple).  But we can do that, using the existence of the inverse that we saw just now.  If $ka \equiv la$ (mod $p$), then, multiplying by this inverse $m$, we have $kam \equiv lam$ (mod $p$).  That is, $k \equiv l$ (mod $p$).  Putting that another way, if $a$ is coprime to the modulus $p$, then we can 'cancel' the $a$ from both sides of a congruence, by multiplying by its multiplicative inverse.

So that explains the pattern.

It's now not far to this week's theorem (which is, I confess, an excuse for this post!).

Theorem The non-zero integers (mod $p$) form an Abelian group under multiplication.

To prove this, we have four (or five) things to check.

• Firstly, the binary operation is indeed a binary operation: if we take two non-zero integers (mod $p$) and multiply them, we get another non-zero integer (mod $p$).  This is an exercise!
• The operation is associative.  This follows from the associativity of multiplication in the integers — again, I'll leave it to you to write out the details.
• There is an identity element, namely 1.  We can see this appearing in our multiplication tables above: the row corresponding to 1 makes no change.
• Each element of the set has an inverse.  This is what we've just proved.
• The operation is commutative (we saw this earlier, when discussing one of the lines of symmetry in the multiplication tables), so the group is indeed Abelian.

I've used this theorem previously in this blog.
I used it to give an example of a group in my post about Lagrange’s theorem in group theory, and mentioned it again in Proof 3 in my post about Fermat’s little theorem.  It’s a handy sort of thing to know (and the existence of inverses is really crucial to all sorts of things in number theory). I’m not quite sure what to put here, to be honest.  The existence of inverses like this is elementary number theory (and so covered by Davenport’s The Higher Arithmetic, for example), but to find applications of the fact that these numbers form a group, you might do better to find an introductory book on group theory.  There’s further information (such as what happens when the modulus isn’t prime) on Wikipedia. ### 16 Responses to “Theorem 31: the non-zero integers (mod p) form a group under multiplication” 1. Theorem 31: the non-zero integers (mod p) form a group under … : Best Mod . com Says: […] Then we drew up multiplication tables in the mod 5 world, the mod 3 world and the mod 7 world, and looked for interesting patterns that we might be able to explain. Here’s the grid for the mod 5 version; the completed tables are over … See the article here: Theorem 31: the non-zero integers (mod p) form a group under … […] 2. Theorem 35: the best rational approximations come from continued fractions « Theorem of the week Says: […] 35: the best rational approximations come from continued fractions By theoremoftheweek Like Theorem 31, this post is based on a session that I did with some school students.  And as I did there, I […] 3. Junwoo Jung Says: Dear the author of “Theorem of The Week”, It is a very impressive and interesting post. It is very useful to me and my work. So, I wanna to say “thanks a lot”. Junwoo Jung, South Korea. 4. Number Theory: Lecture 2 « Theorem of the week Says: […] of congruence notation, and the notation […] 5. Theorem 38: There is a primitive root modulo a prime « Theorem of the week Says: […] This is something that I was lecturing on the other day.  There I was talking to third-year Cambridge undergraduates, who know quite a bit of mathematics about groups and the like, and so I presented the topic accordingly.  My challenge in this post is to describe some of the ideas in a way that assumes less technical background, since my aim is wherever possible to make these posts accessible to people who haven’t studied maths at university.  I am going to assume that people know a small amount of modular arithmetic, but that’s OK because I’ve written about it previously. […] 6. Brian Says: Amazing post. Indeed, Herstein’s Topics in Algebra has a question regarding the proof of the group of non-zero integers modulo p, which I was stuck with it and your post explained it very well. I’m currently working through your other blog posts and very pleased with them. Please keep them coming! 7. Luqing Ye Says: I’ve also wrote a post about such matter. http://h5411167.wordpress.com/2011/11/09/group-and-modulo/ 8. Sutton Trust summer school 2012 « Theorem of the week Says: […] lemma.  In the second, we mentioned the Fundamental Theorem of Arithmetic, and we learned about modular arithmetic.  In the third and final lecture, we mentioned that is irrational, and talked about continued […] 9. Theorem 43: the Steinitz Exchange Lemma « Theorem of the week Says: […] is , which is a vector space over the field of integers modulo (here is a prime).  Vectors in this space are -tuples of integers modulo […] 10. Sutton Trust summer school 2013 | Theorem of the week Says: […] lemma.  
## Question — Physics, Class 11

The velocity of a particle at which the kinetic energy is equal to its rest mass energy is

(A) $\dfrac{3c}{2}$  (B) $\dfrac{3c}{\sqrt{2}}$  (C) $\dfrac{(3c)^{1/2}}{2}$  (D) $\dfrac{c\sqrt{3}}{2}$

The relativistic kinetic energy of a particle of rest mass $m_0$ is given by $K=(m-m_0)c^2$, where $m=\dfrac{m_0}{\sqrt{1-v^2/c^2}}$ and $m$ is the mass of the particle moving with velocity $v$.

Since the kinetic energy equals the rest mass energy,
$(m-m_0)c^2=m_0c^2$, so $mc^2=2m_0c^2$, i.e. $\dfrac{m_0}{\sqrt{1-v^2/c^2}}=2m_0$.

Hence $1-\dfrac{v^2}{c^2}=\dfrac{1}{4}$, so $\dfrac{v^2}{c^2}=\dfrac{3}{4}$ and $v=\dfrac{\sqrt{3}}{2}\,c$, which is option (D).
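A quick numerical check of that algebra (an R sketch; the variable names are arbitrary):

beta <- sqrt(3)/2              # v/c
g <- 1/sqrt(1 - beta^2)        # Lorentz factor: equals 2
g - 1                          # kinetic energy in units of m0*c^2: equals 1, the rest energy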
# Velocity and average acceleration

Acceleration is a measure of how rapidly the velocity is changing, so the average acceleration over a time interval is the change in velocity divided by the elapsed time,
$a_{avg} = \frac{\Delta v}{\Delta t} = \frac{v_f - v_i}{t_f - t_i},$
measured in units of distance per time squared (typically metres per second per second, m/s²). Velocity and acceleration are both vectors, so $\Delta v$ accounts for changes in speed as well as changes in direction. If you are given the acceleration, you can find an expression for the velocity by integrating and then apply the same averaging procedure to the velocity that you applied to the acceleration; conversely, the instantaneous acceleration is the derivative of the velocity, found by measuring the change in velocity over an ever smaller time interval.

Average velocity is defined in the same spirit: it is the displacement divided by the time interval, $\bar v = d/\Delta t$, where the displacement $d$ is a vector and $\Delta t$ is a scalar. In the special case of constant acceleration, the average velocity over an interval is simply $\bar v = (v_f + v_i)/2$, so knowing the initial and final velocities is enough. A free-falling object, for example, accelerates constantly, which is why its average velocity over each consecutive one-second interval changes by the same amount.

Typical practice questions on this material: (a) what is the average velocity for the first 4 s of a motion, (b) what is the instantaneous velocity at t = 5 s, and (c) what is the average acceleration between 0 and 4 s? A soccer ball slows down from 306 m/s, west to 11 m/s, west during a 275 second period — what is its average velocity and what is its average acceleration? If a vector represents a velocity of 10 m/s northeast, then in similar fashion the average acceleration vector over a period of time is $a = (v_f - v_o)/t$.
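As a small worked example of the definition $a_{avg} = \Delta v/\Delta t$ (the numbers here are made up):

v_i <- 20; v_f <- 5; dt <- 3       # m/s, m/s, s (hypothetical values)
a_avg <- (v_f - v_i)/dt            # average acceleration
a_avg                              # -5 m/s^2: negative because the object is slowing down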
# Thread: How to apply differentiation here? 1. ## How to apply differentiation here? Under ideal conditions, a perfect gas satisfies the equation PV=K ( where P is pressure, V is volume and K is a constant ). If K=60 and pressure is found by measurement to be 1.5 units with an error of 5% per unit,find approximately the error in calculating V. 2. ## Re: How to apply differentiation here? Originally Posted by Vinod Under ideal conditions, a perfect gas satisfies the equation PV=K ( where P is pressure, V is volume and K is a constant ). If K=60 and pressure is found by measurement to be 1.5 units with an error of 5% per unit,find approximately the error in calculating V. The error is given by the exact change in volume divided by the volume. $\frac{\Delta V}{V}$ This, in general, is hard to calculate so we approximate it with $\frac{\Delta V}{V} \approx \frac{dV}{V}$ Now if we take the derivative using the product rule we get $PdV+VdP=0 \iff dV=-\frac{VdP}{P}$ plugging this into the above gives $\frac{\Delta V}{V} \approx \frac{dV}{V}=-\frac{\frac{Vdp}{P}}{V}=-\frac{dP}{P}=-\frac{.05}{1.5}=-\frac{1}{30}$ Since the 5% can be positive or negative we get that the approximate error is $\pm \frac{1}{30} \approx \pm 3.\bar{3}\%$
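A quick numerical cross-check of that approximation, reading the 5% as an error of 0.05 units in the measured pressure (an R sketch):

K <- 60; P <- 1.5; dP <- 0.05      # PV = K, the measured P, and the absolute error in P
V <- K/P                           # volume computed from the measured pressure
(K/(P + dP) - V)/V                 # exact relative change in V: about -3.2%
-dP/P                              # differential approximation: -1/30, about -3.3%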
Skip to main content # Section3Criteria for Excellence Learning narratives: These narratives are perhaps the most important assignments in the course. They frame each portfolio in the course by explaining to the reader: 1. (Before beginning work on the portfolio) What do I know about these topics already? Where have I seen these terms and/or solved these types of problems in the past? Other than unfamiliarity with the material, what more specific questions do I have about this material and/or what am I most interested in learning about it? What am I most concerned about in the material ahead? What goal(s) do I have for my learning and/or my performance in this portfolio? 2. (After completing work on the portfolio, before submission) What did I learn in this portfolio? Did I reach the goals set forth in the assignment? For each learning standard and goal in this assignment, what evidence have I included in my portfolio that I meet that learning standard? Why does my evidence show that? If one or more learning standard was not met, what would I like to do to improve on that standard between now and the final exam period? Your learning narrative should be the first item in your portfolio for each portfolio you submit. Please typeset this narrative using a word process or Overleaf, and upload it to your portfolio as a DOC/DOCX or PDF (preferred). Minimum length is 500 words, and your pre-portfolio and post-portfolio responses (1. and 2. above) should be clearly sectioned in your narrative writeup. Reading and annotating: The goal of these assignments is that you gain experience reading mathematical ideas carefully, asking and helping classmates to answer "good" questions about mathematical material, and locating and sharing helpful resources from elsewhere online. To complete these assignments, read the linked section and add your annotations to our Linear Algebra Hypothes.is group. Remember, everyone in the class can see and interact with your annotations. In each section, do at least two of the following five things. 1. "I notice..." Describe what about this passage sticks out to you. How is it related to other ideas, either from our class or from previous classes like calculus? How would you describe this idea in your own words? 2. "I wonder..." Speculate on the deeper meanings/connections of this idea. What else might be true, based on what you read here? Where else might it be useful, outside of real analysis? 3. Ask a clarifying question. When you encounter a passage that you don't understand after several readings, formulate a question whose answer would most help to clarify your thinking. Your question should tell the reader both what you do understand about the topic, and what about the writing is confusing for you. A question that says only "I don't get this?" or "What does this mean?" is likely not to receive credit. 4. Answer a classmate's question. Choose "Reply" in a classmate's comment and respond to it. Address their question directly and specifically, providing references to other places in the text where appropriate. Try to give an answer that will most clarify their understanding. 5. Share a helpful video. Locate a video on YouTube that you find helpfully explains or clarifies the material you're reading. Embed that video in a Hypothes.is comment (How? Click here) and write 1-2 sentences explaining what question(s) you had about the reading and how your understanding was improved by the video. (Try to avoid using videos Dr. S has uploaded. Find someone else's perspective!) 
Piazza discussions: The goal of Piazza is to provide a place for you to collaborate with your instructor and your classmates during the problem solving process, with a particular focus on stimulating discussion around the conceptual and proof-based elements of the course. You're invited to participate in these discussions at least several times per portfolio in the course. Contributions you make to the Piazza discussions can be: 1. "Good" questions about something you're working on. A "good" question details for the reader much of the following: (1) the specific task you're working on, (2) the specific place where you're getting hung up / confused, (3) your best guess about why you're getting stuck there, (4) what you've tried so far, including what resources you've consulted, and/or (5) what kind of feedback you're looking for. You needn't write a page worth of text; just be sure your question provides enough of the above context for the reader to be able to provide helpful suggestions. 2. Feedback to others' "good" questions. Effective responses address the "good" question's request for feedback in ways that help to further understanding, but not give away an entire solution. Where good questions can be lengthy, good feedback to those questions is often brief and incisive.
# On $\prod^{(p-1)/2}_{i,j=1\atop p\nmid 2i+j}(2i+j)$ and $\prod^{(p-1)/2}_{i,j=1\atop p\nmid 2i-j}(2i-j)$ modulo a prime $p>3$

QUESTION: Is my following conjecture true?

Conjecture. Let $p>3$ be a prime and let $h(-p)$ be the class number of the imaginary quadratic field $\mathbb Q(\sqrt{-p})$. Then $$\left(\frac{p-1}{2}\right)!!\prod^{(p-1)/2}_{i,j=1\atop p\nmid 2i+j}(2i+j)\equiv \begin{cases}(-1)^{(p-1)/4}\pmod p&\text{if}\ p\equiv 1\pmod4, \\(-1)^{(h(-p)+1)/2}\pmod p&\text{if}\ p\equiv 3\pmod 4.\end{cases}$$ Also, $$\left(\frac{p-3}{2}\right)!!\prod^{(p-1)/2}_{i,j=1\atop p\nmid 2i-j}(2i-j)\equiv \begin{cases}1\pmod p&\text{if}\ p\equiv1\pmod4,\\(-1)^{(p-1+2h(-p))/4}\pmod p&\text{if}\ p\equiv3\pmod4.\end{cases}$$

I have checked the conjecture via a computer. It should be true in my opinion. Your comments are welcome!

• I have proved that the two congruences are equivalent and that the square of each left-hand side is congruent to $1$ modulo $p$. So it remains to determine the signs. Nov 1 '18 at 12:21
• For what I said in the previous comments, see Lemma 4.1 of my preprint arxiv.org/abs/1810.12102. Nov 2 '18 at 1:43
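For readers who want to test small cases themselves, here is a brute-force R sketch of the first congruence in the case $p\equiv 1\pmod 4$ (so the class number does not enter); it simply reduces the double product and the double factorial modulo $p$ and compares with $(-1)^{(p-1)/4}$:

check_first <- function(p) {                 # p is assumed to be a prime with p ≡ 1 (mod 4)
  m <- (p - 1)/2
  pr <- 1
  for (i in 1:m) for (j in 1:m) {
    t <- (2*i + j) %% p
    if (t != 0) pr <- (pr * t) %% p          # the double product, reduced mod p at each step
  }
  dd <- 1
  for (k in seq(m, 2, by = -2)) dd <- (dd * k) %% p   # ((p-1)/2)!! mod p (m is even here)
  lhs <- (dd * pr) %% p
  rhs <- if (((p - 1)/4) %% 2 == 0) 1 else p - 1      # (-1)^((p-1)/4) represented mod p
  c(p = p, lhs = lhs, rhs = rhs)             # the conjecture predicts lhs == rhs
}
sapply(c(5, 13, 17, 29, 37), check_first)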
# Question #337cb

Feb 24, 2018

$55\ \text{cm}^3$

#### Explanation:

The thing to remember about the density of a given substance is that it tells you the mass of exactly $1$ unit of volume of that substance.

In your case, you know that the density of steel is equal to $7.8\ \text{g cm}^{-3}$. This value tells you that every $1\ \text{cm}^3$ of steel, the equivalent of one unit of volume, has a mass of $7.8\ \text{g}$.

So if you know that you need $7.8\ \text{g}$ of steel in order to have $1\ \text{cm}^3$ of steel, you can say that $430\ \text{g}$ of steel would correspond to a volume of

$430\ \cancel{\text{g}} \times \frac{1\ \text{cm}^3}{7.8\ \cancel{\text{g}}} = 55\ \text{cm}^3$

The answer is rounded to two sig figs.

So remember, every time you have the density of a substance, you can use it as a conversion factor to find the mass of a given volume or the volume of a given mass. For this example, you would have

$\text{mass} \to \text{volume:}\quad \frac{1\ \text{cm}^3}{7.8\ \text{g}}$

$\text{volume} \to \text{mass:}\quad \frac{7.8\ \text{g}}{1\ \text{cm}^3}$
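The same conversion as a one-liner, for anyone who wants to plug in other numbers (an R sketch):

mass <- 430; rho <- 7.8            # grams, and grams per cubic centimetre
mass/rho                           # about 55 cm^3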
# Wave Problem

Hey! Here is one that I thought would be easy:

Two traveling waves move on a string that has a fixed end at x=0. They are identical except for opposite velocities. Each has an amplitude of 2.46 mm, a period of 3.65 ms, and a speed of 111 m/s. Write the wave function of the resulting standing wave.

The wave would be represented by the function $y(x,t) = A_{sw}\sin(kx)\sin(\omega t)$, with $k = \omega/v = 1720/111 = 15.5\ \text{m}^{-1}$.

This is not right though... any ideas? I am least sure about k.
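For what it's worth, the numbers work out like this (an R sketch in SI units, using the usual result that the standing-wave amplitude is twice the amplitude of each travelling wave):

A <- 2.46e-3; Tp <- 3.65e-3; v <- 111   # amplitude (m), period (s), speed (m/s)
w <- 2*pi/Tp                            # angular frequency, about 1721 rad/s
k <- w/v                                # wave number, about 15.5 rad/m
c(w = w, k = k, A_sw = 2*A)             # standing-wave amplitude 4.92 mm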
1. A voltage is described as v(t) = 3u(t) - 3u(t-3) V, where u(t) is the step function. The values at t = 2 seconds and t = 4 seconds are x and y respectively, where: (select the correct answer)
A) B) C)

2. A current is described by the function i(t) = 5cos(1000t - 2) mA. The argument of the cosine is in radians and t is in seconds. The frequency may be expressed as: (select the two answers that apply)
A) B) C) D) E) F) G)

3. A voltage is described as v(t) = [3r(t) - 4r(t-2)] volts. The value of the voltage at t = 4 seconds is:
A) B) C) D) E)

4. A current is described by the function i(t) = 5cos(1000t - 2) mA. The argument of the cosine function is in radians, so the '2' refers to a phase in radians. The time, t, is in seconds. The value at t = 1 ms is: (choose the best answer)
A) B) C)
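To check one's working against the answer choices, the given waveforms can be evaluated directly; a short R sketch (the helper names u, r, v1, v3 are just local definitions for this check):

u  <- function(t) as.numeric(t >= 0)        # unit step
r  <- function(t) pmax(t, 0)                # unit ramp
v1 <- function(t) 3*u(t) - 3*u(t - 3)       # waveform in question 1
c(v1(2), v1(4))                             # 3 V and 0 V
v3 <- function(t) 3*r(t) - 4*r(t - 2)       # waveform in question 3
v3(4)                                       # 3*4 - 4*2 = 4 V
5*cos(1000*1e-3 - 2)                        # question 4: about 2.70 mA at t = 1 ms
1000/(2*pi)                                 # question 2: omega = 1000 rad/s is about 159 Hz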
## On the converse of Pansu's theorem

Submitted Paper, 2022

Abstract: We provide a suitable generalisation of Pansu's differentiability theorem to general Radon measures on Carnot groups and we show that if Lipschitz maps between Carnot groups are Pansu-differentiable almost everywhere for some Radon measures $\mu$, then $\mu$ must be absolutely continuous with respect to the Haar measure of the group.
# Set Notation graph LR classDef currentPage stroke:#333,stroke-width:4px ALG(["fas:fa-trophy Algorithmis fas:fa-trophy "]) ASY_ANA(["fas:fa-check Asymptotic Analysis#160;"]) click ASY_ANA "./math-asymptotic-analysis" MAT_NOT(["fas:fa-check Mathematical Notation#160;"]) click MAT_NOT "./math-notation" POL(["fas:fa-check Polynomials #160;"]) click POL "./math-polynomials" MAT_FUN(["fas:fa-check Math Functions#160;"]) click MAT_FUN "./math-functions" LOG(["fas:fa-check Logarithms#160;"]) click LOG "./math-logarithms" COM(["fas:fa-check Combinatorics#160;"]) click COM "./math-combinatorics" SET_NOT(["fas:fa-check Set Notation#160;"]) click SET_NOT "./math-set-notation" class SET_NOT currentPage GRA(["fas:fa-check Graphing#160;"]) click GRA "./math-graphing" BW(["fas:fa-check Bitwise Logic#160;"]) click BW "./math-bitwise" ASY_ANA-->ALG BW-->ALG COM & GRA & SET_NOT-->ASY_ANA MAT_NOT--> SET_NOT POL & LOG--> MAT_FUN MAT_FUN--> GRA Set theory is an absolutely fascinating branch of mathematics. It’s intuitively simple on the surface; however, it’s incredibly deep and has far-reaching implications. “The axioms of set theory imply the existence of a set-theoretic universe so rich that all mathematical objects can be construed as sets”1. Fortunately or perhaps unfortunately depending on your perspective, the portion of set theory that applies to algorithms is diminutive albeit ubiquitous. This page focuses on the set of concepts (pun intended) that apply to algorithm design and analysis. This section has an ancillary purpose as it also serves as an introduction to mathematical notation in general. It’s important to become fluent in written math for two reasons. The first is that reading and writing proofs is requisite for algorithm mastery. The second is that it is an efficient medium for communicating dense concepts. The concise symbols that comprise the language of math is far more expressive and precise than natural languages such as English, Spanish, etc… Those in the software field often struggle with mathematical notation because written math, while far more precise than natural language, is not nearly as Draconian as typical programming languages. Miswritten syntax in math has no impact (assuming it’s still comprehensible). A syntax error while programming results in a compiler error. Remember that mathematical notation is a matter of convention and authors tend to interpret the convention loosely. There are several correct ways to write the same thing. For instance, $A^\prime$, $A^\complement$, and $A^\sim$ are all equivalent. What’s worse, it’s also common to see the same symbol with a different meaning depending on the context. For instance, the symbol $\Sigma$ denotes a summation; however, it’s not uncommon to define it to be an arbitrary variable. Deriving meaning from mathematical notation takes practice. Do not be discouraged if it isn’t painfully obvious at first. Keeping all this in mind, understand that every attempt is made to showcase the most common representations; however, it’s highly likely that you will encounter variations. PRO TIP: Bookmark this page in the event that you encounter a symbol you don’t recognize. #### History Georg Cantor introduced set theory in his 1874 paper entitled On a Property of the Collection of All Real Algebraic Numbers. Although controversial at the time, set theory was a major breakthrough in mathematics. Georg’s paper essentially proved that there are multiple sizes of infinity. 
The concept is so powerful that it is used as a foundation for all of mathematics. Set theory sustained a crippling blow in the early 1900s when Bertrand Russell formulated what is know as Russell’s Paradox. Mr. Russell wrote about it in a book that he co-authored with Alfred North-Whitehead entitled Principia Mathmatica2. The book was published in 1903. In a nutshell, a logical inconstancy arises when trying to formulate the set of all sets that are not members of themselves. The said set is only a member of itself if it’s not a member of itself. Luckily, set theory was so incredibly useful that it was not abandoned. Ernet Zermelo and Abraham Fraenkel reformulated set theory to alleviate the paradox. Georg’s original theory is now referred to as naive set theory and the more complete theory is called axiomatic or Zermelo-Fraenkel set theory. In the domain of computer science, naive set theory is more than adequate. All of the relevant concept are covered here. Although the wider topic of set theory is not directly applicable to algorithms, it’s an absolutely fascinating topic that anyone will enjoy exploring. ## What is a Set? Set theory concerns grouping objects, known as members or elements, into well-defined unordered sets. All the items within a set are unique; that is, duplicates are not allowed. This is a familiar concept as it’s taught in grade school. As an example, consider the Kardashians; each family member (Kim, Khloe, Kourtney, Rob, etc…) is an element in the Kardashian set. A more formal example is the classification of the animal kingdom into sets known as phyla, class, order, family, and genus. Before continuing on, take a minute and try to identify a few well-defined sets. Sets are depicted as objects inside curly braces and are denoted by capital letters. For instance, $A=\{1,2,3\}$ is the mathematical representation of the set $A$ with the members $1$, $2$, and $3$. Sets can contain other sets such as $B=\{\{1,2\},\{2,4\}\}$. The symbol $\emptyset$, alternatively $\{\}$, represents the empty set (set with no objects). Conversely, $\U$ is the universe (aka universal set or universe of discourse) which contains all possible elements in an area of interest. The number of items included in a set, represented by surrounding vertical bars, is called the cardinality3 of the set. $\vert A \vert$ means the “cardinality of $A$”. As $A$ is defined in this paragraph, $\vert A \vert=3$. ### Summary Symbol Description $\{1, 2\}$ Set containing the members $1$ and $2$ $\emptyset$ Empty set — set with no members $\U$ Universe — set with all members in the area of interest $\vert A\vert$ Cardinality (number of items in set) of set $A$ ## $\U$ Number Sets It’s common to constrain the types of numbers that an algorithm accepts or generates. Set theory defines several sets of numbers based on their properties. See the table below. Symbol Name Description $\N$ Natural (aka Counting) All positive whole numbers including 04 $\Z$5 Integers $\N$ + negative whole numbers $\Q$ Rational (Quotient) $\Z$ + numbers representable as fractions $\frac{1}{2}$ or $\frac{3}{4}$ $\R$ Real $\Q$ + irrational numbers including algebraic numbers (e.g. $\sqrt{2}$) and transcendental numbers (e.g. $\pi$ and $e$) There are many more well-defined numbers sets6; however, the above is sufficient for the purposes at hand. The astute reader may have noticed that each successive set encompasses the previous set. This is by design. The Venn diagram below represents the relationship between the number sets. 
Another way to express the diagram above is $\N \subset \Z \subset \Q \subset \R$. The next section elaborates on the $\subset$ symbol. When a set in comprised of elements from a larger set, it is said to be a subset of that set. From the opposite perspective, a larger set that contains all items in a smaller set is a superset. This is depicted graphically below. The $\subset$ symbol indicates a subset relationship and a $\supset$ symbol denotes a superset relationship as shown in mathematical notation below. $A=\{1,2,3\}, B=\{1,2,3,4,5\}$ $A \subset B, B \supset A$ Strictly speaking, what’s shown above are proper subsets and supersets. The proper distinction means that the subset is not equal to the superset or vice versa. If the possibility exists that the two sets are equals the symbols $\subseteq$ and $\supseteq$ are used as shown below. $A=\{1,2,3,4,5\}, B=\{1,2,3,4,5\}$ $A \subseteq B, B \supseteq A$ Stated differently, if $\alpha \subseteq \sigma$ then $\sigma$ contains every element that $\alpha$ contains. If $\alpha \subset \sigma$ then $\sigma$ contains every element that $\alpha$ contains and some additional elements. Although it may seem like a pedantic distinction, it’s valuable in practice. There is also convenient notation for indicating the lack of a relationship. A $/$ through a symbol negates the relationship. Each of the set symbols shown above has an equivalent negated symbol: $\not\subseteq$, $\not\subset$, $\not\supseteq$, and $\not\supset$. See the mathematical statements below. $A=\{1,2,3,4,5\}, B=\{6,7,8,9,10\}$ $A \not\subset B, B \not\supset A$ A final important relationship is a set’s complement. The complement of a set is all the items in $\U$ that are not in the set. See the image below. The complement of $A$ in mathematical notation is $A^\prime$. It is also common to see $A^\complement$, $\overline{A}$, or $A^\sim$. As is evident by now, there are many symbols. Don’t be alarmed if they are difficult to remember. They are all summarized below for easy referral. ### Summary Symbol Name Example Negation $\subseteq$ Subset $\{1,2,3\} \subseteq \{1,2,3\}$ $\not\subseteq$ $\subset$ Proper Subset $\{1,2\} \subset \{1,2,3\}$ $\not\subset$ $\supseteq$ Superset $\{1,2,3\} \supseteq \{1,2,3\}$ $\not\supseteq$ $\supset$ Proper Superset $\{1,2,3\} \supset \{1,2\}$ $\not\supset$ $^\prime$ Complement $\U=\{1,2,3\}, A=\{1,2\}, A^\prime=\{3\}$ ## Building a Set The elements belonging to a well-defined set share certain properties as defined by the set. This section describes the notation for describing those properties. The first task is to denote set membership. The symbol $\in$, read as “is a member of”, indicates that an element belongs to a set. For instance, $\sigma \in A$ indicates that the element $\sigma$ is included in the set $A$. Read aloud, it says “$\sigma$ is a member of $A$”. Contrarily, the symbol $\notin$ indicates the opposite. $\sigma \notin A$ is read as “$\sigma$ is not a member of $A$”. See the examples below. $1 \in \{1, 2\}$ $3 \notin \{1, 2\}$ As has already been demonstrated, one way of defining a set is to list all the members inside $\{\}$s (e.g. $A=\{1,2,3\}$). Inconveniently, many sets are so large that listing all the elements isn’t a possibility. One way to overcome this limitation is to use a $\ldots$ symbol to denote a sequence. 
For instance, the notation for a set of whole numbers between $1$ and $100$ is: $\{1,2,3,\ldots,100\}$

Omitting the ending or beginning number indicates that the sequence extends through infinity as illustrated below.

$\N = \{0,1,2,\dots\}$

$\Z = \{\dots,0,1,2,\dots\}$

Strictly speaking, the remaining symbols in this section are predicate logic symbols rather than set notation. However, this is a pedantic distinction. It makes sense to introduce them here because they are often seen together.

Not all sets conform to a sequence. In these cases, the $\vert$ symbol combined with a predicate suffices. $\vert$ is the such that symbol. A predicate is an expression that accepts a single argument and returns a true or false value. Any value that satisfies the predicate (yields a true result) is a member of the set. This is best demonstrated with an example. The statement below reads: the set of all elements from the real numbers set such that $x \lt 0$. Stated differently, the set of all negative real numbers.

$\{x \in \R \vert x \lt 0\}$

It's also possible to express set assertions in mathematical notation. The $\forall$ symbol, read as for all, represents all members in a set. The following statement reads: for all members in the $\N$ set, the member is $\geq 0$. Alternatively, every member in $\N$ is greater than or equal to zero.

$\forall x \in \N, x \geq 0$

Another common symbol is $\exists$, which is known as the existential quantifier. It asserts that at least one item exists that satisfies the specified predicate. The statement below reads, there exists at least one element in $\N$ that is even.

$\exists x \in \N,x \bmod 2=0$

The notation introduced up to this point concerns defining sets and their elements. The following section outlines set operations.

### Summary

| Symbol | Name | Example |
| --- | --- | --- |
| $\in$ | is a member of | $1 \in \{1,2,3\}$ |
| $\notin$ | is not a member of | $4 \notin \{1,2,3\}$ |
| $\ldots$ | sequence | $\N = \{0,1,2,\dots\}$ |
| $\vert$ | such that | $\{x \vert x > 0\} = \{1,2,3,...\}$ |
| $\forall$ | for all | $\forall x \in \{2,4,6,8\}, x \bmod 2=0$ |
| $\exists$ | there exists | $\exists x \in \{1,2,3,4\},x \bmod 2=0$ |

## Set Operations

Similar to numbers, sets support basic arithmetic operations. This section outlines five such operations.

The first is the set union, represented by the $\cup$ symbol. The result of a union operation on two sets is a new set with all the distinct elements from both sets combined. Notice the word distinct; if an item exists in both sets, it's not duplicated. See the example below.

$\{a,b,c,d,e\} \cup \{c,d,e,f,g,h\} = \{a,b,c,d,e,f,g,h\}$

The $\cap$ symbol represents a set intersection. The intersection of two sets is the items that both sets have in common, as shown below.

$\{a,b,c,d,e\} \cap \{c,d,e,f,g,h\} = \{c,d,e\}$

Taking the difference of two sets works much the same way as it does with numbers. The difference $A - B$ is the set of all the items in $A$ that are not in $B$. The set difference symbol ($-$) is even the same as the symbol used for subtracting numbers. See the example below.

$\{a,b,c,d,e\} - \{c,d,e,f,g,h\} = \{a,b\}$

The three operations outlined above are fairly innocuous. However, the multiplication operation can seem a bit strange on first inspection. Multiplying two sets generates the Cartesian product of the sets. The Cartesian product is the set of ordered pairs (2-tuples7) with the first element from the first set and the second from the second set. This is easier to explain using set builder notation as shown below.
$A \times B = \{(a,b) \vert a \in A \; \text{and} \; b \in B\}$ Stated differently, the Cartesian product of $A$ and $B$ is the set of all ordered pairs $(a, b)$ where $a$ is in $A$ and $b$ is in $B$. See the graphical depiction below. $\green{\{e,f,g\}} \times \blue{\{a,b,c\}} = \begin{array}{c|c} \green{ \quad\;\begin{matrix} \{a,\quad & b,\quad & c\} \end{matrix} } \\ \blue{ \begin{matrix} \{e,\\ f,\\ g\} \end{matrix} } \left[\ \begin{matrix} (\blue{e},\green{a}) & (\blue{e},\green{b}) & (\blue{e},\green{c})\\ (\blue{f},\green{a}) & (\blue{f},\green{b}) & (\blue{f},\green{c})\\ (\blue{g},\green{a}) & (\blue{g},\green{b}) & (\blue{g},\green{c}) \end{matrix}\ \right] \end{array}$ The depiction above is meant to provide an illustration of what actually occurs when multiplying two sets. The statement below is the proper notation. $\{e,f,g\}\times\{a,b,c\} = \{(e,a),(e,b),(e,c),(f,a),(f,b),(f,c),(g,a),(g,b),(g,c)\}$ The final operation is flatten or flat map, represented by the $\bigcup$ symbol, and it works on sets of sets. It takes all the sets and flattens them into a single flat set containing the distinct items from all the sets. See the example below. $\bigcup \{ \{a,b,c\}, \{c,d,e\} \} = \{a,b,c,d,e\}$ Notice how, just like the union operation, the result is the distinct items from all the sets. Thus concludes the demonstration of the five set operations. ### Summary #### Quick Review A quick review of a few mathematical properties. • Commutative property: Changing the order of the operands does not change the result: Assuming $\sigma$ is some arbitrary operation, $\sigma$ is commutative if $3 \sigma 4 = 4 \sigma 3$ • Associative property: The order in which the operations are preformed does not matter. Assuming $\sigma$ is some arbitrary operation, $\sigma$ is associative if $(2 \sigma 3) \sigma 4 = 2 \sigma (3 \sigma 4)$ Symbol Name Example Properties Identity $\cup$ Union $\{1,2\} \cup \{2,3\} = \{1,2,3\}$ Commutative, Associative $A \cup \emptyset = A$ $\cap$ Intersection $\{1,2\} \cap \{2,3\} = \{2\}$ Commutative, Associative $A \cap \emptyset = \emptyset$, $A \cap \U = A$ $-$ Difference $\{1,2,4\} — \{2,3\} = \{1,4\}$   $A - \emptyset = \emptyset - A = \emptyset$ $\times$ Cartesian Product $\{1,2\} \times \{2,3\} = \{(1,2),(1,3),(2,2),(2,3)\}$   $A \times \emptyset = \emptyset \times A = \emptyset$ $\bigcup$ Flatten $\bigcup \{ \{1,2\}, \{2,3\} \} = \{1,2,3\}$ N/A $\bigcup{\emptyset} = \emptyset$ ## Exercises 1. Specify if each statement is true of false. a. $-1 \in \N$ b. $\forall x \in \N, x \gt 0$ c. $\exists \sigma \in \Z, \sigma \bmod 2 = 0$ d. $a \notin \{a,b,c\}$ e. $\{0,1,2,\ldots\} = \Q$ f. $\U - A = A^\prime$ 1. false - Negative numbers are not part of the $\N$ set. 2. false - zero is included in $\N$ and it is not greater than itself. 3. true - there is at least even number is the $\Z$ set 4. false - $a$ _is_ in the set 5. false - $\Q$ has far more numbers than all the positive whole number including zero. 6. true - the complement of set $A$ is all the items in $\U$ that are not in $A$ 2. Assuming $A = \{1,2,3,a\}$ and $B = \{1,a,b,c\}$, calculate each set operation below: a. $A \cup B$ b. $A \cap B$ c. $A - B$ d. $B - A$ e. $A \times B$ 1. $\{1,2,3,a,b,c\}$ 2. $\{1,a\}$. 3. $\{2,3\}$. 4. $\{b,c\}$. 5. $\{(1,1),(1,a),(1,b),(1,c),(2,1),(2,a),(2,b),(2,c),(3,1),(3,a),(3,b),(3,c),(a,1),(a,a),(a,b),(a,c)\}$. 3. With regard to Cartesian products, answer the following questions. 1. Regarding $A \times B = \{(a,b) \vert a \in A \; \text{and} \; b \in B\}$. 
Is $(a,b)$ equivalent to $(b,a)$? 2. Is $A \times B$ equivalent to $B \times A$. 1. No, $(a,b)$ is NOT equivalent to $(b,a)$. Recall that the output of a multiplication operation is ordered pairs meaning that the order has meaning. Changing the order of the pair changes it's identity. 2. No, unlike numbers, multiplication of sets is not commutative. $A = \{1,2\} \\ B = \{3,4\} \\ A \times B = \{(1,3),(1,4),(2,3),(2,4)\} \\ B \times A = \{(3,1),(3,2),(4,1),(4,2)\}$ 1. Ironically, much like Russel’s paradox invalidated naive set theory, Kurt Gödel’s incompleteness theorem invalided the system of logic set forth in the Principia Mathmatica. As with almost everything, XKCD has a great comic about it https://xkcd.com/468/ 2. The term cardinality, coined by the father of set theory Georg Cantor, comes from “cardinal numbers”. That is, numbers indicating quantity. This is useful because numbers are an abstract concept that can mean more than one thing. A number could indicate the position of an element in a sequence as well as the number of elements in a set. 3. Some fields of mathematics define $\N$ to include $0$ and others do not. It is also common to encounter the set “Counting Numbers” which comprises all the whole positive numbers excluding $0$ and the set “Whole Numbers” that includes “Counting Numbers” and $0$. Alternatively, $\N_0$ is sometimes used to denote the set “Whole Numbers” and $\N_1$ to represent the set “Counting Numbers”. 4. Z came from the first letter of the German word Zahlen, which means number. 5. $\mathbb{P}$ prime numbers, $\mathbb{I}$ irrational numbers, $\mathbb{C}$ complex numbers, $\mathbb{H}$ quaternions, $\mathbb{O}$ octonions, $\mathbb{S}$ sedenions to name a few 6. A tuple is a finite ordered list of elements. Prefixing a non-negative integer indicates the number of items in the tuple. e.g. 2-tuple is a tuple with two elements.
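All of the operations summarized on this page are built into most languages. As a quick illustration, here is an R sketch using the sets from exercise 2 (base R's union, intersect, setdiff and expand.grid; note that R works with vectors of distinct elements rather than true sets):

A <- c("1", "2", "3", "a"); B <- c("1", "a", "b", "c")
union(A, B)              # {1,2,3,a,b,c}
intersect(A, B)          # {1,a}
setdiff(A, B)            # A - B = {2,3}
setdiff(B, A)            # B - A = {b,c}
nrow(expand.grid(A, B))  # 16 ordered pairs in the Cartesian product A x B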
# constructible

A function $f:\mathbb{N}\rightarrow\mathbb{N}$ is time constructible if there is a deterministic Turing machine $T$ (with alphabet $\{0,1,B\}$) such that when $T$ receives as input a series of $n$ ones, it halts after exactly $f(n)$ steps. Similarly $f$ is space constructible if there is a similar Turing machine which halts after using exactly $f(n)$ cells.

Most 'natural' functions are both time and space constructible, including constant functions, polynomials, and exponentials, for example.
# Diffraction

Students will learn about diffraction of light and the interference patterns that are produced through constructive and destructive interference of the waves. Students will also learn to calculate diffraction pattern spacing or work backwards to calculate the wavelength of light emitted.

### Key Equations

$m \lambda = d \sin{\theta}$ Double slit interference maxima. $m$ is the order of the interference maximum in question, $d$ is the distance between slits, and $\theta$ is the angular position of the maximum.

$m \lambda = d \sin{\theta}$ Single slit interference maxima. $m$ and $\theta$ are defined as above and $d$ is the width of the slit.

$m \lambda = d \sin{\theta}$ Diffraction grating interference maxima. $m$ and $\theta$ are defined as above and $d$ is the distance between the lines on the grating.

$m \lambda = 2nd$ Thin film interference: $n$ is the index of refraction of the film, $d$ is the thickness of the film, and $m$ is an integer. In thin film interference, there is a $\lambda/2$ delay (phase change) if the light is reflected from an object with an index of refraction greater than that of the incident material.

$\frac{1}{f} = \frac{1}{d_0} +\frac{1}{d_i}$

### Guidance

• Waves are characterized by their ability to constructively and destructively interfere. Light waves which interfere with themselves after interaction with a small aperture or target are said to diffract.
• Light creates interference patterns when passing through holes (“slits”) in an obstruction such as paper or the surface of a CD, or when passing through a thin film such as soap.

#### Example 1

A typical experimental setup for an interference experiment will look something like this:

$\text{Recall: for constructive interference we require a path difference of} \;m \lambda = d \sin{\theta}$

$d$ = spacing of slits
$m$ = number of the maximum (i.e. m = 0 is the central maximum, m = 1 is the 1st maximum and thus the first dot to the right and left of the center, etc.)
$L$ = distance from diffraction grating to screen
$\Delta y$ = distance of the mth spot out from the center (if m = 1 then it is the 1st spot)

Definitions: Maximum = place where waves constructively interfere. Minimum = place where waves destructively interfere.

Because the screen distance $L$ is much larger than the slit distance $d$, one can see that $\frac{\Delta y}{L} = \tan \theta \approx \sin \theta$. Thus, the condition for a first maximum becomes

$\lambda = d \sin{\theta} = d \frac{\Delta y}{L}$

One can now easily calculate where the first maximum should appear if given the wavelength of the laser light, the distance to the screen and the distance between slits.

First Maximum: $\Delta y = \frac {L \lambda}{d}$

#### Example 2

White light (which is comprised of all wavelengths and thus all colors) separates into a rainbow pattern as shown below. Each wavelength of light has a unique interference pattern given by the equation above. Thus all the wavelengths (i.e. colors) have a unique $\Delta y$ based on the equation given at the end of Example 1. This is how white light separates out into its individual wavelengths, producing a rainbow after going through a diffraction grating.

### Explore More

1. In your laboratory, light from a $650 \;\mathrm{nm}$ laser shines on two thin slits. The slits are separated by $0.011 \;\mathrm{mm}$. A flat screen is located $1.5 \;\mathrm{m}$ behind the slits.
1. Find the angle made by rays traveling to the third maximum off the optic axis. 2. How far from the center of the screen is the third maximum located? 3. How would your answers change if the experiment was conducted underwater? 2. Again, in your laboratory, $540 \;\mathrm{nm}$ light falls on a pinhole $0.0038 \;\mathrm{mm}$ in diameter. Diffraction maxima are observed on a screen $5.0 \;\mathrm{m}$ away. 1. Calculate the distance from the central maximum to the first interference maximum. 2. Qualitatively explain how your answer to (a) would change if you: 1. move the screen closer to the pinhole 2. increase the wavelength of light 3. reduce the diameter of the pinhole 3. Students are doing an experiment with a Helium-neon laser, which emits $632.5 \;\mathrm{nm}$ light. They use a diffraction grating with $8000$ lines/cm. They place the laser $1 \;\mathrm{m}$ from a screen and the diffraction grating, initially, $95 \;\mathrm{cm}$ from the screen. They observe the first and then the second order diffraction peaks. Afterwards, they move the diffraction grating closer to the screen. 1. Fill in the Table ( below ) with the expected data based on your understanding of physics. Hint: find the general solution through algebra before plugging in any numbers. 2. Plot a graph of the first order distance as a function of the distance between the grating and the screen. 3. How would you need to manipulate this data in order to create a linear plot? 4. In a real experiment what could cause the data to deviate from the expected values? Explain. 5. What safety considerations are important for this experiment? 6. Explain how you could use a diffraction grating to calculate the unknown wavelength of another laser. Distance of diffraction grating to screen $(cm)$ Distance from central maximum to first order peak $(cm)$ $95$ $75$ $55$ $35$ $15$ 4. A crystal of silicon has atoms spaced $54.2 \;\mathrm{nm}$ apart. It is analyzed as if it were a diffraction grating using an $x-$ ray of wavelength $12 \;\mathrm{nm}$ . Calculate the angular separation between the first and second order peaks from the central maximum. 5. Laser light shines on an oil film $(n = 1.43)$ sitting on water. At a point where the film is $96 \;\mathrm{nm}$ thick, a $1^{st}$ order dark fringe is observed. What is the wavelength of the laser? 6. You want to design an experiment in which you use the properties of thin film interference to investigate the variations in thickness of a film of water on glass. 1. List all the necessary lab equipment you will need. 2. Carefully explain the procedure of the experiment and draw a diagram. 3. List the equations you will use and do a sample calculation using realistic numbers. 4. Explain what would be the most significant errors in the experiment and what effect they would have on the data. #### Answers to Selected Problems 1. a. $10.2^\circ$ b. $27\;\mathrm{cm}$ c. $20\;\mathrm{cm}$ 2. a. $0.72\;\mathrm{m}$ 3. $54\;\mathrm{cm}, 44\;\mathrm{cm}, 21\;\mathrm{cm}, 8.8\;\mathrm{cm}$ 4. $13.5^\circ$ 5. $549 \;\mathrm{nm}$ 6. . ### Explore More Sign in to explore more, including practice questions and solutions for Diffraction. Please wait... Please wait...
Constructing a line segment through a point so it has a ratio $1:2$

Question: Let $AOB$ be a given angle less than $180^\circ$ and let $P$ be an interior point of the angular region of $\angle AOB$. Show, with proof, how to construct, using only ruler and compass, a line segment $CD$ passing through $P$ such that $C$ lies on ray $OA$ and $D$ lies on ray $OB$, and $CP : PD=1 : 2$.

My attempt: I thought that constructing a triangle and constructing medians for every side would give me the centroid, which divides the median in the ratio $2:1$. But how can I construct a triangle?

Let $K$ be the point on ray $OP$ beyond $P$ with $OP:PK=1:2$ (so $OK=3\,OP$). The line through $K$ parallel to $OA$ meets ray $OB$ at a point $D$, and the line $DP$ meets ray $OA$ at $C$. Since $KD\parallel OA$, the triangles $POC$ and $PKD$ are similar, so $CP:PD=OP:PK=1:2$.

Let $Q$ be the midpoint of $OP$ (you can easily construct it) and let $M$ be the midpoint of $PQ$. Reflect the line $OA$ across $M$ to get a line $p$, which cuts $OB$ at $E$. Then reflect $E$ across $M$ to get a point $C$ on $OA$; the line $CP$ cuts $OB$ at the required point $D$.
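A quick coordinate check of the first construction, with made-up coordinates ($OA$ along the $x$-axis, $OB$ at $70^\circ$, an arbitrary interior point $P$); an R sketch:

b <- c(cos(70*pi/180), sin(70*pi/180))   # direction of ray OB; OA is the positive x-axis
P <- c(0.8, 0.4)                         # an interior point of the angle
K <- 3*P                                 # OP:PK = 1:2, so OK = 3*OP
D <- (K[2]/b[2])*b                       # line through K parallel to OA meets ray OB
C <- D + (D[2]/(D[2] - P[2]))*(P - D)    # line DP meets OA (the x-axis)
c(CP = sqrt(sum((P - C)^2)), PD = sqrt(sum((D - P)^2)))   # the ratio comes out as 1:2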
# How do you solve 2x^2-x= -5 using the quadratic formula?

Apr 26, 2016

The solutions of the equation are $x = \dfrac{1+\sqrt{-39}}{4}$ and $x = \dfrac{1-\sqrt{-39}}{4}$.

#### Explanation:

$2x^2 - x = -5$

$2x^2 - x + 5 = 0$

The equation is of the form $ax^2+bx+c=0$, where $a = 2$, $b = -1$, $c = 5$.

The discriminant is given by $\Delta = b^2 - 4ac = (-1)^2 - (4 \cdot 2 \cdot 5) = 1 - 40 = -39$.

The solutions are found using the formula $x = \dfrac{-b \pm \sqrt{\Delta}}{2a} = \dfrac{-(-1) \pm \sqrt{-39}}{2 \cdot 2} = \dfrac{1 \pm \sqrt{-39}}{4}$.

Since $\Delta < 0$, the two roots are a complex-conjugate pair: $x = \dfrac{1 \pm i\sqrt{39}}{4}$.
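A one-line numerical check (R's polyroot takes the coefficients in increasing order of degree):

polyroot(c(5, -1, 2))    # 0.25 + 1.5612i and 0.25 - 1.5612i, i.e. (1 ± i*sqrt(39))/4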
# Classification from scratch, linear discrimination 8/8

Eighth post of our series on classification from scratch. The latest one was on the SVM, and today, I want to get back on very old stuff, with here also a linear separation of the space, using Fisher’s linear discriminant analysis.

## Bayes (naive) classifier

Consider the following naive classification rule$$m^\star(\mathbf{x})=\text{argmax}_y\{\mathbb{P}[Y=y\vert\mathbf{X}=\mathbf{x}]\}$$or$$m^\star(\mathbf{x})=\text{argmax}_y\left\{\frac{\mathbb{P}[\mathbf{X}=\mathbf{x}\vert Y=y]\,\mathbb{P}[Y=y]}{\mathbb{P}[\mathbf{X}=\mathbf{x}]}\right\}$$(where $\mathbb{P}[\mathbf{X}=\mathbf{x}]$ is the density in the continuous case). In the case where $y$ takes two values, that will be standard $\{0,1\}$ here, one can rewrite the latter as$$m^\star(\mathbf{x})=\begin{cases}1\text{ if }\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})>\displaystyle{\frac{1}{2}}\\0\text{ otherwise}\end{cases}$$and the set$$\mathcal{D}_S =\left\{\mathbf{x},\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})=\frac{1}{2}\right\}$$is called the decision boundary.

Assume that$$\mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma}_0)$$and$$\mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma}_1)$$then explicit expressions can be derived.$$m^\star(\mathbf{x})=\begin{cases}1\text{ if }r_1^2< r_0^2+2\displaystyle{\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}+\log\frac{\vert\mathbf{\Sigma}_0\vert}{\vert\mathbf{\Sigma}_1\vert}}\\0\text{ otherwise}\end{cases}$$where $r_y^2$ is the Mahalanobis distance, $$r_y^2 = [\mathbf{X}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[\mathbf{X}-\mathbf{\mu}_y]$$ Let $\delta_y$ be defined as$$\delta_y(\mathbf{x})=-\frac{1}{2}\log\vert\mathbf{\Sigma}_y\vert-\frac{1}{2}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]+\log\mathbb{P}(Y=y)$$the decision boundary of this classifier is $$\{\mathbf{x}\text{ such that }\delta_0(\mathbf{x})=\delta_1(\mathbf{x})\}$$which is quadratic in ${\color{blue}{\mathbf{x}}}$. This is the quadratic discriminant analysis. This can be visualized below. The decision boundary is here

But that can’t be the linear discriminant analysis, right? I mean, the frontier is not linear… Actually, in Fisher’s seminal paper, it was assumed that $\mathbf{\Sigma}_0=\mathbf{\Sigma}_1$. In that case, actually, $$\delta_y(\mathbf{x})={\color{blue}{\mathbf{x}}}^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y-\frac{1}{2}\mathbf{\mu}_y^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y+\log\mathbb{P}(Y=y)$$ and the decision frontier is now linear in ${\color{blue}{\mathbf{x}}}$. This is the linear discriminant analysis.
This can be visualized bellow Here the two samples have the same variance matrix and the frontier is ## Link with the logistic regression Assume as previously that$$\mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma})$$and$$\mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma})$$then$$\log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}$$is equal to $$\mathbf{x}^{\text{{T}}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_y]-\frac{1}{2}[\mathbf{\mu}_1-\mathbf{\mu}_0]^{\text{{T}}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]+\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}$$which is linear in $\mathbf{x}$$$\log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}=\mathbf{x}^{\text{{T}}}\mathbf{\beta}$$Hence, when each groups have Gaussian distributions with identical variance matrix, then LDA and the logistic regression lead to the same classification rule. Observe furthermore that the slope is proportional to $\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]$, as stated in Fisher’s article. But to obtain such a relationship, he observe that the ratio of between and within variances (in the two groups) was$$\frac{\text{variance between}}{\text{variance within}}=\frac{[\mathbf{\omega}\mathbf{\mu}_1-\mathbf{\omega}\mathbf{\mu}_0]^2}{\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_1\mathbf{\omega}+\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_0\mathbf{\omega}}$$which is maximal when $\mathbf{\omega}$ is proportional to $\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]$, when $\mathbf{\Sigma}_0=\mathbf{\Sigma}_1$. ## Homebrew linear discriminant analysis To compute vector $\mathbf{\omega}$ m0 = apply(myocarde[myocarde$PRONO=="0",1:7],2,mean) m1 = apply(myocarde[myocarde$PRONO=="1",1:7],2,mean) Sigma = var(myocarde[,1:7]) omega = solve(Sigma)%*%(m1-m0) omega [,1] FRCAR -0.012909708542 INCAR 1.088582058796 INSYS -0.019390084344 PRDIA -0.025817110020 PAPUL 0.020441287970 PVENT -0.038298291091 REPUL -0.001371677757 For the constant – in the equation $\omega^T\mathbf{x}+b=0$ – if we have equiprobable probabilities, use b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2 ## Application (on the small dataset) In order to visualize what’s going on, consider the small dataset, with only two covariates, x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85) y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3) z = c(1,1,1,1,1,0,0,1,0,0) df = data.frame(x1=x,x2=y,y=as.factor(z)) m0 = apply(df[df$y=="0",1:2],2,mean) m1 = apply(df[df$y=="1",1:2],2,mean) Sigma = var(df[,1:2]) omega = solve(Sigma)%*%(m1-m0) omega [,1] x1 -2.640613174 x2 4.858705676 Using R regular function, we get library(MASS) fit_lda = lda(y ~x1+x2 , data=df) fit_lda   Coefficients of linear discriminants: LD1 x1 -2.588389554 x2 4.762614663 which is the same coefficient as the one we got with our own code. 
For the constant, use b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2 If we plot it, we get the red straight line plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")]) abline(a=b/omega[2],b=-omega[1]/omega[2],col="red") As we can see (with the blue points), our red line intersects the middle of the segment of the two barycenters points(m0["x1"],m0["x2"],pch=4) points(m1["x1"],m1["x2"],pch=4) segments(m0["x1"],m0["x2"],m1["x1"],m1["x2"],col="blue") points(.5*m0["x1"]+.5*m1["x1"],.5*m0["x2"]+.5*m1["x2"],col="blue",pch=19) Of course, we can also use R function predlda = function(x,y) predict(fit_lda, data.frame(x1=x,x2=y))$class==1 vv=outer(vu,vu,predlda) contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5) One can also consider the quadratic discriminent analysis since it might be difficult to argue that $\mathbf{\Sigma}_0=\mathbf{\Sigma}_1$ fit_qda = qda(y ~x1+x2 , data=df) The separation curve is here plot(df$x1,df$x2,pch=19, col=c("blue","red")[1+(df$y=="1")]) predqda=function(x,y) predict(fit_qda, data.frame(x1=x,x2=y))$class==1 vv=outer(vu,vu,predlda) contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5) # Classification from scratch, SVM 7/8 Seventh post of our series on classification from scratch. The latest one was on the neural nets, and today, we will discuss SVM, support vector machines. ## A formal introduction Here $y$ takes values in $\{-1,+1\}$. Our model will be $$m(\mathbf{x})=\text{sign}[\mathbf{\omega}^T\mathbf{x}+b]$$ Thus, the space is divided by a (linear) border$$\Delta:\lbrace\mathbf{x}\in\mathbb{R}^p:\mathbf{\omega}^T\mathbf{x}+b=0\rbrace$$ The distance from point $\mathbf{x}_i$ to $\Delta$ is $$d(\mathbf{x}_i,\Delta)=\frac{\mathbf{\omega}^T\mathbf{x}_i+b}{\|\mathbf{\omega}\|}$$If the space is linearly separable, the problem is ill posed (there is an infinite number of solutions). So consider $$\max_{\mathbf{\omega},b}\left\lbrace\min_{i=1,\cdots,n}\left\lbrace\text{distance}(\mathbf{x}_i,\Delta)\right\rbrace\right\rbrace$$ The strategy is to maximize the margin. One can prove that we want to solve $$\max_{\mathbf{\omega},m}\left\lbrace\frac{m}{\|\mathbf{\omega}\|}\right\rbrace$$ subject to $y_i\cdot(\mathbf{\omega}^T\mathbf{x}_i)=m$, $\forall i=1,\cdots,n$. Again, the problem is ill posed (non identifiable), and we can consider $m=1$: $$\max_{\mathbf{\omega}}\left\lbrace\frac{1}{\|\mathbf{\omega}\|}\right\rbrace$$ subject to $y_i\cdot(\mathbf{\omega}^T\mathbf{x}_i)=1$, $\forall i=1,\cdots,n$. The optimization objective can be written$$\min_{\mathbf{\omega}}\left\lbrace\|\mathbf{\omega}\|^2\right\rbrace$$ ## The primal problem In the separable case, consider the following primal problem,$$\min_{\mathbf{w}\in\mathbb{R}^d,b\in\mathbb{R}}\left\lbrace\frac{1}{2}\|\mathbf{\omega}\|^2\right\rbrace$$subject to $y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1$, $\forall i=1,\cdots,n$. In the non-separable case, introduce slack (error) variables $\mathbf{\xi}$ : if $y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1$, there is no error $\xi_i=0$. Let $C$ denote the cost of misclassification. The optimization problem becomes$$\min_{\mathbf{w}\in\mathbb{R}^d,b\in\mathbb{R},{\color{red}{\mathbf{\xi}}}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\|\mathbf{\omega}\|^2 + C\sum_{i=1}^n\xi_i\right\rbrace$$subject to $y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1-{\color{red}{\xi_i}}$, with ${\color{red}{\xi_i}}\geq 0$, $\forall i=1,\cdots,n$. Let us try to code this optimization problem. 
The dataset is here n = length(myocarde[,"PRONO"]) myocarde0 = myocarde myocarde0$PRONO = myocarde$PRONO*2-1 C = .5 and we have to set a value for the cost $C$. In the (linearly) constrained optimization function in R, we need to provide the objective function $f(\mathbf{\theta})$ and the gradient $\nabla f(\mathbf{\theta})$. f = function(param){ w = param[1:7] b = param[8] xi = param[8+1:nrow(myocarde)] .5*sum(w^2) + C*sum(xi)} grad_f = function(param){ w = param[1:7] b = param[8] xi = param[8+1:nrow(myocarde)] c(2*w,0,rep(C,length(xi)))} and (linear) constraints are written as $\mathbf{U}\mathbf{\theta}-\mathbf{c}\geq \mathbf{0}$ U = rbind(cbind(myocarde0[,"PRONO"]*as.matrix(myocarde[,1:7]),diag(n),myocarde0[,"PRONO"]), cbind(matrix(0,n,7),diag(n,n),matrix(0,n,1))) C = c(rep(1,n),rep(0,n)) Then we use constrOptim(theta=p_init, f, grad_f, ui = U,ci = C) Observe that something is missing here: we need a starting point for the algorithm, $\mathbf{\theta}_0$. Unfortunately, I could not think of a simple technique to get a valid starting point (that satisfies those linear constraints). Let us try something else. Because those functions are quite simple: either linear or quadratic. Actually, one can recognize in the separable case, but also in the non-separable case, a classic quadratic program$$\min_{\mathbf{z}\in\mathbb{R}^d}\left\lbrace\frac{1}{2}\mathbf{z}^T\mathbf{D}\mathbf{z}-\mathbf{d}\mathbf{z}\right\rbrace$$subject to $\mathbf{A}\mathbf{z}\geq\mathbf{b}$. library(quadprog) eps = 5e-4 y = myocarde[,&quot;PRONO&quot;]*2-1 X = as.matrix(cbind(1,myocarde[,1:7])) n = length(y) D = diag(n+7+1) diag(D)[8+0:n] = 0 d = matrix(c(rep(0,7),0,rep(C,n)), nrow=n+7+1) A = Ui b = Ci sol = solve.QP(D+eps*diag(n+7+1), d, t(A), b, meq=1, factorized=FALSE) qpsol = sol$solution (omega = qpsol[1:7]) [1] -0.106642005446 -0.002026198103 -0.022513312261 -0.018958578746 -0.023105767847 -0.018958578746 -1.080638988521 (b = qpsol[n+7+1]) [1] 997.6289927 Given an observation $\mathbf{x}$, the prediction is $$y=\text{sign}[\mathbf{\omega}^T\mathbf{x}+b]$$ y_pred = 2*((as.matrix(myocarde0[,1:7])%*%omega+b)&gt;0)-1 Observe that here, we do have a classifier, depending if the point lies on the left or on the right (above or below, etc) the separating line (or hyperplane). We do not have a probability, because there is no probabilistic model here. So far. ## The dual problem The Lagrangian of the separable problem could be written introducing Lagrange multipliers $\mathbf{\alpha}\in\mathbb{R}^n$, $\mathbf{\alpha}\geq \mathbf{0}$ as$$\mathcal{L}(\mathbf{\omega},b,\mathbf{\alpha})=\frac{1}{2}\|\mathbf{\omega}\|^2-\sum_{i=1}^n \alpha_i\big(y_i(\mathbf{\omega}^T\mathbf{x}_i+b)-1\big)$$Somehow, $\alpha_i$ represents the influence of the observation $(y_i,\mathbf{x}_i)$. Consider the Dual Problem, with $\mathbf{G}=[G_{ij}]$ and $G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i$ $$\min_{\mathbf{\alpha}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\mathbf{\alpha}^T\mathbf{G}\mathbf{\alpha}-\mathbf{1}^T\mathbf{\alpha}\right\rbrace$$ subject to $\mathbf{y}^T\mathbf{\alpha}=\mathbf{0}$ and $\mathbf{\alpha}\geq\mathbf{0}$. 
For the non-separable problem, introduce Lagrange multipliers $\mathbf{\alpha},{\color{red}{\mathbf{\beta}}}\in\mathbb{R}^n$, $\mathbf{\alpha},{\color{red}{\mathbf{\beta}}}\geq \mathbf{0}$, and define the Lagrangian $\mathcal{L}(\mathbf{\omega},b,{\color{red}{\mathbf{\xi}}},\mathbf{\alpha},{\color{red}{\mathbf{\beta}}})$ as$$\frac{1}{2}\|\mathbf{\omega}\|^2+{\color{blue}{C}}\sum_{i=1}^n{\color{red}{\xi_i}}-\sum_{i=1}^n \alpha_i\big(y_i(\mathbf{\omega}^T\mathbf{x}_i+b)-1+{\color{red}{\xi_i}}\big)-\sum_{i=1}^n{\color{red}{\beta_i}}{\color{red}{\xi_i}}$$ Somehow, $\alpha_i$ represents the influence of the observation $(y_i,\mathbf{x}_i)$. The Dual Problem becomes, with $\mathbf{G}=[G_{ij}]$ and $G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i$,$$\min_{\mathbf{\alpha}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\mathbf{\alpha}^T\mathbf{G}\mathbf{\alpha}-\mathbf{1}^T\mathbf{\alpha}\right\rbrace$$ subject to $\mathbf{y}^T\mathbf{\alpha}=\mathbf{0}$, $\mathbf{\alpha}\geq\mathbf{0}$ and $\mathbf{\alpha}\leq {\color{blue}{C}}$. As previously, one can also use quadratic programming

library(quadprog)
eps = 5e-4
y = myocarde[,"PRONO"]*2-1
X = as.matrix(cbind(1,myocarde[,1:7]))
n = length(y)
Q = sapply(1:n, function(i) y[i]*t(X)[,i])
D = t(Q)%*%Q
d = matrix(1, nrow=n)
A = rbind(y,diag(n),-diag(n))
C = .5
b = c(0,rep(0,n),rep(-C,n))
sol = solve.QP(D+eps*diag(n), d, t(A), b, meq=1, factorized=FALSE)
qpsol = sol$solution

The two problems are connected in the sense that for all $\mathbf{x}$,$$\mathbf{\omega}^T\mathbf{x}+b = \sum_{i=1}^n \alpha_i y_i (\mathbf{x}^T\mathbf{x}_i)+b$$ To recover the solution of the primal problem,$$\mathbf{\omega}=\sum_{i=1}^n \alpha_iy_i \mathbf{x}_i$$thus

omega = apply(qpsol*y*X,2,sum)
omega
                         1                       FRCAR                       INCAR                       INSYS
0.0000000000000002439074265  0.0550138658687635215271960 -0.0920163239049630876653652  0.3609571899422952534486342
                     PRDIA                       PAPUL                       PVENT                       REPUL
-0.1094017965288692356695677 -0.0485213403643276475207813 -0.0660058643191372279579454  0.0010093656567606212794835

while $b=y-\mathbf{\omega}^T\mathbf{x}$ (but actually, one can add the constant vector in the matrix of explanatory variables). More generally, consider the following function (to make sure that $D$ is a positive-definite matrix, we use the nearPD function from the Matrix package).

svm.fit = function(X, y, C=NULL) {
n.samples = nrow(X)
n.features = ncol(X)
K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
for (i in 1:n.samples){
for (j in 1:n.samples){
K[i,j] = X[i,] %*% X[j,] }}
Dmat = outer(y,y) * K
Dmat = as.matrix(nearPD(Dmat)$mat)
dvec = rep(1, n.samples)
Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
a = res$solution
bomega = apply(a*y*X,2,sum)
return(bomega)
}

On our dataset, we obtain

M = as.matrix(myocarde[,1:7])
center = function(z) (z-mean(z))/sd(z)
for(j in 1:7) M[,j] = center(M[,j])
bomega = svm.fit(cbind(1,M),myocarde$PRONO*2-1,C=.5)
y_pred = 2*((cbind(1,M)%*%bomega)>0)-1
table(obs=myocarde0$PRONO,pred=y_pred)
    pred
obs  -1  1
  -1 27  2
  1   9 33

i.e. 11 misclassifications, out of 71 points (which is also what we got with the logistic regression).

## Kernel Based Approach

In some cases, it might be difficult to separate the two sets of points with a linear separator, like below. It might be difficult, here, because we want to find a straight line in the two-dimensional space $(x_1,x_2)$. But maybe we can distort the space, possibly by adding another dimension. That's heuristically the idea.
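To make that heuristic a bit more concrete, here is a small sketch on simulated (hypothetical) data: two "rings" that no straight line can separate in $(x_1,x_2)$, but that become linearly separable once the coordinate $x_1^2+x_2^2$ is added,

set.seed(1)
r = c(runif(50, 0, .4), runif(50, .6, 1))   # radii of an inner and an outer ring
theta = runif(100, 0, 2*pi)
x1 = r*cos(theta)
x2 = r*sin(theta)
y = rep(c(-1,+1), each=50)
x3 = x1^2 + x2^2                            # the added dimension
table(y, x3 > .25)                          # the plane x3 = .25 separates the two groups

In the three-dimensional space $(x_1,x_2,x_3)$, a horizontal plane now separates the two groups perfectly.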
Because in the case above, in dimension 3, the set of points is now linearly separable. And the trick to do so is to use a kernel. The difficult task is to find the good one (if any).

A positive kernel on $\mathcal{X}$ is a symmetric function $K:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ such that for any $n$, $\forall\alpha_1,\cdots,\alpha_n$ and $\forall\mathbf{x}_1,\cdots,\mathbf{x}_n$,$$\sum_{i=1}^n\sum_{j=1}^n\alpha_i\alpha_j k(\mathbf{x}_i,\mathbf{x}_j)\geq 0.$$ For example, the linear kernel is $k(\mathbf{x}_i,\mathbf{x}_j)=\mathbf{x}_i^T\mathbf{x}_j$. That's what we've been using here, so far. One can also define the product kernel $k(\mathbf{x}_i,\mathbf{x}_j)=\kappa(\mathbf{x}_i)\cdot\kappa(\mathbf{x}_j)$ where $\kappa$ is some function $\mathcal{X}\rightarrow\mathbb{R}$. Finally, the Gaussian kernel is $k(\mathbf{x}_i,\mathbf{x}_j)=\exp[-\|\mathbf{x}_i-\mathbf{x}_j\|^2]$. Since it is a function of $\|\mathbf{x}_i-\mathbf{x}_j\|$, it is also called a radial kernel.

linear.kernel = function(x1, x2) {
return (x1%*%x2)
}
svm.fit = function(X, y, FUN=linear.kernel, C=NULL) {
n.samples = nrow(X)
n.features = ncol(X)
K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
for (i in 1:n.samples){
for (j in 1:n.samples){
K[i,j] = FUN(X[i,], X[j,])
}
}
Dmat = outer(y,y) * K
Dmat = as.matrix(nearPD(Dmat)$mat)
dvec = rep(1, n.samples)
Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
a = res$solution
bomega = apply(a*y*X,2,sum)
return(bomega)
}

To relate this duality optimization problem to OLS, recall that $y=\mathbf{x}^T\mathbf{\omega}+\varepsilon$, so that $\widehat{y}=\mathbf{x}^T\widehat{\mathbf{\omega}}$, where $\widehat{\mathbf{\omega}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}$. But one can also write $$\widehat{y}=\mathbf{x}^T\widehat{\mathbf{\omega}}=\sum_{i=1}^n \widehat{\alpha}_i\cdot \mathbf{x}^T\mathbf{x}_i$$ where $\widehat{\mathbf{\alpha}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\widehat{\mathbf{\omega}}$, or conversely, $\widehat{\mathbf{\omega}}=\mathbf{X}^T\widehat{\mathbf{\alpha}}$.

## Application (on our small dataset)

One can actually use a dedicated R package to run a SVM. To get the linear kernel, use

library(kernlab)
df0 = df
df0$y = 2*(df$y=="1")-1
SVM1 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , type="C-svc")

Since the dataset is not linearly separable, there will be some mistakes here

table(df0$y,predict(SVM1))
     -1  1
  -1  2  2
  1   1  5

The problem with that function is that it cannot be used to get a prediction for other points than those in the sample (and I could neither extract $\omega$ nor $b$ from the 24 slots of that object). But it's possible by adding a small option in the function

SVM2 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")

With that function, we convert the distance into some sort of probability. Someday, I will try to replicate the probabilistic version of SVM, I promise, but today, the goal is just to understand what is done when running the SVM algorithm.
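Going back for a moment to the hand-made svm.fit function above, the Gaussian (radial) kernel could be passed as its FUN argument; here is a minimal sketch (the bandwidth parameter gamma is my own addition, it does not appear in the text above),

radial.kernel = function(x1, x2, gamma = 1) {
return(exp(-gamma * sum((x1-x2)^2)))   # exp(-gamma ||x1-x2||^2)
}

Note, however, that with a nonlinear kernel the vector $\mathbf{\omega}$ no longer lives in the original space, so predictions should be based on the expansion $\sum_i \alpha_i y_i k(\mathbf{x},\mathbf{x}_i)+b$, as in the connection between the primal and dual problems above, rather than on an explicit $\mathbf{\omega}$; this is presumably why a dedicated package is used for the radial kernel below.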
To visualize the prediction, use

pred_SVM2 = function(x,y){
return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")

Here the cost is $C$=.5, but of course, we can change it

SVM2 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")
pred_SVM2 = function(x,y){
return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")

As expected, we have a linear separator. But slightly different. Now, let us consider the "Radial Basis Gaussian kernel"

SVM3 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "rbfdot" , prob.model=TRUE, type="C-svc")

Observe that here, we've been able to separate the white and the black points

table(df0$y,predict(SVM3))
     -1  1
  -1  4  0
  1   0  6

pred_SVM3 = function(x,y){
return(predict(SVM3,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM3(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")

Now, to be completely honest, even if I understand the theory of the algorithm used to compute $\omega$ and $b$ with a linear kernel (using quadratic programming), I do not feel comfortable with this R function. Especially if you run it several times… you can get quite different pictures (with exactly the same set of parameters) (to be continued…)

# Traveling Salesman

In the second part of the course on graphs and networks, we will focus on economic applications, and flows. The first series of slides is on the traveling salesman problem. Slides are available online.

# Simple and heuristic optimization

This week, at the Rmetrics conference, there has been an interesting discussion about heuristic optimization. The starting point was simple: in complex optimization problems (here we mean with a lot of local maxima, for instance), we do not necessarily need extremely advanced algorithms that converge extremely fast, if we cannot ensure that they reach the optimum. Converging extremely fast, with a great numerical precision, to some point (that is not the point we're looking for) is useless. And some algorithms might be much slower, but at least they are much more likely to converge to the optimum, wherever we start from. We have experienced that with Mathieu, while we were looking for the maximum likelihood of our MINAR process: genetic algorithms have performed extremely well. The idea is extremely simple, and natural. Let us consider as a starting point the following algorithm,

1. Start from some $x_0$
2. At step $k$, draw a point $\tilde{x}$ in a neighborhood of $x_{k-1}$,
• either $f(\tilde{x})>f(x_{k-1})$, then $x_k=\tilde{x}$
• or else $x_k=x_{k-1}$

This is simple (if you do not enter into details about what such a neighborhood should be). But using that kind of algorithm, you might get trapped and attracted to some local optima if the neighborhood is not large enough. An alternative to this technique is the following: it might be interesting to change a bit more, and instead of changing when we have a maximum, we change if we have almost a maximum. Namely at step $k$,

• either $f(\tilde{x})+\varepsilon>f(x_{k-1})$, then $x_k=\tilde{x}$
• or else $x_k=x_{k-1}$

for some $\varepsilon>0$ (a generic sketch of this procedure is given below, before the illustration).
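Here is a generic version of that "accept if almost better" search, written as a small function; this is my own sketch (the names, the default tolerance eps, the neighborhood size sd and the bounds are my choices, not the author's),

heuristic_max = function(f, x0, eps=.5, sd=3, n=500, lower=-15, upper=15){
X = x0
for(s in 2:n){
cand = X + rnorm(length(X), sd=sd)        # draw a point in a neighborhood
cand = pmin(pmax(cand, lower), upper)     # stay on the bounded support
if(f(cand) + eps > f(X)) X = cand         # accept if (almost) better
}
return(X)
}

With eps = 0 we recover the first version of the algorithm; here f is assumed to take a single vector argument.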
To illustrate the idea, consider the following function

> f=function(x,y) { r <- sqrt(x^2+y^2);
+ 1.1^(x+y)*10 * sin(r)/r }

(on some bounded support). Here, by picking the noise and the starting values arbitrarily, we have obtained the following scenarios

> x0=15
> MX=matrix(NA,501,2)
> MX[1,]=runif(2,-x0,x0)
> k=.5
> for(s in 2:501){
+ bruit=rnorm(2)
+ X=MX[s-1,]+bruit*3
+ if(X[1]>x0){X[1]=x0}
+ if(X[1]<(-x0)){X[1]=-x0}
+ if(X[2]>x0){X[2]=x0}
+ if(X[2]<(-x0)){X[2]=-x0}
+ if(f(X[1],X[2])+k>f(MX[s-1,1],
+ MX[s-1,2])){MX[s,]=X}
+ if(f(X[1],X[2])+k<=f(MX[s-1,1],
+ MX[s-1,2])){MX[s,]=MX[s-1,]}
+ }

It does not always converge towards the optimum, and sometimes we just miss it after being extremely unlucky. Note that if we run 10,000 scenarios (with different random noises and starting points), in 50% of the scenarios we reach the maximum, or at least we are next to it, on top.

What if we compare with a standard optimization routine, like Nelder-Mead, or quasi-gradient? Since we look for the maximum on a restricted domain, we can use the following function (where X0 is the random starting point, as above),

> g=function(x) f(x[1],x[2])
> optim(X0, g,method="L-BFGS-B",
+ lower=-c(x0,x0),upper=c(x0,x0))$par

In that case, if we run the algorithm with 10,000 random starting points, this is where we end, below on the right (while the heuristic technique is on the left). In only 15% of the scenarios have we been able to reach the region where the maximum is. So here, it looks like a heuristic method works extremely well, if we do not need to reach the maximum with great precision. Which is usually the case actually.

# EM and mixture estimation

Following my previous post on optimization and mixtures (here), Nicolas told me that my idea was probably not the most clever one (there). So, we get back to our simple mixture model, with density $$f(x)=p\,f_1(x)+(1-p)\,f_2(x)$$ a mixture of two Gaussian distributions. In order to describe how the EM algorithm works, assume first that both $f_1$ and $f_2$ are perfectly known, and the mixture parameter $p$ is the only one we care about.

• The simple model, with only one parameter that is unknown

Here, the likelihood is $$\mathcal{L}(p)=\prod_{i=1}^n\big[p\,f_1(x_i)+(1-p)\,f_2(x_i)\big]$$ so that we write the log likelihood as $$\log\mathcal{L}(p)=\sum_{i=1}^n\log\big[p\,f_1(x_i)+(1-p)\,f_2(x_i)\big]$$ which might not be simple to maximize. Recall that the mixture model can be interpreted through a latent variable $Z_i$ (that cannot be observed), taking value 1 when $X_i$ is drawn from $f_1$, and 0 if it is drawn from $f_2$. More generally (especially in the case we want to extend our model to 3, 4, … mixtures), $\mathbb{P}(Z_i=1)=p$ and $\mathbb{P}(Z_i=0)=1-p$. With that notation, the likelihood becomes $$\mathcal{L}(p)=\prod_{i=1}^n\big[p\,f_1(x_i)\big]^{Z_i}\big[(1-p)\,f_2(x_i)\big]^{1-Z_i}$$ and the log likelihood $$\sum_{i=1}^n\big[Z_i\log p+(1-Z_i)\log(1-p)\big]+\sum_{i=1}^n\big[Z_i\log f_1(x_i)+(1-Z_i)\log f_2(x_i)\big]$$ where the term on the right is useless since we only care about $p$, here. From here, consider the following iterative procedure. Assume that the mixture probability $p$ is known, denoted $p^{(k)}$. Then I can predict the value of $Z_i$ (i.e. $\mathbb{P}(Z_i=1\mid X_i)$) for all observations, $$Z_i^{(k)}=\frac{p^{(k)}f_1(x_i)}{p^{(k)}f_1(x_i)+(1-p^{(k)})f_2(x_i)}$$ So I can inject those values into my log likelihood, i.e. in $$\sum_{i=1}^n\big[Z_i^{(k)}\log p+(1-Z_i^{(k)})\log(1-p)\big]$$ having maximum (no need to run numerical tools here) $$p^{(k+1)}=\frac{1}{n}\sum_{i=1}^n Z_i^{(k)}$$ And I can iterate from here. Formally, the first step is where we calculate an expected (E) value, where $Z_i^{(k)}$ is the best predictor of $Z_i$ given my observations (as well as my belief in $p$). Then comes a maximization (M) step, where using those predictions, I can estimate the probability $p^{(k+1)}$.

• A more general framework, all parameters are now unknown

So far, it was simple, since we assumed that $f_1$ and $f_2$ were perfectly known. Which is not realistic. And there is not much to change to get a complete algorithm, to estimate all the parameters (the mixture probability, and the means and variances of the two components). Recall that we had $Z_i^{(k)}$, which was the expected value of $Z_{1,i}$, i.e. it is the probability that observation $i$ has been drawn from $f_1$.
If $Z_i^{(k)}$, instead of being in the segment $[0,1]$, was in $\{0,1\}$, then we could have considered the means and standard deviations of the observations such that $Z_i^{(k)}=0$, and similarly on the subset of observations such that $Z_i^{(k)}=1$. But we can't. So what can be done is to consider $Z_i^{(k)}$ as the weight we should give to observation $i$ when estimating the parameters of $f_1$, and similarly, $1-Z_i^{(k)}$ would be the weight given to observation $i$ when estimating the parameters of $f_2$. So we set, as before, $$Z_i^{(k)}=\frac{p^{(k)}\varphi(x_i;\mu_1^{(k)},\sigma_1^{(k)})}{p^{(k)}\varphi(x_i;\mu_1^{(k)},\sigma_1^{(k)})+(1-p^{(k)})\varphi(x_i;\mu_2^{(k)},\sigma_2^{(k)})}$$ and then $$\mu_1^{(k+1)}=\frac{\sum_i Z_i^{(k)}x_i}{\sum_i Z_i^{(k)}} \qquad\text{and}\qquad \mu_2^{(k+1)}=\frac{\sum_i (1-Z_i^{(k)})x_i}{\sum_i (1-Z_i^{(k)})}$$ and for the variance, well, it is a weighted mean again, $$\sigma_1^{2(k+1)}=\frac{\sum_i Z_i^{(k)}(x_i-\mu_1^{(k+1)})^2}{\sum_i Z_i^{(k)}} \qquad\text{and}\qquad \sigma_2^{2(k+1)}=\frac{\sum_i (1-Z_i^{(k)})(x_i-\mu_2^{(k+1)})^2}{\sum_i (1-Z_i^{(k)})}$$ and this is it.

• Let us run the code on the same data as before

Here, the code is rather simple: let us start by generating a sample

> n = 200
> X1 = rnorm(n,0,1)
> X20 = rnorm(n,0,1)
> Z  = sample(c(1,2,2),size=n,replace=TRUE)
> X2=4+X20
> X = c(X1[Z==1],X2[Z==2])

then, given a vector of initial values (the mixture probability, the two means, and then the two variances, in the order used by the function below),

>  s = c(0.5, mean(X)-1, mean(X)+1, var(X), var(X))

I define my function as,

>  em = function(X0,s) {
+  Ep = s[1]*dnorm(X0, s[2], sqrt(s[4]))/(s[1]*dnorm(X0, s[2], sqrt(s[4])) +
+  (1-s[1])*dnorm(X0, s[3], sqrt(s[5])))
+  s[1] = mean(Ep)
+  s[2] = sum(Ep*X0) / sum(Ep)
+  s[3] = sum((1-Ep)*X0) / sum(1-Ep)
+  s[4] = sum(Ep*(X0-s[2])^2) / sum(Ep)
+  s[5] = sum((1-Ep)*(X0-s[3])^2) / sum(1-Ep)
+  return(s)
+  }

Then I get an updated vector of parameter estimates. So this is it! We just need to iterate (here I stop after 200 iterations), since we can see that, actually, our algorithm converges quite fast,

> for(i in 2:200){
+ s=em(X,s)
+ }

Let us run the same procedure as before, i.e. I generate samples of size 200, where the difference between the means can be small (0) or large (4). Ok, Nicolas, you were right, we're doing much better! Maybe we should also go for a Gibbs sampling procedure?… next time, maybe….

# Optimization and mixture estimation

Recently, one of my students asked me about optimization routines in R. He told me that R performed well on the estimation of a time series model with different regimes, while he had trouble with a (simple) GARCH process, and he was wondering if R was good at optimization routines. Actually, I always thought that mixtures (and regimes) were something difficult to estimate, so I was a bit surprised…

Indeed, it reminded me of some trouble I experienced once, while I was talking about maximum likelihood estimation for non-standard distributions, i.e. when optimization has to be done on the log likelihood function. And even when generating nice samples, and giving appropriate initial values (actually the true values used in the random generation), each time I tried to optimize my log likelihood, it failed. So I decided to play a little bit with standard optimization functions, to see which one performed better when trying to estimate the mixture parameters (from a mixture-based sample). Here, I generate a mixture of two Gaussian distributions, and I would like to see how different the means should be to have a high probability of estimating the parameters of the mixture properly. The density is here proportional to $$p\,\varphi(x;\mu_1,\sigma_1)+(1-p)\,\varphi(x;\mu_2,\sigma_2)$$ The true model has $\mu_1=0$, $\sigma_1=\sigma_2=1$, a mixture probability of $1/3$ for the first component (by construction of the sample used below), and $\mu_2=m$ being a parameter that will change, from 0 to 4.
The log likelihood (actually, I add a minus sign since most of the optimization functions actually minimize functions) is

> logvraineg <- function(param, obs) {
+ p <- param[1]
+ m1 <- param[2]
+ sd1 <- param[3]
+ m2 <- param[4]
+  sd2 <- param[5]
+  -sum(log(p * dnorm(x = obs, mean = m1, sd = sd1) + (1 - p) *
+ dnorm(x = obs, mean = m2, sd = sd2)))
+  }

The code to generate my samples is the following,

> n = 200
> X1 = rnorm(n,0,1)
> X20 = rnorm(n,0,1)
> Z  = sample(c(1,2,2),size=n,replace=TRUE)
> X2=m+X20
> X = c(X1[Z==1],X2[Z==2])

Then I use two functions to optimize my log likelihood, with identical initial values,

> O1=nlm(f = logvraineg, p = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)), obs = X)
> logvrainegX <- function(param) {logvraineg(param,X)}
> O2=optim( par = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)),
+   fn = logvrainegX)

Actually, since I might have identification problems, I take either $\widehat{p}$ or $1-\widehat{p}$, depending on whether $\widehat{\mu}_1$ or $\widehat{\mu}_2$ is the smallest parameter.

On the graph above, the x-axis is the difference between the means of the mixture (as on the animated graph above). Then, the red point is the median of the estimated parameter (here $\widehat{p}$), and I have included something that can be interpreted as a confidence interval, i.e. where I have been in 90% of my scenarios: the black vertical segments. Obviously, when the sample is not heterogeneous enough (i.e. $\mu_1$ and $\mu_2$ are not different enough), I cannot estimate my parameters properly; I might even have a probability that exceeds 1 (I did not add any constraint). The blue plain horizontal line is the true value of the parameter, while the blue dotted horizontal line is the initial value of the parameter in the optimization algorithm (I started assuming that the mixture probability was around 0.2).

The graph below is based on the second optimization routine (with identical starting values, and of course on the same generated samples). (Just to be honest, in many cases, it did not converge, so the loop stopped, and I had to run it again… so finally, my study is based on a bit less than 500 samples (times 15, since I considered several values for the mean of my second underlying distribution), with 200 generated observations from a mixture.) The graph below compares the two (empty circles are the first algorithm, while plain circles are the second one). On average, it is not so bad… but the probability of being far away from the true value is not small at all… except when the difference between the two means exceeds 3…

If I change the starting values for the optimization algorithm (previously, I assumed that the mixture probability was 1/5, here I start from 1/2), we have the following graph, which looks like the previous one, except for small differences between the two underlying distributions (just as if initial values had no impact on the optimization, but it might come from the fact that the surface is nice, and we are not trapped in regions of local minima). Thus, I am far from being an expert in optimization routines in R (see here for further information), but so far, it looks like R is not doing so bad… and the two algorithms perform similarly (maybe with the first one being a bit closer to the true parameter).
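Since the remark above mentions estimated probabilities that exceed 1 (no constraint was added), one way to address this would be box constraints; here is a minimal sketch, reusing logvrainegX and X from above (the bounds and the object name O3 are my own choices),

> O3=optim( par = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)),
+   fn = logvrainegX, method="L-BFGS-B",
+   lower = c(0, -Inf, 1e-6, -Inf, 1e-6),
+   upper = c(1, Inf, Inf, Inf, Inf))

This keeps the estimated mixture probability in $[0,1]$ and the two standard deviations positive, without changing the likelihood itself.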
Dear nihao, how did you solve it? I have the same problem.

I'm encountering the same problem. I would be interested in the solution, too. Thanks, Ed

Sadly, after posting this, I found a typo in my keystone database after having to resort to the python debugger. You have to ensure that your os_parameters (or OS_ env) are exactly right for it to work. Here is the minimum set of OS_* environment variables I needed for glance to work:

OS_TENANT_NAME
OS_USERNAME
OS_PASSWORD
OS_AUTH_URL

One thing I found a little frustrating is that the glance client does not accept the same parameters as the keystone client in this regard. For example, I needed to use names instead of the ids (e.g. tenant name vs. tenant id). Another thing that is a little confusing is that if you define os_ name AND id, it's not clear to the user which one is used, and this could cause the user to get a bad catalog list or invalid authentication. To verify that you have the right environment or settings, use "keystone catalog" (or "keystone catalog --service image") with the same environment. A successful call should result in the list of service endpoints. As a note to the developers, it would be nice if the command-line tools would produce more information, such as the settings used to authenticate, particularly if debugging is enabled.

Hi DigitalWonk, the inconsistencies with the parameters are addressed in the next-generation client ( http://github.com/openstack/python-gl... ). The old glance client will be deprecated in the Grizzly release cycle. Best, -jay

Have you tried creating the glance endpoint in the keystone catalog? e.g.:

$ GLANCE_HOST=<your_glance_host>
$ keystone-manage endpointTemplates add RegionOne glance http://$GLANCE_HOST:9292/v1 http://$GLANCE_HOST:9292/v1 http://$GLANCE_HOST:9292/v1 1 1

I created the glance endpoint by using the command "keystone --token ADMIN --endpoint http://192.168.32.123:35357/v2.0 user-create --tenant_id eebbdcb86e644c7297b9d443702987f5 --name glance --pass nihao --enabled true", in case I did something wrong. I used the openstack-install-guide-trunk[1] to install it.

Thanks everyone, I have solved it. The reason is that I made some mistake when using keystone!
## Algebra 1

The equation y = $\frac{1}{4}\times(2)^{x}$ is an exponential function. It describes exponential growth because the base, 2, is greater than 1 (the coefficient $\frac{1}{4}$ only scales the curve). To graph it, you start by plotting the y-intercept, which is (0,$\frac{1}{4}$), because f(0) = $\frac{1}{4}\times(2)^{0}$ = $\frac{1}{4}\times$1 = $\frac{1}{4}$. Then you make a table of values for the equation and plot them.
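As a quick illustration, here is a small sketch (in R, my own addition) of the table of values and the plot described above,

x = -2:4
y = (1/4) * 2^x
cbind(x, y)              # table of values; the row for x = 0 gives the y-intercept 1/4
plot(x, y, type = "b")   # the points trace the exponential growth curve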
## Vol 3, No 2 (2011) A problem for degenerated semilinear hyperbolic system in sector PDF (Українська) R. V. Andrusyak, V. M. Kyrylych, O. V. Peliushkevych 4-12 Rings whose non-zero derivations have finite kernels PDF O. D. Artemovych 13-17 The analytical properties of solutions of the three-body problem PDF A. V. Belyaev 18-35 On operations on some classes of discontinuous maps PDF B. M. Bokalo, N. M. Kolos 36-48 The matrix diophantine equations $AX + BY = C$ PDF (Українська) N. S. Dzhaliuk, V. M. Petrychkovych 49-56 Applications of parafunctions of the triangle $(0; 1)$-matrices to finding the number of graphs connectivity PDF (Українська) T. S. Dovbniak 57-63 Free subsemigroups in automorphism group of a polynomial ring of two variables over number fields PDF (Українська) Zh. I. Dovghey, M. I. Sumaryuk 64-70 The decomposable and the ambiguous sets PDF (Українська) O. Karlova 71-76 On solutions of differential-functional equations of neutral type PDF (Українська) R. I. Kachurivsky 77-82 The analogs of monotonous methods of Newton PDF (Українська) M. I. Kopach, A. F. Obshta, B. A. Shuvar 83-87 An inhomogeneous diffusion process model on a half-line with general boundary condition of Feller-Wentzel PDF (Українська) B. I. Kopytko, R. Shevchuk 88-99 Approximate solution of linear differential equations of neutral type by aggregative-iterative method PDF (Українська) L. P. Kostyshyn, Ya. J. Bigun 100-107 The faithful triangular representations of Kaloujnine$p$-groups over a $p$-element field PDF (Українська) Y. G. Leonov 108-113 Sufficient conditions for the existence of points of symmetrically quasi-continuity and of symmetrically cliquishness of functions of two variables PDF (Українська) V. V. Nesterenko 114-119 Multipoint problem with for factorized parabolic operator with variable coefficients PDF (Українська) I. R. Tymkiv 120-130 On the closure of the extended bicyclic semigroup PDF I. R. Fihel, O. V. Gutik 131-157
MadeEasy Test Series 2019: Algorithms - Time complexity

Can't we write the recurrence relation for bar() as T(n) = 5T(n-1) + c? That is, can't we combine both recursive calls, since both have the same parameter? And if not, then how do we solve such a recurrence?

• Can't see "foo".
• Sorry, I was asking about bar().
• You can write it as bar(n) = bar(n-1) + bar(n-1) = 2 bar(n-1).
• @Markzuck It is 3 multiplied by what bar(n-1) returns, and not 3 bar(n-1) function calls.
• Ohh, got it. So when counting we treat the calls separately? 5*T(n-1) would mean 5 times the return value, but in the complexity equation we just write the number of times the function is called, which is 2, not 5T(n-1), right?
• Yes. What matters is how many times the function is getting called. It is getting called twice in your example.
• I don't understand how it gets T(n) = O(2^n). I tried solving it and I am getting the equation T(n) = 2^(n-1) * 3^(n) + (2^(n-2) - 1), so the leading term would be O(2^n * 3^n). Please let me know where I went wrong.
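To answer the last comment above, here is a short expansion, assuming the time recurrence $T(n)=2T(n-1)+c$ with $T(0)$ constant (the constant multipliers in front of the recursive calls only affect the returned value, not the number of calls):
$$T(n)=2T(n-1)+c=4T(n-2)+2c+c=\cdots=2^n\,T(0)+c\,(2^{n-1}+\cdots+2+1)=2^n\,T(0)+c\,(2^n-1)=O(2^n)$$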
# Help me

Let f(n) be the base-10 logarithm of the sum of the elements of the $$n$$th row in Pascal's triangle. Express $$\frac{f(n)}{\log_{10} 2}$$ in terms of n.

Mar 22, 2020

#1 Hey there guest! Glad to see you posting something about logarithms! (I need to brush up on them, lol). Let's get to the problem. It's first given that f(n) is the base-10 logarithm of the sum of the elements in the nth row of Pascal's triangle. A key thing that you should realize is that the sum of the elements in the nth row of Pascal's triangle can be represented as: $${n \choose 0} + {n \choose 1} + {n \choose 2}.... {n \choose n}$$ Conveniently, this also equals $$2^n$$. You can try this out yourself with the first few rows of Pascal's triangle! The function f(n) then becomes: $$f(n) = \log_{10}2^{n}$$ The expression we are asked to simplify then turns into: $$\log_{10}(2^n)/ \log _{10} 2$$ Here, we have a neat log property we can use: $$\log_{a} b / \log_{a} c = \log_{c} b$$ We then convert $$\log_{10}(2^n)/ \log _{10} 2 = \log_{2}2^n$$ With the basic definition of a log, we then have $$\log_{2}2^n = n$$ So the expression in terms of n is just n.

Quick edit / note: Remember that the "first row" of Pascal's triangle, with just 1 term, is actually counted as the "0th" row.

Mar 22, 2020, edited by jfan17, Mar 22, 2020
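As a quick sanity check (my own sketch, not part of the original answer), one can verify numerically in R that $\frac{f(n)}{\log_{10}2}$ returns $n$:

n = 0:6
f = log10(sapply(n, function(k) sum(choose(k, 0:k))))   # log10 of the sum of the n-th row
f / log10(2)                                            # returns 0, 1, 2, ..., 6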
# Tool to saturate the dangling bonds at the surface of nanostructures When it comes to studying 2D, 1D or 0D nanomaterials, saturating the dangling bonds in the surface helps remove unwanted localized surface states from the band structure 1. However, so far I have not found an ideal way to conduct this saturation process. My current procedure includes using the Avogadro software to add H to the surface (of Si and Ge nanostructures). However this package treats any structure as a molecule, and adds H to the periodic directions as well (and I later delete the extra atoms and edit the structure manually). Are there any other free alternatives which can be used? • Possible duplicate here: mattermodeling.stackexchange.com/questions/4203/… Oct 16 at 6:21 • Sorry. I didn't find that earlier. Thanks – PBH Oct 16 at 8:20 • @PBH Do you feel that the other question is sufficient, or should we leave this one open too? If we should leave this one open, why? Oct 16 at 17:18 • I have given an answer, I feel this is close to a duplicate but some of these tools likely do not work in 3D, 2D, 1D and 0D. The generalized approach given should work for any system. If we want to combine the two questions, I can move my answer to the possible duplicate. Oct 16 at 22:27 • @NikeDattani the answer I got is significantly better than the options available in the other question. – PBH Oct 17 at 1:39
# Is there a closed form for the sum of the cubes of the binomial coefficients? We know that $$\sum_{k=0}^n \binom{n}{k} = 2^n\;\; \text{ and }\;\; \sum_{k=0}^n \binom{n}{k}^2 = \binom{2n}{n}$$ hold for all $$n\in \mathbb{N}_0$$. Now I tried to find a similar expression for $$\sum_{k=0}^n \binom{n}{k}^3$$ but didn't get anywhere at all. What I found were only asymptotic estimates (see Sum of cubes of binomial coefficients or Asymptotics of $$\sum_{k=0}^{n} {\binom n k}^a$$). Now is there a closed form for this sum or, what would be even better, for $$\sum_{k=0}^n \binom{n}{k}^\alpha$$ with any $$\alpha \in \mathbb{N}_0$$? These numbers are called the Franel Numbers. It's proven in (Petkovšek, M., Wilf, H. and Zeilberger, D. (1996). A=B. Wellesley, MA: A K Peters. p. 160) that there is no closed form for these numbers, in terms of the sum of a fixed number of hypergeometric terms. However, as @Robert_Israel points out, the expression could possibly be represented by different types of closed form. • ... if "closed form" is defined as "the sum of a fixed number of hypergeometric terms". There could be other types of "closed form". – Robert Israel Nov 28 '18 at 18:24 • @Robert Was just thinking that. Thanks for the suggestion. – Jam Nov 28 '18 at 18:26 The binomial coefficient for a given pair of $$n \geq k \geq 0$$ integers can be expressed in terms of a Pochhammer symbol as the following. $$\binom n k = \frac{(-1)^k(-n)_k} {k!}.$$ The expression is valid even if $$n$$ is an arbitrary real number. Here we note two things. 1. The Pochhammer symbol $$(-n)_k$$ is zero, if $$n \geq 0$$ and $$k > -n$$. 2. The factorial $$k!$$ can be written as $$(1)_k$$. Using these observations, we can express your sums in terms of a generalized hypergeometric function $$_pF_q$$ as the following. For the sum of the binomial coefficients, we have $$\sum_{k=0}^n \binom n k = \sum_{k=0}^n \frac{(-1)^k(-n)_k}{k!} = \sum_{k=0}^\infty (-n)_k{\frac{(-1)^k}{k!}} = {_1F_0}\left({{-n}\atop{-}}\middle|\,-1\right).$$ For the sum the square of the binomial coefficients, we have $$\sum_{k=0}^n {\binom n k}^2 = \sum_{k=0}^n \left(\frac{(-1)^k(-n)_k}{k!}\right)^2 = \sum_{k=0}^\infty \frac{\left((-n)_k\right)^2}{k!} \cdot \frac{1}{k!} = {_2F_1}\left({{-n, -n}\atop{1}}\middle|\,1\right).$$ And for the sum of the cube of the binomial coefficients $$-$$ also known as Franel numbers $$-$$, we have $$\sum_{k=0}^n {\binom n k}^3 = \sum_{k=0}^n \left(\frac{(-1)^k(-n)_k}{k!}\right)^3 = \sum_{k=0}^\infty \frac{\left((-n)_k\right)^3}{(k!)^2} \cdot \frac{(-1)^k}{k!} = {_3F_2}\left({{-n, -n, -n}\atop{1, 1}}\middle|\,-1\right).$$ In general, for a positive integer $$r$$, we have the binomial sum \begin{align*} \sum_{k=0}^n {\binom n k}^r &= \sum_{k=0}^n \left(\frac{(-1)^k(-n)_k}{k!}\right)^r = \sum_{k=0}^\infty \frac{\left((-n)_k\right)^r}{(k!)^{r-1}} \cdot \frac{(-1)^{rk}}{k!} \\ &= {_rF_{r-1}}\left({{-n, -n, \dots, -n}\atop{1, \dots, 1}}\middle|\,(-1)^r\right). \end{align*}
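For a quick numerical check (a small sketch, not part of the answer above), the first few values of $\sum_{k=0}^n \binom{n}{k}^3$, the Franel numbers (OEIS A000172), can be computed in R:

franel = sapply(0:6, function(n) sum(choose(n, 0:n)^3))
franel   # 1, 2, 10, 56, 346, 2252, 15184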
# Transforming parabola to straight line

1. turin

To the moderator: This isn't a HW question, but it probably sounds like one, so I apologize. Please move this to the HW forum if need be.

I have an integration domain inside three intersecting curves. Two of the curves are straight lines and the third is a parabola. These three boundaries are of the form $$y=Ax \qquad y=Bx \qquad y=(1+x)^2$$ where A and B are arbitrary constant slopes > 4. I want to transform the boundary of this domain into a triangle, as simply as possible. Any hints?

2. Office_Shredder (Staff Emeritus)

You don't just want to turn the parabola into a straight line, you want to keep the other straight lines as straight lines as well.

3. turin

Correct. I already know how to straighten the parabola, e.g. u=(1+x)^2; that's trivial. BTW, I don't care about the Jacobian; my #1 priority is straight boundaries for the domain of integration.
The Assassination of JFK - John Fitzgerald Kennedy, 35th President of the United States Over fifty years after the assassination of President Kennedy the Media are still trying to convince the public that it was the ‘lone gunmen’ Lee Harvey Oswald that killed Kennedy.  Stephen King’s new series entitled 11.22.63, starring James Franco is about a man that goes back in time to stop Oswald from shooting the President.  In the plot Franco’s character stops Oswald from shooting the President so Kennedy then escapes the assassin’s bullet. But the truth is if he did really stop Oswald from pulling the trigger then Kennedy would have still been killed because Oswald did not kill Kennedy. There was three real marksmen out their waiting for Kennedy and Oswald was just to be used as a scapegoat.  Perhaps Stephen King should go back to writing fiction because that’s what he is good at and this new series is just that, fiction.  Again it just shows how controlled the Mass Media is because over 55 years after this event they still can’t or won’t reveal the truth.  They still try to make us believe that it was Oswald, how pathetic they are. If it was not Oswald then who did it?  It was the secret powers that run the USA, Europe and Britain who were behind it, namely the Cabal or the Committee of 300 etc..  Financial adviser to the Rothschilds, Walter Rathenau, disclosed the reality of this 300 "Only 300 men, each of whom knows all others govern the fate of Europe”. Walter Rathenau was assassinated in 1922 for revealing this secret.  Kennedy would not play ball with the ‘300’ anymore and he carried out policies against their wishes. We will look at Why they murdered Kennedy and Who was behind it. John Fitzgerald Kennedy was the 35th President of the United States from January 1961 until his assassination in November 22nd 1963 in Dallas, Texas. There is a wealth of evidence that the Israel government and its intelligence agency Mossad were one of the the main culprits in the assassination of JFK as well as the CIA and the FBI who are all under the control of the group known as the Committee of 300 or the Illuminati. Let us go through some of this evidence. President John F. Kennedy was against secret organizations such as the Freemasons and he was well aware of the higher degrees of the Freemasons, the Illuminati who controlled most of the world secretly from behind the scenes.  JFK was the only President of America since Abraham Lincoln who was not a puppet of the Illuminati.  JFK would not follow their agenda, he wouldn’t invade Cuba, he did not want the Vietnam War and he wanted to remove the Federal Reserve and the CIA. Below is part of JFK’s speech about Secret Societies from April 1961. “The very word "secrecy" is repugnant in a free and open society; and we are as a people inherently and historically opposed to secret societies, to secret oaths and secret proceedings.  "For we are opposed around the world by a monolithic and ruthless conspiracy that relies on covert means for expanding its sphere of influence--on infiltration instead of invasion, on subversion instead of elections, on intimidation instead of free choice, on guerrillas by night instead of armies by day. It is a system which has conscripted vast human and material resources into the building of a tightly knit, highly efficient machine that combines military, diplomatic, intelligence, economic, scientific and political operations”.  Its preparations are concealed, not published. Its mistakes are buried not headlined. 
Its dissenters are silenced, not praised. No expenditure is questioned, no rumor is printed, and no secret is revealed”. On June 4, 1963 Kennedy signed Executive Order 11110. This Executive Order gave the authority to the United States Treasury to print money with complete autonomy from the Federal Reserve. The Treasury became authorized to issue Silver Certificate currency backed by the United States’ silver reserves.  Kennedy authorized $4.3 billion dollars of silver backed currency to be printed. If this money was allowed into circulation it would have been the end of the Federal Reserve and the Cabal or Committee of 300 would not allow this to happen. Other brave men in the past had tried to stop the private banksters from running the government’s money supply. The Primary Owners of the Federal Reserve Bank Are: 1. Rothschild's of London and Berlin 2. Lazard Brothers of Paris 3. Israel Moses Seaf of Italy 4. Kuhn, Loeb & Co. of Germany and New York 5. Warburg & Company of Hamburg, Germany 6. Lehman Brothers of New York 7. Goldman, Sachs of New York 8. Rockefeller Brothers of New York Seven of those International bankers are Jewish. 4 out of 5 of American Presidents who opposed a privately-held central bank while in office were assassinated: Abraham Lincoln, James Garfield, William McKinley, and John F. Kennedy. These were the only U.S. Presidential assassinations in history. They were all murdered by the Cabal, Committee of 300, Illuminati etc. Abraham Lincoln opposed a private national bank. Lincoln created his own money system, Greenbacks (opposed to the private bankers) to run the United States while he was in office. Lincoln was assassinated and Congress revoked the Greenback Law and enacted, in its place, the National Banking Act supporting privately owned national banks. "The money powers prey upon the nation in times of peace and conspire against it in times of adversity. It is more despotic than a monarchy, more insolent than autocracy, and more selfish than beaurocracy. It denounces as public enemies all who question its methods or throw light upon its crimes. I have two great enemies, the Southern Army in front of me and the bankers in the rear. Of the two, the one at my rear is my greatest foe." - Abraham Lincoln President James Garfield (1881) was opposed to Private Bankers from running the USA money supply Garfield said “He who controls the money supply of a nation controls the nation. Whoever controls the volume of money in any country is absolute master of all industry and commerce”. He was assassinated four months into his term of President by a lone gunman, Charles J. Guiteau. The William McKinley administration passed the Gold Standard Act of 1900 making gold the standard for the entire nation’s currency. McKinley was assassinated on September 1901 by lone gunman Leon Czolgosz. The fifth, Andrew Jackson (1825-37) stopped Rothschild’s Second Bank of the United States from gaining monopoly control over the nation’s financial system. Andrew Jackson told his vice president Martin Van Buren: “The bank, Mr. Van Buren, is trying to kill me, but I will kill it, you are a den of vipers and thieves. I intend to rout you out, and by the eternal God I will rout you out."” There was an assassination attempt on Andrew Jackson, Jan. 30, 1835. The assailant Richard Lawrence fired two pistols at point blank range but the gun misfired twice and Davy Crockett wrestled the man to the ground. 
Jackson was convinced that his political enemies in the rival Whig Party had hired Lawrence to assassinate him over his ultimately successful effort to scuttle the Bank of the United States. Andrew Jackson completely paid off the national debt, which was the only time in American history this was done. Of course we are supposed to believe that a lone crazed person assassinated these Presidents but that is for people who believe in fairy stories. Kennedy ordered the CIA to be brought to heel and transferred most of its powers to the Joint Chiefs of Staff through National Security Action memorandums 55, 56, and 57. He had fired Alan Dulles (CIA Head) over the Bay of Pigs fiasco. Dulles was then given a position on the Warren Commission (which investigated the death of JFK) and that makes no sense at all. Another reason that Kennedy had to be removed was that he wanted U.S involvement in Vietnam ended. Kennedy gave orders that U.S. involvement in South Vietnam should be ended and as soon as it was practicable that U.S. troops were to begin their pull out and be returned to the U.S. On October 2nd he instructed Defense Secretary McNamara to publicly announce his order, which was addressed to the Secretary of State, Secretary of Defense and the Joint Chiefs of Staff. All American troops were to be withdrawn from Vietnam by October 1965. Of course another reason that most people miss is that Kennedy wanted Israel to stop the development of a nuclear bomb. On May 18, 1963, the President warned Israel Prime Minister David Ben Gurion that unless American inspectors were allowed into Dimona, Israel would find itself totally isolated and would not receive any aid from the U.S. Rothschild was the controller of the Israeli state (and the Federal Reserve) and to control the Middle East he needed Israel to have a nuclear bomb. Ben Gurion stepped down in 1963 because of the pressure from Kennedy appointing a new Prime Minister, Levi Eshkol. Kennedy took the occasion to write a polite diplomatic letter to Eshkol, and asked for date for another inspection of of the Dimona facility “within six months”, to be followed with regular inspections every six months. And he added the following ultimatum, “As I wrote Mr. Ben-Gurion, this Government’s commitment to and support of Israel could be seriously jeopardized if it should be thought that we were unable to obtain reliable information on a subject as vital to the peace as the question of Israel’s effort in the nuclear field.” There were no nuclear proliferation treaties in those days, so Kennedy was informing the Israeli government that US direct aid and more important trade agreements were now on the table. Below is part of the letter Kennedy sent to the Prime Minister of Israel on July 5th 1963: “Dear Mr. Prime Minister (Levi Eshkol of Israel): It gives me great personal pleasure to extend congratulations as you assume your responsibilities as Prime Minister of Israel. You have our friendship and best wishes in your new tasks. It is on one of these that I am writing you at this time. You are aware, I am sure, of the exchange which I had with Prime Minister Ben-Gurion concerning American visits [i.e.: inspections] to Israel’s nuclear facility at Dimona. Most recently, the Prime Minister wrote to me on May 27th. His words reflected a most intense personal consideration of a problem that I know is not easy for your Government, as it is not for mine. 
We welcomed the former Prime Minister’s strong reaffirmation that Dimona will be devoted exclusively to peaceful purposes and the reaffirmation also of Israel’s willingness to permit periodic visits [inspections] to Dimona. Therefore, I asked our scientists to review the alternative schedules of visits we and you had proposed. If Israel’s purposes are to be clear beyond reasonable doubt, I believe that the schedule which would best serve our common purposes would be a visit early this summer, another visit in June 1964, and thereafter at intervals of six months”. - President of the United States https://history.state.gov/historicaldocuments/frus1961-63v18/d87 https://history.state.gov/historicaldocuments/frus1961-63v18/pg_197 John F. Kennedy Administration: Letter to Israeli PM Ben-Gurion Regarding Visit to Dimona https://www.jewishvirtuallibrary.org/kennedy-letter-to-ben-gurion-regarding-visit-to-dimona Of course these visits to the Dimona plant never came about because as you know Kennedy was assassinated in November 1963 and the man who took his place Lyndon Johnson reversed all of Kennedy’s policies. Lyndon B. Johnson looked the other way and let Israel get its nuclear capability. Kennedys$4 billion dollars of U.S treasury notes were removed from circulation and the Vietnam War was escalated. In Kennedy's last fiscal budget year of 1964, Israeli aid was $40 million. In LBJ's first budget of 1965, it soared to$71 million, and in 1966 more than tripled from two years earlier to $130 million! Plus, during Kennedy's administration, almost none of U.S aid to Israel was military in nature. Instead, it was split equally between development loans and food assistance under the PL480 Program. Yet in 1965 under the Johnson administration, 20% of U.S aid to Israel was for the military, while in 1966, 71% was used for war-related materials. These figures show that Kennedy was not keen on making Israel the dominate force in the Middle East and that is one of the main reasons he had to be removed. Freed Israeli nuclear spy Mordechai Vanunu said in an interview published in 2004 that Israel was behind the 1963 assassination of U.S. President John F. Kennedy. Vanunu, a former nuclear technician who was released from Israeli prison after serving an 18-year sentence for exposing Israel’s nuclear program at Dimona to Britain’s Sunday Times, has been barred from leaving the country, talking to the media or meeting with foreigners. This claim about Israel’s involvement in the Kennedy assassination was also made by the late Colonel Gaddafi of Libya. If anyone doubts that the Rothschild\Jewish Zionist Establishment could have done this then you need to read Michael Collins Piper’s book Final Judgment, linking Israel to the Kennedy assassination. Final Judgment documents how Israel's leaders, the Mossad, the Jewish Meyer Lansky-run organized crime syndicate, and a pro-Zionist faction of the CIA colluded to assassinate President, John F. Kennedy. The general pattern of the JFK covert operation, to include the skilful use of "limited hang-outs," "patsies," and "false flags," has very likely been repeated in various later forms such as in the assassination of Bobby Kennedy, the murder of Martin Luther King, the mysterious death of former CIA Director William Colby, the very suspicious Oklahoma City bombing, and the Mossad-linked "controlled demolition" of World Trade Center towers on September 11, 2001. 
More recently, we have seen how the "High Priests of War" have flexed raw Israeli-lobby power by pushing American interventions in Afghanistan and Iraq and by promoting saber-rattling at Iran and Syria. Below the video 'The Men who killed Kennedy at Dimona' JFK was killed through an organization called Permindex (Permanent Industrial Expositions) which was a MI6\Mossad front company. It was a front company for espionage and assassinations. https://larouchepub.com/eiw/public/1981/eirv08n15-19810414/eirv08n15-19810414_032-the_permindex_connection.pdf The Rothschild Illuminati entrusted MI6 Operative William Stephenson with organising Kennedy’s execution. Stephenson brought in Major Mortimer Louis Bloomfield. Bloomfield was one of his agents and both were members of Special Operations Executive (SOE) of MI6. Stephenson had been part of Prime Minister Winston Churchill's circle, and a protégé of Lord Beaverbrook. Both were Canadians and Bloomfield was a Jewish Zionist and good friends with Prime Minister of Israel, Ben Gurion. Bloomfield was a major fund-raiser for Israel and known asset for Israeli intelligence. There are in the Bloomfield papers many documents showing that Louis M. Bloomfield was acting as attorney for the Rothschild family, and particularly for Baron Edmund de Rothschild. Bloomfield joined the British military and served in Palestine as an Intelligence Officer and was involved in training the Jewish army, Haganah (1936-1939). Permindex was formed out of the World Commerce Corp and it worked closely with World Trade Mart of New Orleans. The founder and chairman of the World Trade Mart was Col. Clay Shaw, who had first joined up with the British in World War II when he was an OSS liaison officer. Shaw was based in London and became a friend of Prime Minister, Winston Churchill, whose personal advisor was Sir William Stephenson. Also serving with the OSS in London at this time was James Jesus Angleton, the CIA’s head of counter-intelligence and the Israel desk when Kennedy was killed. Lord Victor Rothschild, another close friend of Churchill, the man behind Israel’s nuclear weapons project and one of the key people behind the creation of Israel. Rothschild knew Shaw, Bloomfield, Stephenson, and Angleton, who were all part of the team which conspired to kill Kennedy. Rothschild’s connections with Mossad and Israel were fundamental. He was at the heart of the Jewish terror and intelligence groups which brought Israel into existence Louis Bloomfield and Clay Shaw came together on the board of Permindex and worked together on its cover operation, the setting up of trade exhibitions around the world. On November 22nd 1963, President Kennedy was on his way to speak at the newly created Dallas Trade Mart when he was assassinated. JFK’s trip to Dallas was sponsored by a powerful business group known as the Dallas Citizens Council, dominated by two Jews Julius Schepps and Sam Bloom. It was that appointment at the Dallas Trade Mart which led his motorcade to pass through Dealey Plaza where the fatal shots were fired. A coincidence? I don’t think so, somehow. According to former British Intelligence Officer Colonel John Hughes-Wilson, it was Sam Bloom (Jewish) who suggested to the Police “that they move the alleged assassin [Oswald] from the Dallas police station to the Dallas County Jail in order to give the newsmen a good story and pictures.” Oswald was shot by Jack Ruby (Jewish) during this transfer. 
Hughes-Wilson adds that, “when the police later searched Ruby’s home, they found a slip of paper with Bloom’s name, address and telephone number on it”. Permindex was connected to the assassination attempt on French President Charles De Gaulle in 1962. A French colonel, Bastien Thiery, commanded the 1962 group of professional assassins who made the actual assassination attempt on DE Gaulle. Colonel Thiery set his group of assassins up at an intersection in the suburbs of Paris in this final attempt in to kill DE Gaulle. The gunmen fired more than one hundred rounds in the assassination attempt. But General DE Gaulle, traveling in his bullet proof car, evaded being hit, although all of the tires were shot out. President Charles de Gaulle forced the Swiss and Italian governments to expel Permindex after it was caught orchestrating this failed attempt to kill him and in fact Permindex operations in France were shut down after an investigation by SDEC (French intelligence) revealed that Permindex as an MI6\Mossad front. General DE Gaulle’s intelligence had traced the financing of his attempted assassination into the FBI's Division Five Permindex in Switzerland and Centro Mondiale Commerciale in Rome. The overall command of the DE Gaulle assassination unit was directed by Division Five of the FBI (Louis Bloomfield). Why would the Jewish controlled Permindex want to assassinate De Gaulle. Well De Gaulle was critical of Israel’s attempt to create a nuclear bomb and he also wanted to give Algeria its independence something Israel was against. Permindex and its subsidiary Centro Mondiale Commerciale in Rome was also behind the assassination of Enrico Mattei, leader of Italy's post-war industrial reconstruction. Mattei was deliberately killed by a bomb which blew up his plane on Oct. 27, 1962. Mattei was already a legendary figure, known throughout the world for his fight against the London-centred oil marketing cartel and for his anti-colonial policy. Mattei was trying to break up the ‘seven sisters’ (Exxon, Mobil etc.) that had a monopoly on world oil production which is owned by the Rothschild’s and Rockefellers. Some of the board members of Permindex at the time of the JFK assassination were: Bruce Medaris, Clay Shaw, Edgar Bronfman (Jewish), Ernest Israel Japheth (Jewish), Hans Seligman (Jewish), Louis Dreyfuss, Mortimer Louis Bloomfield (Jewish), Roy Cohn (Jewish), Viscount Samuel (Jewish), Tibor Rosenbaum(Jewish) and Max Fisher (Jewish). Don’t forget Clay Shaw was the one arrested by District Attorney Jim Garrison which was made famous by the movie JFK starring Kevin Costner. Shaw came under suspicion when he provided defence advice for Lee Harvey Oswald when Oswald was arrested. When Clay Shaw was arrested who provided the funds for his defence against Garrison? It was the powerful Stern family, leaders of the New Orleans Jewish community--were primary shareholders in the Apollo, Pennsylvania-based NUMEC nuclear facility from which American nuclear material was illicitly channelled to Israel with the collaboration of CIA chief of counterintelligence, James Angleton, a devoted ally of Israel as head of the CIA's Mossad desk. Shaw was tried on charges of conspiracy to assassinate JFK and acquitted. Yet, but for a legal technicality, Shaw would have been found guilty and his conviction would have led the investigation of the Kennedy murder directly to the door of Major Louis Mortimer Bloomfield, Permindex board member. 
So Shaw's Rothschild--and Mossad--connection via Permindex was solidified by the Stern link to Israel's nuclear weapons program we now know JFK was so determined to stop. The former Mossad agent, Victor Ostrovsky, has confirmed all this in his books, By Way Of Deception and The Other Side of Deception, which massively expose the extent of Mossad’s world-wide operations. At least eight of the board members of Permindex were Jews. All told, a variety of evidence indicates Permindex fronted for a Mossad operation funding Israel's drive to assemble the atomic bomb. Bloomfield, board member of Permindex and Rothschild stooge was good friends with David Ben Gurion (Israeli Prime Minister). Below is a photo of them together. Louis Bloomfield is on the front\left, Gurion on the right. The man who bankrolled Oliver Stone's JFK film, Israeli arms dealer Arnon Milchan--listed as "executive producer"--was a primary player in Israel's nuclear program. Head of a global empire in weapons, chemicals, electronics, aerospace and plastics--operating in the Rothschild sphere has been described as "Mr. Israel," and is a close friend and business partner of Rothschild-sponsored media baron Rupert Murdoch. The reason he bankrolled Stone’s movie was to make sure that there was no Mossad or Israeli link in the movie to the assassination of JFK. William Stephenson MI6 owned a Jamaican property at Montego Bay and named it the “Tryall Club.” The elite British club became a watering hole for Bloomfield, and others implicated in the JFK conspiracy. Stephenson was also colleagues with former MI6 Operative Ian Fleming and it was no coincidence that Ian Fleming‘s very first “James Bond” movie Dr No was filmed there. It was at this Tryall Club that the plan to Kennedy’s assassination was carried out. Mortimer Louis Bloomfield then chose the best seven marksmen from his intelligence agencies and sent them to Pueblo, Mexico in the summer of 1963 for training and rifle practice with German-made Mauser sniper rifles at a Christian mission run by Reverend Carl McIntyre. Louis M. Bloomfield, arranged for his close friend Reverend Carl McIntyre to found the American Council of Christian Churches. The ACCC was to conceal an extensive espionage and intelligence unit to be deployed throughout the United States, Canada, Mexico and Latin America. The spies and saboteurs were to operate under the cover of Christian missionaries. As part of the ACCC espionage net, Hoover, Stephenson, and Bloomfield created a secret assassination unit in 1943 under the direction of ACCC Minister Albert Osborne and it was ran from the missionary in Pueblo, Mexico. The group of twenty-five to thirty professional executioners have been based in Mexico and have been used by espionage agencies of the U.S. and various countries all over the world for political killings. It was Osborne and a team of seven expert riflemen from the Puebla unit who carried out the assassination of John F. Kennedy in Dallas on November 22, 1963. New Orleans District Attorney Jim Garrison documented that on October 10, 1963, Osborne had visited New Orleans, making three stops in town. First he visited the offices of Clay Shaw at the International Trade Mart building. Later that same day he visited the offices of FBI Division Five courier Jerry Brooks Gatlin (who transported money to France for the De Gaulle assassination attempt). Osborne’s final stop in New Orleans was at the office of FBI Division Five southern chief Guy Bannister, at 544 Camp Street. 
This is the address that Lee Oswald distributed his Pro-Castro leaflets from. The reason that New Orleans assumed a special role is that it headquartered the major U.S. subsidiary of Permindex, the International Trade Mart, directed by Colonel Clay Shaw. The rifleman with their Mauser 7.65 caliber sniper rifles arrived in Dallas in October to become familiar with the Dealey Plaza setup. The Mauser rifles have an accuracy of 1200 yards. At least three of the seven would do the shooting and the others were there as backup and to get rid of people who knew too much such as Oswald, Ruby, Ferrie, Bannister etc. When President Kennedy’s Limo passed the ‘grassy knoll’ area two mysterious men were photographed on the sidewalk. They became known as the ‘Umbrella Man’ and the Dark Complected Man (DCM). There are several key theories as to the purpose of the Umbrella Man. Most believe he was part of the conspiracy and was there to signal to the shooters that Kennedy was definitely in the car and which car he was in. Other people believe that the umbrella was actually used to shoot a dart (flechette) at the President to disable or paralyse him. Kennedy had been in the navy so he knows how to duck if he hears shots fired. The reason to paralyse Kennedy was so that he would not be ducking down to avoid the fatal head shot. Dark Complected Man (DCM), like Umbrella Man, was on the Grassy Knoll, and, like Umbrella Man, appears to reasonable observers to have been signalling. At the precise moment that JFK’s car passed, as Umbrella Man opened and pumped his umbrella repeatedly, Dark Complected Man shot his fist up into the air. To some, DCM seemed to be calling for a halt to the presidential limo, which did in fact either come to a complete halt or slowed down to a crawl. So it seems this man put his arm in the air to signal to Kennedy’s driver that he needs to slow down so that the fatal shot can be fired. The basic training of all drivers in intelligence agencies and security firms all over the world is: If you hear shots, your right foot hits the floor, and you get the hell out of the area as fast as the car will move. You do not slow down, the Limo slowed down when DCM raised his arm and you can see this on several videos on Youtube. There is also several videos on Youtube which shows the Connolly’s (sitting in the front of JFK’s Limo) are thrown forward by the driver hitting the brakes. It’s not just their actions at the moment that Kennedy’s head is blown apart. It’s how Umbrella Man and DCM behave afterwards. Instead of reacting with horror and springing into action, these two purported strangers sit down together, on the curb, and calmly just watch the chaos. While almost everyone in Dealey Plaza was reacting to the assassination by either falling to the ground or moving towards Grassy Knoll, both men sat down on the sidewalk of Elm Street. In this situation, several photographs indicate that the dark-complected man talked into a radio. An antenna or an antenna-like device can be seen jutting out from behind the man's head and his hands holding an object to his face. Just moments later, they both got up and walked away. Was he passing the message on that JFK had suffered a fatal headshot? Above DCM man and the Umbrella Man calmly sat down. There was no security agents on JFK’s car blocking line of sight and no motorcycle cops flanking JFK limousine blocking line of sight, all part of the plan. 
JFK Witnesses Deaths Many of the key witnesses in the plot to assassinate JFK were murdered or died mysteriously. Lee Harvey Oswald was shot by Jack Ruby while in Police custody. Jack Ruby was able to get into the Police building because he worked for the CIA and he used his CIA I.D to get in. Although Oswald was known to be a poor marksman with a rifle (several months earlier April 6, 1963, to be exact, he had taken a shot at right-wing Major General Edwin Walker and missed) yet the Warren Commission credited him with the miraculous feat of inflicting eight different wounds on President Kennedy and Governor John Connally within six seconds from a distance of more than 200 feet and at an angle that would have taxed the professional skills of even the world’s most expert marksman. The Warren Commission claims that only three shots were fired and all came from Oswald’s Mannlicher-Carcano rifle. Two of these shots were fired within 1.2 seconds (according to the Commission as evidenced by the Zapruder film). Officials of the Mannlicher Steyr factory in Italy found this most flattering, but stated that the time lapse between shots from their rifle had never been less than 2.3 seconds. Clay Shaw, age 60, died five years after he was charged by Jim Garrison for his involvement in the Kennedy assassination. Shaw was reportedly found dead in his home. Then he was given a quick embalming before a Coroner could be notified. It was then impossible to determine the cause of death. Jack Ruby (Rubinstein was his real name, he was Jewish) was in prison after he shot Lee Harvey Oswald. He died of cancer. He was taken into the hospital with Pneumonia. Twenty eight days later, he was dead from cancer. Mary Gray McCoy visited her friend, Ruby, in jail about a week after Oswald was shot. She continued to visit him until a few days before his mysterious death. The last visit was right before his death. At that time, McCoy says Ruby appeared to be in good health. Ruby's official cause of death was a pulmonary embolism caused by lung cancer. That's something McCoy doesn't believe for a second. "He didn't have cancer," said McCoy. "They poisoned him. Somebody did." It is believed that Ruby was injected with a fast spreading cancer while in the police jail cell. Of course they couldn’t walk into a police cell and shoot him so this injection was used instead. Mossad agent Bernie Weissman was the one who arranged for Mossad Sayanim asset, Jacob Leon Rubenstein (Jack Ruby), to assassinate Oswald. Weissman was seen talking to Ruby at Ruby’s Carousel Club in Dallas a week before the assassination. On May 15, 1975, Roger Dean Craig died of a massive gunshot wound to the chest. Craig was a witness to the slaughter of President Kennedy. Only Craig's story was different from the one the police told. Craig testified in the Jim Garrison trial. Before this, Craig had lost his job with the Dallas Police Dept. In 1961, he had been "Man of the Year." Because he would not change his story of the assassination, he was harassed and threatened, stabbed, shot at, and his wife left him. He stated the rifle that they found in the book depository building was a Mauser 7.65 caliber and not a Mannlicher-Carcano rifle that Oswald was supposed to have fired his shots from. Rose Cheramie was a close friend of Jack Ruby. On November 17th she was travelling to Miami when she was involved in a car accident. Cheramie was taken by ambulance to the East Louisiana Hospital at Jackson. 
On November 19 as doctors were bringing her out of a coma, she told them that President Kennedy was to be killed on November 22. The next day she told another doctor and two nurses that Kennedy will be assassinated. The doctor’s did not believe her and gave her a sedative. Rose Cheramie returned to Texas after her recovery, only to be killed in a hit-and-run “accident” in the Dallas area. The driver of the other car was never found. Another one that turned up dead was David Ferrie of New Orleans. Before he could be brought by Jim Garrison to trial for his involvement in the Kennedy assassination, he died of a brain haemorrhage. Just what caused his brain haemorrhage has not been established. Ferrie was to testify in the famous Jim Garrison trial, but death prevented him. David Ferrie was a pilot and it is said that he transported different people involved with the JFK plot around to different locations when needed. Guy Bannister was working for Division Five of the FBI who was closely involved in the Jim Garrison trial. Guy and his partner, Hugh Ward, died within a 10 day period. Guy supposedly died of a heart attack, but witnesses said he had a bullet hole in his body. Brooks Gatlin was part of FBI-Division Five and worked under Bannister and it was he who transported the money to Paris for the De-Gaulle assassins for Permindex. He was thrown out of a sixth-floor window in a San Juan Puerto Rico hotel in 1966. Dorothy Kilgallen was another reporter who died strangely and suddenly after her involvement in the Kennedy assassination. Miss Kilgallen is the only journalist who was granted a private interview with Jack Ruby after he killed Lee Harvey Oswald. Miss Kilgallen stated that she was "going to break this case wide open." She died on November 8, 1965. Her autopsy report took eight days. She was 52 years old. Kilgallen died weeks before a planned second trip to New Orleans for a meeting with a secret informant, telling a friend it was “cloak and daggerish.” “I’m going to break the real story and have the biggest scoop of the century,” she told her ­lawyer. According to David Welsh of Ramparts Magazine, Kilgallen “vowed she would ‘crack this case.'” Another New York showbiz friend said Dorothy told him in the last days of her life: “In five more days, I’m going to bust this case wide open.” She compiled a thick file of evidence, interviews and notes, ­always keeping it close or under lock and key. It was nowhere to be found after her death. She also gave a copy of her drafts, including interview notes, to her friend Florence Smith. Smith died two days after Kilgallen of a “cerebral haemorrhage.” Smith’s copy of Kilgallen’s draft was also never located. Gary Underhill, a CIA agent, told friends he knew who killed Kennedy, and that he was sure that they would soon get him (Underhill) also. On May 8, 1964, Underhill was shot to death in Washington. William Pitzer, a lieutenant in the U.S. Navy, had photographed the military-performed autopsy of JFK’s body, He told friends he had been ordered to keep quiet about what he saw and did so for years. Nevertheless, he was found dead with a bullet in his head on October 29, 1966. Tom Howard was Jack Ruby's chief attorney and we are told he died of a heart attack in March 1965. He was 48. The doctor, without benefit of an autopsy, said he had suffered a heart attack. Some reporters and friends of Howard's were not so certain. Some said he was "bumped off." 
Warren Reynolds was minding his used car lot on East Jefferson Street in Oak Cliff in Dallas, when he heard shots two blocks away. Then he saw a man having a great difficulty tucking "a pistol or an automatic" in his belt, and running at the same time. Reynolds gave chase for a short piece being careful to keep his distance, then lost the fleeing man. He didn't know it then, but he had apparently witnessed the flight of the killer (or one of the killers) of patrolman Jefferson David Tippit. Reynolds was not questioned until two months after the event. The FBI finally talked to him in January 1964. The FBI interview report said, "he was hesitant to definitely identify Oswald as the individual. Two days after Reynolds talked to the FBI, he was shot in the head. He was closing up his used car lot for the night at the time. Nothing was stolen. On Saturday November 23, 1963, Jack Zangetty, the manager of a$150,000 modular motel complex near Lake Lugert, Oklahoma, remarked to some friends that "Three other men--not Oswald--killed the President." He also stated that "A man named Ruby will kill Oswald tomorrow and in a few days a member of the Frank Sinatra family will be kidnapped just to take some of the attention away from the assassination." Two weeks later, Jack Zangetty was found floating in Lake Lugert with bullet holes in his chest. It appeared to witnesses he had been in the water one to two weeks. Listed above are just some of the mysterious deaths of people associated with the JFK assassination but they are at least eighty more according to sources.  If you believe all these deaths are just a ’coincidence’ then you are delusional.  They were killed so that that the real perpetrators behind the crime remained hidden. The autopsy forgeries, the tampering of the skull, the replacement of Kennedy’s brain could not have been done without the complicity of the Secret Service, the FBI and elements of the CIA. The removal of Kennedy’s body from the jurisdiction of the State of Texas is another example of complicity by the Secret Service. The forgeries were intended to bolster the case of the “lone gunman.” The man who played the key role in fabricating the government lie purveyed by the Warren Commission was the Jewish Arlen Specter, the inventor of what came to be called the “magic bullet” theory: a single bullet supposed to have caused seven wounds to Kennedy and John Connally.  38 years later the Jews would also run the 9/11 commission, what a coincidence. The Illuminati’s public execution of the President of the United States was shown in front of the whole world on T.V.  It was a message to all future President’s, ‘if you try to remove the Federal Reserve, if you don’t do what we tell you then this is what will happen’. Do not dismiss the fact that a Jew and probable Mossad (SAYANIM) asset, Abraham Zapruder, just happened to be at the perfect place at the perfect time to capture the gruesome film footage of the actual murder and recorded the gruesome murder without flinching (just like the 5 dancing Israelis who had a video camera already set up to capture footage of the planes hitting the Twin Towers on 9/11). The question must be asked: Was Abraham Zapruder (Jewish) a Mossad operative on assignment positioned at exactly the right place at the right time to capture the gruesome murder as a graphic lesson for posterity? 
Zapruder either had incredible nerves of steel to be able to capture the event without so much as flinching; or he knew exactly what was coming and performed his assignment perfectly. As soon as Kennedy was removed Lyndon B. Johnson (33rd-degree Freemason) escalated the war in Vietnam and removed Kennedy’s Executive Order 11110 and removed the $4.3 billion dollars of silver backed currency that was printed by JFK. L.B Johnson then turned a blind eye to Israel’s nuclear bomb facilities at Dimona and money that was given to Israel’s military budget by the USA was increased dramatically. In Kennedys last budget 40 million dollars was given to Israel for development loans and food assistance. Two years later under Lyndon B. Johnson, Israel was given 130 million dollars with 70% for military purposes. That money to Israel has increased ever since to the point where Israel now receives at least$3 billion dollars per year from the USA.  Johnson had always been Israel’s man. His electoral campaigns had been funded since 1948 by Zionist Jewish financier Abraham Feinberg. If you ask who profited from Kennedy’s death, the answer was the Rothschild’s and Israel and I would suggest they were the major players behind his assassination. To further cement Israel as the dominant force in the Middle East and to remove its main enemy’s, Iraq, Syria and Libya another catastrophic event had to be brought about by Israel and its Mossad and that event was 9/11.  One in which they used their expertise to set up another Patsie, just like Lee Harvey Oswald but this time the Patsie was the Arabs. “The government's handling of the investigation of John Kennedy's murder was a fraud. It was the greatest fraud in the history of our country. It probably was the greatest fraud ever perpetrated in the history of humankind. That doesn't mean that we have to accept the continued existence of the kind of government which allows this to happen. We can do something about it. We're forced either to leave this country or to accept the authoritarianism that has developed - the authoritarianism which tells us that in the year 2029 we can see the evidence about what happened to John Kennedy”.  - Jim Garrison (Closing speech at trial of Clay Shaw Feb 28, 1969). JFK was also trying to get the Jewish lobby AIPAC to register as a foreign entity.  That is what it is.  AIPAC’s aim is to get all American Presidents and Politicians to favour Jews and Israel. “President Kennedy’s assassination was carried out with great attendant publicity and with the utmost brutality to serve as a warning to world leaders not to get out of line.  The Kennedy assassination was the signal for a massive transfer of power in a coup de-etat. American public cannot hope to get close to the truth about the Kennedy assassination unless they are somehow able to pry loose the stranglehold grip that the media has on this, the most important case in America's near history... The American media participated in the ongoing cover-up.  Kennedy ordered a review of the Federal Reserve.  When the review was handed to him, and after studying it, Kennedy issued an executive order to return constitutional money to the United States.  He signed Executive Order 11110 dated June 4, 1963, calling upon the Treasury to print and issue directly United States dollars, as opposed to Federal Reserve notes, thus bypassing the Federal Reserve banks” - The Conspirators’ Hierarchy: The Committee of 300 by Dr. 
John Coleman “The Israeli Government, working in tandem with the Chicago Jewish Mafia and Zionist-controlled Military-Industrial Complex, ordered the assassination of President John F. Kennedy which was actually carried out by the Central Intelligence Agency.  There were several reasons why the Khazarian Mafia killed Kennedy, each of which represented a direct threat by JFK to the global power structure bankrolled by the International Banking Cartel.  As a matter of historical fact, the Zionist perps went so far as to circulate “Wanted for Treason” leaflets all over Dallas during the week of Kennedy’s assassination.  Furthermore, Vice-President Lyndon B. Johnson, a crypto-Jewish politico, was the point-man in the White House overseeing the assassination plot and especially the cover-up” - See the link below titled ‘The Chicago Supermob: The Jewish Mafia That Killed Kennedy’ - http://stateofthenation2012.com/?p=93107 “So what are the possible repercussions of the Kennedy’s not being able to get the Zionists and Jews to register as foreign agents? How powerful have the Zionists become? What do the Zionist/Jews/Dual-US-Israeli Citizens own and control?  Nearly all major media (96%) and global means of communicating. Banking and finance are controlled at every corner and the Rothschild Zionist’s Crowning Jewel? The Not-So-Federal “Federal” Reserve that has been destroying America and the world around us since 1913 along with their Criminal Collection Agency the IRS that same year, and the creation of the Anti-defamation League in 1913 to protect the Jewish bankers that had taken over America”, - Charles Maultsby, American Researcher, Author of ‘Who Should Go Down in History’, www.chuckmaultsby.net “Israel’s Mossad was a primary (and critical) behind the scenes player in the conspiracy that ended the life of John F. Kennedy. Through its own vast resources and through its international contacts in the intelligence community and in organized crime, Israel had the means, it had the opportunity, and it had the motive to play a major frontline role in the crime of the century—and it did”, - Michael Collins Piper - Final Judgment: The Missing Link in the JFK Assassination Conspiracy “All Mossad agents, operate on a war-time footing. The Mossad has a tremendous advantage over other intelligence services in that every country in the world has a large Jewish community, which is useful.  The Mossad also has the advantage of having access to the records of all U.S. law enforcement agencies and U.S. intelligence services. The office of Naval Intelligence (ONI) services the Mossad at no cost to Israel.  The Mossad has a skillful disinformation service. The amount of disinformation it feeds to the American "market" is embarrassing, but even more embarrassing is how America swallows hook, line and sinker such propaganda” - The Conspirators’ Hierarchy: The Committee of 300 by Dr. John Coleman The Jewish state of Israel and its followers have assassinated many politicians and leaders who do not favor Israel.  On November 6, 1944, members of the Stern Gang, led by future Prime Minister Yitzhak Shamir, assassinated Lord Moyne, the British resident minister in the Middle East, for his anti-Zionist positions. The bodies of his murderers, executed in Egypt, were later exchanged for twenty Arab prisoners and buried at the “Monument of Heroes” in Jerusalem. On September 17, 1948, the same terrorist group murdered in Jerusalem Count Folke Bernadotte, a Swedish diplomat appointed as United Nations mediator in Palestine. 
He had just submitted his report, which described “large-scale Zionist plundering and destruction of villages,” and called for the “return of the Arab refugees rooted in this land for centuries.” His assassin, Nathan Friedman-Yellin, was arrested, convicted, and then amnestied; in 1960 he was elected to the Knesset. In 1946, three months after members of the Irgun, led by future Prime Minister Menachem Begin, killed ninety-one people in the headquarter of the British Mandate’s administration (King David Hotel), the same terrorist group attempted to murder British Prime Minister Clement Attlee and Foreign Secretary Ernest Bevin, according to British Intelligence documents declassified in 2006. These killings and more are documented by Israeli journalist Ronen Bergman in Rise and Kill First: The Secret History of Israel’s Targeted Assassinations. “Using the same logic that I apply to the 9-11 cover-up we can conclude that the people involved in the JFK cover-up are part of the criminal network behind the assassination.  So, why would the Zionist-controlled media engage in a cover-up and who are they doing it for? It doesn't make any sense that the Zionist elite that controls the U.S. media would cover up the JFK conspiracy for the Mafia, the CIA, or the anti-Castro Cubans.  The Zionist-controlled media is, most likely, covering up the conspiracy in order to protect the Zionist criminals who masterminded the assassination of President Kennedy.  The cover-up of the conspiracy to kill President Kennedy began immediately after the murder in Dallas and was clearly orchestrated at the highest level.  Lyndon B. Johnson, who became the president on the day Kennedy died, was an ardent Zionist and devoted supporter of the state of Israel.  His aunt Jessie Johnson Hatcher, who was a major influence on Lyndon, was a member of the Zionist Organization of America. While it seems very likely that his family was in fact Jewish (i.e.Crypto-Jews), there is absolutely no question that Lyndon Johnson was a high-level Zionist who had been involved for decades with the Jewish Freemasonic organization, the International Order of the B'nai B'rith, in the illegal immigration of Jews from Poland to Galveston. The smuggling operation of Jews to Texas in which LBJ was involved from 1937 is known as "Operation Texas".  From the earliest days of his political career Lyndon Johnson was clearly working with and for the B'nai B'rith and the Zionist mobsters who were bringing Polish Jews to Texas and later sending U.S. weapons - illegally - to Zionist forces in Palestine. Johnson's history as a high-level Zionist agent needs to be taken into consideration when looking at his role in the cover-up of the evidence of a conspiracy to kill President Kennedy and the cover-up of the murderous Israeli attack on the USS Liberty in 1967.   President Lyndon Johnson was the official in the highest position to mastermind the JFK cover-up and the person who appointed the Warren Commission.  It is important to realize that Johnson was an active and dedicated Zionist agent who set the stage for the assassination as Vice President and then masterminded the cover-up as the new president.  Lyndon Johnson and the Zionist-controlled media have covered-up the truth of the assassination of President John F. Kennedy because it was a crime that was carried out by the Zionist criminal network, which took power in the coup d'etat in Dallas in November 1963 and which has been in power ever since. 
The role of the Zionist-controlled media is to keep the population from realizing that a criminal regime of Zionist agents controls their government” – Christopher Bollyn – Author Solving 9-11: The Deception that Changed the World – Bollyn.com Michael Collins Piper lays out the thesis of his ground-breaking book 'Final Judgment' in a series of interviews with Rick Adams from 2011. Final Judgment documents how Israel's leaders, the Mossad, the Meyer Lansky-run organized crime syndicate, and a pro-Zionist faction of the CIA colluded to assassinate President, John F. Kennedy. Arguably this is the best analysis in existence of one of the most pivotal crimes of the Twentieth Century. JFK’s brother Robert Kennedy was also assassinated in 1968 Robert Kennedy scored major victories when he won both the California and South Dakota primaries on June 4 in the Presidential race. He addressed his supporters shortly after midnight on June 5, 1968, in a ballroom at The Ambassador Hotel in Los Angeles.  Leaving the ballroom, he went through the hotel kitchen where he was shot and killed.  Sirhan Sirhan, a 24-year-old Palestinian was blamed for the shooting even though all evidence suggests he can’t possibly have shot RFK. According to the autopsy report of Chief Medical Examiner-Coroner Thomas Noguchi, Robert Kennedy died of a gunshot wound to the brain, fired from behind the right ear at point blank range, following an upward angle. Nogushi restated his conclusion in his 1983 memoirs, Coroner. Yet the sworn testimony of twelve shooting witnesses established that Robert had never turned his back on Sirhan and that Sirhan was five to six feet away from his target when he fired. Tallying all the bullet impacts in the pantry, and those that wounded five people around Kennedy, it has been estimated that at least twelve bullets were fired, while Sirhan’s gun carried only eight. On April 23, 2011, attorneys William Pepper and his associate, Laurie Dusek, gathered all this evidence and more in a 58-page file submitted to the Court of California, asking that Sirhan’s case be reopened. They documented major irregularities in the 1968 trial, including the fact that the bullet tested in laboratory to be compared to the one extracted from Robert’s brain had not been shot by Sirhan’s revolver, but by another gun, with a different serial number; thus, instead of incriminating Sirhan, the ballistic test in fact proved him innocent. Pepper has also provided a computer analysis of audio recordings during the shooting, made by engineer Philip Van Praag in 2008, which confirms that two guns are heard. The presence of a second shooter was signaled by several witnesses and reported on the same day by a few news media. There are strong suspicions that the second shooter was Thane Eugene Cesar, a security guard hired for the evening, who was stood behind Kennedy at the moment of the shooting, and seen with his pistol drawn by several witnesses. One of them, Don Schulman, positively saw him fire. Cesar was never investigated, even though he did not conceal his hatred for the Kennedys, who according to his recorded statement, had “sold the country down the road to the commies.” Even if we assume that Sirhan did kill Robert Kennedy, a second aspect of the case raises question: according to several witnesses, Sirhan seemed to be in a state of trance during the shooting. 
More importantly, Sirhan has always claimed, and continues to claim, that he has never had any recollection of his act:  “I was told by my attorney that I shot and killed Senator Robert F. Kennedy and that to deny this would be completely futile, but I had and continue to have no memory of the shooting of Senator Kennedy.” We know that in the 1960s, American military agencies were experimenting on mental control. Dr Sidney Gottlieb, son of Hungarian Jews, directed the infamous CIA MKUltra project, which, among other things, were to answer questions such as: “Can a person under hypnosis be forced to commit murder?” according to a declassified document dated May 1951.  According to Israeli journalist Ronen Bergman, author of Rise and Kill First: The Secret History of Israel’s Targeted Assassinations , in 1968, an Israeli military psychologist by the name of Benjamin Shalit had concocted a plan to take a Palestinian prisoner and “brainwash and hypnotize him into becoming a programmed killer” aimed at Yasser Arafat. "There is no question he was hypno-programmed," lawyer William F. Pepper told ABCNews.com. "He was set up, he was used, and he was manipulated." "Ten independent witnesses say Sirhan was always in front of Bobby, never behind him," said Pepper, "but the autopsy says Bobby was shot at close range from behind the right ear." “Sirhan was standing in front of Kennedy when, as the autopsy definitively showed, RFK was shot from the rear at point blank range, three bullets entering his body, with the fatal head-shot coming upward at a 45 degree angle from 1-3 inches behind his right ear. In addition, an audio recording shows that many more bullets than the eight in Sirhan’s gun were fired in the hotel pantry that night. It was impossible for Sirhan to have killed RFK.  While Sirhan sits in prison to this day, the real killers of Senator Kennedy went free that night. For anyone who studies the case with an impartial eye (see this, this, this, this, and this), the evidence is overwhelming that there was a very sophisticated conspiracy at work, one that continued long after as police, FBI, intelligence agencies, and the legal system covered up the true nature of the crime.  That Sirhan was a Manchurian candidate hypnotized to play his part as seeming assassin is also abundantly clear. Dr. Daniel P. Brown, an Associate Clinical Professor of Psychology at Harvard Medical School, an international expert on hypnosis, affirms the obvious: that Sirhan was hypno-programmed to shoot his pistol in response to a post hypnotic touch cue, most likely from the girl in the polka-dot dress. Dr. Brown states that Sirhan “did not have the knowledge, or intention, to shoot a human being, let alone Senator Kennedy.” At the request of Sirhan’s defense team seeking a new trial and a parole for Sirhan (efforts led by the lawyer William Peppers and the heroic Paul Schrade), Dr. Brown “conducted a forensic assessment in six different two-day sessions over a three year span spending over sixty hours interviewing and testing Sirhan at Corona Penitentiary and Pleasant Valley in California.” - The Blatant Conspiracy behind Senator Robert F. Kennedy’s Assassination By Edward Curtin - https://www.globalresearch.ca/the-blatant-conspiracy-behind-senator-robert-f-kennedys-assassination/5642125
ODS - Maple Help

ODS file format

Description

• ODS (OpenDocument Spreadsheet) is an XML-based spreadsheet file format used by OpenOffice.
• The commands ImportMatrix and ExportMatrix can read and write to the ODS format.
• The general-purpose commands Import and Export also support this format.
• The default output from Import for this format is a DataSeries, the individual elements of which are DataFrames corresponding to worksheets within the ODS spreadsheet.

Examples

Import an ODS spreadsheet listing the highest mountain peaks in the world.

> Import("example/HighestPeaks.ods", base = datadir)

The result is a DataSeries with a single entry, "Maple Data", whose value is a DataFrame with the peaks as row labels and the columns Height (m), Location, and First ascent:

                  Height (m)   Location                 First ascent
  Mount Everest   8848         27°59'17"N 86°55'31"E    1953
  K2              8611         35°52'53"N 76°30'48"E    1954
  Kangchenjunga   8586         27°42'12"N 88°08'51"E    1955
  Lhotse          8516         27°57'42"N 86°55'59"E    1956
  Makalu          8485         27°53'23"N 87°5'20"E     1955
  Cho Oyu         8188         28°05'39"N 86°39'39"E    1954
  Dhaulagiri I    8167         28°41'48"N 83°29'35"E    1960
  Manaslu         8163         28°33'00"N 84°33'35"E    1956
  Nanga Parbat    8126         35°14'14"N 74°35'21"E    1953
  Annapurna I     8091         28°35'44"N 83°49'13"E    1950
                                                             (1)

Import the same data as above, but returned as a table of Matrices.

> Import("example/HighestPeaks.ods", base = datadir, output = table)

The result is a table with the single entry "Maple Data", whose value is an 11 x 4 Matrix: the first row holds the column headers "Name", "Height (m)", "Location", and "First ascent", and the remaining rows hold the same data shown above.   (2)

Export a random matrix to an ODS spreadsheet in the home directory of the current user.

> M := LinearAlgebra:-RandomMatrix(100, 2):
> Export("example.ods", M, base = homedir)

24639   (3)

Compatibility

• With Maple 2016, the Import command applied to ODS files now produces DataSeries objects by default. To produce a table, use Import(..., output = table).
# Intro to order of operations

## Video transcript

In this video we're going to talk a little bit about order of operations. And I want you to pay close attention because really everything else that you're going to do in mathematics is going to be based on you having a solid grounding in order of operations. So what do we even mean when we say order of operations? Let me give you an example. The whole point is so that we have one way to interpret a mathematical statement. So let's say I have the mathematical statement 7 plus 3 times 5. Now if we didn't all agree on order of operations, there would be two ways of interpreting this statement. You could just read it left to right: you could say, well, let me just take 7 plus 3 and then multiply that times 5. 7 plus 3 is 10, and then you multiply that by 5. 10 times 5 would get you 50. So that's one way you would interpret it if we didn't agree on an order of operations. Maybe it's a natural way; you just go left to right. Another way you could interpret it: you say, I like to do multiplication before I do addition. So you might interpret it as -- I'll try to color code it -- 7 plus -- and you do the 3 times 5 first -- 7 plus 3 times 5, which would be 7 plus 15 -- 3 times 5 is 15 -- and 7 plus 15 is 22. So notice, we interpreted this statement in two different ways. This was just straight left to right, doing the addition then the multiplication. This way we did the multiplication first then the addition. We got two different answers, and that's just not cool in mathematics. If this was part of some effort to send something to the moon, and two people interpreted it different ways, or one computer interpreted it one way and another computer interpreted it another way, the satellite might go to Mars. So this is just completely unacceptable, and that's why we have to have an agreed upon order of operations -- an agreed upon way to interpret this statement. So the agreed upon order of operations is to do parentheses first -- let me write it over here -- then do exponents. If you don't know what exponents are, don't worry about it right now. In this video we're not going to have any exponents in our examples, so you don't really have to worry about them for this video. Then you do multiplication -- I'll just write mult, short for multiplication -- then you do multiplication and division next; they kind of have the same level of priority. And then finally you do addition and subtraction. So what does this order of operations -- let me label it -- this right here, that is the agreed upon order of operations. If we follow this order of operations we should always get to the same answer for a given statement. So what does this tell us? What is the best way to interpret this up here? Well, we have no parentheses -- parentheses look like that, those little curly things around numbers. We don't have any parentheses here. I'll do some examples that do have parentheses. We don't have any exponents here. But we do have some multiplication and division -- or actually, we just have some multiplication. So, per the order of operations, we do the multiplication and division first. So it says do the multiplication first. That's a multiplication, so it says do this operation first.
It gets priority over addition or subtraction. So if we do this first we get the 3 times 5, which is 15, and then we add the 7. The addition or subtraction -- I'll do it here, addition, we just have addition. Just like that. So we do the multiplication first, get 15, then add the 7, 22. So based upon the agreed order of operations, this right here is the correct answer. The correct way to interpret this statement. Let's do another example. I think it'll make things a little bit more clear, and I'll do the example in pink. So let's say I have 7 plus 3 -- I'll put some parentheses there -- times 4 divided by 2 minus 5 times 6. So there's all sorts of crazy things here, but if you just follow the order of operations you'll simplify it in a very clean way and hopefully we'll all get the same answer. So let's just follow the order of operations. The first thing we have to do is look for parentheses. Are there parentheses here? Yes, there are. There's parentheses around the 7 plus 3. So it says let's do that first. So 7 plus 3 is 10. So this we can simplify, just looking at this order operations, to 10 times all of that. Let me copy and paste that so I don't have to keep re-writing it. So that simplifies to 10 times all of that. We did our parentheses first. Then what do we do? There are no more parentheses in this expression. Then we should do exponents. I don't see any exponents here, and if you're curious what exponents look like, an exponent would look like 7 squared. You'd see these little small numbers up in the top right. We don't have any exponents here so we don't have to worry about it. Then it says to do multiplication and division next. So where do we see multiplication? We have a multiplication, a division, a multiplication again. Now, when you have multiple operations at the same level, when our order of operations, multiplication and division are the same level, then you do left to right. So in this situation you're going to multiply by 4 and then divide by 2. You won't multiply by 4 divided by 2. Then we'll do the 5 times 6 before we do the subtraction right here. So let's figure out what this is. So we'll do this multiplication first. We could simultaneously do this multiplication because it's not going to change things. But I'll do things one step at a time. So the next step we're going to do is this 10 times 4. 10 times 4 is 40. 10 times 4 is 40, then you have 40 divided by 2 and it simplifies to that right there. Remember, multiplication and division, they're at the exact same level so we're going to do it left to right. You could also express this as multiplying by 1/2 and then it wouldn't matter the order. But for simplicity, multiplication and division go left to right. So then you have 40 divided by 2 minus 5 times 6. So, division, you just have one division here, you want to do that. You have this division and you have this multiplication, they're not together so you can actually kind of do them simultaneously. And to make it clear that you do this before you do the subtraction because multiplication and division take priority over addition and subtraction, we could put parentheses around them to say look, we're going to do that and that first before I do that subtraction, because multiplication and division have priority. So 40 divided by 2 is 20. We're going to have that minus sign, minus 5 times 6 is 30. 20 minus 30 is equal to negative 10. And that is the correct interpretation of that. So I want to make something very, very, very clear. 
If you have things at the same level -- so if you have 1 plus 2 minus 3 plus 4 minus 1, where addition and subtraction are all at the same level in the order of operations -- you should go left to right. So you should interpret this as: 1 plus 2 is 3, so this is the same thing as 3 minus 3 plus 4 minus 1. Then you do 3 minus 3 is 0, plus 4 minus 1. Or this is the same thing as 4 minus 1, which is the same thing as 3. You just go left to right. Same thing if you have multiplication and division; they're at the same level. So if you have 4 times 2 divided by 3 times 2, you do 4 times 2 is 8, divided by 3, times 2. And you say 8 divided by 3 is, well, we got a fraction there. It would be 8/3. So this would be 8/3 times 2. And then 8/3 times 2 is equal to 16 over 3. That's how you interpret it. You don't do this multiplication first or divide the 2 by that and all of that. Now the one time where you can be loosey-goosey with order of operations is if you have all addition or all multiplication. So if you have 1 plus 5 plus 7 plus 3 plus 2, it does not matter what order you do it in. You can do the 2 plus 3, you can go from the right to the left, you can go from the left to the right, you could start some place in between -- if it's only all addition. And the same thing is true if you have all multiplication: 1 times 5 times 7 times 3 times 2. It does not matter what order you're doing it in. But it's only with all multiplication or all addition. If there was some division in here, or some subtraction in here, you're best off just going left to right.
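As a quick sanity check, most programming languages implement these same precedence rules, so they can be used to verify a hand calculation. Here is a minimal Python sketch evaluating the expressions worked through in this video:

```python
# Multiplication binds tighter than addition, so this is 7 + (3 * 5) = 22, not (7 + 3) * 5 = 50.
print(7 + 3 * 5)                  # 22

# Parentheses first, then multiplication/division left to right, then subtraction:
# (7 + 3) * 4 / 2 - 5 * 6  ->  40 / 2 - 30  ->  20 - 30  ->  -10
print((7 + 3) * 4 / 2 - 5 * 6)    # -10.0

# Same-level operators evaluate left to right: 4 * 2 / 3 * 2 = 8 / 3 * 2 = 16/3
print(4 * 2 / 3 * 2)              # 5.333... (16/3)
```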
Consider a lossless parallel plate transmission line with a characteristic impedance Z0. We want to play with its dimensions; however, we want to keep its characteristic impedance constant.

(a) If the plate width (w) is kept constant, and the relative dielectric constant (epsilon_r) between the plates is doubled, how should the thickness (d) between the plates be changed?
(b) How should w be changed for a given d if epsilon_r is doubled?
(c) How should w be changed for a given epsilon_r if d is doubled?

(Hint: For a parallel plate transmission line, C = epsilon w/d and L = mu d/w.)
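A minimal sketch of the reasoning, assuming the standard lossless-line relation Z0 = sqrt(L/C) together with the hinted C = epsilon w/d and L = mu d/w, so that Z0 = (d/w) * sqrt(mu/epsilon). The dimensions and the helper function below are illustrative assumptions, not given in the problem:

```python
import math

MU0 = 4e-7 * math.pi     # permeability of free space (H/m)
EPS0 = 8.854e-12         # permittivity of free space (F/m)

def z0_parallel_plate(w, d, eps_r):
    """Ideal (fringing-free) parallel plate line: Z0 = sqrt(L/C) = (d/w) * sqrt(mu/eps)."""
    return (d / w) * math.sqrt(MU0 / (EPS0 * eps_r))

w, d, eps_r = 10e-3, 1e-3, 2.0          # example dimensions (m) and dielectric constant
z_ref = z0_parallel_plate(w, d, eps_r)

# (a) eps_r doubled, w fixed: d must grow by sqrt(2) to keep Z0 constant
print(math.isclose(z0_parallel_plate(w, d * math.sqrt(2), 2 * eps_r), z_ref))   # True
# (b) eps_r doubled, d fixed: w must shrink by a factor of sqrt(2)
print(math.isclose(z0_parallel_plate(w / math.sqrt(2), d, 2 * eps_r), z_ref))   # True
# (c) d doubled, eps_r fixed: w must also double
print(math.isclose(z0_parallel_plate(2 * w, 2 * d, eps_r), z_ref))              # True
```

In short, with w fixed, doubling epsilon_r requires increasing d by sqrt(2); with d fixed it requires decreasing w by sqrt(2); and doubling d requires doubling w.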
# 6.2: One-sample t-test — A new t-test Now we are ready to talk about t-test. We will talk about three of them. We start with the one-sample t-test. Commonly, the one-sample t-test is used to estimate the chances that your sample came from a particular population. Specifically, you might want to know whether the mean that you found from your sample, could have come from a particular population having a particular mean. Straight away, the one-sample t-test becomes a little confusing (and I haven’t even described it yet). Officially, it uses known parameters from the population, like the mean of the population and the standard deviation of the population. However, most times you don’t know those parameters of the population! So, you have to estimate them from your sample. Remember from the chapters on descriptive statistics and sampling, our sample mean is an unbiased estimate of the population mean. And, our sample standard deviation (the one where we divide by n-1) is an unbiased estimate of the population standard deviation. When Gosset developed the t-test, he recognized that he could use these estimates from his samples, to make the t-test. Here is the formula for the one sample t-test, we first use words, and then become more specific: ## Formulas for one-sample t-test $\text{name of statistic} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber$ $\text{t} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber$ $\text{t} = \frac{\text{Mean difference}}{\text{standard error}} \nonumber$ $\text{t} = \frac{\bar{X}-u}{S_{\bar{X}}} \nonumber$ $\text{t} = \frac{\text{Sample Mean - Population Mean}}{\text{Sample Standard Error}} \nonumber$ $\text{Estimated Standard Error} = \text{Standard Error of Sample} = \frac{s}{\sqrt{N}} \nonumber$ Where, $$s$$ is the sample standard deviation. Some of you may have gone cross-eyed looking at all of this. Remember, we’ve seen it before when we divided our mean by the standard deviation in the first bit. The t-test is just a measure of a sample mean, divided by the standard error of the sample mean. That is it. ## What does t represent? $$t$$ gives us a measure of confidence, just like our previous ratio for dividing the mean by a standard deviations. The only difference with $$t$$, is that we divide by the standard error of mean (remember, this is also a standard deviation, it is the standard deviation of the sampling distribution of the mean) Note What does the t in t-test stand for? Apparently nothing. Gosset originally labelled it z. And, Fisher later called it t, perhaps because t comes after s, which is often used for the sample standard deviation. $$t$$ is a property of the data that you collect. You compute it with a sample mean, and a sample standard error (there’s one more thing in the one-sample formula, the population mean, which we get to in a moment). This is why we call $$t$$, a sample-statistic. It’s a statistic we compute from the sample. What kinds of numbers should we expect to find for these $$ts$$? How could we figure that out? Let’s start small and work through some examples. Imagine your sample mean is 5. You want to know if it came from a population that also has a mean of 5. In this case, what would $$t$$ be? It would be zero: we first subtract the sample mean from the population mean, $$5-5=0$$. Because the numerator is 0, $$t$$ will be zero. So, $$t$$ = 0, occurs, when there is no difference. Let’s say you take another sample, do you think the mean will be 5 every time, probably not. 
Let’s say the mean is 6. So, what can $$t$$ be here? It will be a positive number, because $$6-5= +1$$. But, will $$t$$ be +1? That depends on the standard error of the sample. If the standard error of the sample is 1, then $$t$$ could be 1, because $$1/1 = 1$$. If the sample standard error is smaller than 1, what happens to $$t$$? It get’s bigger right? For example, 1 divided by $$0.5 = 2$$. If the sample standard error was 0.5, $$t$$ would be 2. And, what could we do with this information? Well, it be like a measure of confidence. As $$t$$ get’s bigger we could be more confident in the mean difference we are measuring. Can $$t$$ be smaller than 1? Sure, it can. If the sample standard error is big, say like 2, then $$t$$ will be smaller than one (in our case), e.g., $$1/2 = .5$$. The direction of the difference between the sample mean and population mean, can also make the $$t$$ become negative. What if our sample mean was 4. Well, then $$t$$ will be negative, because the mean difference in the numerator will be negative, and the number in the bottom (denominator) will always be positive (remember why, it’s the standard error, computed from the sample standard deviation, which is always positive because of the squaring that we did.). So, that is some intuitions about what the kinds of values t can take. $$t$$ can be positive or negative, and big or small. Let’s do one more thing to build our intuitions about what $$t$$ can look like. How about we sample some numbers and then measure the sample mean and the standard error of the mean, and then plot those two things against each each. This will show us how a sample mean typically varies with respect to the standard error of the mean. In the following figure, I pulled 1,000 samples of N=10 from a normal distribution (mean = 0, sd = 1). Each time I measured the mean and standard error of the sample. That gave two descriptive statistics for each sample, letting us plot each sample as dot in a scatterplot What we get is a cloud of dots. You might notice the cloud has a circular quality. There’s more dots in the middle, and fewer dots as they radiate out from the middle. The dot cloud shows us the general range of the sample mean, for example most of the dots are in between -1 and 1. Similarly, the range for the sample standard error is roughly between .2 and .5. Remember, each dot represents one sample. We can look at the same data a different way. For example, rather than using a scatterplot, we can divide the mean for each dot, by the standard error for each dot. Below is a histogram showing what this looks like: Interesting, we can see the histogram is shaped like a normal curve. It is centered on 0, which is the most common value. As values become more extreme, they become less common. If you remember, our formula for $$t$$, was the mean divided by the standard error of the mean. That’s what we did here. This histogram is showing you a $$t$$-distribution. ## Calculating t from data Let’s briefly calculate a t-value from a small sample. Let’s say we had 10 students do a true/false quiz with 5 questions on it. There’s a 50% chance of getting each answer correct. Every student completes the 5 questions, we grade them, and then we find their performance (mean percent correct). What we want to know is whether the students were guessing. If they were all guessing, then the sample mean should be about 50%, it shouldn’t be different from chance, which is 50%. 
Let’s look at the table: suppressPackageStartupMessages(library(dplyr)) students <- 1:10 scores <- c(50,70,60,40,80,30,90,60,70,60) mean_scores <- mean(scores) Difference_from_Mean <- scores-mean_scores Squared_Deviations <- Difference_from_Mean^2 the_df<-data.frame(students, scores, mean=rep(mean_scores,10), Difference_from_Mean, Squared_Deviations) the_df <- the_df %>% rbind(c("Sums",colSums(the_df[1:10,2:5]))) %>% rbind(c("Means",colMeans(the_df[1:10,2:5]))) %>% rbind(c(" "," "," ","sd ",round(sd(the_df[1:10,2]),digits=2))) %>% rbind(c(" "," "," ","SEM ",round(sd(the_df[1:10,2])/sqrt(10), digits=2))) %>% rbind(c(" "," "," ","t",(61-50)/round(sd(the_df[1:10,2])/sqrt(10), digits=2))) knitr::kable(the_df) |students |scores |mean |Difference_from_Mean |Squared_Deviations | |:--------|:------|:----|:--------------------|:------------------| |1 |50 |61 |-11 |121 | |2 |70 |61 |9 |81 | |3 |60 |61 |-1 |1 | |4 |40 |61 |-21 |441 | |5 |80 |61 |19 |361 | |6 |30 |61 |-31 |961 | |7 |90 |61 |29 |841 | |8 |60 |61 |-1 |1 | |9 |70 |61 |9 |81 | |10 |60 |61 |-1 |1 | |Sums |610 |610 |0 |2890 | |Means |61 |61 |0 |289 | | | | |sd |17.92 | | | | |SEM |5.67 | | | | |t |1.94003527336861 | You can see the scores column has all of the test scores for each of the 10 students. We did the things we need to do to compute the standard deviation. Remember the sample standard deviation is the square root of the sample variance, or: $\text{sample standard deviation} = \sqrt{\frac{\sum_{i}^{n}({x_{i}-\bar{x})^2}}{N-1}} \nonumber$ $\text{sd} = \sqrt{\frac{2890}{10-1}} = 17.92 \nonumber$ The standard error of the mean, is the standard deviation divided by the square root of N $\text{SEM} = \frac{s}{\sqrt{N}} = \frac{17.92}{10} = 5.67 \nonumber$ $$t$$ is the difference between our sample mean (61), and our population mean (50, assuming chance), divided by the standard error of the mean. $\text{t} = \frac{\bar{X}-u}{S_{\bar{X}}} = \frac{\bar{X}-u}{SEM} = \frac{61-50}{5.67} = 1.94 \nonumber$ And, that is you how calculate $$t$$, by hand. It’s a pain. I was annoyed doing it this way. In the lab, you learn how to calculate $$t$$ using software, so it will just spit out $$t$$. For example in R, all you have to do is this: scores <- c(50,70,60,40,80,30,90,60,70,60) t.test(scores, mu=50) One Sample t-test data: scores t = 1.9412, df = 9, p-value = 0.08415 alternative hypothesis: true mean is not equal to 50 95 percent confidence interval: 48.18111 73.81889 sample estimates: mean of x 61 ## How does t behave? If $$t$$ is just a number that we can compute from our sample (it is), what can we do with it? How can we use $$t$$ for statistical inference? Remember back to the chapter on sampling and distributions, that’s where we discussed the sampling distribution of the sample mean. Remember, we made a lot of samples, then computed the mean for each sample, then we plotted a histogram of the sample means. Later, in that same section, we mentioned that we could generate sampling distributions for any statistic. For each sample, we could compute the mean, the standard deviation, the standard error, and now even $$t$$, if we wanted to. We could generate 10,000 samples, and draw four histograms, one for each sampling distribution for each statistic. This is exactly what I did, and the results are shown in the four figures below. I used a sample size of 20, and drew random observations for each sample from a normal distribution, with mean = 0, and standard deviation = 1. Let’s look at the sampling distributions for each of the statistics. 
$$t$$ was computed with the population mean assumed to be 0. We see four sampling distributions. This is how these statistical summaries behave. We have used the word chance windows before. These are four chance windows, measuring different aspects of the sample. In this case, all of the samples came from the same normal distribution. Because of sampling error, each sample is not identical. The means are not identical, the standard deviations are not identical, the sample standard errors of the mean are not identical, and the $$t$$s of the samples are not identical. They all have some variation, as shown by the histograms. This is how samples of size 20 behave. We can see straight away that, in this case, we are unlikely to get a sample mean of 2. That's way outside the window. The range for the sampling distribution of the mean is around -.5 to +.5, and is centered on 0 (the population mean, would you believe!). The sample standard deviations fall in a different range, mostly between .6 and 1.5, specific to the sample standard deviation. Same thing with the sample standard error of the mean: the range here is even smaller, mostly between .1 and .3. You would rarely find a sample with a standard error of the mean greater than .3. Virtually never would you find one of, say, 1 (for this situation). Now, look at $$t$$. Its range is basically between -3 and +3 here. 3s barely happen at all. You pretty much never see a 5 or -5 in this situation.

All of these sampling windows are chance windows, and they can all be used in the same way as we have used similar sampling distributions before (e.g., the Crump Test and the Randomization Test) for statistical inference. For all of them we would follow the same process:

1. Generate these distributions
2. Look at your sample statistics for the data you have (mean, SD, SEM, and $$t$$)
3. Find the likelihood of obtaining that value or greater
4. Obtain that probability
5. See if you think your sample statistics were probable or improbable.

We'll formalize this in a second. I just want you to know that what you will be doing is something that you have already done before. For example, in the Crump test and the Randomization test we focused on the distribution of mean differences. We could do that again here, but instead, we will focus on the distribution of $$t$$ values. We then apply the same kinds of decision rules to the $$t$$ distribution as we did for the other distributions. Below you will see a graph you have already seen, except this time it is a distribution of $$t$$s, not mean differences.

Remember, if we obtained a single $$t$$ from one sample we collected, we could consult this chance window below to find out whether the $$t$$ we obtained from the sample was likely or unlikely to occur by chance.

## Making a decision

From our earlier example involving the TRUE/FALSE quizzes, we are now ready to make some kind of decision about what happened there. We found a mean difference of 11. We found a $$t$$ = 1.94. The probability of this $$t$$ or larger occurring is $$p$$ = 0.084. We were testing the idea that our sample mean of 61 could have come from a normal distribution with mean = 50.
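One way to see where that p-value comes from is to build the chance window by brute force. The sketch below (in Python rather than the R used elsewhere in this chapter) simulates many samples of N = 10 under the null and counts how often a $$t$$ at least as extreme as 1.94 occurs; the seed and the number of simulations are arbitrary choices:

```python
import random
import statistics as stats

random.seed(1)                        # arbitrary seed, for reproducibility
N, n_sims, t_observed = 10, 100000, 1.94
ts = []
for _ in range(n_sims):
    # Under the null, each sample comes from a population whose mean equals the
    # hypothesized value, so we can simulate from normal(0, 1) and test against 0.
    sample = [random.gauss(0, 1) for _ in range(N)]
    sem = stats.stdev(sample) / (N ** 0.5)   # sample standard error of the mean
    ts.append(stats.mean(sample) / sem)      # one-sample t for this simulated sample

# Proportion of simulated ts at least as extreme (in either direction) as the observed t.
p_approx = sum(abs(t) >= t_observed for t in ts) / n_sims
print(round(p_approx, 3))             # close to the 0.084 reported by t.test()
```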
The $$t$$ test tells us that the $$t$$ for our sample, or a larger one, would happen with p = 0.0841503. In other words, chance can do it a small amount of the time, but not often. In English, this means that all of the students could have been guessing, but it wasn't that likely that they were just guessing. We're guessing that you are still a little bit confused about $$t$$ values, and what we are doing here. We are going to skip ahead to the next $$t$$-test, called a paired samples t-test. We will also fill in some more things about $$t$$-tests that are more obvious when discussing the paired samples t-test. In fact, spoiler alert, we will find out that a paired samples t-test is actually a one-sample t-test in disguise (WHAT!); yes, it is. If the one-sample $$t$$-test didn't make sense to you, read the next section.

This page titled 6.2: One-sample t-test — A new t-test is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Matthew J. C. Crump via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1. Words are simply strings separated by whitespace. Note that words which only differ in capitalization are considered separate (e.g. great and Great are considered different words). 2. You might find some useful functions in util.py. Have a look around in there before you start coding. Problem 1: Building intuition Here are two reviews of Perfect Blue, from Rotten Tomatoes: Rotten Tomatoes has classified these reviews as "positive" and "negative,", respectively, as indicated by the intact tomato on the left and the splattered tomato on the right. In this assignment, you will create a simple text classification system that can perform this task automatically. We'll warm up with the following set of four mini-reviews, each labeled positive $(+1)$ or negative $(-1)$: 1. $(-1)$ pretty bad 2. $(+1)$ good plot 3. $(-1)$ not good 4. $(+1)$ pretty scenery Each review $x$ is mapped onto a feature vector $\phi(x)$, which maps each word to the number of occurrences of that word in the review. For example, the first review maps to the (sparse) feature vector $\phi(x) = \{\text{pretty}:1, \text{bad}:1\}$. Recall the definition of the hinge loss: $$\text{Loss}_{\text{hinge}}(x, y, \mathbf{w}) = \max \{0, 1 - \mathbf{w} \cdot \phi(x) y\},$$ where $x$ is the review text, $y$ is the correct label, $\mathbf{w}$ is the weight vector. 1. Suppose we run stochastic gradient descent once for each of the 4 samples in the order given above, updating the weights according to $$\mathbf{w} \leftarrow \mathbf{w} - \eta \nabla_\mathbf{w} \text{Loss}_{\text{hinge}}(x, y, \mathbf{w}).$$ After the updates, what are the weights of the six words ("pretty", "good", "bad", "plot", "not", "scenery") that appear in the above reviews? • Use $\eta = 0.1$ as the step size. • Initialize $\mathbf{w} = [0, 0,0,0,0, 0]$. • The gradient $\nabla_\mathbf{w} \text{Loss}_{\text{hinge}}(x, y, \mathbf{w}) = 0$ when margin is exactly 1. A weight vector that contains a numerical value for each of the tokens in the reviews ("pretty", "good", "bad","plot", "not", "scenery"), in this order. For example: $[0.1, 0.2,0.3,0.4,0.5, 0.6]$. 2. Given the following dataset of reviews: 1. ($-1$) bad 2. ($+1$) good 3. ($+1$) not bad 4. ($-1$) not good Prove that no linear classifier using word features can get zero error on this dataset. Remember that this is a question about classifiers, not optimization algorithms; your proof should be true for any linear classifier, regardless of how the weights are learned. Propose a single additional feature for your dataset that we could augment the feature vector with that would fix this problem. 1. a short written proof (~3-5 sentences). 2. a viable feature that would allow a linear classifier to have zero error on the dataset (classify all examples correctly). Problem 2: Predicting Movie Ratings Suppose that we are now interested in predicting a numeric rating for movie reviews. We will use a non-linear predictor that takes a movie review $x$ and returns $\sigma(\mathbf w \cdot \phi(x))$, where $\sigma(z) = (1 + e^{-z})^{-1}$ is the logistic function that squashes a real number to the range $(0, 1)$. For this problem, assume that the movie rating $y$ is a real-valued variable in the range $[0, 1]$. Do not use math software such as Wolfram Alpha to solve this problem. 1. Suppose that we wish to use squared loss. Write out the expression for $\text{Loss}(x, y, \mathbf w)$ for a single datapoint $(x,y)$. A mathematical expression for the loss. Feel free to use $\sigma$ in the expression. 2. 
Problem 2: Predicting Movie Ratings

Suppose that we are now interested in predicting a numeric rating for movie reviews. We will use a non-linear predictor that takes a movie review $x$ and returns $\sigma(\mathbf w \cdot \phi(x))$, where $\sigma(z) = (1 + e^{-z})^{-1}$ is the logistic function that squashes a real number to the range $(0, 1)$. For this problem, assume that the movie rating $y$ is a real-valued variable in the range $[0, 1]$. Do not use math software such as Wolfram Alpha to solve this problem.

1. Suppose that we wish to use squared loss. Write out the expression for $\text{Loss}(x, y, \mathbf w)$ for a single datapoint $(x,y)$.

   Expected: a mathematical expression for the loss. Feel free to use $\sigma$ in the expression.

2. Given $\text{Loss}(x, y, \mathbf w)$ from the previous part, compute the gradient of the loss with respect to $\mathbf w$, $\nabla_\mathbf{w} \text{Loss}(x, y, \mathbf w)$. Write the answer in terms of the predicted value $p = \sigma(\mathbf w \cdot \phi(x))$.

   Expected: a mathematical expression for the gradient of the loss.

3. Suppose there is one datapoint $(x, y)$ with some arbitrary $\phi(x)$ and $y = 1$. Specify conditions for $\mathbf w$ to make the magnitude of the gradient of the loss with respect to $\mathbf w$ arbitrarily small (i.e. minimize the magnitude of the gradient). Can the magnitude of the gradient with respect to $\mathbf w$ ever be exactly zero? You are allowed to make the magnitude of $\mathbf w$ arbitrarily large but not infinite.

   Hint: try to understand intuitively what is going on and what each part of the expression contributes. If you find yourself doing too much algebra, you're probably doing something suboptimal.

   Motivation: the reason why we're interested in the magnitude of the gradients is that it governs how far gradient descent will step. For example, if the gradient is close to zero when $\mathbf w$ is very far from the optimum, then it could take a long time for gradient descent to reach the optimum (if at all). This is known as the vanishing gradient problem when training neural networks.

   Expected: 1-2 sentences describing the conditions for $\mathbf w$ to minimize the magnitude of the gradient, and 1-2 sentences explaining whether the gradient can be exactly zero.

Problem 3: Sentiment Classification

In this problem, we will build a binary linear classifier that reads movie reviews and guesses whether they are "positive" or "negative." Do not import any outside libraries (e.g. numpy) for any of the coding parts. Only standard Python libraries and/or the libraries imported in the starter code are allowed. In this problem, you must implement the functions without using libraries like Scikit-learn.

1. Implement the function extractWordFeatures, which takes a review (string) as input and returns a feature vector $\phi(x)$, which is represented as a dict in Python.

2. Implement the function learnPredictor using stochastic gradient descent, minimizing the hinge loss. Print the training error and validation error after each epoch to make sure your code is working. You must get less than 4% error rate on the training set and less than 30% error rate on the validation set to get full credit.

3. Write the generateExample function (nested in the generateDataset function) to generate artificial data samples. Use this to double check that your learnPredictor works! You can do this by using generateDataset() to generate training and validation examples. You can then pass in these examples as trainExamples and validationExamples respectively to learnPredictor with the identity function lambda x: x as a featureExtractor.

4. When you run grader.py on test case 3b-2, it should output a weights file and an error-analysis file. Find 3 examples of incorrect predictions. For each example, give a one-sentence explanation of why the classifier got it wrong. State what additional information the classifier would need to get these examples correct.

   Note: The main point is to convey intuition about the problem. There isn't always a single correct answer. You do not need to pick 3 different types of errors and explain each. It suffices to show 3 instances of the same type of error, and for each explain why the classification was incorrect.
   Expected:
   1. 3 sample incorrect predictions, each with one sentence explaining why the classification for that sentence was incorrect.
   2. a single separate paragraph (3-5 sentences) outlining what information the classifier would need to get these predictions correct.

5. Some languages are written without spaces between words, so is splitting on words really necessary, or can we just naively consider strings of characters that stretch across words? Implement the function extractCharacterFeatures (by filling in the extract function), which maps each string of $n$ characters to the number of times it occurs, ignoring whitespace (spaces and tabs).

6. Run your linear predictor with the feature extractor extractCharacterFeatures. Experiment with different values of $n$ to see which one produces the smallest validation error. You should observe that this error is nearly as small as that produced by word features. Why is this the case? Construct a review (one sentence max) in which character $n$-grams probably outperform word features, and briefly explain why this is so.

   Note: There is code in submission.py that will help you test different values of $n$. Remember to write your final written solution in sentiment.pdf.

   Expected:
   1. a short paragraph (~4-6 sentences). In the paragraph, state which value of $n$ produces the smallest validation error and why this is likely the value that produces the smallest error.
   2. a one-sentence review and an explanation for when character $n$-grams probably outperform word features.

Problem 4: K-means clustering

Suppose we have a feature extractor $\phi$ that produces 2-dimensional feature vectors, and a toy dataset $\mathcal D_\text{train} = \{x_1, x_2, x_3, x_4\}$ with

1. $\phi(x_1) = [10, 0]$
2. $\phi(x_2) = [30, 0]$
3. $\phi(x_3) = [10, 20]$
4. $\phi(x_4) = [20, 20]$

1. Run 2-means on this dataset until convergence. Please show your work. What are the final cluster assignments $z$ and cluster centers $\mu$? Run this algorithm twice with the following initial centers:
   1. $\mu_1 = [20, 30]$ and $\mu_2 = [20, -10]$
   2. $\mu_1 = [0, 10]$ and $\mu_2 = [30, 20]$

   Show the cluster centers and assignments for each step.

2. Implement the kmeans function. You should initialize your $k$ cluster centers to random elements of examples. After a few iterations of k-means, your centers will be very dense vectors. In order for your code to run efficiently and to obtain full credit, you will need to precompute certain quantities. As a reference, our code runs in under a second on cardinal, on all test cases. You might find generateClusteringExamples in util.py useful for testing your code. Do not use libraries such as Scikit-learn. (See the sketch after this problem for an illustrative plain version.)

3. Sometimes, we have prior knowledge about which points should belong in the same cluster. Suppose we are given a set $G$ of disjoint sets of points that must be assigned to the same cluster. For example, suppose we have 6 examples; then $G = \{ (1,5), (2,3,4), (6) \}$ says that examples 2, 3, and 4 must be in the same cluster and that examples 1 and 5 must be in the same cluster. Example 6 is in its own group and is unconstrained, so it can be freely assigned to its own cluster, or to a cluster with any other group, depending on initialization and the value of $K$ in kmeans. All examples must appear in $G$ exactly once. Provide the modified k-means algorithm that performs alternating minimization on the reconstruction loss: $$\sum \limits_{i=1}^n \| \mu_{z_i} - \phi(x_i) \|^2,$$ where $\mu_{z_i}$ is the assigned centroid for the feature vector $\phi(x_i)$.
   Hint 1: recall that alternating minimization is when we are optimizing two variables jointly by alternating which variable we keep constant. We recommend starting by first keeping $z$ fixed and optimizing over $\mu$, and then keeping $\mu$ fixed and optimizing over $z$.

   Expected: a mathematical expression representing the modified cluster assignment update rule for the k-means steps, and a brief explanation for each step. Do not modify the problem setup or make additional assumptions on the inputs.

4. What is the advantage of running K-means multiple times on the same dataset with the same K, but different random initializations?

   Expected: a ~1-3 sentence explanation.

5. If we scale all dimensions in our initial centroids and data points by some factor, are we guaranteed to retrieve the same clusters after running K-means (i.e. will the same data points belong to the same cluster before and after scaling)? What if we scale only certain dimensions? If your answer is yes, provide a short explanation; if not, give a counterexample.

   Expected: this response should have two parts. The first should be a yes/no response and explanation or counterexample for the first subquestion (scaling all dimensions). The second should be a yes/no response and explanation or counterexample for the second subquestion (scaling only certain dimensions).
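Below is a minimal, unofficial sketch of plain k-means on small dense 2-D points such as the toy dataset above, referenced from Problem 4.2. It is meant only to make the alternating assignment/update steps concrete; it is not the sparse, precomputation-friendly kmeans implementation the assignment asks for, and its random initialization differs from the fixed centers specified in part 1.

```python
import random

def kmeans_2d(points, k, num_iters=100, seed=0):
    """Plain k-means on 2-D points; centers are initialized to random examples, fixed iteration count."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    assignments = [0] * len(points)
    for _ in range(num_iters):
        # Assignment step: attach each point to its nearest center (squared Euclidean distance).
        for i, (x, y) in enumerate(points):
            assignments[i] = min(
                range(k),
                key=lambda j: (x - centers[j][0]) ** 2 + (y - centers[j][1]) ** 2,
            )
        # Update step: move each center to the mean of its assigned points.
        for j in range(k):
            members = [points[i] for i in range(len(points)) if assignments[i] == j]
            if members:
                centers[j] = [
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                ]
    return centers, assignments

toy = [(10, 0), (30, 0), (10, 20), (20, 20)]
print(kmeans_2d(toy, k=2))
```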
## Kyoto Journal of Mathematics ### A Fock space model for decomposition numbers for quantum groups at roots of unity #### Abstract In this paper we construct an abstract Fock space for general Lie types that serves as a generalization of the infinite wedge $q$-Fock space familiar in type A. Specifically, for each positive integer $\ell$, we define a $\mathbb{Z}[q,q^{-1}]$-module $\mathcal{F}_{\ell }$ with bar involution by specifying generators and straightening relations adapted from those appearing in the Kashiwara–Miwa–Stern formulation of the $q$-Fock space. By relating $\mathcal{F}_{\ell }$ to the corresponding affine Hecke algebra, we show that the abstract Fock space has standard and canonical bases for which the transition matrix produces parabolic affine Kazhdan–Lusztig polynomials. This property and the convenient combinatorial labeling of bases of $\mathcal{F}_{\ell }$ by dominant integral weights makes $\mathcal{F}_{\ell }$ a useful combinatorial tool for determining decomposition numbers of Weyl modules for quantum groups at roots of unity. #### Article information Source Kyoto J. Math., Volume 59, Number 4 (2019), 955-991. Dates Revised: 21 February 2017 Accepted: 22 June 2017 First available in Project Euclid: 22 October 2019 https://projecteuclid.org/euclid.kjm/1571731341 Digital Object Identifier doi:10.1215/21562261-2019-0031 Mathematical Reviews number (MathSciNet) MR4032204 Zentralblatt MATH identifier 07194002 #### Citation Lanini, Martina; Ram, Arun; Sobaje, Paul. A Fock space model for decomposition numbers for quantum groups at roots of unity. Kyoto J. Math. 59 (2019), no. 4, 955--991. doi:10.1215/21562261-2019-0031. https://projecteuclid.org/euclid.kjm/1571731341 #### References • [1] S. Ariki, On the decomposition numbers of the Hecke algebra of $G(m,1,n)$, J. Math. Kyoto Univ. 36 (1996), no. 4, 789–808. • [2] H. Bao, P. Shan, W. Wang, and B. Webster, Categorification of quantum symmetric pairs, I, Quantum Topol. 9 (2018), no. 4, 643–714. • [3] H. Bao and W. Wang, A new approach to Kazhdan–Lusztig theory of type $B$ via quantum symmetric pairs, Astérisque 402, Soc. Math. France, Paris, 2018. • [4] A. Beĭlinson and J. Bernstein, “A proof of Jantzen conjectures” in I. M. Gel’fand Seminar, Part 1 (Moscow, 1993), Adv. Soviet Math. 16, Amer. Math. Soc., Providence, 1993, 1–50. • [5] N. Bourbaki, Éléments de mathématique: Groupes et algèbres de Lie, chapitres 4–6, Masson, Paris, 1981. • [6] J. Dixmier, Enveloping Algebras, revised reprint of the 1977 translation, Grad. Stud. Math. 11, Amer. Math. Soc., Providence, 1996. • [7] J. Du, “IC bases and quantum linear groups” in Algebraic Groups and Their Generalizations: Quantum and Infinite-Dimensional Methods, Part 2 (University Park, 1991), Proc. Sympos. Pure Math. 56, Amer. Math. Soc., Providence, 1994, 135–148. • [8] M. Ehrig and C. Stroppel, Nazarov-Wenzl algebras, coideal subalgebras and categorified skew Howe duality, Adv. Math. 331 (2018), 58–142. • [9] B. Elias and G. Williamson, Soergel calculus, Represent. Theory 20 (2016), 295–374. • [10] Z. Fan, C.-J. Lai, Y. Li, L. Luo, and W. Wang, Affine flag varieties and quantum symmetric pairs, to appear in Mem. Amer. Math. Soc., preprint, arXiv:1602.04383v2 [math.RT]. • [11] F. M. Goodman and H. Wenzl, A path algorithm for affine Kazhdan-Lusztig polynomials, Math. Z. 237 (2001), no. 2, 235–249. • [12] I. Grojnowski, Representations of affine Hecke algebras (and affine quantum $\mathrm{GL}_{n}$) at roots of unity, Int. Math. Res. Not. IMRN 1994, no. 5, 215–217. • [13] I. 
Grojnowski and M. Haiman, Affine Hecke algebras and positivity of LLT and Macdonald polynomials, preprint, 2007, http://math.berkeley.edu/~mhaiman/ftp/llt-positivity/new-version.pdf. • [14] T. Hayashi, $q$-analogues of Clifford and Weyl algebras—spinor and oscillator representations of quantum enveloping algebras, Comm. Math. Phys. 127 (1990), no. 1, 129–144. • [15] J. E. Humphreys, Reflection Groups and Coxeter Groups, Cambridge Stud. Adv. Math. 29, Cambridge Univ. Press, Cambridge, 1990. • [16] G. James and A. Mathas, A q-analogue of the Jantzen-Schaper theorem, Proc. Lond. Math. Soc. (3) 74 (1997), no. 2, 241–274. • [17] V. G. Kac, Infinite-Dimensional Lie Algebras, 3rd ed., Cambridge Univ. Press, Cambridge, 1990. • [18] M. Kashiwara, T. Miwa, and E. Stern, Decomposition of $q$-deformed Fock spaces, Selecta Math. (N.S.) 1 (1995), no. 4, 787–805. • [19] M. Kashiwara and T. Tanisaki, Kazhdan-Lusztig conjecture for affine Lie algebras with negative level: Nonintegral case, Duke Math. J. 77 (1995), no. 1, 21–62. • [20] M. Kashiwara and T. Tanisaki, Kazhdan-Lusztig conjecture for affine Lie algebras with negative level, II: Nonintegral case, Duke Math. J. 84 (1996), no. 3, 771–813. • [21] D. Kazhdan and G. Lusztig, Representations of Coxeter groups and Hecke algebras, Invent. Math. 53 (1979), no. 2, 165–184. • [22] D. Kazhdan and G. Lusztig, Tensor structures arising from affine Lie algebras, I, J. Amer. Math. Soc. 6 (1993), no. 4, 905–947. ; II, 949–1011. ; III, 7 (1994), no. 2, 335–381. ; IV, 383–453. • [23] A. Kleshchev, Linear and Projective Representations of Symmetric Groups, Cambridge Tracts in Math. 163, Cambridge Univ. Press, Cambridge, 2005. • [24] A. Lascoux, B. Leclerc, and J.-Y. Thibon, Une conjecture pour le calcul des matrices de décomposition des algèbres de Hecke de type A aux racines de l’unité, C. R. Acad. Sci. Paris Sér. I Math. 321 (1995), no. 5, 511–516. • [25] B. Leclerc, “Fock space representations of $U_{q}(\widehat{sl}_{n})$” in Geometric Methods in Representation Theory, I, Sémin. Congr. 24-I, Soc. Math. France, Paris, 2012, 343–385. • [26] B. Leclerc and J-Y. Thibon, “Littlewood-Richardson coefficients and Kazhdan-Lusztig polynomials” in Combinatorial Methods in Representation Theory (Kyoto, 1998), Adv. Stud. Pure Math. 28, Kinokuniya, Tokyo, 2000, 155–220. • [27] G. Lusztig, “Left cells in Weyl groups” in Lie Group Representations, I (College Park, 1982/1983), Lecture Notes in Math. 1024, Springer, Berlin, 1983, 99–111. • [28] G. Lusztig, “Modular representations and quantum groups” in Classical Groups and Related Topics (Beijing, 1987), Contemp. Math. 82, Amer. Math. Soc., Providence, 1989, 59–77. • [29] G. Lusztig, Canonical bases arising from quantized enveloping algebras, J. Amer. Math. Soc. 3 (1990), no. 2, 447–498. • [30] G. Lusztig, On quantum groups, J. Algebra 131 (1990), no. 2, 466–475. • [31] G. Lusztig, Monodromic systems on affine flag manifolds Proc. Roy. Soc. London Ser. A 445 (1994), no. 1923, 231–246. ; Errata, Proc. Roy. Soc. London Ser. A 450 (1995), no. 1940, 731–732. • [32] T. Miwa, M. Jimbo, and E. Date, Solitons: Differential Equations, Symmetries and Infinite-Dimensional Algebras, Cambridge Tracts in Math. 135, Cambridge Univ. Press, Cambridge, 2000. • [33] K. Nelsen and A. Ram, “Kostka–Foulkes polynomials and Macdonald spherical functions” in Surveys in Combinatorics, 2003 (Bangor), London Math. Soc. Lecture Note Ser. 307, Cambridge Univ. Press, Cambridge, 2003, 325–370. • [34] R. Orellana and A. 
Ram, “Affine braids, Markov traces and the category $\mathcal{O}$” in Algebraic Groups and Homogeneous Spaces, Tata Inst. Fund. Res. Stud. Math. 19, Tata Inst. Fund. Res., Mumbai, 2007, 423–473. • [35] A. Ram and P. Tingley, Universal Verma modules and the Misra-Miwa Fock space, Int. J. Math. Math. Sci. 2010, art. ID 326247. • [36] S. Riche and G. Williamson, Tilting Modules and the $p$-Canonical Basis, Astérisque 397, Soc. Math. France, Paris, 2018. • [37] P. Shan, Graded decomposition matrices of $v$-Schur algebras via Jantzen filtration, Represent. Theory 16 (2012), 212–269. • [38] W. Soergel, Kazhdan-Lusztig polynomials and a combinatoric[s] for tilting modules, Represent. Theory 1 (1997), 83–114. • [39] W. Soergel, Character formulas for tilting modules over Kac-Moody algebras, Represent. Theory 2 (1998), 432–448. • [40] J. R. Stembridge, The partial order of dominant weights, Adv. Math. 136 (1998), no. 2, 340–364. • [41] P. Tingley, Notes on Fock space, preprint, 2011, http://webpages.math.luc.edu/~ptingley/lecturenotes/Fock_space-2010.pdf. • [42] M. Varagnolo and E. Vasserot, On the decomposition matrices of the quantized Schur algebra, Duke Math. J. 100 (1999), no. 2, 267–297.
# Canonical ensemble

Variables:

• Number of Particles, $N$
• Volume, $V$
• Temperature, $T$

## Partition Function

The partition function, $Q$, for a system of $N$ identical particles each of mass $m$ is given by

$Q_{NVT}=\frac{1}{N!h^{3N}}\iint d{\mathbf p}^N d{\mathbf r}^N \exp \left[ - \frac{H({\mathbf p}^N,{\mathbf r}^N)}{k_B T}\right]$

where $h$ is Planck's constant, $T$ is the temperature, $k_B$ is the Boltzmann constant and $H({\mathbf p}^N, {\mathbf r}^N)$ is the Hamiltonian corresponding to the total energy of the system. For a classical one-component system in a three-dimensional space, $Q_{NVT}$ is given by:

$Q_{NVT} = \frac{V^N}{N! \Lambda^{3N} } \int d (R^*)^{3N} \exp \left[ - \beta U \left( V, (R^*)^{3N} \right) \right] ~~~~~~~~~~ \left( \frac{V}{N\Lambda^3} \gg 1 \right)$

where:

• $\beta := \frac{1}{k_B T}$, with $k_B$ being the Boltzmann constant, and $T$ the temperature.
• $\Lambda := \sqrt{\frac{h^2}{2 \pi m k_B T}}$ is the de Broglie thermal wavelength.
• $U$ is the potential energy, which depends on the coordinates of the particles (and on the interaction model).
• $\left( R^*\right)^{3N}$ represents the $3N$ position coordinates of the particles (reduced with the system size): i.e. $\int d (R^*)^{3N} = 1$.
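As a small numerical illustration (not part of the original page), the sketch below evaluates $\ln Q_{NVT}$ in the non-interacting limit $U = 0$, where the reduced configurational integral equals 1 and the expression above collapses to $Q = V^N / (N!\,\Lambda^{3N})$. The particle mass, volume, and temperature in the example are arbitrary argon-like values chosen only to satisfy $V/(N\Lambda^3) \gg 1$.

```python
import math

# Physical constants (SI units).
H = 6.62607015e-34   # Planck constant, J s
KB = 1.380649e-23    # Boltzmann constant, J/K

def thermal_wavelength(mass, temperature):
    """de Broglie thermal wavelength Lambda = h / sqrt(2 pi m k_B T), in metres."""
    return H / math.sqrt(2.0 * math.pi * mass * KB * temperature)

def ln_Q_ideal(n_particles, volume, temperature, mass):
    """ln Q_NVT for a non-interacting (U = 0) system, where the reduced configurational
    integral equals 1:  ln Q = N ln V - ln N! - 3N ln Lambda."""
    lam = thermal_wavelength(mass, temperature)
    return (n_particles * math.log(volume)
            - math.lgamma(n_particles + 1)
            - 3.0 * n_particles * math.log(lam))

# Example: 1000 argon-like atoms (m ~ 6.63e-26 kg) in 1e-24 m^3 at 300 K.
print(ln_Q_ideal(1000, 1e-24, 300.0, 6.63e-26))
```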
Low-dose CT (LDCT) is widely accepted as the preferred method for detecting pulmonary nodules. However, the determination of whether a nodule is benign or malignant involves either repeated scans or invasive procedures that sample the lung tissue. Noninvasive methods to assess these nodules are needed to reduce unnecessary invasive tests. In this study, we have developed a pulmonary nodule classifier (PNC) using RNA from whole blood collected in RNA-stabilizing PAXgene tubes that addresses this need. Samples were prospectively collected from high-risk and incidental subjects with a positive lung CT scan. A total of 821 samples from 5 clinical sites were analyzed. Malignant samples were predominantly stage 1 by pathologic diagnosis and 97% of the benign samples were confirmed by 4 years of follow-up. A panel of diagnostic biomarkers was selected from a subset of the samples assayed on Illumina microarrays that achieved a ROC-AUC of 0.847 on independent validation. The microarray data were then used to design a biomarker panel of 559 gene probes to be validated on the clinically tested NanoString nCounter platform. RNA from 583 patients was used to assess and refine the NanoString PNC (nPNC), which was then validated on 158 independent samples (ROC-AUC = 0.825). The nPNC outperformed three clinical algorithms in discriminating malignant from benign pulmonary nodules ranging from 6–20 mm using just 41 diagnostic biomarkers. Overall, this platform provides an accurate, noninvasive method for the diagnosis of pulmonary nodules in patients with non–small cell lung cancer. Significance: These findings describe a minimally invasive and clinically practical pulmonary nodule classifier that has good diagnostic ability at distinguishing benign from malignant pulmonary nodules. With cigarette smoking as the acknowledged root cause, lung cancer remains the primary source of cancer-related deaths worldwide. This is, in part, due to the lack of adequate early detection protocols, and, in part, because early symptoms are so subtle. The demonstration that lung cancer screening by low-dose CT (LDCT) reduces mortality among high-risk current and former smokers (>55 years, >30 pack years; refs 1–3) led to an overall increase in LDCT screening programs (4). Although LDCT does identify significantly smaller nodules than conventional X-rays, this ability comes with the challenge of distinguishing the small percentage of pulmonary nodules that are malignant from the majority of those detected that are benign (5). The National Lung Screening Trial (NLST) detected lung nodules ≥4 mm in diameter in 40% of the patients screened, with 96.4% being false positives over the 3 rounds of screening (6). To reduce this high false positive rate, the recent Lung-RADS classification (7) and new guidelines from the Fleischner group (8) set the detection of nodules ≥6 mm as the positive threshold. However, positive CT scans remain particularly problematic for that class of indeterminate pulmonary nodules (IPN), which range in size from 6 to 20 mm, for which the best course of clinical action is not well specified (6). Our earlier studies demonstrated that rapidly purified (within 2 hours) peripheral blood mononuclear cells (PBMC) contain gene expression data that can distinguish benign from malignant lung nodules with high accuracy (9). This work established a new paradigm in nodule diagnosis by showing that even an early-stage cancer in the lung affects gene expression in PBMC that is predictive of malignancy. 
However, this approach was limited by the need to rapidly purify PBMCs from blood samples to maintain sample consistency and RNA integrity. This made it difficult to collect samples in environments where rapid isolation of PBMC was not possible, including most community clinics and physician offices. In addition, the microarrays, which were so useful for diagnostic development, are technically complicated and prone to variabilities associated with reagent batches and enzymatic processes, making them less amenable to clinical applications. The high quality of RNA required for microarray studies is also potentially problematic for studies with patient-derived samples (10). This retrospective/prospective study sought to determine whether accuracies similar to what we achieved in our PBMC studies (9) could be achieved with RNA from whole blood collected in RNA-stabilizing PAXgene tubes. PAXgene RNA is stabilized at the time of collection, immediately fixing the gene expression patterns. The RNA is stable at 15°C–25°C for 5 days and at −20 to −70°C for 8 years. This allows samples to be collected in any clinical setting where blood is drawn without the need for special equipment for storage or for cell purifications (11–13) and allows samples to be transferred to a central facility for testing, as routinely as with other blood tests. In addition, long-term storage with no loss of RNA integrity makes the system well suited for retrospective analyses. We also asked whether a PAXgene signature developed on Illumina microarrays could be transitioned to the NanoString nCounter platform already FDA-approved for the Prosigna Breast Cancer prognosis assay (14) and more recently used to develop a clinical-grade assay that predicts clinical response to PD-1 checkpoint blockade. This PD-1 assay is currently being evaluated in ongoing pembrolizumab clinical trials (15). Because the NanoString assays do not include any enzymatic reactions or amplification steps, the system avoids potential reagent batch effects and PCR biases while decreasing the opportunities for cross contamination by minimizing sample handling. Although we recognized that a gene expression profile from whole blood would be of a greater complexity and could potentially result in a reduction in important diagnostic signals, there was also the prospect that important additional cell types might contribute to the classifier performance. We now report that the gene expression in whole blood, collected using PAXgene RNA stabilization tubes, can distinguish benign from malignant lung nodules detected by LDCT with high accuracy on independent validation and also report the successful transition of this pulmonary nodule classifier (PNC) from the microarray developmental platform to the NanoString nCounter platform. ### Study design The process of biomarker selection and validation across all studies is summarized in Fig. 1. A total of 821 samples from patients with malignant and benign pulmonary nodules were analyzed across three platforms: Illumina microarrays, the NanoString Pan Cancer Immune (PCI) panel, and finally a custom NanoString custom panel. Microarray data from 264 patient samples (Table 1; Supplementary Table S1) from 4 clinical sites was used for microarray model development. Estimations of performance were based on an independent validation set of 51 samples. 
In addition, 220 samples, including 201 of the 264 microarray samples, were analyzed on the NanoString PCI platform to select additional biomarkers to be included in the custom NanoString panel. Samples from a fifth collection site not included in the biomarker selection process were analyzed only on the custom NanoString platform. The final NanoString PNC (nPNC) was developed on the data generated from the custom NanoString panel using 583 training samples [included 215 samples used originally in the microarray training set, and 368 samples (70%) never used for the biomarker selection] and validated using a set of 158 independent samples never involved in probe selection. The characteristics of the samples used at the different steps of the classifier development are shown in Supplementary Table S2.

Figure 1. Study design. A total of 821 unique samples were analyzed in this study. Illumina HT12v4 microarrays and the NanoString PCI panel were used to select candidate biomarker probes using 283 total samples. A total of 264 samples were used for biomarker selection on microarrays and 201 of the 264 + 19 new samples were used to select the biomarkers from the PCI panel. The 51 samples used for validation were not used in any biomarker selection. A total of 559 of the biomarkers selected from the microarray and the PCI panel analyses were successfully designed for the NanoString custom panel. The custom panel was assayed with 237 of the samples used in probe selection, to ensure that the new platform successfully reproduced the microarray results, and an additional 346 independent samples not previously assayed on any platform (total 583). The 583 training samples were used to create a nPNC. An additional 158 samples that were never involved in NanoString probe selection were used for Nanostring custom platform (346 for training and 141 for validation) for a total of 821 independent samples. MN, malignant; BN, benign.

Table 1.
Patient demographics for samples in the microarray study

| Category | Training: malignant nodules | Training: benign nodules | P | Validation: malignant nodules | Validation: benign nodules | P |
|---|---|---|---|---|---|---|
| Total N | 131 | 133 |  | 33 | 18 |  |
| Gender: Female | 73 (56%) | 68 (51%) | 0.454 | 24 (73%) | 11 (61%) | 0.529 |
| Gender: Male | 58 (44%) | 65 (49%) |  | 9 (27%) | 7 (39%) |  |
| Age | 67 ± 7 | 65 ± 7 | 0.029 | 72 ± 7 | 64 ± 7 | 0.0057 |
| Race: Black | 19 (15%) | 17 (13%) | 0.121 | 3 (9%) | 2 (11%) |  |
| Race: White | 107 (82%) | 108 (81%) |  | 30 (91%) | 15 (83%) | 0.375 |
| Race: Other | 5 (3%) | 8 (6%) |  | 0 (0%) | 1 (6%) |  |
| Smoking status: Current | 33 (25%) | 47 (35%) |  | 8 (24%) | 6 (33%) |  |
| Smoking status: Former | 90 (69%) | 81 (61%) | 0.165 | 23 (70%) | 10 (56%) | 0.466 |
| Smoking status: Never | 8 (6%) | 5 (5%) |  | 2 (6%) | 1 (5.5%) |  |
| Smoking status: Unknown | 0 (0%) | 0 (0%) |  | 0 (0%) | 1 (5.5%) |  |
| Pack years | 40 ± 21 | 38 ± 14 | 0.725 | 36 ± 19 | 41 ± 21 | 0.962 |
| Lesion size, mm | 22 ± 8 | 8 ± 4 | 1 × 10⁻¹³ | 17 ± 4 | 15 ± 4 | 0.327 |
| Cancer stage: I | 87 (66%) |  |  | 33 (100%) |  |  |
| Cancer stage: II | 23 (18%) |  |  | 0 (0%) |  |  |
| Cancer stage: III | 7 (5%) |  |  | 0 (0%) |  |  |
| Cancer stage: IV | 9 (7%) |  |  | 0 (0%) |  |  |
| Cancer stage: Unknown | 5 (4%) |  |  | 0 (0%) |  |  |

NOTE: "Training" and "Validation" refer to the Illumina training and validation sets. Median ± interquartile range is given for continuous values. P values indicate significance of comparison between malignant and benign nodule groups.

### Study population

Samples were prospectively collected from incidental subjects with a positive LDCT from 5 clinical sites including Helen F. Graham Cancer Center (Newark, DE), The Hospital of the University of Pennsylvania (Philadelphia, PA), Roswell Park Comprehensive Cancer Center (Buffalo, NY), Temple University Hospital (Philadelphia, PA), and subjects from New York University Langone Medical Center (New York, NY). The NYU subjects included patients recruited as a part of an EDRN lung screening program at NYU. The study was Institutional Review Board–approved at each participating site and conducted according to the principles expressed in the Declaration of Helsinki. All participants signed an informed consent before being enrolled. The study population was primarily smokers and ex-smokers, >50 years of age with >20 pack-years of smoking history, and no previous cancer in the past 5 years (except for nonmelanoma skin cancer). Nodules were confirmed as malignant or benign by repeated imaging or by pathologic diagnosis through bronchoscopy, biopsy, and/or lung resection. In addition, >97% of benign nodules had 4 or more years of follow-up with the remainder having 2 or more years at the time of analysis. Samples associated with MNs were collected within 3 months of definitive diagnosis or prior to any invasive procedure including curative surgery. A small number of participants were found to be never smokers after they had been assayed. The effect on classifier performance of including these samples was assessed. In cases where multiple nodules were present, the diameter of the largest nodule was reported.
### RNA purification, quality assessment, and microarrays

Each collection site was provided with a standard protocol for sample collection and storage as specified by Preanalytix [https://www.preanalytix.com/products/blood/rna for the PAXgene Blood RNA Tube (IVD)]. Samples were either stored on site and then bulk transferred overnight on dry ice, or they were transferred to Wistar by courier on the day of collection and stored at −70°C until processing. Total RNA was isolated using the PAXgene miRNA Kit (Qiagen), to capture miRNAs as well as mRNAs. Samples were quantitated with a NanoDrop 1000 Spectrophotometer (Thermo Fisher Scientific) and assayed for RNA integrity on the Agilent 2100 BioAnalyzer. Average RNA yields were 3 μg per 2.5 mL of blood, and RNA integrity numbers (RIN) averaged >8. Only samples with RIN >7.5 were used for the microarray studies. A constant amount (100 ng) of total RNA was amplified (aRNA) using the Illumina-approved RNA Amplification Kit (Epicenter) and hybridized to the Human-HT12 v4 human whole-genome bead arrays. Microarrays were processed in sets of 48 to minimize potential batch effects.

### NanoString assay conditions

The NanoString hybridization was carried out for a constant 19 hours (within the recommended 12–25 hours) at 65°C. Posthybridization processing in the nCounter Prep Station used the standard settings. The cassette scanning parameter was set at high [555 field of view (FOV)]. Supplementary Fig. S1A shows the total normalized counts detected using 100 ng, 200 ng, and 300 ng of total RNA scanned at the low (280 FOV) and the high (555 FOV) settings. The 555 FOV setting significantly increases the overall signal. All assays were carried out using this setting. The standard sample size was 100 ng, an amount we expected to have in all samples. Supplementary Figure S1B shows the stability of the assay across multiple repeats of the Universal Human RNA control sample. Variations of less than 5% were observed for the majority of the gene probes with 50 or more detected counts. Supplementary Figure S1C–S1F show that although most sample RIN numbers were above eight, even samples with RINs ≤3 met all four NanoString quality measures, supporting the platform's utility with degraded RNA samples. We found no significant impact on the overall expression profiles for the degraded RNA.

### Statistical analysis

#### Microarrays.

Microarray raw expression data were exported for analysis using Genome Studio software. The raw data were quantile normalized and log2-scaled. Genes with average expression values ≥2× the background levels were used to develop the PNC using support vector machines with recursive feature elimination (SVM-RFE) and 10-fold, 10-resample cross-validation (see details below). The top ranked probes (Borda count) that most accurately distinguished malignant from benign nodules were selected as candidates for the NanoString custom panel. The SVM training set was also stratified into subset A, which contained smaller nodules and stage I and II cancers, and subset B, which contained malignant and benign nodules that were balanced for lesion size (Supplementary Table S1). The additional analyses of sets A and B considered either nodule class alone (malignant or benign) or sample class plus collection site as the factors in a linear regression model for each observed gene expression.
This resulted in six different regression models and two additional sets of genes were selected from these analyses for inclusion in the NanoString model based on the following parameters: (i) 59 genes with a minimal P value across the comparisons using P < 1 × 10−4 threshold and (ii) 76 genes with a maximum regression coefficient b > log2(1.2) at P < 0.01. Housekeeping (HK) genes were selected from a candidate pool of well-expressed genes (>5× background) with coefficients of variation (CV) for the absolute and log2-scaled expression less than 20% and 2.5%, respectively. Twenty candidate HK genes were ultimately selected: the 12 HK candidate genes with the least CV and 8 candidate genes that overlapped with existing NanoString HK probes. #### NanoString. Background correction was performed on NanoString PanCancer Immune Panel samples by subtracting the geometric mean of the counts of negative controls. The sample counts were normalized by scaling all the values by the ratios of geometric mean of sample controls to the overall geometric mean of control gene counts across all samples. This was done for both spike-in positive controls as well as for HK genes. The NanoString custom panel was quantile normalized and the NanoString Code Set batch differences were corrected using the ratios of expression of samples replicated between code sets, as per NanoString's recommendation. Z-scores were calculated from the final values of custom panel counts and used as inputs for SVM-RFE. #### SVM-RFE data analysis. Supervised classification using a linear kernel SVMs with RFE (16) was used to analyze a z-score–transformed gene expression data set to develop the microarray classifier based on a training set that can distinguish malignant and benign patient classes. A balanced set of cases and controls was used in classifier development as SVM have been shown to require a balanced input for the development of the most accurate classifiers (17). The independent validation, which tests the validity of the classifier developed in the training on a completely new set of samples, is blinded to the identification of samples as either cases or controls. As described previously (9), we employed a 10-fold cross-validation approach with folds resampled 10 times (100 training–testing split models). For each split of the microarray data, the top 1,000 probes ranked by P value (two-tailed t test on 9-folds) were selected and linear kernel SVM was trained on 9-folds and tested on the remaining folds. Each RFE iteration eliminated 10% of the features with the least absolute model weights, in each round as described by Guyon and colleagues (16). A single feature elimination per SVM iteration was used for NanoString data analysis. The final average scores were calculated as follows. The final score for any sample in a training set is calculated as an average among the scores generated for that sample in all testing folds (10 such folds among all 100 splits). The final score for any independent sample in a validation set is calculated as an average among all 100 split models. Each sample is then assigned to a class using the final average scores and a score threshold determined from the training set (0 for unbiased accuracy, or at a fixed threshold corresponding to 90% sensitivity) and sensitivity, specificity and accuracy were calculated. Probes ranking across all 100 splits were combined according to the following procedure based on the Borda count method. 
In each ranked list $n$, each gene $i$ was assigned a score $s_{i,n} = \frac{1}{r_{i,n}}$, where $r_{i,n}$ is the rank of gene $i$ in list $n$. A final score $FS_i$ for each gene $i$ was calculated by taking the sum of the scores of gene $i$ across all 100 lists: $FS_i = \sum_{n = 1}^{100} s_{i,n}$. The resulting final scores for each gene were then used to assign their ranking in the classifier. Final ranking of the probes was produced using all 741 available NanoString samples. The optimal number of probes for microarray data was determined as the minimum number of probes that maintained an ROC-AUC within 1% of the ROC-AUC achieved by the SVM with the top 1,000 gene probes. For the NanoString custom panel, the optimal number of genes was chosen by determining the point where the removal of additional genes/probes resulted in a decline in classification performance. The performance was assessed by determining the ROC-AUC after the removal of each gene using the moving average with a smoothing window size of 5. The probe number at which the ROC-AUC was at maximum was selected as the final optimal classifier.

### Data and materials availability

Microarray data have been deposited to the NCBI GEO database (https://www.ncbi.nlm.nih.gov/geo) under accession number GSE108375.

### Testing for a lung cancer–related gene signature using peripheral blood RNA

The demographic characteristics for the 315 patients used to develop and validate the microarray lung cancer signature are shown in Table 1. The samples used for model building were primarily early stage non–small cell lung cancer, with stage I + II cancers comprising 84% of the training set population and 100% of the cancers in the independent validation set being stage I. Gene expression from 264 samples (Table 1, Illumina Training set) was used to select the microarray gene signature that most accurately distinguished malignant from benign lung nodules using SVM-RFE (9, 16). The accuracy of classification was stable across a wide range of probe numbers (Supplementary Fig. S2A) and a panel of the 1,000 highest ranked probes achieved an ROC-AUC of 0.878. As the performance slowly decreased with elimination of lower ranked probes, we selected the smallest number of probes that maintained an ROC-AUC within 1% of the 0.878 achieved by the top 1,000 SVM gene probes. We identified 311 probes that returned an AUC of 0.866 [95% confidence interval (CI): 0.824–0.910] in the training set (sensitivity 77.9%, specificity 74.4%). The performance was well maintained on independent validation (Table 1, Illumina Validation), achieving an AUC of 0.847 (95% CI: 0.742–0.951) (sensitivity 72.7%, specificity 88.9%), similar to the performance of the training set (Supplementary Fig. S2B and S2C). This demonstrated that the accuracy of prediction is similar to that of the 29 gene classifier reported from our purified PBMC study (9) and indicates that the presence of cancer in the lung can also be detected in the PAXgene-collected blood RNA with an equal and, in some cases, better performance. Importantly, the performance is maintained as the number of probes is reduced (Supplementary Fig. S2A and S2B), indicating a robust signature is maintained across different numbers of genes.

### Transitioning the PNC from microarrays to the NanoString platform

Having verified that mRNA expression from PAXgene samples can distinguish malignant from benign pulmonary nodules, we developed a strategy to transition the microarray-based PNC to the NanoString nCounter platform (14).
Because it was difficult to know, a priori, how the microarray expression measurements would replicate on the NanoString platform, we designed the custom panel to contain enough redundancy to mitigate platform differences. We included the top ranked 300 biomarkers from the Illumina gene panel identified by the SVM-RFE analysis. We also included an additional set of 59 markers representing the most significantly differentially expressed probes at P < 1 × 10−4 and 79 probes that exhibited the largest fold change in expression between the malignant and benign groups while maintaining P < 0.01. A set of 20 HK genes with the most consistent expression on the microarrays (see Materials and Methods) was also added. Because the mRNA samples to be processed on the NanoString platform did not undergo reverse transcription and PCR amplification, we were concerned that some of the microarray probes we selected might be expressed at levels too low for detection without amplification. To establish performance criteria, we analyzed 220 of the mRNA samples with the NanoString PCI panel (catalog no. XT-CSO-HIP1-12; 115 malignant, 105 benign). Although the actual probes were not identical to the Illumina probes, 755 out of 770 genes represented in the PCI panel were also represented on the microarray platform. This study allowed us to correlate the detectable levels of gene expression between the two platforms, providing an estimate of the expression levels that could be robustly detected on both platforms. The results suggested that the probes detected at 2× the background levels on microarrays were robustly detected on the NanoString platform. The PCI data was also analyzed using SVM-RFE with 10-fold 10-resample cross-validation and although the PCI panel only demonstrated a ROC-AUC = 0.754 compared with the 0.866 achieved with the microarrays (Supplementary Fig. S2D), we selected 106 of the most discriminatory PCI probes for inclusion in our custom panel. An additional 55 probes for genes that were identified as being associated with outcome in our PBMC microarray studies (18, 19) were also added, bringing the number to 619 potential probes for the NanoString custom panel. Supplementary Table S3 summarizes the sources of the final list of the candidate biomarkers selected for the custom NanoString panel. The NanoString probes were then designed to target the same or closely located transcriptome regions as those targeted by the Illumina microarray probes whenever possible. Probes that met the NanoString quality control criteria were successfully designed for 559 of the 619 selected biomarkers (Supplementary File S1). ### Developing, refining, and validating the nPNC We first assessed how well the classification accuracies had been retained between the Illumina and NanoString platforms by reassaying 199 of the samples from the microarray training set. For the comparison of the classification accuracies, we only used the 276 microarray biomarkers that were successfully designed as NanoString probes. We observed a Spearman correlation ρ = 0.73 (P < 1 × 10−12) for the sample classification scores between the two platforms (Supplementary Fig. S3A). The ROC-AUC based on the 276 probes was 0.881 for the microarrays and 0.838 for the NanoString (Supplementary Fig. S3B), indicating that the platform transition was successful. To carry out an unbiased assessment of the performance of the custom panel, we analyzed a total of 741 patient samples, including samples from a new fifth collection site. 
The final nPNC training set of 583 samples and the validation set of 158 samples had balanced numbers of malignant and benign samples (Table 2) to provide the best conditions for selecting a classifier with good sensitivity as well as specificity (17).

Table 2. Demographics of samples assayed with NanoString custom panel

| Category | Training: malignant nodules | Training: benign nodules | P | Validation: malignant nodules | Validation: benign nodules | P |
|---|---|---|---|---|---|---|
| Total N | 290 | 293 |  | 74 | 84 |  |
| Gender: Female | 155 (53%) | 145 (49%) | 0.2530 | 45 (61%) | 44 (52%) | 0.2728 |
| Gender: Male | 135 (47%) | 146 (50%) |  | 29 (39%) | 38 (45%) |  |
| Gender: Unknown | 0 (0%) | 2 (1%) |  | 0 (0%) | 2 (2%) |  |
| Age | 68 ± 6 | 62 ± 6 | 3 × 10⁻¹⁰ | 69 ± 7 | 65 ± 7 | 0.0097 |
| Race: Black | 34 (12%) | 20 (7%) | 0.0020 | 5 (7%) | 10 (12%) | 0.5134 |
| Race: Other | 35 (12%) | 17 (6%) |  | 8 (11%) | 10 (12%) |  |
| Race: White | 221 (76%) | 256 (87%) |  | 61 (82%) | 64 (76%) |  |
| Smoking status: Current | 73 (25%) | 110 (38%) | 0.0124 | 23 (31%) | 22 (26%) | 0.8885 |
| Smoking status: Former | 198 (68%) | 170 (58%) |  | 45 (61%) | 54 (64%) |  |
| Smoking status: Never | 15 (5%) | 11 (4%) |  | 5 (7%) | 6 (7%) |  |
| Smoking status: Unknown | 4 (1%) | 2 (1%) |  | 1 (1%) | 2 (2%) |  |
| Pack years | 40 ± 20 | 38 ± 13 | 0.6066 | 42 ± 19 | 42 ± 15 | 0.5767 |
| Lesion size, mm | 22 ± 10 | 8 ± 4 | 4 × 10⁻³⁷ | 18 ± 6 | 7 ± 4 | 6 × 10⁻¹⁰ |
| Cancer stage: I | 173 (60%) |  |  | 47 (64%) |  |  |
| Cancer stage: II | 41 (14%) |  |  | 9 (12%) |  |  |
| Cancer stage: III | 49 (17%) |  |  | 10 (14%) |  |  |
| Cancer stage: IV | 4 (1%) |  |  | 1 (1%) |  |  |
| Cancer stage: Unknown | 23 (8%) |  |  | 7 (9%) |  |  |
| Site^a: HFGCC | 146 (50%) | 49 (17%) | 6 × 10⁻²⁶ | 37 (50%) | 15 (18%) | 2 × 10⁻⁶ |
| Site^a: NYU | 74 (26%) | 198 (68%) |  | 24 (32%) | 62 (74%) |  |
| Site^a: Roswell-Park | 11 (4%) | 13 (4%) |  | 7 (9%) | 6 (7%) |  |
| Site^a: Temple | 3 (1%) | 11 (4%) |  | 0 (0%) | 0 (0%) |  |
| Site^a: UPenn | 56 (19%) | 22 (8%) |  | 6 (8%) | 1 (1%) |  |

NOTE: "Training" and "Validation" refer to the NanoString lung nodule classifier training and validation sets. Median ± interquartile range is given for continuous values. P values indicate significance of comparison between malignant and benign nodule groups.

^a HFGCC, Helen F. Graham Cancer Center; NYU, Langone Medical Center; Temple, Temple University Hospital; Roswell, Roswell Park Cancer Center; UPenn, University of Pennsylvania, Perelman School of Medicine.

The classification model using all 559 probes demonstrated an ROC-AUC of 0.833 (95% CI: 0.799–0.864) on the training set and an ROC-AUC of 0.826 (95% CI: 0.760–0.891) on the independent validation set (Fig. 2A). The training set performance remained stable during the recursive feature elimination process (Supplementary Fig. S4A). Incrementally decreasing sets of probes achieved similar ROC-AUCs (Fig. 2B–D; Supplementary Fig. S4B).
Sensitivities, specificities, and positive and negative predictive values (PPV and NPV) are also similar in both training and validation sets (Table 3). The rank of each gene in the classifier based on the training set (Supplementary Fig. S4A) and the combined set (Supplementary Fig. S4C) is shown in Supplementary File S1. Although the 41 probe classifier achieved an AUC of 0.834 (95% CI: 0.800–0.865) for training and 0.825 (95% CI: 0.759–0.890) for the independent validation (Fig. 2C), even using as few as six probes maintained an ROC-AUC above 0.8, though there is a slight drop in the validation set performance (Fig. 2D).

Figure 2. Classification performance of NanoString lung nodule classifier. A–D, Comparison of ROC-AUC in training and validation sets with progressive reduction of the numbers of probes. E, The calculated probability of malignancy for an individual nodule for different classification scores using the 41 probe nPNC.

Table 3. Classification performance using different numbers of probes

| Set | Performance metric | 559 probes | 100 probes | 41 probes | 6 probes |
|---|---|---|---|---|---|
| Training | Sensitivity | 76.5% | 73.4% | 74.7% | 73.7% |
| Training | Specificity | 76.6% | 73.8% | 74.8% | 73.4% |
| Training | Accuracy | 75.8% | 74.6% | 74.3% | 73.6% |
| Training | ROC-AUC | 0.833 | 0.825 | 0.834 | 0.800 |
| Training | ROC-AUC 95% CI | 0.799–0.864 | 0.790–0.857 | 0.800–0.865 | 0.765–0.836 |
| Training | Specificity at 90% sensitivity | 53.2% | 52.9% | 51.9% | 45.1% |
| Training | Positive predictive value (PPV)^a | 0.056 | 0.049 | 0.052 | 0.048 |
| Training | Negative predictive value (NPV)^a | 0.994 | 0.993 | 0.994 | 0.993 |
| Validation | Sensitivity | 67.6% | 67.6% | 68.9% | 52.7% |
| Validation | Specificity | 83.3% | 83.3% | 82.1% | 85.7% |
| Validation | Accuracy | 75.9% | 75.9% | 75.9% | 70.3% |
| Validation | ROC-AUC | 0.826 | 0.817 | 0.825 | 0.782 |
| Validation | ROC-AUC 95% CI | 0.760–0.891 | 0.749–0.885 | 0.759–0.890 | 0.709–0.855 |
| Validation | Specificity at 90% sensitivity | 46.4% | 36.9% | 51.2% | 32.1% |
| Validation | Positive predictive value (PPV)^a | 0.069 | 0.069 | 0.066 | 0.063 |
| Validation | Negative predictive value (NPV)^a | 0.993 | 0.993 | 0.993 | 0.990 |

^a Calculated using prevalence of 1.8% of lung cancer observed in the National Lung Screening Trial (NLST).

The optimal 41 gene signature had an unbiased sensitivity and specificity of 68.1% and 82.1%, respectively. It achieved a specificity of 51% at a sensitivity of 90% for both training and validation, with NPV and PPV values of 0.99 and 0.0066, respectively, for the independent validation. The classifier detected cancers with 64% sensitivity for stage I and a sensitivity of 70% for the late-stage cancers. Probabilities of malignancy across a range of nPNC classification scores are shown in Fig. 2E.
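The PPV and NPV entries in Table 3 follow from sensitivity, specificity, and the assumed 1.8% NLST prevalence via Bayes' rule. The short sketch below is not part of the original analysis; it simply reproduces the 559-probe validation values of roughly 0.069 and 0.993 from the table.

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Positive/negative predictive value from test characteristics and disease prevalence."""
    tp = sensitivity * prevalence                    # true positives per screened subject
    fp = (1.0 - specificity) * (1.0 - prevalence)    # false positives
    tn = specificity * (1.0 - prevalence)            # true negatives
    fn = (1.0 - sensitivity) * prevalence            # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# 559-probe validation row of Table 3: sensitivity 67.6%, specificity 83.3%, NLST prevalence 1.8%.
ppv, npv = ppv_npv(0.676, 0.833, 0.018)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")  # approximately 0.069 and 0.993
```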
It should be noted that a small number of individuals with malignant or benign pulmonary nodules who had no history of smoking were included in the analysis. Removal of these subjects from the study did not change AUC validation performance for the 41 probe classifier (AUC difference of 0.001) or the full 559 probe classifier (AUC difference of 0.010). Adding age, sex, race, and smoking history as additional factors did not have an impact on the classification, producing an ROC-AUC of 0.837, as compared with 0.840 when only the gene expression was used.

### nPNC classifier outperforms existing clinical models

Focusing on the 41 biomarker panel classifier, we compared the performance of the nPNC on all the samples in the difficult to assess 6–20 mm size range with the performance of three clinical algorithms: the Brock University model (St. Catharines, Ontario, Canada), developed in a high-risk population (20, 21), and the Mayo Clinic (Rochester, MN; ref. 22) and Veterans Affairs (VA; ref. 23) models, developed using data from a more incidental nodule population. These algorithms assess the cancer risk of a pulmonary nodule based on a variety of demographic and pathologic parameters including nodule size and location (Fig. 3A). The nPNC outperforms all three clinical models on nodules in the 6–20 mm diameter range. Because nodule size is a well-accepted risk factor included in each of the clinical models, we also demonstrate an increased accuracy of the classification as compared with a classification using only size for the samples in the 6–20 mm range (Fig. 3B).

Figure 3. Performance of the 41 probe nPNC for benign and malignant nodules in the 6–20 mm range. A, Compared with the Brock University, Mayo Clinic, and VA lung cancer risk clinical models. B, Compared with classification by nodule maximum diameter.

### Classifier performance for different nodule size ranges

Because nodule size is an important risk factor and IPNs are not very well defined (6) given the changing guidelines, we examined the performance when comparing malignant and benign pulmonary nodules within similar size ranges. We calculated the performance of the 41 probe classifier across the various nodule size ranges using baseline positive thresholds of 4 mm from the NLST study (24), 6 mm and 8 mm as discussed in the recent reports from the Fleischner Society (8) and the Lung-RADS (7), as well as a baseline threshold of 10 mm. The ROC-AUCs and the specificities when the sensitivity is held at 90% were calculated for all possible ranges for the selected thresholds for training and independent validation sets (Fig. 4). Overall, training and validation set performance was highly conserved across all size ranges except where only a few validation set samples fall within a particular size range, as is evident in the smaller validation set. The 41 probe nPNC performs particularly well on independent validation with nodules in the difficult-to-diagnose 8–14 mm range, achieving a 64% specificity at 90% sensitivity, although the specificity drops to 48% in the larger combined dataset.
Whether 4 mm or 6 mm is used as the threshold for a positive screen, our classifier demonstrates its utility in classifying IPNs by performing well across all the ranges, with an ROC-AUC of 0.83 and 0.81, respectively, in the combined dataset, and a threshold of 8 mm only reduces the AUC to 0.80. The specificities at 90% sensitivity are similarly stable and are calculated as 0.50, 0.46, and 0.48 for 4, 6, and 8 mm, respectively.

Figure 4. Classification performance across different nodule size ranges. Performance values (ROC-AUC, blue; specificity at 90% sensitivity, red) are given for the training (top), validation (middle), and combined data sets (bottom). Each row is labeled on the left side of the figure with the lower nodule size range from minimum (any size) to 10 mm. The column labels across the bottom correspond to the upper nodule size range from 10 mm to maximum (any size). Each square of a panel then shows classification performance in distinguishing benign from malignant nodules that fall in the range from the lower to the upper size in mm, along with the numbers of nodules being compared for both benign (BN) and malignant (MN) classes. The color intensity is used for visual accent and is proportional to the reported performance values, with the color scales shown at the top of the panels. For example, the nPNC demonstrated the best ROC-AUC performance of 0.87 and a specificity of 0.64 at 90% sensitivity in distinguishing 8–10 mm nodules (the set contained 6 malignant and 14 benign).

The overall benefits of lung cancer screening programs using LDCT are evident in the reported 20% increase in patient survival. However, this success comes with the associated problem of how to evaluate the large numbers of primarily benign IPNs being detected and the concern for over-management (25). The recent Lung-RADS assessment has also suggested that the implementation of a positive screening threshold of 6–7 mm rather than the 4 mm used in the NLST study may be more appropriate in the management of lung cancer screening results (7) and that this change would reduce the magnitude of the IPN problem with a minimal effect on patient care (8). Even with new guidelines, the potential for over-management of the estimated 1.6 × 10⁶ lung nodules detected each year in the United States remains a significant challenge, particularly for nodules ≥6 mm and ≤20 mm, where the risk of malignancy can range from approximately 8% to 64% (26).
The development of alternative noninvasive approaches to assess these IPNs in a clinically meaningful way is an important goal in pulmonary medicine. Most noninvasive early detection approaches have depended on the identification of tumor-derived nucleic acids, antibodies, or proteins present in blood, plasma, serum, or sputum (27–29), with the caveat that these analytes are frequently rare in the presence of smaller early-stage cancers that are most amenable to curative surgery and which are now more being readily detected by LDCT. Additional studies that have avoided this issue have combined bronchoscopy with gene expression in normal airway epithelial cells or with gene expression associated with nasal brushings. This approach is based on the concept of “field cancerization” whereby the tumor induces gene expression alterations in the uninvolved respiratory tract that differs with the presence of a malignant or benign lung nodule. These approaches work well for nodules likely to be accessed by bronchoscopy (27, 30, 31), but are less effective with smaller IPNs that also represent a major management concern. We previously showed that a malignant lesion in the lung can extend its influence beyond the pulmonary cancer field to the peripheral blood, as the gene expression in PBMC-derived RNA efficiently distinguishes malignant from benign lung nodules (9). The existence of this extra-pulmonary effect is supported by early reports from mouse models for lung cancer demonstrating that soluble factors produced by premalignant lesions in the lung influenced the expression of specific activation markers in the bone marrow macrophages and that this effect was enhanced with tumor progression (32–34). Although the PBMC studies provided an important proof of principle for extrapulmonary involvement, the need for the rapid purification of the PBMC samples to stabilize the transcriptional profiles was a hindrance to expanding to the collection sites outside of the academic environments and to the development of a robust clinical platform. We have now demonstrated that RNA from whole blood, easily collected in PAXgene RNA stabilization tubes, can also be mined for gene expression information that distinguishes malignant from benign lung nodules. This minimally invasive, 2.5-mL blood collection system allows samples to be collected not only at major medical centers, but wherever blood is routinely drawn. The RNA stability at room temperature for 5 days means that no special storage system is required to maintain sample integrity, thereby facilitating sample collection and subsequent transfer to a central testing facility even from remote locations. The quality of the RNA makes it amenable to analysis on a wide variety of platforms including a variety of sequencing platforms that require high-quality RNA. We have tested the utility of the PAXgene collection system using samples collected at four academic pulmonary centers and from a community hospital. Samples were collected, stored, and transferred in bulk, or were collected daily and then transferred by courier to our test site for storage and final processing without any detectable effect on platform performance. We built our diagnostic model from global gene expression assayed on Illumina microarrays with cancers that were primarily stage I (69%) and II (17%) and nodules that ranged in size from the 4 mm threshold measurement of the NLST study to 20 mm, spanning the range of malignancy risk from <1% to 64% (8). 
Importantly, our PAXgene microarray classifier maintained a ROC-AUC of 0.847 (95% CI: 0.742–0.951) on independent validation, almost identical to that of the training set used for classifier development. In many studies, validation set accuracy is somewhat diminished, suggesting that the dataset used for classifier development was not large enough to adequately capture the potential subject diversity (9, 35, 36). Moving forward from the microarray developmental platform, we successfully transitioned the nodule classifier to the NanoString nCounter platform. The nCounter platform requires minimal sample handling, is technically simple, and has the ability to evaluate degraded and nondegraded RNA in the same assay. The FDA approval of the NanoString-based Prosigna Breast Cancer Prognostic Assay based on the PAM50 gene signature (14) and the more recent development of a NanoString-based immune signature that predicts the clinical response to PD-1 blockade (37) further support the clinical utility of this platform. Although the preliminary gene panel for our NanoString-based classifier included 559 biomarkers, that number could be reduced to 41 probes while maintaining the ROC-AUC, thus suggesting the potential for simplifying the test platform. In assessing the contributions of the various probes represented in the 41 probe panel, 46% of the top ranked probes came from the SVM analysis, 29% from the PCI panel, and the fewest candidates were selected by P value. The myeloid-related genes linked to survival in our PBMC studies (18) were not represented in the 41 probe classifier, but were well represented in the top 100 ranked probes, whereas the NK-related probes were mostly in the lower half of the probe set, perhaps because the NK signal is significantly diluted in the PAXgene samples. As patient outcome data are accumulated, we will further assess the utility of the prognostic biomarkers included in our panel that were selected because of an association with recurrence/survival in our previous PBMC studies (18, 19, 38). Although robust technical performance is important for any clinical platform, the resultant benefit to the patient is primary. The performance of our NanoString custom panel on the 741 samples analyzed on that platform has significant clinical implications, with a potential to impact the use of invasive approaches for assessing some classes of difficult-to-diagnose IPNs. Our study does not depend on the presence of circulating tumor cells, tumor proteins, or tumor RNA, which are more consistent with more advanced cancers. In this study, we have primarily addressed the class of IPNs that are 6–20 mm in diameter, are of moderate to high risk (39), and are frequently not easily accessible by either bronchoscopy or fine-needle biopsy, as well as the early-stage cancers that are most amenable to surgical approaches. We have also assessed the performance with smaller nodules in the 4–6 mm range, where the risk of malignancy is small but which can remain of some concern. Importantly, our nPNC outperformed clinical algorithms presently used to stratify candidates with IPNs for treatment or follow-up, including the Brock University (20, 21), Mayo Clinic (22), and VA (23) clinical models, in the 6–20 mm range. Although these algorithms work well when applied to datasets that include mostly smaller benign nodules and larger cancers, performance is somewhat diminished when applied only to malignant and benign nodules in the problematic size range.
Although the size range of pulmonary nodules we have analyzed is important, there is still a significant difference in the median size between the malignant and the benign nodules in our study. It will be important to address how well biomarkers and clinical algorithms function when benign and malignant nodules are more closely matched in size and where clinical algorithms are likely to perform poorly. We attempted to test this type of comparison as shown in Fig. 4. Although the overall AUCs, sensitivities, and specificities are well conserved whether we use 4, 6, or 8 mm as a positive threshold, as the comparisons get more granular, some comparisons are significantly more accurate than others, and this is particularly evident in the validation study where sample numbers are smaller. We achieved a specificity of 64% at 90% sensitivity for benign and malignant nodules in the 8–14 mm size range, dropping to 40% in the 6–14 mm range. Nodule size is the primary consideration in how IPNs are treated (40). Although this study has interrogated a large number of patient samples and demonstrated a potential utility, a further assessment with a larger number of samples where malignant and benign nodules are more closely matched by size will extend that utility, as this is the scenario where size is no longer informative. In moving forward, it will be important to more completely address the issue by comparing benign and malignant nodules of similar sizes across the range of nodule sizes that remain problematic. The highly simplified and proven method for the acquisition of large numbers of samples of consistent quality from a variety of locations will facilitate this process. Expanded studies will also allow us to address the biological basis for the differences we are detecting between the patient classes and to assess whether those differences may have therapeutic implications.

A.V. Kossenkov has ownership interests (including stock, patents, etc.) in patent applications WST155P1 and WST117P. G. Criner has ownership interests (including stock, patents, etc.) in HGE Health Care Solutions and is a consultant/advisory board member for AstraZeneca, Boehringer Ingelheim, AVISA, Lungpacer, GlaxoSmithKline, Philips Respironics, NAMDRC, EOLO Medical Inc, Helios Medical, Novartis, Olympus, Spiration, Holaria Inc., Mereo BioPharma, Third Pole, PneumRx, BTG plc, Pearl Therapeutics, and Broncus Medical. A. Vachani reports receiving commercial research grants from Oncocyte, Inc., MagArray, Johnson & Johnson, and Broncus Medical and is a consultant/advisory board member for Oncocyte, Inc. L.C. Showe is an adjunct professor at the University of Pennsylvania, reports receiving a commercial research grant (SRA) from OncoCyte, and has ownership interests (including stock, patents, etc.) as an inventor on pending unpublished provisional patent application(s) assigned to The Wistar Institute that relate to compositions and methods of using a new lung nodule classifier. No potential conflicts of interest were disclosed by the other authors.

Conception and design: A.V. Kossenkov, H. Pass, A. Vachani, B. Nam, W.N. Rom, M.K. Showe, L.C. Showe

Development of methodology: A.V. Kossenkov, R. Qureshi, C. Chang, T. Kumar, L.C. Showe

Acquisition of data (provided animals, acquired and managed patients, provided facilities, etc.): R.S. Majumdar, C. Chang, T. Kumar, E. Kannisto, G. Criner, J.-C.J. Tsay, H. Pass, S. Yendamuri, A. Vachani, T. Bauer, W.N. Rom, L.C.
Showe Analysis and interpretation of data (e.g., statistical analysis, biostatistics, computational analysis): A.V. Kossenkov, R. Qureshi, N.B. Dawany, J. Wickramasinghe, Q. Liu, W.-H. Horng, H. Pass, S. Yendamuri, M.K. Showe, L.C. Showe Writing, review, and/or revision of the manuscript: A.V. Kossenkov, R. Qureshi, N.B. Dawany, J. Wickramasinghe, Q. Liu, G. Criner, J.-C.J. Tsay, H. Pass, S. Yendamuri, A. Vachani, T. Bauer, W.N. Rom, M.K. Showe, L.C. Showe Administrative, technical, or material support (i.e., reporting or organizing data, constructing databases): A.V. Kossenkov, R.S. Majumdar, C. Chang, S. Widura, T. Kumar, H. Pass, L.C. Showe Study supervision: C. Chang, L.C. Showe We wish to thank the Wistar Genomics and Bioinformatics facilities for support, Dr. Rachel Locke for editorial assistance, and everyone responsible for recruitment and sample collection. This study was supported by the PA Department of Health grant #4100059200 (Diagnostic Markers for Early-stage Lung Cancer in PAXgene Blood Samples), NCI U01 CA200495-02, and OncoCyte Sponsored Research Agreement to L.C. Showe. NCI U01 CA111295 (to H. Pass), NCI U01 CA086137 (to W.N. Rom), R21 CA156087-02 (to A. Vachani, S. Yendamuri, and L.C. Showe), R50 CA211199-01 (to A.V. Kossenkov), Support for Shared Resources utilized in this study was provided by Cancer Center Support Grant (CCSG) P30 CA010815. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. 1. Crowswell J , Baker S , Marcus P , Clapp J , Kramer B . Incidence of false-positive test results in lung cancer screening . Ann Intern Med 2010 ; 152 : 505 12 . 2. Henschke CI , Yankelevitz DF , Altorki NK . The role of CT screening for lung cancer . Thorac Surg Clin 2007 ; 17 : 137 42 . 3. Heuvers ME , Wisnivesky J , Stricker BH , Aerts JG . Generalizability of results from the National Lung Screening Trial . Eur J Epidemiol 2012 ; 27 : 669 72 . 4. Ruparel M , Quaife SL , Navani N , Wardle J , Janes SM , Baldwin DR . Pulmonary nodules and CT screening: the past, present and future . Thorax 2016 ; 71 : 367 75 . 5. Henschke CI , Yankelevitz DF . CT screening for lung cancer: update 2007 . Oncologist 2008 ; 13 : 65 78 . 6. Aberle DR , AM , Berg CD , Black WC , Clapp JD , Fagerstrom RM , et al Reduced lung-cancer mortality with low-dose computed tomographic screening . N Engl J Med 2011 ; 365 : 395 409 . 7. Martin MD , Kanne JP , Broderick LS , Kazerooni EA , Meyer CA . . 2017 ; 37 : 1975 93 . 8. Azharuddin M , N , Malik A , Livornese DS . Evaluating pulmonary nodules to detect lung cancer: Does Fleischner criteria really work? J Cancer Res Pract 2018 ; 5 : 13 9 . 9. Showe MK , Vachani A , Kossenkov AV , Yousef M , Nichols C , Nikonova EV , et al Gene expression profiles in peripheral blood mononuclear cells can distinguish patients with non-small cell lung cancer from patients with nonmalignant lung disease . Cancer Res 2009 ; 69 : 9202 10 . 10. Draghici S , Khatri P , Eklund AC , Szallasi Z . Reliability and reproducibility issues in DNA microarray measurements . Trends Genet 2006 ; 22 : 101 9 . 11. Debey-Pascher S , Eggle D , Schultze JL . RNA stabilization of peripheral blood and profiling by bead chip analysis . Methods Mol Biol 2009 ; 496 : 175 210 . 12. Fricano MM , Ditewig AC , Jung PM , Liguori MJ , Blomme EA , Yang Y . 
Global transcriptomic profiling using small volumes of whole blood: a cost-effective method for translational genomic biomarker identification in small animals . Int J Mol Sci 2011 ; 12 : 2502 17 . 13. Kennedy L , Vass JK , Haggart DR , Moore S , Burczynski ME , Crowther D , et al Hematopoietic Lineage Transcriptome Stability and Representation in PAXgene Collected Peripheral Blood Utilising SPIA Single-Stranded cDNA Probes for Microarray . Biomark Insights 2008 ; 3 : 403 17 . 14. Wallden B , Storhoff J , Nielsen T , Dowidar N , Schaper C , Ferree S , et al Development and verification of the PAM50-based Prosigna breast cancer gene signature assay . BMC Med Genomics 2015 ; 8 : 54 . 15. Ayers M , Lunceford J , Nebozhyn M , Murphy E , Loboda A , Kaufman DR , et al IFN-γ–related mRNA profile predicts clinical response to PD-1 blockade . J Clin Invest 2017 ; 127 : 2930 40 . 16. Guyon I , Weston J , Barnhill S , Vapnik V . Gene selection for cancer classification using support vector machines, machine learning . Machine Learning 2002 ; 46 : 389 422 . 17. Wei Q , Dunbrack RL Jr The role of balanced training and testing data sets for binary classifiers in bioinformatics . PLoS One 2013 ; 8 : e67863 . 18. Kossenkov AV , Dawany N , Evans TL , Kucharczuk JC , Albelda SM , Showe LC , et al Peripheral immune cell gene expression predicts survival of patients with non-small cell lung cancer . PLoS One 2012 ; 7 : e34392 . 19. Kossenkov AV , Vachani A , Chang C , Nichols C , Billouin S , Horng W , et al Resection of non-small cell lung cancers reverses tumor-induced gene expression changes in the peripheral immune system . Clin Cancer Res 2011 ; 17 : 5867 77 . 20. Chung K , Mets OM , Gerke PK , Jacobs C , den Harder AM , Scholten ET , et al Brock malignancy risk calculator for pulmonary nodules: validation outside a lung cancer screening population . Thorax 2018 ; 73 : 857 63 . 21. McWilliams A , Tammemagi MC , Mayo JR , Roberts H , Liu G , Soghrati K , et al Probability of cancer in pulmonary nodules detected on first screening CT . N Engl J Med 2013 ; 369 : 910 9 . 22. Swensen SJ , Silverstein MD , Ilstrup DM , Schleck CD , Edell ES . The probability of malignancy in solitary pulmonary nodules. Application to small radiologically indeterminate nodules . Arch Intern Med 1997 ; 157 : 849 55 . 23. Gould MK , Ananth L , Barnett PG , Veterans Affairs SNAP Cooperative Study Group . A clinical model to estimate the pretest probability of lung cancer in patients with solitary pulmonary nodules . Chest 2007 ; 131 : 383 8 . 24. National Lung Screening Trial Research Team , Aberle DR , AM , Berg CD , Black WC , Clapp JD , et al Reduced lung-cancer mortality with low-dose computed tomographic screening . N Engl J Med 2011 ; 365 : 395 409 . 25. Redberg RF , O'Malley PG . Important questions about lung cancer screening programs when incidental findings exceed lung cancer nodules by 40 to 1 . JAMA Intern Med 2017 ; 177 : 311 2 . 26. Massion PP , Walker RC . Indeterminate pulmonary nodules: risk for having or for developing lung cancer? Cancer Prev Res 2014 ; 7 : 1173 8 . 27. Kathuria H , Gesthalter Y , Spira A , Brody JS , Steiling K . Updates and controversies in the rapidly evolving field of lung cancer screening, early detection, and chemoprevention . Cancers 2014 ; 6 : 1157 79 . 28. Hoseok I , Cho JY . Lung cancer biomarkers . 2015 ; 72 : 107 70 . 29. Li CM , Chu WY , Wong DL , Tsang HF , Tsui NB , Chan CM , et al Current and future molecular diagnostics in non-small-cell lung cancer . 
Expert Rev Mol Diagn 2015 ; 15 : 1061 74 . 30. AEGIS Study Team , Perez-Rogers JF , Gerrein J , Anderlind C , Liu G , Zhang S , Alekseyev Y , et al Shared gene expression alterations in nasal and bronchial epithelium for lung cancer detection . J Nat Cancer Inst 2017 ; 109 . doi: 10.1093/jnci/djw327. 31. Silvestri GA , Vachani A , Whitney D , Elashoff M , Porta Smith K , Ferguson JS , et al A bronchial genomic classifier for the diagnostic evaluation of lung cancer . N Engl J Med 2015 ; 373 : 243 51 . 32. Redente EF , Dwyer-Nield LD , Merrick DT , Raina K , Agarwal R , Pao W , et al Tumor progression stage and anatomical site regulate tumor-associated macrophage and bone marrow-derived monocyte polarization . Am J Pathol 2010 ; 176 : 2972 85 . 33. Redente EF , Higgins DM , Dwyer-Nield LD , Orme IM , Gonzalez-Juarrero M , Malkinson AM . Differential polarization of alveolar macrophages and bone marrow-derived monocytes following chemically and pathogen-induced chronic lung inflammation . J Leukoc Biol 2010 ; 88 : 159 68 . 34. Redente EF , Orlicky DJ , Bouchard RJ , Malkinson AM . Tumor signaling to the bone marrow changes the phenotype of monocytes and pulmonary macrophages during urethane-induced primary lung tumorigenesis in A/J mice . Am J Pathol 2007 ; 170 : 693 708 . 35. Li XJ , Hayward C , Fong PY , Dominguez M , Hunsucker SW , Lee LW , et al A blood-based proteomic classifier for the molecular characterization of pulmonary nodules . Sci Transl Med 2013 ; 5 : 207ra142 . 36. Silvestri GA , Tanner NT , Kearney P , Vachani A , Massion PP , Porter A , et al Assessment of plasma proteomics biomarker's ability to distinguish benign from malignant lung nodules: results of the PANOPTIC (Pulmonary Nodule Plasma Proteomic Classifier) Trial . Chest 2018 ; 154 : 491 500 . 37. Ayers M , Lunceford J , Nebozhyn M , Murphy E , Loboda A , Kaufman DR , et al IFN-gamma-related mRNA profile predicts clinical response to PD-1 blockade . J Clin Invest 2017 ; 127 : 2930 40 . 38. Showe MK , Kossenkov AV , Showe LC . The peripheral immune response and lung cancer prognosis . Oncoimmunology 2012 ; 1 : 1414 6 . 39. Massion PP , Walker RC . Indeterminate pulmonary nodules: risk for having or for developing lung cancer? Cancer Prev Res 2014 ; 7 : 1173 8 . 40. Vachani A , Tanner NT , Aggarwal J , Mathews C , Kearney P , Fang KC , et al Factors that influence physician decision making for indeterminate pulmonary nodules . Ann Am Thoracic Soc 2014 ; 11 : 1586 91 .
Alice generates a qubit in one of four states $$\{|0\rangle, |1\rangle, |+\rangle, |-\rangle \}$$ and sends it to Bob. The states $$|1\rangle$$ and $$|+\rangle$$ encode bit $$1$$ and the states $$|0\rangle$$ and $$|-\rangle$$ encode bit $$0$$. Upon receiving the qubit, Bob measures it in the $$\{|0\rangle,|1\rangle\}$$ basis or in the $$\{|+\rangle,|-\rangle\}$$ basis, chosen at random. If Bob measured in, say, the $$\{|0\rangle,|1\rangle\}$$ basis and Alice sent a $$|0\rangle$$ or $$|1\rangle$$ qubit, then Alice and Bob share one bit of a secret key. What remains is to check that Bob used the right basis. This is done over an open classical channel: for example, Bob announces which basis he used and Alice replies (also in the open) whether that basis was correct. If the basis is wrong, the bit is discarded.
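A minimal classical simulation of the sifting step just described (no eavesdropper; the run length of 20 qubits is arbitrary):

```python
import random

n = 20  # number of qubits Alice sends

# Alice picks random bits and random bases ('Z' = {|0>, |1>}, 'X' = {|+>, |->})
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("ZX") for _ in range(n)]

# Bob picks his measurement bases independently at random
bob_bases = [random.choice("ZX") for _ in range(n)]

# If Bob's basis matches Alice's he recovers her bit exactly;
# otherwise his outcome is uniformly random.
bob_results = [
    bit if a_basis == b_basis else random.randint(0, 1)
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
]

# Sifting over the open classical channel: keep only the positions
# where the bases agree; all other bits are discarded.
sifted_key = [bob_results[i] for i in range(n) if alice_bases[i] == bob_bases[i]]
print(f"kept {len(sifted_key)} of {n} bits:", sifted_key)
```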
# Systems of Linear Equations and Matrices

Chapter 1 Systems of Linear Equations and Matrices

Introduction Why matrices? What are matrices? Information in science and mathematics is often organized into rows and columns to form regular arrays, called matrices. What are matrices? Tables of numerical data that arise from physical observations. Why should we need to learn matrices? Because computers are well suited for manipulating arrays of numerical information; besides, matrices are mathematical objects in their own right, and there is a rich and important theory associated with them that has a wide variety of applications.

Introduction to Systems of Linear Equations Any straight line in the xy-plane can be represented algebraically by an equation of the form a1x + a2y = b; an equation of this form is called a linear equation in the variables x and y. Generalization: a linear equation in n variables. The variables in a linear equation are sometimes called unknowns. Examples of linear equations; examples of non-linear equations.

Introduction to Systems of Linear Equations Solution of a Linear Equation A solution of a linear equation a1x1+a2x2+...+anxn=b is a sequence of n numbers s1, s2,...,sn such that the equation is satisfied when we substitute x1=s1, x2=s2,...,xn=sn. Solution Set The set of solutions of the equation is called its solution set, or general solution. Example: Find the solution set of (a) 4x-2y=1, and (b) x1-4x2+7x3=5. Linear Systems A finite set of linear equations in the variables x1, x2,...,xn is called a system of linear equations or a linear system. A sequence of numbers s1, s2,...,sn is a solution of the system if x1=s1, x2=s2,...,xn=sn is a solution of every equation in the system.

Introduction to Systems of Linear Equations Not all linear systems have solutions. A linear system that has no solutions is said to be inconsistent; if there is at least one solution of the system, it is called consistent. Every system of linear equations has either no solutions, exactly one solution, or infinitely many solutions. An arbitrary system of m linear equations in n unknowns can be written as a11x1+a12x2+...+a1nxn=b1, a21x1+a22x2+...+a2nxn=b2, ..., am1x1+am2x2+...+amnxn=bm. The double subscripting on the coefficients of the unknowns is a useful device that is used to specify the location of the coefficients in the system.

Introduction to Systems of Linear Equations Augmented Matrices A system of m linear equations in n unknowns can be abbreviated by writing only the rectangular array of numbers: this is called the augmented matrix of the system. The basic method for solving a system of linear equations is to replace the given system by a new system that has the same solution set but which is easier to solve: Multiply an equation through by a nonzero constant. Interchange two equations. Add a multiple of one equation to another.

Introduction to Systems of Linear Equations Elementary Row Operations Since the rows of an augmented matrix correspond to the equations in the associated system, the three operations above correspond to the following elementary row operations on the rows of the augmented matrix: Multiply a row through by a nonzero constant. Interchange two rows. Add a multiple of one row to another row.
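As a quick illustration, the three elementary row operations can be applied to an augmented matrix in NumPy as follows (the example system is assumed, not taken from the slides):

```python
import numpy as np

# Augmented matrix [A | b] of an assumed 3x3 system
M = np.array([[1.,  1.,  2.,  9.],
              [2.,  4., -3.,  1.],
              [3.,  6., -5.,  0.]])

M[1] = 0.5 * M[1]          # multiply a row through by a nonzero constant
M[[0, 2]] = M[[2, 0]]      # interchange two rows
M[2] = M[2] - 3.0 * M[0]   # add a multiple of one row to another row

print(M)
```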
Example: Using elementary row operations to solve the linear system Gaussian Elimination Echelon Forms A matrix with the following properties is in reduced row-echelon form: If a row does not consist entirely of zeros, then the first nonzero number in the row is 1. We call this a leading 1. If there are any rows that consist entirely of zeros, then they are grouped together at the bottom of the matrix. In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the higher row. Each column that contains a leading 1 has zeros everywhere else. A matrix that has the first three properties is said to be in row-echelon form. Example 1 Gaussian Elimination A matrix in row-echelon form has zeros below each leading 1, whereas a matrix in reduced row-echelon form has zeros below and above each leading 1. The solution of a linear system may be easily obtained by transforming its augmented matrix to reduced row-echelon form. Example: Solutions of Four Linear Systems leading variables vs. free variables Gaussian Elimination Elimination Methods to reduce any matrix to its reduced row-echelon form Locate the leftmost column that does not consist entirely of zeros. Interchange the top row with another row, if necessary, to bring a nonzero entry to the top of the column found in Step 1. If the entry that is now at the top of the column found in Step 1 is a, multiply the first row by 1/a in order to produce a leading 1. Add suitable multiples of the top row to the rows below so that all entries below the leading 1 become zeros. Now cover the top row in the matrix and begin again with Step 1 applied to the submatrix that remains. Continue in this way until the entire matrix is in row-echelon form. Beginning with the last nonzero row and working upward, add suitable multiples of each row to the rows above to introduce zeros above the leading 1’s. The first five steps produce a row-echelon form and is called Gaussian elimination. All six steps produce a reduced row-echelon form and is called Gauss-Jordan elimination. Gaussian Elimination Example: Gauss-Jordan Elimination Solve by Gauss-Jordan elimination. Back-Substitution to solve a linear system from its row-echelon form rather than reduced row-echelon form Solve the equations for the leading variables. Beginning with the bottom equation and working upward, successively substitute each equation into all the equations above it. Assign arbitrary values to the free variables, if any. Such arbitrary values are often called parameters. Gaussian Elimination The augmented matrix is Row-echelon Example: Gaussian Elimination Solve by Gaussian elimination and back-substitution. The augmented matrix is Row-echelon The system corresponding to matrix yields Solving the leading variables Homogeneous Linear Systems A system of linear equations is said to be homogeneous if the constant terms are all zero. Every homogeneous system is consistent, since all system have x1=0, x2=0,...,xn=0 as a solution. This solution is called the trivial solution; if there are other solutions, they are called nontrivial solutions. A homogeneous system has either only the trivial solution, or has infinitely many solutions. Gaussian Elimination The augmented matrix for the system is Example: Gauss-Jordan Elimination Solve the following homogeneous system of linear equations by using Gauss-Jordan Elimination. 
The augmented matrix for the system is The corresponding system of equations is Reducing this matrix to reduced row-echelon form, we obtain Gaussian Elimination Solving for the leading variables yields Thus the general solution is Note that the trivial solution is obtained when s=t=0

Matrices and Matrix Operations Matrix Notation and Terminology A matrix is a rectangular array of numbers. The numbers in the array are called the entries in the matrix. Examples: The size of a matrix is described in terms of the number of rows and columns it contains. Example: The sizes of the above matrices are 3×2, 1×4, 3×3, 2×1, and 1×1, respectively. A matrix with only one column is called a column matrix (or a column vector), and a matrix with only one row is called a row matrix (or a row vector). When discussing matrices, it is common to refer to numerical quantities as scalars.

Matrices and Matrix Operations We often use capital letters to denote matrices and lowercase letters to denote numerical quantities. The entry that occurs in row i and column j of a matrix A will be denoted by aij or (A)ij. A general m×n matrix might be written as [aij]m×n or [aij]. Row and column matrices are denoted by boldface lowercase letters.

Matrices and Matrix Operations A matrix A with n rows and n columns is called a square matrix of order n, and the entries a11, a22,...,ann are said to be on the main diagonal of A. Operations on Matrices Definition: Two matrices are defined to be equal if they have the same size and their corresponding entries are equal. Definition: If A and B are matrices of the same size, then the sum A+B is the matrix obtained by adding the entries of B to the corresponding entries of A, and the difference A-B is the matrix obtained by subtracting the entries of B from the corresponding entries of A. Matrices of different sizes cannot be added or subtracted.

Matrices and Matrix Operations Definition: If A is any matrix and c is any scalar, then the product cA is the matrix obtained by multiplying each entry of the matrix A by c. The matrix cA is said to be a scalar multiple of A. If A1, A2,...,An are matrices of the same size and c1, c2,...,cn are scalars, then an expression of the form c1A1+c2A2+...+cnAn is called a linear combination of A1, A2,...,An with coefficients c1, c2,...,cn. Definition: If A is an m×r matrix and B is an r×n matrix, then the product AB is the m×n matrix whose entries are determined as follows. To find the entry in row i and column j of AB, single out row i from the matrix A and column j from the matrix B, multiply the corresponding entries from the row and column together, and then add up the resulting products.

Matrices and Matrix Operations Example: Multiplying Matrices The number of columns of the first factor A must be the same as the number of rows of the second factor B in order to form the product AB. Example: A: 3×4, B: 4×7, C: 7×3; then AB, BC, CA are defined, and AC, CB, BA are undefined. Partitioned Matrices A matrix can be subdivided or partitioned into smaller matrices by inserting horizontal and vertical rules between selected rows and columns.
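The size rule and the row-times-column definition of the product described above can be checked directly in NumPy, using the 3×4, 4×7, and 7×3 example sizes:

```python
import numpy as np

A = np.arange(12).reshape(3, 4)   # 3x4
B = np.arange(28).reshape(4, 7)   # 4x7
C = np.arange(21).reshape(7, 3)   # 7x3

print((A @ B).shape, (B @ C).shape, (C @ A).shape)   # (3, 7) (4, 3) (7, 4)

# Entry (i, j) of AB is the dot product of row i of A with column j of B
i, j = 1, 2
print((A @ B)[i, j] == A[i, :] @ B[:, j])            # True

# AC is undefined: A has 4 columns but C has 7 rows
try:
    A @ C
except ValueError as e:
    print("A @ C is not defined:", e)
```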
Matrices and Matrix Operations Matrix Multiplication by Columns and by Rows to find a particular row or column of a matrix product AB jth column matrix of AB = A[jth column matrix of B] ith row matrix of AB = [ith row matrix of A]B Example: find the second column matrix of AB If a1, a2,...,am denote the row matrices of A and b1, b2,...,bn denote the column matrices of B, then Matrices and Matrix Operations Matrix Products as Linear Combinations Suppose that Then The product Ax of a matrix A with a column matrix x is a linear combination of the column matrices of A with the coefficients coming from the matrix x. Matrices and Matrix Operations The product yA of a 1m matrix y with an mn matrix A is a linear combination of the row matrices of A with scalar coefficients coming from y. Example: Find Ax and yA. The jth column matrix of a product AB is a linear combination of the column matrices of A with the coefficients coming from the jth column of B. Example: Matrices and Matrix Operations Matrix Form of a Linear System Consider any system of m linear equations in n unknowns. We can replace the m equations in the system by the single matrix equation Ax=b Matrices and Matrix Operations A is called the coefficient matrix. The augmented matrix of this system is Transpose of a Matrix Definition: If A is any mn matrix, then the transpose of A, denoted by AT, is defined to be the nm matrix that results from interchanging the rows and columns of A. Example: Find the transposes of the following matrices. Matrices and Matrix Operations (AT)ij = (A)ji Definition: If A is a square matrix, then the trace of A, denoted by tr(A), is defined to be the sum of the entries on the main diagonal of A. The trace of A is undefined if A is not a square matrix. Example: Find the traces of the following matrices. Tr(A)= a11+a22+a33 Inverses: Rules of Matrix Arithmetic Properties of Matrix Operations Commutative law for scalars is not necessarily true. when AB is defined by BA is undefined when AB and BA have different sizes It is still possible that AB is not equal to BA even when 1 and 2 holds. Example: Laws that still hold in matrix arithmetic (a) A+B=B+A (Commutative law for addition) (b) A+(B+C)=(A+B)+C (Associative law for addition) (c) A(BC)=(AB)C (Associative law for multiplication) (d) A(B+C)=AB+AC (Left distributive law) (e) (B+C)A=BA+CA (Right distributive law) A(B-C)=AB-AC (j) (a+b)C=aC+bC (B-C)A=BA-CA (k) (a-b)C=aC-bC a(B+C)=aB+aC (l) a(bC)=(ab)C a(B-C)=aB-aC (m) a(BC)=(aB)C=B(aC) Inverses: Rules of Matrix Arithmetic Zero Matrices A matrix, all of whose entries are zero, is called a zero matrix. A zero matrix is denoted by 0 or 0mn. A zero matrix with one column is denoted by 0. The cancellation law does not necessarily hold in matrix operation. Valid rules in matrix arithmetic for zero matrix. A+0 = 0+A = A A-A = 0 0-A = -A A0 = 0; 0A = 0 Identity Matrices square matrices with 1’s on the main diagonal and 0’s off the main diagonal are called identity matrices and is denoted by I Inverses: Rules of Matrix Arithmetic In : nn identity matrix If A is an mn matrix, then AIn = A and ImA = A Theorem: If R is the reduced row-echelon form of an nn matrix A, then either R has a row of zeros or R is the identity matrix. Definition: If A is a square matrix, and if a matrix B of the same size can be found such that AB = BA = I, then A is said to be invertible and B is called an inverse of A. If no such matrix B can be found, then A is said to be singular. 
Example: B is an inverse of A Example: A is singular Inverses: Rules of Matrix Arithmetic Properties of Inverses Theorem: If B and C are both inverses of the matrix A, then B=C. If A is invertible, then its inverse will be denoted by A-1. AA-1=I and A-1A=I Theorem: The matrix is invertible if ad-bc0, in which case the inverse is given by the formula Inverses: Rules of Matrix Arithmetic Theorem: If A and B are invertible matrices of the same size, then AB is invertible and (AB)-1 = B-1A-1. Generalization: The product of any number of invertible matrices is invertible, and the inverse of the product is the product of the inverse in the reverse order. Example: Inverse of a product Powers of a Matrix Definition: If A is a square matrix, then we define the nonnegative integer powers of A to be Moreover, if A is invertible, then we define the nonnegative integer powers to be Example: Inverses: Rules of Matrix Arithmetic Theorem: (Laws of Exponents) If A is a square matrix and r and s are integers, then Theorem: If A is an invertible matrix. then A-1 is invertible and (A-1)-1=A An is invertible and (An)-1=(A-1)n for n = 0, 1, 2,... For any nonzero scalar k, the matrix kA is invertible and Example: Find A3 and A-3 Polynomial Expressions Involving Matrices If A is a mm square matrix and if p(x)=a0+a1x+...+anxn is any polynomial, the we define p(A) = a0I+a1A+...+anAn where I is the mm identity matrix. Inverses: Rules of Matrix Arithmetic Properties of the Transpose Theorem: If the sizes of the matrices are such that the stated operations can be performed, then ((A)T)T = A (A+B)T = AT + BT and (A-B)T = AT-BT (kA)T = kAT, where k is any scalar (AB)T = BTAT The transpose of a product of any number of matrices is equal to the product of their transpose in the reverse order. Invertibility of a Transpose Theorem: If A is an invertible matrix, then AT is also invertible and Elementary Matrices and a Method for Finding A-1 Definition: An nn matrix is called an elementary matrix if it can be obtained from the nn identity matrix In by performing a single elementary row operation. Examples: Theorem: If the elementary matrix E results from performing a certain row operation on Im and if A is an mn matrix, then the product EA is the matrix that results when this same row operation is performed on A. Example: Elementary Matrices and a Method for Finding A-1 Inverse row operations Theorem: Every elementary matrix is invertible, and the inverse is also an elementary matrix. Theorem: If A is an nn matrix, then the following statements are equivalent, that is, all true or all false. A is invertible. Ax=0 has only the trivial solution. The reduced row-echelon form of A is In. A is expressible as a product of elementary matrices. Row Operation on I That Produces E Row Operations on E That Reproduces I Multiply row i by c0 Multiply row i by 1/c Interchange row i and j Add c times row i to row j Add –c times row i to row j Example: Multiply the second Multiply the second row by 7 row by 1/7 Multiply the second Multiply the second row by 7 row by 1/7 Interchange the first Interchange the first and second rows and second rows Add 5 times the second Add – 5 times the second row to the first row to the first Elementary Matrices and a Method for Finding A-1 Row Equivalence Matrices that can be obtained from one another by a finite sequence of elementary row operations are said to be row equivalence. An nn matrix A is invertible if and only if it is row equivalent to the nn identity matrix. 
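A short NumPy check of the inverse and transpose rules above (random 3×3 matrices, which are invertible with probability one, used purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 3))
B = rng.random((3, 3))

# (AB)^-1 = B^-1 A^-1
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))   # True

# (AB)^T = B^T A^T
print(np.allclose((A @ B).T, B.T @ A.T))                  # True

# 2x2 inverse formula: (1/(ad - bc)) * [[d, -b], [-c, a]], valid when ad - bc != 0
a, b, c, d = 2.0, 1.0, 5.0, 3.0
M = np.array([[a, b], [c, d]])
M_inv = np.array([[d, -b], [-c, a]]) / (a * d - b * c)
print(np.allclose(M @ M_inv, np.eye(2)))                  # True
```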
A Method for Inverting Matrices To find the inverse of an invertible matrix A, we must find a sequence of elementary row operations that reduces A to the identity and then perform this same sequence of operations on In to obtained A-1. Example: Find the inverse of Sample Problem 7: (a) Determine the inverse for A, where: (b) Use the inverse determined above to solve the system of linear equations: Example: Find the inverse of Solution: We added –2 times the first row to the second and –1 times the first row to the third We added 2 times the second row to the third We multiplied the third row by We added 3 times the third row to the second and –3 times the third row to the first We added –2 times the second row to the first Thus Elementary Matrices and a Method for Finding A-1 If an nn matrix is not invertible, then it cannot be reduced to In by elementary row operations, i.e. the reduced row-echelon form of A has at least one row of zeros. We may stop the computations and conclude that the given matrix is not invertible when the above situation occurs. Example: Show that A is not invertible. Since we have obtained a row of zeros on the left side, A is not invertible. Further Results on Systems of Equations and Invertibility A Basic Theorem Theorem: Every system of linear equations has either no solutions, exactly one solution, or infinitely many solutions. Solving Linear Systems by Matrix Inversion method besides Gaussian and Gauss-Jordan elimination exists Theorem: If A is an invertible nn matrix, then for each n1 matrix b, the system of equations Ax=b has exactly one solution, namely, x=A-1b. Example: Find the solution of the system of linear equations. The method applies only when the system has as many equations as unknowns and the coefficient matrix is invertible. Solution: In matrix form the system can be written as Ax=b, where It is shown that A is invertible and By theorem the solution of a system is or Further Results on Systems of Equations and Invertibility Linear Systems with a Common Coefficient Matrix solving a sequence of systems Ax=b1, Ax=b2, Ax=b3,..., Ax=bk, each of which has the same coefficient matrix A If A is invertible, then the solutions can be obtained with one matrix inversion and k matrix multiplications. A more efficient method is to form the matrix By reducing the above augmented matrix to its reduced row-echelon form we can solve all k systems at once by Gauss-Jordan elimination. also applies when A is not invertible Example: Solve the systems Solution: Two systems have the same coefficient matrix. If we augment this coefficient matrix with the column of constants on the right sides of these systems, we obtain (a) (b) Further Results on Systems of Equations and Invertibility Properties of Invertible Matrices Theorem: Let A be a square matrix. If B is a square matrix satisfying BA=I, then B=A-1. If B is a square matrix satisfying AB=I, then B=A-1. Theorem: If A is an nn matrix, then the following are equivalent. A is invertible. Ax=0 has only the trivial solution. The reduced row-echelon form of A is In. A is expressible as a product of elementary matrices. Ax=b is consistent for every n1 matrix b. Ax=b has exactly one solution for every n1 matrix b. Theorem: Let A and B be square matrices of the same size. If AB is invertible, then A and B must also be invertible. A Fundamental Problem: Let A be a fixed mn matrix. Find all m1 matrices b such that the system of equations Ax=b is consistent. 
Further Results on Systems of Equations and Invertibility If A is an invertible matrix, for every mx1 matrix b, the linear system Ax=b has the unique solution x=A-1b. If A is not square or not invertible, the matrix b must usually satisfy certain conditions in order for Ax=b to be consistent. Example: Find the conditions in order for the following systems of equations to be consistent. (a) (b) Diagonal, Triangular, and Symmetric Matrices Diagonal Matrices A square matrix in which all the entries off the main diagonal are zero are called diagonal matrices. Example: A general nn diagonal matrix D can be written as A diagonal matrix is invertible if and only if all of its diagonal entries are nonzero. Diagonal, Triangular, and Symmetric Matrices Powers of diagonal matrices Example: Find A-1, A5, and A-5 for Multiplication of diagonal matrices Diagonal, Triangular, and Symmetric Matrices Triangular Matrices A square matrix in which all the entries above the main diagonal are zero is called lower triangular, and a square matrix in which all the entries below the main diagonal are zero is called upper triangular. A matrix that is either upper triangular or lower triangular is called triangular. The diagonal matrices are both upper and lower triangular. A square matrix in row-echelon form is upper triangular. characteristics of triangular matrices A square matrix A=[aij] is upper triangular if and only if the ith row starts with at least i-1 zeros. A square matrix A=[aij] is lower triangular if and only if the jth column starts with at least j-1 zeros. A square matrix A=[aij] is upper triangular if and only if aij=0 for i>j. A square matrix A=[aij] is lower triangular if and only if aij=0 for i<j. Diagonal, Triangular, and Symmetric Matrices Theorem1.7.1: The transpose of a lower triangular matrix is upper triangular, and the transpose of a upper triangular matrix is lower triangular. The product of lower triangular matrices is lower triangular, and the product of upper triangular matrices is upper triangular. A triangular matrix is invertible if and only if its diagonal entries are all nonzero. The inverse of an invertible lower triangular matrix is lower triangular, and the inverse of an invertible upper triangular matrix is upper triangular. Example: Let Find A-1 and AB. Diagonal, Triangular, and Symmetric Matrices A square matrix is called symmetric if A = AT. Examples: A matrix A=[aij] is symmetric if and only if aij = aji for all i, j. Theorem: If A and B are symmetric matrices with the same size, and if k is any scalar, then: AT is symmetric. A+B and A-B are symmetric. kA is symmetric. The product of two symmetric matrices is symmetric if and only if the matrices commute(i.e. AB=BA). Diagonal, Triangular, and Symmetric Matrices In general, a symmetric matrix need not be invertible. Theorem: If A is an invertible symmetric matrix, then A-1 is symmetric. Products AAT and ATA Both AAT and ATA are square matrices and are always symmetric. Example: Theorem: If A is invertible matrix, then AAT and ATA are also invertible.
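A few of the statements above are easy to sanity-check numerically; a short NumPy sketch (random matrices, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((4, 3))                         # not square

# A A^T and A^T A are always square and symmetric
print(np.allclose(A @ A.T, (A @ A.T).T))       # True
print(np.allclose(A.T @ A, (A.T @ A).T))       # True

# The product of upper triangular matrices is upper triangular
U1 = np.triu(rng.random((3, 3)))
U2 = np.triu(rng.random((3, 3)))
print(np.allclose(U1 @ U2, np.triu(U1 @ U2)))  # True

# A triangular matrix is invertible iff its diagonal entries are all nonzero
print(np.all(np.diag(U1) != 0))                # True here, so U1 is invertible
```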
# Plotly Python - An Interactive Data Visualization Before going into the specifics of Plotly Python, let me ask you a question. If you have to go someplace, which of the following would you prefer: 1. Your relative telling you to drive straight for 15 kms, then take a left at 6th Avenue, then right, then a slight left and drive for 10 minutes to reach the destination 2. A map with the route highlighted. Obviously, we would prefer a map as it is easier to reference. In a similar manner, we prefer bar charts and plots over tables to give us a better way to compare different entities. And Plotly Python does just that! Plotly Python is a library which helps in data visualisation in an interactive manner. But you might be wondering why do we need Plotly when we already have matplotlib which does the same thing. Plotly was created to make data more meaningful by having interactive charts and plots which could be created online as well. The fact that we could visualise data online removed a lot of hurdles which are associated with the offline usage of a library. However, Plotly can be used as both, an offline as well as online tool, thus giving us the best of both worlds. Let us see the different aspects of Plotly in this blog. The topics covered are as follows: ## Introduction If you go through the objectives of the company which developed Plotly, you will find that it can be broadly divided into three parts: • To make visualising data more interactive and intuitive • To make the graphing capabilities of a the library available in an online environment as well • To truly make data science tools open source and accessible to everyone. In fact, the charts created by using plotly have the unique feature that when you hover on the individual element on the graph, the number associated with the figure comes up. According to its official website, Plotly has support for over 40 chart types and can even be used for 3 dimensional use cases. Considering the collaborative environment of Python, the company behind the library has kept the library open source and free so that it can be beneficial for everyone. The plotly python library consists of the following packages: • plotly: This is the main package which contains all the functionality • graph_objs: This package contains objects, or templates of figures which are used for visualising • matplotllib: The interesting thing about plotly is its support matplotlib figures as well Let us now begin our journey in the world of plotly by first installing it. ## How to install Plotly in Python The great thing about Python is we can easily install most of the packages using the ‘pip’ command. Thus, the python code is as follows: pip install plotly You can use it in the Anaconda terminal as shown below: If you want to check the version of plotly installed, you can use the following command pip show plotly You will find the output as shown below: Plotly also contains an “express” feature which makes it even easier to create graphs and objects. ## Online Vs Offline Usage Initially, the creators of plotly had given both online and offline capabilities for users of the plotly package, but it led to confusion on how the graphs were rendered. Thus, starting with version 4, the creators used the offline capabilities as default and bundled the online capabilities in a new package called “chart-studio”. We are going to focus on the offline version for this blog. There are two ways to use Offline plotting of Plotly Python. 
Let’s see them now ## Rendering as an HTML file or in the Notebook We will first see how we can use the plots as a seperate html file. ### Creating an HTML file (plotly.offline.plot) We use plotly.offline.plot() to create a standalone HTML file of the plot or chart. This html file can be saved and rendered in any web browser. A sample code of a graph with the x axis and y axis elements is given below: As we have written the optional code “auto_open=True”, this will open a new browser tab with the graph. We will now see how to render it in the python notebook itself. ### Jupyter notebook We use the command plotly.offline.iplot() when working offline in a Jupyter Notebook. This will generate the graph or plot in the same notebook itself. The same graph will be plotted with the following code: Great! With this out of the way, let us now see the various kinds of cool figures we can make by using Plotly. Since line charts were done on basic numbers, we will see how we can plot the Closing price of Tesla by using the Plotly Python library now. ## OHLC Chart Initially we will use Yahoo finance to download the OHLC data of Tesla from 1st February 2020 to 3 March 2020. The python code for importing the libraries as well as the OHLC data is as follows The python code for plotting the OHLC data is given below: You can see the OHLC data plotted as follows: As we have seen before, the great thing is you can hover at any candlestick and get the data of the respective day. Try it out on the plot above. Let us now try plotting the scatter plot of Tesla and Apple daily percentage changes using plotly. ## Scatter Plot Usually, a scatter plot is used as a visual of the correlation between two entities. Here, we will try to see if there is any correlation between the Adjusted Closing prices of Tesla and Apple, respectively. Since we had used the data of Tesla from 1 February to 3 March. We will import the data of the same time frame. Thus, the Python code for importing the Apple data is as follows: Now, we will calculate the percentage change of both Tesla and Apple and store it in a new column as “Percentage Change” The scatter plot is given below: Awesome! Here’s a fun idea. Why don’t you plot the results for a whole year of data? You can let us know in the comments what inference you could gather from it. ## Line Chart using Plotly Express Earlier, we saw how to create a scatter plot using the good old Plotly Python library. We also plotted a line chart using Plotly. But there is another way to plot a line chart. And that is, Plotly express. Plotly express can be used to reduce your lines of code by a large margin. Let us see the python code for line charts: The graph created is as follows: You can use plotly express for a lot more things than just a line chart. If you are interested in knowing more about Plotly express, head on over to their official documentation. Now, we have seen how the OHLC data is represented. Can we find the frequency of the percent change. Let’s try to plot a histogram using Plotly Python now. ## Histogram We will use Plotly Express to plot the histogram of percentage change of Tesla. The Python code is as follows: The Output is given as follows: Of course, since we do not have many data points, the histogram will not exactly have a normal distribution. All right, let us try another cool looking chart, which we call the contour chart. ## Contour Charts Contour charts were initially created to understand the topography of a landmass in a 2d format. 
We have all seen google Earth, right. How if you zoom in on a certain mountain, it shows the height changes as you get closer to the peak. That’s exactly it. Plotly has conveniently given us a way to plot a histogram in a contour format. Let us use the percentage change in the daily values of Tesla and Apple as our x and y variables. The code is as follows: The contour chart created is as follows Here, you can say that the central point shows the maximum occurence of the percentage change of the two stocks. You can always use the contour map to compare the daily percentage change of any two stocks to find any common occurrence. ## Scatter Plot in 3D We will use the OHLC data of Tesla for creating this plot. The python code is as follows: The 3d scatter plot is as follows: You can deduce that for most of the days, the volume remained below 20M but the Closing price kept fluctuating wildly. While we did not need a 3d scatter plot to come to this conclusion, it is easier to do so when presented with a diagram like this. Fascinated by the third dimension? Do let us know in the comments how you would like to use the 3d plots in your trading analysis. ## Conclusion We have seen how intuitive and interactive plots and charts such as Line charts, OHLC data in the form of candlesticks, histograms and also Contour charts using Plotly Python in this article. Plotly also gives you the option to save the charts in a stand-alone html file which can rendered on any web browser as well as different languages too. We can use Plotly Python in our endeavour to analyse financial data and improve our strategies. Do check out the Quantitative strategies and Models course on Quantra to create and execute trading strategies and also improve your financial knowledge.
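As a quick recap of the OHLC section above, here is a minimal, self-contained sketch of the kind of code described there. It assumes the yfinance package for the download; the ticker, dates, and column handling are illustrative and may need adjusting for your yfinance version.

```python
import pandas as pd
import yfinance as yf
import plotly.graph_objects as go

# Illustrative ticker and date range matching the example in the post
data = yf.download("TSLA", start="2020-02-01", end="2020-03-03")

# Newer yfinance versions return MultiIndex columns even for a single ticker
if isinstance(data.columns, pd.MultiIndex):
    data.columns = data.columns.get_level_values(0)

fig = go.Figure(
    data=[go.Candlestick(
        x=data.index,
        open=data["Open"],
        high=data["High"],
        low=data["Low"],
        close=data["Close"],
    )]
)
fig.update_layout(title="TSLA OHLC", xaxis_rangeslider_visible=False)
fig.show()
```

The same `fig` object can also be written to a standalone HTML file with `fig.write_html("tsla_ohlc.html")`, matching the offline workflow described earlier.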
# Doctrine – IF in ORDER BY

The problem: there is a table with a field ranking, which is supposed to define a ranking. What this is supposed to do is rank the data in ascending order (ORDER BY ranking ASC), except for the rows where it is NULL. The null entries are supposed to come last. What I want to achieve is something like this:

SELECT * FROM table ORDER BY IF(ranking IS NULL, 9999, ranking) ASC

I tried for hours and hours to achieve this with Doctrine, as I would have had to change a lot of stuff if I were to resort to a native query. It just wouldn't work, until someone answered this question. It seems that Doctrine cannot do an IF in ORDER BY (…), but what it can do is a CASE WHEN in the select, which can then be referenced within the order by. Something along these lines (the HIDDEN keyword keeps the helper expression out of the hydrated result):

$query = $this->createQueryBuilder('a')
    ->select('a')
    ->addSelect('CASE WHEN a.ranking IS NULL THEN 9999 ELSE a.ranking END AS HIDDEN ranking_order')
    ->orderBy('ranking_order', 'ASC');
# Why Wet Crude Needs Surge Vessel And Dehydrator Desalter

1. Sep 30, 2015

Hi, we have two types of crude: wet crude and dry crude. My questions are:

1- Is the difference between these two just the water percentage? Is that the only difference?

2- During separation, the dry crude goes through the separators, then to the boot, then to the flow suction tank. But why does the wet crude have to go through a surge vessel and then to the dehydrator/desalter? What is the function of the surge vessel and the dehydrator, and why must the wet crude pass through these two?

2. Oct 5, 2015
## Poster Presentation

### On the relation between approximation rates of stochastic integrals and properties of its integrands

Stefan Geiss, Christel Geiss

The approximation of stochastic integrals appears in stochastic finance when replacing continuously adjusted portfolios by discretely adjusted ones, where equidistant time nets are often used. With respect to the quadratic risk, there are situations in which a significantly better asymptotic quadratic error is obtained via arbitrary deterministic time nets (for example, for the binary option). We investigate how much the asymptotics for equidistant nets and for arbitrary deterministic nets differ from each other in the case of European-type options obtained from deterministic pay-off functions applied to a price process modeled by a diffusion. In particular, we characterize when the approximation rate for the quadratic risk for equidistant nets (with $n$ time knots) behaves like $n^{-\eta}$, $0<\eta\le 1/2$, in terms of (1) the asymptotics of certain variances of the hedging strategy, (2) the asymptotics of a certain $L_2$-convexity of the value process, and (3) a decomposition of the portfolio using the K-functional from interpolation theory.
# Route Product

Neil from Kells Lande Primary School wrote to say:

The route with the largest product is going up from A along the $5$ line, and down along the $2$ line, and then along the $0.5$ line to B, which has a product of $5$. The route with the smallest product is to go horizontally along the $3$ line from A, then down the first $0.5$ line, along the $1$ line going horizontally, and then up to B along the $0.1$ line, which has a product of $0.15$.

You're right, well done, Neil. Class K's Magic Mathematicians at Charter Primary School told us how they found out the solution:

We worked in a group of four and found out all the routes from A to B, then we did all the sums and worked out all the answers.

Many of you also said that to find the largest and smallest products, it helped to look for the largest and smallest numbers. That's a good strategy, well done.
# How do you exponentiate a section of the adjoint bundle to get a gauge transformation? Suppose $E$ is a vector bundle with structure group $G$ and let $P = F(E)$ be the frame bundle. Let $\mathfrak{g}_P$ denote the associated bundle to the adjoint representation of $G$ on its Lie algebra (i.e. $\mathfrak{g}_P = P \times_G \mathfrak{g}$ where $\mathfrak{g}$ is the Lie algebra of $G$). Given a section of $\mathfrak{g}_P$, I should be able to "exponentiate" it pointwise to get a gauge transformation. How is this defined? I couldn't come up with anything well-defined. For some context, I was reading Chapter 2 of Donaldson and Kronheimer's "The Geometry of 4-Manifolds," and they mention this pointwise exponential in passing on p. 33. I'm guessing they assume the reader is familiar with it from a more elementary text, but I looked in a few other books and couldn't find it. - I'm not sure what it is that you tried, but the exponential map should work. First of all, let $U_i$ be a trivialising cover for $P$ and its associated bundles. A section through $\mathfrak{g}_P$ is given by functions $\omega_i: U_i \to \mathfrak{g}$ which, on overlaps, transform according to $$\omega_i(p) = \operatorname{Ad}_{g_{ij}(p)} \omega_j(p) \qquad \forall p \in U_i \cap U_j,$$ where I use $\operatorname{Ad}$ to mean the adjoint representation of $G$ on its Lie algebra $\mathfrak{g}$. Now simply compose $\omega_i$ with the exponential map $\exp: \mathfrak{g} \to G$, resulting in functions $\exp\omega_i : U_i \to G$ which, on overlaps, transform according to $$\exp\omega_i(p) = g_{ij}(p) \exp\omega_j(p) g_{ij}(p)^{-1} \qquad \forall p \in U_i \cap U_j.$$ But this is just a section through the associated fibre bundle usually denoted $\operatorname{Ad} P$, and that is the same thing as a gauge transformation. - Thanks- that's exactly what I tried, except that I was thinking in terms of matrices for simplicity and I forgot the relation $Y(e^X)Y^{-1} = e^{YXY^{-1}}$. Actually, this relation seems a bit mysterious to me in its matrix form; it seems like it's easier to see when exp is defined abstractly using one-parameter subgroups. Thanks for clearing things up! –  Andy Manion Jul 30 '10 at 5:19 Isn't $Y(e^X)Y^{-1} = e^{YXY^{-1}}$ obvious from the Taylor series expansion of $e^X$? –  Deane Yang Aug 1 '10 at 4:54
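For completeness, the identity invoked in the comments is just the statement that conjugation commutes with the power series defining $\exp$ (written here for matrix groups; in general it is the naturality of $\exp$ under Lie group automorphisms):
$$Y e^X Y^{-1} \;=\; Y\Big(\sum_{k\ge 0}\tfrac{X^k}{k!}\Big)Y^{-1} \;=\; \sum_{k\ge 0}\tfrac{(YXY^{-1})^k}{k!} \;=\; e^{YXY^{-1}},$$
since $(YXY^{-1})^k = YX^kY^{-1}$. In intrinsic terms this reads $g\exp(\omega)g^{-1} = \exp(\operatorname{Ad}_g\omega)$, which is exactly what makes the local sections $\exp\omega_i$ glue to a global gauge transformation.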
### One-Zero

Figure B.1 gives the signal flow graph for the general one-zero filter

$y(n) = b_0 x(n) + b_1 x(n-1).$

The frequency response for the one-zero filter may be found by the following steps:

$H(e^{j\omega T}) = b_0 + b_1 e^{-j\omega T}.$

By factoring out $e^{-j\omega T/2}$ from the frequency response, to balance the exponents of $e^{\pm j\omega T/2}$, we can get this closer to polar form as follows:

$H(e^{j\omega T}) = e^{-j\omega T/2}\left(b_0 e^{j\omega T/2} + b_1 e^{-j\omega T/2}\right).$

We now apply the general equations given in Chapter 7 for filter gain $G(\omega)$ and filter phase $\Theta(\omega)$ as a function of frequency. A plot of $G(\omega)$ and $\Theta(\omega)$, for $b_0 = 1$ and various real values of $b_1$, is given in Fig. B.2. The filter has a zero at $z = -b_1/b_0$ in the $z$ plane, which is always on the real axis. When a point on the unit circle comes close to the zero of the transfer function, the filter gain at that frequency is low. Notice that one real zero can basically make either a highpass filter ($b_1 < 0$) or a lowpass filter ($b_1 > 0$). For the phase response calculation using the graphical method, it is necessary to include the pole at $z = 0$.
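As a quick numerical companion to the curves in Fig. B.2 (a minimal sketch, assuming the standard one-zero form $H(z) = b_0 + b_1 z^{-1}$ with $b_0 = 1$; the variable names are mine, not from the text):

```python
import numpy as np

def one_zero_response(b1, b0=1.0, n=512):
    """Gain G(w) and phase Theta(w) of H(z) = b0 + b1*z^{-1} for 0 <= w <= pi (T = 1)."""
    w = np.linspace(0.0, np.pi, n)
    H = b0 + b1 * np.exp(-1j * w)      # frequency response H(e^{jw})
    return w, np.abs(H), np.angle(H)

for b1 in (-1.0, -0.5, 0.5, 1.0):
    w, G, theta = one_zero_response(b1)
    print("b1 = %+.1f: gain at DC = %.2f, gain at Nyquist = %.2f" % (b1, G[0], G[-1]))
# b1 > 0 boosts DC relative to Nyquist (lowpass); b1 < 0 does the opposite (highpass),
# consistent with the zero sitting at z = -b1/b0 on the real axis.
```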
MATLAB Examples

# Evaluating a system and Jacobian

Section 7.3.1 from Numerically solving polynomial systems with Bertini, by Daniel J. Bates, Jonathan D. Hauenstein, Andrew J. Sommese and Charles W. Wampler (SIAM 2013).

This code instructs Bertini to evaluate the system of polynomials and its Jacobian for the intersection of a circle and a parabola. This system was already solved in "Sharpening using Newton's method". With TrackType = -4, Bertini will evaluate the system, while with TrackType = -3, it will evaluate the Jacobian as well. A start file must be provided with the points, which in this example are $(3/4,2)$ and $(10^{-15},1/4)$.

polysyms x y
circle_parabola_intersection.config = struct('TrackType',-3,'MPType',1,'Precision',64);
circle_parabola_intersection.starting_points = [.75, 2; 1e-15 .25].';
circle_parabola_intersection = solve(circle_parabola_intersection);
results = circle_parabola_intersection.solve_summary;
istart = strfind(results,'Evaluating point');
disp(results(istart:end))

Evaluating point 0 of 2
------------------------------------------------------------------------------------------
The following files may be of interest to you:

function: The function values evaluated at the given points.
Jv: The Jacobian w.r.t. variables evaluated at the given points.
------------------------------------------------------------------------------------------

The files function and Jv can be read using the basic utility read_solutions. This simply returns a column for each solution:

fprintf('%15.11g %15.11g\n',double(fvals).')

1.5 2e-15
2 -1.5
-3 -4e-15
1 1

The matrices are presented in row-major ordering, so they correspond to the matrices

$\left[ \begin{array}{cc} 1.5 & 2\\ -3 & 1 \end{array} \right] \quad\text{and}\quad \left[ \begin{array}{cc} 2\cdot 10^{-15} & -1.5\\ -4\cdot 10^{-15} & 1 \end{array} \right].$
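For readers who want to try the same kind of evaluation without Bertini, here is a rough Python/SymPy sketch. The circle and parabola below are stand-ins I chose for illustration; they are not the exact polynomials of Section 7.3.1, so the numbers will not match the output above.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Assumed example system: a unit circle and the parabola y = x^2 (illustrative only).
F = sp.Matrix([x**2 + y**2 - 1, y - x**2])
J = F.jacobian([x, y])

for point in [(sp.Rational(3, 4), 2), (sp.Float('1e-15'), sp.Rational(1, 4))]:
    subs = dict(zip((x, y), point))
    print('point:', point)
    print('F =', list(F.subs(subs)))      # function values at the point
    print('J =', J.subs(subs).tolist())   # Jacobian evaluated at the point
```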
Transfer function of this circuit?

Discussion in 'Homework Help' started by pww, Mar 3, 2010.

1. pww (Thread Starter)

I'm having a hard time analyzing this circuit... I've been told the transfer function that I obtained is incorrect. I got this as the transfer function (see the attached eq.png):

I did that by finding Vin = [R + C||R] + [C||R] + C and Vout = C.

Please help. I've been told this is wrong, but I don't know how to do it otherwise.

[attachments: two images, including eq.png]

2. The Electrician

First, it's obvious that your result is wrong because there are 3 capacitors and your result is only second order. The correct result will have to be third order.

You can solve this problem as a cascade of voltage dividers. R1 and C1 form a loaded voltage divider. You have to add the effect of R2, C2, R3 and C3 loading C1 with their various series/parallel combinations. You start at the output and work your way backwards to figure out the load on C1 and then get the voltage divider ratio of the R1/C1 divider. Once you have that, move one step forward and figure the loaded divider ratio of R2/C2, etc.

You might also do a web search for "ladder networks" and see if you can find any material on solving such networks. Give it a try and show us your work if you have a problem; then we can help you.

3. pww (Thread Starter)

Ok, thanks... I think I got it now. So I set Vout = (C||R) + (C||R) + (C||R) and then I set Vin = R + (C||R) + (C||R) + C. Then to find the gain I take Vout/Vin and I get this as the transfer function (see the attached eq2.png). Once I plug in all the numbers I get |T(s)| = 0.702747. Is this right?

[attachment: eq2.png]

4. The Electrician

The transfer function will have to be third order; that means that the denominator will be a cubic. The highest power of s will be 3; you only have s squared.

Imagine that R2, C2, R3 and C3 were gone. Then you would have a simple, unloaded, voltage divider. The transfer function would be:

$\frac{1/sC1}{1/sC1+R1}=\frac{1}{sR1C1+1}$

Now imagine R2 and C2 were restored. You would have the series combination of R2 and C2 in parallel with C1. Then the R1/C1 voltage divider would be different because you wouldn't just have C1 as the shunt element in the divider.

Do you understand how voltage dividers work, and how to calculate them? Have a look at this: http://en.wikipedia.org/wiki/Voltage_divider
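To make the "cascade of loaded voltage dividers" advice concrete, here is a hedged sketch in Python/SymPy. Since the actual schematic is only in the attachment, this assumes a generic three-section ladder (series R1, R2, R3 with shunt C1, C2, C3, output taken across C3); the node equations are mine, not from the thread, but they show where the cubic denominator comes from.

```python
import sympy as sp

s = sp.symbols('s')
R1, R2, R3, C1, C2, C3 = sp.symbols('R1 R2 R3 C1 C2 C3', positive=True)
Vin, V1, V2, V3 = sp.symbols('Vin V1 V2 V3')

# KCL at the three internal nodes of an assumed RC ladder:
# Vin --R1-- V1 --R2-- V2 --R3-- V3 (= Vout), with C1, C2, C3 from each node to ground.
eqs = [
    sp.Eq((V1 - Vin)/R1 + s*C1*V1 + (V1 - V2)/R2, 0),
    sp.Eq((V2 - V1)/R2 + s*C2*V2 + (V2 - V3)/R3, 0),
    sp.Eq((V3 - V2)/R3 + s*C3*V3, 0),
]
sol = sp.solve(eqs, [V1, V2, V3], dict=True)[0]
H = sp.cancel(sol[V3] / Vin)      # transfer function Vout/Vin
num, den = sp.fraction(H)
print(sp.degree(den, s))          # prints 3: the denominator is cubic in s
```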
# Quark Matter 2012
12-18 August 2012
US/Eastern timezone

## Low mass di-electron production in Au+Au collisions at $\sqrt{s_{_{NN}}} = 19.6$ GeV at STAR

16 Aug 2012, 16:00
2h
Poster
Electroweak probes

### Speaker

Dr Huang Bingchu (Brookhaven National Lab)

### Description

An enhancement of low-mass di-electron production relative to the expected yields from known hadron sources was observed by the CERES experiment at the CERN SPS in 158 A GeV central Pb+Au collisions ($\sqrt{s_{NN}}$ = 17.3 GeV). More recently, NA60 reported di-muon measurements in 158 A GeV In+In collisions. The di-muon enhancement at $M_{\mu\mu} < 1$ GeV/$c^{2}$ can be described by a broadened spectral function. At RHIC, the PHENIX experiment observed a significant enhancement in the di-electron continuum in Au+Au collisions at $0.15 < M_{ee} < 0.75$ GeV/$c^2$ at low transverse momentum ($p_{T} < 1$ GeV/c). The models that describe the SPS di-lepton data have not been able to consistently describe the PHENIX data in the low mass and low $p_{T}$ region. STAR has recently presented preliminary results on di-electron production in Au+Au collisions at 200 GeV [1], a measurement made possible by the addition of the full-coverage time-of-flight detector. The Beam Energy Scan program, covering beam energies down to SPS energies, together with STAR's large acceptance, allows for measurements that can provide invaluable insights into this subject. We will present mid-rapidity di-electron measurements in the $M<1.2$ GeV/$c^{2}$ mass region in Au+Au collisions at $\sqrt{s_{NN}}$ = 19.6 GeV taken in 2011 with the full Time-of-Flight detector coverage at STAR. The di-electron production will be compared to hadronic cocktail simulations. Comparisons to model calculations with in-medium vector meson modifications will be made.

[1] Jie Zhao (for the STAR collaboration) 2011 J. Phys. G: Nucl. Part. Phys. 38 124134

### Primary author

Dr Huang Bingchu (Brookhaven National Lab)
## Notes of Irit Dinur’s lecture nr 2

1. Reductions among CSPs

Let me repeat what a ${q}$-CSP is. It is a constraint satisfaction problem, i.e. there are boolean variables ${x_i}$. An instance is a collection of ${q}$-ary constraints, i.e. boolean functions of subsets of ${q}$ variables each. Notation: for an instance ${\phi}$, ${Sat(\phi)}$ is the maximum fraction of constraints that can be simultaneously satisfied by an assignment of the variables.

Example 1 3SAT is a ${3}$-CSP. MAX CUT is a ${2}$-CSP. ${3}$-COLORING is a ${2}$-CSP provided one accepts ${3}$-valued variables.

The PCP theorem says that for all these CSPs, there is an inapproximability ratio. Indeed, these problems reduce to each other.

Example 2 Reducing ${3}$-COLORING to 3SAT. This uses a tiny gadget: coding ${3}$-valued variables by sets (of size ${q}$) of boolean variables, and cooking up a set of ${q}$-ary constraints translating the ${2}$-ary constraint on ${3}$-valued variables expressing ${x_i \not=x_j}$.

Example 3 Reducing ${3}$-COLORING to LABEL COVER. An instance of LABEL COVER is a bipartite graph, two sets of labels ${[L]}$ and ${[R]}$, and a constraint for each edge telling which pairs of colors at its endpoints are allowed. If ${G}$ is ${3}$-colorable, then ${Sat(LC)=1}$. If at most a ${1-\epsilon}$ fraction of the edges of ${G}$ can be properly colored, then ${Sat(LC)<1-\frac{\epsilon}{2}}$.

So a reduction a priori induces a loss in inapproximability ratios. We want to avoid that, i.e. amplify gaps instead.

2. Repetition

Sequential repetition, i.e. running the PCP verifier independently ${\ell}$ times, improves the soundness (the probability of false acceptance) to ${2^{-\ell}}$ (assuming soundness ${1/2}$ for a single run). On the CSP side, the corresponding operation amounts to replacing a set of constraints ${\mathcal{C}=\{C_1 ,\ldots,C_m\}}$ by

$\displaystyle \begin{array}{rcl} \mathcal{C}^{\ell}=\{C_{i_{1}}\wedge C_{i_{2}}\wedge\cdots\wedge C_{i_{\ell}}\}. \end{array}$

If ${Sat(\mathcal{C})=1-\epsilon}$, then ${Sat(\mathcal{C}^{\ell})=(1-\epsilon)^{\ell}}$. However, the arity has been multiplied by ${\ell}$. The cost of reducing back to ${q}$-arity kills the benefit of sequential repetition. So this trivial gadget does not help. One possible way out of this vicious circle is the use of samplers, i.e. selecting only randomly chosen ${\ell}$-tuples of constraints. We can even derandomize sequential repetition, using expanders…

Another, more efficient, way is parallel repetition.

Example 4 Parallel repetition of LABEL COVER. Let ${(B,A)}$ be a bipartite graph, part of a LABEL COVER instance. Then ${(B^{\ell},A^{\ell})}$ is a bipartite graph as follows. Put an edge between ${(a_1, \ldots,a_{\ell})}$ and ${(b_1,\ldots,b_{\ell})}$ iff there are edges between ${(a_i ,b_i)}$ for all ${i=1,\ldots,\ell}$. A labelling is consistent if it is consistent coordinatewise.

Ran Raz’s theorem will be stated precisely on Wednesday afternoon.

Theorem 1 If an instance ${LC}$ is satisfiable, so is ${LC^{\|\ell}}$. If ${LC}$ is ${<1-\epsilon}$-satisfiable, then ${LC^{\|\ell}}$ is ${<\epsilon'(\epsilon)}$-satisfiable.

Note that this theorem cannot be derandomized (Feige and Kilian).

3. Proof of the PCP theorem

3.1. Scheme of proof

We want to reduce ${3}$-COLORING to GAP-${3}$-COLORING. Start with a graph ${G}$. If ${G}$ is not ${3}$-colorable, then ${Sat(G)\leq 1-\frac{1}{|E|}}$. That’s a small gap; we want to amplify it until it becomes independent of ${G}$. We do this recursively: at each round, we want to double the gap. After ${\log(|E|)}$ rounds, we shall be done. Each doubling round will have 3 steps.
1. Regularisation (some innocuous preprocessing).
2. Amplification by parallel repetition.
3. Alphabet reduction.

3.2. Amplification step

Keep the same set of vertices, and take as new edges the walks of length ${t}$. We get a multi-graph; thus the adjacency matrix is raised to its ${t}$-th power. ${q}$-colorings are taken to ${q'}$-colorings, where ${q'>q}$, as follows: the new color of a vertex encodes all the colors of its neighbours within radius ${t}$ in the original graph ${G}$. Constraints are imposed accordingly on each new edge.

Proposition 2 The unsatisfaction gap increases by a definite factor, which is ${\geq 2}$ for ${t}$ large enough.

This looks a bit like parallel repetition, but it differs in that the size of the instance is merely multiplied by a constant, whereas parallel repetition would change ${n}$ into ${n^2}$. You may think of it as a derandomized form of parallel repetition.

3.3. Alphabet reduction

To iterate the procedure, one needs to convert the multigraph with a ${q'}$-coloring back into a graph with a ${3}$-coloring. This is more complicated. It involves composition, an idea that has not been exploited enough so far.
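A quick sanity check of the bookkeeping (my numbers, only to illustrate why the constant-size blow-up matters): the initial unsatisfaction gap is ${\frac{1}{|E|}}$, each round at least doubles it, so about ${\log_2 |E|}$ rounds suffice to reach a constant gap; if each round multiplies the instance size by a fixed constant ${c}$, the final size is about ${c^{\log_2 |E|}\cdot|E| = |E|^{1+\log_2 c}}$, which is polynomial, whereas squaring the size at every round (as plain parallel repetition would) would not be.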
## Jgeurts one year ago Below you are given a function f(x) and its first and second derivatives. Use this information to solve the following. 1. Abarnett what is the function? 2. Abarnett ok 3. Jgeurts Determine the intervals where the function is concave up and concave down. 4. Jgeurts $f(x) = (x^2 - 4)/(x^2 +1)$$f'(x) = 10x/(x^2 +1)^2$$f''(x) = 10(1-3x^2)/(x^2 +1)^3$ 5. Abarnett for the first one the lowest point is at y=-4 6. Jgeurts at f(0) i got that, isn't there a rule having to do with the second derivative i can use? 7. Abarnett Are you thinking of Leibniz's notation? 8. agent0smith Use the second derivative to find concavity. It's concave down where f'' is negative, ie f'' < 0 Concave up where f'' is positive, f'' > 0 Find where f'' = 0 firstly. 9. Abarnett f' is equal to 0 when x is 2 10. Abarnett @Jgeurts 11. Jgeurts oh i see 12. Jgeurts i got negative concavity for both sides of zero in [-2,3] 13. Jgeurts so does that mean it looks something like this? |dw:1367705711666:dw| 14. Abarnett for f'... yes 15. Jgeurts 16. Abarnett the graph you drew is f(x)... sorry my bad! 17. Abarnett f(x)' will look like this 18. Jgeurts I also need to label the inflection point and asymptotes. To find the inflection point i need to find f''(0) right? 19. Abarnett |dw:1367706115115:dw| 20. Jgeurts what the.. how did you get that? 21. Abarnett when it is down x is approx. -0.638 and y is approx. -3.222 22. Abarnett Graphing calculator 23. Abarnett and when it is up it is the same numbers... just positive 24. Abarnett @Jgeurts 25. Jgeurts ahhh I see 26. Jgeurts so what does that tell us? 27. Jgeurts oh the slope! duh 28. Abarnett Just type in the functions. https://www.desmos.com/calculator this is a good online graphing calculator 29. Jgeurts Thanks 30. Abarnett np, tag me if you need me! 31. Jgeurts Thumps up! Im working on my practice final all afternoon so i might need you! @abarnett 32. Abarnett ok I will be on as long as i can! what class are you taking? 33. Jgeurts Calc 1 34. Abarnett cool 35. agent0smith Did you find f''? once you do, you can find where it's concave up and down. Just from looking at the graph, I'd guess concave down between about -inf and -1, and +1 and +infinity Concave up between about -1 and 1. 36. Jgeurts Yea I needed to completely work the problem beginning to end 37. agent0smith Once you get the x values where f'' = 0, pick points to the left and right of those x values, and check if f'' is positive (concave down) or negative (concave up) to help find the intervals. 38. Jgeurts @agent0smith ok, and that is really the only thing i need with the second derivative right? 39. Abarnett |dw:1367707501528:dw| very top y=10 and x=0 40. agent0smith Yep = it's just used to determine inflection points and concavity. Wherever f'' is neg, it's concave down, and vice versa. 41. Abarnett that is f'' 42. Jgeurts f'' just gives me inflection points at f'' = 0 and concavity.. cool! :) 43. agent0smith @Abarnett probably easier to just link to one than hand draw a graph :P f'': https://www.google.com/search?q=10(1-3x%5E2)%2F(x%5E2+%2B1)%5E3&aq=f&oq=10(1-3x%5E2)%2F(x%5E2+%2B1)%5E3&aqs=chrome.0.57j60l3j62l2.174j0&sourceid=chrome&ie=UTF-8 44. Abarnett @agent0smith you are right my bad! :P 45. agent0smith Oh and @Jgeurts f'' can also be used to find if a critical point (where f' = 0) is a max or min - at a maximum, f'' is negative. At a minimum, f'' is positive. But that's part of the concavity anyway, since maximums are concave down, minimums are concave up. 
If f'' is zero at a point where f' = 0, then it's an inflection point. 46. Jgeurts @Abarnett @agent0smith So I have Global max (3,1/2), Global Min (0,-4), Decreasing on the interval [-2,0], Increasing on the interval [0,3], local max is f(3), local min is f(0), concave down at [-inf,-sqrt1/3]U[sqrt1/3, inf], concave up at [-sqrt1/3, sqrt1/3], inflection points occur at +-sqrt1/3 47. Jgeurts by the way im supposed to be using the interval [-2,3] for everything i forgot to tell you, haha 48. Abarnett looks good to me, what do you think @agent0smith 49. Jgeurts $f(x) = (x^2 - 4)/(x^2 +1)$$f'(x) = 10x/(x^2 +1)^2$$f''(x) = 10(1-3x^2)/(x^2 +1)^3$ 50. agent0smith lol that interval [-2,3] makes a big difference... as the original function has no maximum, it just has an asymptote. https://www.google.com/search?q=(x%5E2+-+4)%2F(x%5E2+%2B1)&aq=f&oq=(x%5E2+-+4)%2F(x%5E2+%2B1)&aqs=chrome.0.57j60l3j62l2.269j0&sourceid=chrome&ie=UTF-8 That's the original function... all your values look good. Except: concave down at [-inf,-sqrt1/3]U[sqrt1/3, inf], Something seems off here, with an interval [-2,3] ;) 51. Jgeurts ha! you got me, no infinities now! 52. Jgeurts so if there was no boundaries, the local and global maximum DNE right? 53. agent0smith If you can learn to kinda read the graph, you can check your values. The max/mins should be easy to find (turning points), same with where the graph is increasing or decreasing (the slope f' is positive or negative). Points of inflection are more difficult to see... you kinda have to gauge where it changes from concave up to down which can be hard to see, but still possible to estimate where it is. 54. agent0smith Correct, since f' will never equal 0 except at the minimum. $\large f'(x) = 10x/(x^2 +1)^2$ - note that this can be only zero if x=0 (the denominator can never make the f' equal zero) - ie there's no other turning points/maximums, only a minimum. 55. Jgeurts Ok, i feel so informed, haha. Hold up, last question! What about asymptotes? Horizontal/Vertical? Obviously theres one around y=1 but how do i prove it? 56. Abarnett try this site it might help! never tried it http://calculator.tutorvista.com/math/601/asymptote-calculator.html 57. Abarnett @Jgeurts 58. Jgeurts I need to know how to do it from the ground up, no calculators 59. agent0smith To find asymptotes... you just have to look at the original function, there's no calculus involved. $\large f(x) = (x^2 - 4)/(x^2 +1)$ What happens when x gets really huge (positive or negative infinity? The -4 and +1 become tiny compared to the huge x^2, so it becomes $\Large \frac{ x^2 }{ x^2 }$ which is equal to ...? 60. Jgeurts 1! 61. Jgeurts so x^3/x^2 would have no horizontal, but have vertical? 62. agent0smith Correct! :) that's for horizontal asymptotes. So y= 1 is a horizontal asymptote. To find vertical asymptotes, find values of x that cause the denominator to be zero $\Large f(x) = \frac{ (x^2 - 4)}{(x^2 +1)}$are there any x values that make the denominator zero? x^2 is always positive, and you're adding 1... What about instead for this function? $\Large f(x) = \frac{ (x^2 - 4)}{(x +1)}$ 63. Jgeurts and x^2/x^3 would have a horizontal of y=0? 64. Jgeurts no, and yes -1 65. agent0smith For x^3/x^2, that simplifies to just x - it's actually a slant asymptote (neither horizontal or vertical) - there is no horizontal. BUT x can't equal zero, so it has a vertical at x=0 (or just a 'hole' at x=0, not really an asymptote in this case), since the denominator is zero. 66. Jgeurts i see so its like the graph of X 67. 
Jgeurts and if its negative im guessing thats what flips it over 68. agent0smith Yep. With a hole at x=0. So for vertical - look for values of x that give a denominator zero For horizontal - look for how the function behaves when x is approaching +infinity or -infinity. 69. Jgeurts excellent 70. agent0smith Slant asymptotes are a bit more confusing, sometimes you have to use polynomial long division to find them. 71. Jgeurts ok :) thank you so much, i wish i could give out 2 best answers 72. Abarnett @Jgeurts I gave him one for you! 73. Jgeurts :) @abarnett
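For anyone who wants to check the whole problem symbolically, here is a short SymPy sketch (my own code, not part of the original discussion) that reproduces the inflection points at $\pm\sqrt{1/3}$, the concavity interval between them, and the horizontal asymptote $y = 1$:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = (x**2 - 4) / (x**2 + 1)

f1 = sp.diff(f, x)                                  # 10x/(x^2+1)^2
f2 = sp.simplify(sp.diff(f, x, 2))                  # 10(1-3x^2)/(x^2+1)^3

print(sp.solve(sp.Eq(f2, 0), x))                    # inflection points: [-sqrt(3)/3, sqrt(3)/3]
print(sp.solve_univariate_inequality(f2 > 0, x))    # concave up strictly between them
print(sp.limit(f, x, sp.oo))                        # horizontal asymptote: 1
print(f.subs(x, 0), f.subs(x, 3))                   # values on [-2, 3]: min -4 at 0, 1/2 at 3
```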
Randomly generate a list with predetermined mean Aim: randomly generate $n$ numbers between $a$ and $b$ with (roughly) mean $m$. Problem: The simple beginner's code I wrote becomes very inefficient as $m$ moves away from $\frac{a + b}{2}$. import random import numpy as np import time low = 0 high = 100 data = 50 def lst_w_mean(target): HT = [x for x in range(low, high+1)] target_mean = 0 while target_mean <> int(target): outcomes = [random.choice(HT) for x in range(data)] target_mean = np.mean(outcomes) return outcomes t1 = time.clock() data_new = lst_w_mean(54) print "elapsed time: ", time.clock() - t1 print data_new t1 = time.clock() data_new = lst_w_mean(32) print "elapsed time: ", time.clock() - t1 print data_new Works pretty decently for means close to the the mean of HT (e.g., means between between 40 and 60). Takes for ever (without generating a run-time error) beyond that narrow range (roughly 28 minutes for mean =30) %run elapsed time for mean = 54: 0.00734800000009 [46, 82, 6, 0, 90, 73, 99, 2, 34, 88, 51, 14, 89, 72, 40, 8, 97, 44, 46, 45, 89, 10, 22, 52, 96, 98, 43, 52, 59, 74, 52, 54, 64, 66, 21, 71, 92, 34, 76, 33, 26, 36, 53, 74, 64, 85, 59, 26, 69, 24] elapsed time for mean = 30: 1695.210358 [67, 43, 30, 78, 67, 33, 13, 24, 27, 1, 2, 4, 4, 2, 49, 1, 41, 16, 6, 9, 6, 90, 1, 31, 52, 91, 5, 11, 4, 94, 2, 74, 60, 38, 8, 2, 10, 20, 92, 25, 17, 36, 12, 9, 7, 22, 16, 72, 44, 32] Reformulating HT with something like: if target < 40: high = 70 etc., so that the target mean is closer to the mean of HT does not generate realistic data besides being an ad hoc solution. And yes, I would like what I call "realistic data" to have a "high" standard deviation perhaps with a number of "outliers". The process speeds up significantly as the value of data diminishes, so maybe the better approach is to try to use recursion with smaller data values (sort of like in merge-sort)? • Do you have any other requirement for the variables? In other words, could you just sample from a gaussian with mean target? – Graipher Dec 14 '16 at 11:12 • which version of python is used? – Alex Dec 14 '16 at 14:19 • @Alex Judging from the print statements 2.x... – Graipher Dec 14 '16 at 15:58 • Yes, 2.7 @Graipher Sorry, neither a programmer nor a statistician--just a philosophy teacher who is interested in these things. So I am not sure exactly what you mean by "sampling from a Gaussian". If it means generating samples that have Gaussian distributions, and checking their means, that would sort of go against my intention of accepting large variation and outliers. I noticed that python's random module had some tools that seem to do things like that, is that what you had in mind? – user2738815 Dec 14 '16 at 22:53 • @200_success If you can please explain what was wrong with my edit (since you deleted it), I will refrain from repeating the error. Such edits are common in Stackoverflow--mine was intended to help others. – user2738815 Dec 16 '16 at 23:43 Review The official Python style-guide recommends using lower_case for variables (your variable HT violates this). I would make all relevant parameters actual parameters of the function. This makes it a lot easier to test. The main code of a module should be wrapped in a if __name__ == "__main__": guard. This allows you to do in another script: from random_with_mean import lst_w_mean without executing the tests. I would try to find a different name for the function, without too many abbreviations. Alternative approach I would use a completely different approach to this. 
First, note that the Poisson distribution is a positive-semidefinite distribution for integers. If we want to generate a integers in the range (0, 100) (similar to your example, but note the off-by-one error), with mean 50, We can just take a np.random.poisson(50) (which I will call P(50) from now on), ensuring to throw away all values which are larger than 100 and redraw them. This also works when generating values in the range (0, 100) but with mean 10. Only when the mean becomes larger than the half-way point do we start getting into trouble. Suddenly we are starting to loose a larger and larger part of the tail to the upper boundary. To avoid this, we can just swap the problem around and declare the upper boundary to be zero and generate the values as high - P(high - target), so we generate how far the value is away from the boundary. Similarly, when the lower bound is not zero, we need to shift the values upwards and generate low + P(target - low). An implementation of this is: import numpy as np def lst_w_mean(low, high, num, target): swap = target > (high - low) / 2 out = [] to_generate = num while to_generate: if swap: x = high - np.random.poisson(high - target, size=to_generate) x = x[low <= x] # x can't be larger than high by construction else: x = low + np.random.poisson(target - low, size=to_generate) x = x[x < high] # x can't be smaller than low by construction out.extend(x) to_generate = num - len(out) return out if __name__ == "__main__": print np.mean(lst_w_mean(0, 100, 50, 50)) print np.mean(lst_w_mean(0, 100, 50, 10)) print np.mean(lst_w_mean(0, 100, 50, 70)) Note that I always generate multiple values (enough to fill the out list if all were inside the boundaries) at the same time, then throw away all values outside of the boundaries and loop until the out list is the right size. Most of the time only one iteration is needed, because it can only go outside of the boundary on the side away from the closer boundary. Note that the variance of the Poisson distribution is the same as its mean and so its standard deviation is the square root of that. This means that you will get some outliers, but the distribution is far from uniform over your range. To visualize this, here are three different samples (with n=5000), one with mean 10, 50, 75 and 99 each using the Poisson distribution: As an alternative, you could take the Beta distribution, which is only defined on the interval [0, 1], which helps us a lot here, because we don't need to take care of edge effects, we only need to shift and rescale our values. Given a mean and sigma (I took sigma to be an arbitrary value), we can derive the necessary parameters $\alpha$ and $\beta$, as given as alternative parametrization on wikipedia: $\alpha = \mu \left(\frac{\mu(1 - \mu)}{\sigma^2} - 1\right),\quad \beta = (1 - \mu) \left(\frac{\mu(1 - \mu)}{\sigma^2} - 1\right)$ which only holds when $\sigma^2 < \mu(1 - \mu)$. With this, I ended up with the following code: from __future__ import division def lst_beta(low, high, num, mean): diff = high - low mu = (mean - low) / diff sigma = mu * (1 - mu) c = 1 / sigma - 1 a, b = mu * c, (1 - mu) * c return low + diff * np.random.beta(a, b, size=num) Visualized for $\mu = 10, 50, 75, 99$ this looks like this: In comparison, this is what @Peilonrayz's algorithm produces for these means: Edit: Here is the code to generate the graphs: import matplotlib.pyplot as plt #function definitions here def lst_w_mean(low, high, num, target): ... 
plt.figure() means = 10, 50, 75, 99 for i, mean in enumerate(means, 1): plt.subplot(len(means), 1, i) l = lst_w_mean(0, 100, 5000, mean) plt.hist(l, bins=100, range=(0, 100)) plt.show() • With your alternate approach, I would be tempted to use a normal distribution with trimmed tails for target means near the middle of the range, rather than only poisson, as trimming changes the mean. Something like lower third poisson, middle third normal, upper third flipped poisson – Caleth Dec 14 '16 at 17:15 • There are probably multiple useful distributions to choose from. Working out the error should be analytically tractable, all it requires is looking at CDFs at the cutoff points – Caleth Dec 14 '16 at 17:19 • Thank you, works and fast. However, yields very low standard deviations (between 4 and 6). I am looking to generate data with stdev over 10, preferably higher. I commented above about this worry regarding Gaussian or other tailored distributions. – user2738815 Dec 15 '16 at 4:39 • @user2738815 Yes, that is the disadvantage of the poisson distribution, as noted in my last paragraph. A gaussian would have the advantage that you can choose the std dev, at the price of manually having to deal with the edge effects. I will think if I can find a better distribution. – Graipher Dec 15 '16 at 7:54 • Yes, this is excellent! However, initially, the code generated a "divison by zero" error when called with, say: lst_beta(0, 100, 164, 70), so I patched it by changing the second line of the function to: mu = (mean - low) / float(diff). (Might not be the best way to patch, but it works.) Here is a plot where an actual distribution I have (blue), and a distribution generated by the code (green) with the same mean is plotted together. Impressive. – user2738815 Dec 15 '16 at 20:30 If you want to keep your algorithm, then you should: • Take; low, high, and data, as arguments to lst_w_mean. • Use != not <>. • Upgrade to Python 3 to have the option to remove the reliance on NumPy. With the new statistics.mean Your algorithm reminds me of the infamous Bogosort. You're generating a bunch of random numbers and hoping for the best. This is clearly not going to be performant. First off I'd recommend that you create a helper function. This is as thinking of algorithms as $0-\text{stop}$ with a step of $1$ is easier than thinking of $\text{start}-\text{stop}$ with a step of $\text{step}$. To do this you can perform a transform on the output of you function so that you multiply by the step, and add by the start. But you should take into account that some means are not possible with certain steps. Finally if you make your input work like a Python range, then it'll be familiar to other Python programmers. And so you could get a decorator like: from functools import wraps from math import copysign def normalize_input(fn): @wraps(fn) def wrapper(mean, length, start, stop=None, step=None): if stop is None: stop = start start = 0 if step is None: step = 1 if not (start <= mean * copysign(1, step) < stop): raise ValueError('mean is not within distribution range.') if mean * length % step != 0: raise ValueError('unreachable mean.') stop -= start mean -= start if step != 1: stop /= step mean /= step numbers = fn(mean, length, stop) for i, num in enumerate(numbers): numbers[i] = num * step + start return numbers return wrapper This allows you to define functions that only take a mean, length and stop. 
And so could make your code: @normalize_input def list_with_mean(mean, length, stop): HT = [x for x in range(stop+1)] target_mean = 0 while target_mean != mean: outcomes = [random.choice(HT) for x in range(length)] target_mean = sum(outcomes)/len(outcomes) return outcomes numbers = list_with_mean(50, 100, 25, 75) Which will generate $100$ random numbers with a mean of $50$, and with numbers ranging from $25-75$. After this you want to look into a better way to generate these random numbers. At first I used an algorithm that generates $n$ random numbers that add to a number. As then we can use an algorithm documented by David Schwartz. To prove that we can use this algorithm rather than the one that you are using, we have two of the numbers in the mean equation: $$\text{mean} = \frac{\Sigma_{i=0}^{\text{len}(n)}(n_i)}{\text{len}(n)}$$ $$\Sigma_{i=0}^{\text{len}(n)}(n_i) = \text{mean} * \text{len}(n)$$ And so we can quickly implement the algorithm with: from random import randrange from itertools import tee # Itertools Recipe def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." a, b = tee(iterable) next(b, None) return zip(a, b) def fixed_sum(total, length): for a, b in pairwise(sorted([0, total] + [randrange(total) for _ in range(length - 1)])): yield b - a def bias_mean(lower=False): def bias_mean_inner(fn): @wraps(fn) def wrapper(mean, length, stop): flip_mean = (mean > stop / 2) is lower if flip_mean: mean = stop - mean numbers = fn(mean, length, stop) if flip_mean: for i, num in enumerate(numbers): numbers[i] = stop - num return numbers return wrapper return bias_mean_inner @normalize_input @bias_mean(True) def by_sum(mean, length, stop): numbers = list(fixed_sum(int(mean * length), length)) while True: for i in range(0, length): num = numbers[i] if num >= stop: numbers[i] = stop else: if add + num > stop: n = stop else: numbers[i] = n break return numbers As pointed out by @Graipher this algorithm isn't very good to generate the random numbers. And so instead of using the above algorithm, you can use one that adds a random amount to each number, as long as it doesn't exceed the maximum size. To do this you can create an amount variable that you subtract from, when you generate a random number. You then want to loop through you list and pick a random number to add to the current item. The domain of this random item is $0-\min(\text{amount}, \text{stop} - \text{item})$. This is so that it doesn't exceed the maximum number size, and all the items add to the total we want. This can get you the code: @normalize_input @bias_mean(False) def by_sum_distribution(mean, length, stop): numbers = [0] * length amount = length * mean while amount: for i in range(length): n = randrange(min(amount, stop - numbers[i]) + 1) numbers[i] += n amount -= n return numbers This has a uniform output when the $\text{mean} = \frac{\text{stop}}{2}$, but otherwise the beta function is much better. • Thanks, I am studying/testing your answer, but am hitting a snag. When I make the call numbers_with_mean(x, 50, 1, 100, 1) using a loop with x ranging from 30 to 70, output time for each step takes around .0002 seconds until a certain mean -- then the program gets stuck in run-time (no error message, stuck at different mean each time). Also, where did you get the idea that my code was an "algorithm" :) I am just a beginner who is asking for help. – user2738815 Dec 15 '16 at 4:08 • @user2738815 Yeah, I messed up. 
I was transforming the array whilst modifying it, and so if you randomly got stop at the end you'd infinitely loop. It should now be fixed, this was also the cause of the odd means. If you look at a definition of algorithm, "an algorithm is a self-contained step-by-step set of operations to be performed" - Wikipedia, it's what your code is. – Peilonrayz Dec 15 '16 at 10:06 • Sincerely grateful for the time you are putting into this. Yes, since I have been teaching logic for years, I am familiar with that wider sense of "algorithm" according to which even well-written pizza recipes are one :) However, I do not think they will be taught in a CS course called "Algorithms". – user2738815 Dec 15 '16 at 15:33
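As a small follow-up to the beta-distribution suggestion, here is a standalone sanity check; the parametrisation is copied from Graipher's answer above, and the helper name beta_sample is mine. It checks both the mean and the spread that the OP was worried about.

```python
import numpy as np

def beta_sample(low, high, num, mean):
    # Same alpha/beta parametrisation as in the answer above, restated for a standalone check.
    diff = float(high - low)
    mu = (mean - low) / diff
    c = 1.0 / (mu * (1.0 - mu)) - 1.0   # uses that answer's choice sigma = mu*(1 - mu)
    return low + diff * np.random.beta(mu * c, (1.0 - mu) * c, size=num)

sample = beta_sample(0, 100, 5000, 30)
print(np.mean(sample))   # close to 30
print(np.std(sample))    # well above 10, so the "high standard deviation" wish is met
```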
# Tour:Cayley's theorem PREVIOUS: Equivalence of definitions of group action| UP: Introduction five (beginners)| NEXT: External direct product General instructions for the tour | Pedagogical notes for the tour | Pedagogical notes for this part WHAT YOU NEED TO DO: • Read and understand the statement of Cayley's theorem, both in terms of group actions and in terms of homomorphisms. • Understand the proof of the theorem. ## Statement ### In terms of group actions Let $G$ be a group. The group multiplication $G \times G \to G$, defines a group action of $G$ on itself. In other words, the left multiplication gives an action of $G$ on itself, with the rule $g.h = gh$. This action is termed the left-regular group action. This group action is faithful -- no non-identity element of $G$ acts trivially. ### In terms of homomorphisms Let $G$ be a group. There is a homomorphism from $G$ to $\operatorname{Sym}(G)$ (the symmetric group, i.e., the group of all permutations, on the underlying set of $G$). Moreover, this homomorphism is injective. Thus, every group can be realized as a subgroup of a symmetric group. ## Proof ### In terms of group actions Given: A group $G$. To prove: $G$ acts on itself by left multiplication, and this gives an injective homomorphism from $G$ to the symmetric group on $G$. Proof: Define the left-regular group action of $G$ on itself by $g.h = gh$. 1. This is a group action: $e.s = s$ follows from the fact that $e$ is the identity element, while $g.(h.s) = (gh).s$ follows from associativity. 2. The action is faithful; every non-identity element of the group gives a non-identity permutation: Assume that there are $g, h \in G$ such that their action by left multiplication is identical. But then $ge = he$ so $g = h$. Therefore, the action is faithful. Thus, we get a homomorphism from $G$ to $\operatorname{Sym}(G)$. Since the action is faithful, distinct elements of $G$ go to distinct elements of $\operatorname{Sym}(G)$, so the map is injective. In particular, $G$ is isomorphic to a subgroup of $\operatorname{Sym}(G)$.
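As a concrete illustration of the statement (a small sketch in Python, not part of the wiki page): for the cyclic group of order 4 under addition mod 4, the left-regular action really does give an injective homomorphism into the symmetric group on the underlying set.

```python
from itertools import product

# Z_4 under addition mod 4; Cayley: g |-> (h |-> g + h) embeds the group into Sym({0,1,2,3}).
G = [0, 1, 2, 3]
op = lambda g, h: (g + h) % 4

left_mult = {g: tuple(op(g, h) for h in G) for g in G}   # each g as a permutation of G

# Faithful: distinct group elements give distinct permutations (injectivity).
assert len(set(left_mult.values())) == len(G)

# Homomorphism: the permutation of g*h is the composition of the permutations of g and h.
for g, h in product(G, repeat=2):
    composed = tuple(left_mult[g][left_mult[h][k]] for k in G)
    assert composed == left_mult[op(g, h)]

print("Z_4 embeds faithfully into Sym(Z_4) via left multiplication")
```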
arXiv - CS - Discrete Mathematics Pub Date : 2019-10-01 , DOI: arxiv-1910.00308 Thomas Bläsius; Tobias Friedrich; Martin Schirneck

We investigate the maximum-entropy model $\mathcal{B}_{n,m,p}$ for random $n$-vertex, $m$-edge multi-hypergraphs with expected edge size $pn$. We show that the expected size of the minimization $\min(\mathcal{B}_{n,m,p})$, i.e., the number of inclusion-wise minimal edges of $\mathcal{B}_{n,m,p}$, undergoes a phase transition with respect to $m$. If $m$ is at most $1/(1-p)^{(1-p)n}$, then $\mathrm{E}[|\min(\mathcal{B}_{n,m,p})|]$ is of order $\Theta(m)$, while for $m \ge 1/(1-p)^{(1-p+\varepsilon)n}$ for any $\varepsilon > 0$, it is $\Theta( 2^{(\mathrm{H}(\alpha) + (1-\alpha) \log_2 p) n}/ \sqrt{n})$. Here, $\mathrm{H}$ denotes the binary entropy function and $\alpha = - (\log_{1-p} m)/n$. The result implies that the maximum expected number of minimal edges over all $m$ is $\Theta((1+p)^n/\sqrt{n})$. Our structural findings have algorithmic implications for minimizing an input hypergraph, which has applications in the profiling of relational databases as well as for the Orthogonal Vectors problem studied in fine-grained complexity. We make several technical contributions that are of independent interest in probability. First, we improve the Chernoff--Hoeffding theorem on the tail of the binomial distribution. In detail, we show that for a binomial variable $Y \sim \operatorname{Bin}(n,p)$ and any $0 < x < p$, it holds that $\mathrm{P}[Y \le xn] = \Theta( 2^{-\!\mathrm{D}(x \,{\|}\, p) n}/\sqrt{n})$, where $\mathrm{D}$ is the binary Kullback--Leibler divergence between Bernoulli distributions. We give explicit upper and lower bounds on the constants hidden in the big-O notation that hold for all $n$. Secondly, we establish the fact that the probability of a set of cardinality $i$ being minimal after $m$ i.i.d. maximum-entropy trials exhibits a sharp threshold behavior at $i^* = n + \log_{1-p} m$.
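The sharpened binomial tail estimate can be eyeballed numerically. The snippet below is my own illustration (it uses SciPy's exact binomial CDF, which is not part of the paper): it compares $\mathrm{P}[Y \le xn]$ with $2^{-\mathrm{D}(x\|p)n}/\sqrt{n}$, and the ratio indeed stays within constant factors as $n$ grows.

```python
import numpy as np
from scipy.stats import binom

def kl_bits(x, p):
    """Binary Kullback-Leibler divergence D(x || p), in bits."""
    return x * np.log2(x / p) + (1 - x) * np.log2((1 - x) / (1 - p))

p, x = 0.5, 0.3                                   # any 0 < x < p
for n in (100, 400, 1600, 6400):
    exact = binom.cdf(np.floor(x * n), n, p)      # P[Y <= xn] for Y ~ Bin(n, p)
    estimate = 2.0 ** (-kl_bits(x, p) * n) / np.sqrt(n)
    print(n, exact / estimate)                    # bounded ratios, as the Theta(...) claims
```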