1. FIELD OF THE INVENTION

The present invention relates to bulk polymerization processes for making vinyl aromatic/vinyl cyanide copolymers, and more particularly relates to bulk polymerization processes having a vapor phase additive which inhibits popcorn polymer formation.

2. DESCRIPTION OF THE RELATED ART

Mass or bulk polymerization techniques for making copolymers of monoethylenically unsaturated polar monomers and monovinylidene aromatic monomers are known; see U.S. Pat. Nos. 3,509,237; 3,660,535; 3,243,481; 4,221,833 and 4,239,863, all of which are incorporated herein by reference. Such copolymers may be rubber modified graft copolymers or may be rubber-free rigid copolymers. Bulk processes, such as those involving a boiling reactor, typically involve a liquid phase held under a nitrogen (N2) atmosphere. In boiling reactors, the heat of reaction is the source of heat to the reactor and causes boiling of the liquid monomeric composition. Boiled monomer then enters the nitrogen vapor phase, contacts the reactor dome, which is typically air or water cooled, condenses and returns to the liquid phase. Condensed monomer on the reactor dome will generate undesired, crosslinked popcorn polymer. Popcorn generation at the dome surface may be due in part to the absence of typical polymerization inhibitors in the gas phase, owing to the low volatility of the inhibitors relative to the monomers; an inhibitor must therefore be present in the vapor phase if popcorn formation is to be inhibited. In the past, inhibitors such as oxygen have been incorporated into the vapor phase to prevent popcorn formation. It is believed, however, that oxygen may be incorporated into the polymer and may contribute to the formation of black carbonaceous material on the reactor walls. Analysis of the black material has indicated that it has a high oxygen content, which tends to support the proposition that the oxygen inhibitor is part of the cause of its formation.
Additionally, oxygen (O2) has a high solubility in many liquid organic monomers, which tends to support the proposition that oxygen is present in the liquid monomer phase during the bulk polymerization process. Accordingly, there is a need for vapor phase additives which will inhibit popcorn formation in bulk polymerization processes.
**Matrix Identities on Weighted Partial Motzkin Paths**

William Y.C. Chen$^1$, Nelson Y. Li$^2$, Louis W. Shapiro$^3$ and Sherry H. F. Yan$^4$

$^{1,2,4}$Center for Combinatorics, LPMC, Nankai University, Tianjin 300071, P.R. China\
$^3$Department of Mathematics, Howard University, Washington, DC 20059, USA\
$^1$chen@nankai.edu.cn, $^2$nelsonli@eyou.com, $^3$lshapiro@howard.edu, $^4$huifangyan@eyou.com

[**Abstract.**]{} We give a combinatorial interpretation of a matrix identity on Catalan numbers and the sequence $(1, 4, 4^2, 4^3, \ldots)$ which has been derived by Shapiro, Woan and Getu by using Riordan arrays. By giving a bijection between weighted partial Motzkin paths with an elevation line and weighted free Motzkin paths, we find a matrix identity on the number of weighted Motzkin paths and the sequence $(1, k, k^2, k^3, \ldots)$ for any $k \geq 2$. By extending this argument to partial Motzkin paths with multiple elevation lines, we give a combinatorial proof of an identity recently obtained by Cameron and Nkwanta. A matrix identity on colored Dyck paths is also given, leading to a matrix identity for the sequence $(1, t^2+t, (t^2+t)^2, \ldots)$.

[Key words]{}: Catalan number, Schröder number, Dyck path, Motzkin path, partial Motzkin path, free Motzkin path, weighted Motzkin path, Riordan array

[AMS Mathematical Subject Classifications]{}: 05A15, 05A19.

[Corresponding Author:]{} William Y. C.
Chen, chen@nankai.edu.cn

Introduction
============

This paper is motivated by the following matrix identity obtained by Shapiro, Woan and Getu [@shapirotc] in their study of the moments of a Catalan triangle [@chapman; @shapiro; @sulanke]: $$\label{eq2.2} \begin{bmatrix} 1 \\2 & 1\\5 & 4 & 1\\14 & 14 & 6 & 1\\42 & 48 & 27 & 8 & 1\\& & \cdots &&&\ddots \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ \vdots \end{bmatrix} = \begin{bmatrix} 1 \\ 4 \\ 4^2 \\ 4^3 \\ 4^4 \\ \vdots \end{bmatrix},$$ where the first column of the first matrix consists of the Catalan numbers $C_n={1\over n+1} {2n \choose n}$ and $a_{i,j}$ (the entry in the $i$-th row and $j$-th column) is determined by the following recurrence relation for $j\geq 2$: $$\label{r2.2} a_{i,j}=a_{i-1, j-1}+2a_{i-1,j}+a_{i-1,j+1} .$$ Another proof of the above identity is given by Woan, Shapiro and Rogers [@woan] while computing the areas of parallelo-polyominos via generating functions. The first result of this paper is a combinatorial interpretation of the identity (\[eq2.2\]) in terms of Dyck paths. One main objective of this paper is to give a matrix identity that extends the sequence $(1, 4, 4^2, 4^3, \ldots)$ in (\[eq2.2\]) to $(1, k, k^2, k^3, \ldots)$.
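The identity can be checked numerically before proving it. The following sketch (the function name is ours, not from the paper) builds the triangle directly from the recurrence (\[r2.2\]), reading out-of-range entries as $0$, and verifies that the first column gives the Catalan numbers and that the weighted row sums give powers of $4$:

```python
def catalan_triangle(n):
    """First n rows of the matrix in (eq2.2): a_{i,j} =
    a_{i-1,j-1} + 2*a_{i-1,j} + a_{i-1,j+1}, out-of-range entries = 0.
    Rows and columns are 0-indexed here."""
    rows = [[1]]
    for i in range(1, n):
        prev = rows[-1] + [0, 0]          # pad on the right
        rows.append([(prev[j - 1] if j > 0 else 0) + 2 * prev[j] + prev[j + 1]
                     for j in range(i + 1)])
    return rows

rows = catalan_triangle(6)
assert [r[0] for r in rows] == [1, 2, 5, 14, 42, 132]   # Catalan numbers
# the identity: sum_j j * a_{i,j} = 4^{i-1} (0-indexed: weight j+1, target 4^i)
assert all(sum((j + 1) * a for j, a in enumerate(r)) == 4 ** i
           for i, r in enumerate(rows))
```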
The following matrix identity, which arose in a study of elements of order $2$ in [*Riordan groups*]{} [@aigner; @shapiroba; @shapirotr; @sprugnoli], was proved by Cameron and Nkwanta [@cn]: $$\label{eq1.1} \begin{bmatrix} 1 \\3 & 1\\11 & 6 & 1\\45 & 31 & 9 & 1\\197 & 156 & 60 & 12 & 1 & &\\& & \cdots &&&\ddots \end{bmatrix} \begin{bmatrix} 1 \\ 3 \\ 7 \\ 15 \\ 31 \\ \vdots \end{bmatrix} = \begin{bmatrix} 1 \\ 6 \\ 6^2 \\ 6^3 \\ 6^4 \\ \vdots \end{bmatrix},$$ where the entry $a_{i,j}$ ($i$-th row and $j$-th column) in the above matrix satisfies the recurrence relation $$\label{r1.1} a_{i,j}=a_{i-1,j-1}+3a_{i-1,j}+2a_{i-1,j+1}$$ for $j\geq 2$, and $a_{i,1}$ is the $i$-th [*little Schröder number*]{} $s_i$ (sequence A001003 in [@sloane]), which counts Schröder paths of length $2(i+1)$. A [*Schröder path*]{} is a lattice path starting at $(0,0)$ and ending at $(2n, 0)$ and using steps $H=(2, 0)$, $U=(1, 1)$ and $D=(1, -1)$ such that no step goes below the $x$-axis and there are no peaks at level one. Imposing this last peak condition gives the little Schröder numbers; without it we would have the [*large Schröder numbers*]{}.
For $k=3$, we obtain the following matrix identity on Motzkin numbers: $$\label{3p} \begin{bmatrix} 1 \\1 & 1\\2 & 2 & 1\\4 & 5 & 3 & 1\\9 & 12 & 9 & 4 & 1\\& & \cdots &&&\ddots \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ \vdots \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ 3^2 \\ 3^3 \\ 3^4 \\ \vdots \end{bmatrix},$$ where the first column is the sequence of Motzkin numbers, and the matrix $A=(a_{i,j})$ is generated by the following recurrence relation: $$a_{i,j}= a_{i-1, j-1} + a_{i-1, j} + a_{i-1, j+1}.$$ For $k=5$, we find the following matrix identity $$\label{5p} \begin{bmatrix} 1 \\3 & 1\\10 & 6 & 1\\36 & 29 & 9 & 1\\137 & 132 & 57 & 12 & 1\\& & \cdots &&&\ddots \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ \vdots \end{bmatrix} = \begin{bmatrix} 1 \\ 5 \\ 5^2 \\ 5^3 \\ 5^4 \\ \vdots \end{bmatrix},$$ where the first column is sequence A002212 in [@sloane], which has two interpretations: the number of $3$-Motzkin paths, or the number of ways to assemble benzene rings into a tree [@hr]. Recall that a $3$-Motzkin path is a lattice path from $(0,0)$ to $(n-1,0)$ that does not go below the $x$-axis and consists of up steps $U=(1,1)$, down steps $D=(1,-1)$, and three types of horizontal steps $H=(1,0)$. The above matrix $A=(a_{i,j})$ is generated by its first column and the following recurrence relation: $$a_{i,j}= a_{i-1,j-1} + 3a_{i-1,j} + a_{i-1,j+1}.$$ We may prove the above identities (\[3p\]) and (\[5p\]) by using the method of Riordan arrays. So the natural question is to find a matrix identity for the sequence $(1, k, k^2, k^3, \ldots)$. We need the combinatorial interpretation of the entries in the matrix in terms of weighted partial Motzkin paths, as given by Cameron and Nkwanta [@cn]. To be precise, a partial Motzkin path, also called a Motzkin path from $(0,0)$ to $(n,k)$ in [@cn], is just a Motzkin path without the requirement of ending on the $x$-axis.
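The identities (\[3p\]) and (\[5p\]) fit one family: a triangle generated by a three-term recurrence with coefficients $(1, b, c)$, whose weighted row sums are powers of $k$. A quick numerical check (our own code, not from the paper), with $(b,c)=(1,1)$ for (\[3p\]) and $(b,c)=(3,1)$ for (\[5p\]):

```python
def triangle(n, b, c):
    # a_{i,j} = a_{i-1,j-1} + b*a_{i-1,j} + c*a_{i-1,j+1}, out-of-range = 0
    rows = [[1]]
    for i in range(1, n):
        prev = rows[-1] + [0, 0]
        rows.append([(prev[j - 1] if j > 0 else 0) + b * prev[j] + c * prev[j + 1]
                     for j in range(i + 1)])
    return rows

for k, b in [(3, 1), (5, 3)]:          # identities (3p) and (5p), both with c = 1
    rows = triangle(6, b, 1)
    assert all(sum((j + 1) * a for j, a in enumerate(r)) == k ** i
               for i, r in enumerate(rows))
```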
A weighted partial Motzkin path is a partial Motzkin path with the weight assignment that the horizontal steps are endowed with a weight $k$ and the down steps are endowed with a weight $t$, where $k$ and $t$ are regarded as positive integers. In this sense, our weighted Motzkin paths can be regarded as a further generalization of $k$-Motzkin paths in the sense of $2$-Motzkin paths and $3$-Motzkin paths [@BdLPP; @deutschs; @sloane]. We also introduce the notion of a weighted free Motzkin path, which is a lattice path consisting of Motzkin steps without the restrictions that it end at a point on the $x$-axis and that it not go below the $x$-axis. We then give a bijection between weighted free Motzkin paths and weighted partial Motzkin paths with an elevation line, which leads to a matrix identity involving the number of weighted partial Motzkin paths and the sequence $(1, k, k^2, \ldots)$. The idea of the elevation operation is also used by Cameron and Nkwanta in their combinatorial proof of the identity (\[eq2.2\]) in a more restricted form. By extending our argument to weighted partial Motzkin paths with multiple elevation lines, we obtain a combinatorial proof of an identity recently derived by Cameron and Nkwanta, in answer to their question. We also give a generalization of the matrix identity (\[eq2.2\]) and give a combinatorial proof by using colored Dyck paths.

Riordan Arrays
==============

In this section, we give a brief introduction to the notion of Riordan arrays [@shapiroba; @shapirotr; @sprugnoli]. Let us use (\[eq2.2\]) and (\[eq1.1\]) as examples. Start with two generating functions $g(x)=1+g_1x+g_2x^2+\cdots$ and $f(x)=f_1x+f_2x^2+\cdots$ with $f_1\neq 0$. Let $H=(h_{i,j})_{i,j\geq 0}$ be the infinite lower triangular matrix with nonzero entries on the main diagonal, where $h_{i,j}=[x^i]\, g(x)(f(x))^j$ for $i\geq j$, namely, $h_{i,j}$ equals the coefficient of $x^i$ in the expansion of the series $g(x)(f(x))^j$.
If an infinite lower triangular matrix $H$ can be constructed in this way from two generating functions $g(x)$ and $f(x)$, then it is called a [*Riordan array*]{} and is denoted by $H=(g(x),f(x))=(g,f)$. Suppose we multiply the matrix $H=(g,f)$ by a column vector $(a_0,a_1,\cdots)^T$ and get a column vector $(b_0,b_1,\cdots)^T$. Let $A(x)$ and $B(x)$ be the generating functions for the sequences $(a_0,a_1,\cdots)$ and $(b_0,b_1,\cdots)$, respectively. Then the method of Riordan arrays asserts that $$B(x)=g(x)A(f(x)).$$ For the matrix identity (\[eq2.2\]), let $g(x)$ be the generating function for the Catalan numbers $(1, 2, 5, 14, \ldots)$: $$g(x) = {1 -2x- \sqrt{1-4x} \over 2x^2}.$$ Let $f(x)=xg(x)$. From the recurrence relation (\[r2.2\]) one may derive that the generating function for the sequence in the $j$-th $(j\geq 1)$ column of the matrix in (\[eq2.2\]) equals $g(xg)^{j-1}$. Let $H$ be the Riordan array $(g, xg)$. Since the generating function of $(1,2,3,4,\cdots)^T$ equals $A(x)=\frac{1}{(1-x)^2}$, it follows that $B(x)=g(x)A(xg(x))=\frac{1}{1-4x}$ is the generating function for the right hand side of (\[eq2.2\]). Thus we obtain the identity (\[eq2.2\]). Let us consider the matrix identity (\[eq1.1\]). Let $g(x)$ be the generating function for the little Schröder numbers, as given by $$\label{sg} g(x)={1-3x-\sqrt{1-6x+x^2}\over 4x^2},$$ and let $f(x) =xg(x)$. Note that the generating function for the sequence $(1, 3, 7, 15, \ldots)$ equals $A(x)=\frac{1}{(1-x)(1-2x)}$. From the recurrence relation (\[r1.1\]) one may verify that the matrix in (\[eq1.1\]) is indeed the Riordan array $(g, xg)$. Therefore, the generating function for the right hand side of (\[eq1.1\]) equals $g(x)A(xg(x))=\frac{1}{1-6x}$, which implies (\[eq1.1\]). Using the same method, one can verify the matrix identities (\[3p\]) and (\[5p\]); since we are going to establish a general bijection for weighted Motzkin numbers, we omit the proofs here.
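The computation $B(x)=g(x)A(xg(x))=\frac{1}{1-4x}$ can be confirmed with truncated power-series arithmetic; the sketch below (helper names are ours) represents a series by its first $N$ coefficients and checks the Catalan case:

```python
from math import comb

N = 8  # number of series coefficients to compare

def mul(p, q):
    # product of two power series, truncated at order N
    r = [0] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                r[i + j] += a * b
    return r

def compose(A, f):
    # A(f(x)) truncated at order N; valid since f has zero constant term
    assert f[0] == 0
    out, power = [0] * N, [1] + [0] * (N - 1)
    for a in A:                         # accumulate a_m * f(x)^m
        out = [u + a * v for u, v in zip(out, power)]
        power = mul(power, f)
    return out

g = [comb(2 * (n + 1), n + 1) // (n + 2) for n in range(N)]  # 1, 2, 5, 14, ...
f = [0] + g[:N - 1]                                          # f(x) = x g(x)
A = [n + 1 for n in range(N)]                                # A(x) = 1/(1-x)^2
B = mul(g, compose(A, f))
assert B == [4 ** n for n in range(N)]                       # B(x) = 1/(1-4x)
```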
Dyck path interpretation of (\[eq2.2\])
=======================================

In this section, we present a combinatorial interpretation of the matrix identity (\[eq2.2\]) by using Dyck paths. A [*Dyck path*]{} of length $2n$ is a path going from the origin $(0,0)$ to $(2n,0)$ using up steps $U=(1,1)$ and down steps $D=(1,-1)$ such that no step goes below the $x$-axis [@De; @stanley]. The number of Dyck paths of length $2n$ equals the Catalan number $C_n$. For a Dyck path $P$, the points on the $x$-axis except for the initial point are called return points; in this sense, the ending point is always a return point. Formally speaking, a [*composition*]{} of a Dyck path $P$ is a sequence of Dyck paths $(P_1, P_2, \ldots, P_j)$ such that $P=P_1 P_2 \cdots P_j$. For a composition $(P_1, P_2, \ldots, P_j)$ of a Dyck path $P$, its length is meant to be the length of $P$, and $j$ is called the number of segments. We may choose certain return points to cut a Dyck path into a composition, with the convention that the ending point is always a cut point. Clearly, a Dyck path with one segment is an ordinary Dyck path. \[dotheom\] For $j\geq 2$, we have the following recurrence relation $$\label{eq.1} d_{i,j}=d_{i-1,j-1} + 2d_{i-1,j} + d_{i-1,j+1}.$$ Let $(P_1, P_2, \ldots, P_{j})$ be a composition of a Dyck path $P$ of length $2i$. Consider the following cases for $P_1$. Case 1: $P_1=UD$. Then we get a composition of length $2(i-1)$ with $j-1$ segments: $(P_2, \ldots, P_j)$. Case 2: $P_1=QUD$, where $Q$ is not empty. Then we get a composition $(Q, P_2, \ldots, P_j)$ of length $2(i-1)$ with $j$ segments. Case 3: $P_1=U Q D$, where $Q$ is not empty. We get a composition $(Q, P_2, \ldots, P_j)$ of length $2(i-1)$ with $j$ segments. Case 4: $P_1=Q_1UQ_2D$, where $Q_1$ and $Q_2$ are not empty. Then we get a composition $(Q_1, Q_2, P_2, \ldots, P_j)$ of length $2(i-1)$ with $j+1$ segments.
Adding up the terms in the above cases, we obtain the desired recursion (\[eq.1\]). From Lemma \[dotheom\] one sees that the entry $a_{i,j}$ in the triangular matrix of the identity (\[eq2.2\]) can be explained as the number of compositions of Dyck paths of length $2i$ that contain $j$ segments. We remark that this combinatorial interpretation can also be derived from the generating function of the entries in the $j$-th column of the matrix in (\[eq2.2\]). The following formula for $a_{i,j}$ has been derived by Cameron and Nkwanta [@cn]: $$a_{i,j}= {j \over i} \, {2i \choose i-j} .$$ Let us rewrite the matrix identity (\[eq2.2\]) as follows: $$\label{eq.2} \sum_{j= 1}^i ja_{i,j}=4^{i-1}.$$ A combinatorial formulation of the above identity is given by Callan [@callan]. We are now ready to give a combinatorial proof of the above identity. Clearly, $4^n$ is the number of sequences of length $n$ on four letters, say $\{1, 2, 3, 4\}$. The term $ja_{i,j}$ suggests that we should specify one segment in a composition as a distinguished segment; we use a star $*$ to mark the distinguished segment. We call a composition with a distinguished segment a [*rooted*]{} composition of a Dyck path. Then $ja_{i,j}$ equals the number of rooted compositions of Dyck paths of length $2i$ that contain $j$ segments. There is a bijection $\phi$ between the set of rooted compositions of Dyck paths of length $2i$ and the set of sequences of length $i-1$ on four letters. We now recursively define the map $\phi$ from rooted compositions of a Dyck path $P$ of length $2i$ to sequences of length $i-1$ on $\{ 1, 2, 3, 4\}$. For $i=1$, $P$ is unique, and the sequence is set to be the empty sequence. We now assume that $i>1$.
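Both the closed form and the identity (\[eq.2\]) can be confirmed numerically; in the sketch below (our code, not from the paper), `a(i, j)` implements the Cameron–Nkwanta formula and is checked against the recurrence of Lemma \[dotheom\]:

```python
from math import comb

def a(i, j):
    # closed form a_{i,j} = (j/i) * C(2i, i-j); the quotient is an integer
    if j < 1 or j > i:
        return 0
    return j * comb(2 * i, i - j) // i

# the entries satisfy the recurrence of Lemma [dotheom] ...
for i in range(2, 10):
    for j in range(2, i + 1):
        assert a(i, j) == a(i - 1, j - 1) + 2 * a(i - 1, j) + a(i - 1, j + 1)

# ... and the weighted row sums give the identity (eq.2)
assert all(sum(j * a(i, j) for j in range(1, i + 1)) == 4 ** (i - 1)
           for i in range(1, 10))
```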
Let $(P_1, \ldots, P_t^*, \ldots, P_j)$ be a rooted composition of $P$ with $P_t^*$ being the distinguished segment. We have the following cases. 1. $P_1=UD$ and $t=1$. Then we set $\phi(P)=1\, \phi(P_2^*, P_3, \ldots, P_j)$. 2. $P_1=UD$ and $t\not=1$. Then we set $\phi(P)=2\, \phi(P_2, \ldots, P_t^*, \ldots, P_j)$. 3. $P_1=QUD$, where $Q$ is a nonempty Dyck path. Set $\phi(P)=3\, \phi(Q^*, P_2, \ldots, P_j)$ if $t=1$, and set $\phi(P_1, \ldots, P_j) =3 \, \phi (Q, P_2, \ldots, P_t^*, \ldots, P_j)$ if $t>1$. 4. $P_1=Q_1UQ_2D$, where $Q_1$ and $Q_2$ are nonempty Dyck paths. Then we set $$\phi(P_1, \ldots, P_j)=1\, \phi (Q_1, Q_2, P_2, \ldots, P_t^*, \ldots, P_j) \text{ if } t>1 ,$$ $$\phi(P)=1\, \phi (Q_1, Q_2^*, P_2, \ldots, P_j) \text{ if } t=1.$$ 5. $P_1=UQD$, where $Q$ is a nonempty Dyck path. Then we set $$\phi(P)=4\, \phi(Q, P_2, \ldots, P_t^*, \ldots, P_j).$$ In order to show that $\phi$ is a bijection, we construct the inverse map of $\phi$. Let $w=w_1w_2\cdots w_{i-1}$ be a sequence of length $i-1$ on $\{1, 2, 3, 4\}$. If $i=1$, then it corresponds to $UD$. We now assume that $i>1$. Suppose that $w_2w_3\cdots w_{i-1}$ corresponds to a rooted composition $(R_1, R_2, \ldots, R_m)$ of a Dyck path of length $2(i-1)$ with $R_k$ being the distinguished segment. We proceed to find a rooted composition $(P_1, P_2, \ldots, P_j)$ with $P_t$ being the distinguished segment such that $\phi(P_1, P_2, \ldots, P_j) = w_1 \phi(R_1, R_2, \ldots, R_m)$. If $w_1=2$, we have $P_1=UD$ and $(P_2, P_3, \ldots, P_j) = (R_1, R_2, \ldots, R_m)$. It follows that $t=k+1$ and $j=m+1$; thus we can recover $(P_1, P_2, \ldots, P_j)$. For the case $w_1=3$, we have $P_1=R_1UD$ and $t=k$; moreover, we can recover $(P_2, \ldots, P_j)$ from $(R_2, \ldots, R_m)$. For the case $w_1=4$, we have $t=k$, $P_1=UR_1D$, and $(P_2, \ldots, P_j)=(R_2, \ldots, R_m)$. It remains to deal with the situation $w_1=1$, which involves Cases 1 and 4 of the bijection.
If $k=1$, then we have $t=1$, $P_1=UD$ and $(P_2, \ldots, P_j)=(R_1,\ldots, R_m)$. If $k=2$, then we have $t=1$, $P_1=R_1UR_2D$ and $(P_2, \ldots, P_j)= (R_3, \ldots, R_m)$. If $k>2$, then we have $t=k-1$, $P_1=R_1UR_2D$, and $(P_2, \ldots, P_j)=(R_3, \ldots, R_m)$. Thus, we have shown that $\phi$ is a bijection. An example of the bijection $\phi$ is given in Figure \[dyck\], where normal vertices are drawn with black points, return points that cut off the Dyck paths into segments are drawn with white points, and the distinguished segment is marked with a $*$ on its last return step.

(Figure \[dyck\]: a rooted composition of a Dyck path of length $10$ reduced step by step under $\phi$, producing the sequence $2341$.)

Weighted Partial Motzkin Paths
==============================

A [*Motzkin path*]{} of length $n$ is a path going from $(0,0)$ to $(n,0)$ consisting of up steps $U=(1,1)$, down steps $D=(1,-1)$ and horizontal steps $H=(1,0)$,
which never goes below the $x$-axis. A [*$(k,t)$-Motzkin path*]{} is a Motzkin path in which each horizontal step is weighted by $k$, each down step is weighted by $t$ and each up step is weighted by $1$. The case $k=2, t=1$ gives the $2$-Motzkin paths, which were introduced by Barcucci, del Lungo, Pergola and Pinzani [@BdLPP] and have been studied by Deutsch and Shapiro [@deutschs]. The [*weight*]{} of a path is the product of the weights of all its steps. Denote by $|P|$ the weight of a path $P$. The [*weight*]{} of a set of paths is the sum of the weights of all the paths in the set. For any step, we say that it is at level $k$ if the $y$-coordinate of its end point is $k$. In this section, we aim to establish the following matrix identity on weighted Motzkin numbers. Let $M=(m_{i,j})_{i,j\geq 1}$ be the lower triangular matrix such that the first column records the weights of $(k-t-1, t)$-Motzkin paths (the $i$-th entry corresponding to paths of length $i-1$) and $m_{i,j}$ satisfies the following recurrence relation for $j\geq 2$: $$\label{rc.5} m_{i,j}=m_{i-1,j-1}+(k-t-1)m_{i-1,j}+tm_{i-1,j+1}.$$ Then we have $$\label{eq.5} (m_{i,j}) \times \begin{bmatrix} 1 \\ 1+t \\ 1+t+t^2 \\ \vdots\\ \end{bmatrix} = \begin{bmatrix} 1 \\ k \\ k^2 \\ \vdots \end{bmatrix}.$$ It is well known that the number of $2$-Motzkin paths of length $n$ is given by the Catalan number $C_{n+1}$. It follows that the matrix identity (\[eq2.2\]) is the special case of (\[eq.5\]) with $k=4, t=1$. Denote by $f(x)=\sum_{n\geq 0}f_{n}x^n$ the generating function for the number of $(3,2)$-Motzkin paths. It is easy to derive the functional equation $$f(x)=1+3xf(x)+2x^2f^2(x).$$ It follows that $$f(x)={{1-3x-\sqrt{1-6x+x^2}}\over {4x^2}}.$$ From this generating function, one sees that the number of $(3,2)$-Motzkin paths of length $n$ equals the $n$-th little Schröder number. Therefore, the matrix identity (\[eq1.1\]) is the special case of (\[eq.5\]) with $k=6, t=2$.
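The identity (\[eq.5\]) can be checked numerically for several pairs $(k,t)$, generating the matrix from the recurrence (\[rc.5\]) with out-of-range entries taken as $0$ (the code and its names are ours, not from the paper):

```python
def m_rows(n, k, t):
    # m_{i,j} = m_{i-1,j-1} + (k-t-1)*m_{i-1,j} + t*m_{i-1,j+1}
    rows = [[1]]
    for i in range(1, n):
        prev = rows[-1] + [0, 0]
        rows.append([(prev[j - 1] if j > 0 else 0)
                     + (k - t - 1) * prev[j] + t * prev[j + 1]
                     for j in range(i + 1)])
    return rows

for k, t in [(4, 1), (6, 2), (7, 3)]:
    rows = m_rows(7, k, t)
    # column-vector entry for (0-indexed) column j: 1 + t + ... + t^j
    assert all(sum(sum(t ** p for p in range(j + 1)) * a
                   for j, a in enumerate(r)) == k ** i
               for i, r in enumerate(rows))
```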
Let us rewrite the matrix identity (\[eq.5\]) in the following form: $$\label{mi} \sum_{j= 1}^i \, m_{i,j}(1+t+\cdots +t^{j-1}) =k^{i-1}.$$ The following combinatorial interpretation of the entries in the matrix in (\[eq.5\]) is due to Cameron and Nkwanta [@cn]. A partial $(k,t)$-Motzkin path is defined as an initial segment of a $(k,t)$-Motzkin path. We say that a partial $(k, t)$-Motzkin path ends at level $j$ if its last step is at level $j$. Let $m_{i,j}$ be the entries in the matrix in (\[eq.5\]). Then $m_{i,j}$ equals the number of partial $(k-t-1, t)$-Motzkin paths of length $i-1$ that end at level $j-1$. [*Proof.*]{} Regarding the first column of the matrix $M$, one sees that a partial $(k-t-1, t)$-Motzkin path that ends at level zero is just a $(k-t-1, t)$-Motzkin path. Let $a_{i,j}$ denote the number of partial $(k-t-1, t)$-Motzkin paths of length $i-1$ ending at level $j-1$. Let $P$ be a partial $(k-t-1, t)$-Motzkin path of length $i-1$ that ends at level $j-1$ $(j>1)$. By considering the last step of $P$ and its weight, one sees that $a_{i,j}$ satisfies the recurrence relation (\[rc.5\]). Now let $P$ be a partial $(k-t-1, t)$-Motzkin path ending at level $j$. We need the notion of an [*elevated*]{} partial Motzkin path, which was introduced by Cameron and Nkwanta [@cn] in their combinatorial proof of the following identity, a reformulation of (\[eq.2\]): $$4^n= \sum_{k=0}^n \, {(k+1)^2 \over n+1} \, {2n+2\choose n-k}.$$ Let $p$ be an integer with $0\leq p\leq j$. The [*elevation*]{} of $P$ with respect to the horizontal line $y=p$ is defined as follows. For $p=0$, the elevation of $P$ with respect to $y=0$ is just $P$ itself. We now assume $0<p\leq j$. Note that there are always up steps of $P$ at levels $j-1$, $j-2$, …, $0$, bearing in mind that an up step is said to be at level $k$ if its initial point is at level $k$. Therefore, for each level from $0$ to $p-1$, one can find a rightmost up step.
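The interpretation can be tested by brute force: enumerate all step sequences, keep those staying weakly above the $x$-axis, and total their weights by end level. The values reproduce the matrices in (\[eq1.1\]) and (\[eq2.2\]) (a sketch of ours, not from the paper):

```python
from itertools import product

def partial_weight(n, h, k, t):
    """Sum of weights of partial (k,t)-Motzkin paths of length n ending at
    level h: steps U=+1 (weight 1), H=0 (weight k), D=-1 (weight t),
    never dipping below the x-axis."""
    total = 0
    for steps in product((1, 0, -1), repeat=n):
        lvl, w = 0, 1
        for s in steps:
            lvl += s
            if lvl < 0:
                break
            if s == 0:
                w *= k
            elif s == -1:
                w *= t
        else:                       # no break: path stayed above the axis
            if lvl == h:
                total += w
    return total

# rows of the matrix in (eq1.1): partial (3,2)-Motzkin paths (k=6, t=2)
assert [partial_weight(2, h, 3, 2) for h in range(3)] == [11, 6, 1]
assert [partial_weight(3, h, 3, 2) for h in range(4)] == [45, 31, 9, 1]
# a row of the matrix in (eq2.2): partial (2,1)-Motzkin paths (k=4, t=1)
assert [partial_weight(4, h, 2, 1) for h in range(5)] == [42, 48, 27, 8, 1]
```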
Note that there are no other steps at the same level to the right of this rightmost up step, which is therefore called an [*R-visible*]{} up step with respect to the line $y=p$, in the sense that it can be seen from far away on the right. The elevation of $P$ with respect to the line $y=p$ is derived from $P$ by changing the R-visible up steps up to level $p-1$ to down steps by elevating their initial points. The line $y=p$ is called an [*elevation line*]{}. Figure \[mot\] is an illustration of the elevation of a partial Motzkin path with respect to the line $y=2$.

(Figure \[mot\]: a partial Motzkin path and its elevation with respect to the line $y=2$; the two R-visible up steps below level $2$ become down steps.)

We now introduce the notion of [*free Motzkin*]{} paths, which are lattice paths starting from $(0,0)$ and using up steps $U=(1,1)$, down steps $D=(1,-1)$ and horizontal steps $H=(1,0)$. Note that there is no further restriction, so the paths may go below the $x$-axis. A free $(k,t)$-Motzkin path is a free Motzkin path in which the steps are weighted in the same way as for $(k,t)$-Motzkin paths, namely, an up step has weight one, a horizontal step has weight $k$ and a down step has weight $t$.
For a free Motzkin path $P$ we may analogously define the [*L-visible*]{} down steps as the down steps that are visible from the far left. It is clear that a complete Motzkin path (a partial Motzkin path with ending point on the $x$-axis) has no R-visible up steps. Similarly, a partial Motzkin path has no L-visible down steps. We have the following summation formula for weighted free Motzkin paths. The sum of weights of free $(k-t-1, t)$-Motzkin paths of length $i$ equals $k^i$. The proof of the above lemma is immediate from the relation $$(1+(k-t-1)+t)^i=k^i.$$ We are now led to establish a bijection for the identity (\[mi\]). \[the.1\] There is a bijection between the set of partial $(k-t-1,t)$-Motzkin paths of length $i$ with an elevation line and the set of free $(k-t-1,t)$-Motzkin paths of length $i$. The bijection is just the elevation operation. The inverse map is also easy: for a free Motzkin path, one can identify the L-visible down steps, if any, and then change these L-visible down steps to up steps by elevating their end points. For a partial $(k-t-1, t)$-Motzkin path $P$ with an elevation line $y=p$, suppose that $Q$ is the elevation of $P$ with respect to $y=p$. It is clear that the weight of $Q$ equals $t^p |P|$, since each of the $p$ flipped up steps (weight $1$) becomes a down step (weight $t$). If $P$ ends at level $j$, then the possible elevation lines are $y=0$, $y=1, \ldots, y=j$. Summing the weights $t^p|P|$ over all partial paths $P$ and all possible elevation lines yields the identity (\[mi\]). Thus we arrive at a combinatorial interpretation of the identity (\[mi\]). As a consequence of Theorem \[the.1\], we obtain the matrix identity (\[eq.5\]). Note that the matrix identity (\[eq2.2\]) is the special case of (\[eq.5\]) with $k=4$ and $t=1$, and the matrix identity (\[eq1.1\]) is the special case with $k=6$ and $t=2$.
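The elevation operation and its inverse are easy to implement; the following sketch (our own encoding: a path is a list of steps $+1$, $0$, $-1$) checks that they are mutually inverse on all partial Motzkin paths of small length:

```python
from itertools import product

def elevate(path, p):
    """Elevation w.r.t. the line y = p: flip the rightmost up step starting
    at each of the levels 0, ..., p-1 (the R-visible up steps) into a down
    step.  Requires the path to end at level >= p."""
    path, targets, lev = list(path), {}, 0
    for idx, s in enumerate(path):
        if s == 1 and lev < p:
            targets[lev] = idx        # later occurrences overwrite: rightmost wins
        lev += s
    assert len(targets) == p
    for idx in targets.values():
        path[idx] = -1
    return path

def unelevate(path):
    """Inverse map: the L-visible down steps are exactly the first steps to
    reach each new minimum level; flip them back into up steps."""
    path, lev, minlev, flips = list(path), 0, 0, []
    for idx, s in enumerate(path):
        lev += s
        if s == -1 and lev < minlev:
            flips.append(idx)
            minlev = lev
    for idx in flips:
        path[idx] = 1
    return path, len(flips)           # recovered path and elevation line p

# round trip over all partial Motzkin paths of length <= 5
for n in range(1, 6):
    for steps in product((1, 0, -1), repeat=n):
        lev, ok = 0, True
        for s in steps:
            lev += s
            if lev < 0:
                ok = False
                break
        if not ok:
            continue
        for p in range(lev + 1):
            assert unelevate(elevate(steps, p)) == (list(steps), p)
```

Note that the flipped steps carry weight $t$ instead of $1$ afterwards, which is exactly the factor $t^p$ used in the proof above.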
An Identity of Cameron and Nkwanta
==================================

In their study of involutions in Riordan groups, Cameron and Nkwanta [@cn] obtained the following identity for $m\geq 0$, and asked for a purely combinatorial proof: $$\binom{n}{m}4^{n-m}=\sum_{k=0}^{n}\frac{k+1}{n+1}\binom{k+m+1}{k-m}\binom{2n+2}{n-k}.$$ It is clear that identity (\[eq.2\]) is the special case $m=0$. To be consistent with our notation, we rewrite the above identity in the following form: $$\label{cam} \binom{i-1}{m}4^{i-1-m}=\sum_{j=1}^{i}\frac{j}{i}\binom{j+m}{2m+1}\binom{2i}{i-j}.$$ We now give a bijective proof of (\[cam\]). We recall that the number of partial $2$-Motzkin paths of length $i-1$ ending at level $j-1$ is given by $a_{i,j}=\frac{j}{i}\binom{2i}{i-j}$. We now consider the set of partial $2$-Motzkin paths with $m$ marked R-visible up steps and $m+1$ elevation lines such that there is exactly one marked R-visible up step between two adjacent elevation lines. This gives a combinatorial interpretation of the summand in (\[cam\]): the summand counts partial $2$-Motzkin paths of length $i-1$ ending at level $j-1$ with $m$ marked R-visible up steps and $m+1$ elevation lines such that there is exactly one marked step between two adjacent elevation lines. Let $P$ be a partial $2$-Motzkin path of length $i-1$ ending at level $j-1$. Suppose that the $m$ marked R-visible up steps have initial points at levels $j_1, j_2, \ldots, j_m$. Let $t_1=j_1$ and $t_r=j_{r}-j_{r-1}-1$ for $r\geq 2$, with $j_{m+1}=j-1$. Then one sees that the number of ways to choose the $m+1$ elevation lines with exactly one marked R-visible up step between two adjacent lines equals $$(t_1+1) (t_2+1) \cdots (t_{m+1}+1).$$ Note that the $t_r$'s range over all solutions of $t_1+t_2+\cdots +t_{m+1}=j-m-1$.
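Identity (\[cam\]) is easy to verify numerically before proving it bijectively (our sketch; the integer division is exact because $\frac{j}{i}\binom{2i}{i-j}$ is the integer $a_{i,j}$):

```python
from math import comb

def lhs(i, m):
    return comb(i - 1, m) * 4 ** (i - 1 - m)

def rhs(i, m):
    # sum over j of a_{i,j} * C(j+m, 2m+1), with a_{i,j} = (j/i) C(2i, i-j)
    return sum((j * comb(2 * i, i - j) // i) * comb(j + m, 2 * m + 1)
               for j in range(1, i + 1))

assert all(lhs(i, m) == rhs(i, m) for i in range(1, 9) for m in range(i))
```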
Thus, the number of partial $2$-Motzkin paths of length $i-1$ ending at level $j-1$ with the required marked steps and elevation lines equals $$\label{c.1} \sum_{t_1+t_2+\cdots +t_{m+1}=j-1-m}(t_1+1)(t_2+1)\cdots (t_{m+1}+1).$$ Let $g(x)=\sum_{n\geq 0}(n+1)x^n$. It is clear that $g(x)={1\over (1-x)^2}$. Hence the summation (\[c.1\]) equals the coefficient of $x^{j-1-m}$ in the expansion of ${1\over (1-x)^{2m+2}}$, that is, the binomial coefficient ${j+m\choose 2m+1}$. In fact, we may give a combinatorial interpretation of the binomial coefficient ${j+m \choose 2m+1}$ in the above proof. Let $P$ be a partial $2$-Motzkin path of length $i-1$ ending at level $j-1$ with $m$ marked R-visible up steps and $m+1$ elevation lines such that there is exactly one marked up step between two adjacent elevation lines. Such a configuration can be represented as follows: $$t_1\, | \, t_2 \, * \, t_3\, |\, t_4\, * \, t_5\, | \, \cdots \, | \, t_{2m} \, * \, t_{2m+1} \, |\, t_{2m+2} ,$$ where $|$ denotes an elevation line, $*$ denotes a marked up step, and each $t_i$ denotes the number of unmarked R-visible up steps in the corresponding gap. It is clear that $t_1+t_2+\cdots+t_{2m+2}=j-1-m$, and the number of solutions of this equation equals the number of ways to distribute $j-1-m$ balls into $2m+2$ boxes, where a box may hold more than one ball. So this number equals the binomial coefficient ${ j+m \choose 2m+1}$. We are now ready to give a combinatorial proof of the identity of Cameron and Nkwanta. Recall that a $2$-Motzkin path has two kinds of horizontal steps, straight steps and wavy steps. We now introduce a third kind of horizontal step, the dotted step. The left hand side of (\[cam\]) then counts free $3$-Motzkin paths of length $i-1$ with exactly $m$ dotted horizontal steps (choose the positions of the $m$ dotted steps, then let each remaining step be one of the four types $U$, $D$, straight or wavy). We now give the following bijection that leads to a combinatorial interpretation of (\[cam\]).
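The evaluation of the convolution (\[c.1\]) as ${j+m\choose 2m+1}$ can be confirmed by brute force (our sketch, not from the paper):

```python
from itertools import product
from math import comb, prod

def convolution(j, m):
    # sum of (t_1+1)...(t_{m+1}+1) over t_1 + ... + t_{m+1} = j-1-m
    n = j - 1 - m
    return sum(prod(t + 1 for t in parts)
               for parts in product(range(n + 1), repeat=m + 1)
               if sum(parts) == n)

assert all(convolution(j, m) == comb(j + m, 2 * m + 1)
           for m in range(4) for j in range(m + 1, 9))
```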
\[cameron\] There is a bijection between partial $2$-Motzkin paths of length $i$ with $m$ marked R-visible up steps and $m+1$ elevation lines such that there is exactly one marked step between two adjacent elevation lines and free $3$-Motzkin paths of length $i$ with exactly $m$ dotted horizontal steps. Suppose that $P=P_1U^*P_2U^*\cdots P_{m}U^*P_{m+1}$ is a partial $2$-Motzkin path with $m$ marked R-visible up steps and $m+1$ elevation lines. We get a free $3$-Motzkin path by changing all the marked up steps to dotted horizontal steps and applying the elevation operation to each $P_k$. Conversely, given a free $3$-Motzkin path $P=P_1 \dashrightarrow P_2\dashrightarrow \cdots P_{m} \dashrightarrow P_{m+1}$ with $m$ dotted horizontal steps, where $\dashrightarrow$ denotes a dotted horizontal step, we can get a partial $2$-Motzkin path by changing each dotted horizontal step to a marked up step and the L-visible down steps of each $P_k$ to up steps by elevating their end points.

(Figure: an example of the bijection of Theorem \[cameron\] for $m=1$, showing a partial $2$-Motzkin path with one marked R-visible up step and two elevation lines and the corresponding free $3$-Motzkin path with one dotted horizontal step.)

We conclude this section by giving a more general identity. Let $a_{i,j,k}$ be the number of partial $k$-Motzkin paths of length $i-1$ ending at level $j-1$. Then we have $$\label{cameron1} \binom{i-1}{m}k^{i-1-m}=\sum_{j=1}^{i}a_{i,j,k-2}\binom{j+m}{2m+1}.$$

A Dyck path generalization of (\[eq1.1\])
=========================================

In this section, we give a Dyck path generalization of the matrix identity (\[eq1.1\]) on the little Schröder numbers. A $k$-Dyck path is a Dyck path in which an up step is colored by one of the $k$ colors $\{ 1, 2, \ldots, k\}$ if it is not immediately followed by a down step. In this section, we aim to give the following generalization of (\[eq1.1\]). Let $M=(m_{i,j})$ be a lower triangular matrix with the first column being the weights of $(t^2-t)$-Dyck paths of length $2i$. The other columns of $M$ are given by the following relation: $$\label{eq.6} m_{i,j}=m_{i-1,j-1}+(t^2-t+1)m_{i-1,j}+(t^2-t)m_{i-1,j+1}.$$ Then we have the following matrix identity $$\label{eq.t} (m_{i,j}) \times \begin{bmatrix} 1 \\ t^2-(t-1)^2 \\ t^3-(t-1)^3 \\ \vdots \end{bmatrix} = \begin{bmatrix} 1 \\ t^2+t \\ (t^2+t)^{2} \\ \vdots \end{bmatrix}.$$ The matrix identity (\[eq1.1\]) is the special case of (\[eq.t\]) obtained by setting $t=2$. By using generating functions, one can verify that the number of $2$-Dyck paths of length $2n$ equals the number of little Schröder paths of length $n$. We now proceed to give a combinatorial proof of (\[eqt.t\]). To this end, we need to give a combinatorial interpretation of the entries in the matrix $M$ in (\[eq.t\]). We may define a composition of a $k$-Dyck path $P$ as a sequence of $k$-Dyck paths $(P_1, P_2, \ldots, P_j)$ such that $P=P_1P_2\cdots P_j$, where $j$ is called the number of segments. Let $a_{i,j}$ be the sum of weights of compositions of $(t^2-t)$-Dyck paths of length $2i$ with $j$ segments.
Then $a_{i,j}$ satisfies the recurrence relation (\[eq.6\]). The proof of the above lemma is similar to that of Lemma \[dotheom\]. Let us rewrite (\[eq.t\]) as follows: $$\label{eqt.t} \sum_{j\geq 1}m_{i,j}(t^j-(t-1)^j)=(t^2+t)^{i-1}.$$ In order to deal with $m_{i,j}(t^j-(t-1)^j)$ combinatorially, we introduce a coloring scheme on a composition of a $(t^2-t)$-Dyck path with $j$ segments. Suppose that we have $t$ colors $c_1, c_2, \ldots, c_t$. If we use these $t$ colors to color the $j$ segments such that the first color $c_1$ must be used, then there are $t^j-(t-1)^j$ ways to accomplish such colorings. We simply call such colorings [*$t$-feasible colorings*]{}. We can now present a bijection leading to a combinatorial proof of (\[eqt.t\]). There is a bijection between the set of compositions of $(t^2-t)$-Dyck paths of length $2i$ with a $t$-feasible coloring on the segments and the set of sequences of length $i-1$ on $t^2+t$ letters. The desired bijection $\sigma$ is constructed as follows. Let $(P_1, P_2, \ldots, P_j)$ be a composition of a $(t^2-t)$-Dyck path $P$ of length $2i$ with a $t$-feasible coloring on the segments. We will use the following alphabet that contains $t^2+t$ letters: $$\label{alphabet} \{ \alpha_r \;|\; 1\leq r\leq t\} \cup \{ \beta_s \;|\; 1\leq s\leq t-1\} \cup \{ \gamma_{k} \;|\; 1 \leq k \leq t^2-t\} \cup \{ \delta\}.$$ For $i=1$, both the composition and the $t$-feasible coloring are unique. We set the corresponding sequence to be empty. For $i\geq 2$, we consider the following cases: 1. If $P_1=UD$, $P_1$ is colored by $c_r$ ($1\leq r\leq t$) and $(P_2, \ldots, P_j)$ still has a $t$-feasible coloring. Then we set $\sigma(P_1, \ldots, P_j)=\alpha_r \sigma(P_2, \ldots, P_j)$. 2. If $P_1=UD$, $P_1$ is colored by $c_1$ and $(P_2, \ldots, P_j)$ does not inherit a $t$-feasible coloring. Assume that $P_2$ is colored by $c_{s+1}$ ($1\leq s\leq t-1$).
Then we change the color of $P_2$ to $c_1$ and set $\sigma(P_1, \ldots, P_j)=\beta_s \sigma(P_2, \ldots, P_j)$. 3. If $P_1=UDQ$, where $Q$ is not empty. Then we set $\sigma(P_1, \ldots, P_j)=\delta\sigma(Q, P_2, \ldots, P_j)$. 4. If $P_1=UQD$, where $Q$ is not empty. Then the first up step of $P_1$ has a color $k$ ($1 \leq k \leq t^2-t$), since the first step of $Q$ is an up step. We set $\sigma(P_1, \ldots, P_j)=\gamma_{k} \sigma(Q, P_2, \ldots, P_j)$. 5. If $P_1=U Q_1 D Q_2$, where neither $Q_1$ nor $Q_2$ is empty. Then the first up step of $P_1$ has a color $c_k$. Since $k$ ranges from $1$ to $t(t-1)$, we may encode the color $c_k$ by a pair of colors $(c_p, c_q)$, where $p$ ranges from $1$ to $t$ and $q$ ranges from $1$ to $t-1$. Moreover, we may use $(c_r, \beta_s)$ to denote a color $c_k$. Then we assign color $c_r$ to $Q_1$, pass the color of $P_1$ to $Q_2$, and set $\sigma(P_1, \ldots, P_j)=\beta_s\sigma(Q_1, Q_2, P_2, \ldots, P_j)$. In each case, the resulting sequence has length $i-1$. In order to show that $\sigma$ is a bijection, we proceed to construct the inverse map of $\sigma$. Let $S$ be a sequence of length $i-1$ on the alphabet (\[alphabet\]). If $i=1$, then we get the unique Dyck path $UD$ and the unique composition with a $t$-feasible coloring. Note that the up step in the Dyck path $UD$ is not colored. We now assume that $i>1$. It is easy to check that Cases 1, 3, and 4 are reversible. It remains to show that Cases 2 and 5 are reversible. In fact, we only need to ensure that Case 2 and Case 5 can be distinguished from each other. For Case 2, either $j=2$ or $(P_3, \ldots, P_j)$ does not have a $t$-feasible coloring. On the other hand, for Case 5, $(Q_2, P_2,\ldots, P_j)$ is always nonempty and it has a $t$-feasible coloring. This completes the proof. We also have a combinatorial interpretation of the matrix identity (\[eq1.1\]) based on little Schröder paths. The idea is similar to the proof given above, so the proof is omitted.
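As a quick sanity check (not part of the original paper), the identity (\[eqt.t\]) can be verified by computer for small $i$: build the rows of $M$ from the recurrence (\[eq.6\]), assuming the boundary row $m_{1,j}=1$ if $j=1$ and $0$ otherwise, and compare both sides. A Python sketch:

```python
# Sanity check of identity (eqt.t)/(eq.t): build M = (m_{i,j}) from the
# three-term recurrence (eq.6), starting from the assumed boundary row
# m_{1,j} = 1 if j == 1 else 0, and compare both sides of the identity.

def check_identity(t, n=8):
    """Check sum_{j>=1} m_{i,j} (t^j - (t-1)^j) == (t^2+t)^(i-1) for i <= n."""
    size = n + 3                      # enough columns: m_{i,j} = 0 for j > i
    row = [0] * size
    row[1] = 1                        # m_{1,1} = 1 (column index starts at 1)
    for i in range(1, n + 1):
        lhs = sum(row[j] * (t**j - (t - 1)**j) for j in range(1, size))
        if lhs != (t * t + t) ** (i - 1):
            return False
        # advance row i -> row i+1 via the recurrence (eq.6)
        row = [0] + [row[j - 1] + (t*t - t + 1) * row[j] + (t*t - t) * row[j + 1]
                     for j in range(1, size - 1)] + [0]
    return True

# t = 2 recovers the little Schroder identity (eq1.1)
assert all(check_identity(t) for t in (2, 3, 5, 7))
```

For $t=2$ the first column comes out as $1, 3, 11, 45, \ldots$, the little Schröder numbers, consistent with the remark that $2$-Dyck paths of length $2n$ are equinumerous with little Schröder paths of length $n$.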
[**Acknowledgments.**]{} This work was supported by the 973 Project on Mathematical Mechanization, the National Science Foundation, the Ministry of Education, and the Ministry of Science and Technology of China. The third author is partially supported by NSF grant HRD 0401697.

E. Barcucci, A. del Lungo, E. Pergola and R. Pinzani, A construction for enumerating $k$-coloured Motzkin paths, Lecture Notes in Computer Science, Vol. 959, Springer, Berlin, 1995, pp. 254-263.

M. Aigner, Catalan-like numbers and determinants, [*J. Combin. Theory, Ser. A*]{}, 87 (1999) 33-51.

D. Callan, A combinatorial interpretation of a Catalan numbers identity, [*Math. Mag.*]{}, 72 (1999) 295-298.

N. Cameron and A. Nkwanta, On some (pseudo) involutions in the Riordan group, [*J. Integer Sequences*]{}, 8 (2005), Article 05.3.7.

R. Chapman, Moments of Dyck paths, [*Discrete Math.*]{}, 204 (1999) 113-117.

E. Deutsch, Dyck path enumeration, [*Discrete Math.*]{}, 204 (1999) 167-202.

E. Deutsch and L. Shapiro, A bijection between ordered trees and $2$-Motzkin paths and its many consequences, [*Discrete Math.*]{}, 256 (2002) 655-670.

F. Harary and R. C. Read, The enumeration of tree-like polyhexes, [*Proc. Edinburgh Math. Soc.*]{}, (2) 17 (1970) 1-13.

L. Shapiro, Bijections and the Riordan group, [*Theoretical Computer Science*]{}, 307 (2003) 403-413.

L. Shapiro, S. Getu, Wen-Jin Woan, and L.C. Woodson, The Riordan group, [*Discrete Appl. Math.*]{}, 34 (1991) 229-239.

L. Shapiro, Wen-Jin Woan, and S. Getu, Runs, slides and moments, [*SIAM J. Alg. Disc. Math.*]{}, 4 (1983), 459-466.

L. Shapiro, A Catalan triangle, [*Discrete Math.*]{}, 14 (1976), 83-90.

N.J.A. Sloane and S. Plouffe, The Encyclopedia of Integer Sequences, Academic Press, San Diego, 1995, online at [www.research.att.com/~njas/sequences/]{}.

R. Sprugnoli, Riordan arrays and combinatorial sums, [*Discrete Math.*]{}, 132 (1994) 267-290.

R.P. Stanley, [*Enumerative Combinatorics, Vol. 2*]{}, Cambridge University Press, Cambridge, 1999.

R.A. Sulanke, Moments of generalized Motzkin paths, [*J. Integer Sequences*]{}, 3 (2000), Article 00.1.1.

Wen-Jin Woan, L. Shapiro, and D.G. Rogers, The Catalan numbers, the Lebesgue integral, and $4^{n-2}$, [*The Amer. Math. Monthly*]{}, 104 (1997) 927-931.
For the first time ever, Ford introduces a drag race vehicle with a full-electric drivetrain system. It is named Mustang Cobra and is capable of crushing the quarter-mile in the low 8-second range at more than 170mph (307km/h). This battery-powered Mustang delivers a total of 1,400hp and 1,100lb-ft of torque. The vehicle follows the debut of the first all-electric Mustang - the Mach-E SUV - and represents another opportunity to advance the Mustang heritage and performance while, at the same time, in

$15,000 Giveaway Sponsored by RAXIOM

AmericanMuscle’s Biggest Giveaway of the Year! PAOLI, Pa. (February 25th, 2020) – Attention Dodge Challenger and Ford Mustang owners: here is your chance to be the winner of AmericanMuscle’s (AM) biggest giveaway prize of 2020! AM’s $15K Giveaway, sponsored by RAXIOM, is an enter-daily competition where participants can increase their chances of winning by visiting RAXIOM’s brand pages and completing the entry form every day. Even though RAXIOM is kn

The Oklahoma-based custom coachbuilder Classic Recreations has introduced its latest vehicle – a 1969 Ford Mustang Mach 1. The customized vehicle is nicknamed “Hitman” and is the first within an exclusive lineup, marking the start of a collaborative project between Ford Motor Company and the Classic Recreations team. “Hitman” features an authentic 1969 Mach 1 body, restored to factory-new condition, before receiving its inter-cooled, twin-turbocharged Ford 32-valve Coyote V8 that generates the wh

Win 1 of 3 $1,500 Gift Cards | Enter Daily Until 2/12/20

PAOLI, Pa. (January 28th, 2020) – Attention Mustang and Challenger owners: here is your chance to take home one of three $1,500 gift cards from AmericanMuscle (AM)! AM’s “Refund Your Build” sweepstakes is an enter-daily giveaway giving entrants multiple chances to enter for the chance to take home one of three AM gift cards.
Here in perfect time to supplement your vehicle build, AM’s “Refund Your Build” sweepstakes grants three final

The famous custom car manufacturer and high-quality billet parts builder Ringbrothers unveiled its latest and most advanced 1969 Mustang, known as “UNKL”, at the 2019 SEMA Show. UNKL will leave the Spring Green, Wisconsin shop with numerous upgrades and changes, including a widened body and an exclusive race-inspired theme. Let’s check out more! UNKL is powered by a 520-cubic-inch Jon Kaase Boss engine that produces a total of 700hp. This power unit is mated to a six-speed Tremec gearbox and a QA1 carbo

SR Performance $5k Giveaway on AmericanMuscle

ATTENTION MUSTANG OWNERS; START YOUR ENGINES! This is your chance to win a $5,000 Mustang parts shopping spree on AmericanMuscle (AM) courtesy of SR Performance. SR Performance is fast becoming one of the most trusted names in the Mustang aftermarket, offering performance parts which surpass factory OEM standards. Even though SR Performance specializes in go-fast parts, prizes are not limited solely to SR’s line, as the finalist can choose from a

MMD $5k Giveaway on AmericanMuscle

ATTENTION DODGE CHALLENGER AND FORD MUSTANG OWNERS: this is your chance to win a $5000 shopping spree on AmericanMuscle (AM)! The MMD $5000 Giveaway is sponsored by none other than Modern Muscle Design (MMD), an industry-leading automotive exterior styling brand combining iconic muscle design with next-generation manufacturing processes to ensure the highest quality standards. Potential participants can enter daily to maximize their chances of winning a site

As it comes to the new Mustang, tuning specialists from Schropp Tuning have already presented their exclusive supercharger kit for the EU 2018 facelift model. What's more, the team has expanded the components that are part of the pack, and now the Mustang can be changed entirely and significantly. Let’s check out more! As a slogan, the team has chosen “Performance Matters”.
And there’s a fine reason for that. For the updated 2018 version, Schropp Tuning includes a boost in performance to up t

Attention all 1979-2019 Mustang and 2008-2019 Challenger owners: AmericanMuscle’s latest giveaway could earn you a $5,000 sitewide shopping spree. Each vehicle model has its own entry form where participants can enter daily for their best chance to win their choice of products from AmericanMuscle.com. AmericanMuscle Mustang’s (AMM) Giveaway is sponsored by ROVOS Wheels—an industry leader in aftermarket Mustang Wheels. Even though ROVOS is strictly a wheel company, the grand finalist is welcom

The Ford team would aid the Juvenile Diabetes Research Foundation by giving away a single unit of the iconic Mustang Bullitt in Kona Blue to a lucky participant who would drive the exclusive vehicle home. Recently revealed, the vehicle showcases tons of incredible features that altogether merge into a single next-gen machine that would impress even the sceptics. The JDRF Mustang Bullitt features prominent gray wheels that perfectly enhance the Kona Blue exterior. It comes with a large 5.0-li

Ford Performance reveals details about one of the fastest drag vehicles that the brand has ever produced – a menacing Mustang that is capable of covering the quarter-mile in the mid-eight-second range. This Mustang Cobra Jet is also a limited-edition vehicle that honors the 50th anniversary of the original drag beast, revealed in 1968. Making its debut this weekend at the 2018 Woodward Dream Cruise, the Cobra Jet will try to become the most powerful and quickest Mustang Cobra Jet ever crea
Sunday, April 25, 2010 Tweety in the slot It takes an unusual person to try to flip a town on an auction website. It takes unusual people, too, to buy this isolated place that's surrounded by cattle ranches, vast stretches of evergreens, grazing land and the occasional sagebrush rolling along Highway 20. On this highway, Wauconda is a pit stop at elevation 3,600 feet, a windy 25 miles east of Tonasket, and 12 miles west of Republic, the nearest towns with actual city streets. OK. It's not a very good setup, because it's off target. It's talking about the people, not the conditions of the sale,* so it doesn't give you the contrast you want for the "but sell it did." But at least the writer put the tense on the auxiliary and not on the main verb, because the writer was not Tweety Bird and did not jump up and down going "They did! They did sold a town on April 12!" Look, we always appreciate having fresh material for class, but seriously. What were you thinking here? * I'm prepared to be a little slack on the meaning of "surrounded" in some cases, but no -- you cannot be surrounded by an occasional sagebrush.
Na+/Ca2+ exchange in catfish retina horizontal cells: regulation of intracellular Ca2+ store function. The role of the Na+/Ca2+ exchanger in intracellular Ca2+ regulation was investigated in freshly dissociated catfish retinal horizontal cells (HC). Ca2+-permeable glutamate receptors and L-type Ca2+ channels as well as inositol 1,4,5-trisphosphate-sensitive and caffeine-sensitive intracellular Ca2+ stores regulate intracellular Ca2+ in these cells. We used the Ca2+-sensitive dye fluo 3 to measure changes in intracellular Ca2+ concentration ([Ca2+]i) under conditions in which Na+/Ca2+ exchange was altered. In addition, the role of the Na+/Ca2+ exchanger in the refilling of the caffeine-sensitive Ca2+ store following caffeine-stimulated Ca2+ release was assessed. Brief applications of caffeine (1-10 s) produced rapid and transient changes in [Ca2+]i. Repeated applications of caffeine produced smaller Ca2+ transients until no further Ca2+ was released. Store refilling occurred within 1-2 min and required extracellular Ca2+. Ouabain-induced increases in intracellular Na+ concentration ([Na+]i) increased both basal free [Ca2+]i and caffeine-stimulated Ca2+ release. Reduction of external Na+ concentration ([Na+]o) further and reversibly increased [Ca2+]i in ouabain-treated HC. This effect was not abolished by the Ca2+ channel blocker nifedipine, suggesting that increases in [Na+]i promote net extracellular Ca2+ influx through a Na+/Ca2+ exchanger. Moreover, when [Na+]o was replaced by Li+, caffeine did not stimulate release of Ca2+ from the caffeine-sensitive store after Ca2+ depletion. The Na+/Ca2+ exchanger inhibitor 2',4'-dimethylbenzamil significantly reduced the caffeine-evoked Ca2+ response 1 and 2 min after store depletion.
This invention relates to a guide drum apparatus for a magnetic tape, and more particularly to a guide drum apparatus for guiding a magnetic tape around the periphery thereof for the purpose of recording and/or reproducing video signals on the tape. A guide drum apparatus according to this invention is preferably used in a video tape recorder or a video tape player in which a field or a frame of the video signal is successively recorded in the longitudinal direction of a magnetic tape, and the video signal is reproduced by magnetic heads mounted on the guide drum apparatus while the tape is moving around the periphery of the guide drum apparatus. In such video equipment, it is very important for the quality of a picture image to maintain the height and the straightness of a video track on the tape accurately while the tape is moving around the guide drum. It is also important that the rotating video heads should scan the video track on the tape accurately for good reproduction of the video signals. Conventionally, it is difficult in such video equipment to provide a simple construction of the guide drum apparatus in which the video heads can be easily adjusted to exactly trace the video track on the tape while the video heads are rotating at a high speed.
Here’s a step-by-step guide on how to analyze a Pinterest account and its competitors.

1. Analyze the overall performance. To see the big picture of where Pinterest stands in your social media strategy, compare your Pinterest account results with your pages on other social networks, and find out which social network tends to be more effective for your brand. Example: Walmart has the highest Amplification (Repins per Pin) and Applause (Likes per Pin) rates among the top 6 retailers on Pinterest. Target totally loses the battle - having the biggest number of Followers but an extensively lower number of engagements among the 6 brands.

3. Analyze your Content. Analyze how effective your content strategy is at this point by looking at the % of ‘Dead posts’ in your account - absolutely ineffective pins with 0 Likes, Comments, and Repins. And see how it diverges from the results of your closest rivals. Example: Kohl’s is the winner with only 1% of absolutely unengaging pins and Target is totally defeated with an outrageous 99% of unengaging pins.

Study your Posting Density effectiveness: how often you and your competitors post and how much engagement everyone gets - taking into account the Number of Followers. If you are posting a lot but not getting enough engagement: you may be spending a lot of resources on content creation in vain, because you are posting too much and followers don’t have enough time to engage with all the pins; or you are missing the best time to post; or your content ideas are not appealing to your fans. Example: Walmart is a true orator: while posting less than 100 pins per month (compared to 250-300 pins/month by the other brands) they get more than 5 times more social interactions than all 5 other brands - taking into account the Number of Followers for each brand.

4. Gather ideas for improvement. Look at the best and worst pins among your competition and your own account to steal some great ideas from rivals and learn from their mistakes.
Example: Walmart's best posts show that cooking recipes and visual step-by-step guides tend to resonate with brands’ fans on Pinterest. Take a look at the Best time to post report to increase the activity within your account. Compare the ‘Current posting’ tab (the time when you post now) with the ‘Engagement’ tab (the best time to post) to see if you are missing the sweet spot.
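The metrics used throughout this guide are simple per-pin ratios. Below is a rough Python sketch (not from the original guide; the field names and sample numbers are invented for illustration) of how Amplification, Applause, the share of "dead" pins, and follower-normalized engagement might be computed from exported pin data:

```python
# Sketch of the account-level metrics described in the guide: Amplification
# (Repins per Pin), Applause (Likes per Pin), the share of "dead" pins
# (zero engagement), and interactions normalized by follower count.
# Field names and the sample data are invented for illustration.

def account_metrics(pins, followers):
    """pins: list of dicts with 'likes', 'comments' and 'repins' counts."""
    n = len(pins)
    likes = sum(p["likes"] for p in pins)
    comments = sum(p["comments"] for p in pins)
    repins = sum(p["repins"] for p in pins)
    dead = sum(1 for p in pins
               if p["likes"] == p["comments"] == p["repins"] == 0)
    return {
        "amplification": repins / n,             # Repins per Pin
        "applause": likes / n,                   # Likes per Pin
        "dead_share": dead / n,                  # 'Dead posts' ratio
        "per_1k_followers": 1000 * (likes + comments + repins) / followers,
    }

pins = [
    {"likes": 12, "comments": 1, "repins": 30},
    {"likes": 0,  "comments": 0, "repins": 0},
    {"likes": 5,  "comments": 0, "repins": 9},
    {"likes": 0,  "comments": 0, "repins": 0},
]
m = account_metrics(pins, followers=20_000)
print(m)   # for this sample: dead_share == 0.5, amplification == 9.75
```

Running the same function over each competitor's exported pins gives directly comparable numbers, which is all the Walmart/Target/Kohl's comparisons above amount to.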
Nonlinear structural mechanics based modeling of carbon nanotube deformation. A nonlinear structural mechanics based approach for modeling the structure and the deformation of single-wall and multiwall carbon nanotubes (CNTs) is presented. Individual tubes are modeled using shell finite elements, where a specific pairing of elastic properties and mechanical thickness of the tube wall is identified to enable successful modeling with shell theory. The effects of van der Waals forces are simulated with special interaction elements. This new CNT modeling approach is verified by comparison with molecular dynamics simulations and high-resolution micrographs available in the literature. The mechanics of wrinkling of multiwall CNTs are studied, demonstrating the role of the multiwalled shell structure and interwall van der Waals interactions in governing buckling and postbuckling behavior.
Q: Google charts are generating a 1px separator

I am using a Google line chart and I am observing an issue. Whenever the chart values are constant, a separator/1px bottom border appears, but when there is a change in the values it does not appear. I want to get rid of the border-bottom.

JS: http://kunal-b2b.000webhostapp.com/scripts/google-charts.js

Working code:

var data = google.visualization.arrayToDataTable([
  ['Element', 'Density', { role: 'style' }],
  ['', 2, 'red'],
  ['', 3, 'green'],
  ['', 2, 'red'],
  ['', 2, 'green'],
  ['', 2, 'green']
]);

Issue code:

var data = google.visualization.arrayToDataTable([
  ['Element', 'Density', { role: 'style' }],
  ['', 2, 'red'],
  ['', 2, 'green'],
  ['', 2, 'red'],
  ['', 2, 'green'],
  ['', 2, 'green']
]);

Refer to URL: http://kunal-b2b.000webhostapp.com/test.html

A: Add the following option...

baselineColor: 'transparent'

see the following working snippet...

google.charts.load("current", {packages: ['corechart']});
google.charts.setOnLoadCallback(drawChart);

function drawChart() {
  var data = google.visualization.arrayToDataTable([
    ['Element', 'Density', { role: 'style' }],
    ['', 2, 'red'],
    ['', 2, 'green'],
    ['', 2, 'red'],
    ['', 2, 'green'],
    ['', 2, 'green']
  ]);

  var options = {
    title: "",
    bar: { groupWidth: '100%' },
    chartArea: { left: 0, width: "100%", top: 0 },
    legend: 'none',
    height: '50',
    hAxis: { title: '' },
    pointSize: 2,
    vAxis: {
      baselineColor: 'transparent',
      gridlines: { color: 'transparent' }
    },
    backgroundColor: 'transparent'
  };

  var chart_div = document.getElementById('chart_div');
  var chart = new google.visualization.LineChart(chart_div);

  // Wait for the chart to finish drawing before calling the getImageURI() method.
  google.visualization.events.addListener(chart, 'ready', function () {
    chart_div.innerHTML = '<img src="' + chart.getImageURI() + '">';
    console.log(chart_div.innerHTML);
  });

  chart.draw(data, options);
}

<script src="https://www.gstatic.com/charts/loader.js"></script>
<div id="chart_div"></div>
Olefins, including ethylene, propylene and butenes, are major building blocks in the chemical process industries. These materials are either recovered from refinery streams or produced by cracking naphtha or LPG. Notwithstanding the success of these processes, there is an incentive to use methane as a raw material because of the large reserves of natural gas throughout the world. From the prior art (Kirk-Othmer, Encyclopedia of Chemical Technology, 4th ed., Vol. 5, p. 1031), methyl chloride, when heated to very high temperatures, is known to couple, giving ethylene and hydrogen chloride. At somewhat lower temperatures, catalytic reactions involving methyl chloride also produce ethylene and other olefins. The literature (U.S. Pat. No. 5,099,084) further discloses a process for the chlorination of methane using hydrogen chloride as the source of chlorine. This process, however, is attended by several drawbacks. Not only is methyl chloride produced, but the higher chlorinated methanes, including methylene chloride, chloroform and carbon tetrachloride, are also generated. In addition, when air is employed in the catalytic reaction, a substantial quantity of gases must be vented, thereby complicating emission control problems and related environmental concerns. On the other hand, the use of pure oxygen hinders the reaction due to the formation of hot spots in the catalyst bed. There consequently exists a need for a process that starts with methane as a raw material and converts it through the formation of methyl chloride into olefins. Such an integrated process must at once be economical to operate and reduce the inefficiencies characterizing conventional processes.
Religious vows, rituals, readings and music should be allowed in civil marriage, study shows

Official guidance requires registrars to exclude anything they understand to be "religious in nature". But the law in this area is in urgent need of reform—at a minimum to clarify what is required, and to eliminate inconsistencies in practice, and ideally to permit greater flexibility in what can be included in such ceremonies. Relaxing this restriction would allow couples to create marriage covenants using words that are most meaningful to them.

With a landmark Supreme Court same-sex marriage case on the horizon, the number of videos depicting same-sex couples marrying multiplied. The band took the dissent, which included bizarre phrases like "Ask the nearest hippie," added some of Scalia's other famously weird dissent quotes, and set it to music. The two men, a real-life item, got engaged on the video's set. JUNE: Jennifer Hudson pulled our heartstrings by showing an estranged father and son mending their relationship just in time for the son's marriage to his male partner in her video for "I Still Love You." We're looking at you, Bey!

And so much of what this was about was just learning to be mindful and thoughtful about the people who are partly responsible for the future of our industry. The top major-label presidents all declined to be interviewed for this story. He had too many quotes that were coming back to haunt him for a trade organization with the goal of furthering the worldwide love of country music. I expect that we can be better, all of us, me included. But when people can look online and see that the CMA, which is the gold standard in our industry, had a discussion around issues of inclusion, and that the guy who was pandering to old belief systems was uninvited to the table, that absolutely moves the needle for the industry — which sets the tone for what the fan base will and will not tolerate. On the flipside, a song such as February channels her love for her wife, who was pregnant with their first child as the marriage debate was taking place. Just as important as this success, however, is the far more personal purpose the songs have served.

Nashville is the 10th-fastest-growing metropolitan area in the U.S. They come to attend Vanderbilt and Belmont universities and stay for a low unemployment rate, thriving restaurant scene, and a culture some compare to Austin in its heyday.
On occasion, classes do have to be cancelled at short notice. You can find out if your chosen class is cancelled by calling 01293 438160 before you leave home. The automated message will let you know if any of our Crawley Wellbeing classes are cancelled for that day.

Take your first steps to a healthier future

Walking is a great form of exercise regardless of how fit or old you are. Walking is especially important for those with specific health problems or with limited fitness. This easy, safe and gentle exercise can help to reduce the risk of cancer, coronary heart disease, strokes, diabetes, high blood pressure, Alzheimer’s disease, osteoporosis, arthritis and stress. Health walks are provided free of charge and all you need are comfortable, laced shoes. Walks are supervised by qualified walk leaders to ensure your safety and enjoyment.

Graded Walks

The types of walks available:

Level 1 (LV1) Walk: 30 minutes in duration, suitable for people who have not walked much before, are looking to be more active, or are returning from injury or illness. They are on flat ground or gentle slopes with mainly firm surfaces and no steps or stiles.

Level 2 (LV2) Walk: suitable for people who are looking to increase their activity levels. They are between 30-60 minutes and may include some moderate slopes, steps, uneven surfaces and possible stiles.

Level 3 (LV3) Walk: for people looking for more challenging walks and increasing their level of physical activity. They are generally 45-90 minutes and may include steeper slopes and uneven surfaces.

We aim to provide up-to-date and accurate information, but we will not accept liability for the accuracy or currency of the information provided by third parties. Listings of events on our website should not necessarily be taken as an endorsement of that event. See our disclaimer statement for further information.
Mekhrubon Sanginov Moves To Vegas To Fulfil Dream

Main News, 13-Jun-18

Super-welterweight prospect Mekhrubon Sanginov (5-0 3 KOs), from Dushanbe, Tajikistan, has moved to Las Vegas, Nevada to continue his dream of becoming a world champion. The highly decorated amateur returns to action June 23, 2018 in Tijuana, Mexico. "I am excited to take my training to a new level working in Las Vegas," said Sanginov. "Las Vegas is the fight capital of the world and I feel that training in Las Vegas will take my career to the next level." Sanginov, who won the WBC Youth Middleweight title in his last bout, has begun training with Justin Gamber, the coach of undefeated super middleweight contender, Caleb "Sweethands" Plant (17-0, 10 KOs). Mekhrubon is looking forward to taking his career to new heights, as he will campaign at super-welterweight moving forward. Sanginov is currently a promotional free agent. "I am excited to be training with Justin Gamber," Sanginov continued. "He is an experienced coach, who is making me into a world champion. The sparring I am getting is top notch sparring as well. I’ll be looking to be signing with a credible promoter in the near future." Sanginov was an outstanding amateur, amassing a record of 105-14, which made him a heavy fan favorite in his native country of Tajikistan. His hometown fans are wanting to see how he progresses as a professional. "I am going to make a statement in the world of boxing, especially in the super-welterweight division, and the world will know my name after my upcoming performances." Mekhrubon concluded.
2017 Registration begins Sept 1st, 2016 and runs through December 31st, 2016. After that time, a $25 late fee will be imposed. However, THERE IS LIMITED AVAILABILITY AS LEVELS (BOYS & GIRLS - U9, U11, U13 & U15)
1. Field of the Invention

The present application relates to an optical transmitter, and in particular to a circuit for driving a semiconductor laser diode (hereafter denoted as LD).

2. Related Background Art

One type of LD driver is known as a shunt-driver, and various prior art references have disclosed shunt-driver circuits. Responding to a driving signal provided to its input, the shunt-driver absorbs a portion of the bias current supplied to the LD; that is, part of the bias current is shunted into the driver, and only the remaining portion flows through the LD connected to the driver output. The LD is thereby modulated by the driving signal. In order to adjust the average power and the extinction ratio of the optical output of the LD, the driver often controls its input bias level. However, an optical transmitter including the shunt-driver leaves an unsolved subject: at start-up, the activation of the bias current for the LD may precede the powering of the driver circuit. Under such a condition, the driver circuit absorbs no bias current; the whole bias current is supplied to the LD, which results in an excess emission of the LD. Also, when stopping the optical transmitter, the power-down of the driver circuit occasionally precedes the cut-off of the bias current to the LD. In such a case, the whole bias current instantaneously flows in the LD and causes the excess emission. The present application provides a technique to prevent such an excess emission of the LD.
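The current bookkeeping described above can be sketched with a few lines of arithmetic (an illustrative model only, not part of the application; the function name and the current values are invented for the example):

```python
# Illustrative sketch of a shunt-driven LD: the driver sinks ("shunts")
# part of the bias current, and whatever it does not absorb flows in the LD.

def ld_current(i_bias_ma: float, i_shunt_ma: float) -> float:
    """Current through the LD (mA): bias minus the portion shunted by the driver."""
    return max(i_bias_ma - i_shunt_ma, 0.0)

# Normal operation: driver powered, modulating between two shunt levels.
i_bias = 60.0                      # mA, from the bias source (assumed value)
mark  = ld_current(i_bias, 10.0)   # driver shunts little -> LD bright ("1")
space = ld_current(i_bias, 45.0)   # driver shunts most   -> LD dim    ("0")

# The fault the application warns about: bias applied while the driver is
# unpowered, so the driver absorbs nothing and the whole bias current
# flows in the LD, causing excess emission.
fault = ld_current(i_bias, 0.0)

assert fault == i_bias             # whole bias current reaches the LD
assert fault > mark > space
```

The same arithmetic applies at power-down: if the driver stops sinking current before the bias source is cut off, the LD momentarily sees the full bias current.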
During an informal meeting at its Amsterdam headquarters, PayPal has announced it will be facilitating In App Purchases in iPhone and Android applications. According to PayPal Benelux Country Manager Dennis van Allermeersch, PayPal has managed to come up with a solution that is acceptable to Apple, with Allermeersch noting: “We have found a way, Apple is OK with it”. Android users will also be able to use PayPal as a payment method in the Android Market, adding an alternative to Google Checkout transactions. At the time of writing there are very few details explaining how PayPal’s surprising new service will work. It seems incredible that PayPal has managed to gain approval from both Apple and Google to provide a payment method that directly competes with proprietary services on both Android and iPhone devices. PayPal will launch its In App payment service in Q2 in the US, Canada, UK, France, Italy, the Netherlands and Australia. We have contacted PayPal for more information, watch this space. Update: Boxcar creator Jonathan George reached out to us to explain that the iPhone In App payment system, which PayPal refers to as its “iPhone Library”, will be used to facilitate payments for physical goods and services, not for virtual purchases as first thought. George was present at last week’s iPadDevCamp in San Jose, where PayPal was advertising its iPhone Library. He speculates that Apple approved the service as it would complement its virtual In App purchase system, providing tools that Apple doesn’t currently provide for iPhone owners. With our iPhone library, you’ll be able to start accepting payments for physical goods and services in a matter of minutes. Whether you’re looking to replicate a storefront you have online or you’re making something unique for the mobile device, this library can help you get paid. You won’t need to collect financial information or make users go to your website to complete a purchase.
With our library, we’ll take care of the payment flow and you can keep focusing on making those great apps. Here’s how it will work. You’ll call a method to tell us how much you want to charge and who the money should go to. We’ll then slide up a view for the consumer to confirm the payment. When it’s complete, we’ll slide this back down. PayPal also linked to a video of Osama Bedier, PayPal’s VP of Product Development, demonstrating the service. We have been unable to confirm whether the service will be launching on Android; we are awaiting a response from PayPal. Jonathan George suggests that iPhone Library is PayPal’s answer to Twitter co-founder Jack Dorsey’s new payment startup, Square. What do you think? Can you see this new service changing the way we buy goods via our smartphones?
Do aspects of the Education Department's waivers go too far? - What Obama can do now on his higher ed plan - Core backlash - Duncan under fire DO ASPECTS OF THE EDUCATION DEPARTMENT’S WAIVERS GO TOO FAR? -- Pro Education’s Caitlin Emma reports: “An unprecedented set of recent Education Department decisions about No Child Left Behind waivers is at the least an overreach and at the very worst, illegal, a chorus of critics say. Last week, the department declared NCLB waivers for Kansas, Oregon and Washington state “high-risk” because each state has more work to do in tying student growth to teacher evaluations – a major requirement for states that want out of the more arduous provisions of the law. And in early August, the department granted waivers to eight districts in California, the first time the department bypassed states on No Child Left Behind flexibility. Observers and analysts say the department’s high-risk waiver decision simply isn’t allowed under federal law. And they say Education Secretary Arne Duncan broke with what he told Congress in February about a preference not to grant district waivers, which these critics think are just plain bad policy. “NCLB is long overdue for reauthorization. With that renewal nowhere in sight, Duncan has granted more than 40 waivers of the law to states, D.C. and the group of California districts, freeing states from requirements such as having all students reading and doing math at grade level by the 2013-14 school year. ‘Why deal with pesky Congress when you get to make all the rules?’ said Michael Petrilli, executive vice president of the conservative Thomas B. Fordham Institute. The department doesn’t have the authority to declare waivers high-risk, he said, and one of the states should call Duncan’s bluff. ‘One of these states should sue,’ Petrilli said. 
‘It’s absolutely nuts.’” Read the full story: http://politi.co/18Oagyo WHAT OBAMA CAN DO NOW ON HIS HIGHER ED PLAN -- The most dramatic parts of the president’s plan to reshape federal financial aid would require Congress to act. But there are still some things he can do as early as today. From my story: “The ambitious marquee financial aid proposal — tying federal dollars to the 'value' colleges offer students — requires Congress to act. But the Education Department could rate colleges based on access, affordability and outcomes, and make those ratings public, using the data to encourage colleges to change. It could offer flexibility in using federal financial aid to speed college completion. And it could push borrowers to consolidate loans and enroll in more generous repayment programs. … Throughout the Obama administration, the Education Department has pursued a far-reaching agenda with or without Congress. It’s waived key provisions of No Child Left Behind for states that met certain criteria, written controversial new rules governing for-profit colleges, and pushed to collect and publicize more education data. All those efforts could serve as templates — and cautionary tales — as the department puts the president’s ideas in practice.” Read the full story: http://politi.co/16kcSk3 HAPPY FRIDAY and welcome to Morning Education, where we’re counting down the hours to the end of another crazy week in education news. I’m heading to California for a half-marathon in wine country and a weeklong reporting trip in the Bay Area. But keep sending news tips, reactions, gossip and more to lnelson@politico.com and @libbyanelson. And follow us at @Morning_Edu and @POLITICOPro. PROGRAMMING NOTE -- Morning Education, and the rest of POLITICO’s morning newsletter lineup, are taking a break next week. We’ll be back in your inbox bright and early with your post-recess education news on Tuesday, Sept. 3. 
CORE BACKLASH – More turmoil on the Common Core front: Opponents in Maine are working to put a measure on the November 2014 ballot that would repeal Common Core. They’ll need to collect nearly 60,000 signatures in the next six months to make that happen. Meanwhile, in Georgia, Republican Gov. Nathan Deal has urged the state Board of Education to “un-adopt” some of the standards. He also wants a comprehensive review that compares Common Core to the standards Georgia used to use. Deal is responding to fierce opposition to Common Core from the right; the Atlanta Journal-Constitution has the story here: http://bit.ly/12ukD9y BIG DATA -- The National School Boards Association says the Education Department is asking way too much of districts in its latest data collection proposal. The Education Department’s office for civil rights wants to collect data about bullying, absenteeism, expulsions and much, much more. The NSBA said some of the feds’ request isn’t relevant to its work, some of the requests are too vague to collect quality information and providing all of the information will take a lot of time and effort on districts’ part. The ED proposal: http://1.usa.gov/12ujnDz The NSBA letter: http://bit.ly/12ujwaa DUNCAN UNDER FIRE – A coalition of parent advocates from cities including Chicago, Baltimore, New York and Philadelphia is planning a series of rallies next week to call for Education Secretary Arne Duncan to resign. The group, known as Journey for Justice, has previously filed civil rights complaints protesting school closings. Members have plenty of passion ... but Duncan has made clear he’s not going anywhere. “Secretary Duncan will continue working to reduce college costs, provide more access to high quality pre-k programs and set high standards for our nation’s children … for years to come,” spokesman Cameron French said. Outspoken ed reform critic Diane Ravitch tells POLITICO she figured as much. She calls Duncan the worst secretary of education in U.S. 
history but adds: “I have not called for his resignation because I knew no one would listen.” OBAMA CALLS BOOKKEEPER WHO STOPPED SCHOOL SHOOTING -- POLITICO’s Nick Gass: “President [Barack] Obama called Antoinette Tuff, the woman who calmly talked down an armed 20-year-old as he walked into an Atlanta-area elementary school, on Thursday to thank her for her ‘courage.’ ‘This afternoon, the president called Antoinette Tuff to thank her for the courage she displayed while talking to a gunman who entered the school where she works earlier this week,’ the White House said in a statement.” http://politi.co/12ul8k2 BREAKING BAD: Inmates who participate in correctional education programs are 43 percent less likely to return to prison than their fellow inmates, according to findings from the largest-ever analysis of correctional education studies. The research, funded by the Justice Department’s Bureau of Justice Assistance, was released Thursday by the RAND Corporation. “As it stands, too many individuals and communities are harmed, rather than helped, by a criminal justice system that does not serve the American people as well as it should,” Attorney General Eric Holder said. “This important research is part of our broader effort to change that.” Correctional education programs are also cost effective, the findings show. An investment of $1 in correctional education can reduce incarceration costs by $4 to $5 during the first three years after release, a time when the likelihood of returning to prison is high. Check out the results: http://1.usa.gov/14HSsln TODAY AND NEXT WEEK IN WASHINGTON -- American Action Forum panel discussion on using student achievement data to evaluate teachers, 9:30 a.m. http://bit.ly/19kUnAr. … Secretary Duncan will join a virtual conversation with Sal Khan, founder of Khan Academy, to talk about the future of education and steps to ensure all Americans have access to a high quality education. http://1.usa.gov/15DR7. 
… Florida education summit, Aug. 26-28, http://bit.ly/16TIlzN. … Advisory Committee on Student Financial Assistance meets Aug. 29. MOVERS AND SHAKERS -- Sonja Brookins Santelises, current chief academic officer in the Baltimore City school district, joins Ed Trust as VP of K-12 policy and practice.
This article was originally published online as an accepted preprint. The "Published Online" date corresponds to the preprint version. You can request a copy of the preprint by emailing the Biopolymers editorial office at biopolymers\@wiley.com

INTRODUCTION {#bip22298-sec-0001}
============

According to the classical structure‐function paradigm, the specific function of a protein is determined by its unique 3D structure, which can be considered as an aperiodic crystal. For a small globular protein, all the information on how to gain the functional 3D structure is encoded in its amino acid sequence.[1](#bip22298-bib-0001){ref-type="ref"}, [2](#bip22298-bib-0002){ref-type="ref"} This hypothesis represents the foundation of the "one sequence‐one structure‐one function" model, which is the cornerstone of modern structural biology.[1](#bip22298-bib-0001){ref-type="ref"}, [2](#bip22298-bib-0002){ref-type="ref"}, [3](#bip22298-bib-0003){ref-type="ref"} The structural rigidity of ordered proteins determines their ability to form crystals, which has allowed the X‐ray‐based determination of the 3D structures of many proteins down to atomic resolution.[4](#bip22298-bib-0004){ref-type="ref"} The protein misfolding phenomenon, in which, due to the effect of environmental factors or because of genetic defects (mutations), a polypeptide chain loses its capability to gain a proper functional 3D structure (i.e., becomes misfolded), and which has multiple detrimental consequences (such as loss of function, gain of toxic function, aggregation, imbalance in proteostasis, potential cell death, etc.) 
that constitute molecular basis of various conformational diseases, seems to support this concept.[5](#bip22298-bib-0005){ref-type="ref"}, [6](#bip22298-bib-0006){ref-type="ref"}, [7](#bip22298-bib-0007){ref-type="ref"} However, the recent revelation of countless examples of intrinsically disordered proteins (IDPs) and hybrid protein containing ordered domains and IDP regions (IDPRs) has cast doubt on the general validity of the structure‐function paradigm and revealed an intriguing route of functional disorder.[8](#bip22298-bib-0008){ref-type="ref"}, [9](#bip22298-bib-0009){ref-type="ref"}, [10](#bip22298-bib-0010){ref-type="ref"}, [11](#bip22298-bib-0011){ref-type="ref"}, [12](#bip22298-bib-0012){ref-type="ref"}, [13](#bip22298-bib-0013){ref-type="ref"}, [14](#bip22298-bib-0014){ref-type="ref"}, [15](#bip22298-bib-0015){ref-type="ref"}, [16](#bip22298-bib-0016){ref-type="ref"}, [17](#bip22298-bib-0017){ref-type="ref"}, [18](#bip22298-bib-0018){ref-type="ref"}, [19](#bip22298-bib-0019){ref-type="ref"}, [20](#bip22298-bib-0020){ref-type="ref"}, [21](#bip22298-bib-0021){ref-type="ref"}, [22](#bip22298-bib-0022){ref-type="ref"}, [23](#bip22298-bib-0023){ref-type="ref"}, [24](#bip22298-bib-0024){ref-type="ref"}, [25](#bip22298-bib-0025){ref-type="ref"}, [26](#bip22298-bib-0026){ref-type="ref"} These findings clearly show that there are at least three globally different forms accessible to a protein in a living cell \[functional and folded, functional and intrinsically disordered (nonfolded), and nonfunctional and misfolded\], and that the propensities to be in one of these forms are encoded in the protein\'s amino acid sequence.[19](#bip22298-bib-0019){ref-type="ref"}, [27](#bip22298-bib-0027){ref-type="ref"}, [28](#bip22298-bib-0028){ref-type="ref"} Therefore, a polypeptide chain is constantly facing a choice between three potential routes, nonfolding, folding, and misfolding, with the last two representing competitive routes to higher structural order (see Figure 
[1](#bip22298-fig-0001){ref-type="fig"}).[19](#bip22298-bib-0019){ref-type="ref"}, [27](#bip22298-bib-0027){ref-type="ref"} For a single‐chain protein, the folding, nonfolding, and misfolding pathways represent a choice made by each individual molecule, whereas unproductive protein aggregation/fibrillation (which frequently follows protein misfolding and is often associated with the pathogenesis of several diseases), functional oligomerization, and the formation of various functional higher‐order complexes are the fate of the ensemble of molecules. ![Fate of a newly synthesized polypeptide chain in a cell.](BIP-99-870-g001){#bip22298-fig-0001} Multiple factors, originating from the peculiarities of the protein amino acid sequence and/or the features of the protein environment, might affect the choice between folding, misfolding, and nonfolding. Under given environmental conditions, the primary selection between folding and nonfolding is determined only by the amino acid composition. For example, an abnormally highly charged polypeptide with low overall hydrophobicity will not fold, giving rise to an extended IDP (also known as a natively unfolded protein), whereas a polypeptide chain with a balanced distribution of polar and hydrophobic residues will choose the folding path under identical conditions. However, some changes in the amino acid sequence (point mutations) may favor the misfolding pathway for both the natively unfolded and the natively folded proteins. Importantly, for a given polypeptide chain, a chosen fate is not final, and the choice may be further modulated by environmental pressure (Figure [1](#bip22298-fig-0001){ref-type="fig"}).[19](#bip22298-bib-0019){ref-type="ref"} For example, IDPs may be forced to fold or misfold via modification of their environment (addition of natural binding partners, changes in the properties of the solvent, etc.), whereas a destabilizing environment may push an ordered protein onto the misfolding route. 
Alternatively, the presence of chaperones may reverse the misfolding route and effectively dissolve small aggregates.[29](#bip22298-bib-0029){ref-type="ref"} Another important point is that the pathological misfolding of extended IDPs to some extent resembles the process of normal protein folding and assembly; that is, it represents a way from a simple, flexible, and disordered conformation (a mostly structure‐less polypeptide chain), via somewhat more ordered, partially folded intermediate(s), to a complex and rigid structure, for example, an amyloid fibril. However, the pathological misfolding of a rigid globular protein involves a step of transient disordering and formation of a partially unfolded intermediate, which is followed by a subsequent increase in order originating from the formation of specific protein aggregates. Besides the considerations discussed above, these recent developments have re‐emphasized the biological importance of under‐folded protein conformations (or partially folded protein species). Such under‐folded proteins do not have unique well‐defined 3D structures, existing instead as collapsed or extended, dynamically mobile conformational ensembles. In the classical structure‐to‐function paradigm, under‐folded entities without unique structure were mostly of academic interest, since they would typically be found at the end of denaturation processes under highly nonphysiological conditions or as transiently populated folding intermediates. However, in the new view of the correlations between protein structure, function, and dysfunction, one can find important implementations of under‐folded states for each of the major protein forms: functional and folded, nonfunctional and misfolded, and functional and intrinsically disordered. Here, under‐folded protein states serve as important folding intermediates of ordered proteins, as functional states of IDPs and IDPRs, or as pathology triggers of some misfolded proteins. 
Based on their origin, conformational ensembles of under‐folded proteins can be classified as transient (folding and misfolding intermediates) and permanent (IDPs and stable misfolded proteins; see Figure [2](#bip22298-fig-0002){ref-type="fig"}). Permanently under‐folded proteins can further be split into intentionally designed (IDPs and IDPRs) and unintentionally designed (misfolded proteins). These different categories of under‐foldedness are differently encoded in protein amino acid sequences and play different roles in protein life. Sections below contain brief discussions of various roles of conformational ensembles in protein folding, misfolding, and nonfolding. ![Diversity of conformational ensembles of under‐folded proteins.](BIP-99-870-g002){#bip22298-fig-0002} CONFORMATIONAL ENSEMBLES AND PROTEIN FOLDING {#bip22298-sec-0002} ============================================ The ability of ordered proteins to adopt their functional highly structured states in the intracellular environment during/after biosynthesis on the ribosome is one of the most remarkable evolutionary achievements of biology. In this view, protein folding is taken as crucial continuation of protein biosynthesis process, where the information encoded in the DNA/mRNA nucleotide sequence is read step‐by‐step, and the corresponding amino acids are gathered one after another into the polypeptide chain that eventually folds into unique functional structure. In other words, during these processes, the one‐dimensional information encoded in the DNA nucleotide sequence is sequentially transformed into the one‐dimensional information of the protein amino acid sequence, which codes for the peculiarities of protein folding, that is, a specific way of gaining unique three‐dimensional structure. As the interactions between remote amino acid residues play a crucial role in protein folding, this process obviously deviates from the linear information transduction. 
Therefore, protein folding can be regarded as a second part of the genetic code, as the protein amino acid sequence contains the information about its functional 3D structure. Many proteins have rigid globular structures in aqueous solutions and are functional only in this state. The native state of these proteins is a unique conformation, which is entropically unfavorable since it imposes significant restrictions on the conformational freedom. However, the unfolded state of a polypeptide chain is entropically favorable, representing a dynamic ensemble of a large number of conformations originating from the main‐chain rotational isomerization around the *φ* and *ψ* angles. Therefore, the ability of a given polypeptide chain to fold into a compact state is determined by its capacity to form numerous intramolecular contacts of different physical nature that compensate for the free energy increase due to the decrease in the entropy component.[30](#bip22298-bib-0030){ref-type="ref"} The first direct evidence that all the information necessary for a given polypeptide chain to fold into a unique tertiary structure is encoded in the protein\'s amino acid sequence was obtained by Anfinsen\'s group,[1](#bip22298-bib-0001){ref-type="ref"} who showed that reduced and urea‐denatured ribonuclease A was able to completely restore its native structure and functional state after the removal of the denaturant and the reducing agent. Later, the capability to regain the native structure in vitro was demonstrated for a variety of proteins. 
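The entropic cost mentioned above can be made concrete with a Levinthal-style count of the unfolded ensemble (a back-of-the-envelope illustration; the values *n* = 3 rotamers per residue and *N* = 100 residues are assumed for the example, not figures from this article):

```latex
% Size of the unfolded ensemble from backbone rotational isomerization,
% and the conformational entropy lost on folding to a single conformation:
\Omega \approx n^{N},
\qquad
\Delta S_{\mathrm{conf}} \approx -k_{B}\ln\Omega = -N\,k_{B}\ln n .
```

With these illustrative numbers, $\Omega \approx 3^{100} \approx 5\times10^{47}$ conformations, so a blind random search at even $10^{13}$ conformations per second would take on the order of $5\times10^{34}$ s (roughly $10^{27}$ years). This is the entropic penalty $T|\Delta S_{\mathrm{conf}}| \approx N k_{B} T \ln n$ that the intramolecular contacts must outweigh for folding to be favorable, and it is why directed, multistage folding mechanisms such as the framework model discussed below were proposed.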
In recent years, our understanding of the mechanisms of the protein self‐organization process has increased dramatically.[19](#bip22298-bib-0019){ref-type="ref"}, [27](#bip22298-bib-0027){ref-type="ref"}, [31](#bip22298-bib-0031){ref-type="ref"}, [32](#bip22298-bib-0032){ref-type="ref"}, [33](#bip22298-bib-0033){ref-type="ref"}, [34](#bip22298-bib-0034){ref-type="ref"}, [35](#bip22298-bib-0035){ref-type="ref"}, [36](#bip22298-bib-0036){ref-type="ref"} It is recognized now that only some amino acid residues are crucial for protein folding. Therefore, proteins with very low sequence identity/homology can have similar structures, whereas a single amino acid replacement can significantly affect the rate of protein folding, or in some extreme cases, can completely halt the correct protein folding.[27](#bip22298-bib-0027){ref-type="ref"} For a very long time, one of the most essential questions in protein science was how an unstructured polypeptide folds into a unique native protein with specific biological function in a reasonable period of time despite the fact that there is an astronomically large number of possible conformational states.[37](#bip22298-bib-0037){ref-type="ref"} To resolve this problem, a framework model of protein folding (also known as sequential mechanism of protein folding) was proposed by Oleg Ptitsyn in 1973 (see Figure [3](#bip22298-fig-0003){ref-type="fig"}).[38](#bip22298-bib-0038){ref-type="ref"} According to this model, the folding of a globular protein from its unfolded state represents a multistage process accompanied by the formation of several folding intermediates (each is represented as specific conformational ensemble) with the increasing level of structural complexity. The first stage results in the formation of the fluctuating secondary structure elements. 
These elements then collapse to form a compact but highly dynamic intermediate with native‐like secondary structure, where the backbone movements are mostly restricted but the mobility of the side chains is still high. At the final stage, the unique 3D structure is formed by restricting the side‐chain mobility.[38](#bip22298-bib-0038){ref-type="ref"} Therefore, partially folded species with increasing degrees of structural complexity were proposed to serve as universal folding intermediates. In due time, one of these folding intermediates, later named the 'molten globule (MG) state',[39](#bip22298-bib-0039){ref-type="ref"} was found in a test tube.[40](#bip22298-bib-0040){ref-type="ref"} Other partially folded intermediates \[e.g., the premolten globule (pre‐MG) and the highly ordered MG\] were later found.[19](#bip22298-bib-0019){ref-type="ref"} ![An oversimplified representation of a protein folding landscape.](BIP-99-870-g003){#bip22298-fig-0003} According to the current view, protein folding is a more complex process, where the transition from the unfolded state to the uniquely folded native state can be realized via different pathways that are determined by the protein\'s energy landscape.[41](#bip22298-bib-0041){ref-type="ref"}, [42](#bip22298-bib-0042){ref-type="ref"} This complex landscape shows the dependence of the free energy on all the coordinates determining the protein conformation. Since the free energy of the unfolded polypeptide chain represents a large "hilly plateau" describing the dynamic ensemble of a large number of conformations, and since the number of conformational states accessible to the polypeptide chain is reduced on approaching the native state, the resulting energetic surface is known as the "energy funnel" model. 
The conformational ensemble of unfolded conformations is separated from the entrance to the folding funnel by high energetic barrier(s) corresponding to the transitional state(s).[30](#bip22298-bib-0030){ref-type="ref"} This barrier is of great importance for the proper protein functioning, as its existence guarantees the structural identity of all the native protein molecules. The ability of native globular proteins to form crystals is the major proof of this hypothesis.[27](#bip22298-bib-0027){ref-type="ref"} It is now generally accepted that protein folding involves discrete pathways with distinct intermediate steps. In this view, the role of under‐folded protein species is in helping proteins to fold. As a result, the experimental and theoretical studies on protein folding were traditionally centered on the search for and structural characterization of partially folded intermediates as a route to defining pathways of protein folding.[43](#bip22298-bib-0043){ref-type="ref"} There is considerable support for the idea that equilibrium partially folded conformations of a protein molecule can be good models for transient kinetic intermediates in protein folding.[19](#bip22298-bib-0019){ref-type="ref"}, [44](#bip22298-bib-0044){ref-type="ref"}, [45](#bip22298-bib-0045){ref-type="ref"}, [46](#bip22298-bib-0046){ref-type="ref"}, [47](#bip22298-bib-0047){ref-type="ref"}, [48](#bip22298-bib-0048){ref-type="ref"}, [49](#bip22298-bib-0049){ref-type="ref"}, [50](#bip22298-bib-0050){ref-type="ref"}, [51](#bip22298-bib-0051){ref-type="ref"}, [52](#bip22298-bib-0052){ref-type="ref"}, [53](#bip22298-bib-0053){ref-type="ref"}, [54](#bip22298-bib-0054){ref-type="ref"}, [55](#bip22298-bib-0055){ref-type="ref"}, [56](#bip22298-bib-0056){ref-type="ref"}, [57](#bip22298-bib-0057){ref-type="ref"}, [58](#bip22298-bib-0058){ref-type="ref"}, [59](#bip22298-bib-0059){ref-type="ref"}, [60](#bip22298-bib-0060){ref-type="ref"}, [61](#bip22298-bib-0061){ref-type="ref"}, 
[62](#bip22298-bib-0062){ref-type="ref"}, [63](#bip22298-bib-0063){ref-type="ref"} Therefore, the discovery and structural characterization of such equilibrium conformational ensembles is believed to considerably facilitate the description of the structural properties of short‐lived kinetic (transient) intermediates. The fact that the partially folded forms at moderate guanidinium hydrochloride (GdmHCl) or urea concentrations usually can be obtained only in a mixture with the native and/or unfolded forms, whereas the acid forms of globular proteins can be studied in the pure state, makes partially folded conformations induced by extremely low (or extremely high) pH values very attractive targets for such structural studies.[40](#bip22298-bib-0040){ref-type="ref"} A close look at Figure [3](#bip22298-fig-0003){ref-type="fig"}, which shows an oversimplified framework model (which can be taken as one of the vertical slices through the folding funnel), indicates that the protein folding process can be considered as a set of conformational transitions between several intermediate states. One should keep in mind, however, that in this context the term "intermediate state" has a very loose meaning, since none of these partially folded forms represents a specific state with a unique structure; rather, each of these forms should be considered as a dynamic conformational ensemble. Therefore, the majority of experimental techniques used for the structural characterization of these ensembles provide observables that are by definition statistical averages over the ensemble of conformations accessible to a protein. Since proteins are evolutionarily edited random polypeptides,[52](#bip22298-bib-0052){ref-type="ref"}, [64](#bip22298-bib-0064){ref-type="ref"}, [65](#bip22298-bib-0065){ref-type="ref"} the understanding of the common physicochemical principles underlying the protein folding process relies on the delineation of the common polymer roots and their impact on protein structures. 
The traditional way of such an analysis is a determination of the correlation between different physical characteristics of a polymer (e.g., its molecular density) and its length. Implementation of such analysis to a set of proteins in a variety of conformational states established a correlation between the hydrodynamic dimensions and the length of polypeptide chain.[17](#bip22298-bib-0017){ref-type="ref"}, [18](#bip22298-bib-0018){ref-type="ref"}, [19](#bip22298-bib-0019){ref-type="ref"}, [27](#bip22298-bib-0027){ref-type="ref"}, [61](#bip22298-bib-0061){ref-type="ref"}, [62](#bip22298-bib-0062){ref-type="ref"} The analyzed protein categories included ordered globular proteins with nearly spherical shapes; equilibrium MG; pre‐MG; denaturant‐unfolded proteins without crosslinks in the presence of strong denaturants (8*M* urea or 6*M* GdmHCl); and extended IDPs (native coils and native pre‐MG). In all the cases, a correlation between the apparent molecular density (determined as *ρ = M*/(4π*R* ~S~ ^3^/3), where *M* is a molecular mass and *R* ~S~ is a hydrodynamic radius of a given protein) and molecular mass was observed that gave rise to a set of the standard equations, *R* ~S~ *= K* ~h~ *M^ε^* (here, *K* ~h~ is a constant related to the persistence length and *ε* is a scaling factor that depends on solvent quality), for a number of conformational states of a polypeptide chain.[17](#bip22298-bib-0017){ref-type="ref"}, [18](#bip22298-bib-0018){ref-type="ref"}, [19](#bip22298-bib-0019){ref-type="ref"}, [61](#bip22298-bib-0061){ref-type="ref"} Therefore, for a given conformational state, parameters *K* ~h~ and *ε* were invariable over a wide range of chain lengths suggesting that the effective protein dimensions in a variety of conformational states can be predicted based on the chain length with an accuracy of 10%.[17](#bip22298-bib-0017){ref-type="ref"}, [18](#bip22298-bib-0018){ref-type="ref"}, [19](#bip22298-bib-0019){ref-type="ref"}, 
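As a numerical illustration of the standard equations above (a sketch only: the (*K*~h~, *ε*) pairs below are assumed placeholder values for two conformational states, not the fitted constants from the cited studies):

```python
# Illustrative sketch of the scaling relation R_S = K_h * M**eps and the
# apparent density rho = M / (4*pi*R_S**3 / 3) from the text. The (K_h, eps)
# pairs are assumed placeholders; the point is that, for a given conformational
# state, the chain dimensions follow mass as a simple power law.
import math

def hydrodynamic_radius(mass: float, k_h: float, eps: float) -> float:
    """R_S = K_h * M**eps for one conformational state (arbitrary units)."""
    return k_h * mass ** eps

def apparent_density(mass: float, r_s: float) -> float:
    """rho = M / (4*pi*R_S**3 / 3), the apparent molecular density."""
    return mass / (4.0 * math.pi * r_s ** 3 / 3.0)

M = 25_000.0  # a ~25 kDa chain (assumed example)

# Assumed (K_h, eps) placeholders: a compact (native-like) state scales with a
# small exponent (near 1/3), a denaturant-unfolded coil with a larger one.
r_native   = hydrodynamic_radius(M, k_h=4.75, eps=0.29)
r_unfolded = hydrodynamic_radius(M, k_h=2.21, eps=0.52)

# For a given state, dimensions are predicted from chain size alone; across
# states, unfolding expands the chain and lowers its apparent density.
assert r_unfolded > r_native
assert apparent_density(M, r_native) > apparent_density(M, r_unfolded)
```

This mirrors the article's observation that, within each conformational state, *K*~h~ and *ε* are invariant over a wide range of chain lengths, so proteins behave as polymer homologues regardless of sequence or function.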
[61](#bip22298-bib-0061){ref-type="ref"} Thus, regardless of the differences in their amino acid sequences and biological functions, protein molecules behave as polymer homologues in a number of conformational states. DIVERSITY OF CONFORMATIONAL ENSEMBLES INVOLVED IN PROTEIN FOLDING PROCESS {#bip22298-sec-0003} ========================================================================= The unique 3D structure of a globular protein is stabilized by a set of noncovalent interactions of different natures, including hydrogen bonds, hydrophobic interactions, electrostatic interactions, van der Waals interactions, etc. Complete (or almost complete) disruption of all these interactions can be achieved in concentrated solutions of strong denaturants (such as urea or GdmHCl). Here, an initially folded and highly ordered molecule of a globular protein unfolds, that is, transforms into a highly disordered random coil‐like conformation.[66](#bip22298-bib-0066){ref-type="ref"}, [67](#bip22298-bib-0067){ref-type="ref"}, [68](#bip22298-bib-0068){ref-type="ref"}, [69](#bip22298-bib-0069){ref-type="ref"} However, environmental changes can weaken (or eliminate) only some noncovalent interactions, whereas the remaining interactions may stay unchanged (or even be intensified). Very often, a globular protein loses its biological activity under such conditions, thus becoming denatured.[69](#bip22298-bib-0069){ref-type="ref"} It is important to remember that denaturation is not necessarily accompanied by the unfolding of a protein; rather, it might result in the appearance of various partially folded conformations with properties intermediate between those of the folded (ordered) and the completely unfolded states. 
In fact, globular proteins exist in at least four different equilibrium conformations: folded (ordered), MG, pre‐MG, and unfolded.[19](#bip22298-bib-0019){ref-type="ref"}, [51](#bip22298-bib-0051){ref-type="ref"}, [52](#bip22298-bib-0052){ref-type="ref"}, [58](#bip22298-bib-0058){ref-type="ref"}, [59](#bip22298-bib-0059){ref-type="ref"}, [60](#bip22298-bib-0060){ref-type="ref"}, [62](#bip22298-bib-0062){ref-type="ref"}, [70](#bip22298-bib-0070){ref-type="ref"} The ability of a globular protein to adopt different stable partially folded conformations, each of which represents a specific conformational ensemble, is believed to be an intrinsic property of the polypeptide chain. Conformational Ensembles of Unfolded States {#bip22298-sec-0004} ------------------------------------------- The unfolded state represents the starting point of the protein folding reaction. It is an ensemble of rapidly interchanging conformations, some extended and some more compact. It is possible that stabilizing interactions, when they occur, induce a more populated ensemble of chain conformations; if such structures exist in the unfolded state, they would probably guide the folding process and function as folding‐initiation sites.[71](#bip22298-bib-0071){ref-type="ref"} In fact, theoretical studies revealed that even small preferences for native‐like interactions in the unfolded state would substantially increase the probability of reaching the native state. 
Coming back to the polymer roots, under conditions known as "ideal" or "θ‐conditions," that is, when the attractions between macromolecular segments are balanced by those with the solvent, the density of macromolecules is expected to follow *M* ^−0.5^, so that *R* ~S~ *= lN* ^0.5^, where *l* is the statistical chain length and *N* is the number of amino acid residues in a protein.[69](#bip22298-bib-0069){ref-type="ref"}, [72](#bip22298-bib-0072){ref-type="ref"}, [73](#bip22298-bib-0073){ref-type="ref"} Here, the polymer is assumed to be in a random coil conformation, and its conformational behavior can be described with Gaussian statistics.[72](#bip22298-bib-0072){ref-type="ref"} Further, in a good solvent, the macromolecular coil is expanded due to the prevalence of repulsive interactions between polymer segments, and the molecular dimensions change more significantly with increasing chain length, *R* ~S~ *= (l^2^B)* ^0.2^ *N* ^0.6^, where *B* is the second virial coefficient that characterizes the pair collisions of the monomer units of the polymer chain. Based on the above‐mentioned analysis of protein molecular density in various conformations as a function of chain length, it has been concluded that the "fully" unfolded states induced by GdmHCl or urea yield *ε =* 0.54 and 0.52, respectively.[69](#bip22298-bib-0069){ref-type="ref"}, [72](#bip22298-bib-0072){ref-type="ref"}, [73](#bip22298-bib-0073){ref-type="ref"} Given that these *ε*‐values are \<0.6 and close to 0.5, it appears that unfolded polypeptide chains under these conditions exhibit the features of macromolecular coils in θ‐solvents. 
Recently, this conclusion was further supported by examining the correlation between the denatured‐state radii of gyration, *R* ~g~, of 26 proteins and their polypeptide lengths, which ranged from 16 to 549 residues.[74](#bip22298-bib-0074){ref-type="ref"} This analysis revealed that the dimensions of most chemically denatured proteins scale with polypeptide length via a power‐law relationship with a best‐fit exponent of 0.598 ± 0.028, coinciding closely with the 0.588 predicted for an excluded‐volume random coil. Based on these observations, it was concluded that the mean dimensions of chemically denatured proteins are effectively indistinguishable from the mean dimensions of a random‐coil ensemble.[74](#bip22298-bib-0074){ref-type="ref"} However, the hydrodynamic dimensions that Tanford measured for unfolded proteins[69](#bip22298-bib-0069){ref-type="ref"} correspond better to a model in which 20% of the residues are located in collapsed structures.[75](#bip22298-bib-0075){ref-type="ref"} In agreement with these observations, a more recent analysis showed that the presence of ∼20% α‐helix generates an unfolded state with the experimentally observed radii of gyration.[76](#bip22298-bib-0076){ref-type="ref"} Furthermore, it has been pointed out that the inclusion of "knots" of collapsed structure into the random coil model would not greatly influence the hydrodynamic dimensions of a coil.[77](#bip22298-bib-0077){ref-type="ref"} In fact, analysis of model systems in which several proteins of known structure were used to computationally generate disordered conformers, by varying backbone torsion angles at random for ∼8% of the residues while keeping the remaining ∼92% of the residues fixed in their native conformations, revealed that despite this extreme degree of imposed internal structure, the analyzed conformational ensembles had end‐to‐end distances and mean radii of gyration that agree well with random‐coil 
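The power‐law analysis behind the 0.598 ± 0.028 exponent is essentially a least‐squares fit in log‐log space. A minimal sketch on synthetic data (the prefactor of 2.0 Å and the 5% noise level are arbitrary choices for illustration, not values from the cited study):

```python
import math
import random

def fit_power_law(lengths, radii):
    """Least-squares fit of R = R0 * N**nu in log-log coordinates; returns (R0, nu)."""
    xs = [math.log(n) for n in lengths]
    ys = [math.log(r) for r in radii]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    nu = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    r0 = math.exp(my - nu * mx)
    return r0, nu

# Synthetic "denatured-state" data: R_g = 2.0 * N**0.598 with 5% multiplicative noise.
random.seed(0)
lengths = [16, 30, 60, 100, 200, 350, 549]
radii = [2.0 * n ** 0.598 * (1.0 + random.uniform(-0.05, 0.05)) for n in lengths]

r0, nu = fit_power_law(lengths, radii)
print(f"fitted exponent nu ~ {nu:.3f}")  # close to the generating value 0.598
```

The fit recovers the generating exponent to within a few percent even with noisy input, which is why the scaling exponent is a robust diagnostic of chain statistics.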
expectations.[77](#bip22298-bib-0077){ref-type="ref"} Such theoretical evaluations are supported by rich experimental observations, where noticeable residual structure is seen in unfolded proteins even under the most severe denaturing conditions, such as high concentrations of strong denaturants. Among the illustrative examples of well‐characterized unfolded globular proteins with considerable residual structure are staphylococcal nuclease,[78](#bip22298-bib-0078){ref-type="ref"}, [79](#bip22298-bib-0079){ref-type="ref"}, [80](#bip22298-bib-0080){ref-type="ref"}, [81](#bip22298-bib-0081){ref-type="ref"}, [82](#bip22298-bib-0082){ref-type="ref"}, [83](#bip22298-bib-0083){ref-type="ref"}, [84](#bip22298-bib-0084){ref-type="ref"}, [85](#bip22298-bib-0085){ref-type="ref"} the α‐subunit of tryptophan synthetase,[86](#bip22298-bib-0086){ref-type="ref"}, [87](#bip22298-bib-0087){ref-type="ref"} fragment of the protein 434,[88](#bip22298-bib-0088){ref-type="ref"}, [89](#bip22298-bib-0089){ref-type="ref"}, [90](#bip22298-bib-0090){ref-type="ref"} human fibroblast growth factor 1,[91](#bip22298-bib-0091){ref-type="ref"} the SH3 domain,[92](#bip22298-bib-0092){ref-type="ref"}, [93](#bip22298-bib-0093){ref-type="ref"} barstar,[94](#bip22298-bib-0094){ref-type="ref"} barnase,[95](#bip22298-bib-0095){ref-type="ref"} the WW‐domain,[96](#bip22298-bib-0096){ref-type="ref"} BPTI,[97](#bip22298-bib-0097){ref-type="ref"}, [98](#bip22298-bib-0098){ref-type="ref"} chymotrypsin inhibitor 2,[99](#bip22298-bib-0099){ref-type="ref"} human carbonic anhydrase II[100](#bip22298-bib-0100){ref-type="ref"}, [101](#bip22298-bib-0101){ref-type="ref"}, [102](#bip22298-bib-0102){ref-type="ref"} apomyoglobin,[103](#bip22298-bib-0103){ref-type="ref"} lysozyme,[104](#bip22298-bib-0104){ref-type="ref"} photoactive yellow protein,[105](#bip22298-bib-0105){ref-type="ref"} the *Escherichia coli* outer membrane protein X,[106](#bip22298-bib-0106){ref-type="ref"} the N‐terminal domain of enzyme I from 
*Streptomyces coelicolor*, [107](#bip22298-bib-0107){ref-type="ref"} bovine and human α‐lactalbumins,[108](#bip22298-bib-0108){ref-type="ref"} protein eglin C,[109](#bip22298-bib-0109){ref-type="ref"} intestinal fatty acid binding protein,[110](#bip22298-bib-0110){ref-type="ref"} yeast alcohol dehydrogenase,[111](#bip22298-bib-0111){ref-type="ref"} HIV‐1 protease,[112](#bip22298-bib-0112){ref-type="ref"} "Trp‐cage" miniprotein TC5b,[113](#bip22298-bib-0113){ref-type="ref"} *Bacillus licheniformis* β‐lactamase,[114](#bip22298-bib-0114){ref-type="ref"} hyperthermophilic ribosomal protein S16,[115](#bip22298-bib-0115){ref-type="ref"} thermophilic ribonucleases H,[116](#bip22298-bib-0116){ref-type="ref"} and ubiquitin,[117](#bip22298-bib-0117){ref-type="ref"} among many other examples. Therefore, the existence of profound residual structure might be a general characteristic of the unfolded polypeptide chain under aggressively denaturing conditions.[118](#bip22298-bib-0118){ref-type="ref"}, [119](#bip22298-bib-0119){ref-type="ref"}, [120](#bip22298-bib-0120){ref-type="ref"}, [121](#bip22298-bib-0121){ref-type="ref"}, [122](#bip22298-bib-0122){ref-type="ref"} Thus, unfolded states of proteins exhibit behavior that is not random coil in nature, which is not surprising considering the complexity of polypeptides. 
In fact, it has been pointed out that a total lack of interresidue interactions would be unexpected in the unfolded state, because certain (e.g., hydrophobic) side chains have a noticeable affinity for each other in an unfolded protein,[102](#bip22298-bib-0102){ref-type="ref"}, [123](#bip22298-bib-0123){ref-type="ref"} and some secondary structure elements could be expected within an unfolded protein due to the preferential distribution of φ and ψ angles.[124](#bip22298-bib-0124){ref-type="ref"}, [125](#bip22298-bib-0125){ref-type="ref"}, [126](#bip22298-bib-0126){ref-type="ref"} All this considerably restricts the conformational space of the unfolded polypeptide chain. Thus, it seems most likely that polypeptide chains under "strong denaturing conditions" are still below the critical point (bad solvent conditions) and can be easily transformed to the compact state. Conformational Ensembles of Nonglobular Pre‐MG States {#bip22298-sec-0005} ----------------------------------------------------- When the thermodynamic quality of the solvent worsens, the binary interactions between the monomers become mainly attractive.[61](#bip22298-bib-0061){ref-type="ref"} As a result, the probability of many‐body interactions increases, which leads to an increase in the molecular density and partial collapse of the polymer chain. 
For an ordered protein originally unfolded by high concentrations of strong denaturants, this typically corresponds to the transfer to lower denaturant concentrations, and under appropriate conditions many globular proteins can form a specific compact partially folded conformation, the pre‐MG state.[53](#bip22298-bib-0053){ref-type="ref"}, [58](#bip22298-bib-0058){ref-type="ref"}, [59](#bip22298-bib-0059){ref-type="ref"}, [60](#bip22298-bib-0060){ref-type="ref"}, [61](#bip22298-bib-0061){ref-type="ref"}, [62](#bip22298-bib-0062){ref-type="ref"}, [127](#bip22298-bib-0127){ref-type="ref"}, [128](#bip22298-bib-0128){ref-type="ref"}, [129](#bip22298-bib-0129){ref-type="ref"}, [130](#bip22298-bib-0130){ref-type="ref"}, [131](#bip22298-bib-0131){ref-type="ref"}, [132](#bip22298-bib-0132){ref-type="ref"}, [133](#bip22298-bib-0133){ref-type="ref"} This conformational ensemble is characterized by considerable secondary structure, although much less pronounced than that of the MG. The pre‐MG state is considerably less compact than the MG, but it is still more compact than a random coil of similar molecular mass. Individual molecules within the pre‐MG conformational ensemble contain some hydrophobic clusters, as evidenced by their increased propensity to interact with hydrophobic fluorescent probes, such as 8‐anilinonaphthalene‐1‐sulfonate (ANS). Analysis of hydrodynamic data reveals that the molecular dimensions of pre‐MGs follow the chain length as *R* ~S~ *=* 0.6*M* ^0.40^.[17](#bip22298-bib-0017){ref-type="ref"}, [18](#bip22298-bib-0018){ref-type="ref"}, [19](#bip22298-bib-0019){ref-type="ref"}, [61](#bip22298-bib-0061){ref-type="ref"}, [62](#bip22298-bib-0062){ref-type="ref"} The fact that *ε =* 0.40 for this state is noticeably smaller than the value of 0.50 expected for a random coil indicates bad solvent conditions and suggests that this conformation behaves as a squeezed macromolecular coil. 
Furthermore, the pre‐exponential term *K* ~h~ of 0.6 observed for the pre‐MGs is significantly larger than the *K* ~h~ values retrieved for the unfolded species (typically in a range of 0.2--0.3), suggesting the existence of multi‐body interactions inside the polypeptide chain.[17](#bip22298-bib-0017){ref-type="ref"}, [18](#bip22298-bib-0018){ref-type="ref"}, [19](#bip22298-bib-0019){ref-type="ref"}, [61](#bip22298-bib-0061){ref-type="ref"}, [62](#bip22298-bib-0062){ref-type="ref"} Consequently, any small variation in the protein environment, that is, a change in the thermodynamic quality of the solvent, or a change induced by proton transfer, interaction with a ligand, fluctuations of temperature, etc., can trigger the transition of the compact protein molecule to the more rigid MG or native states.[72](#bip22298-bib-0072){ref-type="ref"} Conformational Ensembles of MG States {#bip22298-sec-0006} ------------------------------------- The MG state of a globular protein is typically described as a conformational ensemble of compact denatured molecules that have no (or only a trace of) rigid tertiary structure but possess well‐developed secondary structure. 
Small‐angle X‐ray scattering analysis shows that the MG has a globular structure typical of folded globular proteins.[58](#bip22298-bib-0058){ref-type="ref"}, [134](#bip22298-bib-0134){ref-type="ref"}, [135](#bip22298-bib-0135){ref-type="ref"}, [136](#bip22298-bib-0136){ref-type="ref"}, [137](#bip22298-bib-0137){ref-type="ref"} 2D‐NMR coupled with hydrogen‐deuterium exchange shows that the MG is characterized not only by native‐like secondary structure content, but also by a native‐like folding pattern.[138](#bip22298-bib-0138){ref-type="ref"}, [139](#bip22298-bib-0139){ref-type="ref"}, [140](#bip22298-bib-0140){ref-type="ref"}, [141](#bip22298-bib-0141){ref-type="ref"}, [142](#bip22298-bib-0142){ref-type="ref"}, [143](#bip22298-bib-0143){ref-type="ref"}, [144](#bip22298-bib-0144){ref-type="ref"}, [145](#bip22298-bib-0145){ref-type="ref"} A considerable increase in the accessibility of a protein molecule to proteases is noted as a specific property of the MG.[146](#bip22298-bib-0146){ref-type="ref"}, [147](#bip22298-bib-0147){ref-type="ref"} The transformation into this intermediate state is accompanied by a considerable increase in the affinity of a protein molecule for ANS, and this behavior is a characteristic property of the MGs.[148](#bip22298-bib-0148){ref-type="ref"}, [149](#bip22298-bib-0149){ref-type="ref"} Finally, on average, the hydrodynamic radius of the MG is increased by no more than 15% compared with that of the folded state, which corresponds to a volume increase of ∼50%.[150](#bip22298-bib-0150){ref-type="ref"} The theory of the "coil‐globule" transition predicts that the overall dimension of a polymer globule, *R* ~S~, changes with the chain length, *N*, as *R* ~S~ *∼* (*C*/(−*B*))^1/3^ *N* ^1/3^. 
Here, *B* and *C* are the second and third virial coefficients, which characterize the pair collisions and three‐body interactions of the monomer units of the polymer chain.[72](#bip22298-bib-0072){ref-type="ref"} The density of the globules is expected to show no changes with increasing chain length, since *ρ = N/R* ^3^ *=* (−*B/C*). These results are in excellent agreement with the data obtained for the MG conformational ensembles of proteins, for which the parameter *K* ~h~ has a value of 0.9 (reflecting the larger probability of three‐body interactions within the members of this conformational ensemble, defined by the compact but flexible nature of the MGs) and *ε* equals 0.33.[17](#bip22298-bib-0017){ref-type="ref"}, [18](#bip22298-bib-0018){ref-type="ref"}, [19](#bip22298-bib-0019){ref-type="ref"}, [61](#bip22298-bib-0061){ref-type="ref"}, [62](#bip22298-bib-0062){ref-type="ref"} Nature of Structural Transitions Between Different Conformational Ensembles {#bip22298-sec-0007} --------------------------------------------------------------------------- Conformational ensembles of partially folded intermediates of globular proteins are highly dynamic, suggesting that individual molecules within these ensembles are characterized by low conformational stability. This is reflected in the low steepness of the transition curves describing their unfolding induced by strong denaturants (MG unfolding), or even in the complete lack of a sigmoidal shape of the unfolding curves (pre‐MG unfolding). Such behavior is in strict contrast to the solvent‐induced unfolding of ordered globular proteins, which is known to be a highly cooperative process and which for many small globular proteins represents an all‐or‐none transition in which the cooperative unit includes the whole molecule, that is, no intermediate states can be observed within the transition region. 
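The length independence of the globule density follows directly from *ε* = 1/3: with *R* ~S~ *= K* ~h~ *M^ε^*, the apparent density *ρ = M*/(4π*R* ~S~ ^3^/3) scales as *M* ^1−3*ε*^. A brief sketch, using the *K* ~h~ and *ε* values quoted in this section (units assumed, as before; the θ‐coil *K* ~h~ of 0.25 is a representative guess):

```python
import math

def apparent_density(mass_da, k_h, eps):
    """rho = M / (4*pi*R_S**3/3) with R_S = k_h * M**eps; scales as M**(1 - 3*eps)."""
    r_s = k_h * mass_da ** eps
    return mass_da / (4.0 * math.pi * r_s ** 3 / 3.0)

# Exponent of rho versus M is 1 - 3*eps:
#   molten globule (eps = 0.33): 1 - 0.99 = 0.01  -> essentially constant density
#   theta coil     (eps = 0.50): 1 - 1.50 = -0.50 -> density falls with chain length
for label, k_h, eps in [("molten globule", 0.9, 0.33), ("theta coil", 0.25, 0.50)]:
    rho_small = apparent_density(10_000, k_h, eps)
    rho_large = apparent_density(100_000, k_h, eps)
    print(f"{label}: rho(100 kDa)/rho(10 kDa) = {rho_large / rho_small:.2f}")
```

A tenfold increase in mass leaves the MG density essentially unchanged (ratio ≈ 1) while the coil density drops roughly threefold, which is the polymer‐physics content of the "no changes with increasing chain length" statement.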
Often, urea‐ or GdmHCl‐induced unfolding of globular proteins involves at least two cooperative steps: the ordered state to MG (N ↔ MG) and the MG to unfolded state (MG ↔ U) transitions.[51](#bip22298-bib-0051){ref-type="ref"}, [52](#bip22298-bib-0052){ref-type="ref"}, [151](#bip22298-bib-0151){ref-type="ref"}, [152](#bip22298-bib-0152){ref-type="ref"}, [153](#bip22298-bib-0153){ref-type="ref"}, [154](#bip22298-bib-0154){ref-type="ref"} Therefore, the steepness of urea‐ or GdmHCl‐induced unfolding curves depends strongly on whether a given protein has a rigid tertiary structure (i.e., it is ordered) or is already denatured and exists as a MG conformational ensemble.[155](#bip22298-bib-0155){ref-type="ref"}, [156](#bip22298-bib-0156){ref-type="ref"} The slope of the transition curve at its middle point is proportional to the change of the thermodynamic quantity conjugated with the variable provoking the transition, that is, to the difference in the numbers of denaturant molecules "bound" to the initial and final states in the urea‐induced or GdmHCl‐induced transitions, Δν~eff~. The slope of a phase transition in small systems depends on the system\'s dimensions[157](#bip22298-bib-0157){ref-type="ref"}, [158](#bip22298-bib-0158){ref-type="ref"}: in the case of first‐order phase transition, the slope increases proportionally to the number of units in a system,[157](#bip22298-bib-0157){ref-type="ref"} whereas the slope of the second‐order phase transition is proportional to the square root of this number.[158](#bip22298-bib-0158){ref-type="ref"} Therefore, it is possible to distinguish between phase and nonphase intramolecular transitions by measuring whether their slopes depend on molecular weight. 
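The slope‐versus‐size diagnostic described above can be sketched numerically: synthetic Δν~eff~ values generated from the two idealized scaling laws are distinguished by their log‐log exponents (∼1 for the first‐order‐like case, ∼0.5 for the second‐order‐like case). The numbers here are illustrative only:

```python
import math

def scaling_exponent(sizes, slopes):
    """Exponent a in slope ~ N**a, from a two-point log-log estimate."""
    return math.log(slopes[-1] / slopes[0]) / math.log(sizes[-1] / sizes[0])

sizes = [50, 100, 200, 400]  # number of units (residues) in the system

# Idealized all-or-none (first-order-like) case: transition slope proportional to N.
first_order = [0.1 * n for n in sizes]
# Idealized second-order-like case: transition slope proportional to sqrt(N).
second_order = [0.1 * math.sqrt(n) for n in sizes]

exp_first = scaling_exponent(sizes, first_order)
exp_second = scaling_exponent(sizes, second_order)
print(exp_first, exp_second)  # ~1.0 vs ~0.5
```

Measuring this exponent for experimental Δν~eff~(*M*) data is exactly how phase‐like and nonphase intramolecular transitions are told apart in the analysis cited next.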
Based on these premises, the dependence of the slopes of solvent‐induced N ↔ U, N ↔ MG, and MG ↔ U transitions in globular proteins (measured in terms of the corresponding Δν~eff~ values) on protein molecular mass (*M*) was analyzed.[155](#bip22298-bib-0155){ref-type="ref"}, [156](#bip22298-bib-0156){ref-type="ref"} For small proteins, the cooperativity of the N ↔ U unfolding transition increased with *M*, suggesting that their denaturant‐induced unfolding exhibited the characteristics of an all‐or‐none transition, that is, an intramolecular analogue of a first‐order phase transition in macroscopic systems.[155](#bip22298-bib-0155){ref-type="ref"}, [156](#bip22298-bib-0156){ref-type="ref"} Similar behavior was also observed for the denaturant‐induced N ↔ MG and MG ↔ U transitions, suggesting that these two denaturant‐induced transitions in small globular proteins can also be described in terms of all‐or‐none transitions.[155](#bip22298-bib-0155){ref-type="ref"}, [156](#bip22298-bib-0156){ref-type="ref"} Finally, the pre‐MG and the MG were shown to be separated by an all‐or‐none phase transition, reflecting the fact that these partially folded intermediates represent discrete phase states.[156](#bip22298-bib-0156){ref-type="ref"}, [159](#bip22298-bib-0159){ref-type="ref"} Importantly, several structural elements of pre‐MGs may occupy native‐like positions.[62](#bip22298-bib-0062){ref-type="ref"} The existence of such a state substantially reduces the search through conformational space, ensuring rapid folding. Given that this state might comprise a specific native‐like core with buried hydrophobic residues, the transition from the pre‐MG to the MG state or to the ordered state would not require significant energy changes and could occur quite easily. 
An oversimplified representation of the folding energy profile for the framework model, with the corresponding energy barriers separating the various conformational ensembles populated by a protein molecule during its folding, is shown in Figure [3](#bip22298-fig-0003){ref-type="fig"}. CONFORMATIONAL ENSEMBLES AND PROTEIN NONFOLDING {#bip22298-sec-0008} =============================================== Structural Heterogeneity of IDPs/IDPRs {#bip22298-sec-0009} -------------------------------------- It is now recognized that a considerable number of biologically active proteins are not completely rigid, but possess some amount of disorder under physiological conditions.[15](#bip22298-bib-0015){ref-type="ref"}, [19](#bip22298-bib-0019){ref-type="ref"}, [20](#bip22298-bib-0020){ref-type="ref"}, [23](#bip22298-bib-0023){ref-type="ref"}, [24](#bip22298-bib-0024){ref-type="ref"}, [27](#bip22298-bib-0027){ref-type="ref"} These IDPs, or hybrid proteins with ordered domains and IDPRs, cannot be adequately described without being considered as conformational ensembles. Contrary to the conformational ensembles transiently populated during protein folding, the conformational ensembles of IDPs/IDPRs describe the native functional states of these proteins. 
Structurally, IDPs are highly diverse: some compact IDPs contain noticeable secondary structure and behave as native MGs, whereas other IDPs are extended and possess little residual structure (i.e., these IDPs behave as native coils or native pre‐MGs).[11](#bip22298-bib-0011){ref-type="ref"}, [17](#bip22298-bib-0017){ref-type="ref"}, [19](#bip22298-bib-0019){ref-type="ref"}, [20](#bip22298-bib-0020){ref-type="ref"} However, it was emphasized recently that intrinsic disorder can have multiple faces, can affect different levels of protein structural organization, and whole proteins or various protein regions can be disordered to different degrees.[160](#bip22298-bib-0160){ref-type="ref"} Therefore, instead of being grouped into a few discrete classes (e.g., native MGs, native pre‐MGs, and native coils), the structures of IDPs might be described by a complex structural spectrum with a great variety of potential structural classes and subclasses, or even be visualized as a continuous spectrum of differently disordered conformations extending from fully ordered to completely structureless proteins, with everything in between.[160](#bip22298-bib-0160){ref-type="ref"} Furthermore, even a single polypeptide chain can encode a highly heterogeneous protein molecule that contains variously ordered regions, that is, possesses diverse sets of foldons, inducible foldons, semifoldons, nonfoldons, and unfoldons.[161](#bip22298-bib-0161){ref-type="ref"} In this view, a foldon represents an independent cooperative foldable unit that can fold independently from the rest of the protein.[162](#bip22298-bib-0162){ref-type="ref"} The foldon concept is derived from the analysis of ordered proteins, the folding of which can be described as the stepwise assembly of foldon units, with previously formed foldons guiding and stabilizing subsequent foldons to progressively build the native protein.[163](#bip22298-bib-0163){ref-type="ref"}, [164](#bip22298-bib-0164){ref-type="ref"}, 
[165](#bip22298-bib-0165){ref-type="ref"}, [166](#bip22298-bib-0166){ref-type="ref"} Since some regions of an IDP are spontaneously folded, others can fold (at least in part) upon interaction with binding partners, still others are always in a semifolded state, and some regions do not fold at all, an IDP can be described as a modular assembly of foldons, inducible foldons, semifoldons, and nonfoldons.[160](#bip22298-bib-0160){ref-type="ref"} Furthermore, some IDPs contain unfoldons, that is, parts of a protein structure that have to undergo an order‐to‐disorder transition in order to make the protein active.[160](#bip22298-bib-0160){ref-type="ref"} Amino Acid Code for Intrinsic Disorder {#bip22298-sec-0010} -------------------------------------- The absence of unique structures in IDPs/IDPRs, together with all their functional and structural peculiarities, is encoded in their amino acid sequences. In fact, there are significant differences between ordered proteins/domains and IDPs/IDPRs at the level of their amino acid sequences.[11](#bip22298-bib-0011){ref-type="ref"}, [24](#bip22298-bib-0024){ref-type="ref"}, [167](#bip22298-bib-0167){ref-type="ref"} Some highly disordered proteins were shown to have low sequence complexity, suggesting that the sequences of IDPs may be essentially degenerate. 
However, it was later established that the distributions of the complexity values for ordered and disordered sequences overlap, suggesting that low sequence complexity is not the only characteristic feature of IDPs.[168](#bip22298-bib-0168){ref-type="ref"} Overall, the sequences of IDPs are characterized by noticeable amino acid compositional biases.[167](#bip22298-bib-0167){ref-type="ref"}, [169](#bip22298-bib-0169){ref-type="ref"} For example, extended IDPs were shown to be specifically localized within a unique region of the charge‐hydrophobicity phase space, being highly charged and possessing low hydropathy.[24](#bip22298-bib-0024){ref-type="ref"} Furthermore, in comparison with ordered proteins, IDPs/IDPRs are characterized by noticeable biases in their amino acid compositions, containing fewer of the so‐called "order‐promoting" residues (cysteine, tryptophan, isoleucine, tyrosine, phenylalanine, leucine, histidine, valine, asparagine, and methionine, mostly hydrophobic residues commonly found within the hydrophobic cores of foldable proteins) and more of the "disorder‐promoting" residues (lysine, glutamine, serine, glutamic acid, and proline, mostly polar and charged residues typically located at the surface of foldable proteins).[11](#bip22298-bib-0011){ref-type="ref"}, [23](#bip22298-bib-0023){ref-type="ref"}, [24](#bip22298-bib-0024){ref-type="ref"}, [167](#bip22298-bib-0167){ref-type="ref"}, [170](#bip22298-bib-0170){ref-type="ref"}, [171](#bip22298-bib-0171){ref-type="ref"} Natural Abundance of Intrinsic Disorder {#bip22298-sec-0011} --------------------------------------- Support for the biological significance of the protein intrinsic disorder phenomenon is given by the extremely wide distribution of these proteins among all kingdoms of life.[11](#bip22298-bib-0011){ref-type="ref"}, [24](#bip22298-bib-0024){ref-type="ref"}, [172](#bip22298-bib-0172){ref-type="ref"}, 
[173](#bip22298-bib-0173){ref-type="ref"}, [174](#bip22298-bib-0174){ref-type="ref"}, [175](#bip22298-bib-0175){ref-type="ref"}, [176](#bip22298-bib-0176){ref-type="ref"} For example, an analysis of the completed proteomes of 3,484 species from the three main kingdoms of life (archaea, bacteria, and eukaryotes) and viruses revealed that the evolution process is characterized by unique patterns of change in protein intrinsic disorder content.[176](#bip22298-bib-0176){ref-type="ref"} In particular, viruses are characterized by the widest spread of disorder content in their proteomes, with the fraction of disordered residues ranging from 7.3% in human coronavirus NL63 to 77.3% in *Avian carcinoma virus*.[176](#bip22298-bib-0176){ref-type="ref"} For several organisms from all kingdoms of life, a clear correlation was seen between their disorder contents and habitats. In multicellular eukaryotes, there was a weak correlation between organism complexity (evaluated as the number of different cell types) and overall disorder content. 
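The compositional bias described in the preceding subsection can be turned into a toy illustration: simply counting the "order‐promoting" versus "disorder‐promoting" residues listed there. This is a naive sketch, not a real disorder predictor (actual predictors use far richer sequence features), and the example sequences are invented for illustration:

```python
# Toy sketch: compare the fractions of the "order-promoting" and
# "disorder-promoting" residues named in the text. Not a real disorder
# predictor, only an illustration of the compositional bias.

ORDER_PROMOTING = set("CWIYFLHVNM")  # Cys, Trp, Ile, Tyr, Phe, Leu, His, Val, Asn, Met
DISORDER_PROMOTING = set("KQSEP")    # Lys, Gln, Ser, Glu, Pro

def composition_bias(seq: str) -> float:
    """Fraction of disorder-promoting minus fraction of order-promoting residues."""
    seq = seq.upper()
    d = sum(aa in DISORDER_PROMOTING for aa in seq) / len(seq)
    o = sum(aa in ORDER_PROMOTING for aa in seq) / len(seq)
    return d - o

# Hypothetical sequences, for illustration only:
hydrophobic_core_like = "MVLIWFYCLIVFLMHVINWC"
polar_idr_like = "SPEKQESKPSEQKSPEPQSE"
print(composition_bias(hydrophobic_core_like))  # negative -> order-leaning
print(composition_bias(polar_idr_like))         # positive -> disorder-leaning
```

A negative score flags a sequence dominated by core‐forming hydrophobic residues, and a positive score a sequence dominated by the polar/charged residues typical of IDPs.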
Although for both prokaryotes and eukaryotes the disorder content was generally independent of the proteome size, it showed a sharp increase associated with the transition from prokaryotic to eukaryotic cells.[176](#bip22298-bib-0176){ref-type="ref"} This suggests that the increased disorder content in eukaryotic proteomes might be used by nature to deal with the increased cell complexity due to the appearance of various cellular compartments.[176](#bip22298-bib-0176){ref-type="ref"} Polymer Physics of Extended IDPs {#bip22298-sec-0012} -------------------------------- Application of the polymer physics formalism to the two classes of extended IDPs (native coils and native pre‐MGs) revealed that the "salted water" of a typical "physiological" buffer containing 100--150 m*M* NaCl does not represent a poor solvent for them, since these proteins are essentially noncompact under these conditions and do not possess globular structure. In other words, these solvent conditions do not force polymer segments to interact specifically with each other and, thus, do not force them to be effectively excluded from the solvent. The hydrodynamic analysis of extended IDPs revealed that their molecular dimensions follow the chain length as *R* ~S~ = 0.28*M* ^0.49^ or *R* ~S~ = 0.6*M* ^0.40^ for the native coils and native pre‐MGs, respectively. This suggests that native coils belong to the class of relatively extended unfolded conformations. Importantly, these coils show the largest *K* ~h~ and the smallest *ε* values among the different unfolded conformations of a polypeptide chain, suggesting that native coils under physiological conditions are in considerably worsened solvent conditions in comparison with globular proteins in urea or GdmHCl solutions (lowest *ε* value), which gives rise to an increased probability of multi‐body interactions (highest *K* ~h~ value). 
However, the molecular dimensions of native pre‐MG IDPs follow exactly the same chain‐length dependence as the conformational ensembles of pre‐MGs detected as folding intermediates of ordered globular proteins. Thus, these proteins may exhibit the structural features of a squeezed polymer coil. Functions of Intrinsically Disordered Conformational Ensembles and Function‐Related Structural Transitions {#bip22298-sec-0013} ---------------------------------------------------------------------------------------------------------- Highly dynamic conformational ensembles of IDPs and IDPRs are involved in countless biological activities, since the lack of rigid globular structure under physiological conditions represents a considerable functional advantage for IDPs/IDPRs.[8](#bip22298-bib-0008){ref-type="ref"}, [10](#bip22298-bib-0010){ref-type="ref"}, [11](#bip22298-bib-0011){ref-type="ref"}, [13](#bip22298-bib-0013){ref-type="ref"}, [14](#bip22298-bib-0014){ref-type="ref"}, [15](#bip22298-bib-0015){ref-type="ref"}, [17](#bip22298-bib-0017){ref-type="ref"}, [18](#bip22298-bib-0018){ref-type="ref"}, [19](#bip22298-bib-0019){ref-type="ref"}, [20](#bip22298-bib-0020){ref-type="ref"}, [23](#bip22298-bib-0023){ref-type="ref"}, [25](#bip22298-bib-0025){ref-type="ref"}, [26](#bip22298-bib-0026){ref-type="ref"}, [177](#bip22298-bib-0177){ref-type="ref"}, [178](#bip22298-bib-0178){ref-type="ref"}, [179](#bip22298-bib-0179){ref-type="ref"}, [180](#bip22298-bib-0180){ref-type="ref"}, [181](#bip22298-bib-0181){ref-type="ref"}, [182](#bip22298-bib-0182){ref-type="ref"}, [183](#bip22298-bib-0183){ref-type="ref"} Numerous vital cellular processes, such as the regulation of transcription and translation and the control of the cell cycle, are dependent on IDPs/IDPRs (reviewed in Refs. 
[8](#bip22298-bib-0008){ref-type="ref"}, [11](#bip22298-bib-0011){ref-type="ref"}, [13](#bip22298-bib-0013){ref-type="ref"}, [14](#bip22298-bib-0014){ref-type="ref"}, [15](#bip22298-bib-0015){ref-type="ref"}, [17](#bip22298-bib-0017){ref-type="ref"}, [18](#bip22298-bib-0018){ref-type="ref"}, [19](#bip22298-bib-0019){ref-type="ref"}, [20](#bip22298-bib-0020){ref-type="ref"}, [23](#bip22298-bib-0023){ref-type="ref"}, [25](#bip22298-bib-0025){ref-type="ref"}). The common theme of protein disorder‐based functionality is recognition, and IDPs/IDPRs are frequently involved in complex protein‐protein, protein‐nucleic acid, and protein‐small molecule interactions. Some of these interactions can induce a disorder‐to‐order transition in the entire IDP or in its part.[11](#bip22298-bib-0011){ref-type="ref"}, [13](#bip22298-bib-0013){ref-type="ref"}, [14](#bip22298-bib-0014){ref-type="ref"}, [18](#bip22298-bib-0018){ref-type="ref"}, [20](#bip22298-bib-0020){ref-type="ref"}, [21](#bip22298-bib-0021){ref-type="ref"}, [22](#bip22298-bib-0022){ref-type="ref"}, [24](#bip22298-bib-0024){ref-type="ref"}, [25](#bip22298-bib-0025){ref-type="ref"}, [26](#bip22298-bib-0026){ref-type="ref"}, [172](#bip22298-bib-0172){ref-type="ref"}, [184](#bip22298-bib-0184){ref-type="ref"}, [185](#bip22298-bib-0185){ref-type="ref"}, [186](#bip22298-bib-0186){ref-type="ref"}, [187](#bip22298-bib-0187){ref-type="ref"} In other words, some IDPs/IDPRs undergo binding‐promoted functional folding at least in some of their parts. 
Furthermore, intrinsic disorder opens a unique capability for one protein to be involved in interaction with several unrelated binding partners and to gain differently folded bound structures.[183](#bip22298-bib-0183){ref-type="ref"}, [188](#bip22298-bib-0188){ref-type="ref"} Some IDPs/IDPRs can form highly stable complexes, whereas others are involved in signaling interactions where they undergo constant "bound‐unbound" transitions, thus acting as dynamic and sensitive "on‐off" switches.[21](#bip22298-bib-0021){ref-type="ref"} Several IDPs/IDPRs were shown to fold into different conformations depending on the peculiarities of their environments or upon interaction with different binding partners.[172](#bip22298-bib-0172){ref-type="ref"}, [188](#bip22298-bib-0188){ref-type="ref"} Although partial folding during the IDP/IDPR‐based interactions is a widespread phenomenon,[184](#bip22298-bib-0184){ref-type="ref"}, [185](#bip22298-bib-0185){ref-type="ref"} there are still many other IDPs/IDPRs that are involved in the formation of the "fuzzy complexes," where they keep a certain amount of disorder in their bound forms (Figure [4](#bip22298-fig-0004){ref-type="fig"}).[16](#bip22298-bib-0016){ref-type="ref"}, [21](#bip22298-bib-0021){ref-type="ref"}, [189](#bip22298-bib-0189){ref-type="ref"}, [190](#bip22298-bib-0190){ref-type="ref"}

![Fuzziness of protein structures and complexes. **A**: Fuzzy structure of a hybrid protein (p53 tetramer) that contains structured DNA‐binding and tetramerization domains (gray space‐filling models) and a disordered transactivator domain (shown as an ensemble of 20 conformations in different colors for each molecule in the tetramer). Figure is modified from Ref. [253](#bip22298-bib-0253){ref-type="ref"} with permission.
**B**: The NMR structure of a fuzzy complex between the cyclin‐dependent kinase inhibitor Sic1 \[depicted as a ribbon with color‐coding from cyan (N‐terminus) to magenta (C‐terminus)\] and the ubiquitin ligase Cdc4 (depicted as a space‐filling gray model). At any given moment, only one out of the nine phosphorylated sites of Sic1 interacts with a single binding site in Cdc4, generating a highly dynamic conformational ensemble of a complex described within the frames of the "polyelectrostatic" model.[254](#bip22298-bib-0254){ref-type="ref"}, [255](#bip22298-bib-0255){ref-type="ref"} **C**: Fuzzy complex of the negative regulatory domain (NRD) of p53 with dimeric S100B(ββ). According to extensive all‐atom explicit solvent simulations, the NRD of p53 remains highly dynamic in the S100B(ββ)‐bound state.[256](#bip22298-bib-0256){ref-type="ref"}](BIP-99-870-g004){#bip22298-fig-0004}

The range of conformational changes induced in the IDPs/IDPRs by their interaction with natural partners is very wide.[17](#bip22298-bib-0017){ref-type="ref"}, [21](#bip22298-bib-0021){ref-type="ref"} In fact, examples of all possible conformational transitions have been described, including function‐induced transitions of coil to pre‐MG, coil to MG, coil to ordered conformation, pre‐MG to MG, pre‐MG to rigid structure, and MG to ordered, rigid form.[17](#bip22298-bib-0017){ref-type="ref"}, [18](#bip22298-bib-0018){ref-type="ref"} Therefore, native proteins (or their functional regions) can exist in any of the known conformational states: ordered, MG, pre‐MG, and coil.[11](#bip22298-bib-0011){ref-type="ref"}, [23](#bip22298-bib-0023){ref-type="ref"}, [25](#bip22298-bib-0025){ref-type="ref"} Function can arise from any of these conformations and from transitions between them. In other words, not just the ordered state but any of the known polypeptide conformations can be the native state of a protein.
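The function‐induced transitions enumerated above form a small directed network over the four conformational states. A minimal sketch of that network follows; the state names and the transition list come from the text, while the mapping of "rigid structure/rigid form" onto the "ordered" state and the helper function are illustrative assumptions.

```python
# Illustrative sketch (not from the review itself): the function-induced
# disorder-to-order transitions listed in the text, stored as an
# adjacency mapping. "Rigid structure/rigid form" is folded into the
# "ordered" endpoint here for simplicity -- an assumption.

FUNCTION_INDUCED_TRANSITIONS = {
    "coil": {"pre-MG", "MG", "ordered"},   # coil -> pre-MG, MG, ordered
    "pre-MG": {"MG", "ordered"},           # pre-MG -> MG, rigid structure
    "MG": {"ordered"},                     # MG -> ordered, rigid form
}

def is_function_induced(source: str, target: str) -> bool:
    """True if the text lists a function-induced source -> target transition."""
    return target in FUNCTION_INDUCED_TRANSITIONS.get(source, set())

print(sorted(FUNCTION_INDUCED_TRANSITIONS["coil"]))  # ['MG', 'ordered', 'pre-MG']
```

Note that all listed transitions point toward more ordered states; the reverse, functional order‐to‐disorder transitions, are discussed separately in the text as "awakening of dormant disorder."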
In addition to the functional transitions toward more structured conformational ensembles, some ordered proteins possess functional dormant disorder, where these proteins are inactive when they are ordered and become activated when they become more disordered.[161](#bip22298-bib-0161){ref-type="ref"} The important features of these functional alterations are their induced nature and transient character. In other words, the function‐related disordering of a protein is induced by transient alterations in its environment or by transient modification of its structure, and is released as soon as the environment is restored or the modification is removed. These unusual features are important prerequisites of protein functions relying on the induced unfolding or transient disorder mechanism.[161](#bip22298-bib-0161){ref-type="ref"} In other words, the functions of these proteins depend on transitions against the major stream, that is, from ordered states to dynamic conformational ensembles. Importantly, this phenomenon of awakening dormant disorder is rather abundant, and different means are used by Nature to ensure such functional order‐to‐disorder transitions.[161](#bip22298-bib-0161){ref-type="ref"} In fact, any external factor that can potentially unfold the structure of a folded protein can be used here, such as changes in pH, temperature, redox potential, light, mechanical force, membrane binding, interaction with ligands, protein‐protein interactions, various posttranslational modifications (PTMs), or release of autoinhibition due to the unfolding of autoinhibitory domains or their interaction with nucleic acids, proteins, membranes, PTMs, etc.[161](#bip22298-bib-0161){ref-type="ref"}

IDPs/IDPRs in Human Diseases {#bip22298-sec-0014}
----------------------------

Intrinsic disorder is a tightly controlled phenomenon, and there is an evolutionarily conserved tight regulation of synthesis and clearance of most IDPs,[191](#bip22298-bib-0191){ref-type="ref"} giving rise to the
"controlled chaos" concept.[192](#bip22298-bib-0192){ref-type="ref"} This tight control is directly related to the major roles of IDPs in signaling, where, for a given signaling protein, it is crucial to be available in appropriate amounts and not to be present longer than needed.[191](#bip22298-bib-0191){ref-type="ref"} However, uncontrolled chaos is frequently associated with human maladies, and as a result, intrinsic disorder is highly abundant among proteins associated with various human diseases. Since ID proteins are very common in various diseases, the "disorder in disorders" or D^2^ concept was introduced to summarize work in this area,[193](#bip22298-bib-0193){ref-type="ref"} and concepts of the disease‐related unfoldome and unfoldomics were developed.[194](#bip22298-bib-0194){ref-type="ref"}

CONFORMATIONAL ENSEMBLES AND PROTEIN MISFOLDING {#bip22298-sec-0015}
===============================================

Molecular Mechanisms of Protein Misfolding and Protein Deposition Diseases {#bip22298-sec-0016}
--------------------------------------------------------------------------

The sequences of proteins have evolved in such a way that their native states can be formed very efficiently even in the complex environment inside a living cell. However, under some conditions, many proteins fail to fold properly, or to remain correctly folded, giving rise to the protein misfolding phenomenon that can eventually lead to the development of different pathological conditions, such as Alzheimer\'s disease, Parkinson\'s disease, transmissible spongiform encephalopathies, cancer, cardiovascular disease (CVD), diabetes, etc. Among the well‐known structural consequences of protein misfolding is protein aggregation, which leads to the development of various protein deposition diseases (frequently termed amyloidoses).
Here, a specific protein or protein fragment changes from its natural soluble form into insoluble fibrils, which accumulate in a variety of organs and tissues.[34](#bip22298-bib-0034){ref-type="ref"}, [195](#bip22298-bib-0195){ref-type="ref"}, [196](#bip22298-bib-0196){ref-type="ref"}, [197](#bip22298-bib-0197){ref-type="ref"}, [198](#bip22298-bib-0198){ref-type="ref"}, [199](#bip22298-bib-0199){ref-type="ref"} Importantly, prior to fibrillation, amyloidogenic polypeptides may be rich in β‐sheets or α‐helices, or contain both α‐helices and β‐sheets. They may be well‐folded proteins, or IDPs, or hybrid proteins containing differently ordered domains and differently disordered IDPRs. Despite these differences, the fibrils from different pathologies display many common properties, including a core cross‐β‐sheet structure with continuous β‐sheets formed by β‐strands running perpendicular to the long axis of the fibrils.[200](#bip22298-bib-0200){ref-type="ref"} Since all amyloid‐like fibrils, independent of the original structure of the given amyloidogenic proteins, have a common cross‐β‐structure, considerable conformational rearrangements have to occur prior to fibrillation.[201](#bip22298-bib-0201){ref-type="ref"} Based on the detailed analysis of structural changes preceding and accompanying amyloidogenesis, and on the structural characterization of the amyloidogenic intermediate(s), it has been concluded that the amyloidogenic conformation is only slightly folded and shares many structural properties with the conformational ensembles typical of pre‐MG proteins.[201](#bip22298-bib-0201){ref-type="ref"} Therefore, the general hypothesis of the molecular mechanisms of fibrillogenesis postulates that structural transformation of a polypeptide chain into the conformational ensemble of partially folded molecules represents an important prerequisite for successful protein fibrillation.[201](#bip22298-bib-0201){ref-type="ref"} However, pathways to these amyloidogenic
conformational ensembles are quite different for ordered proteins and IDPs. Even the most tightly folded protein is never completely devoid of flexibility, and due to conformational breathing (spontaneous structural fluctuations), the structure of a globular protein under physiological conditions typically represents a mixture of tightly folded and multiple partially unfolded conformations, with a great prevalence of the former. Therefore, in ordered, well‐folded proteins, amyloidogenicity‐promoting changes cannot happen spontaneously due to the strong prevalence of a stable and unique tertiary structure. Thus, destabilization of an ordered protein favoring partial unfolding and formation of conformational ensembles of partially unfolded molecules is required. In other words, the first critical step in the fibrillogenesis of an ordered protein is its partial unfolding or destabilization leading to the formation of an amyloidogenic conformational ensemble. Presumably, such a partially unfolded conformational ensemble favors reciprocal and specific intermolecular interactions, including electrostatic attraction, hydrogen bonding, and hydrophobic contacts, which are necessary for oligomerization and fibrillation.[6](#bip22298-bib-0006){ref-type="ref"}, [7](#bip22298-bib-0007){ref-type="ref"}, [34](#bip22298-bib-0034){ref-type="ref"}, [195](#bip22298-bib-0195){ref-type="ref"}, [196](#bip22298-bib-0196){ref-type="ref"}, [197](#bip22298-bib-0197){ref-type="ref"}, [198](#bip22298-bib-0198){ref-type="ref"}, [199](#bip22298-bib-0199){ref-type="ref"}, [202](#bip22298-bib-0202){ref-type="ref"}, [203](#bip22298-bib-0203){ref-type="ref"}, [204](#bip22298-bib-0204){ref-type="ref"} In line with this hypothesis, most mutations associated with accelerated fibrillation and protein deposition diseases were shown to destabilize the native structure, increasing the steady‐state concentration of partially folded conformers.[24](#bip22298-bib-0024){ref-type="ref"},
[195](#bip22298-bib-0195){ref-type="ref"}, [196](#bip22298-bib-0196){ref-type="ref"}, [197](#bip22298-bib-0197){ref-type="ref"}, [198](#bip22298-bib-0198){ref-type="ref"}, [199](#bip22298-bib-0199){ref-type="ref"}, [205](#bip22298-bib-0205){ref-type="ref"}, [206](#bip22298-bib-0206){ref-type="ref"}, [207](#bip22298-bib-0207){ref-type="ref"}, [208](#bip22298-bib-0208){ref-type="ref"}, [209](#bip22298-bib-0209){ref-type="ref"}, [210](#bip22298-bib-0210){ref-type="ref"}, [211](#bip22298-bib-0211){ref-type="ref"} However, the aggregation propensity of a protein can be significantly reduced by stabilization of its ordered structure, for example, via specific binding of ligands.[212](#bip22298-bib-0212){ref-type="ref"}, [213](#bip22298-bib-0213){ref-type="ref"}, [214](#bip22298-bib-0214){ref-type="ref"} Contrary to ordered proteins, IDPs are assumed to be well suited for amyloidogenesis, since they lack significant secondary and tertiary structure, as well as many specific intra‐chain interactions. In the absence of such conformational constraints, they are expected to be substantially more conformationally flexible, and thus able to polymerize more readily than tightly packed globular proteins.
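The link between destabilizing mutations and an increased steady‐state concentration of partially folded conformers can be illustrated with a simple two‐state Boltzmann estimate. This model and its numbers are illustrative assumptions for the sake of a worked example, not results from the review:

```python
import math

# Illustrative two-state sketch (an assumption, not the review's model):
# the equilibrium fraction of partially unfolded molecules follows from
# the unfolding free energy dG (kJ/mol) via the Boltzmann relation
#     f_unfolded = 1 / (1 + exp(dG / RT)).
R = 8.314e-3  # gas constant, kJ/(mol*K)

def unfolded_fraction(dG_kJ_mol: float, temp_K: float = 310.0) -> float:
    """Equilibrium fraction of molecules in the partially unfolded state."""
    return 1.0 / (1.0 + math.exp(dG_kJ_mol / (R * temp_K)))

# A hypothetical destabilizing mutation that lowers dG from 30 to
# 15 kJ/mol raises the steady-state population of the aggregation-prone
# conformer by orders of magnitude at body temperature.
print(unfolded_fraction(30.0))
print(unfolded_fraction(15.0))
```

Even though the destabilized protein is still predominantly folded in this toy calculation, the much larger minority population of partially unfolded molecules is what makes intermolecular association kinetically accessible.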
Substantial evidence suggests that in the fibrillation of extended IDPs, which constitute a significant fraction of known amyloidogenic proteins[215](#bip22298-bib-0215){ref-type="ref"}, [216](#bip22298-bib-0216){ref-type="ref"} and which do not have unique tertiary structures in their native states, one of the first steps is partial folding, that is, stabilization of conformational ensembles containing partially folded protein molecules.[217](#bip22298-bib-0217){ref-type="ref"}, [218](#bip22298-bib-0218){ref-type="ref"}, [219](#bip22298-bib-0219){ref-type="ref"}, [220](#bip22298-bib-0220){ref-type="ref"}, [221](#bip22298-bib-0221){ref-type="ref"}

α‐Synuclein as a Model Amyloidogenic IDP {#bip22298-sec-0017}
----------------------------------------

In addition to point mutations, various environmental factors can promote the formation of such an amyloidogenic conformational ensemble. An illustrative example of the extreme sensitivity of IDPs to their environment and of their ability to form an amyloidogenic partially folded form is given by α‐synuclein, a small (14 kDa), soluble, intracellular, highly conserved protein that is abundant in various regions of the brain and accounts for as much as 1% of the total protein in soluble cytosolic brain fractions.
Structurally, purified α‐synuclein is a typical extended IDP which, being highly unstructured at neutral pH and physiological temperature, does not represent a random coil[217](#bip22298-bib-0217){ref-type="ref"} but possesses some residual secondary structure,[222](#bip22298-bib-0222){ref-type="ref"} which leads to partial compaction of this protein.[217](#bip22298-bib-0217){ref-type="ref"}, [223](#bip22298-bib-0223){ref-type="ref"} Misfolding, dysfunction, aggregation, and deposition of aggregated α‐synuclein are associated with several neurodegenerative diseases collectively known as synucleinopathies, with Parkinson\'s disease being the most well‐known example of this group of neurodegenerative disorders.[224](#bip22298-bib-0224){ref-type="ref"}, [225](#bip22298-bib-0225){ref-type="ref"}, [226](#bip22298-bib-0226){ref-type="ref"}, [227](#bip22298-bib-0227){ref-type="ref"}, [228](#bip22298-bib-0228){ref-type="ref"}, [229](#bip22298-bib-0229){ref-type="ref"}, [230](#bip22298-bib-0230){ref-type="ref"}, [231](#bip22298-bib-0231){ref-type="ref"}, [232](#bip22298-bib-0232){ref-type="ref"}, [233](#bip22298-bib-0233){ref-type="ref"}, [234](#bip22298-bib-0234){ref-type="ref"}, [235](#bip22298-bib-0235){ref-type="ref"} The fibrillogenesis of this protein is intensively studied, and the accumulated data strongly suggest that the formation of a partially folded intermediate (possessing the major characteristics of the pre‐MG) represents the critical first step of α‐synuclein fibrillogenesis.[217](#bip22298-bib-0217){ref-type="ref"} This conformational ensemble can be stabilized by numerous factors, such as high temperatures, low pH,[217](#bip22298-bib-0217){ref-type="ref"} the presence of low concentrations of various organic solvents[236](#bip22298-bib-0236){ref-type="ref"} and TMAO,[217](#bip22298-bib-0217){ref-type="ref"} the presence of different metal ions,[237](#bip22298-bib-0237){ref-type="ref"} various salts,[238](#bip22298-bib-0238){ref-type="ref"}
several common pesticides/herbicides,[239](#bip22298-bib-0239){ref-type="ref"}, [240](#bip22298-bib-0240){ref-type="ref"}, [241](#bip22298-bib-0241){ref-type="ref"} heparin and other glycosaminoglycans,[242](#bip22298-bib-0242){ref-type="ref"} some polycations,[243](#bip22298-bib-0243){ref-type="ref"} or as a result of spontaneous oligomerization both in vitro and in vivo.[244](#bip22298-bib-0244){ref-type="ref"} Under all conditions stabilizing the pre‐MG‐like conformation, α‐synuclein was shown to possess an enhanced fibrillation propensity. Importantly, fibril formation was considerably slowed down or even completely inhibited under conditions favoring the formation of more folded conformations, or by stabilization of the more unfolded form, for example, by oxidation of its methionines.[245](#bip22298-bib-0245){ref-type="ref"}

Multiple Pathways of Protein Misfolding and Aggregation {#bip22298-sec-0018}
-------------------------------------------------------

Obviously, the process of amyloid fibril formation does not represent the only misfolding route. In fact, contrary to the process of productive protein folding, which results in the formation of a unique conformation with a specific function, the end products of misfolding may have very different appearances. The morphology of these end products depends on the particular experimental conditions, and the misfolded product may appear as soluble oligomers, amorphous aggregates, or amyloid‐like fibrils. Any of these three species can be cytotoxic, thus giving rise to the development of pathological conditions. The reason for such a morphological difference is potentially connected to the diversity of the conformational ensembles of partially folded forms favoring protein self‐association.
In fact, multiple environmental factors, such as point mutations, a decrease in pH, an increase in temperature, or the presence of small organic molecules, metal ions, and other charged molecules, might induce structural rearrangements within a protein molecule, shifting the equilibrium toward the partially folded conformation(s). As different factors may stabilize slightly different conformational ensembles, the formation of morphologically different aggregates is expected. This idea is illustrated by Figure [5](#bip22298-fig-0005){ref-type="fig"}, which represents an idealized model of amyloid fibril formation and clearly shows that fibrillation is a directed process with a series of consecutive steps, including the formation of several different oligomers.[246](#bip22298-bib-0246){ref-type="ref"} In this model, the various oligomers are comprised of structurally identical monomers, and the formation of these oligomers constitutes productive steps of the fibrillation pathway. However, aggregation is known to induce dramatic structural changes in the aggregating protein. Therefore, monomers at different aggregation stages are not identical. In addition, recent studies clearly showed that a given protein can self‐assemble into various aggregated forms, depending on the peculiarities of its environment. In fact, the typical aggregation process only rarely results in the appearance of a homogeneous product where, at the end of the reaction, only one aggregated species (amyloid fibrils, amorphous aggregates, or soluble oligomers) is present. More often, heterogeneous mixtures of various aggregated forms are observed.[246](#bip22298-bib-0246){ref-type="ref"} Furthermore, each aggregated form can have multiple morphologies, and monomers comprising morphologically different aggregated forms can be structurally different.
All this suggests that aggregation is not a simple reaction, but a very complex process with multiple related and unrelated pathways, which can be connected or disjoined. However, regardless of the model or pathway considered, the appearance of a large aggregate inevitably involves the formation of some small oligomeric species.[246](#bip22298-bib-0246){ref-type="ref"}

![An oversimplified schematic representation of the protein self‐association process. Formation of multiple association‐prone monomeric forms generates multiple aggregation pathways. There are three major products of the aggregation reaction: amorphous aggregates (bottom pathway), morphologically different soluble oligomers (second and third from the top pathways), and morphologically different amyloid fibrils (two bottom pathways). Two types of soluble oligomers (spheroidal and annular) and two morphologically different amyloid fibrils are shown. Changes in color reflect potential structural changes within a monomer taking place at each elementary step. In reality, the picture is much more complex and many more species can be observed. Interconversions between various species at different pathways are also possible. Figure is adapted, with permission, from Ref. [246](#bip22298-bib-0246){ref-type="ref"}.](BIP-99-870-g005){#bip22298-fig-0005}

Polymeric Aspects of Protein Misfolding and Aggregation {#bip22298-sec-0019}
-------------------------------------------------------

The behavior of a given polymer in a given solution is determined by the peculiarities of polymer segment--solvent interactions. For example, the major reason for the appearance of the globular conformation (in our particular case, the correctly folded form of a "normal" globular protein) in a poor solvent (water) is that this conformation effectively excludes a portion of the segments from unfavorable contacts with the solvent and forms a shielding interface between the polymer interior and the solvent.
In turn, the stability of the globular conformation also depends on the peculiarities of the interactions between the protein globule and the solvent. Obviously, many factors may affect the efficiency of the coil‐globule transition (i.e., the efficiency and direction of the process of protein folding), as well as change the efficiency of the shield (the interface between the polymer and the solvent) and, thus, may modulate the stability of a native protein molecule. Basically, point amino acid substitutions, changes in pH, temperature, and numerous other environmental circumstances may considerably affect the mode of polymer‐solvent interactions. Thus, protein misfolding (aggregation) may originate from changes in the relative quality of the solvent, which appear either due to specific changes in the protein amino acid composition or because of modifications of the solvent composition.

Overall Abundance of IDPs and Hybrid Proteins with Long IDPRs in Human Diseases {#bip22298-sec-0020}
-------------------------------------------------------------------------------

The intensive involvement of IDPs in the pathogenesis of many human diseases is determined by the crucial place of these proteins in the regulation and control of various biological processes. Besides protein deposition diseases, IDPs/IDPRs are known to be responsible for the pathogenesis of various cancers, diabetes, CVD, and several other maladies. The validity of this statement is based not only on a multitude of individual examples of IDPs playing various pathological roles, but also on the results of focused computational/bioinformatics studies specifically designed to estimate the abundance of IDPs in various pathological conditions.
The first approach is based on the assembly of specific datasets of proteins associated with a given disease and the computational analysis of these datasets using a number of disorder predictors.[177](#bip22298-bib-0177){ref-type="ref"}, [215](#bip22298-bib-0215){ref-type="ref"}, [216](#bip22298-bib-0216){ref-type="ref"}, [247](#bip22298-bib-0247){ref-type="ref"}, [248](#bip22298-bib-0248){ref-type="ref"}, [249](#bip22298-bib-0249){ref-type="ref"} This approach represents an extension of the analysis of individual proteins to a set of independent proteins. Such analysis revealed that 79% of cancer‐associated and 66% of cell‐signaling proteins contain predicted regions of disorder of 30 residues or longer.[177](#bip22298-bib-0177){ref-type="ref"} A similar analysis revealed that the percentage of proteins with 30 or more consecutive disordered residues was 61% for proteins associated with CVD.[248](#bip22298-bib-0248){ref-type="ref"} Many CVD‐related proteins were predicted to be wholly disordered, with 101 proteins from the CVD dataset predicted to have a total of almost 200 specific disorder‐based binding motifs (thus, about 2 binding sites per protein).[248](#bip22298-bib-0248){ref-type="ref"} Finally, the dataset analysis revealed that in addition to being abundant in cancer‐ and CVD‐related proteins, intrinsic disorder is commonly found in such maladies as neurodegenerative diseases and diabetes.[193](#bip22298-bib-0193){ref-type="ref"}, [215](#bip22298-bib-0215){ref-type="ref"} A second approach used the diseasome, a network of genetic diseases where the related proteins are interlinked within one disease and between different diseases.[250](#bip22298-bib-0250){ref-type="ref"} Here, the abundance of intrinsic disorder was analyzed in the human diseasome,[250](#bip22298-bib-0250){ref-type="ref"} which is a complex network that systematically links the human disease phenome with the human disease genome.[251](#bip22298-bib-0251){ref-type="ref"} These analyses
showed that many human genetic diseases are caused by alteration of IDPs, that different disease classes varied in the disorder contents of their associated proteins, and that many IDPs involved in some diseases were enriched in disorder‐based protein interaction sites.[250](#bip22298-bib-0250){ref-type="ref"} Finally, a third approach is based on the evaluation of the association between a particular protein function (including disease‐specific functional keywords) and the level of intrinsic disorder in a set of proteins known to carry out this function.[179](#bip22298-bib-0179){ref-type="ref"}, [180](#bip22298-bib-0180){ref-type="ref"}, [252](#bip22298-bib-0252){ref-type="ref"} This analysis revealed that many diseases were strongly correlated with proteins predicted to be disordered.[179](#bip22298-bib-0179){ref-type="ref"}, [180](#bip22298-bib-0180){ref-type="ref"}, [252](#bip22298-bib-0252){ref-type="ref"} Contrary to this, no disease‐associated proteins were found to be strongly correlated with the absence of disorder.[252](#bip22298-bib-0252){ref-type="ref"}

CONCLUDING REMARKS {#bip22298-sec-0021}
==================

This review emphasizes the unique roles that conformational ensembles play in a protein\'s life. These ensembles, which are either transiently populated (as in protein folding) or represent stable entities (as in IDPs), define peculiarities of protein folding, represent functional states of IDPs/IDPRs, and mark pathogenic traps originating from protein misfolding and leading to the pathogenesis of a whole realm of human diseases. The predisposition of a given protein for folding, nonfolding, or misfolding is determined by the peculiarities of its amino acid sequence and by the specific features of the protein\'s environment.
Furthermore, although the choice between nonfolding, folding, and misfolding is encoded in a given amino acid sequence, transitions between various types of conformational ensembles are also possible and are controlled by multiple factors, starting from the peculiarities of the protein\'s amino acid sequence and ending with specific features of the protein\'s environment. For example, IDPs may be forced to fold or misfold via posttranslational modifications, the addition of natural binding partners, or modification of their environment (e.g., changes in the properties of the solvent). A destabilizing environment may push an ordered protein onto the misfolding route or may awaken its dormant disorder for function, whereas the presence of chaperones may reverse the misfolding route and effectively dissolve small aggregates.[29](#bip22298-bib-0029){ref-type="ref"}
The necessity for frequent replacements requires that operating costs be heavily and regularly charged to create and maintain a depreciation reserve. Consequently, operating costs are the thing — the first and prime consideration in the ascertainment of what must be collected for the service. In this kind of utility the investment of capital in fixed assets seldom approaches the amount invested in that form by those utilities requiring for their purposes such permanent items as land, buildings, plants, rights of way, tunnels, trackage, depots, wires, poles, mains, reservoirs, or similar long lived properties. Careful appraisement in the latter class of utilities of their capital assets, in order to fix a rate base, is imperative, but it is not so dominant an inquiry for a suburban bus company. A B & W is typical; its chief concern is operating expense. Accentuation of operating costs in a proper case is known as the operating ratio rule. It is the pattern followed by the Commission here and in many other appropriate instances. Actually it is not a departure from the conventional modes. It is simply a label designating the process of evaluating a service in the light of all relevant factors, but with especial emphasis on the element of operating expense when the nature of the service makes operating costs the foremost consideration. The Commission's analysis was painstakingly detailed and comprehensive. It found that in the first quarter of 1950 the revenues fell below those in 1949 for the same period by $110,000, and the expenses in the 1950 quarter were greater by $36,000 than the revenues for that quarter. The quarter was proved to be a safe forecast for the year. The ratio of costs to revenue in the 1950 quarter, before income taxes, was 105, compared with 96.6 for the first quarter of 1949 and 97.3 for the entire year 1949, the last two ratios including provision for income taxes. 
The losses were also demonstrated through comparative bus-mile expenses and revenues. A loss trend was evident, not to be arrested by the strictest economies — some so drastic as to draw the criticism of the County Board. The Company's equipment-maintenance and garage expense for 1949 was less than for 1948, despite a general 10-cent hourly wage increase. Though bus parts, gasoline, tires, and other items advanced, the 1949 operation and maintenance expenses were also less than the 1948. These same items in the first quarter of 1950 were considerably smaller than for the 1949 quarter. Bus mileage was curtailed each month of 1950 below the corresponding month of 1949. Officers' salaries were scrutinized, as was the rental paid to the president of *331 the Company for the use of his building as a terminal. Depreciation rates were weighed. Thought was given to the possibility of a traffic decline incident to an increase of fares, with the reservation that, if no such decrease was felt, the fares could be readjusted to prevent excessive returns to the Company. We do not attempt an enumeration of all the matters mentioned by the Commission. Those we have noted sufficiently example the breadth and intensity of the Commission's study. Summarized, the cost figures found by the Commission disclose these operating results:

                                                        First Quarter
                                    1948        1949        of 1950
                                ----------  ----------  -------------
  Operating Revenues            $2,855,775  $2,995,837      $660,360
  Operation Expense              2,876,338   2,913,532       696,765
  Operation ratio — per
  centum — after income
  taxes — and including
  inappreciable nonoperating
  income and deductions              100.7        97.3         105.5

The Commission is not guilty of the unfavorable implication arising from the charge that it pursued a cost-plus method. So used, this term implies that cost and profit were its only consideration. Every price-fixing, and a rate determination is nothing more, is finally the sum of cost plus profit.
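The operating ratio shown in the table is simple arithmetic: operating expense divided by operating revenue, stated per centum. A short calculation from the revenue and expense figures above reproduces the reported ratios to one decimal place (the Commission's figures also fold in income taxes and inappreciable nonoperating items, so the raw quotients are only an approximation, though here they round to the same values):

```python
# Operating ratio, per centum: operating expenses / operating revenues.
# Revenue and expense figures are those found by the Commission; the
# reported ratios also include income taxes and inappreciable
# nonoperating items, so the raw quotients below only approximate them.
figures = {
    "1948": (2_855_775, 2_876_338, 100.7),
    "1949": (2_995_837, 2_913_532, 97.3),
    "First quarter of 1950": (660_360, 696_765, 105.5),
}

for period, (revenues, expenses, reported) in figures.items():
    ratio = round(expenses / revenues * 100, 1)
    print(f"{period}: operating ratio {ratio} (reported {reported})")
```

Only the 1950 quarter shows a ratio above 100, that is, expenses exceeding revenues, which is the loss the Commission relied on; the new fares were computed to bring the ratio down to roughly 95.6.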
It is unlawful only when it ignores the other factors, and when the cost and profit are not subjected to the refining processes prescribed by law. Instantly, while the fare necessarily is made up of cost and profit, each of these has been pared to conform them to the rules of reasonableness. Significantly no issue of inadequacy or inefficiency of service has entered this case. Nevertheless, on the patrons' side, the Commission considered the age and number of the busses and the frequency of their schedules in rush hours and non-rush hours. It attentively examined the grievances urged by the citizen associations. Furthermore, the fare zones were measured for equality. But above all, the absence of any reduction in the net return for deficiency in service, or of any request therefor by the plaintiffs, is the most persuasive evidence that every normal requirement of service to the passengers has been met. Suggestion is made by the Board that the Commission erroneously failed to segregate the interstate from the intrastate revenues and expenses. The argument is that the Commission has not demonstrated that the interstate revenues are insufficient to take care of the interstate costs and to give a fair profit on the interstate business — that the losses may perhaps be rooted in the intrastate business. This argument is not sound. The inter and intra business are one entire operation, the latter as an integral part of the other. In these circumstances, for rate ascertainments, no separation is helpful. Neither reason nor expediency imposes such an obligation and the law has not.[6] No omission has been made of the particular considerations enjoined upon the Commission by the statute.[7] Nor has it infringed the statutory prohibition against *332 the use of "earning power" as evidence or an element of "value of the property" of the carrier[8] for no property here has been so appraised. 
The finding of the Commission that the new rates will establish an operating ratio of approximately 95.6 means that it allows a net percentage of 4.4 for profit. The reasonableness of this allowance appeared to the Commission from the facts of the record and hence is not subject to the objection sustained in Washington Gas Light Co. v. Baker[9] as urged by the plaintiffs. We have reviewed the proceedings on the points made by the plaintiffs, but it must be remembered that we are not deciding the merits of the rate case. Our search reveals that the Commission acted within the law defining its powers and on full and sufficient facts. This ends our inquiry. We are thereafter without jurisdiction to question the Commission's decision. The defendants in their answers challenge the standing of the County Board and its members in their representative capacity to bring this action. They acknowledge the right of the plaintiff Thomas F. Proctor to do so, as a patron of the bus line. Although the statute[10] gives the County Board and its members the privilege to be heard before the Commission, we have grave doubts of their right to institute a suit of this kind, because they are not directly affected by the order and the public is already represented by the Commission.[11] Nevertheless, as the plaintiff Proctor is conceded that right, we pass upon the case as if all the plaintiffs were properly before the court. A decree will be entered dismissing the complaint, with costs to the defendants. NOTES [1] Secs. 1336, 2321-2325, Title 28 U.S. Code. [2] 5 U.S.C.A. § 1007. [3] 49 U.S.C.A. § 316. [4] This percentage will be slightly greater. It was based upon an assumed raise of fares in the zone of the Government installations to the maximum requested, but subsequent to the instant report, only a partial increase was granted in that zone. [5] Federal Power Commission v. Hope Natural Gas Co., 320 U.S. 591, 602, 64 S.Ct. 281, 88 L.Ed. 333. [6] Illinois Commerce Commission v. U. 
S., 292 U.S. 474, 483, 54 S.Ct. 783, 78 L.Ed. 1371; Lone Star Gas Co. v. State of Texas, 304 U.S. 224, 241, 58 S.Ct. 883, 82 L.Ed. 1304. [7] 49 U.S.C.A. § 316(i). [8] 49 U.S.C.A. § 316(h). [9] 88 U.S.App.D.C. 115, 188 F.2d 11, certiorari denied 340 U.S. 952, 71 S.Ct. 571, 95 L.Ed. 686. [10] 49 U.S.C.A. § 316(e). [11] Jersey City v. U. S., D.C.N.J., 101 F. Supp. 702; Tyler v. Judges, 179 U.S. 405, 21 S.Ct. 206, 45 L.Ed. 252; U. S. v. Merchants Traffic Ass'n, 242 U. S. 178, 188, 37 S.Ct. 24, 61 L.Ed. 233; Pittsburgh & W. Va. Ry. v. U. S., 281 U.S. 479, 486, 50 S.Ct. 378, 74 L.Ed. 980; Moffat Tunnel League v. U. S., 289 U.S. 113, 53 S.Ct. 543, 77 L.Ed. 1069.
Scientific Reports 6: Article number: 31830; doi: 10.1038/srep31830; published online: 23 August 2016; updated: 30 November 2016

This Article contains errors in Figure 4, where the hyperfine values for the donor separations along the [110] direction were calculated incorrectly. The correct Figure 4 appears below as Figure 1.

As a result, in the Results and Discussions section,

"From Fig. 4 we can see that A_ij of a P-donor pair in silicon with one bound electron can vary from ~366.0 MHz to ~48.9 MHz within a 5 nm separation range."

should read:

"From Fig. 4 we can see that A_ij of a P-donor pair in silicon with one bound electron can vary from ~287.4 MHz to ~48.9 MHz within a 5 nm separation range."

[Figure 1: the corrected version of Figure 4.]
Monazite-(La) Monazite-(La) is a relatively rare representative of the monazite group, with lanthanum being the dominant rare earth element in its structure. As such, it is the lanthanum analogue of monazite-(Ce), monazite-(Nd), and monazite-(Sm). It is also the phosphorus analogue of gasparite-(La). The group contains simple rare earth phosphate minerals with the general formula ATO4, where A = Ce, La, Nd, or Sm (or, rarely, Bi), and T = P or, rarely, As. The A site may also bear Ca and Th. References Category:Lanthanum minerals Category:Phosphate minerals Category:Monoclinic minerals
[Evaluation of hearing thresholds of 40Hz auditory event related potential and auditory brainstem response]. Recordings of pure tone response, 40 Hz auditory event-related potential (40 Hz AERP) and auditory brainstem response (ABR) were obtained for 74 ears of 42 cases (32 normal ears, 42 injured ears). 40 Hz AERP (0.5-2 kHz) of 20 ears was tested under natural sleep and awake conditions. The results showed that the threshold of 40 Hz AERP was higher than that of the tone pip, with differences of 12.7 +/- 6.4 dBnHL at 0.5 kHz, 14.7 +/- 6.3 dBnHL at 1 kHz, and 15.6 +/- 5.6 dBnHL at 2 kHz, respectively. The threshold of 40 Hz AERP increased during natural sleep compared with the awake state. The threshold of ABR was higher than the behavioral threshold. The data suggest that jointly using several tests gives more accurate and objective results in evaluating hearing loss.
/***************************************************************************
 *             __________               __   ___.
 *   Open      \______   \ ____   ____ |  | _\_ |__   _______  ___
 *   Source     |       _//  _ \_/ ___\|  |/ /| __ \ /  _ \  \/  /
 *   Jukebox    |    |   (  <_> )  \___|    < | \_\ (  <_> > <  <
 *   Firmware   |____|_  /\____/ \___  >__|_ \|___  /\____/__/\_ \
 *                     \/            \/     \/    \/            \/
 *
 * Copyright (C) 2014 by Marcin Bukat
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ****************************************************************************/

#include <getopt.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#include "rkw.h"

#define VERSION "v0.1"

static void banner(void)
{
    printf("RKWtool " VERSION " (C) Marcin Bukat 2014\n");
    printf("This is free software; see the source for copying conditions. There is NO\n");
    printf("warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\n");
}

static void usage(char *name)
{
    banner();
    printf("Usage: %s [-i] [-b] [-e] [-a] [-o prefix] file.rkw\n", name);
    printf("-i\t\tprint info about RKW file\n");
    printf("-b\t\textract nand bootloader images (s1.bin and s2.bin)\n");
    printf("-e\t\textract firmware files stored in RKST section\n");
    printf("-o prefix\twhen extracting firmware files put it there\n");
    printf("-a\t\textract additional file(s) (usually Rock27Boot.bin)\n");
    printf("-A\t\textract all data\n");
    printf("file.rkw\tRKW file to be processed\n");
}

int main(int argc, char **argv)
{
    int opt;
    struct rkw_info_t *rkw_info = NULL;
    char *prefix = NULL;
    bool info = false;
    bool extract = false;
    bool bootloader = false;
    bool addfile = false;

    while ((opt = getopt(argc, argv, "iebo:aA")) != -1)
    {
        switch (opt)
        {
            case 'i':
                info = true;
                break;
            case 'e':
                extract = true;
                break;
            case 'b':
                bootloader = true;
                break;
            case 'o':
                prefix = optarg;
                break;
            case 'a':
                addfile = true;
                break;
            case 'A':
                extract = true;
                bootloader = true;
                addfile = true;
                break;
            default:
                usage(argv[0]);
                break;
        }
    }

    if ((argc - optind) != 1 || (!info && !extract && !bootloader && !addfile))
    {
        usage(argv[0]);
        return -1;
    }

    banner();

    rkw_info = rkw_slurp(argv[optind]);

    if (rkw_info)
    {
        if (info)
        {
            rkrs_list_named_items(rkw_info);
            rkst_list_named_items(rkw_info);
        }

        if (extract)
            unpack_rkst(rkw_info, prefix);

        if (bootloader)
            unpack_bootloader(rkw_info, prefix);

        if (addfile)
            unpack_addfile(rkw_info, prefix);

        rkw_free(rkw_info);
        return 0;
    }

    return -1;
}
Q: Android ScrollView: how to keep a component on screen when it scrolls to the top of the screen? I have a ScrollView layout like this, for example:

<ScrollView>
  <Component1>
  <Component2>
  <Component3>
  <Component4>
  ...
</ScrollView>

Inside the ScrollView I have some components, each of which can be anything like LinearLayout, RelativeLayout, TableRow, ... Now what I want is: as I scroll the view, when <Component2> reaches the top of the screen, it will be kept on the screen while <Component3>, <Component4>... keep scrolling till the end of the page. When I scroll down, <Component2> will only be scrolled once all of <Component3> has become visible. I saw this in an iPhone app and wondered how to achieve it on Android. I don't know if I am describing it clearly enough, but it is the same as in this video http://www.youtube.com/watch?v=jXCrM1rzLZY&feature=player_detailpage#t=71s When the tabs are scrolled up to the top, they stay there. And when scrolled down, as at 1:36 of that video, they stay there until all the content below has become visible on the screen. Does anybody know how to do this on Android? A: I guess you could create a hidden copy of Component2 in a RelativeLayout that is made visible with setVisibility(View.VISIBLE) when the coordinates of Component2 are lower (Android draws from the top) than the top of the ScrollView. When the coordinates of Component2 are higher than the top of the ScrollView (.getTop()), hide the copy again with Component2Copy.setVisibility(View.INVISIBLE). You may also want to disable them when changing their visibility. Good luck with this.
Show HN: Prototyp – FramerJS based free prototyping - chinchang
http://prototyp.in/
======
nstart
On the output side for the demo, this is all I see

    <body><script src="../framer.js"></script><script>
    var imageLayer;
    imageLayer = new Layer({
        x: 0,
        y: 0,
        width: 128,
        height: 128,
        backgroundColor: 'lightgreen'
    });
    imageLayer.center();
    imageLayer.states.add({
        second: { scaleX: 1.4, scaleY: 0.6 },
        third: { y: 430, scaleX: 0.4, scaleY: 2 },
        fourth: { y: 200, scaleY: 1.2 }
    });
    imageLayer.states.animationOptions = {
        curve: 'spring(500,20,0)'
    };
    imageLayer.on(Events.Click, function() {
        imageLayer.states.next();
    });
    </script></body>

that can't be right can it?

~~~
chinchang
Can you please let me know your browser version?

------
chinchang
Unlike FramerStudio, Prototyp works on pure JavaScript.
Q: Why Gibbs energy for nucleation theory? In nucleation theory, the free energy is given by (https://en.wikipedia.org/wiki/Classical_nucleation_theory but in many other places also): $$\Delta G=\frac{4\pi }{3}r^3\Delta g +4\pi r^2\sigma,$$ with $\Delta g$ the difference in free energy between the two phases (liquid and gas for instance), and $\sigma$ the surface tension. $r$ is the radius of the "nucleus" of the new phase. My question is why don't we take the pressure into account? Because we are not at constant pressure, since Laplace law says for the difference of pressure inside and out of the "nucleus": $\Delta P= \frac{2\sigma}{r}$ ? So why do we use a Gibbs energy with no pressure? A: The formula you quote gives the Gibbs energy of a system [single liquid droplet plus vapor of same chemical species]. You say "we are not at constant pressure". This is true in the sense that pressure of liquid inside the droplet is different from pressure of vapor just above the droplet. The formula you quote actually takes into account these differences of pressure, through $\Delta g$. What the formula gives is not the basic Gibbs energy as defined in textbooks ($U-TS+PV$), but something related but more technical: it is the ordinary Gibbs energy of the system minus the ordinary Gibbs energy the system would have if it was all pure vapor at the same $T,P$, minus some constant of no consequence. The ordinary Gibbs energy of liquid can be expressed as $$ G_L = \mu_{liquid} N_{liquid} = g_{liquid}V_{liquid} $$ and similarly for the vapor. Thus, we can regard the Gibbs energy as having a density, either per molecule ($\mu$), or per volume ($g$). There is also Gibbs energy per unit boundary surface, so we have a separate contribution $$ G_{s} = \sigma S . $$ Chemical potential $\mu$ is a function of $T,P$, but we have a different function for liquid ($\mu_L(T,P)$), and different function for vapor($\mu_V(T,P)$). 
And we have the same temperature $T$ everywhere (by assumption), but different pressures for the liquid, $P_L$, and the vapor, $P_V$ (due to the curved surface of the droplet). Now, with this, we can understand how the different pressures enter the expression for $\Delta G$: $$ \Delta G(T,P_V) = g_L(T,P_L) V_L + g_V(T,P_V)V_V + \sigma(T) S $$ Because $P_V$ is easy to measure and control directly (by putting in more vapor), it is taken as "the pressure" that $\Delta G$ is regarded as a function of, but this is just an arbitrary choice; we could use $P_L$ instead. The difference in pressures enters non-trivially on the right-hand side: the Gibbs energy densities $g_L,g_V$ are to be taken at different pressures. We can assume that all evaporation/condensation happens inside a big container of constant volume $V$, so $V_V = V - V_L$, and rewrite this as $$ \Delta G(T,P_V) = g_L(T,P_L) V_L + g_V(T,P_V)V - g_V(T,P_V)V_L + \sigma(T) S $$ This is almost the original formula, but the second term is getting in the way. However, provided the condensation or evaporation happens while $T$ and $P_V$ do not change (the container is big, so it acts as a fixed reservoir), this term does not change and we can drop it and define $\Delta G'$: $$ \Delta G'(T,P_V) = \left( g_L(T,P_L) - g_V(T,P_V)\right) V_L + \sigma(T) S. $$ So we see that the original $\Delta g$ in the equation actually takes the different pressures into account. It may be convenient to express it as a function of vapor pressure only (and $T$ and radius $r$): $$ \Delta g(T,P_V) = g_L(T,P_V+\frac{2\sigma}{r}) - g_V(T,P_V). $$
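A standard corollary of the droplet free energy quoted in the question is worth spelling out: setting $\mathrm{d}\Delta G/\mathrm{d}r = 0$, with $\Delta g < 0$ when the new phase is favored, gives the critical radius and barrier height of classical nucleation theory,

$$ \frac{\mathrm{d}\Delta G}{\mathrm{d}r} = 4\pi r^2 \Delta g + 8\pi r \sigma = 0 \quad\Longrightarrow\quad r^* = -\frac{2\sigma}{\Delta g}, \qquad \Delta G^* = \Delta G(r^*) = \frac{16\pi \sigma^3}{3\,\Delta g^2}. $$

Droplets with $r < r^*$ shrink and those with $r > r^*$ grow, since $\Delta G$ attains its maximum at $r^*$.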
Q: vtkArrayCalculator - Segmentation fault when accessing the output I want to use vtkArrayCalculator, for use in a Paraview filter, as described here: ArrayCalculatorExample vtkSmartPointer<vtkArrayCalculator> calculator = vtkSmartPointer<vtkArrayCalculator>::New(); calculator->SetInputData(input); calculator->AddScalarArrayName("u"); calculator->SetFunction("u+1"); calculator->SetResultArrayName("wind_velocity"); calculator->Update(); vtkSmartPointer<vtkFloatArray> windVelocity = vtkFloatArray::SafeDownCast(calculator->GetStructuredGridOutput()->GetPointData()->GetArray("wind_velocity")); Now when I want to access the data with (or similar commands) windVelocity->GetValue(0); I get a "Segmentation fault (core dumped)". "input" is a vtkStructuredGrid and "u" is a vtkDataArray (that can be downcast to a vtkFloatArray without problem). "u" can be accessed by input->GetPointData()->GetArray("u"); Every hint to what I am doing wrong is greatly appreciated! Edit: I already tried the following vtkSmartPointer<vtkFloatArray> windVelocity = vtkSmartPointer<vtkFloatArray>::New(); windVelocity->DeepCopy(vtkFloatArray::SafeDownCast(calculator->GetStructuredGridOutput()->GetPointData()->GetArray("wind_velocity"))); A: I'd suggest to split up the long chain of vtkFloatArray::SafeDownCast(calculator->GetStructuredGridOutput()->GetPointData()->GetArray("wind_velocity")) and use a debugger to see what the intermediate results are. When reading the definition of GetArray, it states that under various conditions the function might return NULL. Check the return value of GetArray; it is very likely that you do not get back what you expect. vtkDataArray* vtkFieldData::GetArray ( const char * arrayName ) inline Not recommended for use. Use GetAbstractArray(const char *arrayName) instead. Return the array with the name given. Returns NULL if array not found. A NULL is also returned if the array with the given name is not a vtkDataArray. 
To access vtkStringArray, vtkUnicodeStringArray, or vtkVariantArray, use GetAbstractArray(const char *arrayName).
Q: Mathematica numerical "error" for simple multiplication

In[71]:= 0.6*0.8048780487804877`
Out[71]= 0.482927

In[72]:= 0.3*0.8414634146341463`
Out[72]= 0.252439

In[74]:= (0.6*0.8048780487804877`) + 0.3*0.8414634146341463`
Out[74]= 0.735366

In[75]:= 0.6*0.8048780487804877`+0.3*0.8414634146341463`
Out[75]= 0.406365

Why do the brackets in In[74] and In[75] have an effect? I think there should be no difference. I use Mathematica 12.1. Thank you for your help.

A: 0.8048780487804877`+0.3 is an arbitrary-precision number with precision 0.3. With the parentheses, the 0.3 does not specify the precision, but stands as a number. The second line is equivalent to

0.6 * (0.8048780487804877`+0.3) * 0.8414634146341463`

A: Because In[75] is completely different from what you think.

0.6*0.8048780487804877`+0.3*0.8414634146341463`
(* 0.406365 *)

0.6*0.8048780487804877` + 0.3*0.8414634146341463`
(* 0.735366 *)

Notice the space in the second example: in your case, +0.3 is a precision specification attached to the backtick `. Easier example:

(* in this case it's a specification of precision *)
1.2`+30
(* 1.20000000000000000000000000000 *)

(* in this case it's 1.2 with $MachinePrecision, plus 30 *)
1.2` + 30
(* 31.2 *)

A: It's not the parentheses; it's the missing whitespace! There is a tiny difference between the meanings of

0.8048780487804877`+0.3
0.*10^-1

and
{ "pile_set_name": "StackExchange" }
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build arm,freebsd

package unix

import (
	"syscall"
	"unsafe"
)

func setTimespec(sec, nsec int64) Timespec {
	return Timespec{Sec: sec, Nsec: int32(nsec)}
}

func setTimeval(sec, usec int64) Timeval {
	return Timeval{Sec: sec, Usec: int32(usec)}
}

func SetKevent(k *Kevent_t, fd, mode, flags int) {
	k.Ident = uint32(fd)
	k.Filter = int16(mode)
	k.Flags = uint16(flags)
}

func (iov *Iovec) SetLen(length int) {
	iov.Len = uint32(length)
}

func (msghdr *Msghdr) SetControllen(length int) {
	msghdr.Controllen = uint32(length)
}

func (cmsg *Cmsghdr) SetLen(length int) {
	cmsg.Len = uint32(length)
}

func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) {
	var writtenOut uint64 = 0
	_, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr((*offset)>>32), uintptr(count), 0, uintptr(unsafe.Pointer(&writtenOut)), 0, 0)

	written = int(writtenOut)

	if e1 != 0 {
		err = e1
	}
	return
}

func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno)
Guitarists will have different playing styles and musical inclinations, but they all agree on one thing: Their instruments must be in tune the whole time, without exceptions. Peavey takes care of that part with the AT-200, loaded with the Antares Auto-Tune for Guitar. Basically, the system tunes your guitar automatically and instantly, with a touch of a button and a strum of all six strings. It’s smart enough to tune your axe to dropped D for those power chords, or to baritone if you wanna go even deeper. Available in red, or if you’re feeling a little more ‘metal’, in black. $500+.
package me.devsaki.hentoid.activities.bundles;

import android.os.Bundle;

import javax.annotation.Nonnull;

/**
 * Helper class to transfer data from any Activity to {@link me.devsaki.hentoid.activities.PrefsActivity}
 * through a Bundle
 * <p>
 * Use Builder class to set data; use Parser class to get data
 */
public class PrefsActivityBundle {

    private static final String KEY_IS_VIEWER_PREFS = "isViewer";
    private static final String KEY_IS_DOWNLOADER_PREFS = "isDownloader";

    private PrefsActivityBundle() {
        throw new UnsupportedOperationException();
    }

    public static final class Builder {

        private final Bundle bundle = new Bundle();

        public void setIsViewerPrefs(boolean isViewerPrefs) {
            bundle.putBoolean(KEY_IS_VIEWER_PREFS, isViewerPrefs);
        }

        public void setIsDownloaderPrefs(boolean isDownloaderPrefs) {
            bundle.putBoolean(KEY_IS_DOWNLOADER_PREFS, isDownloaderPrefs);
        }

        public Bundle getBundle() {
            return bundle;
        }
    }

    public static final class Parser {

        private final Bundle bundle;

        public Parser(@Nonnull Bundle bundle) {
            this.bundle = bundle;
        }

        public boolean isViewerPrefs() {
            return bundle.getBoolean(KEY_IS_VIEWER_PREFS, false);
        }

        public boolean isDownloaderPrefs() {
            return bundle.getBoolean(KEY_IS_DOWNLOADER_PREFS, false);
        }
    }
}
Q: Inspect dump files from UWP app First I enabled saving of dump files on a Windows 10 Mobile phone: Settings > Update & Security > For developers > Save this many crash dumps: 3 Then I debugged an app which threw an exception. I continued debugging after it stopped. After disconnecting and reconnecting the mobile phone, I was able to access the dump file stored under the Windows phone\Phone\Documents\Debug directory. The file is called FPCL.WIndows - a736c773-c105-4b30-a799-4bf317872f5e with exception C000027B on 5-03-2016 12.11.dmp and is about 140 MB! I copied the file to the bin directory of my UWP app. Afterwards I opened it as a file in Visual Studio 2015 (in the same project). Now I can see the Dump Summary and I have the following buttons:

Debug with Managed Only
Debug with Mixed
Debug with Native Only
Set symbol paths
Copy all to clipboard

If I run Debug with Managed Only I get A fatal exception was caught by the runtime. See $stowedexception in the Watch window to view the original exception information. and on clicking Break I get No compatible code running. The selected debug engine does not support any code executing on the current thread (e.g. only native runtime code is executing). In the Watch 1 window I see the following Name: {CLR}$stowedexception Value: {"The method or operation is not implemented."} Type: System.NotImplementedException This should be the exception I have thrown in my app. When I open this node and look under StackTrace I can get a line number. On pressing Continue I get The debugger cannot continue running the process. This operation is not supported when debugging dump files. So I can only stop it. If I run Debug with Mixed I again get A fatal exception was caught by the runtime. See $stowedexception in the Watch window to view the original exception information.
and on clicking Break I get kernelbase.pdb not loaded kernelbase.pdb contains the debug information required to find the source for the module KERNELBASE.dll Module Information: Version: 10.0.10586.218 (th2_release.160401-1800) Original Location: KERNELBASE.dll Try one of the following options: Change existing PDB and binary search paths and retry: Microsoft Symbol Servers Here I can either press Load or New. So the kernelbase.pdb isn't found under the given location. Should it exist? Where should I find it? In the Watch 1 window I see the same as above and I can only stop it. If I run Debug with Native Only I get Unhandled exception at 0x76ECDF95 (combase.dll) in FPCL.WIndows - f736c883-f105-4d30-a719-4bf328872f5e with exception C000027B on 5-03-2016 12.11.dmp: 0xC000027B: Anwendungsinterne Ausnahme [application-internal exception] (parameters: 0x075C6838, 0x00000002). and on clicking Break I get the same missing kernelbase error as above, but here in the Watch 1 window the Value is Unable to evaluate the expression. So I can only stop it. According to this post I should be able to inspect the source code and find the cause. But how is such a UWP dump file inspected correctly? A: You mention [...] 0xC000027B [...] [...] $stowedexception [...] which are both indicators that there is a Stowed Exception inside the dump. To analyze such exceptions, first watch Channel 9 Defrag Tools, episode 136 where Andrew Richards explains and then analyzes them (at 3:28). Then download the PDE extension from the Defrag Tools OneDrive and analyze your dump in WinDbg instead of Visual Studio.
Regarding the button to press in Visual Studio, choose "Managed only" when debugging the debug build, because your app will run on CoreCLR and choose "Native Only" when debugging the release build, because your app will use .NET native runtime support. (This applies if you didn't change the default settings; otherwise choose according to your compilation settings)
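Condensed into commands, the answer's WinDbg workflow might look like the following. The .load path is a placeholder, not a real location; only .symfix and .reload are taken directly from the answer:

```
$$ Point the symbol path at the Microsoft public symbol server and
$$ re-resolve modules, so kernelbase.pdb gets downloaded:
.symfix; .reload

$$ Load the PDE extension downloaded from the Defrag Tools OneDrive
$$ (path is hypothetical -- use wherever you saved PDE.dll):
.load C:\Debuggers\winext\PDE.dll
```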
Sam Wooding

Sam Wooding (17 June 1895 – 1 August 1985) was an American jazz pianist, arranger and bandleader who lived and performed in Europe and the United States.

Born in Philadelphia, Pennsylvania, Wooding was a member of Johnny Dunn's Original Jazz Hounds between 1921 and 1923, one of several Dunn-led lineups that recorded in New York around that time for the Columbia label. He led several big bands in the United States and abroad. His orchestra was at Harlem's Smalls' Paradise in 1925 when a Russian impresario booked it as the pit band for a show titled The Chocolate Kiddies, scheduled to open in Berlin later that year, featuring music by Duke Ellington and starring the performers Lottie Gee and Adelaide Hall. While in Berlin, the band, featuring such musicians as Doc Cheatham, Willie Lewis, Tommy Ladnier, Gene Sedric, and Herb Flemming, recorded several selections for the Vox label. In 1929, with slightly different personnel, Wooding's orchestra made more recordings in Barcelona and Paris for the Parlophone and Pathé labels.

Wooding returned to America in 1934. On 14 February 1934, Wooding and his orchestra were featured at the Apollo Theater in Harlem in a Clarence Robinson production titled Chocolate Soldiers, starring the Broadway star Adelaide Hall. The show ran for a limited engagement, was highly praised by the press, and helped establish the Apollo as Harlem's premier theater. It was the first major production staged at the newly renovated theater. Wooding then returned to Europe, performing on the Continent, in Russia and in England throughout most of the 1930s.

Wooding's long stays overseas made him virtually unknown at home, but Europeans were among the staunchest jazz fans anywhere, and they loved what the band had to offer. "We found it hard to believe, but the Europeans treated us with as much respect as they did their own symphonic orchestras," he recalled in a 1978 interview. "They loved our music, but they didn't quite understand it, so I made it a load easier for them by incorporating such melodies as 'Du holder Abendstern' from Tannhäuser - syncopated, of course. They called it blasphemy, but they couldn't get enough of it. That would never have happened back here in the States. Here they looked on jazz as something that belonged in the gin mills and sporting houses, and if someone had suggested booking a blues singer like Bessie Smith, or even a white girl like Nora Bayes, on the same bill as Ernestine Schumann-Heink, it would have been regarded as a joke in the poorest of taste."

Returning home in the late 1930s, when World War II seemed a certainty, Wooding began formal studies of music, attained a degree, and began teaching full-time, counting among his students the trumpeter Clifford Brown. He also led and toured with the Southland Spiritual Choir. In the early 1970s, Wooding formed another big band and took it to Switzerland for a successful concert, but this venture was short-lived.

References

Category:American jazz pianists
Category:American male pianists
Category:Continental jazz pianists
Category:Dixieland pianists
Category:American jazz bandleaders
Category:1895 births
Category:1985 deaths
Category:Musicians from Philadelphia
Category:20th-century American conductors (music)
Category:20th-century American pianists
Category:Jazz musicians from Pennsylvania
Category:20th-century American male musicians
Category:Male jazz musicians
Our core partners come from a range of backgrounds - finance, law, engineering, IT - and we have participated in a number of cross-border initiatives. We draw on a broad network of contacts throughout Asia Pacific, North America, and the Middle East. Our projects have ranged from waste management through commercial funding to retailing of end-consumer products.

Our Experience

Libertare is an experienced and diverse group of professionals. We provide strategic services to clients, with one key difference - we execute through to implementation. We do more than just consult; we have a vested interest in our projects through strategic partnerships. Associates within the group are selected based on specific project need, thus keeping the bench strength within the group current and relevant.
I'd be ecstatic if someone wrote an interactive turn report browser/post processor (a PD one that I could give away to new Olympia players). Olympia outputs an intermediate form of the turn report, which a little C program turns into the reports which are mailed to players. If someone had a better client, I could mail them the raw report instead of the formatted one, and they could process it with whatever client they liked.

I expect that a turn browser would require some changes to my intermediate output format. I'm willing to make whatever changes are necessary. Let me know what you need.

Report formatter and a sample raw turn report:

# This is a shell archive.  Remove anything before this line,
# then unpack it by saving it in a file and typing "sh file".
#
# This archive contains:
#
# rep.c 100
#
US Military Dietary Protein Recommendations: A Simple But Often Confused Topic. Military recommendations for dietary protein are based on the recommended dietary allowance (RDA) of 0.8 g of protein per kilogram of body mass (BM) established by the Food and Nutrition Board, Institute of Medicine (IOM) of the National Academies. The RDA is likely adequate for most military personnel, particularly when activity levels are low and energy intake is sufficient to maintain a healthy body weight. However, military recommendations account for periods of increased metabolic demand during training and real-world operations, especially those that produce an energy deficit. Under those conditions, protein requirements are higher (1.5-2.0 g/kg BM) in an attempt to attenuate the unavoidable loss of muscle mass that occurs during prolonged or repeated exposure to energy deficits. Whole foods are recommended as the primary method to consume more protein, although there are likely operational scenarios where whole foods are not available and consuming supplemental protein at effective, not excessive, doses (20-25 g or 0.25-0.3 g/kg BM per meal) is recommended. Despite these evidence-based, condition-specific recommendations, the necessity of protein supplements and the requirements and rationale for consuming higher-protein diets are often misunderstood, resulting in an overconsumption of dietary protein and unsubstantiated health-related concerns. This review will provide the basis of the US military dietary protein requirements and highlight common misconceptions associated with the amount and safety of protein in military diets.
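The dosing arithmetic in the abstract is simple enough to sketch in code. A hypothetical helper follows: the class and method names and the 80 kg example are mine, while the constants (0.8, 1.5–2.0, and 0.25–0.3 g/kg) are the figures quoted above.

```java
// Hypothetical helper; the constants are the figures quoted in the abstract.
public class ProteinTargets {
    static final double RDA_G_PER_KG = 0.8;          // IOM recommended dietary allowance
    static final double DEFICIT_LOW_G_PER_KG = 1.5;  // lower bound during energy deficit
    static final double DEFICIT_HIGH_G_PER_KG = 2.0; // upper bound during energy deficit
    static final double MEAL_LOW_G_PER_KG = 0.25;    // effective per-meal dose, lower bound
    static final double MEAL_HIGH_G_PER_KG = 0.3;    // effective per-meal dose, upper bound

    // Daily protein target (grams) at the baseline RDA.
    public static double rdaGrams(double bodyMassKg) {
        return RDA_G_PER_KG * bodyMassKg;
    }

    // Daily protein range (grams) during a prolonged energy deficit.
    public static double[] deficitGrams(double bodyMassKg) {
        return new double[] { DEFICIT_LOW_G_PER_KG * bodyMassKg,
                              DEFICIT_HIGH_G_PER_KG * bodyMassKg };
    }

    // Per-meal protein range (grams) for supplemental dosing.
    public static double[] perMealGrams(double bodyMassKg) {
        return new double[] { MEAL_LOW_G_PER_KG * bodyMassKg,
                              MEAL_HIGH_G_PER_KG * bodyMassKg };
    }

    public static void main(String[] args) {
        double kg = 80.0; // example body mass, not from the abstract
        System.out.printf("RDA:            %.0f g/day%n", rdaGrams(kg));
        System.out.printf("Energy deficit: %.0f-%.0f g/day%n",
                deficitGrams(kg)[0], deficitGrams(kg)[1]);
        System.out.printf("Per meal:       %.0f-%.0f g%n",
                perMealGrams(kg)[0], perMealGrams(kg)[1]);
    }
}
```

For an 80 kg service member this yields 64 g/day at the RDA, 120–160 g/day under an energy deficit, and 20–24 g per meal, matching the 20–25 g absolute range given above.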
This invention relates to a system and apparatus for cultivating and harvesting shell fish, such as oysters. Most oysters are recovered commercially from natural oyster reefs by relatively crude harvesting procedures that usually include the hand picking of the oysters from the reef with the use of hand manipulatable tongs but sometimes without even the aid of such equipment. Such approaches to harvesting the oysters are expensive and require considerable labor that is both grueling and fatiguing to the worker. Apart from the harvesting problems which confront the shell fish industry, many coastal waters have become polluted and this has given rise to laws and regulations which preclude the commercial recovery of the oysters from the reef habitats located in the polluted areas. All of this is leading the shell fish industry toward the adoption of commercial rather than natural cultivation procedures. In the natural development of the mature oyster, the water borne spat seeks out an appropriate base to adhere to and thereafter develop into the mature oyster. An oyster shell per se is especially attractive as a base for such spats and this of course accounts for the development of the natural reefs. Commercially, various materials may be used as the cultch or base for attracting the spats, although oyster shells are frequently preferred. In some commercial practices, the cultch material is simply dispersed along the water bottom in the selected water area. This is not the most favorable approach to commercial cultivation of oysters however, since shifting bottom sands, sediment and mud frequently cover the cultch and render it useless for its intended purposes. Apart from this, the bottom locations for maturity of the oysters provide little opportunity to change or improve on the current harvesting procedures. 
The oyster has many natural enemies, among which can be mentioned the conch, leeches, boring clams, fungus, and encrusting organisms such as amorphous sponges, barnacles and mussels. Apart from these enemies, there are parasites and diseases which, experience has shown, can be controlled by periodic exposure of the oyster to the air and sun, as happens naturally, for example, in some tidal waters. To avoid the shifting sands and to provide improved harvesting procedures, there have been those who advocate suspending the cultch material from floating devices that are anchored at preselected growth areas. This approach has certain advantages, but it precludes the exposure of the oyster to the sun and air and leaves room for improving on the retrieval and harvesting procedures.
Q: "HTTP Status 405 – HTTP method GET is not supported by this URL" error when calling servlet from JSP

I have the following code, but when I try to access the /data-upload URL I get the error "HTTP method GET is not supported by this URL".

Java servlet code:

package xyz.controllers;

import org.apache.http.HttpResponse;
import org.apache.http.NameValuePair;
import org.apache.http.client.HttpClient;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.entity.StringEntity;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

import javax.servlet.http.HttpServlet;
import javax.servlet.annotation.WebServlet;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;

@WebServlet("/data-upload")
public class GetLocalAreaIds extends HttpServlet {

    // HTTP POST request
    public void doPost(HttpServletRequest request, HttpResponse response)
            throws ServletException, IOException {
        System.out.println("#VH in doPost method ");
        String url = "http://xyz.xyz/search";
        HttpClient client = new DefaultHttpClient();
        HttpPost post = new HttpPost(url);

        // add header
        post.setHeader("Content-Type", "application/xml");

        String elementLocalNameType = request.getParameter("elementLocalNameType");
        System.out.println("#VH elementLocalNameType: " + elementLocalNameType);
        String localAreaName = request.getParameter("localAreaName");
        System.out.println("#VH localAreaName: " + localAreaName);

        StringEntity params = null;
        try {
            params = getStringEntityParams(elementLocalNameType, localAreaName);
            System.out.println("#VH params: " + params);
        } catch (Exception e) {
            System.out.println("Error while getting elementLocalNameType");
        }
        post.setEntity(params);

        response = client.execute(post);
        System.out.println("\nSending 'POST' request to URL : " + url);
        System.out.println("Post parameters : " + post.getEntity());
        System.out.println("Response Code : " + response.getStatusLine().getStatusCode());

        BufferedReader rd = new BufferedReader(
                new InputStreamReader(response.getEntity().getContent()));
        StringBuffer result = new StringBuffer();
        String line = "";
        while ((line = rd.readLine()) != null) {
            result.append(line);
        }
        System.out.println("#VH result.toString(): " + result.toString());
    }

    public void doGet(HttpServletRequest request, HttpResponse response)
            throws ServletException, IOException {
        doPost(request, response);
    }

    private StringEntity getStringEntityParams(String elementLocalNameType, String localAreaName)
            throws Exception {
        StringEntity params = new StringEntity(
            "<request><workflow>get-element-values-workflow</workflow><get-element-values>"
            + "<element-localname>" + elementLocalNameType + "</element-localname>"
            + "<starts-with>" + localAreaName + "</starts-with>"
            + "<is-csv>True</is-csv></get-element-values></request>");
        return params;
    }
}

JSP code:

<%@ taglib prefix="tiles" uri="http://tiles.apache.org/tags-tiles" %>
<%@ taglib prefix="user" uri="/WEB-INF/tlds/user.tld" %>
<%@ page import="org.apache.http.client.HttpClient" %>
<%@ page import="org.apache.http.client.methods.HttpGet" %>
<%@ page import="org.apache.http.impl.client.DefaultHttpClient" %>
<%@ page import="GetLocalAreaIds" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>

<tiles:insertDefinition name="layout">
  <tiles:putAttribute name="title">Title</tiles:putAttribute>
  <tiles:putAttribute name="main">
    <main id="content" role="main" class="group category-page">
      <header class="page-header group">
        <div class="full-width">
          <h1>Data Upload</h1>
        </div>
      </header>
      <div class="browse-container full-width group">
        <div id="error">
          <p style="color: #ff0000">${error}</p>
        </div>
        <user:current-local-area msg="<b>LA:</b> !{#localAreaName}"/>
        <c:set var='la_id' scope='session' value='<user:current-local-area msg="!{#localAreaName}">'/>
        <c:set var="la_id"><user:current-local-area msg="!{#localAreaName}"/></c:set>
        <jsp:useBean id="GetLocalAreaIds" class="GetLocalAreaIds"/>
        <form action="${pageContext.request.contextPath}/data-upload" method="POST">
          <span>Do you want to download the LA IDs for families or individuals?</span><br />
          <input type="radio" name="elementLocalNameType" value="la-family-id"> Family IDs<br />
          <input type="radio" name="elementLocalNameType" value="la-individual-id"> Individual IDs<br />
          <input type="hidden" name="localAreaName" value="${la_id}">
          <input class="button" type="submit" value="Submit">
        </form>
      </div>
    </main>
  </tiles:putAttribute>
</tiles:insertDefinition>

I have read some of the other posts which deal with the same issue, but their solutions didn't work for me. When I try adding @Override to the doPost and doGet methods, I get an error saying "method does not override or implement a method from a supertype" even though I'm extending HttpServlet.

A: The issue is that your doGet and doPost do not override the methods inherited from HttpServlet. Change

doPost(HttpServletRequest request, HttpResponse response)

to

doPost(HttpServletRequest request, HttpServletResponse response)

and

doGet(HttpServletRequest request, HttpResponse response)

to

doGet(HttpServletRequest request, HttpServletResponse response)

EDIT: The next issue is response = client.execute(post); which, as you stated, gives "incompatible types: org.apache.http.HttpResponse cannot be converted to javax.servlet.http.HttpServletResponse". Change it to

org.apache.http.HttpResponse my_response = client.execute(post);
...
System.out.println("Response Code : " + my_response.getStatusLine().getStatusCode());
BufferedReader rd = new BufferedReader(
        new InputStreamReader(my_response.getEntity().getContent()));

Also note that you don't write anything to the user's response out, so if you get nothing in the calling client (e.g. browser), that's normal. Alternatively, write everything you currently send to stdout to the response out instead.
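The underlying Java rule is worth seeing in isolation: a method whose parameter types differ from the superclass method's is an overload, not an override, which is exactly why @Override was rejected and why the container never dispatched to the servlet's doPost. A minimal, servlet-free sketch, where all class names are hypothetical stand-ins (BaseServlet for HttpServlet, ServletResponseLike for HttpServletResponse, OtherResponse for org.apache.http.HttpResponse):

```java
// Stand-in types -- none of these are real servlet API classes.
class ServletResponseLike {}
class OtherResponse {}

class BaseServlet {
    protected String doPost(String req, ServletResponseLike resp) { return "base"; }
}

class BrokenServlet extends BaseServlet {
    // Adding @Override here fails to compile ("method does not override ..."):
    // the second parameter type differs, so this merely overloads doPost.
    public String doPost(String req, OtherResponse resp) { return "broken"; }
}

class FixedServlet extends BaseServlet {
    @Override // compiles: the signature matches the superclass method exactly
    protected String doPost(String req, ServletResponseLike resp) { return "fixed"; }
}

public class OverrideDemo {
    // Dispatch through the base type, the way a servlet container would.
    public static String dispatch(BaseServlet servlet) {
        return servlet.doPost("request", new ServletResponseLike());
    }

    public static void main(String[] args) {
        // The broken subclass is never reached polymorphically -- a real container
        // would likewise fall back to HttpServlet's defaults (hence HTTP 405).
        System.out.println(dispatch(new BrokenServlet())); // prints "base"
        System.out.println(dispatch(new FixedServlet()));  // prints "fixed"
    }
}
```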
-----Original Message-----
From: Gibner, Stinson
Sent: Monday, May 21, 2001 3:23 PM
To: Borgman, Laine; Kaminski, Vince J
Subject: D-G Energy Software Procurement

Laine,

Enclosed is a revised copy of the Software Licence Agreement with D-G Energy. The earlier version had prices and conditions of use which differed from what had been discussed over the telephone. This version brings the terms into line with what has been agreed upon, and has been given tentative approval by the president of D-G, whom I met with last week. I have highlighted the sections which have changed from the version that you sent out for signature last November. I assume that the revised document will have to be reviewed by legal again. Let me know if I can be of any assistance in this process.

Regards,

Stinson Gibner
Brain Day Exhibits, activities and real brains bring neuroscience to life for adults and kids of all ages. As part of NYC Brain Awareness Week, scientists and NYSCI Explainers describe the brain’s different parts, demonstrate how it allows us to sense our environment and control our muscles, and discover similarities between human and animal cognition.
Olinda Elementary

Olinda Elementary is an elementary-level school located in Brea, California.

History

Olinda Elementary was first built in 1898 in what is now Carbon Canyon Regional Park, one year after the village of Olinda was founded.

Relocation

The school was moved deeper into Carbon Canyon during the mid-1960s, in what is now Olinda Village. In 2012, the Olinda Village location was scheduled for demolition and a new site was constructed on Birch Street next to the City of Brea's Sports Park. This sparked concern and anger among local residents, as the school had also served as a park but was made inaccessible after being sold.

Awards

In 2006, the school was recognized as a California Distinguished School, and in 2007 it was recognized as a Blue Ribbon School. Also, in 2010, the Orange County Register placed Olinda Elementary at number 10 on their "Best Schools" list.

References

Category:Brea, California
Category:Public elementary schools in California
Q: Does Netty Expose the Number of Connections in the Backlog of the ParentGroup?

By setting a ChannelOption I can specify the backlog size of the queue handling incoming connections:

.option(ChannelOption.SO_BACKLOG, 100)

I want to instrument my code so that I can measure the occupancy of the queue. Does Netty provide any means of exposing the current state of the backlog?

A: No, it does not, as the backlog is handled entirely within the kernel of your OS.
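The point generalizes beyond Netty: the backlog value is handed to the kernel's listen(2) call at bind time, and the pending-connection queue lives in the kernel, not in the JVM. A small sketch with plain java.net sockets (the demo backlog value of 2 is arbitrary; Netty's ChannelOption.SO_BACKLOG ultimately feeds the same kernel parameter):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BacklogDemo {
    // Bind with a backlog of 2, then complete one TCP handshake without ever
    // calling accept(): the kernel, not the JVM, queues the pending connection.
    public static boolean connectWithoutAccept() {
        try (ServerSocket server = new ServerSocket(0, 2, InetAddress.getLoopbackAddress());
             Socket client = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort())) {
            // There is no portable Java (or Netty) API that reports how many
            // connections are currently sitting in that kernel-side queue.
            return client.isConnected();
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("handshake completed without accept(): " + connectWithoutAccept());
    }
}
```

Because the queue is kernel state, instrumenting it means OS-level tooling (e.g. reading socket statistics from the operating system) rather than anything exposed through the channel API.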
Q: P-adic valuation for ideals

Let $A$ be a Dedekind domain and $\mathfrak{a},\mathfrak{b}$ be fractional ideals of $A$. Then we know that $\mathfrak{a}$ and $\mathfrak{b}$ can be decomposed into $\mathfrak{a}=\prod\limits_{\mathfrak{p}}\mathfrak{p}^{v_{\mathfrak{p}}(\mathfrak{a})}$ and $\mathfrak{b}=\prod\limits_{\mathfrak{p}}\mathfrak{p}^{v_{\mathfrak{p}}(\mathfrak{b})}$, where the $\mathfrak{p}$ are prime ideals of $A$ and the $v_{\mathfrak{p}}(\mathfrak{a}),v_{\mathfrak{p}}(\mathfrak{b})$ are integers, all but finitely many of which are zero. Prove that

1) $v_{\mathfrak{p}}(\mathfrak{a}\mathfrak{b})=v_{\mathfrak{p}}(\mathfrak{a})+v_{\mathfrak{p}}(\mathfrak{b})$

2) $v_{\mathfrak{p}}(\mathfrak{a}+\mathfrak{b})=\min\lbrace v_{\mathfrak{p}}(\mathfrak{a}),v_{\mathfrak{p}}(\mathfrak{b}) \rbrace$

3) $v_{\mathfrak{p}}(\mathfrak{a}\cap\mathfrak{b})=\max\lbrace v_{\mathfrak{p}}(\mathfrak{a}),v_{\mathfrak{p}}(\mathfrak{b}) \rbrace$

I can see that this $p$-adic valuation for ideals is supposed to be a generalization of the similar object for integers and prime numbers. But multiplication of ideals being slightly more complicated, I haven't been able to prove the first one. I think the last two won't look too hard once I become comfortable with this object. We have $\mathfrak{a}\mathfrak{b}=\left\lbrace \sum\limits_{i=1}^na_ib_i \mid a_i \in \mathfrak{a},b_i \in \mathfrak{b} \right\rbrace$, but it's not clear to me how I can use this to say something about the $p$-adic valuations of the product of these two ideals.

EDIT: my attempt at 2:

Without loss of generality, say $v_{\mathfrak{p}}(\mathfrak{a}) \leq v_{\mathfrak{p}}(\mathfrak{b})$ $(*)$. I have already proven that $\mathfrak{b} \subseteq \mathfrak{a} \iff v_{\mathfrak{p}}(\mathfrak{a}) \leq v_{\mathfrak{p}}(\mathfrak{b})$ for all $\mathfrak{p}$. Since the chosen $\mathfrak{p}$ in $(*)$ is arbitrary, I conclude that $\mathfrak{b} \subseteq \mathfrak{a}$. Hence $\mathfrak{a}+\mathfrak{b} \subseteq \mathfrak{a}+\mathfrak{a}=\mathfrak{a}$. And clearly $\mathfrak{a} \subseteq \mathfrak{a}+\mathfrak{b}$, so $\mathfrak{a}+\mathfrak{b}=\mathfrak{a}$ $(**)$. 2 follows easily from $(**)$.

I am suspicious about this proof, particularly because $(**)$ seems stronger than what I was trying to show in the first place. I suspect I have misused $(*)$.

A: The unique factorization of ideals exactly mirrors how this works in $\Bbb Z$; that's the trick. I write $v$ for the valuation, the prime $\mathfrak{p}$ being understood. I also assume $\mathfrak{a}\mathfrak{b}\ne 0$, since that case is easy to see.

For the first one, write $\mathfrak{a}=\mathfrak{p}^n\mathfrak{m}$, $\mathfrak{b}=\mathfrak{p}^m\mathfrak{l}$ with $\mathfrak{m},\mathfrak{l}$ not divisible by $\mathfrak{p}$, so that the first one immediately follows by the uniqueness of the factorization.

For the second, the easiest approach is to note that Dedekind domains are characterized by the fact that all of their localizations at primes are DVRs. By passing to the localization $A_{\mathfrak{p}}$, we may assume $\mathfrak{a},\mathfrak{b}$ are principal ideals, with $\mathfrak{a}=(\pi^n)$, $\mathfrak{b}=(\pi^m)$ and $\pi$ a local uniformizing parameter. Then assume, WLOG, that $n=\min\{n,m\}<m$ (in the case of equality the result is immediate) and note $$v(\mathfrak{a}+\mathfrak{b})=v((\pi^n)+(\pi^m))=v(\{r\pi^n+s\pi^m: r,s\in A_{\mathfrak{p}}\})=n$$ The last equality comes from the fact that $\pi^n$ clearly divides all elements of that ideal, so $v\ge n$, and since $\pi^n$ is in it, $v\le n$.

For the third, we again localize, and note that since the valuation is discrete, we have $k\le \ell\iff (\pi^k)\supseteq (\pi^\ell)$, with equality of numbers iff we have equality of ideals. The result immediately follows, since if $k\le\ell$ we have $(\pi^k)\cap(\pi^\ell)=(\pi^\ell)$.

Edit The messier way, which only uses the unique factorization characterization, goes like this: first get a common denominator, i.e. write $$\mathfrak{a}=\mathfrak{p}^n\prod_{i=1}^r\mathfrak{q}_i^{e_i},\quad \mathfrak{b}=\mathfrak{p}^m\prod_{i=1}^r\mathfrak{q}_i^{f_i}$$ with some of the exponents being allowed to be zero, and, as always, all the $\mathfrak{q}_i$ distinct from one another and from $\mathfrak{p}$; WLOG $n\le m$. Then for peace of mind, clear out the denominators aside from $\mathfrak{p}$, that is to say multiply $\mathfrak{a}+\mathfrak{b}$ by $$\prod_{i=1}^r\mathfrak{q}_i^{N_i}$$ with $N_i$ chosen so that $N_i+e_i, N_i+f_i\ge 0$, so that we may assume $\mathfrak{a},\mathfrak{b}$ are integral ideals times a power of $\mathfrak{p}$. We may further assume that $(N_i+e_i)(N_i+f_i)=0$, i.e. either the exponent of $\mathfrak{q}_i$ in $\mathfrak{a}$ is $0$ or the corresponding exponent in $\mathfrak{b}$ is $0$. This modification does not affect the valuation because the $\mathfrak{q}_i$ are all coprime to $\mathfrak{p}$. Then $$\mathfrak{a}+\mathfrak{b}=\mathfrak{p}^n\left(\prod_{i=1}^r\mathfrak{q}_i^{N_i+e_i}+\mathfrak{p}^{m-n}\prod_{i=1}^r\mathfrak{q}_i^{N_i+f_i}\right).$$ By assumption the summands within the parentheses are coprime integral ideals, hence the sum inside is all of $A$, so that $v(\mathfrak{a}+\mathfrak{b})=n=\min\{n,m\}$ as desired.

For the intersection you proceed similarly, using the Chinese Remainder theorem (coprime ideals satisfy $\mathfrak{a}\cap\mathfrak{b}=\mathfrak{a}\mathfrak{b}$) after clearing denominators and assuming $(N_i+e_i)(N_i+f_i)=0$; then you have $$\mathfrak{a}\cap\mathfrak{b}=\mathfrak{p}^n\left(\prod_{i=1}^r\mathfrak{q}_i^{N_i+e_i}\cap\mathfrak{p}^{m-n}\prod_{i=1}^r\mathfrak{q}_i^{N_i+f_i}\right)=\mathfrak{p}^n\cdot\prod_{i=1}^r\mathfrak{q}_i^{2N_i+e_i+f_i}\cdot\mathfrak{p}^{m-n}=\mathfrak{p}^m\cdot\mathfrak{j}$$ with $\mathfrak{j}$ coprime to $\mathfrak{p}$, and this has valuation $m=\max\{n,m\}$ as desired.
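For intuition, the same three identities can be checked numerically in the integer analogue, where the ideal sum $(a)+(b)$ corresponds to the gcd and the intersection $(a)\cap(b)$ to the lcm (class and method names below are mine):

```java
// Integer analogue: for nonzero integers, gcd plays the role of the ideal sum
// (a)+(b) and lcm the role of the intersection (a) ∩ (b).
public class PAdicDemo {
    // v_p(n): exponent of the prime p in n (n must be nonzero).
    public static int vp(long n, long p) {
        int v = 0;
        while (n % p == 0) { n /= p; v++; }
        return v;
    }

    public static long gcd(long a, long b) { return b == 0 ? Math.abs(a) : gcd(b, a % b); }

    public static long lcm(long a, long b) { return a / gcd(a, b) * b; }

    public static void main(String[] args) {
        long a = 12, b = 18, p = 2; // v_2(12) = 2, v_2(18) = 1
        System.out.println(vp(a * b, p) == vp(a, p) + vp(b, p));              // additivity
        System.out.println(vp(gcd(a, b), p) == Math.min(vp(a, p), vp(b, p))); // sum <-> gcd <-> min
        System.out.println(vp(lcm(a, b), p) == Math.max(vp(a, p), vp(b, p))); // intersection <-> lcm <-> max
    }
}
```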
CRYPTO GENIUS: SCAM OR LEGIT? THE ULTIMATE TEST

Last Updated on February 12, 2019

Crypto Genius is the brainchild of a man identified as Chris Peterson. The software promises its users an average daily profit of $5,900. According to Peterson, he hired a team to develop the cryptocurrency trading software with an algorithm that makes it faster than any other software on the market. Are these claims true? Our review reveals otherwise. Crypto Genius is a scam that should be paid no attention. However, we have found that Cryptosoft is a legit robot which can bring you solid profits.

Is the Crypto Genius a Scam? YES!

In a bid to deliver important news to our esteemed readers, InsideBitcoins has done thorough research on this automated trading platform and observed the following:

The Crypto Genius app was created by an unknown individual, Chris Petersen; a search for him reveals no information. Crypto Genius claims to make a whopping $5,900 for its members, yet there is no real testimony to that effect. All the personalities on the web platform are internet actors and all the testimonies are fabricated. It claims to be faster than any other cryptocurrency automated trading system by entering a trade position 0.39 seconds faster via South Korean fiber-optic technology, which doesn't make any sense. There are recommended robots that work well in place of the scam "Crypto Genius", e.g. Cryptosoft, another interesting piece of software used for trading cryptocurrencies.

Although the use of Bitcoin robots to trade cryptocurrencies is a valid way of earning through cryptocurrency trading and investment, scams infiltrate the system to confuse the public. A close study of this scam trading platform shows it is a well-arranged scam.

Claiming to search for the most profitable trading signals for cryptocurrencies like Bitcoin, Ethereum, Ripple, etc., this software has duped many unsuspecting investors. With many unrelated stories, Chris Peterson tries to convince us that his software is not a scam. He claims there is a South Korean fiber-optic technology which helps the app take a good trading position faster than other software; this, in fact, is not true. There is no such technology; it is a bunch of crap put together to sound official. Requiring a minimum deposit of $250 before trading, Crypto Genius contradicts its initial statement that the app is free to use for all registered users. Launched in December 2017, the software has, Chris claims, traded cryptocurrencies with a total accumulated value pegged at $13 million. That seems huge, yet there is no official validation from any regulatory body or member. With fake reviews showcased on its website, Crypto Genius is nothing but a SCAM which should not be engaged with.

What is the Crypto Genius, and is it a Scam?

Yes, the Crypto Genius software is a scam, as can be proven from all indications. After creating an account with the platform, the user is connected to a broker called STOX Market to trade cryptocurrencies. A quick background check on this broker site reveals that it is unregulated and was created solely for the purpose of scamming unsuspecting customers. There are many other unlicensed broker websites used to perpetrate this act. As stated earlier, this app was created by "Chris Petersen". However, our research proves that this character is fictitious: the image of Chris Petersen is a stock image available on iStock, so it was either stolen or bought from that platform. The characters in the video on the platform are also internet actors paid to act as investors and real users of the imaginary trading system.

Although the software can be accessed on mobile phones (Android and iOS) and computers (Windows, Mac, Linux), the information displayed on the platform makes it a SCAM to watch out for. The information is unverified, untrue and unrealistic. Having read a lot of reviews of this software, we found that many who used it in the past complained about the system, most of them having withdrawal problems after depositing and trading. As a rule of thumb, to unmask a scam, observe the platform: if it presents services that are risk-free and booming and is full of excitement, from parties to vacations, these are signs that the program is a scam.

Who founded the Crypto Genius?

Crypto Genius was founded by an anonymous individual called Chris Petersen. Having researched this personality, we observed that this wasn't a real person but rather an image bought from a stock photo site, as can be seen in the image below. Searching for the registration details of the domain name the same way we did for Crypto GPS, we looked up www.thecryptogenius.com and found that its registration details were all kept private. Thus, with a face that turns out to be a bought or stolen stock image, and private domain registration details, everything leads to only one conclusion: CRYPTO GENIUS IS A SCAM!

Why the Crypto Genius is a Scam

There are many valid reasons to conclude that Crypto Genius is a scam. The evidence is so plain that it was easy to dig out. Some of it:

False claims about how much you can make

In the promotional video displayed on the homepage of the website, Crypto Genius claims that people will be able to "experience the most important transfer of wealth in history". Claiming to make no less than $5,900 for its users, Crypto Genius presents a seemingly impossible promise to its members.
Crypto Genius claims that with the system, there is an algorithm designed into the trading software that searches the internet for profitable trading signals. However, in all of this, there is no concrete way to prove such an algorithm exists. So, imagine a trader who joins the platform by depositing the $250 minimum fee, he or she would expect to earn the same $5,900 daily. But, is this so, it definitely cannot be. What if there is another investor who invests $1,000, the person definitely cannot earn the $5,900 as the investment ratio differs. Thus, from all indications, these claims suffice to say that Crypto Genius is a scam. Fake Videos With a neatly arranged video on the software’s web platform, Crypto Genius claims media promotions from popular media houses such as CNN, Financial Times, and Forbes etc. However, the video is a doctored one as the media house logos used were actually edited by video professionals in their team. All the personalities also used in the video are internet actors and are in no way connected to cryptocurrency trading or the software. The video points out that the world economy is on the brink of collapse and the only way out is to join the platform and have a financially secured future. Fake Testimonials All the testimonies shared by so-called members on the platform about the software are fake. These testifiers claim that the software has helped reshape their financial life, they have built houses, bought cars, gone on vacations, attended wild parties, and they live luxury lives. Our research reveals that all of this is fake. These testifiers supposedly claim they have earned several thousands of dollars trading with the system every day. It is quite appalling that these testifiers claim to have so much money that they stopped working. All of these statements are unbelievable and are more than enough to conclude it is a scam. 
Fake rumors and TV claims Crypto Genius is known for using popular brand names and celebrity identities in the promotion of its scam. There have been situations where it used the UK show Dragons' Den, claiming the software team was on the show and the software was endorsed. It also claimed at one time that Elon Musk was stepping down at his company, Tesla, to focus on cryptocurrency trading through automated trading systems, and it once claimed that Peter Jones, a top investor on Dragons' Den, owns a 20% share in the software. However, none of these claims is true. Official sources have confirmed that there is no memorandum of understanding between the concerned parties. Misleading Information In a bid to mislead the public on the real reason behind the rise in the price of Bitcoin and other cryptocurrencies over time, this trading software claims that it can simply generate thousands of dollars every day for every user. The truth is that the rise in the value of these cryptocurrencies is a result of their increasing acceptance globally. A close study of the cryptocurrency system reveals that there are bearish and bullish markets at different times. Presently, the cryptocurrency industry is in a bearish condition, which has seen Bitcoin move from an all-time high of $20,000 to as low as $4,000. Thus, Crypto Genius is only putting out misleading information about how to earn in the cryptocurrency space. The Signup, deposit, and trade process on Crypto Genius Crypto Genius is a platform that claims its software is free for all registered users but requires $250 to activate trading. From here, one can easily see the trick being used. After registration on the website, the trading platform connects the registered user to a broker website to deposit funds for live trading. This platform has connections with various unlicensed crypto brokers (e.g. STOX Market), such that when there is a deposit, the platforms share the money between them.
After all of this, the system supposedly deposits the funds into the account and the money is seen in the account. With trading, the money appears to increase by the day, showing that profits are coming in. However, this is where the problem lies: upon sending a withdrawal request, the request is neither accepted nor attended to. Contacting support is also a dead end, as there is no response. At this point, it is clear the whole process was a scam. Have people made money with the Crypto Genius? In the course of researching this software, all the testimonies shared on platforms other than the software’s own website were negative reviews. This further asserts that there is no one, not one person, who has earned money on this trading platform. Recommended robots The presence of fake and scam robots doesn’t mean that there are no good robots in the automated trading industry. There are quite a number of them; however, one of the best, which we greatly recommend, is Cryptosoft. Cryptosoft: Rated as one of the most promising crypto robots of 2019, Cryptosoft is trading software that allows its users to trade cryptocurrencies using accurate trading signals. Cryptosoft is the best crypto robot to use in 2018. Its services are transparent and valid, and we have testimonies from users of this software. Click the link below to sign up. Crypto Genius Review: The Verdict! Having brought to light the status of Crypto Genius as a scam, it is important for you to know that the software is not trustworthy. They are only out to manipulate you with sweet information. Only gullible people would trust such systems, where you would supposedly earn a lot of money doing nothing. Aside from the fact that this system deceives with data showing profits, it also lies with its bogus claims of making thousands of dollars daily. So, once again, we do not recommend this software and it is a TOTAL SCAM! FAQs What is Bitcoin?
Bitcoin is a decentralized digital currency that works on a technology called the blockchain. It can be regarded as a currency, an asset or a security. Is it possible to earn through cryptocurrency? Yes, it is. There are different ways to earn money in the cryptocurrency ecosystem. These include buying and selling Bitcoin, ICO fundraising and cryptocurrency mining, among others. Which celebrity has endorsed Crypto Genius so far? No celebrity has endorsed the trading software, as it is a scam. All the information out there on the internet about a celebrity endorsing the software is false. Is it legal to trade cryptocurrencies? The answer to this question depends on the country of use. Some countries refuse to recognise Bitcoin and other cryptocurrencies as currency, while others accept them. For instance, in one US state, a court ruled that Bitcoin is a currency. Aside from Bitcoin Code and Cryptosoft, is there any other trading robot you can recommend? The Crypto Genius is a scam and you do not want to get hurt. Investing with this robot is very dangerous, as there is nothing realistic on the platform and it has been reviewed to be a scam. Just the day before yesterday, my brother complained about his loss and I was shocked. He lost about $500 on the Crypto Genius software and I could not believe my ears. This software is a scam and it is not worth your time at all. What are the other cryptocurrency robots that I can use? Which cryptocurrency robot would you recommend for me to use for trading? The Crypto Genius software is a scam in disguise; it is nothing but a total waste of your wonderful and worthwhile time. There are so many claims about the Crypto Genius software that are not real; they are all fake and no good. The crypto robot software claims to work on a special underlying technology, but there is no such technology out there.
Everything about the system is just fake and unrealistic; I hope no one falls for this lie anymore. This software is a total scam. Can you help me with software that is legit and profitable? What is the minimum amount I can make with these legit robots? Thank you for the honest information online, kudos to you guys, and I hope you reply to me as soon as possible. The Crypto Genius is nothing but a disappointment; going by the name of the cryptocurrency robot, I thought it was worth it, not knowing that it was just a waste of time, money and resources, and I am very sad to have wasted my precious time on this. This robot is a scam and you should not invest your money in this software for any reason. Will I be able to get my money back since it is not working for me? Is there a way those who scam people online can be fished out and dealt with? You guys are nothing but the best. I can't say anything more than this because you have really done a great job and I commend the team behind this. How cheap can people be? Just for the sake of making money, they got a picture from Shutterstock to use as the founder of their page. I hope the people behind this are punished for the crime they have committed with the Crypto Genius disguise. What is the easiest way to purchase cryptocurrencies from a trusted source? And how can I trade with ease online using simple software?
We only use the finest wood to provide top-quality service to our customers. Some of the products we use on our flooring projects include: Come visit our showroom for every hardwood flooring option you could think of. We can customize any project to meet your specifications. We guarantee to get the job done right the first time and will work hard to make sure you are completely satisfied. Call us today and let us show you the difference. Some of your hardwood floor options: Laminate Flooring is a great way to get the natural look of real hardwood for less. Floor Magicians Inc. offers a variety of laminate flooring to suit your style of living. Laminate floors are laid over the installation's sub-floor on top of a foam/film underlay that provides moisture- and sound-reducing properties.
Ständlerstraße The Ständlerstraße is a 3.5 km-long street in the south of Munich. It is a part of the exterior ring planned in earlier years. It runs from the Stadelheimer Straße, at the corner of Schwanseestraße in Giesing, crosses the A8, is crossed by the chain bridge Neuperlach and ends at the Karl-Marx-Ring in Neuperlach. Due to the original planning, the routing of the road is laid out generously for eight lanes, but only four lanes were built. The street was named after a family of merchants known as Stantler, who for several generations practiced the craft of bladesmithing in the area. On it are the sculptures "Only Man is the Place of Images" by Jai Young Park and "Pavilion - Slanted Walls" by Kay Winkler, as well as the tram main workshop, which is now used by the MVG Museum and is a protected building. To the southwest is the cemetery at Perlacher Forst. For climate protection reasons, the street lamps along the road were removed in 2015. References Category:Streets in Munich
Q: How to concatenate two DNAStringSet sequences per sample in R? I have two Large DNAStringSet objects, each containing 2805 entries, and each sequence has a length of 201. I want to combine them so that I still have 2805 entries, with one object that is the combination of both. I tried to do this s12 <- c(unlist(s1), unlist(s2)) But that created a single Large DNAString object with 1127610 elements, and this is not what I want. I simply want to combine them per sample. EDIT: Each entry in my DNAStringSet objects, named s1 and s2, has a format similar to this: width seq [1] 201 CCATCCCAGGGGTGATGCCAAGTGATTCCA...CTAACTCTGGGGTAATGTCCTGCAGCCGG A: If your goal is to return a list where each list element is the concatenation of the corresponding list elements from the original lists, resulting in a list of length 2805 where each list element has a length of 402, you can achieve this with Map. Here is an example with a smaller pair of lists. # set up the lists set.seed(1234) list.a <- list(a=1:5, b=letters[1:5], c=rnorm(5)) list.b <- list(a=6:10, b=letters[6:10], c=rnorm(5)) Each list contains 3 elements, which are vectors of length 5. Now, concatenate the lists by list position with Map and c: Map(c, list.a, list.b) $a [1] 1 2 3 4 5 6 7 8 9 10 $b [1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j" $c [1] -1.2070657 0.2774292 1.0844412 -2.3456977 0.4291247 0.5060559 -0.5747400 -0.5466319 -0.5644520 -0.8900378 For your problem as you have described it, you would use s12 <- Map(c, s1, s2) The first argument of Map is a function that tells Map what to do with the list items that you have given it. Above, those list items are a and b; in your example, they are s1 and s2.
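For readers more at home outside R, the same per-sample idea can be sketched in Python (purely illustrative; the short strings below are made-up stand-ins, not Biostrings objects): pair up corresponding entries and concatenate each pair.

```python
# Element-wise concatenation of two equal-length collections of sequences,
# mirroring the per-sample effect of R's Map(c, s1, s2).
s1 = ["CCAT", "GGTA", "TTAC"]  # stand-ins for the 2805 width-201 sequences
s2 = ["AAGG", "CCTT", "GGAA"]

# zip pairs entry i of s1 with entry i of s2; the two collections are
# assumed to have equal length, as in the question.
s12 = [a + b for a, b in zip(s1, s2)]
print(s12)  # ['CCATAAGG', 'GGTACCTT', 'TTACGGAA']
```

The result keeps one entry per sample, with each entry twice as wide, which is exactly the shape the question asks for.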
Tuesday, May 31, 2005 The Consequences of Telling the Truth If you are a European who dares to criticize Islam and the extremists it inevitably seems to breed, you'd better be prepared for lifestyle changes. In the case of the Somali-born Dutch member of parliament Hirsi Ali, a barrage of very credible death threats, combined with the murder of other critics of Islam, has forced her to seek government protection. Hence, Ms. Ali lives under siege from radical Muslims, a perfect allegory, actually, for her adopted country, the Netherlands. She answers questions about her predicament by gesturing to the two ubiquitous bodyguards. "You can observe how it is," she says. "I am limited in my freedom of movement." Things have improved from the immediate aftermath of the killing, when she had to sleep in a naval base. She has travelled to the US and has met Salman Rushdie (she once, in her youth, supported the fatwa against him). Now she has a flat, although of its two bedrooms one is reserved for the security team, and each time she opens her door a bodyguard will appear to check on her. Ms Hirsi Ali travels in an armour-plated car, and knows that were she to have a relationship she would put a partner's life at risk. Fortunately, Ms. Ali is a woman of extraordinary courage and poise. Her outspokenness in the face of death threats and violence has helped awaken the Dutch people to the extent of the threat growing in their cities. "I travel, I have an apartment since March so I have a little more privacy than when I was being moved from place to place," she says. She smiles slightly as she adds: "There are some bad things and some moments when I think, 'Well, what is all this about?' - some form of panic, you know - you are threatened and stuff like that. But there is also the positive side, because within three years I have been able to convey my message to the public. 
So everyone in Europe knows the situation of Muslim women is not comparable to the situation of the native women. That there are also atrocities performed in the name of culture and religion taking place within Europe, within the Netherlands, and governments must deal with this." Even if the Muslims don't physically assail their critics, they will use any means available to silence them. Sadly, the radical Islamists have found an endless supply of "useful fools" in the form of Western multiculturalists, who are willing to compromise every Western value and principle in the name of diversity, which they have elevated to a quasi-religious tenet. Thus, when Italian leftist Oriana Fallaci criticizes Islam in a book, Muslim activists easily use the anti-intolerance laws passed by Italian multiculturalists to have her indicted. A judge has ordered best-selling writer and journalist Oriana Fallaci to stand trial in her native Italy on charges that she defamed Islam in a recent book. The decision angered Italy's justice minister but delighted Muslim activists, who accused Fallaci of inciting religious hatred in her 2004 work "La Forza della Ragione" (The Force of Reason). Fallaci lives in New York and has regularly provoked the wrath of Muslims with her outspoken criticism of Islam following the Sept. 11, 2001, attacks on U.S. cities. In "La Forza della Ragione," Fallaci wrote that terrorists had killed 6,000 people over the past 20 years in the name of the Koran and said the Islamic faith "sows hatred in the place of love and slavery in the place of freedom." State prosecutors originally dismissed accusations of defamation from an Italian Muslim organization, and said Fallaci should not stand trial because she was merely exercising her right to freedom of speech. But a preliminary judge in the northern Italian city of Bergamo, Armando Grasso, rejected the prosecutors' advice at a hearing on Tuesday and said Fallaci should be indicted. 
Grasso's ruling homed in on 18 sentences in the book, saying some of Fallaci's words were "without doubt offensive to Islam and to those who practice that religious faith." Pause, for a moment, and consider the irony in this. Ms. Fallaci is to be tried for writing a book criticizing Islam in the same country where four hundred years ago Galileo faced the Inquisition for writing a book that affirmed that the Earth revolves around the sun and not vice versa. So much for freedom of speech, conscience or tolerance of dissent - in short, in the name of tolerating its non-Western minorities, Italy has turned its back on the last four centuries of Western political progress. Muslims, quite naturally, are overjoyed to see a Western legal system used to eviscerate defenders of Western civilization. Adel Smith, a high-profile Muslim activist who brought the original law suit, hailed the decision. "It is the first time a judge has ordered a trial for defamation of the Islamic faith," he told reporters. "But this isn't just about defamation. We would also like (the court) to recognize that this is an incitement to religious hatred." Isn't that clever? To criticize Islam is to defame it; to defame Islam is to incite religious hatred. So much for the European Enlightenment. So much for science and free inquiry. The truly vile aspect of Ms. Fallaci's predicament is that so many Italians (and, for that matter, Europeans in general) are willing to betray their cultural heritage in order to placate hostile immigrants.
--- abstract: 'Given a symmetric $D\times D$ matrix $M$ over $\{0,1,*\}$, a list $M$-partition of a graph $G$ is a partition of the vertices of $G$ into $D$ parts which are associated with the rows of $M$. The part of each vertex is chosen from a given list in such a way that no edge of $G$ is mapped to a $0$ in $M$ and no non-edge of $G$ is mapped to a $1$ in $M$. Many important graph-theoretic structures can be represented as list $M$-partitions including graph colourings, split graphs and homogeneous sets and pairs, which arise in the proofs of the weak and strong perfect graph conjectures. Thus, there has been quite a bit of work on determining for which matrices $M$ computations involving list $M$-partitions are tractable. This paper focuses on the problem of counting list $M$-partitions, given a graph $G$ and given a list for each vertex of $G$. We identify a certain set of “tractable” matrices $M$. We give an algorithm that counts list $M$-partitions in polynomial time for every (fixed) matrix $M$ in this set. The algorithm relies on data structures such as sparse-dense partitions and subcube decompositions to reduce each problem instance to a sequence of problem instances in which the lists have a certain useful structure that restricts access to portions of $M$ in which the interactions of $0$s and $1$s are controlled. We show how to solve the resulting restricted instances by converting them into particular counting constraint satisfaction problems (${\ensuremath{\mathrm{\#CSP}}}$s) which we show how to solve using a constraint satisfaction technique known as “arc-consistency”. For every matrix $M$ for which our algorithm fails, we show that the problem of counting list $M$-partitions is [$\mathrm{\#P}$]{}-complete. Furthermore, we give an explicit characterisation of the dichotomy theorem — counting list $M$-partitions is tractable (in [$\mathrm{FP}$]{}) if the matrix $M$ has no structure called a derectangularising sequence. 
If $M$ has a derectangularising sequence, we show that counting list $M$-partitions is [$\mathrm{\#P}$]{}-hard. We show that the meta-problem of determining whether a given matrix has a derectangularising sequence is [$\mathrm{NP}$]{}-complete. Finally, we show that list $M$-partitions can be used to encode cardinality restrictions in $M$-partitions problems and we use this to give a polynomial-time algorithm for counting homogeneous pairs in graphs.' author: - 'Andreas Göbel[^1]' - Leslie Ann Goldberg - 'Colin McQuillan[^2]' - David Richerby - 'Tomoyuki Yamakami[^3]' bibliography: - '\\jobname.bib' title: 'Counting list matrix partitions of graphs[^4]' --- Introduction ============ A matrix partition of an undirected graph is a partition of its vertices according to a matrix which specifies adjacency and non-adjacency conditions on the vertices, depending on the parts to which they are assigned. For finite sets $D$ and $D'$, the set $\{0,1,*\}^{D\times D'}$ is the set of matrices with rows indexed by $D$ and columns indexed by $D'$ where each $M_{i,j} \in \{0,1,*\}$. For any symmetric matrix $M\in\{0,1,*\}^{D\times D}$, an [*$M$-partition*]{} of an undirected graph $G=(V,E)$ is a function $\sigma\colon V\to D$ such that, for distinct vertices $u$ and $v$, - $M_{\sigma(u),\sigma(v)}\neq 0$ if $(u,v)\in E$ and - $M_{\sigma(u),\sigma(v)}\neq 1$ if $(u,v)\not\in E$. Thus, $M_{i,j}=0$ means that no edges are allowed between vertices in parts $i$ and $j$, $M_{i,j}=1$ means that there must be an edge between every pair of vertices in the two parts and $M_{i,j}=*$ means that any set of edges is allowed between the parts. For entries $M_{i,i}$ on the diagonal of $M$, the conditions only apply to distinct vertices in part $i$. Thus, $M_{i,i}=1$ requires that the vertices in part $i$ form a clique in $G$ and $M_{i,i}=0$ requires that they form an independent set. 
For example, if $D=\{i,c\}$, $M_{i,i} = 0$, $M_{c,c}=1$ and $M_{c,i} = M_{i,c} = *$, i.e., $M=\left(\begin{smallmatrix}0 & *\\ * & 1\end{smallmatrix}\right)$, then an $M$-partition of a graph is a partition of its vertices into an independent set (whose vertices are mapped to $i$) and a clique (whose vertices are mapped to $c$). The independent set and the clique may have arbitrary edges between them. A graph that has such an $M$-partition is known as a split graph [@Golumbic]. As Feder, Hell, Klein and Motwani describe [@FHKM], many important graph-theoretic structures can be represented as $M$-partitions, including graph colourings, split graphs, $(a,b)$-graphs [@Bra96], clique-cross partitions [@EKR], and their generalisations. $M$-partitions also arise as “type partitions” in extremal graph theory [@BT00]. In the special case where $M$ is a $\{0,*\}$-matrix (that is, it has no 1 entries), $M$-partitions of $G$ correspond to homomorphisms from $G$ to the (potentially looped) graph $H$ whose adjacency matrix is obtained from $M$ by turning every $*$ into a 1. Thus, proper $|D|$-colourings of $G$ are exactly $M$-partitions for the matrix $M$ which has 0s on the diagonal and $*$s elsewhere. To represent more complicated graph-theoretic structures, such as homogeneous sets and their generalisations, which arise in the proofs of the weak and strong perfect graph conjectures [@lovasz; @CRST], it is necessary to generalise $M$-partitions by introducing lists. Details of these applications are given by Feder et al. [@FHKM], who define the notion of a list $M$-partition. A [*list $M$-partition*]{} is an $M$-partition $\sigma$ that is also required to satisfy constraints on the values of each $\sigma(v)$. Let ${{\mathcal{P}(D)}}$ denote the powerset of $D$. We say that $\sigma$ [*respects*]{} a function $L\colon V(G)\to {{\mathcal{P}(D)}}$ if $\sigma(v)\in L(v)$ for all $v\in V(G)$. 
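The split-graph example can be checked by brute force. The sketch below (our illustration, not part of the paper) enumerates every assignment of vertices to parts for a small graph and counts those that meet the $M$-partition conditions:

```python
from itertools import product

def count_m_partitions(vertices, edges, M):
    """Count M-partitions of a graph by brute force.

    M maps ordered pairs of parts to '0', '1' or '*':
    '0' forbids edges between the parts, '1' forces them,
    '*' allows anything.
    """
    edge_set = {frozenset(e) for e in edges}
    parts = sorted({i for i, _ in M})
    count = 0
    for sigma in product(parts, repeat=len(vertices)):
        assignment = dict(zip(vertices, sigma))
        ok = True
        for idx, u in enumerate(vertices):
            for v in vertices[idx + 1:]:
                entry = M[(assignment[u], assignment[v])]
                adjacent = frozenset((u, v)) in edge_set
                # An edge mapped to 0 or a non-edge mapped to 1 is forbidden.
                if (adjacent and entry == '0') or (not adjacent and entry == '1'):
                    ok = False
                    break
            if not ok:
                break
        if ok:
            count += 1
    return count

# Split-graph matrix: part 'i' is an independent set, part 'c' a clique,
# with arbitrary edges between the two parts.
M_split = {('i', 'i'): '0', ('i', 'c'): '*', ('c', 'i'): '*', ('c', 'c'): '1'}

# The path on three vertices has exactly 3 split partitions.
print(count_m_partitions([0, 1, 2], [(0, 1), (1, 2)], M_split))
```

The exponential enumeration is only meant to make the definition concrete; the point of the paper is precisely to count such partitions in polynomial time when possible.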
Thus, for each vertex $v$, $L(v)$ serves as a list of allowable parts for $v$ and a *list $M$-partition* of $G$ is an $M$-partition that respects the given list function. We allow empty lists for technical convenience, although there are no $M$-partitions that respect any list function $L$ where $L(v)=\emptyset$ for some vertex $v$. Feder et al. [@FHKM] study the computational complexity of the following decision problem, which is parameterised by a symmetric matrix $M\in\{0,1,*\}^{D\times D}\!$. [[List-$M$-partitions]{}]{}. Input: A pair $(G,L)$ in which $G$ is a graph and $L$ is a function $V(G)\to{{\mathcal{P}(D)}}$. Output: “Yes”, if $G$ has an $M$-partition that respects $L$; “no”, otherwise. Note that $M$ is a parameter of the problem rather than an input of the problem. Thus, its size is a constant which does not vary with the input. A series of papers [@FH; @FHHList; @FHH] described in [@FHKM] presents a complete dichotomy for the special case of homomorphism problems, which are [[List-$M$-partitions]{}]{} problems in which $M$ is a $\{0,*\}$-matrix. In particular, Feder, Hell and Huang [@FHH] show that, for every $\{0,*\}$-matrix $M$ (and symmetrically, for every $\{1,*\}$-matrix $M$), the problem [[List-$M$-partitions]{}]{} is either polynomial-time solvable or [$\mathrm{NP}$]{}-complete. It is important to note that both of these special cases of [[List-$M$-partitions]{}]{} are constraint satisfaction problems (CSPs) and a famous conjecture of Feder and Vardi [@FV] is that a P versus [$\mathrm{NP}$]{}-complete dichotomy also exists for every CSP. Although general [[List-$M$-partitions]{}]{} problems can also be coded as CSPs with restrictions on the input,[^5] it is not known how to code them without such restrictions. Since the Feder–Vardi conjecture applies only to CSPs with unrestricted inputs, even if proved, it would not necessarily apply to [[List-$M$-partitions]{}]{}. 
Given the many applications of [[List-$M$-partitions]{}]{}, it is important to know whether there is a dichotomy for this problem. This is part of a major ongoing research effort which has the goal of understanding the boundaries of tractability by identifying classes of problems, as wide as possible, where dichotomy theorems arise and where the precise boundary between tractability and intractability can be specified. Significant progress has been made on identifying dichotomies for [[List-$M$-partitions]{}]{}. Feder et al. [@FHKM Theorem 6.1] give a complete dichotomy for the special case in which $M$ is at most $3\times 3$, by showing that [[List-$M$-partitions]{}]{} is polynomial-time solvable or [$\mathrm{NP}$]{}-complete for each such matrix. Later, Feder and Hell studied the [[List-$M$-partitions]{}]{} problem under the name CSP$^*_{1,2}(H)$ and showed [@FHFull Corollary 3.4] that, for every $M$, [[List-$M$-partitions]{}]{} is either [$\mathrm{NP}$]{}-complete, or is solvable in quasi-polynomial time. In the latter case, they showed that [[List-$M$-partitions]{}]{} is solvable in $n^{O(\log n)}$ time, given an $n$-vertex graph. Feder and Hell refer to this result as a “quasi-dichotomy”. Although the Feder–Vardi conjecture remains open, a complete dichotomy is now known for counting CSPs. In particular, Bulatov [@Bul08] (see also [@DRfull]) has shown that, for every constraint language $\Gamma$, the counting constraint satisfaction problem ${\ensuremath{\mathrm{\#CSP}}}(\Gamma)$ is either polynomial-time solvable, or [$\mathrm{\#P}$]{}-complete. It is natural to ask whether a similar situation arises for counting list $M$-partition problems. We study the following computational problem, which is parameterised by a finite symmetric matrix $M\in\{0,1,*\}^{D\times D}\!$. [[\#List-$M$-partitions]{}]{}. Input: A pair $(G,L)$ in which $G$ is a graph and $L$ is a function $V(G)\to{{\mathcal{P}(D)}}$. Output: The number of $M$-partitions of $G$ that respect $L$. 
Hell, Hermann and Nevisi [@HHN] have considered the related problem [[\#$M$-partitions]{}]{} without lists, which can be seen as [[\#List-$M$-partitions]{}]{} restricted to the case that $L(v)=D$ for every vertex $v$. This problem is defined as follows. [[\#$M$-partitions]{}]{}. Input: A graph $G$. Output: The number of $M$-partitions of $G$. In the problems [[List-$M$-partitions]{}]{}, [[\#List-$M$-partitions]{}]{} and [[\#$M$-partitions]{}]{}, the matrix $M$ is fixed and its size does not vary with the input. Hell et al. gave a dichotomy for small matrices $M$ (of size at most $3\times 3$). In particular, [@HHN Theorem 10] together with the graph-homomorphism dichotomy of Dyer and Greenhill [@DG] shows that, for every such $M$, [[\#$M$-partitions]{}]{} is either polynomial-time solvable or ${\ensuremath{\mathrm{\#P}}}$-complete. An interesting feature of counting $M$-partitions, identified by Hell et al., is that, unlike the situation for homomorphism-counting problems, there are tractable $M$-partition problems with non-trivial counting algorithms. Indeed the main contribution of the present paper, as described below, is to identify a set of “tractable” matrices $M$ and to give a non-trivial algorithm which solves [[\#List-$M$-partitions]{}]{} for every such $M$. We combine this with a proof that [[\#List-$M$-partitions]{}]{} is ${\ensuremath{\mathrm{\#P}}}$-complete for every other $M$. Dichotomy theorems for counting list $M$-partitions {#subsec:dichotomy} --------------------------------------------------- Our main theorem is a general dichotomy for the counting list $M$-partition problem, for matrices $M$ of all sizes. As noted above, since there is no known coding of list $M$-partition problems as CSPs without input restrictions, our theorem is not known to be implied by the dichotomy for [$\mathrm{\#CSP}$]{}. Recall that [$\mathrm{FP}$]{} is the class of functions computed by polynomial-time deterministic Turing machines. 
[$\mathrm{\#P}$]{} is the class of functions $f$ for which there is a nondeterministic polynomial-time Turing machine that has exactly $f(X)$ accepting paths for every input $X$; this class can be thought of as the natural analogue of [$\mathrm{NP}$]{} for counting problems. Our main theorem is the following. \[thm:dichotomy\][ For any symmetric matrix $M\in\{0,1,*\}^{D\times D}\!$, [[\#List-$M$-partitions]{}]{} is either in ${\ensuremath{\mathrm{FP}}}$ or ${\ensuremath{\mathrm{\#P}}}$-complete.]{} To prove Theorem \[thm:dichotomy\], we investigate the complexity of the more general counting problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{}, which has two parameters — a matrix $M\in\{0,1,*\}^{D\times D}$ and a (not necessarily proper) subset [$\mathcal L$]{} of ${{\mathcal{P}(D)}}$. In this problem, we only allow sets in [$\mathcal L$]{} to be used as lists. [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{}. Input: A pair $(G,L)$ where $G$ is a graph and $L$ is a function $V(G)\to {\ensuremath{\mathcal L}}$. Output: The number of $M$-partitions of $G$ that respect $L$. Note that $M$ and [$\mathcal L$]{} are fixed parameters of [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} — they are not part of the input instance. The problem [[\#List-$M$-partitions]{}]{} is just the special case of [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} where ${\ensuremath{\mathcal L}}= {{\mathcal{P}(D)}}$. We say that a set ${\ensuremath{\mathcal L}}\subseteq {{\mathcal{P}(D)}}$ is [*subset-closed*]{} if $A\in {\ensuremath{\mathcal L}}$ implies that every subset of $A$ is in [$\mathcal L$]{}. This closure property is referred to as the “inclusive” case in [@FHFull]. 
\[def:closure\] Given a set ${\ensuremath{\mathcal L}}\subseteq {{\mathcal{P}(D)}}$, we write ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$ for its subset-closure, which is the set $${{\mathscr{S}({\ensuremath{\mathcal L}})}}=\{X \mid \mbox{for some $Y\in {\ensuremath{\mathcal L}}$, $X\subseteq Y$}\}.$$ We prove the following theorem, which immediately implies Theorem \[thm:dichotomy\]. \[thm:fulldichotomy\][Let $M$ be a symmetric matrix in $\{0,1,*\}^{D\times D}$ and let ${\ensuremath{\mathcal L}}\subseteq{{\mathcal{P}(D)}}$ be subset-closed. The problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is either in ${\ensuremath{\mathrm{FP}}}$ or ${\ensuremath{\mathrm{\#P}}}$-complete.]{} Note that this does not imply a dichotomy for the counting $M$-partitions problem without lists. The problem with no lists corresponds to the case where every vertex of the input graph $G$ is assigned the list $D$, allowing the vertex to be potentially placed in any part. Thus, the problem without lists is equivalent to the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} with ${\ensuremath{\mathcal L}}=\{D\}$, but Theorem \[thm:fulldichotomy\] applies only to the case where [$\mathcal L$]{} is subset-closed. Polynomial-time algorithms and an explicit dichotomy ---------------------------------------------------- We now introduce the concepts needed to give an explicit criterion for the dichotomy in Theorem \[thm:fulldichotomy\] and to provide polynomial-time algorithms for all tractable cases. We use standard definitions of relations and their arities, compositions and inverses. For any symmetric $M\in\{0,1,*\}^{D\times D}$ and any sets $X,Y\in{{\mathcal{P}(D)}}$, define the binary relation $$H^M_{X,Y}=\{(i,j)\in X\times Y\mid M_{i,j}=*\}.$$ The intractability condition for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} begins with the following notion of rectangularity, which was introduced by Bulatov and Dalmau [@BD]. 
A relation $R\subseteq D\times D'$ is [*rectangular*]{} if, for all $i,j\in D$, and $i'\!,j'\in D'\!$, $$(i,i'),(i,j'),(j,i')\in R\implies (j,j')\in R\,.$$ Note that the intersection of two rectangular relations is itself rectangular. However, the composition of two rectangular relations is not necessarily rectangular: for example, $\{(1,1), (1,2), (3,3)\}\circ \{(1,1), (2,3), (3,1)\} = \{(1,1), (1,3), (3,1)\}$. Our dichotomy criterion will be based on what we call [$\mathcal L$]{}-$M$-derectangularising sequences. In order to define these, we introduce the notions of pure matrices and $M$-purifying sets. Given index sets $X$ and $Y$, a matrix $M\in\{0,1,*\}^{X\times Y}$ is [*pure*]{} if it has no $0$s or has no $1$s. Pure matrices correspond to ordinary graph homomorphism problems. As we noted above, $M$-partitions of $G$ correspond to homomorphisms of $G$ when $M$ is a $\{0,*\}$-matrix. The same is true (by complementation) when $M$ is a $\{1,*\}$-matrix. For any $M\in\{0,1,*\}^{D\times D}$, a set ${\ensuremath{\mathcal L}}\subseteq {{\mathcal{P}(D)}}$ is [*$M$-purifying*]{} if, for all $X,Y\in{\ensuremath{\mathcal L}}$, the $X$-by-$Y$ submatrix $M|_{X\times Y}$ is pure. For example, consider the matrix $$M = \left(\begin{matrix} 1 & * & 0 \\ * & 1 & * \\ 0 & * & 1 \end{matrix}\right)$$ with rows and columns indexed by $\{0,1,2\}$ in the obvious way. The matrix $M$ is not pure but for ${\ensuremath{\mathcal L}}= \{\{0,1\}, \{2\}\}$, the set ${\ensuremath{\mathcal L}}$ is $M$-purifying and so is the closure ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$. \[def:derect\] An [*[$\mathcal L$]{}-$M$-derectangularising sequence*]{} of length $k$ is a sequence $D_1,\dots,D_k$ with each $D_i \in{\ensuremath{\mathcal L}}$ such that: - $\{D_1,\ldots,D_k\}$ is $M$-purifying and - the relation $H^M_{D_1,D_2} \circ H^M_{D_2, D_3} \circ \dots \circ H^M_{D_{k-1}, D_k}$ is not rectangular. 
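The composition example above can be verified mechanically. The following short script (our illustration, not part of the paper) composes two binary relations, represented as sets of pairs, and tests rectangularity:

```python
def compose(R, S):
    """Relational composition: (a, c) is in R∘S iff some b has (a, b) in R and (b, c) in S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def is_rectangular(R):
    """R is rectangular iff (i,i'), (i,j'), (j,i') in R together imply (j,j') in R."""
    return all((j, jp) in R
               for (i, ip) in R for (_, jp) in R for (j, _) in R
               if (i, jp) in R and (j, ip) in R)

# The two rectangular relations from the example in the text.
R1 = {(1, 1), (1, 2), (3, 3)}
R2 = {(1, 1), (2, 3), (3, 1)}

comp = compose(R1, R2)  # the set {(1, 1), (1, 3), (3, 1)}
# (1,1), (1,3) and (3,1) are all present, but (3,3) is missing,
# so the composition fails the rectangularity condition.
print(is_rectangular(R1), is_rectangular(R2), is_rectangular(comp))
```

The same `compose` and `is_rectangular` helpers could be chained over the relations $H^M_{D_i,D_{i+1}}$ to test a candidate derectangularising sequence directly.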
If there is an $i\in \{1,\ldots,k\}$ such that $D_i$ is the empty set then the relation $H=H^M_{D_1,D_2} \circ H^M_{D_2, D_3} \circ \dots \circ H^M_{D_{k-1}, D_k}$ is the empty relation, which is trivially rectangular. If there is an $i$ such that $|D_i|=1$ then $H$ is a Cartesian product, and is therefore rectangular. It follows that $|D_i|\geq 2$ for each $i$ in a derectangularising sequence. We can now state our explicit dichotomy theorem, which implies Theorem \[thm:fulldichotomy\] and, hence, Theorem \[thm:dichotomy\]. \[thm:explicitdichotomy\][ Let $M$ be a symmetric matrix in $\{0,1,*\}^{D\times D}$ and let ${\ensuremath{\mathcal L}}{}\subseteq{{\mathcal{P}(D)}}$ be subset-closed. If there is an [$\mathcal L$]{}-$M$-derectangularising sequence then the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is ${\ensuremath{\mathrm{\#P}}}$-complete. Otherwise, it is in ${\ensuremath{\mathrm{FP}}}$. ]{} Sections \[sec:purifiedcsp\], \[sec:arc\] and \[sec:dichotomy\] develop a polynomial-time algorithm which solves the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} whenever there is no [$\mathcal L$]{}-$M$-derectangularising sequence. The algorithm involves several steps. First, consider the case in which ${\ensuremath{\mathcal L}}$ is subset-closed and $M$-purifying. In this case, Proposition \[prop:purifiediscsp\] presents a polynomial-time transformation from an instance of the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} to an instance of a related counting CSP. Algorithm \[alg:AC\] exploits special properties of the constructed CSP instance so that it can be solved in polynomial time using a CSP technique called arc-consistency. (This is proved in Lemma \[lem:quickarc\].) This provides a solution to the original [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} problem for the $M$-purifying case. The case in which ${\ensuremath{\mathcal L}}$ is not $M$-purifying is tackled in Section \[sec:dichotomy\]. 
Section \[sec:DS\] gives algorithms for constructing the relevant data structures, which include a special case of sparse-dense partitions and also subcube decompositions. Algorithm \[alg:purify\] uses these data structures (via Algorithms \[alg:purifystep\], \[alg:Case1\], \[alg:Case2\], \[alg:Case3\] and \[alg:purifytriv\]) to reduce the [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} problem to a sequence of problems [[\#${\ensuremath{\mathcal L}}_i$-$M$-partitions]{}]{} where ${\ensuremath{\mathcal L}}_i$ is $M$-purifying. Finally, the polynomial-time algorithm is presented in Algorithms \[alg:mainpurifying\] and \[alg:main\]. For every ${\ensuremath{\mathcal L}}$ and $M$ where there is no ${\ensuremath{\mathcal L}}$-$M$-derectangularising sequence, either Algorithm \[alg:mainpurifying\] or Algorithm \[alg:main\] defines a polynomial-time function [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} for solving the [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} problem, given an input $(G,L)$. The function [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is not recursive. However, its *definition* is recursive in the sense that the function [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} defined in Algorithm \[alg:main\] calls a function [[\#${\ensuremath{\mathcal L}}_i$-$M$-partitions]{}]{} where ${\ensuremath{\mathcal L}}_i$ is a subset of ${{\mathcal{P}(D)}}$ whose cardinality is smaller than that of ${\ensuremath{\mathcal L}}$. The function [[\#${\ensuremath{\mathcal L}}_i$-$M$-partitions]{}]{} is, in turn, defined either in Algorithm \[alg:mainpurifying\] or in Algorithm \[alg:main\]. The proof of Theorem \[thm:explicitdichotomy\] shows that, when Algorithms \[alg:mainpurifying\] and \[alg:main\] fail to solve the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{}, the problem is ${\ensuremath{\mathrm{\#P}}}$-complete.
Complexity of the dichotomy criterion ------------------------------------- Theorem \[thm:explicitdichotomy\] gives a precise criterion under which the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is in ${\ensuremath{\mathrm{FP}}}$ or ${\ensuremath{\mathrm{\#P}}}$-complete, where [$\mathcal L$]{} and $M$ are considered to be fixed parameters. In Section \[sec:meta\], we address the computational problem of determining which is the case, now treating [$\mathcal L$]{} and $M$ as inputs to this “meta-problem”. Dyer and Richerby [@DRfull] studied the corresponding problem for the [$\mathrm{\#CSP}$]{} dichotomy, showing that determining whether a constraint language $\Gamma$ satisfies the criterion for their ${\ensuremath{\mathrm{\#CSP}}}(\Gamma)$ dichotomy is reducible to the graph automorphism problem, which is in [$\mathrm{NP}$]{}. We are interested in the following computational problem, which we show to be [$\mathrm{NP}$]{}-complete. [[ExistsDerectSeq]{}]{}. Input: an index set $D$, a symmetric matrix $M$ in $\{0,1,*\}^{D\times D}$ (represented as an array) and a set ${\ensuremath{\mathcal L}}{}\subseteq{{\mathcal{P}(D)}}$ (represented as a list of lists). Output: “Yes”, if there is an ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$-$M$-derectangularising sequence; “no”, otherwise. \[thm:meta\][ [[ExistsDerectSeq]{}]{} is [$\mathrm{NP}$]{}-complete under polynomial-time many-one reductions.]{} Note that, in the definition of the problem [[ExistsDerectSeq]{}]{}, the input ${\ensuremath{\mathcal L}}$ is not necessarily subset-closed. Subset-closedness allows a concise representation of some inputs: for example, ${{\mathcal{P}(D)}}$ has exponential size but it can be represented as ${{\mathscr{S}(\{D\})}}$, so the corresponding input is just ${\ensuremath{\mathcal L}}=\{D\}$. In fact, our proof of Theorem \[thm:meta\] uses a set of lists [$\mathcal L$]{} where $|X|\leq 3$ for all $X\in{\ensuremath{\mathcal L}}$.
Since there are at most $|D|^3+1$ such sets, our [$\mathrm{NP}$]{}-completeness proof would still hold if we insisted that the input [$\mathcal L$]{} to [[ExistsDerectSeq]{}]{} must be subset-closed. Let us return to the original problem [[\#List-$M$-partitions]{}]{}, which is the special case of the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} where ${\ensuremath{\mathcal L}}={{\mathcal{P}(D)}}$. This leads us to be interested in the following computational problem. [[MatrixHasDerectSeq]{}]{}. Input: an index set $D$ and a symmetric matrix $M$ in $\{0,1,*\}^{D\times D}$ (represented as an array). Output: “Yes”, if there is a ${{\mathcal{P}(D)}}$-$M$-derectangularising sequence; “no”, otherwise. Theorem \[thm:meta\] does not quantify the complexity of [[MatrixHasDerectSeq]{}]{} because its proof relies on a specific choice of [$\mathcal L$]{} which, as we have noted, is not ${{\mathcal{P}(D)}}$. Nevertheless, the proof of Theorem \[thm:meta\] has the following corollary. \[cor:meta\][ [[MatrixHasDerectSeq]{}]{} is in [$\mathrm{NP}$]{}.]{} Cardinality constraints ----------------------- Many combinatorial structures can be represented as $M$-partitions with the addition of cardinality constraints on the parts. For example, it might be required that certain parts be non-empty or, more generally, that they contain at least $k$ vertices for some fixed $k$. Feder et al. [@FHKM] showed that the problem of determining whether such a structure exists in a given graph can be reduced to a [[List-$M$-partitions]{}]{} problem in which the cardinality constraints are expressed using lists. In Section \[sec:card\], we extend this to counting. We show that any [[\#$M$-partitions]{}]{} problem with additional cardinality constraints of the form “part $d$ must contain at least $k_d$ vertices” is polynomial-time Turing-reducible to [[\#List-$M$-partitions]{}]{}.
As a corollary, we show that the “homogeneous pairs” introduced by Chvátal and Sbihi [@CS1987:Bull-free] can be counted in polynomial time. Homogeneous pairs can be expressed as an $M$-partitions problem for a certain $6\times 6$ matrix, with cardinality constraints on the parts. Preliminaries {#sec:prelim} ============= For a positive integer $k$, we write $[k]$ to denote the set $\{1,\dots,k\}$. If $\mathcal{S}$ is a set of sets then we use $\bigcap \mathcal{S}$ to denote the intersection of all sets in $\mathcal{S}$. The vertex set of a graph $G$ is denoted $V(G)$ and its edge set is $E(G)$. We write $\{0,1,*\}^{D}$ for the set of all functions $\sigma\colon D\to\{0,1,*\}$ and $\{0,1,*\}^{D\times D'}$ for the set of all matrices $M=(M_{i,j})_{i\in D,j\in D'}$, where each $M_{i,j}\in\{0,1,*\}$. We always use the term “$M$-partition” when talking about a partition of the vertices of a graph according to a $\{0,1,*\}$-matrix $M$. When we use the term “partition” without referring to a matrix, we mean it in the conventional sense of partitioning a set $X$ into disjoint subsets $X_1, \dots, X_k$ with $X_1\cup \dots \cup X_k = X$. We view computational counting problems as functions mapping strings over input alphabets to natural numbers. Our model of computation is the standard multi-tape Turing machine. We say that a counting problem $P$ is polynomial-time Turing-reducible to another counting problem $Q$ if there is a polynomial-time deterministic oracle Turing machine $M$ such that, on every instance $x$ of $P$, $M$ outputs $P(x)$ by making queries to oracle $Q$. We say that $P$ is polynomial-time Turing-equivalent to $Q$ if each is polynomial-time Turing-reducible to the other. For decision problems (languages), we use the standard many-one reducibility: language $A$ is many-one reducible to language $B$ if there exists a function $f$ that is computable in polynomial time such that $x\in A$ if and only if $f(x)\in B$. 
Counting list $M$-partition problems and counting CSPs {#sec:purifiedcsp} ====================================================== Toward the development of our algorithms and the proof of our dichotomy, we study a special case of the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{}, in which [$\mathcal L$]{} is $M$-purifying and subset-closed. For such [$\mathcal L$]{} and $M$, we show that the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is polynomial-time Turing-equivalent to a counting constraint satisfaction problem ([$\mathrm{\#CSP}$]{}). To give the equivalence, we introduce the notation needed to specify \#CSPs. A *constraint language* is a finite set $\Gamma$ of named relations over some set $D$. For such a language, we define the counting problem ${\ensuremath{\mathrm{\#CSP}}}(\Gamma)$ as follows. ${\ensuremath{\mathrm{\#CSP}}}(\Gamma)$. Input: a set $V$ of variables and a set $C$ of constraints of the form ${\langle (v_1,\dots,v_k),R \rangle}$, where $(v_1,\dots,v_k)\in V^k$ and $R$ is an arity-$k$ relation in $\Gamma$. Output: the number of assignments $\sigma\colon V\to D$ such that $$\label{eq:satisfying}(\sigma(v_1),\dots,\sigma(v_k))\in R\text{ for all }{\langle (v_1,\dots,v_k),R \rangle}\in C\,.$$ The tuple of variables $v_1, \dots, v_k$ in a constraint is referred to as the constraint’s *scope*. The assignments $\sigma\colon V\to D$ for which \[eq:satisfying\] holds are called the [*satisfying assignments*]{} of the instance $(V,C)$. Note that a unary constraint ${\langle v,R \rangle}$ has the same effect as a list: it directly restricts the possible values of the variable $v$. As before, we allow the possibility that $\emptyset\in\Gamma$; any instance that includes a constraint ${\langle (v_1, \dots, v_k), \emptyset \rangle}$ has no satisfying assignments. \[defgammaprime\] Let $M$ be a symmetric matrix in $\{0,1,*\}^{D\times D}$ and let [$\mathcal L$]{} be a subset-closed $M$-purifying set.
Define the constraint language $$\Gamma'_{\!{\ensuremath{\mathcal L}},M} = \{H^M_{X,Y}\mid X,Y\in{\ensuremath{\mathcal L}}\}$$ and let ${\Gamma_{\!{\ensuremath{\mathcal L}}, M}}= \Gamma'_{\!{\ensuremath{\mathcal L}},M} \cup {{\mathcal{P}(D)}}$, where ${{\mathcal{P}(D)}}$ represents the set of all unary relations on $D$. The unary constraints in ${\Gamma_{\!{\ensuremath{\mathcal L}}, M}}$ will be useful in our study of the complexity of the dichotomy criterion, in Section \[sec:meta\]. First, we define a convenient restriction on instances of ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$. \[def:simple\] An instance of ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$ is *simple* if: - there is exactly one unary constraint ${\langle v,X_v \rangle}$ for each variable $v\in V\!$, - there are no binary constraints ${\langle (v,v),R \rangle}$, and - each pair $u$, $v$ of distinct variables appears in at most one constraint of the form ${\langle (u,v),R \rangle}$ or ${\langle (v,u),R \rangle}$. \[lemma:simple\] For every instance $(V,C)$ of ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$, there is a simple instance $(V,C')$ such that an assignment $\sigma\colon V\to D$ satisfies $(V,C)$ if and only if it satisfies $(V,C')$. Further, such an instance can be computed in polynomial time. Observe that the set of binary relations in ${\Gamma_{\!{\ensuremath{\mathcal L}}, M}}$ is closed under intersections: $H^M_{X,Y} \cap H^M_{X'\!,Y'} = H^M_{X\cap X'\!,Y\cap Y'}$ and this relation is in ${\Gamma_{\!{\ensuremath{\mathcal L}}, M}}$ because [$\mathcal L$]{} is subset-closed. 
The binary part of ${\Gamma_{\!{\ensuremath{\mathcal L}}, M}}$ is also closed under relational inverse because $M$ is symmetric, so $$\left(H^M_{X,Y}\right)^{-1} = \{(b,a) \mid (a,b)\in H^M_{X,Y}\} = H^M_{Y,X}\in {\Gamma_{\!{\ensuremath{\mathcal L}}, M}}\,.$$ Since ${{\mathcal{P}(D)}}\subseteq {\Gamma_{\!{\ensuremath{\mathcal L}}, M}}$, the set of unary relations is also closed under intersections. We construct $C'$ as follows, starting with $C$. Any binary constraint ${\langle (v,v), R \rangle}$ can be replaced by the unary constraint ${\langle v, \{d\mid (d,d)\in R\} \rangle}$. All the binary constraints between distinct variables $u$ and $v$ can be replaced by the single constraint $$\left\langle (u,v), \bigcap \{R \mid {\langle (u,v), R \rangle}\in C \text{ or } {\langle (v,u), R^{-1} \rangle}\in C\} \right\rangle\,.$$ Let the set of constraints produced so far be $C''\!$. For each variable $v$ in turn, if there are no unary constraints applied to $v$ in $C''\!$, add the constraint ${\langle v, D \rangle}$; otherwise, replace all the unary constraints involving $v$ in $C''$ with the single constraint $$\left\langle v, \bigcap \{R \mid {\langle v, R \rangle}\in C''\} \right\rangle\,.$$ $C'$ is the resulting constraint set. The closure properties established above guarantee that $(V,C')$ is a ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$ instance. It is clear that it has the same satisfying assignments as $(V,C)$ and that it can be produced in polynomial time. Our main result connecting the counting list $M$-partitions problem with counting CSPs is the following. 
\[prop:purifiediscsp\][For any symmetric $M\in\{0,1,*\}^{D\times D}$ and any subset-closed, $M$-purifying set [$\mathcal L$]{}, the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is polynomial-time Turing-equivalent to ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$.]{} Because of its length, we split the proof of the proposition into two lemmas. For any symmetric $M\in\{0,1,*\}^{D\times D}$ and any subset-closed, $M$-purifying set [$\mathcal L$]{}, ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$ is polynomial-time Turing-reducible to [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{}. Consider an input $(V,C)$ to ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$, which we may assume to be simple. Each variable appears in exactly one unary constraint, ${\langle v,X_v \rangle}\in C$. Any variable $v$ that is not used in a binary constraint can take any value in $X_v$ so just introduces a multiplicative factor of $|X_v|$ to the output of the counting CSP. Thus, we will assume without loss of generality that every variable is used in at least one constraint with a relation from $\Gamma'_{\!{\ensuremath{\mathcal L}},M}$ and, by simplicity, there are no constraints of the form ${\langle (v,v),R \rangle}$. We now define a corresponding instance $(G,L)$ of the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{}. The vertices of $G$ are the variables $V$ of the [$\mathrm{\#CSP}$]{} instance. 
For each variable $v\in V\!$, set $$L(v) = X_v \cap \bigcap\left\{ X \mid \mbox{for some $u$ and $Y$, $ {\langle (v,u),H^M_{X,Y} \rangle}\in C$ or $ {\langle (u,v),H^M_{Y,X} \rangle}\in C$} \right\}.$$ The edges $E(G)$ of our instance are the unordered pairs $\{u,v\}$ that satisfy one of the following conditions: - there is a constraint between $u$ and $v$ in $C$ and $M|_{L(u)\times L(v)}$ has a $0$ entry, or - there is no constraint between $u$ and $v$ in $C$ and $M|_{L(u)\times L(v)}$ has a $1$ entry. Since every vertex $v$ is used in at least one constraint with a relation $H^M_{X,Y}$ where, by definition, $X$ and $Y$ are in ${\ensuremath{\mathcal L}}$, every set $L(v)$ is a subset of some set $W\in{\ensuremath{\mathcal L}}$. [$\mathcal L$]{} is subset-closed so $L(v)\in{\ensuremath{\mathcal L}}$ for all $v\in V$, as required. We claim that a function $\sigma\colon V\to D$ is a satisfying assignment of $(V,C)$ if and only if it is an $M$-partition of $G$ that respects $L$. Note that, since [$\mathcal L$]{} is $M$-purifying, no submatrix $M|_{X\times Y}$ ($X,Y\in{\ensuremath{\mathcal L}})$ contains both 0s and 1s. First, suppose that $\sigma$ is a satisfying assignment of $(V,C)$. For each variable $v$, $\sigma$ satisfies all the constraints ${\langle v,X_v \rangle}$, ${\langle (v,u),H^M_{X,Y} \rangle}$ and ${\langle (u,v),H^M_{Y,X} \rangle}$ containing $v$. Therefore, $\sigma(v)\in X_v$ and $\sigma(v)\in X$ for each binary constraint ${\langle (v,u),H^M_{X,Y} \rangle}$ or ${\langle (u,v),H^M_{Y,X} \rangle}$, so $\sigma$ satisfies all the list requirements. To show that $\sigma$ is an $M$-partition of $G$, consider any pair of distinct vertices $u,v\in V$. If there is a constraint ${\langle (u,v), H^M_{X,Y} \rangle}\in C$, then $\sigma$ satisfies this constraint so $M_{\sigma(u),\sigma(v)}=*$ and $u$ and $v$ cannot prevent $\sigma$ from being an $M$-partition. Otherwise, suppose there is no constraint between $u$ and $v$ in $C$.
If $M|_{L(u)\times L(v)}$ contains a 0, there is no edge $(u,v)\in E(G)$ by construction; otherwise, if $M|_{L(u)\times L(v)}$ contains a 1, there is an edge $(u,v)\in E(G)$ by construction; otherwise, $M_{x,y}=*$ for all $x\in L(u)$, $y\in L(v)$. In all three cases, the assignment to $u$ and $v$ is consistent with $\sigma$ being an $M$-partition. Conversely, suppose that $\sigma$ is not a satisfying assignment of $(V,C)$. If $\sigma$ does not satisfy some unary constraint ${\langle v,X \rangle}$ then $\sigma(v)\notin L(v)$ so $\sigma$ does not respect [$\mathcal L$]{}. If $\sigma$ does not satisfy some binary constraint ${\langle (u,v), H^M_{X,Y} \rangle}$ where $u$ and $v$ are distinct then, by definition of the relation $H^M_{X,Y}$, $M_{\sigma(u),\sigma(v)}\neq *$. If $M_{\sigma(u),\sigma(v)}=0$, there is an edge $(u,v)\in E(G)$ by construction, which is forbidden in $M$-partitions; if $M_{\sigma(u),\sigma(v)}=1$, there is no edge $(u,v)\in E(G)$ but this edge is required in $M$-partitions. Hence, $\sigma$ is not an $M$-partition. For any symmetric $M\in\{0,1,*\}^{D\times D}$ and any subset-closed, $M$-purifying set [$\mathcal L$]{}, the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is polynomial-time Turing-reducible to ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$. We now essentially reverse the construction of the previous lemma to give a reduction from [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} to ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$. For any instance ($G,L)$ of [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{}, we construct a corresponding instance $(V,C)$ of ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$ as follows. The set of variables $V$ is $V(G)$. 
The set of constraints $C$ consists of a constraint ${\langle v, L(v) \rangle}$ for each vertex $v\in V(G)$ and a constraint ${\langle (u,v), H^M_{L(u),L(v)} \rangle}$ for every pair of distinct vertices $u$, $v$ such that: - $(u,v)\in E(G)$ and $M|_{L(u)\times L(v)}$ has a 0 entry, or - $(u,v)\not\in E(G)$ and $M|_{L(u)\times L(v)}$ has a 1 entry. We show that a function $\sigma\colon V\to D$ is a satisfying assignment of $(V,C)$ if and only if it is an $M$-partition of $G$ that respects $L$. It is clear that $\sigma$ satisfies the unary constraints if and only if it respects $L$. If $\sigma$ satisfies $(V,C)$ then consider any pair of distinct vertices $u,v\in V$. If there is a binary constraint involving $u$ and $v$, then $M_{\sigma(u),\sigma(v)} = M_{\sigma(v),\sigma(u)} = *$ so the existence or non-existence of the edge $(u,v)$ of $G$ does not affect whether $\sigma$ is an $M$-partition. If there is no binary constraint involving $u$ and $v$, then either there is an edge $(u,v)\in E(G)$ and $M_{\sigma(u),\sigma(v)}\neq 0$ or there is no edge $(u,v)$ and $M_{\sigma(u),\sigma(v)}\neq 1$. In all three cases, $\sigma$ maps $u$ and $v$ consistently with it being an $M$-partition. Conversely, if $\sigma$ does not satisfy $(V,C)$, either it fails to satisfy a unary constraint, in which case it does not respect $L$, or it satisfies all unary constraints (so it respects $L$), but it fails to satisfy a binary constraint ${\langle (u,v),H^M_{L(u),L(v)} \rangle}$. In the latter case, by construction, $M_{\sigma(u),\sigma(v)}\neq *$ so either $M_{\sigma(u),\sigma(v)}=0$ but there is an edge $(u,v)\in E(G)$, or $M_{\sigma(u),\sigma(v)}=1$ and there is no edge $(u,v)\in E(G)$. In either case, $\sigma$ is not an $M$-partition of $G$. 
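The construction in this proof is entirely mechanical. As an informal illustration (ours, with our own encoding: $M$ as a list of lists over `'0'`/`'1'`/`'*'`, edges as a set of frozensets, and $L$ as a dict), it can be sketched as follows:

```python
def partition_instance_to_csp(M, vertices, edges, L):
    """Build the CSP instance of the reduction: a unary constraint
    <v, L(v)> for each vertex, and a binary constraint
    <(u,v), H^M_{L(u),L(v)}> whenever an edge meets a 0 entry, or a
    non-edge meets a 1 entry, in M restricted to L(u) x L(v)."""
    def H(X, Y):
        # H^M_{X,Y} = {(i,j) in X x Y : M[i][j] = '*'}
        return {(i, j) for i in X for j in Y if M[i][j] == '*'}

    # Unary relations are encoded as sets of 1-tuples.
    constraints = [((v,), {(d,) for d in L[v]}) for v in vertices]
    for i, u in enumerate(vertices):
        for v in vertices[i + 1:]:
            entries = {M[a][b] for a in L[u] for b in L[v]}
            is_edge = frozenset({u, v}) in edges
            if (is_edge and '0' in entries) or (not is_edge and '1' in entries):
                constraints.append(((u, v), H(L[u], L[v])))
    return constraints
```

For the example matrix of Section \[def:closure\] with an edge $\{a,b\}$, $L(a)=\{0,1\}$ and $L(b)=\{2\}$, the submatrix $M|_{L(a)\times L(b)}$ contains a $0$, so the binary constraint $\langle(a,b),\{(1,2)\}\rangle$ is generated.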
An arc-consistency based algorithm for ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$ {#sec:arc} ================================================================================================================= In the previous section, we showed that a class of [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} problems is equivalent to a certain class of counting CSPs, where the constraint language consists of binary relations and all unary relations over the domain $D$. We now investigate the complexity of such [$\mathrm{\#CSP}$]{}s. Arc-consistency is a standard solution technique for constraint satisfaction problems [@CSPbook]. It is, essentially, a local search method which initially assumes that each variable may take any value in the domain and iteratively reduces the range of values that can be assigned to each variable, based on the constraints applied to it and the values that can be taken by other variables in the scopes of those constraints. For any simple ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$ instance $(V,C)$, define the vector of [*arc-consistent domains*]{} $(D_v)_{v\in V}$ by the procedure in Algorithm \[alg:ACComp\]. At no point in the execution of the algorithm can any domain $D_v$ increase in size so, for fixed $D$, the running time of the algorithm is at most a polynomial in $|V|+|C|$. It is clear that, if $(D_v)_{v\in V}$ is the vector of arc-consistent domains for a simple ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$ instance $(V,C)$, then every satisfying assignment $\sigma$ for that instance must have $\sigma(v)\in D_v$ for each variable $v$. In particular, if some $D_v=\emptyset$, then the instance is unsatisfiable. (Note, though, that the converse does not hold. 
If $D=\{0,1\}$ and $R=\{(0,1),(1,0)\}$, the instance with constraints ${\langle x,D \rangle}$, ${\langle y,D \rangle}$, ${\langle z,D \rangle}$, ${\langle (x,y),R \rangle}$, ${\langle (y,z),R \rangle}$ and ${\langle (z,x),R \rangle}$ is unsatisfiable but arc-consistency assigns $D_x = D_y = D_z = \{0,1\}$.) The arc-consistent domains computed for a simple instance $(V,C)$ can yield further simplification of the constraint structure, which we refer to as [*factoring*]{}. The factoring applies when the arc-consistent domains restrict a binary relation to a Cartesian product. In this case, the binary relation can be replaced with corresponding unary relations. Algorithm \[alg:factor\] factors a simple instance with respect to a vector $(D_v)_{v\in V}$ of arc-consistent domains, producing a set $F$ of factored constraints. Recall that there is at most one constraint in $C$ between distinct variables and there are no binary constraints ${\langle (v,v), R \rangle}$ because the instance is simple. Note also that, if $|D_u|\leq 1$ or $|D_v|\leq 1$, then $R\cap (D_u\times D_v)$ is necessarily a Cartesian product. It is easy to see that the result of factoring a simple instance is simple, that Algorithm \[alg:factor\] runs in polynomial time and that the instance $(V,F)$ has the same satisfying assignments as $(V,C)$. The *constraint graph* of a [$\mathrm{CSP}$]{} instance $(V,C)$ (in any constraint language) is the undirected graph with vertex set $V$ that contains an edge between every pair of distinct variables that appear together in the scope of some constraint. Algorithm \[alg:AC\] uses arc-consistency to count the satisfying assignments of simple ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$ instances.
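The arc-consistent-domain computation itself is a straightforward fixed-point iteration. The following Python sketch (ours, not the paper's Algorithm \[alg:ACComp\]; `unary` maps each variable to its allowed set, `binary` maps ordered pairs to relations) reproduces the triangle example above:

```python
def arc_consistent_domains(D, variables, unary, binary):
    """Prune each domain D_v to the values supported by every binary
    constraint involving v; iterate until nothing changes."""
    dom = {v: set(unary.get(v, D)) for v in variables}
    changed = True
    while changed:
        changed = False
        for (u, v), R in binary.items():
            # Keep a value only if some partner value supports it.
            keep_u = {a for a in dom[u] if any((a, b) in R for b in dom[v])}
            keep_v = {b for b in dom[v] if any((a, b) in R for a in dom[u])}
            if keep_u != dom[u] or keep_v != dom[v]:
                dom[u], dom[v] = keep_u, keep_v
                changed = True
    return dom
```

On the unsatisfiable triangle instance with $R=\{(0,1),(1,0)\}$, every value is supported, so the procedure leaves all three domains equal to $\{0,1\}$, illustrating that arc-consistency alone does not decide satisfiability.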
It is straightforward to see that the algorithm terminates, since each recursive call is either on an instance with strictly fewer variables or on one in which at least one variable has had its unary constraint reduced to a singleton and no variable’s unary constraint has increased. For general inputs, the algorithm may take exponential time to run but, in Lemma \[lem:quickarc\] we show that the running time is polynomial for the inputs we are interested in. We first argue that the algorithm is correct. By Lemma \[lemma:simple\], we may assume that the given instance $(V,C)$ is simple. Every satisfying assignment $\sigma\colon V\to D$ satisfies $\sigma(v)\in D_v$ for all $v\in V$ so restricting our attention to arc-consistent domains does not alter the output. Factoring the constraints also does not change the number of satisfying assignments: it merely replaces some binary constraints with equivalent unary ones. The constraints are factored, so any variable $v$ with $|D_v|=1$ must, in fact, be an isolated vertex in the constraint graph because, as noted above, any binary constraint involving it has been replaced by unary constraints. Therefore, if a component $H_i$ contains a variable $v$ with $|D_v|=1$, that component is the single vertex $v$, which is constrained to take a single value, so the number of satisfying assignments for this component, which we denote $Z_i$, is equal to $1$. (So we have now shown that the if branch in the for loop is correct.) For components that contain more than one variable, it is clear that we can choose one of those variables, $w_i$, and group the set of $M$-partitions $\sigma$ according to the value of $\sigma(w_i)$. (So we have now shown that the else branch is correct.) Because there are no constraints between variables in different components of the constraint graph, the number of satisfying assignments factorises as $\prod_{i=1}^\kappa Z_i$. 
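The control flow just described (prune by arc-consistency, split the constraint graph into components, pin a chosen variable $w_i$ to each value of its domain in turn) can be sketched as follows. This is our own simplified rendering for simple instances, not the paper's Algorithm \[alg:AC\]: it omits the factoring step, so it is correct on all inputs but carries no polynomial-time guarantee.

```python
def count_satisfying(D, variables, unary, binary):
    """Count assignments sigma: variables -> D satisfying all unary and
    binary constraints, by the prune / split / pin scheme."""
    # Arc-consistency: prune each domain to supported values.
    dom = {v: set(unary.get(v, D)) for v in variables}
    changed = True
    while changed:
        changed = False
        for (u, v), R in binary.items():
            ku = {a for a in dom[u] if any((a, b) in R for b in dom[v])}
            kv = {b for b in dom[v] if any((a, b) in R for a in dom[u])}
            if ku != dom[u] or kv != dom[v]:
                dom[u], dom[v], changed = ku, kv, True
    if any(not dom[v] for v in variables):
        return 0    # an empty domain: unsatisfiable
    if all(len(dom[v]) == 1 for v in variables):
        return 1    # arc-consistent singletons satisfy every constraint
    # Connected components of the constraint graph.
    adj = {v: set() for v in variables}
    for (u, v) in binary:
        adj[u].add(v)
        adj[v].add(u)
    seen, components = set(), []
    for v in variables:
        if v not in seen:
            comp, stack = set(), [v]
            while stack:
                w = stack.pop()
                if w not in comp:
                    comp.add(w)
                    stack.extend(adj[w] - comp)
            seen |= comp
            components.append(sorted(comp))
    # The count factorises over components; pin one variable per component.
    total = 1
    for comp in components:
        sub_binary = {e: R for e, R in binary.items() if e[0] in comp}
        pinnable = [v for v in comp if len(dom[v]) > 1]
        if not pinnable:
            z = 1   # unique arc-consistent assignment on this component
        else:
            w = pinnable[0]
            z = 0
            for d in sorted(dom[w]):
                sub_unary = {v: dom[v] for v in comp}
                sub_unary[w] = {d}
                z += count_satisfying(D, comp, sub_unary, sub_binary)
        total *= z
    return total
```

On the triangle instance of the previous section this returns $0$ (pinning $x$ forces the other domains empty), even though arc-consistency alone leaves all domains full.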
For a binary relation $R$, we write $$\begin{aligned} \pi_1(R) &= \{a \mid (a,b)\in R \text{ for some }b\} \\ \pi_2(R) &= \{b \mid (a,b)\in R \text{ for some }a\}\,.\end{aligned}$$ For the following proof, we will also need the observation of Dyer and Richerby [@DRfull Lemma 1] that any rectangular relation $R\subseteq \pi_1(R)\times \pi_2(R)$ can be written as $(A_1\times B_1) \cup \dots \cup (A_\lambda \times B_\lambda)$, where the $A_i$ and $B_i$ partition $\pi_1(R)$ and $\pi_2(R)$, respectively. The subrelations $A_i\times B_i$ are referred to as [*blocks*]{}. A rectangular relation $R\neq \pi_1(R)\times \pi_2(R)$ must have at least two blocks. \[lem:quickarc\] Suppose that ${\ensuremath{\mathcal L}}$ is subset-closed and $M$-purifying. If there is no [$\mathcal L$]{}-$M$-derectangularising sequence, then Algorithm \[alg:AC\] runs in polynomial time. We will argue that the number of recursive calls made by the function AC in Algorithm \[alg:AC\] is bounded above by a polynomial in $|V|$. This suffices, since every other step of the procedure is obviously polynomial. Consider a run of the algorithm on instance $(V,C)$ which, by Lemma \[lemma:simple\], we may assume to be simple. Suppose the run makes a recursive call with input $(V_i,F'_{i,d})$. For each $v\in V_i$, let $D'_{v}$ denote the arc-consistent domain for $v$ that is computed during the recursive call. We will show below that $D'_v\subset D_v$ for every variable $v\in V_i$. This implies that the recursion depth is at most $|D|$. As a crude bound, it follows that the number of recursive calls is at most ${(|V|\cdot |D|)}^{|D|},$ since each recursive call that is made is nested below a sequence of at most $|D|$ previous calls, each of which chose a vertex $v\in V$ and “pinned” it to a domain element $d\in D$ (i.e., introduced the constraint ${\langle v,\{d\} \rangle}$). 
Towards showing that the domains of all variables decrease at each recursive call, suppose that we are computing $\mathrm{AC}(V,C)$ and the arc-consistent domains are $(D_v)_{v\in V}$. As observed above, for any component $H_i$ of the constraint graph on which a recursive call is made, we must have $|D_v|>1$ for every $v\in V_i$. Fix such a component and, for each $v\in V_i$, let $D'_v$ be the arc-consistent domain calculated for $v$ in the recursive call on $H_i$. It is clear that $D'_v\subseteq D_v$; we will show that $D'_v \subset D_v$. Consider a path $v_1\dots v_\ell$ in $H_i$, where $v_1=w_i$ and $v_\ell=v$. For each $j\in[\ell-1]$, there is exactly one binary constraint in $F_i$ involving $v_j$ and $v_{j+1}$. This is either ${\langle (v_j, v_{j+1}), R_j \rangle}$ or ${\langle (v_{j+1}, v_j), R_j^{-1} \rangle}$ and, without loss of generality, we may assume that it is the former. For $j\in[\ell-1]$, let $R'_j = R_j \cap (D_{v_j} \times D_{v_{j+1}}) = H^M_{D_{v_j},D_{v_{j+1}}}$. The relation $R'_j$ is pure because $D_{v_j}$ and $D_{v_{j+1}}$ are in the subset-closed set ${\ensuremath{\mathcal L}}$ and, since ${\ensuremath{\mathcal L}}$ is $M$-purifying, so is $\{D_{v_j},D_{v_{j+1}}\}$. These two domains do not form a derectangularising sequence by the hypothesis of the lemma, so $H^M_{D_{v_j},D_{v_{j+1}}}$ is rectangular. If some $R_j=\emptyset$ then $D_{v_j} = D_{v_{j+1}} = \emptyset$ by arc-consistency, contradicting the fact that $|D_v|>1$ for all $v\in V_i$. If some $R'_j$ has just one block, $R_j\cap (D_{v_j}\times D_{v_{j+1}})$ is a Cartesian product, contradicting the fact that $F$ is a factored set of constraints. Thus, every $R'_j$ has at least two blocks. For $j\in[\ell-1]$, let $\Phi_j = R'_1 \circ \dots \circ R'_j$. As above, note that $\{D_{v_1}, \ldots, D_{v_{j+1}}\}$ is $M$-purifying and the sequence $D_{v_1}, \dots, D_{v_{j+1}}$ is not derectangularising, so $\Phi_j$ is rectangular. 
We will show by induction on $j$ that $\pi_1(\Phi_j) = D_{v_1}$, $\pi_2(\Phi_j) = D_{v_{j+1}}$ and $\Phi_j$ has at least two blocks. Therefore, since the recursive call constrains $\sigma(w_i)$ to be $d$ and $d\in A$ for some block $A\times B\subset \Phi_{\ell-1}$, we have $D'_v\subseteq B\subset D_v$, which is what we set out to prove. For the base case of the induction, take $j=1$ so $\Phi_1=R'_1$. We showed above that $R'_1$ has at least two blocks and that $R'_1= H^M_{D_{v_1},D_{v_2}}$. By arc-consistency, $\pi_1(R'_1) = D_{v_1}$ and $\pi_2(R'_1) = D_{v_2}$. For the inductive step, take $j\in [\ell-2]$. Suppose that $\pi_1(\Phi_j)=D_{v_1}$, $\pi_2(\Phi_j)=D_{v_{j+1}}$ and $\Phi_j = \bigcup_{s=1}^\lambda (A_s\times A'_s)$ has at least two blocks. We have $\Phi_{j+1} = \Phi_j\circ R'_{j+1}$ and $R'_{j+1} = \bigcup_{t=1}^\mu (B_t\times B'_t)$ for some $\mu \geq 2$. For every $d\in D_{v_1}$, there is a $d'\in D_{v_{j+1}}$ such that $(d,d')\in \Phi_j$ by the inductive hypothesis, and a $d''\in D_{v_{j+2}}$ such that $(d'\!, d'')\in R'_{j+1}$, by arc-consistency. Therefore, $\pi_1(\Phi_{j+1}) = D_{v_1}$; a similar argument shows that $\pi_2(\Phi_{j+1}) = D_{v_{j+2}}$. Suppose, towards a contradiction, that $\Phi_{j+1} = D_{v_1}\times D_{v_{j+2}}$. For this to be the case, we must have $A'_s\cap B_t\neq\emptyset$ for every $s\in\{1,2\}$ and $t\in[\mu]$. Now, let $D^*_{v_{j+1}}=D_{v_{j+1}}\setminus (A'_2\cap B_2)$ and consider the relation $$R = \{(d_1, d_3) \mid \mbox{for some $d_2\in D^*_{v_{j+1}} $, $(d_1, d_2)\in \Phi_j$ and $(d_2, d_3)\in R'_{j+1}$ }\}.$$ Since $A'_1 \subseteq D^*_{v_{j+1}}$ the non-empty sets $A'_1 \cap B_1$ and $A'_1 \cap B_2$ are both subsets of $D^*_{v_{j+1}}$ so $A_1\times B'_1\subseteq R$ and $A_1\times B'_2\subseteq R$. Similarly, $B_1 \subseteq D^*_{v_{j+1}}$, so $A'_2 \cap B_1 \subseteq D^*_{v_{j+1}}$ so $A_2\times B'_1\subseteq R$. However, $(A_2\times B'_2)\cap R = \emptyset$, so $R$ is not rectangular.
We will now derive a contradiction by showing that $R$ is rectangular. Note that $$R = H^M_{D_{v_1},D_{v_2}} \circ \cdots \circ H^M_{D_{v_{j-1}},D_{v_j}} \circ H^M_{D_{v_j},D^*_{v_{j+1}}} \circ H^M_{D^*_{v_{j+1}},D_{v_{j+2}}}$$ but this relation is rectangular because the hypothesis of the lemma guarantees that the sequence $$D_{v_1},\ldots,D_{v_{j}},D^*_{v_{j+1}},D_{v_{j+2}}$$ is not an ${\ensuremath{\mathcal L}}$-$M$-derectangularising sequence and all of the elements of this sequence are in ${\ensuremath{\mathcal L}}$, and $\{D_{v_1},\ldots,D_{v_{j}},D^*_{v_{j+1}},D_{v_{j+2}}\}$ is $M$-purifying.

Polynomial-time algorithms and the dichotomy theorem {#sec:dichotomy}
====================================================

Bulatov [@Bul08] showed that every problem of the form ${\ensuremath{\mathrm{\#CSP}}}(\Gamma)$ is either in ${\ensuremath{\mathrm{FP}}}$ or [$\mathrm{\#P}$]{}-complete. Together with Proposition \[prop:purifiediscsp\], his result immediately shows that a similar dichotomy exists for the special case of the problem [[\#$\mathcal L$-$M$-partitions]{}]{} in which $\mathcal L$ is $M$-purifying and is closed under subsets. Our algorithmic work in Section \[sec:arc\] can be combined with Dyer and Richerby’s explicit dichotomy for ${\ensuremath{\mathrm{\#CSP}}}$ to obtain an explicit dichotomy for this special case of [[\#$\mathcal L$-$M$-partitions]{}]{}. In particular, Lemma \[lem:quickarc\] gives a polynomial-time algorithm for the case in which there is no [$\mathcal L$]{}-$M$-derectangularising sequence. When there is such a sequence, ${\Gamma_{\!{\ensuremath{\mathcal L}}, M}}$ is not “strongly rectangular” in the sense of [@DRfull]. It follows immediately that ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$ is [$\mathrm{\#P}$]{}-complete [@DRfull Lemma 24] so [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is also [$\mathrm{\#P}$]{}-complete by Proposition \[prop:purifiediscsp\].
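Rectangularity and relational composition drive both the argument above and the dichotomy below. The following Python sketch is illustrative only (the encoding of relations as sets of pairs and the function names are our choices, not the paper's); it also exhibits two rectangular relations whose composition is not rectangular, which is exactly the phenomenon that derectangularising sequences capture.

```python
# Illustrative sketch, not code from the paper: a binary relation R is
# rectangular when (a,b), (a',b), (a,b') in R force (a',b') in R, i.e.
# R is a disjoint union of "blocks" A_s x B_s.

def compose(R, S):
    """Relational composition: {(a, c) | (a, b) in R and (b, c) in S}."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def is_rectangular(R):
    """Brute-force check of the rectangularity condition."""
    R = set(R)
    return all((a2, b2) in R
               for (a, b) in R
               for (a2, bb) in R if bb == b
               for (aa, b2) in R if aa == a)

# Two rectangular relations whose composition is not rectangular:
R = {(1, 'a'), (1, 'b'), (2, 'c')}   # blocks {1}x{a,b} and {2}x{c}
S = {('a', 3), ('b', 4), ('c', 4)}   # blocks {a}x{3} and {b,c}x{4}
assert is_rectangular(R) and is_rectangular(S)
assert compose(R, S) == {(1, 3), (1, 4), (2, 4)}
assert not is_rectangular(compose(R, S))
```

The failure occurs because the two blocks of `R` are glued together by the block `{b,c}x{4}` of `S`, so the composite has overlapping rows and columns without being a full product.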
In fact, the dichotomy for this special case does not require the full generality of Dyer and Richerby’s dichotomy. If there is an [$\mathcal L$]{}-$M$-derectangularising sequence then it follows immediately from work of Bulatov and Dalmau [@BD Theorem 2 and Corollary 3] that ${\ensuremath{\mathrm{\#CSP}}}({\Gamma_{\!{\ensuremath{\mathcal L}}, M}})$ is [$\mathrm{\#P}$]{}-complete. In this section we will move beyond the case in which ${\ensuremath{\mathcal L}}$ is $M$-purifying to provide a full dichotomy for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{}. We will use two data structures: *sparse-dense partitions* and a representation of the set of *splits* of a bipartite graph. Similar data structures were used by Hell et al. [@HHN] in their dichotomy for the [[\#$M$-partitions]{}]{} problem for matrices of size at most $3$-by-$3$.

Data Structures {#sec:DS}
---------------

We use two types of graph partition. The first is a special case of a sparse-dense partition [@FHKM] which is also called an $(a,b)$-graph with $a=b=2$.

\[def:bsd\] A bipartite–cobipartite partition of a graph $G$ is a partition $(B,C)$ of $V(G)$ such that $B$ induces a bipartite graph and $C$ induces the complement of a bipartite graph.

\[lem:sparsedense\][@FHKM Theorem 3.1; see also the remarks on $(a,b)$-graphs.] There is a polynomial-time algorithm for finding all bipartite–cobipartite partitions of a graph $G$.

The second decomposition is based on certain sub-hypercubes called subcubes. For any finite set $U\!$, a [*subcube*]{} of $\{0,1\}^U$ is a subset of $\{0,1\}^U$ that is a Cartesian product of the form $\prod_{u\in U} S_u$ where $S_u\in\{\{0\},\{1\},\{0,1\}\}$ for each $u\in U\!$. We can also associate a subcube $\prod_{u\in U} S_u$ with the set of assignments $\sigma\colon U\to \{0,1\}$ such that $\sigma(u)\in S_u$ for all $u\in U\!$. Subcubes can be represented efficiently by listing the projections $S_u$.
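As a concrete illustration of the projection representation just described (the encoding and names below are ours, not the paper's), a subcube can be stored as a map from each $u\in U$ to its projection $S_u$, which makes membership and size queries immediate.

```python
from itertools import product

# A subcube of {0,1}^U stored by its projections S_u, each one of
# {0}, {1} or {0,1}.  Encoding and names are our own illustration.

def in_subcube(sigma, cube):
    """Does the assignment sigma (a dict u -> 0/1) lie in the subcube?"""
    return all(sigma[u] in S for u, S in cube.items())

def size(cube):
    """Number of assignments in the subcube: the product of |S_u|."""
    n = 1
    for S in cube.values():
        n *= len(S)
    return n

U = ['u1', 'u2', 'u3']
cube = {'u1': {0}, 'u2': {0, 1}, 'u3': {1}}          # {0} x {0,1} x {1}
all_assignments = ({u: b for u, b in zip(U, bits)}
                   for bits in product([0, 1], repeat=len(U)))
assert sum(in_subcube(s, cube) for s in all_assignments) == size(cube) == 2
```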
Let $G=(U,U'\!,E)$ be a bipartite graph, where $U$ and $U'$ are disjoint vertex sets, and $E\subseteq U\times U'\!$. A [*subcube decomposition*]{} of $G$ is a list $U_1,\dots,U_k$ of subcubes of $\{0,1\}^U$ and a list $U'_1,\dots, U'_k$ of subcubes of $\{0,1\}^{U'}$ such that the following hold.

- The union $(U_1\times U'_1)\cup \dots \cup (U_k\times U'_k)$ is the set of assignments $\sigma\colon U\cup U'\to\{0,1\}$ such that: $$\begin{aligned} & \label{todayone} \mbox{no edge $(u,u')\in E$ has $\sigma(u)=\sigma(u')=0$ and}\\ & \label{todaytwo} \mbox{no pair $(u,u')\in (U\times U')\setminus E$ has $\sigma(u)= \sigma(u')=1$.} \end{aligned}$$

- For distinct $i,j\in[k]$, $U_i\times U'_i$ and $U_j\times U'_j$ are disjoint.

- For each $i\in[k]$, either $|U_i|=1$ or $|U'_i|=1$ (or both).

Note that, although we require $U_i\times U'_i$ and $U_j\times U'_j$ to be disjoint for distinct $i,j\in[k]$, we allow $U_i\cap U_j\neq\emptyset$ as long as $U'_i$ and $U'_j$ are disjoint, and vice-versa. It is even possible that $U_i=U_j$, and indeed this will happen in our constructions below.

\[lem:splittocubes\] A subcube decomposition of a bipartite graph $G=(U,U'\!,E)$ can be computed in polynomial time, with the subcubes represented by their projections.

For a vertex $x$ in a bipartite graph, let $\Gamma(x)$ be its set of neighbours and let ${\overline{\Gamma}}(x)$ be its set of non-neighbours on the other side of the graph. Thus, for $x\in U\!$, ${\overline{\Gamma}}(x) = U'\setminus \Gamma(x)$ and, for $x\in U'\!$, ${\overline{\Gamma}}(x) = U\setminus \Gamma(x)$. Observe that we can write $\{0,1\}^n\setminus \{0\}^n$ as the disjoint union of $n$ subcubes $\{0\}^{k-1}\times \{1\}^1\times \{0,1\}^{n-k}$ with $1\leq k\leq n$, and similarly for any other cube minus a single point. We first deal with two base cases.
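Before turning to the base cases, the observation above that a cube minus a single point splits into $n$ disjoint subcubes can be checked directly. This short sketch (function names are ours) builds the subcubes $\{0\}^{k-1}\times\{1\}\times\{0,1\}^{n-k}$ and verifies that they cover $\{0,1\}^n\setminus\{0\}^n$ exactly once.

```python
from itertools import product

# The n subcubes {0}^(k-1) x {1} x {0,1}^(n-k), k = 1..n, from the
# observation in the text; each subcube is a list of projections.

def cube_minus_zero(n):
    return [[{0}] * (k - 1) + [{1}] + [{0, 1}] * (n - k)
            for k in range(1, n + 1)]

def members(cube):
    """Expand a subcube (list of projections) into its set of points."""
    return set(product(*cube))

n = 4
points = [p for c in cube_minus_zero(n) for p in members(c)]
assert len(points) == len(set(points)) == 2 ** n - 1   # disjoint cover
assert (0,) * n not in set(points)                     # all-zero excluded
```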
If $G$ has no edges, then the set of assignments $\sigma\colon U\cup U'\to\{0,1\}$ satisfying (\[todayone\]) and (\[todaytwo\]) is the disjoint union of $$\{0\}^U\times \{0\}^{U'}, \quad (\{0,1\}^U\setminus\{0\}^U)\times \{0\}^{U'}, \quad \text{and} \quad \{0\}^U\times(\{0,1\}^{U'}\setminus\{0\}^{U'}).$$ The second and third terms can be decomposed into subcubes as described above to produce the output. Similarly, if $G$ is a complete bipartite graph, then the set of assignments satisfying (\[todayone\]) and (\[todaytwo\]) is the disjoint union of $$\{1\}^U\times \{1\}^{U'}, \quad (\{0,1\}^U\setminus\{1\}^U)\times \{1\}^{U'}, \quad \text{and} \quad \{1\}^U\times(\{0,1\}^{U'}\setminus\{1\}^{U'}).$$ If neither of these cases occurs then there is a vertex $x$ such that neither $\Gamma(x)$ nor ${\overline{\Gamma}}(x)$ is empty. If possible, choose $x\in U$; otherwise, choose $x\in U'\!$. To simplify the description of the algorithm, we assume that $x\in U$; the other case is symmetric. We consider separately the assignments where $\sigma(x)=0$ and those where $\sigma(x)=1$. Note that, for any assignment, if $\sigma(y)=0$ for some vertex $y$, then $\sigma(z)=1$ for all $z\in\Gamma(y)$ and, if $\sigma(y)=1$, then $\sigma(z)=0$ for all $z\in{\overline{\Gamma}}(y)$. Applying this iteratively, setting $\sigma(x)=c$ for $c\in\{0,1\}$ also determines the value of $\sigma$ on some set $S_{x=c}\subseteq U\cup U'$ of vertices. Thus, we can compute a subcube decomposition for $G$ recursively. First, compute $S_{x=0}$ and $S_{x=1}$. Then, recursively compute subcube decompositions of $G-S_{x=0}$ (the graph formed from $G$ by deleting the vertices in $S_{x=0}$) and $G-S_{x=1}$.
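The iterative forcing step that computes $S_{x=c}$ can be sketched as follows. This is our own illustrative Python (the function name and the graph encoding are assumptions, not the paper's), implementing the rule just stated: a vertex set to $0$ forces $1$ on all its neighbours, and a vertex set to $1$ forces $0$ on all its non-neighbours on the other side.

```python
# Sketch (names ours) of computing S_{x=c}: starting from sigma(x) = c,
# repeatedly apply the forcing rule from the text.  U and Uprime are the
# two disjoint vertex sets; E is a set of pairs (u, u') with u in U.

def forced(U, Uprime, E, x, c):
    """Return the forced partial assignment, or None on contradiction."""
    E = set(E)
    other = {u: Uprime for u in U} | {u2: U for u2 in Uprime}
    sigma = {x: c}
    queue = [x]
    while queue:
        y = queue.pop()
        for z in other[y]:
            if sigma[y] == 0 and ((y, z) in E or (z, y) in E):
                val = 1          # a 0 forces 1 on neighbours
            elif sigma[y] == 1 and (y, z) not in E and (z, y) not in E:
                val = 0          # a 1 forces 0 on non-neighbours
            else:
                continue
            if sigma.get(z, val) != val:
                return None      # contradiction: no assignment with sigma(x)=c
            if z not in sigma:
                sigma[z] = val
                queue.append(z)
    return sigma

# Toy example: U = {a,b}, U' = {p,q}, single edge (a,p).  Setting a to 0
# forces p to 1, which in turn forces the non-neighbour b to 0.
assert forced({'a', 'b'}, {'p', 'q'}, {('a', 'p')}, 'a', 0) \
    == {'a': 0, 'p': 1, 'b': 0}
```

The sketch also detects the contradictory case, in which no assignment with $\sigma(x)=c$ satisfies the constraints, by returning `None`.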
Translate these subcube decompositions into a subcube decomposition of $G$ by extending each subcube $(U_i\times U'_i)$ of $G-S_{x=c}$ to a subcube $(V_i\times V'_i)$ of $G$ whose restriction to $G-S_{x=c}$ is $(U_i\times U'_i)$ and whose restriction to $S_{x=c}$ is an assignment $\sigma$ with $\sigma(x)=c$ (in fact, all assignments that set $x$ to $c$ agree on the set $S_{x=c}$, by construction). It remains to show that the algorithm runs in polynomial time. The base cases are clearly computable in polynomial time, as are the individual steps in the recursive cases, so we only need to show that the number of recursive calls is polynomially bounded. At the recursive step, we only choose $x\in U'$ when $E(G) = U''\times U'$ for some proper subset $\emptyset\subset U''\subset U$ and, in this case, the two recursive calls are to base cases. Since each recursive call when $x\in U$ splits $U'$ into disjoint subsets, there can be at most $|U'|-1$ such recursive calls, so the total number of recursive calls is linear in $|V(G)|$.

Reduction to a problem with $M$-purifying lists
-----------------------------------------------

Our algorithm for counting list $M$-partitions uses the data structures from Section \[sec:DS\] to reduce problems where [$\mathcal L$]{} is not $M$-purifying to problems where it is (which we already know how to solve from Sections \[sec:purifiedcsp\] and \[sec:arc\]). The algorithm is defined recursively on the set ${\ensuremath{\mathcal L}}$ of allowed lists. The algorithm for parameters ${\ensuremath{\mathcal L}}{}$ and $M$ calls the algorithm for ${\ensuremath{\mathcal L}}_i$ and $M$ where ${\ensuremath{\mathcal L}}_i$ is a subset of ${\ensuremath{\mathcal L}}$. The base case arises when ${\ensuremath{\mathcal L}}_i$ is $M$-purifying.
We will use the following computational problem to reduce [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} to a collection of problems [[\#${\ensuremath{\mathcal L}}'$-$M$-partitions]{}]{} that are, in a sense, disjoint.

[[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{}.

*Input:* A graph $G$ and a function $L\colon V(G)\to{\ensuremath{\mathcal L}}$.

*Output:* Functions $L_1,\dots,L_t\colon V(G)\to{\ensuremath{\mathcal L}}$ such that

- for each $i\in[t]$, the set $\{L_i(v) \mid v\in V(G)\}$ is $M$-purifying,
- for each $i\in [t]$ and $v \in V(G)$, $L_i(v) \subseteq L(v)$, and
- each $M$-partition of $G$ that respects $L$ respects exactly one of $L_1,\dots,L_t$.

We will give an algorithm for solving the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{} in polynomial time when there is no [$\mathcal L$]{}-$M$-derectangularising sequence of length exactly 2. The following computational problem will be central to the inductive step.

[[\#${\ensuremath{\mathcal L}}$-$M$-purify-step]{}]{}.

*Input:* A graph $G$ and a function $L\colon V(G)\to {\ensuremath{\mathcal L}}$.

*Output:* Functions $L_1,\dots, L_k\colon V(G)\to {\ensuremath{\mathcal L}}$ such that

- for each $i\in [k]$ and $v \in V(G)$, $L_i(v) \subseteq L(v)$,
- every $M$-partition of $G$ that respects $L$ respects exactly one of $L_1,\dots,L_k$, and
- for each $i\in[k]$, there is a $W\in{\ensuremath{\mathcal L}}{}$ which is inclusion-maximal in [$\mathcal L$]{} but does not occur in the image of $L_i$.

Note that we can trivially produce a solution to the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify-step]{}]{} by letting $L_1, \dots, L_k$ be an enumeration of all possible functions such that all lists $L_i(v)$ have size $1$ and satisfy $L_i(v) \subseteq L(v)$. Such a function $L_i$ corresponds to an assignment of vertices to parts so there is either exactly one $L_i$-respecting $M$-partition or none, which means that every $L$-respecting $M$-partition is $L_i$-respecting for exactly one $i$.
However, this solution is exponentially large in $|V(G)|$ and we are interested in solutions that can be produced in polynomial time. Also, if $L(v)=\emptyset$ for some vertex $v$, the algorithm is entitled to output an empty list, since no $M$-partition respects $L$. The following definition extends rectangularity to $\{0,1,*\}$-matrices and is used in our proof.

A matrix $M\in\{0,1,*\}^{X\times Y}$ is [*$*$-rectangular*]{} if the relation $H^M_{X,Y}$ is rectangular. Thus, $M$ is $*$-rectangular if and only if $M_{x,y}=M_{x'\!,y}=M_{x,y'}=*$ implies that $M_{x'\!,y'}=*$ for all $x,x'\in X$ and all $y,y'\in Y\!$.

We will show in Lemma \[lem:claim\] that the function [[\#${\ensuremath{\mathcal L}}$-$M$-purify-step]{}]{} from Algorithm \[alg:purifystep\] is a polynomial-time algorithm for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify-step]{}]{} whenever [$\mathcal L$]{} is not $M$-purifying and there is no length-2 [$\mathcal L$]{}-$M$-derectangularising sequence. Note that a length-2 [$\mathcal L$]{}-$M$-derectangularising sequence is a pair $X,Y\in{\ensuremath{\mathcal L}}$ such that $M|_{X\times Y}$, $M|_{X\times X}$ and $M|_{Y\times Y}$ are pure and $M|_{X\times Y}$ is not $*$-rectangular. If ${\ensuremath{\mathcal L}}\neq {{\mathcal{P}(D)}}$, it is possible that a matrix that is not $*$-rectangular has no length-2 [$\mathcal L$]{}-$M$-derectangularising sequence. For example, let $D=\{1,2,3\}$ and ${\ensuremath{\mathcal L}}= {{\mathcal{P}(\{1,2\})}}$ and let $M_{3,3}=0$ and $M_{i,j}=*$ for every other pair $(i,j)\in D^2\!$. $M$ is not $*$-rectangular but this fact is not witnessed by any submatrix $M|_{X\times Y}$ for $X,Y\in{\ensuremath{\mathcal L}}$.

\[lem:claim\] Let $M$ be a symmetric matrix in $\{0,1,*\}^{D\times D}$ and let ${\ensuremath{\mathcal L}}{}\subseteq{{\mathcal{P}(D)}}$ be subset-closed.
If [$\mathcal L$]{} is not $M$-purifying and there is no length-2 [$\mathcal L$]{}-$M$-derectangularising sequence, then Algorithm \[alg:purifystep\] is a polynomial-time algorithm for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify-step]{}]{}.

We consider an instance $(G,L)$ of the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify-step]{}]{} with $V(G)=\{v_1,\ldots,v_n\}$. If there is a $v_i\in V(G)$ with $L(v_i)=\emptyset$ then no $M$-partition of $G$ respects $L$, so the output is correct. Otherwise, we consider the three cases that can occur in the execution of the algorithm.

#### Case 1.

In this case column $d$ of $M|_{X\times Y}$ contains both a zero and a one. Equivalently, row $d$ of $M|_{Y\times X}$ does. Algorithm \[alg:Case1\] groups the set of $M$-partitions of $G$ that respect $L$, based on the first vertex that is placed in part $d$. For $i\in[n]$, $L_i$ requires that $v_i$ is placed in part $d$ and $v_1, \dots, v_{i-1}$ are not in part $d$; $L_{n+1}$ requires that part $d$ is empty. Thus, no $M$-partition can respect more than one of the $L_i$. Now consider an $L$-respecting $M$-partition $\sigma\colon V(G)\to D$ and suppose that $i$ is minimal such that $\sigma(v_i)=d$. We claim that $\sigma$ respects $L_i$. We have $\sigma(v_i)=d$, as required. For $j\neq i$, we must have $\sigma(v_j)\in L(v_j)$ since $\sigma$ respects $L$ and we must have $M_{d,\sigma(v_j)}\neq 1$ if $(v_i, v_j)\notin E(G)$ and $M_{d,\sigma(v_j)}\neq 0$ if $(v_i,v_j)\in E(G)$, since $\sigma$ is an $M$-partition. In addition, by construction, $\sigma(v_j)\neq d$ if $j<i$. Therefore, $\sigma$ respects $L_i$. A similar argument shows that $\sigma$ respects $L_{n+1}$ if $\sigma(v)\neq d$ for all $v\in V(G)$. Hence, any $M$-partition that respects $L$ respects exactly one of the $L_i$. Finally, we show that, for each $i\in[n+1]$, there is a set $W$ which is inclusion-maximal in [$\mathcal L$]{} and is not in the image of $L_i$.
For $i\in [n]$, we cannot have both $a$ and $b$ in $L_i(v_j)$ for any $v_j$, so $X$ is not in the image of $L_i$. $Y$ contains $d$, so $Y$ is not in the image of $L_{n+1}$.

#### Case 2.

In this case, every row of $M|_{X_0\times X}$ contains a 0, while every row of $M|_{X_1\times X}$ fails to contain a zero. Since $M|_{X\times X}$ is not pure, but no row of $M|_{X\times X}$ contains both a zero and a one (since we are not in Case 1), $X_0$ and $X_1$ are non-empty. Note that $M|_{X_0\times X_0}$ and $M|_{X_1\times X_1}$ are both pure, while every entry of $M|_{X_0\times X_1}$ is a $*$. If $V_X=\emptyset$ then $X$ is an inclusion-maximal member of [$\mathcal L$]{} that is not in the image of $L$, so the output of Algorithm \[alg:Case2\] is correct. Otherwise, $(B_1,C_1),\dots,(B_k,C_k)$ is the list containing all partitions $(B,C)$ of $V_X$ such that $B$ induces a bipartite graph in $G$ and $C$ induces the complement of a bipartite graph. The algorithm returns $L_1,\ldots, L_k$. $X$ is not in the image of any $L_i$ so, to show that $\{L_1, \dots, L_k\}$ is a correct output for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify-step]{}]{}, we just need to show that every $M$-partition of $G$ that respects $L$ respects exactly one of $L_1,\dots,L_k$. For $i\neq i'$, $(B_i,C_i)\neq (B_{i'},C_{i'})$ so there is at least one vertex $v_j$ such that $L_i(v_j)=X_0$ and $L_{i'}(v_j)=X_1$ or vice-versa. Since $X_0$ and $X_1$ are disjoint, no $M$-partition can simultaneously respect $L_i$ and $L_{i'}$. It remains to show that every $M$-partition respects at least one of $L_1, \dots, L_k$. To do this, we deduce two structural properties of $M|_{X\times X}$. First, we show that $M|_{X\times X}$ has no $*$ on its diagonal. Suppose towards a contradiction that $M_{d,d}=*$ for some $d\in X$. If $d\in X_0$, then, for each $d'\in X_1$, $M_{d,d'}=M_{d'\!,d}=*$ because, as noted above, every entry of $M|_{X_0\times X_1}$ is a $*$.
Therefore, the $2\times 2$ matrix $M'=M|_{\{d,d'\}\times \{d,d'\}}$ contains at least three $*$s so it is pure. $\{d,d'\} \subseteq X\in {\ensuremath{\mathcal L}}$ so, by the hypothesis of the lemma, the length-2 sequence $\{d,d'\},\{d,d'\}$ is not [$\mathcal L$]{}-$M$-derectangularising, so $M'$ must be $*$-rectangular, so $M_{d'\!,d'}=*$ for all $d'\in X_1$. Similarly, if $M_{d'\!,d'}=*$ for some $d'\in X_1$, then $M_{d,d}=*$ for all $d\in X_0$. Therefore, if $M|_{X\times X}$ has a $*$ on its diagonal, every entry on the diagonal is $*$. But $M$ contains a 0, say $M_{i,j}=0$ with $i,j\in X_0$. For any $k\in X_1$, $$M|_{\{i,j\}\times \{j,k\}} = \begin{pmatrix} 0 & * \\ * & * \end{pmatrix},$$ so the length-2 sequence $\{i,j\}, \{j,k\}$ is [$\mathcal L$]{}-$M$-derectangularising, contradicting the hypothesis of the lemma (note that $\{i,j\},\{j,k\}\subseteq X\in{\ensuremath{\mathcal L}}$). Second, we show that there is no sequence $d_1,\dots,d_\ell\in X_0$ of odd length such that $$M_{d_1,d_2}=M_{d_2,d_3}=\dots=M_{d_{\ell-1},d_\ell}=M_{d_\ell,d_1}=*\,.$$ Suppose for a contradiction that such a sequence exists. Note that $M|_{X_0\times X_0}$ is $*$-rectangular since $X_0,X_0$ is not an [$\mathcal L$]{}-$M$-derectangularising sequence and $M|_{X_0\times X_0}$ is pure since Case 1 does not apply. We will show by induction that for every non-negative integer $\kappa \leq (\ell-3)/2$, $M_{d_1,d_{\ell-2\kappa-2}}=*$. This gives a contradiction by taking $\kappa=(\ell-3)/2$ since $M_{d_1,d_1}=*$ and we have already shown that $M|_{X_0\times X_0}$ has no $*$ on its diagonal. For every $\kappa$, the argument follows by considering the matrix $M_\kappa = M|_{\{d_1,d_{\ell-2\kappa-1}\} \times \{d_{\ell-2\kappa-2},d_{\ell-2\kappa}\}}$. The definition of the sequence $d_1,\ldots,d_\ell$ together with the symmetry of $M$ guarantees that both entries in row $d_{\ell-2\kappa-1}$ of $M_\kappa$ are equal to $*$. 
It is also true that $M_{d_1,d_{\ell-2\kappa}}=*$: If $\kappa=0$ then this follows from the definition of the sequence; otherwise it follows by induction. The fact that $M_{d_1,d_{\ell-2\kappa-2}}=*$ then follows by $*$-rectangularity. This second structural property implies that, for any $M|_{X\times X}$-partition of $G[V_X]$, the graph induced by vertices assigned to $X_0$ has no odd cycles, and is therefore bipartite. Similarly, the vertices assigned to $X_1$ induce the complement of a bipartite graph. Therefore, any $M$-partition of $G$ that respects $L$ must respect at least one of the $L_1, \dots, L_k$, so it respects exactly one of them, as required.

#### Case 3.

Since Cases 1 and 2 do not apply and [$\mathcal L$]{} is not $M$-purifying, there are distinct $X,Y\in {\ensuremath{\mathcal L}}$ such that $X$ and $Y$ are inclusion-maximal in [$\mathcal L$]{} and $M|_{X\times Y}$ is not pure. As in the previous case, the sets $X_0$, $X_1$, $Y_0$ and $Y_1$ are all non-empty. If either $V_X$ or $V_Y$ is empty then either $X$ or $Y$ is an inclusion-maximal set in [$\mathcal L$]{} that is not in the image of $L$ so the output of Algorithm \[alg:Case3\] is correct. Otherwise, $(U_1,U'_1),\dots,(U_k,U'_k)$ is a subcube decomposition of the bipartite subgraph $(V_X,V_Y,E)$. The $U_i$s are subcubes of $\{0,1\}^{V_X}$ and the $U'_i$s are subcubes of $\{0,1\}^{V_Y}$. The algorithm returns $L_1,\ldots,L_k$. Note that if $|U'_i|=1$ then $Y$ is not in the image of $L_i$. Similarly, if $|U'_i|>1$ but $|U_i|=1$ then $X$ is not in the image of $L_i$. The definition of subcube decompositions guarantees that, for every $i$, at least one of these is the case. To show that this definition of $L_1,\ldots,L_k$ is a correct output for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify-step]{}]{}, we must show that any $M$-partition of $G$ that respects $L$ also respects exactly one $L_i$.
Since the sets in $\{U_i \times U'_i \mid i\in[k]\}$ are disjoint subsets of $\{0,1\}^{V_X\cup V_Y}$, any $M$-partition of $G$ that respects $L$ respects at most one $L_i$ so it remains to show that every $M$-partition of $G$ respects at least one $L_i$. To do this, we deduce two structural properties of $M|_{X\times Y}$. First, we show that every entry of $M|_{X_0\times Y_0}$ is $0$. The definition of $X_0$ guarantees that every row of $M|_{X_0\times Y_0}$ contains a $0$. Since Case 1 does not apply, and $M$ is symmetric, every entry of $M|_{X_0\times Y_0}$ is either $0$ or $*$. Suppose for a contradiction that $M_{i,j}=*$ for some $(i,j)\in X_0\times Y_0$. Pick $i'\in X_1$. For any $j'\in Y_0\setminus\{j\}$ we have $M_{i,j}=M_{i'\!,j}=M_{i'\!,j'}=*$, so by $*$-rectangularity of $M|_{X\times Y_0}$ we have $M_{i,j'}=*$. Thus, every entry of $M|_{\{i\}\times Y_0}$ is $*$, so there is a $*$ in every $Y_0$-indexed column of $M$. By the same argument, swapping the roles of $X$ and $Y$, every entry in $M|_{X_0\times Y_0}$ is $*$, contradicting the fact that $M|_{X\times Y}$ contains a $0$ since $M|_{X\times Y}$ is not pure. Second, a similar argument shows that every entry of $M|_{X_1\times Y_1}$ is $1$. Thus for all $M$-partitions $\sigma$ of $G$ respecting $L$, for all $x\in V_X$ and $y\in V_Y$, if $(x,y)\in E$ then $(\sigma(x),\sigma(y))\notin X_0\times Y_0$ while if $(x,y)\notin E$ then $(\sigma(x),\sigma(y))\notin X_1\times Y_1$. Using the definition of subcube decompositions, this shows that any $M$-partition of $G$ respecting $L$ respects some $L_i$. We can now give an algorithm for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{}. The algorithm consists of the function [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{}, which is defined in Algorithm \[alg:purifytriv\] for the trivial case in which ${\ensuremath{\mathcal L}}$ is $M$-purifying and in Algorithm \[alg:purify\] for the case in which it is not. 
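The $*$-rectangularity test that underpins the case analysis above can be made concrete. The following sketch uses a dict-of-dicts matrix encoding of our own choosing (not the paper's) and replays the small example given earlier: $D=\{1,2,3\}$ with $M_{3,3}=0$ and $*$ everywhere else.

```python
# Sketch of the *-rectangularity test from the definition in the text:
# M is *-rectangular iff M[x][y] = M[x2][y] = M[x][y2] = '*' forces
# M[x2][y2] = '*'.  Matrix encoding and names are our own.

def star_rectangular(M, X, Y):
    return all(M[x2][y2] == '*'
               for x in X for y in Y if M[x][y] == '*'
               for x2 in X if M[x2][y] == '*'
               for y2 in Y if M[x][y2] == '*')

# The example from the text: D = {1,2,3}, M[3][3] = 0, '*' elsewhere.
D = [1, 2, 3]
M = {i: {j: ('0' if i == j == 3 else '*') for j in D} for i in D}
assert not star_rectangular(M, D, D)        # witnessed only via index 3
assert star_rectangular(M, [1, 2], [1, 2])  # all-* submatrix
```

Restricting to $X,Y\subseteq\{1,2\}$ leaves an all-$*$ submatrix, matching the earlier remark that the failure of $*$-rectangularity is not witnessed by any submatrix with $X,Y\in{{\mathcal{P}(\{1,2\})}}$.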
Note that for any fixed ${\ensuremath{\mathcal L}}$ and $M$ the algorithm is defined either in Algorithm \[alg:purifytriv\] or in Algorithm \[alg:purify\] and the function [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{} is not recursive. However, the *definition* is recursive, so the function [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{} defined in Algorithm \[alg:purify\] does make a call to a function [[\#${\ensuremath{\mathcal L}}_i$-$M$-purify]{}]{} for some ${\ensuremath{\mathcal L}}_i$ which is smaller than ${\ensuremath{\mathcal L}}$. The function [[\#${\ensuremath{\mathcal L}}_i$-$M$-purify]{}]{} is in turn defined in Algorithm \[alg:purifytriv\] or Algorithm \[alg:purify\]. The correctness of the algorithm follows from the definition of the problem. The following lemma bounds the running time. \[lem:reducetopure\] Let $M\in\{0,1,*\}^{D\times D}$ be a symmetric matrix and let ${\ensuremath{\mathcal L}}\subseteq{{\mathcal{P}(D)}}$ be subset-closed. If there is no length-$2$ [$\mathcal L$]{}-$M$-derectangularising sequence, then the function [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{} as defined in Algorithms \[alg:purifytriv\] and \[alg:purify\] is a polynomial-time algorithm for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{}. Note that ${\ensuremath{\mathcal L}}$ is a fixed parameter of the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{} — it is not part of the input. The proof is by induction on $|{\ensuremath{\mathcal L}}|$. If $|{\ensuremath{\mathcal L}}|=1$ then ${\ensuremath{\mathcal L}}=\{\emptyset\}$ so it is $M$-purifying. In this case, function [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{} is defined in Algorithm \[alg:purifytriv\]. It is clear that it is a polynomial-time algorithm for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{}. For the inductive step suppose that $|{\ensuremath{\mathcal L}}|>1$. 
If ${\ensuremath{\mathcal L}}$ is $M$-purifying then function [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{} is defined in Algorithm \[alg:purifytriv\] and again the result is trivial. Otherwise, function [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{} is defined in Algorithm \[alg:purify\]. Note that ${\ensuremath{\mathcal L}}\subseteq{{\mathcal{P}(D)}}$ is subset-closed and there is no length-$2$ [$\mathcal L$]{}-$M$-derectangularising sequence. From this, we can conclude that, for any subset-closed subset ${\ensuremath{\mathcal L}}'$ of ${\ensuremath{\mathcal L}}$, there is no length-$2$ ${\ensuremath{\mathcal L}}'$-$M$-derectangularising sequence. So we can assume by the inductive hypothesis that for all subset-closed ${\ensuremath{\mathcal L}}'\subset {\ensuremath{\mathcal L}}{}$, the function [[\#${\ensuremath{\mathcal L}}'$-$M$-purify]{}]{} runs in polynomial time. The result now follows from the fact that the function [[\#${\ensuremath{\mathcal L}}$-$M$-purify-step]{}]{} runs in polynomial time (as guaranteed by Lemma \[lem:claim\]) and from the fact that each ${\ensuremath{\mathcal L}}_i$ is a strict subset of ${\ensuremath{\mathcal L}}$, which follows from the definition of problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify-step]{}]{}. Each $M$-partition that respects $L$ respects exactly one of $L_1,\dots,L_k$ and, hence, it respects exactly one of the list functions that is returned.

Algorithm for [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} and proof of the dichotomy
---------------------------------------------------------------------------------------------

We can now present our algorithm for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{}. The algorithm consists of the function [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} which is defined in Algorithm \[alg:mainpurifying\] for the case in which ${\ensuremath{\mathcal L}}$ is $M$-purifying and in Algorithm \[alg:main\] when it is not.
In Algorithm \[alg:mainpurifying\], the answer is computed as AC$(V,C)$, where AC is the function from Algorithm \[alg:AC\].

\[lem:positive\] Let $M\in\{0,1,*\}^{D\times D}$ be a symmetric matrix and let ${\ensuremath{\mathcal L}}\subseteq{{\mathcal{P}(D)}}$ be subset-closed. If there is no [$\mathcal L$]{}-$M$-derectangularising sequence, then the function [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} as defined in Algorithms \[alg:mainpurifying\] and \[alg:main\] is a polynomial-time algorithm for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{}.

If ${\ensuremath{\mathcal L}}$ is $M$-purifying then the function [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is defined in Algorithm \[alg:mainpurifying\]. Proposition \[prop:purifiediscsp\] shows that the reduction in Algorithm \[alg:mainpurifying\] to a CSP instance is correct and takes polynomial time. The CSP instance can be solved by the function AC in Algorithm \[alg:AC\], whose running time is shown to be polynomial in Lemma \[lem:quickarc\]. If ${\ensuremath{\mathcal L}}$ is not $M$-purifying then the function [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is defined in Algorithm \[alg:main\]. Lemma \[lem:reducetopure\] guarantees that the function [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{} is a polynomial-time algorithm for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{}. If the list $L_1,\ldots,L_t$ is empty then there is no $M$-partition of $G$ that respects $L$ so it is correct that the function [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} returns $0$. Otherwise, we know from the definition of the problem [[\#${\ensuremath{\mathcal L}}$-$M$-purify]{}]{} that

- functions $L_1,\ldots,L_t$ are from $V(G)$ to ${\ensuremath{\mathcal L}}$,
- for each $i\in [t]$, the set $\{L_i(v) \mid v\in V(G)\}$ is $M$-purifying,
- for each $i\in [t]$ and $v \in V(G)$, $L_i(v) \subseteq L(v)$, and
- each $M$-partition of $G$ that respects $L$ respects exactly one of $L_1,\dots,L_t$.
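Since each $L$-respecting $M$-partition respects exactly one $L_i$, the per-list counts simply add. A toy sketch of this top-level step (the names are ours; `purify` and `count_purified` stand in for the purify routine and the arc-consistency-based counter for the purifying case):

```python
# Illustrative sketch only, not the paper's Algorithm: split the instance
# with a purify routine, then sum the counts of the disjoint sub-instances.

def count_partitions(G, L, purify, count_purified):
    sublists = purify(G, L)
    if not sublists:
        return 0                  # no M-partition respects L
    return sum(count_purified(G, Li) for Li in sublists)

# Toy stubs: three sub-instances with 2, 0 and 5 partitions respectively.
total = count_partitions(None, None,
                         lambda G, L: ['L1', 'L2', 'L3'],
                         lambda G, Li: {'L1': 2, 'L2': 0, 'L3': 5}[Li])
assert total == 7
```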
The desired result is now the sum, over all $i\in[t]$, of the number of $M$-partitions of $G$ that respect $L_i$. Since the list $L_1, \dots, L_t$ is generated in polynomial time, $t$ is bounded by some polynomial in $|V(G)|$. Now, for each $i\in[t]$, ${\ensuremath{\mathcal L}}_i$ is a subset-closed subset of ${\ensuremath{\mathcal L}}$. Since there is no ${\ensuremath{\mathcal L}}$-$M$-derectangularising sequence, there is also no ${\ensuremath{\mathcal L}}_i$-$M$-derectangularising sequence. Also, ${\ensuremath{\mathcal L}}_i$ is $M$-purifying. Thus, the argument that we gave for the purifying case shows that each quantity $Z_i$ computed in Algorithm \[alg:main\] is the number of $M$-partitions of $G$ that respect $L_i$, as desired.

We can now combine our results to establish our dichotomy for the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{}.

\[thm:explicitdichotomy\] Let $M$ be a symmetric matrix in $\{0,1,*\}^{D\times D}$ and let ${\ensuremath{\mathcal L}}{}\subseteq{{\mathcal{P}(D)}}$ be subset-closed. If there is an [$\mathcal L$]{}-$M$-derectangularising sequence then the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is ${\ensuremath{\mathrm{\#P}}}$-complete. Otherwise, it is in ${\ensuremath{\mathrm{FP}}}$.

Suppose that there is an [$\mathcal L$]{}-$M$-derectangularising sequence $D_1,\dots,D_k$. Recall (from Definition \[def:closure\]) the definition of the subset-closure ${{\mathscr{S}({\ensuremath{\mathcal L}}'')}}$ of a set ${\ensuremath{\mathcal L}}'' \subseteq {{\mathcal{P}(D)}}$. Let $${\ensuremath{\mathcal L}}'= {{\mathscr{S}(\{D_1,\ldots,D_k\} )}}.$$ Since $\{D_1,\ldots,D_k\}$ is $M$-purifying, so is ${\ensuremath{\mathcal L}}'\!$, which is also subset-closed. It follows that $\Gamma_{\!{\ensuremath{\mathcal L}}'\!,M}$ is well defined (see Definition \[defgammaprime\]) and contains the relations $H_{D_1,D_2}^M, \ldots,H_{D_{k-1},D_k}^M$ (and possibly others).
Since $H_{D_1,D_2}^M \circ H_{D_2,D_3}^M \circ \cdots \circ H_{D_{k-1},D_k}^M$ is not rectangular, ${\ensuremath{\mathrm{\#CSP}}}(\Gamma_{\!{\ensuremath{\mathcal L}}'\!,M})$ is ${\ensuremath{\mathrm{\#P}}}$-complete [@BD Theorem 2 and Corollary 3] (see also [@DRfull Lemma 24]). By Proposition \[prop:purifiediscsp\], the problem [[\#${\ensuremath{\mathcal L}}'$-$M$-partitions]{}]{} is ${\ensuremath{\mathrm{\#P}}}$-complete so the more general problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is also ${\ensuremath{\mathrm{\#P}}}$-complete. On the other hand, if there is no ${\ensuremath{\mathcal L}}$-$M$-derectangularising sequence, then the result follows from Lemma \[lem:positive\].

Complexity of the dichotomy criterion {#sec:meta}
=====================================

The dichotomy established in Theorem \[thm:explicitdichotomy\] is that, if there is an [$\mathcal L$]{}-$M$-derectangularising sequence, then the problem [[\#${\ensuremath{\mathcal L}}$-$M$-partitions]{}]{} is ${\ensuremath{\mathrm{\#P}}}$-complete; otherwise, it is in ${\ensuremath{\mathrm{FP}}}$. This section addresses the computational problem of determining which is the case, given [$\mathcal L$]{} and $M$. The following lemma will allow us to show that the problem [[ExistsDerectSeq]{}]{} (the problem of determining whether there is an ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$-$M$-derectangularising sequence, given [$\mathcal L$]{} and $M$) and the related problem [[MatrixHasDerectSeq]{}]{} (the problem of determining whether there is a ${{\mathcal{P}(D)}}$-$M$-derectangularising sequence, given $M$) are both in [$\mathrm{NP}$]{}. Note that, for this “meta-problem”, [$\mathcal L$]{} and $M$ are the inputs whereas, previously, we have regarded them as fixed parameters.

\[lem:small\_derect\] Let $M\in\{0,1,*\}^{D\times D}$ be symmetric, and let ${\ensuremath{\mathcal L}}\subseteq {{\mathcal{P}(D)}}$ be subset-closed.
If there is an [$\mathcal L$]{}-$M$-derectangularising sequence, then there is one of length at most $512(|D|^3+1)$. Pick an [$\mathcal L$]{}-$M$-derectangularising sequence $D_1,\dots,D_k$ with $k$ minimal; we will show that $k\leq 512(|D|^3+1)$. Define $$R=H^M_{D_1, D_2} \circ H^M_{D_2, D_3} \circ \dots \circ H^M_{D_{k-1}, D_k}.$$ Note that $R\subseteq D_1\times D_k$. By the definition of derectangularising sequence, there are $a,a'\in D_1$ and $b,b'\in D_k$ such that $(a,b)$, $(a'\!,b)$ and $(a,b')$ are all in $R$ but $(a'\!,b')\not\in R$. So there exist $$(x_1,\dots,x_k),(y_1,\dots,y_k),(z_1,\dots,z_k)\in D_1\times \dots \times D_k$$ with $(x_1,x_k)=(a,b)$, $(y_1,y_k)=(a'\!,b)$ and $(z_1,z_k)=(a,b')$ such that $M_{x_i,x_{i+1}} = M_{y_i,y_{i+1}} = M_{z_i,z_{i+1}}=*$ for every $i\in[k-1]$ but, for any $(w_1,\ldots,w_k)\in D_1 \times \dots \times D_k$ with $(w_1,w_k)=(a'\!,b')$, there is an $i\in[k-1]$ such that $M_{w_i,w_{i+1}}\neq *$. Setting $D'_i=\{x_i,y_i,z_i\}$ for each $i$ gives an [$\mathcal L$]{}-$M$-derectangularising sequence $D'_1,\dots,D'_k$ with $|D'_i|\leq 3$ for each $1\leq i\leq k$. (Note that any submatrix of a pure matrix is pure.) For all $1\leq s < t\leq k$ define $$R_{s,t}=H^M_{D'_s, D'_{s+1}} \circ H^M_{D'_{s+1}, D'_{s+2}} \circ \dots \circ H^M_{D'_{t-1}, D'_t}.$$ Since $D'_1,\ldots,D'_k$ is [$\mathcal L$]{}-$M$-derectangularising, $R_{1,k}$ is not rectangular but, by the minimality of $k$, every other $R_{s,t}$ is rectangular. Note also that no $R_{s,t}=\emptyset$ since, if that were the case, we would have $R_{1,k}=\emptyset$, which is rectangular. Suppose for a contradiction that $k> 512(|D|^3+1)$. There are at most $|D|^3+1$ subsets of $D$ with size at most three, so there are indices $1\leq i_0<i_1<i_2<\dots<i_{512}\leq k$ such that $D'_{i_0}=\dots=D'_{i_{512}}$. There are at most $2^{|D'_{i_0}|^2}-1 \leq 2^9-1=511$ non-empty binary relations on $D'_{i_0}$, so $R_{i_0,i_m}=R_{i_0,i_n}$ for some $1\leq m<n\leq 512$. 
Since $R_{1,k}$ is not rectangular, $$R_{1,k}= R_{1,i_0} \circ R_{i_0,i_n} \circ R_{i_n,k}= R_{1,i_0} \circ R_{i_0,i_m} \circ R_{i_n,k}= R_{1,i_m} \circ R_{i_n,k}$$ is not rectangular. Therefore, $D'_1,D'_2,\dots,D'_{i_m},D'_{1+i_n},D'_{2+i_n},\dots,D'_k$ is an $\mathcal L$-$M$-derectangularising sequence of length less than $k$, contradicting the minimality of $k$. Now that we have membership in [$\mathrm{NP}$]{}, we can prove completeness. [thm:meta]{}[ [[ExistsDerectSeq]{}]{} is [$\mathrm{NP}$]{}-complete under polynomial-time many-one reductions.]{} We first show that [[ExistsDerectSeq]{}]{} is in [$\mathrm{NP}$]{}. Given $D$, $M\in \{0,1,*\}^{D\times D}$ and ${\ensuremath{\mathcal L}}\subseteq {{\mathcal{P}(D)}}$, a non-deterministic polynomial time algorithm for [[ExistsDerectSeq]{}]{}first “guesses” an ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$-$M$-derectangularising sequence $D_1,\ldots,D_k$ with $k\leq 512{(|D|^3+1)}$. Lemma \[lem:small\_derect\] guarantees that such a sequence exists if the output should be “yes”. The algorithm then verifies that each $D_i$ is a subset of a set in ${\ensuremath{\mathcal L}}$, that $\{D_1,\ldots,D_k\}$ is $M$-purifying, and that the relation $H^M_{D_1,D_2} \circ H^M_{D_2, D_3} \circ \dots \circ H^M_{D_{k-1}, D_k}$ is not rectangular. All of these can be checked in polynomial time without explicitly constructing ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$. To show that [[ExistsDerectSeq]{}]{} is [$\mathrm{NP}$]{}-hard, we give a polynomial-time reduction from the well-known [$\mathrm{NP}$]{}-hard problem of determining whether a graph $G$ has an independent set of size $k$. Let $G$ and $k$ be an input to the independent set problem. Let $V(G)= [n]$ and assume without loss of generality that $k\in[n]$. 
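Before turning to the reduction, the three checks performed by the verifier just described can be made concrete. The following Python sketch is our own illustration (the paper specifies no code); in particular, reading "pure" as "the submatrix contains no $0$ and $1$ together" is our interpretation of the surrounding text, and all function names are ours.

```python
from itertools import product

def H(M, X, Y):
    """The relation H^M_{X,Y}: all pairs (x, y) in X x Y with M[(x, y)] == '*'.
    M is a dict mapping pairs of domain elements to '0', '1' or '*'."""
    return {(x, y) for x in X for y in Y if M[(x, y)] == '*'}

def compose(R, S):
    """Composition of binary relations R and S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def is_rectangular(R):
    """R is rectangular iff (a,b), (a',b), (a,b') in R imply (a',b') in R."""
    A = {a for a, _ in R}
    B = {b for _, b in R}
    for a, a2, b, b2 in product(A, A, B, B):
        if {(a, b), (a2, b), (a, b2)} <= R and (a2, b2) not in R:
            return False
    return True

def is_pure(M, X, Y):
    """Purity as we read it: M|_{X x Y} does not contain both a 0 and a 1."""
    vals = {M[(x, y)] for x in X for y in Y}
    return not ({'0', '1'} <= vals)

def is_derectangularising(M, lists, seq):
    """The verifier's three checks on a guessed sequence seq = [D_1,...,D_k]
    (k >= 2 assumed): each D_i sits inside some list of the instance,
    {D_1,...,D_k} is M-purifying, and the composed relation is not
    rectangular."""
    if not all(any(Di <= L for L in lists) for Di in seq):
        return False
    if not all(is_pure(M, X, Y) for X in seq for Y in seq):
        return False
    R = H(M, seq[0], seq[1])
    for i in range(1, len(seq) - 1):
        R = compose(R, H(M, seq[i], seq[i + 1]))
    return not is_rectangular(R)
```

All three checks run in time polynomial in $|D|$ and $k$, matching the claim that the certificate can be verified without constructing ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$ explicitly.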
Setting $D=[n]\times[k]\times[3]$, we construct a $D\times D$ matrix $M$ and a set [$\mathcal L$]{} of lists such that there is an ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$-$M$-derectangularising sequence if and only if $G$ has an independent set of size $k$. $M$ will be a block matrix, constructed using the following $3\times 3$ symmetric matrices. Note that each is pure, apart from ${\mathrm{Id}}$. $$\begin{gathered} {M_\mathrm{start}}= \begin{pmatrix}*&*&0 \\ *&*&0 \\ 0&0&*\end{pmatrix} \qquad {M_\mathrm{end}}= \begin{pmatrix}*&0&0 \\ 0&*&* \\ 0&*&*\end{pmatrix} \qquad {M_\mathrm{bij}}= \begin{pmatrix}*&0&0 \\ 0&*&0 \\ 0&0&*\end{pmatrix} \\ {\mathbf{0}}= \begin{pmatrix}0&0&0 \\ 0&0&0 \\ 0&0&0\end{pmatrix} \qquad {\mathrm{Id}}= \begin{pmatrix}1&0&0 \\ 0&1&0 \\ 0&0&1\end{pmatrix}\,.\end{gathered}$$ For $v\in [n]$ and $j\in[k]$, let $D[v,j] = \{(v,j,c) \mid c\in[3]\}$. Below, when we say that $M|_{D[v,j]\times D[v',j']}= N$ for some $3\times 3$ matrix $N$, we mean more specifically that $M_{(v,j,c),(v'\!,j'\!,c')} = N_{c,c'}$ for all $c,c'\in[3]$. $M$ is constructed as follows. - For all $v\in[n]$, $M|_{D[v,1] \times D[v,1]}= {M_\mathrm{start}}$ and $M|_{D[v,k]\times D[v,k]} = {M_\mathrm{end}}$. - For all $v\in[n]$ and all $j\in\{2,\dots,k-1\}$, $M|_{D[{v,j}]\times D[{v,j}]} = {M_\mathrm{bij}}$. - If $v\neq v'\!$, $(v,v')\notin E(G)$ and $j<k$, then - $M|_{D[{v,j}]\times D[{v',j+1}]} = M|_{D[v',j+1]\times D[{v,j}]} = {M_\mathrm{bij}}$ and - $M|_{D[{v,j}] \times D[{v',j'}]} = M|_{D[{v',j'}]\times D[{v,j}]} = {\mathbf{0}}$ for all $j'>j+1$. - For all $v,v'\in[n]$ and $j,j'\in[k]$ not covered above, $M|_{D[v,j]\times D[v',j']} = {\mathrm{Id}}$. To complete the construction, let ${\ensuremath{\mathcal L}}=\{D[v,j]\mid v\in[n], j\in[k]\}$. We will show that $G$ has an independent set of size $k$ if and only if there is an ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$-$M$-derectangularising sequence. 
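The block structure of $M$ is a direct transcription of the four bullet points above. The following Python sketch (our own illustration; the dictionary representation and index conventions are ours) builds $M$ from $G$ and $k$, assuming $2\leq k\leq n$ so that the ${M_\mathrm{start}}$ and ${M_\mathrm{end}}$ rules never apply to the same diagonal block.

```python
def build_M(n, k, edges):
    """Construct the D x D matrix M of the reduction, with
    D = [n] x [k] x [3]; edges is a set of frozenset({v, v'}) pairs.
    Assumes 2 <= k <= n."""
    Mstart = [['*', '*', '0'], ['*', '*', '0'], ['0', '0', '*']]
    Mend   = [['*', '0', '0'], ['0', '*', '*'], ['0', '*', '*']]
    Mbij   = [['*', '0', '0'], ['0', '*', '0'], ['0', '0', '*']]
    Zero   = [['0'] * 3 for _ in range(3)]
    Id     = [['1', '0', '0'], ['0', '1', '0'], ['0', '0', '1']]

    def block(v, j, v2, j2):
        if (v, j) == (v2, j2):
            if j == 1:
                return Mstart
            if j == k:
                return Mend
            return Mbij
        if v != v2 and frozenset((v, v2)) not in edges:
            dj = abs(j - j2)
            if dj == 1:
                return Mbij      # consecutive levels, non-adjacent vertices
            if dj > 1:
                return Zero      # levels further apart
        return Id                # every case "not covered above"

    M = {}
    for v in range(1, n + 1):
        for j in range(1, k + 1):
            for v2 in range(1, n + 1):
                for j2 in range(1, k + 1):
                    B = block(v, j, v2, j2)
                    for c in range(1, 4):
                        for c2 in range(1, 4):
                            M[((v, j, c), (v2, j2, c2))] = B[c - 1][c2 - 1]
    return M
```

Since every named block matrix is symmetric and the off-diagonal rules are applied to both orders of a pair, the resulting $M$ is symmetric, as the construction requires.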
For the forward direction of the proof, suppose that $G$ has an independent set $I = \{v_1, \dots, v_k\}$ of size $k$. We will show that $$D[v_1,1], D[v_1,1], D[v_2,2], D[v_3,3], \dots, D[v_{k-1}, k-1], D[v_k,k], D[v_k,k]$$ (where the first and last elements are repeated and the others are not) is ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$-$M$-derectangularising. Since there is no edge $(v_i,v_{i'})\in E(G)$ for $i,i'\in[k]$, the matrix $M|_{D[v_i,i]\times D[v_{i'},i']}$ is always one of ${M_\mathrm{start}}$, ${M_\mathrm{end}}$, ${M_\mathrm{bij}}$ and ${\mathbf{0}}$, so it is always pure. Therefore, $\{D[v_1,1], \dots, D[v_k,k]\}$ is $M$-purifying. It remains to show that the relation $$R = H^M_{D[v_1,1],D[v_1,1]} \circ H^M_{D[v_1,1],D[v_2,2]} \circ \dots \circ H^M_{D[v_{k-1},k-1],D[v_{k},k]} \circ H^M_{D[v_k,k],D[v_k,k]}$$ is not rectangular. Consider $i\in[k-1]$. Since $(v_i,v_{i+1})\notin E(G)$, $M|_{ D[{v_i,i}] \times D[{ v_{i+1}, i+1 }]} = {M_\mathrm{bij}}$ so $H^M_{D[v_i,i], D[v_{i+1},i+1]}$ is the bijection that associates $(v_i,i,c)$ with $(v_{i+1},i+1,c)$ for each $c\in[3]$. Therefore, $$H^M_{D[v_1,1],D[v_2,2]} \circ \dots \circ H^M_{D[v_{k-1},k-1],D[v_k,k]}$$ is the bijection that associates $(v_1,1,c)$ with $(v_k,k,c)$ for each $c\in[3]$. We have $M|_{D[v_1,1]\times D[v_1,1]} = {M_\mathrm{start}}$ and $M|_{D[v_k,k] \times D[v_k,k]} = {M_\mathrm{end}}$ so $$\begin{aligned} H^M_{D[v_1,1], D[v_1,1]} &= \{((v_1,1,c),(v_1,1,c')) \mid c,c'\in[2]\} \cup \{((v_1,1,3),(v_1,1,3))\} \\ H^M_{D[v_k,k], D[v_k,k]} &= \{((v_k,k,1),(v_k,k,1))\} \cup \{((v_k,k,c),(v_k,k,c')) \mid c,c'\in\{2,3\}\}\,,\end{aligned}$$ and, therefore, $$\begin{aligned} R = \{((v_1,1,c),(v_k,k,c')) \mid c,c'\in[3]\} \setminus \{((v_1,1,3),(v_k,k,1))\}\,,\end{aligned}$$ which is not rectangular, as required. For the reverse direction of the proof, suppose that there is an ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$-$M$-derectangularising sequence $D_1,\ldots,D_m$. 
The fact that the sequence is derectangularising implies that $|D_i|\geq 2$ for each $i\in[m]$ — see the remarks following Definition \[def:derect\]. Each set in the sequence is a subset of some $D[v,j]$ in ${\ensuremath{\mathcal L}}$ so for every $i\in[m]$ let $v_i$ denote the vertex in $[n]$ and let $j_i$ denote the index in $[k]$ such that $D_i \subseteq D[v_i,j_i]$. Clearly, it is possible to have $(v_i,j_i)=(v_{i'},j_{i'})$ for distinct $i$ and $i'$ in $[m]$. We will finish the proof by showing that $G$ has a size-$k$ independent set. Let $$R = H^M_{D_1 , D_2} \circ \dots \circ H^M_{D_{m-1}, D_m },$$ which is not rectangular because the sequence is ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$-$M$-derectangularising. Since $\{D_1,\ldots,D_m\}$ is $M$-purifying, and any submatrix of ${\mathrm{Id}}$ with at least two rows and at least two columns is impure, every pair $(i,i')\in [m]^2$ satisfies $M|_{D[v_i,j_i]\times D[v_{i'},j_{i'}]} \neq {\mathrm{Id}}$. This means that we cannot have $(v_i,v_{i'})\in E(G)$ for any pair $(i,i')\in [m]^2$ so the set $I=\{v_1, \dots, v_m\}$ is independent in $G$. It remains to show that $|I|\geq k$. Observe that, if $v_i=v_{i'}$, we must have $j_i=j_{i'}$ since, otherwise, the construction ensures that $$M|_{D[v_i,j_i]\times D[v_{i'},j_{i'}]} = M|_{D[v_i,j_i]\times D[v_i,j_{i'}]} = {\mathrm{Id}},$$ which we already ruled out. Therefore, $|I| \geq |\{j_1, \dots, j_m\}|$. We must have $|j_i-j_{i+1}|\leq 1$ for each $i\in[m-1]$ as, otherwise, $M|_{D[v_i,j_i]\times D[v_{i+1},j_{i+1}]} = {\mathbf{0}}$, which implies that $R=\emptyset$, which is rectangular. There must be at least one $i\in [m-1]$ such that $v_i=v_{i+1}$ and $j_i=j_{i+1}=1$, so $M|_{D[v_i,j_i]\times D[v_{i+1},j_{i+1}]} = {M_\mathrm{start}}$. If not, $R$ is a composition of relations corresponding to ${M_\mathrm{bij}}$ and ${M_\mathrm{end}}$ and any such relation is either a bijection, or of the form of ${M_\mathrm{end}}$, so it is rectangular. 
Similarly, there must be at least one $i$ such that $v_i=v_{i+1}$ and $j_i=j_{i+1}=k$, giving $M|_{D[v_i,j_i]\times D[v_{i+1},j_{i+1}]} = {M_\mathrm{end}}$. Therefore, the sequence $j_1, \dots, j_m$ contains 1 and $k$. Since $|j_i-j_{i+1}|\leq 1$ for all $i\in[m-1]$, it follows that $[k] \subseteq \{j_1,\dots, j_m\}$, so $|I|\geq k$, as required. In fact, $\{j_1,\dots, j_m\} = [k]$ since each $j_i\in [k]$ by construction. We defined the problem [[ExistsDerectSeq]{}]{} using a concise input representation: ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$ does not need to be written out in full. Instead, the instance is a subset ${\ensuremath{\mathcal L}}$ containing the maximal elements of ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$. For example, when the instance is ${\ensuremath{\mathcal L}}=\{D\}$, we have ${{\mathscr{S}({\ensuremath{\mathcal L}})}}= {{\mathcal{P}(D)}}$. It is important to note that the [$\mathrm{NP}$]{}-completeness of [[ExistsDerectSeq]{}]{} is not an artifact of this concise input coding. The elements of the list ${\ensuremath{\mathcal L}}$ constructed in the NP-hardness proof have length at most three, so the list ${{\mathscr{S}({\ensuremath{\mathcal L}})}}$ could also be constructed explicitly in polynomial time. Lemma \[lem:small\_derect\] has the following immediate corollary for the complexity of the dichotomy criterion of the general [[\#List-$M$-partitions]{}]{} problem. Recall that, in this version of the meta-problem, the input is just the matrix $M$. [cor:meta]{}[ [[MatrixHasDerectSeq]{}]{} is in [$\mathrm{NP}$]{}.]{} Take ${\ensuremath{\mathcal L}}= \{D\}$ in Lemma \[lem:small\_derect\]. Cardinality constraints {#sec:card} ======================= Finally, we show how lists can be used to implement cardinality constraints of the kind that often appear in counting problems in combinatorics. 
Feder, Hell, Klein and Motwani [@FHKM] point out that lists can be used to determine whether there are $M$-partitions that obey simple cardinality constraints. For example, it is natural to require some or all of the parts to be non-empty or, more generally, to contain at least some constant number of vertices. Given a $D\times D$ matrix $M$, we represent such cardinality constraints as a function $C\colon D\to{\mathbb{Z}_{\geq 0}}$. We say that an $M$-partition $\sigma$ of a graph $G$ *satisfies* the constraint if, for each $d\in D$, $|\{v\in V(G)\mid \sigma(v)=d\}| \geq C(d)$. Given a cardinality constraint $C$, we write $|C| = \sum_{d\in D} C(d)$. We can determine whether there is an $M$-partition of $G=(V,E)$ that satisfies the cardinality constraint $C$ by making at most ${|V|}^{|C|}$ queries to an oracle for the list $M$-partitions problem, as follows. Let $L_C$ be the set of list functions $L\colon V\to {{\mathcal{P}(D)}}$ such that: - for all $v\in V\!$, either $L(v) = D$ or $|L(v)| = 1$, and - for all $d\in D$, there are exactly $C(d)$ vertices $v$ with $L(v) = \{d\}$. There are at most ${|V|}^{|C|}$ such list functions and it is clear that $G$ has an $M$-partition satisfying $C$ if, and only if, it has a list $M$-partition that respects at least one $L\in L_C$. The number of queries is polynomial in $|V|$ as long as the cardinality constraint $C$ is independent of $G$. For counting, the situation is a little more complicated, as we must avoid double-counting. The solution is to count all $M$-partitions of the input graph and subtract off those that fail to satisfy the cardinality constraint. We formally define the problem [[\#$C$-$M$-partitions]{}]{} as follows, parameterized by a $D\times D$ matrix $M$ and a cardinality constraint function $C\colon D\to {\mathbb{Z}_{\geq 0}}$. [[\#$C$-$M$-partitions]{}]{}. A graph $G$. The number of $M$-partitions of $G$ that satisfy $C$. 
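The enumeration of the list functions in $L_C$ described above can be sketched as follows. This is our own Python illustration; the oracle interface `list_oracle` is hypothetical, standing in for any solver of the list $M$-partitions problem.

```python
from itertools import permutations

def cardinality_lists(V, D, C):
    """Enumerate the list functions in L_C: every vertex gets either the
    full domain D or a singleton, and for each d exactly C[d] vertices get
    the singleton {d}.  (When some C[d] >= 2 the same list function may be
    produced more than once; that is harmless for an existence check.)"""
    demands = [d for d in D for _ in range(C.get(d, 0))]
    for chosen in permutations(V, len(demands)):
        L = {v: set(D) for v in V}
        for v, d in zip(chosen, demands):
            L[v] = {d}
        yield L

def exists_M_partition_satisfying(V, D, C, list_oracle):
    """G has an M-partition satisfying C iff some L in L_C admits a list
    M-partition; list_oracle(L) -> bool is the assumed solver interface."""
    return any(list_oracle(L) for L in cardinality_lists(V, D, C))
```

The number of list functions generated is at most $|V|^{|C|}$, so the number of oracle calls is polynomial in $|V|$ whenever $C$ is independent of $G$, as noted above.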
\[prop:cardinality\] [[\#$C$-$M$-partitions]{}]{} is polynomial-time Turing reducible to [[\#List-$M$-partitions]{}]{}. Given the cardinality constraint function $C$, let $R = \{d\in D\mid C(d)>0\}$: that is, $R$ is the set of parts that have a non-trivial cardinality constraint. For any set $P\subseteq R$, say that an $M$-partition $\sigma$ of a graph $G=(V,E)$ *fails on $P$* if $|\{v\in V\mid \sigma(v) = d\}| < C(d)$ for all $d\in P$. That is, if $\sigma$ violates the cardinality constraints on all parts in $P$ (and possibly others, too). Let $\Sigma$ be the set of all $M$-partitions of our given input graph $G$. For $i\in R$, let $A_i = \{\sigma \in \Sigma \mid \mbox{$\sigma$ fails on $\{i\}$}\}$ and let $A=\bigcup_{i\in R} A_i$. By inclusion-exclusion, $$\begin{aligned} |A| &= -\!\!\sum_{\emptyset \subset P \subseteq R} {(-1)}^{|P|} \left|\bigcap_{i\in P} A_i\right|\\ &= -\!\!\sum_{\emptyset\subset P \subseteq R} {(-1)}^{|P|} \big|\{\sigma \in \Sigma \mid \mbox{$\sigma$ fails on $P$}\}\big|\,.\end{aligned}$$ We wish to compute $$\begin{aligned} \big|\{\sigma\in \Sigma\mid \text{$\sigma$ satisfies $C$}\}\big| &= \big|\Sigma\big| - |A| \\ &= \big|\Sigma\big| + \sum_{\emptyset \subset P\subseteq R} (-1)^{|P|} \big|\{\sigma\in\Sigma \mid \text{$\sigma$ fails on $P$}\}\big|\,. \end{aligned}$$ Therefore, it suffices to show that we can use lists to count the $M$-partitions that fail on each non-empty $P\subseteq R$. For such a set $P$, let $L_P$ be the set of list functions $L$ such that - for all $v\in V$, either $L(v) = D\setminus P$ or $L(v) = \{p\}$ for some $p\in P$, and - for all $p\in P$, $\big|\big\{v\in V\mid L(v)=\{p\}\big\}\big| < C(p)$. Thus, the set of $M$-partitions that respect some $L\in L_P$ is precisely the set of $M$-partitions that fail on $P$. Also, for distinct $L$ and $L'$ in $L_P$, the set of $M$-partitions that respect $L$ is disjoint from the set of $M$-partitions that respect $L'\!$. 
So we can compute $ \big|\{\sigma\in\Sigma \mid \text{$\sigma$ fails on $P$}\}\big|$ by making $|L_P|$ calls to [[\#List-$M$-partitions]{}]{}, noting that $|L_P|\leq |V|^{|C|}\!$. As an example of a combinatorial structure that can be represented as an $M$-partition problem with cardinality constraints, consider the *homogeneous pairs* introduced by Chvátal and Sbihi [@CS1987:Bull-free]. A homogeneous pair in a graph $G=(V,E)$ is a partition of $V$ into sets $U$, $W_1$ and $W_2$ such that: - $|U|\geq 2$; - $|W_1|\geq 2$ or $|W_2|\geq 2$ (or both); - for every vertex $v\in U$, $v$ is either adjacent to every vertex in $W_1$ or to none of them; and - for every vertex $v\in U$, $v$ is either adjacent to every vertex in $W_2$ or to none of them. Feder et al. [@FHKM] observe that the problem of determining whether a graph has a homogeneous pair can be represented as the problem of determining whether it has an [$M_{\mathrm{hp}}$]{}-partition satisfying certain constraints, where $D = \{1, \dots, 6\}$ and $${\ensuremath{M_{\mathrm{hp}}}}= \begin{pmatrix} * & * & 1 & 0 & 1 & 0 \\ * & * & 1 & 1 & 0 & 0 \\ 1 & 1 & * & * & * & * \\ 0 & 1 & * & * & * & * \\ 1 & 0 & * & * & * & * \\ 0 & 0 & * & * & * & * \end{pmatrix}.$$ $W_1$ corresponds to the set of vertices mapped to part $1$ (row 1 of ${\ensuremath{M_{\mathrm{hp}}}}$), $W_2$ corresponds to the set of vertices mapped to part $2$ (row 2 of ${\ensuremath{M_{\mathrm{hp}}}}$), and $U$ corresponds to the set of vertices mapped to parts $3$–$6$. In fact, there is a one-to-one correspondence between the homogeneous pairs of $G$ in which $W_1$ and $W_2$ are non-empty and the ${\ensuremath{M_{\mathrm{hp}}}}$-partitions $\sigma$ of $G$ that satisfy the following additional constraints. For $d\in D$, let $N_\sigma(d) = |\{v\in V(G)\mid \sigma(v)=d\}|$ be the number of vertices that $\sigma$ maps to part $d$. 
We require that - $N_\sigma(3) + N_\sigma(4) + N_\sigma(5) + N_\sigma(6)\geq 2$, - $N_\sigma(1) > 0$ and $N_\sigma(2) > 0$, and - at least one of $N_\sigma(1)$ and $N_\sigma(2)$ is at least $2$. To see this, consider a homogeneous pair $(U,W_1,W_2)$ in which $W_1$ and $W_2$ are non-empty. Note that there is exactly one ${\ensuremath{M_{\mathrm{hp}}}}$-partition of $G$ in which vertices in $W_1$ are mapped to part $1$ and vertices in $W_2$ are mapped to part $2$ and vertices in $U$ are mapped to parts $3$–$6$. There is exactly one part available to each $v\in U$ since $v$ has an edge or non-edge to $W_1$ (but not both!) ruling out exactly two parts and $v$ has an edge or non-edge to $W_2$ ruling out an additional part. Going the other way, an ${\ensuremath{M_{\mathrm{hp}}}}$-partition that satisfies the constraints includes a homogeneous pair. Now let $${\ensuremath{M_{\mathrm{hs}}}}= \begin{pmatrix} * & 0 & 1\\ 0 & * & *\\ 1 & * & * \end{pmatrix}.$$ There is a one-to-one correspondence between the homogeneous pairs of $G$ in which $W_2$ is empty and the ${\ensuremath{M_{\mathrm{hs}}}}$-partitions of $G$ that satisfy the following additional constraints. - At least two vertices are mapped to parts $2$–$3$ (vertices in these parts are in $U$). - At least two vertices are mapped to part $1$ (vertices in this part are in $W_1$). Symmetrically, there is also a one-to-one correspondence between the homogeneous pairs of $G$ in which $W_1$ is empty and the ${\ensuremath{M_{\mathrm{hs}}}}$-partitions of $G$ that satisfy the above constraints. (Partitions according to ${\ensuremath{M_{\mathrm{hs}}}}$ correspond to so-called “homogeneous sets” but we do not need the details of these.) It is known from [@EKR1997:Hom-pair] that, in deterministic polynomial time, it is possible to determine whether a graph contains a homogeneous pair and, if so, to find one. We show that the homogeneous pairs in a graph can also be counted in polynomial time. 
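The defining conditions of a homogeneous pair admit a direct brute-force check, which is useful for testing the correspondences above on small graphs. The following Python sketch is our own illustration, not part of the paper's algorithm.

```python
def is_homogeneous_pair(U, W1, W2, edges):
    """Check the Chvátal-Sbihi conditions for a partition (U, W1, W2) of
    the vertex set; edges is a set of frozenset({u, v}) pairs.  An empty
    W_i vacuously satisfies the "all or none" condition."""
    adjacent = lambda a, b: frozenset((a, b)) in edges
    if len(U) < 2:
        return False
    if len(W1) < 2 and len(W2) < 2:
        return False
    for v in U:
        for W in (W1, W2):
            hits = sum(adjacent(v, w) for w in W)
            if hits not in (0, len(W)):   # neither "all" nor "none"
                return False
    return True
```

Enumerating all partitions $(U, W_1, W_2)$ and calling this check is exponential in $|V|$, which is exactly why the list-partition machinery above is needed for polynomial-time counting.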
We start by considering the relevant list-partition counting problems. \[thm:hompair\] There are polynomial-time algorithms for [[\#List-${\ensuremath{M_{\mathrm{hp}}}}$-partitions]{}]{} and [[\#List-${\ensuremath{M_{\mathrm{hs}}}}$-partitions]{}]{}. We first show that there is a polynomial-time algorithm for [[\#List-${\ensuremath{M_{\mathrm{hp}}}}$-partitions]{}]{}. The most natural way to do this would be to show that there is no ${{\mathcal{P}(D)}}$-[$M_{\mathrm{hp}}$]{}-derectangularising sequence and then apply Theorem \[thm:explicitdichotomy\]. In theory, we could show that there is no ${{\mathcal{P}(D)}}$-[$M_{\mathrm{hp}}$]{}-derectangularising sequence by brute force since $|D|=6$, but the number of possibilities is too large to make this feasible. Instead, we argue non-constructively. First, if there is no ${{\mathcal{P}(D)}}$-[$M_{\mathrm{hp}}$]{}-derectangularising sequence, the result follows from Theorem \[thm:explicitdichotomy\]. Conversely, suppose that $D_1, \dots, D_k$ is a ${{\mathcal{P}(D)}}$-[$M_{\mathrm{hp}}$]{}-derectangularising sequence. Let $M$ be the matrix such that $M_{i,j} = 0$ if $({\ensuremath{M_{\mathrm{hp}}}})_{i,j} = 1$ and $M_{i,j} = ({\ensuremath{M_{\mathrm{hp}}}})_{i,j}$, otherwise. $D_1, \dots, D_k$ is also a ${{\mathcal{P}(D)}}$-$M$-derectangularising sequence, since $H^M_{X,Y} = H^{{\ensuremath{M_{\mathrm{hp}}}}}_{X,Y}$ for any $X,Y\subseteq D$ and any sequence $D_1, \dots, D_k$ is $M$-purifying because $M$ is already pure. Therefore, by Theorem \[thm:explicitdichotomy\], counting list $M$-partitions is [$\mathrm{\#P}$]{}-complete. However, counting the list $M$-partitions of a graph $G$ corresponds to counting list homomorphisms from $G$ to the $6$-vertex graph $H$ whose two components are an edge and a $4$-clique, and which has loops on all six vertices. There is a very straightforward polynomial-time algorithm for this problem (a simple modification of the version without lists in [@DG]). 
Thus, ${\ensuremath{\mathrm{\#P}}}={\ensuremath{\mathrm{FP}}}$ so, in particular, there is a polynomial-time algorithm for counting list [$M_{\mathrm{hp}}$]{}-partitions. The proof that there is a polynomial-time algorithm for [[\#List-${\ensuremath{M_{\mathrm{hs}}}}$-partitions]{}]{} is similar. \[cor:hompair\] There is a polynomial-time algorithm for counting the homogeneous pairs in a graph. We are given a graph $G=(V,E)$ and we wish to compute the number of homogeneous pairs that it contains. By the one-to-one correspondence given earlier, it suffices to show how to count ${\ensuremath{M_{\mathrm{hp}}}}$-partitions and ${\ensuremath{M_{\mathrm{hs}}}}$-partitions of $G$ satisfying additional constraints. We start with the first of these. Recall the constraints on the ${\ensuremath{M_{\mathrm{hp}}}}$-partitions $\sigma$ that we wish to count: - $N_\sigma(3) + N_\sigma(4) + N_\sigma(5) + N_\sigma(6)\geq 2$, - $N_\sigma(1) > 0$ and $N_\sigma(2) > 0$, and - at least one of $N_\sigma(1)$ and $N_\sigma(2)$ is at least $2$. Define three subsets $\Sigma_1$, $\Sigma_2$ and $\Sigma_{1,2}$ of the set of [$M_{\mathrm{hp}}$]{}-partitions of $G$ that satisfy the constraints. In the definition of each of $\Sigma_1$, $\Sigma_2$ and $\Sigma_{1,2}$, we will require that parts $1$ and $2$ are non-empty and parts $3$–$6$ contain a total of at least two vertices. In $\Sigma_1$, part $1$ must contain at least two vertices; in $\Sigma_2$, part $2$ must contain at least two vertices; in $\Sigma_{1,2}$, both parts $1$ and $2$ must contain at least two vertices. The number of suitable ${\ensuremath{M_{\mathrm{hp}}}}$-partitions of $G$ is $|\Sigma_1| + |\Sigma_2| - |\Sigma_{1,2}|$. Each of $|\Sigma_1|$, $|\Sigma_2|$ and $|\Sigma_{1,2}|$ can be computed by counting the ${\ensuremath{M_{\mathrm{hp}}}}$-partitions of $G$ that satisfy appropriate cardinality constraints. Parts $1$ and $2$ are trivially dealt with. 
The requirement that parts $3$–$6$ must contain at least two vertices between them is equivalent to saying that at least one of them must contain at least two vertices or at least two must contain at least one vertex. This can be expressed with a sequence of cardinality constraint functions and using inclusion–exclusion to eliminate double-counting. Counting constrained ${\ensuremath{M_{\mathrm{hs}}}}$-partitions of $G$ is similar (but simpler). [^1]: Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford, OX1 3QD, United Kingdom. [^2]: Department of Computer Science, Ashton Building, University of Liverpool, Liverpool, L69 3BX, United Kingdom. [^3]: Department of Information Science, University of Fukui, 3-9-1 Bunkyo, Fukui City, Fukui 910-8507, Japan. [^4]: A preliminary version of this paper appeared in the proceedings of CCC 2014. The research leading to these results has received funding from the MEXT Grants-in-Aid for Scientific Research and the EPSRC and the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007–2013) ERC grant agreement no. 334828. The paper reflects only the authors’ views and not the views of the ERC or the European Commission. The European Union is not liable for any use that may be made of the information contained therein. [^5]: For the reader who is familiar with CSPs, it might be useful to see how a [[List-$M$-partitions]{}]{} problem can be coded as a CSP with restrictions on the input. Given a symmetric $M \in \{0,1,*\}^{D\times D}$, let $M_0$ be the relation on $D\times D$ containing all pairs $(i,j) \in D\times D$ for which $M_{i,j} \neq 1$. Let $M_1$ be the relation on $D\times D$ containing all pairs $(i,j)\in D\times D$ for which $M_{i,j} \neq 0$. 
Then a [[List-$M$-partitions]{}]{} problem with input $G,L$ can be encoded as a CSP whose constraint language includes the binary relations $M_0$ and $M_1$ and also the unary relations corresponding to the sets in the image of $L$. Each vertex $v$ of $G$ is a variable in the CSP instance with the unary constraint $L(v)$. If $(u,v)$ is an edge of $G$ then it is constrained by $M_1$. If it is a non-edge of $G$, it is constrained by $M_0$. Note that the CSP instance satisfies the restriction that every pair of distinct variables has exactly one constraint, which is either $M_0$ or $M_1$. In a general CSP instance, a pair of variables could be constrained by $M_0$ and $M_1$ or one of them, or neither. It is not clear how to code such a general CSP instance as a list partitions problem.
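The footnote's encoding can be sketched directly. The following Python illustration is ours; a brute-force solution counter is included to make the correspondence with list $M$-partitions checkable on small instances.

```python
from itertools import product

def encode_as_csp(M, D, vertices, edges, L):
    """Build the CSP instance of the footnote: binary relations M0 (used on
    non-edges) and M1 (used on edges), plus one unary constraint L(v) per
    variable.  M is a dict mapping pairs over D to '0', '1' or '*'."""
    M0 = {(i, j) for i in D for j in D if M[(i, j)] != '1'}
    M1 = {(i, j) for i in D for j in D if M[(i, j)] != '0'}
    unary = {v: set(L[v]) for v in vertices}
    binary = {}
    for u in vertices:
        for v in vertices:
            if u < v:   # exactly one binary constraint per unordered pair
                binary[(u, v)] = M1 if frozenset((u, v)) in edges else M0
    return unary, binary

def count_csp_solutions(unary, binary, vertices, D):
    """Brute-force count of satisfying assignments; equals the number of
    list M-partitions of the encoded instance."""
    count = 0
    for values in product(sorted(D), repeat=len(vertices)):
        sigma = dict(zip(vertices, values))
        if all(sigma[v] in unary[v] for v in vertices) and \
           all((sigma[u], sigma[v]) in R for (u, v), R in binary.items()):
            count += 1
    return count
```

Note that the encoding honours the restriction discussed in the footnote: each unordered pair of distinct variables carries exactly one binary constraint, either $M_0$ or $M_1$.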
[Study on the expression of the proto-oncogene eIF4E in laryngeal squamous carcinoma]. To investigate the expression of the proto-oncogene eIF4E in laryngeal squamous cell carcinoma (LSCC) and its relationship with clinical pathology. Sections of 37 LSCC samples and 10 vocal cord polyp samples were analyzed with an anti-eIF4E polyclonal antibody using the SP (streptavidin peroxidase) immunohistochemical technique. All 37 LSCC samples overexpressed eIF4E in the tumors (eIF4E score: 30-210), whereas no staining was observed in the 10 vocal cord polyp samples (eIF4E score: 0). Overexpression of eIF4E correlated significantly with T stage, N stage, histological grade, recurrence and metastasis (P < 0.01), but not with age, sex or tumor site (P > 0.05). eIF4E plays an important role in the tumorigenesis, development, invasion and metastasis of laryngeal squamous cell carcinoma, and its overexpression correlates with TNM stage, histological grade, recurrence and metastasis. eIF4E can therefore be considered an independent prognostic factor and molecular tumour marker in laryngeal squamous cell carcinoma.
Keerai / Spinach Vadai

We all know vadais in a different shape, but here in Tirunelveli, in some street shops, I have seen these Keerai Vadais in the shape of bondas. And it was really interesting to learn that these crispy vadais are made very easily without grinding the dals; instead, they use roasted flour. Is that not enough to kindle my curiosity to get the recipe and make it!

Prep time: 10 mins
Cooking time: 20 mins
Serves: 4

Ingredients for making Keerai / Spinach Vadai:
A bunch of arai keerai or tender mulai keerai (amaranth leaves)
Chickpea flour / besan flour / kadalai maavu: 2 cups
Roasted semolina / rava: 3 teaspoons
Rice flour: 2 tablespoons
Finely chopped green chillies: 2 teaspoons
Finely grated ginger: 1 teaspoon
Red chilli powder: 1/2 teaspoon
Hing powder: 1/4 teaspoon
Cooking soda: a pinch
Salt to taste
Oil for frying

How to make Keerai / Spinach Vadai?
Heat a pan over a medium flame. Dry roast the besan flour, without burning it, until a nice aroma comes. Transfer it to a broad bowl. Wash the leaves well, remove the stems and take the leaves alone. Finely chop them. Add all the items to the bowl containing the besan flour and mix well by hand, sprinkling a little water if needed. Bring the mixture to a roti-dough consistency. Heat the oil. Once the oil is well heated, take rough balls (they need not be perfect spheres) of the dough and drop them into the oil.
Aspergillus

Aspergillus is a genus consisting of a few hundred mould species found in various climates worldwide. Aspergillus was first catalogued in 1729 by the Italian priest and biologist Pier Antonio Micheli. Viewing the fungi under a microscope, Micheli was reminded of the shape of an aspergillum (holy water sprinkler), from Latin spargere (to sprinkle), and named the genus accordingly. The aspergillum is an asexual spore-forming structure common to all Aspergillus species; around one-third of species are also known to have a sexual stage. Aspergillus can be reduced in homes with 70% rubbing alcohol or with strong air purifiers, limiting its effects on the lungs.

Taxonomy

Species

Aspergillus consists of a few hundred species.

Growth and distribution

Aspergillus is defined as a group of conidial fungi (that is, fungi in an asexual state). Some of them, however, are known to have a teleomorph (sexual state) in the Ascomycota. With DNA evidence, all members of the genus Aspergillus are members of the Ascomycota. Members of the genus possess the ability to grow where a high osmotic pressure exists (high concentration of sugar, salt, etc.). Aspergillus species are highly aerobic and are found in almost all oxygen-rich environments, where they commonly grow as molds on the surface of a substrate, as a result of the high oxygen tension. Commonly, fungi grow on carbon-rich substrates like monosaccharides (such as glucose) and polysaccharides (such as amylose). Aspergillus species are common contaminants of starchy foods (such as bread and potatoes), and grow in or on many plants and trees. In addition to growth on carbon sources, many species of Aspergillus demonstrate oligotrophy, being capable of growing in nutrient-depleted environments, or in environments with a complete lack of key nutrients. Aspergillus niger is a prime example of this; it can be found growing on damp walls, as a major component of mildew. 
Several species of Aspergillus, including A. niger and A. fumigatus, will readily colonise buildings, favouring warm and damp or humid areas such as bathrooms and around window frames. Aspergillus spores are found in their millions in pillows.

Commercial importance

Species of Aspergillus are important medically and commercially. Some species can cause infection in humans and other animals. Some infections found in animals have been studied for years, while some species found in animals have been described as new and specific to the investigated disease, and others have been known by names already in use for organisms such as saprophytes. More than 60 Aspergillus species are medically relevant pathogens. In humans, they cause a range of diseases, from infections of the external ear to skin lesions and ulcers classed as mycetomas. Other species are important in commercial microbial fermentations. For example, alcoholic beverages such as Japanese sake are often made from rice or other starchy ingredients (like manioc), rather than from grapes or malted barley. Typical microorganisms used to make alcohol, such as yeasts of the genus Saccharomyces, cannot ferment these starches. Therefore, koji mold such as Aspergillus oryzae is used to first break down the starches into simpler sugars. Members of the genus are also sources of natural products that can be used in the development of medications to treat human disease. Perhaps the largest application of Aspergillus niger is as the major source of citric acid; this organism accounts for over 99% of global citric acid production, or more than 1.4 million tonnes per year. A. niger is also commonly used for the production of native and foreign enzymes, including glucose oxidase, lysozyme, and lactase. In these instances, the culture is rarely grown on a solid substrate, although this is still common practice in Japan, but is more often grown as a submerged culture in a bioreactor. 
In this way, the most important parameters can be strictly controlled, and maximal productivity can be achieved. This process also makes it far easier to separate the chemical or enzyme of importance from the medium, and is therefore far more cost-effective.

Research

A. nidulans (Emericella nidulans) has been used as a research organism for many years and was used by Guido Pontecorvo to demonstrate parasexuality in fungi. Recently, A. nidulans was one of the pioneering organisms to have its genome sequenced by researchers at the Broad Institute. As of 2008, a further seven Aspergillus species have had their genomes sequenced: the industrially useful A. niger (two strains), A. oryzae, and A. terreus, and the pathogens A. clavatus, A. fischerianus (Neosartorya fischeri), A. flavus, and A. fumigatus (two strains). A. fischerianus is hardly ever pathogenic, but is very closely related to the common pathogen A. fumigatus; it was sequenced in part to better understand A. fumigatus pathogenicity.

Sexual reproduction

Of the 250 species of aspergilli, about 64% have no known sexual state. However, many of these species likely have an as yet unidentified sexual stage. Sexual reproduction occurs in two fundamentally different ways in fungi: outcrossing (in heterothallic fungi), in which two different individuals contribute nuclei, and self-fertilization or selfing (in homothallic fungi), in which both nuclei are derived from the same individual. In recent years, sexual cycles have been discovered in numerous species previously thought to be asexual. These discoveries reflect recent experimental focus on species of particular relevance to humans. A. fumigatus is the most common species to cause disease in immunodeficient humans. In 2009, A. fumigatus was shown to have a heterothallic, fully functional sexual cycle; isolates of complementary mating types are required for sex to occur. A. flavus is the major producer of carcinogenic aflatoxins in crops worldwide.
It is also an opportunistic human and animal pathogen, causing aspergillosis in immunocompromised individuals. In 2009, a sexual state of this heterothallic fungus was found to arise when strains of opposite mating types were cultured together under appropriate conditions. A. lentulus is an opportunistic human pathogen that causes invasive aspergillosis with high mortality rates. In 2013, A. lentulus was found to have a heterothallic, functional sexual breeding system. A. terreus is commonly used in industry to produce important organic acids and enzymes, and was the initial source for the cholesterol-lowering drug lovastatin. In 2013, A. terreus was found to be capable of sexual reproduction when strains of opposite mating types were crossed under appropriate culture conditions. These findings with Aspergillus species are consistent with accumulating evidence, from studies of other eukaryotic species, that sex was likely present in the common ancestor of all eukaryotes. A. nidulans, a homothallic fungus, is capable of self-fertilization. Selfing involves activation of the same mating pathways characteristic of sex in outcrossing species; that is, self-fertilization does not bypass the pathways required for outcrossing sex, but instead requires activation of these pathways within a single individual. Among those Aspergillus species that exhibit a sexual cycle, the overwhelming majority in nature are homothallic (self-fertilizing). This observation suggests that Aspergillus species can generally maintain sex even though little genetic variability is produced by homothallic self-fertilization. A. fumigatus, a heterothallic (outcrossing) fungus that occurs in areas with widely different climates and environments, also displays little genetic variability, whether within geographic regions or on a global scale, again suggesting that sex, in this case outcrossing sex, can be maintained even when little genetic variability is produced.
Genomics

The simultaneous publication of three Aspergillus genome manuscripts in Nature in December 2005 established the genus as the leading filamentous fungal genus for comparative genomic studies. Like most major genome projects, these efforts were collaborations between a large sequencing centre and the respective community of scientists. For example, The Institute for Genomic Research (TIGR) worked with the A. fumigatus community. A. nidulans was sequenced at the Broad Institute. A. oryzae was sequenced in Japan at the National Institute of Advanced Industrial Science and Technology. The Joint Genome Institute of the Department of Energy has released sequence data for a citric acid-producing strain of A. niger. TIGR, now renamed the J. Craig Venter Institute, is currently spearheading a project on the A. flavus genome. Genome sizes for sequenced species of Aspergillus range from about 29.3 Mb for A. fumigatus to 37.1 Mb for A. oryzae, while the numbers of predicted genes vary from about 9,926 for A. fumigatus to about 12,071 for A. oryzae. The genome of an enzyme-producing strain of A. niger is of intermediate size, at 33.9 Mb.

Pathogens

Some Aspergillus species cause serious disease in humans and animals. The most common pathogenic species are A. fumigatus and A. flavus; the latter produces aflatoxin, which is both a toxin and a carcinogen and can contaminate foods such as nuts. The most common species causing allergic disease are A. fumigatus and A. clavatus. Other species are important as agricultural pathogens. Aspergillus spp. cause disease on many grain crops, especially maize, and some variants synthesize mycotoxins, including aflatoxin. Aspergillus can cause neonatal infections. A. fumigatus (the most common species) infections are primary pulmonary infections that can potentially become a rapidly necrotizing pneumonia with the potential to disseminate.
The organism can be differentiated from other common mold infections by the fact that it takes on a mold form both in the environment and in the host (unlike Candida albicans, which is dimorphic: a mold in the environment and a yeast in the body).

Aspergillosis

Aspergillosis is the group of diseases caused by Aspergillus. The most common species among paranasal sinus infections associated with aspergillosis is A. fumigatus. The symptoms include fever, cough, chest pain, or breathlessness, which also occur in many other illnesses, so diagnosis can be difficult. Usually, only patients with already weakened immune systems, or who suffer from other lung conditions, are susceptible. In humans, the major forms of disease are:

Allergic bronchopulmonary aspergillosis, which affects patients with respiratory diseases such as asthma, cystic fibrosis, and sinusitis

Acute invasive aspergillosis, a form that grows into surrounding tissue, more common in those with weakened immune systems such as AIDS or chemotherapy patients

Disseminated invasive aspergillosis, an infection spread widely through the body

Aspergilloma, a "fungus ball" that can form within cavities such as the lung

Aspergillosis of the air passages is also frequently reported in birds, and certain species of Aspergillus have been known to infect insects.

See also

List of Aspergillus species
Mold health issues
Sick building syndrome
External links

FungiDB: An integrated functional genomics database for fungi and oomycetes
Aspergillus Genome Resources (NIH)
Aspergillus Comparative Database: comparative genomic resource at the Broad Institute
Central Aspergillus Data Repository
The Fungal Genetics Stock Center
The Aspergillus/Aspergillosis Website: an encyclopedia of Aspergillus for patients, doctors and scientists
Aspergillus surveillance project at a large tertiary-care hospital (PDF)
The Aspergillus Genome Database
Clinton's hands-off approach to ISIS mimics Obama's failing strategy

President Obama pauses during a talk about the war on terrorism and efforts to degrade and destroy the Islamic State group, during a news conference at the Pentagon in Washington, D.C., on Aug. 4, 2016. (AP Photo/J. Scott Applewhite)

It's genocide. And although most humans have committed to making sure genocide never happens again, Hillary Clinton has made a political commitment not to send U.S. troops to stop this growing genocide, no matter what the U.S. intelligence tells her.

"Donald Trump has been all over the place on ISIS. He's talked about letting Syria become a free zone for ISIS. A major country in the Middle East that could launch attacks against us and others. He's talked about sending ground troops — American ground troops. Well, that is off the table as far as I am concerned," Clinton said.

It is beyond troubling to see a politician make such a naive political promise about our U.S. national security. ISIS will certainly be pleased to know that whatever they do, wherever they go, however brutal they continue to be, Hillary Clinton still isn't sending U.S. troops to stop them. It's perplexing that she has kept this weak stance in the face of growing turmoil throughout the world, but especially in Syria.

It is the exact failing strategy that President Obama has been using in Syria for the past five years. Obama also made a political promise, to the left-wing Democrat base, that he wouldn't send U.S. troops overseas. His political promise, made on the campaign trail in 2008, supersedes any U.S. intelligence information made available to him by the Central Intelligence Agency or recommended by military and diplomatic advisers. Last month, in his speech to the Democratic National Convention, President Obama admitted that his ISIS strategy won't defeat ISIS over the next six months.
Obama conceded that ISIS will still be around when the next president takes office in January 2017, which is sadly consistent with his lack of strategy from Day One: He just won’t do what needs to be done.
NOW is the time to take charge of your career! What career goals do you want to accomplish this year? Do you want to change companies, change job roles within your current company, ask for a raise or land that promotion? Maybe you are just looking to expand your sphere of influence. Figuring out WHAT you want to accomplish is the first step in making it happen!

In my previous post I started discussing how to know if you're getting paid what you are worth. Today I will give you some tools to use in your research and some tips to help in your salary negotiations.

Find out what the position is currently paying for similar work in similar environments (including positions inside and outside the company) through —

Once you have the salary data, the best indicator of the marketplace is to combine a number of different salary ranges set for the same job by at least a half-dozen employers, rather than relying on just one company's salary range. Ideally, the more salary ranges for the same job in the same geographical area that you can compare, the better. Then, add all the minimums together and divide by the number of salaries you're comparing. Do the same for the midpoints (the average of the minimum and maximum), and the maximums. This will give you the range of the job's market value. Now you can determine where in that range your current salary falls. The norm is that the midpoint (the average of the minimum and maximum of the range) is ideally where you should be after 5 or so years in that position. This is not an absolute but more of a goal to assess your value.

And finally… The second thing you need to understand (and this is more difficult) in determining whether you are at the high end or low end of the pay scale is that various companies may offer different salary ranges due to the company's internal salary policies and practices.
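The averaging described above can be sketched in a few lines of code. All of the figures below are hypothetical examples, not real market data:

```javascript
// Combine several employers' salary ranges for the same job into one
// market-value range: average the minimums, the midpoints (min+max)/2,
// and the maximums across all the ranges collected.
function marketValueRange(ranges) {
  const n = ranges.length;
  const avg = (xs) => xs.reduce((a, b) => a + b, 0) / n;
  return {
    min: avg(ranges.map(r => r.min)),
    mid: avg(ranges.map(r => (r.min + r.max) / 2)),
    max: avg(ranges.map(r => r.max)),
  };
}

// Six employers' ranges for the same job (hypothetical figures):
const ranges = [
  { min: 50000, max: 70000 },
  { min: 52000, max: 74000 },
  { min: 48000, max: 68000 },
  { min: 55000, max: 75000 },
  { min: 51000, max: 71000 },
  { min: 50000, max: 72000 },
];

// Compare your current salary against this range; per the rule of
// thumb above, the midpoint is a rough 5-year target.
console.log(marketValueRange(ranges));
```

Note that averaging the midpoints gives the same number as the midpoint of the averaged minimum and maximum, so either way of computing it is fine.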
There may be hiring policies stating that no matter how much experience you have, the best starting salary offer will be no more than a small percentage above the minimum starting point. Of course, this is to help avoid internal salary issues. Also, despite your position and responsibilities, if the company has not kept up with the market movement for its current employees, then it is not very likely to pay more to a new hire than it does for current employees. So even though a company may recognize and value your experience and skill-set, the salary offer may be minimal to avoid any potential internal salary issues. After you have done your research and are more educated about your worth in the marketplace, it may be a good time to schedule a meeting with your manager to inquire about a potential pay raise! Now, go forth and prosper… and know your worth in the marketplace.

Everyone has bad days at work, but if your bad day stretches to a hundred bad days (!) then you may want to start shaking things up a bit. Twice during my own career I found myself in a frustrating and unchallenging job and stayed longer than I should have. Mostly because I was delusional and thought that if I proved my loyalty and stayed with the company long enough they'd reward me with a "new and improved" job (did I mention the delusional part?), but also because I was afraid of trying something new, and potentially failing. If you're in a similar situation and the thought of charting into unknown career territory makes you want to curl up under your office cube, then you may want to try career sampling – the art of dipping your toe into a pool of new career opportunities before diving in head first. One way to try career sampling is to work part-time. It's a great way to test drive a new job role, company or industry before committing to it full time.
Investing a little time up front to take on a part-time position is a much better strategy than investing all your time and realizing you've made a bad career choice. If you think you don't have the right experience, a great attitude and eagerness to learn can help get your foot in the door. And once you start proving yourself and showing results, a promotion to a full-time position could be just around the corner!

For those professionals employed right now, you may be wondering if getting a promotion, pay raise, or increased perks is even possible given this tough economy. The answer is "yes!" If you've added value and achieved results for the company in the past 12 months, then follow these five key strategies to help beef up your paycheck –

1. Focus on results. Many professionals make the mistake of focusing on how hard they've worked during the past year. Instead, you need to focus on results. State what you've accomplished to help the company save money or generate new revenue. For example, you may be able to say that you helped launch a new product that resulted in a 2-percent increase in market share. Or, that you implemented a new technology that saved the company $20,000 a year. Those are REAL results that prove you're adding value to the company's bottom line. When you're able to build a strong business case and prove that you're adding value to a company, you're much more likely to be financially rewarded.

2. Work with your manager. Many times, people see their manager as someone who stops them from getting a raise, when realistically your boss is your greatest ally. The key is to get as much face time with your manager as possible, preferably weekly one-on-one meetings. Focus on the results you're achieving, as well as making sure that expectations are aligned and that you're providing value on the right kinds of initiatives.
Also, when you meet for your year-end performance review, you can focus on what you've accomplished over the past 12 months (which is what most people do), but you should also set a vision of what you will be accomplishing in the next six months. This shows that you're a forward thinker, committed to the company, and that you're going to continue adding value to help the company be successful.

3. Clearly identify what you want. If you're working for a company that's doing well, then by all means you should ask for a raise! But if you're like most professionals, working for a company that's struggling to stay afloat, then a pay raise may not be realistic. Think outside the box about other perks that you could negotiate for yourself, such as a few more vacation days, one day a week to work from home, or access to the company's box seats at the next sporting event. There are always perks you can negotiate. Identify what you really want, clearly ask for it, and then make sure you walk away from the conversation with at least one great win for yourself!

4. Keep a good attitude. Right now it's a pretty tough environment to get a raise, so don't get discouraged if your manager tells you "no". Keep a good attitude and respond positively no matter what. A good friend of mine, Laura Browne, author of "Raise Rules for Women", states, "The goal is to position yourself favorably so that when the economy turns around and the company starts making money, you'll be the first in line to get a raise."

5. Set yourself up for success. If your manager turns down your request for a raise, a smart response is to simply say, "Help me understand what I would need to do to get a raise." Try to get a clear action plan of next steps, including priorities, goals, and milestones, that will set you up for success. Then, make sure you follow up by meeting with your manager every week, or at minimum every other week, to get feedback and help you stay on track. All companies want to keep great employees.
So no matter what the economy is doing, you should always prove how you're adding value to the company and at least ask for a pay increase. When you demonstrate that you consistently add to a company's bottom line, more than likely they'll do everything in their power to keep you on board.
Plasma high-density-lipoprotein cholesterol levels during long-term use of an oral contraceptive in Nigerian women.

Total and high-density-lipoprotein (HDL) cholesterol were estimated in 131 blood samples obtained from women who had been taking the oral contraceptive Noriday 1 + 50Fe (one packet contains 21 tablets of 1 mg of norethindrone + 0.05 mg of mestranol, and 7 tablets of 75 mg of ferrous fumarate) for 1-60 months. Thirty-five women who had never used oral contraception (OC) formed the control group. There was a significantly higher mean HDL cholesterol level and HDL cholesterol/total cholesterol ratio, but not total cholesterol level, in the women who had been using OC for 19-60 months. The values in women who had been using OC for 1-18 months were not significantly different from those in the control group. The increase in the HDL cholesterol level may depend not on the oestrogen content of the oral contraceptive but on the duration of its intake.
1. Field of the Invention

The present invention relates to an adjustment device, and more particularly to an adjustment device for a stacker trolley. The adjustment device allows the user to adjust the distance between the two forks and also the height of the two forks above the ground.

2. Description of Related Art

A conventional stacker trolley is shown in FIG. 7, wherein the stacker trolley has a body (70) with two forks (706), a bracket (90) securely connected to one end of the body (70) and a hydraulic pump (80) mounted on the body (70) for controlling upward/downward movement of the two forks (706). A yoke (701) is provided at one end of each of the two forks (706). The bracket (90) is composed of a crossbar (91) and two arms (92) each connected to one end of the crossbar (91). The crossbar (91) is securely connected to a bottom of the yoke (701) and the arms (92) are respectively and pivotally connected to a bottom of the hydraulic pump (80). When the user is using the stacker trolley and applies a force to the handle (801), the hydraulic fluid in the hydraulic pump (80) is pumped out so as to lift the forks (706).

However, when using the stacker trolley, a major drawback is that the yoke (701) is securely connected to the two forks (706). Hence, the distance between the two forks (706) is fixed and cannot be changed. When the size of the pallet to be moved by the trolley is changed from small to large or vice versa, the stacker trolley is no longer able to lift the pallet successfully and stably. Even when the pallet is actually lifted by the stacker trolley, because the forks (706) are indirectly yet securely connected to the hydraulic pump (80), the forks (706) become tilted. In this situation, the pallet and the cargo on top of the pallet will easily fall to the ground. To overcome the shortcomings, the present invention provides an improved adjustment device to mitigate and obviate the aforementioned problems.
The primary objective of the present invention is to provide an improved adjustment device for a stacker trolley to adjust the distance between the two forks, such that the stacker trolley is adaptable to pallets of different dimensions. Another objective of the present invention is to provide an improved adjustment device for the stacker trolley that maintains the forks horizontal when the forks are lifted. Other objects, advantages and novel features of the invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings.
Separation of lactic acid-producing bacteria from fermentation broth using a ceramic microfiltration membrane with constant permeate flow. The influence of several operating parameters on the critical flux in the separation of lactic acid-producing bacteria from fermentation broth was studied using a ceramic microfiltration membrane equipped with a permeate pump. The operating parameters studied were crossflow velocity over the membrane, bacterial cell concentration, protein concentration, and pH. The influence of the isoelectric point (IEP) of the membrane was also investigated. In the interval studied (5.3-10.8 m/s), the crossflow velocity had a marked effect on the critical flux. When the crossflow velocity was increased the critical flux also increased. The bacterial cells were retained by the membrane and the concentration of bacterial cells did not affect the critical flux in the interval studied (1.1-3.1 g/L). The critical flux decreased when the protein concentration was increased. It was found that the protein was adsorbed on the membrane surface and protein retention occurred even though the conditions were such that no filter cake was present on the membrane surface. When the pH of the medium was lowered from 6 to 5 (and then further to 4) the critical flux decreased from 76 L/m(2)h to zero at both pH 5 and pH 4. This was found to be due to the fact that the lowering in pH had affected the physiology of the bacterial cells so that the bacteria tended to adhere to the membrane and to each other. The critical flux, for wheat flour hydrolysate without particles, was much lower (28 L/m(2)h) when using a membrane with an IEP of 5.5 than the critical flux of a membrane with an IEP at pH 7 (96 L/m(2)h). This was found to be due to an increased affinity of the bacteria for the membrane with the lower IEP.
/*

Copyright 2008-2018 Clipperz Srl

This file is part of Clipperz, the online password manager.
For further information about its features and functionalities please
refer to http://www.clipperz.com.

* Clipperz is free software: you can redistribute it and/or modify it
  under the terms of the GNU Affero General Public License as published
  by the Free Software Foundation, either version 3 of the License, or
  (at your option) any later version.

* Clipperz is distributed in the hope that it will be useful, but
  WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
  See the GNU Affero General Public License for more details.

* You should have received a copy of the GNU Affero General Public
  License along with Clipperz. If not, see http://www.gnu.org/licenses/.

*/

//try { if (typeof(Clipperz.ByteArray) == 'undefined') { throw ""; }} catch (e) {
//	throw "Clipperz.Crypto.ECC depends on Clipperz.ByteArray!";
//}

if (typeof(Clipperz.Crypto.ECC) == 'undefined') { Clipperz.Crypto.ECC = {}; }
if (typeof(Clipperz.Crypto.ECC.BinaryField) == 'undefined') { Clipperz.Crypto.ECC.BinaryField = {}; }

Clipperz.Crypto.ECC.BinaryField.FiniteField = function(args) {
	args = args || {};
	this._modulus = args.modulus;

	return this;
}

Clipperz.Crypto.ECC.BinaryField.FiniteField.prototype = MochiKit.Base.update(null, {

	'asString': function() {
		return "Clipperz.Crypto.ECC.BinaryField.FiniteField (" + this.modulus().asString() + ")";
	},

	//-----------------------------------------------------------------------------

	'modulus': function() {
		return this._modulus;
	},

	//-----------------------------------------------------------------------------

	'_module': function(aValue) {
		var result;
		var modulusComparison;

		modulusComparison = Clipperz.Crypto.ECC.BinaryField.Value._compare(aValue, this.modulus()._value);

		if (modulusComparison < 0) {
			result = aValue;
		} else if (modulusComparison == 0) {
			result = [0];
		} else {
			var modulusBitSize;
			var resultBitSize;

			result = aValue;

			modulusBitSize = this.modulus().bitSize();
			resultBitSize = Clipperz.Crypto.ECC.BinaryField.Value._bitSize(result);
			while (resultBitSize >= modulusBitSize) {
				Clipperz.Crypto.ECC.BinaryField.Value._overwriteXor(result, Clipperz.Crypto.ECC.BinaryField.Value._shiftLeft(this.modulus()._value, resultBitSize - modulusBitSize));
				resultBitSize = Clipperz.Crypto.ECC.BinaryField.Value._bitSize(result);
			}
		}

		return result;
	},

	'module': function(aValue) {
		return new Clipperz.Crypto.ECC.BinaryField.Value(this._module(aValue._value.slice(0)));
	},

	//-----------------------------------------------------------------------------

	'_add': function(a, b) {
		return Clipperz.Crypto.ECC.BinaryField.Value._xor(a, b);
	},

	'_overwriteAdd': function(a, b) {
		Clipperz.Crypto.ECC.BinaryField.Value._overwriteXor(a, b);
	},

	'add': function(a, b) {
		return new Clipperz.Crypto.ECC.BinaryField.Value(this._add(a._value, b._value));
	},

	//-----------------------------------------------------------------------------

	'negate': function(aValue) {
		return aValue.clone();
	},

	//-----------------------------------------------------------------------------

	'_multiply': function(a, b) {
		var result;
		var valueToXor;
		var i, c;

		result = [0];
		valueToXor = b;
		c = Clipperz.Crypto.ECC.BinaryField.Value._bitSize(a);
		for (i=0; i<c; i++) {
			if (Clipperz.Crypto.ECC.BinaryField.Value._isBitSet(a, i) === true) {
				Clipperz.Crypto.ECC.BinaryField.Value._overwriteXor(result, valueToXor);
			}
			valueToXor = Clipperz.Crypto.ECC.BinaryField.Value._overwriteShiftLeft(valueToXor, 1);
		}
		result = this._module(result);

		return result;
	},

	'multiply': function(a, b) {
		return new Clipperz.Crypto.ECC.BinaryField.Value(this._multiply(a._value, b._value));
	},

	//-----------------------------------------------------------------------------

	'_fastMultiply': function(a, b) {
		var result;
		var B;
		var i, c;

		result = [0];
		B = b.slice(0);	// Is this array copy avoidable?
		c = 32;
		for (i=0; i<c; i++) {
			var ii, cc;

			cc = a.length;
			for (ii=0; ii<cc; ii++) {
				if (((a[ii] >>> i) & 0x01) == 1) {
					Clipperz.Crypto.ECC.BinaryField.Value._overwriteXor(result, B, ii);
				}
			}

			if (i < (c-1)) {
				B = Clipperz.Crypto.ECC.BinaryField.Value._overwriteShiftLeft(B, 1);
			}
		}
		result = this._module(result);

		return result;
	},

	'fastMultiply': function(a, b) {
		return new Clipperz.Crypto.ECC.BinaryField.Value(this._fastMultiply(a._value, b._value));
	},

	//-----------------------------------------------------------------------------
	//
	//	Guide to Elliptic Curve Cryptography
	//	Darrel Hankerson, Alfred Menezes, Scott Vanstone
	//	- Pag: 49, Algorithm 2.34
	//
	//-----------------------------------------------------------------------------

	'_square': function(aValue) {
		var result;
		var value;
		var c, i;
		var precomputedValues;

		value = aValue;
		result = new Array(value.length * 2);
		precomputedValues = Clipperz.Crypto.ECC.BinaryField.FiniteField.squarePrecomputedBytes;

		c = value.length;
		for (i=0; i<c; i++) {
			result[i*2] = precomputedValues[(value[i] & 0x000000ff)];
			result[i*2] |= ((precomputedValues[(value[i] & 0x0000ff00) >>> 8]) << 16);

			result[i*2 + 1] = precomputedValues[(value[i] & 0x00ff0000) >>> 16];
			result[i*2 + 1] |= ((precomputedValues[(value[i] & 0xff000000) >>> 24]) << 16);
		}

		return this._module(result);
	},

	'square': function(aValue) {
		return new Clipperz.Crypto.ECC.BinaryField.Value(this._square(aValue._value));
	},

	//-----------------------------------------------------------------------------

	'_inverse': function(aValue) {
		var result;
		var b, c;
		var u, v;

//		b = Clipperz.Crypto.ECC.BinaryField.Value.I._value;
		b = [1];
//		c = Clipperz.Crypto.ECC.BinaryField.Value.O._value;
		c = [0];
		u = this._module(aValue);
		v = this.modulus()._value.slice(0);

		while (Clipperz.Crypto.ECC.BinaryField.Value._bitSize(u) > 1) {
			var bitDifferenceSize;

			bitDifferenceSize = Clipperz.Crypto.ECC.BinaryField.Value._bitSize(u) - Clipperz.Crypto.ECC.BinaryField.Value._bitSize(v);
			if (bitDifferenceSize < 0) {
				var swap;

				swap = u;
				u = v;
				v = swap;

				swap = c;
				c = b;
				b = swap;

				bitDifferenceSize = -bitDifferenceSize;
			}

			u = this._add(u, Clipperz.Crypto.ECC.BinaryField.Value._shiftLeft(v, bitDifferenceSize));
			b = this._add(b, Clipperz.Crypto.ECC.BinaryField.Value._shiftLeft(c, bitDifferenceSize));
//			this._overwriteAdd(u, Clipperz.Crypto.ECC.BinaryField.Value._shiftLeft(v, bitDifferenceSize));
//			this._overwriteAdd(b, Clipperz.Crypto.ECC.BinaryField.Value._shiftLeft(c, bitDifferenceSize));
		}

		result = this._module(b);

		return result;
	},

	'inverse': function(aValue) {
		return new Clipperz.Crypto.ECC.BinaryField.Value(this._inverse(aValue._value));
	},

	//-----------------------------------------------------------------------------
	__syntaxFix__: "syntax fix"
});

Clipperz.Crypto.ECC.BinaryField.FiniteField.squarePrecomputedBytes = [
	0x0000,	//   0 = 0000 0000 -> 0000 0000 0000 0000
	0x0001,	//   1 = 0000 0001 -> 0000 0000 0000 0001
	0x0004,	//   2 = 0000 0010 -> 0000 0000 0000 0100
	0x0005,	//   3 = 0000 0011 -> 0000 0000 0000 0101
	0x0010,	//   4 = 0000 0100 -> 0000 0000 0001 0000
	0x0011,	//   5 = 0000 0101 -> 0000 0000 0001 0001
	0x0014,	//   6 = 0000 0110 -> 0000 0000 0001 0100
	0x0015,	//   7 = 0000 0111 -> 0000 0000 0001 0101
	0x0040,	//   8 = 0000 1000 -> 0000 0000 0100 0000
	0x0041,	//   9 = 0000 1001 -> 0000 0000 0100 0001
	0x0044,	//  10 = 0000 1010 -> 0000 0000 0100 0100
	0x0045,	//  11 = 0000 1011 -> 0000 0000 0100 0101
	0x0050,	//  12 = 0000 1100 -> 0000 0000 0101 0000
	0x0051,	//  13 = 0000 1101 -> 0000 0000 0101 0001
	0x0054,	//  14 = 0000 1110 -> 0000 0000 0101 0100
	0x0055,	//  15 = 0000 1111 -> 0000 0000 0101 0101
	0x0100,	//  16 = 0001 0000 -> 0000 0001 0000 0000
	0x0101,	//  17 = 0001 0001 -> 0000 0001 0000 0001
	0x0104,	//  18 = 0001 0010 -> 0000 0001 0000 0100
	0x0105,	//  19 = 0001 0011 -> 0000 0001 0000 0101
	0x0110,	//  20 = 0001 0100 -> 0000 0001 0001 0000
	0x0111,	//  21 = 0001 0101 -> 0000 0001 0001 0001
	0x0114,	//  22 = 0001 0110 -> 0000 0001 0001 0100
	0x0115,	//  23 = 0001 0111 -> 0000 0001 0001 0101
0x0140, // 24 = 0001 1000 -> 0000 0001 0100 0000 0x0141, // 25 = 0001 1001 -> 0000 0001 0100 0001 0x0144, // 26 = 0001 1010 -> 0000 0001 0100 0100 0x0145, // 27 = 0001 1011 -> 0000 0001 0100 0101 0x0150, // 28 = 0001 1100 -> 0000 0001 0101 0000 0x0151, // 28 = 0001 1101 -> 0000 0001 0101 0001 0x0154, // 30 = 0001 1110 -> 0000 0001 0101 0100 0x0155, // 31 = 0001 1111 -> 0000 0001 0101 0101 0x0400, // 32 = 0010 0000 -> 0000 0100 0000 0000 0x0401, // 33 = 0010 0001 -> 0000 0100 0000 0001 0x0404, // 34 = 0010 0010 -> 0000 0100 0000 0100 0x0405, // 35 = 0010 0011 -> 0000 0100 0000 0101 0x0410, // 36 = 0010 0100 -> 0000 0100 0001 0000 0x0411, // 37 = 0010 0101 -> 0000 0100 0001 0001 0x0414, // 38 = 0010 0110 -> 0000 0100 0001 0100 0x0415, // 39 = 0010 0111 -> 0000 0100 0001 0101 0x0440, // 40 = 0010 1000 -> 0000 0100 0100 0000 0x0441, // 41 = 0010 1001 -> 0000 0100 0100 0001 0x0444, // 42 = 0010 1010 -> 0000 0100 0100 0100 0x0445, // 43 = 0010 1011 -> 0000 0100 0100 0101 0x0450, // 44 = 0010 1100 -> 0000 0100 0101 0000 0x0451, // 45 = 0010 1101 -> 0000 0100 0101 0001 0x0454, // 46 = 0010 1110 -> 0000 0100 0101 0100 0x0455, // 47 = 0010 1111 -> 0000 0100 0101 0101 0x0500, // 48 = 0011 0000 -> 0000 0101 0000 0000 0x0501, // 49 = 0011 0001 -> 0000 0101 0000 0001 0x0504, // 50 = 0011 0010 -> 0000 0101 0000 0100 0x0505, // 51 = 0011 0011 -> 0000 0101 0000 0101 0x0510, // 52 = 0011 0100 -> 0000 0101 0001 0000 0x0511, // 53 = 0011 0101 -> 0000 0101 0001 0001 0x0514, // 54 = 0011 0110 -> 0000 0101 0001 0100 0x0515, // 55 = 0011 0111 -> 0000 0101 0001 0101 0x0540, // 56 = 0011 1000 -> 0000 0101 0100 0000 0x0541, // 57 = 0011 1001 -> 0000 0101 0100 0001 0x0544, // 58 = 0011 1010 -> 0000 0101 0100 0100 0x0545, // 59 = 0011 1011 -> 0000 0101 0100 0101 0x0550, // 60 = 0011 1100 -> 0000 0101 0101 0000 0x0551, // 61 = 0011 1101 -> 0000 0101 0101 0001 0x0554, // 62 = 0011 1110 -> 0000 0101 0101 0100 0x0555, // 63 = 0011 1111 -> 0000 0101 0101 0101 0x1000, // 64 = 0100 0000 -> 0001 0000 
0000 0000 0x1001, // 65 = 0100 0001 -> 0001 0000 0000 0001 0x1004, // 66 = 0100 0010 -> 0001 0000 0000 0100 0x1005, // 67 = 0100 0011 -> 0001 0000 0000 0101 0x1010, // 68 = 0100 0100 -> 0001 0000 0001 0000 0x1011, // 69 = 0100 0101 -> 0001 0000 0001 0001 0x1014, // 70 = 0100 0110 -> 0001 0000 0001 0100 0x1015, // 71 = 0100 0111 -> 0001 0000 0001 0101 0x1040, // 72 = 0100 1000 -> 0001 0000 0100 0000 0x1041, // 73 = 0100 1001 -> 0001 0000 0100 0001 0x1044, // 74 = 0100 1010 -> 0001 0000 0100 0100 0x1045, // 75 = 0100 1011 -> 0001 0000 0100 0101 0x1050, // 76 = 0100 1100 -> 0001 0000 0101 0000 0x1051, // 77 = 0100 1101 -> 0001 0000 0101 0001 0x1054, // 78 = 0100 1110 -> 0001 0000 0101 0100 0x1055, // 79 = 0100 1111 -> 0001 0000 0101 0101 0x1100, // 80 = 0101 0000 -> 0001 0001 0000 0000 0x1101, // 81 = 0101 0001 -> 0001 0001 0000 0001 0x1104, // 82 = 0101 0010 -> 0001 0001 0000 0100 0x1105, // 83 = 0101 0011 -> 0001 0001 0000 0101 0x1110, // 84 = 0101 0100 -> 0001 0001 0001 0000 0x1111, // 85 = 0101 0101 -> 0001 0001 0001 0001 0x1114, // 86 = 0101 0110 -> 0001 0001 0001 0100 0x1115, // 87 = 0101 0111 -> 0001 0001 0001 0101 0x1140, // 88 = 0101 1000 -> 0001 0001 0100 0000 0x1141, // 89 = 0101 1001 -> 0001 0001 0100 0001 0x1144, // 90 = 0101 1010 -> 0001 0001 0100 0100 0x1145, // 91 = 0101 1011 -> 0001 0001 0100 0101 0x1150, // 92 = 0101 1100 -> 0001 0001 0101 0000 0x1151, // 93 = 0101 1101 -> 0001 0001 0101 0001 0x1154, // 94 = 0101 1110 -> 0001 0001 0101 0100 0x1155, // 95 = 0101 1111 -> 0001 0001 0101 0101 0x1400, // 96 = 0110 0000 -> 0001 0100 0000 0000 0x1401, // 97 = 0110 0001 -> 0001 0100 0000 0001 0x1404, // 98 = 0110 0010 -> 0001 0100 0000 0100 0x1405, // 99 = 0110 0011 -> 0001 0100 0000 0101 0x1410, // 100 = 0110 0100 -> 0001 0100 0001 0000 0x1411, // 101 = 0110 0101 -> 0001 0100 0001 0001 0x1414, // 102 = 0110 0110 -> 0001 0100 0001 0100 0x1415, // 103 = 0110 0111 -> 0001 0100 0001 0101 0x1440, // 104 = 0110 1000 -> 0001 0100 0100 0000 0x1441, // 105 = 0110 
1001 -> 0001 0100 0100 0001 0x1444, // 106 = 0110 1010 -> 0001 0100 0100 0100 0x1445, // 107 = 0110 1011 -> 0001 0100 0100 0101 0x1450, // 108 = 0110 1100 -> 0001 0100 0101 0000 0x1451, // 109 = 0110 1101 -> 0001 0100 0101 0001 0x1454, // 110 = 0110 1110 -> 0001 0100 0101 0100 0x1455, // 111 = 0110 1111 -> 0001 0100 0101 0101 0x1500, // 112 = 0111 0000 -> 0001 0101 0000 0000 0x1501, // 113 = 0111 0001 -> 0001 0101 0000 0001 0x1504, // 114 = 0111 0010 -> 0001 0101 0000 0100 0x1505, // 115 = 0111 0011 -> 0001 0101 0000 0101 0x1510, // 116 = 0111 0100 -> 0001 0101 0001 0000 0x1511, // 117 = 0111 0101 -> 0001 0101 0001 0001 0x1514, // 118 = 0111 0110 -> 0001 0101 0001 0100 0x1515, // 119 = 0111 0111 -> 0001 0101 0001 0101 0x1540, // 120 = 0111 1000 -> 0001 0101 0100 0000 0x1541, // 121 = 0111 1001 -> 0001 0101 0100 0001 0x1544, // 122 = 0111 1010 -> 0001 0101 0100 0100 0x1545, // 123 = 0111 1011 -> 0001 0101 0100 0101 0x1550, // 124 = 0111 1100 -> 0001 0101 0101 0000 0x1551, // 125 = 0111 1101 -> 0001 0101 0101 0001 0x1554, // 126 = 0111 1110 -> 0001 0101 0101 0100 0x1555, // 127 = 0111 1111 -> 0001 0101 0101 0101 0x4000, // 128 = 1000 0000 -> 0100 0000 0000 0000 0x4001, // 129 = 1000 0001 -> 0100 0000 0000 0001 0x4004, // 130 = 1000 0010 -> 0100 0000 0000 0100 0x4005, // 131 = 1000 0011 -> 0100 0000 0000 0101 0x4010, // 132 = 1000 0100 -> 0100 0000 0001 0000 0x4011, // 133 = 1000 0101 -> 0100 0000 0001 0001 0x4014, // 134 = 1000 0110 -> 0100 0000 0001 0100 0x4015, // 135 = 1000 0111 -> 0100 0000 0001 0101 0x4040, // 136 = 1000 1000 -> 0100 0000 0100 0000 0x4041, // 137 = 1000 1001 -> 0100 0000 0100 0001 0x4044, // 138 = 1000 1010 -> 0100 0000 0100 0100 0x4045, // 139 = 1000 1011 -> 0100 0000 0100 0101 0x4050, // 140 = 1000 1100 -> 0100 0000 0101 0000 0x4051, // 141 = 1000 1101 -> 0100 0000 0101 0001 0x4054, // 142 = 1000 1110 -> 0100 0000 0101 0100 0x4055, // 143 = 1000 1111 -> 0100 0000 0101 0101 0x4100, // 144 = 1001 0000 -> 0100 0001 0000 0000 0x4101, // 145 = 1001 
0001 -> 0100 0001 0000 0001 0x4104, // 146 = 1001 0010 -> 0100 0001 0000 0100 0x4105, // 147 = 1001 0011 -> 0100 0001 0000 0101 0x4110, // 148 = 1001 0100 -> 0100 0001 0001 0000 0x4111, // 149 = 1001 0101 -> 0100 0001 0001 0001 0x4114, // 150 = 1001 0110 -> 0100 0001 0001 0100 0x4115, // 151 = 1001 0111 -> 0100 0001 0001 0101 0x4140, // 152 = 1001 1000 -> 0100 0001 0100 0000 0x4141, // 153 = 1001 1001 -> 0100 0001 0100 0001 0x4144, // 154 = 1001 1010 -> 0100 0001 0100 0100 0x4145, // 155 = 1001 1011 -> 0100 0001 0100 0101 0x4150, // 156 = 1001 1100 -> 0100 0001 0101 0000 0x4151, // 157 = 1001 1101 -> 0100 0001 0101 0001 0x4154, // 158 = 1001 1110 -> 0100 0001 0101 0100 0x4155, // 159 = 1001 1111 -> 0100 0001 0101 0101 0x4400, // 160 = 1010 0000 -> 0100 0100 0000 0000 0x4401, // 161 = 1010 0001 -> 0100 0100 0000 0001 0x4404, // 162 = 1010 0010 -> 0100 0100 0000 0100 0x4405, // 163 = 1010 0011 -> 0100 0100 0000 0101 0x4410, // 164 = 1010 0100 -> 0100 0100 0001 0000 0x4411, // 165 = 1010 0101 -> 0100 0100 0001 0001 0x4414, // 166 = 1010 0110 -> 0100 0100 0001 0100 0x4415, // 167 = 1010 0111 -> 0100 0100 0001 0101 0x4440, // 168 = 1010 1000 -> 0100 0100 0100 0000 0x4441, // 169 = 1010 1001 -> 0100 0100 0100 0001 0x4444, // 170 = 1010 1010 -> 0100 0100 0100 0100 0x4445, // 171 = 1010 1011 -> 0100 0100 0100 0101 0x4450, // 172 = 1010 1100 -> 0100 0100 0101 0000 0x4451, // 173 = 1010 1101 -> 0100 0100 0101 0001 0x4454, // 174 = 1010 1110 -> 0100 0100 0101 0100 0x4455, // 175 = 1010 1111 -> 0100 0100 0101 0101 0x4500, // 176 = 1011 0000 -> 0100 0101 0000 0000 0x4501, // 177 = 1011 0001 -> 0100 0101 0000 0001 0x4504, // 178 = 1011 0010 -> 0100 0101 0000 0100 0x4505, // 179 = 1011 0011 -> 0100 0101 0000 0101 0x4510, // 180 = 1011 0100 -> 0100 0101 0001 0000 0x4511, // 181 = 1011 0101 -> 0100 0101 0001 0001 0x4514, // 182 = 1011 0110 -> 0100 0101 0001 0100 0x4515, // 183 = 1011 0111 -> 0100 0101 0001 0101 0x4540, // 184 = 1011 1000 -> 0100 0101 0100 0000 0x4541, // 185 = 1011 
1001 -> 0100 0101 0100 0001 0x4544, // 186 = 1011 1010 -> 0100 0101 0100 0100 0x4545, // 187 = 1011 1011 -> 0100 0101 0100 0101 0x4550, // 188 = 1011 1100 -> 0100 0101 0101 0000 0x4551, // 189 = 1011 1101 -> 0100 0101 0101 0001 0x4554, // 190 = 1011 1110 -> 0100 0101 0101 0100 0x4555, // 191 = 1011 1111 -> 0100 0101 0101 0101 0x5000, // 192 = 1100 0000 -> 0101 0000 0000 0000 0x5001, // 193 = 1100 0001 -> 0101 0000 0000 0001 0x5004, // 194 = 1100 0010 -> 0101 0000 0000 0100 0x5005, // 195 = 1100 0011 -> 0101 0000 0000 0101 0x5010, // 196 = 1100 0100 -> 0101 0000 0001 0000 0x5011, // 197 = 1100 0101 -> 0101 0000 0001 0001 0x5014, // 198 = 1100 0110 -> 0101 0000 0001 0100 0x5015, // 199 = 1100 0111 -> 0101 0000 0001 0101 0x5040, // 200 = 1100 1000 -> 0101 0000 0100 0000 0x5041, // 201 = 1100 1001 -> 0101 0000 0100 0001 0x5044, // 202 = 1100 1010 -> 0101 0000 0100 0100 0x5045, // 203 = 1100 1011 -> 0101 0000 0100 0101 0x5050, // 204 = 1100 1100 -> 0101 0000 0101 0000 0x5051, // 205 = 1100 1101 -> 0101 0000 0101 0001 0x5054, // 206 = 1100 1110 -> 0101 0000 0101 0100 0x5055, // 207 = 1100 1111 -> 0101 0000 0101 0101 0x5100, // 208 = 1101 0000 -> 0101 0001 0000 0000 0x5101, // 209 = 1101 0001 -> 0101 0001 0000 0001 0x5104, // 210 = 1101 0010 -> 0101 0001 0000 0100 0x5105, // 211 = 1101 0011 -> 0101 0001 0000 0101 0x5110, // 212 = 1101 0100 -> 0101 0001 0001 0000 0x5111, // 213 = 1101 0101 -> 0101 0001 0001 0001 0x5114, // 214 = 1101 0110 -> 0101 0001 0001 0100 0x5115, // 215 = 1101 0111 -> 0101 0001 0001 0101 0x5140, // 216 = 1101 1000 -> 0101 0001 0100 0000 0x5141, // 217 = 1101 1001 -> 0101 0001 0100 0001 0x5144, // 218 = 1101 1010 -> 0101 0001 0100 0100 0x5145, // 219 = 1101 1011 -> 0101 0001 0100 0101 0x5150, // 220 = 1101 1100 -> 0101 0001 0101 0000 0x5151, // 221 = 1101 1101 -> 0101 0001 0101 0001 0x5154, // 222 = 1101 1110 -> 0101 0001 0101 0100 0x5155, // 223 = 1101 1111 -> 0101 0001 0101 0101 0x5400, // 224 = 1110 0000 -> 0101 0100 0000 0000 0x5401, // 225 = 1110 
0001 -> 0101 0100 0000 0001 0x5404, // 226 = 1110 0010 -> 0101 0100 0000 0100 0x5405, // 227 = 1110 0011 -> 0101 0100 0000 0101 0x5410, // 228 = 1110 0100 -> 0101 0100 0001 0000 0x5411, // 229 = 1110 0101 -> 0101 0100 0001 0001 0x5414, // 230 = 1110 0110 -> 0101 0100 0001 0100 0x5415, // 231 = 1110 0111 -> 0101 0100 0001 0101 0x5440, // 232 = 1110 1000 -> 0101 0100 0100 0000 0x5441, // 233 = 1110 1001 -> 0101 0100 0100 0001 0x5444, // 234 = 1110 1010 -> 0101 0100 0100 0100 0x5445, // 235 = 1110 1011 -> 0101 0100 0100 0101 0x5450, // 236 = 1110 1100 -> 0101 0100 0101 0000 0x5451, // 237 = 1110 1101 -> 0101 0100 0101 0001 0x5454, // 238 = 1110 1110 -> 0101 0100 0101 0100 0x5455, // 239 = 1110 1111 -> 0101 0100 0101 0101 0x5500, // 240 = 1111 0000 -> 0101 0101 0000 0000 0x5501, // 241 = 1111 0001 -> 0101 0101 0000 0001 0x5504, // 242 = 1111 0010 -> 0101 0101 0000 0100 0x5505, // 243 = 1111 0011 -> 0101 0101 0000 0101 0x5510, // 244 = 1111 0100 -> 0101 0101 0001 0000 0x5511, // 245 = 1111 0101 -> 0101 0101 0001 0001 0x5514, // 246 = 1111 0110 -> 0101 0101 0001 0100 0x5515, // 247 = 1111 0111 -> 0101 0101 0001 0101 0x5540, // 248 = 1111 1000 -> 0101 0101 0100 0000 0x5541, // 249 = 1111 1001 -> 0101 0101 0100 0001 0x5544, // 250 = 1111 1010 -> 0101 0101 0100 0100 0x5545, // 251 = 1111 1011 -> 0101 0101 0100 0101 0x5550, // 252 = 1111 1100 -> 0101 0101 0101 0000 0x5551, // 253 = 1111 1101 -> 0101 0101 0101 0001 0x5554, // 254 = 1111 1110 -> 0101 0101 0101 0100 0x5555 // 255 = 1111 1111 -> 0101 0101 0101 0101 ]
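The 256-entry table above follows a single rule: squaring a polynomial over GF(2) interleaves a zero after every bit, so bit k of the input byte lands at bit 2k of a 16-bit result. The helper below is a sketch (not part of the Clipperz source; the name `buildSquareTable` is invented here) that rebuilds the table from that rule, which is handy for sanity-checking the hand-written entries:

```javascript
// Hypothetical helper, not in the Clipperz source: recomputes
// squarePrecomputedBytes. For each byte n, bit k of n is moved to
// bit 2*k of the 16-bit result (a zero is interleaved after every bit).
function buildSquareTable() {
	var table = new Array(256);
	var n, bit;

	for (n = 0; n < 256; n++) {
		var spread = 0;
		for (bit = 0; bit < 8; bit++) {
			if ((n >>> bit) & 0x01) {
				spread |= (1 << (2 * bit));
			}
		}
		table[n] = spread;
	}

	return table;
}
```

For example, `buildSquareTable()[24]` is `0x0140`, matching the table entry for 24; comparing the generated array against the literal table catches transcription slips in the comments or values.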
Q: How do I close an AlertDialog using a button in its own layout?

I create the AlertDialog like this:

@Override
protected Dialog onCreateDialog(int id) {
    AlertDialog.Builder adb = new AlertDialog.Builder(this);
    adb.setTitle(R.string.dlg_fonts);
    view = (LinearLayout) getLayoutInflater().inflate(R.layout.dialog, null);
    adb.setView(view);
    tvDlg = (TextView) view.findViewById(R.id.tvDlg);
    etDlg = (EditText) view.findViewById(R.id.etDlg);
    btnDlgOk = (Button) view.findViewById(R.id.btnDlgOk);
    btnDlgCancel = (Button) view.findViewById(R.id.btnDlgCancel);
    return adb.create();
}

@Override
protected void onPrepareDialog(int id, Dialog dialog) {
    super.onPrepareDialog(id, dialog);
}

// The layout declares an onClick handler for the button
public void onclDlgOk(View view) {
}

How can I make the dialog close after the button is pressed?

A: Try this:

@Override
protected Dialog onCreateDialog(int id) {
    AlertDialog.Builder adb = new AlertDialog.Builder(this);
    adb.setTitle(R.string.dlg_fonts);
    view = (LinearLayout) getLayoutInflater().inflate(R.layout.dialog, null);
    adb.setView(view);
    tvDlg = (TextView) view.findViewById(R.id.tvDlg);
    etDlg = (EditText) view.findViewById(R.id.etDlg);
    btnDlgOk = (Button) view.findViewById(R.id.btnDlgOk);
    btnDlgCancel = (Button) view.findViewById(R.id.btnDlgCancel);
    // must be final so the anonymous listener can capture it
    final Dialog dialog = adb.create();
    btnDlgCancel.setOnClickListener(new View.OnClickListener() {
        public void onClick(View v) {
            dialog.dismiss();
        }
    });
    return dialog;
}
COMMERCIAL DESCRIPTION
Crack open Yeti Imperial Stout’s sophisticated sibling -- Oak Aged Yeti Imperial Stout. Although these beers come from the same clan, they have entirely different personalities. Aging on a blend of French and toasted oak chips infuses a subtle oak and vanilla character into Yeti’s already intense chocolate, roasted coffee malt flavor and hugely assertive hop profile. Who says you can’t tame a Yeti? 75 International Bittering Units (IBUs).

Pours a pitch black with a two finger khaki head which lingers and becomes very nice looking lacing. Aroma is of deeply roasted malts, cocoa, black coffee, licorice, dark fruits, burnt caramel, vanilla, molasses, some wood notes, and some light pine/citrus hops. Taste is similar to the aroma but has more oak notes, with a lot of citrus and pine hops on the back end. Has a full body with a super creamy, velvety, oily mouthfeel and an almost burnt, pretty bitter, semi-dry finish. Overall, an excellent imperial stout that is definitely a step above regular Yeti.

Pours opaque black with a nice brown head. Aroma is roast coffee, sweet melting sugar, vanilla, prunes and a little hint of red wine. Taste is milk chocolate, cocoa powder, a hint of sweet molasses, some minute coffee notes, a veritable dose of vanilla and a slight bitterness which is cocoa like on the finish. Full bodied and softly caresses the mouth just like your favourite snuggle blanket caresses your Sunday hangover and makes you feel human again.

22 oz bottle shared with ebone1988. Bottled 09/18/2012. The pour is a viscous oily black with a medium brown head and some great lace. The aroma is great. Lots of roasted malts, chocolate, vanilla, and oak. Licorice comes through on the back end pretty strong and brings a good bitterness to the table. The flavor is very roasty. The malts are smashing you right in the face. The chocolate is much more reserved. The oak isn't there. The bitter licorice comes through on the back end for a little change of pace.
The mouth feel is thinner than I want it to be. The carbonation is spot on, but it's just a touch thin. The aftertaste is roasty licorice and a little bitter. I love it.
Intra-articular facet joint steroid injection-related adverse events encountered during 11,980 procedures.

To analyze the incidence and characteristics of intra-articular facet joint injection (FJI)-related adverse events requiring hospitalization and emergency room visits. From January 2007 to December 2017, a total of 11,980 FJI procedures in 6066 patients (mean age 66.8 years, range 15-97 years, M:F = 2004:4062) were performed in our department. Of these, we retrospectively reviewed 489 cases in 432 patients who were hospitalized or visited the emergency room within a month of FJI. FJI-related adverse events were classified as procedure-related complications, drug-related systemic events, or uncertain etiology events, on the basis of consensus of two spine radiologists. This is a descriptive study without statistical analysis. There were 101 FJI-related adverse event cases in 99 patients (mean age 71.8 years, range 39-97 years, M:F = 39:60). The overall incidence of FJI-related adverse events was 0.84% (101/11,980) per case and 1.63% (99/6066) per patient. The incidence of procedure-related complications and drug-related systemic adverse events was 0.07% (8/11,980) and 0.15% (18/11,980), respectively; the rate of uncertain etiology events was 0.63% (75/11,980). All eight procedure-related complication cases involved major complications. There were seven cases of infectious spondylitis, and one was a progression of systemic aspergillosis to the spine. One patient died of an uncontrolled infection with infective endocarditis, and two patients experienced partial recovery with neurological sequelae. The overall incidence of FJI-related adverse events is low, and procedure-related major complications are rare without dural puncture or epidural hematoma. Nevertheless, infection can occur, resulting in serious outcomes. • The incidence of FJI-related adverse events requiring hospitalization or ER visit was 0.84%.
• The incidence of major procedure-related complications was 0.07%. • All major complications were associated with infection and there were no cases of epidural hematoma.
Q: Referencing Fragments inside ViewPager

I have a problem with referencing my Fragments inside a ViewPager. I would like to do it because from my activity I'd like to refresh a fragment at a specified position (e.g. currently displayed fragment). Currently I have something like this:

public static class MyPagerAdapter extends FragmentPagerAdapter {
    private static final String TAG = "MyPagerAdapter";
    private static HashMap<Integer, EventListFragment> mPageReferenceMap = new HashMap<Integer, EventListFragment>();

    public MyPagerAdapter(FragmentManager fm) {
        super(fm);
    }

    @Override
    public int getCount() {
        return NUM_ITEMS;
    }

    @Override
    public Fragment getItem(int position) {
        Log.i(TAG, "getItem: " + position);
        int dateOffset = position - 1;
        EventListFragment mFragment = EventListFragment.newInstance(dateOffset);
        mPageReferenceMap.put(position, mFragment);
        return mFragment;
    }

    @Override
    public void destroyItem(ViewGroup container, int position, Object object) {
        Log.i(TAG, "destroyItem: " + position);
        mPageReferenceMap.remove(position);
        super.destroyItem(container, position, object);
    }

    public EventListFragment getFragment(int key) {
        Log.i(TAG, "Size of pager references: " + mPageReferenceMap.size());
        return mPageReferenceMap.get(key);
    }
}

The problem is that destroyItem() gets called more often than getItem(), so I'm left with null references. If I don't use destroyItem() to clear references to destroyed fragments... well, I reference fragments that don't exist. Is there any nice way to reference fragments that are created with EventListFragment mFragment = EventListFragment.newInstance(dateOffset);? Or what should I do to refresh a fragment inside a ViewPager from my activity (from the options menu, to be precise)?

A: I managed to solve it. The trick was to make a reference list inside the Activity, not the PagerAdapter.
It goes like this:

List<WeakReference<EventListFragment>> fragList = new ArrayList<WeakReference<EventListFragment>>();

@Override
public void onAttachFragment(Fragment fragment) {
    Log.i(TAG, "onAttachFragment: " + fragment);
    if (fragment.getClass() == EventListFragment.class) {
        fragList.add(new WeakReference<EventListFragment>((EventListFragment) fragment));
    }
}

public EventListFragment getFragmentByPosition(int position) {
    EventListFragment ret = null;
    // Use an explicit iterator so cleared references can be removed safely.
    // (Calling fragList.remove() inside a for-each loop would throw
    // ConcurrentModificationException, and since the list holds
    // WeakReferences, remove(f) with a Fragment would never match anyway.)
    Iterator<WeakReference<EventListFragment>> it = fragList.iterator();
    while (it.hasNext()) {
        EventListFragment f = it.next().get();
        if (f != null) {
            if (f.getPosition() == position) {
                ret = f;
            }
        } else {
            // reference was garbage collected; delete it from the list
            it.remove();
        }
    }
    return ret;
}

Of course your fragment has to implement a getPosition() function, but I needed something like this anyway, so it wasn't a problem. Thanks Alex Lockwood for your suggestion with WeakReference!

A: Two things:

1. Add the following line in your Activity's onCreate method (or wherever you initialize your ViewPager):

mPager.setOffscreenPageLimit(NUM_ITEMS - 1);

This will keep the additional off-screen pages in memory (i.e. preventing them from being destroyed), even when they aren't currently being shown on the screen.

2. You might consider implementing your HashMap so that it holds WeakReference<Fragment>s instead of the Fragments themselves. Note that this would require you to change your getFragment method as follows:

WeakReference<Fragment> weakRef = mPageReferenceMap.get(position);
return (weakRef != null) ? weakRef.get() : null;

This has nothing to do with your problem... it's just something I noticed and thought I would bring to your attention. Keeping WeakReferences to your Fragments will allow you to leverage the garbage collector's ability to determine reachability for you, so you don't have to do it yourself.
Exploring Fashion & Culture in the most African U.S. City

All posts filed under: Our Closet

Photos by Paa Kwesi Yanful (@kwesithethird) This year Noirlinians was invited by #EssenceFest to take over their Instagram account for a weekend to tell (Our) #MyNOLAdiary. We met up for an afternoon with Kwesi, one of our favorite local Ghanaian photographers, for this shoot with a twist…for once, we got to choose the locations of the shoot*. | We are Noirlinians, and this is #OurNOLAdiary | *in typical Noirlinians shoots/blog posts, local or New Orleans based Black photographers […]

Photos by Phrozen Photography Post Soundtrack: Immigrant (Sade) The Treme (St. Augustine Church): The community of Treme can be described as colorful, vibrant, creative, strong, & diverse. Formerly known by the French as Faubourg Treme, this community is named after Claude Treme, the Frenchman who sold the land to the city of New Orleans so they could build subdivisions and sell plots for housing behind the much-crowded French Quarter. Treme was special from the start. […]

Photos by Malik Bartholomew (Phrozen Photography) Post Soundtrack: We Are Family The Treme: I selected several sites in Treme, and one of the site locations I selected was the underpass of the Claiborne Avenue Bridge, better known to native New Orleanians as “Under Da Bridge” or “Tha Bridge.” This site is extremely important historically and culturally to Black New Orleanians. Before the construction of the Claiborne Avenue interstate bridge, the street Claiborne Avenue was home to the downtown […]

Photos by Patrick Melon Post Soundtrack – Say It Loud (James Brown) Lot by Dbl Blk Cafe: This location is central to downtown New Orleans. It made sense to get some of the urban grit that makes any industrialized city recognizable as such. With the golden beams of the sun creating amazing highlights, I was able to create a lot of contrast between my subjects and their environment.
Photos by Patrick Melon Post Soundtrack – You Must Learn (KRS1) Crescent Park: Crescent Park has a beautiful cityscape in the background showing the layout of the central business district. Its simple and clean-cut design in stone is appealing to me in and of itself. My subjects wore their more easily identifiable ‘ethnic’ clothing in this area, which I feel is fitting considering the existence of the park. Although the area is beautiful and I certainly appreciate it, […]

Photos by danielle c miles Post Soundtrack – “Ambidextrous” Be Steadwell The Corner: Corners, neutral grounds, stoops, shade trees and corner stores have long been cornerstones in Black communities across the world. From the exchange of neighborhood gossip to political debates, dominos, chess games on legless tables balanced on the knees of the players, to the trading of goods and services beneath signs that scream “NO LOITERING!”– which is perceived as more of a request than […]

Photos by danielle c miles Post Soundtrack – “Brown Skin Lady” BlackStar 7th Ward: This shoot was done in my neighborhood, the 7th ward of New Orleans. It's right outside of the Treme (well, it used to be the Treme before they built the Claiborne overpass, which divided the Black neighborhood). The Treme is a historically Black part of New Orleans (the backatown) and is America’s oldest surviving Black neighborhood.

Photos by danielle c miles – The Corner: Corners, neutral grounds, stoops, shade trees and corner stores have long been cornerstones in Black communities across the world.
From the exchange of neighborhood gossip to political debates, dominos, chess games on legless tables balanced on the knees of the players, to the trading of goods and services beneath signs that scream “NO LOITERING!”– which is perceived as more of a request than a demand, corner stores — these […]

For Wanjiru and Putu Photography by Asia-Vinae “Preach” Palmer Post Soundtrack – “Mama Says” Ibeyi Photos by Asia-Vinae “Preach” Palmer: “The Sister Houses” – Abandoned houses in New Orleans have stories. They are more powerful than the rust that coats the iron gates and more alive than the vines coating the sides of the building. This location is in the middle of Treme and Uptown New Orleans; it is in the heart of the […]

Noirlinians is an AfroFashion blog exploring the complex relationship between culture, clothing & identity in the diaspora. Featuring Liberian artist and designer Denisio Truitt of DOPEciety and poet and organizer Mwende “FreeQuency” Katwiwa, the idea for the blog emerged after a fast friendship developed between the two based on their African heritage and artistic interests.
Synthesis of regioselectively protected forms of cytidine based on enzyme-catalyzed deacetylation as the key step. N4-Acetylcytidine (77%) and 2',3'-O, N4-triacetylcytidine (95%) were obtained from the hydrolysis of a common precursor, the peracetylated form of cytidine with Aspergillus niger lipase (Amano A) and Burkholderia cepacia esterase (SC esterase S), respectively, under very mild conditions. The experimental procedure for the conversion of triacetylcytidine to a corresponding phosphoramidite (82%), an intermediate for sugar nucleotide synthesis, is also elaborated.
The Way It Is...Live! The Way It Is...Live! is a concert film by Snowy White and his band The White Flames, recorded during a 2004 tour, and released in 2005. It features a promotional video of Peter Green's "Black Magic Woman". Originally edited as a DVD, a bonus audio CD includes the complete concert. Track listing All songs by Snowy White, except where noted. "No Stranger to the Blues" (Snowy White, Gil Marais-Gilchrist) – 3:57 "What I'm Searching For" – 6:24 "Little Wing" (Jimi Hendrix) – 5:24 "Blues Is the Road" – 4:44 "I Loved Another Woman" (Peter Green) – 4:47 "The Answer" – 2:56 "Land of Plenty" – 6:42 "Lucky Star" – 5:19 "Teprjsah" (Walter Latupeirissa) – 7:35 "Working Blues" – 6:14 "Angel Inside You, Part I & II" – 12:28 "This Time of My Life" – 3:41 "A Piece of Your Love" – 4:12 "Black Magic Woman" (Green) Personnel Snowy White – guitars, vocals. Walter Latupeirissa – bass guitar, vocals. Max Middleton – Hammond organ, keyboards, piano, vocals. Richard Bailey – drums, percussion. References Category:Snowy White albums Category:Concert films Category:Live video albums Category:2005 live albums Category:2005 video albums
Q: Using Multiple Accounts from a single physical machine

Hey all, I was wondering something. I am planning to create a simple Voting Dapp for a closed election, where a number of accounts will be set up ahead of time and pre-authorised to vote. The Voting Contract will contain a Whitelist to know whether or not someone who makes a call to its functions should be allowed to vote.

My question is this: the number of PCs that I have (maybe twelve or so) is far less than the number of prospective voters I have. Ideally, I would like to be able to use the PCs in a manner akin to Voting Stations: that is, voters would be able to log in and vote from an Account which is unique to them, but which would be only one of many accounts available on that particular PC. Is this possible? Would it require, say, Metamask accounts? I'm unclear as to what an architecture like this would look like. Thanks for your help.

A: To elaborate on JAG's answer - if you run a local Ethereum blockchain like Ganache, you get 10 accounts with 100 ETH each by default (or you can configure it for any other number of accounts/value). Then connect your Metamask to that network (http://localhost:7545) and add the private keys shown in Ganache to Metamask (via the Import account menu). Then in Metamask you can switch between all the accounts you add and send a transaction as the currently active account. I am doing exactly that for my multi-account dapp testing.
The purpose of the proposed project is to test a theoretical model of HIV/AIDS disclosure that simultaneously examines multiple factors hypothesized to predict and inhibit disclosure and tests the relationships between disclosure events and psychological and behavioral outcomes. To date, there exists no known model that simulates the hypothesized complexity of disclosure processes or relates disclosure events to post-disclosure psychological (e.g., negative affect) or behavioral (e.g., sexual risk) outcomes for people living with HIV/AIDS (PLWHA). The proposed model of HIV/AIDS disclosure will provide a parsimonious theoretical framework through which to understand how PLWHA make decisions about to whom to disclose their positive serostatus and how they are affected by these decisions. The proposed study will employ a disclosure recipient-specific, longitudinal design in order to assess how these disclosure processes vary across disclosure recipients and time. Specific Aim 1 will examine the relative predictive utility of factors hypothesized to facilitate (i.e., negative coping, negative psychological effects of concealment, significance of relationship with disclosure recipient, and eco-concerns for well-being of others) and inhibit (i.e., anticipated negative reactions from disclosure recipient, ego-concerns for well-being of self, social norms against disclosure, and perceived positive serostatus of disclosure recipient) disclosure that are conceptually related with either the discloser or the disclosure recipient. Specific Aim 2 will examine how disclosure (or nondisclosure) decisions affect psychological and behavioral outcomes.
It is hypothesized that the effect of disclosure on behavioral outcomes will be mediated by the effect of disclosure on psychological outcomes such that, to the extent that people experience positive psychological outcomes due to disclosure, they will also experience concomitant decreases in sexual risk and increases in adherence behaviors. This project has important conceptual and applied implications. The proposed model of HIV/AIDS disclosure will provide a framework for theorists and researchers to understand the complexity of disclosure processes and allow them to use this knowledge to inform interventions that assist PLWHA in identifying the optimal conditions and recipients for disclosure. Additionally, this framework has the potential to enhance the understanding of general disclosure processes across other types of concealed stigmatized identities (e.g., mental illness) and to emphasize both the potential psychological and behavioral effects of disclosure.
Background
==========

Experimental autoimmune encephalomyelitis (EAE) is an animal model of multiple sclerosis (MS), an inflammatory demyelinating disease of the central nervous system (CNS) \[[@B1]\]. Both MS and EAE are thought to be initiated by myelin-reactive CD4^+^ T cells that produce interferon-γ (IFN-γ) and interleukin-17 (IL-17) (that is, Th1 and Th17 cells, respectively) \[[@B2]-[@B4]\].

Interferon regulatory factor 3 (IRF3) is a transcription factor that, together with IRF7 and nuclear factor-κB (NF-κB), is activated by antiviral pattern recognition receptors. IRF3 activation is part of the first line of defence against invading viruses, and its activation results in the production of IFN-β. This in turn, induces an amplification loop of type I IFN, which leads to the development of an antiviral state \[[@B5]-[@B7]\]. The importance of IRF3 in the development of antiviral immunity has been shown by using IRF3-deficient animals, which are more susceptible to viral infection. In addition, IRF3/IRF7 double-knockouts do not produce IFN-γ in response to viruses and are severely impaired in their antiviral responses \[[@B8]\].

Toll-like receptor (TLR) signalling can be divided broadly into MyD88-dependent and MyD88-independent pathways. IRF3 is activated through the MyD88-independent pathway. TLRs 3 and 4 recruit the adaptor molecule Toll-IL-1 resistance domain--containing adaptor-inducing IFN-β (TRIF) (TLR4 also uses TRIF-related adaptor molecule) \[[@B5]\]. TRIF then interacts with TANK-binding kinase 1 (TBK1), RIP1 and tumour necrosis factor (TNF) receptor--associated factor \[[@B9]\]. TBK1, along with inhibitor of NF-κB kinase ϵ, phosphorylates IRF3, which facilitates its translocation into the nucleus \[[@B10]\]. IRF3 in the nucleus can then activate the type I IFN promoters, the IFN-β promoter in particular.

The role of IRFs in EAE and MS has received limited attention.
Tada and colleagues showed that IRF1 plays a proinflammatory role in EAE \[[@B11]\], and, recently, Huber *et al.* showed that IRF4 promotes CD8^+^ T-cell--mediated EAE \[[@B12]\]. Tzima *et al*. found that mice with heme oxygenase 1 deficiency in myeloid cells exhibited enhanced EAE severity, which was associated with a lack of IRF3 activation \[[@B13]\]. To our knowledge, our present study is the first in which the impact of IRF3 deficiency on EAE has been investigated. On the basis of previous studies showing the protective effect of type I IFN signalling in EAE \[[@B14]-[@B17]\], we expected *irf3*^−/−^ mice to develop more severe EAE. Surprisingly, *irf3*^−/−^ mice in fact developed significantly less severe EAE with less CNS infiltration and diminished T-cell responses, including proliferation and Th17 development. Furthermore, myelin-reactive CD4^+^ T cells lacking IRF3 completely failed to transfer EAE in an IL-23-driven, Th17-biased model, as did WT cells transferred into *irf3*^−/−^ recipients. IRF3 deficiency in non-CD4^+^ cells, but not in CD4^+^ cells, impaired Th17 development in antigen-activated cultures. These data implicate IRF3 in the pathogenesis of autoimmune inflammation and Th17 responses.

Methods
=======

Experimental autoimmune encephalomyelitis induction
---------------------------------------------------

EAE was actively induced in 8- to 12-week-old female C57BL/6 mice (The Jackson Laboratory, Bar Harbor, ME, USA) and *irf3*^−/−^ mice by subcutaneous injection of 150 μg of myelin oligodendrocyte glycoprotein (MOG~35--55~) in complete Freund's adjuvant (CFA) containing 5 mg/ml *Mycobacterium tuberculosis*. *Bordetella pertussis* toxin (PT) was administered intraperitoneally (200 ng/mouse) on day 0 and day 2. To adoptively transfer EAE, C57BL/6 or *irf3*^−/−^ mice were immunised subcutaneously with 200 μg of MOG~35--55~ in CFA at four sites on the back.
Mice were sacrificed after 9 to 12 days, and their lymph nodes and spleens were retrieved. Cells (spleens and lymph nodes combined) were cultured at a density of 8 × 10^6^ cells/ml in RPMI 1640 medium with 10% foetal calf serum (FCS), penicillin-streptomycin, L-glutamine, 2-mercaptoethanol, 2-\[4-(2-hydroxyethyl)piperazin-1-yl\]ethanesulfonic acid and sodium pyruvate in 150 × 25--mm Petri dishes. Cells were cultured for 3 days with IL-23 (20 ng/ml) at 37°C, recovered, and CD4^+^ cells were purified with anti-CD4-conjugated magnetic beads (Miltenyi Biotec, Surrey, UK). Cells were resuspended in phosphate-buffered saline (PBS), and 5 × 10^6^ cells were injected via the tail vein into recipient mice. PT was injected intraperitoneally (200 ng/injection) on day 0 and day 2. Mice were scored daily for clinical signs of disease according to the following scale: partial limp tail, 0.5; full limp tail, 1; limp tail and waddling gait, 1.5; paralysis of one hindlimb, 2; paralysis of one hindlimb and partial paralysis of the other hindlimb, 2.5; paralysis of both hindlimbs, 3; ascending paralysis, 3.5; paralysis of trunk, 4; moribund, 4.5; death, 5. Cumulative scores were calculated by adding together all daily scores for an individual mouse, yielding a single cumulative score for each mouse. All studies were performed with the approval of the institutional animal care and use committee of Thomas Jefferson University (Philadelphia, PA, USA) or in compliance with the UK Home Office and with the approval of the Queen's University Ethical Review Committee.

Isolation of central nervous system cells
-----------------------------------------

Spinal cords were removed from the mice after transcardial perfusion with PBS. Mononuclear cells were isolated by Percoll gradient centrifugation.
Pooled cells were cultured for 4 hours in RPMI 1640 medium containing 10% FCS and stimulated with phorbol 12-myristate 13-acetate (50 ng/ml), ionomycin (500 ng/ml) and GolgiPlug protein transport inhibitor (1 μg/10^6^ cells; BD Biosciences, San Jose, CA, USA).

T-cell activation *in vitro*
----------------------------

Spleens were harvested from wild-type (WT) and *irf3*^−/−^ mice, and single-cell suspensions were prepared following erythrocyte lysis. Cells were cultured at a density of 2 × 10^6^ cells/ml in X-VIVO 15 medium (Lonza, Walkersville, MD, USA) or Iscove's modified Dulbecco's medium and activated for 3 days with anti-CD3/anti-CD28 antibodies or with MOG~35--55~ (25 μg/ml) in the presence or absence of the following cytokines and antibodies, as indicated: transforming growth factor-β (TGF-β) (2 ng/ml), IL-6 (20 ng/ml), TNF-α (10 ng/ml), IL-1β (10 ng/ml), IL-23 (10 ng/ml), IL-12 (10 ng/ml) and anti-IFN-γ (10 μg/ml). CD4^+^ and CD4^−^ cells were purified prior to culture by immunomagnetic separation (Miltenyi Biotec) and cultured in various combinations as indicated.

Flow cytometry
--------------

Flow cytometric analysis of splenocytes and mononuclear cells from the CNS was performed as previously described \[[@B18]\]. Briefly, cells were washed and blocked with anti-CD16/anti-CD32 antibodies. Blocked cells were stained for 20 minutes in the dark with fluorescence-labelled antibodies to a range of cell surface markers (BD Pharmingen, San Diego, CA, USA). For intracellular staining, cells were washed, fixed and permeabilised using FIX & PERM cell permeabilisation reagents (Caltag Laboratories, Burlingame, CA, USA), then stained intracellularly for IL-17 and IFN-γ. Data were acquired on a FACSAria or FACSCanto system (BD Biosciences) and analysed using FlowJo software (TreeStar, Ashland, OR, USA).
Cytokine analysis
-----------------

Splenocytes from EAE experiments were cultured for 72 hours *ex vivo* with MOG~35--55~ (25 μg/ml) or anti-CD3/anti-CD28 (1 μg/ml) at a density of 2 × 10^6^ cells/ml in RPMI 1640 medium containing 10% FCS, penicillin-streptomycin, L-glutamine and nonessential amino acids. Supernatant cytokine concentrations from all splenocyte cultures were measured by enzyme-linked immunosorbent assay (IL-17; R&D Systems, Minneapolis, MN, USA).

Proliferation assay
-------------------

Splenocytes were cultured for 48 hours with MOG~35--55~ (25 μg/ml) or anti-CD3/anti-CD28 (1 μg/ml) at a density of 2 × 10^6^ cells/ml in X-VIVO 15 medium. T-cell proliferation was measured by \[^3^H\]thymidine incorporation as previously described \[[@B19]\].

Statistical analysis
--------------------

Clinical scores were tested for statistical significance by computing the area under the clinical-score curve for each animal and comparing groups with a nonparametric Mann-Whitney *U* test. Cytokine production and proliferative responses of WT and *irf3*^−/−^ mice were compared using an unpaired two-tailed Student's *t* test.

Results and discussion
======================

Deficiency of interferon regulatory factor 3 inhibits experimental autoimmune encephalomyelitis
-----------------------------------------------------------------------------------------------

To investigate the role of IRF3 in CNS autoimmune inflammation, we induced EAE in WT and *irf3*^−/−^ mice with MOG~35--55~ in CFA. Surprisingly, we consistently observed significantly less severe disease in *irf3*^−/−^ mice (Figure [1](#F1){ref-type="fig"}A). Disease incidence was also significantly lower in *irf3*^−/−^ groups (Table [1](#T1){ref-type="table"}), as were maximal and cumulative clinical scores in five independent experiments (Figures [1](#F1){ref-type="fig"}B and [1](#F1){ref-type="fig"}C). These data indicate that IRF3 contributes to the pathogenesis of EAE.
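As an illustrative sketch only (this is not the authors' analysis code, and the daily scores below are invented for demonstration), the clinical-score comparison described under Statistical analysis, one area-under-the-curve (AUC) value per animal followed by a nonparametric Mann-Whitney *U* test between groups, can be written dependency-free in Python. In practice `scipy.stats.mannwhitneyu` would typically be used; here the exact two-sided p-value is computed by enumerating group relabellings, which is feasible for small groups:

```python
from itertools import combinations

def score_auc(daily):
    # Trapezoidal area under one animal's daily clinical-score curve
    # (scores are assumed to be recorded once per day).
    return sum((a + b) / 2 for a, b in zip(daily, daily[1:]))

def u_statistic(x, y):
    # Mann-Whitney U: number of (x, y) pairs where x beats y (ties count half).
    return sum((a > b) + 0.5 * (a == b) for a in x for b in y)

def exact_two_sided_p(x, y):
    # Exact permutation distribution of U over all group relabellings.
    pooled = x + y
    n = len(x)
    observed = u_statistic(x, y)
    centre = len(x) * len(y) / 2
    count = total = 0
    for idx in combinations(range(len(pooled)), n):
        xs = [pooled[i] for i in idx]
        ys = [pooled[i] for i in range(len(pooled)) if i not in idx]
        total += 1
        if abs(u_statistic(xs, ys) - centre) >= abs(observed - centre):
            count += 1
    return count / total

# Hypothetical daily scores, for illustration only.
wt = [[0, 0.5, 1, 2, 3, 3], [0, 1, 1.5, 2.5, 3, 3.5], [0, 0.5, 2, 3, 3, 4]]
ko = [[0, 0, 0.5, 1, 1, 1], [0, 0, 0, 0.5, 1, 1.5], [0, 0.5, 0.5, 1, 1.5, 1]]
wt_auc = [score_auc(m) for m in wt]
ko_auc = [score_auc(m) for m in ko]
p = exact_two_sided_p(wt_auc, ko_auc)
```

With three animals per group and complete separation of the AUCs, the smallest attainable two-sided exact p-value is 2/20 = 0.1, which is why larger group sizes are needed for significance.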
![**IRF3-deficient mice develop less severe experimental autoimmune encephalomyelitis than wild-type mice.** Interferon regulatory factor 3--knockout (IRF3ko, *irf3*^−/−^) and wild-type (WT) mice were immunised with myelin oligodendrocyte glycoprotein (MOG~35--55~) in complete Freund's adjuvant, and *Bordetella pertussis* toxin was administered on day 0 and day 2. **(A)** Mice were scored daily for clinical signs of experimental autoimmune encephalomyelitis. Data represent average ± SEM of five pooled experiments (see Table [1](#T1){ref-type="table"}). Mean maximal **(B)** and cumulative **(C)** clinical scores reached in *irf3*^−/−^ and WT mice in five independent experiments over the full duration of disease are shown.](1742-2094-11-130-1){#F1}

###### **Incidence of actively induced experimental autoimmune encephalomyelitis in wild-type and *irf3***^**−/−**^ **mice**^**a**^

  **Experiment**   **Wild type**   **IRF3**^**−/−**^
  ---------------- --------------- -------------------
  1                7/7 (100%)      5/6 (83%)
  2                4/6 (67%)       7/7 (100%)
  3                5/5 (100%)      2/8 (25%)
  4                6/6 (100%)      1/6 (17%)
  5                4/5 (80%)       2/5 (40%)
  Total            26/29 (90%)     17/32 (53%)

^a^Wild-type and interferon regulatory factor 3--knockout (*irf3*^−/−^) mice were immunised with myelin oligodendrocyte glycoprotein (MOG~35--55~) in complete Freund's adjuvant, and *Bordetella pertussis* toxin was administered on day 0 and day 2. Mice were scored daily for clinical signs of experimental autoimmune encephalomyelitis. Data represent disease incidence in each group in five independent experiments.

To investigate potential roles of IRF3 in EAE pathogenesis, we examined CNS inflammatory infiltrates. We observed fewer total cells in pooled spinal cords of *irf3*^−/−^ mice with EAE than in their WT counterparts (Figure [2](#F2){ref-type="fig"}A). We analysed infiltrates by flow cytometry and observed slightly lower proportions of both CD4^+^ and CD8^+^ cells in the CNS of *irf3*^−/−^ mice (Figure [2](#F2){ref-type="fig"}B).
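The Methods specify tests for clinical scores and cytokines but not for incidence. Purely as a hedged illustration (not the authors' analysis), the pooled incidence totals in Table 1 (26/29 WT versus 17/32 *irf3*^−/−^ mice developing EAE) could be compared with a two-sided Fisher exact test, implemented here from first principles using the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    # 2x2 table [[a, b], [c, d]]. Two-sided p-value: sum the hypergeometric
    # probabilities of every table with the same margins that is no more
    # likely than the observed table.
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    def prob(x):
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    return sum(prob(x)
               for x in range(max(0, col1 - row2), min(row1, col1) + 1)
               if prob(x) <= p_obs + 1e-12)

# Pooled totals from Table 1: sick/healthy WT vs sick/healthy irf3-/-.
p = fisher_exact_two_sided(26, 3, 17, 15)
```

The same result can be obtained with `scipy.stats.fisher_exact([[26, 3], [17, 15]])`; either way, the pooled difference in incidence is significant at the conventional 0.05 level.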
Given the central role for CD4^+^ T cells in EAE pathogenesis, we examined the helper T-cell subsets in the CNS infiltrate. We observed a lower proportion of Th1 (CD4^+^IFN-γ^+^) and Th17 (CD4^+^IL-17^+^) cells, as well as of CD4^+^ cells producing both IFN-γ and IL-17, in *irf3*^−/−^ mice (Figure [2](#F2){ref-type="fig"}C).

![**IRF3-deficient mice have less central nervous system infiltration in experimental autoimmune encephalomyelitis.** Spinal cords were dissected from perfused interferon regulatory factor 3--knockout (*irf3*^−/−^) and wild-type (WT) mice, and mononuclear cells were isolated by Percoll gradient centrifugation. **(A)** The absolute numbers of central nervous system mononuclear cells in *irf3*^−/−^ and WT mice were determined. **(B)** and **(C)** Cells were cultured for 4 hours with phorbol 12-myristate 13-acetate and ionomycin in the presence of GolgiPlug protein transport inhibitor. Cells were stained for CD4 and CD8 **(B)**. The percentages of positive gated live cells are displayed. Cells were intracellularly stained for interleukin-17 (IL-17) and interferon-γ (IFN-γ) **(C)**. The percentages of positive gated CD4^+^ live cells are displayed. Cells from one representative experiment (harvested on day 22 after immunisation) of two are shown.](1742-2094-11-130-2){#F2}

We next examined the peripheral immune response in spleens of immunised WT and *irf3*^−/−^ mice. We observed significantly lower proliferative responses to polyclonal stimulation in splenocytes of *irf3*^−/−^ mice (Figure [3](#F3){ref-type="fig"}A). There was also a trend towards reduced antigen-specific proliferation to MOG~35--55~ (Figure [3](#F3){ref-type="fig"}A). We next examined cytokine production in splenocytes activated with anti-CD3/anti-CD28 or with MOG~35--55~.
Expression of IL-17 was significantly lower in splenocytes from *irf3*^−/−^ mice that were polyclonally activated or activated with MOG~35--55~ (Figure [3](#F3){ref-type="fig"}B), suggesting that IRF3 may play a role in IL-17 production by T cells.

![**Diminished T-cell responsiveness in *irf3***^**−/−**^ **mice with experimental autoimmune encephalomyelitis.** Spleen cells from interferon regulatory factor 3--knockout (*irf3*^−/−^) and wild-type (WT) mice (*n* = 5 or 6 mice/group) harvested 18 to 22 days after immunisation were cultured in triplicate in the presence of MOG~35--55~ or anti-CD3/anti-CD28. **(A)** After 48 hours of culture, proliferative responses to myelin oligodendrocyte glycoprotein (MOG~35--55~) or anti-CD3/anti-CD28 were determined. Proliferative responses are shown as mean counts per minute (cpm) ± SEM. **(B)** After 72 hours of culture, interleukin-17 (IL-17) production in response to MOG~35--55~ or anti-CD3/anti-CD28 was determined in supernatants by enzyme-linked immunosorbent assay. One representative experiment of two is shown.](1742-2094-11-130-3){#F3}

IRF3 deficiency impairs Th17 differentiation
--------------------------------------------

As both Th1 and Th17 cells were found at lower frequencies in the CNS of *irf3*^−/−^ mice with EAE than in WT mice (Figure [2](#F2){ref-type="fig"}C), we investigated the impact of IRF3 deficiency on Th1 and Th17 differentiation *in vitro*. Consistently, IRF3 deficiency resulted in increased proportions of Th1 cells (CD4^+^IFN-γ^+^) in all conditions tested (Figure [4](#F4){ref-type="fig"}A). Conversely, proportions of Th17 cells (CD4^+^IL-17^+^) were lower in *irf3*^−/−^ than in WT Th17-polarised cultures (Figure [4](#F4){ref-type="fig"}A), the only condition in which robust IL-17 expression was observed (TGF-β + IL-6 + IL-1β + anti-IFN-γ).
These findings demonstrate that IRF3 shapes helper T-cell polarisation, with opposing supportive and inhibitory effects on Th17 and Th1 differentiation, respectively. Thus, although proportions of both Th1 and Th17 subsets were reduced in the CNS of mice with EAE (Figure [2](#F2){ref-type="fig"}C), it is likely that reduced Th17 development was a limiting factor in the development of EAE in *irf3*^−/−^ mice. Of note, certain Th17 polarisation experiments included neutralising anti-IFN-γ antibody to ensure that differences were not due to the observed increase in IFN-γ production in the absence of IRF3 (Figure [4](#F4){ref-type="fig"}A). We also examined additional Th17-polarising cytokine cocktails that support *de novo* differentiating (TGF-β + IL-6 + IL-1β + TNF-α) and differentiated (IL-23) Th17 cells, and we consistently observed less IL-17 production in IRF3-deficient cultures (Figure [4](#F4){ref-type="fig"}B). These data provide strong evidence that IRF3 is involved in Th17 cell development and function. ![**Interferon regulatory factor 3 supports Th17 cell differentiation and pathogenicity. (A)** Splenocytes from naive interferon regulatory factor 3--knockout (KO, *irf3*^−/−^) and wild-type (WT) mice were activated with anti-CD3/anti-CD28 (1 μg/ml) in non-polarised (no exogenous cytokines), Th1-polarising (interleukin-12 (IL-12)) or Th17-polarising (transforming growth factor β (TGF-β), IL-6, IL-1β and antibody against interferon γ (anti-IFN-γ)) conditions for 3 days. Helper T (Th) cell polarisation was analysed by flow cytometry following restimulation with phorbol 12-myristate 13-acetate/ionomycin/GolgiPlug protein transport inhibitor for the final 4 hours of culture. The percentages of positive gated CD4^+^ live cells are displayed. **(B)** Splenocytes were activated in other Th17-polarising conditions as indicated for 3 days, and IL-17 production was measured by enzyme-linked immunosorbent assay (ELISA).
**(C)** WT and *irf3*^−/−^ mice were immunised with myelin oligodendrocyte glycoprotein (MOG~35--55~) in complete Freund's adjuvant. After 10 days, cells from the spleens and lymph nodes were reactivated with MOG~35--55~ (25 μg/ml) in the presence of IL-23 (20 ng/ml) for 3 days. Cells were transferred into naive WT or *irf3*^−/−^ recipient mice. Mice were scored daily for clinical signs of disease (see Table [2](#T2){ref-type="table"}). **(D)** Spleens and lymph nodes were harvested from WT and *irf3*^−/−^ mice that had been immunised with MOG~35--55~ for 7 days. CD4^+^ and CD4^−^ populations were immunomagnetically purified and cultured in combinations as indicated. Cultures were reactivated with MOG~35--55~ (25 μg/ml) in the presence of IL-23 (20 ng/ml), and IL-17 was measured by ELISA (*n* = 4). Data from one experiment representative of two or three replicate experiments are shown.](1742-2094-11-130-4){#F4}

IRF3-deficient T cells fail to transfer experimental autoimmune encephalomyelitis
---------------------------------------------------------------------------------

To investigate the role of IRF3 specifically in Th17-cell pathogenicity, we used a Th17-biased model of adoptively transferred EAE. WT and *irf3*^−/−^ donors were immunised with MOG~35--55~ in CFA, and cells from peripheral lymphoid organs were harvested and reactivated with antigen *in vitro* in the presence of exogenous IL-23. CD4^+^ cells were purified and transferred intravenously to naive WT recipient mice. Strikingly, *irf3*^−/−^ CD4^+^ cells failed to induce EAE in any recipient animal in three independent experiments (Figure [4](#F4){ref-type="fig"}C and Table [2](#T2){ref-type="table"}). These findings demonstrate a key role for IRF3 in the pathogenicity of Th17 cells in EAE.
Because purified CD4^+^ T cells (WT or *irf3*^−/−^) were transferred to WT recipients, these findings could have pointed towards a central role for direct, T-cell-intrinsic IRF3 activity in influencing Th17 differentiation and pathogenicity. However, transfer of WT CD4^+^ cells to *irf3*^−/−^ recipients also failed to induce clinical EAE in any animals in two replicate experiments (Figure [4](#F4){ref-type="fig"}C and Table [2](#T2){ref-type="table"}), suggesting that T-cell-extrinsic mechanisms also contribute to the lack of disease observed in IRF3-deficient recipients of WT CD4^+^ T cells.

###### **Incidence of adoptively transferred experimental autoimmune encephalomyelitis with CD4**^**+**^ **cells from *irf3***^**−/−**^ **and wild-type donors**^**a**^

  **Experiment**   **WT to WT**   ***irf3***^**−/−**^ **to WT**   **WT to *irf3***^−/−^
  ---------------- -------------- ------------------------------- -----------------------
  1                6/6 (100%)     0/4 (0%)                        0/6 (0%)
  2                7/7 (100%)     0/4 (0%)                        --
  3                4/5 (80%)      0/5 (0%)                        --
  4                3/3 (100%)     --                              0/5 (0%)
  Total            20/21 (95%)    0/13 (0%)                       0/11 (0%)

^a^Wild-type (WT) and interferon regulatory factor 3--knockout (*irf3*^−/−^) mice were immunised with myelin oligodendrocyte glycoprotein (MOG~35--55~) in complete Freund's adjuvant. After 10 days, cells from spleens and lymph nodes were reactivated with MOG~35--55~ in the presence of interleukin-23 for 3 days. Cells were transferred intravenously into naive WT or *irf3*^−/−^ recipient mice that received *Bordetella pertussis* toxin on day 0 and day 2 after cell transfer. Mice were scored daily for clinical signs of disease. Data represent disease incidence in each group in two to four independent experiments; column headings describe donor-to-recipient transfers.

Thus, we sought to address whether CD4^+^ T-cell-intrinsic or -extrinsic IRF3 activity influenced Th17 differentiation.
Spleens and lymph nodes from WT and *irf3*^−/−^ mice were harvested 7 days after immunisation with MOG~35--55~. CD4^+^ and CD4^−^ fractions were prepared by immunomagnetic purification and co-cultured in combinations in which IRF3 deficiency was restricted to CD4^+^ cells, CD4^−^ cells, all cells or none. Cultures were reactivated with MOG~35--55~ in Th17-polarising conditions. Strikingly, IL-17 production was impaired when all cells were deficient in IRF3 and when CD4^−^ cells were deficient in IRF3, but not when only CD4^+^ cells lacked IRF3 (Figure [4](#F4){ref-type="fig"}D). These data show that, during antigen activation of CD4^+^ T cells, IRF3 activity in cells other than CD4^+^ T cells is required for maximal Th17 responses. The findings of these studies are surprising, considering that type I IFN has been found to be protective in EAE in studies of IFN-α/β receptor--deficient mice and IFN-β-deficient mice \[[@B15]-[@B17]\]. Furthermore, we have previously shown that activation of TLR3 with polyinosinic-polycytidylic acid, which signals via IRF3, suppresses relapsing--remitting EAE in SJL mice \[[@B20]\]. This was also shown in chronic EAE by Tzima *et al*. \[[@B13]\]. However, these paradoxical findings may be explained in part by the results of Axtell *et al*. \[[@B21]\], who showed that, although IFN-β suppressed EAE in a Th1 model, the severity of EAE in an IL-23-driven Th17 model was in fact exacerbated by IFN-β. As signalling through IRF3 results in IFN-β production, IRF3 deficiency may abrogate such an exacerbating effect of IFN-β, particularly in IL-23-driven autoreactive T cells. In addition, Al-Salleeh and Petro have shown that the IL-23p19 promoter contains a binding site for IRF3 \[[@B22]\], and Smith *et al.* have reported increased IRF3 binding to the IL-23p19 promoter in monocytes taken from patients with systemic lupus erythematosus \[[@B23]\]. In a recent study of the inhibition of IL-23 by morphine, Ma *et al*.
reported inhibition of IRF3 phosphorylation and suggested that this may underlie the observed inhibition of IL-23 \[[@B24]\]. Of note, however, we observed impaired IL-17 responses in cultures supplemented with IL-23; thus, a lack of IRF3-driven IL-23 does not completely explain the decreased Th17 responses in our studies. In addition to recent discoveries pertaining to IL-23, IRF3 has been shown to inhibit Th1 responses by binding to the *Il12b* promoter and negatively regulating *Il12b* expression \[[@B25]\]. Indeed, *irf3*^−/−^ dendritic cells infected with vesicular stomatitis virus induced enhanced Th1 responses in naive syngeneic recipients, associated with increased *Ifng* expression \[[@B25]\]. Similarly, in our present study, we observed enhanced Th1 differentiation during *in vitro* T-cell activation in non-polarised and Th1- and Th17-polarising conditions. It is tempting to speculate that such enhanced Th1 development in the absence of IRF3 inhibited Th17 development in our cultures. However, it is noteworthy that neutralisation of IFN-γ in our cultures did not restore Th17 polarisation in *irf3*^−/−^ cultures to the levels seen in WT cells.

Conclusion
==========

Collectively, the data reported here lend support to a role for IRF3 in driving the IL-23/Th17 axis and the pathogenesis of CNS autoimmune inflammation. These data indicate that IRF3 plays a critical role in the development of Th17 responses and MOG~35--55~-induced EAE and thus warrants investigation in human MS.

Competing interests
===================

The authors declare that they have no competing interests.

Authors' contributions
======================

DCF and BG designed experiments. DCF, KO, ZFK and AY performed experiments. BG, DCF and AM oversaw the study. DCF, BG and KO prepared the manuscript. All authors read and approved the final manuscript.
Acknowledgements
================

We are very grateful to Prof Tadatsugu Taniguchi (University of Tokyo) and Prof Kate Fitzgerald (University of Massachusetts) for the provision of *irf3*^−/−^ mice.
---
abstract: 'We propose a simplified model of attention which is applicable to feed-forward neural networks and demonstrate that the resulting model can solve the synthetic “addition” and “multiplication” long-term memory problems for sequence lengths which are both longer and more widely varying than the best published results for these tasks.'
author:
- |
  Colin Raffel\
  LabROSA, Columbia University\
  `craffel@gmail.com`
- |
  Daniel P. W. Ellis\
  LabROSA, Columbia University\
  `dpwe@ee.columbia.edu`
bibliography:
- 'refs.bib'
title: 'Feed-Forward Networks with Attention Can Solve Some Long-Term Memory Problems'
---

Models for Sequential Data
==========================

Many problems in machine learning are best formulated using sequential data, and appropriate models for these tasks must be able to capture temporal dependencies in sequences, potentially of arbitrary length. One such class of models are recurrent neural networks (RNNs), which can be considered a learnable function $f$ whose output $h_t = f(x_t, h_{t - 1})$ at time $t$ depends on input $x_t$ and the model’s previous state $h_{t - 1}$. Training of RNNs with backpropagation through time [@werbos1990backpropagation] is hindered by the vanishing and exploding gradient problem [@pascanu2012difficulty; @hochreiter1997long; @bengio1994learning], and as a result RNNs are in practice typically only applied in tasks where sequential dependencies span at most hundreds of time steps. Very long sequences can also make training computationally inefficient because RNNs must be evaluated sequentially and cannot be fully parallelized.

Attention
---------

A recently proposed method for easier modeling of long-term dependencies is “attention”. Attention mechanisms allow for a more direct dependence between the state of the model at different points in time.
Following the definition from [@bahdanau2014neural], given a model which produces a hidden state $h_t$ at each time step, attention-based models compute a “context” vector $c_t$ as the weighted mean of the state sequence $h$ by $$c_t = \sum_{j = 1}^T \alpha_{tj} h_j$$ where $T$ is the total number of time steps in the input sequence and $\alpha_{tj}$ is a weight computed at each time step $t$ for each state $h_j$. These context vectors are then used to compute a new state sequence $s$, where $s_t$ depends on $s_{t - 1}$, $c_t$ and the model’s output at $t - 1$. The weightings $\alpha_{tj}$ are then computed by $$e_{tj} = a(s_{t - 1}, h_j), \alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k = 1}^T \exp(e_{tk})}$$ where $a$ is a learned function which can be thought of as computing a scalar importance value for $h_j$ given the value of $h_j$ and the previous state $s_{t - 1}$. This formulation allows the new state sequence $s$ to have more direct access to the entire state sequence $h$. Attention-based RNNs have proven effective in a variety of sequence transduction tasks, including machine translation [@bahdanau2014neural], image captioning [@xu2015show], and speech recognition [@chan2015listen; @bahdanau2015end]. Attention can be seen as analogous to the “soft addressing” mechanisms of the recently proposed Neural Turing Machine [@graves2014neural] and End-To-End Memory Network [@sukhbaatar2015end] models. Feed-Forward Attention ---------------------- A straightforward simplification to the attention mechanism described above which would allow it to be used to produce a single vector $c$ from an entire sequence could be formulated as follows: $$\label{eq:ffattention} e_t = a(h_t), \alpha_t = \frac{\exp(e_t)}{\sum_{k = 1}^T \exp(e_k)}, c = \sum_{t = 1}^T \alpha_t h_t$$ As before, $a$ is a learnable function, but it now only depends on $h_t$. 
In this formulation, attention can be seen as producing a fixed-length embedding $c$ of the input sequence by computing an adaptive weighted average of the state sequence $h$. A schematic of this form of attention is shown in Figure \[fig:schematic\]. [@sonderby2015convolutional] compared the effectiveness of a standard recurrent network to a recurrent network augmented with this simplified version of attention on the task of protein sequence analysis. ![Schematic of our proposed “feed-forward” attention mechanism (cf. [@cho2015introduction] Figure 1). Vectors in the hidden state sequence $h_t$ are fed into the learnable function $a(h_t)$ to produce a probability vector $\alpha$. The vector $c$ is computed as a weighted average of $h_t$, with weighting given by $\alpha$.[]{data-label="fig:schematic"}](schematic.pdf){width=".8\textwidth"} A consequence of using an attention mechanism is the ability to integrate information over time. It follows that by using this simplified form of attention, a model could handle variable-length sequences even if the calculation of $h_t$ was feed-forward, i.e. $h_t = f(x_t)$. Using a feed-forward $f$ could also result in large efficiency gains as the computation could be completely parallelized. We investigate the capabilities of this “feed-forward attention” model in Section \[sec:experiments\]. We note here that feed-forward models without attention can be used for sequential data when the sequence length $T$ is fixed, but when $T$ varies across sequences, some form of temporal integration is necessary. An obvious straightforward choice, which can be seen as an extreme oversimplification of attention, would be to compute $c$ as the unweighted average of the state sequence $h_t$, i.e. $$\label{eq:unweighted} c = \frac{1}{T}\sum_{t = 1}^T h_t$$ This form of integration has been used to collapse the temporal dimension of audio [@dieleman2014recommending] and text document [@lei2015molding] sequences. 
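As an illustrative sketch (ours, not the paper's Theano implementation), the feed-forward attention of Equation (\[eq:ffattention\]) and the unweighted average of Equation (\[eq:unweighted\]) can each be written in a few lines of NumPy, here with $a(h_t) = \tanh(h_t \cdot w + b)$ producing one scalar importance per time step:

```python
import numpy as np

def attention_context(h, w, b=0.0):
    # Equation (3): e_t = a(h_t), alpha = softmax(e), c = sum_t alpha_t h_t.
    # h has shape (T, D); w has shape (D,); b is a scalar bias.
    e = np.tanh(h @ w + b)          # one importance value per time step
    alpha = np.exp(e - e.max())     # numerically stable softmax over time
    alpha /= alpha.sum()            # attention weights sum to one
    return alpha @ h                # adaptive weighted average of the states

def mean_context(h):
    # Equation (4): the order-agnostic unweighted-average baseline.
    return h.mean(axis=0)

rng = np.random.default_rng(0)
h = rng.normal(size=(500, 16))      # works for any sequence length T
c_att = attention_context(h, rng.normal(size=16))
c_avg = mean_context(h)
```

Note that with $w = 0$ the attention weights are uniform and the two reductions coincide, which makes explicit that Equation (\[eq:unweighted\]) is a special case of Equation (\[eq:ffattention\]).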
We will also explore the effectiveness of this approach.

Toy Long-Term Memory Problems {#sec:experiments}
=============================

A common way to measure the long-term memory capabilities of a given model is to test it on the synthetic problems originally proposed by [@hochreiter1997long]. In this paper, we will focus on the “addition” and “multiplication” problems; due to space constraints, we refer the reader to [@hochreiter1997long] or [@sutskever2013importance] for their specification. As proposed by [@hochreiter1997long], we define accuracy as the proportion of sequences for which the absolute error between the predicted value and the target value was less than .04. Applying our feed-forward model to these tasks is somewhat disingenuous because they are commutative and therefore may be easier to solve with a model which ignores temporal order. However, as we further argue in Section \[sec:discussion\], we believe these tasks provide a useful demonstration of our model’s ability to refer to arbitrary locations in the input sequence when computing its output.

Model Details
-------------

For all experiments, we used the following model: First, the state $h_t$ was computed from the input at each time step $x_t$ by $h_t = \textrm{LReLU}(W_{xh}x_t + b_{xh})$ where $W_{xh} \in \mathbb{R}^{D \times 2}, b_{xh} \in \mathbb{R}^D$ and $\textrm{LReLU}(x) = \max(x, .01x)$ is the “leaky rectifier” nonlinearity, as proposed by [@maas2013rectifier]. We found that this nonlinearity improved early convergence, so we used it in all of our models. We tested models where the context vector $c$ was then computed either as in Equation (\[eq:ffattention\]), with $a(h_t) =\tanh(W_{hc}h_t + b_{hc})$ where $W_{hc} \in \mathbb{R}^{1 \times D}, b_{hc} \in \mathbb{R}$, or simply as the unweighted mean of $h$ as in Equation (\[eq:unweighted\]).
We then computed an intermediate vector $s = \textrm{LReLU}(W_{cs}c + b_{cs})$ where $W_{cs} \in \mathbb{R}^{D \times D}, b_{cs} \in \mathbb{R}^D$ from which the output was computed as $y = \textrm{LReLU}(W_{sy}s + b_{sy})$ where $W_{sy} \in \mathbb{R}^{1 \times D}$, $b_{sy} \in \mathbb{R}$. For all experiments, we set $D = 100$. We used the squared error of the output $y$ against the target value for each sequence as an objective. Parameters were optimized using “adam”, a recently proposed stochastic optimization technique [@kingma2014adam], with the optimization hyperparameters $\beta_1$ and $\beta_2$ set to the values suggested by [@kingma2014adam] (.9 and .999 respectively). All weight matrices were initialized with entries drawn from a Gaussian distribution with a mean of zero and, for a matrix $W \in \mathbb{R}^{M \times N}$, a standard deviation of $1/\sqrt{N}$. All bias vectors were initialized with zeros. We trained on mini-batches of 100 sequences and computed the accuracy on a held-out test set of 1000 sequences every epoch, defined as 1000 parameter updates. We stopped training when either 100% accuracy was attained on the test set, or after 100 epochs. All networks were implemented using Lasagne [@dieleman2015lasagne], which is built on top of Theano [@bastien2012theano; @bergstra2010theano].
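For concreteness, the forward pass of the architecture just described can be sketched in NumPy as follows. This is our illustration only: the actual implementation used Lasagne and Theano, the final output nonlinearity is simplified to a plain affine readout here, and training with adam on the squared-error objective is omitted:

```python
import numpy as np

def lrelu(x):
    # "Leaky rectifier" nonlinearity: LReLU(x) = max(x, 0.01 x).
    return np.maximum(x, 0.01 * x)

class FeedForwardAttention:
    def __init__(self, d=100, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        def w(m, n):
            # Gaussian init with standard deviation 1/sqrt(n), as specified.
            return rng.normal(scale=1.0 / np.sqrt(n), size=(m, n))
        self.W_xh, self.b_xh = w(d, 2), np.zeros(d)   # per-step state
        self.W_hc, self.b_hc = w(1, d), np.zeros(1)   # attention scorer a(.)
        self.W_cs, self.b_cs = w(d, d), np.zeros(d)   # intermediate layer
        self.W_sy, self.b_sy = w(1, d), np.zeros(1)   # scalar readout

    def predict(self, x):
        # x: (T, 2) input sequence -> scalar prediction y.
        h = lrelu(x @ self.W_xh.T + self.b_xh)            # h_t, shape (T, D)
        e = np.tanh(h @ self.W_hc.T + self.b_hc).ravel()  # a(h_t), shape (T,)
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()                              # attention weights
        c = alpha @ h                                     # context vector
        s = lrelu(self.W_cs @ c + self.b_cs)
        return (self.W_sy @ s + self.b_sy).item()

model = FeedForwardAttention(d=100)
y_short = model.predict(np.zeros((50, 2)))
y_long = model.predict(np.zeros((10000, 2)))
```

Because the per-step computation is feed-forward and the only temporal coupling is the softmax-weighted average, the same parameters handle sequences of any length, and all time steps can be processed in parallel.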
  ------------- ---------------------------------------- ----------------------------------------
                **Addition**                             **Multiplication**
  Task $T_0$    50    100   500   1000   5000   10000    50    100   500   1000   5000   10000
  Attention     1     1     1     1      2      3        1     2     4     2      15     6
  Unweighted    1     1     1     2      8      17       2     2     8     33
  ------------- ---------------------------------------- ----------------------------------------

  : Number of epochs required to achieve perfect accuracy, or accuracy after 100 epochs (greyed-out values), for the experiment described in Section \[sec:fixed\].[]{data-label="tab:fixed"}

Fixed-Length Experiment {#sec:fixed}
-----------------------

Traditionally, the sequence lengths tested in each task vary uniformly between $[T_0, 1.1T_0]$ for different values of $T_0$. As $T_0$ increases, the model must be able to handle longer-term dependencies. The largest value of $T_0$ attained using RNNs with different training, regularization, and model structures has varied from a few hundred [@martens2011learning; @sutskever2013importance; @le2015simple; @krueger2015regularizing; @arjovsky2015unitary] to a few thousand [@hochreiter1997long; @jaegar2012long]. We therefore tested our proposed feed-forward attention models for $T_0 \in \{50, 100, 500, 1000, 5000, 10000\}$. The required number of epochs or accuracy after 100 epochs for each task, sequence length, and temporal integration method (adaptively weighted attention or unweighted mean) is shown in Table \[tab:fixed\]. For fair comparison, we report the best result achieved using any learning rate in $\{.0003, .001, .003, .01\}$. From these results, it’s clear that the feed-forward attention model can quickly solve these long-term memory problems for all sequence lengths we tested.
Our model is also efficient: processing one epoch of 100,000 sequences with $T_0 = 10000$ took 254 seconds using an NVIDIA GTX 980 Ti GPU, while processing the same data with a single-layer vanilla RNN with a hidden dimensionality of 100 (resulting in a comparable number of parameters) took 917 seconds on the same hardware. In addition, there is a clear benefit to using the attention mechanism of Equation (\[eq:ffattention\]) instead of a simple unweighted average over time; the attention mechanism incurs only a marginal increase in the number of parameters (10,602 vs. 10,501, or less than 1%).

Variable-Length Experiment
--------------------------

Because the range of sequence lengths $[T_0, 1.1T_0]$ is small compared to the range of $T_0$ values we evaluated, we further tested whether it was possible to train a single model to cope with sequences of highly varying lengths. To our knowledge, such a variant of these tasks has not been studied before. We trained models of the same architecture used in the previous experiment on minibatches of sequences whose lengths were chosen uniformly at random between 50 and 10000 time steps. Using the attention mechanism of Equation (\[eq:ffattention\]), on held-out test sets of 1000 sequences, our model achieved 99.9% accuracy on the addition task and 99.4% on the multiplication task after training for 100 epochs. This suggests that a single feed-forward network with attention can simultaneously handle both short and very long sequences, with only a marginal decrease in accuracy. Using an unweighted average over time, we were only able to achieve accuracies of 77.4% and 55.5% on the variable-length addition and multiplication tasks, respectively.

Discussion {#sec:discussion}
----------

A clear limitation of our proposed model is that it will fail on any task where temporal order matters, because computing an average over time discards order information.
For example, on the two-symbol temporal order task [@hochreiter1997long] where a sequence must be classified in terms of whether two symbols $X$ and $Y$ appear in the order $X, X$; $Y, Y$; $X, Y$; or $Y, X$, our model can differentiate between the $X, X$ and $Y, Y$ cases perfectly but cannot differentiate between the $X, Y$ and $Y, X$ cases at all. Nevertheless, we submit that for some real-world tasks involving sequential data, temporal order is substantially less important than being able to handle very long sequences. For example, in Joachims’ seminal paper on text document categorization [@joachims1998text], he posits that “word stems work well as representation units and that their ordering in a document is of minor importance for many tasks”. In fact, the current state-of-the-art system for document classification still uses order-agnostic sequence integration [@lei2015molding]. We have also shown in parallel work that our proposed feed-forward attention model can be used effectively for pruning large-scale (sub)sequence retrieval searches, even when the sequences are very long and high-dimensional [@raffel2016pruning].

Our experiments explicitly demonstrate that including an attention mechanism can allow a model to refer to specific points in a sequence when computing its output. They also provide an alternate argument for the claim made by [@bahdanau2014neural] that attention helps models handle very long and widely variable-length sequences. We are optimistic that our proposed feed-forward model will prove beneficial in additional real-world problems requiring order-agnostic temporal integration of long sequences. Further investigation is warranted; to facilitate future work, all of the code used in our experiments is available online.[^1]

Acknowledgements
================

We thank Sander Dieleman, Bart van Merriënboer, Søren Kaae Sønderby, Brian McFee, and our anonymous reviewers for discussion and feedback.
[^1]: [`https://github.com/craffel/ff-attention/tree/master/toy_problems`](https://github.com/craffel/ff-attention/tree/master/toy_problems)
Tuesday, April 29, 2008

All text and photos in this post copyright John Zada and John Bell 2008

In brackish Arabic laced with Farsi and Hindi, Captain Abdul-Fatah al-Shehi orders a young deckhand to steer his boat along a sharp bend in the coastline. As the wooden dhow veers from the open water into a rocky inlet, al-Shehi grins with satisfaction, the vessel now navigating a course of placid water between two desolate mountains rising sharply from the Persian Gulf. “Do you have something like this where you are from?” al-Shehi asks in heavily accented English.

On the horn of the Arabian Peninsula, the Musandam region is a place characterized by – of all things – fjords. These coastal mountains, barren and fissured, are the Middle East’s answer to the giants that guard the coasts of Alaska, Norway and Greenland. Though less grandiose than their cousins, Musandam’s fjords are an enchanting feature of an area full of strange and intriguing oddities.

Part of the Sultanate of Oman but separated by a 70km strip of the United Arab Emirates to the south, the Musandam Peninsula remains an enclave of nature and traditional Arab culture on the fringes of Dubai’s mega-urbanization project. Here steep mountain-hugging paths, isolated coastal villages, and an endless series of wadis where lone Shihuh tribesmen shepherd their small flocks of goats exist in a centuries-old time-warp.

This rocky headland of the Hajar Mountains also happens to be one of the most strategically important points on the planet: the rugged cape guards the southern side of the Straits of Hormuz, where the Persian Gulf narrows between Oman and Iran into a busy thoroughfare that sees 90 per cent of the Gulf’s oil transit to the Indian Ocean and beyond.

Despite, or perhaps because of, the geopolitics of the area, Musandam is one of the quietest and most pristine areas in the Middle East. Its deep blue waters are home to thriving coral reefs and countless other marine species, including whale sharks and dolphins.
And so inaccessible is the peninsula’s mountainous interior that it is believed to hide a small population of the elusive and critically endangered Arabian leopard.

Once a military zone largely off limits to foreigners, the area was opened to travellers in the late 1990s to attract some of the burgeoning tourist activity taking place across the border in the United Arab Emirates. Soon afterward, local businessmen such as al-Shehi emerged from Musandam’s quieter nooks to take advantage of the windfall.

“Before the foreigners came, I had only one dhow boat that I used only to catch fish,” says al-Shehi, whose Musandam Sea Adventure Tour Company is based in Khasab, Musandam’s capital. “Eventually this one boat became four, and now we make many runs a day from the port.”

Nestled in a wadi full of palm groves between the mountains and the sea, Khasab feels entirely cut off from the world. But its small port bustles day and night. As al-Shehi quietly points out to us while we are still moored, the area is teeming with Iranian smugglers – a big part of the local economy. They come to purchase commercial goods in Khasab by day - cigarettes, televisions, stereos, DVD players, refrigerators and almost anything one can find in the town’s market - then carry them across the Gulf to southern Iran in speedboats by night, carefully avoiding detection by the Iranian police boats that wait in ambush on the other side.

“It’s a very dangerous job,” al-Shehi says. “Two years ago some smugglers were killed by pirates in Iranian waters - local criminals, hired by the police to stop these people.”

Parked near the speed-boats are the much slower-moving dhows that are owned and manned by Omani sailors. These wooden craft, some of them examples of ancient designs and building practices, constitute one of the oldest continuous seafaring traditions in existence. The waters off the Arabian coast are dotted with these vessels, which carry their cargo as far away as India and Pakistan.
Of course, in recent years, sailors such as al-Shehi have also become tour guides, refurbishing their boats with cushions and light canopies to ferry travelers comfortably along the Peninsula’s circuitous coastline. He sees it as a sustainable industry and is keenly aware of the area’s environmental sensitivity. “It is a business, yes, but we also want travelers to continue to appreciate the beauty here,” he says. “We value nature and are working to protect it – unlike what is happening in other parts of the region.”

His dhow comes to a stop a few hours later at the end of the fjord and moors beside a tiny islet known as Telegraph Island. This, he explains, was once the site of a strategic base where the British Empire’s telegraph lines connected London with the Indian subcontinent. The coral reef below hosts a riot of colourful fish. The passengers are handed snorkeling gear and given 45 minutes to enjoy the show while al-Shehi serves a lunch of fish biryani and other local delights prepared beforehand by his wife.

“Maybe these fjords are not as large as the ones you know,” al-Shehi says while heating up the biryani. “But I’m sure you will not find another fjord in the world where you can do what we are doing here right now.”

Friday, April 18, 2008

Review: The Rise and Fall of Alexandria - Birthplace of the Modern World, by Justin Pollard and Howard Reid. Penguin Books, 300 pp.

This book in fact has two titles. The one above, and the one as registered in the Library of Congress, “The Rise and Fall of Alexandria – Birthplace of the Modern Mind”. It is much more the latter: Justin Pollard and Howard Reid do a great job at taking the reader through the intellectual adventure of the city and its contribution to the way moderns think. Through nineteen chapters, the authors take us through the history of the city by looking at a series of geniuses, inventors and critical figures and what they contributed to the unique development of Alexandria.
The story begins with Alexander the Great, the founder who laid the outline of the city and its harbour with barley flour as birds dived to devour the seeds. He used the meal because of the lack of chalk in Egypt – a practical act that threads through the city’s classical history. Although this is the story of the scientists and philosophers of Alexandria, many of their findings, from the inventions of Archimedes to the geometry of Ptolemy, were geared towards the practical and not just the speculative. The city’s history is replete with eccentrics creating odd devices and tricks through such things as steam power, or determined minds seeking to circumnavigate the globe.

Some stand out above the others. Eratosthenes, who worked at Alexandria's great library, set out to measure the earth’s circumference. Despite the crudeness of ancient measuring devices, he was off by only 225 miles. Another savant was Philo, who stated that God is creativity itself. He was a believer in the “Great Chain of Being” and a man who mixed his Jewish heritage with Hellenism as the city itself merged Greek thought with Egyptian cosmology. Then there was the revered Hypatia, teacher, mathematician, and leader of the city’s academic elite in its late Classical period - an era of decline that witnessed battles between the city’s “pagan” roots and its newfound zealotry, Christianity. Hypatia was killed on the floor of the nave of a church by a Christian mob that “set upon her with broken pieces of roof tile, flaying her alive.”

Alexandria was the great cosmopolitan experiment in its time. It housed large Egyptian, Greek and Jewish communities, among many others. Its library provided a cultural hub and its merchant class and location meant it was the New York of its era, being connected to but also beyond the continent it sat upon.
However, another city today also offers a parallel: it has a kind of “library”, is a leading-edge cultural hub, has a “Pharos”, a global wonder of architecture, and like ancient Alexandria, serves as an entrepot where many millions are made. That city is Dubai with its “Media City” and its great Burj, soon to be by far the tallest building in the world. Like Alexandria, it may even end up with a battle between a “pagan” or secular culture and the surrounding religious zealotry. But Dubai has yet to show us its Hypatia, Eratosthenes and Philo. Indeed, the book raises the question: where today are those figures that give birth to the future?

Monday, April 14, 2008

All text and photos in this post are copyright John Zada and John Bell 2008

“I have a wonderful idea for a novel,” wrote a clerk of the British Information Office in Egypt, in a letter to a friend in Big Sur, California in 1944. “A nexus for all news of Greece, side-by-side with a sort of spiritual butcher’s shop with girls on slabs.”

When novelist Lawrence Durrell confided his idea to his lifelong literary confidant and friend Henry Miller, little did he know he would construct a pièce de résistance from which all references to a city would be forever drawn. His celebrated four-decker novel, The Alexandria Quartet, chronicles a city to which every international crossroads today claims some sort of lineage.

A town of auspicious, mythological beginnings, Alexandria would endear herself to every cosmopolitan soul throughout her recorded history. From the moment of her conception in the mind of her namesake, Alexander the Great, in 331 BC, foreigners flocked to her shores. Situated on Egypt’s Mediterranean coast with her back to Africa, the town fixed her gaze northwards towards Europe in a gesture of perpetual invitation. Within decades of her construction she became lord and locus of world knowledge, carrying humanity further in her first six hundred years than in all previous millennia combined.
Beyond her initial burst of brilliance Alexandria would continue to radiate her eminence as the influential bride of many a conqueror. From the Dark Ages onwards, she bore witness to waves of successive invaders who parked their ships in her crowded harbours: Byzantines, Arabs, Ottoman Turks, French, and later the British. Yet it wasn’t until the late nineteenth and early twentieth centuries that this erstwhile cosmopolis saw its latest incarnation as an international entrepot. This was the Alexandria that stoked the Durrellian imagination. That no other writer of modern fiction had before drawn upon the city’s storehouse of anecdotal riches gave Alexandria yet another new form in which to be realized.

As colonial avatar, Durrell’s Alexandria was a confluence of agendas. It was where British soldiers and bureaucrats refined and executed their imperial designs, and where merchants from across the Mediterranean came to make their fortunes. In her souqs and on her palm-lined esplanades, English, French, Arabs, Italians, Greeks, Armenians and Jews all intermingled in a dizzying frenzy of work and play, churning an economy that thrived on the exchange of gossip and goods. Day and night, the city seethed with intrigues. It was in its sweltering heat mitigated by a northwest breeze that “Monty” planned and won the war in North Africa, and where writers like E.M. Forster and Constantine Cavafy immortalized a decadent epoch through their respective brands of myth-making. Every word spoken and every move made during this time later became a nostalgia to be clung to by her aging denizens.

Yet, this was but one Alexandria. Despite her modern renaissance, this city of pashas and aristocrats was but a mere approximation, an unconscious and fleeting parody of an earlier self. For entombed beneath the concrete of the modern town were the undisturbed remains of one of the greatest cities the world had ever seen.
This, the Alexandria of antiquity, brought together all previous crossroads, setting the standard for every great international city that would follow in her wake. The Alexandria of the ancients was a civilization unto herself -- an epicenter of human achievement. Within her boundless parameters thrived a people devoted to scholarship, invention, technology, commerce and leisure. Today, this memory echoes as an endless catalogue of peoples, personages and achievements. Her success was predicated upon the wiles of Egyptian priests, Greek aristocrats, Jewish merchants, Persian middlemen, and Phoenician sailors. Visitors from Iberia to India to sub-Saharan Africa came to explore the city’s byways.

From the labyrinthine crypts of the city’s great library come to us the calculations of her immortalized savants: the geography of Strabo, the astronomy of Hipparchus, the mathematics of Euclid. Flourishing side-by-side with this rigorous scholarship was a mélange of pseudo-sciences that operated with unprecedented freedom: Gnostics, neo-Platonists, and Hermetic philosophers shared the city’s pulpits with the cults of Mithra, Isis, Christ and Yahweh – to name but a few. Without doubt, Alexandria was an interzone par excellence - a powerhouse of civilization - where every idea, philosophy and project coalesced into perfection.

Yet, the passing of time would exact its inevitable toll. Today Alexandria stands as little more than a maritime suburb of Cairo. Squalid, dusty and ghost-ridden, she exists as a husk of laundry-bannered tenements and European motifs held captive by the hinterland she had always rejected. Upstaged as a seaport, much of the traffic to-and-from the town nowadays enters and exits from the desert to her rear – perhaps the greatest indication of her tragic descent into irrelevance.
Even the reincarnation of her legendary library in the architecturally savvy Bibliotheca Alexandrina (a structural epitaph to a bygone moment) inspires the pathos of a past utterly unattainable. But from Alexandria’s poignant decay comes the glory of a life lived to its fullest. She remains the original exemplar of the international crossroads, whose legacy resides in her many progeny, which so many of us today call home. Whether or not she is to be reborn is left purely to Providence. In the meantime she sleeps, forever exuding the past beneath that same Mediterranean breeze.

About Me

John Bell and John Zada are two Canadians of Middle East origin who have had a lifelong fascination with the Middle East. A diplomat and a writer respectively, they have both spent more than two decades living and working throughout the region in various capacities. Here they pool their knowledge, insights, and experiences to generate and enliven new perspectives on the region, and project new ideas regarding human development as related to the Middle East and beyond.

About this Blog

'Al-Bab' is a Middle East blog that looks at the region beyond the stale, news-grabbing conflicts that afflict it. This site presents the land, people and spirit of the Middle East, as well as its past, present, and sometimes its future, as its authors see it. The blog offers a patchwork of vignettes that celebrate the region's rich heritage and the many linkages shared by the people who live there. It also aims to present new and constructive paradigms for conscious human evolution, with the Middle East as a backdrop. We hope the blog will inform those with an interest in learning about the Middle East, and also act as a vector for learning regarding the possibility of positive human change.
Nation, Psychology, and International Politics, 1870-1919
Glenda Sluga

This volume offers a new cultural and political history of the idea of the nation. Situating the history of international politics and the idea of the nation in the history of psychology, it reveals the popularity and political importance of a transnational discourse of the psychology of nations that had taken shape in the previous half-century.
Luo peoples

The Luo are several ethnically and linguistically related Nilotic ethnic groups in Africa that inhabit an area ranging from South Sudan and Ethiopia, through Northern Uganda and eastern Congo (DRC), into western Kenya, and the Mara Region of Tanzania. Their Luo languages belong to the Nilotic group and as such form part of the larger Eastern Sudanic family.

Luo groups in South Sudan include the Shilluk, Anuak, Pari, Acholi, Balanda Boor, Thuri and Luwo, and those in Uganda include the Alur, Acholi, Lango, Padhola, and Joluo. The Joluo and their language Dholuo are also known as the "Luo proper", being eponymous of the larger group. The level of historical separation between these groups is estimated at about eight centuries. Dispersion from the Nilotic homeland in South Sudan was presumably triggered by the turmoil of the Muslim conquest of Sudan. The migration of individual groups over the last few centuries can to some extent be traced in the respective group's oral history.

Origins in Sudan

The Luo are part of the Nilotic group of people. The Nilotes had separated from the other members of the East Sudanic family by about the 3rd millennium BC. Within Nilotic, Luo forms part of the Western group. Within Luo, a Northern and a Southern group are distinguished. "Luo proper" or Dholuo is part of the Southern Luo group. Northern Luo is mostly spoken in South Sudan, while Southern Luo groups migrated south from the Bahr el Ghazal area in the early centuries of the second millennium AD (about eight hundred years ago).

A further division within the Northern Luo is recorded in a "widespread tradition" in Luo oral history: the foundational figure of the Shilluk (or Chollo) nation was a chief named Nyikango, dated to about the mid-15th century. After a quarrel with his brother, he moved northward along the Nile and established a feudal society. The Pari people descend from the group that rejected Nyikango.
Ethiopia

The Anuak are a Luo people whose villages are scattered along the banks and rivers of the southwestern area of Ethiopia, with others living directly across the border in South Sudan. The name of this people is also spelled Anyuak, Agnwak, and Anywaa.

The Anuak of South Sudan live in a grassy region that is flat and virtually treeless. During the rainy season, this area floods, so that much of it becomes swampland with various channels of deep water running through it. The Anuak who live in the lowlands of Gambela are distinguished by the color of their skin and are considered to be Nilotic Africans. The Ethiopian peoples of the highlands are of different ethnicities, and identify by lighter skin color. The Anuak have accused the current Ethiopian government and dominant highlands people of committing genocide against them. The government's oppression has affected the Anuak's access to education, health care and other basic services, as well as limiting opportunities for development of the area.

The Acholi, another Luo people in South Sudan, occupy what is now called Magwi County in Eastern Equatorial State. They border the Uganda Acholi of Northern Uganda. The South Sudan Acholi numbered about 10,000 in the 2008 population census.

Uganda

Around 1500, a small group of Luo known as the Biito-Luo, led by Chief Labongo (his full title became Isingoma Labongo Rukidi, also known as Mpuga Rukidi), encountered Bantu-speaking peoples living in the area of Bunyoro. These Luo settled with the Bantu and established the Babiito dynasty, replacing the Bachwezi dynasty of the Empire of Kitara. According to Bunyoro legend, Labongo, the first in the line of the Babiito kings of Bunyoro-Kitara, was the twin brother of Kato Kimera, the first king of Buganda. These Luo were assimilated by the Bantu, and they lost their language and culture.
Later in the 16th century, other Luo-speaking people moved to the area that encompasses present day South Sudan, Northern Uganda and North-Eastern Congo (DRC) – forming the Alur, Jonam and Acholi. Conflicts developed when they encountered the Lango, who had been living in the area north of Lake Kyoga. The Lango also speak a Luo language. According to Driberg (1923), the Lango reached the eastern province of Uganda (Otuke Hills), having traveled southeasterly from the Shilluk area. The Lango language is similar to the Shilluk language. There is not consensus as to whether the Lango share ancestry with the Luo (with whom they share a common language), or if they have closer ethnic kinship with their easterly Ateker neighbours, with whom they share many cultural traits.

Between the middle of the 16th century and the beginning of the 17th century, some Luo groups proceeded eastwards. One group called Padhola (or Jopadhola - people of Adhola), led by a chief called Adhola, settled in Budama in Eastern Uganda. They settled in a thickly forested area as a defence against attacks from Bantu neighbours who had already settled there. This self-imposed isolation helped them maintain their language and culture amidst Bantu and Ateker communities.

Those who went farther afield were the Jo k'Ajok and Jo k'Owiny. The Ajok Luo moved deeper into the Kavirondo Gulf; their descendants are the present-day Jo Kisumo and Jo Karachuonyo amongst others. Jo k'Owiny occupied an area near Got Ramogi or Ramogi hill in Alego of Siaya district. The Owiny's ruins are still identifiable to this day at Bungu Owiny near Lake Kanyaboli. The other notable Luo group is the Omolo Luo who inhabited Ugenya and Gem areas of Siaya district. The last immigrants were the Jo Kager, who are related to the Omollo Luo. Their leader Ochieng Waljak Ger used his advanced military skill to drive away the Omiya or Bantu groups, who were then living in present-day Ugenya, around 1750 AD.
Kenya and Tanzania

Between about 1500 and 1800, other Luo groups crossed into present-day Kenya and eventually into present-day Tanzania. They inhabited the area on the banks of Lake Victoria. According to the Joluo, a warrior chief named Ramogi Ajwang led them into present-day Kenya about 500 years ago. As in Uganda, some non-Luo people in Kenya have adopted Luo languages. A majority of the Bantu Suba people in Kenya speak Dholuo as a first language and have largely been assimilated.

The Luo in Kenya, who call themselves Joluo (aka Jaluo, "people of Luo"), are the fourth largest community in Kenya after the Kikuyu, Luhya and Kalenjin. In 2017 their population was estimated to be 6.1 million. In Tanzania they numbered (in 2010) an estimated 1,980,000. The Luo in Kenya and Tanzania call their language Dholuo, which is mutually intelligible (to varying degrees) with the languages of the Lango, Kumam and Padhola of Uganda, Acholi of Uganda and South Sudan and Alur of Uganda and Congo.

The Luo (or Joluo) are traditional fishermen and practice fishing as their main economic activity. Other cultural activities included wrestling (yii or dhao) for the young boys aged 13 to 18 in their age sets. Their main rivals in the 18th century were the Lango, the Highland Nilotes, who traditionally engaged them in fierce bloody battles, most of which emanated from the stealing of their livestock.

The Luo people of Kenya are Nilotes and are related to the Nilotic people. The Luo people of Kenya are the fourth largest community in Kenya after the Kikuyu and, together with their brethren in Tanzania, comprise the second largest single ethnic group in East Africa. This includes peoples who share Luo ancestry and/or speak a Luo language.
Acholi (Uganda, South Sudan, Kenya)
Langi (Uganda, South Sudan)
Alur (Uganda and DRC)
Anuak (Ethiopia and South Sudan)
Blanda Boore (South Sudan)
Jopadhola (Uganda)
Jumjum (South Sudan)
Jur Beli (South Sudan)
Kumam (Uganda)
Joluo (Kenya and Tanzania)
Luwo (South Sudan)
Pari (South Sudan)
Shilluk (South Sudan)
Thuri (South Sudan)
Balanda Boor (South Sudan)
Cope/Paluo people (Uganda)

Notable Luo people

Aamito Lagum, Ugandan international fashion model and winner of the first Africa's Next Top Model
Achieng Oneko, independence freedom fighter and politician (Kenya)
Adongo Agada Cham, 23rd King of the Anuak Nyiudola Royal Dynasty of Sudan and Ethiopia
Ayub Ogada, singer, composer, and performer on the nyatiti, the Nilotic lyre of Kenya
Barack Obama, 44th President of the United States, of Luo descent through his father, Barack Obama, Sr. (American)
Barack Obama Sr., economist, Harvard University graduate, father of previous U.S. President Barack Obama (Kenyan)
Bazilio Olara-Okello, former senior army officer, deceased (Ugandan), who led the rebellion that gave Tito Okello the Presidency
Betty Oyella Bigombe, former Ugandan politician, a senior fellow at the U.S. Institute of Peace
George Cosmas Adyebo, a Ugandan politician and economist who was Prime Minister of Uganda from 1991 to 1994
Daniel Owino Misiani, a Tanzanian musician from Mara Region, known as the "King of History" in Kenya; overseas and in Tanzania, he was known as "the grandfather of benga", which he pioneered
David Wasawo, University of Oxford-trained zoologist and the first African Deputy Principal of Makerere University College and Nairobi University College
Divock Okoth Origi, a Belgian professional footballer who plays as a forward for Liverpool and the Belgium national team; son of former Kenyan professional footballer Mike Origi (Belgian)
Dennis Oliech, football player, the most successful Kenyan footballer of his time
George Ramogi, musician (Kenya)
Erinayo Wilson Oryema, Uganda's first African Inspector General of Police, Minister of Land, Mineral, and Water Resources and Minister of Land, Housing and Physical Planning (Uganda)
Grace Ogot, educationist (Kenya)
Harris Onywera, a University of Nairobi, Rhodes University and University of Cape Town graduate, and a Centers for Disease Control and Prevention-trained STI/HIV/AIDS researcher (Kenya)
Henry Luke Orombi, Archbishop of the Church of Uganda
Prof. Henry Odera Oruka, philosopher
James Orengo, Senate member in Kenya and a Senior Counsel in Kenya, also known for the Second Liberation fight in Kenyan politics
Janani Luwum, former Archbishop of the Church of Uganda
Jaramogi Oginga Odinga, independence fighter, first Vice President of independent Kenya
Johnny Oduya, a defenseman for the Chicago Blackhawks of the NHL
Joseph Kony, leader of the Lord's Resistance Army, notorious rebel group in Uganda
Lupita Nyong'o, Oscar Award-winning actress and filmmaker; graduate of The Yale School of Drama (Kenyan/Mexican)
Matthew Lukwiya, epidemiologist, died while fighting to eradicate the Ebola pandemic in northern Uganda
Milton Obote, former Ugandan President
Musa Juma, musician (Kenya)
Tom Mboya, politician, Pan-Africanist, assassinated in 1969 (Kenya)
Tony Nyadundo, musician (Kenya)
Oburu Odinga, former Kenyan Minister and member of the Kenyan Senate
Okatch Biggy, musician (Kenyan)
Okot p'Bitek, poet and author of the Song of Lawino (Uganda)
Olara Otunnu, former Under-Secretary-General of the United Nations and Special Representative for Children and Armed Conflict (Uganda)
Raila Odinga, second Prime Minister of Kenya
Robert Ouko, Kenyan Foreign Minister, murdered in 1990
Thomas R. Odhiambo, pre-eminent scientist, founder of the International Centre of Insect Physiology and Ecology (Kenya)
Tito Okello, former President of Uganda and Army Commander
Yvonne Adhiambo Owuor, author (Kenya)

References

Further reading

Ogot, Bethwell A., History of the Southern Luo: Volume I, Migration and Settlement, 1500-1900 (Series: Peoples of East Africa), East African Publishing House, Nairobi, 1967
Johnson, D., History and Prophecy among the Nuer of Southern Sudan, PhD thesis, UCLA, 1980
Deng, F.M., African of Two Worlds; the Dinka in Afro-Arab Sudan, Khartoum, 1978

External links

Re-introducing the "People Without History"
Towards a Human Rights Approach to Citizenship and Nationality Struggles in Africa
The making of the Shilluk kingdom: A socio-political synopsis
About Kenya
The Luo
History of the Anuak to 1956, by Professor Emeritus Robert O. Collins
The pride of a people: Barack Obama, the 'LUO', lwanda magere, by Philip Ochieng, Nation Media Group, January 2009
The Shilluk People, Their Language and Folklore by Diedrich Westermann

Category:Ethnic groups in Kenya
Category:Ethnic groups in South Sudan
Category:Ethnic groups in Tanzania
Category:Ethnic groups in Uganda
Category:Pastoralists
Michael Douglas knows when to give Catherine Zeta-Jones 'space'

If Ryan Seacrest looked freshly captivated by Catherine Zeta-Jones' look on the red carpet, so did her fellow Oscar-winner husband, Michael Douglas ... who claimed he'd first seen it only 15 minutes before. As he explained to Seacrest, "You have to give a little space. You learn that after 15 years of marriage." -- Jay Bobbin, Zap2it (Getty Images)
Kenzo ring This ring is from the KENZO line, made of sterling silver and wood. It has a unique flowery design at the front. It has been worn only a few times and shows no signs of wear. Its original price was 120 euros.
Mill and Marx: Two Visions of Freedom[1]

César Augusto Mora Alonso
Escuela Normal Superior de Cartagena de Indias / Universidad de Cartagena

Giovanni Mafiol de la Ossa
Universidad de Cartagena

Abstract: This paper analyzes the conceptions of freedom defended by J. S. Mill and K. Marx, in order to establish whether there are points of convergence between them despite the different motivations that drive them. The thesis proposed is that both thinkers coincide in their defense of freedom, since they grant a prominent role to self-determination, to the free development of individuality, and to the existence of material conditions for its realization. The distances that separate them are thus no obstacle when it comes to advocating what they regard as the fundamental value of the human being.

Key words: Mill, Marx, freedom, self-determination, free development of individuality

[1] This paper grew out of participation in two events honoring the life and work of John Stuart Mill: the symposium "John Stuart Mill: vigencia y legado de su pensamiento filosófico" and the V Congreso Internacional y VIII Nacional de Filosofía del Derecho, Ética y Política. The latter took place at the Sede Candelaria of the Universidad Libre (Bogotá) on 26-28 August 2015; the former was held in Tunja, at the Universidad Pedagógica y Tecnológica de Colombia (UPTC), on 24-25 August of the same year.

A Complicated Relationship

There is no doubt that the influence of John Stuart Mill and Karl Marx has been decisive in contemporary philosophy and politics. Both, each in his own way, stood out for promoting free and democratic societies. The former sought to shield the individual from the abuses of state power and, above all, from the tyranny exercised by social traditions and customs, while the latter aimed at emancipating humanity from the alienating yoke of the capitalist mode of production. With this, he aspired to a society in which the free development of each would become an indispensable condition for the free development of all.

Finding points of convergence between Mill and Marx, however, may not at first be an easy task, since in the first volume of Capital the criticisms and attacks directed against the English philosopher are notable for their relentlessness. There, indeed, Marx calls him conceited ("he proclaims himself the Adam Smith of the present day," 1981: 89), a thinker whose investigations are neither deep nor significant, since they merely repeat, and repeat badly, the feeble arguments of the first popularizers of David Ricardo (1981: 463).
Essentially, Marx refers to Mill in the following terms:

After proving to us, with the clarity we have seen, how capitalist production would always exist even if it did not exist, Mill is consistent enough to prove that this mode of production does not exist even when it does exist: "And even in the former case [when the capitalist advances the worker all his means of subsistence] the worker may be regarded from the same point of view, that is, as a capitalist. For, by supplying his labor at a price below the market rate, [!] he may be understood to advance the difference to his employer [!]," etc. In reality, the worker advances his labor to the capitalist gratis for a week, etc., in order to receive its market price at the end of the week, etc.; and this, according to Mill, makes the worker a capitalist! On flat land even a heap of sand can look like a hill; by the caliber of its "intellectual figures" we can measure the mediocrity into which the bourgeoisie has fallen (1981: 465).[2]

Even so, Marx acknowledges that Mill is aware both of the precarious situation of the proletariat and of the legitimacy of its aspirations, and that he therefore strives to reconcile the demands of this class with the essential postulates of capitalist political economy. Hence Marx regards him not as a mere sophist and sycophant of the ruling class, but as a man who seeks a certain scientific standing, which is why he deserves not to be placed in the same cohort as the vulgar and apologetic economists.[3]

[2] Unless otherwise indicated, all quotation marks, italics, and brackets appearing within quotations belong to the authors cited.
The problem is that Marx considers this proposal (to harmonize the demands of the workers' cause with the principles of capitalism) to end in a hollow syncretism that reconciles the irreconcilable, and to represent, ultimately, "the declaration of bankruptcy of 'bourgeois' economics" (1981: XVI).

In Mill's work, for its part, there are no disparaging remarks about Marx's personality or writings; indeed, there may be no explicit allusion to him at all, whether positive or negative. Most likely Mill never studied his works, and if he did read them, they were perhaps not significant enough to him to cite. It is true that Mill, in his Autobiography (2008), calls himself a socialist. In fact, his economic policy rejects the capitalist monopoly of the means of production and promotes redistributive taxes on wealth destined for social welfare works; but the truth is that his positions here are closer to "utopian socialism" than to "scientific socialism." Moreover, in On Liberty he expresses his reservations about certain attitudes of socialist and mass movements. One of them, perhaps the gravest, is the imposition of majority opinion.
The reason is that this democratic tendency or sentiment can curtail the two indispensable requirements for the unfolding of human potential, namely freedom and variety of situations, which leads inevitably to the uniformity of individuals. In Mill's judgment, the way such movements pursue this uniformity is by raising the lower classes and leveling down the higher ones (2014: 142, 162-163). Another passage records the pernicious conduct of many members of the working class:

We have only to suppose a considerable diffusion of Socialist opinions, and it may become infamous in the eyes of the majority to possess more property than some very small amount, or any income not earned by manual labour. Opinions similar in principle to these already prevail widely among the artisan class, and weigh oppressively on those who are amenable to the opinion chiefly of that class, namely, its own members. It is known that the bad workmen who form the majority of the operatives in many branches of industry are decidedly of opinion that bad workmen ought to receive the same wages as good, and that no one ought to be allowed, through piecework or otherwise, to earn by superior skill or industry more than others can without it. And they employ a moral police, which occasionally becomes a physical one, to deter skilful workmen from receiving, and employers from giving, a larger remuneration for a more useful service (2014: 162-163).

[3] Indeed, in the first volume of Capital, in the chapter entitled "Conversion of Surplus-Value into Capital," Marx quotes a passage from Mill's Principles of Political Economy to illustrate the misery of the workers: "Nowadays the produce of labour is distributed in inverse ratio to the labour: the largest share goes to those who have never worked at all, the next largest to those whose work is almost nominal, and so on, in a descending scale, the reward dwindling as the work grows harder and more disagreeable, down to the most fatiguing and exhausting bodily labour, which sometimes cannot count on earning even the bare necessaries of life" (Mill, quoted in Marx, 1981: 555).

Even so, despite the distances between the two authors on this point, we consider it possible to establish points of convergence between them. The links may be found in their shared defense of freedom, for self-determination and the free development of individuality are the ethical imperatives that define their philosophical reflections. Our purpose here is therefore to highlight the coincidences these two visions present when it comes to promoting democratic societies in which individuals can fully develop their life projects without the interference of external powers. To that end, it is first necessary to address each proposal separately; then to acknowledge their differences; and finally to appreciate the bridges that, with regard to freedom, can be built between Mill and Marx.

Marx and His Vision of Freedom[4]

In Marx's work, the theme of freedom appears closely tied to that of communist revolution, since his objective is the liberation of the great majorities from the exploitation and misery to which capitalism subjects them. At bottom, what is sought is the establishment of a society in which the barriers that prevent human self-realization have been overcome.
The idea is that each person should be able to develop his individuality freely while at the same time contributing to fostering that of his fellows. In short, communist society would be one in which the free development of each becomes the fundamental condition for the free development of all (Marx and Engels, 1998: 67).

One of the means of achieving liberation is an uncompromising critique of the status quo. According to Marx, this critique must be characterized by indignation and denunciation (2012: 50), so its task consists in "unmasking human self-estrangement in its secular form" (2012: 48). Along the same lines, it also aims to make the exploited conscious of the condition in which they find themselves. Ultimately, what is pursued is the transformation of the human being into the highest value, by fulfilling the moral obligation to eradicate all the factors that oppress and enslave him (2012: 54-55).

In Marx's judgment, the exploitation of man by man arises because the relations of production of the capitalist system generate and sustain a division between the minority that owns the means of production and the majority, which has only its labor power in order to survive. The crux is that these workers, despite all the opulence they generate, live in the most absolute misery. Hence Marx asserts that they are in a state of alienation, for their labor is forced, which translates into a lack of control over their own lives.

[4] Most of this section takes up the arguments of an article entitled "Sobre la idea de justicia en Marx," by César Augusto Mora Alonso (2017), published in Cuestiones de Filosofía, 3(21), 45-63, a biannual journal edited by the Escuela de Filosofía y Humanidades of the Universidad Pedagógica y Tecnológica de Colombia (UPTC).
What is at stake, then, is putting an end to the labor that alienates, to make way for the labor that liberates, that humanizes (Marx, 1985). This is why the idea of "human emancipation" occupies a prominent place in Marx's liberating project. In On the Jewish Question (2008), that idea is presented as follows:

All emancipation is a reduction of the human world, of relationships, to man himself. Political emancipation is the reduction of man, on the one hand, to a member of bourgeois society, to an egoistic, independent individual, and, on the other, to a citizen of the state, to a moral person. Only when the real, individual man re-absorbs in himself the abstract citizen and, as an individual man, has become a species-being in his individual work and his individual relationships, only when man has recognized and organized his forces propres as social forces and consequently no longer separates social power from himself in the shape of political power, only then will human emancipation have been accomplished (2008: 196-197).

Nevertheless, it is the proletariat itself that must carry out this emancipation, since it is the class that suffers most: a class "with radical chains (...) to which its universal suffering confers a universal character; which claims no particular right, because it is no particular wrong but wrong as such that is perpetrated against it (...)" (2012: 59). This is why Marx considers that the emancipation of the working class can only occur when it frees itself from, but also frees, the remaining social sectors, because that class represents "the total loss of man and (...) can only win itself back by totally recovering him" (2012: 59).
This is why all of Marx's attention points to self-realization (Selbstbestimmung), which he conceives as each person's capacity to determine his own life by deploying, fully and voluntarily, all his aptitudes. He makes clear, however, that this will only come about once the division of labor and class contradictions have been destroyed, opening the doors to a new type of social organization in which the free development of all depends largely on the free development of each (Marx and Engels, 1998: 67).

Marx calls this new type of social organization communist. In it, all members work freely and voluntarily, since they are no longer alienated by forced labor. In this sense, the regulation of the conditions of production makes it possible "to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic" (Marx and Engels, 1994: 46).

Against this background, Marx introduces the celebrated distinction between the "realm of necessity" and the "realm of freedom." In the third volume of Capital (1981), specifically in chapter XLVIII of the seventh section, he states the following:

In fact, the realm of freedom only begins where labour which is determined by necessity and the coercion of external ends comes to an end; thus, in the very nature of things, it lies beyond the sphere of actual material production. Just as the savage must wrestle with nature to satisfy his wants, to maintain and reproduce his life, so must civilized man, and he must do so under all social forms and under all possible modes of production. As he develops, and his wants develop with him, this realm of natural necessity expands; but, at the same time, the productive forces that satisfy those wants also expand. Freedom in this field can only consist in socialized man, the associated producers, rationally regulating this interchange of matter with nature, bringing it under their common control instead of being ruled by it as by a blind power, and accomplishing it with the least expenditure of energy and under conditions most adequate to and most worthy of their human nature. But it nonetheless always remains a realm of necessity. Beyond its frontiers begins that development of human powers which counts as an end in itself, the true realm of freedom, which, however, can only flourish upon that realm of necessity as its basis. The fundamental condition for this is the shortening of the working day (1981: 826-827).

Thus communist society would be the realization of this "realm of freedom." Within it, each person will be able to realize his individuality freely, owing to the disappearance of the obstacles that prevented the full development of human potential. The reins will therefore be loose for the integral realization of the various personal life projects.

In short, emancipation, freedom, and self-realization are the ethical imperatives that define Marx's thought. In them humanity finds its species character (Marx, 1985). Moreover, in an article in the Rheinische Zeitung entitled "Debates on Freedom of the Press and Publication of the Proceedings of the Assembly of the Estates," he makes clear what freedom represents.
On it he states the following:

Freedom is so much the essence of man that even its opponents realize it in combating its reality, seeking to appropriate for themselves, as the most precious jewel, what they reject as a jewel of human nature. No human being fights freedom; at most he fights the freedom of others. Hence every kind of freedom has always existed, only at one time as a special privilege, at another as a general right (1983: 75).

Even so, what Marx takes as freedom cannot simply be identified with the liberal tradition's conception of it, for the latter, with its motto of doing and undoing as one pleases so long as no harm comes to others, and whose practical application is embodied in the right of private property, ends up turning people into isolated, withdrawn monads. In a society ruled by disunity and egoism, what one individual can find in another is not the realization but rather the limitation of his freedom (Marx, 2008: 190-191).[5] Hence its exercise is only possible in what Marx calls community (Gemeinschaft), for in it the human being deploys his whole social and cooperative nature (Marx, 1985). This makes it possible for the individual expression of one life not to exclude that of others, since in social relations solidarity must prevail above all, not private gain at the expense of collective harm. This, however, is no impediment to the development of personal life projects, since communism would be characterized as an association of free men and women in free activity, in which the free development of each would be a necessary condition for the free development of all (Marx and Engels, 1998: 67).
[5] There is no doubt that Marx viewed the "bourgeois freedoms" with distrust, owing to their intimate relation to egoism and individual profit, but, as G. Restrepo (1999) rightly notes, it is undeniable that Marx recognizes the importance and meaning of these freedoms when they are set against the restrictions on freedom imposed in the Middle Ages. He is also aware of the role they play in the proletariat's struggles to dignify its working conditions and, above all, to establish the conditions for the arrival of "true freedom." Restrepo is again right to point out that the Communist Manifesto attacks economic rights (freedom of enterprise, of contract, and of appropriation), but not individual and political rights (freedom of thought, expression, and association).

J. S. Mill and His Vision of Freedom

Mill, in On Liberty, declares with concern that his era is experiencing a strong tendency to increase society's dominion over people's lives, and that this is done through two different but intimately linked powers: legislation and opinion (2014: 64). His purpose in that work is therefore to determine how far the jurisdiction of these powers should extend with respect to individual autonomy or independence, since the proportion between legitimate social control and the sphere of individual liberties must be clearly established. This is a question on which Mill considers that everything remains to be done and which reveals itself "as the vital question of the future" (2014: 47).
For the English philosopher this is of the utmost importance, because popular sovereignty can easily turn into the tyranny of the majority, seeking to oppress a part of society or those individuals who do not conform to established patterns. Mill warns that this oppression occurs when, apart from governmental decisions, society imposes dictates in matters that are not its concern. And although these dictates are not enforced through severe punishments, they are highly effective, for they manage to exert pressure on the most intimate aspects of personal life. For this reason he states the following:

There needs protection also against the tyranny of the prevailing opinion and feeling; against the tendency of society to impose, by other means than civil penalties, its own ideas and practices as rules of conduct on those who dissent from them; to fetter the development, and, if possible, prevent the formation, of any individuality not in harmony with its ways, and compel all characters to fashion themselves upon the model of its own (2014: 52).

On the basis of all this, Mill formulates the principle of liberty. It proclaims that there must be no interference of any kind in what concerns the individual's own interests, for these aspects of his life and conduct can affect only him. Mill calls this "the appropriate region of human liberty" (2014: 61), a sacred region that must not be violated by state or social powers. Within it, individual spontaneity appears in three forms: as liberty of thought and expression, as liberty of tastes and pursuits, and as liberty of association.
The English philosopher warns that a society that does not respect these liberties cannot be called free, whatever its political system, "and none is completely free in which [such liberties] do not exist absolute and unqualified" (2014: 62).[6] Nevertheless, Mill holds that in certain cases both the state and society are entitled to impose restrictions on a person's freedom of action: those in which the integrity of one or more members of the community may be threatened. In other words, one should act against someone's will only in order to safeguard others from imminent harm; and in the event that the harm has been consummated, there is every reason to sanction it legally or morally. According to Mill, the only dimension of individual spontaneity that must be subject to external control is the one that concerns one's fellows; in the rest, that is, in matters of strictly personal concern, no one else has authority (2014: 58, 60). This is the famous harm principle.[7]

In the English philosopher's judgment, everything that aims at the destruction of individuality deserves to be called despotism (2014: 130), which avails itself of repression and censorship to carry out that purpose. For this reason, On Liberty mounts a categorical defense of liberty of thought and discussion, since on both depends the mental well-being of the individual, the basic condition of all other kinds of well-being (2014: 115).

[6] The brackets are ours.

[7] In Mill's words: "The only part of the conduct of any one, for which he is amenable to society, is that which concerns others. In the part which merely concerns himself, his independence is, of right, absolute. Over himself, over his own body and mind, the individual is sovereign" (2014: 58).
Mill's merit in this defense is to have developed a rigorous methodology for freedom to express itself in effective terms (Silva et al., 2007). Individuality, plurality, debate, and criticism appear as the antidotes to the dogmatism and intolerance of the arbitrary. The various disciplines of knowledge also come out winners here: dialogue and argumentation are the tools for attaining and preserving truth. But if all persons possess the right to think and express whatever they wish, why, then, can they not have the freedom to conduct their lives according to the ideas they profess? For Mill, liberty of tastes and pursuits appears here as the natural and practical consequence of liberty of conscience and opinion. Indeed, it is liberty of conduct that allows individuality to assert itself through its own development; hence he considers it one of the essential principles of well-being, "the fundamental ingredient of individual and social progress" (2014: 120).

In effect, the free development of individuality is what allows each person to deploy all his capacities to the fullest and to see something valuable in his own existence, since, in affirming himself, he can attain a high degree of satisfaction. This, in Mill's words, has positive repercussions on society, because when there is a greater fulness of life in individuals, "there is more life in the mass which is composed of them" (2014: 129), which undoubtedly lays the foundations for achieving social justice. The utilitarian principle, which aims at the greatest well-being for the greatest number, must clear the ground for people to develop, fully and without restriction, their moral and intellectual faculties. Freedom thus presents itself as the permanent and inexhaustible source of social development (2014: 138-139).
César Augusto Mora Alonso, Giovanni Mafiol de la Ossa116 Así pues, la libertad en Mill no es una formalidad, un mero derecho político abstracto. Tampoco se reduce a la justificación para no intervenir en los asuntos que forman parte de la esfera privada del individuo; en otras palabras, no representa una simple vindicación del concepto negativo de libertad, entendido como alejamiento de los demás. Si bien para el filósofo inglés este resulta relevante, la verdad es que se halla en función del concepto positivo de libertad (Ruiz Sanjuán, 2014: 34), puesto que lo crucial en su obra es la autorrealización, el libre desarrollo de la individualidad, el despliegue máximo de las facultades de la persona. Aquí residen los pilares del bienestar y la felicidad. De ahí que Mill sostenga que su defensa de la individualidad no sea una apología del egoísmo y la indiferencia social, que pretenda que a los seres humanos no les importe la vida de los demás, a menos que estén en juego sus propios intereses. Por el contrario, considera que lo indispensable para fomentar el bienestar social es la cooperación y el aumento del esfuerzo desinteresado (2014: 146). Por ello, Mill hace tanto hincapié en la idea de que una sociedad que garantice la libertad debe brindar las condiciones materiales para que los individuos se autodeterminen, ya que la pobreza y la ausencia de derechos sociales representan serios obstáculos para llevar a cabo ese propósito (Ruiz Sanjuán, 2014: 35). Bajo estas circunstancias, el principio de la utilidad sería una quimera. Divergencias y convergencias Es indiscutible que Mill y Marx coinciden en el momento de realizar una defensa a ultranza de la libertad. A primera vista, sin embargo, las motivaciones de cada uno en esta cuestión parecen ser distintas, pues, en el caso de Mill, se busca determinar "la naturaleza y los límites del poder que la sociedad puede ejercer legítimamente sobre el individuo" (2014: 47). 
His goal is to protect the individual from the tyranny exercised by the state and by public opinion through traditions and customs. In Marx's case, meanwhile, the intention is to emancipate humanity from capitalist oppression in order to establish communism, a type of society that would represent "the true realm of freedom," insofar as the free development of each would be the basis for the free development of all. This means that in Marx's thought real freedom can only emerge from a revolution that destroys the political and economic foundations of the society of capital. Things are otherwise in Mill's proposal, since neither the state apparatus nor the economy that sustains it needs to be destroyed in order to guarantee individual rights and liberties; in fact, it is the state that fosters and maintains them. Moreover, Mill shows his sympathies with the doctrine of free trade, "which rests on grounds different from, though equally solid with, the principle of individual liberty (...)" (2014: 173). Marx, by contrast, sees the state as an organ of domination that offers liberties on a merely formal plane and that will disappear in the highest stage of communism, for with the disappearance of class antagonisms an artificial system for reconciling conflicting interests will no longer be necessary. The development of productivity and of working conditions will establish a regime of abundance that will foster the free development of the individuality of each of the associated members (1977: 12). In sum, for Marx true freedom is the goal or result of the proletarian revolution, while for Mill it is a right or historical conquest that must not be lost but maximized through the continuous unfolding of the various individualities and of society as a whole.
In the English philosopher's judgment, the expression of freedom is the permanent and inexhaustible source of development of present-day societies. Nevertheless, despite the differences, it is possible to find common ground between the two visions of freedom. One point, perhaps the main one, is the idea that individuals should be able to develop their life projects to the full, without interference from external powers. Even so, in both Mill and Marx, freedom is not expressed simply in negative terms, that is, as the absence of obstacles that constrain the personality. For them, above all, freedom is synonymous with self-realization: the fact that each person, being master of his or her own will, is responsible for his or her own existence. Both hold that freedom is what is distinctive about the human being. By putting it into practice, human beings can give free rein to their whole social and cooperative nature. The self-affirmation of each individuality has a positive impact on society as a whole. In both thinkers, therefore, there is a categorical rejection of selfishness and indifference. Likewise, both stress the existence of material conditions as a guarantee of true freedom. Another point of convergence is their defense of freedom of thought and expression. From here, bridges can be built between the second chapter of On Liberty and the articles Marx published in the Rheinische Zeitung; in particular, the "Leading Article in No. 179 of the Kölnische Zeitung," the "Comments on the Latest Prussian Censorship Instruction," and "The Debates on Freedom of the Press and on the Publication of the Proceedings of the Diet." In these writings, Marx rejects censorship and defends, accordingly, the idea that newspapers may treat topics of every kind.
He asserts that censorship is a civilized monster, a perfumed abortion, while the free press represents the expression of the very essence of freedom. Here, like Mill, he trusts that freedom of discussion and public debate are the gateway to truth and to social development. We cannot conclude without asking whether the reflections these two thinkers offer on freedom remain valid today. Yet it is not so easy to answer immediately in the affirmative, because the political models that underpin their proposals have been widely questioned. Reason enough to think about a notion of freedom that rescues the best of both: without doubt a necessary task, but a long-term one. It should not be overlooked, however, that Mill and Marx arrive at the same result despite having approached the problem from very different perspectives. As we have said, the crux lies in the commitment to self-determination, the free development of individuality, and the existence of the material conditions to make them concrete. It is here, in our view, that these proposals remain relevant in a society like Colombia's, in which poverty and inequality prevent a large sector of the population from enjoying the liberties and rights enshrined in its political constitution. That is why, as these thinkers did, we must mount a categorical defense of freedom, for at this moment it is clearer than ever that it is being reduced to a mere formalism. Proof of this lies in the restrictions imposed by the new police code and in the pressures that influential political and religious groups exert against minorities, plurality, and diversity.
Hence Mill's defense is still pertinent, since it gives us the key to opposing the arbitrary and restrictive impositions that turn freedom into a mere aspiration. The same holds for Marx, since, without adequate material conditions, freedom cannot be realized effectively. In short, so that liberties do not remain a mere yearning, both thinkers call on us to keep fighting for them.

References

MARX, Karl. (1977). Crítica del programa de Gotha. Moscow: Editorial Progreso.
MARX, Karl. (1981). El Capital (Vols. I and III). Havana: Editorial de Ciencias Sociales.
MARX, Karl. (1983). Los debates sobre la libertad de prensa y sobre la publicación de las sesiones de la Dieta. In J. L. Vermal (Ed.), En defensa de la libertad. Los artículos de la Gaceta Renana (1842-1843) (pp. 49-102). Valencia: Fernando Torres-Editor.
MARX, Karl. (1985). Manuscritos de economía y filosofía. Madrid: Alianza.
MARX, Karl, & ENGELS, Friedrich. (1994). La ideología alemana. Valencia: Universitat de València.
MARX, Karl, & ENGELS, Friedrich. (1998). Manifiesto comunista. Barcelona: Crítica.
MARX, Karl. (2008). Sobre la cuestión judía. In R. Jaramillo Vélez (Ed.), Escritos de juventud sobre el derecho. Textos 1837-1847 (pp. 171-204). Barcelona: Anthropos.
MARX, Karl. (2012). Para una crítica de la filosofía del derecho de Hegel. Introducción. In F. Groni (Ed.), Páginas malditas. Sobre la cuestión judía y otros textos (pp. 47-60). Buenos Aires: Libros de Anarres.
MILL, John Stuart. (2008). Autobiografía. Madrid: Alianza.
MILL, John Stuart. (2014). Sobre la libertad. Madrid: Akal.
MORA ALONSO, César Augusto. (2017). Sobre la idea de justicia en Marx. Cuestiones de Filosofía, 3(21), 45-63. Tunja: UPTC.
RESTREPO, Guillermo. (1999). Ética y libertad en Marx. In J. Caycedo & J. Estrada (Comps.), Marx vive (pp. 139-153).
Bogotá: Universidad Nacional de Colombia.
RUIZ SANJUÁN, César. (2014). La libertad en el pensamiento político de John Stuart Mill. In C. Ruiz Sanjuán (Ed.), Mill, J. S., Sobre la libertad (pp. 5-40). Madrid: Akal.
SILVA, Alonso, MALDONADO, Jorge, & AGUIRRE, Javier. (2007). Individualidad, pluralidad y libertad de expresión en J. S. Mill. Praxis filosófica, (24), 115-135. Cali: Universidad del Valle.
1. Field of the Invention

This invention relates to voltage-controlled oscillation circuits.

2. Description of the Related Art

An emitter-coupled astable multivibrator, for example as shown in FIG. 5, is conventionally used as a voltage-controlled oscillation circuit. The circuit shown in FIG. 5 has a capacitor c1 connected between the emitters of transistors tr1 and tr2, which are further connected to voltage-controlled current sources cs1 and cs2. As further shown in FIG. 5, the base of tr1 is connected to the collector of tr2, and the base of tr2 is connected to the collector of tr1. The collectors of tr1 and tr2 are connected to a power supply terminal VCC (e.g., 3 V) through diode-connected transistors tr3 and tr4 and through resistors r1, r2 connected in parallel with the transistors tr3 and tr4. With this configuration, the transistors tr1 and tr2 are alternately turned on to charge and discharge the capacitor c1, thereby effecting the oscillation operation. This produces the voltage waveforms denoted a and b in FIG. 6 on terminals a and b. Here, the transistors are equal in size and the voltage-controlled current sources are set to the same current value. The voltage waveforms a and b oscillate with the base-to-emitter voltage VBE of each transistor as their amplitude. The frequency is determined by the charge/discharge time of the capacitor c1, which is controlled by varying the current values of the voltage-controlled current sources cs1, cs2. For example, increasing the current value shortens the charge/discharge time; in the voltage waveforms of FIG. 6 the slope becomes steeper, increasing the frequency while the amplitude is kept constant. In the configuration of FIG. 5, the transistors tr3 and tr4 are used as diodes to set the amplitude at the voltage VBE (approximately 0.7 V).
The resistors r1 and r2 are provided in parallel with these diodes in order to raise the potential on the terminals a and b to the potential of the power supply terminal VCC when no current flows through the diodes. To fix the amplitude, the voltage drop produced across the resistors r1, r2 must exceed the voltage VBE for any current value within the variable range of the voltage-controlled current sources cs1, cs2. If the voltage drop is not set in this manner, current flows through the resistors r1, r2 instead of the diodes, making it impossible to ensure the amplitude. Accordingly, the resistance of the resistors r1, r2 and/or the current of the voltage-controlled current sources cs1, cs2 must be set to a rather large value. However, such a voltage-controlled oscillation circuit is usually used in a PLL or the like and is integrated together with other circuit elements on one chip. If the resistors r1, r2 are increased in size, the floating capacitance increases and slows the rise of the potential, which in turn prevents the frequency from being increased. Consequently, if high-frequency oscillation is desired, the current of the voltage-controlled current sources cs1 and cs2 must be increased, making it difficult to reduce power consumption. Meanwhile, if the frequency is raised by decreasing the capacitance of the capacitor c1, there is a limit to how far that capacitance can be reduced when it is considered relative to the floating capacitance. As stated above, the only way to increase the frequency in the FIG. 5 configuration is to increase the current of the voltage-controlled current sources cs1, cs2, making it difficult to advance the reduction in power consumption.
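As a rough illustration (not taken from the patent itself), the trade-off described above can be captured by the standard first-order approximation for an emitter-coupled astable multivibrator, f ≈ I/(4·C·VBE): the frequency rises with the control current and falls with the timing capacitance, while the amplitude stays pinned at VBE. The function name and component values below are illustrative assumptions.

```python
# First-order estimate of the oscillation frequency of an emitter-coupled
# astable multivibrator like the one in FIG. 5. The timing capacitor is
# charged and discharged by the controlled current I, and each half-cycle
# ends after the capacitor voltage swings by about 2*VBE, which gives the
# common approximation f ~= I / (4 * C * VBE). Exact constants depend on
# the circuit; this only shows the trade-off the text describes.

def multivibrator_freq(i_amps: float, c_farads: float, vbe: float = 0.7) -> float:
    """Approximate oscillation frequency in Hz."""
    return i_amps / (4.0 * c_farads * vbe)

# Doubling the control current doubles the frequency at constant amplitude,
# which is why raising the frequency raises the power consumption.
f1 = multivibrator_freq(100e-6, 10e-12)   # 100 uA into 10 pF
f2 = multivibrator_freq(200e-6, 10e-12)   # 200 uA into 10 pF
assert abs(f2 / f1 - 2.0) < 1e-9
```

The same relation shows the other escape route the text rules out: halving C would also double f, but the stray capacitance of the integrated resistors puts a floor under how small C can usefully be made.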
1. Field of the Invention

The present invention relates to an actuator for use in an optical pickup and, more particularly, to improvements in retaining the neutral point of tracking in an actuator of the type in which focusing and tracking are performed by sliding the lens in the axial direction and by rotating the lens about a shaft, respectively.

2. Description of the Prior Art

In an actuator in which focusing and tracking are conducted by sliding a lens in the axial direction and rotating the lens around a shaft, respectively, it is common to use a rubber spring to retain the neutral point of tracking. FIG. 2 shows an example of such a conventional actuator. In the figure, one end of a rubber spring 21 is fixed to a pin 23 set up in a fixed or stationary portion such as a yoke, and the other end is fixed through a pin 22 to an objective lens retaining tube 1, which constitutes a movable or rotary portion and has a central portion and an annular peripheral portion. When the objective lens moves in the tracking direction indicated by the double-headed arrow A, its neutral point is retained by the resilient force of the rubber spring 21. This conventional method of retention by a rubber spring, however, requires space for the spring and is therefore very inconvenient in view of the desirability of reducing the size of the pickup, which has been requested more and more in recent years. If the rubber spring is mounted inaccurately, so that it is distorted, nonlinearity may arise in the spring action, or the neutral position of the objective lens may be deflected from the outset; this makes the process of mounting the rubber spring very troublesome. In addition, the rubber spring does not have good temperature characteristics, owing to the physical properties of its material, and is subject to deterioration that changes its resonance frequency or sensitivity.
When the objective lens retaining tube 1 is raised to its operating point by passing a bias current through a focusing servomechanism, the rubber spring generates a force that pulls the tube downward, thereby necessitating a larger bias current. In this way, the rubber spring also affects the movement of the objective lens in the focusing direction.
Healthcare Recruiting and Staffing Solutions

What is Health eCareers?

Health eCareers is the only true single-source healthcare recruiting solution for all of your healthcare and medical staffing needs, bringing employers the unmatched ability to reach across the entire healthcare audience and target specific types of job seekers. Our healthcare recruiting solutions are a cost-effective way to get your employment opportunities in front of a large pool of highly qualified candidates. Health eCareers powers the official career centers of over 100 healthcare associations, bringing you unmatched reach to the most highly qualified and difficult-to-reach job seekers in the industry. Healthcare professionals visit the association's site throughout their careers, and your jobs and brand, positioned within the association site, get top exposure to top talent. Click here to view our current association partners. Reach and recruit healthcare professionals through our specialty sites, which put your job opportunities in front of top candidates in their fields. These specialty sites are ideal for healthcare recruiting and staffing and for getting qualified candidates to apply for your positions. Find and hire the top physician, administration/executive, nurse practitioner, physician assistant, healthcare IT and nursing candidates today!

NEED HELP OR CONSULTATION? Contact our team of experts today. All Health eCareers clients receive dedicated and industry-leading account management support from a team of true healthcare recruiting experts. We've been helping healthcare employers fill positions cost-effectively for over 15 years and can help you craft the most cost-effective medical recruitment strategy available.

Job postings

Put your jobs in front of the right job seekers with the most detailed healthcare job categories available.
Resume Search

Access the most pre-screened, qualified resumes from our jobseeker database as well as our top tier association partners, reaching more than 290,000 qualified healthcare professionals; unqualified resumes are not accepted.

Web site advertising, branding and featured spots

Stand out from your competitors with banners, featured employer spots and enhanced job branding on our Network of healthcare Web sites.

Targeted Email Campaigns and Job Alerts

Increase your healthcare recruiting efforts by delivering your brand and message directly to the inbox of targeted job seekers. We offer customized distribution and the ability to advertise in job alerts so you can reach the most engaged audience.
Q: CTE Query Not Returning Desired Results

I'm not getting the first record below returned in my CTE query (shown later).

Here's my table (DateJoined field removed here):

Key  ParentID  ChildID
1    0         1
3    1         83
4    1         84
6    83        85
7    85        86
8    83        87

My CTE query produces the following results:

ID  Name           Date Joined  Parent ID  Parent Name     Level
83  Hanks, James   2014-09-13   1          Golko, Richard  1
84  Hanks, James   2014-09-13   1          Golko, Richard  1
85  Walker, Jamie  2014-09-13   83         Hanks, James    2
87  Newman, Betty  2014-09-20   83         Hanks, James    2
86  Adams, Ken     2014-09-13   85         Walker, Jamie   3

How can I also return the first record, with ParentID = 0? When I call the following sproc like this:

EXEC UCU_RTG_ProgramStructure_GetMemberTree 0,4

I still only get results starting with ParentID = 1, as shown above.

Here's my CTE query:

CREATE PROCEDURE [dbo].[UCU_RTG_ProgramStructure_GetMemberTree]
    @ParentID int,
    @MaxLevel int
AS
WITH matrix AS (
    -- initialization
    SELECT UserID, DateJoined, ParentID, 1 AS lvl
    FROM dbo.UCU_RTG_ProgramStructure
    WHERE ParentID = @ParentID
    UNION ALL
    -- recursive execution
    SELECT p.UserID, p.DateJoined, p.ParentID, lvl + 1
    FROM dbo.UCU_RTG_ProgramStructure p
    INNER JOIN matrix m ON p.ParentID = m.UserID
    WHERE lvl < @MaxLevel
)
SELECT matrix.UserID,
       u.LastName + ', ' + u.FirstName AS Member,
       DateJoined,
       ParentID,
       u2.LastName + ', ' + u2.FirstName AS Parent,
       lvl
FROM matrix
INNER JOIN dbo.Users u ON u.UserID = matrix.UserID
INNER JOIN dbo.Users u2 ON u2.UserID = matrix.ParentID
ORDER BY ParentID

The CTE query is fine except it doesn't return the ParentID = 0 record(s). Thanks...
A: I figured it out finally after looking at my post to make sure it was correct: the final SELECT clause is wrong:

SELECT matrix.UserID,
       u.LastName + ', ' + u.FirstName AS Member,
       DateJoined,
       ParentID,
       u2.LastName + ', ' + u2.FirstName AS Parent,
       lvl
FROM matrix
INNER JOIN dbo.Users u ON u.UserID = matrix.UserID
INNER JOIN dbo.Users u2 ON u2.UserID = matrix.ParentID

The last INNER JOIN has to be changed to LEFT JOIN, because there is no UserID 0 for the ParentID 0 row to join to. Hope this helps someone else with recursive CTE queries.
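The failure mode described in this answer can be reproduced in miniature. The sketch below uses hypothetical sample data and SQLite (via Python's sqlite3) rather than SQL Server, so the syntax differs slightly from the T-SQL above, but it shows the same thing: the recursive CTE does return the ParentID = 0 row, and it is the final INNER JOIN to the parent's Users row that discards it, while a LEFT JOIN keeps it.

```python
# Minimal reconstruction of the bug: the root row (ParentID = 0) survives the
# recursion but is dropped by the final INNER JOIN, because no user has
# UserID 0. Table and column names loosely follow the question; the data is
# a cut-down, made-up sample.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Users (UserID INTEGER PRIMARY KEY, LastName TEXT, FirstName TEXT);
CREATE TABLE Structure (UserID INTEGER, ParentID INTEGER);
INSERT INTO Users VALUES (1,'Golko','Richard'),(83,'Hanks','James'),(85,'Walker','Jamie');
INSERT INTO Structure VALUES (1,0),(83,1),(85,83);
""")

QUERY = """
WITH RECURSIVE matrix(UserID, ParentID, lvl) AS (
    SELECT UserID, ParentID, 1 FROM Structure WHERE ParentID = ?
    UNION ALL
    SELECT s.UserID, s.ParentID, m.lvl + 1
    FROM Structure s JOIN matrix m ON s.ParentID = m.UserID
    WHERE m.lvl < ?
)
SELECT m.UserID, u2.LastName   -- u2.LastName is the parent's name, NULL for the root
FROM matrix m
JOIN Users u ON u.UserID = m.UserID
{join} Users u2 ON u2.UserID = m.ParentID
"""

inner = conn.execute(QUERY.format(join="JOIN"), (0, 4)).fetchall()
left = conn.execute(QUERY.format(join="LEFT JOIN"), (0, 4)).fetchall()

# The INNER JOIN loses UserID 1 (its ParentID 0 matches no user);
# the LEFT JOIN keeps it, with a NULL parent name.
assert {row[0] for row in left} - {row[0] for row in inner} == {1}
```

The design point generalizes: whenever a hierarchy's root is encoded as a parent key that matches no row, any join used to decorate rows with parent attributes must be an outer join, or the root silently disappears from the result set.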
Q: Finding quadratic residues in a finite field by using a primitive element

Let $1+2x$ be a primitive element of the field $\mathbb F_9$ obtained via the irreducible polynomial $$x^2 + 1$$ over the base field $\mathbb F_3$.

i) Make a list of the elements of $\mathbb F_9$ together with the primitive element $1+2x$ and all the powers of the primitive element.

ii) Which powers are quadratic residues and which are quadratic non-residues? Why?

A: $$\begin{array}{rcl} \left(1+2x\right)^0 & = & 1 \\ \left(1+2x\right)^1 & = & 1+2x \\ \left(1+2x\right)^2 & = & x \\ \left(1+2x\right)^3 & = & 1+x \\ \left(1+2x\right)^4 & = & 2 \\ \left(1+2x\right)^5 & = & 2+x \\ \left(1+2x\right)^6 & = & 2x \\ \left(1+2x\right)^7 & = & 2x+2 \\ \end{array}$$

Since $1+2x$ is primitive, every nonzero element of $\mathbb{F}_9$ has the form $\left(1+2x\right)^\alpha$ for some $\alpha$. Thus $$\left(\left(1+2x\right)^\alpha\right)^2=\left(1+2x\right)^{2\alpha},$$ so the squares are exactly the even powers of $1+2x$: the quadratic residues are $1, x, 2$, and $2x$, while the odd powers are the non-residues.
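A brute-force check of this answer (not part of the original post): representing $a+bx$ as the pair (a, b) with coefficients in $\mathbb F_3$ and the reduction $x^2 = -1 = 2$, the eight powers above can be recomputed and the set of squares compared against the even powers.

```python
# Elements of F_9 = F_3[x]/(x^2+1) are pairs (a, b) standing for a + b*x.
def mul(p, q):
    a, b = p
    c, d = q
    # (a + bx)(c + dx) = ac + (ad + bc)x + bd*x^2, with x^2 = 2 in F_3
    return ((a * c + 2 * b * d) % 3, (a * d + b * c) % 3)

g = (1, 2)                      # the primitive element 1 + 2x
powers, p = [], (1, 0)
for _ in range(8):
    powers.append(p)            # powers[k] == (1+2x)^k
    p = mul(p, g)

# All 8 nonzero elements appear, so 1 + 2x really is primitive ...
assert len(set(powers)) == 8
# ... and the squares (quadratic residues) are exactly the even powers,
# i.e. {1, x, 2, 2x} in the pair notation.
residues = {mul(q, q) for q in powers}
assert residues == set(powers[0::2])
```

The loop reproduces the table in the answer in order: (1,0), (1,2), (0,1), (1,1), (2,0), (2,1), (0,2), (2,2), matching $1, 1+2x, x, 1+x, 2, 2+x, 2x, 2+2x$.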
Most Popular in Family Life

Welcome to Picket Fence Blogs! Our site provides you with the top lifestyle blogs on the web. Feel free to just look around, or add your own blog so others can find you and benefit from what you have to say. Plus, it's free to join!

A crunchy blog about a family of 3 (and 5 cats!) living in the Canadian Prairies. Mama & Papa are attachment parents to Samuel, wading through the adventures of life. Blog focuses heavily on breastfeeding, intactivism, cosleeping, cloth diapering, etc. #parenting #breastfeeding #crunchy #intactivist #paleo

A mom's opinion of what to buy. A thorough evaluation of items we use in our home. If I feel it's worth spending the money on, then I'll recommend that you buy it! #babies #baby #babygear #children #nursing

I want to touch the lives of other children and encourage them to love reading as much as I do. If I can get just one child to pick up SOMETHING and read and actually enjoy it, I feel like I've done my job.

One stay-at-home mommy and a work-a-holic daddy raising twin boys, and all of life's adventures that go along with it. Topics include but are not limited to: crafty projects, photos, appliance repair, DIY home repairs, floral arrangements, recipes, etc.

Fairy Busy Mommy is a blog about my life as a work at home mom & wife. Read along as I babble about working from home, day to day stuff, silly things my kid & critters do, family, fun, friends, product reviews, giveaways and whatever else suits my fancy.
Wednesday, April 04, 2012

Earlier today, the Center for American Progress released Voter Suppression 101, a report documenting conservative efforts to disenfranchise voters through state restrictions on voting. At a press call accompanying the release, former Civil Rights Movement leader and current Congressman Jim Clyburn (D-SC) was asked for his personal feelings on seeing another wave of voter disenfranchisement after he fought so hard to end Jim Crow. His response was grim:

I cannot remember — even sitting in an Orangeburg County jail — when I had as much anxiety as I'm experiencing today. Back then, even when we were at the back of the bus and we were not able to sit down at lunch counters, we really felt strong that what's happening to me here in Orangeburg, SC or Columbia, SC, ah, if I can get my plight before the United States Supreme Court, the promise of this country will be delivered for me. That's what we felt, and I can remember our discussions in meetings — yeah, we're going to jail now. We are going to be convicted. But we know that that conviction is going to be overturned by the United States Supreme Court.
The clinical usefulness of ictal SPECT in temporal lobe epilepsy: the lateralization of seizure focus and correlation with EEG. To analyze the relationship between ictal electroencephalography (EEG) and ictal single-photon emission computed tomography (SPECT) and to evaluate the diagnostic usefulness of ictal SPECT as an independent presurgical evaluation technique. Sixty-eight patients with temporal lobe epilepsy who underwent temporal lobectomy with good surgical outcome were included in this study. Ictal SPECT was performed during video-EEG monitoring. The ictal EEG was analyzed in 5-second intervals from the initiation of the ictal rhythm. Lateralized EEG dominance was determined by the amplitude, frequency, or regional patterns of ictal rhythm for each 5-second interval. The total ictal EEG was divided into three periods: preinjection (maximum, 30 seconds), the initial part of the postinjection period (30 seconds), and the latter part of the postinjection period (30 to 60 seconds). The results of ictal SPECT were compared with the lateralized EEG dominance of each period and at seizure onset. Fifty-four of 68 ictal EEGs correctly lateralized seizure focus ipsilateral to the side of surgery. Ictal SPECT correctly lateralized the epileptogenic temporal lobe in 61 of 68 patients (mean injection time, 29.8 seconds from onset). Multivariate analysis indicated that only the EEG dominance of the preinjection period correlated significantly with the concordant hyperperfusion of ictal SPECT. Correct lateralization of ictal SPECT occurred in 10 of 14 patients with nonlateralized ictal EEG. Preinjection neuronal activity seems to be important for the accurate interpretation of the hyperperfused patterns of ictal SPECT. Ictal SPECT is an independent and confirmatory presurgical evaluation technique.
This invention relates to a reclosable metal paint can of the type which incorporates a plastic handle for carrying the can as well as for hanging it, as from a rung of a ladder.
Diagnosis of a leiomyoma of the small intestine by selective angiography. A case is reported in which a leiomyoma of the jejunum was diagnosed by mesenteric angiography after two negative barium meal examinations. The tumour was resected and recovery was uneventful.
Ride Like You Mean It! ATV Riding

ATV riding in the Yaak River Valley is an incredible experience! While we do not have ATV-specific trails in the Yaak, we do have miles and miles of easy-riding logging roads. Much of the time you will not see another vehicle, but watch for wildlife! We have seen numerous deer, elk, moose, black bear, grizzly bear and even Canadian lynx from the ATV. We can set you up and get you started! From Yaak Base Camp, we provide maps and can direct you on many great ATV tours. From a two-hour leisurely trip to a mountain top with incredible views to a 100-mile all-day trip, we have you covered! We own various Yamaha and Polaris ATVs and UTVs that can be made available for your stay.
Henry Hyde (who took the pic of me!) asked the very good question of how writers respond when they receive a report. He's the editor of a magazine, and said that contributors are often aghast when their work is red-penned. So what the blazes does a writer make of a 40-page document of major changes (as I described in my previous post)?

Well, I try to be gentle. I also encourage the author to see the report as criticism of the work, not them – although it's often hard for them to see that. The more writing you do in a professional environment, the thicker your soles become and the more you're able to see a manuscript as a work for others to help you with, rather than a bundle of your most tender nerve-endings. It helps to have sensitive criticism, though. In traditional publishing, I've had savage editors who seemed to relish their chance to tear an author down – and generous souls who make it clear they are working for a book they already believe in. I hope I've learned from them how to be the latter.

The author has control

One author brought up an interesting point about a copy editor who had rewritten her dialogue, converting it unsuitably from period to a modern voice. With hindsight it was clear that the editor was probably working in an area outside her experience and thought all books should be edited the same way – a salutary warning to choose your team carefully. And several authors asked: 'what if the author disagrees with the editor'? A good question. It is, of course, entirely up to you what you do with a proof-reader's tweaks or an editor's recommendations. You are in control. Burn the report if you like, we'll never know – but we'd prefer to think we'd been useful. I'm careful to make suggestions rather than must-dos, and to encourage an author to explore what they're aiming for. A good editor will also try to ensure they're in tune with the author before any precious words change hands (let alone precious $$$).
(Here’s my post on how a good editor helps you be yourself. I’m not tooting my own trumpet here – for most of you who are reading this, it’s likely I won’t be the right editor. Be highly wary of anyone who says they can developmentally edit absolutely anything.) Thanks Toni Holopainen for the pic of the man undergoing a thorough edit Next (and finally): self-editing to self-censorshipIf you’ve worked with editors, how did you feel about their criticisms? If you’ve been through this process several times, have you toughened up? Have you disagreed with an editor’s suggestions, and what came of it? Have you ever paid for an editorial service and concluded it was a waste of time and money? Let’s discuss! I get a lot of enquiries from first-time authors who have already set a publication date and allowed a nominal fortnight or so to sort out the book after my report. They have no idea how deep a developmental edit might go. Especially for a first novel, or a first leap into an unfamiliar genre, you might need a few months to tune the book up. I know some writers who’ve taken a year on a rewrite, and I recently wrote a document of 20,000 words on a book of 100,000. Equally, other authors don’t need as much reworking and should have a usable manuscript inside a month. But don’t make a schedule until your editor delivers their verdict – er, worst. Thanks, Henry Hyde, for the pic of me:) Next (after a brief sojourn at The Undercover Soundtrack): negative criticismHave you had editorial feedback (whether from an editor or critique partners) that required major rewrites? How long did it take you to knock the manuscript into its new shape? Were you surprised? As you might have seen from various flurries on Facebook and Twitter, last weekend I gave a talk at the Writers & Artists selfpublishing event in London. 
There are some interesting discussion points I want to share, and some of you will have crawled out of Nanowrimo and won't be in the mood for a giant reading task, so I'll be posting them in short bites over the next 6 days.

Editing – many minds make your book better

My task at the event was to explain the various steps of editing and why they were important – developmental editing, copy editing and proof reading (here's my post on a publishing schedule for indie authors). This care with the book content was an absolute gold standard for the day, and was stressed over and again – guided rewriting with expert help, and attention to detail. JJ Marsh of Triskele Books, in her talk on how their collective works, said that the combined critical talents of her fellow authors had made her books far better than she could have made them on her own. Psychological thriller writer Mark Edwards and women's fiction author Talli Roland both talked about the people who helped shoulder the responsibility of getting the book to a publishable standard. Jon Fine, director of author and publisher relations at Amazon, cut to the chase by quoting thriller selfpublishing phenomenon Joe Konrath: 'Don't publish shit.' (Next time I'll just say that.) Some of the delegates didn't need to be told anyway. From a show of hands, roughly a fifth of them had already been working with editors, in thriving professional relationships where their limits were being pushed and they were being challenged to raise their game. If there's one advantage selfpublishing can give us, it's the control over our destiny and artistic output, and many of these writers were committed to making books they could be proud of.

Eek, the cost!

True, good editing comes at a cost. Jeremy Thompson of the Matador selfpublishing imprint gave grim warnings about companies that advertise editing services for just $99. And it probably seems unjust that a pastime that should be so cheap has such a steep price tag.
Writing is free as air, after all. But publishing isn't. It never has been. No manuscript ever arrived at a publisher and went straight onto the presses. It went through careful stages of professional refinement – which takes time and money.

Male or female

So a female author or a male main character must need a certain gender of voice actor, right? (And if you're crossing the gender divide, how do you choose?) Actually, it's less of a cut-and-dried rule than you'd think. Jason said he'd often had authors who'd specified they wanted a female voice, then when a male actor had auditioned it had been the perfect match – even in genres like romance, whose readership are very definite in their expectations. Jason made the point that the book – or the author's work in audio – might have a voice that's independent of the voice of the author or the character; it is its own identity. We'll come back to this.

Accents

When I originally looked for a voice actor, I specified a British accent, but as many of you probably know, the narrator I chose is from the US. Initially I got a lot of US actors auditioning because I was one of the guinea-pig authors when ACX launched in the UK – they hadn't yet got a bank of UK actors to choose from. So I heard a smorgasbord of attempts to 'do British', some convincing and some not. But I soon realised it didn't matter after a few minutes anyway. The accent was irrelevant. The interpretation of the book went deeper than a voice's characteristic twang, or lack of it. What was actually important was the voice actor's understanding of the work. And Sandy, regardless of the flavour of her English, was the most in tune with the novel. She also liked a lot of books that I liked. I picked her.

Same voice for all your books?

Jason said if you have a series, listeners expect the same voice throughout or it breaks the story world. Authors of standalone books, obviously, might search for new narrators each time.
I’m happy with Sandy for both my novels even though they are different in tone – because she works well with my style and outlook. Joanna has two series, so she cast a narrator for each. Funnily enough, we might have ended up with the same one, as the narrator for her dark crime series was one of the auditioners for My Memories of a Future Life! Small world. Hunting for narrators You’re not limited to only the voice actors who approach you – and indeed, many authors don’t find an ideal match that way. Jason encouraged authors to hunt around the ACX narrator profiles, listen to their samples and invite them to audition for yours. Or some authors do what I did – if you know a voice actor who’d be perfect, introduce them to the system. Working with unfamiliar accents Joanna, like me, is British, and ended up working with an American narrator. Once into the recording process she found there were pronunciations that were alien to her Brit-tuned ears but natural to the US narrator. What to do about them? Tomayto or tomahhto? Before recordings start, you need to discuss this, and also tricky pronunciations such as character or place names. Sandy and I talked about it. I suspected there would be many more variations than I’d have be able to think of. If I’d decided ‘leisure’ couldn’t be ‘leesure’, I’d have then, for the sake of consistency, had to pull her up on words I never dreamed had a US difference until I heard them. And the difference goes further than isolated words – sentence emphasis is also radically different. US English stresses the adjective in a phrase like ‘lying on a sticky mat’. UK English stresses the noun (UK: ‘on a sticky mat’, US: ‘on a sticky mat’). Joanna Penn, author entrepreneur I didn’t want to stilt my narrator with unnecessary strictures so I asked her to pronounce her usual way. I’m glad, because there were hundreds of differences. Hundreds. It would have been madness. In any case, that didn’t matter. 
So long as the interpretation of the line was true, the emotion understood, the accent was irrelevant. Joanna had also come to this conclusion, saying there’s a lot we need to leave to the narrator’s judgement and style. She intervened in place name pronunciations, but allowed everything else to go with the actor’s natural style and emphasis. Having said that, an audiobook is a creative relationship. The voice actor is expecting you to guide them on interpretation. Sandy and I spent several emails discussing how the bod characters in Lifeform Three should sound and what their individual characteristics were. I sent her short recordings of how they seemed in my own head as I wrote them, which she turned into polished performances. It was quite a feat for her – sometimes she had four or five characters in one scene and had to inhabit all those minds, as well as switching to thoughtful narration. For me it was easy because I wrote them. For her, it was mind-and-tongue gymnastics. You can probably see why questions of ‘leesure’ versus ‘lezzure’ cease to be important. Forget them.

Don’t expect a drama performance

Jason pointed out that the audiobook isn’t a stage or film performance. It’s a reading – a quieter, more subtle business. Characters’ accents don’t need to be full-on impersonations, they are a hint. Passages of emotion don’t have to be performed, merely rendered so they bring to life what is already in the prose. In prose, the writer has already done the job on the page. The voice actor is converting that into sound. It’s intimate; it’s not slaughtering the back row. It’s murmuring in your ears.

The voice that is the best conduit for your work

Ultimately, the best narrator is the right person to inhabit the book and bring it alive, from its lightest moments to its darkest corners. If you’re weighing up possible narrators, be prepared to revise what you imagined.
If you thought the narrator should be British or male, but the truer interpretation, the one that gives you goosebumps, is US and female, that actor is the one to choose. The differences will vanish as soon as the listener gets into the story. After a minute or two, they won’t notice. Since I released My Memories of a Future Life, some people have asked me why I chose an American, and indeed have mentioned it in reviews. Then they report that they got immersed. Your best narrator is the person who can inhabit the book, who can become its voice in the reader’s head and make them forget everything else.

Today I gave a speech at The Oldie literary lunch (which was very exciting!) and they asked me to explain about making ebooks. I promised a post to distil the important details, and save them from squinting at their notes and wondering if that scrawl really does say ‘Smashwords’, and indeed what that alien name might mean. If you already know how to publish ebooks you can probably skip most of this. However, you might find some of the links and reading list useful, or pass them on to a friend. And if you’re here from The Oldie – hello again. Nice to have you visit.

How to do it

It’s easy. Really easy. If you can format a Word file, you can make an ebook. It’s more complicated if you have footnotes or multiple headings that might need to be visually distinguished, or you want graphics (which might not be advisable), but it’s generally easy. Have I said that often enough? Here’s my post on how to format for Kindle, in which you’ll see how I had to be dragged into the ebook revolution. But by all the atoms in the heavens, I’m glad I was. You’ll also see the original, grey cover of the book that now looks like this. That post includes the notes about stripping out the formatting codes and rethinking the book as a long, continuous roll of text, not fixed pages. The Smashwords style guide is also explained. (You knew you wrote that silly word down for a reason.)
If you don’t have the Word file

If you’re publishing a book that previously appeared in print, you might not have the polished Word file with all the copy editing and proofreading adjustments. Often, the author sees the later proofing stages on paper only, and any adjustments are done at the publisher. If you can get the final Word file, that’s simplest. If not, try to get a PDF, which will have been used to make the book’s interior. You can copy the text off a PDF and paste it into a Word document. You’ll have to do quite a lot of clean-up as this will also copy all the page numbers and headers, and there will be invisible characters such as carriage returns. You’ll need to edit all of these out by hand. Sometimes PDFs are locked. You can’t copy the text off by normal methods, but you can find a way round it with free online apps. Dig around Google and see what you find. Another option is to scan a print copy. Depending on the clarity of the printing and whether the pages have yellowed, you may end up with errors and gobbledygook words, so again you’re in for a clean-up job. You’ll need a thorough proof-read as some scanners will misread letter combinations – eg ‘cl’ may be transformed into ‘d’ and your spellcheck won’t know that ‘dose’ should have been ‘close’. But it’s quicker than retyping the entire book.

Ebook formats

There are two main ebook formats: mobi (used on Amazon’s Kindle device) and epub (used on many other devices). They are both made in much the same way, and the instructions in my basic how-to-format post are good for both. PDFs are also sold on some sites.

Covers

You need to get a cover. Cover design is a science as well as an art. A cover is not just to make your book look pretty, it’s a marketing tool. If you’re republishing a print book, check if you have the rights to use the artwork. If not, you’ll have to get another cover made. Use a professional cover designer (see later).
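If you’re comfortable with a little scripting, much of that clean-up can be automated. Here’s a minimal Python sketch: it drops bare page numbers and repeated running headers, then re-joins the remaining lines into paragraphs (blank lines mark real paragraph breaks). The header text and the rules are assumptions – adapt them to your own book, and still do the human proof-read afterwards for scanner traps like ‘dose’/‘close’:

```python
import re

def clean_pdf_paste(raw: str, header: str = "My Book Title") -> str:
    """Tidy text pasted from a PDF: drop bare page numbers, repeated
    running headers and the line breaks left by the fixed page layout."""
    kept = []
    for line in raw.splitlines():
        stripped = line.strip()
        if stripped.isdigit():      # a bare page number
            continue
        if stripped == header:      # a repeated running header
            continue
        kept.append(stripped)
    # Blank lines mark paragraph breaks; all other breaks are layout
    # artefacts, so collapse each paragraph onto one line.
    text = "\n".join(kept)
    paragraphs = [re.sub(r"\s+", " ", p).strip()
                  for p in re.split(r"\n\s*\n", text)]
    return "\n\n".join(p for p in paragraphs if p)
```

For example, pasting `"It was late.\nMy Book Title\n12\nVery late.\n\nNext para."` through the function yields two clean paragraphs with the header and page number gone.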
Here are posts to clue you in: In traditional publishing, a manuscript goes through a number of stages – developmental editing, copy editing and proofreading. If you’ve done this, go straight to formatting your manuscript. Otherwise, the following posts will help you understand what you need to do.

The main DIY platforms to sell your ebooks are Amazon Kindle Direct Publishing, Kobo Writing Life and Smashwords (you’re getting used to that name now). Publishing on them is free and they’re simple to use. You can publish direct to iBooks, but that’s not easy unless you have a PhD in Mac. And a Mac. Besides, Smashwords (ta-daaah) will publish to iBooks for you. There are other platforms that act as intermediaries, for a greater or lesser fee, and greater or lesser advantage.

Categories and keywords on online retailers: choose them wisely and the algorithms will target your ideal readers – especially on Kindle. You can make a whole science out of it, but this piece on KDP explains the basics in good, plain English. Essentially, you pick two categories, and then get yourself in several more specialised lists by including set keywords. But this system has its limitations. At first writers of genre fiction had many sub-categories to choose from, but writers of literary, contemporary and general fiction found themselves in one immense category where it was hard to be seen. There were few ways to tell the algorithms ‘I’m non-genre but I have a flavour of romance, or loss, or my novel is set in Borneo’. Recently Amazon has made big improvements and refined the choices – find them here. Despite this very welcome addition, the results haven’t been as good for me as when I unknowingly broke the rules. When I put other authors in the keywords, my sales soared. Tsk tsk – I did it in all innocence. Reviewers had been comparing my first novel with Paulo Coelho, Margaret Atwood, John Fowles, Doris Lessing, so I put those names in the keywords.
My sales rose, readers seemed happy to have found me this way – so the comparisons must have been useful and valid. Then I discovered writers who did this were being sent warning emails so I removed them – and fizzled back down the charts. It’s a real shame, because for me, this tactic was more effective than keywords about genres, subjects, settings, themes and issues. And surely the author and their style is a significant feature of any novel. With literary fiction, it’s the most important quality of all. It’s a valid way to talk about a book in the literary world – and yet it isn’t accommodated in the search mechanisms that writers can control. It’s a refinement that would be helpful to both authors and readers. What’s more, now would be a great time to discuss and lobby for it. Here’s why.

We are connected…

Last week I was watching a videocast from the Grub St Writers Muse and the Marketplace conference. One of the panel members was Jon Fine, director of author and publisher relations at Amazon, so I tweeted @Grubwriters with my point about author comparisons. Jon Fine was rather interested in the idea and replied that it was something they’d never thought of. So… watch this space! (Let’s pause for a geek check: I tweeted a question in my home in London at 7.30pm, watched it read out to a room in Boston where it was 2.30pm, and real live people started to talk about it, with voices and hand-waving… and a man from Amazon stroked his chin and said ‘maybe we could…’)

@grubwriters Also for Jon Fine: Comparisons with other authors work well for literary authors. Is that allowed in Amazon categories? #muse14

So I want to kick off a discussion here. Amazon are in the mood to get constructive feedback on this right now. There couldn’t be a better time to discuss it. I’ve shared my one tiny idea for improving the algorithms to help readers find our work; you guys no doubt have more to add. The questions begin!
1 Have you tried a category tweak that got you to more readers – Amazon-legal or not? Is there a category facility you’d like to see? Jon Fine also said the categories problem was more widespread than Amazon. The industry standard for classifying books by subject, BISAC, seems limited in its precision, although possibly it’s geared for booksellers rather than readers.

2 If you are – or have been – a bookseller, what’s your take? Would you find it helpful if the BISAC categories were made more flexible and detailed?

A week or so ago I talked about making audiobooks with ACX, the self-publishing arm of Audible. From the author’s end it’s relatively simple – pitch your book, listen to auditions, guide the narrator and review chapters as they’re posted on ACX. But at the other end of the line, the narrator/producer is spending 4-6 hours on each finished hour you hear. What are they doing? When you listen to the files, what problems should you be alert for? And if you’re narrating and producing your own book, what do you need to know?

Sandy: First I print out the pages of the text. Then I review them to refresh my memory and make notes about pronunciation or content/emphasis. Then I prep the ‘studio’ – which is my closet. I set up the laptop and hook up the mic and headphones. Each chapter (or chunks if they are long) gets recorded in one go because the laptop needs to be outside of the closet, away from the mic or we can hear the fan. I hit record, then shut myself in with mic, headphones, and a glass of water.

Roz: You need studio-grade equipment to meet the quality standards for an audiobook. Podcasting gear won’t cut it. Equipment notes follow at the end of the piece. Back to Sandy.

Sandy: I monitor the audio as I go via headphones. Any time there is a mistake I do a retake and keep going so I don’t interrupt the flow.

Roz: Watch for these when you’re reviewing the uploaded files. Even with the most meticulous narrator, a repeated phrase or two can slip through.
The finished quality is the responsibility of both of you!

Sandy: I usually catch around 98% of the errors. The tough ones are when I read a word incorrectly but it sounds right at the time (like make instead of makes) so I don’t catch it. Sometimes I can fix it while editing but sometimes I have to re-record the word or sentence. That’s a pain because it holds up the workflow. The only time I come out is when I need to check a pronunciation – Roz has some pretty atypical words! Oedema? Nebulae? Roentgen??

Roz: Sorry about that…

Sandy: I record six or so chapters at a time, until my voice gets tired, then load them onto my main PC for editing. I splice together the chapters if they are in chunks, then compress the audio and equalize so the sound quality is good. Then I listen to each chapter and follow along using the printed manuscript to make sure it is correct. I try to fix any mistakes, and make a note of the ones I can’t. I also adjust the pauses between lines so it flows dramatically.

Sandy: I listen for the best takes and remove the bad ones, and cut out extra noises like mouth clicks and breaths. I usually end up listening to each recorded line at least twice, sometimes as many as five or six times. I spend more time on the dramatic passages because those feel important to get right.

Roz: This is like the writing process!

Sandy: Once a chapter is complete I run a range check to make sure it fits within the ACX parameters. I adjust the volume as needed, then export it as an MP3 ready to upload. Incomplete chapters waiting for pickups get put aside until after my next recording session so I can drop in re-recorded lines. Sometimes the new lines need to be tweaked to get them to match the original recording – different days can sound quite different.

Equipment

Sandy: Do your research before spending money on equipment.
Get the best setup you can afford because when you are recording a solo speaking voice there isn’t much to hide behind, and there is only so much you can do in post-production. Most audiophiles recommend a high-end microphone with a pre-amp to convert the analogue sound to a digital signal. The pre-amp is almost as important as the mic, so if you go this route you have to spend quite a bit of money to get a good sound. If you are planning on doing professional recording full time this is probably the way to go. USB microphones have a built-in pre-amp, but traditionally sound tinny and aren’t warm enough for audiobooks. However, they have come a long way recently because of the huge rise in amateur voiceover work for video blogs and podcasts. The mic I bought is a high-end professional brand (Shure PG42) and tuned for voice recording. Since this voice project is a bit of an experiment for me, I wasn’t prepared to buy a full mic/pre-amp system, so I invested in one of the best USB mics. There are some great resources out there for mic comparisons, such as this one. The other advice I would offer is to give yourself a crash course in post-production – for compression, equalisation and to remove mistakes and odd noises. There is a ton of great info on the web, such as articles like this …and video tutorials like this. ACX also has some very useful tips.

Roz again: Oh cripes, two names that sound the same! This really caused a hiccup, and made me rather unpopular with Sandy. Listening to her chapters, I discovered I had a Gene (the main hypnotist character) and a neighbour Jean. On the page, they are perfectly distinct, but in the ears… they sound the same. This gave Sandy some extra rerecording and messed up our schedule. I talked in my previous post about the pronunciation guide. When you write this, check you don’t have two names that a listener might get confused!
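For the technically minded: the ‘range check’ Sandy mentions running against the ACX parameters can be sketched in code. ACX publishes limits for submitted audio (roughly, programme RMS between -23 and -18 dBFS and peaks no hotter than -3 dBFS – check the current spec before relying on these figures). This Python sketch assumes you’ve already decoded a chapter to floating-point sample values; the thresholds are passed in so you can match whatever the spec says today:

```python
import math

def acx_range_check(samples, rms_lo=-23.0, rms_hi=-18.0, peak_max=-3.0):
    """Rough level check for a chapter against ACX-style limits.

    `samples` are floats in [-1.0, 1.0]; thresholds are in dBFS.
    Returns the measured levels and whether the chapter passes.
    """
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))

    def to_db(x):
        # Digital silence has no finite dB value.
        return -float("inf") if x == 0 else 20 * math.log10(x)

    peak_db, rms_db = to_db(peak), to_db(rms)
    return {
        "peak_db": peak_db,
        "rms_db": rms_db,
        "ok": rms_lo <= rms_db <= rms_hi and peak_db <= peak_max,
    }
```

A 440 Hz test tone with an RMS level of -20 dBFS passes; a near-silent clip fails on the RMS floor, which is exactly the kind of chapter you’d send back for a volume adjustment.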
Ooh, knotty word

Less of a problem, and certainly more amusing, was a word that didn’t translate well to a US accent. There’s a line where Gene, the hypnotist, is described as wearing a ‘knotty’ jumper. In Sandy’s voice it came out as ‘naughty’. I imagine comics letterers have the same problem with ‘flick’. This might not have mattered, except that the knotty jumper occurs again in tense scenes that would be rendered farcical if the listener thought a character was wearing a ‘naughty’ jumper. We decided to stay the right side of serious and replace it with a less troublesome ‘rough-knit’…

Do you want to release your title as an audiobook? If you live in the US, you can go through ACX, the DIY arm of Audible, but ACX wasn’t open to UK authors – until now. For the past month, I’ve had both my novels in production as a test pilot, and now I can tell you what I’ve learned so far about offering a title, choosing a narrator and working with them.

What’s ACX?

Good question. ACX is a network where narrators and producers can meet authors who want their work released as audiobooks. Once you’ve hooked up, you can then use the site as an interface to create the book, keep track of contracts and monitor sales. In short, it’s genius.

Setting up

You know how tedious it is every time you set up an identity on a new site? All that form-filling and profile-making? ACX requires minimal faff. Once you tell them who you are and what book you’d like to offer, it pulls the detail off Amazon.

Getting voices

You can:
- opt to narrate and produce the audiobook yourself, but to do this you must have professional-quality equipment and experience of sound editing, or the book won’t pass the quality check, or
- pluck a willing narrator/producer out of the ether – this is what I did.

Pitching your book

Next to your book info, you can add notes to make your book more attractive to collaborators – your platform, sales figures and anything else that will convince them you’re worth working with.
Which brings me to…

Costs

Making an audiobook isn’t cheap. An average novel is about 10 hours of narration (roughly 90,000 words) and is likely to cost $200 or more per finished hour. You have several options if you’re seeking a narrator/producer on ACX:
- pay up front
- pay a royalty share (which I did)

All the ins and outs of this are much better explained on the ACX site, so check them out there.

My ACX journey – mistakes made and lucky discoveries

Choose an audition passage

When I talked to the ACX crew, they told me many writers put up the first few pages as the audition piece. This can be a mistake, because the beginning may not be typical of the book’s action. I looked for a challenging scene with dramatic dialogue as well as the narrator’s internal thoughts, which I felt would test the reader’s approach more usefully. I added notes about the context of the scene and the style of the book – and waited for auditions. And lo, they rolled in. (This was in itself a wonderful surprise.) Once I got over the novelty, I realised I needed to tweak my presentation.

Accent – & mistake #1 – ACX lets you specify the age, style and accent of the reader. Age and style were easy enough to choose, but accent caused me more trouble. I assumed this had to be British as, obviously, I’m a Brit, my characters are Brits and I write with British language. However, ACX is predominantly US, so that vastly reduced the available talent pool. Some of the voice actors did very credible Brit accents. Some couldn’t pull it off and sounded Chinese or German. Others ignored my stipulation – quite wisely, as it turned out: they sounded just fine in their natural accents. So I quickly realised accent was a detail that didn’t matter, and edited my directions. Indeed the narrator I chose is American. Another reason to choose an accent other than your own is if the majority of your readers are in another territory. I sell a lot in the US, so an American accent might make them feel more at home.
Accent isn’t the only deciding factor, though.

Suitability for the material – While the narrator might be able to do a good job with the audition scene, you have to be sure they’ll interpret the whole of your book in the right way. A Regency romance needs a completely different approach from literary fiction, and I can imagine it’s a nightmare to realise your narrator simply doesn’t ‘get’ your book. If you have a contender, poke around in their ACX profile and follow up any websites where they demonstrate other books they’ve narrated. Also, ask them what they like to read.

Acting versus reading & mistake #2 – some books benefit from a reader who will do a lot of gutsy acting, including distinct voices for the main characters. But for most fiction, that’s too much. They’re journeys in prose and need a more intimate, subtle treatment, which might even sound flat to some ears. Listeners know they’re being read to. They don’t need rollicking declamation – or music or sound effects. And a good reader can make it clear who’s talking without bursting into different voices, so you actually need less ‘acting’ than you might think. I made another mistake here in my original guidance notes for the audition. I didn’t ask for different character voices, but I did explain the book had sections in a different tone – the female narrator, and the future incarnation who was a male version of her. Thankfully, before it went live a friend pointed out that this might cause a lot of horrible baritone overacting, and that I should simply let the text do the work.

Ultimately, you choose a narrator on a hunch that they fit your work. One author I spoke to at LBF said he knew when he’d found his because the guy sounded like the ideal voice he’d have chosen – but better. That’s how I found mine too – although it was by a more roundabout route. I had a shortlist of possible voices, including seasoned Broadway actors, but there was one question I couldn’t answer.
When I looked into their backgrounds, none of them had narrated a novel like mine, and I was worried they wouldn’t get it. Then I remembered a friend who I’d heard do narration work – on a computer game, of all things. At the time, I noticed how she had an insightful, feisty quality that reminded me of Laurie Anderson. Even better, I knew her reading tastes made her a good fit. I contacted her. She didn’t know about ACX, but she was keen to give it a go, registered and sent me an audition. Her reading was just right – inhabiting the material with well-judged expression and I knew the book would suit her personality. If you’ve been a subscriber here for a while you might recognise her from some of the goofy photos I’ve used on my posts. But her voice absolutely suits my kind of fiction, and if yours is like mine you might like her too (here’s her ACX page).

Our process

Here’s how we’re working.

Pronunciation guide – All books have peculiar words and names and you need to warn your narrator of these. We set up an editable file on Google docs. As I said, Sandy’s American and I’m a Brit, so we had to decide whether she should pronounce words like ‘leisure’ and ‘z’ in the UK or US version. I decided we could fiddle endlessly with this so I asked her to do whatever was natural. If we tried to anglicise everything there would be certain words we’d miss, or the stresses would still be American. And I didn’t want to get in the way of her doing the most instinctive job. So she says tomayto while I say tomahto. No big deal.

Pace – one of the first tasks is to approve the first 15 minutes of the audiobook. Sandy was afraid I might think she was reading too slow, but I felt the text suited a measured pace. ACX actually advise that you err on the side of slow because listeners can artificially speed the reading up if they want.

Pauses – you need pauses between paragraphs, scene switches, and maybe in other sections too.
We spent an email exchange identifying exactly the right kind of pause for each.

Listen to finished chapters – You need to set aside time to listen to chapters as your narrator uploads them to ACX. We have a schedule and a chart where we tick off chapters I’ve approved or asked for modifications (usually these are pronunciations or stresses). ACX gives you a time code so you can pinpoint exactly where an edit is needed. Next time, I’ll delve into the narrator’s side, including what exactly is involved in creating an audiobook.

I’d love a traditional publishing deal. I’ve submitted my manuscript to two agents, and while waiting to hear from them I have been offered three ebook contracts – but I’m not sure which way to go. Also, could you quote me a price for professional editing?

I answered the email at length in private, but some interesting issues emerged that I feel might make a useful post.

Wow, three offers!

Three ebook contracts already. Way to go! Some publishers are offering ebook-only deals to authors, and considering print if sales are good. But in the nicest possible way, I was worried about my friend here – because in this market, it seemed unlikely to get that many serious offers and not have secured an agent. My correspondent sent me the details of the publishers and I checked their sites. I’m not going to reveal their names here as I haven’t contacted them or asked for statements, as you should do in a proper investigative piece. Also, they weren’t attempting to scam or con anyone. They certainly could publish her book. But she didn’t realise they weren’t publishers of the kind she was hoping to get offers from. One site had several pages about selling tuition and support to authors. There was a mission statement page that included a point about ‘fees’. The others stated they offered services to authors. Publishers – of the kind that my friend here was seeking – don’t use those terms.
These people are pitching for business, not offering a publishing contract. If I were her, I’d wait to hear what the agents say! But if you do want to use self-publishing services, here are a few pointers.

Beware rogue clauses

Some publishing services providers can try to tie up your rights so that you can’t publish the book elsewhere. Others will make you pay for formatting and then not release the files for you to use yourself unless you pay a further fee. (I know regular readers of this blog who’ve been caught in these situations.) Some charge way over the market rate as well. To get acquainted with the kinds of scams and horrors that are perpetrated on unsuspecting authors, make a regular appointment with Victoria Strauss’s blog Writer Beware.

Check the quality

Assuming no nasty clauses, you also need to know if the services are good enough. I’ve seen some pretty dreadful print books from self-publishing services companies. Before committing, buy one of their titles and check it out, or send it to a publishing-savvy friend who can help you make a sensible judgement. Obviously traditional imprints score here because they have kudos and reputation. And the publishing services companies on my friend’s list were attempting to address this. They emphasised that they were attached to reader communities, or wrote persuasively about how they were in the process of building them. This sounds good, and let’s assume they are genuinely putting resources in. But communities take years to establish, plus a number of these publishers seemed to be relying on their writers to spread the word. We all learn pretty quickly that we need to reach readers, not other bunches of writers. And if a community is in its infancy, you might be better buying advert spots on email lists such as Bookbub or The Fussy Librarian, depending on your genre. Some of these companies may give you no advantage over doing it yourself.
You might be in exactly the same position as if you put your book on CreateSpace and KDP and write a description that will take best advantage of Amazon search algorithms. Basically, if you get a proper publishing offer, you don’t pay for any of the book preparation – that includes editing, formatting, cover etc. Which leads me to my correspondent’s final question about editing. This is one of the things a publisher should do! You only need the likes of me if a) an agent says you need to work with an editor to hone your manuscript or craft or b) if you intend to self-publish! Do you have any advice to add about assessing offers from publishers or publishing service providers? Or cautionary tales? Please don’t name any names or give identifiable details as it may get legally tricky …

Just a brief post as we all duck away for a thorough Christmassing. Lifeform Three is now up and alive on the Amazons and Smashwords. I’ve loaded it on Kobo and it should shortly be appearing there. Print proofs are in transit from CreateSpace, so in January I hope to have the feelable, giftable, signable, alphabeticisable, filable, decorative version … (Can you tell I prefer print books at heart? Our house hardly needs walls. It has bookshelves.) I’m still trying to work out which Amazon categories would suit it best. If you pick your categories cleverly you maximise your chances of being seen by casual browsers. In one respect Lifeform Three is science fiction, but early reviewers are making comparisons with Ray Bradbury, Margaret Atwood and Kazuo Ishiguro – all very lovely, but it’s not what most people imagine by the term SF. It’s now possible to fine-tune your book’s categories on KDP by inputting keywords in your descriptive tags, so I’m going to be doing some experimenting in the next few weeks. In case you’re interested, here’s a handy link with a full list of those magic words that could get you wider exposure.
And Lifeform Three now has a website – an online home I can put on my Moo cards (also on the to-do list). At the moment it’s a mere page but I’ll be adding to it. So if my remarks about misty woods, whispering memories and lost doors have got you curious about the story, seek the synopsis on its website or at Amazon.
Construction of buildings is traditionally a manual process. Recent advances in additive “3D” manufacturing technology promise to reduce the cost of construction substantially, by autonomously printing sequential layers of a structure from the bottom up. However, current attempts to 3D print structures using Ordinary Portland Cement (OPC) are limited by the medium. OPC reacts over hours and days with water to harden. Thus, print speed is limited with OPC: too fast and wet cement will collapse under its own weight, too slow and the layers will not adhere well to each other. Mixed OPC slurry pumped through the gantry must be used before it hardens in the conduits. These systems mostly utilize 3-axis gantry designs, in large part to support the weight of the conduits and wet cement being supplied to the print head. The structure size is then limited by the size of the gantry which defines the area of the print bed, as well as the maximum height of the structure. Many potential construction sites have limited access that may not support the delivery of large gantry systems. Alternative cement chemistries utilizing two slurries that do not react until combined already exist commercially, albeit not yet very competitively with OPC. One example of such a chemistry is a basic magnesia (MgO) slurry (Part B) and a mild acid phosphate (e.g., KH2PO4) slurry (Part A) which can react to form a hard cement in minutes. These slurries may also contain other fillers to control for viscosity and add strength or reduce the cost of the material. Such chemistries, due to their exothermic nature and fast cure may not always work well when poured in bulk form (i.e., traditional OPC construction methods), but lend themselves well to 3D printing methods. 
Although seemingly unrelated to one not having the benefit of this disclosure, various polymer epoxy systems wherein a “PART B” hardening agent is mixed with a “PART A” resin would also lend themselves to a methodology which eliminates the limitations of gantry systems. While bulk polymers have not historically been used in construction, the methodology claimed herein may promote the adoption of polymer materials as a part or whole in building scale structures.
New article series at catonmat: Detailed Summary of MIT's Linear Algebra - pkrumins http://www.catonmat.net/blog/mit-linear-algebra-part-one/ ====== amichail Why do you care that this introductory course is taught at MIT? For a course at that level, the professor's teaching skills matter more than his/her research skills. A professor from a lower-ranked institution that focuses more on teaching might do a better job for such a course. I guess the real reason there is interest in the MIT course is because it is presented to top students and hence is more likely to be advanced and rigorous. ~~~ tjr The article seems to answer the question: _I had already had two terms of linear algebra when I studied physics back in 2004. But it was not enough for a curious mind like mine. I wanted to see how it was taught at the world’s best university._ ...and... _The course is taught by no other than Gilbert Strang. He’s the world’s leading expert in linear algebra and its applications and has helped the development of Matlab mathematics software._ ~~~ liquidben Your first point is good, but the second point is problematic. Just because someone is an expert and helped build software, doesn't necessarily translate to that same person being able to teach it. Teaching is a skill in and of itself. Luckily other posters here vouch for Strang's ability as a teacher. ~~~ chrischen Someone who has a better understanding of the subject matter will usually have an advantage in teaching it. ------ pbz <http://videolectures.net/mit1806s05_linear_algebra/> ------ danh Also available in iTunes: [http://deimos3.apple.com/WebObjects/Core.woa/Browse/mit.edu....](http://deimos3.apple.com/WebObjects/Core.woa/Browse/mit.edu.1299892995.01299892999) ------ pkrumins Btw, I started using twitter today! If you enjoy my catonmat blog, you should follow me on twitter here: <http://twitter.com/pkrumins>
{ "pile_set_name": "HackerNews" }
Things You Must Know To Become A Game Tester Sure you have probably read all the different stories and sales letters out there on the Internet that tell you that they can make you a game tester earning $100,000 a year without any type of experience or education. But, do you really believe this? Of course, not. Game testing is a real job, with real requirements and real pay - not $100,000 a year. There are some things that you will need to know and have experience with before you can become a video or computer game tester. So, here are the top four things that you will need to know before you apply for a game tester job: 1. You have to know how to test software. When I say “software”, I mean any type of software out there since different games will run on different types of platforms. You will have to know how to test software to get any type of game testing job no matter if you only want to test video games or computer games. By knowing how to test all types of software, you are almost guaranteed to get a better paying game tester job. 2. You have to know game programming languages. This means that you will have to learn all the jargon of the game development world, along with some basic computer language, such as C or C++, to help you truly understand the different things that are going on in the game. You can learn most of these different types of languages from online courses or from local schools. 3. You have to have a good working knowledge of PC hardware and software. This means that even if you want to be a video game tester, you will need to know how to work with computers and be familiar with everything about them. You will need to know different programs as well, such as databases, spreadsheets, and word documents. 4. You have to have a good working knowledge of the video or computer game industry. 
This does not mean that you need to know what titles are releasing next month, but you need to know how the developers go about designing a game, how the programmers, artists, composers, and others do their part of the game, and so on. This will require that you do some studying on your own, and most of this knowledge you will expand upon after you get a game testing position. Well, there you have it. The top four things that every game tester needs to know before landing a job testing video or computer games. It is not an easy process, and if you are really serious about making it as a game tester, then you may need to take a couple of classes online or offline to gain the skills that you need to succeed in getting a good game tester job. Of course, there are other aspects that go into getting a game tester position. When you are ready to learn all about becoming a video game tester, you can find a full course at http://www.becomeagametester.com
{ "pile_set_name": "Pile-CC" }
September 6, 2005 1100 PST (FTW) – Following is a story by Paul Krugman of the New York Times which basically lays the blame for all these “failures” (how sick we are of hearing that word after 9/11) at the feet of Bush funding cuts at the Federal Emergency Management Agency (FEMA) since 2001. If you have been watching TV at all – who hasn’t? – you have also seen former Clinton FEMA Director, James Lee Witt emerging as a knight in white armor saying basically the same thing. Yes, it’s true that under the Clinton administration many of these challenges were better addressed and planned for. But that was before Peak Oil and climate collapse. Can you hear Hillary and Bill chuckling? The Clinton administration also helped create the greater canvas on which these new brush strokes are being placed. Have you forgotten that Bill Clinton and Bush I are great buddies, traveling the world together? George Herbert Walker Bush just loves Bill Clinton. Why is that? Beware America. Beware. If we’re to follow the current media line, the litany of errors and deliberate, callous decision making which has cost so many lives with Katrina is to be blamed solely upon the White House. It is now a virtual certainty that a Democrat will be placed there in 2008 (I did not say elected and will not until we have verifiable paper ballots returned to us). What we now see emerging clearly is that the Democrats will make it a major plank in their platform that FEMA’s budget will be enormously expanded, along with its authority to act independently in a “crisis.” The poor, dispossessed and fearful will likely cheer for and demand these steps without having the slightest clue what they are asking for. Already I have heard Jesse Jackson pointing at FEMA and calling for hearings. The Democrats have found their sheet music.
Intelligent critics from both left and right have for years painstakingly documented FEMA’s paramount leadership role in Continuity of Government (COG) operations and planning. Better described, COG is what will happen if Congress is nuked, if a major catastrophe makes “normal” government operations impossible, or if there is major civil unrest (or total economic collapse). Much of FEMA’s infrastructure is really dedicated to this task and not to disaster relief. The COG function and authority has been greatly expanded since 9/11. At FTW we have written about FEMA many times and discussed it at length in my book Crossing The Rubicon: The Decline of the American Empire at the End of the Age of Oil. There is no shortage of verifiable government records confirming all this including about two score Executive Orders, The Patriot Act, The Homeland Security Bill, and a couple of pieces of legislation having to do with biological warfare enacted in the post-9/11 climate. COG work was initially begun way back in the late 1970s, and involved early input from the likes of Iran-Contra criminal Oliver North. That’s where FEMA actually came from. If this thinking is not curtailed, then as the economic collapse of the United States becomes ever harder to conceal, FEMA will have been given a green light to impose the most draconian and heartless of measures in our country. FEMA will have the ability to divide the US up into ten autonomous regions, independently governed. Denver will be key to that decentralization and I note with irony that the CIA recently announced it was moving its National Resources (formerly Domestic Operations) Division to Denver (Washington Post, May 5, 2005) . FEMA will have the authority to confiscate any private property, food, medicine, personal vehicles, water supplies and even to impress citizens into forced labor and relocation as needed. 
FEMA will be able to override all local governments in a declared national emergency, quarantine neighborhoods and compel people to receive untested (for efficacy) vaccinations of drugs which may be dangerous (remember the smallpox vaccines?) and which will only enrich the pharmaceutical companies. FEMA will have the authority to confiscate firearms and gold held by private individuals. The government records proving what I say here are available in abundance and have been widely circulated over the internet for years. The little that remains of our Bill of Rights will simply cease to exist with a Code Red terror alert or another Katrina. And global warming makes another Katrina somewhere inevitable. In short, what is being set up here is a massive, misguided and stupid effort to take convenient retribution for Katrina in a way that only ensures the more rapid demise of this once great nation. Do not put the blame on FEMA or believe that giving FEMA more money and power will solve anything. Too many of the bad decisions which cost lives in New Orleans, Mississippi, and Alabama were made at the White House, probably by Dick Cheney who has yet to make a public appearance. Condi’s been too busy shopping for $7,000 shoes in New York to do anything. The poor, distressed, homeless people out there, the ones who have lost families, all physical belongings and, in some cases, their sanity, are vulnerable and exploitable and they will continue to be so for years. We cannot afford to let them – and all of us – be sold out one more time in Katrina’s wake. American collapse will be evident soon enough. Simply throwing money and power at FEMA, without at the same time addressing the corruption, depravity and outright evil that has become official Washington is probably more dangerous than Katrina was and I sure hope we don’t have to find that out. In accordance with Title 17 U.S.C. 
Section 107, this material is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. Before 9/11 the Federal Emergency Management Agency listed the three most likely catastrophic disasters facing America: a terrorist attack on New York, a major earthquake in San Francisco and a hurricane strike on New Orleans. "The New Orleans hurricane scenario," The Houston Chronicle wrote in December 2001, "may be the deadliest of all." It described a potential catastrophe very much like the one now happening. So why were New Orleans and the nation so unprepared? After 9/11, hard questions were deferred in the name of national unity, then buried under a thick coat of whitewash. This time, we need accountability. First question: Why have aid and security taken so long to arrive? Katrina hit five days ago - and it was already clear by last Friday that Katrina could do immense damage along the Gulf Coast. Yet the response you'd expect from an advanced country never happened. Thousands of Americans are dead or dying, not because they refused to evacuate, but because they were too poor or too sick to get out without help - and help wasn't provided. Many have yet to receive any help at all. There will and should be many questions about the response of state and local governments; in particular, couldn't they have done more to help the poor and sick escape? But the evidence points, above all, to a stunning lack of both preparation and urgency in the federal government's response. Even military resources in the right place weren't ordered into action. "On Wednesday," said an editorial in The Sun Herald in Biloxi, Miss., "reporters listening to horrific stories of death and survival at the Biloxi Junior High School shelter looked north across Irish Hill Road and saw Air Force personnel playing basketball and performing calisthenics. Playing basketball and performing calisthenics!" 
Maybe administration officials believed that the local National Guard could keep order and deliver relief. But many members of the National Guard and much of its equipment - including high-water vehicles - are in Iraq. "The National Guard needs that equipment back home to support the homeland security mission," a Louisiana Guard officer told reporters several weeks ago. Second question: Why wasn't more preventive action taken? After 2003 the Army Corps of Engineers sharply slowed its flood-control work, including work on sinking levees. "The corps," an Editor and Publisher article says, citing a series of articles in The Times-Picayune in New Orleans, "never tried to hide the fact that the spending pressures of the war in Iraq, as well as homeland security - coming at the same time as federal tax cuts - was the reason for the strain." In 2002 the corps' chief resigned, reportedly under threat of being fired, after he criticized the administration's proposed cuts in the corps' budget, including flood-control spending. Third question: Did the Bush administration destroy FEMA's effectiveness? The administration has, by all accounts, treated the emergency management agency like an unwanted stepchild, leading to a mass exodus of experienced professionals. Last year James Lee Witt, who won bipartisan praise for his leadership of the agency during the Clinton years, said at a Congressional hearing: "I am extremely concerned that the ability of our nation to prepare for and respond to disasters has been sharply eroded. I hear from emergency managers, local and state leaders, and first responders nearly every day that the FEMA they knew and worked well with has now disappeared." I don't think this is a simple tale of incompetence. The reason the military wasn't rushed in to help along the Gulf Coast is, I believe, the same reason nothing was done to stop looting after the fall of Baghdad. 
Flood control was neglected for the same reason our troops in Iraq didn't get adequate armor. At a fundamental level, I'd argue, our current leaders just aren't serious about some of the essential functions of government. They like waging war, but they don't like providing security, rescuing those in need or spending on preventive measures. And they never, ever ask for shared sacrifice. Yesterday Mr. Bush made an utterly fantastic claim: that nobody expected the breach of the levees. In fact, there had been repeated warnings about exactly that risk. So America, once famous for its can-do attitude, now has a can't-do government that makes excuses instead of doing its job. And while it makes those excuses, Americans are dying.
{ "pile_set_name": "Pile-CC" }
/*
 * Copyright 2016 Azavea
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package geotrellis.raster.io.geotiff

import geotrellis.raster.{GridBounds, RasterExtent, TileLayout, PixelIsArea, Dimensions}
import geotrellis.raster.rasterize.Rasterizer
import geotrellis.vector.{Extent, Geometry}
import spire.syntax.cfor._

import scala.collection.mutable

/**
 * This case class represents how the segments in a given [[GeoTiff]] are arranged.
 *
 * @param totalCols        The total amount of cols in the GeoTiff
 * @param totalRows        The total amount of rows in the GeoTiff
 * @param tileLayout       The [[TileLayout]] of the GeoTiff
 * @param storageMethod    Storage method used for the segments (tiled or striped)
 * @param interleaveMethod The interleave method used for segments (pixel or band)
 */
case class GeoTiffSegmentLayout(
  totalCols: Int,
  totalRows: Int,
  tileLayout: TileLayout,
  storageMethod: StorageMethod,
  interleaveMethod: InterleaveMethod
) {
  def isTiled: Boolean =
    storageMethod match {
      case _: Tiled => true
      case _ => false
    }

  def isStriped: Boolean = !isTiled

  def hasPixelInterleave: Boolean = interleaveMethod == PixelInterleave

  /**
   * Finds the corresponding segment index given GeoTiff col and row.
   * If this is a band interleave geotiff, returns the segment index
   * for the first band.
   *
   * @param col Pixel column in overall layout
   * @param row Pixel row in overall layout
   * @return The index of the segment in this layout
   */
  private [geotiff] def getSegmentIndex(col: Int, row: Int): Int = {
    val layoutCol = col / tileLayout.tileCols
    val layoutRow = row / tileLayout.tileRows
    (layoutRow * tileLayout.layoutCols) + layoutCol
  }

  /** Partition a list of pixel windows to localize required segment reads.
    * Some segments may be required by more than one partition.
    * Pixel windows outside of layout range will be filtered.
    * Maximum partition size may be exceeded if any window size exceeds it.
    * Windows will not be split to satisfy partition size limits.
    *
    * @param windows          List of pixel windows from this layout
    * @param maxPartitionSize Maximum pixel count for each partition
    */
  def partitionWindowsBySegments(windows: Seq[GridBounds[Int]], maxPartitionSize: Long): Array[Array[GridBounds[Int]]] = {
    val partition = mutable.ArrayBuilder.make[GridBounds[Int]]
    partition.sizeHintBounded(128, windows)
    var partitionSize: Long = 0L
    var partitionCount: Long = 0L
    val partitions = mutable.ArrayBuilder.make[Array[GridBounds[Int]]]

    def finalizePartition(): Unit = {
      val res = partition.result
      if (res.nonEmpty) partitions += res
      partition.clear()
      partitionSize = 0L
      partitionCount = 0L
    }

    def addToPartition(window: GridBounds[Int]): Unit = {
      partition += window
      partitionSize += window.size
      partitionCount += 1
    }

    val sourceBounds = GridBounds(0, 0, totalCols - 1, totalRows - 1)

    // Because GeoTiff segment indices are enumerated in row-major order,
    // sorting windows by the min index also provides spatial order.
    val sorted = windows
      .filter(sourceBounds.intersects)
      .map { window =>
        window -> getSegmentIndex(col = window.colMin, row = window.rowMin)
      }.sortBy(_._2)

    for ((window, _) <- sorted) {
      if ((partitionCount == 0) || (partitionSize + window.size) < maxPartitionSize) {
        addToPartition(window)
      } else {
        finalizePartition()
        addToPartition(window)
      }
    }

    finalizePartition()
    partitions.result
  }

  private def bestWindowSize(maxSize: Int, segment: Int): Int = {
    var i: Int = 1
    var result: Int = -1
    // Search for the largest factor of segment that is > 1 and <=
    // maxSize. If one cannot be found, give up and return maxSize.
    while (i < math.sqrt(segment) && result == -1) {
      if ((segment % i == 0) && ((segment / i) <= maxSize)) result = segment / i
      i += 1
    }
    if (result == -1) maxSize else result
  }

  def listWindows(maxSize: Int): Array[GridBounds[Int]] = {
    val segCols = tileLayout.tileCols
    val segRows = tileLayout.tileRows

    val colSize: Int =
      if (maxSize >= segCols * 2) {
        math.floor(maxSize.toDouble / segCols).toInt * segCols
      } else if (maxSize >= segCols) {
        segCols
      } else bestWindowSize(maxSize, segCols)

    val rowSize: Int =
      if (maxSize >= segRows * 2) {
        math.floor(maxSize.toDouble / segRows).toInt * segRows
      } else if (maxSize >= segRows) {
        segRows
      } else bestWindowSize(maxSize, segRows)

    listWindows(colSize, rowSize)
  }

  /** List all pixel windows that meet the given geometry */
  def listWindows(maxSize: Int, extent: Extent, geometry: Geometry): Array[GridBounds[Int]] = {
    val segCols = tileLayout.tileCols
    val segRows = tileLayout.tileRows

    val maxColSize: Int =
      if (maxSize >= segCols * 2) {
        math.floor(maxSize.toDouble / segCols).toInt * segCols
      } else if (maxSize >= segCols) {
        segCols
      } else bestWindowSize(maxSize, segCols)

    val maxRowSize: Int =
      if (maxSize >= segRows * 2) {
        math.floor(maxSize.toDouble / segRows).toInt * segRows
      } else if (maxSize >= segRows) {
        segRows
      } else bestWindowSize(maxSize, segRows)

    val result = scala.collection.mutable.Set.empty[GridBounds[Int]]
    val re = RasterExtent(extent, math.max(totalCols / maxColSize, 1), math.max(totalRows / maxRowSize, 1))
    val options = Rasterizer.Options(includePartial = true, sampleType = PixelIsArea)

    Rasterizer.foreachCellByGeometry(geometry, re, options)({ (col: Int, row: Int) =>
      result += GridBounds(
        col * maxColSize,
        row * maxRowSize,
        math.min((col + 1) * maxColSize - 1, totalCols - 1),
        math.min((row + 1) * maxRowSize - 1, totalRows - 1)
      )
    })
    result.toArray
  }

  /** List all pixel windows that cover a grid of given size */
  def listWindows(cols: Int, rows: Int): Array[GridBounds[Int]] = {
    val result = scala.collection.mutable.ArrayBuilder.make[GridBounds[Int]]
    result.sizeHint((totalCols / cols) * (totalRows / rows))

    cfor(0)(_ < totalCols, _ + cols) { col =>
      cfor(0)(_ < totalRows, _ + rows) { row =>
        result += GridBounds(
          col,
          row,
          math.min(col + cols - 1, totalCols - 1),
          math.min(row + rows - 1, totalRows - 1)
        )
      }
    }
    result.result
  }

  def bandSegmentCount: Int =
    tileLayout.layoutCols * tileLayout.layoutRows

  def getSegmentCoordinate(segmentIndex: Int): (Int, Int) =
    (segmentIndex % tileLayout.layoutCols, segmentIndex / tileLayout.layoutCols)

  /**
   * Calculates pixel dimensions of a given segment in this layout.
   * Segments are indexed in row-major order relative to the GeoTiff they comprise.
   *
   * @param segmentIndex: An Int that represents the given segment in the index
   * @return Tuple representing segment (cols, rows)
   */
  def getSegmentDimensions(segmentIndex: Int): Dimensions[Int] = {
    val normalizedSegmentIndex = segmentIndex % bandSegmentCount
    val layoutCol = normalizedSegmentIndex % tileLayout.layoutCols
    val layoutRow = normalizedSegmentIndex / tileLayout.layoutCols

    val cols =
      if (layoutCol == tileLayout.layoutCols - 1) {
        totalCols - ((tileLayout.layoutCols - 1) * tileLayout.tileCols)
      } else {
        tileLayout.tileCols
      }

    val rows =
      if (layoutRow == tileLayout.layoutRows - 1) {
        totalRows - ((tileLayout.layoutRows - 1) * tileLayout.tileRows)
      } else {
        tileLayout.tileRows
      }

    Dimensions(cols, rows)
  }

  private [geotrellis] def getGridBounds(segmentIndex: Int): GridBounds[Int] = {
    val normalizedSegmentIndex = segmentIndex % bandSegmentCount
    val Dimensions(segmentCols, segmentRows) = getSegmentDimensions(segmentIndex)

    val (startCol, startRow) = {
      val (layoutCol, layoutRow) = getSegmentCoordinate(normalizedSegmentIndex)
      (layoutCol * tileLayout.tileCols, layoutRow * tileLayout.tileRows)
    }

    val endCol = (startCol + segmentCols) - 1
    val endRow = (startRow + segmentRows) - 1

    GridBounds(startCol, startRow, endCol, endRow)
  }
}

trait GeoTiffSegmentLayoutTransform {
  private [geotrellis] def segmentLayout: GeoTiffSegmentLayout
  private lazy val GeoTiffSegmentLayout(totalCols, totalRows, tileLayout, isTiled, interleaveMethod) = segmentLayout

  /** Count of the bands in the GeoTiff */
  def bandCount: Int

  /** Calculate the number of segments per band */
  private def bandSegmentCount: Int =
    tileLayout.layoutCols * tileLayout.layoutRows

  /**
   * Calculates pixel dimensions of a given segment in this layout.
   * Segments are indexed in row-major order relative to the GeoTiff they comprise.
   *
   * @param segmentIndex: An Int that represents the given segment in the index
   * @return Tuple representing segment (cols, rows)
   */
  def getSegmentDimensions(segmentIndex: Int): Dimensions[Int] = {
    val normalizedSegmentIndex = segmentIndex % bandSegmentCount
    val layoutCol = normalizedSegmentIndex % tileLayout.layoutCols
    val layoutRow = normalizedSegmentIndex / tileLayout.layoutCols

    val cols =
      if (layoutCol == tileLayout.layoutCols - 1) {
        totalCols - ((tileLayout.layoutCols - 1) * tileLayout.tileCols)
      } else {
        tileLayout.tileCols
      }

    val rows =
      if (layoutRow == tileLayout.layoutRows - 1) {
        totalRows - ((tileLayout.layoutRows - 1) * tileLayout.tileRows)
      } else {
        tileLayout.tileRows
      }

    Dimensions(cols, rows)
  }

  /**
   * Calculates the total pixel count for given segment in this layout.
   *
   * @param segmentIndex: An Int that represents the given segment in the index
   * @return Pixel size of the segment
   */
  def getSegmentSize(segmentIndex: Int): Int = {
    val Dimensions(cols, rows) = getSegmentDimensions(segmentIndex)
    cols * rows
  }

  /**
   * Finds the corresponding segment index given GeoTiff col and row.
   * If this is a band interleave geotiff, returns the segment index
   * for the first band.
   *
   * @param col Pixel column in overall layout
   * @param row Pixel row in overall layout
   * @return The index of the segment in this layout
   */
  private [geotiff] def getSegmentIndex(col: Int, row: Int): Int =
    segmentLayout.getSegmentIndex(col, row)

  private [geotiff] def getSegmentTransform(segmentIndex: Int): SegmentTransform = {
    val id = segmentIndex % bandSegmentCount
    if (segmentLayout.isStriped)
      StripedSegmentTransform(id, GeoTiffSegmentLayoutTransform(segmentLayout, bandCount))
    else
      TiledSegmentTransform(id, GeoTiffSegmentLayoutTransform(segmentLayout, bandCount))
  }

  def getSegmentCoordinate(segmentIndex: Int): (Int, Int) =
    (segmentIndex % tileLayout.layoutCols, segmentIndex / tileLayout.layoutCols)

  private [geotrellis] def getGridBounds(segmentIndex: Int): GridBounds[Int] = {
    val normalizedSegmentIndex = segmentIndex % bandSegmentCount
    val Dimensions(segmentCols, segmentRows) = getSegmentDimensions(segmentIndex)

    val (startCol, startRow) = {
      val (layoutCol, layoutRow) = getSegmentCoordinate(normalizedSegmentIndex)
      (layoutCol * tileLayout.tileCols, layoutRow * tileLayout.tileRows)
    }

    val endCol = (startCol + segmentCols) - 1
    val endRow = (startRow + segmentRows) - 1

    GridBounds(startCol, startRow, endCol, endRow)
  }

  /** Returns all segment indices which intersect given pixel grid bounds */
  private [geotrellis] def getIntersectingSegments(bounds: GridBounds[Int]): Array[Int] = {
    val colMax = totalCols - 1
    val rowMax = totalRows - 1
    val intersects =
      !(colMax < bounds.colMin || bounds.colMax < 0) &&
      !(rowMax < bounds.rowMin || bounds.rowMax < 0)

    if (intersects) {
      val tc = tileLayout.tileCols
      val tr = tileLayout.tileRows
      val colMin = math.max(0, bounds.colMin)
      val rowMin = math.max(0, bounds.rowMin)
      val colMax = math.min(totalCols - 1, bounds.colMax)
      val rowMax = math.min(totalRows - 1, bounds.rowMax)
      val ab = mutable.ArrayBuilder.make[Int]

      cfor(colMin / tc)(_ <= colMax / tc, _ + 1) { layoutCol =>
        cfor(rowMin / tr)(_ <= rowMax / tr, _ + 1) { layoutRow =>
          ab += (layoutRow * tileLayout.layoutCols) + layoutCol
        }
      }
      ab.result
    } else {
      Array.empty[Int]
    }
  }

  /** Partition a list of pixel windows to localize required segment reads.
    * Some segments may be required by more than one partition.
    * Pixel windows outside of layout range will be filtered.
    * Maximum partition size may be exceeded if any window size exceeds it.
    * Windows will not be split to satisfy partition size limits.
    *
    * @param windows          List of pixel windows from this layout
    * @param maxPartitionSize Maximum pixel count for each partition
    */
  def partitionWindowsBySegments(windows: Seq[GridBounds[Int]], maxPartitionSize: Long): Array[Array[GridBounds[Int]]] =
    segmentLayout.partitionWindowsBySegments(windows, maxPartitionSize)

  /** Returns all segment indices which intersect given pixel grid bounds,
    * and for a subset of bands.
    * In a band interleave geotiff, generates the segment indices for the first band.
    *
    * @return An array of (band index, segment index) tuples.
    */
  private [geotiff] def getIntersectingSegments(bounds: GridBounds[Int], bands: Array[Int]): Array[(Int, Int)] = {
    val firstBandSegments = getIntersectingSegments(bounds)
    bands.flatMap { band =>
      val segmentOffset = bandSegmentCount * band
      firstBandSegments.map { i => (band, i + segmentOffset) }
    }
  }
}

object GeoTiffSegmentLayoutTransform {
  def apply(_segmentLayout: GeoTiffSegmentLayout, _bandCount: Int): GeoTiffSegmentLayoutTransform =
    new GeoTiffSegmentLayoutTransform {
      val segmentLayout = _segmentLayout
      val bandCount = _bandCount
    }
}

/**
 * The companion object of [[GeoTiffSegmentLayout]]
 */
object GeoTiffSegmentLayout {
  /**
   * Given the totalCols, totalRows, storageMethod, and BandType of a GeoTiff,
   * a new instance of GeoTiffSegmentLayout will be created.
   *
   * @param totalCols:     The total amount of cols in the GeoTiff
   * @param totalRows:     The total amount of rows in the GeoTiff
   * @param storageMethod: The [[StorageMethod]] of the GeoTiff
   * @param bandType:      The [[BandType]] of the GeoTiff
   */
  def apply(
    totalCols: Int,
    totalRows: Int,
    storageMethod: StorageMethod,
    interleaveMethod: InterleaveMethod,
    bandType: BandType
  ): GeoTiffSegmentLayout = {
    val tileLayout =
      storageMethod match {
        case Tiled(blockCols, blockRows) =>
          val layoutCols = math.ceil(totalCols.toDouble / blockCols).toInt
          val layoutRows = math.ceil(totalRows.toDouble / blockRows).toInt
          TileLayout(layoutCols, layoutRows, blockCols, blockRows)
        case s: Striped =>
          val rowsPerStrip = math.min(s.rowsPerStrip(totalRows, bandType), totalRows).toInt
          val layoutRows = math.ceil(totalRows.toDouble / rowsPerStrip).toInt
          TileLayout(1, layoutRows, totalCols, rowsPerStrip)
      }
    GeoTiffSegmentLayout(totalCols, totalRows, tileLayout, storageMethod, interleaveMethod)
  }
}
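For intuition, the row-major segment arithmetic in `getSegmentIndex` and `getSegmentDimensions` above can be mirrored in a standalone sketch. The function names and the 100×100 raster with 32×32 tiles below are illustrative assumptions, not part of the GeoTrellis API:

```python
def segment_index(col, row, tile_cols, tile_rows, layout_cols):
    """Row-major segment index of the segment containing pixel (col, row)."""
    return (row // tile_rows) * layout_cols + (col // tile_cols)

def segment_dimensions(i, total_cols, total_rows,
                       tile_cols, tile_rows, layout_cols, layout_rows):
    """Pixel (cols, rows) of segment i; the last row/col of segments
    only gets whatever pixels remain past the full tiles."""
    layout_col = i % layout_cols
    layout_row = i // layout_cols
    cols = (total_cols - (layout_cols - 1) * tile_cols
            if layout_col == layout_cols - 1 else tile_cols)
    rows = (total_rows - (layout_rows - 1) * tile_rows
            if layout_row == layout_rows - 1 else tile_rows)
    return cols, rows

# A 100x100 raster with 32x32 tiles has a 4x4 segment layout; the
# right/bottom edge segments are only 4 pixels wide/tall (100 - 3 * 32).
print(segment_index(33, 0, 32, 32, 4))                    # second segment in row 0
print(segment_dimensions(15, 100, 100, 32, 32, 4, 4))     # bottom-right edge segment
```

This is why the Scala code normalizes `segmentIndex % bandSegmentCount` first: in a band-interleave GeoTIFF each band repeats the same per-band grid of segments, so the geometry of segment `i` depends only on its position within one band.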
{ "pile_set_name": "Github" }
The present disclosure generally relates to reinforced packages for holding products and to methods of forming the packages. More specifically, the present disclosure is directed to a package including a bag or liner attached to a carton or blank having features to reinforce the shape of the formed package and allow access to the contents of the package, and features that facilitate forming the package and keeping the package open. Bags or liners, such as paper or plastic bags, traditionally have been used for the packaging and transport of products, from bulk materials such as rice or sand to larger items. Bags or liners generally are inexpensive and easy to manufacture, can be formed in different configurations and sizes, and can be used for storage and transport of a wide variety of products. In particular, in the food service industry, bags or liners are frequently used for packaging of prepared food items, such as sandwiches, French fries, cereal, etc. Currently, there is a growing demand for bags or liners or similar packages for use in packaging various products, including sandwiches, French fries, cereal, and other prepared food items, for presentation to consumers. However, it is equally important that the costs of such packages be minimized as much as possible. While various package designs including reinforcing or supporting materials have been developed, often, the manufacture of such specialty bags or liners having reinforcing layers or materials supplied thereto has required multiple stages or operations, which can significantly increase the cost of manufacture of such packages.
{ "pile_set_name": "USPTO Backgrounds" }
European survey of dental X-ray equipment. The implementation of an X-ray Quality Assurance (QA) program is a legal requirement in Europe as stipulated in the EU Council Directive 97/43/EURATOM (MED). A review of the literature has identified that European countries are performing some level of QA testing of their dental X-ray equipment, although the type and level to which testing is performed can differ. The European SENTINEL co-ordination action proposed to collate a survey of equipment data for both conventional and digital dental X-ray installations among the SENTINEL partners. The European QA results confirm that systems can be operated below tolerance, and in some cases significantly so, while still in clinical use. This can occur despite servicing of equipment. The results have emphasised the fact that there is a requirement for the medical physics/engineering professions to become more closely involved in the management of dental radiology equipment. This also includes their involvement in the development and delivery of appropriate training courses for dentists and suppliers of dental radiology equipment.
{ "pile_set_name": "PubMed Abstracts" }
Q: Manually extracting reflog when git reflog fails The HEAD commit object of my .git repo was lost due to a machine crash: $ git rev-parse HEAD 1f411c372caab4767638df0b47be5e2f576cb582 $ git reflog error: object file .git/objects/1f/411c372caab4767638df0b47be5e2f576cb582 is empty fatal: loose object 1f411c372caab4767638df0b47be5e2f576cb582 (stored in .git/objects/1f/411c372caab4767638df0b47be5e2f576cb582) is corrupt It turns out there are only a few files that are corrupted, but because the HEAD commit is one of them, I can't locate the hash of the preceding commit. However, my understanding of reflog is that it keeps a history of all the changes to HEAD, even if they are unreachable, so I expect there to be a place in .git where I can locate the previous hash, and I'm surprised reflog is failing. Is there a way to dump the reflog manually and maybe recover this easily? I don't actually care if I lose only the last commit (or even the last few commits) because my working dir is OK. But I don't want to lose all of the last 4 days since I pushed up to my server. If I can find the right SHA, I can recover simply with git checkout -B recovery and go on my merry way. Thanks! P.S. Yes I could treat my working directory as a simple squash of the last 4 days of work, but would prefer to capture the history if possible. UPDATE. FYI: How I actually recovered using the answer: tail .git/logs/HEAD 8030ad73461b75e3ce575d5896a9511f6036e45d 1f411c372caab4767638df0b47be5e2f576cb582 REDACTED 1432014000 -0700 commit: REDACTED git branch -f recovery 8030ad73461b75e3ce575d5896a9511f6036e45d echo "ref: refs/heads/recovery" > .git/HEAD A: Look at the bottom of the file .git/logs/HEAD, which tracks all changes to HEAD. (And branch changes are tracked in the files in .git/logs/refs/heads/.)
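The answer works because each line of `.git/logs/HEAD` records an old-SHA/new-SHA pair, followed by the committer identity, timestamp, and a tab-separated message. A minimal Python sketch of that parsing; the identity and message in the sample line are placeholders, not values from the actual repository:

```python
def parse_reflog_line(line):
    # Each reflog entry: "<old-sha> <new-sha> <ident> <timestamp> <tz>\t<message>"
    header, _, message = line.rstrip("\n").partition("\t")
    fields = header.split(" ")
    old_sha, new_sha = fields[0], fields[1]
    return old_sha, new_sha, message

# On the last line, old_sha is the parent of the (corrupt) commit HEAD
# pointed at -- the SHA to pass to `git branch -f recovery <old_sha>`.
entry = parse_reflog_line(
    "8030ad73461b75e3ce575d5896a9511f6036e45d "
    "1f411c372caab4767638df0b47be5e2f576cb582 "
    "A U Thor <thor@example.com> 1432014000 -0700\tcommit: work in progress"
)
```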
{ "pile_set_name": "StackExchange" }
Tag: how do you patent an idea Every once in a while, we all experience a flash of insight when great ideas cross our minds. We come up with outstanding solutions to existing problems. If someone had told you thirty years ago that we would all be connected through smartphones, it would have seemed like a scene from a sci-fi film. Yet that is the reality today, and more advances are still to come. We live in a dynamic world where everything is subject to change at any given moment. These changes are brought about by the work of inventors and innovators, whose ideas have played a crucial role in shaping the way we live our lives. Coming up with a unique idea is exciting and impressive, but turning that idea into an actual business is what separates success from failure. Many things go into transforming a raw idea into a working enterprise. If you think you have the next big idea, you need to pay attention to the following. inventhelp office The first thing any inventor is advised to secure is a patent. The process of acquiring a patent is complex and lengthy, and you need the right guidance to avoid mistakes that might hurt your business. Funding, market knowledge, and the right connections are crucial to the survival and success of your invention. Many innovations die at this stage for lack of funding or market understanding. InventHelp Intromark Figuring things out on your own can be costly and time-consuming. You also have to understand that someone else may be out there with the same idea as you. Making fast, smart moves can be the difference between you and them.
That’s why many inventors, especially new ones, are advised to seek professional help from people who have relevant experience in the field. InventHelp has been on the front line in helping creators turn their ideas into reality. The company has handled tens of thousands of innovations and has helped many of them become successful business ventures. InventHelp can pitch your invention idea to companies around the world that might be interested in such a concept. These companies assist by giving feedback that indicates whether there is a market for the product. Positive feedback is a sign that other companies are showing interest in the innovation and might invest in or license the rights from you. InventHelp also helps with patenting by referring you to a fully certified and accredited patent attorney who will handle the entire process. how to file a patent InventHelp also guarantees full confidentiality to inventors regarding their inventions. This translates to full protection of your idea until you file a patent for your creation. They also help to evaluate the viability of the creation against market demand, so as to come up with a finished product that responds effectively to market needs. InventHelp is a haven for any inventor seeking guidance and resources to build a business around their invention. Check out some InventHelp reviews and get in touch with any of their representatives.
{ "pile_set_name": "Pile-CC" }
Latest entries from this feed The Everest region is one of the most glorious regions for trekking, with something for everyone. It is home to the world's highest peak, Mt. Everest, and popular among trekkers, but the Everest base camp trek is a tough challenge, so you need to start with a clear goal in mind. The trek leads you deep into Buddhist Sherpa country and lets you enjoy the inspiring beauty of some of the world's most magnificent peaks. Everest is situated in the northeastern part of Nepal; the prime attraction is Mt. Everest itself, but you can also enjoy views of several other peaks. Everest is also considered the coldest of all the major treks, so it is important to know the region well before you go. The Everest base camp trek is the best option for people who want a unique experience, and it is ideal for nature lovers. Annapurna Base Camp Trek: The Annapurna region is a famous trekking destination in Nepal, unique for its geographic and cultural diversity. It is a protected area that gives you access to fascinating views. Here you can take in attractive views of snowy peaks and experience the rich traditions and cultural life of the Nepalese people. Many people love to visit this place for its extreme elevations and geographical diversity. The region also contains a remarkable amount of flora, so you will have a memorable experience visiting it. To enjoy the natural beauty, the Annapurna Base Camp trek is ideal for fun and excitement. Why Trekking In Nepal? The Annapurna region is considered one of the world's best trekking destinations, and many people take these routes to explore something new. The trek starts from the lake city of Pokhara.
Overall, this region also includes the wettest, windiest and driest places in Nepal. No wonder Nepal is an ideal destination for its natural beauty, with terrain ranging up from subtropical jungle, which is why so many people choose trekking in Nepal. Consider choosing the best trekking package to get the most fun and excitement out of it. About FeedListing.com You may have a blog full of interesting content, but it serves little purpose beyond honing your writing skills if other people aren't reading it. As with attracting visitors or customers in any other aspect of an online business, it takes promotion. There are many directories and search engines catering specifically to blogs, and it is in the best interest of your blog's readership to get it listed in as many as possible. Start with us and submit your feed; it only takes a couple of seconds.
{ "pile_set_name": "Pile-CC" }
Introduction {#Sec1} ============ Almost 20% of the US population over 40 years old and nearly half of the population over 75 years old receive statin therapy as an approach to reduce cardiovascular disease through reduction of blood cholesterol^[@CR1],[@CR2]^. Statins decrease synthesis of cholesterol in the liver by inhibiting 3-hydroxy-3-methylglutaryl coenzyme A reductase. Because cholesterol is involved in numerous metabolic pathways, the overall effects of statins are pleiotropic. The effect of statin therapy on influenza outcome in the elderly population has been debated previously^[@CR3],[@CR4]^. Recent studies indicated that in elderly individuals, statin therapy is associated with a reduced response to influenza vaccination^[@CR5]--[@CR7]^. This association was based on the reduction of the hemagglutination-inhibiting geometric mean titers (HAI GMT)^[@CR5]^, increased incidence of medically attended acute respiratory illness^[@CR6]^ and a higher frequency of laboratory-confirmed influenza^[@CR7]^ in the vaccinated statin users when compared with non-users. This information is concerning because the aged population, which is a target group for statin therapy, is already at high risk for morbidity and mortality caused by influenza due to immunosenescence^[@CR8]--[@CR11]^. Thus, finding a way to overcome statin-induced suppression of immune responses to vaccination in older individuals is an important goal that we have investigated by comparing an alternative route of influenza vaccine delivery to standard systemic vaccination. Cutaneous antigen delivery^[@CR12]^ using a variety of devices and vaccines^[@CR13]^ including influenza vaccines^[@CR14]--[@CR20]^ is an active and promising area of research with important implications for public health. We^[@CR17]--[@CR19],[@CR21]--[@CR23]^ and other investigators^[@CR24]--[@CR28]^ have observed improved immune responses to influenza vaccination delivered by microneedle patches (MNPs).
Improved response to skin-delivered antigens occurs due to a network of immunoregulatory cells in skin^[@CR29]--[@CR31]^ including specialized sets of resident antigen-presenting cells (APC)^[@CR32]^. Activated APCs migrate to the proximal draining lymph nodes where they present vaccine peptides to helper and cytotoxic T cells and interact with B-cells, thus initiating an effective immune response. The uptake of vaccine antigen by a highly motile CD207 (langerin) (+) DC subpopulation of skin APCs was visualized by two-photon microscopy^[@CR33]^. We have demonstrated that depletion of CD207 (+) dermal DCs prior to vaccination resulted in partial impairment of both Th1 and Th2 responses in microneedle-immunized but not systemically-vaccinated mice^[@CR34]^, confirming the important role of this subset of APCs in skin vaccination. MNP insertion alone caused local release of proinflammatory cytokines and chemokines, further increased in the presence of influenza antigen. This local innate response induced activation, maturation and migration of antigen-loaded APCs^[@CR35]^. The "mechanical adjuvant" properties of MNPs^[@CR36],[@CR37]^ are thought to be due to transient local inflammation induced by a limited amount of cell death, responsible for increased production of influenza vaccine-specific antibody that correlated with the increased level of cell death^[@CR38]^. Our previous studies^[@CR23],[@CR39]--[@CR42]^ led to a successful Phase I clinical trial of the safety, immunogenicity, reactogenicity and acceptability of the trivalent influenza vaccine delivered with an MNP^[@CR43]^. We hypothesized that skin delivery of influenza vaccines will ameliorate the immunosuppressive effect of statin therapy seen with systemic immunization. To test this hypothesis, we compared the outcomes of two routes of immunization in combination with statin treatment: the systemic route by intramuscular injection most widely used in vaccination, and a skin immunization route using MNPs.
To better model human studies, we used middle-aged mice, administered AT orally on a background of a high-fat WD for 14 weeks prior to immunization, and assayed total cholesterol in blood to confirm that AT treatment affected cholesterol levels prior to vaccination. Results {#Sec2} ======= Age dependency of HAI titers elicited by systemic and MNP vaccine delivery {#Sec3} -------------------------------------------------------------------------- We vaccinated adult (2-3-month-old), mature (6.5-month-old), middle-aged (14-month-old) and advanced aged (20-month-old) mice, none of which received AT, with a single dose of A/Brisbane/59/07 (H1N1) vaccine by IM or MNP delivery and plotted HAI titers detected at day 28 postvaccination against mouse age (Fig. [1](#Fig1){ref-type="fig"}). The highest titers, around HAI 80, were observed in the adult mice vaccinated with MNPs, while in the IM-vaccinated animals of the same age they were 2-fold lower (p = 0.04). The titers in both groups declined with age, but the decline was more pronounced in the systemically vaccinated mice. MNP groups demonstrated significantly higher titers than IM groups until at least 6.5 months of age at the time of vaccination (p = 0.004). In mice vaccinated at 14 months, HAI titers above the detection limit of 10 were observed in \~70% of animals in MNP groups, but only in \~20% in the IM groups (Fig. [1](#Fig1){ref-type="fig"}). Thus, MNP vaccination decreased the age-dependent decline of the functional antibody titers observed in the systemically immunized mice. Figure 1. Age-dependent decline of anti-A/Brisbane/59/07 (H1N1) HAI titers measured at day 28 postvaccination with 2.4--3.2 µg vaccine in systemically immunized (red symbols) and skin-immunized (blue symbols) BALB/c mice presented as means with SEM on the log 2 scale. The data from this study are compiled together with previously reported titers and plotted against mouse age at time of immunization.
Mouse groups: (a) mice on the RD immunized with MNPs (n = 5, \~2.3 µg HA) or IM (n = 5, 3 µg HA) replotted from^[@CR23]^, (b) and (e) mice on RD immunized with MNP Batch 2 (n = 8 each time point, 3.2 µg HA), (c) mice on RD immunized with MNP Batch 1 (n = 6, 2.7 µg HA) or IM (n = 5, 2.4 µg HA), (d) mice on WD immunized with MNP Batch 1 (n = 6, 2.7 µg HA) or IM (n = 5, 2.4 µg HA). P values calculated by two-tailed unpaired t-test on the log~2~-transformed titers are shown for the groups in which they were below 0.05. Individual antibody responses for each mouse are shown in Supplemental Fig. 3. AT decreases total blood cholesterol {#Sec4} ------------------------------------ Within two weeks after starting the WD alone or with AT, the mice gained an average of 6-7% of their original body weight (p \< 0.05), and by the fifth week the weight stabilized at 103% of the initial weight, although the increase was not statistically significant (Fig. [2A](#Fig2){ref-type="fig"}). Independent of AT treatment, all mice on the high fat WD displayed oily fur compared to the mice on the RD. Thus, AT did not affect the weight or the fur appearance of mice on the WD. We observed that consumption of the WD for 7 weeks elevated the level of total cholesterol in blood by 140% compared to the mice kept on the RD (p \< 0.0001, Fig. [2B](#Fig2){ref-type="fig"}). AT incorporated into the WD lowered total plasma cholesterol by 22% (p = 0.0006). Thus, the mouse model reproduced the main effects of a high fat diet and statin treatment on cholesterol levels observed in humans. Figure 2. Effects of WD and AT on mouse weight and blood cholesterol. (**A**) Effect on body weight. Mice were fed RD until they were 10.5 months old and then switched to the high fat WD with or without AT.
The individual weights are normalized to the weight at the time of the diet switch (time zero on the graph) and presented as means with SD (n = 18 for WD, n = 17 for WD + AT); (**B**) Effect of WD and AT consumed for 7 weeks on the total cholesterol level measured in mouse blood. Values are expressed as means with SD (n = 18 for WD, n = 13 for WD + AT, n = 5 for RD). Data points for individual mice are shown in Supplemental Fig. 4. AT dampens antibody responses in immunized mice {#Sec5} ----------------------------------------------- The HAI titers elicited by the vaccine were below the detection level of 10 in all naïve and systemically immunized mice except for one mouse with HAI = 10 detected in both WD and RD groups at d 28 postvaccination (Fig. [3A](#Fig3){ref-type="fig"}). This is not an unexpected result given the single vaccination dose and age of the animals (groups "d" in Fig. [1](#Fig1){ref-type="fig"}). The vaccine-specific IgG level in blood was found to be \~3.5 times higher in the mature RD group (6.5-month-old mice at the time of vaccination, groups "c" in Fig. [1](#Fig1){ref-type="fig"}) than in either the WD or AT groups (p = 0.01 and 0.004, respectively) at 2 weeks postvaccination (Fig. [3B](#Fig3){ref-type="fig"}). Similarly, IgG1 levels were two times (p = 0.016) and seven times (p = 0.049) higher in the RD group than in the AT group at 2 and 4 weeks postvaccination, respectively (Fig. [3C](#Fig3){ref-type="fig"}). IgG, IgG1, and IgG2a levels in the WD group were up to four times higher than in the AT group (Fig. [3B and C](#Fig3){ref-type="fig"}), but there was no statistically significant difference between the groups. These results indicate a trend of reduction in vaccine-specific IgG, IgG1, and IgG2a levels in the middle aged systemically vaccinated mice receiving AT, and low overall antibody titers. Figure 3. Effect of AT on the antibody response to vaccination in systemically immunized groups.
Groups: R - RD (n = 6, black symbols), W - WD (n = 6, orange symbols), A - AT (n = 5, blue symbols): (**A**) HAI titers measured at days 7, 14 and 28 post immunization. Samples below the limit of detection, including all naïve samples (n = 4 in RD, n = 6 in WD, n = 5 in AT), were assigned a titer of 5 (red broken line) for calculations. Data are presented as geometric means with 95% confidence intervals. (**B**--**D**) Total vaccine-specific IgG, IgG1, and IgG2a, respectively. Filled and open circles connected with solid or broken lines represent immunized and naïve groups, respectively. Data are presented as means with SE where "a" and "w" denote statistically significant differences between the RD group and AT or WD groups, respectively: p = 0.004 and 0.001 for "a" and "w", respectively, at d 14 on panel B; p = 0.016 and 0.049 for "a" at day 14 and 28, respectively, on panel C. Detailed antibody responses of individual mice are shown in Supplemental Fig. 5. HAI titers of 10 or above were detected in all MNP-vaccinated RD and WD groups and in 50% of the AT group as soon as 2 weeks post immunization (Fig. [4A](#Fig4){ref-type="fig"}). The mature mice on the RD developed the highest titers among the groups (GMT 34.8 at 4 weeks postvaccination, Fig. [4A](#Fig4){ref-type="fig"}), while the highest HAI titers for the middle aged mice on WD (GMT 22.4) were observed in the group that did not receive AT on day 14 postvaccination. Addition of AT to the WD decreased this number 2.5-fold to GMT 8.9 (p = 0.04). The middle aged mice in both MNP groups demonstrated a slight drop in the HAI titers from week 2 to week 4 postvaccination in contrast to the mice on the RD (Fig. [4A](#Fig4){ref-type="fig"}). IgG levels in the RD group were 2.5-fold higher (p = 0.006) than in the WD group at day 28 postvaccination and 2.4-fold (p = 0.026) and 6-fold (p \< 0.001) higher than in the AT group at days 14 and 28 postvaccination, respectively (Fig.
[4A](#Fig4){ref-type="fig"}), while IgG1 was about 9-fold higher than in the AT group at day 28 postvaccination (p = 0.0026, Fig. [4C](#Fig4){ref-type="fig"}). AT decreased total vaccine-specific IgG, IgG1 and IgG2a in comparison with the WD MNP groups by 2.4-, 2.7-, and 2-fold by day 28, respectively, although the differences were not statistically significant (Fig. [4B--D](#Fig4){ref-type="fig"}). Figure 4. Effect of AT on the antibody response to vaccination in MNP groups. (**A**) HAI presented as described in Fig. [3](#Fig3){ref-type="fig"}. Groups: R - RD (n = 6, black symbols), W - WD (n = 6, orange symbols), A - AT (n = 6, blue symbols). (**B**--**D**) Total vaccine-specific IgG, IgG1, and IgG2a, respectively. Filled and open circles connected with solid or broken lines represent immunized and naïve groups, respectively. "a" and "w" represent differences described in the legend for Fig. [3](#Fig3){ref-type="fig"}: p = 0.026 for "a" at day 14, p \< 0.001 and p = 0.006 for "a" and "w", respectively, at day 28 on panel B; p = 0.0026 for "a" on panel C. Detailed antibody responses of individual mice are shown in Supplemental Fig. [6](#MOESM1){ref-type="media"}. MNP vaccine delivery enhances the humoral immune response in AT-treated mice {#Sec6} ---------------------------------------------------------------------------- Our hypothesis is that skin immunization of statin-treated mice using MNPs will overcome the attenuation of antibody responses observed after systemic immunization. Thus, we compared side by side the total (IgG) and functional (HAI) vaccine-specific antibody titers for the two groups (AT-MNP vs. AT-IM) at weeks 2 and 4 postvaccination (Fig. [5](#Fig5){ref-type="fig"}). Comparison of the total IgG titers (Fig.
[5A](#Fig5){ref-type="fig"}) demonstrated a clear enhancement due to MNP vaccine delivery: total vaccine-specific IgG was 47-fold (p = 0.017) and 21-fold (p = 0.003) higher in the MNP-vaccinated animals than in the systemically vaccinated animals on the AT regimen by weeks 2 and 4 postvaccination, respectively (Fig. [5A](#Fig5){ref-type="fig"}). HAI titers were low because of the animals' age, but a statistically significant 1.6-fold increase of GMT in the MNP group was observed at day 28 postvaccination (p = 0.037, Fig. [5B](#Fig5){ref-type="fig"}). This side-by-side comparison demonstrates that MNP delivery of vaccine overcomes the attenuation of antibody responses caused by AT in the middle aged mice on the WD. Figure 5. MNP vaccination of AT-treated animals elicited a higher antibody response than IM vaccination. (**A**) Comparison of the total vaccine-specific IgG at days 14 and 28 postvaccination in the IM (blue bar, replotted from Fig. [3](#Fig3){ref-type="fig"}) and MNP (patterned blue bar, replotted from Fig. [4](#Fig4){ref-type="fig"}) groups. (**B**) Comparison of HAI titers measured at days 14 and 28 postvaccination, same groups as in A. Individual data points for each mouse are shown in Supplemental Fig. [7](#MOESM1){ref-type="media"}. Discussion {#Sec7} ========== Here, we investigated the effects and outcomes of statin therapy in a BALB/c mouse model of influenza immunization. Mouse models of influenza^[@CR44]^ have been widely used in vaccine research, but not in the context of statin therapy. Mice lack any preexisting immunity or prior exposure to influenza virus, and thus vaccination results in a primary immune response. In contrast, immunization in humans occurs on a background of immunological memory resulting from prior immunizations and infections. Such complexity, together with the co-morbidities often present in the older population, is a reason for some uncertainties noted in human studies^[@CR45]--[@CR47]^.
The mouse model provides improved control over variables that affect the complexity of the immune response in humans, such as individual variation in infection history and health status. The molecular mechanism of aging in mice is similar to that in humans^[@CR48]^ and they are considered an ideal model for aging studies based on the concordance of quantitative trait loci identified by genome mapping of mice and humans^[@CR49]^. Especially important for vaccine research, mice have similar responses to vaccination as humans and age-dependent changes in B and T cells are reproduced in the mouse model^[@CR50],[@CR51]^. Age-related decreased potency of antibody response to vaccination occurs due to deficiencies in generation of plasmablasts induced by vaccination^[@CR52]^ and defects in T cell responses^[@CR53],[@CR54]^. Studies in aged mice have demonstrated diminished T cell and B cell responses to influenza that closely resembled the responses in elderly humans^[@CR55],[@CR56]^. It is also known that suppressive effects of Treg cells are enhanced with aging^[@CR57]^, and decreased antigen-specific B cell stimulation in aged mice was associated with elevated levels of a regulatory subset of effector Tregs and defective Tfh cell function^[@CR58]^. We started mice on the high fat diet with or without AT more than three months prior to vaccination and observed an AT-dependent decrease in total cholesterol level in blood. AT has previously been shown to decrease total cholesterol in C57BL/6J mice when given with an atherogenic diet, but not with a regular rodent diet^[@CR59]^, although AT given with the normal rodent diet in approximately 10-fold higher dose than in our study did decrease plasma cholesterol by 26% in 2 weeks^[@CR60]^. The dose of AT that we used corresponded to that used in previous reports^[@CR61]^. 
By using a prolonged statin regimen, high fat diet and middle aged mice, we were able to show that chronic administration of AT considerably decreased vaccine-induced antibody titers. Our mouse data are similar to recent data from human studies which indicate that statins, especially synthetic ones such as AT, reduce antibody responses to influenza vaccination^[@CR5]--[@CR7]^. We found that MNP administration of subunit vaccine increased vaccine-specific antibody titers in the AT-treated mice by \~20 fold compared to IM-immunized mice on the same regimen (p = 0.003) and by \~7 fold (p = 0.0004) compared to the IM-vaccinated mice on a WD that did not receive AT. The functional HAI titers that are often used as a correlate of protective immunity were as low as 14 (GMT) in the mice on the WD in the MNP-immunized groups and below the level of detection in the IM groups one month postimmunization, most probably due to the age (14 months) of the animals by the time of immunization. Similar to a recent report^[@CR62]^, we demonstrated that the antibody response to vaccination depends on mouse age. Importantly, we found that the age-dependent decline in HAI titers is reduced in skin-immunized mice compared with systemically immunized mice. AT dampened antibody titers further in both groups. One possible conclusion from this observation is that vaccinees of more advanced age may need an adjuvant in addition to the skin delivery route to boost the humoral response. Co-delivery of an adjuvant and the vaccine formulated in an MNP^[@CR63],[@CR64]^ would be especially attractive because in this case the adjuvant will be administered locally. A dampening of influenza vaccine-specific antibody titers by AT was previously observed in human studies, and we found a similar trend in the mouse model.
The interplay between statin therapy and the outcome of influenza vaccination is most likely a combination of the effect of statins on the host response including cell-mediated and innate immunity, the age of the host and the particular vaccine strain used. Here, we observed that a change from a systemic to a cutaneous route using MNPs has the potential to improve immune responses to existing vaccines which are otherwise compromised by statin therapy. Materials and Methods {#Sec8} ===================== Vaccine and microneedle patches {#Sec9} ------------------------------- The cell-grown influenza A/Brisbane/59/07 (H1N1) vaccine monobulk was kindly provided by Seqirus (Cambridge, MA). It was concentrated and assayed for protein and hemagglutinin content as previously described^[@CR23]^. The dissolving MNPs were prepared essentially as described previously^[@CR23]^ except that polyvinyl alcohol was used as the backing material^[@CR65]^. Specifically, the MNPs were made using polyvinyl alcohol and carboxymethyl cellulose (to provide mechanical strength), sucrose (to stabilize the vaccine and enable rapid dissolution) and potassium phosphate buffer (to control pH). The morphology of MNPs was examined under a microscope immediately after the MNPs were fabricated. Each MNP consisted of 100 microneedles in a 10 × 10 pattern, and each microneedle had a sharp tip measuring around 700 µm in length and 200 µm in base diameter. The tips were sharp and straight, indicating sufficient drying of materials (Supplementary Fig. [1S](#MOESM1){ref-type="media"}). After immunization the used MNPs were examined again under the same microscopy conditions for the presence of the vaccine-loaded microneedle tips. All microneedle tips had dissolved and only the residual bases were left on the patch backing; it was thus concluded that all MNPs resulted in vaccine delivery (Supplementary Fig. [1S](#MOESM1){ref-type="media"}).
For antigen quantification, the vaccine was extracted from unused and used MNPs for 20 minutes in PBS and the extracts were assayed by ELISA (Supplementary Method). The two batches of MNPs used in this study were loaded with 3.4 and 3.8 µg of HA (hemagglutinin), and 2.7 and 3.2 µg of HA were delivered into the skin, respectively. Thus, the delivery efficiency calculated from the initial and the residual amount of antigen (Supplementary Fig. [2S](#MOESM1){ref-type="media"}) was 79.8 ± 8% in batch 1 and 83.5 ± 6.5% in batch 2. Animals {#Sec10} ------- Female BALB/c mice were obtained from Harlan Laboratories and fed the regular rodent chow diet (RD) (Laboratory Rodent Diet 5001 (\~13% of energy from fat, cholesterol \~0.02%), LabDiet, St. Louis, MO) until they were 10.5 months old. Then they were switched for 14 weeks to a high fat rodent WD (Anhydrous Milkfat 20%, cholesterol 0.2%, 1/2" soft pellets, \~40% of energy from fat; BioServ, Flemington, NJ) with or without AT. Mice were 14 months old by the time of immunization (middle aged mice). Female BALB/c mice fed the RD were 6.5 months old (mature) by the time of immunization. All animals including naïve animals were kept on the specified diets for the duration of the study. Mice were housed in microisolators with filter tops in a biocontainment level BSL-1 facility and subjected to a 12/12 hour light/dark cycle and temperatures of 20--22 °C. Three-month-old (adult) female BALB/c mice from Envigo and 20-month-old (advanced aged) female BALB/cBy mice obtained through the National Institute of Aging were fed the RD and used in the experiment on age dependency of the immune response. All institutional and national guidelines for the care and use of laboratory animals were followed in accordance with and approved by the Institutional Animal Care and Use Committee (IACUC) at Emory University.
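As a quick consistency check, the delivery efficiencies reported in the antigen quantification above follow directly from the loaded and delivered amounts of HA; a minimal sketch of the calculation (illustrative, not the authors' analysis code), which lands within the reported uncertainties (79.8 ± 8% and 83.5 ± 6.5%):

```python
# Sketch of the delivery-efficiency calculation: efficiency is the fraction
# of the antigen load delivered into skin, where the delivered amount is the
# initial load minus the residual antigen recovered from the used MNP.
def delivery_efficiency(loaded_ug, residual_ug):
    """Percentage of the antigen load delivered into skin."""
    return 100.0 * (loaded_ug - residual_ug) / loaded_ug

# Loads and delivered amounts reported for the two MNP batches:
# batch 1: 3.4 µg loaded, 2.7 µg delivered -> 0.7 µg residual
# batch 2: 3.8 µg loaded, 3.2 µg delivered -> 0.6 µg residual
batch1 = delivery_efficiency(3.4, 3.4 - 2.7)
batch2 = delivery_efficiency(3.8, 3.8 - 3.2)
```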
Administration of atorvastatin {#Sec11} ------------------------------ Atorvastatin Ca salt (AK Scientific, Union City, CA) was formulated in the high fat WD by BioServ at 40 mg/kg, which corresponded to 10 mg/kg b.wt per day (\~0.2 mg/day/mouse) assuming an average daily food intake of 5 g per mouse and an average mouse weight of 20 g. Cholesterol assay {#Sec12} ----------------- Total cholesterol was measured in blood of fasted mice using a total cholesterol assay kit (Cell Biolabs, San Diego, CA). Immunization and sample collection {#Sec13} ---------------------------------- Mice were immunized once with a dose containing 2.4 µg HA of A/Brisbane/59/07 (H1N1) vaccine, systemically by injection of 0.05 ml into the upper quadrant of the hind leg or through skin with batch 1 MNPs that delivered 2.7 µg of vaccine as described^[@CR23]^. Unvaccinated animals consuming the same diet as the immunized groups were used as controls. Blood was collected at the indicated intervals post-immunization and serum was stored at −20 °C until analysis. The mice used in the experiment on age-dependency of the immune response were kept on the RD and vaccinated with 3.2 µg (delivered) of the same vaccine using MNPs from batch 2. Humoral immune responses {#Sec14} ------------------------ Vaccine-specific total antibody levels in blood were determined by quantitative ELISA as previously described^[@CR23]^. Hemagglutination inhibition (HAI) titers were assessed based on the WHO protocol^[@CR66]^ using turkey red blood cells (LAMPIRE, Pipersville, PA). The samples below the lowest level of detection (HAI = 10) were assigned a titer of 5 for calculations. Influenza virus A/Brisbane/59/07 H1N1, generously provided by CDC, was grown in MDCK cells and assayed for hemagglutination (HA) activity as described^[@CR66]^. Statistics {#Sec15} ---------- Statistical significance was calculated for selected groups by two-tailed unpaired t-test and *p* ≤ 0.05 was considered significant.
HAI titers were converted to log~2~ titers for statistical analysis. Data Availability {#Sec16} ----------------- All data generated or analyzed during this study are included in this published article (and its Supplementary Information files). Electronic supplementary material ================================= {#Sec17} Supplementary Information **Electronic supplementary material** **Supplementary information** accompanies this paper at 10.1038/s41598-017-18140-0. **Publisher\'s note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. We thank Seqirus for providing influenza vaccine. This work was supported by U.S. National Institutes of Health grant NIH/NIAID 1R01 AI110680 (RWC, PI). E.V.V. and R.W.C. designed the study, and wrote the manuscript. S.W. and E.V.V. performed the experiments, analyzed the data, prepared figures. S.L. made and characterized MNP used in the study. M.R.P. provided scientific advice and edited the manuscript. All authors reviewed the manuscript. Competing Interests {#FPar1} =================== Mark Prausnitz is an inventor of patents that have been licensed to companies developing microneedle-based products, is a paid advisor to companies developing microneedle-based products, and is a founder/shareholder of companies developing microneedle-based products (Micron Biomedical, Inc.). The terms of this arrangement have been reviewed and approved by Georgia Tech and Emory University in accordance with their conflict of interest policies. Elena Vassilieva, Shelly Wang, Song Li and Richard Compans do not declare any conflict of interest.
{ "pile_set_name": "PubMed Central" }